A selection of scholarly literature on the topic "Fair Machine Learning"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Fair Machine Learning".

Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and your bibliographic reference for the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication in PDF format and read its online abstract, provided the relevant parameters are available in the metadata.

Journal articles on the topic "Fair Machine Learning"

1

Basu Roy Chowdhury, Somnath, and Snigdha Chaturvedi. "Sustaining Fairness via Incremental Learning". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (26.06.2023): 6797–805. http://dx.doi.org/10.1609/aaai.v37i6.25833.

Abstract:
Machine learning systems are often deployed for making critical decisions like credit lending, hiring, etc. While making decisions, such systems often encode the user's demographic information (like gender, age) in their intermediate representations. This can lead to decisions that are biased towards specific demographics. Prior work has focused on debiasing intermediate representations to ensure fair decisions. However, these approaches fail to remain fair with changes in the task or demographic distribution. To ensure fairness in the wild, it is important for a system to adapt to such changes as it accesses new data in an incremental fashion. In this work, we propose to address this issue by introducing the problem of learning fair representations in an incremental learning setting. To this end, we present Fairness-aware Incremental Representation Learning (FaIRL), a representation learning system that can sustain fairness while incrementally learning new tasks. FaIRL is able to achieve fairness and learn new tasks by controlling the rate-distortion function of the learned representations. Our empirical evaluations show that FaIRL is able to make fair decisions while achieving high performance on the target task, outperforming several baselines.
2

Perello, Nick, and Przemyslaw Grabowicz. "Fair Machine Learning Post Affirmative Action". ACM SIGCAS Computers and Society 52, no. 2 (September 2023): 22. http://dx.doi.org/10.1145/3656021.3656029.

Abstract:
The U.S. Supreme Court, in a 6-3 decision on June 29, effectively ended the use of race in college admissions [1]. Indeed, national polls found that a plurality of Americans - 42%, according to a poll conducted by the University of Massachusetts [2] - agree that the policy should be discontinued, while 33% support its continued use in admissions decisions. As scholars of fair machine learning, we ponder how the Supreme Court decision shifts points of focus in the field. The most popular fair machine learning methods aim to achieve some form of "impact parity" by diminishing or removing the correlation between decisions and protected attributes, such as race or gender, similarly to the 80% rule of thumb of the Equal Employment Opportunity Commission. Impact parity can be achieved by reversing historical discrimination, which corresponds to affirmative action, or by diminishing or removing the influence of the attributes correlated with the protected attributes, which is impractical as it severely undermines model accuracy. Besides, impact disparity is not necessarily a bad thing: for example, African-American patients suffer from a higher rate of chronic illnesses than White patients, and hence it may be justified to admit them to care programs at a proportionally higher rate [3]. The U.S. burden-shifting framework under Title VII offers alternatives to impact parity. To determine employment discrimination, U.S. courts rely on the McDonnell-Douglas burden-shifting framework, in which the explanations, justifications, and comparisons of employment practices play a central role. Can similar methods be applied in machine learning?
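The "80% rule" referred to in this abstract is easy to state concretely. The following Python sketch computes the disparate impact ratio that the rule compares against 0.8; it assumes binary decisions and a binary protected attribute, and the data is randomly generated for illustration, not taken from the article:

import numpy as np

def disparate_impact_ratio(decisions, protected):
    """Ratio of favorable-decision rates: protected group over reference group."""
    rate_protected = decisions[protected == 1].mean()
    rate_reference = decisions[protected == 0].mean()
    return rate_protected / rate_reference

# Toy data: decision 1 is favorable; protected == 1 marks the protected group.
rng = np.random.default_rng(0)
decisions = rng.integers(0, 2, size=1000)
protected = rng.integers(0, 2, size=1000)

ratio = disparate_impact_ratio(decisions, protected)
print(f"disparate impact ratio: {ratio:.2f}")
print("passes the 80% rule of thumb:", ratio >= 0.8)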
3

Oneto, Luca. "Learning fair models and representations". Intelligenza Artificiale 14, no. 1 (17.09.2020): 151–78. http://dx.doi.org/10.3233/ia-190034.

Abstract:
Machine learning based systems and products are reaching society at large in many aspects of everyday life, including financial lending, online advertising, pretrial and immigration detention, child maltreatment screening, health care, social services, and education. This phenomenon has been accompanied by an increase in concern about the ethical issues that may arise from the adoption of these technologies. In response to this concern, a new area of machine learning has recently emerged that studies how to address disparate treatment caused by algorithmic errors and bias in the data. The central question is how to ensure that the learned model does not treat subgroups in the population unfairly. While the design of solutions to this issue requires an interdisciplinary effort, fundamental progress can only be achieved through a radical change in the machine learning paradigm. In this work, we describe the state of the art in algorithmic fairness using statistical learning theory, machine learning, and deep learning approaches that are able to learn fair models and data representations.
4

Kim, Yun-Myung. "Data and Fair use". Korea Copyright Commission 141 (30.03.2023): 5–53. http://dx.doi.org/10.30582/kdps.2023.36.1.5.

Abstract:
Data collection and use are the beginning and end of machine learning. As ChatGPT shows, data is making machines comparable to human capabilities. Commercial purposes are not automatically rejected when judging whether producing or securing data for system training constitutes fair use. The UK, Germany, and the EU are introducing copyright limitations for data mining for non-profit purposes such as research, and Japan is even more active. Japan's active legislation reflects the fact that it has no comprehensive fair use provision like those of Korea and the United States, but it also signals Japan's willingness to lead the artificial intelligence industry. In 2020, an amendment to the Copyright Act was proposed in Korea to introduce a limitation for information analysis, which would increase predictability for operators. However, the amendment is likely to be opposed by rights holders and may take time to pass. This article therefore examines whether machine learning activities such as data crawling and text and data mining (TDM) qualify as fair use under the current copyright law. It concludes that they may, on the ground that machine use differs from human use of works. However, it is questionable whether it is reasonable to attribute to business operators all of the benefits of using others' works under fair use. A compensation system for the profits that operators earn from works generated by machines through TDM or machine learning therefore cannot be ruled out, given the possibility of serious consequences for a fair competitive environment.
5

Zhang, Xueru, Mohammad Mahdi Khalili, and Mingyan Liu. "Long-Term Impacts of Fair Machine Learning". Ergonomics in Design: The Quarterly of Human Factors Applications 28, no. 3 (25.10.2019): 7–11. http://dx.doi.org/10.1177/1064804619884160.

Abstract:
Machine learning models developed from real-world data can inherit preexisting bias in the dataset. When these models are used to inform decisions involving human beings, fairness concerns inevitably arise. Imposing certain fairness constraints in the training of models can be effective only if appropriate criteria are applied. However, a fairness criterion can be defined and assessed only when the interaction between the decisions and the underlying population is well understood. We introduce two feedback models describing how people react when receiving machine-aided decisions and illustrate that some commonly used fairness criteria can lead to undesirable consequences and even reinforce discrimination.
6

Zhu, Yunlan. "The Comparative Analysis of Fair Use of Works in Machine Learning". SHS Web of Conferences 178 (2023): 01015. http://dx.doi.org/10.1051/shsconf/202317801015.

Abstract:
Before generative AI can output content, it copies a large amount of text; this copying process is machine learning. To foster the development of artificial intelligence technology and cultural prosperity, many countries have brought machine learning within the scope of fair use. However, China's copyright law does not yet provide for the fair use of works in machine learning. Through a comparative analysis of other countries' legislation, this paper constructs a Chinese model for the fair use of works in machine learning: a fair use model that balances the flexibility of the United States with the rigor of the European Union.
7

Redko, Ievgen, and Charlotte Laclau. "On Fair Cost Sharing Games in Machine Learning". Proceedings of the AAAI Conference on Artificial Intelligence 33 (17.07.2019): 4790–97. http://dx.doi.org/10.1609/aaai.v33i01.33014790.

Abstract:
Machine learning and game theory are known to exhibit a very strong link, as they mutually provide each other with solutions and models for studying and analyzing the optimal behaviour of a set of agents. In this paper, we take a closer look at a special class of games, known as fair cost sharing games, from a machine learning perspective. We show that this particular kind of game, where agents can choose between selfish behaviour and cooperation with shared costs, has a natural link to several machine learning scenarios, including collaborative learning with homogeneous and heterogeneous sources of data. We further demonstrate how the game-theoretical results bounding the ratio between the best Nash equilibrium (or its approximate counterpart) and the optimal solution of a given game can be used to upper-bound the gain achievable by collaborative learning, expressed as the expected risk and the sample complexity for the homogeneous and heterogeneous cases, respectively. We believe that the established link can spur many future implications for other learning scenarios as well, with privacy-aware learning being among the most notable examples.
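To make the notion of a fair cost sharing game concrete, here is a toy Python sketch; the costs are invented for illustration and this is not the paper's construction. Each agent either learns alone at an individual cost or joins a collaborative option whose fixed cost is split equally among its users:

individual_costs = [3.0, 4.0, 5.0, 6.0]   # cost of learning alone, per agent
shared_cost = 10.0                        # fixed cost of the collaborative option

def total_cost(joiners):
    """Total cost when the agents in `joiners` share, and the rest go alone."""
    shared = shared_cost if joiners else 0.0
    alone = sum(c for i, c in enumerate(individual_costs) if i not in joiners)
    return shared + alone

everyone = set(range(len(individual_costs)))
print("all selfish:", total_cost(set()))        # 18.0
print("all cooperate:", total_cost(everyone))   # 10.0
# With equal sharing, each of the 4 agents pays 2.5, less than any individual
# cost, so full cooperation is also a Nash equilibrium in this toy instance.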
8

Lee, Joshua, Yuheng Bu, Prasanna Sattigeri, Rameswar Panda, Gregory W. Wornell, Leonid Karlinsky, and Rogerio Schmidt Feris. "A Maximal Correlation Framework for Fair Machine Learning". Entropy 24, no. 4 (26.03.2022): 461. http://dx.doi.org/10.3390/e24040461.

Abstract:
As machine learning algorithms grow in popularity and diversify to many industries, ethical and legal concerns regarding their fairness have become increasingly relevant. We explore the problem of algorithmic fairness, taking an information-theoretic view. The maximal correlation framework is introduced for expressing fairness constraints and is shown to yield regularizers that enforce independence- and separation-based fairness criteria, admitting optimization algorithms for both discrete and continuous variables that are more computationally efficient than existing algorithms. We show that these algorithms provide smooth performance-fairness tradeoff curves and perform competitively with state-of-the-art methods on both discrete datasets (COMPAS, Adult) and continuous datasets (Communities and Crimes).
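The independence criterion mentioned in this abstract can be illustrated with a much cruder stand-in than the paper's maximal correlation framework: penalizing the Pearson correlation between model outputs and the protected attribute during training. The Python sketch below does exactly that; the toy data, model, and tradeoff weight lam are assumptions for illustration only, not the paper's method:

import torch

def pearson_corr(a, b):
    """Pearson correlation between two 1-D tensors (differentiable)."""
    a = a - a.mean()
    b = b - b.mean()
    return (a * b).mean() / (a.std(unbiased=False) * b.std(unbiased=False) + 1e-8)

torch.manual_seed(0)
x = torch.randn(256, 5)                 # toy features
s = (torch.rand(256) > 0.5).float()     # toy binary protected attribute
y = ((x[:, 0] + s) > 0.5).float()       # toy labels correlated with s

model = torch.nn.Sequential(torch.nn.Linear(5, 1), torch.nn.Sigmoid())
bce = torch.nn.BCELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
lam = 1.0                               # fairness/accuracy tradeoff weight (assumed)

for _ in range(200):
    opt.zero_grad()
    p = model(x).squeeze(1)
    # usual loss plus a penalty pushing corr(prediction, protected) toward 0
    loss = bce(p, y) + lam * pearson_corr(p, s) ** 2
    loss.backward()
    opt.step()

print("corr(pred, s) after training:", pearson_corr(model(x).squeeze(1), s).item())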
9

van Berkel, Niels, Jorge Goncalves, Danula Hettiachchi, Senuri Wijenayake, Ryan M. Kelly, and Vassilis Kostakos. "Crowdsourcing Perceptions of Fair Predictors for Machine Learning". Proceedings of the ACM on Human-Computer Interaction 3, CSCW (07.11.2019): 1–21. http://dx.doi.org/10.1145/3359130.


Dissertations on the topic "Fair Machine Learning"

1

Schildt, Alexandra, and Jenny Luo. "Tools and Methods for Companies to Build Transparent and Fair Machine Learning Systems". Thesis, KTH, Skolan för industriell teknik och management (ITM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279659.

Abstract:
AI has quickly grown from being a vast concept to an emerging technology that many companies are looking to integrate into their businesses, and is generally considered an ongoing "revolution" transforming science and society altogether. Researchers and organizations agree that AI and the recent rapid developments in machine learning carry huge potential benefits. At the same time, there is an increasing worry that ethical challenges are not being addressed in the design and implementation of AI systems. As a result, AI has sparked a debate about what principles and values should guide its development and use. However, there is a lack of consensus about what values and principles should guide the development, as well as what practical tools should be used to translate such principles into practice. Although researchers, organizations and authorities have proposed tools and strategies for working with ethical AI within organizations, there is a lack of a holistic perspective tying together the tools and strategies proposed in ethical, technical and organizational discourses. The thesis aims to contribute knowledge to bridge this gap by addressing the following purpose: to explore and present the different tools and methods companies and organizations should have in order to build machine learning applications in a fair and transparent manner. The study is of a qualitative nature, and data collection was conducted through a literature review and interviews with subject matter experts. In our findings, we present a number of tools and methods to increase fairness and transparency. Our findings also show that companies should work with a combination of tools and methods, both outside and inside the development process, as well as in different stages of the machine learning development process. Tools used outside the development process, such as ethical guidelines, appointed roles, workshops and training sessions, have positive effects on alignment, engagement and knowledge while providing valuable opportunities for improvement. Furthermore, the findings suggest that it is crucial to translate high-level values into low-level requirements that are measurable and can be evaluated against. We propose a number of pre-model, in-model and post-model techniques that companies can and should implement to increase fairness and transparency in their machine learning systems.
2

Dalgren, Anton, and Ylva Lundegård. "GreenML: A methodology for fair evaluation of machine learning algorithms with respect to resource consumption". Thesis, Linköpings universitet, Institutionen för datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-159837.

Abstract:
Impressive results can be achieved by stacking deep neural network hierarchies together. Several machine learning papers claim state-of-the-art results when evaluating their models with different accuracy metrics. However, these models come at a cost which is rarely taken into consideration. This thesis aims to shed light on the resource consumption of machine learning algorithms, and therefore five efficiency metrics are proposed. These should be used for evaluating machine learning models, taking accuracy, model size, and time and energy consumption for both training and inference into account. These metrics are intended to allow for a fairer evaluation of machine learning models that does not look at accuracy alone. This thesis presents an example of how these metrics can be used by applying them to both text and image classification tasks using the algorithms SVM, MLP, and CNN.
3

Gordaliza Pastor, Paula. "Fair learning: une approche basée sur le transport optimal". Thesis, Toulouse 3, 2020. http://www.theses.fr/2020TOU30084.

Abstract:
The aim of this thesis is two-fold. On the one hand, optimal transportation methods are studied for statistical inference purposes. On the other hand, the recent problem of fair learning is addressed through the prism of optimal transport theory. The generalization of applications based on machine learning models in everyday life and the professional world has been accompanied by concerns about the ethical issues that may arise from the adoption of these technologies. In the first part of the thesis, we motivate the fairness problem by presenting some comprehensive results from the study of the statistical parity criterion through the analysis of the disparate impact index on the real and well-known Adult Income dataset. Importantly, we show that trying to make fair machine learning models may be a particularly challenging task, especially when the training observations contain bias. Then a review of mathematics for fairness in machine learning is given in a general setting, with some novel contributions in the analysis of the price for fairness in regression and classification. In the latter, we finish this first part by recasting the links between fairness and predictability in terms of probability metrics. We analyze repair methods based on mapping conditional distributions to the Wasserstein barycenter. Finally, we propose a random repair which yields a tradeoff between minimal information loss and a certain amount of fairness. The second part is devoted to the asymptotic theory of the empirical transportation cost. We provide a Central Limit Theorem for the Monge-Kantorovich distance between two empirical distributions with different sizes n and m, W_p(P_n, Q_m), p ≥ 1, for observations on R. In the case p > 1, our assumptions are sharp in terms of moments and smoothness. We prove results dealing with the choice of centering constants, and provide a consistent estimate of the asymptotic variance which makes it possible to build two-sample tests and confidence intervals to certify the similarity between two distributions. These are then used to assess a new criterion of dataset fairness in classification. Additionally, we provide a moderate deviation principle for the empirical transportation cost in general dimension. Finally, Wasserstein barycenters and a variance-like criterion based on the Wasserstein distance are used in many problems to analyze the homogeneity of collections of distributions and structural relationships between the observations. We propose estimating the quantiles of the empirical process of the Wasserstein variation using a bootstrap procedure. We then use these results for statistical inference on a distribution registration model with general deformation functions. The tests are based on the variance of the distributions with respect to their Wasserstein barycenters, for which we prove central limit theorems, including bootstrap versions.
4

Grari, Vincent. "Adversarial mitigation to reduce unwanted biases in machine learning". Electronic Thesis or Diss., Sorbonne université, 2022. http://www.theses.fr/2022SORUS096.

Abstract:
The past few years have seen a dramatic rise in academic and societal interest in fair machine learning. As a result, significant work has been done to include fairness constraints in the training objectives of machine learning algorithms. The primary purpose is to ensure that model predictions do not depend on any sensitive attribute, such as gender or race. Although this notion of independence is incontestable in a general context, it can theoretically be defined in many different ways depending on how one sees fairness. As a result, many recent papers tackle this challenge by using their "own" objectives and notions of fairness. These objectives can be categorized into two families: individual fairness and group fairness. This thesis first gives an overview of the methodologies applied in these different families in order to encourage good practices. Then, we identify and fill gaps by presenting new metrics and new fair machine learning algorithms that are more appropriate for specific contexts.
5

Berisha, Visar. "AI as a Threat to Democracy: Towards an Empirically Grounded Theory". Thesis, Uppsala universitet, Statsvetenskapliga institutionen, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-340733.

Abstract:
Artificial intelligence has in recent years taken center stage in technological development. Major corporations, operating in a variety of economic sectors, are investing heavily in AI in order to stay competitive in the years and decades to come. What differentiates this technology from traditional computing is that it can carry out tasks previously limited to humans. As such, it contains the possibility to revolutionize every aspect of our society. Until now, social science has not given proper attention to this emerging technological phenomenon, a phenomenon which, according to some, is increasing in strength exponentially. This paper aims to problematize AI in the light of democratic elections, both as an analytical tool and as a tool for manipulation. It also looks at three recent empirical cases where AI technology was used extensively. The results show that there are in fact reasons to worry. AI as an instrument can be used to covertly affect the public debate, to depress voter turnout, to polarize the population, and to hinder understanding of political issues.
6

Sitruk, Jonathan. "Fais Ce Qu'il Te Plaît... Mais Fais Le Comme Je L'aime: Amélioration des performances en crowdfunding par l'utilisation des catégories et des récits". Thesis, Université Côte d'Azur (ComUE), 2018. http://www.theses.fr/2018AZUR0018.

Abstract:
This dissertation aims to provide entrepreneurs with a better understanding of how to improve their performance when raising funds from investors. Entrepreneurs have difficulty accessing financial resources and capital because they suffer from a liability of newness. This inherent condition is due to their lack of legitimacy in their target market and leads investors to see them as inherently risky. The traditional means of financing new venture ideas have been through personal savings, family and friends, banks, or professional investors. Crowdfunding has emerged as an alternative to these and scholars in the field of management and entrepreneurship have taken great interest in understanding its multiple facets. Most research in crowdfunding has focused on quantifiable elements that investors use in order to determine the quality of an entrepreneur’s venture. The higher the perceived quality, the higher the likelihood investors have of investing in it. However, orthogonal to these elements of quality, and not addressed in current research, are those qualitative elements that allow projects to become clearer in the eyes of potential funders and transmit valuable information about the venture in a coherent fashion regarding the medium they are raising funds from. This dissertation aims to explore strategies entrepreneurs can use to increase their performance in crowdfunding by understanding how investors make sense of projects and how they evaluate them given the nature of the platform used by the entrepreneur. This thesis contributes to the literature on crowdfunding, categorization, and platforms. The thesis first explores how entrepreneurs can use categories and narrative strategies as strategic levers to improve their performance by lowering the level of ambiguity of their offer while aligning their narrative strategies to the expectations of the platform they use. On a second level, the dissertation provides a deeper understanding of the relation that exists between category spanning, ambiguity, and creativity by addressing this relatively unexplored path. Categorization theory is further enriched through a closer examination of the importance of semantic networks and visuals in the sense making process by using a novel empirical approach. Visuals are of particular interest given they were of seminal importance at the foundation of categorization theory, are processed by different cognitive means than words, and are of vital importance in today’s world. Finally, the dissertation explores the relation between platforms and narratives by theorizing that the former are particular types of organizations whose identity is forged by their internal and external stakeholders. Platform identities are vulnerable to change such as exogenous shocks. Entrepreneurs need to learn how to identify these identities and potential changes in order to tailor their narrative strategies in the hopes of increasing their performance
7

Muriithi, Paul Mutuanyingi. "A case for memory enhancement: ethical, social, legal, and policy implications for enhancing the memory". Thesis, University of Manchester, 2014. https://www.research.manchester.ac.uk/portal/en/theses/a-case-for-memory-enhancement-ethical-social-legal-and-policy-implications-for-enhancing-the-memory(bf11d09d-6326-49d2-8ef3-a40340471acf).html.

Abstract:
The desire to enhance and make ourselves better is not a new one, and it has continued to intrigue throughout the ages. Individuals have continued to seek ways to improve and enhance their well-being, for example through nutrition, physical exercise, and education. Crucial to this improvement of their well-being is improving their ability to remember; hence, people interested in improving their well-being are often interested in memory as well, the rationale being that memory is crucial to our well-being. The desire to improve one's memory, then, is almost certainly as old as the desire to improve one's well-being. Traditionally, people have used different means in an attempt to enhance their memories: in learning, through storytelling, studying, and apprenticeship; in remembering, through practices like mnemonics, repetition, singing, and drumming; in retaining, storing and consolidating memories, through nutrition and stimulants like coffee to help keep awake, and through external aids like notepads and computers; and in forgetting, through rituals and rites. Recent scientific advances in biotechnology, nanotechnology, molecular biology, neuroscience, and information technologies present a wide variety of technologies to enhance many different aspects of human functioning. Thus, some commentators have identified human enhancement as central and one of the most fascinating subjects in bioethics in the last two decades. Within this period, most commentators have addressed the ethical, social, legal and policy (ESLP) issues in human enhancements as a whole, as opposed to specific enhancements. However, this is problematic, and recently various commentators have found this approach deficient and called for a contextualized, case-by-case analysis of human enhancements, for example genetic enhancement, moral enhancement, and, in my case, memory enhancement (ME), the rationale being that the reasons for accepting or rejecting a particular enhancement vary depending on the enhancement itself. Given this enormous variation, moral and legal generalizations about all enhancement processes and technologies are unwise; they should instead be evaluated individually. Taking this as a point of departure, this research will focus specifically on making a case for ME and, in doing so, assessing the ESLP implications arising from ME. My analysis will draw on the already existing literature for and against enhancement, especially in part two of this thesis, but it will be novel in providing a much more in-depth analysis of ME. From this perspective, I will contribute to the ME debate through two reviews that address the question of how we enhance the memory, and through four original papers discussed in part three of this thesis, where I examine and evaluate critically specific ESLP issues that arise with the use of ME. In the conclusion, I will amalgamate all my contributions to the ME debate and suggest the future direction for the ME debate.
8

Azami, Sajjad. "Exploring fair machine learning in sequential prediction and supervised learning". Thesis, 2020. http://hdl.handle.net/1828/12098.

Abstract:
Algorithms that are used in sensitive contexts, such as deciding whether to give a job offer or to grant inmates parole, should be accurate as well as non-discriminatory. The latter is important especially due to emerging concerns about automated decision making being unfair to individuals belonging to certain groups. The machine learning literature has seen a rapid evolution in research on this topic. In this thesis, we study various problems in sequential decision making motivated by challenges in algorithmic fairness. As part of this thesis, we modify the fundamental framework of prediction with expert advice. We assume a learning agent is making decisions using the advice provided by a set of experts while this set can shrink; in other words, experts can become unavailable due to scenarios such as emerging anti-discrimination laws prohibiting the learner from using experts detected to be unfair. We provide efficient algorithms for this setup, as well as a detailed analysis of their optimality. Later, we explore a problem concerned with providing any-time fairness guarantees using the well-known exponential weights algorithm, which leads to an open question about a lower bound on the cumulative loss of the exponential weights algorithm. Finally, we introduce a novel fairness notion for supervised learning tasks motivated by the concept of envy-freeness. We show how this notion might bypass certain issues of existing fairness notions such as equalized odds. We provide solutions for a simplified version of this problem and insights to deal with further challenges that arise from adopting this notion.
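For reference, the exponential weights algorithm this abstract relies on is standard; a minimal Python sketch of it follows. The losses and learning rate eta are invented for illustration, and the thesis's variant with a shrinking expert set is not reproduced here:

import numpy as np

def hedge(loss_matrix, eta=0.5):
    """Run Hedge on loss_matrix[t, i] = loss of expert i in round t."""
    n_rounds, n_experts = loss_matrix.shape
    weights = np.ones(n_experts)
    learner_loss = 0.0
    for t in range(n_rounds):
        probs = weights / weights.sum()          # learner's distribution over experts
        learner_loss += probs @ loss_matrix[t]   # expected loss this round
        weights = weights * np.exp(-eta * loss_matrix[t])
    return weights / weights.sum(), learner_loss

rng = np.random.default_rng(1)
losses = rng.random((100, 4))                    # 100 rounds, 4 experts, losses in [0, 1]
final_weights, learner_loss = hedge(losses)
print("final weights:", final_weights.round(3))
print("learner loss:", round(learner_loss, 2), "| best expert:", round(losses.sum(axis=0).min(), 2))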
9

Allabadi, Swati. "Algorithms for Fair Clustering". Thesis, 2021. https://etd.iisc.ac.in/handle/2005/5709.

Abstract:
Many decisions today are taken by various machine learning algorithms, hence it is crucial to accommodate fairness in such algorithms to remove or reduce any kind of bias in the decision. We incorporate fairness in the problem of clustering. Clustering is a classical machine learning problem in which the task is to partition the data points into groups such that the data points belonging to one group are more similar to each other than to the data points belonging to some other group in the partition. In our model, each data point belongs to one or more categories. We define fairness in terms of two constraints, restricted dominance and minority protection. While ensuring fairness in the clustering, we consider each data point in only one of the categories from the set of categories it belongs to. Our model ensures that no category is either in minority or in dominance in any of the clusters. Representation of a category in a cluster is considered not in absolute terms but in proportion to its presence in the whole dataset. We give a bi-criteria approximation for fair clustering whose objective is to minimise the L_p-norm, defined as L_p(V, φ) = (Σ_{v ∈ V} d(v, φ(v))^p)^{1/p}, where V is the dataset, C is the set of centers chosen for the clustering, φ : V → C is the assignment that minimises the cost of clustering while satisfying the fairness constraints, and p can take any positive integral value. Our solution violates the fairness constraints by an additive violation of at most 2. We implement this algorithm and run experiments comparing it with the state of the art. For any ε > 0, we give a (1 + ε)-approximate algorithm for fair clustering of points lying in Euclidean space whose objective is to minimise the L_1-norm (or L_2-norm). This algorithm also violates the fairness constraints by an additive violation of at most 2. For points lying in R^d, the running time of this algorithm is O(nd · 2^{Õ(k/ε)} + poly(n) · 2^{Õ(k/ε)}) for the L_2-norm, where n is the size of the dataset, and O(nd · 2^{Õ(k/ε^{O(1)})} + poly(n) · 2^{Õ(k/ε^{O(1)})}) for the L_1-norm. Given a γ-perturbation resilient instance of clustering in the metric space (V, d), we also give a bi-criteria approximation for fair clustering of the same instance when its metric is changed to d′, where d′ is any metric that is a γ-perturbation of (V, d). This solution also violates the fairness constraints by an additive violation of at most 2.
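The objective and fairness constraints defined in this abstract can be illustrated with a short Python sketch that evaluates the L_p cost of a given assignment and checks a cluster's category shares against proportional bounds. The alpha/beta tolerance parameterization below is an assumption made for illustration; the thesis bounds violations additively instead, and this is not its algorithm:

import numpy as np

def lp_cost(points, centers, assign, p=2):
    """L_p(V, phi) = (sum over v of d(v, phi(v))^p)^(1/p), with Euclidean d."""
    dists = np.linalg.norm(points - centers[assign], axis=1)
    return float((dists ** p).sum() ** (1.0 / p))

def cluster_is_fair(categories, assign, cluster, alpha=1.5, beta=0.5):
    """Check each category's share in `cluster` against its dataset-wide share."""
    members = categories[assign == cluster]
    for cat in np.unique(categories):
        overall = (categories == cat).mean()
        share = (members == cat).mean() if members.size else 0.0
        if not (beta * overall <= share <= alpha * overall):
            return False   # category `cat` is in minority or dominance here
    return True

points = np.random.default_rng(2).random((12, 2))
centers = np.array([[0.25, 0.5], [0.75, 0.5]])
assign = np.argmin(np.linalg.norm(points[:, None] - centers[None], axis=2), axis=1)
categories = np.array([0, 1] * 6)   # one category per point
print("L_2 cost:", round(lp_cost(points, centers, assign), 3))
print("cluster 0 fair:", cluster_is_fair(categories, assign, 0))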

Books on the topic "Fair Machine Learning"

1

Practicing Trustworthy Machine Learning: Consistent, Transparent, and Fair AI Pipelines. O'Reilly Media, Incorporated, 2022.

2

Vallor, Shannon, and George A. Bekey. Artificial Intelligence and the Ethics of Self-Learning Robots. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780190652951.003.0022.

Abstract:
The convergence of robotics technology with the science of artificial intelligence is rapidly enabling the development of robots that emulate a wide range of intelligent human behaviors. Recent advances in machine learning techniques have produced artificial agents that can acquire highly complex skills formerly thought to be the exclusive province of human intelligence. These developments raise a host of new ethical concerns about the responsible design, manufacture, and use of robots enabled with artificial intelligence—particularly those equipped with self-learning capacities. While the potential benefits of self-learning robots are immense, their potential dangers are equally serious. While some warn of a future where AI escapes the control of its human creators or even turns against us, this chapter focuses on other, far less cinematic risks of AI that are much nearer to hand, requiring immediate study and action by technologists, lawmakers, and other stakeholders.

Book chapters on the topic "Fair Machine Learning"

1

Pérez-Suay, Adrián, Valero Laparra, Gonzalo Mateo-García, Jordi Muñoz-Marí, Luis Gómez-Chova, and Gustau Camps-Valls. "Fair Kernel Learning". In Machine Learning and Knowledge Discovery in Databases, 339–55. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-71249-9_21.

2

Freitas, Alex, and James Brookhouse. "Evolutionary Algorithms for Fair Machine Learning". In Handbook of Evolutionary Machine Learning, 507–31. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-3814-8_17.

3

Van, Minh-Hao, Wei Du, Xintao Wu, and Aidong Lu. "Poisoning Attacks on Fair Machine Learning". In Database Systems for Advanced Applications, 370–86. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-00123-9_30.

4

Wu, Yongkai, Lu Zhang, and Xintao Wu. "Fair Machine Learning Through the Lens of Causality". In Machine Learning for Causal Inference, 103–35. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-35051-1_6.

5

Abdollahi, Behnoush, and Olfa Nasraoui. "Transparency in Fair Machine Learning: the Case of Explainable Recommender Systems". In Human and Machine Learning, 21–35. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-90403-0_2.

6

Lappas, Theodoros, and Evimaria Terzi. "Toward a Fair Review-Management System". In Machine Learning and Knowledge Discovery in Databases, 293–309. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-23783-6_19.

7

Zhang, Mingwu, Xiao Chen, Gang Shen, and Yong Ding. "A Fair and Efficient Secret Sharing Scheme Based on Cloud Assisting". In Machine Learning for Cyber Security, 348–60. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-30619-9_25.

8

Rančić, Sanja, Sandro Radovanović, and Boris Delibašić. "Investigating Oversampling Techniques for Fair Machine Learning Models". In Lecture Notes in Business Information Processing, 110–23. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-73976-8_9.

9

Wu, Zhou, and Mingxiang Guan. "Research on Fair Scheduling Algorithm of 5G Intelligent Wireless System Based on Machine Learning". In Machine Learning and Intelligent Communications, 53–58. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-66785-6_6.


Conference papers on the topic "Fair Machine Learning"

1

Perrier, Elija. "Quantum Fair Machine Learning". In AIES '21: AAAI/ACM Conference on AI, Ethics, and Society. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3461702.3462611.

2

Kearns, Michael. "Fair Algorithms for Machine Learning". In EC '17: ACM Conference on Economics and Computation. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3033274.3084096.

3

Dai, Jessica, Sina Fazelpour, and Zachary Lipton. "Fair Machine Learning Under Partial Compliance". In AIES '21: AAAI/ACM Conference on AI, Ethics, and Society. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3461702.3462521.

4

Liu, Lydia T., Sarah Dean, Esther Rolf, Max Simchowitz, and Moritz Hardt. "Delayed Impact of Fair Machine Learning". In Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19). California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/862.

Abstract:
Static classification has been the predominant focus of the study of fairness in machine learning. While most models do not consider how decisions change populations over time, it is conventional wisdom that fairness criteria promote the long-term well-being of groups they aim to protect. This work studies the interaction of static fairness criteria with temporal indicators of well-being. We show a simple one-step feedback model in which common criteria do not generally promote improvement over time, and may in fact cause harm. Our results highlight the importance of temporal modeling in the evaluation of fairness criteria, suggesting a range of new challenges and trade-offs.
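The one-step feedback model this abstract refers to can be illustrated with a toy Python simulation: a threshold policy approves applicants, and approved applicants' scores move up on repayment and down on default. All numbers below are invented for illustration and are not taken from the paper:

import numpy as np

rng = np.random.default_rng(3)
scores = rng.normal(600, 50, size=10_000)           # toy credit scores
threshold = 620                                     # assumed selection policy
repay_prob = np.clip((scores - 500) / 200, 0, 1)    # higher score, likelier repayment

approved = scores >= threshold
repaid = rng.random(scores.size) < repay_prob
delta = np.where(repaid, 20.0, -50.0)               # assumed score updates

new_scores = scores.copy()
new_scores[approved] += delta[approved]
change = new_scores[approved].mean() - scores[approved].mean()
print(f"mean score change among approved: {change:+.1f}")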
5

Wang, Haoyu, Hanyu Hu, Mingrui Zhuang, and Jiayi Shen. "Integrating Machine Learning into Fair Inference". In The International Conference on New Media Development and Modernized Education. SCITEPRESS - Science and Technology Publications, 2022. http://dx.doi.org/10.5220/0011908000003613.

6

Jorgensen, Mackenzie, Hannah Richert, Elizabeth Black, Natalia Criado, and Jose Such. "Not So Fair: The Impact of Presumably Fair Machine Learning Models". In AIES '23: AAAI/ACM Conference on AI, Ethics, and Society. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3600211.3604699.

7

Sahlgren, Otto. "What's (Not) Ideal about Fair Machine Learning?" In AIES '22: AAAI/ACM Conference on AI, Ethics, and Society. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3514094.3539543.

8

Hu, Shengyuan, Zhiwei Steven Wu, and Virginia Smith. "Fair Federated Learning via Bounded Group Loss". In 2024 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML). IEEE, 2024. http://dx.doi.org/10.1109/satml59370.2024.00015.

9

Belitz, Clara, Lan Jiang, and Nigel Bosch. "Automating Procedurally Fair Feature Selection in Machine Learning". In AIES '21: AAAI/ACM Conference on AI, Ethics, and Society. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3461702.3462585.

10

Shimao, Hajime, Warut Khern-am-nuai, Karthik Kannan, and Maxime C. Cohen. "Strategic Best Response Fairness in Fair Machine Learning". In AIES '22: AAAI/ACM Conference on AI, Ethics, and Society. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3514094.3534194.


Organizational reports on the topic "Fair Machine Learning"

1

Nickerson, Jeffrey, Kalle Lyytinen, and John L. King. Automated Vehicles: A Human/Machine Co-learning Perspective. SAE International, April 2022. http://dx.doi.org/10.4271/epr2022009.

Abstract:
Automated vehicles (AVs)—and the automated driving systems (ADSs) that enable them—are increasing in prevalence but remain far from ubiquitous. Progress has occurred in spurts, followed by lulls, while the motor transportation system learns to design, deploy, and regulate AVs. Automated Vehicles: A Human/Machine Co-learning Experience focuses on how engineers, regulators, and road users are all learning about a technology that has the potential to transform society. Those engaged in the design of ADSs and AVs may find it useful to consider that the spurts and lulls and stakeholder tussles are a normal part of technology transformations; however, this report will provide suggestions for effective stakeholder engagement.
2

Adegoke, Damilola, Natasha Chilambo, Adeoti Dipeolu, Ibrahim Machina, Ade Obafemi-Olopade, and Dolapo Yusuf. Public Discourses and Engagement on Governance of Covid-19 in Ekiti State, Nigeria. African Leadership Center, King's College London, December 2021. http://dx.doi.org/10.47697/lab.202101.

Abstract:
Numerous studies have emerged so far on Covid-19 (SARS-CoV-2) across different disciplines. There is virtually no facet of human experience and relationships that has not been studied. In Nigeria, these studies include knowledge and attitude, risk perception, public perception of Covid-19 management, e-learning, palliatives, and precautionary behaviours. Studies have also been carried out on public framing of Covid-19 discourses in Nigeria; these have explored both offline and online messaging and issues from the perspectives of citizens towards government's policy responses such as palliative distributions, social distancing and lockdown. The investigators of these thematic concerns deployed different methodological tools in their studies, including policy evaluations, content analysis, sentiment analysis, discourse analysis, survey questionnaires, focus group discussions, in-depth interviews, and machine learning. These studies nearly always focus on the national government's policy response, with little or no focus on the constituent states. In many of the studies, the researchers work with newspaper articles for analysis of public opinions, while others use social media generated content (such as tweets) as sources for analysis of sentiments and opinions, and still others rely on survey questionnaires and the other tools outlined above; the limitations of these approaches necessitated the research plan adopted by this study. Most social media users in Nigeria are domiciled in cities, and their demography comprises the (socio-economic) middle class, who are more likely to be literate with access to internet technologies. Hence, the opinions of the majority of the population, who are most likely rural dwellers with limited access to internet technologies, are very often excluded. This is not in any way to disparage the findings of social media content analysis, because the opinions expressed by opinion leaders usually represent a larger subset of the opinions prevalent in the society. Analysing public perception using questionnaires is also fraught with challenges, as is reliance on newspaper articles. Many newspapers and news media organisations in Nigeria are politically hinged; some have active politicians and their associates as proprietors. Getting unbiased opinions from these sources might be difficult, as news articles are most likely to reflect and amplify official positions through press releases and interviews, which usually privilege elite actors. These gaps motivated this collaboration between the Ekiti State Government and the African Leadership Centre at King's College London to embark on research whose primary aim is to assess public perceptions of government leadership response to Covid-19 in Ekiti State. The timeframe of the study covers the first phase of the pandemic in Ekiti State (March/April to August 2020).