Journal articles on the topic "Algorithmic Auditing"

Consult the top 50 journal articles for your research on the topic "Algorithmic Auditing".

Browse journal articles from a wide range of scientific fields and compile an accurate bibliography.

1

Dash, Abhisek, Stefan Bechtold, Jens Frankenreiter, Abhijnan Chakraborty, Saptarshi Ghosh, Animesh Mukherjee, and Krishna P. Gummadi. "Antitrust, Amazon, and Algorithmic Auditing". Journal of Institutional and Theoretical Economics 180, no. 2 (2024): 319. http://dx.doi.org/10.1628/jite-2024-0014.

2

Shen, Hong, Alicia DeVos, Motahhare Eslami, and Kenneth Holstein. "Everyday Algorithm Auditing: Understanding the Power of Everyday Users in Surfacing Harmful Algorithmic Behaviors". Proceedings of the ACM on Human-Computer Interaction 5, CSCW2 (October 13, 2021): 1–29. http://dx.doi.org/10.1145/3479577.

Abstract:
A growing body of literature has proposed formal approaches to audit algorithmic systems for biased and harmful behaviors. While formal auditing approaches have been greatly impactful, they often suffer major blindspots, with critical issues surfacing only in the context of everyday use once systems are deployed. Recent years have seen many cases in which everyday users of algorithmic systems detect and raise awareness about harmful behaviors that they encounter in the course of their everyday interactions with these systems. However, to date little academic attention has been granted to these bottom-up, user-driven auditing processes. In this paper, we propose and explore the concept of everyday algorithm auditing, a process in which users detect, understand, and interrogate problematic machine behaviors via their day-to-day interactions with algorithmic systems. We argue that everyday users are powerful in surfacing problematic machine behaviors that may elude detection via more centrally-organized forms of auditing, regardless of users' knowledge about the underlying algorithms. We analyze several real-world cases of everyday algorithm auditing, drawing lessons from these cases for the design of future platforms and tools that facilitate such auditing behaviors. Finally, we discuss work that lies ahead, toward bridging the gaps between formal auditing approaches and the organic auditing behaviors that emerge in everyday use of algorithmic systems.
3

Raji, Inioluwa Deborah, and Joy Buolamwini. "Actionable Auditing Revisited". Communications of the ACM 66, no. 1 (December 20, 2022): 101–8. http://dx.doi.org/10.1145/3571151.

Abstract:
Although algorithmic auditing has emerged as a key strategy to expose systematic biases embedded in software platforms, we struggle to understand the real-world impact of these audits and continue to find it difficult to translate such independent assessments into meaningful corporate accountability. To analyze the impact of publicly naming and disclosing performance results of biased AI systems, we investigate the commercial impact of Gender Shades, the first algorithmic audit of gender- and skin-type performance disparities in commercial facial analysis models. This paper (1) outlines the audit design and structured disclosure procedure used in the Gender Shades study, (2) presents new performance metrics from targeted companies such as IBM, Microsoft, and Megvii (Face++) on the Pilot Parliaments Benchmark (PPB) as of August 2018, (3) provides performance results on PPB by non-target companies such as Amazon and Kairos, and (4) explores differences in company responses as shared through corporate communications that contextualize differences in performance on PPB. Within 7 months of the original audit, we find that all three targets released new application program interface (API) versions. All targets reduced accuracy disparities between males and females and darker- and lighter-skinned subgroups, with the most significant update occurring for the darker-skinned female subgroup that underwent a 17.7–30.4% reduction in error between audit periods. Minimizing these disparities led to a 5.72–8.3% reduction in overall error on the Pilot Parliaments Benchmark (PPB) for target corporation APIs. The overall performance of non-targets Amazon and Kairos lags significantly behind that of the targets, with error rates of 8.66% and 6.60% overall, and error rates of 31.37% and 22.50% for the darker female subgroup, respectively. This is an expanded version of an earlier publication of these results, revised for a more general audience, and updated to include commentary on further developments.
4

Broussard, Meredith. "How to Investigate an Algorithm". Issues in Science and Technology 39, no. 4 (July 3, 2023): 85–89. http://dx.doi.org/10.58875/oake4546.

5

Metaxa, Danaë, Joon Sung Park, Ronald E. Robertson, Karrie Karahalios, Christo Wilson, Jeff Hancock, and Christian Sandvig. "Auditing Algorithms: Understanding Algorithmic Systems from the Outside In". Foundations and Trends® in Human–Computer Interaction 14, no. 4 (2021): 272–344. http://dx.doi.org/10.1561/1100000083.

6

Conitzer, Vincent, Gillian K. Hadfield, and Shannon Vallor. "Technical Perspective: The Impact of Auditing for Algorithmic Bias". Communications of the ACM 66, no. 1 (December 20, 2022): 100. http://dx.doi.org/10.1145/3571152.

7

Seidelin, Cathrine, Therese Moreau, Irina Shklovski, and Naja Holten Møller. "Auditing Risk Prediction of Long-Term Unemployment". Proceedings of the ACM on Human-Computer Interaction 6, GROUP (January 14, 2022): 1–12. http://dx.doi.org/10.1145/3492827.

Abstract:
As more and more governments adopt algorithms to support bureaucratic decision-making processes, it becomes urgent to address issues of responsible use and accountability. We examine a contested public service algorithm used in Danish job placement for assessing an individual's risk of long-term unemployment. The study takes inspiration from cooperative audits and was carried out in dialogue with the Danish unemployment services agency. Our audit investigated the practical implementation of algorithms. We find (1) a divergence between the formal documentation and the model tuning code, (2) that the algorithmic model relies on subjectivity, namely a variable that focuses on the individual's self-assessment of how long it will take before they get a job, (3) that the algorithm uses the variable "origin" to determine its predictions, and (4) that the documentation neglects to consider the implications of using variables indicating personal characteristics when predicting employment outcomes. We discuss the benefits and limitations of cooperative audits in a public sector context. We specifically focus on the importance of collaboration across different public actors when investigating the use of algorithms in the algorithmic society.
8

Jin, Xing, Mingchu Li, Xiaomei Sun, Cheng Guo, and Jia Liu. "Reputation-based multi-auditing algorithmic mechanism for reliable mobile crowdsensing". Pervasive and Mobile Computing 51 (December 2018): 73–87. http://dx.doi.org/10.1016/j.pmcj.2018.10.001.

9

Nguyen, Lan N., J. David Smith, Jinsung Bae, Jungmin Kang, Jungtaek Seo, and My T. Thai. "Auditing on Smart-Grid With Dynamic Traffic Flows: An Algorithmic Approach". IEEE Transactions on Smart Grid 11, no. 3 (May 2020): 2293–302. http://dx.doi.org/10.1109/tsg.2019.2951505.

10

Beatrice Oyinkansola Adelakun. "The Impact of AI on Internal Auditing: Transforming Practices and Ensuring Compliance". Finance & Accounting Research Journal 4, no. 6 (December 30, 2022): 350–70. http://dx.doi.org/10.51594/farj.v4i6.1316.

Abstract:
Artificial Intelligence (AI) is revolutionizing internal auditing by transforming traditional practices and enhancing compliance mechanisms. This abstract explores the multifaceted impact of AI on internal auditing, highlighting key advancements in efficiency, accuracy, risk management, and regulatory adherence. AI technologies, particularly machine learning and advanced data analytics, are enhancing the capabilities of internal auditors to analyze large volumes of data swiftly and with greater precision. Traditional internal auditing methods, often constrained by manual processes and sampling techniques, are being supplanted by AI-driven approaches that offer comprehensive analysis and real-time insights. This shift enables auditors to identify anomalies, fraud, and operational inefficiencies more effectively, thereby improving the overall accuracy and reliability of audit outcomes. One of the significant benefits of AI in internal auditing is its ability to automate routine and repetitive tasks. By leveraging AI, auditors can focus on higher-value activities, such as strategic risk assessment and decision-making, thus enhancing the overall productivity of the audit function. Furthermore, AI-driven tools can continuously monitor financial transactions and operational processes, providing real-time alerts and insights that help in early detection of potential issues and proactive risk management. AI also plays a crucial role in ensuring compliance with regulatory standards. By integrating AI systems with compliance frameworks, organizations can automate the tracking and reporting of compliance-related activities. This not only reduces the risk of human error but also ensures that organizations stay updated with evolving regulatory requirements. AI’s ability to process and analyze regulatory texts enables organizations to swiftly adapt to new compliance mandates, thereby mitigating the risk of non-compliance penalties. However, the integration of AI into internal auditing is not without challenges. Ensuring data quality and integrity is paramount, as AI systems rely on accurate data inputs to function effectively. Additionally, the "black box" nature of some AI algorithms can pose transparency issues, making it difficult for auditors to explain how specific conclusions were reached. Addressing algorithmic biases and maintaining auditor expertise in AI technologies are also critical considerations. In conclusion, AI is significantly transforming internal auditing practices by enhancing efficiency, accuracy, and compliance. While the benefits are substantial, careful management of data quality, transparency, and algorithmic biases is essential to fully realize the potential of AI in internal auditing. Keywords: Impact, Ai, Internal Auditing, Transforming Practices, Ensuring Compliance.
11

Lam, Michelle S., Ayush Pandit, Colin H. Kalicki, Rachit Gupta, Poonam Sahoo, and Danaë Metaxa. "Sociotechnical Audits: Broadening the Algorithm Auditing Lens to Investigate Targeted Advertising". Proceedings of the ACM on Human-Computer Interaction 7, CSCW2 (September 28, 2023): 1–37. http://dx.doi.org/10.1145/3610209.

Abstract:
Algorithm audits are powerful tools for studying black-box systems without direct knowledge of their inner workings. While very effective in examining technical components, the method stops short of a sociotechnical frame, which would also consider users themselves as an integral and dynamic part of the system. Addressing this limitation, we propose the concept of sociotechnical auditing: auditing methods that evaluate algorithmic systems at the sociotechnical level, focusing on the interplay between algorithms and users as each impacts the other. Just as algorithm audits probe an algorithm with varied inputs and observe outputs, a sociotechnical audit (STA) additionally probes users, exposing them to different algorithmic behavior and measuring their resulting attitudes and behaviors. As an example of this method, we develop Intervenr, a platform for conducting browser-based, longitudinal sociotechnical audits with consenting, compensated participants. Intervenr investigates the algorithmic content users encounter online, and also coordinates systematic client-side interventions to understand how users change in response. As a case study, we deploy Intervenr in a two-week sociotechnical audit of online advertising (N = 244) to investigate the central premise that personalized ad targeting is more effective on users. In the first week, we observe and collect all browser ads delivered to users, and in the second, we deploy an ablation-style intervention that disrupts normal targeting by randomly pairing participants and swapping all their ads. We collect user-oriented metrics (self-reported ad interest and feeling of representation) and advertiser-oriented metrics (ad views, clicks, and recognition) throughout, along with a total of over 500,000 ads. Our STA finds that targeted ads indeed perform better with users, but also that users begin to acclimate to different ads in only a week, casting doubt on the primacy of personalized ad targeting given the impact of repeated exposure. In comparison with other evaluation methods that only study technical components, or only experiment on users, sociotechnical audits evaluate sociotechnical systems through the interplay of their technical and human components.
12

Ahmad, Muhammad Ayyaz, George Baryannis, and Richard Hill. "Defining Complex Adaptive Systems: An Algorithmic Approach". Systems 12, no. 2 (January 30, 2024): 45. http://dx.doi.org/10.3390/systems12020045.

Abstract:
Despite a profusion of literature on complex adaptive system (CAS) definitions, it is still challenging to definitely answer whether a given system is or is not a CAS. The challenge generally lies in deciding where the boundaries lie between a complex system (CS) and a CAS. In this work, we propose a novel definition for CASs in the form of a concise, robust, and scientific algorithmic framework. The definition allows a two-stage evaluation of a system to first determine whether it meets complexity-related attributes before exploring a series of attributes related to adaptivity, including autonomy, memory, self-organisation, and emergence. We demonstrate the appropriateness of the definition by applying it to two case studies in the medical and supply chain domains. We envision that the proposed algorithmic approach can provide an efficient auditing tool to determine whether a system is a CAS, also providing insights for the relevant communities to optimise their processes and organisational structures.
13

Keyes, Os, and Jeanie Austin. "Feeling fixes: Mess and emotion in algorithmic audits". Big Data & Society 9, no. 2 (July 2022): 205395172211137. http://dx.doi.org/10.1177/20539517221113772.

Abstract:
Efforts to address algorithmic harms have gathered particular steam over the last few years. One area of proposed opportunity is the notion of an “algorithmic audit,” specifically an “internal audit,” a process in which a system’s developers evaluate its construction and likely consequences. These processes are broadly endorsed in theory—but how do they work in practice? In this paper, we conduct not only an audit but an autoethnography of our experiences doing so. Exploring the history and legacy of a facial recognition dataset, we find paradigmatic examples of algorithmic injustices. But we also find that the process of discovery is interwoven with questions of affect and infrastructural brittleness that internal audit processes fail to articulate. For auditing to not only address existing harms but avoid producing new ones in turn, we argue that these processes must attend to the “mess” of engaging with algorithmic systems in practice. Doing so not only reduces the risks of audit processes but—through a more nuanced consideration of the emotive parts of that mess—may enhance the benefits of a form of governance premised entirely on altering future practices.
14

Imane, Laamari. "Artificial intelligence in financial auditing: improving efficiency and addressing ethical and regulatory challenges". Brazilian Journal of Business 7, no. 1 (January 15, 2025): e76833. https://doi.org/10.34140/bjbv7n1-017.

Abstract:
Artificial Intelligence has been increasingly reshaping the face of financial auditing, improving efficiency and effectiveness in fraud detection and serving to strengthen stakeholder trust. This study examines the impact of adopting AI on audit practices, in terms of effectiveness in improving efficiency, fraud detection, ethical challenges, regulatory barriers, and stakeholder trust. In this paper, responses from 460 professional auditors, accountants, and organizational stakeholders were analyzed using descriptive statistics, correlation, regression, and structural equation modeling. The results indicate that AI has a positive influence on efficiency, value creation, and fraud detection ability, while positively influencing stakeholder trust in organizations. However, ethical concerns, ranging from algorithmic bias to a lack of transparency, and regulatory risks associated with compliance with data protection laws remain significant barriers. The study concludes that AI has enormous potential to revolutionize auditing practices, and that addressing these barriers through training, transparent AI models, ethical safeguards, and supportive regulatory frameworks is essential for its widespread adoption. Recommendations and future research avenues are presented to guide the responsible integration of AI into the auditing profession.
15

Zhang, Shaoyang. "Research on the Application of AI Technology in Auditing". Economic Management & Global Business Studies 3, no. 1 (August 29, 2024): 1–19. http://dx.doi.org/10.69610/j.emgbs.20240831.

Abstract:
With the rapid development of AI technology, its application in the field of auditing has become an important means to improve the efficiency and quality of auditing. AI technology has significantly improved the efficiency and quality of audit work by automating data analysis, risk assessment, and audit processes. However, as the technology evolves, so do challenges such as data privacy and security, algorithmic transparency, and a lack of skilled talent. In order to address these issues, it is recommended to strengthen data protection, improve the accuracy and transparency of algorithms, formulate unified AI audit standards, and focus on cultivating audit professionals with AI skills. In the future, the audit industry needs to actively manage and control risks while enjoying the convenience brought by AI, ensure the quality and credibility of audit work, and promote the development of audit services in a more efficient and intelligent direction.
16

Lee, Francis. "Enacting the Pandemic". Science & Technology Studies 34, no. 1 (November 25, 2020): 65–90. http://dx.doi.org/10.23987/sts.75323.

Abstract:
This article has two objectives: First, the article seeks to make a methodological intervention in the social study of algorithms. Today, there is a worrying trend to analytically reduce algorithms to coherent and stable objects whose computational logic can be audited for biases to create fairness, accountability, and transparency (FAccT). To counter this reductionist and determinist tendency, this article proposes three methodological rules that allow an analysis of algorithmic power in practice. Second, the article traces ethnographically how an algorithm was used to enact a pandemic, and how the power to construct this disease outbreak was moved around by an algorithmic assemblage. To do this, the article traces the assembling of a recent epidemic at the European Centre for Disease Control and Prevention—the Zika outbreak starting in 2015—and shows how an epidemic was put together using an array of computational resources, with very different spaces for intervening. A key argument is that we, analysts of algorithms, need to attend to how multiple spaces for agency, opacity, and power open and close in different parts of algorithmic assemblages. The crux of the matter is that actors experience different degrees of agency and opacity in different parts of any algorithmic assemblage. Consequently, rather than auditing algorithms for biased logic, the article shows the usefulness of examining algorithmic power as enacted and situated in practice.
17

Bandy, Jack, and Nicholas Diakopoulos. "Auditing News Curation Systems: A Case Study Examining Algorithmic and Editorial Logic in Apple News". Proceedings of the International AAAI Conference on Web and Social Media 14 (May 26, 2020): 36–47. http://dx.doi.org/10.1609/icwsm.v14i1.7277.

Abstract:
This work presents an audit study of Apple News as a sociotechnical news curation system that exercises gatekeeping power in the media. We examine the mechanisms behind Apple News as well as the content presented in the app, outlining the social, political, and economic implications of both aspects. We focus on the Trending Stories section, which is algorithmically curated, and the Top Stories section, which is human-curated. Results from a crowdsourced audit showed minimal content personalization in the Trending Stories section, and a sock-puppet audit showed no location-based content adaptation. Finally, we perform an extended two-month data collection to compare the human-curated Top Stories section with the algorithmically-curated Trending Stories section. Within these two sections, human curation outperformed algorithmic curation in several measures of source diversity, concentration, and evenness. Furthermore, algorithmic curation featured more “soft news” about celebrities and entertainment, while editorial curation featured more news about policy and international events. To our knowledge, this study provides the first data-backed characterization of Apple News in the United States.
18

Wang, Stephanie, Shengchun Huang, Alvin Zhou, and Danaë Metaxa. "Lower Quantity, Higher Quality: Auditing News Content and User Perceptions on Twitter/X Algorithmic versus Chronological Timelines". Proceedings of the ACM on Human-Computer Interaction 8, CSCW2 (November 7, 2024): 1–25. http://dx.doi.org/10.1145/3687046.

Abstract:
Social media personalization algorithms increasingly influence the flow of civic information through society, resulting in concerns about "filter bubbles", "echo chambers", and other ways they might exacerbate ideological segregation and fan the spread of polarizing content. To address these concerns, we designed and conducted a sociotechnical audit (STA) to investigate how Twitter/X's timeline algorithm affects news curation while also tracking how user perceptions change in response. We deployed a custom-built system that, over the course of three weeks, passively tracked all tweets loaded in users' browsers in the first week, then in the second week enacted an intervention to users' Twitter/X homepage to restrict their view to only the algorithmic or chronological timeline (randomized). We flipped this condition for each user in the third week. We ran our audit in late 2023, collecting user-centered metrics (self-reported survey measures) and platform-centered metrics (views, clicks, likes) for 243 users, along with over 800,000 tweets. Using the STA framework, our results are two-fold: (1) Our algorithm audit finds that Twitter/X's algorithmic timeline resulted in a lower quantity but higher quality of news (less ideologically congruent, less extreme, and slightly more reliable) compared to the chronological timeline. (2) Our user audit suggests that although our timeline intervention had significant effects on users' behaviors, it had little impact on their overall perceptions of the platform. Our paper discusses these findings and their broader implications in the context of algorithmic news curation, user-centric audits, and avenues for independent social science research.
19

Mousavi, Sepehr, Krishna P. Gummadi, and Savvas Zannettou. "Auditing Algorithmic Explanations of Social Media Feeds: A Case Study of TikTok Video Explanations". Proceedings of the International AAAI Conference on Web and Social Media 18 (May 28, 2024): 1110–22. http://dx.doi.org/10.1609/icwsm.v18i1.31376.

Abstract:
In recent years, user feeds on social media platforms have shifted from simple, chronologically ordered content posted by their network connections (i.e., friends) to opaque, algorithmically ordered and curated content. This shift has led to regulations that require platforms to offer end users greater transparency and control over their algorithmic recommendation-based feeds. In response, social media platforms such as TikTok have recently started explaining why specific videos are recommended to end users. However, we still lack a good understanding of how these explanations are generated and whether they offer the desired transparency to end users. In this work, we audit explanations provided on short-format videos on TikTok. We collect a large dataset of short-format videos and explanations provided by TikTok (when available) using automated sockpuppet accounts. Then, we systematically characterize the explanations, focusing on their accuracy and comprehensiveness. For our assessments, we compare the provided explanations with video metadata and the behavior of our sockpuppet accounts. Our analysis shows that some generic (non-personalized) reasons are always included in explanations (e.g., "This video is popular in your country"), while at the same time, we find that a large number of provided explanations are incompatible with the behavior of our sockpuppet accounts (e.g., an account that made zero comments on the platform was presented with the explanation "You commented on similar videos" for 34% of all recommended videos). Overall, our audit of TikTok video explanations highlights the need for more accurate, fine-grained, and useful explanations for the end users. We will make our code and dataset available to assist the research community.
20

Schwalbe, Ulrich. "Algorithms, Machine Learning, and Collusion". Journal of Competition Law & Economics 14, no. 4 (2018): 568–607. http://dx.doi.org/10.1093/joclec/nhz004.

Abstract:
This paper discusses whether self-learning price-setting algorithms can coordinate their pricing behavior to achieve a collusive outcome that maximizes the joint profits of the firms using them. Although legal scholars have generally assumed that algorithmic collusion is not only possible but also exceptionally easy, computer scientists examining cooperation between algorithms as well as economists investigating collusion in experimental oligopolies have countered that coordinated, tacitly collusive behavior is not as rapid, easy, or even inevitable as often suggested. Research in experimental economics has shown that the exchange of information is vital to collusion when more than two firms operate within a given market. Communication between algorithms is also a topic in research on artificial intelligence, in which some scholars have recently indicated that algorithms can learn to communicate, albeit in somewhat limited ways. Taken together, algorithmic collusion currently seems far more difficult to achieve than legal scholars have often assumed and is thus not a particularly relevant competitive concern at present. Moreover, there are several legal problems associated with algorithmic collusion, including questions of liability, of auditing and monitoring algorithms, and of enforcing competition law.
21

Criado, J. Ignacio, Julián Valero, and Julián Villodre. "Algorithmic transparency and bureaucratic discretion: The case of SALER early warning system". Information Polity 25, no. 4 (December 4, 2020): 449–70. http://dx.doi.org/10.3233/ip-200260.

Abstract:
The governance of public sector organizations has been challenged by the growing adoption and use of Artificial Intelligence (AI) systems and algorithms. Algorithmic transparency, conceptualized here using the dimensions of accessibility and explainability, fosters the appraisal of algorithms’ footprint in decisions of public agencies, and should include impacts on civil servants’ work. However, although discretion will not disappear, AI innovations might have a negative impact on how public employees support their decisions. This article is intended to answer the following research questions: RQ1: To what extent do algorithms affect the discretionary power of civil servants to make decisions? RQ2: How can algorithmic transparency impact the discretionary power of civil servants? To do so, we analyze SALER, a case based on a set of algorithms focused on the prevention of irregularities in the Valencian regional administration (GVA), Spain, using a qualitative methodology supported by semi-structured interviews and documentary analysis. Among the results of the study, our empirical work suggests the existence of a series of factors that might be linked to the positive impacts of algorithms on the work and discretionary power of civil servants. Also, we identify different pathways for achieving algorithmic transparency, such as the involvement of civil servants in active development, or auditing processes being recognized by law, among others.
22

Kingsley, Sara, Jiayin Zhi, Wesley Hanwen Deng, Jaimie Lee, Sizhe Zhang, Motahhare Eslami, Kenneth Holstein, Jason I. Hong, Tianshi Li, and Hong Shen. "Investigating What Factors Influence Users’ Rating of Harmful Algorithmic Bias and Discrimination". Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 12 (October 14, 2024): 75–85. http://dx.doi.org/10.1609/hcomp.v12i1.31602.

Abstract:
There has been growing recognition of the crucial role users, especially those from marginalized groups, play in uncovering harmful algorithmic biases. However, it remains unclear how users’ identities and experiences might impact their rating of harmful biases. We present an online experiment (N=2,197) examining these factors: demographics, discrimination experiences, and social and technical knowledge. Participants were shown examples of image search results, including ones that previous literature has identified as biased against marginalized racial, gender, or sexual orientation groups. We found participants from marginalized gender or sexual orientation groups were more likely to rate the examples as more severely harmful. Belonging to marginalized races did not have a similar pattern. Additional factors affecting users’ ratings included discrimination experiences, and having friends or family belonging to marginalized demographics. A qualitative analysis offers insights into users' bias recognition, and why they see biases the way they do. We provide guidance for designing future methods to support effective user-driven auditing.
23

Dutta, Sanghamitra, and Faisal Hamman. "A Review of Partial Information Decomposition in Algorithmic Fairness and Explainability". Entropy 25, no. 5 (May 13, 2023): 795. http://dx.doi.org/10.3390/e25050795.

Abstract:
Partial Information Decomposition (PID) is a body of work within information theory that allows one to quantify the information that several random variables provide about another random variable, either individually (unique information), redundantly (shared information), or only jointly (synergistic information). This review article aims to provide a survey of some recent and emerging applications of partial information decomposition in algorithmic fairness and explainability, which are of immense importance given the growing use of machine learning in high-stakes applications. For instance, PID, in conjunction with causality, has enabled the disentanglement of the non-exempt disparity which is the part of the overall disparity that is not due to critical job necessities. Similarly, in federated learning, PID has enabled the quantification of tradeoffs between local and global disparities. We introduce a taxonomy that highlights the role of PID in algorithmic fairness and explainability in three main avenues: (i) Quantifying the legally non-exempt disparity for auditing or training; (ii) Explaining contributions of various features or data points; and (iii) Formalizing tradeoffs among different disparities in federated learning. Lastly, we also review techniques for the estimation of PID measures, as well as discuss some challenges and future directions.
24

Bernard Owusu Antwi, Beatrice Oyinkansola Adelakun, Damilola Temitayo Fatogun, and Omolara Patricia Olaiya. "Enhancing audit accuracy: The role of AI in detecting financial anomalies and fraud". Finance & Accounting Research Journal 6, no. 6 (June 15, 2024): 1049–68. http://dx.doi.org/10.51594/farj.v6i6.1235.

Abstract:
Artificial Intelligence (AI) is transforming the field of auditing by significantly enhancing the ability to detect financial anomalies and fraud. The integration of AI in auditing processes offers unprecedented capabilities for analyzing vast datasets with greater speed and precision than traditional methods. This review explores the impact of AI on audit accuracy, focusing on its role in identifying irregularities and fraudulent activities. AI-driven auditing tools leverage machine learning algorithms and advanced data analytics to scrutinize financial records with a high level of detail. These tools can process extensive amounts of financial data rapidly, identifying patterns and deviations that may indicate anomalies or fraudulent behavior. Unlike conventional audit techniques, which often rely on sampling and manual checks, AI can evaluate entire datasets, ensuring comprehensive coverage and reducing the likelihood of undetected issues. One of the primary benefits of AI in auditing is its ability to enhance anomaly detection. Machine learning models are trained to recognize normal financial behaviors and flag deviations that may warrant further investigation. This capability is particularly valuable in identifying subtle or complex patterns of fraud that might be missed by human auditors. For example, AI can detect unusual transaction patterns, inconsistencies in financial statements, or irregularities in vendor or customer behaviors, which are common indicators of fraud. Moreover, AI's predictive analytics can proactively identify potential risks by analyzing historical data and forecasting future trends. This allows auditors to anticipate areas of concern and allocate resources more effectively, improving the overall efficiency and effectiveness of the audit process. Additionally, AI systems continuously learn and adapt, enhancing their accuracy and reliability over time. Despite its advantages, the implementation of AI in auditing also presents challenges. Ensuring data quality and integrity, addressing algorithmic biases, and maintaining transparency in AI decision-making processes are critical considerations. Auditors must also stay updated with evolving AI technologies and regulatory requirements to maximize the benefits while mitigating risks. In conclusion, AI holds significant promise for enhancing audit accuracy by improving the detection of financial anomalies and fraud. By integrating AI into auditing practices, organizations can achieve more thorough and reliable audits, ultimately strengthening financial oversight and integrity. However, careful management of the associated challenges is essential to fully realize AI's potential in the auditing domain. Keywords: Fraud, Financial Anomalies, AI, Audit Accuracy, Detecting.
25

Barlas, Pınar, Kyriakos Kyriakou, Styliani Kleanthous, and Jahna Otterbacher. "Social B(eye)as: Human and Machine Descriptions of People Images". Proceedings of the International AAAI Conference on Web and Social Media 13 (July 6, 2019): 583–91. http://dx.doi.org/10.1609/icwsm.v13i01.3255.

Abstract:
Image analysis algorithms have become an indispensable tool in our information ecosystem, facilitating new forms of visual communication and information sharing. At the same time, they enable large-scale socio-technical research which would otherwise be difficult to carry out. However, their outputs may exhibit social bias, especially when analyzing people images. Since most algorithms are proprietary and opaque, we propose a method of auditing their outputs for social biases. To be able to compare how algorithms interpret a controlled set of people images, we collected descriptions across six image tagging algorithms. In order to compare these results to human behavior, we also collected descriptions on the same images from crowdworkers in two anglophone regions. The dataset we present consists of tags from these eight taggers, along with a typology of concepts, and a Python script to calculate vector scores for each image and tagger. Using our methodology, researchers can see the behaviors of the image tagging algorithms and compare them to those of crowdworkers. Beyond computer vision auditing, the dataset of human- and machine-produced tags, the typology, and the vectorization method can be used to explore a range of research questions related to both algorithmic and human behaviors.
26

Lam, Michelle S., Mitchell L. Gordon, Danaë Metaxa, Jeffrey T. Hancock, James A. Landay, and Michael S. Bernstein. "End-User Audits: A System Empowering Communities to Lead Large-Scale Investigations of Harmful Algorithmic Behavior". Proceedings of the ACM on Human-Computer Interaction 6, CSCW2 (November 7, 2022): 1–34. http://dx.doi.org/10.1145/3555625.

Abstract:
Because algorithm audits are conducted by technical experts, audits are necessarily limited to the hypotheses that experts think to test. End users hold the promise to expand this purview, as they inhabit spaces and witness algorithmic impacts that auditors do not. In pursuit of this goal, we propose end-user audits (system-scale audits led by non-technical users) and present an approach that scaffolds end users in hypothesis generation, evidence identification, and results communication. Today, performing a system-scale audit requires substantial user effort to label thousands of system outputs, so we introduce a collaborative filtering technique that leverages the algorithmic system's own disaggregated training data to project from a small number of end user labels onto the full test set. Our end-user auditing tool, IndieLabel, employs these predicted labels so that users can rapidly explore where their opinions diverge from the algorithmic system's outputs. By highlighting topic areas where the system is under-performing for the user and surfacing sets of likely error cases, the tool guides the user in authoring an audit report. In an evaluation of end-user audits on a popular comment toxicity model with 17 non-technical participants, participants both replicated issues that formal audits had previously identified and also raised previously underreported issues such as under-flagging on veiled forms of hate that perpetuate stigma and over-flagging of slurs that have been reclaimed by marginalized communities.
27

Oritsematosan Faith Dudu, Olakunle Babatunde Alao, and Enoch O. Alonge. "Conceptual framework for AI-driven tax compliance in fintech ecosystems". International Journal of Frontiers in Engineering and Technology Research 7, no. 2 (November 30, 2024): 001–10. http://dx.doi.org/10.53294/ijfetr.2024.7.2.0045.

Abstract:
This paper comprehensively reviews the integration of Artificial Intelligence (AI) into tax compliance processes within fintech ecosystems. It explores the theoretical foundations of AI technologies, such as machine learning and predictive analytics, and how they can automate tax reporting, auditing, and compliance monitoring. Conceptual models for AI-driven tax compliance are proposed, highlighting the potential for increased efficiency, accuracy, and cost reduction. The paper also examines challenges associated with AI adoption, including algorithmic biases, ethical concerns, data privacy issues, and regulatory hurdles. Strategies for overcoming these challenges and fostering broader adoption are discussed. Finally, the paper offers recommendations for fintech companies and policymakers, emphasizing the need for transparent AI models, bias mitigation, data privacy, and updated regulatory frameworks to ensure fair and effective AI-enabled tax systems.
28

Deliu, Delia. "Professional Judgment and Skepticism Amidst the Interaction of Artificial Intelligence and Human Intelligence". Audit Financiar 22, no. 176 (October 15, 2024): 724–41. http://dx.doi.org/10.20869/auditf/2024/176/024.

Abstract:
Artificial Intelligence (AI) has revolutionized various industries by learning from data, mimicking human behavior, and making autonomous decisions. However, despite AI's advancements in data processing and decision-making, it cannot fully replicate human attributes such as emotional understanding and ethical judgment. This paper explores the intersection of AI and Human Intelligence (HI) within the audit profession, focusing on the implications for the auditor’s professional judgment and skepticism. The integration of AI in auditing promises enhanced efficiency, precision, and data processing capabilities beyond human limits. However, it also raises ethical concerns regarding data privacy, algorithmic bias, and accountability. These concerns highlight the importance of maintaining human oversight and ethical standards in audit practices. Through a comprehensive literature review, this study compares the cognitive abilities, functional capabilities, and ethical implications of AI and human auditors. Key findings underscore AI's potential to complement human auditors by improving accuracy and uncovering anomalies, while recognizing the irreplaceable role of human judgment in complex decision-making processes. The study provides insights into the transformative impact of AI on the audit profession, advocating for a balanced approach that harnesses AI's capabilities while preserving the integrity and critical thinking of human auditors. The findings contribute to a deeper understanding of AI's integration into auditing, informing best practices and guiding future research in maintaining the profession's standards amidst technological advancements.
29

Chandio, Sarmad, Muhammad Daniyal Pirwani Dar, and Rishab Nithyanand. "How Audit Methods Impact Our Understanding of YouTube’s Recommendation Systems". Proceedings of the International AAAI Conference on Web and Social Media 18 (May 28, 2024): 241–53. http://dx.doi.org/10.1609/icwsm.v18i1.31311.

Abstract:
Computational audits of social media websites have generated data that forms the basis of our understanding of the problematic behaviors of algorithmic recommendation systems. Focusing on YouTube, this paper demonstrates that conducting audits to make specific inferences about the underlying content recommendation system is more methodologically challenging than one might expect. Obtaining scientifically valid results requires considering many methodological decisions, and each of these decisions incurs costs. For example, should an auditor use logged-in YouTube accounts while gathering recommendations to ensure more accurate inferences from the collected data? We systematically explore the impact of this and many other decisions and make important discoveries about the methodological choices that impact YouTube’s recommendations. Assessed together, our research suggests auditing configurations that can be used by researchers and auditors to reduce economic and computing costs, without sacrificing inference quality and accuracy.
30

Beatrice Oyinkansola Adelakun, Damilola Temitayo Fatogun, Tomiwa Gabriel Majekodunmi, and Gbenga Adeniyi Adediran. "Integrating machine learning algorithms into audit processes: Benefits and challenges". Finance & Accounting Research Journal 6, no. 6 (June 15, 2024): 1000–1016. http://dx.doi.org/10.51594/farj.v6i6.1233.

Abstract:
The integration of machine learning (ML) algorithms into audit processes represents a significant advancement in the field of auditing, offering substantial benefits in terms of efficiency, accuracy, and risk management. This review examines the transformative potential of ML in auditing, highlighting its key benefits and the challenges that must be addressed to fully leverage its capabilities. Machine learning algorithms, with their ability to analyze large datasets and identify patterns, enhance the accuracy and thoroughness of audits. Traditional auditing methods often rely on sampling and manual checks, which can miss anomalies and fraudulent activities. In contrast, ML algorithms can process entire datasets, uncovering subtle patterns and irregularities that may indicate fraud or errors. This comprehensive analysis reduces the risk of oversight and improves the reliability of audit findings. One of the primary benefits of ML in auditing is its capacity for anomaly detection. ML models can be trained on historical data to understand normal financial behavior and flag deviations that might signify irregularities. This ability to detect anomalies in real-time enables auditors to identify potential issues promptly, reducing the time lag between occurrence and detection of fraud. Predictive analytics, powered by ML, further enhances audit processes by forecasting future risks based on historical data. This proactive approach allows auditors to anticipate and mitigate risks before they materialize, contributing to more robust risk management strategies. Despite these advantages, integrating ML into audit processes presents several challenges. Ensuring data quality and integrity is crucial, as ML algorithms are only as good as the data they analyze. Poor-quality data can lead to inaccurate predictions and conclusions. Additionally, the "black box" nature of some ML algorithms can pose transparency issues, making it difficult for auditors to explain how specific conclusions were reached, which is critical for stakeholder trust and regulatory compliance. Another significant challenge is the potential for algorithmic bias. ML models can inadvertently perpetuate existing biases in the data, leading to unfair or skewed audit outcomes. Continuous monitoring and validation of ML algorithms are necessary to detect and mitigate such biases. In conclusion, while integrating machine learning algorithms into audit processes offers substantial benefits in terms of accuracy, efficiency, and risk management, it also necessitates careful attention to data quality, transparency, and bias mitigation. Addressing these challenges is essential to fully realize the potential of ML in enhancing audit practices. Keywords: Benefits, Challenges, Audit Processes, Algorithms, ML.
31

Mantilla-León, Laura Clemencia, and Oscar Javier Maldonado Castañeda. "“Nosotras de robot no tenemos nada”: arreglos intersubjetivos tecnosociales en el trabajo doméstico mediado digitalmente". Revista de Estudios Sociales, no. 89 (July 2, 2024): 61–80. http://dx.doi.org/10.7440/res89.2024.04.

Abstract:
This article explores the intersubjective relationships between clients and workers on two digital domestic work platforms in Bogotá, Colombia. Drawing on feminist social studies of science and technology, it investigates the control, auditing, and subordination mechanisms affecting workers, with clients playing a key role in setting performance conditions. While much of the literature on digital labor focuses on technological intermediation and algorithmic management—used to track, monitor, and rate domestic workers—this article looks at the relationships with clients and how these shape the work process and technologies involved. Based on over thirty in-depth interviews with workers and clients, the study examines three areas: (i) clients’ expectations of mechanical efficiency; (ii) the use of rating systems; and (iii) domestic work as emotional labor. The article concludes by reflecting on the dual subordination experienced by workers due to digital service intermediaries and emphasizes the need for fair technologies that consider workers’ experiences and perspectives.
32

Ding, Xueying, Rui Xi, and Leman Akoglu. "Outlier Detection Bias Busted: Understanding Sources of Algorithmic Bias through Data-centric Factors". Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society 7 (October 16, 2024): 384–95. http://dx.doi.org/10.1609/aies.v7i1.31644.

Abstract:
The astonishing successes of ML have raised growing concern for the fairness of modern methods when deployed in real world settings. However, studies on fairness have mostly focused on supervised ML, while unsupervised outlier detection (OD), with numerous applications in finance, security, etc., has attracted little attention. While a few studies proposed fairness-enhanced OD algorithms, they remain agnostic to the underlying driving mechanisms or sources of unfairness. Even within the supervised ML literature, there exists debate on whether unfairness stems solely from algorithmic biases (i.e. design choices) or from the biases encoded in the data on which they are trained. To close this gap, this work aims to shed light on the possible sources of unfairness in OD by auditing detection models under different data-centric factors. By injecting various known biases into the input data (pertaining to sample size disparity, under-representation, feature measurement noise, and group membership obfuscation), we find that the OD algorithms under the study all exhibit fairness pitfalls, although differing in which types of data bias they are more susceptible to. Most notably, our study demonstrates that OD algorithm bias is not merely a data bias problem. A key realization is that the data properties that emerge from bias injection could just as well be organic, reflecting natural group differences w.r.t. sparsity, base rate, variance, and multi-modality. Either natural or biased, such data properties can give rise to unfairness as they interact with certain algorithmic design choices. Our work provides a deeper understanding of the possible sources of OD unfairness, and serves as a framework for assessing the unfairness of future OD algorithms under specific data-centric factors. It also paves the way for future work on mitigation strategies by underscoring the susceptibility of various design choices.
33

Oyekunle Claudius Oyeniran, Adebunmi Okechukwu Adewusi, Adams Gbolahan Adeleke, Lucy Anthony Akwawa, and Chidimma Francisca Azubuko. "Ethical AI: Addressing bias in machine learning models and software applications". Computer Science & IT Research Journal 3, no. 3 (December 30, 2022): 115–26. http://dx.doi.org/10.51594/csitrj.v3i3.1559.

Abstract:
As artificial intelligence (AI) increasingly integrates into various aspects of society, addressing bias in machine learning models and software applications has become crucial. Bias in AI systems can originate from various sources, including unrepresentative datasets, algorithmic assumptions, and human factors. These biases can perpetuate discrimination and inequity, leading to significant social and ethical consequences. This paper explores the nature of bias in AI, emphasizing the need for ethical AI practices to ensure fairness and accountability. We first define and categorize the different types of bias—data bias, algorithmic bias, and human-induced bias—highlighting real-world examples and their impacts. The discussion then shifts to methods for mitigating bias, including strategies for improving data quality, developing fairness-aware algorithms, and implementing robust auditing processes. We also review existing ethical guidelines and frameworks, such as those proposed by IEEE and the European Union, which provide a foundation for ethical AI development. Challenges in identifying and addressing bias are examined, such as the trade-offs between fairness and model accuracy, and the complexities of legal and regulatory requirements. Future directions are considered, including emerging trends in ethical AI, the importance of interdisciplinary collaboration, and innovations in bias detection and mitigation. In conclusion, ongoing vigilance and commitment to ethical practices are essential for developing AI systems that are equitable and just. This paper calls for continuous improvement and proactive measures from developers, researchers, and policymakers to create AI technologies that serve all individuals fairly and without bias. Keywords: Ethical AI, Bias, Machine Learning, Models, Software Applications.
34

Shahbazi, Nima, Nikola Danevski, Fatemeh Nargesian, Abolfazl Asudeh, and Divesh Srivastava. "Through the Fairness Lens: Experimental Analysis and Evaluation of Entity Matching". Proceedings of the VLDB Endowment 16, no. 11 (July 2023): 3279–92. http://dx.doi.org/10.14778/3611479.3611525.

Abstract:
Entity matching (EM) is a challenging problem studied by different communities for over half a century. Algorithmic fairness has also become a timely topic to address machine bias and its societal impacts. Despite extensive research on these two topics, little attention has been paid to the fairness of entity matching. Towards addressing this gap, we perform an extensive experimental evaluation of a variety of EM techniques in this paper. We generated two social datasets from publicly available datasets for the purpose of auditing EM through the lens of fairness. Our findings underscore potential unfairness under two common conditions in real-world societies: (i) when some demographic groups are over-represented, and (ii) when names are more similar in some groups compared to others. Among our many findings, it is noteworthy to mention that while various fairness definitions are valuable for different settings, due to EM's class imbalance nature, measures such as positive predictive value parity and true positive rate parity are, in general, more capable of revealing EM unfairness.
35

Cattell, Sven, Avijit Ghosh, and Lucie-Aimée Kaffee. "Coordinated Flaw Disclosure for AI: Beyond Security Vulnerabilities". Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society 7 (October 16, 2024): 267–80. http://dx.doi.org/10.1609/aies.v7i1.31635.

Texto completo da fonte
Resumo:
Harm reporting in Artificial Intelligence (AI) currently lacks a structured process for disclosing and addressing algorithmic flaws, relying largely on an ad-hoc approach. This contrasts sharply with the well-established Coordinated Vulnerability Disclosure (CVD) ecosystem in software security. While global efforts to establish frameworks for AI transparency and collaboration are underway, the unique challenges presented by machine learning (ML) models demand a specialized approach. To address this gap, we propose implementing a Coordinated Flaw Disclosure (CFD) framework tailored to the complexities of ML and AI issues. This paper reviews the evolution of ML disclosure practices, from ad hoc reporting to emerging participatory auditing methods, and compares them with cybersecurity norms. Our framework introduces innovations such as extended model cards, dynamic scope expansion, an independent adjudication panel, and an automated verification process. We also outline a forthcoming real-world pilot of CFD. We argue that CFD could significantly enhance public trust in AI systems. By balancing organizational and community interests, CFD aims to improve AI accountability in a rapidly evolving technological landscape.
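A structured disclosure process of the kind proposed here ultimately needs a machine-readable flaw record. The sketch below is only an illustrative guess at what such a record might contain (the fields, statuses, and identifiers are assumptions, not the paper's extended-model-card schema):

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum
from typing import List

class FlawStatus(Enum):
    REPORTED = "reported"
    IN_SCOPE = "in_scope"          # accepted under the model card's stated scope
    ADJUDICATING = "adjudicating"  # escalated to an independent panel
    RESOLVED = "resolved"

@dataclass
class FlawReport:
    """Minimal record for a coordinated AI flaw disclosure (illustrative schema only)."""
    model_id: str
    reporter: str
    description: str
    harm_category: str                       # e.g. "bias", "privacy", "safety"
    reproduction_steps: List[str] = field(default_factory=list)
    reported_on: date = field(default_factory=date.today)
    status: FlawStatus = FlawStatus.REPORTED

report = FlawReport(
    model_id="example-vision-model-v2",
    reporter="external-auditor-42",
    description="Higher false-positive rate for one demographic subgroup.",
    harm_category="bias",
    reproduction_steps=["Run benchmark X", "Compare subgroup error rates"],
)
print(report.status.value, "-", report.description)
```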
Estilos ABNT, Harvard, Vancouver, APA, etc.
36

Horta Ribeiro, Manoel, Veniamin Veselovsky e Robert West. "The Amplification Paradox in Recommender Systems". Proceedings of the International AAAI Conference on Web and Social Media 17 (2 de junho de 2023): 1138–42. http://dx.doi.org/10.1609/icwsm.v17i1.22223.

Texto completo da fonte
Resumo:
Automated audits of recommender systems found that blindly following recommendations leads users to increasingly partisan, conspiratorial, or false content. At the same time, studies using real user traces suggest that recommender systems are not the primary driver of attention toward extreme content; on the contrary, such content is mostly reached through other means, e.g., other websites. In this paper, we explain the following apparent paradox: if the recommendation algorithm favors extreme content, why is it not driving its consumption? With a simple agent-based model where users attribute different utilities to items in the recommender system, we show through simulations that the collaborative-filtering nature of recommender systems and the nicheness of extreme content can resolve the apparent paradox: although blindly following recommendations would indeed lead users to niche content, users rarely consume niche content when given the option because it is of low utility to them, which can lead the recommender system to deamplify such content. Our results call for a nuanced interpretation of "algorithmic amplification" and highlight the importance of modeling the utility of content to users when auditing recommender systems. Code available: https://github.com/epfl-dlab/amplification_paradox.
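The paper's agent-based argument can be approximated with a few lines of simulation. The toy model below (all parameters are invented for illustration and are not the authors' values) contrasts users who blindly follow recommendations with users who only accept a recommendation when its utility beats an outside option:

```python
import random

random.seed(0)

N_USERS = 10_000
P_RECOMMEND_NICHE = 0.30   # the recommender favours niche/extreme content
NICHE_UTILITY = 0.1        # most users assign low utility to niche content
MAINSTREAM_UTILITY = 0.7
OUTSIDE_OPTION = 0.5       # utility of doing something else instead

blind_follow_niche = 0
utility_aware_niche = 0

for _ in range(N_USERS):
    recommended_niche = random.random() < P_RECOMMEND_NICHE

    # A "blind" user always consumes whatever is recommended.
    if recommended_niche:
        blind_follow_niche += 1

    # A utility-aware user consumes the recommendation only if it beats the outside option.
    utility = NICHE_UTILITY if recommended_niche else MAINSTREAM_UTILITY
    if recommended_niche and utility > OUTSIDE_OPTION:
        utility_aware_niche += 1

print(f"Niche consumption, blind users:         {blind_follow_niche / N_USERS:.2%}")
print(f"Niche consumption, utility-aware users: {utility_aware_niche / N_USERS:.2%}")
```

With these numbers, blind users consume niche content at roughly the recommendation rate, while utility-aware users almost never do, reproducing the qualitative deamplification effect the abstract describes.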
Estilos ABNT, Harvard, Vancouver, APA, etc.
37

Amiri, Ghulam Ali, Shahwali Shahidi, Meqdad Mehri, Farid Ahmad Darmel, Jawid Ahmad Niazi e Mohammad Alim Anwari. "Decoding Gender Representation and Bias in Voice User Interfaces (VUIs)". International Journal of Computer Science and Mobile Computing 13, n.º 5 (30 de maio de 2024): 76–88. http://dx.doi.org/10.47760/ijcsmc.2024.v13i05.007.

Texto completo da fonte
Resumo:
This article offers a thorough exploration of gender representation and bias within Voice User Interfaces (VUIs), delving into their impact on technology design, user interaction, and broader societal dynamics. It scrutinizes the entrenched gender stereotypes inherent in VUI design, revealing the intricate interplay between technology, cultural norms, and gender expectations[1]. It highlights the ethical implications of such biases, emphasizing the need for diverse perspectives in the development of VUIs[2], [3]. The article advocates a multifaceted approach to understanding and identifying bias in VUIs, incorporating methodologies such as data analysis, algorithmic auditing, and user testing[4]. Beyond individual interactions, it discusses how gender bias in VUIs can erode trust, diminish user satisfaction, and perpetuate systemic inequalities[1]. Proposing a shift towards gender-neutral design principles, it champions inclusivity, equity, personalized experiences, and diverse representation. Looking ahead, the article outlines future directions in VUI technology aimed at advancing gender equality through enhanced natural language understanding, sentiment analysis, and collaborative design approaches[4]. By unraveling and addressing gender representation and bias in VUIs, this article lays a robust foundation for a more inclusive and equitable landscape in voice technology.
Estilos ABNT, Harvard, Vancouver, APA, etc.
38

Ugwudike, Pamela. "AI audits for assessing design logics and building ethical systems: the case of predictive policing algorithms". AI and Ethics 2, n.º 1 (13 de dezembro de 2021): 199–208. http://dx.doi.org/10.1007/s43681-021-00117-5.

Texto completo da fonte
Resumo:
Organisations, governments, institutions and others across several jurisdictions are using AI systems for a constellation of high-stakes decisions with implications for human rights and civil liberties. But a fast-growing multidisciplinary scholarship on AI bias is currently documenting problems such as the discriminatory labelling and surveillance of historically marginalised subgroups. One of the ways in which AI systems generate such downstream outcomes is through their inputs. This paper focuses on a specific input dynamic: the theoretical foundation that informs the design, operation, and outputs of such systems. The paper uses the set of technologies known as predictive policing algorithms as a case example to illustrate how theoretical assumptions can produce adverse social consequences and should therefore be systematically evaluated during audits if the objective is to detect unknown risks, avoid AI harms, and build ethical systems. In its analysis of these issues, the paper adds a new dimension to the literature on AI ethics and audits by investigating algorithmic impact in the context of underpinning theory. In doing so, the paper provides insights that can usefully inform auditing policy and practice instituted by relevant stakeholders, including the developers, vendors, and procurers of AI systems as well as independent auditors.
Estilos ABNT, Harvard, Vancouver, APA, etc.
39

Aldabbous, Nagham, e Mohamed Ismail Mohamed Riyath. "Role of AI and Digital Transformation in Corporate Reporting: A Bibliometric Review". International Business Research 17, n.º 5 (26 de setembro de 2024): 52. http://dx.doi.org/10.5539/ibr.v17n5p52.

Texto completo da fonte
Resumo:
This systematic literature review (SLR) investigates the evolution of the academic literature on the role of AI and digital transformation in corporate reporting from a bibliometric perspective. The study follows the PRISMA framework and collects literature from Web of Science, Scopus, and Semantic Scholar, yielding 617 records. After screening and eligibility assessment, 192 studies were included in the review. The study uses Biblioshiny, an R package, to analyse scientific production, citations, author and journal impact, and collaborations, and VOSviewer to visualise co-occurrence networks of keywords and research trends over time. Further, content analysis is used to explore the role of digital technologies, their benefits and challenges, and future trends in the integration of digital technologies into corporate reporting. The analysis reveals Troshani, Rowbottom, Locke, and Lehner as key contributors, with the global impact led by China, the UK, the USA, and Hong Kong. Digital transformation, sustainability reporting, and fintech are emerging trends. Digital technologies automate data collection and analysis, improving the quality of financial and non-financial disclosures. Benefits include improved accuracy, real-time insights for decision-making, and risk mitigation, while challenges concern data privacy, the technical expertise required, algorithmic biases, and regulatory issues. Future trends involve predictive analytics, technology integration, and AI auditing.
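The keyword co-occurrence mapping step described above (performed in the review with VOSviewer) can be sketched in a few lines. The records below are invented for illustration; only the counting logic is the point:

```python
from collections import Counter
from itertools import combinations

# Toy records: author keywords per article (illustrative, not the review's data).
records = [
    ["digital transformation", "corporate reporting", "ai"],
    ["sustainability reporting", "digital transformation"],
    ["fintech", "ai", "corporate reporting"],
    ["ai", "auditing", "corporate reporting"],
]

cooccurrence = Counter()
for keywords in records:
    for a, b in combinations(sorted(set(keywords)), 2):
        cooccurrence[(a, b)] += 1

# The strongest links form the core of the keyword co-occurrence network.
for (a, b), weight in cooccurrence.most_common(5):
    print(f"{a} -- {b}: {weight}")
```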
Estilos ABNT, Harvard, Vancouver, APA, etc.
40

Mohammad Aljanabi. "Safeguarding Connected Health: Leveraging Trustworthy AI Techniques to Harden Intrusion Detection Systems Against Data Poisoning Threats in IoMT Environments". Babylonian Journal of Internet of Things 2023 (17 de maio de 2023): 31–37. http://dx.doi.org/10.58496/bjiot/2023/005.

Texto completo da fonte
Resumo:
Internet of Medical Things (IoMT) environments introduce vast security exposures, including vulnerabilities to data poisoning threats that undermine the integrity of automated patient health analytics such as diagnosis models. This research explores applying trustworthy artificial intelligence (AI) methodologies, including explainability, bias mitigation, and adversarial sample detection, to substantially enhance the resilience of medical intrusion detection systems. We architect an integrated anomaly detector featuring purpose-built modules for model interpretability, bias quantification, and advanced malicious input recognition alongside conventional classifier pipelines. Additional infrastructure provides full-lifecycle accountability via independent auditing. Our experimental intrusion detection system design, embodying multiple trustworthy AI principles, is rigorously evaluated against staged electronic record poisoning attacks emulating realistic threats to healthcare IoMT ecosystems spanning wearables, edge devices, and hospital information systems. Results demonstrate significantly strengthened threat response capabilities versus baseline detectors lacking safeguards. Explainability mechanisms build justified trust in model behaviors by surfacing the rationale for each prediction to human operators. Continuous bias tracking enables preemptively identifying and mitigating unfair performance gaps before they widen into operational exposures over time. SafeML classifiers reliably detect even camouflaged data manipulation attempts with 97% accuracy. Together, the integrated modules restore classification performance to baseline levels even when 30% of the samples are contaminated. Findings strongly motivate prioritizing the adoption of ethical ML practices to fulfill the duty of care around patient safety and data integrity as algorithmic capabilities advance.
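The paper's SafeML-based detector is not reproduced here, but the general idea of statistically comparing incoming data against a trusted baseline can be sketched as follows (synthetic vital-sign data, an assumed 30% poisoning rate, and an illustrative significance threshold, none of which come from the study):

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Trusted baseline feature values (e.g. a vital-sign reading) collected before deployment.
baseline = rng.normal(loc=80.0, scale=5.0, size=2_000)

# Incoming batch in which 30% of samples were shifted by a simulated poisoning attack.
clean = rng.normal(loc=80.0, scale=5.0, size=700)
poisoned = rng.normal(loc=95.0, scale=5.0, size=300)
incoming = np.concatenate([clean, poisoned])

# Two-sample Kolmogorov-Smirnov test between baseline and incoming distributions.
result = ks_2samp(baseline, incoming)
ALERT_THRESHOLD = 0.01  # illustrative significance level, not a recommended setting

if result.pvalue < ALERT_THRESHOLD:
    print(f"Distribution shift detected (KS={result.statistic:.3f}, p={result.pvalue:.2e}); flag batch for review")
else:
    print("Incoming batch is consistent with the trusted baseline")
```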
Estilos ABNT, Harvard, Vancouver, APA, etc.
41

Supriyadi, Heri, Dominicus Savio Priyarsono, Noer Azam Achsani e Trias Andati. "An integrated GRC approach to combating fraud in microloan services". International Journal of Innovative Research and Scientific Studies 7, n.º 4 (23 de agosto de 2024): 1580–91. http://dx.doi.org/10.53894/ijirss.v7i4.3457.

Texto completo da fonte
Resumo:
This research aims to reduce fraud risks in Indonesian banks and non-bank financial institutions providing microloan services. The study employs data analytics and machine learning techniques using employee data from Bank X spanning 2017-2019, with samples of 28,004 workers (2017-2018) and 27,274 employees (2019). Confirmatory factor analysis and XGBoost predictive modelling are applied within the fraud triangle framework to identify critical fraud risk indicators related to employee pressure. An algorithmic approach categorizes personnel based on fraud risk ratings, enabling the detection of potentially suspicious activities for proactive intervention. The analysis reveals that incorporating data analytics into governance, risk management and compliance (GRC) systems can accurately forecast fraud probability by focusing on factors associated with employee pressure and opportunities. This facilitates targeted fraud prevention solutions by integrating control mechanisms, risk processes and auditing standards. The predictive model provides valuable insights for policymakers to combat fraud by enhancing governance and risk management practices specific to microloans. This research concludes that the predictive model is a pragmatic decision-making tool for banks offering microloans. It mitigates dangers by detecting high-risk personnel and transactions. Integrating data analytics with robust GRC frameworks enables financial institutions to uphold integrity through proactive fraud monitoring and targeted preventive interventions tailored to identified risk profiles. The study offers an integrated technological and organizational approach to protecting microlending activities.
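A gradient-boosted risk-scoring step of the kind the abstract describes might look roughly like the sketch below (it assumes the xgboost package is installed; the features, labels, and tier cut-offs are synthetic stand-ins, not the study's employee data):

```python
import numpy as np
from xgboost import XGBClassifier  # assumes the xgboost package is installed

rng = np.random.default_rng(7)
n = 1_000

# Synthetic employee features loosely inspired by fraud-triangle "pressure" indicators
# (e.g. workload, debt ratio, sales-target shortfall); all values are invented.
X = rng.normal(size=(n, 3))
# Synthetic labels: fraud likelihood rises with a combined pressure signal.
pressure = 1.5 * X[:, 0] + 1.0 * X[:, 1] - 0.5 * X[:, 2]
y = (rng.random(n) < 0.1 / (1 + np.exp(-pressure))).astype(int)

model = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1)
model.fit(X, y)

# Score employees and bucket them into risk tiers for targeted review.
scores = model.predict_proba(X)[:, 1]
tiers = np.digitize(scores, bins=[0.02, 0.05, 0.10])  # illustrative cut-offs
print("Employees per risk tier (low to high):", np.bincount(tiers, minlength=4))
```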
Estilos ABNT, Harvard, Vancouver, APA, etc.
42

Park, Jinkyung, Ramanathan Arunachalam, Vincent Silenzio e Vivek K. Singh. "Fairness in Mobile Phone–Based Mental Health Assessment Algorithms: Exploratory Study". JMIR Formative Research 6, n.º 6 (14 de junho de 2022): e34366. http://dx.doi.org/10.2196/34366.

Texto completo da fonte
Resumo:
Background: Approximately 1 in 5 American adults experience mental illness every year. Thus, mobile phone–based mental health prediction apps that use phone data and artificial intelligence techniques for mental health assessment have become increasingly important and are being rapidly developed. At the same time, multiple artificial intelligence–related technologies (eg, face recognition and search results) have recently been reported to be biased regarding age, gender, and race. This study moves this discussion to a new domain: phone-based mental health assessment algorithms. It is important to ensure that such algorithms do not contribute to gender disparities through biased predictions across gender groups. Objective: This research aimed to analyze the susceptibility of multiple commonly used machine learning approaches for gender bias in mobile mental health assessment and explore the use of an algorithmic disparate impact remover (DIR) approach to reduce bias levels while maintaining high accuracy. Methods: First, we performed preprocessing and model training using the data set (N=55) obtained from a previous study. Accuracy levels and differences in accuracy across genders were computed using 5 different machine learning models. We selected the random forest model, which yielded the highest accuracy, for a more detailed audit and computed multiple metrics that are commonly used for fairness in the machine learning literature. Finally, we applied the DIR approach to reduce bias in the mental health assessment algorithm. Results: The highest observed accuracy for the mental health assessment was 78.57%. Although this accuracy level raises optimism, the audit based on gender revealed that the performance of the algorithm was statistically significantly different between the male and female groups (eg, difference in accuracy across genders was 15.85%; P<.001). Similar trends were obtained for other fairness metrics. This disparity in performance was found to reduce significantly after the application of the DIR approach by adapting the data used for modeling (eg, the difference in accuracy across genders was 1.66%, and the reduction is statistically significant with P<.001). Conclusions: This study grounds the need for algorithmic auditing in phone-based mental health assessment algorithms and the use of gender as a protected attribute to study fairness in such settings. Such audits and remedial steps are the building blocks for the widespread adoption of fair and accurate mental health assessment algorithms in the future.
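The gender audit described in the Methods and Results can be mimicked on synthetic data with a short script (the features, labels, and group effect below are invented; only the pattern of comparing per-gender accuracy for a random forest follows the study):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400

# Synthetic phone-sensing features and a binary mental-health label (illustrative only).
gender = rng.integers(0, 2, size=n)                  # 0 = male, 1 = female
X = rng.normal(size=(n, 5)) + gender[:, None] * 0.8  # group-dependent feature shift
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, gender, test_size=0.3, random_state=0
)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)

# Audit step: compare accuracy across the protected attribute.
for g, name in [(0, "male"), (1, "female")]:
    mask = g_te == g
    acc = (pred[mask] == y_te[mask]).mean()
    print(f"Accuracy ({name}): {acc:.2%}")
```

The DIR step reported in the paper would then repair the feature distributions before retraining; it is omitted here for brevity.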
Estilos ABNT, Harvard, Vancouver, APA, etc.
43

Joseph, Sunday Abayomi, Titilayo Modupe Kolade, Onyinye Obioha Val, Olubukola Omolara Adebiyi, Olumide Samuel Ogungbemi e Oluwaseun Oladeji Olaniyi. "AI-Powered Information Governance: Balancing Automation and Human Oversight for Optimal Organization Productivity". Asian Journal of Research in Computer Science 17, n.º 10 (19 de outubro de 2024): 110–31. http://dx.doi.org/10.9734/ajrcos/2024/v17i10513.

Texto completo da fonte
Resumo:
This study employs a mixed-methods approach to examine the optimal balance between AI-powered automation and human oversight in information governance frameworks, aiming to enhance organizational productivity, efficiency, and compliance. Quantitative data collected from 384 respondents were analyzed using Pearson correlation, regression models, and Structural Equation Modeling (SEM). The results reveal strong positive correlations between AI automation levels and both organization size (r = 0.55, p < .01) and AI adoption duration (r = 0.62, p < .01). Regression analysis indicates that higher levels of AI automation significantly improve error reduction (β = 1.12, p < .001) and compliance (β = 1.05, p < .001), especially in larger organizations with longer AI adoption periods. SEM findings highlight that human oversight positively impacts error reduction (β = 0.65, p < .001) and compliance improvement (β = 0.72, p < .001), and the interaction between human oversight and AI automation further enhances these outcomes (error reduction: β = 0.32, p < .001; compliance improvement: β = 0.35, p < .001). The qualitative analysis, involving thematic extraction from industry reports, reveals ethical challenges such as data quality issues, algorithmic bias, and privacy concerns. Hence, it is necessary to integrate human oversight to ensure ethical standards and build stakeholder trust in AI-driven systems. The study concludes with practical recommendations for organizations: establishing transparent AI governance frameworks, investing in continuous training for employees, and regularly auditing AI processes to mitigate risks. By addressing both the technological and ethical dimensions, organizations can implement AI-powered information governance that not only boosts productivity and efficiency but also ensures compliance and ethical integrity.
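The interaction effect reported in the SEM results can be illustrated with an ordinary least squares model on synthetic data (the variable names, coefficients, and sample size below are assumptions chosen only to mirror the abstract's setup, not the study's data):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 384  # mirrors the survey size; the data itself is synthetic

df = pd.DataFrame({
    "automation": rng.uniform(0, 1, n),  # level of AI automation
    "oversight": rng.uniform(0, 1, n),   # level of human oversight
})
# Outcome with main effects and a positive interaction, plus noise (illustrative).
df["error_reduction"] = (
    1.1 * df["automation"] + 0.65 * df["oversight"]
    + 0.3 * df["automation"] * df["oversight"]
    + rng.normal(scale=0.2, size=n)
)

# The * operator in the formula adds both main effects and their interaction term.
model = smf.ols("error_reduction ~ automation * oversight", data=df).fit()
print(model.params)   # includes the automation:oversight interaction coefficient
print(model.pvalues)
```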
Estilos ABNT, Harvard, Vancouver, APA, etc.
44

Carvalho Júnior, César Valentim de Oliveira, Edgard Cornacchione, Armando Freitas da Rocha e Fábio Theoto Rocha. "Cognitive brain mapping of auditors and accountants in going concern judgments". Revista Contabilidade & Finanças 28, n.º 73 (abril de 2017): 132–47. http://dx.doi.org/10.1590/1808-057x201703430.

Texto completo da fonte
Resumo:
This study aims to explain the extent to which brain mapping patterns follow behavioral patterns of auditors’ and accountants’ judgments when assessing evidence for decisions involving going concern. This multidisciplinary research involved investigating the relation between the theory of belief revision, neuroscience, and neuroaccounting with a sample of auditors and accountants. We developed a randomized controlled trial study with 12 auditors and 13 accountants. Auditors and accountants presented similar judgments about going concern, especially demonstrating greater sensitivity to negative evidence. Despite similar judgments, results showed diverging brain processing patterns between groups, as distinct reasoning was used to reach going concern estimates. During the decision process, auditors presented homogeneous brain processing patterns, while accountants evidenced conflicts and greater cognitive effort. For both groups, the occurrence of maximization (minimization) of judgments is observed in brain areas associated with identification of needs and motivations linked to individuals’ relations with their social group. This was strengthened by the lack of significant differences between the regression maps of auditors and accountants, leading to the interpretation of the groups’ findings as homogeneous brain behavior. Despite familiarity with the executed task and knowledge of auditing standards, as a result of the greater use of algorithmic reasoning the auditors’ judgments were similar to those of accountants. On the other hand, the accountants’ greater cognitive effort, due to experiencing greater conflict in the decision-making process, made them use more quantic brain processing abilities, which are responsible for conscious reasoning. This was observed in the maximizations (minimizations) of the estimates in brain areas related to concerns with the judgments’ social repercussions, which culminated in some degree of “conservatism” in their decisions. Furthermore, these findings reveal another opportunity to discuss the assumption of the brain as the original accounting institution.
Estilos ABNT, Harvard, Vancouver, APA, etc.
45

Joseph, Jisha, Asha George e Mohamed Shameem P. "Enhanced Data Duplication and Regeneration Scheme for Cloud Storage Using Seed-block Algorithm with Public Auditing". Journal of Advanced Research in Dynamical and Control Systems 11, n.º 0009-SPECIAL ISSUE (25 de setembro de 2019): 661–66. http://dx.doi.org/10.5373/jardcs/v11/20192619.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
46

Senoussi, Mehdi, e Laura Dugué. "La vision : un modèle d’étude de la cognition". Intellectica. Revue de l'Association pour la Recherche Cognitive 72, n.º 1 (2020): 275–99. http://dx.doi.org/10.3406/intel.2020.1957.

Texto completo da fonte
Resumo:
Vision: a Model to Study Cognition. Our senses – vision, audition, touch, taste and smell – constantly receive a large amount of information. This information is processed and used to guide our actions. The cognitive sciences study mental abilities through different disciplines, e.g., linguistics, neuropsychology, neuroscience, or modelling. Each discipline considers mental phenomena and their physical substrate, the nervous system, as a tool to process information in order to guide behavior adaptively (Collins, Andler, & Tallon-Baudry, 2018). Cognitive functions are a collection of processing systems serving different goals, and whose interactions are key to the complexity of cognition. Studying cognition often implies operationalizing each of these functions separately. For example, memory allows us to store and reuse information, and attention allows us to select relevant information for the task at hand and to facilitate its processing. To characterize the processes of specific cognitive functions, it is thus necessary to provide the studied subject – here, human and non-human primates – with information to be processed through different sensory modalities. In this essay, we concentrate on vision as a unique model to study cognition through different fields of cognitive science, from cognitive psychology to neuroscience, also briefly mentioning modelling and neuropsychology. Our objective is neither to give an exhaustive description of the visual system nor to compare vision in detail with other sensory modalities, but to argue that the accumulation of evidence on the visual system, as well as its characteristic perceptual, algorithmic and physiological organization, makes it a particularly rich model for studying cognitive functions. After a brief presentation of some properties of vision, we will illustrate our argument by focusing on a specific cognitive function: attention, and in particular its study in cognitive psychology and neuroscience. We will discuss how our knowledge of vision has allowed us to understand the behavioral and neuronal mechanisms underlying attentional selection and facilitation of information. Finally, we will conclude that sensory systems can be used as models to study cognition in different fields of cognitive science.
Estilos ABNT, Harvard, Vancouver, APA, etc.
47

Hacohen, Uri Y. "User-Based Algorithmic Auditing". SSRN Electronic Journal, 2023. http://dx.doi.org/10.2139/ssrn.4540163.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
48

O'Neil, Cathy, Holli Sargeant e Jacob Appel. "Explainable Fairness in Regulatory Algorithmic Auditing". SSRN Electronic Journal, 2024. http://dx.doi.org/10.2139/ssrn.4756637.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
49

"Exploring the Role of Media Scrutiny in Algorithmic Auditing". Academic Journal of Humanities & Social Sciences 6, n.º 2 (2023). http://dx.doi.org/10.25236/ajhss.2023.060220.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
50

Heath, Marie K., Daniel G. Krutka e Benjamin Gleason. "“See results anyway”: auditing social media as educational technology". Information and Learning Sciences, 10 de julho de 2024. http://dx.doi.org/10.1108/ils-12-2023-0205.

Texto completo da fonte
Resumo:
Purpose: This paper aims to consider the role of social media platforms as educational technologies given growing evidence of harms to democracy, society and individuals, particularly through logics of efficiency, racism, misogyny and surveillance inextricably designed into the architectural and algorithmic bones of social media. The paper aims to uncover the downsides and drawbacks of for-profit social media, as well as consider the discriminatory design embedded within its blueprints. Design/methodology/approach: The authors used a technological audit, framed through the lenses of technoskepticism and discriminatory design, to consider the unintended downsides and consequences of Twitter and Instagram. Findings: The authors provide evidence from a variety of sources to demonstrate that Instagram and Twitter’s intersection of technological design, systemic oppression, platform capitalism and algorithmic manipulation causes material harm to marginalized people and youth. Research limitations/implications: The authors contend that it is a conflict of professional ethics to treat social media as an educational technology that should be used by youth in educational settings. Thus, they suggest that future scholarship focus more on addressing methods of teaching about social media rather than teaching with social media. Practical implications: The paper concludes with recommendations for educators who might work alongside young people to learn about social media while taking informed social actions for more just technological futures. Originality/value: This paper fulfills an identified need to challenge the direction of the field of social media and education research. It is of use to education scholars, practitioners and policy makers.
Estilos ABNT, Harvard, Vancouver, APA, etc.