Academic literature on the topic 'Fairness, accountability, transparency, trust and ethics of computer systems'


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Fairness, accountability, transparency, trust and ethics of computer systems.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Fairness, accountability, transparency, trust and ethics of computer systems"

1. Kuberkar, Sachin, Tarun Kumar Singhal, and Shikha Singh. "Fate of AI for Smart City Services in India." International Journal of Electronic Government Research 18, no. 2 (April 2022): 1–21. http://dx.doi.org/10.4018/ijegr.298216.

Abstract:
With the rollout of the smart city initiative in India, this study explores potential risks and opportunities in adopting artificial intelligence (AI) for citizen services. The study deploys an expert-interview technique, and the data collected from various sources are analyzed qualitatively. It was found that AI implementation needs a critical examination of various socio-technological factors to avoid any undesirable impacts on citizens. Fairness, accountability, transparency, and ethics (FATE) play an important role during the design and execution of AI-based systems. The study provides smart city managers, citizen groups, and policymakers with vital insights into the implications of AI for delivering the promised smart city experience. It has social implications in terms of ensuring that proper guidelines are developed for using AI technology in citizen services, thereby bridging the ever-critical trust gap between citizens and city administration.
2. Quinn, Regina Ammicht. "Artificial intelligence and the role of ethics." Statistical Journal of the IAOS 37, no. 1 (March 22, 2021): 75–77. http://dx.doi.org/10.3233/sji-210791.

Abstract:
An ethical approach to AI does not function as a bicycle brake on an intercontinental airplane. Ethics does not put insufficient brakes on progress. It does, however, ask how principles and values that are important for a democratic society can be translated into a digital democratic society. Beyond discussions of transparency, accountability, explainability, fairness and trustworthiness, this text focuses on two major issues: representation gaps, where minorities and a majority (women) are under- or misrepresented in data; and data silhouettes, where the body, the self and human life seem to be deciphered by data alone. Ethical reasoning thus insists that the non-quantifiable areas of human life are as important as any quantifiable aspects. An extensive quantification of the social, the political and the individual person must be continuously examined for its effects. Good regulation is not an obstacle to research and business but is necessary to create trust in AI systems.
3. Bragg, Danielle, Naomi Caselli, Julie A. Hochgesang, Matt Huenerfauth, Leah Katz-Hernandez, Oscar Koller, Raja Kushalnagar, Christian Vogler, and Richard E. Ladner. "The FATE Landscape of Sign Language AI Datasets." ACM Transactions on Accessible Computing 14, no. 2 (July 2021): 1–45. http://dx.doi.org/10.1145/3436996.

Abstract:
Sign language datasets are essential to developing many sign language technologies. In particular, datasets are required for training artificial intelligence (AI) and machine learning (ML) systems. Though the idea of using AI/ML for sign languages is not new, technology has now advanced to a point where developing such sign language technologies is becoming increasingly tractable. This critical juncture provides an opportunity to be thoughtful about an array of Fairness, Accountability, Transparency, and Ethics (FATE) considerations. Sign language datasets typically contain recordings of people signing, which is highly personal. The rights and responsibilities of the parties involved in data collection and storage are also complex and involve individual data contributors, data collectors or owners, and data users who may interact through a variety of exchange and access mechanisms. Deaf community members (and signers, more generally) are also central stakeholders in any end applications of sign language data. The centrality of sign language to deaf culture identity, coupled with a history of oppression, makes usage by technologists particularly sensitive. This piece presents many of these issues that characterize working with sign language AI datasets, based on the authors’ experiences living, working, and studying in this space.
4. Li, Bo, Peng Qi, Bo Liu, Shuai Di, Jingen Liu, Jiquan Pei, Jinfeng Yi, and Bowen Zhou. "Trustworthy AI: From Principles to Practices." ACM Computing Surveys, August 18, 2022. http://dx.doi.org/10.1145/3555803.

Abstract:
The rapid development of Artificial Intelligence (AI) technology has enabled the deployment of various systems based on it. However, many current AI systems are found to be vulnerable to imperceptible attacks, biased against underrepresented groups, and lacking in user privacy protection. These shortcomings degrade the user experience and erode people’s trust in all AI systems. In this review, we provide AI practitioners with a comprehensive guide for building trustworthy AI systems. We first introduce the theoretical framework of important aspects of AI trustworthiness, including robustness, generalization, explainability, transparency, reproducibility, fairness, privacy preservation, and accountability. To unify currently available but fragmented approaches toward trustworthy AI, we organize them in a systematic approach that considers the entire lifecycle of AI systems, ranging from data acquisition to model development, to system development and deployment, and finally to continuous monitoring and governance. In this framework, we offer concrete action items for practitioners and societal stakeholders (e.g., researchers, engineers, and regulators) to improve AI trustworthiness. Finally, we identify key opportunities and challenges for the future development of trustworthy AI systems, arguing for the need for a paradigm shift toward comprehensively trustworthy AI systems.
5. Sapienza, Salvatore, and Anton Vedder. "Principle-based recommendations for big data and machine learning in food safety: the P-SAFETY model." AI & SOCIETY, October 11, 2021. http://dx.doi.org/10.1007/s00146-021-01282-1.

Abstract:
Big data and machine learning techniques are reshaping the way in which food safety risk assessment is conducted. The ongoing ‘datafication’ of food safety risk assessment activities and the progressive deployment of probabilistic models in their practices require a discussion of the advantages and disadvantages of these advances. In particular, the low level of trust in the EU food safety risk assessment framework highlighted in 2019 by an EU-funded survey could be exacerbated by novel methods of analysis. The variety of processed data raises unique questions regarding the interplay of multiple regulatory systems alongside food safety legislation. Provisions aiming to preserve the confidentiality of data and protect personal information are juxtaposed with norms prescribing the public disclosure of scientific information. This research is intended to provide guidance for the data governance and data ownership issues that unfold from the ongoing transformation of the technical and legal domains of food safety risk assessment. Following the reconstruction of technological advances in data collection and analysis and the description of recent amendments to food safety legislation, emerging concerns are discussed in light of the individual, collective and social implications of the deployment of cutting-edge big data collection and analysis techniques. Then, a set of principle-based recommendations is proposed by adapting high-level principles enshrined in institutional documents about artificial intelligence to the realm of food safety risk assessment. The proposed set of recommendations adopts safety, accountability, fairness, explainability, and transparency as core principles (SAFETY), with privacy and data protection used as a meta-principle.
6. Jethani, Suneel, and Robbie Fordyce. "Darkness, Datafication, and Provenance as an Illuminating Methodology." M/C Journal 24, no. 2 (April 27, 2021). http://dx.doi.org/10.5204/mcj.2758.

Abstract:
Data are generated and employed for many ends, including governing societies, managing organisations, leveraging profit, and regulating places. In all these cases, data are key inputs into systems that paradoxically are implemented in the name of making societies more secure, safe, competitive, productive, efficient, transparent and accountable, yet do so through processes that monitor, discipline, repress, coerce, and exploit people. (Kitchin, 165)

Introduction

Provenance refers to the place of origin or earliest known history of a thing. It refers to the custodial history of objects. It is a term that is commonly used in the art world but has also come into the language of other disciplines such as computer science. It has also been applied in reference to the transactional nature of objects in supply chains and circular economies. In an interview with Scotland’s Institute for Public Policy Research, Adam Greenfield suggests that provenance has a role to play in the “establishment of reliability”: if a “transaction or artifact has a specified provenance, then that assertion can be tested and verified to the satisfaction of all parties” (Lawrence). Recent debates on the unrecognised effects of digital media have convincingly argued that data is fully embroiled within capitalism, but it is necessary to remember that data is more than just a transactable commodity. One challenge in bringing processes of datafication into critical light is how we understand what happens to data from its point of acquisition to the point where it becomes instrumental in the production of outcomes that are of ethical concern. All data gather their meaning through relationality, whether acting as a representation of an exterior world or representing relations between other data points. Data objectifies relations, and despite any higher-order complexities, at its core, data is involved in factualising a relation into a binary. Assumptions like these about data shape reasoning, decision-making and evidence-based practice in private, personal and economic contexts. If processes of datafication are to be better understood, then we need to seek out conceptual frameworks that are adequate to the way that data is used and understood by its users. Deborah Lupton suggests that often we give data “other vital capacities because they are about human life itself, have implications for human life opportunities and livelihoods, [and] can have recursive effects on human lives (shaping action and concepts of embodiment ... selfhood [and subjectivity]) and generate economic value”. But when data are afforded such capacities, the analysis of their politics also calls for us to “consider context” and make “the labour [of datafication] visible” (D’Ignazio and Klein). For Jenny L. Davis, getting beyond simply thinking about what data affords involves bringing to light how technologies continually and dynamically request, demand, encourage, discourage, and refuse certain operations and interpretations. It is in this re-orientation of the question from what to how that “practical analytical tool[s]” (Davis) can be found. Davis writes:

requests and demands are bids placed by technological objects, on user-subjects. Encourage, discourage and refuse are the ways technologies respond to bids user-subjects place upon them. Allow pertains equally to bids from technological objects and the object’s response to user-subjects. (Davis)
Building on Lupton, Davis, and D’Ignazio and Klein, we see three principles that we consider crucial for work on data, darkness and light: first, data is not simply a technological object that exists within sociotechnical systems without having undergone any priming or processing, so as a consequence the data-collecting entity imposes standards and ways of imagining data before it comes into contact with user-subjects; second, data is not neutral and does not possess qualities that make it equivalent to the things that it comes to represent; third, data is partial, situated, and contingent on technical processes, but the outcomes of its use afford it properties beyond those that are purely informational. This article builds from these principles and traces a framework for investigating the complications arising when data moves from one context to another. We draw on “data provenance” as it is applied in the computing and information sciences, where it is used to query the location and accuracy of data in databases. In developing “data provenance”, we adapt provenance from an approach that solely focuses on technical infrastructures and material processes that move data from one place to another and turn to the sociotechnical, institutional, and discursive forces that bring about data acquisition, sharing, interpretation, and re-use. As data passes through open, opaque, and darkened spaces within sociotechnical systems, we argue that provenance can shed light on gaps and overlaps in technical, legal, ethical, and ideological forms of data governance. Whether data becomes exclusive by moving from light to dark (as has happened with the removal of many pages and links from Facebook around the Australian news revenue-sharing bill), or is publicised by shifting from dark to light (such as the Australian government releasing investigative journalist Andie Fox’s welfare history to the press), or even recontextualised from one dark space to another (as with genetic data shifting from medical to legal contexts, or the theft of personal financial data), there is still a process of transmission here that we can assess and critique through provenance. These different modalities, which guide data acquisition, sharing, interpretation, and re-use, cascade and influence different elements and apparatuses within data-driven sociotechnical systems to different extents depending on context. Attempts to illuminate and make sense of these complex forces, we argue, expose data-driven practices as inherently political in terms of whose interests they serve.

Provenance in Darkness and in Light

When processes of data capture, sharing, interpretation, and re-use are obscured, it impacts on the extent to which we might retrospectively examine cases where malpractice in responsible data custodianship and stewardship has occurred, because it makes it difficult to see how things have been rendered real and knowable, changed over time, had causality ascribed to them, and to what degree of confidence a decision has been made based on a given dataset. To borrow from this issue’s concerns, the paradigm of dark spaces covers a range of different valences on the idea of private, secret, or exclusive contexts. We can parallel it with the idea of ‘light’ spaces, which equally hold a range of different concepts about what is open, public, or accessible.
For instance, in the use of social data garnered from online platforms, the practices of academic researchers and analysts working in the private sector often fall within a grey zone when it comes to consent and transparency. Here the binary notion of public and private is complicated by the passage of data from light to dark (and back to light). Writing in a different context, Michael Warner complicates the notion of publicness. He observes that the idea of something being public is in and of itself always sectioned off, divorced from being fully generalisable, and it is “just whatever people in a given context think it is” (11). Michael Hardt and Antonio Negri argue that publicness is already shadowed by an idea of state ownership, leaving us in a situation where public and private already both sit on the same side of the propertied/commons divide as if the “only alternative to the private is the public, that is, what is managed and regulated by states and other governmental authorities” (vii). The same can be said about the way data is conceived as a public good or common asset. These ideas of light and dark are useful categorisations for deliberately moving past the tensions that arise when trying to qualify different subspecies of privacy and openness. The problem with specific linguistic dyads of private vs. public, or open vs. closed, and so on, is that they are embedded within legal, moral, technical, economic, or rhetorical distinctions that already involve normative judgements on whether such categories are appropriate or valid. Data may be located in a dark space for legal reasons that fall under the legal domain of ‘private’ or it may be dark because it has been stolen. It may simply be inaccessible, encrypted away behind a lost password on a forgotten external drive. Equally, there are distinctions around lightness that can be glossed – the openness of Open Data (see: theodi.org) is of an entirely separate category to the AACS encryption key, which was illegally but enthusiastically shared across the internet in 2007 to the point where it is now accessible on Wikipedia. The language of light and dark spaces allows us to cut across these distinctions and discuss in deliberately loose terms the degree to which something is accessed, with any normative judgments reserved for the cases themselves. Data provenance, in this sense, can be used as a methodology to critique the way that data is recontextualised from light to dark, dark to light, and even within these distinctions. Data provenance critiques the way that data is presented as if it were “there for the taking”. This also suggests that when data is used for some or another secondary purpose – generally for value creation – some form of closure or darkening is to be expected. Data in the public domain is more than simply a specific informational thing: there is always context, and this contextual specificity, we argue, extends far beyond anything that can be captured in a metadata schema or a licensing model. Even the transfer of data from one open, public, or light context to another will evoke new degrees of openness and luminosity that should not be assumed to be straightforward. And with this a new set of relations between data-user-subjects and stewards emerges. 
As people go about their everyday lives making use of increasingly digitised services, the traces they leave behind generate a growing amount of personal information that moves between public and private contexts. Data-motile processes are thus constantly occurring behind the scenes, in darkness, where data comes into the view, or possession, of third parties without obvious mechanisms of consent, disclosure, or justification. Given that there are “many hands” (D’Ignazio and Klein) involved in making data portable between light and dark spaces, equally there can be diversity in the approaches taken to generate critical literacies of these relations. There are two complexities that we argue are important for considering the ethics of data motility from light to dark, and this differs from the concerns that we might have when we think about other illuminating tactics such as open data publishing, freedom-of-information requests, or when data is anonymously leaked in the public interest. The first is that the terms of ethics must be communicable to individuals and groups whose data literacy may be low, effectively non-existent, or not oriented around the objective of upholding or generating data-luminosity as an element of a wider, more general form of responsible data stewardship. Historically, a productive approach to data literacy has been finding appropriate metaphors from adjacent fields that can help add depth, by way of analogy, to understanding data motility. Here we return to our earlier assertion that data is more than simply a transactable commodity. Consider the notion of “giving” and “taking” in the context of darkness and light. The analogy of giving and taking is deeply embedded in the notion of data acquisition and sharing by virtue of the etymology of the word data itself: in Latin, “things having been given”; likewise in French, données, a natural gift, perhaps one that is given to those that attempt capture for the purposes of empiricism; representation in quantitative form is a quality that is given to phenomena being brought into the light. However, in the contemporary parlance of “analytics”, data is “taken” in the form of recording, measuring, and tracking. Data is considered to be something valuable enough to give or take because of its capacity to stand in for real things. The empiricist’s preferred method is to take rather than to accept what is given (Kitchin, 2); the data-capitalist’s is to incentivise the act of giving or to take what is already given (or yet to be taken). Because data-motile processes are not simply passive forms of reading what is contained within a dataset, the materiality and subjectivity of data extraction and interpretation is something that should not be ignored. These processes represent the recontextualisation of data from one space to another and are expressed in the landmark case of Cambridge Analytica, where a private research company extracted data from Facebook and used it to engage in psychometric analysis of unknowing users.

Table 1: Mechanisms of Data Capture

Historical: Information created, recorded, or gathered about people or things directly from the source or a delegate but accessed for secondary purposes.

Observational: Represents patterns and realities of everyday life, collected by subjects by their own choice and with some degree of discretion over the methods. Third parties access this data through reciprocal arrangement with the subject (e.g., in exchange for providing a digital service such as online shopping, banking, healthcare, or social networking).

Purposeful: Data gathered with a specific purpose in mind and collected with the objective to manipulate its analysis to achieve certain ends.

Integrative: Places less emphasis on specific data types but rather looks towards the social and cultural factors that afford access to and facilitate the integration and linkage of disparate datasets.

There are ethical challenges associated with data that has been sourced from pre-existing sets or that has been extracted from websites and online platforms through scraping and then enriched through cleaning, annotation, de-identification, aggregation, or linking to other data sources (tab. 1). As a way to address this challenge, our suggestion of “data provenance” can be defined as where a data point comes from, how it came into being, and how it became valuable for some or another purpose. In developing this idea, we borrow from both the computational and biological sciences (Buneman et al.), where provenance, as a form of qualitative inquiry into data-motile processes, centres around understanding the origin of a data point as part of a broader, almost forensic, analysis of quality and error-potential in datasets. Provenance is an evaluation of a priori computational inputs and outputs from the results of database queries and audits. Provenance can also be applied to other contexts where data passes through sociotechnical systems, such as behavioural analytics, targeted advertising, machine learning, and algorithmic decision-making. Conventionally, data provenance is based on understanding where data has come from and why it was collected. Both these questions are concerned with the evaluation of the nature of a data point within the wider context of a database that is itself situated within a larger sociotechnical system where the data is made available for use. In its conventional sense, provenance is a means of ensuring that a data point is maintained as a single source of truth (Buneman, 89); by way of a reproducible mechanism that tracks its path through a set of technical processes, it affords an assessment of how reliable a system’s output might be by sheer virtue of the ability to retrace the steps from point A to B. “Where” and “why” questions are illuminating because they offer an ends-and-means view of the relation between the origins and ultimate uses of a given data point or set. Provenance is interesting when studying data luminosity because means and ends have much to tell us about the origins and uses of data in ways that gesture towards a more accurate and structured research agenda for data ethics, one that takes the emphasis away from individual moral patients and reorients it towards practices that occur within information management environments. Provenance offers researchers seeking to study data-driven practices a heuristic similar to a journalist’s line of questioning: who, what, when, where, why, and how? This last question of how is something that can be incorporated into conventional models of provenance to make them useful in data ethics. The question of how data comes into being extends questions of power, legality, literacy, permission-seeking, and harm in an entangled way and notes how these factors shape the nature of personal data as it moves between contexts.
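For readers coming from computing, the “where”, “why”, and “how” questions above map naturally onto a record structure that travels with a data point. The following sketch is purely illustrative, not an implementation from Buneman et al. or any provenance standard; every class, field, and value name here is our own invention.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any, List

@dataclass
class ProvenanceEvent:
    """One step in a data point's custodial history: who did what, where, and why."""
    actor: str    # who collected, transformed, or accessed the data
    action: str   # e.g. "collected", "cleaned", "linked", "shared"
    context: str  # where: the institutional or technical setting
    purpose: str  # why: the stated end the action serves
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

@dataclass
class ProvenanceRecord:
    """A data point plus the accumulated chain of events that produced it."""
    value: Any
    events: List[ProvenanceEvent] = field(default_factory=list)

    def record(self, actor: str, action: str, context: str, purpose: str) -> None:
        """Append a new event; the chain cascades along with each transaction."""
        self.events.append(ProvenanceEvent(actor, action, context, purpose))

    def retrace(self) -> List[str]:
        """Retrace the steps from origin to present use (the point-A-to-B audit)."""
        return [f"{e.timestamp:%Y-%m-%d} | {e.actor} {e.action} in {e.context} for {e.purpose}"
                for e in self.events]

# A location trace moving from a light context into a darker one.
point = ProvenanceRecord(value={"lat": -37.81, "lon": 144.96})
point.record("transit app", "collected", "public transport platform", "journey planning")
point.record("analytics vendor", "linked", "ad-tech exchange", "behavioural profiling")
print("\n".join(point.retrace()))
```

Even this toy chain makes recontextualisation visible: the second event records the same data point moving into a different context and purpose, which is exactly the kind of transition the provenance questions are meant to expose.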
Forms of provenance accumulate from transaction to transaction, cascading along as a dataset ‘picks up’ the types of provenance that have led to its creation. This may involve multiple forms of overlapping provenance – methodological and epistemological, legal and illegal – which modulate different elements and apparatuses. Provenance, we argue, is an important methodological consideration for workers in the humanities and social sciences. Provenance provides a set of shared questions on which models of transparency, accountability, and trust may be established. It points us towards tactics that might help data-subjects understand privacy in a contextual manner (Nissenbaum) and even establish practices of obfuscation and “informational self-defence” against regimes of datafication (Brunton and Nissenbaum). Here provenance is not just a declaration of what the means and ends of data capture, sharing, linkage, and analysis are. We sketch the outlines of a provenance model in table 2 below.

Table 2: Forms of Data Provenance

What?
Metaphorical frame: The epistemological structure of a database determines the accuracy of subsequent decisions. Data must be consistent.
Dark: What data is asked of a person beyond what is strictly needed for service delivery.
Light: Data that is collected for a specific stated purpose with informed consent from the data-subject. How does the decision about what to collect disrupt existing polities and communities? What demands for conformity does the database make of its subjects?

Where?
Metaphorical frame: The contents of a database are important for making informed decisions. Data must be represented.
Dark: The parameters of inclusion/exclusion that create unjust risks or costs to people because of their inclusion or exclusion in a dataset.
Light: The parameters of inclusion or exclusion that afford individuals representation or acknowledgement by being included or excluded from a dataset. How are populations recruited into a dataset? What divides exist that systematically exclude individuals?

Who?
Metaphorical frame: Who has access to data, and how privacy is framed, is important for the security of data-subjects. Data access is political.
Dark: Access to the data by parties not disclosed to the data-subject.
Light: Who has collected the data and who has or will access it? How is the data made available to those beyond the data-subjects?

How?
Metaphorical frame: Data is created with a purpose and is never neutral. Data is instrumental.
Dark: How the data is used, to what ends, discursively, practically, instrumentally. Is it a private record, a source of value creation, the subject of extortion or blackmail?
Light: How the data was intended to be used at the time that it was collected.

Why?
Metaphorical frame: Data is created by people who are shaped by ideological factors. Data has potential.
Dark: The political rationality that shapes data governance with regard to technological innovation.
Light: The trade-offs that are made known to individuals when they contribute data into sociotechnical systems over which they have limited control.

Conclusion

As an illuminating methodology, provenance offers a specific line of questioning for practices that take information through darkness and light.
The emphasis that it places on a narrative for data assets themselves (asking what, when, who, how, and why) offers a mechanism for traceability and has potential for application across contexts and cases. It allows us to see data malpractice as something that can be productively generalised and understood as a series of ideologically driven technical events with social and political consequences, without being marred by perceptions of the exceptionality of individual, localised cases of data harm or data violence.

References

Brunton, Finn, and Helen Nissenbaum. "Political and Ethical Perspectives on Data Obfuscation." Privacy, Due Process and the Computational Turn: The Philosophy of Law Meets the Philosophy of Technology. Eds. Mireille Hildebrandt and Katja de Vries. New York: Routledge, 2013. 171-195.

Buneman, Peter, Sanjeev Khanna, and Wang-Chiew Tan. "Data Provenance: Some Basic Issues." International Conference on Foundations of Software Technology and Theoretical Computer Science. Berlin: Springer, 2000.

Davis, Jenny L. How Artifacts Afford: The Power and Politics of Everyday Things. Cambridge: MIT Press, 2020.

D'Ignazio, Catherine, and Lauren F. Klein. Data Feminism. Cambridge: MIT Press, 2020.

Hardt, Michael, and Antonio Negri. Commonwealth. Cambridge: Harvard UP, 2009.

Kitchin, Rob. "Big Data, New Epistemologies and Paradigm Shifts." Big Data & Society 1.1 (2014).

Lawrence, Matthew. "Emerging Technology: An Interview with Adam Greenfield. 'God Forbid That Anyone Stopped to Ask What Harm This Might Do to Us'." Institute for Public Policy Research, 13 Oct. 2017. <https://www.ippr.org/juncture-item/emerging-technology-an-interview-with-adam-greenfield-god-forbid-that-anyone-stopped-to-ask-what-harm-this-might-do-us>.

Lupton, Deborah. "Vital Materialism and the Thing-Power of Lively Digital Data." Social Theory, Health and Education. Eds. Deana Leahy, Katie Fitzpatrick, and Jan Wright. London: Routledge, 2018.

Nissenbaum, Helen F. Privacy in Context: Technology, Policy, and the Integrity of Social Life. Stanford: Stanford Law Books, 2020.

Warner, Michael. "Publics and Counterpublics." Public Culture 14.1 (2002): 49-90.
7. Lee, Ashlin. "In the Shadow of Platforms." M/C Journal 24, no. 2 (April 27, 2021). http://dx.doi.org/10.5204/mcj.2750.

Abstract:
Introduction

This article explores the changing relational quality of “the shadow of hierarchy” in the context of the merging of platforms with infrastructure as the source of that shadow. In governance and regulatory studies, the shadow of hierarchy (or variations thereof) describes the space of influence that hierarchical organisations and infrastructures have (Héritier and Lehmkuhl; Lance et al.). A shift in who or what casts the shadow of hierarchy will necessarily result in changes to the attendant relational values, logics, and (techno)socialities that constitute the shadow, and a new arrangement of shadow that presents new challenges and opportunities. This article reflects on relevant literature to consider two different ways the shadow of hierarchy has qualitatively changed as platforms, rather than infrastructures, come to cast it – an increase in scalability, and new socio-technical arrangements of (non)participation – and the opportunities and challenges therein. The article concludes that more concerted efforts are needed to design the shadow, given a seemingly directionless desire to enact data-driven solutions.

The Shadow of Hierarchy, Infrastructures, and Platforms

The shadow of hierarchy refers to how institutional, infrastructural, and organisational hierarchies create a relational zone of influence over a particular space. This commonly refers to executive decisions and legislation created by nation states, which are cast over private and non-governmental actors (Héritier and Lehmkuhl, 2). Lance et al. (252–53) argue that the shadow of hierarchy is a productive and desirable thing. Exploring the shadow of hierarchy in the context of how geospatial data agencies govern their data, Lance et al. find that the shadow of hierarchy enables the networked governance approaches that agencies adopt. This is because operating in the shadow of institutions provides authority, confers bureaucratic legitimacy and top-down power, and offers financial support. The darkness of the shadow is thus less a moral or ethicopolitical statement (such as that suggested by Fisher and Bolter, who use the idea of darkness to unpack the morality of tourism involving death and human suffering) and instead a relationality: an expression of differing values, logics, and (techno)socialities internal and external to those infrastructures and institutions that cast it (Gehl and McKelvey). The shadow of hierarchy might therefore be thought of as a field of relational influences and power that a social body casts over society, by virtue of a privileged position vis-a-vis society. It modulates society’s “light”: the resources (Bourdieu) and power relationships (Foucault) that run through social life, as parsed through a certain institutional and infrastructural worldview (the thing that blocks the light to create the shadow). In this way the shadow of hierarchy is not a field of absolute blackness that obscures, but instead a gradient of light and dark that creates certain effects. The shadow of hierarchy is now, however, also being cast by decentralised, privately held, and non-hierarchical platforms that are replacing or merging with public infrastructure, creating new social effects. Platforms are digital, socio-technical systems that create relationships between different entities.
They are most commonly built around a relatively fixed core function (such as a social media service like Facebook) that then interacts with a peripheral set of complementors (advertising companies and app developers in the case of social media; Baldwin and Woodard) to create new relationships, forms of value, and other interactions (van Dijck, The Culture of Connectivity). In creating these relationships, platforms become inherently political (Gillespie), shaping relationships and content on the platform (Suzor) and in embodied life (Ajunwa; Eubanks). While platforms are often associated with optional consumer services (such as streaming services like Spotify), they have increasingly come to occupy the place of public infrastructure, and act as a powerful enabler of different socio-technical, economic, and political relationships (van Dijck, Governing Digital Societies). For instance, Plantin et al. argue that platforms have merged with infrastructures, and that once publicly held and funded institutions and essential services now share many characteristics with for-profit, privately held platforms. For example, Australia has had a long history of outsourcing employment services (Webster and Harding), and nearly privatised its entire visa-processing data infrastructure (Jenkins). Platforms therefore have a greater role in casting the shadow of hierarchy than before. In doing so, they cast a shadow that is qualitatively different, modulated through a different set of relational values and (techno)socialities.

Scalability

A key difference and selling point of platforms is their scalability: they can rapidly and easily up- and down-scale their functionalities in a way that traditional infrastructure cannot (Plantin et al.). The ability to respond “on demand” to infrastructural requirements has made platforms the go-to service delivery option in the neo-liberalised public infrastructure environment (van Dijck, Governing Digital Societies). For instance, service providers like Amazon Web Services or Microsoft Azure provide on-demand computing capacity for many nations’ most valuable services, including their intelligence and security capabilities (Amoore, Cloud Ethics; Konkel). The value of such platforms to government lies in the reduced cost and risk that comes with using rented capabilities, and the enhanced flexibility to increase or decrease their usage as required, without any of the economic sunk costs attached to owning the infrastructure. Scalability is, however, not just about on-demand technical capability, but about how platforms can change the scale of socio-technical relationships and services that are mediated through the platform. This changes the relational quality of the shadow of hierarchy, as activities and services occurring within the shadow are now connected into a larger and rapidly modulating scale. Scalability allows the shadow of hierarchy to extend from those in proximity to institutions to the broader population in general. For example, individual citizens can more easily “reach up” into governmental services and agencies as a part of completing their everyday business, through platforms such as MyGov in Australia (Services Australia). Using a smartphone application, citizens are afforded a more personalised and adaptive experience of the welfare state, as engaging with welfare services is no longer tied to specific “brick-and-mortar” locations, but constantly available through a smartphone app and web portal.
Multiple government services, including healthcare and taxation, are also connected to this platform, allowing users to reach across multiple government service domains to complete their personal business, seeking information and services that would have once required separate communications with different branches of government. The individual’s capacities to engage with the state have therefore upscaled with this change in the shadow, retaining a productivity- and capacity-enhancing quality that is reminiscent of older infrastructures and institutions, as the individual and their lived context are brought closer to the institutions themselves. Scale, however, comes with complications. The fundamental driver for scalability and its adaptive qualities is datafication. This means individuals and organisations are inflecting their operational and relational logics with the logic of datafication: a need to capture all data, at all times (van Dijck, Datafication; Fourcade and Healy). Platforms, especially privately held platforms, benefit significantly from this, as they rely on data to drive and refine their algorithmic tools, and ultimately create actionable intelligence that benefits their operations. Thus, scalability allows platforms to better “reach down” into individual lives and different social domains to fuel their operations. For example, as public transport services become increasingly datafied into mobility-as-a-service (MaaS) systems, ride-sharing and on-demand transportation platforms like Uber and Lyft become incorporated into the public transport ecosystem (Lyons et al.). These platforms capture geospatial, behavioural, and reputational data from users and drivers during their interactions with the platform (Rosenblat and Stark; Attoh et al.). This generates additional value, and profits, for the platform itself, with limited value returned to the user or the broader public it supports outside of the transport service. It also places the platform in a position to gain wider access to the population and their data, by virtue of operating as a part of a public service. In this way the shadow of hierarchy may exacerbate inequity. The (dis)benefits of the shadow of hierarchy become unevenly spread amongst actors within its field, a function of an increased scalability that connects individuals into much broader assemblages of datafication. For Eubanks, this can entrench existing economic and social inequalities by forcing those in need to engage with digitally mediated welfare systems that rely on distant and opaque computational judgements. Local services are subject to increased digital surveillance, a removal of agency from frontline advocates, and algorithmic judgement at scale. More fortunate citizens are also still at risk, with Nardi and Ekbia arguing that many digitally scaled relationships are examples of “heteromation”, whereby platforms convince actors on the platform to labour for free, such as through providing ratings which establish a platform’s reputational economy. Such labour fuels the operation of the platform by exploiting users, who become both a product/resource (as a source of data for third-party advertisers) and a performer of unrewarded digital labour, such as through providing user reviews that help guide a platform’s algorithm(s). Both these examples represent a particularly disconcerting outcome for the shadow of hierarchy, which has its roots in public sector institutions that operate for a common good through shared and publicly held infrastructure.
In shifting towards platforms, especially privately held platforms, value is transmitted to private corporations and not to the public or the commons, as was the case with traditional infrastructure. The public also comes to own the risks attached to platforms if they become tied to public services, placing a further burden on the public if the platform fails, while reaping none of the profit and value generated through datafication. This is a poor bargain at best.

(Non)Participation

Scalability forms the basis for a further predicament: a changing socio-technical dynamic of (non)participation between individuals and services. According to Star (118), infrastructures are defined through their relationships to a given context. These relationships, which often exist as boundary objects between different communities, are “loosely structured in common use, and become tightly bound in particular locations” (Star, 118). While platforms are certainly boundary objects and relationally defined, the affordances of cloud computing have enabled a decoupling from physical location, and the operation of platforms across time and space through distributed digital nodes (smartphones, computers, and other localised hardware) and powerful algorithms that sort and process requests for service. This does not mean location is not important for the cloud (see Amoore, Cloud Geographies), but platforms are less likely to have a physically co-located presence in the same way traditional infrastructures had. Without the same institutional and infrastructural footprint, the modality for participating in and with the shadow of hierarchy that platforms cast becomes qualitatively different and predicated on digital intermediaries. Replacing a physical and human footprint with algorithmically supported and decentralised computing power allows scalability and some efficiency improvements, but it also removes taken-for-granted touchpoints for contestation and recourse. For example, the ride-sharing platform Uber operates globally, and has expressed interest in operating in complement to (and perhaps in competition with) public transport services in some cities (Hall et al.; Conger). Given that Uber would come to operate as a part of the shadow of hierarchy that transport authorities cast over said cities, it would not be unreasonable to expect Uber to be subject to comparable advocacy, adjudication, transparency, and complaint-handling requirements. Unfortunately, it is unclear if this would be the case, with examples suggesting that Uber would use the scalability of its platform to avoid these mechanisms. This is revealed by ongoing legal action launched by concerned Uber drivers in the United Kingdom, who have sought access to the profiling data that Uber uses to manage and monitor its drivers (Sawers). The challenge has relied on transnational law (the European Union’s General Data Protection Regulation), with UK-based drivers lodging claims in Amsterdam to initiate the challenge. Such costly and complex actions are beyond the means of many, but demonstrate how reasonable participation in socio-technical and governance relationships (like contestations) might become limited, depending on how the shadow of hierarchy changes with the incorporation of platforms. Even if legal challenges for transparency are successful, they may not produce meaningful change.
For instance, O’Neil links algorithmic bias to mathematical shortcomings in the variables used to measure the world; to the creation of irrational feedback loops based on incorrect data; and to the use of unsound data analysis techniques. These three factors contribute to inequitable digital metrics like predictive policing algorithms that disproportionately target racial minorities. Large amounts of selective data on minorities create myopic algorithms that direct police to target minorities, creating more selective data that reinforces the spurious model. These biases, however, are persistently inaccessible, and even when visible are often unintelligible to experts (Ananny and Crawford). The visibility of the technical “installed base” that supports institutions and public services is therefore not a panacea, especially when the installed base (un)intentionally obfuscates participation in meaningful engagement like complaints handling. A negative outcome is, however, not inevitable. It is entirely possible to design platforms to allow individual users to scale up and have opportunities for enhanced participation. For instance, the eGovernance and mobile governance literature has explored how citizens engage with state services at scale (Thomas and Streib; Foth et al.), and the open government movement has demonstrated the effectiveness of open data in understanding government operations (Barns; Janssen et al.), although both have their challenges (Chadwick; Dawes). It is not a fantasy to imagine alternative configurations of the shadow of hierarchy that allow more participatory relationships. Open data could facilitate the governance of platforms at scale (Box et al.), where users are enfranchised into a platform by some form of membership right and given access to financial and governance records, in the same way that corporate shareholders are enfranchised, facilitated by the same app that provides a service. This could also be extended to decision making through voting and polling functions. Such a governance form would require radically different legal, business, and institutional structures to create and enforce this arrangement. Delacroix and Lawrence, for instance, suggest that data trusts, where a trustee is assigned legal and fiduciary responsibility to achieve maximum benefit for a specific group’s data, can be used to negotiate legal and governance relationships that meaningfully benefit the users of the trust. Trustees can be instructed to share data only with services whose algorithms are regularly audited for bias and that provide datasets that are accurate representations of their users, for instance avoiding erroneous proxies that disrupt algorithmic models. While these developments are in their infancy, it is not unreasonable to reflect on such endeavours now, as the technologies to achieve them are already in use.
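The self-reinforcing loop that O’Neil describes, selective data begetting more selective data, is the kind of failure such audits would need to catch, and it can be made concrete with a deliberately toy simulation. Nothing below is drawn from O’Neil’s book or any real policing system; the districts, rates, and allocation rule are invented purely for illustration.

```python
import random

random.seed(0)

# Two districts with identical true incident rates; the only asymmetry is
# where patrols are initially sent.
TRUE_RATE = {"district_a": 0.05, "district_b": 0.05}
patrols = {"district_a": 70, "district_b": 30}  # skewed starting allocation

for year in range(1, 6):
    # Incidents are only discovered where patrols look, so discoveries
    # scale with patrol presence rather than with the underlying rate.
    discovered = {d: sum(random.random() < TRUE_RATE[d] for _ in range(n * 10))
                  for d, n in patrols.items()}
    total = sum(discovered.values()) or 1
    # Next year's allocation follows this year's (selective) data,
    # closing the feedback loop.
    patrols = {d: round(100 * c / total) for d, c in discovered.items()}
    print(f"year {year}: discovered={discovered} patrols={patrols}")
```

Although both districts have the same underlying rate, the allocation never corrects towards parity: the model keeps confirming the skew it inherited, which is the “spurious model” reinforcing itself. An audit that inspected only the discovered counts would mistake the bias for evidence.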
Conclusions

There is a persistent myth that data will yield better, faster, more complete results in whatever field it is applied (Lee and Cook; Fourcade and Healy; Mayer-Schönberger and Cukier; Kitchin). This myth has led to data-driven assemblages, including artificial intelligence, platforms, surveillance, and other data-technologies, being deployed throughout social life. The public sector is no exception to this, but the deployment of any technological solution within the traditional institutions of the shadow of hierarchy is fraught with challenges, and often results in failure or unintended consequences (Henman). The complexity of these systems, combined with time, budgetary, and political pressures, can create a contested environment. It is this environment that moulds society’s light and resources to cast the shadow of hierarchy. Relationality within a shadow of hierarchy that reflects the complicated and competing interests of platforms is likely to present a range of unintended social consequences that are inherently emergent, because they are entering into a complex system – society – that is extremely hard to model. The relational qualities of the shadow of hierarchy are therefore now more multidimensional and emergent, and experiences relating to socio-technical features like scale and, as a follow-on, (non)participation are evidence of this. Yet by being emergent, they are also directionless, a product of complex systems rather than designed and strategic intent. This is not an inherently bad thing, but given the potential for data-systems and platforms to have negative or unintended consequences, it is worth considering whether remaining directionless is the best outcome. There are many examples of data-driven systems in healthcare (Obermeyer et al.), welfare (Eubanks; Henman and Marston), and economics (MacKenzie) having unintended and negative social consequences. Appropriately guiding the design and deployment of these systems also represents a growing body of knowledge and practical endeavour (Jirotka et al.; Stilgoe et al.). Armed with the knowledge of these social implications, constructing an appropriate social architecture (Box and Lemon; Box et al.) around the platforms and data systems that form the shadow of hierarchy should be encouraged. This social architecture should account for the affordances and emergent potentials of a complex social, institutional, economic, political, and technical environment, and should assist in guiding the shadow of hierarchy away from egregious challenges and towards meaningful opportunities. To be directionless is an opportunity to take a new direction. The intersection of platforms with public institutions and infrastructures has moulded society’s light into an evolving and emergent shadow of hierarchy over many domains. With the scale of the shadow changing, and shaping participation, who benefits and who loses out in the shadow of hierarchy is also changing. Equipped with insights into this change, we should not hesitate to shape it, creating or preserving relationalities that offer the best outcomes. Defining, understanding, and practically implementing what the “best” outcome(s) are would be a valuable next step in this endeavour, and should prompt considerable discussion. If we wish the shadow of hierarchy to continue to be productive, then finding a social architecture to shape the emergence and directionlessness of socio-technical systems like platforms is an important step in the continued evolution of the shadow of hierarchy.

References

Ajunwa, Ifeoma. "Age Discrimination by Platforms." Berkeley Journal of Employment & Labor Law 40 (2019): 1-30.

Amoore, Louise. Cloud Ethics: Algorithms and the Attributes of Ourselves and Others. Durham: Duke University Press, 2020.

———. "Cloud Geographies: Computing, Data, Sovereignty." Progress in Human Geography 42.1 (2018): 4-24.

Ananny, Mike, and Kate Crawford. "Seeing without Knowing: Limitations of the Transparency Ideal and Its Application to Algorithmic Accountability." New Media & Society 20.3 (2018): 973-89.

Attoh, Kafui, et al. "'We're Building Their Data': Labor, Alienation, and Idiocy in the Smart City." Environment and Planning D: Society and Space 37.6 (2019): 1007-24.

Baldwin, Carliss Y., and C. Jason Woodard. "The Architecture of Platforms: A Unified View." Platforms, Markets and Innovation. Ed. Annabelle Gawer. Cheltenham: Edward Elgar, 2009. 19-44.

Barns, Sarah. "Mine Your Data: Open Data, Digital Strategies and Entrepreneurial Governance by Code." Urban Geography 37.4 (2016): 554-71.

Bourdieu, Pierre. Distinction: A Social Critique of the Judgement of Taste. Cambridge, MA: Harvard University Press, 1984.

Box, Paul, et al. Data Platforms for Smart Cities – A Landscape Scan and Recommendations for Smart City Practice. Canberra: CSIRO, 2020.

Box, Paul, and David Lemon. The Role of Social Architecture in Information Infrastructure: A Report for the National Environmental Information Infrastructure (NEII). Canberra: CSIRO, 2015.

Chadwick, Andrew. "Explaining the Failure of an Online Citizen Engagement Initiative: The Role of Internal Institutional Variables." Journal of Information Technology & Politics 8.1 (2011): 21-40.

Conger, Kate. "Uber Wants to Sell You Train Tickets. And Be Your Bus Service, Too." The New York Times, 7 Aug. 2019. 19 Jan. 2021 <https://www.nytimes.com/2019/08/07/technology/uber-train-bus-public-transit.html>.

Dawes, Sharon S. "The Evolution and Continuing Challenges of E-Governance." Public Administration Review 68 (2008): 86-102.

Delacroix, Sylvie, and Neil D. Lawrence. "Bottom-Up Data Trusts: Disturbing the 'One Size Fits All' Approach to Data Governance." International Data Privacy Law 9.4 (2019): 236-252.

Eubanks, Virginia. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York: St. Martin's Press, 2018.

Fisher, Joshua A., and Jay David Bolter. "Ethical Considerations for AR Experiences at Dark Tourism Sites." IEEE Xplore, 29 Apr. 2019. 13 Apr. 2021 <https://ieeexplore.ieee.org/document/8699186>.

Foth, Marcus, et al. From Social Butterfly to Engaged Citizen: Urban Informatics, Social Media, Ubiquitous Computing, and Mobile Technology to Support Citizen Engagement. Cambridge, MA: MIT Press, 2011.

Foucault, Michel. Discipline and Punish. London: Penguin, 1977.

Fourcade, Marion, and Kieran Healy. "Seeing like a Market." Socio-Economic Review 15.1 (2017): 9-29.

Gehl, Robert, and Fenwick McKelvey. "Bugging Out: Darknets as Parasites of Large-Scale Media Objects." Media, Culture & Society 41.2 (2019): 219-35.

Gillespie, Tarleton. "The Politics of 'Platforms.'" New Media & Society 12.3 (2010): 347-64.

Hall, Jonathan D., et al. "Is Uber a Substitute or Complement for Public Transit?" Journal of Urban Economics 108 (2018): 36-50.

Henman, Paul. "Improving Public Services Using Artificial Intelligence: Possibilities, Pitfalls, Governance." Asia Pacific Journal of Public Administration 42.4 (2020): 209-21.

Henman, Paul, and Greg Marston. "The Social Division of Welfare Surveillance." Journal of Social Policy 37.2 (2008): 187-205.

Héritier, Adrienne, and Dirk Lehmkuhl. "Introduction: The Shadow of Hierarchy and New Modes of Governance." Journal of Public Policy 28.1 (2008): 1-17.

Janssen, Marijn, et al. "Benefits, Adoption Barriers and Myths of Open Data and Open Government." Information Systems Management 29.4 (2012): 258-68.

Jenkins, Shannon. "Visa Privatisation Plan Scrapped, with New Approach to Tackle 'Emerging Global Threats'." The Mandarin, 23 Mar. 2020. 19 Jan. 2021 <https://www.themandarin.com.au/128244-visa-privatisation-plan-scrapped-with-new-approach-to-tackle-emerging-global-threats/>.

Jirotka, Marina, et al. "Responsible Research and Innovation in the Digital Age." Communications of the ACM 60.6 (2016): 62-68.

Kitchin, Rob. The Data Revolution: Big Data, Open Data, Data Infrastructures and Their Consequences. Thousand Oaks, CA: Sage, 2014.

Konkel, Frank. "CIA Awards Secret Multibillion-Dollar Cloud Contract." Nextgov, 20 Nov. 2020. 19 Jan. 2021 <https://www.nextgov.com/it-modernization/2020/11/exclusive-cia-awards-secret-multibillion-dollar-cloud-contract/170227/>.

Lance, Kate T., et al. "Cross-Agency Coordination in the Shadow of Hierarchy: 'Joining Up' Government Geospatial Information Systems." International Journal of Geographical Information Science 23.2 (2009): 249-69.

Lee, Ashlin J., and Peta S. Cook. "The Myth of the 'Data-Driven' Society: Exploring the Interactions of Data Interfaces, Circulations, and Abstractions." Sociology Compass 14.1 (2020): 1-14.

Lyons, Glenn, et al. "The Importance of User Perspective in the Evolution of MaaS." Transportation Research Part A: Policy and Practice 121 (2019): 22-36.

MacKenzie, Donald. "'Making', 'Taking' and the Material Political Economy of Algorithmic Trading." Economy and Society 47.4 (2018): 501-23.

Mayer-Schönberger, Viktor, and Kenneth Cukier. Big Data: A Revolution That Will Change How We Live, Work and Think. London: John Murray, 2013.

Nardi, Bonnie, and Hamid Ekbia. Heteromation, and Other Stories of Computing and Capitalism. Cambridge, MA: MIT Press, 2017.

Obermeyer, Ziad, et al. "Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations." Science 366.6464 (2019): 447-53.

O'Neil, Cathy. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. London: Penguin, 2017.

Plantin, Jean-Christophe, et al. "Infrastructure Studies Meet Platform Studies in the Age of Google and Facebook." New Media & Society 20.1 (2018): 293-310.

Rosenblat, Alex, and Luke Stark. "Algorithmic Labor and Information Asymmetries: A Case Study of Uber's Drivers." International Journal of Communication 10 (2016): 3758-3784.

Sawers, Paul. "Uber Drivers Sue for Data on Secret Profiling and Automated Decision-Making." VentureBeat, 20 July 2020. 19 Jan. 2021 <https://venturebeat.com/2020/07/20/uber-drivers-sue-for-data-on-secret-profiling-and-automated-decision-making/>.

Services Australia. About MyGov. Services Australia, 19 Jan. 2021. 19 Jan. 2021 <https://www.servicesaustralia.gov.au/individuals/subjects/about-mygov>.

Star, Susan Leigh. "Infrastructure and Ethnographic Practice: Working on the Fringes." Scandinavian Journal of Information Systems 14.2 (2002): 107-122.

Stilgoe, Jack, et al. "Developing a Framework for Responsible Innovation." Research Policy 42.9 (2013): 1568-80.

Suzor, Nicolas. Lawless: The Secret Rules That Govern Our Digital Lives. Cambridge: Cambridge University Press, 2019.

Thomas, John Clayton, and Gregory Streib. "The New Face of Government: Citizen-Initiated Contacts in the Era of E-Government." Journal of Public Administration Research and Theory 13.1 (2003): 83-102.

Van Dijck, José. "Datafication, Dataism and Dataveillance: Big Data between Scientific Paradigm and Ideology." Surveillance & Society 12.2 (2014): 197-208.

———. "Governing Digital Societies: Private Platforms, Public Values." Computer Law & Security Review 36 (2020). 13 Apr. 2021 <https://www.sciencedirect.com/science/article/abs/pii/S0267364919303887>.

———. The Culture of Connectivity: A Critical History of Social Media. Oxford: Oxford University Press, 2013.

Webster, Elizabeth, and Glenys Harding. "Outsourcing Public Employment Services: The Australian Experience." Australian Economic Review 34.2 (2001): 231-42.