Journal articles on the topic “Algorithmic imaginaries”

Consult the 21 best journal articles for your research on the topic “Algorithmic imaginaries”.


1

de Vries, Patricia, and Willem Schinkel. “Algorithmic anxiety: Masks and camouflage in artistic imaginaries of facial recognition algorithms”. Big Data & Society 6, no. 1 (January 2019): 205395171985153. http://dx.doi.org/10.1177/2053951719851532.

Abstract:
This paper discusses prominent examples of what we call “algorithmic anxiety” in artworks engaging with algorithms. In particular, we consider the ways in which artists such as Zach Blas, Adam Harvey and Sterling Crispin design artworks to consider and critique the algorithmic normativities that materialize in facial recognition technologies. Many of the artworks we consider center on the face, and use either camouflage technology or forms of masking to counter the surveillance effects of recognition technologies. Analyzing their works, we argue that, on the one hand, they reiterate and reify a modernist conception of the self when they conjure an imagination of Big Brother surveillance. Yet, on the other hand, their emphasis on masks and on camouflage also moves beyond such more conventional critiques of algorithmic normativities, and invites reflection on ways of relating to technology beyond the affirmation of the liberal, privacy-obsessed self. In this way, and in particular by foregrounding the relational modalities of the mask and of camouflage, we argue that academic observers of algorithmic recognition technologies can find inspiration in artistic algorithmic imaginaries.
2

Wijermars, Mariëlle, and Mykola Makhortykh. “Sociotechnical imaginaries of algorithmic governance in EU policy on online disinformation and FinTech”. New Media & Society 24, no. 4 (April 2022): 942–63. http://dx.doi.org/10.1177/14614448221079033.

Abstract:
Datafication and the use of algorithmic systems increasingly blur distinctions between policy fields. In the financial sector, for example, algorithms are used in credit scoring, money has become transactional data sought after by large data-driven companies, while financial technologies (FinTech) are emerging as a locus of information warfare. To grasp the context specificity of algorithmic governance and the assumptions on which its evaluation within different domains is based, we comparatively study the sociotechnical imaginaries of algorithmic governance in European Union (EU) policy on online disinformation and FinTech. We find that sociotechnical imaginaries prevalent in EU policy documents on disinformation and FinTech are highly divergent. While the former can be characterized as an algorithm-facilitated attempt to return to the presupposed status quo (absence of manipulation) without a defined future imaginary, the latter places technological innovation at the centre of realizing a globally competitive Digital Single Market.
3

Kazansky, Becky, and Stefania Milan. “‘Bodies not templates’: Contesting dominant algorithmic imaginaries”. New Media & Society 23, no. 2 (February 2021): 363–81. http://dx.doi.org/10.1177/1461444820929316.

Abstract:
Through an array of technological solutions and awareness-raising initiatives, civil society mobilizes against an onslaught of surveillance threats. What alternative values, practices, and tactics emerge from the grassroots which point toward other ways of being in the datafied society? Conversing with critical data studies, science and technology studies, and surveillance studies, this article looks at how dominant imaginaries of datafication are reconfigured and responded to by groups of people dealing directly with their harms and risks. Building on practitioner interviews and participant observation in digital rights events and surveying projects intervening in three critical technological issues of our time—the challenges of digitally secure computing, the Internet of Things, and the threat of widespread facial recognition—this article investigates social justice activists, human rights defenders, and progressive technologists as they try to flip dominant algorithmic imaginaries. In so doing, the article contributes to our understanding of how individuals and social groups make sense of the challenges of datafication from the bottom-up.
4

Schellewald, Andreas. “Theorizing ‘Stories About Algorithms’ as a Mechanism in the Formation and Maintenance of Algorithmic Imaginaries”. Social Media + Society 8, no. 1 (January 2022): 205630512210770. http://dx.doi.org/10.1177/20563051221077025.

Abstract:
In this article, I report from an ethnographic investigation into young adult users of the popular short-video app TikTok. More specifically, I discuss their experience of TikTok’s algorithmic content feed, or so-called “For You Page.” Like many other personalized online environments today, the For You Page is marked by the tension of being a mechanism of digital surveillance and affective control, yet also a source of entertainment and pleasure. Focusing on people’s sense-making practices, especially in relation to stories about the TikTok algorithm, the article approaches the discursive repertoire that underpins people’s negotiation of this tension. Doing so, I theorize the role and relevance of “stories about algorithms” within the context of algorithmic imaginaries as activating users in sense-making processes about their algorithmic entanglements.
5

Kidd, Dorothy. “Hybrid media activism: ecologies, imaginaries, algorithms”. Information, Communication & Society 22, no. 14 (21 June 2019): 2207–10. http://dx.doi.org/10.1080/1369118x.2019.1631374.

6

Schwennesen, Nete. “Algorithmic assemblages of care: imaginaries, epistemologies and repair work”. Sociology of Health & Illness 41, S1 (October 2019): 176–92. http://dx.doi.org/10.1111/1467-9566.12900.

7

Erslev, Malthe Stavning. “A Mimetic Method”. A Peer-Reviewed Journal About 11, no. 1 (18 October 2022): 34–49. http://dx.doi.org/10.7146/aprja.v11i1.134305.

Abstract:
How does a practice of mimesis — as dramatic enactment in a live-action role-playing game (LARP) — relate to the design of artificial intelligence systems? In this article, I trace the contours of a mimetic method, working through an auto-ethnographic approach in tandem with new materialist theory and in conjunction with recent tendencies in design research to argue that mimesis carries strong potential as a practice through which to encounter, negotiate, and design with artificial intelligence imaginaries. Building on a new materialist conception of mimesis as more-than-human sympathy, I illuminate how LARP that centered on the enactment of a fictional artificial intelligence system sustained an encounter with artificial intelligence imaginaries. In what can be understood as a decidedly mimetic way of doing ethnography of algorithmic systems, I argue that we need to consider the value of mimesis — understood as a practice and a method — as a way to render research into artificial intelligence imaginaries.
8

Anikina, Alexandra. “Procedural Animism”. A Peer-Reviewed Journal About 11, no. 1 (18 October 2022): 134–51. http://dx.doi.org/10.7146/aprja.v11i1.134311.

Abstract:
The current proliferation of algorithmic agents (bots, virtual assistants, therapeutic chatbots) that boast real or exaggerated use of AI produces a wide range of interactions between them and humans. The ambiguity of various real and perceived agencies that arises in these encounters is usually dismissed in favour of designating them as technologically or socially determined. However, I argue that the ambiguity brought forth by different opacities, complexities and autonomies at work renders the imaginaries of these algorithms a powerful political and cultural tool. Following approaches from critical theory, posthumanities, decolonial AI and feminist STS that have already approached the boundary between human and non-human productively, it becomes possible to consider technological agents as algorithmic Others, whose outlines, in turn, reveal not only human fears and hopes for technology, but also what it means to be “human” and how normative “humanness” is constructed. Drawing on the work of Antoinette Rouvroy on algorithmic governmentality and Elizabeth A. Povinelli’s ideas of geontology and geontopower, this paper offers a conceptual model of procedural animism in order to rethink the questions of governance and relationality unfolding between humans and non-humans, between the domains of “Life” and “Non-Life”. In doing so, it illuminates a series of processes and procedures of (de)humanisation, image politics and figuration in the context of everyday communication and politically engaged art. Ultimately, what is at stake is a potential to consider alternative conceptions of algorithmic Others, ones that might be differently oriented within our environmental, political and cultural futures.
9

Williamson, Ben. “Silicon startup schools: technocracy, algorithmic imaginaries and venture philanthropy in corporate education reform”. Critical Studies in Education 59, no. 2 (24 May 2016): 218–36. http://dx.doi.org/10.1080/17508487.2016.1186710.

10

Storms, Elias, Oscar Alvarado, and Luciana Monteiro-Krebs. “‘Transparency is Meant for Control’ and Vice Versa: Learning from Co-designing and Evaluating Algorithmic News Recommenders”. Proceedings of the ACM on Human-Computer Interaction 6, CSCW2 (7 November 2022): 1–24. http://dx.doi.org/10.1145/3555130.

Abstract:
Algorithmic systems that recommend content often lack transparency about how they come to their suggestions. One area in which recommender systems are increasingly prevalent is online news distribution. In this paper, we explore how a lack of transparency of (news) recommenders can be tackled by involving users in the design of interface elements. In the context of automated decision-making, legislative frameworks such as the GDPR in Europe introduce a specific conception of transparency, granting 'data subjects' specific rights and imposing obligations on service providers. An important related question is how people using personalized recommender systems relate to the issue of transparency, not as legal data subjects but as users. This paper builds upon a two-phase study on how users conceive of transparency and related issues in the context of algorithmic news recommenders. We organized co-design workshops to elicit participants' 'algorithmic imaginaries' and invited them to ideate interface elements for increased transparency. This revealed the importance of combining legible transparency features with features that increase user control. We then conducted a qualitative evaluation of mock-up prototypes to investigate users' preferences and concerns when dealing with design features to increase transparency and control. Our investigation illustrates how users' expectations and impressions of news recommenders are closely related to their news reading practices. On a broader level, we show how transparency and control are conceptually intertwined. Transparency without control leaves users frustrated. Conversely, without a basic level of transparency into how a system works, users remain unsure of the impact of controls.
11

Giraud, Eva. “Treré, E. (2019). Hybrid media activism: Ecologies, imaginaries, algorithms. London: Routledge, 222 pp.” Communications 45, no. 2 (26 May 2020): 264–66. http://dx.doi.org/10.1515/commun-2020-2085.

12

Beschorner, Thomas, and Florian Krause. “Algorithms, Decision Making, and the Human Outside the Code”. Morals & Machines 1, no. 2 (2021): 78–85. http://dx.doi.org/10.5771/2747-5174-2021-2-78.

Abstract:
In this intervention, we discuss to what extent the term “decision” serves as an adequate terminology for what algorithms actually do. Although calculations of algorithms might be perceived as, or be an important basis for, a decision, we argue that this terminology is not only misleading but also problematic. A calculation is not a decision, we claim, since a decision (implicitly) includes two important aspects: imaginaries about the future and a “fictional surplus”, as well as the process of justification. Our proposal can be seen as an invitation to reflect on the role of “humans outside the code” (and not merely “in the loop”).
13

Levesque, Patrick. “L’élaboration du matériau musical dans les dernières oeuvres vocales de Claude Vivier”. Circuit 18, no. 3 (16 October 2008): 89–106. http://dx.doi.org/10.7202/019141ar.

Abstract:
The same stylistic principles animate Claude Vivier’s last five vocal works (Lonely Child, Bouchara, Prologue pour un Marco Polo, Wo bist du Licht !, Trois airs pour un opéra imaginaire). The concept of the dyad, an interval formed by a bass note and a melodic note, plays a central role in the elaboration of the musical material. The melodies are built from pools of pitches of diatonic inspiration. Generally supported by a bass note, these pools can be shifted relative to it, or can vary so as to create a polytonal universe. The harmony rests largely on precise algorithms of spectral construction. The presentation of these sound spectra varies from work to work. Reductive analysis reveals that the unfolding of the works is underpinned by diatonic, if not frankly tonal, harmonic practices.
14

Lally, Nick. “Crowdsourced surveillance and networked data”. Security Dialogue 48, no. 1 (21 September 2016): 63–77. http://dx.doi.org/10.1177/0967010616664459.

Abstract:
Possibilities for crowdsourced surveillance have expanded in recent years as data uploaded to social networks can be mined, distributed, assembled, mapped, and analyzed by anyone with an uncensored internet connection. These data points are necessarily fragmented and partial, open to interpretation, and rely on algorithms for retrieval and sorting. Yet despite these limitations, they have been used to produce complex representations of space, subjects, and power relations as internet users attempt to reconstruct and investigate events while they are developing. In this article, I consider one case of crowdsourced surveillance that emerged following the detonation of two bombs at the 2013 Boston Marathon. I focus on the actions of a particular forum on reddit.com, which would exert a significant influence on the events as they unfolded. The study describes how algorithmic affordances, internet cultures, surveillance imaginaries, and visual epistemologies contributed to the structuring of thought, action, and subjectivity in the moment of the event. I use this case study as a way to examine moments of entangled political complicity and resistance, highlighting the ways in which particular surveillance practices are deployed and feed back into the event amid its unfolding.
15

Guerra, Ana, and Carlos Frederico De Brito d'Andréa. “Algorithmic imaginaries in the making: Brazilian UberTubers’ encounters with surge pricing algorithms”. AoIR Selected Papers of Internet Research, 15 September 2021. http://dx.doi.org/10.5210/spir.v2021i0.12177.

Abstract:
This paper explores the algorithmic imaginaries associated with Uber's surge pricing, a central mediator of Uber drivers' labor experience. Surge pricing can be described as an algorithmically driven mechanism that uses price adjustments as financial incentives to redistribute the workforce on a territory. Inspired by Critical Algorithm Studies, we argue that this mechanism of governance is not passively incorporated by drivers and that their everyday encounters with surge pricing algorithms are productive of imaginaries and valid forms of knowledge that orient their practices. We approach this by mapping and analyzing how popular Brazilian Uber drivers on YouTube (the “UberTubers”, as we propose) discuss surge pricing in their channels. Based on 59 videos about surge pricing posted on the seven most popular Brazilian UberTubers’ channels, we outline the three main topics addressed: (a) What is surge pricing and what does it do?; (b) Tactics to benefit from surge pricing; (c) Algorithmic Labor as a laboratory. Among our findings, we identified that some UberTubers produce and share their own visual inscriptions to explain how surge pricing works. Through the engagement with their audience, UberTubers collectivize their experiences and imaginations, potentially transforming how various other drivers incorporate surge pricing into their own tactics and daily routines. These channels provide us with something similar to an “archive” of surge pricing’s many versions and facets and offer us the opportunity to learn how algorithms are perceived and how they shape labor on a micropolitical level.
16

Kant, Tanya. “Algorithmic women’s work: The labour of negotiating black-boxed representation”. AoIR Selected Papers of Internet Research 2019 (31 October 2019). http://dx.doi.org/10.5210/spir.v2019i0.10994.

Abstract:
This paper argues that under the proprietary logics of the contemporary web, the ‘algorithmic identities’ (Cheney-Lippold, 2017) created by platforms like Google and Facebook function as value-generating constellations that unequally distribute the burdens of being made in data. The paper focuses on a particular identity demographic: that of the algorithmically inferred 'female', based in the 'UK', 'aged 25-34', and therefore deemed to be interested in 'fertility'. Though other algorithmic profiles certainly exist (and generate their own critical problems), I will use this particular template of subjectivity to explore issues of representation, black-boxing and user trust from a gendered perspective. Combining online audience reception with political economy, I analyse two ad campaigns - for Clearblue Pregnancy Tests and the Natural Cycles Contraceptive app - to understand how the algorithmically fertile female comes to exist, both at the level of the database and at the level of ad representation. I argue that black-boxing occurs at two stages in this process: firstly when the subject is computationally constituted as female (ie in the database) and secondly when the user herself is delivered the ads informed by her algorithmic identity (ie at the interface). This black-boxing creates 'algorithmic imaginaries' (Bucher, 2016) for the user wherein the burden of being made a fertile female in data is experienced as a form of immaterial and emotional labour. Some algorithmic constitutions can therefore be considered a form of algorithmic women's work; work that potentially generates distrust in targeted advertising.
17

Lupton, Deborah. “‘Not the Real Me’: Social Imaginaries of Personal Data Profiling”. Cultural Sociology, 6 August 2020, 174997552093977. http://dx.doi.org/10.1177/1749975520939779.

Abstract:
In this article, I present findings from my Data Personas study, in which I invited Australian adults to respond to the stimulus of the ‘data persona’ to help them consider personal data profiling and related algorithmic processing of personal digitised information. The literature on social imaginaries is brought together with vital materialism theory, with a focus on identifying the affective forces, relational connections and agential capacities in participants’ imaginaries and experiences concerning data profiling and related practices now and into the future. The participants were aware of how their personal data were generated from their online engagements, and that commercial and government agencies used these data. However, most people suggested that data profiling was only ever partial, configuring a superficial and static version of themselves. They noted that as people move through their life-course, their identities and bodies are subject to change: dynamic and emergent. While the digital data that are generated about humans are also lively, these data can never capture the full vibrancy, fluidity and spontaneity of human experience and behaviour. In these imaginaries, therefore, data personas are figured as simultaneously less-than-human and more-than-human. The implications for understanding and theorising human-personal data relations are discussed.
18

Esko, Terhi, and Riikka Koulu. “Rethinking research on social harms in an algorithmic context”. Justice, Power and Resistance, 1 November 2022, 1–7. http://dx.doi.org/10.1332/xvwg6748.

Abstract:
In this paper we suggest that theoretically and methodologically creative interdisciplinary research can benefit the research on social harms in an algorithmic context. We draw on our research on automated decision making within public authorities and the ongoing legislative reform on the use of such systems in Finland. The paper suggests combining socio-legal studies with science and technology studies (STS) and highlights an organisational learning perspective. It also points to three challenges for researchers. The first challenge is that the visions and imaginaries of technological expectations oversimplify the benefits of algorithms. Secondly, the design of automated systems for public authorities has overlooked the social and collective structures of decision making, and the citizen’s perspective is absent. Thirdly, as social harms are unforeseen from the perspective of citizens, we need comprehensive research on the contexts of those harms as well as transformative activities within public organisations.
19

Monsees, Linda, Tobias Liebetrau, Jonathan Luke Austin, Anna Leander, and Swati Srivastava. “Transversal Politics of Big Tech”. International Political Sociology 17, no. 1 (4 January 2023). http://dx.doi.org/10.1093/ips/olac020.

Abstract:
Our everyday life is entangled with products and services of so-called Big Tech companies, such as Amazon, Google, and Facebook. International relations (IR) scholars increasingly seek to reflect on the relationships between Big Tech, capitalism, and institutionalized politics, and they engage with the practices of algorithmic governance and platformization that shape and are shaped by Big Tech. This collective discussion advances these emerging debates by approaching Big Tech transversally, meaning that we problematize Big Tech as an object of study and raise a range of fundamental questions about its politics. The contributions demonstrate how a transversal perspective that cuts across sociomaterial, institutional, and disciplinary boundaries and framings opens up the study of the politics of Big Tech. The discussion brings to the fore perspectives on the ontologies of Big Tech, the politics of the aesthetics and credibility of Big Tech and rethinks the concepts of legitimacy and responsibility. The article thereby provides several inroads for how IR and international political sociology can leverage their analytical engagement with Big Tech and nurture imaginaries of alternative and subversive technopolitical futures.
20

Goetz, Teddy. “Swapping Gender is a Snap(chat)”. Catalyst: Feminism, Theory, Technoscience 7, no. 2 (26 October 2021). http://dx.doi.org/10.28968/cftt.v7i2.34839.

Abstract:
In May 2019 the photographic cellphone application Snapchat released two company-generated image filters that were officially dubbed “My Twin” and “My Other Twin,” though users and media labeled them as feminine and masculine, respectively. While touted in most commentary as a “gender swap” feature, these digital imaginaries represent a unique opportunity to consider what features contribute to classification of faces into binary gender buckets. After all, the commonly considered “male” filter makes various modifications—including a broader jaw and addition of facial hair—to whichever face is selected in the photograph. It does not ask and cannot detect if that face belongs to a man or woman (cis- or transgender) or to a non-binary individual. Instead, the augmented reality that it offers is a preprogrammed algorithmic reinscription of reductive gendered norms. When interacting with a novel face, humans similarly implement algorithms to assign a gender to that face. The Snapchat “My Twin” filters—which are not neutral, but rather human-designed—offer an analyzable projection of one such binarization, which is otherwise rarely articulated or visually recreated. Here I pair an ethnographic exploration of twenty-eight transgender, non-binary, and/or gender diverse individuals’ embodied experiences of facial gender legibility throughout life and with digital distortion, with a quantitative analysis of the “My Twin” filter facial distortions, to better understand the role of technology in reimaginations of who and what we see in the mirror.
21

Querubín, Natalia Sánchez, and Sabine Niederer. “Climate futures: Machine learning from cli-fi”. Convergence: The International Journal of Research into New Media Technologies, 31 October 2022, 135485652211357. http://dx.doi.org/10.1177/13548565221135715.

Abstract:
This paper introduces and contextualises Climate Futures, an experiment in which AI was repurposed as a ‘co-author’ of climate stories and a co-designer of climate-related images that facilitate reflections on present and future(s) of living with climate change. It converses with histories of writing and computation, including surrealistic ‘algorithmic writing’, recombinatory poems and ‘electronic literature’. At the core lies a reflection about how machine learning’s associative, predictive and regenerative capacities can be employed in playful, critical and contemplative goals. Our goal is not automating writing (as in product-oriented applications of AI). Instead, as poet Charles Hartman argues, ‘the question isn’t exactly whether a poet or a computer writes the poem, but what kinds of collaboration might be interesting’ (1996, p. 5). STS scholars critique labs as future-making sites and machine learning modelling practices, describing them, for example, as fictions. Building on these critiques and in line with ‘critical technical practice’ (Agre, 1997), we embed our critique of ‘making the future’ in how we employ machine learning to design a tool for looking ahead and telling stories on life with climate change. This has involved engaging with climate narratives and machine learning from the critical and practical perspectives of artistic research. We trained machine learning algorithms (i.e. GPT-2 and AttnGAN) using climate fiction novels (as a dataset of cultural imaginaries of the future). We prompted them to produce new climate fiction stories and images, which we edited to create a tarot-like deck and a story-book, thus also playfully engaging with machine learning’s predictive associations. The tarot deck is designed to facilitate conversations about climate change. How to imagine the future beyond scenarios of resilience and the dystopian? How to aid our transition into different ways of caring for the planet and each other?