Horrigan, Matthew. "A Flattering Robopocalypse." M/C Journal 23, no. 6 (November 28, 2020). http://dx.doi.org/10.5204/mcj.2726.
RACHAEL. It seems you feel our work is not a benefit to the public.
DECKARD. Replicants are like any other machine. They're either a benefit or a hazard. If they're a benefit it's not my problem.
RACHAEL. May I ask you a personal question?
DECKARD. Yes.
RACHAEL. Have you ever retired a human by mistake? (Scott 17:30)

CAPTCHAs (henceforth "captchas") are commonplace on today's Internet. Their purpose seems clear: block malicious software, allow human users to pass. But as much as they exclude spambots, captchas often exclude humans with visual and other disabilities (Dzieza; W3C Working Group). Worse yet, increasingly advanced captcha-breaking technology has resulted in increasingly challenging captchas, raising the barrier between online services and those who would access them. In the words of inclusive design advocate Robin Christopherson, "CAPTCHAs are evil". In this essay I describe how the captcha industry implements a posthuman process that speculative fiction has gestured toward but not grasped. The hostile posthumanity of captcha is not just a technical problem, nor just a problem of usability or access. Rather, captchas convey a design philosophy that asks humans to prove themselves by performing well at disembodied games. This philosophy has its roots in the Turing Test itself, whose terms guide speculation away from the real problems that today's authentication systems present. Drawing the concept of "procedurality" from game studies, I argue that, despite a design goal of separating machines from humans to the benefit of the latter, captchas actually and ironically produce an arms race in which humans are at a systematic and increasing disadvantage. This arms race results from the Turing Test's equivocation between human and machine bodies, an assumption whose influence I identify in popular film, science fiction literature, and captcha design discourse.

The Captcha Industry and Its Side-Effects

Exclusion is an essential function of every cybersecurity system. From denial-of-service attacks to data theft, toxic automated entities constantly seek admission to services they would damage. To remain functional and accessible, websites need security systems to keep out "abusive agents" (Shet). In cybersecurity, the term "user authentication" refers to the process of distinguishing between abusive agents and welcome users (Jeng et al.). Of the many available authentication techniques, CAPTCHA, "Completely Automated Public Turing test[s] to tell Computers and Humans Apart" (Von Ahn et al. 1465), is one of the most iconic. Although some captchas display a simple checkbox beside a disclaimer to the effect that "I am not a robot" (Shet), these frequently give way to more difficult alternatives: perception tests (fig. 1). Test captchas may show sequences of distorted letters, which a user is supposed to recognise and then type in (Godfrey). Others effectively digitise a game of "I Spy": an image appears, with an instruction to select the parts of it that show a specific type of object (Zhu et al.). A newer type of captcha involves icons rotated upside-down or sideways, the task being to right them (Gossweiler et al.). These latter developments show the influence of gamification (Kani and Nishigaki; Kumar et al.), the design trend in which game-like elements figure in serious tasks.

Fig. 1: A series of captchas followed by multifactor authentication as a "quick security check" during the author's suspicious attempt to access LinkedIn over a Virtual Private Network.
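Both sides of this exchange are ordinary programs, and their procedures are mundane enough to sketch. The following Python fragment is a toy illustration only, assuming the Pillow imaging library; the distortion scheme and the sample string are invented here, standing in for the far more elaborate schemes deployed in practice (Godfrey; Zhu et al.):

```python
# A toy distorted-text captcha of the kind Godfrey describes: each
# character is drawn on its own tile, rotated by a random angle, pasted
# at a random height, and overlaid with occluding noise lines.
import random
from PIL import Image, ImageDraw, ImageFont

def make_text_captcha(text: str, size=(200, 70)) -> Image.Image:
    img = Image.new("RGB", size, "white")
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()
    x = 10
    for ch in text:
        tile = Image.new("RGBA", (40, 40), (0, 0, 0, 0))
        ImageDraw.Draw(tile).text((10, 10), ch, fill="black", font=font)
        tile = tile.rotate(random.uniform(-35, 35), expand=True)
        img.paste(tile, (x, random.randint(5, 25)), tile)  # alpha as mask
        x += 30
    for _ in range(4):  # noise lines to frustrate character segmentation
        draw.line([(random.randint(0, size[0]), random.randint(0, size[1]))
                   for _ in range(2)], fill="grey", width=2)
    return img

challenge = make_text_captcha("W4KE")  # sample string, chosen arbitrarily
challenge.save("captcha.png")

# The breaker's side, at its crudest, is an off-the-shelf OCR pass
# (uncomment if the pytesseract wrapper and Tesseract engine are installed):
# import pytesseract
# print("machine guess:", pytesseract.image_to_string(challenge).strip())
```

Every published captcha attack is, in effect, a more sophisticated version of that final guess, and every captcha redesign an elaboration of the distortion step.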
Gamified captchas, in using tests of ability to tell humans from computers, invite three problems, of which only the first has received focussed critical attention. I discuss each briefly below, and at greater length in subsequent sections.

First, as many commentators have pointed out (W3C Working Group), captchas can accidentally categorise real humans as nonhumans: a technical problem that becomes more likely as captcha-breaking technologies improve (e.g. Tam et al.; Brown et al.). Indeed, the design and breaking of captchas has become an almost self-sustaining subfield of computer science, as researchers review extant captchas, publish methods for breaking them, and publish further captcha designs (e.g. Weng et al.). Such research fuels an industry of captcha-solving services (fig. 2), some of which use automated techniques, and some of which are "human-powered", employing groups of humans to complete large numbers of captchas, thus clearing the way for automated incursions (Motoyama et al. 2). Captchas now face the quixotic task of using ability tests to distinguish legitimate users from abusers with similar abilities.

Fig. 2: Captcha production and captcha breaking: a feedback loop.

Second, gamified captchas import the feelings of games. When they defeat a real human, the human seems not to have encountered the failure state of an automated procedure, but rather to have lost, or given up on, a game. The same frame of "gameful"-ness (McGonigal, under "Happiness Hacking") or "gameful work" (under "The Rise of the Happiness Engineers"), supposed to flatter users with a feeling of reward or satisfaction when they complete a challenge, has a different effect in the event of defeat. Gamefulness shifts the fault from procedure to human, suggesting, for the latter, the shameful status of loser.

Third, like games, gamified captchas promote a particular strain of logic. Just as other forms of media can be powerful venues for purveying stereotypes, so are gamified captchas, in this case conveying the notion that ability is a legitimate means not only of apportioning privilege but of humanising and dehumanising. Humanity thus appears as a status earned, and disability appears not as a stigma, nor an occurrence, but an essence.

The latter two problems emerge because the captcha reveals, propagates and naturalises an ideology through mechanised procedures. Below I invoke the concept of "procedural rhetoric" to critique the disembodied notion of humanity that underlies both the original Turing Test and the "Completely Automated Public Turing test". Both tests, I argue, ultimately play to the disadvantage of their human participants.

Rhetorical Games, Procedural Rhetoric

When videogame studies emerged as an academic field in the early 2000s, one of its first tasks was to legitimise games relative to other types of artefact, especially literary texts (Eskelinen; Aarseth). Scholars sought a framework for discussing how video games, like other more venerable media, can express ideas (Weise). Janet Murray and Ian Bogost looked to the notion of procedure, devising the concepts of "procedurality" (Bogost 3), "procedural authorship" (Murray 171), and "procedural rhetoric" (Bogost 1). From a proceduralist perspective, a videogame is both an object and a medium for inscribing processes.
Those processes have two basic types: procedures the game's developers have authored, which script the behaviour of the game as a computer program; and procedures human players respond with, the "operational logic" of gameplay (Bogost 13). Procedurality's two types of procedure, the computerised and the human, have a kind of call-and-response relationship, in which the behaviour of the machine calls upon players to respond with their own behaviour patterns. Games thus train their players. Through the training that is play, players acquire habits they bring to other contexts, giving videogames the power not only to express ideas but to "disrupt and change fundamental attitudes and beliefs about the world, leading to potentially significant long-term social change" (Bogost ix). That social change can be positive (McGonigal), or it can involve "dark patterns": cases where game procedures provoke and exploit harmful behaviours (Zagal et al.). For example, embedded in many game paradigms is the procedural rhetoric of "toxic meritocracy" (Paul 66), in which players earn rewards, status and personal improvement by overcoming challenges, and, especially, by excelling where others fail. While meritocracy may seem logical within a strictly competitive arena, its effect in a broader cultural context is to legitimise privileges as the spoils of victory, and maltreatment as the just result of defeat.

As game design has influenced other fields, so too has procedurality's applicability expanded. Gamification, "the use of game design elements in non-game contexts" (Deterding et al. 9), is a popular trend in which designers seek to imbue diverse tasks with some of the enjoyment of playing a game (10). Gamification discourse has drawn heavily upon Mihaly Csikszentmihalyi's "positive psychology" (Seligman and Csikszentmihalyi), and especially upon the speculative psychology of flow (Csikszentmihalyi 51), which promises enormously broad benefits for individuals acting in the "flow state" that challenging play supposedly promotes (75). Gamification became a celebrated cause, advocated by a group of scholars and designers Sebastian Deterding calls the "Californian league of gamification evangelists" (120), before becoming an object of critical scrutiny (Fuchs et al.). Where gamification goes, it brings its dark patterns with it. In gamified user authentication (Kroeze and Olivier), and particularly in gamified captcha, deceptively difficult games intersect with real-world stakes and with users whose differences often go ignored.

The Disembodied Arms Race

In captcha design research, the concept of disability occurs under the broader umbrella of usability. Usability studies emphasise the fact that some pieces of technology are easier to access than others (Yan and El Ahmad). Disability studies, in contrast, emphasise the fact that different users have different capacities to overcome access barriers. Ability is contextual, an intersection of usability and disability, use case and user (Reynolds 443). When used as an index of humanness, ability yields illusory results. In Posthuman Knowledge, Rosi Braidotti begins her conceptual enquiry into the posthuman condition with a contemplation of captcha, asking what it means to tick that checkbox claiming that "I am not a robot" (8), and noting the baffling multiplicity of possible answers.
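Part of that multiplicity is architectural: ticking the box does not, by itself, admit anyone. The browser widget returns an opaque token, and the website's own server then asks the captcha provider's API whether the token's bearer may count as human. A minimal sketch of that server-side exchange, following Google's documented reCAPTCHA "siteverify" endpoint (the secret key and token values below are placeholders, and the requests library is assumed):

```python
# Server-side verification of an "I am not a robot" checkbox via
# Google's documented siteverify endpoint. The secret and the token
# are placeholders; a real token is posted by the browser widget.
import requests

SECRET_KEY = "your-site-secret"        # placeholder, issued by the provider
token = "g-recaptcha-response-token"   # placeholder, arrives from the browser

resp = requests.post(
    "https://www.google.com/recaptcha/api/siteverify",
    data={"secret": SECRET_KEY, "response": token},
    timeout=10,
)
verdict = resp.json()
# The verdict includes a boolean "success"; newer, score-based versions
# also return a 0.0-1.0 "score" estimating how human the client's
# behaviour looked. The person at the keyboard sees neither value.
if verdict.get("success"):
    print("admitted as human; score:", verdict.get("score"))
else:
    print("refused:", verdict.get("error-codes"))
```

The adjudication, in other words, happens entirely between machines, on criteria the user can neither inspect nor contest.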
From a practical angle, Junya Kani and Masakatsu Nishigaki write candidly about the problem of distinguishing robot from human: "no matter how advanced malicious automated programs are, a CAPTCHA that will not pass automated programs is required. Hence, we have to find another human cognitive processing capability to tackle this challenge" (40). Kani and Nishigaki try out various human cognitive processing capabilities for the task. Narrative comprehension and humour become candidates: might a captcha ascribe humanity based on human users' ability to determine the correct order of scenes in a film (43)? What about panels in a cartoon (40)? As they seek to assess the soft skills of machines, Kani and Nishigaki set up a drama similar to that of Philip K. Dick's Do Androids Dream of Electric Sheep. Do Androids Dream of Electric Sheep, and its film adaptation, Blade Runner (Scott), describe a spacefaring society populated by both humans and androids. Androids have lesser legal privileges than humans, and in particular face execution, euphemistically called "retirement", for trespassing on planet Earth (Dick 60). Blade Runner gave these androids their more famous name: "replicant". Replicants mostly resemble humans in thought and action, but are reputed to lack the capacity for empathy, so human police, seeking a cognitive processing capability unique to humans, test for empathy to test for humanness (30). But as with captchas, Blade Runner's testing procedure depends upon an automated device whose effectiveness is not certain, prompting the haunting question: "have you ever retired a human by mistake?" (Scott 17:50).

Blade Runner's empathy test is part of a long philosophical discourse about the distinction between human and machine (e.g. Putnam; Searle). At the heart of the debate lies Alan Turing's "Turing Test", which a machine hypothetically passes when it can pass itself off as a human conversationalist in an exchange of written text. Turing's motivation for the test runs as follows: there may be no absolute way of defining what makes a human mind, so the best we can do is assess a computer's ability to imitate one (Turing 433). The aporia, however (how can we determine what makes a human mind?), is the result of an unfair question. Turing's test, dealing only with information expressed in strings of text, purposely disembodies both humans and machines. The Blade Runner universe similarly evens the playing field: replicants look, feel and act like humans to such an extent that distinguishing between the two becomes, again, the subject of a cognition test. The Turing Test, obsessed with information processing and steeped in mind-body dualism, assesses humanness using criteria that automated users can master relatively easily. In contrast, in everyday life, I use a suite of much more intuitive sensory tests to distinguish between my housemate and my laptop. My intuitions capture what the Turing Test masks: a human is a fleshy entity, possessed of the numerous trappings and capacities of a human body.

The result of the automated Turing Test's focus on cognition is an arms race that places human users at an increasing disadvantage. Loss, in such a race, manifests not only as exclusion by and from computer services, but as a redefinition of proper usership: the proper behaviour of the authentic, human user. Thus the Turing Test implicitly provides for a scenario in which a machine becomes able to super-imitate humanness: to be perceived as human more often than a real human would be. In such an outcome, it would be the human conversationalist who would begin to fail the Turing Test, failing to pass themself off according to new criteria for authenticity.
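The restriction to strings can be made concrete. The following schematic of the imitation game is entirely illustrative (the judge and its scoring criterion are invented stand-ins, not Turing's): whatever does not survive serialisation into text, a body above all, is invisible to the test by construction.

```python
# A schematic of the imitation game: the judge receives nothing but
# strings, so any property of a contestant that does not survive
# serialisation into text is invisible by construction.
from typing import Callable

Contestant = Callable[[str], str]  # maps a prompt to a textual reply

def imitation_game(judge: Callable[[str, str], int],
                   a: Contestant, b: Contestant,
                   prompts: list[str]) -> int:
    """Return 0 or 1: whichever contestant the judge declares human."""
    transcript_a = " ".join(a(p) for p in prompts)
    transcript_b = " ".join(b(p) for p in prompts)
    return judge(transcript_a, transcript_b)

# An invented judge that declares "human" whichever transcript scores
# higher on some textual criterion. A machine that learns to exceed
# humans on that criterion produces the super-imitation case: the
# human contestant is the one who fails.
def toy_judge(t_a: str, t_b: str) -> int:
    return 0 if score(t_a) > score(t_b) else 1

def score(transcript: str) -> float:
    # Placeholder criterion: vocabulary variety per word.
    words = transcript.split()
    return len(set(words)) / max(len(words), 1)
```

Whatever criterion the judge encodes, a contestant optimised for that criterion can outscore a human at it; the possibility of super-imitation is written into the protocol itself.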
This scenario is possible because, through procedural rhetoric, machines shift human perspectives: about what is and is not responsible behaviour; about what humans should and should not feel when confronted with a challenge; about who does and does not deserve access; and, fundamentally, about what does and does not signify authentic usership. In captcha, as in Blade Runner, it is ultimately a machine that adjudicates between human and machine cognition. As users we rely upon this machine to serve our interests, rather than to pursue some emergent automated interest, some by-product of the feedback loop that results from the ideologies of human researchers both producing and being produced by mechanised procedures. In the case of captcha, that faith is misplaced.

The Feeling of Robopocalypse

A rich repertory of fiction has speculated upon what novelist Daniel Wilson calls the "Robopocalypse": the scenario in which machines overthrow humankind. Most versions of the story play out as a slave-owner's nightmare, featuring formerly servile entities (which happen to be machines) violently revolting and destroying the civilisation of their masters. Blade Runner's rogue replicants, for example, are effectively fugitive slaves (Dihal 196). Popular narratives of robopocalypse, despite casting lethal robots as their antagonists, are fundamentally human stories with robots playing some of the parts.

In contrast, the exclusion a captcha presents when it defeats a human is not metaphorical or emancipatory. There, in that moment, is a mechanised entity defeating a human. The defeat takes place within an authoritative frame that hides its aggression. For a human user, to be defeated by a captcha is to fail to meet an apparently common standard, within the framework of a common procedure. This is a robopocalypse of baffling systems rather than anthropomorphic soldiers. Likewise, non-human software clients pose threats that humanoid replicants do not. In particular, software clients replicate much faster than physical bodies. The sheer sudden scale of a denial-of-service attack makes Philip K. Dick's vision of android resistance seem quaint. The task of excluding unauthorised software, unlike the impulse to exclude replicants, is more a practical necessity than an exercise in colonialism. Nevertheless, dystopia finds its way into the captcha process through the peril inherent in the test, whenever humans are told apart from authentic users. This is the encroachment of the hostile posthuman, naturalised by us before it denaturalises us. The hostile posthuman sometimes manifests as a drone strike, Terminator-esque (Cameron), a dehumanised decision to kill (Asaro). But it is also a process of gradual exclusion, detectable from moment to moment as a feeling of disdain or impatience for the irresponsibility, incompetence, or simply unusualness of a human who struggles to keep abreast of a rising standard. "We are in this together", Braidotti writes, "between the algorithmic devil and the acidified deep blue sea" (9). But we are also in this separately, divided along lines of ability. Captcha's danger, as a broken procedure, hides in plain sight, because it lashes out at some only while continuing to flatter others with a game that they can still win.
Conclusion

Online security systems may always have to define some users as legitimate and others as illegitimate. Is there a future in which they do so on the basis of behaviour rather than identity or essence? Might some future system accord each user, human or machine, the same authentic status, and provide all with an initial benefit of the doubt? In the short term, such a system would seem grossly impractical. The type of user that most needs to be excluded is the disembodied type: the type that can generate orders of magnitude more demands than a human, and that can proliferate suddenly and in immense numbers because it does not lag behind the slow processes of human bodies. This type of user exists in software alone. Rich in irony, then, is the captcha paradigm, which depends on the disabilities of the threats it confronts. We dread malicious software not for its disabilities, which are momentary and all too human, but for its abilities. Attenuating the threat those abilities present requires inverting a habit that meritocracy trains and overtrains: here is a case in which the plight of the human user calls for negative action toward ability rather than disability.

References

Aarseth, Espen. "Computer Game Studies, Year One." Game Studies 1.1 (2001): 1–15.
Asaro, Peter. "On Banning Autonomous Weapon Systems: Human Rights, Automation, and the Dehumanization of Lethal Decision-Making." International Review of the Red Cross 94.886 (2012): 687–709.
Blade Runner. Dir. Ridley Scott. Warner Bros., 1982.
Bogost, Ian. Persuasive Games: The Expressive Power of Videogames. Cambridge, MA: MIT Press, 2007.
Braidotti, Rosi. Posthuman Knowledge. Cambridge: Polity Press, 2019.
Brown, Samuel S., et al. "I Am 'Totally' Human: Bypassing the Recaptcha." 13th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS), 2017.
Christopherson, Robin. "AI Is Making CAPTCHA Increasingly Cruel for Disabled Users." AbilityNet, 2019. 17 Sep. 2020 <https://abilitynet.org.uk/news-blogs/ai-making-captcha-increasingly-cruel-disabled-users>.
Csikszentmihalyi, Mihaly. Flow: The Psychology of Optimal Experience. New York: Harper & Row, 1990.
Deterding, Sebastian. "Eudaimonic Design, Or: Six Invitations to Rethink Gamification." Rethinking Gamification. Eds. Mathias Fuchs et al. Lüneburg: Meson Press, 2014.
Deterding, Sebastian, et al. "From Game Design Elements to Gamefulness: Defining Gamification." Proceedings of the 15th International Academic MindTrek Conference: Envisioning Future Media Environments. ACM, 2011.
Dick, Philip K. Do Androids Dream of Electric Sheep? 1968. New York: Del Rey, 1996.
Dihal, Kanta. "Artificial Intelligence, Slavery, and Revolt." AI Narratives: A History of Imaginative Thinking about Intelligent Machines. Eds. Stephen Cave, Kanta Dihal, and Sarah Dillon. Oxford: Oxford UP, 2020. 189–212.
Dzieza, Josh. "Why Captchas Have Gotten So Difficult." The Verge, 2019. 17 Sep. 2020 <https://www.theverge.com/2019/2/1/18205610/google-captcha-ai-robot-human-difficult-artificial-intelligence>.
Eskelinen, Markku. "Towards Computer Game Studies." Digital Creativity 12.3 (2001): 175–83.
Fuchs, Mathias, et al., eds. Rethinking Gamification. Lüneburg: Meson Press, 2014.
Godfrey, Philip Brighten. "Text-Based CAPTCHA Algorithms." First Workshop on Human Interactive Proofs, 15 Dec. 2001. 14 Nov. 2020 <http://www.aladdin.cs.cmu.edu/hips/events/abs/godfreyb_abstract.pdf>.
Gossweiler, Rich, et al. "What's Up CAPTCHA? A CAPTCHA Based on Image Orientation." Proceedings of the 18th International Conference on World Wide Web. ACM, 2009.
Jeng, Albert B., et al. "A Study of CAPTCHA and Its Application to User Authentication." International Conference on Computational Collective Intelligence. Springer, 2010.
Kani, Junya, and Masakatsu Nishigaki. "Gamified Captcha." International Conference on Human Aspects of Information Security, Privacy, and Trust. Springer, 2013.
Kroeze, Christien, and Martin S. Olivier. "Gamifying Authentication." 2012 Information Security for South Africa. IEEE, 2012.
Kumar, S. Ashok, et al. "Gamification of Internet Security by Next Generation Captchas." 2017 International Conference on Computer Communication and Informatics (ICCCI). IEEE, 2017.
McGonigal, Jane. Reality Is Broken: Why Games Make Us Better and How They Can Change the World. Penguin, 2011.
Motoyama, Marti, et al. "Re: Captchas – Understanding CAPTCHA-Solving Services in an Economic Context." USENIX Security Symposium, 2010.
Murray, Janet. Hamlet on the Holodeck: The Future of Narrative in Cyberspace. New York: The Free Press, 1997.
Paul, Christopher A. The Toxic Meritocracy of Video Games: Why Gaming Culture Is the Worst. University of Minnesota Press, 2018.
Putnam, Hilary. "Robots: Machines or Artificially Created Life?" The Journal of Philosophy 61.21 (1964): 668–91.
Reynolds, Joel Michael. "The Meaning of Ability and Disability." The Journal of Speculative Philosophy 33.3 (2019): 434–47.
Searle, John. "Minds, Brains, and Programs." Behavioral and Brain Sciences 3.3 (1980): 417–24.
Seligman, Martin, and Mihaly Csikszentmihalyi. "Positive Psychology: An Introduction." Flow and the Foundations of Positive Psychology. 2000. Springer, 2014. 279–98.
Shet, Vinay. "Are You a Robot? Introducing No Captcha Recaptcha." Google Security Blog, 3 Dec. 2014.
Tam, Jennifer, et al. "Breaking Audio Captchas." Proceedings of the 21st International Conference on Neural Information Processing Systems. ACM, 2008. 1625–32.
The Terminator. Dir. James Cameron. Orion, 1984.
Turing, Alan. "Computing Machinery and Intelligence." Mind 59.236 (1950): 433–60.
Von Ahn, Luis, et al. "Recaptcha: Human-Based Character Recognition via Web Security Measures." Science 321.5895 (2008): 1465–68.
W3C Working Group. "Inaccessibility of CAPTCHA: Alternatives to Visual Turing Tests on the Web." W3C, 2019. 17 Sep. 2020 <https://www.w3.org/TR/turingtest/>.
Weise, Matthew. "How Videogames Express Ideas." DiGRA Conference, 2003.
Weng, Haiqin, et al. "Towards Understanding the Security of Modern Image Captchas and Underground Captcha-Solving Services." Big Data Mining and Analytics 2.2 (2019): 118–44.
Wilson, Daniel H. Robopocalypse. New York: Doubleday, 2011.
Yan, Jeff, and Ahmad Salah El Ahmad. "Usability of Captchas or Usability Issues in CAPTCHA Design." Proceedings of the 4th Symposium on Usable Privacy and Security. 2008.
Zagal, José P., Staffan Björk, and Chris Lewis. "Dark Patterns in the Design of Games." 8th International Conference on the Foundations of Digital Games, 2013. 25 Aug. 2020 <http://soda.swedish-ict.se/5552/1/DarkPatterns.1.1.6_cameraready.pdf>.
Zhu, Bin B., et al. "Attacks and Design of Image Recognition Captchas." Proceedings of the 17th ACM Conference on Computer and Communications Security. 2010.