Academic literature on the topic 'Information and analytical tools post-clearance audit'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Information and analytical tools post-clearance audit.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Information and analytical tools post-clearance audit"

1

Brown, Katherine L., Jo Wray, Rachel L. Knowles, Sonya Crowe, Jenifer Tregay, Deborah Ridout, David J. Barron, et al. "Infant deaths in the UK community following successful cardiac surgery: building the evidence base for optimal surveillance, a mixed-methods study." Health Services and Delivery Research 4, no. 19 (May 2016): 1–176. http://dx.doi.org/10.3310/hsdr04190.

Abstract:
Background: While early outcomes of paediatric cardiac surgery have improved, less attention has been given to later outcomes, including post-discharge mortality and emergency readmissions.
Objectives: Our objectives were to use a mixed-methods approach to build an evidence-based guideline for postdischarge management of infants undergoing interventions for congenital heart disease (CHD).
Methods: Systematic reviews of the literature – databases used: MEDLINE (1980 to 1 February 2013), EMBASE (1980 to 1 February 2013), Cumulative Index to Nursing and Allied Health Literature (CINAHL; 1981 to 1 February 2013), The Cochrane Library (1999 to 1 February 2013), Web of Knowledge (1980 to 1 February 2013) and PsycINFO (1980 to 1 February 2013). Analysis of audit data from the National Congenital Heart Disease Audit and Paediatric Intensive Care Audit Network databases pertaining to records of infants undergoing interventions for CHD between 1 January 2005 and 31 December 2010. Qualitative analyses of online discussion posted by 73 parents, interviews with 10 helpline staff based at user groups, interviews with 20 families whose infant either died after discharge or was readmitted urgently to intensive care, and interviews with 25 professionals from tertiary care and 13 professionals from primary and secondary care. Iterative multidisciplinary review and discussion of evidence incorporating the views of parents on suggestions for improvement.
Results: Despite a wide search strategy, the studies identified for inclusion in reviews related only to patients with complex CHD, for whom adverse outcome was linked to non-white ethnicity, lower socioeconomic status, comorbidity, age, complexity and feeding difficulties. There was evidence to suggest that home monitoring programmes (HMPs) are beneficial. Of 7976 included infants, 333 (4.2%) died postoperatively, leaving 7634 infants, of whom 246 (3.2%) experienced outcome 1 (postdischarge death) and 514 (6.7%) experienced outcome 2 (postdischarge death plus emergency intensive care readmissions). Multiple logistic regression models for risk of outcomes 1 and 2 had areas under the receiver operator curve of 0.78 [95% confidence interval (CI) 0.75 to 0.82] and 0.78 (95% CI 0.75 to 0.80), respectively. Six patient groups were identified using classification and regression tree analysis to stratify by outcome 2 (range 3–24%), which were defined in terms of neurodevelopmental conditions, high-risk cardiac diagnosis (hypoplastic left heart, single ventricle or pulmonary atresia), congenital anomalies and length of stay (LOS) > 1 month. Deficiencies and national variability were noted for predischarge training and information, the process of discharge to non-specialist services including documentation, paediatric cardiology follow-up including HMP, psychosocial support post discharge and the processes for accessing help when an infant becomes unwell.
Conclusions: National standardisation may improve discharge documents, training and guidance on 'what is normal' and 'signs and symptoms to look for', including how to respond. Infants with high-risk cardiac diagnoses, neurodevelopmental conditions or LOS > 1 month may benefit from discharge via their local hospital. HMP is suggested for infants with hypoplastic left heart, single ventricle or pulmonary atresia. Discussion of postdischarge deaths for infant CHD should occur at a network-based multidisciplinary meeting. Audit is required of outcomes for this stage of the patient journey.
Future work: Further research may determine the optimal protocol for HMPs, evaluate the use of traffic light tools for monitoring infants post discharge and develop the analytical steps and processes required for audit of postdischarge metrics.
Study registration: This study is registered as PROSPERO CRD42013003483 and CRD42013003484.
Funding: The National Institute for Health Research Health Services and Delivery Research programme. The National Congenital Heart Disease Audit (NCHDA) and Paediatric Intensive Care Audit Network (PICANet) are funded by the National Clinical Audit and Patient Outcomes Programme, administered by the Healthcare Quality Improvement Partnership (HQIP). PICANet is also funded by the Welsh Health Specialised Services Committee, NHS Lothian/National Service Division NHS Scotland, the Royal Belfast Hospital for Sick Children, the National Office of Clinical Audit Ireland, and HCA International. The study was supported by the National Institute for Health Research Biomedical Research Centre at Great Ormond Street Hospital for Children NHS Foundation Trust and University College London. Sonya Crowe was supported by the Health Foundation, an independent charity working to continuously improve the quality of health care in the UK.
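Editorial note: to illustrate the modelling approach summarised above (multiple logistic regression for a binary postdischarge outcome, with discrimination reported as an area under the ROC curve), the minimal Python sketch below uses hypothetical predictors and simulated data. None of the variable names, effect sizes, or figures come from the study itself.

# Minimal sketch of the kind of risk model described in this abstract:
# logistic regression for a binary postdischarge outcome, with discrimination
# summarised by the area under the ROC curve. All data here are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.integers(0, 2, n),   # e.g. high-risk cardiac diagnosis (yes/no)
    rng.integers(0, 2, n),   # e.g. neurodevelopmental condition (yes/no)
    rng.normal(30, 10, n),   # e.g. length of stay in days
])
# Hypothetical outcome generated so the predictors carry some signal.
logit = -4 + 1.2 * X[:, 0] + 0.9 * X[:, 1] + 0.03 * X[:, 2]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Area under the ROC curve: {auc:.2f}")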
2

Karki, S. "Errors: Detection and minimization in histopathology laboratories." Journal of Pathology of Nepal 5, no. 10 (September 14, 2015): 859–64. http://dx.doi.org/10.3126/jpn.v5i10.15643.

Abstract:
The histopathological diagnosis plays a major role in the treatment of diseases, and errors in these reports affect patient care. Hence, it is of utmost importance for all practitioners of this specialty to be aware of possible errors in histopathology laboratories and the means to minimize them. As with other disciplines of laboratory medicine, errors can occur in the pre-analytical, analytical and post-analytical phases. The concept of quality and its control should be applied to all phases to curb errors. Audit can be used as a tool to generate information about the background level of errors in pathology, which in turn can be used to reduce and avoid errors in the histopathology laboratory. Furthermore, accreditation is a means to ensure patient safety and quality assurance.
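Editorial note: as a rough illustration of audit as a tool for estimating the background level of errors by phase, the short sketch below tallies hypothetical error records into the pre-analytical, analytical and post-analytical categories named in the abstract. The records, denominator and rates are invented for illustration only.

# Minimal sketch of an audit-style error summary by laboratory phase.
# The phases follow the abstract; the records themselves are hypothetical.
from collections import Counter

audit_records = [
    {"case_id": 1, "phase": "pre-analytical", "error": "specimen mislabelled"},
    {"case_id": 2, "phase": "analytical", "error": "misinterpretation"},
    {"case_id": 3, "phase": "post-analytical", "error": "report sent to wrong clinician"},
    {"case_id": 4, "phase": "pre-analytical", "error": "inadequate fixation"},
]
total_cases_reviewed = 500  # hypothetical denominator for the audit period

errors_by_phase = Counter(record["phase"] for record in audit_records)
for phase, count in errors_by_phase.items():
    rate = 100 * count / total_cases_reviewed
    print(f"{phase}: {count} errors ({rate:.1f}% of audited cases)")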
3

Droumeva, Milena. "Curating Everyday Life: Approaches to Documenting Everyday Soundscapes." M/C Journal 18, no. 4 (August 10, 2015). http://dx.doi.org/10.5204/mcj.1009.

Abstract:
In the last decade, the cell phone’s transformation from a tool for mobile telephony into a multi-modal, computational “smart” media device has engendered a new kind of emplacement, and the ubiquity of technological mediation into the everyday settings of urban life. With it, a new kind of media literacy has become necessary for participation in the networked social publics (Ito; Jenkins et al.). Increasingly, the way we experience our physical environments, make sense of immediate events, and form impressions is through the lens of the camera and through the ear of the microphone, framed by the mediating possibilities of smartphones. Adopting these practices as a kind of new media “grammar” (Burn 29)—a multi-modal language for public and interpersonal communication—offers new perspectives for thinking about the way in which mobile computing technologies allow us to explore our environments and produce new types of cultural knowledge. Living in the Social Multiverse Many of us are concerned about new cultural practices that communication technologies bring about. In her now classic TED talk “Connected but alone?” Sherry Turkle talks about the world of instant communication as having the illusion of control through which we micromanage our immersion in mobile media and split virtual-physical presence. According to Turkle, what we fear is, on the one hand, being caught unprepared in a spontaneous event and, on the other hand, missing out or not documenting or recording events—a phenomenon that Abha Dawesar calls living in the “digital now.” There is, at the same time, a growing number of ways in which mobile computing devices connect us to new dimensions of everyday life and everyday experience: geo-locative services and augmented reality, convergent media and instantaneous participation in the social web. These technological capabilities arguably shift the nature of presence and set the stage for mobile users to communicate the flow of their everyday life through digital storytelling and media production. According to a Digital Insights survey on social media trends (Bennett), more than 500 million tweets are sent per day and 5 Vines tweeted every second; 100 hours of video are uploaded to YouTube every minute; more than 20 billion photos have been shared on Instagram to date; and close to 7 million people actively produce and publish content using social blogging platforms. There are more than 1 billion smartphones in the US alone, and most social media platforms are primarily accessed using mobile devices. The question is: how do we understand the enormity of these statistics as a coherent new media phenomenon and as a predominant form of media production and cultural participation? More importantly, how do mobile technologies re-mediate the way we see, hear, and perceive our surrounding evironment as part of the cultural circuit of capturing, sharing, and communicating with and through media artefacts? Such questions have furnished communication theory even before McLuhan’s famous tagline “the medium is the message”. Much of the discourse around communication technology and the senses has been marked by distinctions between “orality” and “literacy” understood as forms of collective consciousness engendered by technological shifts. 
Leveraging Jonathan Sterne’s critique of this “audio-visual litany”, an exploration of convergent multi-modal technologies allows us to focus instead on practices and techniques of use, considered as both perceptual and cultural constructs that reflect and inform social life. Here in particular, a focus on sound—or aurality—can help provide a fresh new entry point into studying technology and culture. The phenomenon of everyday photography is already well conceptualised as a cultural expression and a practice connected with identity construction and interpersonal communication (Pink, Visual). Much more rarely do we study the act of capturing information using mobile media devices as a multi-sensory practice that entails perceptual techniques as well as aesthetic considerations, and as something that in turn informs our unmediated sensory experience. Daisuke and Ito argue that—in contrast to hobbyist high-quality photographers—users of camera phones redefine the materiality of urban surroundings as “picture-worthy” (or not) and elevate the “mundane into a photographic object.” Indeed, whereas traditionally recordings and photographs hold institutional legitimacy as reliable archival references, the proliferation of portable smart technologies has transformed user-generated content into the gold standard for authentically representing the everyday. Given that visual approaches to studying these phenomena are well underway, this project takes a sound studies perspective, focusing on mediated aural practices in order to explore the way people make sense of their everyday acoustic environments using mobile media. Curation, in this sense, is a metaphor for everyday media production, illuminated by the practice of listening with mobile technology. Everyday Listening with Technology: A Case Study The present conceptualisation of curation emerged out of a participant-driven qualitative case study focused on using mobile media to make sense of urban everyday life. The study comprised 10 participants using iPod Touches (a device equivalent to an iPhone, without the phone part) to produce daily “aural postcards” of their everyday soundscapes and sonic experiences, over the course of two to four weeks. This work was further informed by, and updates, sonic ethnography approaches nascent in the World Soundscape Project, and the field of soundscape studies more broadly. Participants were asked to fill out a questionnaire about their media and technology use, in order to establish their participation in new media culture and correlate that to the documentary styles used in their aural postcards. With regard to capturing sonic material, participants were given open-ended instructions as to content and location, and encouraged to use the full capabilities of the device—that is, to record audio, video, and images, and to use any applications on the device. Specifically, I drew their attention to a recording app (Recorder) and a decibel measurement app (dB), which combines a photo with a static readout of ambient sound levels. One way most participants described the experience of capturing sound in a collection of recordings for a period of time was as making a “digital scrapbook” or a “media diary.” Even though they had recorded individual (often unrelated) soundscapes, almost everyone felt that the final product came together as a stand-alone collection—a kind of gallery of personalised everyday experiences that participants, if anything, wished to further organise, annotate, and flesh out. 
Examples of aural postcard formats used by participants: decibel photographs of everyday environments and a comparison audio recording of rain on a car roof with and without wipers (in the middle). Working with 139 aural postcards comprising more than 250 audio files and 150 photos and videos, the first step in the analysis was to articulate approaches to media documentation in terms of format, modality, and duration as deliberate choices in conversation with dominant media forms that participants regularly consume and are familiar with. Ambient sonic recordings (audio-only) comprised a large chunk of the data, and within this category there were two approaches: the sonic highlight, a short vignette of a given soundscape with minimal or no introduction or voice-over; and the process recording, featuring the entire duration of an unfolding soundscape or event. Live commentaries, similar to the conventions set forth by radio documentaries, represented voice-over entries at the location of the sound event, sometimes stationary and often in motion as the event unfolded. Voice memos described verbal reflections, pre- or post- sound event, with no discernable ambience—that is, participants intended them to serve as reflective devices rather than as part of the event. Finally, a number of participants also used the sound level meter app, which allowed them to generate visual records of the sonic levels of a given environment or location in the form of sound level photographs. Recording as a Way of Listening In their community soundwalking practice, Förnstrom and Taylor refer to recording sound in everyday settings as taking world experience, mediating it through one’s body and one’s memories and translating it into approximate experience. The media artefacts generated by participants as part of this study constitute precisely such ‘approximations’ of everyday life accessed through aural experience and mediated by the technological capabilities of the iPod. Thinking of aural postcards along this technological axis, the act of documenting everyday soundscapes involves participants acting as media producers, ‘framing’ urban everyday life through a mobile documentary rubric. In the process of curating these documentaries, they have to make decisions about the significance and stylistic framing of each entry and the message they wish to communicate. In order to bring the scope of these curatorial decisions into dialogue with established media forms, in this work’s analysis I combine Bill Nichols’s classification of documentary modes in cinema with Karin Bijsterveld’s concept of soundscape ‘staging’ to characterise the various approaches participants took to the multi-modal curation of their everyday (sonic) experience. In her recent book on the staging of urban soundscapes in both creative and documentary/archival media, Bijsterveld describes the representation of sound as particular ‘dramatisations’ that construct different kinds of meanings about urban space and engender different kinds of listening positions. Nichols’s articulation of cinematic documentary modes helps detail ways in which the author’s intentionality is reflected in the styling, design, and presentation of filmic narratives. Michel Chion’s discussion of cinematic listening modes further contextualises the cultural construction of listening that is a central part of both design and experience of media artefacts. 
The conceptual lens is especially relevant to understanding mobile curation of mediated sonic experience as a kind of mobile digital storytelling. Working across all postcards, settings, and formats, the following four themes capture some of the dominant stylistic dimensions of mobile media documentation. The exploratory approach describes a methodology for representing everyday life as a flow, predominantly through ambient recordings of unfolding processes that participants referred to in the final discussion as a ‘turn it on and forget it’ approach to recording. As a stylistic method, the exploratory approach aligns most closely with Nichols’s poetic and observational documentary modes, combining a ‘window to the world’ aesthetic with minimal narration, striving to convey the ‘inner truth’ of phenomenal experience. In terms of listening modes reflected in this approach, exploratory aural postcards most strongly engage causal listening, to use Chion’s framework of cinematic listening modes. By and large, the exploratory approach describes incidental documentaries of routine events: soundscapes that are featured as a result of greater attentiveness and investment in the sonic aspects of everyday life. The entries created using this approach reflect a process of discovering (seeing and hearing) the ordinary as extra-ordinary; re-experiencing sometimes mundane and routine places and activities with a fresh perspective; and actively exploring hidden characteristics, nuances of meaning, and significance. For instance, in the following example, one participant explores a new neighborhood while on a work errand:The narrative approach to creating aural postcards stages sound as a springboard for recollecting memories and storytelling through reflecting on associations with other soundscapes, environments, and interactions. Rather than highlighting place, routine, or sound itself, this methodology constructs sound as a window into the identity and inner life of the recordist, mobilising most strongly a semantic listening mode through association and narrative around sound’s meaning in context (Chion 28). This approach combines a subjective narrative development with a participatory aesthetic that draws the listener into the unfolding story. This approach is also performative, in that it stages sound as a deeply subjective experience and approaches the narrative from a personally significant perspective. Most often this type of sound staging was curated using voice memo narratives about a particular sonic experience in conjunction with an ambient sonic highlight, or as a live commentary. Recollections typically emerged from incidental encounters, or in the midst of other observations about sound. In the following example a participant reminisces about the sound of wind, which, interestingly, she did not record: Today I have been listening to the wind. It’s really rainy and windy outside today and it was reminding me how much I like the sound of wind. And you know when I was growing up on the wide prairies, we sure had a lot of wind and sometimes I kind of miss the sound of it… (Participant 1) The aesthetic approach describes instances where the creation of aural postcards was motivated by a reduced listening position (Chion 29)—driven primarily by the qualities and features of the soundscape itself. 
This curatorial practice for staging mediated aural experience combines a largely subjective approach to documenting with an absence of traditional narrative development and an affective and evocative aesthetic. Where the exploratory documentary approach seeks to represent place, routine, environment, and context through sonic characteristics, the aesthetic approach features sound first and foremost, aiming to represent and comment on sound qualities and characteristics in a more ‘authentic’ manner. The media formats most often used in conjunction with this approach were the incidental ambient sonic highlight and the live commentary. In the following example we have the sound of coffee being made as an important domestic ritual where important auditory qualities are foregrounded: That’s the sound of a stovetop percolator which I’ve been using for many years and I pretty much know exactly how long it takes to make a pot of coffee by the sound that it makes. As soon as it starts gurgling I know I have about a minute before it burns. It’s like the coffee calls and I come. (Participant 6) The analytical approach characterises entries that stage mediated aural experience as a way of systematically and inductively investigating everyday phenomena. It is a conceptual and analytical experimental methodology employed to move towards confirming or disproving a ‘hypothesis’ or forming a theory about sonic relations developed in the course of the study. As such, this approach most strongly aligns with Chion’s semantic listening mode, with the addition of the interactive element of analytical inquiry. In this context, sound is treated as a variable to be measured, compared, researched, and theorised about in an explicit attempt to form conclusions about social relationships, personal significance, place, or function. This analytical methodology combines an explicit and critical focus to the process of documenting itself (whether it be measuring decibels or systematically attending to sonic qualities) with a distinctive analytical synthesis that presents as ‘formal discovery’ or even ‘truth.’ In using this approach, participants most often mobilised the format of short sonic highlights and follow-up voice memos. While these aural postcards typically contained sound level photographs (decibel measurement values), in some cases the inquiry and subsequent conclusions were made inductively through sustained observation of a series of soundscapes. The following example is by a participant who exclusively recorded and compared various domestic spaces in terms of sound levels, comparing and contrasting them using voice memos. This is a sound level photograph of his home computer system: So I decided to record sitting next to my computer today just because my computer is loud, so I wanted to see exactly how loud it really was. But I kept the door closed just to be sort of fair, see how quiet it could possibly get. I think it peaked at 75 decibels, and that’s like, I looked up a decibel scale, and apparently a lawn mower is like 90 decibels. (Participant 2) Mediated Curation as a New Media Cultural Practice? One aspect of adopting the metaphor of ‘curation’ towards everyday media production is that it shifts the critical discourse on aesthetic expression from the realm of specialised expertise to general practice (“Everyone’s a photographer”). 
The act of curation is filtered through the aesthetic and technological capabilities of the smartphone, a device that has become co-constitutive of our routine sensorial encounters with the world. Revisiting McLuhan-inspired discourses on communication technologies stages the iPhone not as a device that itself shifts consciousness but as an agent in a media ecology co-constructed by the forces of use and design—a “crystallization of cultural practices” (Sterne). As such, mobile technology is continuously re-crystalised as design ‘constraints’ meet both normative and transgressive user approaches to interacting with everyday life. The concept of ‘social curation’ already exists in commercial discourse for social web marketing (O’Connell; Allton). High-traffic, wide-integration web services such as Digg and Pinterest, as well as older portals such as Reddit, all work on the principles of arranging user-generated, web-aggregated, and re-purposed content around custom themes. From a business perspective, the notion of ‘social curation’ captures, unsurprisingly, only the surface level of consumer behaviour rather than the kinds of values and meaning that this process holds for people. In the more traditional sense, art curation involves aesthetic, pragmatic, epistemological, and communication choices about the subject of (re)presentation, including considerations such as manner of display, intended audience, and affective and phenomenal impact. In his 2012 book tracing the discourse and culture of curating, Paul O’Neill proposes that over the last few decades the role of the curator has shifted from one of arts administrator to important agent in the production of cultural experiences, an influential cultural figure in her own right, independent of artistic content (88). Such discursive shifts in the formulation of ‘curatorship’ can easily be transposed from a specialised to a generalised context of cultural production, in which everyone with the technological means to capture, share, and frame the material and sensory content of everyday life is a curator of sorts. Each of us is an agent with a unique aesthetic and epistemological perspective, regardless of the content we curate. The entire communicative exchange is necessarily located within a nexus of new media practices as an activity that simultaneously frames a cultural construction of sensory experience and serves as a cultural production of the self. To return to the question of listening and a sound studies perspective into mediated cultural practices, technology has not single-handedly changed the way we listen and attend to everyday experience, but it has certainly influenced the range and manner in which we make sense of the sensory ‘everyday’. Unlike acoustic listening, mobile digital technologies prompt us to frame sonic experience in a multi-modal and multi-medial fashion—through the microphone, through the camera, and through the interactive, analytical capabilities of the device itself. Each decision for sensory capture as a curatorial act is both epistemological and aesthetic; it implies value of personal significance and an intention to communicate meaning. The occurrences that are captured constitute impressions, highlights, significant moments, emotions, reflections, experiments, and creative efforts—very different knowledge artefacts from those produced through textual means. 
Framing phenomenal experience—in this case, listening—in this way is, I argue, a core characteristic of a more general type of new media literacy and sensibility: that of multi-modal documenting of sensory materialities, or the curation of everyday life. References Allton, Mike. “5 Cool Content Curation Tools for Social Marketers.” Social Media Today. 15 Apr. 2013. 10 June 2015 ‹http://socialmediatoday.com/mike-allton/1378881/5-cool-content-curation-tools-social-marketers›. Bennett, Shea. “Social Media Stats 2014.” Mediabistro. 9 June 2014. 20 June 2015 ‹http://www.mediabistro.com/alltwitter/social-media-statistics-2014_b57746›. Bijsterveld, Karin, ed. Soundscapes of the Urban Past: Staged Sound as Mediated Cultural Heritage. Bielefeld: Transcript-Verlag, 2013. Burn, Andrew. Making New Media: Creative Production and Digital Literacies. New York, NY: Peter Lang Publishing, 2009. Daisuke, Okabe, and Mizuko Ito. “Camera Phones Changing the Definition of Picture-worthy.” Japan Media Review. 8 Aug. 2015 ‹http://www.dourish.com/classes/ics234cw04/ito3.pdf›. Chion, Michel. Audio-Vision: Sound on Screen. New York, NY: Columbia UP, 1994. Förnstrom, Mikael, and Sean Taylor. “Creative Soundwalks.” Urban Soundscapes and Critical Citizenship Symposium. Limerick, Ireland. 27–29 March 2014. Ito, Mizuko, ed. Hanging Out, Messing Around, and Geeking Out: Kids Living and Learning with New Media. Cambridge, MA: The MIT Press, 2010. Jenkins, Henry, Ravi Purushotma, Margaret Weigel, Katie Clinton, and Alice J. Robison. Confronting the Challenges of Participatory Culture: Media Education for the 21st Century. White Paper prepared for the McArthur Foundation, 2006. McLuhan, Marshall. Understanding Media: The Extensions of Man. New York: McGraw-Hill, 1964. Nichols, Brian. Introduction to Documentary. Bloomington & Indianapolis, Indiana: Indiana UP, 2001. Nielsen. “State of the Media – The Social Media Report.” Nielsen 4 Dec. 2012. 12 May 2015 ‹http://www.nielsen.com/us/en/insights/reports/2012/state-of-the-media-the-social-media-report-2012.html›. O’Connel, Judy. “Social Content Curation – A Shift from the Traditional.” 8 Aug. 2011. 11 May 2015 ‹http://judyoconnell.com/2011/08/08/social-content-curation-a-shift-from-the-traditional/›. O’Neill, Paul. The Culture of Curating and the Curating of Culture(s). Cambridge, MA: MIT Press, 2012. Pink, Sarah. Doing Visual Ethnography. London, UK: Sage, 2007. ———. Situating Everyday Life. London, UK: Sage, 2012. Sterne, Jonathan. The Audible Past: Cultural Origins of Sound Reproduction. Durham, NC: Duke UP, 2003. Schafer, R. Murray, ed. World Soundscape Project. European Sound Diary (reprinted). Vancouver: A.R.C. Publications, 1977. Turkle, Sherry. “Connected But Alone?” TED Talk, Feb. 2012. 8 Aug. 2015 ‹http://www.ted.com/talks/sherry_turkle_alone_together?language=en›.
4

Egliston, Ben. "Building Skill in Videogames: A Play of Bodies, Controllers and Game-Guides." M/C Journal 20, no. 2 (April 26, 2017). http://dx.doi.org/10.5204/mcj.1218.

Abstract:
IntroductionIn his now-seminal book, Pilgrim in the Microworld (1983), David Sudnow details his process of learning to play the game Breakout on the Atari 2600. Sudnow develops an account of his graduation from a novice (having never played a videogame prior, and middle-aged at time of writing) to being able to fluidly perform the various configurative processes involved in an acclimated Breakout player’s repertoire.Sudnow’s account of videogame skill-development is not at odds with common-sense views on the matter: people become competent at videogames by playing them—we get used to how controllers work and feel, and to the timings of the game and those required of our bodies, through exposure. We learn by playing, failing, repeating, and ultimately internalising the game’s rhythms—allowing us to perform requisite actions. While he does not put it in as many words, Sudnow’s account affords parity to various human and nonhuman stakeholders involved in videogame-play: technical, temporal, and corporeal. Essentially, his point is that intertwined technical systems like software and human-interface devices—with their respective temporal rhythms, which coalesce and conflict with those of the human player—require management to play skilfully.The perspective Sudnow develops here is no doubt important, but modes of building competency cannot be strictly fixed around a player-videogame relationship; a relatively noncontroversial view in game studies. Videogame scholars have shown that there is currency in understanding how competencies in gameplay arise from engaging with ancillary objects beyond the thresholds of player-game relations; the literature to date casting a long shadow across a broad spectrum of materials and practices. Pursuing this thread, this article addresses the enterprise (and conceptualisation) of ‘skill building’ in videogames (taken as the ability to ‘beat games’ or cultivate the various competencies to do so) via the invocation of peripheral objects or practices. More precisely, this article develops the perspective that we need to attend to the impacts of ancillary objects on play—positioned as hybrid assemblage, as described in the work of writers like Sudnow. In doing so, I first survey how the intervention of peripheral game material has been researched and theorised in game studies, suggesting that many accounts deal too simply with how players build skill through these means—eliding the fact that play works as an engine of many moving parts. We do not simply become ‘better’ at videogames by engaging peripheral material. Furthering this view, I visit recent literature broadly associated with disciplines like post-phenomenology, which handles the hybridity of play and its extension across bodies, game systems, and other gaming material—attending to how skill building occurs; that is, through the recalibration of perceptual faculties operating in the bodily and temporal dimensions of videogame play. We become ‘better’ at videogames by drawing on peripheral gaming material to augment how we negotiate the rhythms of play.Following on from this, I conclude by mobilising post-phenomenological thinking to further consider skill-building through peripheral material, showing how such approaches can generate insights into important and emerging areas of this practice. 
Following recent games research, such as the work of James Ash, I adopt Bernard Stiegler’s formulation of technicity—pointing toward the conditioning of play through ancillary gaming objects: focusing particularly on the relationship between game skill, game guides, and embodied processes of memory and perception.In short, this article considers videogame skill-building, through means beyond the game, as a significant recalibration of embodied, temporal, and technical entanglements involved in play. Building Skill: From Guides to BodiesThere is a handsome literature that has sought to conceptualise the influence of ancillary game material, which can be traced to earlier theories of media convergence (Jenkins). More incisive accounts (pointing directly at game-skill) have been developed since, through theoretical rubrics such as paratext and metagaming. A point of congruence is the theme of relation: the idea that the locus of understanding and meaning can be specified through things outside the game. For scholars like Mia Consalvo (who popularised the notion of paratext in game studies), paratexts are a central motor in play. As Consalvo suggests, paratexts are quite often primed to condition how we do things in and around videogames; there is a great instructive potential in material like walkthrough guides, gaming magazines and cheating devices. Subsequent work has since made productive use of the concept to investigate game-skill and peripheral material and practice. Worth noting is Chris Paul’s research on World of Warcraft (WoW). Paul suggests that players disseminate high-level strategies through a practice known as ‘Theorycraft’ in the game’s community: one involving the use of paratextual statistics applications to optimise play—the results then disseminated across Web-forums (see also: Nardi).Metagaming (Salen and Zimmerman 482) is another concept that is often used to position the various extrinsic objects or practices installed in play—a concept deployed by scholars to conceptualise skill building through both games and the things at their thresholds (Donaldson). Moreover, the ability to negotiate out-of-game material has been positioned as a form of skill in its own right (see also: Donaldson). Becoming familiar with paratextual resources and being able to parse this information could then constitute skill-building. Ancillary gaming objects are important, and as some have argued, central in gaming culture (Consalvo). However, critical areas are left unexamined with respect to skill-building, because scholars often fail to place paratexts or metagaming in the contexts in which they operate; that is, amongst the complex technical, embodied and temporal conjunctures of play—such as those described by Sudnow. Conceptually, much of what Sudnow says in Microworld undergirds the post-human, object-oriented, or post-phenomenological literature that has begun to populate game studies (and indeed media studies more broadly). This materially-inflected writing takes seriously the fact that technical objects (like videogames) and human subjects are caught up in the rhythms of each other; digital media exists “as a mode or cluster of operations in consort with matter”, as Anna Munster tells us (330).To return to videogames, Patrick Crogan and Helen Kennedy argue that gameplay is about a “technicity” between human and nonhuman things, irreducible to any sole actor. Play is a confluence of metastable forces and conditions, a network of distributed agencies (see also Taylor, Assemblage). 
Others like Brendan Keogh forward post-phenomenological approaches (operating under scholars like Don Ihde)—looking past the subject-centred nature of videogame research. Ultimately, these theorists situate play as an ‘exploded diagram’, challenging anthropocentric accounts.This position has proven productive in research on ‘skilled’ or ‘high-level’ play (fertile ground for considering competency-development). Emma Witkowski, T.L. Taylor (Raising), and Todd Harper have suggested that skilled play in games emerges from the management of complex embodied and technical rhythms (echoing the points raised prior by Sudnow).Placing Paratexts in PlayWhile we have these varying accounts of how skill develops within and beyond player-game relationships, these two perspectives are rarely consolidated. That said, I address some of the limited body of work that has sought to place the paratext in the complex and distributed conjunctures of play; building a vocabulary and framework via encounters with what could loosely be called post-phenomenological thinking (not dissimilar to the just surveyed accounts). The strength of this work lies in its development of a more precise view of the operational reality of playing ‘with’ paratexts. The recent work of Darshana Jayemanne, Bjorn Nansen, and Thomas Apperley theorises the outward expansion of games and play, into diverse material, social, and spatial dimensions (147), as an ‘aesthetics of recruitment’. Consideration is given to ‘paratextual’ play and skill. For instance, they provide the example of players invoking the expertise they have witnessed broadcast through Websites like Twitch.tv or YouTube—skill-building operating here across various fronts, and through various modalities (155). Players are ‘recruited’, in different capacities, through expanded interfaces, which ultimately contour phenomenological encounters with games.Ash provides a fine-grained account in research on spatiotemporal perception and videogames—one much more focused on game-skill. Ash examines how high-level communities of players cultivate ‘spatiotemporal sensitivity’ in the game Street Fighter IV through—in Stiegler’s terms—‘exteriorising’ (Fault) game information into various data sets—producing what he calls ‘technicity’. In this way, Ash suggests that these paratextual materials don’t merely ‘influence play’ (Technology 200), but rather direct how players perceive time, and habituate exteriorised temporal rhythms into their embodied facility (a translation of high-level play). By doing so, the game can be played more proficiently. Following the broadly post-phenomenological direction of these works, I develop a brief account of two paratextual practices. Like Ash, I deploy the work of Stiegler (drawing also on Ash’s usage). I utilise Stiegler’s theoretical schema of technicity to roughly sketch how some other areas of skill-building via peripheral material can be placed within the context of play—looking particularly at the conditioning of embodied faculties of player anticipation, memory and perception through play and paratext alike. A Technicity of ParatextThe general premise of Stiegler’s technicity is that the human cannot be thought of independent from their technical supplements—that is, ‘exterior’ technical objects which could include, but are not limited to, technologies (Fault). Stiegler argues that the human, and their fundamental memory structure is finite, and as such is reliant on technical prostheses, which register and transmit experience (Fault 17). 
This technical supplement is what Stiegler terms ‘tertiary retention’. In short, for Stiegler, technicity can be understood as the interweaving of ‘lived’ consciousness (Cinematic 21) with tertiary retentional apparatus—which is palpably felt in our orientations in and toward time (Fault) and space (including the ‘space’ of our bodies, see New Critique 11).To be more precise, tertiary retention conditions the relationship between perception, anticipation, and subjective memory (or what Stiegler—by way of phenomenologist Edmund Husserl, whose work he renovates—calls primary retention, protention, and secondary retention respectively). As Ash demonstrates (Technology), Stiegler’s framework is rich with potential in investigating the relationship between videogames and their peripheral materials. Invoking technicity, we can rethink—and expand on—commonly encountered forms of paratexts, such as game guides or walkthroughs (an example Consalvo gives in Cheating). Stiegler’s framework provides a means to assess the technical organisation (through both games and paratexts) of embodied and temporal conditions of ‘skilled play’. Following Stiegler, Consalvo’s example of a game guide is a kind of ‘exteriorisation of play’ (to the guide) that adjusts the embodied and temporal conditions of anticipation and memory (which Sudnow would tell us are key in skill-development). To work through an example, if I was playing a hard game (such as Dark Souls [From Software]), the general idea is that I would be playing from memories of the just experienced, and with expectations of what’s to come based on everything that’s happened prior (following Stiegler). There is a technicity in the game’s design here, as Ash would tell us (Technology 190-91). By way of Stiegler (and his reading of Heidegger), Ash argues a popular trend in game design is to force a technologically-mediated interplay between memory, anticipation, and perception by making videogames ‘about’ a “a future outside of present experience” (Technology 191), but hinging this on past-memory. Players then, to be ‘skilful’, and move forward through the game environment without dying, need to manage cognitive and somatic memory (which, in Dark Souls, is conventionally accrued through trial-and-error play; learning through error incentivised through punitive game mechanics, such as item-loss). So, if I was playing against one of the game’s ‘bosses’ (powerful enemies), I would generally only be familiar with the way they manoeuvre, the speed with which they do so, and where and when to attack based on prior encounter. For instance, my past-experience (of having died numerous times) would generally inform me that using a two-handed sword allows me to get in two attacks on a boss before needing to retreat to avoid fatal damage. Following Stiegler, we can understand the inscription of videogame experience in objects like game guides as giving rise to anticipation and memory—albeit based on a “past that I have not lived but rather inherited as tertiary retentions” (Cinematic 60). Tertiary retentions trigger processes of selection in our anticipations, memories, and perceptions. 
Where videogame technologies are traditionally the tertiary retentions in play (Ash, Technologies), the use of game-guides refracts anticipation, memory, and perception through joint systems of tertiary retention—resulting in the outcome of more efficiently beating a game.To return to my previous example of navigating Dark Souls: where I might have died otherwise, via the guide, I’d be cognisant to the timings within which I can attack the boss without sustaining damage, and when to dodge its crushing blows—allowing me to eventually defeat it and move toward the stage’s end (prompting somatic and cognitive memory shifts, which influence my anticipation in-game). Through ‘neurological’ accounts of technology—such as Stiegler’s technicity—we can think more closely about how playing with a skill-building apparatus (like a game guide) works in practice; allowing us to identify how various situations ingame can be managed via deferring functions of the player (such as memory) to exteriorised objects—shifting conditions of skill building. The prism of technicity is also useful in conceptualising some of the new ways players are building skill beyond the game. In recent years, gaming paratexts have transformed in scope and scale. Gaming has shifted into an age of quantification—with analytics platforms which harvest, aggregate, and present player data gaining significant traction, particularly in competitive and multiplayer videogames. These platforms perform numerous operations that assist players in developing skill—and are marketed as tools for players to improve by reflecting on their own practices and the practices of others (functioning similarly to the previously noted practice of TheoryCraft, but operating at a wider scale). To focus on one example, the WarCraftLogs application in WoW (Image 1) is a highly-sophisticated form of videogame analytics; the perspective of technicity providing insights into its functionality as skill-building apparatus.Image 1: WarCraftLogs. Image credit: Ben Egliston. Following Ash’s use of Stiegler (Technology), quantifying the operations that go into playing WoW can be conceptualised as what Stiegler calls a system of traces (Technology 196). Because of his central thesis of ‘technical existence’, Stiegler maintains that ‘interiority’ is coincident with technical support. As such, there is no calculation, no mental phenomena, that does not arise from internal manipulation of exteriorised symbols (Cinematic 52-54). Following on with his discussion of videogames, Ash suggests that in the exteriorisation of gameplay there is “no opposition between gesture, calculation and the representation of symbols” (Technology 196); the symbols working as an ‘abbreviation’ of gameplay that can be read as such. Drawing influence from this view, I show that ‘Big Data’ analytics platforms like WarCraftLogs similarly allow users to ‘read’ play as a set of exteriorised symbols—with significant outcomes for skill-building; allowing users to exteriorise their own play, examine the exteriorised play of others, and compare exteriorisations of their own play with those of others. Image 2: WarCraftLogs Gameplay Breakdown. Image credit: Ben Egliston.Image 2 shows a screenshot of the WarCraftLogs interface. Here we can see the exteriorisation of gameplay, and how the platform breaks down player inputs and in-game occurrences (written and numeric, like Ash’s game data). 
The screenshot shows a ‘raid boss’ (where players team up to defeat powerful computer-controlled enemies)—atomising the sequence of inputs a player has made over the course of the encounter. This is an accurate ledger of play—a readout that can speak to mechanical performance (specific ingame events occurred at a specific time), as well as caching and providing parses of somatic inputs and execution (e.g. ability to trace the rates at which players expend in-game resources can provide insights into rapidity of button presses). If information falls outside what is presented, players can work with an Application Programming Interface to develop customised readouts (this is encouraged through other game-data platforms, like OpenDota in Dota 2). Through this system, players can exteriorise their own input and output or view the play of others—both useful in building skill. The first point here—of exteriorising one’s own experience—resonates with Stiegler’s renovation of Husserl's ‘temporal object’—that is, an object that exists in and is formed through time—through temporal fluxes of what appears, what happens and what manifests itself in disappearing (Cinematic 14). Stiegler suggests that tertiary retentional apparatus (e.g. a gramophone) allow us to re-experience a temporal object (e.g. a melody) which would otherwise not be possible due to the finitude of human memory.To elaborate, Stiegler argues that primary memories recede into secondary memory (which is selective reactivation of perception), but through technologies of recording, (such as game-data) we can re-experience these things verbatim. So ultimately, games analytics platforms—as exteriorised technologies of recording—facilitate this after-the-fact interplay between primary and secondary memory where players can ‘audit’ their past performance, reflecting on well-played encounters or revising error. These platforms allow the detailed examination of responses to game mechanics, and provide readouts of the technical and embodied rhythms of play (which can be incorporated into future play via reading the data). Beyond self-reflection, these platforms allow the examination of other’s play. The aggregation and sorting of game-data makes expertise both visible and legible. To elaborate, players are ranked on their performance based on all submitted log-data, offering a view of how expertise ‘works’.Image 3: Top-Ranking Players in WarCraftLogs. Image credit: Ben Egliston.Image 3 shows the top-ranked players on an encounter (the top 10 of over 100,000 logs), which means that these players have performed most competently out of all gameplay parses (the metric being most damage dealt per-second in defeating a boss). Users of the platform can look in detail at the actions performed by top players in that encounter—reading and mobilising data in a similar manner to game-guides; markedly different, however, in terms of the scope (i.e. there are many available logs to draw from) and richness of the data (more detailed and current—with log rankings recalibrated regularly). Conceptually, we can also draw parallels with previous work (see: Ash, Technology)—where the habituation of expert game data can produce new videogame technicities; ways of ‘experiencing’ play as ‘higher-level’ organisation of space and time (Ash, Technology). So, if a player wanted to ‘learn from the experts’ they would restructure their own rhythms of play around high-level logs which provide an ordered readout of various sequences of inputs involved in playing well. 
Moreover, the platform allows players to compare their logs to those of others—so these various introspective and outward-facing uses can work together, conditioning anticipations with inscriptions of past-play and 'prosthetic' memories through others' log-data. In my experience as a WoW player, I often performed better (or built skill) by comparing and contrasting my own detailed readouts of play to the inputs and outputs of the best players in the world. To summarise, through technicity, I have briefly shown how exteriorising play shifts the conditions of skill-building from recalibrating mnesic and anticipatory processes through 'firsthand' play, to reworking these functions through engaging both games and extrinsic objects, like game guides and analytics platforms. Additionally, by reviewing and adopting various usages of technicity, I have pointed out how we might more holistically situate the gaming paratext in skill building. Conclusion There is little doubt—as exemplified through both scholarly and popular interest—that paratextual videogame material reframes modes of building game skill. Following recent work, and by providing a brief account of two paratextual practices (venturing the framework of technicity, via Stiegler and Ash—showing the complication of memory, perception, and anticipation in skill-building), I have contended that videogame-skill building—via paratextual material—can be rendered a process of operating outside of, but still caught up in, the complex assemblages of time, bodies, and technical architectures described by Sudnow at this article's outset. Additionally, by reviewing and adopting ideas associated with technics and post-phenomenology, this article has aimed to contribute to the development of more 'complete' accounts of the processes and practices comprising skill building regimens of contemporary videogame players. References Ash, James. "Technology, Technicity and Emerging Practices of Temporal Sensitivity in Videogames." Environment and Planning A 44.1 (2012): 187-201.———. "Technologies of Captivation: Videogames and the Attunement of Affect." Body and Society 19.1 (2013): 27-51.Consalvo, Mia. Cheating: Gaining Advantage in Videogames. Cambridge: Massachusetts Institute of Technology P, 2007. Crogan, Patrick, and Helen Kennedy. "Technologies between Games and Culture." Games and Culture 4.2 (2009): 107-14.Donaldson, Scott. "Mechanics and Metagame: Exploring Binary Expertise in League of Legends." Games and Culture (2015). 4 Jun. 2015 <http://journals.sagepub.com/doi/abs/10.1177/1555412015590063>.From Software. Dark Souls. Playstation 3 Game. 2011.Harper, Todd. The Culture of Digital Fighting Games: Performance and Practice. New York: Routledge, 2014.Jayemanne, Darshana, Bjorn Nansen, and Thomas H. Apperley. "Postdigital Interfaces and the Aesthetics of Recruitment." Transactions of the Digital Games Research Association 2.3 (2016): 145-72.Jenkins, Henry. Convergence Culture: Where Old and New Media Collide. New York: New York UP, 2006.Keogh, Brendan. "Across Worlds and Bodies." Journal of Games Criticism 1.1 (2014). Jan. 2014 <http://gamescriticism.org/articles/keogh-1-1/>.Munster, Anna. "Materiality." The Johns Hopkins Guide to Digital Media. Eds. Marie-Laure Ryan, Lori Emerson, and Benjamin J. Robertson. Baltimore: Johns Hopkins UP, 2014. 327-30. Nardi, Bonnie. My Life as Night Elf Priest: An Anthropological Account of World of Warcraft. Ann Arbor: Michigan UP, 2010. OpenDota. OpenDota. Web browser application. 2017.Paul, Christopher A.
“Optimizing Play: How Theory Craft Changes Gameplay and Design.” Game Studies: The International Journal of Computer Game Research 11.2 (2011). May 2011 <http://gamestudies.org/1102/articles/paul>.Salen, Katie, and Eric Zimmerman. Rules of Play: Game Design Fundamentals. Cambridge: Massachusetts Institute of Technology P, 2004.Stiegler, Bernard. Technics and Time, 1: The Fault of Epimetheus. Stanford: Stanford UP, 1998.———. For a New Critique of Political Economy. Cambridge: Polity, 2010.———. Technics and Time, 3: Cinematic Time and the Question of Malaise. Stanford: Stanford UP, 2011.Sudnow, David. Pilgrim in the Microworld. New York: Warner Books, 1983.Taylor, T.L. “The Assemblage of Play.” Games and Culture 4.4 (2009): 331-39.———. Raising the Stakes: E-Sports and the Professionalization of Computer Gaming. Cambridge: Massachusetts Institute of Technology P, 2012.WarCraftLogs. WarCraftLogs. Web browser application. 2016.Witkowski, Emma. “On the Digital Playing Field: How We ‘Do Sport’ with Networked Computer Games.” Games and Culture 7.5 (2012): 349-74.
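Editorial note: as a side note on the damage-per-second rankings described in this entry, the sketch below computes such a readout from a small, hypothetical event log. The log format and figures are illustrative assumptions and do not reflect the actual WarCraftLogs data model.

# Minimal sketch of a damage-per-second (DPS) readout computed from an
# exteriorised event log, in the spirit of the analytics platforms discussed
# in this entry. Event format and numbers are hypothetical.
from collections import defaultdict

# (timestamp in seconds since the start of the encounter, player, damage dealt)
events = [
    (0.5, "PlayerA", 1200), (1.1, "PlayerB", 900),
    (2.0, "PlayerA", 1500), (2.4, "PlayerC", 2000),
    (3.2, "PlayerB", 1100), (4.0, "PlayerA", 1300),
]
encounter_length = max(t for t, _, _ in events)  # crude duration estimate

damage_totals = defaultdict(int)
for _, player, damage in events:
    damage_totals[player] += damage

ranking = sorted(damage_totals.items(), key=lambda kv: kv[1], reverse=True)
for rank, (player, total) in enumerate(ranking, start=1):
    print(f"{rank}. {player}: {total} damage, {total / encounter_length:.0f} DPS")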
5

Broeckmann, Andreas. "Minor Media - Heterogenic Machines." M/C Journal 2, no. 6 (September 1, 1999). http://dx.doi.org/10.5204/mcj.1788.

Abstract:
1. A Minor Philosopher According to Guattari and Deleuze's definition, a 'minor literature' is the literature of a minority that makes use of a major language, a literature which deterritorialises that language and interconnects meanings of the most disparate levels, inseparably mixing and implicating poetic, psychological, social and political issues with each other. In analogy, the Japanese media theorist Toshiya Ueno has refered to Félix Guattari as a 'minor philosopher'. Himself a practicing psychoanalyst, Guattari was a foreigner to the Grand Nation of Philosophy, whose natives mostly treat him like an unworthy bastard. And yet he has established a garden of minor flowers, of mongrel weeds and rhizomes that are as polluting to philosophy as Kafka's writing has been to German literature (cf. Deleuze & Guattari, Kafka). The strategies of 'being minor' are, as exemplified by Guattari's writings (with and without Deleuze), deployed in multiple contexts: intensification, re-functionalisation, estrangement, transgression. The following offers a brief overview over the way in which Guattari conceptualises media, new technologies and art, as well as descriptions of several media art projects that may help to illustrate the potentials of such 'minor machines'. Without wanting to pin these projects down as 'Guattarian' artworks, I suggest that the specific practices of contemporary media artists can point us in the direction of the re-singularising, deterritorialising and subjectifying forces which Guattari indicated as being germane to media technologies. Many artists who work with media technologies do so through strategies of appropriation and from a position of 'being minor': whenever a marginality, a minority, becomes active, takes the word power (puissance de verbe), transforms itself into becoming, and not merely submitting to it, identical with its condition, but in active, processual becoming, it engenders a singular trajectory that is necessarily deterritorialising because, precisely, it's a minority that begins to subvert a majority, a consensus, a great aggregate. As long as a minority, a cloud, is on a border, a limit, an exteriority of a great whole, it's something that is, by definition, marginalised. But here, this point, this object, begins to proliferate ..., begins to amplify, to recompose something that is no longer a totality, but that makes a former totality shift, detotalises, deterritorialises an entity.' (Guattari, "Pragmatic/Machinic") In the context of media art, 'becoming minor' is a strategy of turning major technologies into minor machines. a. Krzysztof Wodiczko (PL/USA): Alien Staff Krzysztof Wodiczko's Alien Staff is a mobile communication system and prosthetic instrument which facilitates the communication of migrants in their new countries of residence, where they have insufficient command of the local language for communicating on a par with the native inhabitants. Alien Staff consists of a hand-held staff with a small video monitor and a loudspeaker at the top. The operator can adjust the height of the staff's head to be at a level with his or her own head. Via the video monitor, the operator can replay pre-recorded elements of an interview or a narration of him- or herself. The recorded material may contain biographical information when people have difficulties constructing coherent narratives in the foreign language, or it may include the description of feelings and impressions which the operator normally doesn't get a chance to talk about. 
The Staff is used in public places where passers-by are attracted to listen to the recording and engage in a conversation with the operator. Special transparent segments of the staff contain memorabilia, photographs or other objects which indicate a part of the personal history of the operator and which are intended to instigate a conversation. The Alien Staff offers individuals an opportunity to remember and retell their own story and to confront people in the country of immigration with this particular story. The Staff reaffirms the migrant's own subjectivity and re-singularises individuals who are often perceived as representative of a homogenous group. The instrument displaces expectations of the majority audience by articulating unformulated aspects of the migrant's subjectivity through a medium that appears as the attractive double of an apparently 'invisible' person. 2. Mass Media, New Technologies and 'Planetary Computerisation' Guattari's comments about media are mostly made in passing and display a clearly outlined opinion about the role of media in contemporary society: a staunch critique of mass media is coupled with an optimistic outlook to the potentials of a post-medial age in which new technologies can develop their singularising, heterogenic forces. The latter development is, as Guattari suggests, already discernible in the field of art and other cultural practices making use of electronic networks, and can lead to a state of 'planetary computerisation' in which multiple new subject-groups can emerge. Guattari consistently refers to the mass media with contempt, qualifying them as a stupefying machinery that is closely wedded to the forces of global capitalism, and that is co-responsible for much of the reactionary hyper-individualism, the desperation and the "state of emergency" that currently dominates "four-fifth of humanity" (Guattari, Chaosmosis 97; cf. Guattari, Drei Ökologien 16, 21). Guattari makes a passionate plea for a new social ecology and formulates, as one step towards this goal, the necessity, "to guide these capitalist societies of the age of mass media into a post-mass medial age; by this I mean that the mass media have to be reappropriated by a multiplicity of subject-groups who are able to administer them on a path of singularisation" (Guattari, "Regimes" 64). b. Seiko Mikami (J/USA): World, Membrane and the Dismembered Body An art project that deals with the cut between the human subject and the body, and with the deterritorialisation of the sense of self, is Seiko Mikami's World, Membrane and the Dismembered Body. It uses the visitor's heart and lung sounds which are amplified and transformed within the space of the installation.
These sounds create a gap between the internal and external sounds of the body. The project is presented in an anechoic room where sound does not reverberate. Upon entering this room, it is as though your ears are no longer living while paradoxically you also feel as though all of your nerves are concentrated in your ears. The sounds of the heart, lungs, and pulse beat are digitised by the computer system and act as parameters to form a continuously transforming 3-d polygonal mesh of body sounds moving through the room. Two situations are effected in real time: the slight sounds produced by the body itself resonate in the body's internal membranes, and the transfigured resonance of those sounds is amplified in the space. A time-lag separates both perceptual events. The visitor is overcome by the feeling that a part of his or her corporeality is under erasure. The body exists as abstract data; only the perceptual sense is aroused. The visitor is made conscious of the disappearance of the physical contours of his or her subjectivity and thereby experiences being turned into a fragmented body. The ears mediate the space that exists between the self and the body. Mikami's work fragments the body and its perceptual apparatus into data, employing them as interfaces and thus folding the body's horizon back onto itself. The project elucidates the difference between an actual and a virtual body, the actual body being deterritorialised and projected outwards towards a number of potential, virtual bodies that can, in the installation, be experienced as maybe even more 'real' than the actual body. 3. Artistic Practice Guattari's conception of post-media implies criss-crossing intersections of aesthetic, ethical, political and technological planes, among which the aesthetic, and with it artistic creativity, are ascribed a position of special prominence. This special role of art is a trope that recurs quite frequently in Guattari's writings, even though he is rarely specific about the artistic practices he has in mind. In A Thousand Plateaus, Deleuze and Guattari give some detailed attention to the works of artists like Debussy, Boulez, Beckett, Artaud, Kafka, Kleist, Proust, and Klee, and Chaosmosis includes longer passages and concrete examples for the relevance of the aesthetic paradigm. These examples come almost exclusively from the fields of performing arts, music and literature, while visual arts are all but absent. One reason for this could be that the performing arts are time-based and processual and thus lend themselves much better to theorisation of flows, transformations and differentiations. The visual arts can be related to the abstract machine of faciality (visageité) which produces unified, molar, identical entities out of a multiplicity of different singularities, assigning them to a specific category and associating them with particular social fields (cf. Deleuze & Guattari, Tausend Plateaus 167-91). This semiotic territorialisation is much more likely to happen in the case of static images, whether two- or three-dimensional, than in time-based art forms. An interesting question, then, would be whether media art projects, many of which are time-based, processual and open-ended, can be considered as potential post-medial art practices. Moreover, given the status of computer software as the central motor of the digital age, and the crucial role it plays in aesthetic productions like those discussed here, software may have to be viewed as the epitome of post-medial machines.
Guattari seems to have been largely unaware of the beginnings of digital media art as it developed in the 1980s. In generalistic terms he suggests that the artist is particularly well-equipped to conceptualise the necessary steps for this work because, unlike engineers, he or she is not tied to a particular programme or plan for a product, and can change the course of a project at any point if an unexpected event or accident intrudes (cf. Guattari, Drei Ökologien 50). The significance of art for Guattari's thinking comes primarily from its close relation with processes of subjectivation. "Just as scientific machines constantly modify our cosmic frontiers, so do the machines of desire and aesthetic creation. As such, they hold an eminent place within assemblages of subjectivation, themselves called to relieve our old social machines which are incapable of keeping up with the efflorescence of machinic revolutions that shatter our epoch" (Guattari, Chaosmosis 54). The aesthetic paradigm facilitates the development of new, virtual forms of subjectivity, and of liberation, which will be adequate to these machinic revolutions. c. Knowbotic Research + cF: IO_Dencies The Alien Staff project was mentioned as an example of the re-singularisation and the virtualisation of identity, and World, Membrane and the Dismembered Body as an instance of the deterritorialisation and virtualisation of the human body through an artistic interface. The recent project by Knowbotic Research, IO_Dencies -- Questioning Urbanity, deals with the possibilities of agency, collaboration and construction in translocal and networked environments. It points in the direction of what Guattari has called the formation of 'group subjects' through connective interfaces. The project looks at urban settings in different megacities like Tokyo, São Paulo or the Ruhr Area, analyses the forces present in particular local urban situations, and offers experimental interfaces for dealing with these local force fields. IO_Dencies São Paulo enables the articulation of subjective experiences of the city through a collaborative process. Over a period of several months, a group of young architects and urbanists from São Paulo, the 'editors', provided the content and dynamic input for a database. The editors collected material (texts, images, sounds) based on their current situation and on their personal urban experience. A specially designed editor tool allowed the editors to build individual conceptual 'maps' in which to construct the relations between the different materials in the data-pool according to the subjective perception of the city. On the computational level, connectivities are created between the different maps of the editors, a process that is driven by algorithmic self-organisation whose rules are determined by the choices that the editors make. In the process, the collaborative editorial work in the database generates zones of intensities and zones of tension which are visualised as force fields and turbulences and which can be experienced through interfaces on the Internet and at physical exhibition sites. Participants on the Net and in the exhibition can modify and influence these electronic urban movements, force fields and intensities on an abstract, visual level, as well as on a content-based, textual level. This engagement with the project and its material is fed back into the database and influences the relational forces within the project's digital environment.
Characteristic of the forms of agency as they evolve in networked environments is that they are neither individualistic nor collective, but rather connective. Whereas the collective is determined by an intentional and empathetic relation between agents within an assemblage, the connective rests on any kind of machinic relation and is therefore more versatile, more open, and based on the heterogeneity of its components or members. In the IO_Dencies interfaces, the different networked participants become visible for each other, creating a trans-local zone of connective agency. The inter-connectedness of their activities can be experienced visually, acoustically, and through the constant reconfiguration of the data sets, an experience which can become the basis of the formation of a specific, heterogeneous group subject. 4. Guattari's Concept of the Machinic An important notion underlying these analyses is that of the machine which, for Guattari, relates not so much to particular technological or mechanical objects, to the technical infrastructure or the physical flows of the urban environment. 'Machines' can be social bodies, industrial complexes, psychological or cultural formations, they are assemblages of heterogeneous parts, aggregations which transform forces, articulate and propel their elements, and force them into a continuous state of transformation and becoming. d. Xchange Network My final example is possibly the most evocative in relation to Guattari's notions of the polyvocity and heterogenesis that new media technologies can trigger. It also links up closely with Guattari's own engagement with the minor community radio movement. In late 1997, the E-Lab in Riga initiated the Xchange network for audio experiments on the Internet. The participating groups in London, Ljubljana, Sydney, Berlin, and many other minor and major places, use the Net for distributing their original sound programmes. The Xchange network is "streaming via encoders to remote servers, picking up the stream and re-broadcasting it purely or re-mixed, looping the streams" (Rasa Smite). Xchange is a distributed group, a connective, that builds creative cooperation in live-audio streaming on the communication channels that connect them. They explore the Net as a sound-scape with particular qualities regarding data transmission, delay, feedback, and open, distributed collaborations. Moreover, they connect the network with a variety of other fields. Instead of defining an 'authentic' place of their artistic work, they play in the transversal post-medial zone of media labs in different countries, mailing lists, net-casting and FM broadcasting, clubs, magazines, stickers, etc., in which 'real' spaces and media continuously overlap and fuse (cf. Slater). 5.
Heterogenic Practices If we want to understand the technological and the political implications of the machinic environment of the digital networks, and if we want to see the emergence of the group subjects of the post-media age Guattari talks about, we have to look at connectives like Xchange and the editor-participant assemblages of IO_Dencies. The far-reaching machinic transformations which they articulate hold the potential of what Guattari refers to as the 'molecular revolution'. To realise this revolution, it is vital to "forge new analytical instruments, new concepts, because it is ... the transversality, the crossing of abstract machines that constitute a subjectivity and that are incarnated, that live in very different regions and domains and ... that can be contradictory and antagonistic". For Guattari, this is not a mere theoretical question, but one of experimentation, "of new forms of interactions, of movement construction that respects the diversity, the sensitivities, the particularities of interventions, and that is nonetheless capable of constituting antagonistic machines of struggle to intervene in power relations" (Guattari, "Pragmatic/Machinic" 4-5). The implication here is that some of the minor media practices pursued by artists using digital technologies point us in the direction of the positive potentials of post media. The line of flight of such experimentation is the construction of new and strong forms of subjectivity, "an individual and/or collective reconstitution of the self" (Guattari, Drei Ökologien 21), which can strengthen the process of what Guattari calls "heterogenesis, that is a continuous process of resingularisation. The individuals must, at the same time, become solidary and ever more different" (Guattari, Drei Ökologien 76). References Deleuze, Gilles, and Félix Guattari. Kafka: Pour une littérature mineure. Paris: Ed. de Minuit, 1975. ---. Tausend Plateaus. (1980) Berlin: Merve, 1992. Guattari, Félix. Cartographies Schizoanalytiques. Paris: Ed. Galilée, 1989. ---. Chaosmosis: An Ethico-Aesthetic Paradigm. (1992) Sydney: Power Publications, 1995. ---. Die drei Ökologien. (1989) Wien: Passagen Verlag, 1994. ---. "Pragmatic/Machinic." Discussion with Guattari, conducted and transcribed by Charles J. Stivale. (1985) Pre/Text 14.3-4 (1995). ---. "Regimes, Pathways, Subjects." Die drei Ökologien. (1989) Wien: Passagen Verlag, 1994. 95-108. ---. "Über Maschinen." (1990) Schmidgen, 115-32. Knowbotic Research. IO_Dencies. 1997-8. 11 Sep. 1999 <http://io.khm.de/>. De Landa, Manuel. "The Machinic Phylum." Technomorphica. Eds. V2_Organisation. Rotterdam: V2_Organisation, 1997. Mikami, Seiko. World, Membrane and the Dismembered Body. 1997. 11 Sep. 1999 <http://www.ntticc.or.jp/permanent/mikami/mikami_e.php>. Schmidgen, Henning, ed. Ästhetik und Maschinismus: Texte zu und von Félix Guattari. Berlin: Merve, 1995. ---. Das Unbewußte der Maschinen: Konzeptionen des Psychischen bei Guattari, Deleuze und Lacan. München: Fink, 1997. Slater, Howard. "Post-Media Operators." Nettime, 10 June 1998. 11 Sep. 1999 <http://www.factory.org>. Wodiczko, Krzysztof. 11 Sep. 1999 <http://cavs.mit.edu/people/kw.htm>. Xchange. 11 Sep. 1999 <http://xchange.re-lab.net>. (Note: An extended, Dutch version of this text was published in: Oosterling/Thissen, eds. Chaos ex Machina: Het ecosofisch Werk van Félix Guattari op de Kaart Gezet. Rotterdam: CFK, 1998.)
"Minor Media -- Heterogenic Machines: Notes on Félix Guattari's Conceptions of Art and New Media." M/C: A Journal of Media and Culture 2.6 (1999). [your date of access] <http://www.uq.edu.au/mc/9909/minor.php>. Chicago style: Andreas Broeckmann, "Minor Media -- Heterogenic Machines: Notes on Félix Guattari's Conceptions of Art and New Media," M/C: A Journal of Media and Culture 2, no. 6 (1999), <http://www.uq.edu.au/mc/9909/minor.php> ([your date of access]). APA style: Andreas Broeckmann. (1999) Minor Media -- Heterogenic Machines: Notes on Félix Guattari's Conceptions of Art and New Media. M/C: A Journal of Media and Culture 2(6). <http://www.uq.edu.au/mc/9909/minor.php> ([your date of access]).
APA, Harvard, Vancouver, ISO, and other styles
6

Peeters, Stijn, and Tom Willaert. "Telegram and Digital Methods." M/C Journal 25, no. 1 (March 17, 2022). http://dx.doi.org/10.5204/mcj.2878.

Full text
Abstract:
Introduction The study of online conspiracy theory communities presents unique methodological challenges. Online conspiracy theorists often adhere to an individualistic knowledge culture of “doing one’s own research” (Fenster 158). This results in a decentralised landscape of theories, narratives and communities that challenges conventional top-down approaches to analysis. Moreover, conspiracy theories tend to be discussed on the fringes of the online ecosystem, in chat groups, small subcultural Web forums and away from mainstream social media platforms such as Facebook and Twitter (Frenkel; see also De Zeeuw et al.). In this context, the messaging app Telegram has developed into a particularly prominent space (Rogers, “Deplatforming”; Urman and Katz). On the one hand, this platform is not quite part of the same mainstream as Facebook or Twitter, owing in part to its emphasis on security, “social privacy”, and lack of central moderation (Rogers, “Deplatforming”). But it is also not quite an “alternative social medium” (Gehl), as it does not position itself in opposition to mainstream platforms per se, nor does its business model centered around investor funding and advertisements present a break from the “dominant political economy” (ibid.). This ambiguous position might account for Telegram’s wide adoption, as well as its status as a relatively safe haven for communities deplatformed elsewhere – including a lively ecosystem of conspiracy theory communities (La Morgia et al.). Because Telegram communities are distributed over a wide range of channels and chat groups, they cannot always be investigated using existing analytical approaches for social media research. Confronting this challenge, we propose and discuss a method for studying Telegram communities that repurposes the “methods of the medium” (Rogers, Digital Methods). Specifically, our method appropriates Telegram’s feature of forwarding messages from one group to another to discover interlinked distributed communities, collect data from these communities for close reading, and map their information sharing practices. In this article, we will first present this approach and illustrate the types of analyses the collected data might afford in relation to a brief case study on Dutch-speaking conspiracy theories. In this short illustration, we map the convergence of right-wing and conspiratorial communities, both structurally and discursively. As Vieten discusses, “digital pandemic populism during lockdown might have pushed further the mobilisation of the far right, also on the streets”. In the Dutch context there has been a demonstrated connection between the two. Because of this connection, we were drawn to the questions of what these entanglements might look like in a relatively unmoderated Telegram environment. We then proceed to discuss some strengths and limitations, identify avenues for future research, and conclude with some ethical, methodological and epistemological reflections. Overview of Method Our method first combines expert knowledge, and the affordances of the Telegram app’s ‘search’ function to retrieve a set of channels mentioning specific politicians or political parties, as well as other marked terms that might point towards far-right or conspiratorial content. 
This includes wakker (awakened), variations of batavier and geus (nationalist demonyms), names of known far right politicians (such as The Netherlands’ Thierry Baudet and Flanders’ Dries Van Langenhove) and conspiracy theory activists, and volk (a term meaning roughly “our people”). As this approach precludes discovery of related groups that do not match the queries exactly, this initial curated list is then supplemented with channels advertised elsewhere, such as those featured on the Websites of far-right politicians and organisations, as well as channels covered in mainstream news media. This yields an initial expert list of channels, in our sample case of Dutch-speaking right-wing and conspiracist actors comprising 50 items. One might stop here, and collect data for this manually curated list of groups, as in Nikkhah et al.’s study of Telegram use among the Iranian diaspora in the United States, or Davey and Weinberg’s analysis of far-right groups used in the US military. But this would exclude any groups not known by the researchers; and groups are not always easy to naively discover on Telegram. Because of this, in a subsequent step, we expanded the initial set of relevant Telegram channels by crawling posts in these channels that were forwarded from other channels, constituting links between these channels. We used a custom crawler based on the open source library Selenium, which allows one to control a browser programmatically. The browser was then made to scroll through the Web-based view of the selected channels (e.g. https://t.me/s/durov). In principle, all messages ever posted in a channel are available in this view. We then follow those links, and store the names of the linked channels. Overall, this method thus presumes that if a channel forwards a message from another channel there will be some overlap in terms of topic of discussion between both, making the newly discovered channel similarly relevant to the analysis. This results in a network-like representation of connections between channels. In the context of our case study, this process expands our initial seed list to a list of over 215 relevant public channels, after discarding groups that are not germane to the case study, i.e. those that were not related to far-right or conspiracy theory-related communities. To verify this, channels were inductively coded by a team of four researchers after capture. We then repeat the data collection for this new list of channels, retrieving forwarded messages from over 370,000 total messages spanning the period 2017-2021. This dataset then serves as a starting point for structural analysis of the wider context of the community, aspects of which will be illustrated in the next section. Illustration Emphasising the value of a “quali-quanti” (Venturini and Latour) approach we offer a tentative analysis of the decentralised Dutch-speaking conspiracist narratives and communities on Telegram, and in a broader sense observe what such a distributed community may look like on the platform. This then suggests the various affordances which a dataset collected with this method can offer. Fig. 1: Network visualisation of collected channels (depth: 1) including channels forwarded from (4354 nodes). Nodes are sized and coloured by degree (amount of connections to other channels). A first observation that can be made concerns the topology of the network of channels we found (see fig. 1). 
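To make the crawling step described above more concrete, the following is a minimal Python sketch of how the "forwarded from" links of a public channel's Web view might be collected with Selenium. It is an approximation rather than the authors' actual crawler: the CSS selector, the scrolling behaviour of the t.me/s/ view and the number of scroll rounds are assumptions that would need to be checked against the live page. Running something like this for every channel in the seed list yields the raw links from which the network discussed below can be built.

```python
# Minimal sketch of collecting "forwarded from" channels from a public Telegram
# channel's Web view with Selenium. Not the authors' crawler: the CSS class and
# the scrolling behaviour assumed below may need adjusting against the live page.
import time

from selenium import webdriver
from selenium.webdriver.common.by import By


def collect_forwarded_channels(channel, scroll_rounds=40, pause=1.5):
    """Scroll through https://t.me/s/<channel> and return the set of channel
    names that messages in this channel were forwarded from."""
    driver = webdriver.Firefox()  # any WebDriver works
    try:
        driver.get(f"https://t.me/s/{channel}")
        for _ in range(scroll_rounds):
            # assumption: older messages load as the page is scrolled towards the top
            driver.execute_script("window.scrollTo(0, 0);")
            time.sleep(pause)
        sources = set()
        # assumed selector for the "Forwarded from ..." link above a message
        links = driver.find_elements(
            By.CSS_SELECTOR, "a.tgme_widget_message_forwarded_from_name"
        )
        for link in links:
            href = link.get_attribute("href") or ""
            if "t.me/" in href:
                # e.g. https://t.me/some_channel/1234 -> some_channel
                sources.add(href.split("t.me/")[-1].split("/")[0].lower())
        return sources
    finally:
        driver.quit()


if __name__ == "__main__":
    # 'durov' is the example channel mentioned above
    print(collect_forwarded_channels("durov"))
```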
A network analysis is a suitable distant reading approach for this kind of data, because it “maps and measures formal and informal relationships … viz., who knows whom, and who shares what information and knowledge with whom” (Serrat 40). It is a type of analysis that can reveal the relevance and positioning of actors and narratives within the data. In our network visualisation, we use the ForceAtlas2 algorithm (Jacomy et al.) to position the nodes. This algorithm makes more connected nodes “gravitate” towards each other; the more central a node is, the more connected it is to the rest of the network, roughly speaking (Figure 1). Highlighting the channels representing political parties shows, for example, that while the Dutch far right party FvD (FVDNL) is quite central (connected), this is not the case for the Flemish far right politician Dries Van Langenhove (kiesdries). This suggests that compared to a similar Flemish politician, the Dutch FvD is a more prominent part of the general conspiracist discussion – which is then perhaps more overtly politicised in the Dutch context. We can additionally discern channels that we might label “content aggregators”, which forward large numbers of messages from other channels but post comparatively little original content, occupying a relatively central place in the wider network. These content aggregators play an important structural role in the network, as other channels might forward messages from these collections on a “pick and choose” basis. More abstractly, they also serve to confirm thematic similarity between the channels messages are forwarded from, with the owners of the aggregator channels playing the role of a curator that collects interesting content about a certain topic of interest. Furthermore, our data reveal that the network is highly dynamic. As forwarded messages are timestamped, we can plot the graph at different moments in time. When comparing changes over a year, we can observe a significant growth in the number of channels that connect to the network particularly between 2020 and 2021 (see fig. 2). This growth, and the associated diversity of the network, can be attributed to Telegram’s role as a haven for actors that were deplatformed (or present themselves as targets of deplatforming and censorship) from other social media; a “platform of last resort”. Previous research has for instance indicated that a number of alt-right fringe actors moved to Telegram after being deplatformed from Facebook or Twitter (Rogers, “Deplatforming”). It can be hypothesised that events such as Donald Trump’s removal from Twitter around the time of the January 2021 Capitol riots might similarly have inspired other actors to move to Telegram in response to the platform policies of mainstream social media. Fig. 2: The channel network based on messages sent before June 2020 (334 nodes). Same layout and parameters as in Figure 1 (not to scale). Nodes also appearing in Figure 1 are highlighted. The structure of the network (in fig. 1) can also be used to discern ‘sub-communities’, which forward messages mutually but have relatively few links to the broader network. These can then be analysed qualitatively. This may then reveal that Flemish groups that oppose COVID-19 policies are less connected with the far right, whereas such groups that can be identified as Dutch seem to merge more easily with far-right channels. 
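As an indication of what this structural analysis might look like in code, the sketch below assumes a simple list of timestamped forwarding links rather than the authors' dataset. It builds a directed network with networkx, uses degree as a rough measure of how connected a channel is, and filters edges by date to compare snapshots over time; the ForceAtlas2 layout of the published figures was produced in Gephi, with networkx's spring layout merely standing in here.

```python
# Sketch of turning timestamped forwarding links into a directed network.
# The edge list is illustrative; in practice it would come from the crawler.
from datetime import datetime

import networkx as nx

# (channel, forwarded_from, timestamp) triples
edges = [
    ("channel_a", "aggregator_x", datetime(2020, 3, 1)),
    ("channel_b", "aggregator_x", datetime(2021, 1, 15)),
    ("channel_b", "channel_a", datetime(2021, 2, 2)),
]


def build_network(edge_list, until=None):
    """Directed forwarding network, optionally limited to messages up to a date."""
    graph = nx.DiGraph()
    for channel, source, sent_at in edge_list:
        if until is None or sent_at <= until:
            graph.add_edge(source, channel, timestamp=sent_at)
    return graph


full_graph = build_network(edges)
early_graph = build_network(edges, until=datetime(2020, 6, 1))  # cf. fig. 2

# degree (in + out) as a rough proxy for how connected a channel is
for channel, degree in sorted(full_graph.degree(), key=lambda item: -item[1]):
    print(channel, degree)

# a quick layout for inspection; the published figures use Gephi's ForceAtlas2
positions = nx.spring_layout(full_graph, seed=42)
```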
As discussed, this is also suggested by the position of FVDNL, the Dutch far-right political party Forum voor democratie (FvD) which is central in the network. On this level, this suggests that structurally, Dutch far-right parties are more explicitly affiliating themselves with conspiracy-related channels than Flemish parties. Actual textual analyses of the channels’ posts and images, however, could offer a more nuanced picture, whereby structurally unconnected channels might still share common harmful narratives, spanning anti-progressive discourse, anti-mainstream sentiments, anti-government discourse, and evocations of prominent conspiracy theories such as QAnon and The Great Reset. These structural analyses then present a number of possibilities for further content analysis, where one might for example “zoom in” on Dutch far-right groups in particular, and qualitatively study images posted therein to identify salient narratives and positions. Discussion Methodological Gains Variations of the proposed approach have been used in other work (e.g. Hashemi and Chahooki; La Morgia et al.). Most prominently, the Pushshift Telegram Dataset (Baumgartner et al.) comprises a large dataset of channel metadata, author metadata and messages. This dataset was collected by discovering new channels from an initial seed list of approximately 300 channels using forwarded messages, and then collecting messages from these channels. While there is great archival value in the resulting datasets, our approach differs from these earlier approaches in a number of instructive ways. Like other work we appropriate an affordance of Telegram – forwarded messages – for our own research purposes, but we purposely limit the extent of “following” these forwarded messages. Though one could keep following links indefinitely, not every link is a link that is structural to the distributed community of users that is of interest here. Though more extensive crawling might reveal ever more channels and associated data, these are also increasingly unlikely to be related to the initial topic of interest, and are in any case further removed from the users of the initial seed groups. For this reason, we use a relatively shallow crawl depth and only retain links up to two “hops” away from the initial seed channels. This trades a higher number of crawled channels for a higher likelihood that the captured channels are indeed relevant to the case study. The suitable crawl depth would differ from case to case. In our case, it was established empirically through pilot crawls, which were stopped once collected groups appeared to no longer be strongly connected to the initial seed groups by topic. Datasets of this type are often also difficult to reproduce or qualify. For example for the datasets compiled by La Morgia et al. as well as for Baumgartner et al., the original seed list is not provided. Because of this, it is impossible to see where the network of found groups originates and how it might be biased one way or another. We suggest that where possible, this seed list is documented and shared. In our case, this would be particularly important as the seeds represent an intentional and explicit bias; that is, Dutch-speaking conspiracy-themed and far-right groups. If the starting point of the crawl is documented, one could potentially re-collect the data later from the same starting points and compare results to those found initially, allowing for longitudinal analyses of the topology of these communities. 
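The depth-limited snowballing and the documentation of the seed list described above could take roughly the following form. This is a sketch rather than the authors' code: the seed channel names, the file name and the fetch function are placeholders, and the limit of two hops mirrors the crawl depth used in this case study.

```python
# Sketch of a depth-limited snowball crawl from a documented seed list.
# `fetch` is any function returning the channels a given channel forwards from,
# e.g. the Selenium-based collect_forwarded_channels() sketched earlier.
import json


def snowball(seeds, fetch, max_depth=2):
    """Breadth-first expansion of the seed list, following forwarded messages
    up to max_depth hops away; returns {channel: hop distance from the seeds}."""
    discovered = {channel: 0 for channel in seeds}
    frontier = list(seeds)
    for depth in range(1, max_depth + 1):
        next_frontier = []
        for channel in frontier:
            for linked in fetch(channel):
                if linked not in discovered:
                    discovered[linked] = depth
                    next_frontier.append(linked)
        frontier = next_frontier
    return discovered


# documenting the seed list alongside the dataset makes a later re-crawl comparable
seeds = ["example_seed_channel_1", "example_seed_channel_2"]  # placeholder names
with open("seed_list.json", "w") as handle:
    json.dump(seeds, handle, indent=2)

# channels = snowball(seeds, fetch=collect_forwarded_channels, max_depth=2)
```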
Ethics The method described here does not deal with personally identifiable information (PII) per se; one can map the channel network on a structural level without collecting user data or analysing specific messages, when purely tracing the origin of forwarded messages. It should be noted, however, that in the process of collecting these structural data, one can potentially go further. For example, it is possible to scrape the full content of messages. When also including chat groups, user details including (user-provided) full names are also available. Their inclusion in (public) datasets should be subject to closer scrutiny than that of public channels, as the former may represent conversations had under the assumption that this conversation was more or less off the record; while the latter are explicitly intended for information broadcast. Even if many of these chat groups are technically public, we should consider that "even if users are aware of being observed by others, they do not consider the possibility that their actions and interactions may be documented and analysed in detail at a later occasion" (Sveningsson Elm 77). In many cases, a (structural) analysis of only channels strikes a good balance between collecting representative data and respecting the privacy of those who produce the collected data. Avenues for Future Research The method may be expanded in a number of ways. One could, as discussed, increase the amount of crawl iterations, which would expand the network at the potential cost of case-specificity. A larger seed list could also increase the quantity of the data, though the effect of this can often be limited, as a relatively limited amount of channels forward messages from other channels. Links between channels could be collected not only from forwarded messages, as we do here, but also via other repurposed Telegram features such as channel invitation links or simple hyperlinks to other channels found in message content. The latter would require more fine-grained parsing of the message texts through natural language processing, for example, as a hyperlink can suggest a wider range of connections than an intentionally forwarded message. Additionally, and as previously mentioned, one could include not only public channels but also public chat groups, which are often linked to these channels and offer a space for people to discuss the content posted in them. While this can be an attractive way of acquiring extra data, we forego this in our example. As discussed, there are ethical trade-offs to consider when deciding to work with data from groups; but, it can be argued that Telegram channels represent an explicit “broadcast” style of communication (Shehabat et al.). Because the channel owner(s) decide what is worthy of sharing, one can reasonably assume that if one “follows the medium” here, all content retrieved from a channel will be somewhat relevant to the channel's purported theme. Conversely, discourse in chat groups might be expected to meander into a variety of directions and can easily include many (forwarded) messages only tangentially related to the case study of interest. Conclusion In this article we have sought to present one methodological approach to studying communities on Telegram. 
Rather than presenting a thorough case study or a definitive analysis of the Telegram-based community we discuss, our goal was to demonstrate the method's benefits but also its potential shortcomings, avenues of further development, and what types of analysis data collected with it might afford. A cursory analysis of the fringe community we studied here shows how with such data one can map a given community or set of communities on a structural level, which may then be used to demarcate areas of interest for further content analysis. The observations presented in this article are far from a complete picture of the data collected, but can serve as suggestions for analytical avenues one might venture down in a more substantive analysis. Beyond these observations, our repurposing of “the methods of the medium” (Rogers, Digital Methods) through forwarded messages allows us to contribute an empirically informed reflection on the possibilities and limitations of studying conspiracist information sharing practices on Telegram. Our method for instance highlights tensions between public and private knowledge, whereby we only consider information from public channels, and for technical and ethical reasons omit Telegram’s closed-off, private chat groups from our analysis. Our method of sourcing channels through forwarded messages does not preclude the existence of isolated channels or clusters of channels that, for a lack of forwarded messages from channels that were already identified, elude the scope of such snowballing efforts. Along the same lines, one could imagine that a deeper, more far-reaching crawl would reveal some strange bedfellows for the initial seed that were not part of our a priori understanding and hypotheses concerning the communities of interest. All of these considerations represent choices that may be taken differently depending on the case study at hand. Statement on Data and Ethics The analysis on offer in this article is limited to names of public channels on Telegram and we purposely refrain from citing channel names or analysing specific messages so they cannot be traced back to single persons. Our analysis does not comprise live subjects or PII, and thus did not require ERB clearance from our respective institutions. The anonymised dataset described above is available upon request via Zenodo at https://doi.org/10.5281/zenodo.6344795. Acknowledgements We would like to thank Nathalie van Raemdonck (Vrije Universiteit Brussel) and Jasmin Seijbel (Erasmus Universiteit Rotterdam) for their contributions to the empirical work underlying this article. References Baumgartner, Jason, et al. “The Pushshift Telegram Dataset.” Proceedings of the International AAAI Conference on Web and Social Media 14 (2020): 840–847. Blondel, Vincent D., et al. “Fast Unfolding of Communities in Large Networks.” Journal of Statistical Mechanics: Theory and Experiment 2008.10 (2008): P10008. Davey, Jacob, and Dana Weinberg. Influence: Discussions of the US Military in Extreme Right-Wing Telegram Channels. ISD Global, 2021. De Zeeuw, Daniel et al. “Tracing Normiefication: A Cross-Platform Analysis of the QAnon Conspiracy Theory.” First Monday 25.1 (2020). Fenster, Mark. Conspiracy Theories. Minneapolis: U of Minnesota P, 2008. Frenkel, Sheera. “Facebook Amps Up Its Crackdown on QAnon.” The New York Times 6 Oct. 2020, sec. Technology. <https://www.nytimes.com/2020/10/06/technology/facebook-qanon-crackdown.html>. Gehl, Robert W. “The Case for Alternative Social Media.” Social Media + Society 1.2 (2015). 
Hashemi, Ali, and Mohammad Ali Zare Chahooki. “Telegram Group Quality Measurement by User Behavior Analysis.” Social Network Analysis and Mining 9.1 (2019): 33. Jacomy, Mathieu, et al. “ForceAtlas2, a Continuous Graph Layout Algorithm for Handy Network Visualization Designed for the Gephi Software.” PLOS ONE 9.6 (2014): e98679. La Morgia, Massimo et al. “Uncovering the Dark Side of Telegram: Fakes, Clones, Scams, and Conspiracy Movements.” arXiv:2111.13530 [cs] (2021). <http://arxiv.org/abs/2111.13530>. Nikkhah, Sarah, et al. “Coming to America: Iranians’ Use of Telegram for Immigration Information Seeking.” Aslib Journal of Information Management 72.4 (2020): 561–585. Rogers, Richard. “Deplatforming: Following Extreme Internet Celebrities to Telegram and Alternative Social Media.” European Journal of Communication 35.3 (2020): 213–229. ———. Digital Methods. Cambridge: MIT P, 2013. Serrat, Olivier. “Social Network Analysis.” In Knowledge Solutions: Tools, Methods, and Approaches to Drive Organizational Performance. Ed. Olivier Serrat. Singapore: Springer, 2017. 39–43. <https://doi.org/10.1007/978-981-10-0983-9_9>. Shehabat, Ahmad, Teodor Mitew, and Yahia Alzoubi. “Encrypted Jihad: Investigating the Role of Telegram App in Lone Wolf Attacks in the West.” Journal of Strategic Security 10.3 (2017): 27–53. Sveningsson Elm, Malin. “How Do Various Notions of Privacy Influence Decisions in Qualitative Internet Research?” In Internet Inquiry: Conversations about Method. Eds. Annette Markham and Nancy Baym. SAGE, 2009. 69–97. Urman, Aleksandra, and Stefan Katz. “What They Do in the Shadows: Examining the Far-Right Networks on Telegram.” Information, Communication & Society (2020). Venturini, Tommaso, and Bruno Latour. “The Social Fabric: Digital Footprints and Quali-quantitative Methods.” In Proceedings of Futur en Seine 2009: The Digital Future of the City. Festival for Digital Life and Creativity, 2010. 87-101. Vieten, Ulrike M. “The ‘New Normal’ and ‘Pandemic Populism’: The COVID-19 Crisis and Anti-Hygienic Mobilisation of the Far-Right.” Social Sciences 9.9 [165] (2020).
APA, Harvard, Vancouver, ISO, and other styles
7

Broderick, Mick, Stuart Marshall Bender, and Tony McHugh. "Virtual Trauma: Prospects for Automediality." M/C Journal 21, no. 2 (April 25, 2018). http://dx.doi.org/10.5204/mcj.1390.

Full text
Abstract:
Unlike some current discourse on automediality, this essay eschews most of the analysis concerning the adoption or modification of avatars to deliberately enhance, extend or distort the self. Rather than the automedial enabling of alternative, virtual selves modified by playful, confronting or disarming avatars we concentrate instead on emerging efforts to present the self in hyper-realist, interactive modes. In doing so we ask, what is the relationship between traumatic forms of automediation and the affective impact on and response of the audience? We argue that, while on the one hand there are promising avenues for valuable individual and social engagements with traumatic forms of automediation, there is an overwhelming predominance of suffering as a theme in such virtual depictions, commingled with uncritically asserted promises of empathy, which are problematic as the technology assumes greater mainstream uptake. As Smith and Watson note, embodiment is always a “translation” where the body is “dematerialized” in virtual representation (“Virtually” 78). Past scholarship has analysed the capacity of immersive realms, such as Second Life or online games, to highlight how users can modify their avatars in often spectacular, non-human forms. Critics of this mode of automediality note that users can adopt virtually any persona they like (racial, religious, gendered and sexual, human, animal or hybrid, and of any age), behaving as “identity tourists” while occupying virtual space or inhabiting online communities (Nakamura). Furthermore, recent work by Jaron Lanier, a key figure from the 1980s period of early Virtual Reality (VR) technology, has also explored so-called “homuncular flexibility” which describes the capacity for humans to seemingly adapt automatically to the control mechanisms of an avatar with multiple legs, other non-human appendages, or for two users to work in tandem to control a single avatar (Won et al.). But this article is concerned less with these single or multi-player online environments and the associated concerns over modifying interactive identities. We are principally interested in other automedial modes where the “auto” of autobiography is automated via Artificial Intelligences (AIs) to convincingly mimic human discourse as narrated life-histories. We draw from case studies promoted by the 2017 season of ABC television’s flagship science program, Catalyst, which opened with semi-regular host and biological engineer Dr Jordan Nguyen, proclaiming in earnest, almost religious fervour: “I want to do something that has long been a dream. I want to create a copy of a human. An avatar. And it will have a life of its own in virtual reality.” As the camera followed Nguyen’s rapid pacing across real space he extolled: “Virtual reality, virtual human, they push the limits of the imagination and help us explore the impossible […] I want to create a virtual copy of a person. A digital addition to the family, using technology we have now.” The troubling implications of such rhetoric were stark and the next third of the program did little to allay such techno-scientific misgivings. Directed and produced by David Symonds, with Nguyen credited as co-developer and presenter, the episode “Meet the Avatars” immediately introduced scenarios where “volunteers” entered a pop-up inner city virtual lab, to experience VR for the first time.
The volunteers were shown on screen subjected to a range of experimental VR environments designed to elicit fear and/or adverse and disorienting responses such as vertigo, while the presenter and researchers from Sydney University constantly smirked and laughed at their participants’ discomfort. We can only wonder what the ethics process was for both the ABC and university researchers involved in these broadcast experiments. There is little doubt that the participant/s experienced discomfort, if not distress, and that was televised to a national audience. Presenter Nguyen was also shown misleading volunteers on their way to the VR lab, when one asked “You’re not going to chuck us out of a virtual plane are you?” to which Nguyen replied “I don't know what we’re going to do yet,” when it was next shown that they immediately underwent pre-programmed VR exposure scenarios, including a fear of falling exercise from atop a city skyscraper. The sweat-inducing and heart rate-racing exposures to virtual plank walks high above a cityscape, or seeing subjects haptically viewing spiders crawl across their outstretched virtual hands, all elicited predictable responses, showcased as carnivalesque entertainment for the viewing audience. As we will see, this kind of trivialising of a virtual environment’s capacity for immersion belies the serious use of the technology in a range of treatments for posttraumatic stress disorder (see Rizzo and Koenig; Rothbaum, Rizzo and Difede). Figure 1: Nguyen and researchers enjoying themselves as their volunteers undergo VR exposure. Defining Automediality In their pioneering 2008 work, Automedialität: Subjektkonstitution in Schrift, Bild und neuen Medien, Jörg Dünne and Christian Moser coined the term “automediality” to problematise the production, application and distribution of autobiographic modes across various media and genres—from literary texts to audiovisual media and from traditional expression to inter/transmedia and remediated formats. The concept of automediality was deployed to counter the conventional critical exclusion of analysis of the materiality/technology used for an autobiographical purpose (Gernalzick). Dünne and Moser proffered a concept of automediality that rejects the binary division of (a) self-expression determining the mediated form or (b) (self)subjectivity being solely produced through the mediating technology. Hence, automediality has been traditionally applied to literary constructs such as autobiography and life-writing, but is now expanding into the digital domain and other “paratextual sites” (Maguire). As Nadja Gernalzick suggests, automediality should “encourage and demand not only a systematics and taxonomy of the constitution of the self in respectively genre-specific ways, but particularly also in medium-specific ways” (227). Emma Maguire has offered a succinct working definition that builds on this requirement to signal the automedial universally, noting it operates as a way of studying auto/biographical texts (of a variety of forms) that take into account how the effects of media shape the kinds of selves that can be represented, and which understands the self not as a preexisting subject that might be distilled into story form but as an entity that is brought into being through the processes of mediation. Sidonie Smith and Julia Watson point to automediality as a methodology, and in doing so emphasize how the telling or mediation of a life actually shapes the kind of story that can be told autobiographically.
They state “media cannot simply be conceptualized as ‘tools’ for presenting a preexisting, essential self […] Media technologies do not just transparently present the self. They constitute and expand it” (Smith and Watson “Virtually Me” 77). This distinction is vital for understanding how automediality might be applied to self-expression in virtual domains, including the holographic avatar dreams of Nguyen throughout Catalyst. Although addressing this distinction in relation to online websites, following P. David Marshall’s description of “the proliferation of the public self”, Maguire notes: The same integration of digital spaces and platforms into daily life that is prompting the development of new tools in autobiography studies […] has also given rise to the field of persona studies, which addresses the ways in which individuals engage in practices of self-presentation in order to form commoditised identities that circulate in affective communities. For Maguire, these automedial works operate textually “to construct the authorial self or persona”. An extension to this digital, authorial construction is apparent in the exponential uptake of screen-mediated, prosumer-generated content, whether online or theatrical (Miller). According to Gernalzick, unlike fictional drama films, screen autobiographies more directly enable “experiential temporalities”. Based on Mary Anne Doane’s promotion of the “indexicality” of film/screen representations to connote the real, Gernalzick suggests that despite semiotic theories of the index problematising realism as an index as representation, the film medium is still commonly comprehended as the “imprint of time itself”: “Film and the spectator of film are said to be in a continuous present. Because the viewer is aware, however, that the images experienced in or even as presence have been made in the past, the temporality of the so-called filmic present is always ambiguous” (230). When expressed as indexical, automedial works, the intrinsic audio-visual capacities of film and video (as media) far surpass the temporal limitations of print and writing (Gernalzick, 228). One extreme example can be found in an emergent trend of “performance crime” murder and torture videos live-streamed or broadcast after the fact using mobile phone cameras and Facebook (Bender). In essence, the political economy of the automedial ecology is important to understand in the overall context of self-expression and the governance of content exhibition, access, distribution and—where relevant—interaction. So what are the implications for automedial works that employ virtual interfaces and how does this evolving medium inform both the expressive autobiographical mode and audiences’ subjectivities? Case Study The Catalyst program described above strove to shed new light on the potential for emerging technology to capture and create virtual avatars from living participants who (self-)generate autobiographical narratives interactively. Once past the initial gee-whiz journalistic evangelism of VR, the episode turned towards host Nguyen’s stated goal—using contemporary technology to create an autonomous virtual human clone. Nguyen laments that if he could create only one such avatar, his primary choice would be that of his grandfather who died when Nguyen was two years old—a desire rendered impossible.
The awkward humour of the plank walk scenario sequence soon gives way as the enthusiastic Nguyen is surprised by his family’s discomfort with the idea of digitally recreating his grandfather. Nguyen next visits a Southern California digital media lab to experience the process by which 3D virtual human avatars are created. Inside a domed array of lights and cameras, in less than one second a life-size 3D avatar is recorded via 6,000 LEDs illuminating his face in 20 different combinations, with eight cameras capturing the exposures from multiple angles, all in ultra-high definition. Called the Light Stage (Debevec), it is the same technology used to create a life-size, virtual holocaust survivor, Pinchas Gutter (Ziv). We see Nguyen encountering a life-size, high-resolution 2D screen version of Gutter’s avatar. Standing before a microphone, Nguyen asks a series of questions about Gutter’s wartime experiences and life in the concentration camps. The responses are naturalistic and authentic, as are the pauses between questions. The high-definition 4K screen is photo-realist but much more convincing in-situ (as an artifact of the Catalyst video camera recording, in some close-ups horizontal lines of transmission appear). According to the project’s curator, David Traum, the real Pinchas Gutter was recorded in 3D as a virtual hologram. He spent 25 hours providing 1,600 responses to a broad range of questions that the curator maintained covered “a lot of what people want to say” (Catalyst). Figure 2: The Museum of Jewish Heritage in Manhattan presented an installation of New Dimensions in Testimony, featuring Pinchas Gutter and Eva Schloss. It is here that the intersection between VR and auto/biography hybridises in complex and potentially difficult ways. It is where the concept of automediality may offer insight into this rapidly emerging phenomenon of creating interactive, hyperreal versions of our selves using VR. These hyperreal VR personae can be questioned and respond in real-time, where interrogators interact either as casual conversers or determined interrogators. The impact on visitors is sobering and palpable. As Nguyen relates at the end of his session, “I just want to give him a hug”. The demonstrable capacity for this avatar to engender a high degree of empathy from its automedial testimony is clear, although, as we indicate below, it could simply indicate increased levels of emotion. Regardless, an ongoing concern amongst witnesses, scholars and cultural curators of memorials and museums dedicated to preserving the history of mass violence, and its associated trauma, is that once the lived experience and testimony of survivors passes with that generation the impact of the testimony diminishes (Broderick). New media modes of preserving and promulgating such knowledge in perpetuity are certainly worthy of embracing. As Stephen Smith, the executive director of the USC Shoah Foundation, suggests, the technology could extend to people who have survived cancer or catastrophic hurricanes […] from the experiences of soldiers with post-traumatic stress disorder or survivors of sexual abuse, to those of presidents or great teachers. Imagine if a slave could have told her story to her grandchildren? (Ziv) Yet questions remain as to the veracity of these recorded personae. The avatars are created according to a specific agenda and the autobiographical content controlled for explicit editorial purposes. It is unclear what material has been excluded, and why.
If, for example, during the recorded questioning, the virtual holocaust survivor became mute at recollecting a traumatic memory, cried or sobbed uncontrollably—all natural, understandable and authentic responses given the nature of the testimony—should these genuine and spontaneous emotions be included along with various behavioural tics such as scratching, shifting about in the seat and other naturalistic movements, to engender a more profound realism? The generation of the photorealist, mimetic avatar—remaining as an interactive persona long after the corporeal, authorial being is gone—reinforces Baudrillard’s concept of simulacra, where a clone exists devoid of its original entity and unable to challenge its automedial discourse. And what if some unscrupulous hacker managed to corrupt and subvert Gutter’s AI so that it responded antithetically to its purpose, by denying the holocaust ever happened? The ethical dilemmas of such a paradigm were explored in the dystopian 2013 film, The Congress, where Robin Wright plays herself (and her avatar) as an out-of-work actor who sells off the rights to her digital self. A movie studio exploits her screen persona in perpetuity, enabling audiences to “become” and inhabit her avatar in virtual space while she is limited in the real world from undertaking certain actions due to copyright infringement. The inability of Wright to control her mimetic avatar’s discourse or action means the assumed automedial agency of her virtual self as an immortal, interactive being remains ontologically perplexing. Figure 3: Robin Wright undergoing full-body photogrammetry to create her VR avatar in The Congress (2013). The various virtual exposures/experiences paraded throughout Catalyst’s “Meet the Avatars” paradoxically recorded and broadcast a range of troubling emotional responses to such immersion. Many participant responses suggest great caution and sensitivity be undertaken before plunging headlong into the new gold rush mentality of virtual reality, augmented reality, and AI affordances. Catalyst depicted their program subjects often responding in discomfort and distress, with some visibly overwhelmed by their encounters and left crying. There is some irony that presenter Nguyen was himself relying on the conventions of 2D linear television journalism throughout, adopting face-to-camera address in (unconscious) automedial style to excitedly promote the assumed socio-cultural boon such automedial VR avatars will generate. Challenging Authenticity There are numerous ethical considerations surrounding the potential for AIs to expand beyond automedial (self-)expression towards photorealist avatars interacting outside of their pre-recorded content. When such systems evolve it may be nigh impossible to discern on screen whether the person you are conversing with is authentic or an indistinguishable, virtual doppelganger. In the future, a variant on the Turing Test may be needed to challenge and identify such hyperreal simulacra. We may be witnessing the precursor to such a dilemma playing out in the arena of audio-only podcasts, with some public intellectuals such as Sam Harris already discussing the legal and ethical problems from technology that can create audio from typed text that convincingly replicates the actual voice of a person by sampling approximately 30 minutes of their original speech (Harris).
Such audio manipulation technology will soon be available to anybody with the motivation and a relatively modest level of technical ability, allowing them to assume an identity and masquerade through automediated dialogue. For the moment, however, the ability to convincingly alter a real-time, computer-generated video image of a person remains at the level of scientific innovation.
Also significant is the extent to which audience reactions to such automediated expressions are indeed empathetic, or simply part of the broader range of affective responses that also includes direct sympathy as well as emotions such as admiration, surprise, pity, disgust and contempt (see Plantinga). There remains much rhetorical hype surrounding VR as the "ultimate empathy machine" (Milk). Yet the current use of the term "empathy" in VR, AI and automedial forms of communication seems principally focused on the capacity for the user-viewer to ameliorate negatively perceived emotions and experiences, whether traumatic or phobic.
When considering comments about authenticity here, it is important to be aware of the occasional slippage of technological terminology into the mainstream. For example, the psychological literature does emphasise that patients respond strongly to virtual scenarios, events, and details that appear to be "authentic" (Pertaub, Slater, and Barker). Authentic in this instance implies a resemblance to a corresponding scenario or activity in the real world. This is not simply another word for photorealism; rather, it describes, for instance, the experimental design of one study in which virtual (AI) audience members in a virtual seminar room used to treat public-speaking anxiety were designed to exhibit "random autonomous behaviours in real-time, such as twitches, blinks, and nods, designed to encourage the illusion of life" (Kwon, Powell, and Chalmers 980). The virtual humans in this study are regarded as having greater authenticity than those in an earlier project on social anxiety (North, North, and Coble), which had little visual complexity but did incorporate researcher-triggered audio clips of audience members "laughing, making comments, encouraging the speaker to speak louder or more clearly" (Kwon, Powell, and Chalmers 980). The small movements, randomly cued rather than following a recognisable pattern, are described by the researchers as creating a sense of authenticity in the VR environment because they seem to correspond to the sorts of random minor movements that actual human audiences in a seminar can be expected to make.
Nonetheless, nobody should regard an interaction with these AIs, or with the avatar of Gutter, as in any way an encounter with a real person. Rather, the characteristics above function to create a disarming effect and enable the real person-viewer to willingly suspend their disbelief and enter into a pseudo-relationship with the AI; not as if it is an actual relationship, but as if it is a simulation of an actual relationship (USC). Lucy Suchman and colleagues invoke these ideas in an analysis of a YouTube video of some apparently humiliating human interactions with the MIT-created AI robot Mertz.
Their analysis contends that, while at first glance the humans' mocking exchanges with Mertz may appear mean-spirited, there is clearly a playfulness and willingness to engage with a form of AI that is essentially continuous with "long-standing assumptions about communication as information processing, and in the robot's performance evidence for the limits to the mechanical reproduction of interaction as we know it through computational processes" (Suchman, Roberts, and Hird).
Thus, it will be important for future work in the area of automediated testimony to consider the extent to which audiences are willing to suspend disbelief and treat the recounted traumatic experience with appropriate gravitas. These questions deserve attention, not the kind of hype displayed by the current iteration of techno-evangelism. Indeed, some of this resurgent hype has come under scrutiny. From the perspective of VR-based tourism, Janna Thompson has recently argued that "it will never be a substitute for encounters with the real thing" (Thompson). Alyssa K. Loh, for instance, argues that many of the negatively themed virtual experiences (such as those that drop the viewer into a scene of domestic violence or the location of a terrorist bomb attack) function not to put you in the position of the actual victim but in that of the general category of domestic-violence victim or bomb-attack victim, thus "deindividuating trauma" (Loh).
Future work in this area should consider actual audience responses and rely upon mixed-methods research approaches to audience analysis. In an era of alt.truth and Cambridge Analytica personality profiling from social media interaction, automediated communication in the virtual guise of AIs demands further study.
References
Anon. "New Dimensions in Testimony." Museum of Jewish Heritage. 15 Dec. 2017. 19 Apr. 2018 <http://mjhnyc.org/exhibitions/new-dimensions-in-testimony/>.
Australian Broadcasting Corporation. "Meet The Avatars." Catalyst, 15 Aug. 2017.
Baudrillard, Jean. "Simulacra and Simulations." Jean Baudrillard: Selected Writings. Ed. Mark Poster. Stanford: Stanford UP, 1988. 166-184.
Bender, Stuart Marshall. Legacies of the Degraded Image in Violent Digital Media. Basingstoke: Palgrave Macmillan, 2017.
Broderick, Mick. "Topographies of Trauma, Dark Tourism and World Heritage: Hiroshima's Genbaku Dome." Intersections: Gender and Sexuality in Asia and the Pacific. 24 Apr. 2010. 14 Apr. 2018 <http://intersections.anu.edu.au/issue24/broderick.htm>.
Debevec, Paul. "The Light Stages and Their Applications to Photoreal Digital Actors." SIGGRAPH Asia. 2012.
Doane, Mary Ann. The Emergence of Cinematic Time: Modernity, Contingency, the Archive. Cambridge: Harvard UP, 2002.
Dünne, Jörg, and Christian Moser. "Allgemeine Einleitung: Automedialität." Automedialität: Subjektkonstitution in Schrift, Bild und neuen Medien. Eds. Jörg Dünne and Christian Moser. München: Wilhelm Fink, 2008. 7-16.
Harris, Sam. "Waking Up with Sam Harris #64 – Ask Me Anything." YouTube, 16 Feb. 2017. 16 Mar. 2018 <https://www.youtube.com/watch?v=gMTuquaAC4w>.
Kwon, Joung Huem, John Powell, and Alan Chalmers. "How Level of Realism Influences Anxiety in Virtual Reality Environments for a Job Interview." International Journal of Human-Computer Studies 71.10 (2013): 978-87.
Loh, Alyssa K. "I Feel You." Artforum, Nov. 2017. 10 Apr. 2018 <https://www.artforum.com/print/201709/alyssa-k-loh-on-virtual-reality-and-empathy-71781>.
Maguire, Emma. "Home, About, Shop, Contact: Constructing an Authorial Persona via the Author Website." M/C Journal 17.9 (2014).
Marshall, P. David. "Persona Studies: Mapping the Proliferation of the Public Self." Journalism 15.2 (2014): 153-170.
Mathews, Karen. "Exhibit Allows Virtual 'Interviews' with Holocaust Survivors." Phys.org Science X Network, 15 Dec. 2017. 18 Apr. 2018 <https://phys.org/news/2017-09-virtual-holocaust-survivors.html>.
Milk, Chris. "Ted: How Virtual Reality Can Create the Ultimate Empathy Machine." TED Conferences, LLC. 16 Mar. 2015. <https://www.ted.com/talks/chris_milk_how_virtual_reality_can_create_the_ultimate_empathy_machine>.
Miller, Ken. More than Fifteen Minutes of Fame: The Evolution of Screen Performance. Unpublished PhD Thesis. Murdoch University, 2009.
Nakamura, Lisa. "Cyberrace." Identity Technologies: Constructing the Self Online. Eds. Anna Poletti and Julie Rak. Madison: U of Wisconsin P, 2014. 42-54.
North, Max M., Sarah M. North, and Joseph R. Coble. "Effectiveness of Virtual Environment Desensitization in the Treatment of Agoraphobia." International Journal of Virtual Reality 1.2 (1995): 25-34.
Pertaub, David-Paul, Mel Slater, and Chris Barker. "An Experiment on Public Speaking Anxiety in Response to Three Different Types of Virtual Audience." Presence: Teleoperators and Virtual Environments 11.1 (2002): 68-78.
Plantinga, Carl. "Emotion and Affect." The Routledge Companion to Philosophy and Film. Eds. Paisley Livingstone and Carl Plantinga. New York: Routledge, 2009. 86-96.
Rizzo, A.A., and Sebastian Koenig. "Is Clinical Virtual Reality Ready for Primetime?" Neuropsychology 31.8 (2017): 877-99.
Rothbaum, Barbara O., Albert "Skip" Rizzo, and JoAnne Difede. "Virtual Reality Exposure Therapy for Combat-Related Posttraumatic Stress Disorder." Annals of the New York Academy of Sciences 1208.1 (2010): 126-32.
Smith, Sidonie, and Julia Watson. Reading Autobiography: A Guide to Interpreting Life Narratives. 2nd ed. Minneapolis: U of Minnesota P, 2010.
———. "Virtually Me: A Toolbox about Online Self-Presentation." Identity Technologies: Constructing the Self Online. Eds. Anna Poletti and Julie Rak. Madison: U of Wisconsin P, 2014. 70-95.
Suchman, Lucy, Celia Roberts, and Myra J. Hird. "Subject Objects." Feminist Theory 12.2 (2011): 119-45.
Thompson, Janna. "Why Virtual Reality Cannot Match the Real Thing." The Conversation, 14 Mar. 2018. 10 Apr. 2018 <http://theconversation.com/why-virtual-reality-cannot-match-the-real-thing-92035>.
USC. "Skip Rizzo on Medical Virtual Reality: USC Global Conference 2014." YouTube, 28 Oct. 2014. 2 Apr. 2018 <https://www.youtube.com/watch?v=PdFge2XgDa8>.
Won, Andrea Stevenson, Jeremy Bailenson, Jimmy Lee, and Jaron Lanier. "Homuncular Flexibility in Virtual Reality." Journal of Computer-Mediated Communication 20.3 (2015): 241-59.
Ziv, Stan. "How Technology Is Keeping Holocaust Survivor Stories Alive Forever." Newsweek, 18 Oct. 2017. 19 Apr. 2018 <http://www.newsweek.com/2017/10/27/how-technology-keeping-holocaust-survivor-stories-alive-forever-687946.html>.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Information and analytical tools post-clearance audit"

1

Krymchak, L. A. [Кримчак, Л. А.]. "Information and Analytical Support of the Economic Security of the Foreign Economic Activity of Industrial Enterprises" ["Інформаційно-аналітичне забезпечення економічної безпеки зовнішньоекономічної діяльності промислових підприємств"]. Dissertation, 2019. http://elar.khnu.km.ua/jspui/handle/123456789/8880.

Full text
APA, Harvard, Vancouver, ISO, and other styles
