Journal articles on the topic 'Little blue penguin – Effect of human beings on'

Consult the top 15 journal articles for your research on the topic 'Little blue penguin – Effect of human beings on.'

1

Mussari, Mark. "Umberto Eco Would Have Made a Bad Fauve." M/C Journal 5, no. 3 (July 1, 2002). http://dx.doi.org/10.5204/mcj.1966.

Abstract:
"The eye altering, alters all." - Blake In his essay "How Culture Conditions the Colours We See," Umberto Eco claims that chromatic perception is determined by language. Regarding language as the primary modeling system, Eco argues for linguistic predominance over visual experience: ". . . the puzzle we are faced with is neither a psychological one nor an aesthetic one: it is a cultural one, and as such is filtered through a linguistic system" (159). Eco goes on to explain that he is 'very confused' about chromatic effect, and his arguments do a fine job of illustrating that confusion. To Eco's claim that color perception is determined by language, one can readily point out that both babies and animals, sans language, experience--and respond to--color perception. How then can color be only a cultural matter? Eco attempts to make a connection between the "negative concept" of a geopolitical unit (e.g., Holland or Italy defined by what is not Holland or Italy) and a chromatic system in which "units are defined not in themselves but in terms of opposition and position in relation to other units" (171). Culture, however, is not the only determinant in the opposition that defines certain colors: It is a physiological phenomenon that the eye, after staring at one color (for example, red) for a long time, will see that color's complement, its opposite (green), on a white background. Language is a frustrating tool when discussing color: languages throughout the world have only a limited number of words for the myriad color-sensations experienced by the average eye. Though language training and tradition have an undoubtedly profound effect on our color sense, our words for color constitute only one part of the color expression and not always the most important one. In his Remarks on Colour (1950-51), Wittgenstein observed: 'When we're asked 'What do the words 'red', 'blue', 'black', 'white' mean?' 
we can, of course, immediately point to things which have these colours,--but our ability to explain the meanings of these words goes no further!' (I-68). We can never say with complete certainty that what this writer meant by this color (we are already in trouble) is understood by this reader (the woods are now officially burning). A brief foray into the world of color perception discloses that, first and foremost, a physiological process, not a cultural one, takes place when a person sees colors. In his lively Art & Physics (1991), Leonard Shlain observes that "Color is the subjective perception in our brains of an objective feature of light's specific wavelengths. Each aspect is inseparable from the other" (170). In his 1898 play To Damascus I, August Strindberg indicated specifically in a stage direction that the Mourners and Pallbearers were to be dressed in brown, while allowing the characters to defy what the audience saw and claim that they were wearing black. In what may well be the first instance of such dramatic toying with an audience's perception, Strindberg forces us to ask where colors exist: In the subject's eye or in the perceived object? In no other feature of the world does such an interplay exist between subject and object. Shlain notes that color "is both a subjective opinion and an objective feature of the world and is both an energy and an entity" (171). In the science of imaging (the transfer of a color digital image from one technology to another), recent research has suggested that human vision may be the best model for this process. Human vision is spatial: it also views colors as sensations involving relationships within an entire image. This phenomenon is part of the process of seeing and unique to the way humans see. In some ways color terms illustrate Roland Barthes's arguments (in S/Z) that connotation actually precedes denotation in language--possibly even produces what we normally consider a word's denotation. 
Barthes refers to denotation as 'the last of connotations' (9). Look up 'red' in the American Heritage Dictionary and the first definition you find is a comparison to 'blood.' Blood carries with it (or the reader brings to it) a number of connotations that have long inspired a tradition of associating red with life, sex, energy, etc. Perhaps the closest objective denotation for red is the mention of 'the long wavelength end of the spectrum,' which basically tells us nothing about experiencing the color red. Instead, the connotations of red, many of them based on previous perceptual experience, constitute our first encounter with the word 'red.' I would not be so inclined to apply Barthes's connotational hierarchy when one sees red in, say, a painting--an experience in which some of the subjectivity one brings to a color is more limited by the actual physical appearance of the hue chosen by the artist. Also, though Barthes talks about linguistic associations, colors are more inclined to inspire emotional associations which sometimes cannot be expressed in language. As Gaston Bachelard wrote in Air and Dreams: An Essay on the Imagination of Movement: 'The word blue designates, but it does not render' (162). Still, the 'pluralism' Barthes argues for in reading seems particularly present in the reader's encounter with color terms and their constant play of objectivity/subjectivity. In painting color was first released from the confines of form by the Post-Impressionists Cézanne, Gauguin, and van Gogh, who allowed the color of the paint, the very marks on the canvas, to carry the power of expression. Following their lead, the French Fauve painters, under the auspices of Matisse, took the power of color another step further. 
Perhaps the greatest colorist of the twentieth century, Matisse understood that colors possess a harmony all their own--that colors call out for their complements; he used this knowledge to paint some of the most harmonious canvases in the history of art. 'I use the simplest colors,' Matisse wrote in 'The Path of Color' (1947). 'I don't transform them myself, it is the relationships that take care of that' (178). When he painted the Red Studio, for example, the real walls were actually a blue-gray; he later said that he 'felt red' in the room--and so he painted red (what he felt), leaving the observer to see red (what she feels). Other than its descriptive function, what does language have to do with any of this? It is a matter of perception and emotion. At a 1998 Seattle art gallery exhibit of predominantly monochromatic sculptures featuring icy white glass objects, I asked the artist why he had employed so little color in his work (there were two small pieces in colored glass and they were not as successful). He replied that "color has a tendency to get away from you," and so he had avoided it as much as possible. The fact that color has a power all its own, that the effects of chromaticism depend partially on how colors function beyond the associations applied to them, has long been acknowledged by more expressionistic artists. Writing to Emile Bernard in 1888, van Gogh proclaimed: 'I couldn't care less what the colors are in reality.' The pieces of the color puzzle which Umberto Eco wishes to dismiss, the psychological and the aesthetic, actually serve as the thrust of most pictorial and literary uses of color spaces. Toward the end of his essay, Eco bows to Klee, Mondrian, and Kandinsky (including even the poetry of Virgil) and their "artistic activity," which he views as working "against social codes and collective categorization" (175). Perhaps these artists and writers retrieved color from the deadening and sometimes restrictive effects of culture. 
Committed to the notion that the main function of color is expression, Matisse liberated color to abolish the sense of distance between the observer and the painting. His innovations are still baffling theorists: In Reconfiguring Modernism: Explorations in the Relationship between Modern Art and Modern Literature, Daniel R. Schwarz bemoans the difficulty in viewing Matisse's decorative productions in 'hermeneutical patterns' (149). Like Eco, Schwarz wants to replace perception and emotion with language and narrativity. Language may determine how we express the experience of color, but Eco places the cart before the horse if he actually believes that language 'determines' chromatic experience. Eco is not alone: the Cambridge linguist John Lyons, observing that color is 'not grammaticalised across the languages of the world as fully or centrally as shape, size, space, time' (223), concludes that colors are the product of language under the influence of culture. One is reminded of Goethe's remark that "the ox becomes furious if a red cloth is shown to him; but the philosopher, who speaks of color only in a general way, begins to rave" (xli). References Bachelard, Gaston. Air and Dreams: An Essay on the Imagination of Movement. Dallas: The Dallas Institute Publications, 1988. Barthes, Roland. S/Z. Trans. Richard Miller. New York: Hill and Wang, 1974. Eco, Umberto. 'How Culture Conditions the Colours We See.' On Signs. Ed. M. Blonsky. Baltimore: Johns Hopkins University Press, 1985. 157-75. Goethe, Johann Wolfgang. The Theory of Colors. Trans. Charles Lock Eastlake. Cambridge: The MIT Press, 1970. Lyons, John. 'Colour in Language.' Colour: Art & Science. Ed. Trevor Lamb and Janine Bourriau. Cambridge: Cambridge University Press, 1995. 194-224. Matisse, Henri. Matisse on Art. Ed. Jack Flam. Rev. ed. Berkeley: University of California, 1995. Riley, Charles A., II. Color Codes: Modern Theories of Color in Philosophy, Painting and Architecture, Literature, Music and Psychology. 
Hanover: University Press of New England, 1995. Schwarz, Daniel R. Reconfiguring Modernism: Explorations in the Relationship between Modern Art and Modern Literature. New York: St. Martin's, 1997. Shlain, Leonard. Art & Physics: Parallel Visions in Space, Time & Light. New York: Morrow, 1991. Strindberg, August. To Damascus in Selected Plays. Volume 2: The Post-Inferno Period. Trans. Evert Sprinchorn. Minneapolis: University of Minnesota Press, 1986. 381-480. Van Gogh, Vincent. The Letters of Vincent van Gogh. Trans. Arnold Pomerans. London: Penguin, 1996.
2

Brien, Donna Lee. "Demon Monsters or Misunderstood Casualties?" M/C Journal 24, no. 5 (October 5, 2021). http://dx.doi.org/10.5204/mcj.2845.

Abstract:
Over the past century, many books for general readers have styled sharks as “monsters of the deep” (Steele). In recent decades, however, at least some writers have also turned to representing how sharks are seriously threatened by human activities. At a time when media coverage of shark sightings seems ever increasing in Australia, scholarship has begun to consider people’s attitudes to sharks and how these are formed, investigating the representation of sharks (Peschak; Ostrovski et al.) in films (Le Busque and Litchfield; Neff; Schwanebeck), newspaper reports (Muter et al.), and social media (Le Busque et al., “An Analysis”). My own research into representations of surfing and sharks in Australian writing (Brien) has, however, revealed that, although reporting of shark sightings and human-shark interactions are prominent in the news, and sharks function as vivid and commanding images and metaphors in art and writing (Ellis; Westbrook et al.), little scholarship has investigated their representation in Australian books published for a general readership. While recognising representations of sharks in other book-length narrative forms in Australia, including Australian fiction, poetry, and film (Ryan and Ellison), this enquiry is focussed on non-fiction books for general readers, to provide an initial review. Sampling holdings of non-fiction books in the National Library of Australia, crosschecked with Google Books, in early 2021, this investigation identified 50 Australian books for general readers that are principally about sharks, or that feature attitudes to them, published from 1911 to 2021. Although not seeking to capture all Australian non-fiction books for general readers that feature sharks, the sampling attempted to locate a wide range of representations and genres across the time frame from the earliest identified text until the time of the survey. 
The books located include works of natural and popular history, travel writing, memoir, biography, humour, and other long-form non-fiction for adult and younger readers, including hybrid works. A thematic analysis (Guest et al.) of the representation of sharks in these texts identified five themes that moved from understanding sharks as fishes to seeing them as monsters, then prey, and finally to endangered species needing conservation. Many books contained more than one theme, and not all examples identified have been quoted in the discussion of the themes below. Sharks as Part of the Natural Environment Drawing on oral histories passed through generations, two memoirs (Bradley et al.; Fossa) narrate Indigenous stories in which sharks play a central role. These reveal that sharks are part of both the world and a wider cosmology for Aboriginal and Torres Strait Islander people (Clua and Guiart). In these representations, sharks are integrated with, and integral to, Indigenous life, with one writer suggesting they are “creator beings, ancestors, totems. Their lifecycles reflect the seasons, the landscape and sea country. They are seen in the movement of the stars” (Allam). A series of natural history narratives focus on zoological studies of Australian sharks, describing shark species and their anatomy and physiology, as well as discussing shark genetics, behaviour, habitats, and distribution. A foundational and relatively early Australian example is Gilbert P. Whitley’s The Fishes of Australia: The Sharks, Rays, Devil-fish, and Other Primitive Fishes of Australia and New Zealand, published in 1940. Ichthyologist at the Australian Museum in Sydney from the early 1920s to 1964, Whitley authored several books which furthered scientific thought on sharks. Four editions of his Australian Sharks were published between 1983 and 1991 in English, and the book is still held in many libraries and other collections worldwide. 
In this text, Whitley described a wide variety of sharks, noting shared as well as individual features. Beautiful drawings contribute information on shape, colouring, markings, and other recognisable features to assist with correct identification. Although a scientist and a Fellow and then President of the Royal Zoological Society of New South Wales, Whitley recognised it was important to communicate with general readers, and his books are accessible, the prose crisp and clear. Books published after this text (Aitken; Ayling; Last and Stevens; Tricas and Carwardine) share Whitley’s regard for the diversity of sharks as well as his desire to educate a general readership. By 2002, the CSIRO’s Field Guide to Australian Sharks & Rays (Daley et al.) also featured numerous striking photographs of these creatures. Titles such as Australia’s Amazing Sharks (Australian Geographic) emphasise sharks’ unique qualities, including their agility and speed in the water, sensitive sight and smell, and ability to detect changes in water pressure around them, heal rapidly, and replace their teeth. These books also emphasise the central role that sharks play in the marine ecosystem. There are also such field guides to sharks in specific parts of Australia (Allen). This attention to disseminating accurate zoological information about sharks is also evident in books written for younger readers including very young children (Berkes; Kear; Parker and Parker). In these and other similar books, sharks are imaged as a central and vital component of the ocean environment, and the narratives focus on their features and qualities as wondrous rather than monstrous. Sharks as Predatory Monsters A number of books for general readers do, however, image sharks as monsters. In 1911, in his travel narrative Peeps at Many Lands: Australia, Frank Fox describes sharks as “the most dangerous foes of man in Australia” (23) and many books have reinforced this view over the following century. 
This can be seen in titles that refer to sharks as dangerous predatory killers (Fox and Ruhen; Goadby; Reid; Riley; Sharpe; Taylor and Taylor). The covers of a large proportion of such books feature sharks emerging from the water, jaws wide open in explicit homage to the imaging of the monster shark in the film Jaws (Spielberg). Shark!: Killer Tales from the Dangerous Depths (Reid) is characteristic of books that portray encounters with sharks as terrifying and dramatic, using emotive language and stories that describe sharks as “the world’s most feared sea creature” (47) because they are such “highly efficient killing machines” (iv, see also 127, 129). This representation of sharks is also common in several books for younger readers (Moriarty; Rohr). Although the risk of being injured by an unprovoked shark is extremely low (Chapman; Fletcher et al.), fear of sharks is prevalent and real (Le Busque et al., “People’s Fear”) and described in a number of these texts. Several of the memoirs located describe surfers’ fear of sharks (Muirhead; Orgias), as do those of swimmers, divers, and other frequent users of the sea (Denness; de Gelder; McAloon), even if the author has never encountered a shark in the wild. In these texts, this fear of sharks is often traced to viewing Jaws, and especially to how the film’s huge, bloodthirsty great white shark persistently and determinedly attacks its human hunters. Pioneer Australian shark expert Valerie Taylor describes such great white sharks as “very big, powerful … and amazingly beautiful” but accurately notes that “revenge is not part of their thought process” (Kindle version). Two books explicitly seek to map and explain Australians’ fear of sharks. In Sharks: A History of Fear in Australia, Callum Denness charts this fear across time, beginning with his own “shark story”: a panicked, terror-filled evacuation from the sea, following the sighting of a shadow which turned out not to be a shark. 
Blake Chapman’s Shark Attacks: Myths, Misunderstandings and Human Fears explains commonly held fearful perceptions of sharks. Acknowledging that sharks are a “highly emotive topic”, the author of this text does not deny “the terror [that] they invoke in our psyche” but makes a case that this is “only a minor characteristic of what makes them such intriguing animals” (ix). In Death by Coconut: 50 Things More Dangerous than a Shark and Why You Shouldn’t Be Afraid of the Ocean, Ruby Ashby Orr utilises humour to educate younger readers about the real risk humans face from sharks and, as per the book’s title, why they should not be feared, listing champagne corks and falling coconuts among the many everyday activities more likely to lead to injury and death in Australia than encountering a shark. Taylor goes further in her memoir – not only describing her wonder at swimming with these creatures, but also her calm acceptance of the possibility of being injured by a shark: "if we are to be bitten, then we are to be bitten … . One must choose a life of adventure, and of mystery and discovery, but with that choice, one must also choose the attendant risks" (2019: Kindle version). Such an attitude is very rare in the books located, with even some of the most positive about these sea creatures still quite sensibly fearful of potentially dangerous encounters with them. Sharks as Prey There is a long history of sharks being fished in Australia (Clark). The killing of sharks for sport is detailed in An American Angler in Australia, which describes popular adventure writer Zane Grey’s visit to Australia and New Zealand in the 1930s to fish ‘big game’. This text includes many bloody accounts of killing sharks, which are justified with explanations about how sharks are dangerous. It is also illustrated with gruesome pictures of dead sharks. 
Australian fisher Alf Dean’s biography describes him as the “World’s Greatest Shark Hunter” (Thiele); this text is similarly illustrated with photographs of some of the gigantic sharks he caught and killed in the second half of the twentieth century. Apart from being killed during pleasure and sport fishing, sharks are also hunted by spearfishers. Valerie Taylor and her late husband, Ron Taylor, are well known in Australia and internationally as shark experts, but they began their careers as spearfishers and shark hunters (Taylor, Ron Taylor’s), with the documentary Shark Hunters gruesomely detailing their killing of many sharks. The couple have produced several books that recount their close encounters with sharks (Taylor; Taylor, Taylor and Goadby; Taylor and Taylor), charting their movement from killers to conservationists as they learned more about the ocean and its inhabitants. Now a passionate campaigner against the butchery she once participated in, Taylor describes in her memoir her shift to a more respectful relationship with sharks, driven by her desire to understand and protect them. In Australia, the culling of sharks is supposedly carried out to ensure human safety in the ocean, although this practice has long been questioned. In 1983, for instance, Whitley noted the “indiscriminate” killing of grey nurse sharks, despite this species largely being very docile and of little threat to people (Australian Sharks, 10). This is repeated by Tony Ayling twenty-five years later, who adds that the generally harmless grey nurse shark has been killed to the point of extinction, as it was wrongly believed to prey on surfers and swimmers. 
Shark researcher and conservationist Riley Elliott, author of Shark Man: One Kiwi Man’s Mission to Save Our Most Feared and Misunderstood Predator (2014), includes an extremely critical chapter on Western Australian shark ‘management’ through culling, summing up the problems associated with this approach: it seems to me that this cull involved no science or logic, just waste and politics. It’s sickening that the people behind this cull were the Fisheries department, which prior to this was the very department responsible for setting up the world’s best acoustic tagging system for sharks. (Kindle version, Chapter 7) Describing sharks as “misunderstood creatures”, Orr is also clear in her opposition to killing sharks to ‘protect’ swimmers noting that “each year only around 10 people are killed in shark attacks worldwide, while around 73 million sharks are killed by humans”. She adds the question and answer, “sounds unfair? Of course it is, but when an attack is all over the news and the people are baying for shark blood, it’s easy to lose perspective. But culling them? Seriously?” (back cover). The condemnation of culling is also evident in David Brooks’s recent essay on the topic in his collection of essays about animal welfare, conservation and the relationship between humans and other species, Animal Dreams. This disapproval is also evident in narratives by those who have been injured by sharks. Navy diver Paul de Gelder and surfer Glen Orgias were both bitten by sharks in Sydney in 2009 and both their memoirs detail their fear of sharks and the pain they suffered from these interactions and their lengthy recoveries. However, despite their undoubted suffering – both men lost limbs due to these encounters – they also attest to their ongoing respect for these creatures and specify a shared desire not to see them culled. 
Orgias, instead, charts the life story of the shark who bit him alongside his own story in his memoir, musing at the end of the book, not about himself or his injury, but about the fate of the shark he had encountered: great whites are portrayed … as pathological creatures, and as malevolent. That’s rubbish … they are graceful, mighty beasts. I respect them, and fear them … [but] the thought of them fighting, dying, in a net upsets me. I hope this great white shark doesn’t end up like that. (271–271) Several of the more recent books identified in this study acknowledge that, despite growing understanding of sharks, the popular press and many policy makers continue to advocate for shark culls, these calls especially vocal after a shark-related human death or injury (Peppin-Neff). The damage to shark species involved caused by their killing – either directly by fishing, spearing, finning, or otherwise hunting them, or inadvertently as they become caught in nets or affected by human pollution of the ocean – is discussed in many of the more recent books identified in this study. Sharks as Endangered Alongside fishing, finning, and hunting, human actions and their effects such as beach netting, pollution and habitat change are killing many sharks, to the point where many shark species are threatened. Several recent books follow Orr in noting that an estimated 100 million sharks are now killed annually across the globe and that this, as well as changes to their habitats, are driving many shark species to the status of vulnerable, threatened or towards extinction (Dulvy et al.). This is detailed in texts about biodiversity and climate change in Australia (Steffen et al.) as well as in many of the zoologically focussed books discussed above under the theme of “Sharks as part of the natural environment”. 
The CSIRO’s Field Guide to Australian Sharks & Rays (Daley et al.), for example, emphasises not only that several shark species are under threat (and protected) (8–9) but also that sharks are, as individuals, themselves very fragile creatures. Their skeletons are made from flexible, soft cartilage rather than bone, meaning that although they are “often thought of as being incredibly tough; in reality, they need to be handled carefully to maximise their chance of survival following capture” (9). Material on this theme is included in books for younger readers on Australia’s endangered animals (Bourke; Roc and Hawke). Shark Conservation By 1991, shark conservation in Australia and overseas was a topic of serious discussion in Sydney, with an international workshop on the subject held at Taronga Zoo and the proceedings published (Pepperell et al.). Since then, the movement to protect sharks has grown, with marine scientists, high-profile figures and other writers promoting shark conservation, especially through attempts to educate the general public about sharks. De Gelder’s memoir, for instance, describes how he now champions sharks, promoting shark conservation in his work as a public speaker. Peter Benchley, who (with Carl Gottlieb) recast his novel Jaws for the film’s screenplay, later attested to regretting his portrayal of sharks as aggressive and became a prominent spokesperson for shark conservation. In explaining his change of heart, he stated that when he wrote the novel, he was reflecting the general belief that sharks would both seek out human prey and attack boats, but he later discovered this to be untrue (Benchley, “Without Malice”). Many recent books about sharks for younger readers convey a conservation message, underscoring how, instead of fearing or killing sharks, or doing nothing, humans need to actively assist these vulnerable creatures to survive. 
In the children’s book series featuring Bindi Irwin and her “wildlife adventures”, there is a volume where Bindi and a friend are on a diving holiday when they find a dead shark whose fin has been removed. The book not only describes how shark finning is illegal, but also how Bindi and her friend are “determined to bring the culprits to justice” (Browne). This narrative, like the other books in this series, has a dual focus: highlighting the beauty of wildlife and its value, but also how the creatures described need protection and assistance. Concluding Discussion This study was prompted by the understanding that the Earth is currently in the epoch known as the Anthropocene, a time in which humans have significantly altered, and continue to alter, the Earth by our activities (Myers), resulting in numerous species becoming threatened, endangered, or extinct. It acknowledges the pressing need for not only natural science research on these actions and their effects, but also for such scientists to publish their findings in more accessible ways (see Paulin and Green). It specifically responds to demands for scholarship outside the relevant areas of science and conservation to encourage widespread thinking and action (Mascia et al.; Bennett et al.). As understanding public perceptions and overcoming widely held fear of sharks can facilitate their conservation (Panoch and Pearson), the way sharks are imaged is integral to their survival. The five themes identified in this study reveal vastly different ways of viewing and writing about sharks. These range from seeing sharks as nothing more than large fishes to be killed for pleasure, to viewing them as terrifying monsters, to finally understanding that they are amazing creatures who play an important role in the world’s environment and are in urgent need of conservation. 
This range of representation is important, for if sharks are understood as demon monsters which hunt humans, then it is much more ‘reasonable’ to not care about their future than if they are understood to be fascinating and fragile creatures suffering from their interactions with humans and our effect on the environment. Further research could conduct a textual analysis of these books. In this context, it is interesting to note that, although in 1949 C. Bede Maxwell suggested describing human deaths and injuries from sharks as “accidents” (182) and in 2013 Christopher Neff and Robert Hueter proposed using “sightings, encounters, bites, and the rare cases of fatal bites” (70) to accurately represent “the true risk posed by sharks” to humans (70), the majority of the books in this study, like mass media reports, continue to use the ubiquitous and more dramatic terminology of “shark attack”. The books identified in this analysis could also be compared with international texts to reveal and investigate global similarities and differences. While the focus of this discussion has been on non-fiction texts, a companion analysis of representation of sharks in Australian fiction, poetry, films, and other narratives could also be undertaken, in the hope that such investigations contribute to more nuanced understandings of these majestic sea creatures. References Aitken, Kelvin. Sharks & Rays of Australia. New Holland, 1998. Allam, Lorena. “Indigenous Cultural Views of the Shark.” Earshot, ABC Radio, 24 Sep. 2015. 1 Mar. 2021 <https://www.abc.net.au/radionational/programs/earshot/indigenous-cultural-views-of-the-shark/6798174>. Allen, Gerald R. Field Guide to Marine Fishes of Tropical Australia and South-East Asia. 4th ed. Welshpool: Western Australian Museum, 2009. Australian Geographic. Australia’s Amazing Sharks. Bauer Media, 2020. Ayling, Tony. Sharks & Rays. Steve Parish, 2008. Benchley, Peter. Jaws. New York: Doubleday, 1974. Benchley, Peter. 
“Without Malice: In Defence of the Shark.” The Guardian 9 Nov. 2000. 1 Mar. 2021 <https://www.theguardian.com/theguardian/2000/nov/09/features11.g22>. Bennett, Nathan J., Robin Roth, Sarah C. Klain, Kai M.A. Chan, Douglas A. Clark, Georgina Cullman, Graham Epstein, Michael Paul Nelson, Richard Stedman, Tara L. Teel, Rebecca E. W. Thomas, Carina Wyborn, Deborah Curran, Alison Greenberg, John Sandlos, and Diogo Veríssimo. “Mainstreaming the Social Sciences in Conservation.” Conservation Biology 31.1 (2017): 56–66. Berkes, Marianne. Over in Australia: Amazing Animals Down Under. Sourcebooks, 2011. Bourke, Jane. Endangered Species of Australia. Ready-Ed Publications, 2006. Bradley, John, and Yanyuwa Families. Singing Saltwater Country: Journey to the Songlines of Carpentaria. Allen & Unwin, 2010. Brien, Donna Lee. “Surfing with Sharks: A Survey of Australian Non-Fiction Writing about Surfing and Sharks.” TEXT: Journal of Writing and Writing Programs, forthcoming. Brooks, David. Animal Dreams. Sydney: Sydney University Press, 2021. Browne, Ellie. Island Ambush. Random House Australia, 2011. Chapman, Blake. Shark Attacks: Myths, Misunderstandings and Human Fears. CSIRO, 2017. Clark, Anna. The Catch: The Story of Fishing in Australia. National Library of Australia, 2017. Clua, Eric, and Jean Guiart. “Why the Kanak Don’t Fear Sharks: Myths as a Coherent but Dangerous Mirror of Nature.” Oceania 90 (2020): 151–166. Daley, R.K., J.D. Stevens, P.R. Last, and G.R. Yearsly. Field Guide to Australian Sharks & Rays. CSIRO Marine Research, 2002. De Gelder, Paul. No Time For Fear: How a Shark Attack Survivor Beat the Odds. Penguin, 2011. Denness, Callum. Sharks: A History of Fear in Australia. Affirm Press, 2019. Dulvy, Nicholas K., Sarah L. Fowler, John A. Musick, Rachel D. Cavanagh, Peter M. Kyne, Lucy R. Harrison, John K. Carlson, Lindsay N.K. Davidson, Sonja V. Fordham, Malcolm P. Francis, Caroline M. Pollock, Colin A. Simpfendorfer, George H. Burgess, Kent E. 
Carpenter, Leonard J.V. Compagno, David A. Ebert, Claudine Gibson, Michelle R. Heupel, Suzanne R. Livingstone, Jonnell C. Sanciangco, John D. Stevens, Sarah Valenti, and William T. White. “Extinction Risk and Conservation of the World’s Sharks and Rays.” eLife 3 (2014): e00590. DOI: 10.7554/eLife.00590. Elliott, Riley. Shark Man: One Kiwi Man’s Mission to Save Our Most Feared and Misunderstood Predator. Penguin Random House New Zealand, 2014. Ellis, Richard. Shark: A Visual History. New York: Lyons Press, 2012. Fletcher, Garth L., Erich Ritter, Raid Amin, Kevin Cahn, and Jonathan Lee. “Against Common Assumptions, the World’s Shark Bite Rates are Decreasing.” Journal of Marine Biology 2019: art ID 7184634. <https://doi.org/10.1155/2019/7184634>. Fossa, Ada. Stories, Laughter and Tears Through Bygone Years in Shark Bay. Morrisville, Lulu.com, 2017. Fox, Frank. Peeps at Many Lands: Australia. Adam and Charles Black, 1911. Fox, Rodney, and Olaf Ruhen. Shark Attacks and Adventures with Rodney Fox. O’Neill Wetsuits, 1975. Gerhardt, Karin. Indigenous Knowledge and Cultural Values of Hammerhead Sharks in Northern Australia. James Cook University, 2018. Goadby, Peter. Sharks and Other Predatory Fish of Australia. 2nd ed. Jacaranda Press, 1968. Grey, Zane. An American Angler in Australia. 1st ed. 1937. Derrydale Press, 2002. Guest, Greg, Kathleen M. MacQueen, and Emily E. Namey. Applied Thematic Analysis. Sage, 2012. Jaws. Dir. Steven Spielberg. Universal Pictures, 1975. Kear, Katie. Baby Shark: Adventure Down Under. North Sydney: Puffin/Penguin Random House, 2020. Last, Peter R., and John Donald Stevens. Sharks and Rays of Australia. CSIRO, 2009. Le Busque, Brianna, and Carla Litchfield. “Sharks on Film: An Analysis of How Shark-Human Interactions Are Portrayed in Films.” Human Dimensions of Wildlife (2021). DOI: 10.1080/10871209.2021.1951399. Le Busque, Brianna, Philip Roetman, Jillian Dorrian, and Carla Litchfield. 
“An Analysis of Australian News and Current Affair Program Coverage of Sharks on Facebook.” Conservation Science and Practice 1.11 (2019): e111. <https://doi.org/10.1111/csp2.111>. Le Busque, Brianna, Philip Roetman, Jillian Dorrian, and Carla Litchfield. “People’s Fear of Sharks: A Qualitative Analysis.” Journal of Environmental Studies and Sciences 11 (2021): 258–265. Lucrezi, Serena, Suria Ellis, and Enrico Gennari. “A Test of Causative and Moderator Effects in Human Perceptions of Sharks, Their Control and Framing.” Marine Policy 109 (2019): art 103687. <https://doi.org/10.1016/j.marpol.2019.103687>. Mascia, Michael B., C. Anne Claus, and Robin Naidoo. “Impacts of Marine Protected Areas on Fishing Communities.” Conservation Biology 24.5 (2010): 1424–1429. Maxwell, C. Bede. Surf: Australians against the Sea. Angus and Robertson, 1949. McAloon, Brendan. Sharks Never Sleep: First-Hand Encounters with Killers of the Sea. Updated ed. Hardie Grant, 2018. Moriarty, Ros. Ten Scared Fish. Sydney: Allen & Unwin, 2012. Muirhead, Desmond. Surfing in Hawaii: A Personal Memoir. Northland, 1962. Muter, Bret A., Meredith L. Gore, Katie S. Gledhill, Christopher Lamont, and Charlie Huveneers. “Australian and U.S. News Media Portrayal of Sharks and Their Conservation.” Conservation Biology 27 (2012): 187–196. Myers, Joe. “What Is the Anthropocene? And Why Does It Matter?” World Economic Forum 31 Aug. 2016. 6 Aug. 2021 <https://www.weforum.org/agenda/2016/08/what-is-the-anthropocene-and-why-does-it-matter>. Neff, Christopher. “The Jaws Effect: How Movie Narratives Are Used to Influence Policy Responses to Shark Bites in Western Australia.” Australian Journal of Political Science 50.1 (2015): 114–127. Neff, Christopher, and Robert Hueter. “Science, Policy, and the Public Discourse of Shark 'Attack': A Proposal for Reclassifying Human–Shark Interactions.” Journal of Environmental Studies and Sciences 3 (2013): 65–73. Orgias, Glenn. 
Man in a Grey Suit: A Memoir of Surfing, Shark Attack and Survival. Penguin, 2012. Orr, Ruby Ashby. Death by Coconut: 50 Things More Dangerous than a Shark and Why You Shouldn’t Be Afraid of the Ocean. Affirm Press, 2015. Ostrovski, Raquel Lubambo, Guilherme Martins Violante, Mariana Reis de Brito, Jean Louis Valentin, and Marcelo Vianna. “The Media Paradox: Influence on Human Shark Perceptions and Potential Conservation Impacts.” Ethnobiology and Conservation 10.12 (2021): 1–15. Panoch, Rainera, and Elissa L. Pearson. “Humans and Sharks: Changing Public Perceptions and Overcoming Fear to Facilitate Shark Conservation.” Society & Animals 25.1 (2017): 57–76. Parker, Steve, and Jane Parker. The Encyclopedia of Sharks. Universal International, 1999. Paulin, Mike, and David Green. “Mostly Harmless: Sharks We Have Met.” Junctures 19 (2018): 117–122. Pepin-Neff, Christopher L. Flaws: Shark Bites and Emotional Public Policymaking. Palgrave Macmillan, 2019. Pepperell, Julian, John West, and Peter Woon, eds. Shark Conservation: Proceedings of an International Workshop on the Conservation of Elasmobranchs Held at Taronga Zoo, Sydney, Australia, 24 February 1991. Zoological Parks Board of New South Wales, 1993. Peschak, Thomas P. “Sharks and Shark Bites in the Media.” Finding a Balance: White Shark Conservation and Recreational Safety in the Inshore Waters of Cape Town, South Africa. Eds. Deon C. Nel and Thomas P. Peschak. Cape Town: World Wildlife Fund, 2006. 159–163. Reid, Robert. Shark!: Killer Tales from the Dangerous Depths. Allen & Unwin Kindle version, 2010. Riley, Kathy. Australia’s Most Dangerous Sharks. Australian Geographic, 2013. Roc, Margaret, and Kathleen Hawke. Australia’s Critically Endangered Animals. Heinemann Library, 2006. Rohr, Ian. Snappers, Stingers and Stabbers of Australia. Young Reed, 2006. Royal Zoological Society of New South Wales. “RZS NSW Fellows.” 2021. 6 Aug. 2021 <https://www.rzsnsw.org.au/about-us/rzs-nsw-fellows/rzs-nsw-fellows>. 
Ryan, Mark David, and Elizabeth Ellison. “Beaches in Australian Horror Films: Sites of Fear and Retreat.” Writing the Australian Beach: Local Site, Global Idea. Eds. Elizabeth Ellison and Donna Lee Brien. Palgrave/Springer, 2020. 125–141. Schwanebeck, Wieland, ed. Der Weisse Hai revisited: Steven Spielberg’s Jaws und die Geburt eines amerikanischen Albtraums. Bertz & Fischer, 2015. Shark Hunters. Dirs. Ben Cropp and Ron Taylor. Sydney, 1962. Sharpe, Alan. Shark Down Under: The History of Shark Attacks in Australian Waters. Dominion Publishing, 1976. Steele, Philip. Sharks and Other Monsters of the Deep. London: DK, 1998. Steffen, Will, Andrew A. Burbidge, Lesley Hughes, Roger Kitching, David Lindenmayer, Warren Musgrave, Mark Stafford Smith, and Patricia A. Werner. Australia’s Biodiversity and Climate Change. CSIRO Publishing, 2009. Taylor, Ron. Ron Taylor’s Shark Fighters: Underwater in Colour. John Harding Underwater Promotions, 1965. Taylor, Ron, and Valerie Taylor. Sharks: Silent Hunters of the Deep. Reader’s Digest, 1990. Taylor, Ron, Valerie Taylor, and Peter Goadby, eds. Great Shark Stories. Harper & Row, 1978. Repub. 1986 and 2000. Taylor, Valerie. Valerie Taylor: An Adventurous Life. Hachette Australia, 2019. Thiele, Colin. Maneater Man: Alf Dean, the World’s Greatest Shark Hunter. Rigby, 1979. Tricas, Timothy C., and Mark Carwardine. Sharks and Whales. Five Mile Press, 2002. Westbrook, Vivienne R., Shaun Collin, Dean Crawford, and Mark Nicholls. Sharks in the Arts: From Feared to Revered. Routledge, 2018. Whitley, Gilbert Percy. The Fishes of Australia: The Sharks, Rays, Devil-Fish, and other Primitive Fishes of Australia and New Zealand. Royal Zoological Society of New South Wales, 1940. Whitley, Gilbert Percy. Australian Sharks. Lloyd O’Neil, 1983.
3

Mantle, Martin. "“Have You Tried Not Being a Mutant?”." M/C Journal 10, no. 5 (October 1, 2007). http://dx.doi.org/10.5204/mcj.2712.

Full text
Abstract:
There is an expression, in recent Marvel superhero films, of a social anxiety about genetic science that, in part, replaces the social anxieties about nuclear weapons that can be detected in the comic books on which these films are based (Rutherford). Much of the analysis of superhero comics – and the films on which they are based – has focussed its attention on the anxieties contained within them about gender, sexuality, race, politics, and the nation. Surprisingly little direct critique is applied to the most obvious point of difference within those texts, namely the acquisition, display, and use of extra-ordinary abilities. These superhero films represent some of the ways that audiences come to understand genetics. I am interested in this essay in considering how the representation of genetic mutation, as an error in a bio-chemical code, is a key narrative device. Moreover, mutation is central to the way the films explore the social exclusion of characters who acquire super-abilities. My contention is that, in these Marvel comic films, extra-ordinary ability, and the anxieties expressed about those abilities, parallels some of the social and cultural beliefs about the disabled body. The impaired body thus becomes a larger trope for any deviation from the “normal” body and gives rise to the anxieties about deviation and deviance explored in these films. Impairment and illness have historically been represented as either a blessing or a curse – the source of revelation and discovery, or the site of ignominy. As Western culture developed, the confluence of Greek and Judeo-Christian stories about original sin and inherited punishment for parental transgression resulted in the entrenchment of beliefs about bent and broken bodies as the locus of moral questions (and answers) about the abilities and use of the human body (Sontag 47). 
I want to explore, firstly, in the film adaptations of the Marvel comics X-Men, Spiderman, Fantastic Four, and The Hulk, the representation of changes to the body as the effect of invisible bio-chemical states and processes. It has been impossible to see DNA, whether with the human eye or with technical aid; the science of genetics is largely based on inference from other observations. In these superhero films, the graphic display of DNA and genetic restructuring is strikingly large. This overemphasis suggests both that the genetic is a key narrative impetus of the films and that there is something uncertain or disturbing about genetic science. One such concern about genetic science is identifying the sources of oppression that might underlie the, at times understandable, desire to eliminate disease and congenital defect through changes to the genetic code or elimination of genetic error. As Adrienne Asch states, this urge to eliminate disease and impairment is problematic: Why should it be acceptable to avoid some characteristics and not others? How can the society make lists of acceptable and unacceptable tests and still maintain that only disabling traits, and not people who live with those traits, are to be avoided? (339) Asch’s questioning ends with the return to the moral concerns that have always circulated around the body, and in particular a body that deviates from a norm. The maxim “hate the sin, not the sinner” is replaced by “eradicate the impairment, not the impaired”: it is some kind of lack of effort or resourcefulness on the part of the impaired that is detectable in the presence of the impairment. This replacement of sin by science is yet another example of the trace of the body as the site of moral arguments. As Bryan Turner argues, categories of disease, and by association impairment, are intrinsic to the political discourse of Western societies about otherness and exclusion (Turner 216). 
It is not surprising then, that characters that experience physical changes caused by genetic mutation may take on for themselves the social shame that is part of the exclusion process. As genetic science has increasingly infiltrated the popular imagination and thus finds expression in cinema, so too has this concern of shame and guilt become key to the narrative tension of films that link changes in the genetic code to the acquisition of super-ability. In the X-Men franchise, the young female character Rogue (Anna Paquin), acquires the ability to absorb another’s life force (and abilities), and she seeks to have her genetic code resequenced in order to be able to touch others, and thus by implication have a “normal” life. In X2 (Bryan Singer, 2003), Rogue’s boyfriend, Iceman (Shawn Ashmore), who has been largely excluded from her touch, returns home with other mutants. After having hidden his mutant abilities from his family, he finally confesses to them the truth about himself. His shocked mother turns to him and asks: “Have you tried not being a mutant?” Whilst this moment has been read as an expression of anxiety about homosexuality (“Pop Culture: Out Is In”; Vary), it also marks a wider social concern about otherness, including disability, and its attendant social exclusion. Moreover, this moment reasserts the paradigm of effort that underlies anxieties about deviations from the norm: Iceman could have been normal if only he had tried harder, had a different girlfriend, remained at home, sought more knowledge, or had better counsel. Science, and more specifically genetic science, is suggested in many of these films as the site of bad counsel. The narratives of these superhero stories, almost without exception, begin or hinge on some kind of mistake by scientists – the escaped spider, the accident in the laboratory, the experiment that gets out of control. 
The classic image of the mad scientist or Doctor Frankenstein type, locked away in his laboratory, is reflected in the various scenes in all these films, in which the scientists are separated from wider society. In Fantastic 4 (Tim Story, 2005), the villain, Dr Von Doom (Julian McMahon), is located at the top of a large multi-story building, as too are the heroes. Their separation from the rest of society is made even more dramatic by placing the site of their exposure to cosmic radiation, the source of the genetic mutation, in a space station that is empty of anyone else except the five main characters whose bodies will be altered. In Spiderman (Sam Raimi, 2002), the villain is a scientist whose experiments are kept secret by the military, emphasising the danger inherent in his work. The mad-scientist imagery dominates the representation of Bruce Banner’s father in Hulk (Ang Lee, 2003), whose experiments have altered his genetic code, and that alteration in genetic structure has subsequently been passed on to his son. The Fantastic 4 storyline returns several times to the link between genetic mutation and the exposure to cosmic radiation. Indeed, it is made explicit that human existence – and by implication the human body and abilities – is predicated on this cosmic radiation as the source of transformations that formed the human genetic code. The science of early biology thus posits this cosmic radiation as the source of what is “normal,” and it is this appeal to the cosmos – derived from the Greek kosmos meaning “order” – that provides, in part, the basis on which to value the current human genetic code. This link to the cosmic is also made in the opening sequence of X-Men in which the following voice-over is heard as we see a ball of light form. 
This light show is both a reminder of the Big Bang (the supposed beginning of the universe which unleashed vast amounts of radiation) and the intertwining of chromosomes seen inside biological nuclei: Mutation, it is the key to our evolution. It has enabled us to evolve from a single celled organism to the dominant species on the planet. This process is slow, normally taking thousands and thousands of years. But every few hundred millennia evolution leaps forward. Whilst mutation may be key to human evolution and the basis for the dramatic narratives of these superhero films, it is also the source of social anxiety. Mutation, whilst derived from the Latin for “change,” has come to take on the connotation of an error or mistake. Richard Dawkins, in his celebrated book The Selfish Gene, compares mutation to “an error corresponding to a single misprinted letter in a book” (31). The language of science is intended to be without the moral overtones that such words as “error” and “misprint” attract. Nevertheless, in the films under consideration, the negative connotations of mutation as error or mistake are, therefore, the source of the many narrative crises as characters seek to rid themselves of their abilities. Norman Osborn (Willem Dafoe), the villain of Spiderman, is spurred on by his belief that human beings have not achieved their potential, and the implication here is that the presence of physical weakness, illness, and impairment is the supporting evidence. The desire to return the bodies of these superheroes to a “normal” state is best expressed in Hulk, when Banner’s father says: “So you wanna know what’s wrong with him. So you can fix him, cure him, change him.” The link between a mistake in the genetic code and the disablement of these characters is made explicit when Banner demands from his father an explanation for his transformation into the Hulk – the genetic change is explicitly named a deformity. 
These films all gesture towards the key question of just what is the normal human genetic code, particularly given the way mutation, as error, is a fundamental tenet in the formation of that code. The films’ focus on extra-ordinary ability can be taken as a sign of the extent of the anxiety about what we might consider normal. Normal is represented, in part, by the supporting characters, named and unnamed, and the narrative turns towards rehabilitating the altered bodies of the main characters. The narratives of social exclusion caused by such radical deviations from the normal human body suggest the lack of a script or language for being able to talk about deviation, except in terms of disability. In Spiderman, Peter Parker (Tobey Maguire) is doubly excluded in the narrative. Beginning as a classic weedy, glasses-wearing, nerdy individual, unable to “get the girl,” he is exposed to numerous acts of humiliation at the commencement of the film. On being bitten by a genetically altered spider, he acquires its speed and agility, and in a moment of “revenge” he confronts one of his tormentors. His super-ability marks him as a social outcast; his tormentors mock him saying “You are a freak” – the emphasis in speech implying that Parker has never left a freakish mode. The film emphasises the physical transformation that occurs after Parker is bitten, by showing his emaciated (and ill) body then cutting to a graphic depiction of genes being spliced into Parker’s DNA. Finally revealing his newly formed, muscular body, the framing provides the visual cues as to the verbal alignment of these bodies – the extraordinary and the impaired bodies are both sources of social disablement. 
The extreme transformation that occurs to Ben Grimm (Michael Chiklis), in Fantastic 4, can be read as a disability, buying into the long history of the disabled body as freak, and is reinforced by his being named “The Thing.” Socially, facial disfigurement may be regarded as one of the most isolating impairments; for example, films such as The Man without a Face (Mel Gibson, 1993) explicitly explore this theme. As the only character with a pre-existing relationship, Grimm’s social exclusion is reinforced by the rejection of his girlfriend when she sees his face. The isolation in naming Ben Grimm as “The Thing” is also expressed in the naming of Bruce Banner’s (Eric Bana) alter ego “Hulk.” They are grossly enlarged bodies that are seen as grotesque mutations of the “normal” human body – not human, but “thing-like.” The theme of social exclusion is played alongside the idea that those with extra-ordinary ability are also emblematic of the evolutionary dominance of a superior species, with science itself an example of human dominance. The Human Genome Project, begun in 1990 and completed in 2003, was in many ways the culmination of a century and a half of work in biochemistry, announcing that science had now completely mapped the human genome: that is, provided the complete sequence of genes on each of the 46 chromosomes in human cells. The announcement of the completed sequencing of the human genome led to what may more broadly be called “genomania” in the international press (Lombardo 193). But arguably also, the continued announcements throughout the life of the Project maintained interest in, and raised significant social, legal, and ethical questions about, genetics and its use and abuse. I suggest that in these superhero films, whose narratives centre on genetic mutation, the social exclusion of the characters is based in part on fears about genetics as the source of disability. In these films deviation becomes deviance. 
It is not my intention to reduce the important political aims of the disability movement by equating the acquisition of super-ability and physical impairment. Rather, I suggest that in the expression of the extraordinary in terms of the genetic within the films, we can detect wider social anxieties about genetic science, particularly as the representations of that science focus the audience’s attention on mutation of the genome. An earlier film, not concerned with superheroes but with the perfectibility of the human body, might prove useful here. Gattaca (Andrew Niccol, 1997), which explores the slippery moral slope of basing the value of the human body in genetic terms (the letters of the title recall the chemicals that structure DNA, abbreviated to G, A, T, C), is a powerful tale of the social consequences of the primacy of genetic perfectibility and reflects the social and ethical issues raised by the Human Genome Project. In a coda to the film that was not included in the theatrical release, we read: We have now evolved to the point where we can direct our own evolution. Had we acquired this knowledge sooner, the following people may never have been born. The screen then reveals a list of significant people who were either born with or acquired physical or psychological impairments: for example, Abraham Lincoln/Marfan Syndrome, Jackie Joyner-Kersee/Asthma, Emily Dickinson/Manic Depression. The audience is then given the stark reminder of the message of the film: “Of course the other birth that may never have taken place is your own.” The social order of Gattaca is based on “genoism” – discrimination based on one’s genetic profile – which forces characters to either alter or hide their genetic code in order to gain social and economic benefit. 
The film is an example of what the editors of the special issue of the Florida State University Law Journal on genetics and disability note: how we look at genetic conditions and their relationship to health and disability, or to notions of “normalcy” and “deviance,” is not strictly or even primarily a legal matter. Instead, the issues raised in this context involve ethical considerations and require an understanding of the social contexts in which those issues appear. (Crossley and Shepherd xi) Implicit in these commentators’ concern is the way an ideal body is assumed as the basis from which a deviation in form or ability is measured. These superhero films demonstrate that, in order to talk about super-ability as a deviation from a normal body, they rely on disability scripts as the language of deviation. Scholars in disability studies have identified a variety of ways of talking about disability. The medical model associates impairment or illness with a medical tragedy, something that must be cured. In medical terms an error is any deviation from the norm that needs to be rectified by medical intervention. By contrast, in the social constructivist model, the source of disablement is environmental, political, cultural, or economic factors. Proponents of the social model do not regard impairment as equal to inability (Karpf 80) and argue that the discourses of disability are “inevitably informed by normative beliefs about what it is proper for people’s bodies and minds to be like” (Cumberbatch and Negrine 5). Deviations from the normal body are classification errors, mistakes in social categorisation. In these films aspects of both the medical tragedy and social construction of disability can be detected. These films come at a time when disability remains a site of social and political debate. The return to these superheroes, and their experiences of exclusion, in recent films is an indicator of social anxiety about the functionality of the human body. 
And as the science of genetics gains increasing public representation, the idea of ability – and disability – that is, what is regarded as “proper” for bodies and minds, is increasingly related to how we regard the genetic code. As the twenty-first century began, new insights into the genetic origins of disease and congenital impairments offered the possibility that the previous uncertainty about the provenance of these illnesses and impairments may be eliminated. But new uncertainties have arisen around the value of human bodies in terms of ability and function. This essay has explored the way representations of extra-ordinary ability, as a mutation of the genetic code, trace some of the experiences of disablement. A study of these superhero films suggests that the popular dissemination of genetics has not resulted in an understanding of ability and form as purely bio-chemical, but that thinking about the body as a bio-chemical code occurs within already present moral discourses of the body’s value.

References

Asch, Adrienne. “Disability Equality and Prenatal Testing: Contradictory or Compatible?” Florida State University Law Review 30.2 (2003): 315-42. Crossley, Mary, and Lois Shepherd. “Genes and Disability: Questions at the Crossroads.” Florida State University Law Review 30.2 (2003): xi-xxiii. Cumberbatch, Guy, and Ralph Negrine. Images of Disability on Television. London: Routledge, 1992. Dawkins, Richard. The Selfish Gene. 30th Anniversary ed. Oxford: Oxford UP, 2006. Karpf, A. “Crippling Images.” Framed: Interrogating Disability in the Media. Eds. A. Pointon and C. Davies. London: British Film Institute, 1997. 79-83. Lombardo, Paul A. “Taking Eugenics Seriously: Three Generations Of ??? Are Enough.” Florida State University Law Review 30.2 (2003): 191-218. “Pop Culture: Out Is In.” Contemporary Sexuality 37.7 (2003): 9. Rutherford, Adam. “Return of the Mutants.” Nature 423.6936 (2003): 119. Sontag, Susan. Illness as Metaphor. London: Penguin, 1988. 
Turner, Bryan S. Regulating Bodies. London: Routledge, 1992. Vary, Adam B. “Mutant Is the New Gay.” Advocate 23 May 2006: 44-45.

Citation reference for this article

MLA Style: Mantle, Martin. “‘Have You Tried Not Being a Mutant?’: Genetic Mutation and the Acquisition of Extra-ordinary Ability.” M/C Journal 10.5 (2007). <http://journal.media-culture.org.au/0710/10-mantle.php>.

APA Style: Mantle, M. (Oct. 2007). “Have You Tried Not Being a Mutant?”: Genetic Mutation and the Acquisition of Extra-ordinary Ability. M/C Journal, 10(5). Retrieved from <http://journal.media-culture.org.au/0710/10-mantle.php>.
4

Stockwell, Stephen. "Theory-Jamming." M/C Journal 9, no. 6 (December 1, 2006). http://dx.doi.org/10.5204/mcj.2691.

Full text
Abstract:
“The intellect must not only desire surreptitious delights; it must become completely free and celebrate Saturnalia.” (Nietzsche 6) Theory-jamming suggests an array of eclectic methods, deployed in response to emerging conditions, using traditional patterns to generate innovative moves, seeking harmony and syncopation, transparent about purpose and power, aiming for demonstrable certainties while aware of their own provisional fragility. In this paper, theory-jamming is suggested as an antidote for the confusion and disarray that typifies communication theory. Communication theory as the means to conceptualise the transmission of information and the negotiation of meaning has never been a stable entity. Entrenched divisions between ‘administrative’ and ‘critical’ tendencies are played out within schools and emerging disciplines and across a range of scientific/humanist, quantitative/qualitative and political/cultural paradigms. “Of course, this is only the beginning of the mischief for there are many other polarities at play and a host of variations within polar contrasts” (Dervin, Shields and Song). This paper argues that the play of contending schools with little purchase on each other, or anything much, has turned meta-discourse about communication into an ontological spiral. Perhaps the only way to ride out this storm is to look towards communication practices that confront these issues and appreciate their theoretical underpinnings. From its roots in jazz and blues to its contemporary manifestations in rap and hip-hop and throughout the communication industries, the jam (or improvised reorganisation of traditional themes into new and striking patterns) confronts the ontological spiral in music, and life, by taking the flotsam flung out of the spiral to piece together the means to transcend the downward pull into the abyss. Many pretenders have a theory. 
Theory abounds: language theory, number theory, game theory, quantum theory, string theory, chaos theory, cyber-theory, queer theory, even conspiracy theory and, most poignantly, the putative theory of everything. But since Bertrand Russell’s unsustainable class of all classes, Gödel’s systemically unprovable propositions and Heisenberg’s uncertainty principle, the propensity for theories to fall into holes in themselves has been apparent. Nowhere is this more obvious than in communication theory, where many schools contend without actually connecting to each other. From the 1930s, as the mass media formed, there have been administrative and critical tendencies at war in the communication arena. Some point to the origins of the split in the Institute of Social Research’s Radio Project, where pragmatic sociologist Paul Lazarsfeld broke with Frankfurt School critical theorist Theodor Adorno over the quality of data. Lazarsfeld was keen to produce results while Adorno complained the data over-simplified the relationship between mass media and audiences (Rogers). From this split grew the twin disciplines of mass communication (quantitative, liberal, commercial and lost in its obsession with the measurement of minor media effects) and cultural/media studies (qualitative, post-Marxist, radical and lost in simulacra of their own devising). The complexity of interactions between these two disciplines, with the same subject matter but very different ways of thinking about it, is the foundation of the ontological black hole in communication theory. As the disciplines have spread out across universities, professional organizations and publishers, they have been used and abused for ideological, institutional and personal purposes. By the summer of 1983, the split was documented in a special issue of the Journal of Communication titled “Ferment in the Field”. 
Further, professional courses in journalism, public relations, marketing, advertising and media production have complex relations with both theoretical wings, which need the student numbers and are adept at constructing and defending new boundaries. The 90s saw any number of ‘wars’: Journalism vs Cultural Studies, Cultural Studies vs Cultural Policy Studies, Cultural Studies vs Public Relations, Public Relations vs Journalism. More recently, the study of new communication technologies has led to a profusion of nascent, neo-disciplines shadowing, mimicking and reacting with old communication studies: “Internet studies; New media studies; Digital media studies; Digital arts and culture studies; Cyberculture studies; Critical cyberculture studies; Networked culture studies; Informatics; Information science; Information society studies; Contemporary media studies” (Silver & Massanari 1). As this shower of cyberstudies spirals by, it is further warped by the split between the hard science of communication infrastructure in engineering and information technology and what the liberal arts have to offer. The early, heroic attempt to bridge this gap by Claude Shannon and, particularly, Warren Weaver was met with disdain by both sides. Weaver’s philosophical interpretation of Shannon’s mathematics, accommodating the interests of technology and of human communication together, is a useful example of how disparate ideas can connect productively. But how does a communications scholar find such connections? How can we find purchase amongst this avalanche of ideas and agendas? Where can we get the traction to move beyond twentieth century Balkanisation of communications theory to embrace the whole? An answer came to me while watching the Discovery Channel. A documentary on apes showed them leaping from branch to branch, settling on a swaying platform of leaves, eating and preening, then leaping into the void until they make another landing, settling again… until the next leap. 
They are looking for what is viable and never come to ground. Why are we concerned to ground theory which can only prove its own impossibility while disregarding the certainty of what is viable for now? I carried this uneasy insight for almost five years, until I read Nietzsche on the methods of the pre-Platonic philosophers: “Two wanderers stand in a wild forest brook flowing over rocks; the one leaps across using the stones of the brook, moving to and fro ever further… The other stands there helplessly at each moment. At first he must construct the footing that can support his heavy steps; when this does not work, no god helps him across the brook. Is it only boundless rash flight across great spaces? Is it only greater acceleration? No, it is with flights of fantasy, in continuous leaps from possibility to possibility taken as certainties; an ingenious notion shows them to him, and he conjectures that there are formally demonstrable certainties” (Nietzsche 26). Nietzsche’s advice to take the leap is salutary but theory must be more than jumping from one good idea to the next. What guidance do the practices of communication offer? Considering new forms that have developed since the 1930s, as communication theory went into meltdown, the significance of the jam is unavoidable. While the jam session began as improvised jazz and blues music for practice, fellowship and fun, it quickly became the forum for exploring new kinds of music arising from the deconstruction of the old and experimentation with technical, and ontological, possibilities. The jam arose as a spin-off of the dance music circuit in the 1930s. After the main, professional show was over, small groups would gather together in all-night dives for informal, spontaneous sessions of unrehearsed improvisation, playing for their own pleasure, “in accordance with their own esthetic [sic] standards” (Cameron 177). But the jam is much more than having a go. 
The improvisation occurs on standard melodies: “Theoretically …certain introductions, cadenzas, clichés and ensemble obbligati assume traditional associations (as) ‘folkways’… that are rarely written down but rather learned from hearing (“head jobs”)” (Cameron 178-9). From this platform of tradition, the artist must “imagine in advance the pattern which unfolds… select a part in the pattern appropriate to the occasion, instrument and personal abilities (then) produce startlingly distinctive sound patterns (that) rationalise the impossible.” The jam is founded on its very impossibility: “the jazz aesthetic is basically a paradox… traditionalism and the radical originality are irreconcilable” (Cameron 181). So how do we escape from this paradox, the same paradox that catches all communication theorists between the demands of the past and the impossibility of the future? “Experimentation is mandatory and formal rules become suspect because they too quickly stereotype and ossify” (Cameron 181). The jam seems to work because it offers the possibility of the impossible made real by the act of communication. This play between the possible and the impossible, the rumbling engine of narrative, is the dynamo of the jam. Theory-jamming seeks to activate just such a dynamo. Rather than having a group of players on their instruments, the communication theorist has access to a range of theoretical riffs and moves that can be orchestrated to respond to the question in focus, to latest developments, to contradictions or blank spaces within theoretical terrains. The theory-jammer works to their own standards, turning ideas learned from others (‘head jobs’) into their own distinctive patterns, still reliant on traditional melody, harmony and syncopation but now bent, twisted and reorganised into an entirely new story. 
The practice of following old pathways to new destinations has a long tradition in the West as eclecticism, a Graeco-Roman, particularly Alexandrian, philosophical tradition from the first century BC to the end of the classical period. Typified by Potamo who “encouraged his pupils instead to learn from a variety of masters”, eclecticism sought the best from each school, “all that teaches righteousness combined, the complete eclectic unity” (Kelley 578). By selecting the best, most reasonable, most useful elements from existing philosophical beliefs, polymaths such as Cicero sought the harmonious solution of particular problems. We see something similar to eclecticism in the East in the practices of ‘wild fox zen’ which teaches liberation from conceptual fixation (Heine). The 20th century’s most interesting eclectic was probably Walter Benjamin whose method owes something to both scientific Marxism and the Jewish Kabbalah. His hero was the rag-picker who had the cunning to create life from refuse and detritus. Benjamin’s greatest work, the unfinished Arcades Project, sought to create history from the same. It is a collection of photos, ephemera and transcriptions from books and newspapers (Benjamin). The particularity of eclecticism may be contrasted with the claim to universality of syncretism, the reconciliation of disparate or opposing beliefs by melding together various schools of thought into a new orthodoxy. Theory-jammers are not looking for a final solution but rather they seek what will work on this problem now, to come to a provisional solution, always aware that other, better, further solutions may be ahead. Elements of the jam are apparent in other contemporary forms of communication. For example bricolage, the practice from art, culture and information systems, involves tinkering elements together by trial and error, in ways not originally planned. Pastiche, from literature to the movies, mimics style while creating a new message. 
In theatre and TV comedy, improvisation has become a style in itself. Theory-jamming has direct connections with brainstorming, the practice that originated in the advertising industry to generate new ideas and solutions by kicking around possibilities. Against the hyper-administration of modern life, as the disintegration of grand theory immobilises thinkers, theory-jamming provides the means to think new thoughts. As a political activist and communications practitioner in Australia over the last thirty years, I have always been bemused by the human propensity to factionalise. Rather than getting bogged down by positions, I have sought to use administrative structures to explore critical ideas, to marshal critical approaches into administrative apparatus, to weld together critical and administrative formations in ways useful to both sides, but most importantly, in ways useful to human society and a healthy environment. I've been accused of selling-out by the critical camp and of being unrealistic by the administrative side. My response is that we have much more to learn by listening and adapting than we do by self-satisfied stasis. Five Theses on Theory-Jamming Eclecticism requires Ethnography: the eclectic is the ethnographer loose in their own mind. “The free spirit surveys things, and now for the first time mundane existence appears to it worthy of contemplation…” (Nietzsche 6). Enculturation and Enumeration need each other: qualitative and quantitative research work best when they work off each other. “Beginners learned how to establish parallels, by means of the Game’s symbols, between a piece of classical music and the formula for some law of nature. Experts and Masters of the Game freely wove the initial theme into unlimited combinations.” (Hesse) Ephemera and Esoterica tell us the most: the back-story is the real story as we stumble on the greatest truths as if by accident. 
“…the mind’s deeper currents often need to be surprised by indirection, sometimes, indeed, by treachery and ruse, as when you steer away from a goal in order to reach it more directly…” (Jameson 71). Experimentation beyond Empiricism: more than testing our sense of our sense data of the world. Communication theory extends from infra-red to ultraviolet, from silent to ultrasonic, from absolute zero to complete heat, from the sub-atomic to the inter-galactic. “That is the true characteristic of the philosophical drive: wonderment at that which lies before everyone.” (Nietzsche 6). Extravagance and Exuberance: don’t stop until you’ve got enough. Theory-jamming opens the possibility for a unified theory of communication that starts, not with a false narrative certainty, but with the gaps in communication: the distance between what we know and what we say, between what we say and what we write, between what we write and what others read back, between what others say and what we hear. References Benjamin, Walter. The Arcades Project. Cambridge, Mass: Harvard UP, 2002. Cameron, W. B. “Sociological Notes on the Jam Session.” Social Forces 33 (Dec. 1954): 177–82. Dervin, B., P. Shields and M. Song. “More than Misunderstanding, Less than War.” Paper at International Communication Association annual meeting, New York City, NY, 2005. 5 Oct. 2006 ‹http://www.allacademic.com/meta/p13530_index.html>. “Ferment in the Field.” Journal of Communication 33.3 (1983). Heine, Steven. “Putting the ‘Fox’ Back in the ‘Wild Fox Koan’: The Intersection of Philosophical and Popular Religious Elements in The Ch’an/Zen Koan Tradition.” Harvard Journal of Asiatic Studies 56.2 (Dec. 1996): 257-317. Hesse, Hermann. The Glass Bead Game. Harmondsworth: Penguin, 1972. Jameson, Fredric. “Postmodernism, or the Cultural Logic of Late Capitalism.” New Left Review 146 (1984): 53-90. Kelley, Donald R. “Eclecticism and the History of Ideas.” Journal of the History of Ideas 62.4 (Oct. 
2001): 577-592. Nietzsche, Friedrich. The Pre-Platonic Philosophers. Urbana: University of Illinois Press, 2001. Rogers, E. M. “The Empirical and the Critical Schools of Communication Research.” Communication Yearbook 5 (1982): 125-144. Shannon, C.E., and W. Weaver. The Mathematical Theory of Communication. Urbana: University of Illinois Press, 1949. Silver, David, and Adrienne Massanari. Critical Cyberculture Studies. New York: NYU P, 2006. Citation reference for this article MLA Style Stockwell, Stephen. "Theory-Jamming: Uses of Eclectic Method in an Ontological Spiral." M/C Journal 9.6 (2006). <http://journal.media-culture.org.au/0612/09-stockwell.php>. APA Style Stockwell, S. (Dec. 2006) "Theory-Jamming: Uses of Eclectic Method in an Ontological Spiral," M/C Journal, 9(6). Retrieved from <http://journal.media-culture.org.au/0612/09-stockwell.php>.
5

Colvin, Neroli. "Resettlement as Rebirth: How Effective Are the Midwives?" M/C Journal 16, no. 5 (August 21, 2013). http://dx.doi.org/10.5204/mcj.706.

Full text
Abstract:
“Human beings are not born once and for all on the day their mothers give birth to them [...] life obliges them over and over again to give birth to themselves.” (Garcia Marquez 165) Introduction The refugee experience is, at heart, one of rebirth. Just as becoming a new, distinctive being—biological birth—necessarily involves the physical separation of mother and infant, so becoming a refugee entails separation from a "mother country." This mother country may or may not be a recognised nation state; the point is that the refugee transitions from physical connectedness to separation, from insider to outsider, from endemic to alien. Like babies, refugees may have little control over the timing and conditions of their expulsion. Successful resettlement requires not one rebirth but multiple rebirths—resettlement is a lifelong process (Layton)—which in turn require hope, imagination, and energy. In rebirthing themselves over and over again, people who have fled or been forced from their homelands become both mother and child. They do not go through this rebirthing alone. A range of agencies and individuals may be there to assist, including immigration officials, settlement services, schools and teachers, employment agencies and employers, English as a Second Language (ESL) resources and instructors, health-care providers, counsellors, diasporic networks, neighbours, church groups, and other community organisations. The nature, intensity, and duration of these “midwives’” interventions—and when they occur and in what combinations—vary hugely from place to place and from person to person, but there is clear evidence that post-migration experiences have a significant impact on settlement outcomes (Fozdar and Hartley). This paper draws on qualitative research I did in 2012 in a regional town in New South Wales to illuminate some of the ways in which settlement aides ease, or impede, refugees’ rebirth as fully recognised and participating Australians. 
I begin by considering what it means to be resilient before tracing some of the dimensions of the resettlement process. In doing so, I draw on data from interviews and focus groups with former refugees, service providers, and other residents of the town I shall call Easthaven. First, though, a word about Easthaven. As is the case in many rural and regional parts of Australia, Easthaven’s population is strongly dominated by Anglo Celtic and Saxon ancestries: 2011 Census data show that more than 80 per cent of residents were born in Australia (compared with a national figure of 69.8 per cent) and about 90 per cent speak only English at home (76.8 per cent). Almost twice as many people identify as Aboriginal or Torres Strait Islander as the national figure of 2.5 per cent (Australian Bureau of Statistics). For several years Easthaven has been an official “Refugee Welcome Zone”, welcoming hundreds of refugees from diverse countries in Africa and the Middle East as well as from Myanmar. This reflects the Department of Immigration and Citizenship’s drive to settle a fifth of Australia’s 13,750 humanitarian entrants a year directly in regional areas. In Easthaven’s schools—which is where I focused my research—almost all of the ESL students are from refugee backgrounds. Defining Resilience Much of the research on human resilience is grounded in psychology, with a capacity to “bounce back” from adverse experiences cited in many definitions of resilience (e.g. American Psychological Association). Bouncing back implies a relatively quick process, and a return to a state or form similar to that which existed before the encounter with adversity. Yet resilience often requires sustained effort and significant changes in identity. As Jerome Rugaruza, a former UNHCR refugee, says of his journey from the Democratic Republic of Congo to Australia: All the steps begin in the burning village: you run with nothing to eat, no clothes. You just go. 
Then you get to the refugee camp […] You have a little bread and you thank god you are safe. Then after a few years in the camp, you think about a future for your children. You arrive in Australia and then you learn a new language, you learn to drive. There are so many steps and not everyone can do it. (Milsom) Not everyone can do it, but a large majority do. Research by Graeme Hugo, for example, shows that although humanitarian settlers in Australia face substantial barriers to employment and initially have much higher unemployment rates than other immigrants, for most nationality groups this difference has disappeared by the second generation: “This is consistent with the sacrifice (or investment) of the first generation and the efforts extended to attain higher levels of education and English proficiency, thereby reducing the barriers over time” (Hugo 35). Ingrid Poulson writes that “resilience is not just about bouncing. Bouncing […] is only a reaction. Resilience is about rising—you rise above it, you rise to the occasion, you rise to the challenge. Rising is an active choice” (47; my emphasis). I see resilience as involving mental and physical grit, coupled with creativity, aspiration and, crucially, agency. Dimensions of Resettlement To return to the story of 41-year-old Jerome Rugaruza, as related in a recent newspaper article: He [Mr Rugaruza] describes the experience of being a newly arrived refugee as being like that of a newborn baby. “You need special care; you have to learn to speak [English], eat the different food, create relationships, connections”. (Milsom) This is a key dimension of resettlement: the adult becomes like an infant again, shifting from someone who knows how things work and how to get by to someone who is likely to be, for a while, dependent on others for even the most basic things—communication, food, shelter, clothing, and social contact. 
The “special care” that most refugee arrivals need initially (and sometimes for a long time) often results in their being seen as deficient—in knowledge, skills, dispositions, and capacities as well as material goods (Keddie; Uptin, Wright and Harwood). As Fozdar and Hartley note: “The tendency to use a deficit model in refugee resettlement devalues people and reinforces the view of the mainstream population that refugees are a liability” (27). Yet unlike newborns, humanitarian settlers come to their new countries with rich social networks and extensive histories of experience and learning—resources that are in fact vital to their rebirth. Sisay (all names are pseudonyms), a year 11 student of Ethiopian heritage who was born in Kenya, told me with feeling: I had a life back in Africa [her emphasis]. It was good. Well, I would go back there if there’s no problems, which—is a fact. And I came here for a better life—yeah, I have a better life, there’s good health care, free school, and good environment and all that. But what’s that without friends? A fellow student, Celine, who came to Australia five years ago from Burundi via Uganda, told me in a focus group: Some teachers are really good but I think some other teachers could be a little bit more encouraging and understanding of what we’ve gone through, because [they] just look at you like “You’re year 11 now, you should know this” […] It’s really discouraging when [the teachers say] in front of the class, “Oh, you shouldn’t do this subject because you haven’t done this this this this” […] It’s like they’re on purpose to tell you “you don’t have what it takes; just give up and do something else.” As Uptin, Wright and Harwood note, “schools not only have the power to position who is included in schooling (in culture and pedagogy) but also have the power to determine whether there is room and appreciation for diversity” (126). 
Both Sisay and Celine were disheartened by the fact they felt some of their teachers, and many of their peers, had little interest in or understanding of their lives before they came to Australia. The teachers’ low expectations of refugee-background students (Keddie, Uptin, Wright and Harwood) contrasted with the students’ and their families’ high expectations of themselves (Brown, Miller and Mitchell; Harris and Marlowe). When I asked Sisay about her post-school ambitions, she said: “I have a good idea of my future […] write a documentary. And I’m working on it.” Celine’s response was: “I know I’m gonna do medicine, be a doctor.” A third girl, Lily, who came to Australia from Myanmar three years ago, told me she wanted to be an accountant and had studied accounting at the local TAFE last year. Joseph, a father of three who resettled from South Sudan seven years ago, stressed how important getting a job was to successful settlement: [But] you have to get a certificate first to get a job. Even the job of cleaning—when I came here I was told that somebody has to go to have training in cleaning, to use the different chemicals to clean the ground and all that. But that is just sweeping and cleaning with water—you don’t need the [higher-level] skills. Simple jobs like this, we are not able to get them. In regional Australia, employment opportunities tend to be limited (Fozdar and Hartley); the unemployment rate in Easthaven is twice the national average. Opportunities to study are also more limited than in urban centres, and would-be students are not always eligible for financial assistance to gain or upgrade qualifications. Even when people do have appropriate qualifications, work experience, and language proficiency, the colour of their skin may still mean they miss out on a job. 
Tilbury and Colic-Peisker have documented the various ways in which employers deflect responsibility for racial discrimination, including the “common” strategy (658) of arguing that while the employer or organisation is not prejudiced, they have to discriminate because of their clients’ needs or expectations. I heard this strategy deployed in an interview with a local businesswoman, Catriona: We were advertising for a new technician. And one of the African refugees came to us and he’d had a lot of IT experience. And this is awful, but we felt we couldn't give him the job, because we send our technicians into people's houses, and we knew that if a black African guy rocked up at someone’s house to try and fix their computer, they would not always be welcomed in all—look, it would not be something that [Easthaven] was ready for yet. Colic-Peisker and Tilbury (Refugees and Employment) note that while Australia has strict anti-discrimination legislation, this legislation may be of little use to the people who, because of the way they look and sound (skin colour, dress, accent), are most likely to face prejudice and discrimination. The researchers found that perceived discrimination in the labour market affected humanitarian settlers’ sense of satisfaction with their new lives far more than, for example, racist remarks, which were generally shrugged off; the students I interviewed spoke of racism as “expected,” but “quite rare.” Most of the people Colic-Peisker and Tilbury surveyed reported finding Australians “friendly and accepting” (33). 
Even if there is no active discrimination on the basis of skin colour in employment, education, or housing, or overt racism in social situations, visible difference can still affect a person’s sense of belonging, as Joseph recounts: I think of myself as Australian, but my colour doesn’t [laughs] […] Unfortunately many, many Australians are expecting that Australia is a country of Europeans … There is no need for somebody to ask “Where do you come from?” and “Do you find Australia here safe?” and “Do you enjoy it?” Those kind of questions doesn’t encourage that we are together. This highlights another dimension of resettlement: the journey from feeling “at home” to feeling “foreign” to, eventually, feeling at home again in the host country (Colic-Peisker and Tilbury, Refugees and Employment). In the case of visibly different settlers, however, this last stage may never be completed. Whether the questions asked of Joseph are well intentioned or not, their effect may be the same: they position him as a “forever foreigner” (Park). A further dimension of resettlement—one already touched on—is the degree to which humanitarian settlers actively manage their “rebirth,” and are allowed and encouraged to do so. A key factor will be their mastery of English, and Easthaven’s ESL teachers are thus pivotal in the resettlement process. There is little doubt that many of these teachers have gone to great lengths to help this cohort of students, not only in terms of language acquisition but also social inclusion. However, in some cases what is initially supportive can, with time, begin to undermine refugees’ maturity into independent citizens. 
Sharon, an ESL teacher at one of the schools, told me how she and her colleagues would give their refugee-background students lifts to social events: But then maybe three years down the track they have a car and their dad can drive, but they still won’t take them […] We arrive to pick them up and they’re not ready, or there’s five fantastic cars in the driveway, and you pick up the student and they say “My dad’s car’s much bigger and better than yours” [laughs]. So there’s an expectation that we’ll do stuff for them, but we’ve created that [my emphasis]. Other support services may have more complex interests in keeping refugee settlers dependent. The more clients an agency has, the more services it provides, and the longer clients stay on its books, the more lucrative the contract for the agency. Thus financial and employment imperatives promote competition rather than collaboration between service providers (Fozdar and Hartley; Sidhu and Taylor) and may encourage assumptions about what sorts of services different individuals and groups want and need. Colic-Peisker and Tilbury (“‘Active’ and ‘Passive’ Resettlement”) have developed a typology of resettlement styles—“achievers,” “consumers,” “endurers,” and “victims”—but stress that a person’s style, while influenced by personality and pre-migration factors, is also shaped by the institutions and individuals they come into contact with: “The structure of settlement and welfare services may produce a victim mentality, leaving members of refugee communities inert and unable to see themselves as agents of change” (76). The prevailing narrative of “the traumatised refugee” is a key aspect of this dynamic (Colic-Peisker and Tilbury, “‘Active’ and ‘Passive’ Resettlement”; Fozdar and Hartley; Keddie). Service providers may make assumptions about what humanitarian settlers have gone through before arriving in Australia, how they have been affected by their experiences, and what must be done to “fix” them. 
Norah, a long-time caseworker, told me: I think you get some [providers] who go, “How could you have gone through something like that and not suffered? There must be—you must have to talk about this stuff” […] Where some [refugees] just come with the [attitude] “We’re all born into a situation; that was my situation, but I’m here now and now my focus is this.” She cited failure to consider cultural sensitivities around mental illness and to recognise that stress and anxiety during early resettlement are normal (Tilbury) as other problems in the sector: [Newly arrived refugees] go through the “happy to be here” [phase] and now “hang on, I’ve thumped to the bottom and I’m missing my own foods and smells and cultures and experiences”. I think sometimes we’re just too quick to try and slot people into a box. One factor that appears to be vital in fostering and sustaining resilience is social connection. Norah said her clients were “very good on the mobile phone” and had links “everywhere,” including to family and friends in their countries of birth, transition countries, and other parts of Australia. A 2011 report for DIAC, Settlement Outcomes of New Arrivals, found that humanitarian entrants to Australia were significantly more likely to be members of cultural and/or religious groups than other categories of immigrants (Australian Survey Research). I found many examples of efforts to build both bonding and bridging capital (Putnam) in Easthaven, and I offer two examples below. Several people told me about a dinner-dance that had been held a few weeks before one of my visits. The event was organised by an African women’s group, which had been formed—with funding assistance—several years before. The dinner-dance was advertised in the local newspaper and attracted strong interest from a broad cross-section of Easthaveners. To Debbie, a counsellor, the response signified a “real turnaround” in community relations and was a big boon to the women’s sense of belonging. 
Erica, a teacher, told me about a cultural exchange day she had organised between her bush school—where almost all of the children are Anglo Australian—and ESL students from one of the town schools: At the start of the day, my kids were looking at [the refugee-background students] and they were scared, they were saying to me, "I feel scared." And we shoved them all into this tiny little room […] and they had no choice but to sit practically on top of each other. And by the end of the day, they were hugging each other and braiding their hair and jumping and playing together. Like Uptin, Wright and Harwood, I found that the refugee-background students placed great importance on the social aspects of school. Sisay, the girl I introduced earlier in this paper, said: “It’s just all about friendship and someone to be there for you […] We try to be friends with them [the non-refugee students] sometimes but sometimes it just seems they don’t want it.” Conclusion A 2012 report on refugee settlement services in NSW concludes that the state “is not meeting its responsibility to humanitarian entrants as well as it could” (Audit Office of New South Wales 2); moreover, humanitarian settlers in NSW are doing less well on indicators such as housing and health than humanitarian settlers in other states (3). Evaluating the effectiveness of formal refugee-centred programs was not part of my research and is beyond the scope of this paper. Rather, I have sought to reveal some of the ways in which the attitudes, assumptions, and everyday practices of service providers and members of the broader community impact on refugees' settlement experience. What I heard repeatedly in the interviews I conducted was that it was emotional and practical support (Matthews; Tilbury), and being asked as well as told (about their hopes, needs, desires), that helped Easthaven’s refugee settlers bear themselves into fulfilling new lives. References Audit Office of New South Wales. 
Settling Humanitarian Entrants in New South Wales—Executive Summary. May 2012. 15 Aug. 2013 ‹http://www.audit.nsw.gov.au/ArticleDocuments/245/02_Humanitarian_Entrants_2012_Executive_Summary.pdf.aspx?Embed=Y>. Australian Bureau of Statistics. 2011 Census QuickStats. Mar. 2013. 11 Aug. 2013 ‹http://www.censusdata.abs.gov.au/census_services/getproduct/census/2011/quickstat/0>. Australian Survey Research. Settlement Outcomes of New Arrivals—Report of Findings. Apr. 2011. 15 Aug. 2013 ‹http://www.immi.gov.au/media/publications/research/_pdf/settlement-outcomes-new-arrivals.pdf>. Brown, Jill, Jenny Miller, and Jane Mitchell. “Interrupted Schooling and the Acquisition of Literacy: Experiences of Sudanese Refugees in Victorian Secondary Schools.” Australian Journal of Language and Literacy 29.2 (2006): 150-62. Colic-Peisker, Val, and Farida Tilbury. “‘Active’ and ‘Passive’ Resettlement: The Influence of Supporting Services and Refugees’ Own Resources on Resettlement Style.” International Migration 41.5 (2004): 61-91. ———. Refugees and Employment: The Effect of Visible Difference on Discrimination—Final Report. Perth: Centre for Social and Community Research, Murdoch University, 2007. Fozdar, Farida, and Lisa Hartley. “Refugee Resettlement in Australia: What We Know and Need To Know.” Refugee Survey Quarterly 4 Jun. 2013. 12 Aug. 2013 ‹http://rsq.oxfordjournals.org/search?fulltext=fozdar&submit=yes&x=0&y=0>. Garcia Marquez, Gabriel. Love in the Time of Cholera. London: Penguin Books, 1989. Harris, Vandra, and Jay Marlowe. “Hard Yards and High Hopes: The Educational Challenges of African Refugee University Students in Australia.” International Journal of Teaching and Learning in Higher Education 23.2 (2011): 186-96. Hugo, Graeme. A Significant Contribution: The Economic, Social and Civic Contributions of First and Second Generation Humanitarian Entrants—Summary of Findings. Canberra: Department of Immigration and Citizenship, 2011. Keddie, Amanda. 
“Pursuing Justice for Refugee Students: Addressing Issues of Cultural (Mis)recognition.” International Journal of Inclusive Education 16.12 (2012): 1295-1310. Layton, Robyn. “Building Capacity to Ensure the Inclusion of Vulnerable Groups.” Creating Our Future conference, Adelaide, 28 Jul. 2012. Milsom, Rosemarie. “From Hard Luck Life to the Lucky Country.” Sydney Morning Herald 20 Jun. 2013. 12 Aug. 2013 <http://www.smh.com.au/national/from-hard-luck-life-to-the-lucky-country-20130619-2oixl.html>. Park, Gilbert C. “‘Are We Real Americans?’: Cultural Production of Forever Foreigners at a Diversity Event.” Education and Urban Society 43.4 (2011): 451-67. Poulson, Ingrid. Rise. Sydney: Pan Macmillan Australia, 2008. Putnam, Robert D. Bowling Alone: The Collapse and Revival of American Community. New York: Simon & Schuster, 2000. Sidhu, Ravinder K., and Sandra Taylor. “The Trials and Tribulations of Partnerships in Refugee Settlement Services in Australia.” Journal of Education Policy 24.6 (2009): 655-72. Tilbury, Farida. “‘I Feel I Am a Bird without Wings’: Discourses of Sadness and Loss among East Africans in Western Australia.” Identities: Global Studies in Culture and Power 14.4 (2007): 433-58. ———, and Val Colic-Peisker. “Deflecting Responsibility in Employer Talk about Race Discrimination.” Discourse & Society 17.5 (2006): 651-76. Uptin, Jonnell, Jan Wright, and Valerie Harwood. “It Felt Like I Was a Black Dot on White Paper: Examining Young Former Refugees’ Experience of Entering Australian High Schools.” The Australian Educational Researcher 40.1 (2013): 125-37.
APA, Harvard, Vancouver, ISO, and other styles
6

Masson, Sophie Veronique. "Fairy Tale Transformation: The Pied Piper Theme in Australian Fiction." M/C Journal 19, no. 4 (August 31, 2016). http://dx.doi.org/10.5204/mcj.1116.

Full text
Abstract:
The traditional German tale of the Pied Piper of Hamelin inhabits an ambiguous narrative borderland, a liminal space between fact and fiction, fantasy and horror, concrete details and elusive mystery. In his study of the Pied Piper in Tradition and Innovation in Folk Literature, Wolfgang Mieder describes how manuscripts and other evidence appear to confirm the historical base of the story. Precise details from a fifteenth-century manuscript, based on earlier sources, specify that in 1284 on the 26th of June, the feast-day of Saints John and Paul, 130 children from Hamelin were led away by a piper clothed in many colours to the Koppen Hill, and there vanished (Mieder 48). Later manuscripts add details familiar today, such as a plague of rats and a broken bargain with burghers as a motive for the Piper’s actions, while in the seventeenth century the first English-language version advances what might also be the first attempt at a “rational” explanation for the children’s disappearance, claiming that they were taken to Transylvania. The uncommon pairing of such precise factual detail with enigmatic mystery has encouraged many theories. These have ranged from references to the Children’s Crusade, or other religious fervours, to the devastation caused by the Black Death, from the colonisation of Romania by young German migrants to a murderous rampage by a paedophile. Fictional interpretations of the story have multiplied, with the classic versions of the Brothers Grimm and Robert Browning being most widely known, but with contemporary creators exploring the theme too. This includes interpretations in Hamelin itself. On 26 June 2015, in Hamelin Museum, I watched a wordless five-minute play, entirely performed not by humans but by animatronic stylised figures built out of scrap iron, against a montage of multilingual, confused voices and eerie music, with the vanished children represented by a long line of small empty shirts floating by. 
The uncanny, liminal nature of the story was perfectly captured. Australia is a world away from German fairy tale mysteries, historically, geographically, and culturally. Yet, as Lisa M. Fiander has persuasively argued, contemporary Australian fiction has been more influenced by fairy tales than might be assumed, and in this essay it is proposed that major motifs from the Pied Piper appear in several Australian novels, transformed not only by distance of setting and time from that of the original narrative, but also by elements specific to the Australian imaginative space. These motifs are lost children, the enigmatic figure of the Piper himself, and the power of a very particular place (as Hamelin and its Koppen Hill are particularised in the original tale). Three major Australian novels will be examined in this essay: Joan Lindsay’s Picnic at Hanging Rock (1967), Christopher Koch’s The Doubleman (1985), and Ursula Dubosarsky’s The Golden Day (2011). Dubosarsky’s novel was written for children; both Koch’s and Lindsay’s novels were published as adult fiction. In each of these works of fiction, the original tale’s motifs have been developed and transformed to express unique evocations of the Pied Piper theme. As noted by Fiander, fiction writers are “most likely to draw upon fairy tales when they are framing, in writing, a subject that generates anxiety in their culture” (158). Her analysis is about anxieties of place within Australian fiction, but this insight could be usefully extended to the motifs which I have identified as inherent in the Pied Piper story. Prominent among these is the lost children motif, whose importance in the Australian imagination has been well-established by scholars such as Peter Pierce. Pierce’s The Country of Lost Children: An Australian Anxiety explores this preoccupation from the earliest beginnings of European settlement, through analysis of fiction, newspaper reports, paintings, and films. 
As Pierce observed in a later interview in the Sydney Morning Herald (Knox), over time the focus changed from rural children and the nineteenth-century fear of the vast impersonal nature of the bush, where children of colonists could easily get lost, to urban children and the contemporary fear of human predators. In each of the three novels under examination in this essay, lost children—whether literal or metaphorical—feature prominently. Writer Carmel Bird, whose fiction has also frequently centred on the theme of the lost child, observes in “Dreaming the Place” that the lost child, the stolen child – this must be a narrative that is lodged in the heart and imagination, nightmare and dream, of all human beings. In Australia the nightmare became reality. The child is the future, and if the child goes, there can be no future. The true stories and the folk tales on this theme are mirror images of each other. (7) The motif of lost children—and of children in danger—is not unique to the Pied Piper. Other fairy tales, such as Hansel and Gretel and Little Red Riding Hood, contain it, and it is those antecedents which Bird cites in her essay. But within the Pied Piper story it has three features which distinguish it from other traditional tales. First, unlike in the classic versions of Hansel and Gretel or Red Riding Hood, the children do not return. Neither are there bodies to find. The children have vanished into thin air, never to be seen again. Second, it is not only parents who have lost them, but an entire community whose future has been snatched away: a community once safe, ordered, even complacent, traumatised by loss. The lack of hope, of a happy ending for anyone, is striking. And thirdly, the children are not lost or abandoned or even, strictly speaking, stolen: they are lured away, semi-willingly, by the central yet curiously marginal figure of the Piper himself.
In the original story there is no mention of motive and no indication of malice on the part of the Piper. There is only his inexplicable presence, a figure out of fairy folklore appearing in the midst of concrete historical dates and numbers. Clearly, he links to the liminal, complex world of the fairies, found in folklore around the world—beings from a world close to the human one, yet alien. Whimsical and unpredictable by human standards, such beings are nevertheless bound by mysteriously arbitrary rules and taboos, and haunt the borders of the human world, disturbing its rational edges and transforming lives forever. It is this sense of disturbance, that enchanting yet frightening sudden shifting of the border of reality and of the comforting order of things, the essence of transformation itself, which can also be seen at the core of the three novels under examination in this essay, with the Piper represented in each of them but in different ways. The third motif within the Pied Piper is a focus on place as a source of uncanny power, a theme which particularly resonates within an Australian context. Fiander argues that if contemporary British fiction writers use fairy tale to explore questions of community and alienation, and Canadian fiction writers use it to explore questions of identity, then Australian writers use it to explore the unease of place. She writes of the enduring legacy of Australia’s history “as a settler colony which invests the landscape with strangeness for many protagonists” (157). Furthermore, she suggests that “when Australian fiction writers, using fairy tales, describe the landscape as divorced from reality, they might be signalling anxiety about their own connection with the land which had already seen tens of thousands of years of occupation when Captain James Cook ‘found’ it in 1770” (160). 
I would argue, however, that in the case of the Pied Piper motifs, it is less clear that it is solely settler anxieties which are driving the depiction of the power of place in these three novels. There is no divorce from reality here, but rather an eruption of the metaphysical potency of place within the usual, “normal” order of reality. This follows the pattern of the original tale, where the Piper and all the children, except for one or two stragglers, disappear at Koppen Hill, vanishing literally into the hill itself. In traditional European folklore, hollow hills are associated with fairies and their uncanny power, but other places, especially those of water—springs, streams, even the sea—may also be associated with their liminal world (in the original tale, the River Weser is another important locus for power). In Joan Lindsay’s Picnic at Hanging Rock, it is another outcrop in the landscape which holds that power and claims the “lost children.” Inspired partly by a painting by nineteenth-century Australian artist William Ford, titled At the Hanging Rock (1875), depicting a group of elegant people picnicking in the bush, this influential novel, which inspired an equally successful film adaptation, revolves around an incident in 1900 when four girls from Appleyard College, an exclusive school in Victoria, disappear with one of their teachers whilst climbing Hanging Rock, where they have gone for a picnic. Only one of their number, a girl called Irma, is ever found, and she has no memory of how and why she found herself on the Rock, and what has happened to the others. 
This inexplicable event is the precursor to a string of tragedies which leads to the violent deaths of several people, and which transforms the sleepy and apparently content little community around Appleyard College into a centre of loss, horror, and scandal. Told in a way which makes it appear that the novelist is merely recounting a true story—Lindsay even tells readers in an author’s note that they must decide for themselves if it is fact or fiction—Picnic at Hanging Rock shares the disturbingly liminal fact-fiction territory of the Piper tale. Many readers did in fact believe that the novel was based on historical events and combed newspaper files, attempting to propound ingenious “rational” explanations for what happened on the Rock. Picnic at Hanging Rock has been the subject of many studies, with the novel being analysed through various prisms, including the Gothic, the pastoral, historiography, and philosophy. In “Fear and Loathing in the Australian Bush,” Kathleen Steele has depicted Picnic at Hanging Rock as embodying the idea that “Ordered ‘civilisation’ cannot overcome the gothic landscapes of settler imaginations: landscapes where time and people disappear” (44). She proposes that Lindsay intimates that the landscape swallows the “lost children” of the novel because there is a great absence in that place: that of Aboriginal people. In this reading of the novel, it is that absence which becomes, in a sense, a malevolent presence that will reach out beyond the initial disappearance of the three people on the Rock to destroy the bonds that held the settler community together. It is a powerfully-made argument, which has been taken up by other scholars and writers, including studies which link the theme of the novel with real-life lost-children cases such as that of Azaria Chamberlain, who disappeared near another “Rock” of great Indigenous metaphysical potency—Uluru, or Ayers Rock.
However, to date there has been little exploration of the fairy tale quality of the novel, and none at all of the striking ways in which it evokes Pied Piper motifs, whilst transforming them to suit the exigencies of its particular narrative world. The motif of lost children disappearing from an ordered, safe, even complacent community into a place of mysterious power is extended into an exploration of the continued effects of those disappearances, depicting the disastrous impact on those left behind and the wider community in a way that the original tale does not. There is no literal Pied Piper figure in this novel, though various theories are evoked by characters as to who might have lured the girls and their teacher, and who might be responsible for the disappearances. Instead, there is a powerful atmosphere of inevitability and enchantment within the landscape itself which both illustrates the potency of place, and exemplifies the Piper’s hold on his followers. In Picnic at Hanging Rock, place and Piper are synonymous: the Piper has been transformed into the land itself. Yet this is not the “vast impersonal bush,” nor is it malevolent or vengeful. It is a living, seductive metaphysical presence: “Everything, if only you could see it clearly enough, is beautiful and complete . . .” (Lindsay 35). Just as in the original tale, the lost children follow the “Piper” willingly, without regret. Their disappearance is a happiness to them, in that moment, as it is for the lost children of Hamelin, and quite unlike how it must be for those torn apart by that loss—the community around Appleyard, the townspeople of Hamelin. Music, long associated with fairy “takings,” is also a subtle feature of the story. In the novel, just before the luring, Irma hears a sound like the beating of far-off drums. 
In the film, which more overtly evokes fairy tale elements than does the novel, it is noteworthy that the music at that point is based on traditional tunes for Pan-pipes, played by the great Romanian piper Gheorghe Zamfir. The ending of the novel, with questions left unanswered, and lives blighted by the forever-inexplicable, may be seen as also following the trajectory of the original tale. Readers as much as the fictional characters are left with an enigma that continues to perplex and inspire. Picnic at Hanging Rock was one of the inspirations for another significant Australian fiction, this time a contemporary novel for children. Ursula Dubosarsky’s The Golden Day (2011) is an elegant and subtle short novel, set in Sydney at an exclusive girls’ school, in 1967. Like the earlier novel, The Golden Day is also partly inspired by visual art, in this case the Schoolgirl series of paintings by Charles Blackman. Combining a fairy tale atmosphere with historical details—the Vietnam War, the hanging of Ronald Ryan, the drowning of Harold Holt—the story is told through the eyes of several girls, especially one, known as Cubby. The Golden Day echoes the core narrative patterns of the earlier novel, but intriguingly transformed: a group of young girls goes with their teacher on an outing to a mysterious place (in this case, a cave on the beach—note the potent elements of rock and water, combined), and something inexplicable happens which results in a disappearance. Only this time, the girls are much younger than the characters of Lindsay’s novel, pre-pubertal in fact at eleven years old, and it is their teacher, a young, idealistic woman known only as Miss Renshaw, who disappears, apparently into thin air, with only an amber bead from her necklace ever found. But it is not only Miss Renshaw who vanishes: so too does Morgan, a poet and gardener who is also her secret lover.
Later, with the revelation of a dark past, he is suspected in absentia of being responsible for Miss Renshaw’s vanishment, with implications of rape and murder, though her body is never found. Morgan, who could partly figure as the Piper, is described early on in the novel as having “beautiful eyes, soft, brown, wet with tears, like a stuffed toy” (Dubosarsky 11). This disarming image may seem a world away from the ambiguously disturbing figure of the legendary Piper, yet not only does it fit with the children’s naïve perception of the world, it also echoes the fact that the children in the original story were not afraid of the Piper, but followed him willingly. However, that is complicated by the fact that Morgan does not lure the children; it is Miss Renshaw who follows him—and the children follow her, who could be seen as the other half of the Piper. The Golden Day similarly transforms the other Piper motifs in its own original way. The children are only literally lost for a short time, when their teacher vanishes and they are left to make their own way back from the cave; yet it could be argued that metaphorically, the girls are “lost” to childhood from that moment, in terms of never being able to go back to the state of innocence in which they were before that day. Their safe, ordered school community will never be the same again, haunted by the inexplicability of the events of that day. Meanwhile, the exploration of Australian place—the depiction of the Memorial Gardens where Miss Renshaw enjoins them to write poetry, the uncomfortable descent over rocks to the beach, and the fateful cave—is made through the eyes of children, not the adolescents and adults of Picnic at Hanging Rock. The girls are not yet in that liminal space which is adolescence and so their impressions of what the places represent are immediate, instinctive, yet confused. 
They don’t like the cave and can’t wait to get out of it, whereas the beach inspires them with a sense of freedom and the gardens with a sense of enchantment. But in each place, those feelings are mixed both with ordinary concerns and with seemingly random associations that are nevertheless potently evocative. For example, in the cave, Cubby senses a threateningly weightless atmosphere, a feeling of reality shifting, which she associates, apparently confusedly, with the hanging of Ronald Ryan, reported that very day. In this way, Dubosarsky subtly gestures towards the sinister inevitability of the following events, and creates a growing tension that will eventually fade but never fully dissipate. At the end, the novel takes an unexpected turn which is as destabilising as the ending of the Pied Piper story, and as open-ended in its transformative effects as the original tale: “And at that moment Cubby realised she was not going to turn into the person she had thought she would become. There was something inside her head now that would make her a different person, though she scarcely understood what it was” (Dubosarsky 148). The eruption of the uncanny into ordinary life will never leave her now, as it will never leave the other girls who followed Miss Renshaw and Morgan into the literally hollow hill of the cave and emerged alone into a transformed world. It isn’t just childhood that Cubby has lost but also any possibility of a comforting sense of the firm borders of reality. 
As in the Pied Piper, ambiguity and loss combine to create questions which cannot be logically answered, only dimly apprehended. Christopher Koch’s 1985 novel The Doubleman, winner of the Miles Franklin Award, also explores the power of place and the motif of lost children, but unlike the other two novels examined in this essay depicts an actual “incarnated” Piper motif in the mysteriously powerful figure of Clive Broderick, brilliant guitarist and charismatic teacher/guru, whose office, significantly, is situated in a subterranean space of knowledge—a basement room beneath a bookshop. At once central and peripheral to the main action of the novel, touched with hints of the supernatural which never veer into overt fantasy, Broderick remains an enigma to the end. Set, like The Golden Day, in the 1960s, The Doubleman is narrated in the first person by Richard Miller, in adulthood a producer of a successful folk-rock group, the Rymers, but in childhood an imaginative, troubled polio survivor, with a crutch and a limp. It is noteworthy here that in the Grimms’ version of the Pied Piper, two children are left behind, despite following the Piper: one is blind, one is lame. And it is the lame boy who tells the townspeople what he glimpsed at Koppen Hill. In creating the character of Broderick, the author blends the traditional tropes of the Piper figure with Mephistophelian overtones and a strong influence from fairy lore, specifically the idea of the “doubleman,” here drawn from the writings of the seventeenth-century Scottish pastor, the Reverend Robert Kirk of Aberfoyle. Kirk’s 1691 book The Secret Commonwealth of Elves, Fauns and Fairies is the earliest known serious attempt at objective description of the fairy beliefs of Gaelic-speaking Highlanders. His own precisely dated life-story and ambiguous end—it is said he did not die but is forever a prisoner of the fairies—has eerie parallels to the Piper story.
“And there is the uncanny, powerful and ambiguous fact of the matter. Here is a man, named, born, lived, who lived a fairy story, really lived it: and in the popular imagination, he lives still” (Masson). Both in his creative and his non-fiction work Koch frequently evoked what he called “the Otherland,” which he depicted as a liminal, ambiguous, destabilising but nevertheless very real and potent presence only thinly veiled by the everyday world. This Otherland is not the same in all his fictions, but is always part of an actual place, whether that be Java in The Year of Living Dangerously, Hobart and Sydney in The Doubleman, Tasmania, Vietnam and Cambodia in Highways to a War, and Ireland and Tasmania in Out of Ireland. It is this sense of the “Otherland” below the surface, a fairy tale, mythical realm beyond logic or explanation, which gives his work its distinctive and particular power. And in The Doubleman, this motif, set within a vividly evoked real world, complete with precise period detail, transforms the Piper figure into one which could easily appear in a Hobart lane, yet which loses none of its uncanny potency. As Noel Henricksen writes in his study of Koch’s work, Island and Otherland, “Behind the membrane of Hobart is Otherland, its manifestations a spectrum stretched between the mystical and the spiritually perverted” (213). This is Broderick’s first appearance, described through twelve-year-old Richard Miller’s eyes: Tall and thin in his long dark overcoat, he studied me for the whole way as he approached, his face absolutely serious . . . The man made me uneasy to a degree for which there seemed to be no explanation . . . I was troubled by the notion that he was no ordinary man going to work at all: that he was not like other people, and that his interest couldn’t be explained so simply. (Koch, Doubleman 3) That first encounter is followed by another, more disturbing still, when Broderick speaks to the boy, eyes fixed on him: “. . .
hooded by drooping lids, they were entirely without sympathy, yet nevertheless interested, and formidably intelligent” (5). The sense of danger that Broderick evokes in the boy could be explained by a sinister hint of paedophilia. But though Broderick is a predator of sorts on young people, nothing is what it seems; no rational explanation encompasses the strange effect of his presence. It is not until Richard is a young man, in the company of his musical friend Brian Brady, that he comes across Broderick again. The two young men are looking in the window of a music shop, when Broderick appears beside them, and as Richard observes, just as in a fairy tale, “He didn’t seem to have changed or aged . . .” (44). But the shock of his sudden re-appearance is mixed with something else now, as Broderick engages Brady in conversation, ignoring Richard, “. . . as though I had failed some test, all that time ago, and the man had no further use for me” (45). What happens next, as Broderick demonstrates his musical prowess, becomes Brady’s teacher, and introduces them to his disciple, young bass player Darcy Burr, will change the young men’s lives forever and set them on a path that leads both to great success and to living nightmare, even after Broderick’s apparent disappearance, for Burr will take on the Piper’s mantle. Koch’s depiction of the lost children motif is distinctively different to the other two novels examined in this essay. Their fate is not so much a mystery as a tragedy and a warning. The lost children of The Doubleman are also lost children of the sixties, bright, talented young people drawn through drugs, immersive music, and half-baked mysticism into darkness and horrifying violence. In his essay “California Dreaming,” published in the collection Crossing the Gap, Koch wrote about this subterranean aspect of the sixties, drawing a connection between it and such real-life sinister “Pipers” as Charles Manson (60).
Broderick and Burr are not the same as the serial killer Manson, of course; but the spell they cast over the “lost children” who follow them is only different in degree, not in kind. At the end of the novel, the spell is broken and the world is again transformed. Yet fittingly it is a melancholy transformation: an end of childhood dreams of imaginative potential, as well as dangerous illusions: “And I knew now that it was all gone—like Harrigan Street, and Broderick, and the district of Second-Hand” (Koch, Doubleman 357). The power of place, the last of the Piper motifs, is also deeply embedded in The Doubleman. In fact, as with the idea of Otherland, place—or Island, as Henricksen evocatively puts it—is a recurring theme in Koch’s work. He identified primarily and specifically as a Tasmanian writer rather than as simply Australian, pointing out in an essay, “The Lost Hemisphere,” that because of its landscape and latitude, different to the mainland of Australia, Tasmania “genuinely belongs to a different region from the continent” (Crossing the Gap 92). In The Doubleman, Richard Miller imbues his familiar and deeply loved home landscape with great mystical power, a power which is both inherent within it as it is and expressive of the Otherland. In “A Tasmanian Tone,” another essay from Crossing the Gap, Koch describes that tone as springing “from a sense of waiting in the landscape: the tense yet serene expectancy of some nameless revelation” (118). But Koch could also write evocatively of landscapes other than Tasmanian ones. The unnerving climax of The Doubleman takes place in Sydney—significantly, as in The Golden Day, in a liminal, metaphysically charged place of rocks and water. That place, which is real, is called Point Piper. In conclusion, the original tale’s three main motifs—lost children, the enigma of the Piper, and the power of place—have been explored in distinctive ways in each of the three novels examined in this article.
Contemporary Australia may be a world away from medieval Germany, but the uncanny liminality and capacious ambiguity of the Pied Piper tale has made it resonate potently within these major Australian fictions. Transformed and transformative within the Australian imagination, the theme of the Pied Piper threads like a faintly-heard snatch of unearthly music through the apparently mimetic realism of the novels, destabilising readers’ expectations and leaving them with subversively unanswered questions. References Bird, Carmel. “Dreaming the Place: An Exploration of Antipodean Narratives.” Griffith Review 42 (2013). 1 May 2016 <https://griffithreview.com/articles/dreaming-the-place/>. Dubosarsky, Ursula. The Golden Day. Sydney: Allen and Unwin, 2011. Fiander, Lisa M. “Writing in A Fairy Story Landscape: Fairy Tales and Contemporary Australian Fiction.” Journal of the Association for the Study of Australian Literature 2 (2003). 30 April 2016 <http://openjournals.library.usyd.edu.au/index.php/JASAL/index>. Henricksen, Noel. Island and Otherland: Christopher Koch and His Books. Melbourne: Educare, 2003. Knox, Malcolm. “A Country of Lost Children.” Sydney Morning Herald 15 Aug. 2009. 1 May 2016 <http://www.smh.com.au/national/a-country-of-lost-children-20090814-el8d.html>. Koch, Christopher. The Doubleman. 1985. Sydney: Minerva, 1996. Koch, Christopher. Crossing the Gap: Memories and Reflections. 1987. Sydney: Vintage, 2000. Lindsay, Joan. Picnic at Hanging Rock. 1967. Melbourne: Penguin, 1977. Masson, Sophie. “Captive in Fairyland: The Strange Case of Robert Kirk of Aberfoyle.” Nation and Federation in the Celtic World: Papers from the Fourth Australian Conference of Celtic Studies, University of Sydney, June–July 2001. Ed. Pamela O’Neil. Sydney: University of Sydney Celtic Studies Foundation, 2003. Mieder, Wolfgang. “The Pied Piper: Origin, History, and Survival of a Legend.” Tradition and Innovation in Folk Literature. 1987. London: Routledge Revivals, 2015. Pierce, Peter.
The Country of Lost Children: An Australian Anxiety. Cambridge: Cambridge UP, 1999. Steele, Kathleen. “Fear and Loathing in the Australian Bush: Gothic Landscapes in Bush Studies and Picnic at Hanging Rock.” Colloquy 20 (2010): 33–56. 27 July 2016 <http://artsonline.monash.edu.au/wp-content/arts/files/colloquy/colloquy_issue_20_december_2010/steele.pdf>.
7

Caudwell, Catherine Barbara. "Cute and Monstrous Furbys in Online Fan Production." M/C Journal 17, no. 2 (February 28, 2014). http://dx.doi.org/10.5204/mcj.787.

Full text
Abstract:
Image 1: Hasbro/Tiger Electronics 1998 Furby. (Photo credit: Author) Introduction Since the mid-1990s, robotic and digital creatures designed to offer social interaction and companionship have been developed for commercial and research interests. Integral to encouraging positive experiences with these creatures has been the use of cute aesthetics that aim to endear companions to their human users. During this time there has also been a growth in online communities that engage in cultural production through fan fiction responses to existing cultural artefacts, including the widely recognised electronic companion, Hasbro’s Furby (image 1). These user stories, and Furby’s online representation in general, demonstrate that contrary to the intentions of their designers and marketers, Furbys are not necessarily received as cute, or the embodiment of the helpless and harmless demeanour that goes along with it. Furbys’ large, lash-framed eyes, small or non-existent limbs, and baby voice are typical markers of cuteness but can also evoke another side of cuteness—monstrosity, especially when the creature appears physically capable instead of helpless (Brzozowska-Brywczynska 217). Furbys are a particularly interesting manifestation of the cute aesthetic because it is used as a tool for encouraging attachment to a socially interactive electronic object, and therefore intersects with existing ideas about technology and nonhuman companions, both of which often embody a sense of otherness. This paper will explore how cuteness intersects with, and transitions into, monstrosity through online representations of Furbys, troubling their existing design and marketing narrative by connecting and likening them to other creatures, myths, and anecdotes.
Analysis of narrative in particular highlights the instability of cuteness, and cultural understandings of existing cute characters, such as the gremlins from the film Gremlins (Dante), reinforce the idea that cuteness should be treated with suspicion as it potentially masks a troubling undertone. Ultimately, this paper aims to interrogate the cultural complexities of designing electronic creatures through the stories that people tell about them online. Fan Production Authors of fan fiction are known to creatively express their responses to a variety of media by appropriating the characters, settings, and themes of an original work and sharing their cultural activity with others (Jenkins 88). On a personal level, Jenkins (103) argues that “[i]n embracing popular texts, the fans claim those works as their own, remaking them in their own image, forcing them to respond to their needs and to gratify their desires.” Fan fiction authors are motivated to write not for financial or professional gains but for personal enjoyment and fan recognition; however, their production does not necessarily come from favourable opinions of an existing text. The antifan is an individual who actively hates a text or cultural artefact and is mobilised in their dislike to contribute to a community of others who share their views (Gray 841). Gray suggests that both fan and antifan activity contribute to our understanding of the kinds of stories audiences want: Although fans may wish to bring a text into everyday life due to what they believe it represents, antifans fear or do not want what they believe it represents and so, as with fans, antifan practice is as important an indicator of interactions between the textual and public spheres. (855) Gray reminds us that fans, nonfans, and antifans employ different interpretive strategies when interacting with a text. 
In particular, while fans’ intimate knowledge of a text reflects their overall appreciation, antifans more often focus on the “dimensions of the moral, the rational-realistic, [or] the aesthetic” (856) that they find most disagreeable. Additionally, antifans may not experience a text directly, but dislike what knowledge they do have of it from afar. As later examples will show, the treatment of Furbys in fan fiction arguably reflects an antifan perspective through a sense of distrust and aversion, and analysing it can provide insight into why interactions with, or indirect knowledge of, Furbys might inspire these reactions. Derecho argues that, in part because of the potential copyright violation that is faced by most fandoms, “even the most socially conventional fan fiction is an act of defiance of corporate control…” (72). Additionally, because of the creative freedom it affords, “fan fiction and archontic literature open up possibilities – not just for opposition to institutions and social systems, but also for a different perspective on the institutional and the social” (76). Because of this criticality, and its subversive nature, fan fiction provides an interesting consumer perspective on objects that are designed and marketed to be received in particular ways. Further, because much of fan fiction draws on fictional content, stories about objects like Furby are not necessarily bound to reality and incorporate fantastical, speculative, and folkloric readings, providing diverse viewpoints of the object. Finally, if, as robotics commentators (cf. Levy; Breazeal) suggest, companionable robots and technologies are going to become increasingly present in everyday life, it is crucial to understand not only how they are received, but also where they fit within a wider cultural sphere. Furbys can be seen as a widespread, if technologically simple, example of these technologies and are often treated as a sign of things to come (Wilks 12). 
The Design of Electronic Companions To compete with the burgeoning market of digital and electronic pets, in 1998 Tiger Electronics released the Furby, a fur-covered, robotic creature that required the user to carry out certain nurturance duties. Furbys expected feeding and entertaining and could become sick and scared if neglected. Through a program that advanced slowly over time regardless of external stimulus, Furbys appeared to evolve from speaking entirely Furbish, their mother tongue, to speaking English. To the user, it appeared as though their interactions with the object were directly affecting its progress and maturation because their care duties of feeding and entertaining were happening parallel to the Furbish to English transition (Turkle, Breazeal, Daste, & Scassellati 314). The design of electronic companions like Furby is carefully considered to encourage positive emotional responses. For example, Breazeal (2002, 230) argues that a robot will be treated like a baby, and nurtured, if it has a large head, big eyes, and pursed lips. Kinsella (1995) also emphasises cute things’ need for care, as they are “soft, infantile, mammalian, round, without bodily appendages (e.g. arms), without bodily orifices (e.g. mouths), non-sexual, mute, insecure, helpless or bewildered” (226). From this perspective, Furbys’ physical design plays a role in encouraging nurturance. Such design decisions are reinforced by marketing strategies that encourage Furbys to be viewed in a particular way. On cuteness as a marketing tool, Harris (1992) argues that: cuteness has become essential in the marketplace in that advertisers have learned that consumers will “adopt” products that create, often in their packaging alone, an aura of motherlessness, ostracism, and melancholy, the silent desperation of the lost puppy dog clamoring to be befriended - namely, to be bought. (179) Positioning Furbys as friendly was also important to encouraging a positive bond with a caregiver. 
The history, or back story, that Furbys were given in the instruction manual was designed to convey their kind, non-threatening nature. Although alive and unpredictable, it was crucial that Furbys were not frightening. As imaginary living creatures, the origin of Furbys required explaining: “some had suggested positioning Furby as an alien, but that seemed too foreign and frightening for little girls. By May, the thinking was that Furbies live in the clouds – more angelic, less threatening” (Kirsner). In creating this story, Furby’s producers both endeared the object to consumers by making it seem friendly and inquisitive, and avoided associations to its mass-produced, factory origins. Monstrous and Cute Furbys Across fan fiction, academic texts, and media coverage there is a tendency to describe what Furbys look like by stringing together several animals and objects. Furbys have been referred to as a “mechanized ball of synthetic hair that is part penguin, part owl and part kitten” (Steinberg), a “cross between a hamster and a bird…” (Lawson & Chesney 34), and “owl-like in appearance, with large bat-like ears and two large white eyes with small, reddish-pink pupils” (ChaosInsanity), to highlight only a few. The ambiguous appearance of electronic companions is often a strategic decision made by the designer to avoid biases towards specific animals or forms, making the companion easier to accept as “real” or “alive” (Shibata 1753). Furbys are arguably evidence of this strategy and appear to be deliberately unfamiliar. However, the assemblage, and exaggeration, of parts that describes Furbys also conjures much older associations: the world of monsters in gothic literature. Notice the similarities between the above attempts to describe what Furbys look like, and a historical description of monsters: early monsters are frequently constructed out of ill-assorted parts, like the griffin, with the head and wings of an eagle combined with the body and paws of a lion. 
Alternatively, they are incomplete, lacking essential parts, or, like the mythological hydra with its many heads, grotesquely excessive. (Punter & Byron 263) Cohen (6) argues that, metaphorically, because of their strange visual assembly, monsters are displaced beings “whose externally incoherent bodies resist attempts to include them in any systematic structuration. And so the monster is dangerous, a form suspended between forms that threatens to smash distinctions.” Therefore, to call something a monster is also to call it confusing and unfamiliar. Notice in the following fan fiction example how comparing Furby to an owl makes it strange, and there seems to be uncertainty around what Furbys are, and where they fit in the natural order: The first thing Heero noticed was that a 'Furby' appeared to be a childes toy, shaped to resemble a mutated owl. With fur instead of feathers, no wings, two large ears and comical cat paws set at the bottom of its pudding like form. Its face was devoid of fuzz with a yellow plastic beak and too large eyes that gave it the appearance of it being addicted to speed [sic]. (Kontradiction) Here is a character unfamiliar with Furbys, describing its appearance by relating it to animal parts. Whether Furbys are cute or monstrous is contentious, particularly in fan fictions where they have been given additional capabilities like working limbs and extra appendages that make them less helpless. Furbys’ lack, or diminution of parts, and exaggeration of others, fits the description of cuteness, as well as their sole reliance on caregivers to be fed, entertained, and transported. If viewed as animals, Furbys appear physically limited. Kinsella (1995) finds that a sense of disability is important to the cute aesthetic: stubby arms, no fingers, no mouths, huge heads, massive eyes – which can hide no private thoughts from the viewer – nothing between their legs, pot bellies, swollen legs or pigeon feet – if they have feet at all. 
Cute things can’t walk, can’t talk, can’t in fact do anything at all for themselves because they are physically handicapped. (236) Exploring the line between cute and monstrous, Brzozowska-Brywczynska argues that it is this sense of physical disability that distinguishes the two similar aesthetics. “It is the disempowering feeling of pity and sympathy […] that deprives a monster of his monstrosity” (218). The descriptions of Furbys in fan fiction suggest that they transition between the two, contingent on how they are received by certain characters, and the abilities they are given by the author. In some cases it is the overwhelming threat the Furby poses that extinguishes feelings of care. In the following two excerpts from ‘When Furbies Attack’ (Kellyofthemidnightdawn), the revealing of threatening behaviour shifts the perception of Furby from cute to monstrous: “These guys are so cute,” she moved the Furby so that it was within inches of Elliot's face and positioned it so that what were apparently the Furby's lips came into contact with his cheek “See,” she smiled widely “He likes you.” […] Olivia's breath caught in her throat as she found herself backing up towards the door. She kept her eyes on the little yellow monster in front of her as her hand slowly reached for the door knob. This was just too freaky, she wanted away from this thing. The Furby that was originally called cute becomes a monster when it violently threatens the protagonist, Olivia. The shifting of Furbys between cute and monstrous is a topic of argument in ‘InuYasha vs the Demon Furbie’ (Lioness of Dreams). The character Kagome attempts to explain a Furby to Inuyasha, who views the object as a demon: That is a toy called a Furbie. It's a thing we humans call “CUTE”. See, it talks and says cute things and we give it hugs! (Lioness of Dreams) A recurrent theme in the Inuyasha (Takahashi) anime is the generational divide between Kagome and Inuyasha. 
Set in feudal-era Japan, Kagome is transported there from modern-day Tokyo after falling into a well. The above line of dialogue reinforces the relative newness, and cultural specificity, of cute aesthetics, which according to Kinsella (1995, 220) became increasingly popular throughout the 1980s and 90s. In Inuyasha’s world, where demons and monsters are a fixture of everyday life, the Furby’s appearance shifts from cute to monstrous. Furbys as Gremlins During the height of the original 1998 Furby’s public exposure and popularity, several news articles referred to Furby as “the five-inch gremlin” (Steinberg) and “a furry, gremlin-looking creature” (Del Vecchio 88). More recently, in a review of the 2012 Furby release, one commenter exclaimed: “These things actually look scary! Like blue gremlins!” (KillaRizzay). Following the release of the original Furbys, Hasbro collaborated with the film’s merchandising team to release Interactive ‘Gizmo’ Furbys (image 2). Image 2: Hasbro 1999 Interactive Gizmo (photo credit: Author) Furbys’ likeness to gremlins offers another perspective on the tension between cute and monstrous aesthetics that is contingent on the creature’s behaviour. The connection between Furbys and gremlins embodies a sense of mistrust, because the film Gremlins focuses on the monsters that dwell within the seemingly harmless and endearing mogwai/gremlin creatures. Catastrophic events unfold after they are cared for improperly. Gremlins, and by association Furbys, may appear cute or harmless, but this story tells us that there is something darker beneath the surface. The creatures in Gremlins are introduced as mogwai, and in Chinese folklore the mogwai or mogui is a demon (Zhang 1999). The pop culture gremlin embodied in the film, then, is cute and demonic, depending on how it is treated. Like a gremlin, a Furby’s personality is supposed to be a reflection of the care it receives. 
Transformation is a common theme of Gremlins and also Furby, where it is central to the sense of “aliveness” the product works to create. Furbys become “wiser” as time goes on, transitioning through “life stages” as they “learn” about their surroundings. As we learn from their origin story, Furbys jumped from their home in the clouds in order to see and explore the world firsthand (Tiger Electronics 2). Because Furbys are susceptible to their environment, they come with rules on how they must be cared for, and the consequences if this is ignored. Without attention and “food”, a Furby will become unresponsive and even ill: “If you allow me to get sick, soon I will not want to play and will not respond to anything but feeding” (Tiger Electronics 6). In Gremlins, improper care manifests in an abrupt transition from cute to monstrous: Gizmo’s strokeable fur is transformed into a wet, scaly integument, while the vacant portholes of its eyes (the most important facial feature of the cute thing, giving us free access to its soul and ensuring its total structability, its incapacity to hold back anything in reserve) become diabolical slits hiding a lurking intelligence, just as its dainty paws metamorphose into talons and its pretty puckered lips into enormous Cheshire grimaces with full sets of sharp incisors. (Harris 185–186) In the Naruto (Kishimoto) fan fiction ‘Orochimaru's World Famous New Year's Eve Party’ (dead drifter), while there is no explicit mention of Gremlins, the Furby undergoes the physical transformation that appears in the films. The Furby, named Sasuke, presumably after the Naruto antagonist, and hinting at its untrustworthy nature, undergoes a transformation that mimics that of Gremlins: when water is poured on the Furby, boils appear and fall from its back, each growing into another Furby. Also, after being fed, the Furby lays eggs: Apparently, it's not a good idea to feed Furbies chips. Why? 
Because they make weird cocoon eggs and transform into… something. (ch. 5) This sequence of events follows the Gremlins movie structure, in which cute and furry Gizmo, after being exposed to water and fed after midnight, “begins to reproduce, laying eggs that enter a larval stage in repulsive cocoons covered in viscous membranes” (Harris 185). Harris also reminds us that the appearance of gremlins comes with understandings of how they should be treated: Whereas cute things have clean, sensuous surfaces that remain intact and unpenetrated […] the anti-cute Gremlins are constantly being squished and disembowelled, their entrails spilling out into the open, as they explode in microwaves and run through paper shredders and blenders. (Harris 186) The Furbys in ‘Orochimaru's World Famous New Year's Eve Party’ meet a similar end: Kuro Furby whined as his brain was smashed in. One of its eyes popped out and rolled across the floor. (dead drifter ch. 6) A horde of mischievous Furbys is violently dispatched, including the original Furby that was lovingly cared for. Conclusion This paper has explored examples from online culture in which different cultural references clash and merge to explore artefacts such as Furby, and the complexities of design, such as the use of ambiguously mammalian, and cute, aesthetics in an effort to encourage positive attachment. Fan fiction, as a subversive practice, offers valuable critiques of Furby that are imaginative and speculative, providing creative responses to experiences with Furbys, but also opening up potential for what electronic companions could become. In particular, the use of narrative demonstrates that cuteness is an unstable aesthetic that is culturally contingent and very much tied to behaviour. As the above examples demonstrate, Furbys can move between cute, friendly, helpless, threatening, monstrous, and strange in one story. 
Cute Furbys became monstrous when they were described as an assemblage of disparate parts, made physically capable and aggressive, and affected by their environment or external stimulus. Cultural associations, such as gremlins, also influence how an electronic animal is received and treated, often troubling the visions of designers and marketers who seek to present friendly, nonthreatening, and accommodating companions. These diverse readings are valuable in understanding how companionable technologies are received, especially if they continue to be developed and made commercially available, and if cuteness is to be used as a means of encouraging positive attachment. References Breazeal, Cynthia. Designing Sociable Robots. Cambridge, MA: MIT Press, 2002. Brzozowska-Brywczynska, Maja. "Monstrous/Cute: Notes on the Ambivalent Nature of Cuteness." Monsters and the Monstrous: Myths and Metaphors of Enduring Evil. Ed. Niall Scott. Amsterdam/New York: Rodopi, 2007. 213–28. ChaosInsanity. “Attack of the Killer Furby.” Fanfiction.net, 2008. 20 July 2012. Cohen, Jeffrey Jerome. “Monster Culture (Seven Theses).” In Monster Theory: Reading Culture, ed. Jeffrey Jerome Cohen. Minneapolis, MN: University of Minnesota Press, 1996. 3–25. dead drifter. “Orochimaru's World Famous New Year's Eve Party.” Fanfiction.net, 2007. 4 Mar. 2013. Del Vecchio, Gene. The Blockbuster Toy! How to Invent the Next Big Thing. Gretna, LA: Pelican Publishing Company, 2003. Derecho, Abigail. “Archontic Literature: A Definition, a History, and Several Theories of Fan Fiction.” In Fan Fiction and Fan Communities in the Age of the Internet, eds. Karen Hellekson and Kristina Busse. Jefferson, NC: McFarland & Company, 2006. 6–78. Gremlins. Dir. Joe Dante. Warner Brothers & Amblin Entertainment, 1984. Gray, Jonathan. “Antifandom and the Moral Text.” American Behavioral Scientist 48.7 (2005). 24 Mar. 2014 ‹http://abs.sagepub.com/content/48/7/840.abstract›. Harris, Daniel. “Cuteness.” Salmagundi 96 (1992). 
20 Feb. 2014 ‹http://www.jstor.org/stable/40548402›. Inuyasha. Created by Rumiko Takahashi. Yomiuri Telecasting Corporation (YTV) & Sunrise, 1996. Jenkins, Henry. “Star Trek Rerun, Reread, Rewritten: Fan Writing as Textual Poaching.” Critical Studies in Mass Communication 5.2 (1988). 19 Feb. 2014 ‹http://www.tandfonline.com/doi/abs/10.1080/15295038809366691#.UwVmgGcdeIU›. Kellyofthemidnightdawn. “When Furbies Attack.” Fanfiction.net, 2006. 6 Oct. 2011. KillaRizzay. “Furby Gets a Reboot for 2012, We Go Hands-On (Video).” Engadget 10 July 2012. 11 Feb. 2014 ‹http://www.engadget.com/2012/07/06/furby-hands-on-video/›. Kinsella, Sharon. “Cuties in Japan.” In Women, Media and Consumption in Japan, eds. Lise Skov and Brian Moeran. Honolulu, HI: University of Hawai'i Press, 1995. 220–254. Kirsner, Scott. “Moody Furballs and the Developers Who Love Them.” Wired 6.09 (1998). 20 Feb. 2014 ‹http://www.wired.com/wired/archive/6.09/furby_pr.html›. Kontradiction. “Ehloh the Invincible.” Fanfiction.net, 2002. 20 July 2012. Lawson, Shaun, and Thomas Chesney. “Virtual Pets and Electronic Companions – An Agenda for Inter-Disciplinary Research.” Paper presented at AISB'07: Artificial and Ambient Intelligence. Newcastle upon Tyne: Newcastle University, 2-4 Apr. 2007. ‹http://homepages.cs.ncl.ac.uk/patrick.olivier/AISB07/catz-dogz.pdf›. Levy, David. Love and Sex with Robots: The Evolution of Human-Robot Relationships. New York, NY: HarperCollins, 2007. Lioness of Dreams. “InuYasha vs the Demon Furbie.” Fanfiction.net, 2003. 19 July 2012. Naruto. Created by Masashi Kishimoto. Shueisha, 1999. Punter, David, and Glennis Byron. The Gothic. Oxford: Blackwell Publishing, 2004. Shibata, Takanori. “An Overview of Human Interactive Robots for Psychological Enrichment.” Proceedings of the IEEE 92.11 (2004). 4 Mar. 2011 ‹http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1347456&tag=1›. Steinberg, Jacques. “Far from the Pleading Crowd: Furby's Dad.” The New York Times: Public Lives, 10 Dec. 1998. 
20 Nov. 2013 ‹http://www.nytimes.com/1998/12/10/nyregion/public-lives-far-from-the-pleading-crowd-furby-s-dad.html?src=pm›. Tiger Electronics. Electronic Furby Instruction Manual. Vernon Hills, IL: Tiger Electronics, 1999. Turkle, Sherry, Cynthia Breazeal, Olivia Daste, and Brian Scassellati. “First Encounters with Kismet and Cog: Children Respond to Relational Artifacts.” In Digital Media: Transformations in Human Communication, eds. Paul Messaris and Lee Humphreys. New York, NY: Peter Lang, 2006. 313–330. Wilks, Yorick. Close Engagements with Artificial Companions: Key Social, Psychological and Ethical Design Issues. Amsterdam/Philadelphia, PA: John Benjamins Publishing Company, 2010. Zhang, Qiong. “About God, Demons, and Miracles: The Jesuit Discourse on the Supernatural in Late Ming China.” Early Science and Medicine 4.1 (1999). 15 Dec. 2013 ‹http://dx.doi.org/10.1163/157338299x00012›.
8

Shiloh, Ilana. "Adaptation, Intertextuality, and the Endless Deferral of Meaning." M/C Journal 10, no. 2 (May 1, 2007). http://dx.doi.org/10.5204/mcj.2636.

Full text
Abstract:
Film adaptation is an ambiguous term, both semantically and conceptually. Among its multiple connotations, the word “adaptation” may signify an artistic composition that has been recast in a new form, an alteration in the structure or function of an organism to make it better fitted for survival, or a modification in individual or social activity in adjustment to social surroundings. What all these definitions have in common is a tacitly implied hierarchy and valorisation: they presume the existence of an origin to which the recast work of art is indebted, or of biological or societal constraints to which the individual should conform in order to survive. The bias implied in the very connotations of the word has affected the theory and study of film adaptation. This bias is most noticeably reflected in the criterion of fidelity, which has been the major standard for evaluating film adaptations ever since George Bluestone’s pivotal 1957 Novels into Film. “Fidelity criticism,” observes McFarlane, “depends on a notion of the text as having and rendering up to the (intelligent) reader a single, correct ‘meaning’ which the film-maker has either adhered to or in some sense violated or tampered with” (7). But such an approach, Leitch argues, is rooted in several unacknowledged but entrenched misconceptions. It privileges literature over film, casts a false aura of originality on the precursor text, and ignores the fact that all texts, whether literary or cinematic, are essentially intertexts. As Kristeva, along with other poststructuralist theorists, has taught us, any text is an amalgam of others, a part of a larger fabric of cultural discourse (64-91). 
“A text is a multidimensional space in which a variety of writings, none of them original, blend and clash”, writes Barthes in 1977 (146), and 15 years later film theoretician Robert Stam elaborates: “The text feeds on and is fed into an infinitely permutating intertext, which is seen through evershifting grids of interpretation” (57). The poststructuralists’ view of texts draws on the structuralists’ view of language, which is conceived as a system that pre-exists the individual speaker and determines subjectivity. These assumptions counter the Romantic ideology of individualism, with its associated concepts of authorial originality and a text’s single, unified meaning, based on the author’s intention. “It is language which speaks, not the author,” declares Barthes, “to write is to reach the point where only language acts, ‘performs’, and not me” (143). In consequence, the fidelity criterion of film adaptation may be regarded as an outdated vestige of the Romantic world-view. If all texts quote or embed fragments of earlier texts, the notion of an authoritative literary source, which the cinematic version should faithfully reproduce, is no longer valid. Film adaptation should rather be perceived as an intertextual practice, contributing to a dynamic interpretive exchange between the literary and cinematic texts, an exchange in which each text can be enriched, modified or subverted. The relationship between Jonathan Nolan’s short story “Memento Mori” and Christopher Nolan’s film Memento (2001) is a case in point. Here there was no source text, as the writing of the story did not precede the making of the film. The two processes were concurrent, and were triggered by the same basic idea, which Jonathan discussed with his brother during a road trip from Chicago to LA. Christopher developed the idea into a film and Jonathan turned it into a short story; he also collaborated in the film script. 
Moreover, Jonathan designed otnemem (memento in reverse), the official Website, which contextualises the film’s fictional world while increasing its ambiguity. What was adapted here was an idea, and each text explores in different ways the narrative, ontological and epistemological implications of that idea. The story, the film and the Website produce a multi-layered intertextual fabric, in which each thread potentially unravels the narrative possibilities suggested by the other threads. Intertextuality functions to increase ambiguity, and is therefore thematically relevant, for “Memento Mori”, Memento and otnemem are three fragmented texts in search of a coherent narrative. The concept of narrative may arguably be one of the most overused and under-defined terms in academic discourse. In the context of the present paper, the most productive approach is that of Wilkens, Hughes, Wildemuth and Marchionini, who define narrative as a chain of events related by cause and effect, occurring in time and space, and involving agency and intention. In fiction or in film, intention is usually associated with human agents, who can be either the characters or the narrator. It is these agents who move along the chain of causes and effects, so that cause-effect and agency work together to make the narrative. This narrative paradigm underpins mainstream Hollywood cinema in the years 1917-1960. In Narration in the Fiction Film, David Bordwell writes: The classical Hollywood film presents psychologically defined individuals who struggle to solve a clear-cut problem or to attain specific goals. … The story ends with a decisive victory or defeat, a resolution of the problem, and a clear achievement, or non achievement, of the goals. The principal causal agency is thus the character … . In classical fabula construction, causality is the prime unifying principle. 
(157) The large body of films flourishing in America between 1941 and 1958, collectively dubbed film noir, subverts this narrative formula, but only partially. As accurately observed by Telotte, the devices of flashback and voice-over associated with the genre implicitly challenge conventionally linear narratives, while the use of the subjective camera shatters the illusion of objective truth and foregrounds the rift between reality and perception (3, 20). Yet in spite of the narrative experimentation that characterises the genre, the viewer of a classical film noir can still figure out what happened in the fictional world and why, and can still reconstruct the story line according to sequential and causal schemata. This does not hold true for the intertextual composite consisting of Memento, “Memento Mori” and otnemem. The basic idea that generated the project was that of a self-appointed detective who obsessively investigates and seeks to avenge his wife’s rape and murder, while suffering from a total loss of short-term memory. The loss of memory precludes learning and the acquisition of knowledge, so the protagonist uses scribbled notes, Polaroid photos and information tattooed onto his skin, in an effort to reconstruct his fragmented reality into a coherent and meaningful narrative. Narrativity is visually foregrounded: the protagonist reads his body to make sense of his predicament. To recap, the narrative paradigm relies on a triad of terms: connectedness (a chain of events), causality, and intentionality. The basic situation in Memento and “Memento Mori”, which involves a rupture in the protagonist’s/narrator’s psychological time, entails a breakdown of all three pre-requisites of narrativity. Since the protagonists of both story and film are condemned, by their memory deficiency, to living in an eternal present, they are unable to experience the continuity of time and the connectedness of events. 
The disruption of temporality inevitably entails the breakdown of causality: the central character’s inability to determine the proper sequence of events prevents him from being able to distinguish between cause and effect. Finally, the notion of agency is also problematised, because agency implies the existence of a stable, identifiable subject, and identity is contingent on the subject’s uninterrupted continuity across time and change. The subversive potential of the basic narrative situation is heightened by the fact that both Memento and “Memento Mori” are focalised through the consciousness and perception of the main character. This means that the story, as well as the film, is conveyed from the point of view of a narrator who is constitutionally unable to experience his life as a narrative. This conundrum is addressed differently in the story and in the film, both thematically and formally. “Memento Mori” presents, in a way, the backdrop to Memento. It focuses on the figure of Earl, a brain-damaged protagonist suffering from anterograde amnesia, who is staying in a blank, anonymous room that we assume to be part of a mental institution. We also assume that Earl’s brain damage results from a blow to the head that he received while witnessing the rape and murder of his wife. Earl is bent on avenging his wife’s death. To remind himself to do so, he writes messages to himself, which he affixes on the walls of his room. Leonard Shelby is Memento’s cinematic version of Earl. By Leonard’s own account, he has an inability to form memories. This, he claims, is the result of neurological damage sustained during an attack on him and his wife, an attack in which his wife was raped and killed. To be able to pursue his wife’s killers, he has recourse to various complex and bizarre devices—Polaroid photos, a quasi-police file, a wall chart, and inscriptions tattooed onto his skin—in order to replace his memory. 
Hampered by his affliction, Leonard trawls the motels and bars of Southern California in an effort to gather evidence against the killer he believes to be named “John G.” Leonard’s faulty memory is deviously manipulated by various people he encounters, of whom the most crucial are a bartender called Natalie and an undercover cop named Teddy, both involved in a lucrative drug deal. So much for a straightforward account of the short story and the film. But there is nothing straightforward about either Memento or “Memento Mori”. The basic narrative premise, a protagonist/narrator suffering from a severe memory deficit, entails far-reaching psychological and philosophical implications. In the following discussion, I would like to focus on these two implications and to tie them to the notions of narrativity, intertextuality, and eventually, adaptation. The first implication of memory loss is the dissolution of identity. Our sense of identity is contingent on our ability to construct an uninterrupted personal narrative, a narrative in which the present self is continuous with the past self. In Oneself as Another, his philosophical treatise on the concept of selfhood, Paul Ricoeur queries: “do we not consider human lives to be more readable when they have been interpreted in terms of the stories that people tell about them?” He concludes by observing that “interpretation of the self … finds in narrative, among other signs and symbols, a privileged form of mediation” (ft. 114). Ricoeur further suggests that the sense of selfhood is contingent on four attributes: numerical identity, qualitative identity, uninterrupted continuity across time and change, and finally, permanence in time that defines sameness. The loss of memory subverts the last two attributes of personal identity, the sense of continuity and permanence over time, and thereby also ruptures the first two. 
In “Memento Mori” and Memento, the disintegration of identity is formally rendered through the fragmentation of the literary and cinematic narratives, respectively. In Jonathan Nolan’s short story, traditional linear narrative is disrupted by shifts in point of view and by graphic differences in the shape of the print on the page. “Memento Mori” is alternately narrated in the first and in the third person. The first person segments, which constitute the present of the story, are written by Earl to himself. As his memory span is ten minutes long, his existence consists of “just the same ten minutes, over and over again” (Nolan, 187). Fully aware of the impending fading away of memory, Earl’s present-version self leaves notes to his future-version self, in an effort to situate him in time and space and to motivate him to the final action of revenge. The literary device of alternating points of view formally subverts the notion of identity as a stable unity. Paradoxically, rather than setting him apart from the rest of us, Earl’s brain damage foregrounds his similarity. “Every man is broken into twenty-four-hour fractions,” observes Earl, comforting his future self by affirming his basic humanity, “Your problem is a little more acute, maybe, but fundamentally the same thing” (Nolan, 189). His observation echoes Beckett’s description of the individual as “the seat of a constant process of decantation … to the vessel containing the fluid of past time” (Beckett, 4-5). Identity, suggests Jonathan Nolan, following Beckett among others, is a theoretical construct. Human beings are works in progress, existing in a state of constant flux. We are all fragmented beings—the ten-minute man is only more so. A second strategy employed by Jonathan to convey the discontinuity of the self is the creation of visual graphic disunity. 
As noted by Yellowlees Douglas, among others, the static, fixed nature of the printed page and its austere linearity make it ideal for the representation of our mental construct of narrative. The text of “Memento Mori” appears on the page in three different font types: the first person segments, Earl’s admonitions to himself, appear in italics; the third person segments are written in regular type; and the notes and signs are capitalised. Christopher Nolan obviously has recourse to different strategies to reach the same ends. His principal technique, and the film’s most striking aspect, is its reversed time sequence. The film begins with a crude Polaroid flash photograph of a man’s body lying on a decaying wooden floor. The image in the photo gradually fades, until the camera sucks the picture up. The photograph presents the last chronological event; the film then skips backwards in ten-minute increments, mirroring the protagonist’s memory span. But the film’s time sequence is not simply a reversed linear structure. It is a triple-decker narrative, mirroring the three-part organisation of the story. In the opening scene, one comes to realise that the film-spool is running backwards. After several minutes the film suddenly reverses and runs forward for a few seconds. Then there is a sudden cut to a different scene, in black and white, where the protagonist (who we have just learned is called Leonard) begins to talk, out of the blue, about his confusion. Soon the film switches to a color scene, again unconnected, in which the “action” of the film begins. In the black and white scenes, which from then on are interspersed with the main action, Leonard attempts to understand what is happening to him and to explain (to an unseen listener) the nature of his condition. 
The “main action” of the film follows a double temporal structure: while each scene, as a unit of action, runs normally forward, each scene is triggered by the following, rather than by the preceding scene, so that we are witnessing a story whose main action goes back in time as the film progresses (Hutchinson and Read, 79). A third narrative thread, interspersed with the other two, is a story that functions as a foil to the film’s main action. It is the story of Sammy Jankis: one of the cases that Leonard worked on in his past career as an insurance investigator. Sammy was apparently suffering from anterograde amnesia, the same condition that Leonard is suffering from now. Sammy’s wife filed an insurance claim on his behalf, a claim that Leonard rejected on the grounds that Sammy’s condition was merely psychosomatic. Hoping to confirm Leonard’s diagnosis, Sammy’s diabetic wife puts her husband to the test. He fails the test as he tenderly administers multiple insulin injections to her, thereby causing her death. As Leonard’s beloved wife also suffered from diabetes, and as Teddy (the undercover cop) eventually tells Leonard that Sammy never had a wife, the Sammy Jankis parable functions as a mise en abyme, which can either corroborate or subvert the narrative that Leonard is attempting to construct of his own life. Sammy may be seen as Leonard’s symbolic double in that his form of amnesia foreshadows the condition with which Leonard will eventually be afflicted. This interpretation corroborates Leonard’s personal narrative of his memory loss, while tainting him with the blame for the death of Sammy’s wife. But the camera also suggests a more unsettling possibility—Leonard may ultimately be responsible for the death of his own wife. 
The scene in which Sammy, condemned by his amnesia, administers to his wife a repeated and fatal shot of insulin, is briefly followed by a scene of Leonard pinching his own wife’s thigh before her insulin shot, a scene recurring in the film like a leitmotif. The juxtaposition of the two scenes suggests that it is Leonard who, mistakenly or deliberately, has killed his wife, and that ever since he has been projecting his guilt onto others: the innocent victims of his trail of revenge. In this ironic interpretive twist, it is Leonard, rather than Sammy, who has been faking his amnesia. The parable of Sammy Jankis highlights another central concern of Memento and “Memento Mori”: the precarious nature of truth. This is the second psychological and philosophical implication of what Leonard persistently calls his “condition”, namely his loss of memory. The question explicitly raised in the film is whether memory records or creates, if it retains the lived life or reshapes it into a narrative that will confer on it unity and meaning. The answer is metaphorically suggested by the recurring shots of a mirror, which Leonard must use to read his body inscriptions. The mirror, as Lacan describes it, offers the infant his first recognition as a coherent, unique self. But this recognition is a mis-recognition, for the reflection has a coherence and unity that the subject both lacks and desires. The body inscriptions that Leonard can read only in the mirror do not necessarily testify to the truth. But they do enable him to create a narrative that makes his life worth living. A Lacanian reading of the mirror image has two profoundly unsettling implications. It establishes Leonard as a morally deficient, rather than neurologically deficient, human being, and it suggests that we are not fundamentally different from him. Leonard’s intricate system of notes and body inscriptions builds up an inventory of set representations to which he can refer in all his future experiences. 
Thus when he wakes up naked in bed with a woman lying beside him, he looks among his Polaroid photographs for a picture which he can match with her, which will tell him what the woman’s name is and what he can expect from her on the basis of past experience. But this, suggest Hutchinson and Read, is an external representation of operations that all of us perform mentally (89). We all respond to sensory input by constructing internal representations that form the foundations of our psyche. This view underpins current theories of language and of the mind. Semioticians tell us that the word, the signifier, refers to a mental representation of an object rather than to the object itself. Cognitivists assume that cognition consists in the operation of mental items which are symbols for real entities. Leonard’s apparently bizarre method of apprehending reality is thus essentially an externalisation of memory. But if, cognitively and epistemologically speaking, Lennie is less different from us than we would like to think, this implication may also extend to our moral nature. Our complicity with Leonard is mainly established through the film’s complex temporal structure, which makes us viscerally share the protagonist’s/narrator’s confusion and disorientation. We become as unable as he is to construct a single, coherent and meaningful narrative: the film’s obscurity is built in. Memento’s ambiguity is enhanced by the film’s Website, which presents a newspaper clipping about the attack on Leonard and his wife, torn pages from police and psychiatric reports, and a number of notes from Leonard to himself. While blurring the boundaries between story and film by suggesting that Leonard, like Earl, may have escaped from a mental institution, otnemem also provides evidence that can either confirm or confound our interpretive efforts, such as a doctor’s report suggesting that “John G.” may be a figment of Leonard’s imagination. 
The precarious nature of truth is foregrounded by the fact that the narrative Leonard is trying to construct, as well as the narrative in which Christopher Nolan has embedded him, is a detective story. The traditional detective story proceeds from a two-fold assumption: truth exists, and it can be known. But Memento and “Memento Mori” undermine this epistemological confidence. They suggest that truth, like identity, is a fictional construct, derived from the tales we tell ourselves and recount to others. These tales do not coincide with objective reality; they are the prisms we create in order to understand reality, to make our lives bearable and worth living. Narratives are cognitive patterns that we construct to make sense of the world. They convey our yearning for coherence, closure, and a single unified meaning. The overlapping and conflicting threads interweaving Memento, “Memento Mori” and the Website otnemem simultaneously expose and resist our nostalgia for unity, by evoking a multiplicity of meanings and creating an intertextual web that is the essence of all adaptation. References Barthes, Roland. Image-Music-Text. London: Fontana, 1977. Beckett, Samuel. Proust. London: Chatto and Windus, 1931. Bluestone, George. Novels into Film. Berkeley and Los Angeles: California UP, 1957. Bordwell, David. Narration in the Fiction Film. Madison: Wisconsin UP, 1985. Hutchinson, Phil, and Rupert Read. “Memento: A Philosophical Investigation.” Film as Philosophy: Essays in Cinema after Wittgenstein and Cavell. Ed. Rupert Read and Jerry Goodenough. Hampshire: Palgrave, 2005. 72-93. Kristeva, Julia. “Word, Dialogue and Novel.” Desire in Language: A Semiotic Approach to Literature and Art. Ed. Leon S. Roudiez. Trans. Thomas Gora. New York: Columbia UP, 1980. 64-91. Lacan, Jacques. “The Mirror Stage as Formative of the Function of the I as Revealed in Psychoanalytic Experience.” Écrits: A Selection. New York: Norton, 1977. 1-7. Leitch, Thomas. 
“Twelve Fallacies in Contemporary Adaptation Theory.” Criticism 45.2 (2003): 149-71. McFarlane, Brian. Novel to Film: An Introduction to the Theory of Adaptation. Oxford: Clarendon Press, 1996. Nolan, Jonathan. “Memento Mori.” The Making of Memento. Ed. James Mottram. London: Faber and Faber, 2002. 183-95. Nolan, Jonathan. otnemem. 24 Apr. 2007 <http://otnemem.com>. Ricoeur, Paul. Oneself as Another. Chicago: Chicago UP, 1992. Stam, Robert. “Beyond Fidelity: The Dialogics of Adaptation.” Film Adaptation. Ed. James Naremore. New Brunswick: Rutgers UP, 2000. 54-76. Telotte, J.P. Voices in the Dark: The Narrative Patterns of Film Noir. Urbana and Chicago: Illinois UP, 1989. Wilkens, T., A. Hughes, B.M. Wildemuth, and G. Marchionini. “The Role of Narrative in Understanding Digital Video.” 24 Apr. 2007 <http://www.open-video.org/papers/Wilkens_Asist_2003.pdf>. Yellowlees Douglass, J. “Gaps, Maps and Perception: What Hypertext Readers (Don’t) Do.” 24 Apr. 2007 <http://www.pd.org/topos/perforations/perf3/douglas_p3.html>. Citation reference for this article MLA Style Shiloh, Ilana. “Adaptation, Intertextuality, and the Endless Deferral of Meaning: Memento.” M/C Journal 10.2 (2007). <http://journal.media-culture.org.au/0705/08-shiloh.php>. APA Style Shiloh, I. (2007). Adaptation, intertextuality, and the endless deferral of meaning: Memento. M/C Journal, 10(2). Retrieved from <http://journal.media-culture.org.au/0705/08-shiloh.php>.
9

Ibrahim, Yasmin. "Commodifying Terrorism." M/C Journal 10, no. 3 (June 1, 2007). http://dx.doi.org/10.5204/mcj.2665.

Full text
Abstract:
Introduction [Figure 1] The counter-Terrorism advertising campaign of London’s Metropolitan Police commodifies some everyday items such as mobile phones, computers, passports and credit cards as having the potential to sustain terrorist activities. The process of ascribing cultural values and symbolic meanings to some everyday technical gadgets objectifies Terrorism and situates it within everyday life. The police, in urging people to look out for ‘the unusual’ in their normal day-to-day lives, juxtapose the everyday with the unusual, where day-to-day consumption, routines and flows of human activity can seemingly house insidious and atavistic elements. This again is reiterated in the Met police press release: Terrorists live within our communities making their plans whilst doing everything they can to blend in, and trying not to raise suspicions about their activities. (MPA Website) The commodification of Terrorism through uncommon and everyday objects situates Terrorism as a phenomenon which occupies a liminal space within the everyday. It resides, breathes and co-exists within the taken-for-granted routines and objects of ‘the everyday’ where it has the potential to explode and disrupt without warning. Since 9/11 and the 7/7 bombings Terrorism has been narrated through the disruption of mobility, whether in mid-air or in the deep recesses of the Underground. The resonant thread of disruption to human mobility evokes a powerful meta-narrative where acts of Terrorism can halt human agency amidst the backdrop of the metropolis, which is often a metaphor for speed and accelerated activities. 
If globalisation and the interconnected nature of the world are understood through discourses of risk, Terrorism bears the same footprint in urban spaces of modernity, narrating the vulnerability of the human condition in an inter-linked world where ideological struggles and resistance are manifested through inexplicable violence and destruction of lives, where the everyday is suspended to embrace the unexpected. As a consequence ambient fear “saturates the social spaces of everyday life” (Hubbard 2). The commodification of Terrorism through everyday items of consumption inevitably creates an intertextuality with real and media events, which constantly corrode the security of the metropolis. Paddy Scannell alludes to a doubling of place in our mediated world where “public events now occur simultaneously in two different places; the place of the event itself and that in which it is watched and heard. The media then vacillates between the two sites and creates experiences of simultaneity, liveness and immediacy” (qtd. in Moores 22). The doubling of place through media constructs a pervasive environment of risk and fear. Mark Danner (qtd. in Bauman 106) points out that the most powerful weapon of the 9/11 terrorists was that innocuous and “most American of technological creations: the television set” which provided a global platform to constantly replay and remember the dreadful scenes of the day, enabling the terrorist to appear invincible and to narrate fear as ubiquitous and omnipresent. Philip Abrams argues that ‘big events’ (such as 9/11 and 7/7) do make a difference in the social world for such events function as a transformative device between the past and future, forcing society to alter or transform its perspectives. 
David Altheide points out that since September 11 and the ensuing war on terror, a new discourse of Terrorism has emerged as a way of expressing how the world has changed and defining a state of constant alert through a media logic and format that shapes the nature of discourse itself. Consequently, the intensity and centralisation of surveillance in Western countries increased dramatically, placing the emphasis on expanding the forms of the already existing range of surveillance processes and practices that circumscribe and help shape our social existence (Lyon, Terrorism 2). Normalisation of Surveillance The role of technologies, particularly information and communication technologies (ICTs), and other infrastructures to unevenly distribute access to the goods and services necessary for modern life, while facilitating data collection on and control of the public, are significant characteristics of modernity (Reiman; Graham and Marvin; Monahan). The embedding of technological surveillance into spaces and infrastructures not only augment social control but also redefine data as a form of capital which can be shared between public and private sectors (Gandy, Data Mining; O’Harrow; Monahan). The scale, complexity and limitations of omnipresent and omnipotent surveillance, nevertheless, offer room for both subversion as well as new forms of domination and oppression (Marx). In surveillance studies, Foucault’s analysis is often heavily employed to explain lines of continuity and change between earlier forms of surveillance and data assemblage and contemporary forms in the shape of closed-circuit television (CCTV) and other surveillance modes (Dee). It establishes the need to discern patterns of power and normalisation and the subliminal or obvious cultural codes and categories that emerge through these arrangements (Fopp; Lyon, Electronic; Norris and Armstrong). In their study of CCTV surveillance, Norris and Armstrong (cf. 
in Dee) point out that when added to the daily minutiae of surveillance, CCTV cameras in public spaces, along with other camera surveillance in work places, capture human beings on a database constantly. The normalisation of surveillance, particularly with reference to CCTV, the popularisation of surveillance through television formats such as ‘Big Brother’ (Dee), and the expansion of online platforms to publish private images, has created a contradictory, complex and contested nature of spatial and power relationships in society. The UK, for example, has the most developed system of both urban and public space cameras in the world, and, as Lyon (Surveillance) points out, this growth of camera surveillance has been achieved with very little, if any, public debate as to its benefits or otherwise. There may now be as many as 4.2 million CCTV cameras in Britain (cf. Lyon, Surveillance). That is one for every fourteen people, and a person can be captured on over 300 cameras every day. An estimated £500m of public money has been invested in CCTV infrastructure over the last decade but, according to a Home Office study, CCTV schemes that have been assessed had little overall effect on crime levels (Wood and Ball). In spatial terms, these statistics reiterate Foucault’s emphasis on the power economy of the unseen gaze. Michel Foucault, in an analysis of the links between power, information and surveillance inspired by Bentham’s idea of the Panopticon, indicated that it is possible to sanction or reward an individual through the act of surveillance without their knowledge (155). It is this unseen and unknown gaze of surveillance that is fundamental to the exercise of power. The design and arrangement of buildings can be engineered so that the “surveillance is permanent in its effects, even if it is discontinuous in its action” (Foucault 201). 
Lyon (Terrorism), in tracing the trajectory of surveillance studies, points out that much of surveillance literature has focused on understanding it as a centralised bureaucratic relationship between the powerful and the governed. Invisible forms of surveillance have also been viewed as a class weapon in some societies. With the advancements in and proliferation of surveillance technologies as well as convergence with other technologies, Lyon argues that it is no longer feasible to view surveillance as a linear or centralised process. In our contemporary globalised world, there is a need to reconcile the dialectical strands that mediate surveillance as a process. In acknowledging this, Gilles Deleuze and Félix Guattari have constructed surveillance as a rhizome that defies linearity to appropriate a more convoluted and malleable form where the coding of bodies and data can be enmeshed to produce intricate power relationships and hierarchies within societies. Latour draws on the notion of assemblage by propounding that data is amalgamated from scattered centres of calculation, which can range from state and commercial institutions to scientific laboratories which scrutinise data to conceive governance and control strategies. Both the Latourian and Deleuzian ideas of surveillance highlight the disparate arrays of people, technologies and organisations that become connected to make “surveillance assemblages” in contrast to the static, unidirectional Panopticon metaphor (Ball, “Organization” 93). In a similar vein, Gandy (Panoptic) infers that it is misleading to assume that surveillance in practice is as complete and totalising as the Panoptic ideal type would have us believe. Co-optation of Millions The Metropolitan Police’s counter-Terrorism strategy seeks to co-opt millions where the corporeal body can complement the landscape of technological surveillance that already co-exists within modernity. 
In its press release, the role of civilian bodies in ensuring security of the city is stressed: Keeping Londoners safe from Terrorism is not a job solely for governments, security services or police. If we are to make London the safest major city in the world, we must mobilise against Terrorism not only the resources of the state, but also the active support of the millions of people who live and work in the capital. (MPA Website) Surveillance is increasingly simulated through the millions of corporeal entities where seeing in advance is the goal even before technology records and codes these images (William). Bodies understand and code risk and images through the cultural narratives which circulate in society. Compared to CCTV technology images, which require cultural and political interpretations and interventions, bodies as surveillance organisms implicitly code other bodies and activities. The travel bag in the Metropolitan Police poster reinforces the images of the 7/7 bombers and the renewed attempts to bomb the London Underground on the 21st of July. It reiterates the CCTV footage revealing images of the bombers wearing rucksacks. The image of the rucksack both embodies the everyday as well as the potential for evil in everyday objects. It also inevitably reproduces the cultural biases and prejudices where the rucksack is subliminally associated with a specific type of body. The rucksack in these terms is a laden image which symbolically captures the context and culture of risk discourses in society. The co-optation of the population as a surveillance entity also recasts new forms of social responsibility within the democratic polity, where privacy is increasingly mediated by the greater need to monitor, trace and record the activities of one another. 
Nikolas Rose, in discussing the increasing ‘responsibilisation’ of individuals in modern societies, describes the process in which the individual accepts responsibility for personal actions across a wide range of fields of social and economic activity as in the choice of diet, savings and pension arrangements, health care decisions and choices, home security measures and personal investment choices (qtd. in Dee). While surveillance in individualistic terms is often viewed as a threat to privacy, Rose argues that the state of ‘advanced liberalism’ within modernity and post-modernity requires considerable degrees of self-governance, regulation and surveillance whereby the individual is constructed both as a ‘new citizen’ and a key site of self management. By co-opting and recasting the role of the citizen in the age of Terrorism, the citizen to a degree accepts responsibility for both surveillance and security. In our sociological imagination the body is constructed both as lived as well as a social object. Erving Goffman uses the word ‘umwelt’ to stress that human embodiment is central to the constitution of the social world. Goffman defines ‘umwelt’ as “the region around an individual from which signs of alarm can come” and employs it to capture how people as social actors perceive and manage their settings when interacting in public places (252). Goffman’s ‘umwelt’ can be traced to Immanuel Kant’s idea that it is the a priori categories of space and time that make it possible for a subject to perceive a world (Umiker-Sebeok; qtd. in Ball, “Organization”). Anthony Giddens adapted the term Umwelt to refer to “a phenomenal world with which the individual is routinely ‘in touch’ in respect of potential dangers and alarms which then formed a core of (accomplished) normalcy with which individuals and groups surround themselves” (244). 
Benjamin Smith, in considering the body as an integral component of the link between our consciousness and our material world, observes that the body is continuously inscribed by culture. These inscriptions, he argues, encompass a wide range of cultural practices and will imply knowledge of a variety of social constructs. The inscribing of the body will produce cultural meanings as well as create forms of subjectivity while locating and situating the body within a cultural matrix (Smith). Drawing on Derrida’s work, Pugliese employs the term ‘Somatechnics’ to conceptualise the body as a culturally intelligible construct and to address the techniques in and through which the body is formed and transformed (qtd. in Osuri). These techniques can encompass signification systems such as race and gender and equally technologies which mediate our sense of reality. These technologies of thinking, seeing, hearing, signifying, visualising and positioning produce the very conditions for the cultural intelligibility of the body (Osuri). The body is then continuously inscribed and interpreted through mediated signifying systems. Similarly, Hayles, while not intending to impose a Cartesian dichotomy between the physical body and its cognitive presence, contends that the use and interactions with technology incorporate the body as a material entity but it also equally inscribes it by marking, recording and tracing its actions in various terrains. According to Gayatri Spivak (qtd. in Ball, “Organization”) new habits and experiences are embedded into the corporeal entity which then mediates its reactions and responses to the social world. This means one’s body is not completely one’s own and the presence of ideological forces or influences then inscribe the body with meanings, codes and cultural values. In our modern condition, the body and data are intimately and intricately bound. 
Outside the home, it is difficult for the body to avoid entering into relationships that produce electronic personal data (Stalder). According to Felix Stalder our physical bodies are shadowed by a ‘data body’ which follows the physical body of the consuming citizen and sometimes precedes it by constructing the individual through data (12). Before we arrive somewhere, we have already been measured and classified. Thus, upon arrival, the citizen will be treated according to the criteria ‘connected with the profile that represents us’ (Gandy, Panoptic; William). Following September 11, Lyon (Terrorism) reveals that surveillance data from a myriad of sources, such as supermarkets, motels, traffic control points, credit card transactions records and so on, was used to trace the activities of terrorists in the days and hours before their attacks, confirming that the body leaves data traces and trails. Surveillance works by abstracting bodies from places and splitting them into flows to be reassembled as virtual data-doubles, and in the process can replicate hierarchies and centralise power (Lyon, Terrorism). Mike Dee points out that the nature of surveillance taking place in modern societies is complex and far-reaching and in many ways insidious as surveillance needs to be situated within the broadest context of everyday human acts whether it is shopping with loyalty cards or paying utility bills. Physical vulnerability of the body becomes more complex in the time-space distanciated surveillance systems to which the body has become increasingly exposed. As such, each transaction – whether it be a phone call, credit card transaction, or Internet search – leaves a ‘data trail’ linkable to an individual person or place. Haggerty and Ericson, drawing from Deleuze and Guattari’s concept of the assemblage, describe the convergence and spread of data-gathering systems between different social domains and multiple levels (qtd. in Hier). 
They argue that the target of the generic ‘surveillance assemblage’ is the human body, which is broken into a series of data flows on which the surveillance process is based. The thrust of the focus is the data individuals can yield and the categories to which they can contribute. These are then reapplied to the body. In this sense, surveillance is rhizomatic for it is diverse and connected to an underlying, invisible infrastructure which concerns interconnected technologies in multiple contexts (Ball, “Elements”). The co-opted body in the schema of counter-Terrorism enters a power arrangement where it constitutes both the unseen gaze as well as the data that will be implicated and captured in this arrangement. It is capable of producing surveillance data for those in power while creating new data through its transactions and movements in its everyday life. The body is unequivocally constructed through this data and is also entrapped by it in terms of representation and categorisation. The corporeal body is therefore part of the machinery of surveillance while being vulnerable to its discriminatory powers of categorisation and victimisation. As Hannah Arendt (qtd. in Bauman 91) had warned, “we terrestrial creatures bidding for cosmic significance will shortly be unable to comprehend and articulate the things we are capable of doing”. Arendt’s caution conveys the complexity, vulnerability as well as the complicity of the human condition in the surveillance society. Equally it exemplifies how the corporeal body can be co-opted as a surveillance entity sustaining a new ‘banality’ (Arendt) in the machinery of surveillance. Social Consequences of Surveillance Lyon (Terrorism) observed that the events of 9/11 and 7/7 in the UK have inevitably become a prism through which aspects of social structure and processes may be viewed. 
This prism helps to illuminate the already existing vast range of surveillance practices and processes that touch everyday life in so-called information societies. As Lyon (Terrorism) points out, surveillance is always ambiguous and can encompass genuine benefits and plausible rationales as well as palpable disadvantages. There are elements of representation to consider in terms of how surveillance technologies can re-present data that are collected at source or gathered from another technological medium, and these representations bring different meanings and enable different interpretations of life and surveillance (Ball, “Elements”). As such, surveillance needs to be viewed in a number of ways: practice, knowledge and protection from threat. As data can be manipulated and interpreted according to cultural values and norms, surveillance reflects the inevitability of power relations forging identity in a surveillance society. In this sense, Ball (“Elements”) concludes, surveillance practices capture and create different versions of life as lived by surveilled subjects. She refers to actors within the surveilled domain as ‘intermediaries’, where meaning is inscribed, where technologies re-present information, where power/resistance operates, and where networks are bound together to sometimes distort as well as reiterate patterns of hegemony (“Elements” 93). While surveillance is often connected with technology, technology does not determine nor decide how we code or employ our data. New technologies rarely enter passive environments of total inequality, for they become enmeshed in complex pre-existing power and value systems (Marx). With surveillance there is an emphasis on the classificatory powers in our contemporary world “as persons and groups are often risk-profiled in the commercial sphere which rates their social contributions and sorts them into systems” (Lyon, Terrorism 2).
Lyon (Terrorism) contends that the surveillance society is one that is organised and structured using surveillance-based techniques recorded by technologies, on behalf of the organisations and governments that structure our society. This information is then sorted, sifted and categorised and used as a basis for decisions which affect our life chances (Wood and Ball). The emergence of pervasive, automated and discriminatory mechanisms for risk profiling and social categorising constitutes a significant mechanism for reproducing and reinforcing social, economic and cultural divisions in information societies. Such automated categorisation, Lyon (Terrorism) warns, has consequences for everyone, especially in the face of the new anti-terror measures enacted after September 11. In tandem with this, Bauman points out that a few suicidal murderers on the loose will be quite enough to recycle thousands of innocents into the “usual suspects”. In no time, a few iniquitous individual choices will be reprocessed into the attributes of a “category”; a category easily recognisable by, for instance, a suspiciously dark skin or a suspiciously bulky rucksack – the kind of object which CCTV cameras are designed to note and passers-by are told to be vigilant about. And passers-by are keen to oblige. Since the terrorist atrocities on the London Underground, the volume of incidents classified as “racist attacks” rose sharply around the country. (122; emphasis added) Bauman, drawing on Lyon, asserts that the understandable desire for security combined with the pressure to adopt different kinds of systems “will create a culture of control that will colonise more areas of life with or without the consent of the citizen” (123). This means that the inhabitants of the urban space, whether citizens, workers or consumers, who have no terrorist ambitions whatsoever will discover that their opportunities are more circumscribed by the subject positions or categories which are imposed on them.
Bauman cautions that for some these categories may be extremely prejudicial, restricting them from consumer choices because of credit ratings, or more insidiously, relegating them to second-class status because of their colour or ethnic background (124). Joseph Pugliese, in linking visual regimes of racial profiling and the shooting of Jean Charles de Menezes in the aftermath of the 7/7 bombings in London, suggests that the discursive relations of power and visuality are inextricably bound. Pugliese argues that racial profiling creates a regime of visuality which fundamentally inscribes our physiology of perceptions with stereotypical images. He applies this analogy to Menezes running down the platform, where the retina transforms him into the “hallucinogenic figure of an Asian Terrorist” (Pugliese 8). With globalisation and the proliferation of ICTs, borders and boundaries are no longer sacrosanct, and as such risks are managed by enacting ‘smart borders’ through new technologies, with huge databases behind the scenes processing information about individuals and their journeys through the profiling of body parts with, for example, iris scans (Wood and Ball 31). Such body profiling technologies are used to create watch lists of dangerous passengers or identity groups who might be of greater ‘risk’. The body in a surveillance society can be dissected into parts and profiled and coded through technology. These disparate codings of body parts can be assembled (or selectively omitted) to construct and represent whole bodies in our information society to ascertain risk. The selection and circulation of knowledge will also determine who gets slotted into the various categories that a surveillance society creates.

Conclusion

When the corporeal body is subsumed into a web of surveillance it often raises questions about the deterministic nature of technology. The question is a long-standing one in our modern consciousness.
We are apprehensive about according technology too much power, and yet it is implicated in the contemporary power relationships where it is suspended amidst human motive, agency and anxiety. The emergence of surveillance societies, the co-optation of bodies in surveillance schemas, as well as the construction of the body through data in everyday transactions, conveys both the vulnerabilities of the human condition as well as its complicity in maintaining the power arrangements in society. Bauman, in citing Jacques Ellul and Hannah Arendt, points out that we suffer a ‘moral lag’ in so far as technology and society are concerned, for often we ruminate on the consequences of our actions and motives only as afterthoughts, without realising at this point of existence that the “actions we take are most commonly prompted by the resources (including technology) at our disposal” (91).

References

Abrams, Philip. Historical Sociology. Shepton Mallet, UK: Open Books, 1982. Altheide, David. “Consuming Terrorism.” Symbolic Interaction 27.3 (2004): 289-308. Arendt, Hannah. Eichmann in Jerusalem: A Report on the Banality of Evil. London: Faber & Faber, 1963. Bauman, Zygmunt. Liquid Fear. Cambridge, UK: Polity, 2006. Ball, Kristie. “Elements of Surveillance: A New Framework and Future Research Direction.” Information, Communication and Society 5.4 (2002): 573-90. ———. “Organization, Surveillance and the Body: Towards a Politics of Resistance.” Organization 12 (2005): 89-108. Dee, Mike. “The New Citizenship of the Risk and Surveillance Society – From a Citizenship of Hope to a Citizenship of Fear?” Paper presented to the Social Change in the 21st Century Conference, Queensland University of Technology, Queensland, Australia, 22 Nov. 2002. 14 April 2007 http://eprints.qut.edu.au/archive/00005508/02/5508.pdf. Deleuze, Gilles, and Felix Guattari. A Thousand Plateaus. Minneapolis: U of Minnesota P, 1987. Fopp, Rodney.
“Increasing the Potential for Gaze, Surveillance and Normalization: The Transformation of an Australian Policy for People Who Are Homeless.” Surveillance and Society 1.1 (2002): 48-65. Foucault, Michel. Discipline and Punish: The Birth of the Prison. London: Allen Lane, 1977. Giddens, Anthony. Modernity and Self-Identity: Self and Society in the Late Modern Age. Stanford: Stanford UP, 1991. Gandy, Oscar. The Panoptic Sort: A Political Economy of Personal Information. Boulder, CO: Westview, 1997. ———. “Data Mining and Surveillance in the Post 9/11 Environment.” The Intensification of Surveillance: Crime, Terrorism and War in the Information Age. Eds. Kristie Ball and Frank Webster. Sterling, VA: Pluto Press, 2003. Goffman, Erving. Relations in Public. Harmondsworth: Penguin, 1971. Graham, Stephen, and Simon Marvin. Splintering Urbanism: Networked Infrastructures, Technological Mobilities and the Urban Condition. New York: Routledge, 2001. Hier, Sean. “Probing Surveillance Assemblage: On the Dialectics of Surveillance Practices as Process of Social Control.” Surveillance and Society 1.3 (2003): 399-411. Hayles, Katherine. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature and Informatics. Chicago: U of Chicago P, 1999. Hubbard, Phil. “Fear and Loathing at the Multiplex: Everyday Anxiety in the Post-Industrial City.” Capital & Class 80 (2003). Latour, Bruno. Science in Action. Cambridge, Mass.: Harvard UP, 1987. Lyon, David. The Electronic Eye – The Rise of Surveillance Society. Oxford: Polity Press, 1994. ———. “Terrorism and Surveillance: Security, Freedom and Justice after September 11 2001.” Privacy Lecture Series, Queens University, 12 Nov. 2001. 16 April 2007 http://privacy.openflows.org/lyon_paper.html. ———. “Surveillance Studies: Understanding Visibility, Mobility and the Phenetic Fix.” Surveillance and Society 1.1 (2002): 1-7. Metropolitan Police Authority (MPA). “Counter Terrorism: The London Debate.” Press Release. 21 June 2006.
18 April 2007 http://www.mpa.gov.uk/access/issues/comeng/Terrorism.htm. Pugliese, Joseph. “Asymmetries of Terror: Visual Regimes of Racial Profiling and the Shooting of Jean Charles de Menezes in the Context of the War in Iraq.” Borderlands 5.1 (2006). 30 May 2007 http://www.borderlandsejournal.adelaide.edu.au/vol15no1_2006/pugliese.htm. Marx, Gary. “A Tack in the Shoe: Neutralizing and Resisting the New Surveillance.” Journal of Social Issues 59.2 (2003). 18 April 2007 http://web.mit.edu/gtmarx/www/tack.html. Moores, Shaun. “Doubling of Place.” Mediaspace: Place Scale and Culture in a Media Age. Eds. Nick Couldry and Anna McCarthy. London: Routledge, 2004. Monahan, Torin, ed. Surveillance and Security: Technological Politics and Power in Everyday Life. London: Routledge, 2006. Norris, Clive, and Gary Armstrong. The Maximum Surveillance Society: The Rise of CCTV. Oxford: Berg, 1999. O’Harrow, Robert. No Place to Hide. New York: Free Press, 2005. Osuri, Goldie. “Media Necropower: Australian Media Reception and the Somatechnics of Mamdouh Habib.” Borderlands 5.1 (2006). 30 May 2007 http://www.borderlandsejournal.adelaide.edu.au/vol5no1_2006/osuri_necropower.htm. Rose, Nikolas. “Government and Control.” British Journal of Criminology 40 (2000): 321–399. Scannell, Paddy. Radio, Television and Modern Life. Oxford: Blackwell, 1996. Smith, Benjamin. “In What Ways, and for What Reasons, Do We Inscribe Our Bodies?” 15 Nov. 1998. 30 May 2007 http://www.bmezine.com/ritual/981115/Whatways.html. Stalder, Felix. “Privacy Is Not the Antidote to Surveillance.” Surveillance and Society 1.1 (2002): 120-124. Umiker-Sebeok, Jean. “Power and the Construction of Gendered Spaces.” Indiana University-Bloomington. 14 April 2007 http://www.slis.indiana.edu/faculty/umikerse/papers/power.html. William, Bogard. The Simulation of Surveillance: Hypercontrol in Telematic Societies. Cambridge: Cambridge UP, 1996. Wood, David M., and Kristie Ball, eds.
“A Report on the Surveillance Society.” Surveillance Studies Network, UK, Sep. 2006. 14 April 2007 http://www.ico.gov.uk/upload/documents/library/data_protection/practical_application/surveillance_society_full_report_2006.pdf.

Citation reference for this article

MLA Style
Ibrahim, Yasmin. "Commodifying Terrorism: Body, Surveillance and the Everyday." M/C Journal 10.3 (2007). <http://journal.media-culture.org.au/0706/05-ibrahim.php>.

APA Style
Ibrahim, Y. (Jun. 2007). "Commodifying Terrorism: Body, Surveillance and the Everyday," M/C Journal, 10(3). Retrieved from <http://journal.media-culture.org.au/0706/05-ibrahim.php>.
10

Mules, Warwick. "A Remarkable Disappearing Act." M/C Journal 4, no. 4 (August 1, 2001). http://dx.doi.org/10.5204/mcj.1920.

Full text
Abstract:
Creators and Creation

Creation is a troubling word today, because it suggests an impossible act, indeed a miracle: the formation of something out of nothing. Today we no longer believe in miracles, yet we see all around us myriad acts which we routinely define as creative. Here, I am not referring to the artistic performances and works of gifted individuals, which have their own genealogy of creativity in the lineages of Western art. Rather, I am referring to the small, personal events that we see within the mediated spaces of the everyday (on the television screen, in magazines and newspapers) where lives are suddenly changed for the better through the consumption of products designed to fulfil our personal desires. In this paper, I want to explore the implications of thinking about everyday creativity as a modern cultural form. I want to suggest that not only is such an impossible possibility possible, but that its meaning has been at the centre of the desire to name, to gain status from, and to market the products of modern industrialisation. Furthermore, I want to suggest that beyond any question of marketing rhetoric, we need to attend to this desire as the ghost of a certain kind of immanence which has haunted modernity and its projects from the very beginning, linking the great thoughts of modern philosophy with the lowliest products of modern life.

Immanence, Purity and the Cogito

In Descartes' famous Discourse on Method, the author-narrator (let's call him Descartes) recounts how he came about the idea of the thinking self or cogito, as the foundation of worldly knowledge: And so because sometimes our senses deceive us, I made up my mind to suppose that they always did. . . . I resolved to pretend that everything that had ever entered my mind was as false as the figments of my dreams. But then as I strove to think of everything false, I realized that, in the very act of thinking everything false, I was aware of myself as something real.
(60-61) These well known lines are, of course, the beginnings of a remarkable philosophical enterprise, reaching forward to Husserl and beyond, in which the external world is bracketed, all the better to know it in the name of reason. Through an act of pretence ("I resolved to pretend"), Descartes disavows the external world as the source of certain knowledge, and, turning to the only thing left: the thought of himself—"I was aware of myself as something real"—makes his famous declaration, "I think therefore I am". But what precisely characterises this thinking being, destined to become the cogito of all modernity? Is it purely this act of self-reflection?: Then, from reflecting on the fact that I had doubts, and that consequently my existence was not wholly perfect, it occurred to me to enquire how I learned to think of something more perfect than myself, and it became evident to me that it must be through some nature which was in fact more perfect. (62) Descartes has another thought that "occurred to me" almost at the same moment that he becomes aware of his own thinking self. This second thought makes him aware that the cogito is not complete, requiring yet a further thought, that of a perfection drawn from something "more perfect than myself". The creation of the cogito does not occur, as we might have first surmised, within its own space of self-reflection, but becomes lodged within what might be called, following Deleuze and Guattari, a "plane of immanence" coming from the outside: "The plane of immanence is . . . an outside more distant than any external world because it is an inside deeper than any internal world: it is immanence" (59). Here we are left with a puzzling question: what of this immanence that made him aware of his own imperfection at the very moment of the cogito's inception? Can this immanence be explained away by Descartes' appeal to God as a state of perfection? 
Or is it the very material upon which the cogito is brought into existence, shaping it towards perfection? We are forced to admit that, irrespective of the source of this perfection, the cogito requires something from the outside which, paradoxically, is already on the inside, in order to create itself as a pure form. Following the contours of Descartes' own writing, we cannot account for modernity purely in terms of self-reflection, if, in the very act of its self-creation, the modern subject is shot through with immanence that comes from the outside. Rather what we must do is describe the various forms this immanence takes. Although there is no necessary link between immanence and perfection (that is, one does not logically depend on the other as its necessary cause) their articulation nevertheless produces something (the cogito for instance). Furthermore, this something is always characterised as a creation. In its modern form, creation is a form of immanence within materiality—a virtualisation of material actuality, that produces idealised states, such as God, freedom, reason, uniqueness, originality, love and perfection. As Bruno Latour has argued, the "modern critical stance" creates unique, pure objects, by purging the material "networks" from which they are formed, of their impurities (11-12). Immanence is characterised by a process of sifting and purification which brings modern objects into existence: "the plane of immanence . . . acts like a sieve" (Deleuze and Guattari 42). The nation, the state, the family, the autonomous subject, and the work of art—all of these are modern when their 'material' is purged of impurities by an immanence that 'comes from the outside' yet is somehow intrinsic to the material itself. As Zygmunt Bauman points out, the modern nation exists by virtue of a capacity to convert strangers into citizens; by purging itself of impurities inhabiting it from within but coming from the outside (63). 
The modern work of art is created by purging itself of the vulgarities and impurities of everyday life (Berman 30); by reducing its contingent and coincidental elements to a geometrical, punctual or serialised form. The modern nuclear family is created by converting the community-based connections between relatives and friends into a single, internally consistent self-reproducing organism. All of these examples require us to think of creativity as an act which brings something new into existence from within a material base that must be purged and disavowed, but which, simultaneously, must also be retained as its point of departure that it never really leaves. Immanence should not be equated with essence, if by essence we mean a substratum of materiality inherent in things; a quality or quiddity to which all things can be reduced. Rather, immanence is the process whereby things appear as they are to others, thereby forming themselves into 'objects' with certain identifiable characteristics. Immanence draws the 'I' and the 'we' into relations of subjectivity to the objects thus produced. Immanence is not in things; it is the thing's condition of objectivity in a material, spatial and temporal sense; its 'becoming object' before it can be 'perceived' by a subject. As Merleau-Ponty has beautifully argued, seeing as a bodily effect necessarily comes before perception as an inner ownership (Merleau-Ponty 3-14). Since immanence always comes from elsewhere, no intensive scrutiny of the object in itself will bring it to light. But since immanence is already inside the object from the moment of its inception, no amount of examination of its contextual conditions—the social, cultural, economic, institutional and authorial conditions under which the object was created—will bring us any closer to it. Rather, immanence can only be 'seen' (if this is the right word) in terms of the objects it creates. 
We should stop seeking immanence as a characteristic of objects considered in themselves, and rather see it in terms of a virtual field or plane, in which objects appear, positioned in a transversally related way. This field does not exist transcendentally to the objects, like some overarching principle of order, but as a radically exteriorised stratum of 'immaterial materiality' with a specific image-content, capable of linking objects together as a series of creations, all with the stamp of their own originality, individuality and uniqueness, yet all bound together by a common set of image relations (Deleuze 34-35). If, as Foucault argues, modern objects emerge in a "field of exteriority"—a complex web of discursive interrelations, with contingent rather than necessary connections to one another (Foucault 45)—then it should be possible to map the connections between these objects in terms of the "schema of correspondence" (74) detected in the multiplicities thrown up by the regularities of modern production and consumption.

Commodities and Created Objects

We can extend the idea of creation to include not only aesthetic acts and their objects, but also the commodity-products of modern industrialisation. Let's begin by plunging straight into the archive, where we might find traces of these small modern miracles. An illustrated advertisement for 'Hudson's Extract of Soap' appeared in the Illustrated Sydney News on Saturday February 22nd, 1888. The illustration shows a young woman with a washing basket under her arm, standing beside a sign posted to a wall, which reads 'Remarkable Disappearance of all Dirt from Everything by using Hudson's Extract of Soap' (see Figure 1). The woman has her head turned towards the poster, as if reading it. Beneath these words is another set of words offering a reward: 'Reward !!! Purity, Health, Perfection, Satisfaction. By its regular daily use'.
Here we are confronted with a remarkable proposition: soap does not make things clean, rather it makes dirt disappear. Soap purifies things by making their impurities disappear. The claim made applies to 'everything', drawing attention to a desire for a certain state of perfection, exemplified by the pure body, cleansed of dirt and filth. The pure exists in potentia as a perfect state of being, realised by the purgation of impurities. Fig 1: Hudson's Soap. Illustrated Sydney News, on Saturday February 22nd, 1888 Here we might be tempted to trace the motivation of this advertisement to a concern in the nineteenth century for a morally purged, purified body, regulated according to bourgeois values of health, respectability and decorum. As Catherine Gallagher has pointed out, the body in the nineteenth century was at the centre of a sick society requiring "constant flushing, draining, and excising of various deleterious elements" (Gallagher 90). But this is only half the story. The advertisement offers a certain image of purity; an image which exceeds the immediate rhetorical force associated with selling a product, one which cannot be simply reduced to its contexts of use. The image of perfection in the Hudson's soap advertisement belongs to a network of images spread across a far-flung field; a network in which we can 'see' perfection as a material immanence embodied in things. In modernity, commodities are created objects par excellence, which, in their very ordinariness, bear with them an immanence, binding consumers together into consumer formations. Each act of consumption is not simply driven by necessity and need, but by a desire for self-transformation, embodied in the commodity itself. 
Indeed, self-transformation becomes one of the main creative processes in what Marshall Berman has identified as the "third" phase of modernity, where, paraphrasing Nietzsche, "modern mankind found itself in the midst of a great absence and emptiness of values and yet, at the same time, a remarkable abundance of possibilities" (Berman 21). Commodification shifts human desire away from the thought of the other as a transcendental reality remote from the senses, and onto a future-oriented material plane, in which the self is capable of becoming an other in a tangible, specific way (Massumi 35 ff.). By the end of the nineteenth century, commodities had become associated with scenarios of self-transformation embedded in human desire, which then began to shape the needs of society itself. Consumer formations are not autonomous realms; they are transversally located within and across social strata. This is because commodities bear with them an immanence which always exceeds their context of production and consumption, spreading across vast cultural terrains. An individual consumer is thus subject to two forces: the force of production that positions her within the social strata as a member of a class or social grouping, and the force of consumption that draws her away from, or indeed, further into a social positioning. While the consumption of commodities remained bound to ideologies relating to the formation of class in terms of a bourgeois moral order, as it was in Britain, America and Europe throughout the nineteenth century, the discontinuity between social strata and cultural formation was felt in terms of the possibility of self-transformation by moving up a class.
In the nineteenth century, working-class families flocked to the new photographic studios to have their portraits taken, emulating the frozen moral rectitude of the ideal bourgeois type, or scrimped and saved to purchase parlour pianos and other such cultural paraphernalia, thereby signalling a certain kind of leisured freedom from the grind of work (Sekula 8). But when the desire for self-transformation starts to outstrip the ideological closure of class; that is, when the 'reality' of commodities starts to overwhelm the social reality of those who make them, then desire itself takes on an autonomy, which can then be attached to multiple images of the other, expressed in imaginary scenarios of escape, freedom, success and hyper-experience. This kind of free-floating desire has now become a major trigger for transformations in consumer formations, linked to visual technologies where images behave like quasi-autonomous beings. The emergence of these images can be traced back at least to the mid-nineteenth century, when products of industrialisation were transformed into commodities freely available as spectacles within the public spaces of exhibitions and in mass advertising in the press, for instance in the Great Exhibition of 1851 held at London's Crystal Palace (Richards 28 ff.). Here we see the beginnings of a new kind of object-image dislocated from the utility of the product, with its own exchange value and logic of dispersal. Bataille's notion of symbolic exchange can help explain the logic of dispersal inherent in commodities. For Bataille, capitalism involves both production utility and sumptuary expenditure, where the latter is not simply a calculated version of the former (Bataille 120 ff.). Sumptuary expenditure is a discharge of an excess, and not a drawing in of demand to match the needs of supply. Consumption thus has a certain 'uncontrolled' element embedded in it, which always moves beyond the machinations of market logic.
Under these conditions, the commodity image always exceeds production and use, taking on a life of its own, charged with desire. In the late nineteenth century, the convergence of photography and cartes-de-visite released a certain scopophilic desire in the form of postcard pornography, which eventually migrated to the modern forms of advertising and public visual imagery that we see today. According to Suren Lalvani, the "onset of scopophilia" in modern society is directly attributable to the convergence of photographic technology and erotic display in the nineteenth century (Lalvani). In modern consumer cultures, desire does not lag behind need, but enters into the cycle of production and consumption from the outside, where it becomes its driving force. In this way, modern consumer cultures transform themselves by ecstasis (literally, by standing outside oneself) when the body becomes virtualised into its other. Here, the desire for self-transformation embodied in the act of consumption intertwines with, and eventually redefines, the social positioning of the subject. Indeed, the 'laws' of capital and labour, where each person or family group is assigned a place and regime of duties, are constantly undone and redefined by the superfluity of consumption, gradually gathering pace throughout the nineteenth and twentieth centuries. These tremendous changes, operating throughout all capitalist consumer cultures for some time, do not occur in a calculated way, as if controlled by the forces of production alone. Rather, they occur through myriad acts of self-transformation, operating transversally, linking consumer to consumer within what I have defined earlier as a field of immanence. Here, the laws of supply and demand are inadequate to predict the logic of this operation; they only describe the effects of consumption after desire has been spent.
Or, to put this another way, they misread desire as need, thereby transcribing the primary force of consumption into a secondary component of the production/labour cycle. This error is made by Humphrey McQueen in his recent book The Essence of Capitalism: the origins of our future (2001). In chapter 8, McQueen examines the logic of the consumer market through a critique of the marketeer's own notion of desire, embodied in the "sovereign consumer", making rational choices. Here desire is reduced back to a question of calculated demand, situated within the production/consumption cycle. McQueen leaves himself no room to manoeuvre outside this cycle; there is no way to see beyond the capitalist cycle of supply/demand which accelerates across ever-increasing horizons. To avoid this error, desire needs to be seen as immanent to the production/consumption cycle; as produced by it, yet superfluous to its operations. We need therefore to situate ourselves not on the side of production, but in the superfluity of consumption in order to recognise the transformational triggers that characterise modern consumer cultures, and their effects on the social order. In order to understand the creative impulse in modernity today, we need to come to grips with the mystery of consumption, where the thing consumed operates on the consumer in both a material and an immaterial way. This mystification of the commodity was, of course, well noted by Marx: A commodity is . . . a mysterious thing, simply because in it the social character of men's labour appears to them as an objective character stamped upon the product of that labour; because the relation of the producers to the sum total of their own labour is presented to them as a social relation, existing not between themselves, but between the products of their labour. 
(Marx 43, my emphasis) When commodities take on such a powerful force that their very presence starts to drive and shape the social relations that have given rise to them; that is, when desire replaces need as the shaping force of societies, then we are obliged to redefine the commodity and its relation to the subject. Under these conditions, the mystery of the commodity is no longer something to be dispelled in order to retrieve the real relation between labour and capital, but becomes the means whereby "men's labour" is actually shaped and formed as a specific mode of production. Eric Alliez and Michel Feher (1987) point out that in capitalism "the subjection framework which defines the wage relation has penetrated society to such an extent that we can now speak not only of the formal subsumption of labor by capital but of the actual or 'real' subsumption by capital of society as a whole" (345). In post-Fordist economic contexts, individuals' relation to capital is no longer based on subjection but incorporation: "space is subsumed under a time entirely permeated by capital. In so doing, they [neo-Fordist strategies] also instigate a regime in which individuals are less subject to than incorporated by capital" (346). In societies dominated by the subjection of workers to capital, the commodity's exchange value is linked strongly to the classed position of the worker, consolidating his interests within the shadow of a bourgeois moral order. But where the worker is incorporated into capital, his 'real' social relations go with him, making it difficult to see how they can be separated from the commodities he produces and which he also consumes at leisure: "If the capitalist relation has colonized all of the geographical and social space, it has no inside into which to integrate things. It has become an unbounded space—in other words, a space coextensive with its own inside and outside. It has become a field of immanence" (Massumi 18). 
It therefore makes little sense to initiate critiques of the capital relation by overthrowing the means of subjection. Instead, what is required is a way through the 'incorporation' of the individual into the capitalist system, an appropriation of the means of consumption in order to invent new kinds of selfhood. Or at the very least, to expose the process of self-formation to its own means of consumption. What we need to do, then, is to undertake a description of the various ways in which desire is produced within consumer cultures as a form of self-creation. As we have seen, in modernity, self-creation occurs when human materiality is rendered immaterial through a process of purification. Borrowing from Deleuze and Guattari, I have characterised this process in terms of immanence: a force coming from the outside, but which is already inside the material itself. In the necessary absence of any prime mover or deity, pure immanence becomes the primary field in which material is rendered into its various and specific modern forms. Immanence is not a transcendental power operating over things, but that which is the very motor of modernity; its specific way of appearing to itself, and of relating to itself in its various guises and manifestations. Through a careful mapping of the network of commodity images spread through far-flung fields, cutting through specific contexts of production and consumption, we can see creation at work in one of its specific modern forms. Immanence, and the power of creation it makes possible, can be found in all modern things, even soap powder! References Alliez, Eric and Michel Feher. "The Luster of Capital." Zone 1(2) 1987: 314-359. Bauman, Zygmunt. Modernity and Ambivalence. Cambridge: Polity, 1991. Berman, Marshall. All That is Solid Melts into Air. New York: Penguin, 1982. Bataille, Georges. "The Notion of Expenditure." Georges Bataille, Visions of Excess: Selected Writings, 1927-1939. Trans.
Allan Stoekl, Minneapolis: University of Minnesota Press, 1995: 116-129. Deleuze, Gilles. Foucault. Trans. Seán Hand, Minneapolis: University of Minnesota Press, 1988. Deleuze, Gilles and Félix Guattari. What is Philosophy? Trans. Hugh Tomlinson and Graham Burchill, New York: Columbia University Press, 1994. Descartes, René. Discourse on Method. Trans. Arthur Wollaston, Harmondsworth: Penguin, 1960. Foucault, Michel. The Archaeology of Knowledge. Trans. A.M. Sheridan Smith, London: Tavistock, 1972. Gallagher, Catherine. "The Body Versus the Social Body in the Works of Thomas Malthus and Henry Mayhew." The Making of the Modern Body: Sexuality and Society in the Nineteenth Century, Catherine Gallagher and Thomas Laqueur (Eds.), Berkeley: University of California Press, 1987: 83-106. Lalvani, Suren. "Photography, Epistemology and the Body." Cultural Studies, 7(3), 1993: 442-465. Latour, Bruno. We Have Never Been Modern. Trans. Catherine Porter, Cambridge, Mass.: Harvard University Press, 1993. Marx, Karl. Capital, A New Abridgement. David McLellan (Ed.), Oxford: Oxford University Press, 1995. Massumi, Brian. "Everywhere You Want to Be: Introduction to Fear" in Brian Massumi (Ed.). The Politics of Everyday Fear. Minneapolis: University of Minnesota Press, 1993: 3-37. Merleau-Ponty, Maurice. The Visible and the Invisible. Trans. Alphonso Lingis, Evanston: Northwestern University Press, 1968. McQueen, Humphrey. The Essence of Capitalism: the Origins of Our Future. Sydney: Sceptre, 2001. Richards, Thomas. The Commodity Culture of Victorian England: Advertising and Spectacle, 1851-1914. Stanford: Stanford University Press, 1990. Sekula, Allan. "The Body and the Archive." October, 39, 1986: 3-65.
11

Hutcheon, Linda. "In Defence of Literary Adaptation as Cultural Production." M/C Journal 10, no. 2 (May 1, 2007). http://dx.doi.org/10.5204/mcj.2620.

Full text
Abstract:
Biology teaches us that organisms adapt—or don’t; sociology claims that people adapt—or don’t. We know that ideas can adapt; sometimes even institutions can adapt. Or not. Various papers in this issue attest in exciting ways to precisely such adaptations and maladaptations. (See, for example, the articles in this issue by Lelia Green, Leesa Bonniface, and Tami McMahon, by Lexey A. Bartlett, and by Debra Ferreday.) Adaptation is a part of nature and culture, but it’s the latter alone that interests me here. (However, see the article by Hutcheon and Bortolotti for a discussion of nature and culture together.) It’s no news to anyone that not only adaptations, but all art is bred of other art, though sometimes artists seem to get carried away. My favourite example of excess of association or attribution can be found in the acknowledgements page to a verse drama called Beatrice Chancy by the self-defined “maximalist” (not minimalist) poet, novelist, librettist, and critic, George Elliott Clarke.
His selected list of the incarnations of the story of Beatrice Cenci, a sixteenth-century Italian noblewoman put to death for the murder of her father, includes dramas, romances, chronicles, screenplays, parodies, sculptures, photographs, and operas: dramas by Vincenzo Pieracci (1816), Percy Bysshe Shelley (1819), Juliusz Slowacki (1843), Walter Landor (1851), Antonin Artaud (1935) and Alberto Moravia (1958); the romances by Francesco Guerrazi (1854), Henri Pierangeli (1933), Philip Lindsay (1940), Frederic Prokosch (1955) and Susanne Kircher (1976); the chronicles by Stendhal (1839), Mary Shelley (1839), Alexandre Dumas, père (1839-40), Robert Browning (1864), Charles Swinburne (1883), Corrado Ricci (1923), Sir Lionel Cust (1929), Kurt Pfister (1946) and Irene Mitchell (1991); the film/screenplay by Bertrand Tavernier and Colo O’Hagan (1988); the parody by Kathy Acker (1993); the sculpture by Harriet Hosmer (1857); the photograph by Julia Margaret Cameron (1866); and the operas by Guido Pannain (1942), Berthold Goldschmidt (1951, 1995) and Havergal Brian (1962). (Beatrice Chancy, 152) He concludes the list with: “These creators have dallied with Beatrice Cenci, but I have committed indiscretions” (152). An “intertextual feast”, by Clarke’s own admission, this rewriting of Beatrice’s story—especially Percy Bysshe Shelley’s own verse play, The Cenci—illustrates brilliantly what Northrop Frye offered as the first principle of the production of literature: “literature can only derive its form from itself” (15). But in the last several decades, what has come to be called intertextuality theory has shifted thinking away from looking at this phenomenon from the point of view of authorial influences on the writing of literature (and works like Harold Bloom’s famous study of the Anxiety of Influence) and toward considering our readerly associations with literature, the connections we (not the author) make—as we read.
We, the readers, have become “empowered”, as we say, and we’ve become the object of academic study in our own right. Among the many associations we inevitably make, as readers, is with adaptations of the literature we read, be it of Jane Austen novels or Beowulf. Some of us may have seen the 2006 rock opera of Beowulf done by the Irish Repertory Theatre; others await the new Neil Gaiman animated film. Some may have played the Beowulf videogame. I personally plan to miss the upcoming updated version that makes Beowulf into the son of an African explorer. But I did see Sturla Gunnarsson’s Beowulf and Grendel film, and yearned to see the comic opera at the Lincoln Centre Festival in 2006 called Grendel, the Transcendence of the Great Big Bad. I am not really interested in whether these adaptations—all in the last year or so—signify Hollywood’s need for a new “monster of the week” or are just the sign of a desire to cash in on the success of The Lord of the Rings. For all I know they might well act as an ethical reminder of the human in the alien in a time of global strife (see McGee, A4). What interests me is the impact these multiple adaptations can have on the reader of literature as well as on the production of literature. Literature, like painting, is usually thought of as what Nelson Goodman (114) calls a one-stage art form: what we read (like what we see on a canvas) is what is put there by the originating artist. Several major consequences follow from this view. First, the implication is that the work is thus an original and new creation by that artist. However, even the most original of novelists—like Salman Rushdie—are the first to tell you that stories get told and retold over and over. Indeed his controversial novel, The Satanic Verses, takes this as a major theme. Works like the Thousand and One Nights are crucial references in all of his work. As he writes in Haroun and the Sea of Stories: “no story comes from nowhere; new stories are born of old” (86).
But the illusion of originality is only one of the implications of seeing literature as a one-stage art form. Another is the assumption that what the writer put on paper is what we read. But entire doctoral programs in literary production and book history have been set up to study how this is not the case, in fact. Editors influence, even change, what authors want to write. Designers control how we literally see the work of literature. Beatrice Chancy’s bookend maps of historical Acadia literally frame how we read the historical story of the title’s mixed-race offspring of an African slave and a white slave owner in colonial Nova Scotia in 1801. Media interest or fashion or academic ideological focus may provoke a publisher to foreground in the physical presentation different elements of a text like this—its stress on race, or gender, or sexuality. The fact that its author won Canada’s Governor General’s Award for poetry might mean that the fact that this is a verse play is emphasised. If the book goes into a second edition, will a new preface get added, changing the framework for the reader once again? As Katherine Larson has convincingly shown, the paratextual elements that surround a work of literature like this one become a major site of meaning generation. What if literature were not a one-stage art form at all? What if it were, rather, what Goodman calls “two-stage” (114)? What if we accept that other artists, other creators, are needed to bring it to life—editors, publishers, and indeed readers? In a very real and literal sense, from our (audience) point of view, there may be no such thing as a one-stage art work. Just as the experience of literature is made possible for readers by the writer, in conjunction with a team of professional and creative people, so, arguably, all art needs its audience to be art; the un-interpreted, un-experienced art work is not worth calling art.
Goodman resists this move to considering literature a two-stage art, not at all sure that readings are end products the way that performance works are (114). Plays, films, television shows, or operas would be his prime examples of two-stage arts. In each of these, a text (a playtext, a screenplay, a score, a libretto) is moved from page to stage or screen and given life, by an entire team of creative individuals: directors, actors, designers, musicians, and so on. Literary adaptations to the screen or stage are usually considered as yet another form of this kind of transcription or transposition of a written text to a performance medium. But the verbal move from the “book” to the diminutive “libretto” (in Italian, little book or booklet) is indicative of a view that sees adaptation as a step downward, a move away from a primary literary “source”. In fact, an entire negative rhetoric of “infidelity” has developed in both journalistic reviewing and academic discourse about adaptations, and it is a morally loaded rhetoric that I find surprising in its intensity. Here is the wonderfully critical description of that rhetoric by the king of film adaptation critics, Robert Stam: Terms like “infidelity,” “betrayal,” “deformation,” “violation,” “bastardisation,” “vulgarisation,” and “desecration” proliferate in adaptation discourse, each word carrying its specific charge of opprobrium. “Infidelity” carries overtones of Victorian prudishness; “betrayal” evokes ethical perfidy; “bastardisation” connotes illegitimacy; “deformation” implies aesthetic disgust and monstrosity; “violation” calls to mind sexual violence; “vulgarisation” conjures up class degradation; and “desecration” intimates religious sacrilege and blasphemy. 
(3) I join many others today, like Stam, in challenging the persistence of this fidelity discourse in adaptation studies, thereby providing yet another example of what, in his article here called “The Persistence of Fidelity: Adaptation Theory Today,” John Connor has called the “fidelity reflex”—the call to end an obsession with fidelity as the sole criterion for judging the success of an adaptation. But here I want to come at this same issue of the relation of adaptation to the adapted text from another angle. When considering an adaptation of a literary work, there are other reasons why the literary “source” text might be privileged. Literature has historical priority as an art form, Stam claims, and so in some people’s eyes will always be superior to other forms. But does it actually have priority? What about even earlier performative forms like ritual and song? Or to look forward, instead of back, as Tim Barker urges us to do in his article here, what about the new media’s additions to our repertoire with the advent of electronic technology? How can we retain this hierarchy of artistic forms—with literature inevitably on top—in a world like ours today? How can both the Romantic ideology of original genius and the capitalist notion of individual authorship hold up in the face of the complex reality of the production of literature today (as well as in the past)? (In “Amen to That: Sampling and Adapting the Past”, Steve Collins shows how digital technology has changed the possibilities of musical creativity in adapting/sampling.) Like many other ages before our own, adaptation is rampant today, as director Spike Jonze and screenwriter Charlie Kaufman clearly realised in creating Adaptation, their meta-cinematic illustration-as-send-up film about adaptation. But rarely has a culture denigrated the adapter as a secondary and derivative creator as much as we do the screenwriter today—as Jonze explores with great irony. 
Michelle McMerrin and Sergio Rizzo helpfully explain in their pieces here that one of the reasons for this is the strength of auteur theory in film criticism. But we live in a world in which works of literature have been turned into more than films. We now have literary adaptations in the forms of interactive new media works and videogames; we have theme parks; and of course, we have the more common television series, radio and stage plays, musicals, dance works, and operas. And, of course, we now have novelisations of films—and they are not given the respect that originary novels are given: it is the adaptation as adaptation that is denigrated, as Deborah Allison shows in “Film/Print: Novelisations and Capricorn One”. Adaptations across media are inevitably fraught, and for complex and multiple reasons. The financing and distribution issues of these widely different media alone inevitably challenge older capitalist models. The need or desire to appeal to a global market has consequences for adaptations of literature, especially with regard to its regional and historical specificities. These particularities are what usually get adapted or “indigenised” for new audiences—be they the particularities of the Spanish gypsy Carmen (see Ioana Furnica, “Subverting the ‘Good, Old Tune’”), those of the Japanese samurai genre (see Kevin P. Eubanks, “Becoming-Samurai: Samurai [Films], Kung-Fu [Flicks] and Hip-Hop [Soundtracks]”), of American hip hop graffiti (see Kara-Jane Lombard, “‘To Us Writers, the Differences Are Obvious’: The Adaptation of Hip Hop Graffiti to an Australian Context”) or of Jane Austen’s fiction (see Suchitra Mathur, “From British ‘Pride’ to Indian ‘Bride’: Mapping the Contours of a Globalised (Post?)Colonialism”). What happens to the literary text that is being adapted, often multiple times? 
Rather than being displaced by the adaptation (as is often feared), it most frequently gets a new life: new editions of the book appear, with stills from the movie adaptation on its cover. But if I buy and read the book after seeing the movie, I read it differently than I would have before I had seen the film: in effect, the book, not the adaptation, has become the second and even secondary text for me. And as I read, I can only “see” characters as imagined by the director of the film; the cinematic version has taken over, has even colonised, my reader’s imagination. The literary “source” text, in my readerly, experiential terms, becomes the secondary work. It exists on an experiential continuum, in other words, with its adaptations. It may have been created before, but I only came to know it after. What if I have read the literary work first, and then see the movie? In my imagination, I have already cast the characters: I know what Gabriel and Gretta Conroy of James Joyce’s story, “The Dead,” look and sound like—in my imagination, at least. Then along comes John Huston’s lush period piece cinematic adaptation and the director superimposes his vision upon mine; his forcibly replaces mine. But, in this particular case, Huston still arguably needs my imagination, or at least my memory—though he may not have realised it fully in making the film. When, in a central scene in the narrative, Gabriel watches his wife listening, moved, to the singing of the Irish song, “The Lass of Aughrim,” what we see on screen is a concerned, intrigued, but in the end rather blank face: Gabriel doesn’t alter his expression as he listens and watches. His expression may not change—but I know exactly what he is thinking. Huston does not tell us; indeed, without the use of voice-over, he cannot. And since the song itself is important, voice-over is impossible. But I know exactly what he is thinking: I’ve read the book. I fill in the blank, so to speak. 
Gabriel looks at Gretta and thinks: There was grace and mystery in her attitude as if she were a symbol of something. He asked himself what is a woman standing on the stairs in the shadow, listening to distant music, a symbol of. If he were a painter he would paint her in that attitude. … Distant Music he would call the picture if he were a painter. (210) A few pages later the narrator will tell us: At last she turned towards them and Gabriel saw that there was colour on her cheeks and that her eyes were shining. A sudden tide of joy went leaping out of his heart. (212) This joy, of course, puts him in a very different—disastrously different—state of mind than his wife, who (we later learn) is remembering a young man who sang that song to her when she was a girl—and who died, for love of her. I know this—because I’ve read the book. Watching the movie, I interpret Gabriel’s blank expression in this knowledge. Just as the director’s vision can colonise my visual and aural imagination, so too can I, as reader, supplement the film’s silence with the literary text’s inner knowledge. The question, of course, is: should I have to do so? Because I have read the book, I will. But what if I haven’t read the book? Will I substitute my own ideas, from what I’ve seen in the rest of the film, or from what I’ve experienced in my own life? Filmmakers always have to deal with this problem, of course, since the camera is resolutely externalising, and actors must reveal their inner worlds through bodily gesture or facial expression for the camera to record and for the spectator to witness and comprehend. But film is not only a visual medium: it uses music and sound, and it also uses words—spoken words within the dramatic situation, words overheard on the street, on television, but also voice-over words, spoken by a narrating figure. 
Stephen Dedalus escapes from Ireland at the end of Joseph Strick’s 1978 adaptation of Joyce’s A Portrait of the Artist as a Young Man with the same words as he does in the novel, where they appear as Stephen’s diary entry: Amen. So be it. Welcome, O life! I go to encounter for the millionth time the reality of experience and to forge in the smithy of my soul the uncreated conscience of my race. … Old father, old artificer, stand me now and ever in good stead. (253) The words from the novel also belong to the film as film, with its very different story, less about an artist than about a young Irishman finally able to escape his family, his religion and his country. What’s deliberately NOT in the movie is the irony of Joyce’s final, benign-looking textual signal to his reader: Dublin, 1904 Trieste, 1914 The first date is the time of Stephen’s leaving Dublin—and the time of his return, as we know from the novel Ulysses, the sequel, if you like, to this novel. The escape was short-lived! Portrait of the Artist as a Young Man has an ironic structure that has primed its readers to expect not escape and triumph but something else. Each chapter of the novel has ended on this kind of personal triumphant high; the next has ironically opened with Stephen mired in the mundane and in failure. Stephen’s final words in both film and novel remind us that he really is an Icarus figure, following his “Old father, old artificer”, his namesake, Daedalus. And Icarus, we recall, takes a tumble. In the novel version, we are reminded that this is the portrait of the artist “as a young man”—later, in 1914, from the distance of Trieste (to which he has escaped) Joyce, writing this story, could take some ironic distance from his earlier persona. There is no such distance in the film version. However, it stands alone, on its own; Joyce’s irony is not appropriate in Strick’s vision. His is a different work, with its own message and its own, considerably more romantic and less ironic power. 
Literary adaptations are their own things—inspired by, based on an adapted text but something different, something other. I want to argue that these works adapted from literature are now part of our readerly experience of that literature, and for that reason deserve the same attention we give to the literary, and not only the same attention, but also the same respect. I am a literarily trained person. People like me who love words already love plays, but shouldn’t we also love films—and operas, and musicals, and even videogames? There is no need to denigrate words that are heard (and visualised) in order to privilege words that are read. Works of literature can have afterlives in their adaptations and translations, just as they have pre-lives, in terms of influences and models, as George Elliott Clarke openly allows in those acknowledgements to Beatrice Chancy. I want to return to that Canadian work, because it raises for me many of the issues about adaptation and language that I see at the core of our literary distrust of the move away from the written, printed text. I ended my recent book on adaptation with a brief examination of this work, but I didn’t deal with this particular issue of language. So I want to return to it, as to unfinished business. Clarke is, by the way, clear in the verse drama as well as in articles and interviews that among the many intertexts to Beatrice Chancy, the most important are slave narratives, especially one called Celia, a Slave, and Shelley’s play, The Cenci. Both are stories of mistreated and subordinated women who fight back. Since Clarke himself has written at length about the slave narratives, I’m going to concentrate here on Shelley’s The Cenci. The distance from Shelley’s verse play to Clarke’s verse play is a temporal one, but it is also a geographic and ideological one: from the old to the new world, and from a European to what Clarke calls an “Africadian” (African Canadian/African Acadian) perspective.
Yet both poets were writing political protest plays against unjust authority and despotic power. And they have both become plays that are more read than performed—a sad fate, according to Clarke, for two works that are so concerned with voice. We know that Shelley sought to calibrate the stylistic registers of his work with various dramatic characters and effects to create a modern “mixed” style that both returned to the ancients and offered a new drama of great range and flexibility where the expression fits what is being expressed (see Bruhn). His polemic against eighteenth-century European dramatic conventions has been seen as leading the way for realist drama later in the nineteenth century, with what has been called its “mixed style mimesis” (Bruhn). Clarke’s adaptation does not aim for Shelley’s perfect linguistic decorum. It mixes the elevated and the biblical with the idiomatic and the sensual—even the vulgar—the lushly poetic with the coarsely powerful. But perhaps Shelley’s idea of appropriate language fits, after all: Beatrice Chancy is a woman of mixed blood—the child of a slave woman and her slave owner; she has been educated by her white father in a convent school. Sometimes that educated, elevated discourse is heard; at other times, she uses the variety of discourses operative within slave society—from religious to colloquial. But all the time, words count—as in all printed and oral literature. Clarke’s verse drama was given a staged reading in Toronto in 1997, but the story’s, if not the book’s, real second life came when it was used as the basis for an opera libretto. Actually the libretto commission came first (from Queen of Puddings Theatre in Toronto), and Clarke started writing what was to be his first of many opera texts. Constantly frustrated by the art form’s demands for concision, he found himself writing two texts at once—a short libretto and a longer, five-act tragic verse play to be published separately.
Since it takes considerably longer to sing than to speak (or read) a line of text, the composer James Rolfe kept asking for cuts—in the name of economy (too many singers), because of clarity of action for audience comprehension, or because of sheer length. Opera audiences have to sit in a theatre for a fixed length of time, unlike readers who can put a book down and return to it later. However, what was never sacrificed to length or to the demands of the music was the language. In fact, the double impact of the powerful mixed language and the equally potent music increases the impact of the literary text when performed in its operatic adaptation. Here is the verse play version of the scene after Beatrice’s rape by her own father, Francis Chancy: I was black but comely. Don’t glance Upon me. This flesh is crumbling Like proved lies. I’m perfumed, ruddied Carrion. Assassinated. Screams of mucking juncos scrawled Over the chapel and my nerves, A stickiness, as when he finished Maculating my thighs and dress. My eyes seep pus; I can’t walk: the floors Are tizzy, dented by stout mauling. Suddenly I would like poison. The flesh limps from my spine. My inlets crimp. Vultures flutter, ghastly, without meaning. I can see lice swarming the air. … His scythe went shick shick shick and slashed My flowers; they lay, murdered, in heaps. (90) The biblical and the violent meet in the texture of the language. And none of that power gets lost in the opera adaptation, despite cuts and alterations for easier aural comprehension. I was black but comely. Don’t look Upon me: this flesh is dying. I’m perfumed, bleeding carrion, My eyes weep pus, my womb’s sopping With tears; I can hardly walk: the floors Are tizzy, the sick walls tumbling, Crumbling like proved lies. His scythe went shick shick shick and cut My flowers; they lay in heaps, murdered.
(95) Clarke has said that he feels the libretto is less “literary” in his words than the verse play, for it removes the lines of French, Latin, Spanish and Italian that pepper the play as part of the author’s critique of the highly educated planter class in Nova Scotia: their education did not guarantee ethical behaviour (“Adaptation” 14). I have not concentrated on the music of the opera, because I wanted to keep the focus on the language. But I should say that Rolfe’s score is as historically grounded as Clarke’s libretto: it is rooted in African Canadian music (from ring shouts to spirituals to blues) and in Scottish fiddle music and local reels of the time, not to mention bel canto Italian opera. However, the music consciously links black and white traditions in a way that Clarke’s words and story refuse: they remain stubbornly separate, set in deliberate tension with the music’s resolution. Beatrice will murder her father, and, at the very moment that Nova Scotia slaves are liberated, she and her co-conspirators will be hanged for that murder. Unlike the printed verse drama, the shorter opera libretto functions like a screenplay, if you will. It is not so much an autonomous work unto itself, but it points toward a potential enactment or embodiment in performance. Yet, even there, Clarke cannot resist the lure of words—even though they are words that no audience will ever hear. The stage directions for Act 3, scene 2 of the opera read: “The garden. Slaves, sunflowers, stars, sparks” (98). The printed verse play is full of these poetic associative stage directions, suggesting that despite his protestations to the contrary, Clarke may have thought of that version as one meant to be read by the eye. After Beatrice’s rape, the stage directions read: “A violin mopes. Invisible shovelsful of dirt thud upon the scene—as if those present were being buried alive—like ourselves” (91). Our imaginations—and emotions—go to work, assisted by the poet’s associations.
There are many such textual helpers—epigraphs, photographs, notes—that we do not have when we watch and listen to the opera. We do have the music, the staged drama, the colours and sounds as well as the words of the text. As Clarke puts the difference: “as a chamber opera, Beatrice Chancy has ascended to television broadcast. But as a closet drama, it plays only within the reader’s head” (“Adaptation” 14). Clarke’s work of literature, his verse drama, is a “situated utterance, produced in one medium and in one historical and social context,” to use Robert Stam’s terms. In the opera version, it was transformed into another “equally situated utterance, produced in a different context and relayed through a different medium” (45-6). I want to argue that both are worthy of study and respect by wordsmiths, by people like me. I realise I’ve loaded the dice: here neither the verse play nor the libretto is primary; neither is really the “source” text, for they were written at the same time and by the same person. But for readers and audiences (my focus and interest here), they exist on a continuum—depending on which we happen to experience first. As Ilana Shiloh explores here, the same is true about the short story and film of Memento. I am not alone in wanting to mount a defence of adaptations. Julie Sanders ends her new book called Adaptation and Appropriation with these words: “Adaptation and appropriation … are, endlessly and wonderfully, about seeing things come back to us in as many forms as possible” (160). The storytelling imagination is an adaptive mechanism—whether manifesting itself in print or on stage or on screen. The study of the production of literature should, I would like to argue, include those other forms taken by that storytelling drive.
If I can be forgiven a move to the amusing—but still serious—in concluding, Terry Pratchett puts it beautifully in his fantasy story, Witches Abroad: “Stories, great flapping ribbons of shaped space-time, have been blowing and uncoiling around the universe since the beginning of time. And they have evolved. The weakest have died and the strongest have survived and they have grown fat on the retelling.” In biology as in culture, adaptations reign. References Bloom, Harold. The Anxiety of Influence. New York: Oxford University Press, 1975. Bruhn, Mark J. “‘Prodigious Mixtures and Confusions Strange’: The Self-Subverting Mixed Style of The Cenci.” Poetics Today 22.4 (2001). Clarke, George Elliott. “Beatrice Chancy: A Libretto in Four Acts.” Canadian Theatre Review 96 (1998): 62-79. ———. Beatrice Chancy. Victoria, BC: Polestar, 1999. ———. “Adaptation: Love or Cannibalism? Some Personal Observations”, unpublished manuscript of article. Frye, Northrop. The Educated Imagination. Toronto: CBC, 1963. Goodman, Nelson. Languages of Art: An Approach to a Theory of Symbols. Indianapolis: Bobbs-Merrill, 1968. Hutcheon, Linda, and Gary R. Bortolotti. “On the Origin of Adaptations: Rethinking Fidelity Discourse and “Success”—Biologically.” New Literary History. Forthcoming. Joyce, James. Dubliners. 1914. New York: Viking, 1967. ———. A Portrait of the Artist as a Young Man. 1916. Harmondsworth: Penguin, 1960. Larson, Katherine. “Resistance from the Margins in George Elliott Clarke’s Beatrice Chancy.” Canadian Literature 189 (2006): 103-118. McGee, Celia. “Beowulf on Demand.” New York Times, Arts and Leisure. 30 April 2006. A4. Rushdie, Salman. The Satanic Verses. New York: Viking, 1988. ———. Haroun and the Sea of Stories. London: Granta/Penguin, 1990. Sanders, Julie. Adaptation and Appropriation. London and New York: Routledge, 2006. Shelley, Percy Bysshe. The Cenci. Ed. George Edward Woodberry. Boston and London: Heath, 1909. Stam, Robert.
“Introduction: The Theory and Practice of Adaptation.” Literature and Film: A Guide to the Theory and Practice of Film Adaptation. Oxford: Blackwell, 2005. 1-52. Citation reference for this article MLA Style Hutcheon, Linda. "In Defence of Literary Adaptation as Cultural Production." M/C Journal 10.2 (2007). <http://journal.media-culture.org.au/0705/01-hutcheon.php>. APA Style Hutcheon, L. (May 2007) "In Defence of Literary Adaptation as Cultural Production," M/C Journal, 10(2). Retrieved from <http://journal.media-culture.org.au/0705/01-hutcheon.php>.
APA, Harvard, Vancouver, ISO, and other styles
12

Hodge, Bob. "The Complexity Revolution." M/C Journal 10, no. 3 (June 1, 2007). http://dx.doi.org/10.5204/mcj.2656.

Full text
Abstract:
‘Complex(ity)’ is currently fashionable in the humanities. Fashions come and go, but in this article I argue that the interest in complexity connects with something deeper, an intellectual revolution that began before complexity became trendy, and will continue after the spotlight passes on. Yet to make this case, and understand and advance this revolution, we need a better take on ‘complexity’. ‘Complex’ is of course complex. In common use it refers to something ‘composed of many interrelated parts’, or problems ‘so complicated or intricate as to be hard to deal with’. I will call this popular meaning, with its positive and negative values, complexity-1. In science it has a more negative sense, complexity-2, referring to the presenting complexity of problems, which science will strip down to underlying simplicity. But recently it has developed positive meanings in both science and humanities. Complexity-3 marks a revolutionarily more positive attitude to complexity in science that does not seek to be reductive. Humanities-style complexity-4, which acknowledges and celebrates the inherent complexity of texts and meanings, is basic in contemporary Media and Cultural studies (MaC for short). The underlying root of complex is plico bend or fold, plus con- together, via complector grasp (something), encompass an idea, or person. The double of ‘complex’ is ‘simple’, from Latin simplex, which less obviously also comes from plico, plus semel once, at the same time. ‘Simple’ and ‘complex’ are closer than people think: only a fold or two apart. A key idea is that these elements are interdependent, parts of a single underlying form. ‘Simple(x)’ is another modality of ‘complex’, dialectically related, different in degree not kind, not absolutely opposite. The idea of ‘holding together’ is stronger in Latin complex, the idea of difficulty more prominent in modern usage, yet the term still includes both. The concept ‘complex’ is untenable apart from ‘simple’. 
This figure maps the basic structures in ‘complexity’. This complexity contains both positive and negative values, science and non-science, academic and popular meanings, with folds/differences and relationships so dynamically related that no aspect is totally independent. This complex field is the minimum context in which to explore claims about a ‘complexity revolution’. Complexity in Science and Humanities In spite of the apparent similarities between Complexity-3 (sciences) and 4 (humanities), in practice a gulf separates them, policed from both sides. If these sides do not talk to each other, as they often do not, the result is not a complex meaning for ‘complex’, but a semantic war-zone. These two forms of complexity connect and collide because they reach into a new space where discourses of science and non-science are interacting more than they have for many years. For many, in both academic communities, a strong, taken-for-granted mindset declares the difference between them is absolute. They assume that if ‘complexity’ exists in science, it must mean something completely different from what it means in humanities or everyday discourse, so different as to be incomprehensible or unusable by humanists. This terrified defence of the traditional gulf between sciences and humanities is not the clinching argument these critics think. On the contrary, it symptomises what needs to be challenged, via the concept complex. One influential critic of this split was Lord Snow, who talked of ‘two cultures’. Writing in class-conscious post-war Britain he regretted the ignorance of humanities-trained ruling elites about basic science, and scientists’ ignorance of humanities. No-one then or now doubts there is a problem. Most MaC students have a science-light education, and feel vulnerable to critiques which say they do not need to know any science or maths, including complexity science, and could not understand it anyway. 
To understand how this has happened I go back to the 17th century rise of ‘modern science’. The Royal Society then included the poet Dryden as well as the scientist Newton, but already the fissure between science and humanities was emerging in the elite, reinforcing existing gaps between both these and technology. The three forms of knowledge and their communities continued to develop over the next 400 years, producing the education system which formed most of us, the structure of academic knowledges in which culture, technology and science form distinct fields. Complexity has been implicated in this three-way split. Influenced by Newton’s wonderful achievement, explaining so much (movements of earthly and heavenly bodies) with so little (three elegant laws of motion, one brief formula), science defined itself as a reductive practice, in which complexity was a challenge. Simplicity was the sign of a successful solution, altering the older reciprocity between simplicity and complexity. The paradox was ignored that proof involved highly complex mathematics, as anyone who reads Newton knows. What science held onto was the outcome, a simplicity then retrospectively attributed to the universe itself, as its true nature. Simplicity became a core quality in the ontology of science, with complexity-2 the imperfection which challenged and provoked science to eliminate it. Humanities remained a refuge for a complexity ontology, in which both problems and solutions were irreducibly complex. Because of the dominance of science as a form of knowing, the social sciences developed a reductivist approach opposing traditional humanities. They also waged bitter struggles against anti-reductionists who emerged in what was called ‘social theory’. Complexity-4 in humanities is often associated with ‘post-structuralism’, as in Derrida, who emphasises the irreducible complexity of every text and process of meaning, or ‘postmodernism’, as in Lyotard’s controversial, influential polemic. 
Lyotard attempted to take the pulse of contemporary Western thought. Among trends he noted were new forms of science, new relationships between science and humanities, and a new kind of logic pervading all branches of knowledge. Not all Lyotard’s claims have worn well, but his claim that something really important is happening in the relationship between kinds and institutions of knowledge, especially between sciences and humanities, is worth serious attention. Even classic sociologists like Durkheim recognised that the modern world is highly complex. Contemporary sociologists agree that ‘globalisation’ introduces new levels of complexity in its root sense, interconnections on a scale never seen before. Urry argues that the hyper-complexity of the global world requires a complexity approach, combining complexity-3 and 4. Lyotard’s ‘postmodernism’ has too much baggage, including dogmatic hostility to science. Humanities complexity-4 has lost touch with the sceptical side of popular complexity-1, and lacks a dialectic relationship with simplicity. ‘Complexity’, incorporating Complexity-1 and 3, popular and scientific, made more complex by incorporating humanities complexity-4, may prove a better concept for thinking creatively and productively about these momentous changes. Only complex complexity in the approach, flexible and interdisciplinary, can comprehend these highly complex new objects of knowledge. Complexity and the New Condition of Science Some important changes in the way science is done are driven not from above, by new theories or discoveries, but by new developments in social contexts. Gibbons and Nowotny identify new forms of knowledge and practice, which they call ‘mode-2 knowledge’, emerging alongside older forms. Mode-1 is traditional academic knowledge, based in universities, organised in disciplines, relating to real-life problems at one remove, as experts to clients or consultants to employers. 
Mode-2 is orientated to real life problems, interdisciplinary and collaborative, producing provisional, emergent knowledge. Gibbons and Nowotny do not reference postmodernism but are looking at Lyotard’s trends as they were emerging in practice 10 years later. They do not emphasise complexity, but the new objects of knowledge they address are fluid, dynamic and highly complex. They emphasise a new scale of interdisciplinarity, in collaborations between academics across all disciplines, in science, technology, social sciences and humanities, though they do not see a strong role for humanities. This approach confronts and welcomes irreducible complexity in object and methods. It takes for granted that real-life problems will always be too complex (with too many factors, interrelated in too many ways) to be reduced to the sort of problem that isolated disciplines could handle. The complexity of objects requires equivalent complexity in responses; teamwork, using networks, drawing on relevant knowledge wherever it is to be found. Lyotard famously and foolishly predicted the death of the ‘grand narrative’ of science, but Gibbons and Nowotny offer a more complex picture in which modes-1 and 2 will continue alongside each other in productive dialectic. The linear form of science Lyotard attacked is stronger than ever in some ways, as ‘Big Science’, which delivers wealth and prestige to disciplinary scientists, accessing huge funds to solve highly complex problems with a reductionist mindset. But governments also like the idea of mode-2 knowledge, under whatever name, and try to fund it despite resistance from powerful mode-1 academics. Moreover, non-reductionist science in practice has always been more common than the dominant ideology allowed, whether or not its exponents, some of them eminent scientists, chose to call it ‘complexity’ science. 
Quantum physics, called ‘the new physics’, consciously departed from the linear, reductionist assumptions of Newtonian physics to project an irreducibly complex picture of the quantum world. Different movements, labelled ‘catastrophe theory’, ‘chaos theory’ and ‘complexity science’, emerged, not a single coherent movement replacing the older reductionist model, but loosely linked by new attitudes to complexity. Instead of seeing chaos and complexity as problems to be removed by analysis, chaos and complexity play a more ambiguous role, as ontologically primary. Disorder and complexity are not later regrettable lapses from underlying essential simplicity and order, but potentially creative resources, to be understood and harnessed, not feared, controlled, eliminated. As a taste of exciting ideas on complexity, barred from humanities MaC students by the general prohibition on ‘consorting with the enemy’ (science), I will outline three ideas, originally developed in complexity-3, which can be described in ways requiring no specialist knowledge or vocabulary, beyond a Mode-2 openness to dynamic, interdisciplinary engagement. Fractals, a term coined by mathematician Benoit Mandelbrot, are so popular as striking shapes produced by computer-graphics, circulated on T-shirts, that they may seem superficial, unscientific, trendy. They exist at an intersection between science, media and culture, and their complexity includes transactions across that folded space. The name comes from Latin fractus, broken: irregular shapes like broken shards, which however have their own pattern. Mandelbrot claims that in nature, many such patterns partly repeat on different scales. When this happens, he says, objects on any one scale will have equivalent complexity. Part of this idea is contained in Blake’s famous line: ‘To see the world in a grain of sand’. The importance of the principle is that it fundamentally challenges reductiveness. Nor is it as unscientific as it may sound. 
Geologists indeed see grains of sand under a microscope as highly complex. In sociology, instead of individuals (literal meaning ‘cannot be divided’) being the minimally simple unit of analysis, individuals can be understood to be as complex (e.g. with multiple identities, linked with many other social beings) as groups, classes or nations. There is no level where complexity disappears. A second concept is ‘fuzzy logic’, invented by an engineer, Zadeh. The basic idea is not unlike the literary critic Empson’s ‘ambiguity’, the sometimes inexhaustible complexity of meanings in great literature. Zadeh’s contribution was to praise the inherent vagueness and ambiguity of natural languages as a resource for scientists and engineers, making them better, not worse, for programming control systems. Across this apparently simple bridge have flowed many fuzzy machines, more effective than their over-precise brothers. Zadeh crystallised this wisdom in his ‘Principle of Incompatibility’: “As the complexity of a system increases, our ability to make precise and yet significant statements about its behaviour decreases until a threshold is reached beyond which precision and significance (or relevance) become almost mutually exclusive characteristics” (28). Something along these lines is common wisdom in complexity-1. For instance, under the headline “Law is too complex for juries to understand, says judge” (Dick 4), the Chief Justice of Australia, Murray Gleeson, noted a paradox of complexity, that attempts to improve a system by increasing its complexity make it worse (meaningless or irrelevant, as Zadeh said). The system loses its complexity in another sense, that it no longer holds together. My third concept is the ‘Butterfly Effect’, a name coined by Lorenz. The butterfly was this scientist’s poetic fantasy, an imagined butterfly that flaps its wings somewhere on the Andes, and introduces a small change in the weather system that triggers a hurricane in Montana, or Beijing. 
This idea is another riff on the idea that complex situations are not reducible to component elements. Every cause is so complex that we can never know in advance just what factor will operate in a given situation, or what its effects might be across a highly complex system. Travels in Complexity I will now explore these issues with reference to a single example, or rather, a nested set of examples, each (as in fractal theory) equivalently complex, yet none identical at any scale. I was travelling in a train from Penrith to Sydney in New South Wales in early 2006 when I read a publicity text from NSW State Rail which asked me: ‘Did you know that delays at Sydenham affect trains to Parramatta? Or that a sick passenger on a train at Berowra can affect trains to Penrith?’ No, I did not know that. As a typical commuter I was impressed, and even more so as an untypical commuter who knows about complexity science. Without ostentatious reference to sources in popular science, NSW Rail was illustrating Lorenz’s ‘butterfly effect’. A sick passenger is prosaic, a realistic illustration of the basic point, that in a highly complex system, a small change in one part, so small that no-one could predict it would matter, can produce a massive, apparently unrelated change in another part. This text was part of a publicity campaign with a scientific complexity-3 subtext, which ran in a variety of forms, in their website, in notices in carriages, on the back of tickets. I will use a complexity framework to suggest different kinds of analysis and project which might interest MaC students, applicable to objects that may not refer explicitly to complexity-3. The text does two distinct things. It describes a planning process, and is part of a publicity program. The first, simplifying movement of Mode-1 analysis would see this difference as projecting two separate objects for two different specialists: a transport expert for the planning, a MaC analyst for the publicity, including the image. 
Unfortunately, as Zadeh warned, in complex conditions simplification carries an explanatory cost, producing descriptions that are meaningless or irrelevant, even though common sense (complexity-1) says otherwise. What do MaC specialists know about rail systems? What do engineers know about publicity? But collaboration in a mode-2 framework does not need extensive specialist knowledge, only enough to communicate with others. MaC specialists have a fuzzy knowledge of their own and other areas of knowledge, attuned by Humanities complexity-4 to tolerate uncertainty. According to the butterfly principle it would be foolish to wish our University education had equipped us with the necessary other knowledges. We could never predict what precise items of knowledge would be handy from our formal and informal education. The complexity of most mode-2 problems is so great that we cannot predict in advance what we will need to know. MaC is already a complex field, in which ‘Media’ and ‘Culture’ are fuzzy terms which interact in different ways. Media and other organisations we might work with are often imbued with linear forms of thought (complexity-2), and want simple answers to simple questions about complex systems. For instance, MaC researchers might be asked as consultants to determine the effect of this message on typical commuters. That form of analysis is no longer respectable in complexity-4 MaC studies. Old-style (complexity-2) effects-research modelled Senders, Messages and Receivers to measure effects. Standard research methods of complexity-2 social sciences might test effects of the message by a survey instrument, with a large sample to allow statistically significant results. Using this, researchers could claim to know whether the publicity campaign had its desired effect on its targeted demographic: presumably inspiring confidence in NSW Rail. 
However, each of these elements is complex, and interactions between them, and others that don’t enter into the analysis, create further levels of complexity. To manage this complexity, MaC analysts often draw on Foucault’s authority to use ‘discourse’ to simplify analysis. This does not betray the principle of complexity. Complexity-4 needs a simplicity-complexity dialectic. In this case I propose a ‘complexity discourse’ to encapsulate the complex relations between Senders, Receivers and Messages into a single word, which can then be related to other such elements (e.g. ‘publicity discourse’). In this case complexity-3 can also be produced by attending to details of elements in the S-M-R chain, combining Derridean ‘deconstruction’ with expert knowledge of the situation. This Sender may be some combination of engineers and planners, managers who commissioned the advertisement, media professionals who carried it out. The message likewise loses its unity as its different parts decompose into separate messages, leaving the transaction a fraught, unpredictable encounter between multiple messages and many kinds of reader and sender. Alongside its celebration of complexity-3, this short text runs another message: ‘untangling our complex rail network’. This is complexity-2 from science and engineering, where complexity is only a problem to be removed. A fuller text on the web-site expands this second strand, using bullet points and other signals of a linear approach. In this text, there are 5 uses of ‘reliable’, 6 uses of words for problems of complexity (‘bottlenecks’, ‘delays’, ‘congestion’), and 6 uses of words for the new system (‘simpler’, ‘independent’). ‘Complex’ is used twice, both times negatively. In spite of the impression given by references to complexity-3, this text mostly has a reductionist attitude to complexity. Complexity is the enemy. Then there is the image. 
Each line is a different colour, and they loop in an attractive way, seeming to celebrate graceful complexity-3. Yet this part of the image is what is going to be eliminated by the new program’s complexity-2. The interesting complexity of the upper part of the image is what the text declares is the problem. What are commuters meant to think? And Railcorp? This media analysis identifies a fissure in the message, which reflects a fissure in the Sender-complex. It also throws up a problem in the culture that produced such interesting allusions to complexity science, but has linear, reductionist attitudes to complexity in its practice. We can ask: where does this cultural problem go, in the organisation, in the interconnected system and bureaucracy it manages? Is this culture implicated in the problems the program is meant to address? These questions are more productive if asked in a collaborative mode-2 framework, with an organisation open to such questions, with complex researchers able to move between different identities, as media analyst, cultural analyst, and commuter, interested in issues of organisation and logistics, engaged with complexity in all senses. I will continue my imaginary mode-2 collaboration with Railcorp by offering them another example of fractal analysis, looking at another instant, captured in a brief media text. On Wednesday 14 March, 2007, two weeks before a State government election, a very small cause triggered a systems failure in the Sydney network. A small carbon strip worth $44 which was not attached properly threw Sydney’s transport network into chaos on Wednesday night, causing thousands of commuters to be trapped in trains for hours. (Baker and Davies 7) This is an excellent example of a butterfly effect, but it is not labelled as such, nor regarded positively in this complexity-1 framework. ‘Chaos’ signifies something no-one wants in a transport system. This is popular not scientific reductionism. 
The article goes on to tell the story of one passenger, Mark MacCauley, a quadriplegic left without power or electricity in a train because the lift was not working. He rang City Rail, and was told that “someone would be in touch in 3 to 5 days” (Baker and Davies 7). He then rang emergency OOO, and was finally rescued by contractors “who happened to be installing a lift at North Sydney” (Baker and Davies 7). My new friends at NSW Rail would be very unhappy with this story. It would not help much to tell them that this is a standard ‘human interest’ article, nor that it is more complex than it looks. For instance, MacCauley is not typical of standard passengers who usually concern complexity-2 planners of rail networks. He is another butterfly, whose specific needs would be hard to predict or cater for. His rescue is similarly unpredictable. Who would have predicted that these contractors, with their specialist equipment, would be in the right place at the right time to rescue him? Complexity provided both problem and solution. The media’s double attitude to complexity, positive and negative, complexity-1 with a touch of complexity-3, is a resource which NSW Rail might learn to use, even though it is presented with such hostility here. One lesson of the complexity is that a tight, linear framing of systems and problems creates or exacerbates problems, and closes off possible solutions. In the problem, different systems didn’t connect: social and material systems, road and rail, which are all ‘media’ in McLuhan’s highly fuzzy sense. NSW Rail communication systems were cumbrously linear, slow (3 to 5 days) and narrow. In the solution, communication cut across institutional divisions, mediated by responsive, fuzzy complex humans. If the problem came from a highly complex system, the solution is a complex response on many fronts: planning, engineering, social and communication systems open to unpredictable input from other surrounding systems. 
As NSW Rail would have been well aware, the story responded to another context. The page was headed ‘Battle for NSW’, referring to an election in 2 weeks, in which this newspaper editorialised that the incumbent government should be thrown out. This political context is clearly part of the complexity of the newspaper message, which tries to link not just the carbon strip and ‘chaos’, but science and politics, this strip and the government’s credibility. Yet the government was returned with a substantial though reduced majority, not the swingeing defeat that might have been predicted by linear logic (rail chaos = electoral defeat) or by some interpretations of the butterfly effect. But complexity-3 does not say that every small cause produces catastrophic effects. On the contrary, it says that causal situations can be so complex that we can never be entirely sure what effects will follow from any given case. The political situation in all its complexity is an inseparable part of the minimal complex situation which NSW Rail must take into account as it considers how to reform its operations. It must make complexity in all its senses a friend and ally, not just a source of nasty surprises. My relationship with NSW Rail at the moment is purely imaginary, but illustrates positive and negative aspects of complexity as an organising principle for MaC researchers today. The unlimited complexity of Humanities’ complexity-4, Derridean and Foucauldian, can be liberating alongside the sometimes excessive scepticism of Complexity-2, but needs to keep in touch with the ambivalence of popular complexity-1. Complexity-3 connects with complexity-2 and 4 to hold the bundle together, in a more complex, cohesive, yet still unstable dynamic structure. It is this total sprawling, inchoate, contradictory (‘complex’) brand of complexity that I believe will play a key role in the up-coming intellectual revolution. But only time will tell. References Baker, Jordan, and Anne Davies. 
“Carbon Strip Caused Train Chaos.” Sydney Morning Herald 17 Mar. 2007: 7. Derrida, Jacques. Of Grammatology. Baltimore: Johns Hopkins, 1976. Dick, Tim. “Law Is Now Too Complex for Juries to Understand, Says Judge.” Sydney Morning Herald 26 Mar. 2007: 4. Empson, William. Seven Types of Ambiguity. London: Chatto and Windus, 1930. Foucault, Michel. “The Order of Discourse.” In Archaeology of Knowledge, trans. A.M. Sheridan Smith. London: Tavistock, 1972. Gibbons, Michael. The New Production of Knowledge. London: Sage, 1994. Lorenz, Edward. The Essence of Chaos. London: University College, 1993. Lyotard, Jean-Francois. The Postmodern Condition. Manchester: Manchester UP, 1984. McLuhan, Marshall. Understanding Media. London: Routledge, 1964. Mandelbrot, Benoit. “The Fractal Geometry of Nature.” In Nina Hall, ed. The New Scientist Guide to Chaos. Harmondsworth: Penguin, 1963. Nowotny, Helga. Rethinking Science. London: Polity, 2001. Snow, Charles Percy. The Two Cultures and the Scientific Revolution. London: Faber, 1959. Urry, John. Global Complexity. London: Sage, 2003. Zadeh, Lotfi Asker. “Outline of a New Approach to the Analysis of Complex Systems and Decision Processes.” IEEE Transactions on Systems, Man, and Cybernetics 3.1 (1973): 28-44. Citation reference for this article MLA Style Hodge, Bob. "The Complexity Revolution." M/C Journal 10.3 (2007). <http://journal.media-culture.org.au/0706/01-hodge.php>. APA Style Hodge, B. (Jun. 2007) "The Complexity Revolution," M/C Journal, 10(3). Retrieved from <http://journal.media-culture.org.au/0706/01-hodge.php>.
13

Mac Con Iomaire, Máirtín. "The Pig in Irish Cuisine and Culture." M/C Journal 13, no. 5 (October 17, 2010). http://dx.doi.org/10.5204/mcj.296.

Full text
Abstract:
In Ireland today, we eat more pigmeat per capita, approximately 32.4 kilograms, than any other meat, yet you very seldom if ever see a pig (C.S.O.). Fat and flavour are two words that are synonymous with pig meat, yet scientists have spent the last thirty years cross-breeding to produce leaner, low-fat pigs. Today’s pig professionals prefer to use the term “pig finishing” as opposed to the more traditional “pig fattening” (Tuite). The pig evokes many themes in relation to cuisine. Charles Lamb (1775-1834), in his essay Dissertation upon Roast Pig, cites Confucius in attributing the accidental discovery of the art of roasting to the humble pig. The pig has been singled out by many cultures as a food to be avoided or even abhorred, and Harris (1997) illustrates the environmental effect this avoidance can have by contrasting the landscape of Christian Albania with that of Muslim Albania. This paper will focus on the pig in Irish cuisine and culture from ancient times to the present day. The inspiration for this paper comes from a folklore tale about how Saint Martin created the pig from a piece of fat. The story is one of a number recorded by Seán Ó Conaill, the famous Kerry storyteller, and goes as follows: From St Martin’s fat they were made. He was travelling around, and one night he came to a house and yard. At that time there were only cattle; there were no pigs or piglets. He asked the man of the house if there was anything to eat the chaff and the grain. The man replied there were only the cattle. St Martin said it was a great pity to have that much chaff going to waste. At night when they were going to bed, he handed a piece of fat to the servant-girl and told her to put it under a tub, and not to look at it at all until he would give her the word next day. The girl did so, but she kept a bit of the fat and put it under a keeler to find out what it would be. When St Martin rose next day he asked her to go and lift up the tub. 
She lifted it up, and there under it were a sow and twelve piglets. It was a great wonder to them, as they had never before seen pig or piglet. The girl then went to the keeler and lifted it, and it was full of mice and rats! As soon as the keeler was lifted, they went running about the house searching for any hole that they could go into. When St Martin saw them, he pulled off one of his mittens and threw it at them and made a cat with that throw. And that is why the cat ever since goes after mice and rats (Ó Conaill). The place of the pig has long been established in Irish literature, and longer still in Irish topography. The word torc, a boar, like the word muc, a pig, is a common element of placenames, from Kanturk (boar’s head) in West Cork to Ros Muc (headland of pigs) in West Galway. The Irish pig had its place in literature well established long before George Orwell’s English pig, Napoleon, headed the dictatorship in Animal Farm. It was a wild boar that killed the hero Diarmaid in the Fenian tale The Pursuit of Diarmaid and Gráinne, on top of Ben Bulben in County Sligo (Mac Con Iomaire). In Ancient and Medieval Ireland, wild boars were hunted with great fervour, and the prime cuts were reserved for the warrior classes, and certain other individuals. At a feast, a leg of pork was traditionally reserved for a king, a haunch for a queen, and a boar’s head for a charioteer. The champion warrior was given the best portion of meat (Curath Mhir or Champions’ Share), and fights often took place to decide who should receive it. Gantz (1981) describes how in the ninth century tale The story of Mac Dathó’s Pig, Cet mac Matach, got supremacy over the men of Ireland: “Moreover he flaunted his valour on high above the valour of the host, and took a knife in his hand and sat down beside the pig. “Let someone be found now among the men of Ireland”, said he, “to endure battle with me, or leave the pig for me to divide!” It did not take long before the wild pigs were domesticated. 
Whereas cattle might be kept for milk and sheep for wool, the only reason for pig rearing was as a source of food. Until the late medieval period, the “domesticated” pigs were fattened on woodland mast, the fruit of the beech, oak, chestnut and whitethorn, giving their flesh a delicious flavour. So important was this resource that it is acknowledged by an entry in the Annals of Clonmacnoise for the year 1038: “There was such an abundance of ackornes this yeare that it fattened the pigges [runts] of pigges” (Sexton 45). In another mythological tale, two pig keepers, one called ‘friuch’ after the boar’s bristle (pig keeper to the king of Munster) and the other called ‘rucht’ after its grunt (pig keeper to the king of Connacht), were such good friends that the one from the north would bring his pigs south when there was a mast of oak and beech nuts in Munster. If the mast fell in Connacht, the pig keeper from the south would travel northward. Competitive jealousy sparked by troublemakers led to the pig keepers casting spells on each other’s herds, to the effect that no matter what mast they ate they would not grow fat. Both pig keepers were practised in the pagan arts and could form themselves into any shape, and having been dismissed by their kings for the leanness of their pig herds due to the spells, they eventually formed themselves into the two famous bulls that feature in the Irish epic The Táin (Kinsella). In the witty and satirical twelfth-century text The Vision of Mac Conglinne (Aisling Mhic Conglinne), many references are made to the various types of pig meat. Bacon, hams, sausages and puddings are often mentioned, and the gate to the fortress in the visionary land of plenty is described thus: “there was a gate of tallow to it, whereon was a bolt of sausage” (Jackson). Although pigs were always popular in Ireland, the emergence of the potato resulted in an increase in both human and pig populations.
The Irish were the first Europeans to seriously consider the potato as a staple food. By 1663 it was widely accepted in Ireland as an important food plant, and by 1770 it was known as the Irish Potato (Mac Con Iomaire and Gallagher). The potato transformed Ireland from an underpopulated island of one million in the 1590s to 8.2 million in 1840, making it the most densely populated country in Europe. Two centuries of genetic evolution resulted in potato yields growing from two tons per acre in 1670 to ten tons per acre in 1800. A constant supply of potato, which was not seen as a commercial crop, ensured that even the smallest holding could keep a few pigs on a potato-rich diet. Pat Tuite, an expert on pigs with Teagasc, the Irish Agricultural and Food Development Authority, reminded me that the potatoes were cooked for the pigs and that they also enjoyed whey, the by-product of both butter and cheese making (Tuite). The agronomist Arthur Young, travelling through Ireland, commented in 1770 that in the town of Mitchelstown in County Cork “there seemed to be more pigs than human beings”. So plentiful were pigs at this time that on the eve of the Great Famine in 1841 the pig population was calculated to be 1,412,813 (Sexton 46). Some of the pigs were kept for home consumption, but the rest were a valuable source of income and were shown great respect as the gentleman who paid the rent. Until the early twentieth century most Irish rural households kept some pigs. Pork was popular and was the main meat eaten at all feasts in the main houses; indeed a feast was considered incomplete without a whole roasted pig. In the poorer holdings, fresh pork was highly prized, as it was only available when a pig of their own was killed. Most of the pig was salted, placed in the brine barrel for a period, or placed up the chimney for smoking. Certain superstitions were observed concerning the time of killing.
Pigs were traditionally killed only in months that contained the letter “r”, since the heat of the summer months caused the meat to turn foul. In some counties it was believed that pigs should be killed under the full moon (Mahon 58). The main breed of pig from the medieval period was the Razor Back or Greyhound Pig, which was very efficient in converting organic waste into meat (Fitzgerald). The killing of the pig was an important ritual and a social occasion in rural Ireland, for it meant full and plenty for all. Neighbours, who came to help, brought a handful of salt for the curing, and when the work was done each would get a share of the puddings and the fresh pork. There were a number of days on which it was traditional to kill a pig: the Michaelmas feast (29 September), Saint Martin’s Day (11 November) and St Patrick’s Day (17 March). Olive Sharkey gives a vivid description of the killing of the barrow pig in rural Ireland during the 1930s. A barrow pig is a male pig castrated before puberty: The local slaughterer (búistéir), a man experienced in the rustic art of pig killing, was approached to do the job, though some farmers killed their own pigs. When the búistéir arrived, the whole family gathered round to watch the killing. His first job was to plunge the knife into the pig’s heart via the throat, using a special knife. The screeching during this performance was something awful, but the animal died instantly once the heart had been reached, usually to a round of applause from the onlookers. The animal was then draped across a pig-gib, a sort of bench, and had the fine hairs on its body scraped off. To make this a simple job the animal was immersed in hot water a number of times until the bristles were softened and easy to remove. If a few bristles were accidentally missed the bacon was known as ‘hairy bacon’! During the killing of the pig it was imperative to draw a good flow of blood to ensure good quality meat.
This blood was collected in a bucket for the making of puddings. The carcass would then be hung from a hook in the shed with a basin under its head to catch the drip, and a potato was often placed in the pig’s mouth to aid the dripping process. After a few days the carcass would be dissected. Sharkey recalls that her father maintained that each pound weight in the pig’s head corresponded to a stone weight in the body. The body was washed, and then each piece that was to be preserved was carefully salted, placed neatly in a barrel and hermetically sealed. It was customary in parts of the midlands to add brown sugar to the barrel at this stage, while in other areas juniper berries were placed in the fire when hanging the hams and flitches (sides of bacon), wrapped in brown paper, in the chimney for smoking (Sharkey 166). While the killing was predominantly men’s work, it was the women who took most responsibility for the curing and smoking. Puddings have always been popular in Irish cuisine. The pig’s intestines were washed well and soaked in a stream, a mixture of onions, lard, spices, oatmeal and flour was mixed with the blood, and the mixture was stuffed into the casing and boiled for about an hour; once cooled, the puddings were divided amongst the neighbours. The pig was so palatable that the famous gastronomic writer Grimod de la Reynière once claimed that the only piece you couldn’t eat was the “oink”. Sharkey remembers her father remarking that had they been able to catch the squeak they would have made tin whistles out of it! No part went to waste: the blood and offal were used, and the trotters, known as crubeens (from crúb, hoof), were boiled and eaten with cabbage. In Galway the knee joint was popular and known as the glúiníns (from glún, knee). The head was roasted whole, or often boiled and pressed and prepared as brawn.
The chitterlings (small intestines) were meticulously prepared by continuous washing in cool water and the picking out of undigested food and faeces. Chitterlings were once a popular bar food in Dublin. Pig hair was used for paintbrushes, and the bladder was occasionally inflated, using a goose quill, to be used as a football by the children. Meindertsma (2007) provides a pictorial review of the vast array of products derived from a single pig, ranging from ammunition and porcelain to chewing gum. From around the mid-eighteenth century, commercial salting of pork and bacon grew rapidly in Ireland. 1820 saw Henry Denny begin operation in Waterford, where he both developed and patented several production techniques for bacon. Bacon curing became a very important industry in Munster, culminating in the setting up of four large factories. Irish bacon was the brand leader and the Irish companies exported their expertise. Denny set up a plant in Denmark in 1894 and introduced the Irish techniques to the Danish industry, while O’Mara’s set up bacon curing facilities in Russia in 1891 (Cowan and Sexton). Ireland developed an extensive export trade in bacon to England, and hams were delivered to markets in Paris, India, North and South America. The “sandwich method” of curing, or “dry cure”, was used up until 1862, when the method of injecting strong brine into the meat by means of a pickling pump was adopted by Irish bacon-curers. 1887 saw the formation of the Bacon Curers’ Pig Improvement Association, which managed to introduce a new breed, the Large White Ulster, into most regions by the turn of the century. This breed was suitable for the production of “Wiltshire” bacon.
Cork, Waterford, Dublin and Belfast were important centres for bacon, but it was Limerick that dominated the industry, and a Department of Agriculture document from 1902 suggests that the famous “Limerick cure” may have originated by chance: 1880 […] Limerick producers were short of money […] they produced what was considered meat in a half-cured condition. The unintentional cure proved extremely popular and others followed suit. By the turn of the century the mild cure procedure was brought to such perfection that meat could [… be] sent to tropical climates for consumption within a reasonable time (Cowan and Sexton). Failure to modernise led to the decline of bacon production in Limerick in the 1960s and all four factories closed down. The Irish pig market was protected prior to joining the European Union: there were no imports, and exports were subsidised by the Pigs and Bacon Commission. The Department of Agriculture started pig testing in the early 1960s and imported breeds from the United Kingdom and Scandinavia. The two main breeds were Large White and Landrace. Most farms kept pigs before joining the EU, but after 1972 farmers were encouraged to rationalise and specialise. Grants were made available for facilities that would keep 3,000 pigs, and these grants kick-started the development of large units. Pig keeping and production were not only rural occupations; Irish towns and cities also had their fair share. Pigs could easily be kept on swill from hotels and restaurants, not to mention the by-products and leftovers of the brewing and baking industries. Ed Hick, a fourth-generation pork butcher from south County Dublin, recalls buying pigs from a local coal man and bus driver and other locals for whom it was a tradition to keep pigs on the side. They would keep some six or eight pigs at a time and feed them on swill collected locally.
Legislation concerning the feeding of swill, introduced in 1985 (S.I. 153) and amended in 1987 (S.I. 133), required all swill to be heat-treated and resulted in most small operators going out of business. Other EU directives led to the shutting down of thousands of slaughterhouses across Europe. Small producers like Hick, who slaughtered at most 25 pigs a week in the family slaughterhouse, state that it was not any one rule but a series of them that forced them to close. It was not uncommon for three inspectors, a veterinarian, a meat inspector and a hygiene inspector, to supervise him and his brother at work. Ed Hick describes the situation thus: “if we had taken them on in a game of football, we would have lost! We were seen as a huge waste of veterinary time and manpower”. Sausages and rashers have long been popular in Dublin and are the main ingredients in the city’s most famous dish, “Dublin Coddle.” Coddle is similar to an Irish stew except that it uses pork rashers and sausage instead of lamb. It was, traditionally, a Saturday night dish eaten when the men came home from the public houses. Terry Fagan has a book on Dublin folklore called Monto: Murder, Madams and Black Coddle; the black coddle resulted from soot falling down the chimney into the cauldron. James Joyce describes Denny’s sausages with relish in Ulysses, and like many other Irish emigrants, he would welcome visitors from home only if they brought Irish sausages and Irish whiskey with them. Even today, every family has its favourite brand of sausages: Byrne’s, Olhausens, Granby’s, Hafner’s, Denny’s Gold Medal, Kearns and Superquinn are among the most popular. Ironically the same James Joyce, who put Dublin pork kidneys on the world table in Ulysses, was later to call his native Ireland “the old sow that eats her own farrow” (184-5). The last thirty years have seen a concerted effort to breed pigs that have less fat content and leaner meat.
There are no pure breeds of Landrace or Large White in production today, for they have been crossbred for litter size, fat content and leanness (Tuite). Many experts feel that they have become too lean, to the detriment of flavour, and that the meat can tend to split when cooked. Pig production is now a complicated science, and tighter margins have led to only large-scale operations being financially viable (Whittemore). The average size of herd has grown from 29 animals in 1973 to 846 animals in 1997, and the highest numbers are found in counties Cork and Cavan (Lafferty et al.). The main players in today’s pig production and processing are the large Irish agribusiness multinationals Glanbia, Kerry Foods and Dairygold. Tuite (2002) expressed worries within the industry that there may be no pig production in Ireland in twenty years’ time, with production moving to Eastern Europe where feed and labour are cheaper. When it comes to traceability, in the light of the Foot and Mouth, BSE and Dioxin scares, many feel that things were much better in the old days, when butchers like Ed Hick slaughtered animals that were reared locally and then sold them back to local consumers. Hick has recently killed pigs for friends who have begun keeping them for home consumption. This slaughtering remains legal as long as the meat is not offered for sale. Although bacon and cabbage, and the full Irish breakfast with rashers, sausages and puddings, are considered to be some of Ireland’s best-known traditional dishes, there has been a growth in modern interpretations of traditional pork and bacon dishes in the repertoires of the seemingly ever-growing number of talented Irish chefs. Michael Clifford popularised Clonakilty Black Pudding as a starter in his Cork restaurant Clifford’s in the late 1980s, and its use has become widespread since, as a starter or main course often partnered with either caramelised apples or red onion marmalade.
Crubeens (pigs’ trotters) have been modernised “à la Pierre Kaufman” by a number of Irish chefs, who bone them out and stuff them with sweetbreads. Kevin Thornton, the first Irish chef to be awarded two Michelin stars, has roasted suckling pig as one of his signature dishes. Richard Corrigan is keeping the Irish flag flying in London in his Michelin-starred Soho restaurant, Lindsay House, where traditional pork and bacon dishes from his childhood are creatively re-interpreted with simplicity and taste. Pork, ham and bacon are, without doubt, the most traditional of all Irish foods, featuring in the diet since prehistoric times. Although these meats remain the most consumed per capita in post-“Celtic Tiger” Ireland, there are a number of threats facing the country’s pig industry. Large-scale indoor production necessitates the use of antibiotics. European legislation and economic factors have contributed to the demise of the traditional art of pork butchery. Scientific advancements have resulted in leaner low-fat pigs, many argue, to the detriment of flavour. Alas, all is not lost. There is a growth in consumer demand for quality local food, and some producers, like J. Hick & Sons, and Prue & David Rudd and Family, are leading the way. The Rudds process and distribute branded antibiotic-free pig-related products with the mission of “re-inventing the tastes of bygone days with the quality of modern day standards”. Few could argue with the late Irish writer John B. Keane (72): “When this kind of bacon is boiling with its old colleague, white cabbage, there is a gurgle from the pot that would tear the heart out of any hungry man”.
References
Cowan, Cathal, and Regina Sexton. Ireland's Traditional Foods: An Exploration of Irish Local & Typical Foods & Drinks. Dublin: Teagasc, 1997.
C.S.O. Central Statistics Office. Figures on per capita meat consumption for 2009, 2010. Ireland. http://www.cso.ie.
Fitzgerald, Oisin. "The Irish 'Greyhound' Pig: An Extinct Indigenous Breed of Pig."
History Ireland 13.4 (2005): 20-23.
Gantz, Jeffrey. Early Irish Myths and Sagas. New York: Penguin, 1981.
Harris, Marvin. "The Abominable Pig." Food and Culture: A Reader. Eds. Carole Counihan and Penny Van Esterik. New York: Routledge, 1997. 67-79.
Hick, Edward. Personal Communication with master butcher Ed Hick. 15 Apr. 2002.
Hick, Edward. Personal Communication concerning pig killing. 5 Sep. 2010.
Jackson, K. H., ed. Aislinge Meic Con Glinne. Dublin: Institute of Advanced Studies, 1990.
Joyce, James. A Portrait of the Artist as a Young Man. London: Granada, 1977.
Keane, John B. Strong Tea. Cork: Mercier Press, 1963.
Kinsella, Thomas. The Táin. Oxford: Oxford University Press, 1970.
Lafferty, S., P. Commins, and J. A. Walsh. Irish Agriculture in Transition: A Census Atlas of Agriculture in the Republic of Ireland. Dublin: Teagasc, 1999.
Mac Con Iomaire, Liam. Ireland of the Proverb. Dublin: Town House, 1988.
Mac Con Iomaire, Máirtín, and Pádraic Óg Gallagher. "The Potato in Irish Cuisine and Culture." Journal of Culinary Science and Technology 7.2-3 (2009): 1-16.
Mahon, Bríd. Land of Milk and Honey: The Story of Traditional Irish Food and Drink. Cork: Mercier, 1998.
Meindertsma, Christien. PIG 05049. 2007. 10 Aug. 2010 http://www.christienmeindertsma.com.
Ó Conaill, Seán. Seán Ó Conaill's Book. Baile Átha Cliath: Bhéaloideas Éireann, 1981.
Sexton, Regina. A Little History of Irish Food. Dublin: Gill and Macmillan, 1998.
Sharkey, Olive. Old Days Old Ways: An Illustrated Folk History of Ireland. Dublin: The O'Brien Press, 1985.
S.I. 153, 1985 (Irish Legislation). http://www.irishstatutebook.ie/1985/en/si/0153.html
S.I. 133, 1987 (Irish Legislation). http://www.irishstatuebook.ie/1987/en/si/0133.html
Tuite, Pat. Personal Communication with Pat Tuite, Chief Pig Advisor, Teagasc. 3 May 2002.
Whittemore, Colin T., and Ilias Kyriazakis. Whittemore's Science and Practice of Pig Production. 3rd Edition. Oxford: Wiley-Blackwell, 2006.
14

Maxwell, Richard, and Toby Miller. "The Real Future of the Media." M/C Journal 15, no. 3 (June 27, 2012). http://dx.doi.org/10.5204/mcj.537.

Full text
Abstract:
When George Orwell encountered ideas of a technological utopia sixty-five years ago, he acted the grumpy middle-aged man: Reading recently a batch of rather shallowly optimistic “progressive” books, I was struck by the automatic way in which people go on repeating certain phrases which were fashionable before 1914. Two great favourites are “the abolition of distance” and “the disappearance of frontiers”. I do not know how often I have met with the statements that “the aeroplane and the radio have abolished distance” and “all parts of the world are now interdependent” (1944). It is worth revisiting the old boy’s grumpiness, because the rhetoric he so niftily skewers continues in our own time. Facebook features “Peace on Facebook” and even claims that it can “decrease world conflict” through inter-cultural communication. Twitter has announced itself as “a triumph of humanity” (“A Cyber-House” 61). Cue George. In between Orwell and latter-day hoody cybertarians, a whole host of excitable public intellectuals announced the impending end of materiality through emergent media forms. Marshall McLuhan, Neil Postman, Daniel Bell, Ithiel de Sola Pool, George Gilder, Alvin Toffler—the list of 1960s futurists goes on and on. And this wasn’t just a matter of punditry: the OECD decreed the coming of the “information society” in 1975 and the European Union (EU) followed suit in 1979, while IBM merrily declared an “information age” in 1977. Bell theorized this technological utopia as post-ideological, because class would cease to matter (Mattelart). Polluting industries seemingly no longer represented the dynamic core of industrial capitalism; instead, market dynamism radiated from a networked, intellectual core of creative and informational activities. The new information and knowledge-based economies would rescue First World hegemony from an “insurgent world” that lurked within as well as beyond itself (Schiller).
Orwell’s others and the Cold-War futurists propagated one of the most destructive myths shaping both public debate and scholarly studies of the media, culture, and communication. They convinced generations of analysts, activists, and arrivistes that the promises and problems of the media could be understood via metaphors of the environment, and that the media were weightless and virtual. The famous medium they wished us to see as the message—a substance as vital to our wellbeing as air, water, and soil—turned out to be no such thing. Today’s cybertarians inherit their anti-Marxist, anti-materialist positions, as a casual glance at any new media journal, culture-industry magazine, or bourgeois press outlet discloses. The media are undoubtedly important instruments of social cohesion and fragmentation, political power and dissent, democracy and demagoguery, and other fraught extensions of human consciousness. But talk of media systems as equivalent to physical ecosystems—fashionable among marketers and media scholars alike—is predicated on the notion that they are environmentally benign technologies. This has never been true, from the beginnings of print to today’s cloud-covered computing. Our new book Greening the Media focuses on the environmental impact of the media—the myriad ways that media technology consumes, despoils, and wastes natural resources. We introduce ideas, stories, and facts that have been marginal or absent from popular, academic, and professional histories of media technology. Throughout, ecological issues have been at the core of our work, and we immodestly think the same should apply to media, communication, and cultural studies more generally. We recognize that those fields have contributed valuable research and teaching that address environmental questions. For instance, there is an abundant literature on representations of the environment in cinema, how to communicate environmental messages successfully, and press coverage of climate change.
That’s not enough. You may already know that media technologies contain toxic substances. You may have signed an on-line petition protesting the hazardous and oppressive conditions under which workers assemble cell phones and computers. But you may be startled, as we were, by the scale and pervasiveness of these environmental risks. They are present in and around every site where electronic and electric devices are manufactured, used, and thrown away, poisoning humans, animals, vegetation, soil, air and water. We are using the term “media” as a portmanteau word to cover a multitude of cultural and communications machines and processes—print, film, radio, television, information and communications technologies (ICT), and consumer electronics (CE). This is not only for analytical convenience, but because there is increasing overlap between the sectors. CE connect to ICT and vice versa; televisions resemble computers; books are read on telephones; newspapers are written through clouds; and so on. Cultural forms and gadgets that were once separate are now linked. The currently fashionable notion of convergence doesn’t quite capture the vastness of this integration, which includes any object with a circuit board, scores of accessories that plug into it, and a global nexus of labor and environmental inputs and effects that produce and flow from it. In 2007, a combination of ICT/CE and media production accounted for between 2 and 3 percent of all greenhouse gases emitted around the world (“Gartner Estimates”; International Telecommunication Union; Malmodin et al.). Between twenty and fifty million tonnes of electronic waste (e-waste) are generated annually, much of it via discarded cell phones and computers, which affluent populations throw out regularly in order to buy replacements. (Presumably this fits the narcissism of small differences that distinguishes them from their own past.)
E-waste is historically produced in the Global North—Australasia, Western Europe, Japan, and the US—and dumped in the Global South—Latin America, Africa, Eastern Europe, Southern and Southeast Asia, and China. It takes the form of a thousand different, often deadly, materials for each electrical and electronic gadget. This trend is changing as India and China generate their own media detritus (Robinson; Herat). Enclosed hard drives, backlit screens, cathode ray tubes, wiring, capacitors, and heavy metals pose few risks while these materials remain encased. But once discarded and dismantled, ICT/CE have the potential to expose workers and ecosystems to a morass of toxic components. Theoretically, “outmoded” parts could be reused or swapped for newer parts to refurbish devices. But items that are defined as waste undergo further destruction in order to collect remaining parts and valuable metals, such as gold, silver, copper, and rare-earth elements. This process causes serious health risks to bones, brains, stomachs, lungs, and other vital organs, in addition to birth defects and disrupted biological development in children. Medical catastrophes can result from lead, cadmium, mercury, other heavy metals, poisonous fumes emitted in search of precious metals, and such carcinogenic compounds as polychlorinated biphenyls, dioxin, polyvinyl chloride, and flame retardants (Maxwell and Miller 13). The United States’ Environmental Protection Agency estimates that by 2007 US residents owned approximately three billion electronic devices, with an annual turnover rate of 400 million units, and well over half such purchases made by women. Overall CE ownership varied with age—adults under 45 typically boasted four gadgets; those over 65 made do with one. The Consumer Electronics Association (CEA) says US$145 billion was expended in the sector in 2006 in the US alone, up 13% on the previous year. 
The CEA refers joyously to a “consumer love affair with technology continuing at a healthy clip.” In the midst of a recession, 2009 saw $165 billion in sales, and households owned between fifteen and twenty-four gadgets on average. By 2010, US$233 billion was spent on electronic products, three-quarters of the population owned a computer, nearly half of all US adults owned an MP3 player, and 85% had a cell phone. By all measures, the amount of ICT/CE on the planet is staggering. As the investigative science journalist Elizabeth Grossman put it: “no industry pushes products into the global market on the scale that high-tech electronics does” (Maxwell and Miller 2). In 2007, “of the 2.25 million tons of TVs, cell phones and computer products ready for end-of-life management, 18% (414,000 tons) was collected for recycling and 82% (1.84 million tons) was disposed of, primarily in landfill” (Environmental Protection Agency 1). Twenty million computers fell obsolete across the US in 1998, and the rate was 130,000 a day by 2005. It has been estimated that the five hundred million personal computers discarded in the US between 1997 and 2007 contained 6.32 billion pounds of plastics, 1.58 billion pounds of lead, three million pounds of cadmium, 1.9 million pounds of chromium, and 632,000 pounds of mercury (Environmental Protection Agency; Basel Action Network and Silicon Valley Toxics Coalition 6). The European Union is expected to generate upwards of twelve million tons annually by 2020 (Commission of the European Communities 17). While refrigerators and dangerous refrigerants account for the bulk of EU e-waste, about 44% of the most toxic e-waste measured in 2005 came from medium-to-small ICT/CE: computer monitors, TVs, printers, ink cartridges, telecommunications equipment, toys, tools, and anything with a circuit board (Commission of the European Communities 31-34).
Understanding the enormity of the environmental problems caused by making, using, and disposing of media technologies should arrest our enthusiasm for them. But intellectual correctives to the “love affair” with technology, or technophilia, have come and gone without establishing much of a foothold against the breathtaking flood of gadgets and the propaganda that proclaims their awe-inspiring capabilities.[i] There is a peculiar enchantment with the seeming magic of wireless communication, touch-screen phones and tablets, flat-screen high-definition televisions, 3-D IMAX cinema, mobile computing, and so on—a totemic, quasi-sacred power that the historian of technology David Nye has named the technological sublime (Nye Technological Sublime 297).[ii] We demonstrate in our book why there is no place for the technological sublime in projects to green the media. But first we should explain why such symbolic power does not accrue to more mundane technologies; after all, for the time-strapped cook, a pressure cooker does truly magical things. Three important qualities endow ICT/CE with unique symbolic potency—virtuality, volume, and novelty. The technological sublime of media technology is reinforced by the “virtual nature of much of the industry’s content,” which “tends to obscure their responsibility for a vast proliferation of hardware, all with high levels of built-in obsolescence and decreasing levels of efficiency” (Boyce and Lewis 5). Planned obsolescence entered the lexicon as a new “ethics” for electrical engineering in the 1920s and ’30s, when marketers, eager to “habituate people to buying new products,” called for designs to become quickly obsolete “in efficiency, economy, style, or taste” (Grossman 7-8).[iii] This defines the short lifespan deliberately constructed for computer systems (drives, interfaces, operating systems, batteries, etc.) 
by making tiny improvements incompatible with existing hardware (Science and Technology Council of the American Academy of Motion Picture Arts and Sciences 33-50; Boyce and Lewis). With planned obsolescence leading to “dizzying new heights” of product replacement (Rogers 202), there is an overstated sense of the novelty and preeminence of “new” media—a “cult of the present” is particularly dazzled by the spread of electronic gadgets through globalization (Mattelart and Constantinou 22). References to the symbolic power of media technology can be found in hymnals across the internet and the halls of academe: technologies change us, the media will solve social problems or create new ones, ICTs transform work, monopoly ownership no longer matters, journalism is dead, social networking enables social revolution, and the media deliver a cleaner, post-industrial capitalism. Here is a typical example from the twilight zone of the technological sublime (actually, the OECD): A major feature of the knowledge-based economy is the impact that ICTs have had on industrial structure, with a rapid growth of services and a relative decline of manufacturing. Services are typically less energy intensive and less polluting, so among those countries with a high and increasing share of services, we often see a declining energy intensity of production … with the emergence of the Knowledge Economy ending the old linear relationship between output and energy use (i.e. partially de-coupling growth and energy use) (Houghton 1). This statement mixes half-truths and nonsense. In reality, old-time, toxic manufacturing has moved to the Global South, where it is ascendant; pollution levels are rising worldwide; and energy consumption is accelerating in residential and institutional sectors, due almost entirely to ICT/CE usage, despite advances in energy conservation technology (a neat instance of the age-old Jevons Paradox).
In our book we show how these are all outcomes of growth in ICT/CE, the foundation of the so-called knowledge-based economy. ICT/CE are misleadingly presented as having little or no material ecological impact. In the realm of everyday life, the sublime experience of electronic machinery conceals the physical work and material resources that go into them, while the technological sublime makes the idea that more-is-better palatable, axiomatic, even sexy. In this sense, the technological sublime relates to what Marx called “the Fetishism which attaches itself to the products of labour” once they are in the hands of the consumer, who lusts after them as if they were “independent beings” (77). There is a direct but unseen relationship between technology’s symbolic power and the scale of its environmental impact, which the economist Juliet Schor refers to as a “materiality paradox”—the greater the frenzy to buy goods for their transcendent or nonmaterial cultural meaning, the greater the use of material resources (40-41). We wrote Greening the Media knowing that a study of the media’s effect on the environment must work especially hard to break the enchantment that inflames popular and elite passions for media technologies. We understand that the mere mention of the political-economic arrangements that make shiny gadgets possible, or the environmental consequences of their appearance and disappearance, is bad medicine. It’s an unwelcome buzz kill—not a cool way to converse about cool stuff. But we didn’t write the book expecting to win many allies among high-tech enthusiasts and ICT/CE industry leaders. We do not dispute the importance of information and communication media in our lives and modern social systems. We are media people by profession and personal choice, and deeply immersed in the study and use of emerging media technologies.
But we think it’s time for a balanced assessment with less hype and more practical understanding of the relationship of media technologies to the biosphere they inhabit. Media consumers, designers, producers, activists, researchers, and policy makers must find new and effective ways to move ICT/CE production and consumption toward ecologically sound practices. In the course of this project, we found in casual conversation, lecture halls, classroom discussions, and correspondence, consistent and increasing concern with the environmental impact of media technology, especially the deleterious effects of e-waste toxins on workers, air, water, and soil. We have learned that the grip of the technological sublime is not ironclad. Its instability provides a point of departure for investigating and criticizing the relationship between the media and the environment. The media are, and have been for a long time, intimate environmental participants. Media technologies are yesterday’s, today’s, and tomorrow’s news, but rarely in the way they should be. The prevailing myth is that the printing press, telegraph, phonograph, photograph, cinema, telephone, wireless radio, television, and internet changed the world without changing the Earth. In reality, each technology has emerged by despoiling ecosystems and exposing workers to harmful environments, a truth obscured by symbolic power and the power of moguls to set the terms by which such technologies are designed and deployed. Those who benefit from ideas of growth, progress, and convergence, who profit from high-tech innovation, monopoly, and state collusion—the military-industrial-entertainment-academic complex and multinational commandants of labor—have for too long ripped off the Earth and workers. As the current celebration of media technology inevitably winds down, perhaps it will become easier to comprehend that digital wonders come at the expense of employees and ecosystems. 
This will return us to Max Weber’s insistence that we understand technology in a mundane way as a “mode of processing material goods” (27). Further to understanding that ordinariness, we can turn to the pioneering conversation analyst Harvey Sacks, who noted three decades ago “the failures of technocratic dreams [:] that if only we introduced some fantastic new communication machine the world will be transformed.” Such fantasies derived from the very banality of these introductions—that every time they took place, one more “technical apparatus” was simply “being made at home with the rest of our world” (548). Media studies can join in this repetitive banality. Or it can withdraw the welcome mat for media technologies that despoil the Earth and wreck the lives of those who make them. In our view, it’s time to green the media by greening media studies.
References
“A Cyber-House Divided.” Economist 4 Sep. 2010: 61-62.
“Gartner Estimates ICT Industry Accounts for 2 Percent of Global CO2 Emissions.” Gartner press release. 6 April 2007. ‹http://www.gartner.com/it/page.jsp?id=503867›.
Basel Action Network and Silicon Valley Toxics Coalition. Exporting Harm: The High-Tech Trashing of Asia. Seattle: Basel Action Network, 25 Feb. 2002.
Benjamin, Walter. “Central Park.” Trans. Lloyd Spencer with Mark Harrington. New German Critique 34 (1985): 32-58.
Biagioli, Mario. “Postdisciplinary Liaisons: Science Studies and the Humanities.” Critical Inquiry 35.4 (2009): 816-33.
Boyce, Tammy, and Justin Lewis, eds. Climate Change and the Media. New York: Peter Lang, 2009.
Commission of the European Communities. “Impact Assessment.” Commission Staff Working Paper accompanying the Proposal for a Directive of the European Parliament and of the Council on Waste Electrical and Electronic Equipment (WEEE) (recast). COM (2008) 810 Final. Brussels: Commission of the European Communities, 3 Dec. 2008.
Environmental Protection Agency. Management of Electronic Waste in the United States. Washington, DC: EPA, 2007.
Environmental Protection Agency. Statistics on the Management of Used and End-of-Life Electronics. Washington, DC: EPA, 2008.
Grossman, Elizabeth. Tackling High-Tech Trash: The E-Waste Explosion & What We Can Do about It. New York: Demos, 2008. ‹http://www.demos.org/pubs/e-waste_FINAL.pdf›.
Herat, Sunil. “Review: Sustainable Management of Electronic Waste (e-Waste).” Clean 35.4 (2007): 305-10.
Houghton, J. “ICT and the Environment in Developing Countries: Opportunities and Developments.” Paper prepared for the Organization for Economic Cooperation and Development, 2009.
International Telecommunication Union. ICTs for Environment: Guidelines for Developing Countries, with a Focus on Climate Change. Geneva: ICT Applications and Cybersecurity Division, Policies and Strategies Department, ITU Telecommunication Development Sector, 2008.
Malmodin, Jens, Åsa Moberg, Dag Lundén, Göran Finnveden, and Nina Lövehagen. “Greenhouse Gas Emissions and Operational Electricity Use in the ICT and Entertainment & Media Sectors.” Journal of Industrial Ecology 14.5 (2010): 770-90.
Marx, Karl. Capital: Vol. 1: A Critical Analysis of Capitalist Production. 3rd ed. Trans. Samuel Moore and Edward Aveling. Ed. Frederick Engels. New York: International Publishers, 1987.
Mattelart, Armand, and Costas M. Constantinou. “Communications/Excommunications: An Interview with Armand Mattelart.” Trans. Amandine Bled, Jacques Guot, and Costas Constantinou. Review of International Studies 34.1 (2008): 21-42.
Mattelart, Armand. “Cómo nació el mito de Internet.” Trans. Yanina Guthman. El mito internet. Ed. Victor Hugo de la Fuente. Santiago: Editorial aún creemos en los sueños, 2002. 25-32.
Maxwell, Richard, and Toby Miller. Greening the Media. New York: Oxford University Press, 2012.
Nye, David E. American Technological Sublime. Cambridge, Mass.: MIT Press, 1994.
Nye, David E. Technology Matters: Questions to Live With. Cambridge, Mass.: MIT Press, 2007.
Orwell, George. “As I Please.” Tribune. 12 May 1944.
Richtel, Matt. “Consumers Hold on to Products Longer.” New York Times 26 Feb. 2011: B1.
Robinson, Brett H. “E-Waste: An Assessment of Global Production and Environmental Impacts.” Science of the Total Environment 408.2 (2009): 183-91.
Rogers, Heather. Gone Tomorrow: The Hidden Life of Garbage. New York: New Press, 2005.
Sacks, Harvey. Lectures on Conversation. Vols. I and II. Ed. Gail Jefferson. Malden: Blackwell, 1995.
Schiller, Herbert I. Information and the Crisis Economy. Norwood: Ablex Publishing, 1984.
Schor, Juliet B. Plenitude: The New Economics of True Wealth. New York: Penguin, 2010.
Science and Technology Council of the American Academy of Motion Picture Arts and Sciences. The Digital Dilemma: Strategic Issues in Archiving and Accessing Digital Motion Picture Materials. Los Angeles: Academy Imprints, 2007.
Weber, Max. “Remarks on Technology and Culture.” Trans. Beatrix Zumsteg and Thomas M. Kemple. Ed. Thomas M. Kemple. Theory, Culture
[i] The global recession that began in 2007 has been the main reason for some declines in Global North energy consumption, slower turnover in gadget upgrades, and longer periods of consumer maintenance of electronic goods (Richtel).
[ii] The emergence of the technological sublime has been attributed to the Western triumphs in the post-Second World War period, when technological power supposedly supplanted the power of nature to inspire fear and astonishment (Nye Technology Matters 28). Historian Mario Biagioli explains how the sublime permeates everyday life through technoscience: "If around 1950 the popular imaginary placed science close to the military and away from the home, today’s technoscience frames our everyday life at all levels, down to our notion of the self" (818).
[iii] This compulsory repetition is seemingly undertaken each time as a novelty, governed by what German cultural critic Walter Benjamin called, in his awkward but occasionally illuminating prose, "the ever-always-the-same" of "mass-production" cloaked in "a hitherto unheard-of significance" (48).
APA, Harvard, Vancouver, ISO, and other styles
15

Deer, Patrick, and Toby Miller. "A Day That Will Live In … ?" M/C Journal 5, no. 1 (March 1, 2002). http://dx.doi.org/10.5204/mcj.1938.

Full text
Abstract:
By the time you read this, it will be wrong. Things seemed to be moving so fast in these first days after airplanes crashed into the World Trade Center, the Pentagon, and the Pennsylvania earth. Each certainty is as carelessly dropped as it was once carelessly assumed. The sounds of lower Manhattan that used to serve as white noise for residents—sirens, screeches, screams—are no longer signs without a referent. Instead, they make folks stare and stop, hurry and hustle, wondering whether the noises we know so well are in fact, this time, coefficients of a new reality. At the time of writing, the events themselves are also signs without referents—there has been no direct claim of responsibility, and little proof offered by accusers since the 11th. But it has been assumed that there is a link to US foreign policy, its military and economic presence in the Arab world, and opposition to it that seeks revenge. In the intervening weeks the US media and the war planners have supplied their own narrow frameworks, making New York’s “ground zero” into the starting point for a new escalation of global violence. We want to write here about the combination of sources and sensations that came that day, and the jumble of knowledges and emotions that filled our minds. Working late the night before, Toby was awoken in the morning by one of the planes right overhead. That happens sometimes. I have long expected a crash when I’ve heard the roar of jet engines so close—but I didn’t this time. Often when that sound hits me, I get up and go for a run down by the water, just near Wall Street. Something kept me back that day. Instead, I headed for my laptop. Because I cannot rely on local media to tell me very much about the role of the US in world affairs, I was reading the British newspaper The Guardian on-line when it flashed a two-line report about the planes. I looked up at the calendar above my desk to see whether it was April 1st. Truly. 
Then I got off-line and turned on the TV to watch CNN. That second, the phone rang. My quasi-ex-girlfriend I’m still in love with called from the mid-West. She was due to leave that day for the Bay Area. Was I alright? We spoke for a bit. She said my cell phone was out, and indeed it was for the remainder of the day. As I hung up from her, my friend Ana rang, tearful and concerned. Her husband, Patrick, had left an hour before for work in New Jersey, and it seemed like a dangerous separation. All separations were potentially fatal that day. You wanted to know where everyone was, every minute. She told me she had been trying to contact Palestinian friends who worked and attended school near the event—their ethnic, religious, and national backgrounds made for real poignancy, as we both thought of the prejudice they would (probably) face, regardless of the eventual who/what/when/where/how of these events. We agreed to meet at Bruno’s, a bakery on La Guardia Place. For some reason I really took my time, though, before getting to Ana. I shampooed and shaved under the shower. This was a horror, and I needed to look my best, even as men and women were losing and risking their lives. I can only interpret what I did as an attempt to impose normalcy and control on the situation, on my environment. When I finally made it down there, she’d located our friends. They were safe. We stood in the street and watched the Towers. Horrified by the sight of human beings tumbling to their deaths, we turned to buy a tea/coffee—again some ludicrous normalization—but were drawn back by chilling screams from the street. Racing outside, we saw the second Tower collapse, and clutched at each other. People were streaming towards us from further downtown. We decided to be with our Palestinian friends in their apartment. When we arrived, we learnt that Mark had been four minutes away from the WTC when the first plane hit. 
I tried to call my daughter in London and my father in Canberra, but to no avail. I rang the mid-West, and asked my maybe-former novia to call England and Australia to report in on me. Our friend Jenine got through to relatives on the West Bank. Israeli tanks had commenced a bombardment there, right after the planes had struck New York. Family members spoke to her from under the kitchen table, where they were taking refuge from the shelling of their house. Then we gave ourselves over to television, like so many others around the world, even though these events were happening only a mile away. We wanted to hear official word, but there was just a huge absence—Bush was busy learning to read in Florida, then leading from the front in Louisiana and Nebraska. As the day wore on, we split up and regrouped, meeting folks. One guy was in the subway when smoke filled the car. No one could breathe properly, people were screaming, and his only thought was for his dog DeNiro back in Brooklyn. From the panic of the train, he managed to call his mom on a cell to ask her to feed “DeNiro” that night, because it looked like he wouldn’t get home. A pregnant woman feared for her unborn as she fled the blasts, pushing the stroller with her baby in it as she did so. Away from these heart-rending tales from strangers, there was the fear: good grief, what horrible price would the US Government extract for this, and who would be the overt and covert agents and targets of that suffering? What blood-lust would this generate? What would be the pattern of retaliation and counter-retaliation? What would become of civil rights and cultural inclusiveness? So a jumble of emotions came forward, I assume in all of us. Anger was not there for me, just intense sorrow, shock, and fear, and the desire for intimacy. Network television appeared to offer me that, but in an ultimately unsatisfactory way. For I think I saw the end-result of reality TV that day.
I have since decided to call this ‘emotionalization’—network TV’s tendency to substitute analysis of US politics and economics with a stress on feelings. Of course, powerful emotions have been engaged by this horror, and there is value in addressing that fact and letting out the pain. I certainly needed to do so. But on that day and subsequent ones, I looked to the networks, traditional sources of current-affairs knowledge, for just that—informed, multi-perspectival journalism that would allow me to make sense of my feelings, and come to a just and reasoned decision about how the US should respond. I waited in vain. No such commentary came forward. Just a lot of asinine inquiries from reporters that were identical to those they pose to basketballers after a game: Question—‘How do you feel now?’ Answer—‘God was with me today.’ For the networks were insistent on asking everyone in sight how they felt about the end of las torres gemelas. In this case, we heard the feelings of survivors, firefighters, viewers, media mavens, Republican and Democrat hacks, and vacuous Beltway state-of-the-nation pundits. But learning of the military-political economy, global inequality, and ideologies and organizations that made for our grief and loss—for that, there was no space. TV had forgotten how to do it. My principal feeling soon became one of frustration. So I headed back to where I began the day—The Guardian web site, where I was given insightful analysis of the messy factors of history, religion, economics, and politics that had created this situation. As I dealt with the tragedy of folks whose lives had been so cruelly lost, I pondered what it would take for this to stop. Or whether this was just the beginning. I knew one thing—the answers wouldn’t come from mainstream US television, no matter how full of feelings it was. And that made Toby anxious. And afraid. He still is. And so the dreams come. 
In one, I am suddenly furloughed from my job with an orchestra, as audience numbers tumble. I make my evening-wear way to my locker along with the other players, emptying it of bubble gum and instrument. The next night, I see a gigantic, fifty-foot-high wave heading for the city beach where I’ve come to swim. Somehow I am sheltered behind a huge wall, as all the people around me die. Dripping, I turn to find myself in a media-stereotype “crack house” of the early ’90s—desperate-looking black men, endless doorways, sudden police arrival, and my earnest search for a passport that will explain away my presence. I awake in horror, to the realization that the passport was already open and stamped—racialization at work for Toby, every day and in every way, as a white man in New York City. Ana’s husband, Patrick, was at work ten miles from Manhattan when “it” happened. In the hallway, I overheard some talk about two planes crashing, but went to teach anyway in my usual morning stupor. This was just the usual chatter of disaster junkies. I didn’t hear the words “World Trade Center” until ten thirty, at the end of the class at the college I teach at in New Jersey, across the Hudson river. A friend and colleague walked in and told me the news of the attack, to which I replied “You must be fucking joking.” He was a little offended. Students were milling haphazardly on the campus in the late summer weather, some looking panicked like me. My first thought was of some general failure of the air-traffic control system. There must be planes falling out of the sky all over the country. Then the height of the towers: how far towards our apartment in Greenwich Village would the towers fall? Neither of us worked in the financial district a mile downtown, but was Ana safe? Where on the college campus could I see what was happening?
I recognized the same physical sensation I had felt the morning after Hurricane Andrew in Miami seeing at a distance the wreckage of our shattered apartment across a suburban golf course strewn with debris and flattened power lines. Now I was trapped in the suburbs again at an unbridgeable distance from my wife and friends who were witnessing the attacks first hand. Were they safe? What on earth was going on? This feeling of being cut off, my path to the familiar places of home blocked, remained for weeks my dominant experience of the disaster. In my office, phone calls to the city didn’t work. There were six voice-mail messages from my teenaged brother Alex in small-town England giving a running commentary on the attack and its aftermath that he was witnessing live on television while I dutifully taught my writing class. “Hello, Patrick, where are you? Oh my god, another plane just hit the towers. Where are you?” The web was choked: no access to newspapers online. Email worked, but no one was wasting time writing. My office window looked out over a soccer field to the still woodlands of western New Jersey: behind me to the east the disaster must be unfolding. Finally I found a website with a live stream from ABC television, which I watched flickering and stilted on the tiny screen. It had all already happened: both towers already collapsed, the Pentagon attacked, another plane shot down over Pennsylvania, unconfirmed reports said, there were other hijacked aircraft still out there unaccounted for. Manhattan was sealed off. George Washington Bridge, Lincoln and Holland tunnels, all the bridges and tunnels from New Jersey I used to mock shut down. Police actions sealed off the highways into “the city.” The city I liked to think of as the capital of the world was cut off completely from the outside, suddenly vulnerable and under siege. There was no way to get home. 
The phone rang abruptly and Alex, three thousand miles away, told me he had spoken to Ana earlier and she was safe. After a dozen tries, I managed to get through and spoke to her, learning that she and Toby had seen people jumping and then the second tower fall. Other friends had been even closer. Everyone was safe, we thought. I sat for another couple of hours in my office uselessly. The news was incoherent, stories contradictory, loops of the planes hitting the towers only just ready for recycling. The attacks were already being transformed into “the World Trade Center Disaster,” not yet the ahistorical singularity of the emergency “nine one one.” Stranded, I had to spend the night in New Jersey at my boss’s house, reminded again of the boundless generosity of Americans to relative strangers. In an effort to protect his young son from the as yet unfiltered images saturating cable and Internet, my friend’s TV set was turned off and we did our best to reassure. We listened surreptitiously to news bulletins on AM radio, hoping that the roads would open. Walking the dog with my friend’s wife and son we crossed a park on the ridge on which Upper Montclair sits. Ten miles away a huge column of smoke was rising from lower Manhattan, where the stunning absence of the towers was clearly visible. The summer evening was unnervingly still. We kicked a soccer ball around on the front lawn and a woman walked by, distracted, shocked and pale, up the tree-lined suburban street, suffering her own wordless trauma. I remembered that though most of my students were ordinary working people, Montclair is a well-off dormitory for the financial sector and high rises of Wall Street and Midtown. For the time being, this was a white-collar disaster. I slept a short night in my friend’s house, waking to hope I had dreamed it all, and took the commuter train in with shell-shocked bankers and corporate types.
All men, all looking nervously across the river toward glimpses of the Manhattan skyline as the train neared Hoboken. “I can’t believe they’re making us go in,” one guy had repeated on the station platform. He had watched the attacks from his office in Midtown, “The whole thing.” Inside the train we all sat in silence. Up from the PATH train station on 9th street I came onto a carless 6th Avenue. At 14th street barricades now sealed off downtown from the rest of the world. I walked down the middle of the avenue to a newspaper stand; the Indian proprietor shrugged “No deliveries below 14th.” I had not realized that the closer to the disaster you came, the less information would be available. Except, I assumed, for the evidence of my senses. But at 8 am the Village was eerily still, few people about, nothing in the sky, including the twin towers. I walked to Houston Street, which was full of trucks and police vehicles. Tractor trailers sat carrying concrete barriers. Below Houston, each street into Soho was barricaded and manned by huddles of cops. I had walked effortlessly up into the “lockdown,” but this was the “frozen zone.” There was no going further south towards the towers. I walked the few blocks home, found my wife sleeping, and climbed into bed, still in my clothes from the day before. “Your heart is racing,” she said. I realized that I hadn’t known if I would get back, and now I never wanted to leave again; it was still only eight thirty am. Lying there, I felt the terrible wonder of a distant bystander for the first-hand witness. Ana’s face couldn’t tell me what she had seen. I felt I needed to know more, to see and understand. Even though I knew the effort was useless: I could never bridge that gap that had trapped me ten miles away, my back turned to the unfolding disaster. The television was useless: we don’t have cable, and the mast on top of the North Tower, which Ana had watched fall, had relayed all the network channels. 
I knew I had to go down and see the wreckage. Later I would realize how lucky I had been not to suffer from “disaster envy.” Unbelievably, in retrospect, I commuted into work the second day after the attack, dogged by the same unnerving sensation that I would not get back—to the wounded, humbled former center of the world. My students were uneasy, all talked out. I was a novelty, a New Yorker living in the Village a mile from the towers, but I was forty-eight hours late. Out of place in both places. I felt torn up, but not angry. Back in the city at night, people were eating and drinking with a vengeance, the air filled with acrid sickly-sweet smoke from the burning wreckage. Eyes stung and noses ran with a bitter acrid taste. Who knows what we’re breathing in, we joked nervously. A friend’s wife had fallen out with him for refusing to wear a protective mask in the house. He shrugged a wordlessly reassuring smile. What could any of us do? I walked with Ana down to the top of West Broadway from where the towers had commanded the skyline over SoHo; downtown dense smoke blocked the view to the disaster. A crowd of onlookers pushed up against the barricades all day, some weeping, others gawping. A tall guy was filming the grieving faces with a video camera, which was somehow the worst thing of all, the first sign of the disaster tourism that was already mushrooming downtown. Across the street an Asian artist sat painting the street scene in streaky black and white; he had scrubbed out two white columns where the towers would have been. “That’s the first thing I’ve seen that’s made me feel any better,” Ana said. We thanked him, but he shrugged blankly, still in shock I supposed. On the Friday, the clampdown. I watched the Mayor and Police Chief hold a press conference in which they angrily told the stream of volunteers to “ground zero” that they weren’t needed. “We can handle this ourselves. We thank you. But we don’t need your help,” Commissioner Kerik said.
After the free-for-all of the first couple of days, with its amazing spontaneities and common gestures of goodwill, the clampdown was going into effect. I decided to go down to Canal Street and see if it was true that no one was welcome anymore. So many paths through the city were blocked now. “Lock down, frozen zone, war zone, the site, combat zone, ground zero, state troopers, secured perimeter, national guard, humvees, family center”: a disturbing new vocabulary that seemed to stamp the logic of Giuliani’s sanitized and over-policed Manhattan onto the wounded hulk of the city. The Mayor had been magnificent in the heat of the crisis; Churchillian, many were saying—and indeed, Giuliani quickly appeared on the cover of Cigar Aficionado, complete with wing collar and the misquotation from Kipling, “Captain Courageous.” Churchill had not believed in peacetime politics either, and he never got over losing his empire. Now the regime of command and control over New York’s citizens and its economy was being stabilized and reimposed. The sealed-off, disfigured, and newly militarized spaces of the New York through which I have always loved to wander at all hours seemed to have been put beyond reach for the duration. And, in the new post-“9/11” post-history, the duration could last forever. The violence of the attacks seemed to have elicited a heavy-handed official reaction that sought to contain and constrict the best qualities of New York. I felt more anger at the clampdown than I did at the demolition of the towers. I knew this was unreasonable, but I feared the reaction, the spread of the racial harassment and racial profiling that I had already heard of from my students in New Jersey. This militarizing of the urban landscape seemed to negate the sprawling, freewheeling, boundless largesse and tolerance on which New York had complacently claimed a monopoly.
For many the towers stood for that as well, not just as the monumental outposts of global finance that had been attacked. Could the American flag mean something different? For a few days, perhaps—on the helmets of firemen and construction workers. But not for long. On the Saturday, I found an unmanned barricade way east along Canal Street and rode my bike past throngs of Chinatown residents, by the Federal jail block where prisoners from the first World Trade Center bombing were still being held. I headed south and west towards Tribeca; below the barricades in the frozen zone, you could roam freely, the cops and soldiers assuming you belonged there. I felt uneasy, doubting my own motives for being there, feeling the blood drain from my head in the same numbing shock I’d felt every time I headed downtown towards the site. I looped towards Greenwich Avenue, passing an abandoned bank full of emergency supplies and boxes of protective masks. Crushed cars still smeared with pulverized concrete and encrusted with paperwork strewn by the blast sat on the street near the disabled telephone exchange. On one side of the avenue stood a horde of onlookers, on the other television crews, all looking two blocks south towards a colossal pile of twisted and smoking steel, seven stories high. We were told to stay off the street by long-suffering national guardsmen and women with southern accents, kids. Nothing happening, just the aftermath. The TV crews were interviewing worn-out, dust-covered volunteers and firemen who sat quietly leaning against the railings of a park filled with scraps of paper. Out on the West Side highway, a high-tech truck was offering free cellular phone calls. The six lanes by the river were full of construction machinery and military vehicles. Ambulances rolled slowly uptown, bodies inside? I locked my bike redundantly to a lamppost and crossed under the hostile gaze of plainclothes police to another media encampment. 
On the path by the river, two camera crews were complaining bitterly in the heat. “After five days of this I’ve had enough.” They weren’t talking about the trauma, bodies, or the wreckage, but censorship. “Any blue light special gets to roll right down there, but they see your press pass and it’s get outta here. I’ve had enough.” I fronted out the surly cops and ducked under the tape onto the path, walking onto a Pier on which we’d spent many lazy afternoons watching the river at sunset. Dust everywhere, police boats docked and waiting, a crane ominously dredging mud into a barge. I walked back past the camera operators onto the highway and walked up to an interview in process. Perfectly composed, a fire chief and his crew from some small town in upstate New York were politely declining to give details about what they’d seen at “ground zero.” The men’s faces were dust streaked, their eyes slightly dazed with the shock of a horror previously unimaginable to most Americans. They were here to help the best they could, now they’d done as much as anyone could. “It’s time for us to go home.” The chief was eloquent, almost rehearsed in his precision. It was like a Magnum press photo. But he was refusing to cooperate with the media’s obsessive emotionalism. I walked down the highway, joining construction workers, volunteers, police, and firemen in their hundreds at Chambers Street. No one paid me any attention; it was absurd. I joined several other watchers on the stairs by Stuyvesant High School, which was now the headquarters for the recovery crews. Just two or three blocks away, the huge jagged teeth of the towers’ beautiful tracery lurched out onto the highway above huge mounds of debris. The TV images of the shattered scene made sense as I placed them into what was left of a familiar Sunday afternoon geography of bike rides and walks by the river, picnics in the park lying on the grass and gazing up at the infinite solidity of the towers. Demolished. 
It was breathtaking. If “they” could do that, they could do anything. Across the street, at tables, military policemen were checking the credentials of the milling volunteers and issuing the pink and orange tags that gave access to ground zero.

Without warning, there was a sudden stampede running full pelt up from the disaster site, men and women in fatigues, burly construction workers, firemen in bunker gear. I ran a few yards then stopped. Other people milled around idly, ignoring the panic, smoking and talking in low voices. It was a mainly white, blue-collar scene. All these men wearing flags and carrying crowbars and flashlights. In their company, the intolerance and rage I associated with flags and construction sites was nowhere to be seen. They were dealing with a torn and twisted otherness that dwarfed machismo or bigotry.

I talked to a moustachioed, pony-tailed construction worker who’d hitched a ride from the mid-west to “come and help out.” He was staying at the Y, he said, it was kind of rough. “Have you been down there?” he asked, pointing towards the wreckage. “You’re British, you weren’t in World War Two were you?” I replied in the negative. “It’s worse ’n that. I went down last night and you can’t imagine it. You don’t want to see it if you don’t have to.” Did I know any welcoming ladies? he asked. The Y was kind of tough.

When I saw TV images of President Bush speaking to the recovery crews and steelworkers at “ground zero” a couple of days later, shouting through a bullhorn to chants of “USA, USA,” I knew nothing had changed. New York’s suffering was subject to a second hijacking by the brokers of national unity. New York had never been America, and now its terrible human loss and its great humanity were redesignated in the name of the nation, of the coming war.
The signs without a referent were being forcibly appropriated, locked into an impoverished patriotic framework, interpreted for “us” by a compliant media and an opportunistic regime eager to rein in civil liberties, to unloose its war machine and tighten its grip on the Muslim world.

That day, drawn to the river again, I had watched F18 fighter jets flying patterns over Manhattan as Bush’s helicopters came in across the river. Otherwise empty of air traffic, “our” skies were being torn up by the military jets: it was somehow the worst sight yet, worse than the wreckage or the bands of disaster tourists on Canal Street, a sign of further violence yet to come. There was a carrier out there beyond New York harbor, there to protect us: the bruising, blustering city once open to all comers. That felt worst of all.

In the intervening weeks, we have seen other, more unstable ways of interpreting the signs of September 11 and its aftermath. Many have circulated on the Internet, past the blockages and blockades placed on urban spaces and intellectual life. Karlheinz Stockhausen’s work was banished (at least temporarily) from the canon of avant-garde electronic music when he described the attack on las torres gemelas as akin to a work of art. If Jacques Derrida had described it as an act of deconstruction (turning technological modernity literally in on itself), or Jean Baudrillard had announced that the event was so thick with mediation it had not truly taken place, something similar would have happened to them (and still may). This is because, as Don DeLillo so eloquently put it in implicit reaction to the plaintive cry “Why do they hate us?”: “it is the power of American culture to penetrate every wall, home, life and mind”—whether via military action or cultural iconography. All these positions are correct, however grisly and annoying they may be.
What GK Chesterton called the “flints and tiles” of nineteenth-century European urban existence were rent asunder like so many victims of high-altitude US bombing raids. As a First-World disaster, it became knowable as the first-ever US “ground zero” precisely through the high premium immediately set on the lives of Manhattan residents and the rarefied discussion of how to commemorate the high-altitude towers. When, a few weeks later, an American Airlines plane crashed on take-off from Queens, that borough was left open to all comers. Manhattan was locked down, flown over by “friendly” bombers. In stark contrast to the open if desperate faces on the street of 11 September, people went about their business with heads bowed even lower than is customary.

Contradictory deconstructions and valuations of Manhattan lives mean that September 11 will live in infamy and hyper-knowability. The vengeful United States government and population continue on their way. Local residents must ponder insurance claims, real-estate values, children’s terrors, and their own roles in something beyond their ken. New York had been forced beyond being the center of the financial world. It had become a military target, a place that was receiving as well as dispatching the slings and arrows of global fortune.

Citation reference for this article

MLA Style:
Deer, Patrick, and Toby Miller. "A Day That Will Live In … ?" M/C: A Journal of Media and Culture 5.1 (2002). [your date of access] <http://www.media-culture.org.au/0203/adaythat.php>.

Chicago Style:
Deer, Patrick, and Toby Miller. "A Day That Will Live In … ?" M/C: A Journal of Media and Culture 5, no. 1 (2002). <http://www.media-culture.org.au/0203/adaythat.php> ([your date of access]).

APA Style:
Deer, P., & Miller, T. (2002). A Day That Will Live In … ? M/C: A Journal of Media and Culture, 5(1). <http://www.media-culture.org.au/0203/adaythat.php> ([your date of access]).