Academic literature on the topic 'IG. Information presentation: hypertext, hypermedia'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'IG. Information presentation: hypertext, hypermedia.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press on it, and we will generate automatically the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as pdf and read online its abstract whenever available in the metadata.

Journal articles on the topic "IG. Information presentation: hypertext, hypermedia"

1

Bolotnov, Aleksey. "Stream as a New Hypermedia Genre." Vestnik Volgogradskogo gosudarstvennogo universiteta. Serija 2. Jazykoznanije, no. 2 (June 2021): 111–20. http://dx.doi.org/10.15688/jvolsu2.2021.2.10.

Full text
Abstract:
The relevance of the study is due to changes in media culture evoked by new technologies that stimulate the emergence of a new hypertext genre-stylistic reality. The article examines a new media phenomenon –stream.The stream is considered as public communicative flow that takes place in real time and includes text, as well as video and audio content, organized by the streamer (the author) – information and media personality with the active involvement of other media participants of different types; it is implemented in live Internet broadcasting; it creates a variety of opportunities for any participant (from commenting, polling, to participation in action). The stream emerged from the instrumental-service approach to the development and comprehension of various relevant topics in the content presentation. The aim of the participants in this media process is self-expression and self-actualization, the incentive to be active and interact (from personal motives to socially significant ones). As a hypermedia genre it is considered on the material of media discourse "#daiDudya" with the participation of A.A. Venediktov, taking into account his linguistic and extralinguistic characteristics. The study of the stream as a new and insufficiently investigated phenomenon of modern media communication, the development of a methodology for its analysis are of interest for communicative and cognitive linguistics, media linguistics, sociology, psychology, discourse, and cultural linguistics.
APA, Harvard, Vancouver, ISO, and other styles
2

Bruns, Axel. "The Knowledge Adventure." M/C Journal 3, no. 5 (October 1, 2000). http://dx.doi.org/10.5204/mcj.1873.

Full text
Abstract:
In his recent re-evaluation of McLuhanite theories for the information age, Digital McLuhan, Paul Levinson makes what at first glance appears to be a curious statement: he says that on the Web "the common denominator ... is the written word, as it is and has been with all things having to do with computers -- and will likely continue to be until such time, if ever, that the spoken word replaces the written as the vehicle of computer commands" (38). This, however, seems to directly contradict what any Web user has been able to experience for several years now: Web content has increasingly come to rely on graphics, at first still, now also often animated, and continues to include more and more audiovisual elements of various kinds. We don't even have to look at the current (and, hopefully, passing) phase of interminable Shockwave splash pages, which users have to endure while they wait to be transferred to the 'real' content of a site: even on as print-focussed a page as the one you're currently reading, you'll see graphical buttons to the left and at the bottom, for example. Other sites far surpass this for graphical content: it is hard to imagine what the official Olympics site or that of EXPO 2000 would look like in text-only versions. The drive towards more and more graphics has long been, well, visible. Already in 1997 (at a time when 33.6k modems were considered fast) Marshall considered the Internet to have entered its "graphic stage, a transitional media form that has made surfing the net feel like flipping through a glossy magazine or the interlinkages of a multimedia game or encyclopedia CD-ROMs"; to him this stage "relies on a construction that is textual and graphically enhanced through software overlays ... and highlighted by sample images, sound bites and occasionally short, moving images" (51). This historicised view mirrors a distinction made around the same time by Lovink, who divided users into "IBM-PC-modernists" still running text-based interfaces, and those enjoying the "Apple-Windows 95-postmodernism" of their graphical user interfaces (Lovink and Winkler 15). In the age of GUIs, in fact, 'text' in itself does not really exist on screen any more: everything from textual to graphical information consists of individual pixels in the same way, which is precisely what makes Levinson's initial statement appear so anachronistic. The move from 'text' to 'graphics and text' could thus be seen as a sign of the overall shift from the industrial to the information age -- a view not without precedent, since the transition from modernist to postmodernist times is similarly contemporaneous with the rise of graphic design as a form of communication as well as art. Beyond such broad strokes, we can also identify some of the finer details presented by the current state of graphics on the Web, however. Marshall's 'graphic stage', after all, was a 'transitional' one, and by now it seems that we might have passed it already, entering into a new aesthetic paradigm which appears to have borrowed many of its approaches from the realm of computer games: the new Web vision is shiny, colourful, animated, and increasingly also accompanied by sound effects. This is no surprise since the mass acceptance of personal computers themselves was largely driven by their use as a source of entertainment. 
Gaming and computers are inseparably interconnected, and the development of home computers' graphical capabilities in particular has long been driven almost exclusively by players' needs for better, faster, more realistic graphics. Of course, the way we interact with computers also owes a significant debt to games. Engagement in a dialogue with the machine, in which the computer displays both our own actions and its responses, representing us and itself simultaneously on screen, is the predominant mode of computing, and such a mode of engagement (dissolving the barriers between human mind and machinic computation) can now also be found in our interaction with the Web. Here, too, individual knowledge blends with the information available on the network as we immerse ourselves in hypertextual connectivity. As Talbott writes, "clearly, a generation raised on Adventure and video games -- where every door conceals a treasure or a monster worth pursuing in frenetic style -- has its own way of appreciating the hypertext interface" (13); not only has the Web taken on the aesthetics of computer gaming, then, but using the Web itself exhibits aspects of participation in a global 'knowledge game'. Talbott means to criticise this when he writes that thus "the doctrines of endless Enlightenment and Progress become the compelling subtext of a universal video game few can resist playing" (196), but however we may choose to evaluate this game, the observation itself is sound. One possible reason for taking a critical view of this development is that computer and video games rarely present more than the appearance of participation; while players may have a feeling of control over features of the game, the game itself usually remains entirely unaffected and ready for a restart at any moment. Web users might similarly feel empowered by the wealth of information to which they have gained access online, without actually making use of that information to form new knowledge for themselves. This is a matter for the individual user, however; where they have a true interest in the information they seek, we can have every confidence that they will process it to their advantage, too. Beyond this, the skills of information seeking learnt from Web use might also have overall benefits for users, as a kind of 'mind-eye coordination' similar to the 'hand-eye coordination' benefits often attributed to the playing of action games. The ability to figure out unknown problems, the desire to understand and gain control of a situation, which they can learn from computer games, is likely to help them better understand the complexity and interconnectedness of anything they might learn: "it could ... well be true that the cross-linking inherent in hypertext encourages people to see the connections among different aspects of the world so that they will get less fragmented knowledge" (Nielsen 190). The increasingly graphical nature of Web content could appear to work against this, however: "extensions of traditional hyperTEXT systems to encompass hyperMEDIA introduces [sic] a new dimension. ... The picture that 'speaks a thousand words' may say a thousand different words to different viewers. Pictures or graphics lend themselves much more than does text to multiple interpretations", as McAleese claims (12-13) -- but perhaps this overrates significantly the ability of text to anchor down meaning to any one point. 
Rather, it is questionable whether text and images really are that different from one another -- viewed from a historical perspective, certainly, opinions are divided, it seems: "the medieval church feared the power of the visual image because of the way it appeared to licence the imagination and the consideration of alternatives. Obversely, contemporary cultural critics fear that the abandonment of the written word in favour of graphics is stifling critical and creative powers" (Moody 60) -- take, for example, the commonly held view that movies made from novels limit the reader's imagination to the particular portrayal of events chosen for the film. In fact, there are good reasons to believe that both text and images (especially when they are increasingly easy to manipulate by digital means, thus losing once and for all their claim to photographic 'realism') can 'say a thousand different words to different viewers' -- indeed, traditional photography has also been described as 'writing with light'. As Levinson notes, therefore, "once the photograph is converted to a digital format, it is as amenable to manipulation, as divorced from the reality it purports to represent, as the words which appear on the same screen. On that score, the Internet's co-option of photography -- the rendering of the formerly analogue image as its content -- is at least as profound as the Internet's promotion of written communication" (43), and this, then, may perhaps begin to provide a resolution to his overall preference for writing as the predominant Internet communication form, as quoted above: online writing now includes in almost equal measure 'print' text and graphical images, both of which are of course graphically rendered on screen by the computer anyway; they combine into a new form of writing not unlike ancient hieroglyphics. On the Web, writing has come full circle: from the iconographic representations of the earliest civilisations through their simplification and solidification into the various alphabets back to a new online iconography. This also demonstrates the strong Western bias of this technology, of course: had computers emerged from Chinese or Japanese culture, for example -- where alphabets in the literal sense don't exist -- chances are they would never have existed in a text-only form. Now that we have passed the alphabetic stage to re-enter an era of iconography, then, it remains to be seen how this change along with our overall "'immersion' in hypertext will affect the way that we mentally structure our world. Linear argumentation is more a consequence of alphabetic writing than of printed books and it remains to be seen if hypertext presentation will significantly erode this predominant convention for mentally ordering our world" (McKnight et al. 41). Perhaps the computer game experience (where a blending of text and graphics had begun some time before the Web) can provide some early pointers already, then. The game-like nature of information search and usage online might help to undermine some of the more heavily encrusted structures of information dissemination that are still dominant: "we are promised, on the information 'library' side, less of the dogmatic and more of the ludic, less of the canonical and more of the festive. Fewer arguments from authority, through more juxtaposition of authorities" (Debray 146). 
This is also supported by the fact that there usually exists no one central authority, no one central site, in any field of information covered by the Web, but that there rather is a multiplicity of sources and viewpoints with varying claims to 'authority' and 'objectivity'; rather than rely on authorities to determine what is accepted knowledge, Web users must, and do, distil their own knowledge from the information they find in their searches. Kumon and Aizu's notion that from the industrial-age "wealth game" we have now moved into the "wisdom game" (320) sums up this view. However, for all the ludic exuberance of this game, we should also be concerned that, as in any game, we are also likely to see winners and losers. Those unaware of the rules of the game, and people who are prevented from playing for personal or socioeconomic reasons (the increased use of graphics will make it much more difficult for certain disabled readers to use the Web, for example) must not be left out of it. In gaming terminology, perhaps the formation of teams including such disadvantaged people is the answer? References Debray, Régis. "The Book as Symbolic Object." The Future of the Book. Ed. Geoffrey Nunberg. Berkeley: U of California P, 1996. 139-51. Kumon, Shumpei, and Izumi Aizu. "Co-Emulation: The Case for a Global Hypernetwork Society." Global Networks: Computers and International Communication. Ed. Linda M. Harasim. Cambridge, Mass.: MIT P, 1994. 311-26. Levinson, Paul. Digital McLuhan: A Guide to the Information Millennium. London: Routledge, 1999. Lovink, Geert, and Hartmut Winkler. "The Computer: Medium or Calculating Machine." Convergence 3.2 (1997): 10-18. Marshall, P. David. "The Commodity and the Internet: Interactivity and the Generation of Audience Commodity." Media International Australia 83 (Feb. 1997): 51-62. McAleese, Ray. "Navigation and Browsing in Hypertext." Hypertext: Theory into Practice. Ed. Ray McAleese. Oxford: Intellect, 1993. 5- 38. McKnight, Cliff, Andrew Dillon, and John Richardson. Hypertext in Context. Cambridge: Cambridge UP, 1991. Moody, Nickianne. "Interacting with the Divine Comedy." Fractal Dreams: New Media in Social Context. Ed. Jon Dovey. London: Lawrence and Wishart, 1996. 59-77. Nielsen, Jakob. Hypertext and Hypermedia. Boston: Academic P, 1990. Talbott, Stephen L. The Future Does Not Compute: Transcending the Machines in Our Midst. Sebastopol, Calif.: O'Reilly and Associates, 1995. Citation reference for this article MLA style: Axel Bruns. "The Knowledge Adventure: Game Aesthetics and Web Hieroglyphics." M/C: A Journal of Media and Culture 3.5 (2000). [your date of access] <http://www.api-network.com/mc/0010/adventure.php>. Chicago style: Axel Bruns, "The Knowledge Adventure: Game Aesthetics and Web Hieroglyphics," M/C: A Journal of Media and Culture 3, no. 5 (2000), <http://www.api-network.com/mc/0010/adventure.php> ([your date of access]). APA style: AxeM/C: A Journal of Media and Culture l Bruns. (2000) The knowledge adventure: game aesthetics and Web hieroglyphics. 3(5). <http://www.api-network.com/mc/0010/adventure.php> ([your date of access]).
APA, Harvard, Vancouver, ISO, and other styles
3

Bruns, Axel. "What's the Story." M/C Journal 2, no. 5 (July 1, 1999). http://dx.doi.org/10.5204/mcj.1774.

Full text
Abstract:
Practically any good story follows certain narrative conventions in order to hold its readers' attention and leave them with a feeling of satisfaction -- this goes for fictional tales as well as for many news reports (we do tend to call them 'news stories', after all), for idle gossip as well as for academic papers. In the Western tradition of storytelling, it's customary to start with the exposition, build up to major events, and end with some form of narrative closure. Indeed, audience members will feel disturbed if there is no sense of closure at the end -- their desire for closure is a powerful one. From this brief description of narrative patterns it is also clear that such narratives depend crucially on linear progression through the story in order to work -- there may be flashbacks and flashforwards, but very few stories, it seems, could get away with beginning with their point of closure, and work back to the exposition. Closure, as the word suggests, closes the story, and once reached, the audience is left with the feeling of now knowing the whole story, of having all the pieces necessary to understand its events. To understand how important the desire to reach this point is to the audience, just observe the discussions of holes in the plot which people have when they're leaving a cinema: they're trying to reach a better sense of closure than was afforded them by the movie itself. In linearly progressing media, this seems, if you'll pardon the pun, straightforward. Readers know when they've finished an article or a book, viewers know when a movie or a broadcast is over, and they'll be able to assess then if they've reached sufficient closure -- if their desires have been fulfilled. On the World Wide Web, this is much more difficult: "once we have it in our hands, the whole of a book is accessible to us readers. However, in front of an electronic read-only hypertext document we are at the mercy of the author since we will only be able to activate the links which the author has provided" (McKnight et al. 119). In many cases, it's not even clear whether we've reached the end of the text already: just where does a Website end? Does the question even make sense? Consider the following example, reported by Larry Friedlander: I watched visitors explore an interactive program in a museum, one that contained a vast amount of material -- pictures, film, historic explanations, models, simulations. I was impressed by the range of subject matter and by the ambitiousness and polish of the presentation. ... But to my surprise, as I watched visitors going down one pathway after another, I noticed a certain dispirited glaze spread over their faces. They seemed to lose interest quite quickly and, in fact, soon stopped their explorations. (163) Part of the problem here may just have been the location of the programme, of course -- when you're out in public, you might just not have the time to browse as extensively as you could from your computer at home. But there are other explanations, too: the sheer amount of options for exploration may have been overwhelming -- there may not have been any apparent purpose to aim for, any closure to arrive at. This is a problem inherent in hypertext, particularly in networked systems like the Web: it "changes our conception of an ending. Different readers can choose not only to end the text at different points but also to add to and extend it. In hypertext there is no final version, and therefore no last word: a new idea or reinterpretation is always possible. 
... By privileging intertextuality, hypertext provides a large number of points to which other texts can attach themselves" (Snyder 57). In other words, there will always be more out there than any reader could possibly explore, since new documents are constantly being added. There is no ending if a text is constantly extended. (In print media this problem appears only to a far more limited extent: there, intertextuality is mostly implicit, and even though new articles may constantly be added -- 'linked', if you will -- to a discourse, due to the medium's physical nature they're still very much separate entities, while Web links make intertextuality explicit and directly connect texts.) Does this mark the end of closure, then? Adding to the problem is the fact that it's not even possible to know how much of the hypertextual information available is still left unexplored, since there is no universal register of all the information available on the Web -- "the extent of hypertext is unknowable because it lacks clear boundaries and is often multi-authored" (Snyder 19). While reading a book you can check how many more pages you've got to go, but on the Web this is not an option. Our traditions of information transmission create this desire for closure, but the inherent nature of the medium prevents us from ever satisfying it. Barrett waxes lyrical in describing this dilemma: contexts presented online are often too limited for what we really want: an environment that delivers objects of desire -- to know more, see more, learn more, express more. We fear being caught in Medusa's gaze, of being transfixed before the end is reached; yet we want the head of Medusa safely on our shield to freeze the bitstream, the fleeting imagery, the unstoppable textualisations. We want, not the dead object, but the living body in its connections to its world, connections that sustain it, give it meaning. (xiv-v) We want nothing less, that is, than closure without closing: we desire the knowledge we need, and the feeling that that knowledge is sufficient to really know about a topic, but we don't want to devalue that knowledge in the same process by removing it from its context and reducing it to trivial truisms. We want the networked knowledge base that the Web is able to offer, but we don't want to feel overwhelmed by the unfathomable dimensions of that network. This is increasingly difficult the more knowledge is included in that network -- "with the growth of knowledge comes decreasing certainty. The confidence that went with objectivity must give way to the insecurity that comes from knowing that all is relative" (Smith 206). The fact that 'all is relative' is one which predates the Net, of course, and it isn't the Internet or the World Wide Web that has destroyed objectivity -- objectivity has always been an illusion, no matter how strongly journalists or scientists have at times laid claims ot it. Internet-based media have simply stripped away more of the pretences, and laid bare the subjective nature of all information; in the process, they have also uncovered the fact that the desire for closure must ultimately remain unfulfilled in any sufficiently non-trivial case. 
Nonetheless, the early history of the Web has seen attempts to connect all the information available (LEO, one of the first major German Internet resource centres, for example, took its initials from its mission to 'Link Everything Online') -- but as the amount of information on the Net exploded, more and more editorial choices of what to include and what to leave out had to be made, so that now even search engines like Yahoo! and Altavista quite clearly and openly offer only a selection of what they consider useful sites on the Web. Web browsers still hoping to find everything on a certain topic would be well-advised to check with all major search engines, as well as important resource centres in the specific field. The average Web user would probably be happy with picking the search engine, Web directory or Web ring they find easiest to use, and sticking with it. The multitude of available options here actually shows one strength of the Internet and similar networks -- "the computer permits many [organisational] structures to coexist in the same electronic text: tree structures, circles, and lines can cross and recross without obstructing one another. The encyclopedic impulse to organise can run riot in this new technology of writing" (Bolter 95). Still, this multitude of options is also likely to confuse some users: in particular, "novices do not know in which order they need to read the material or how much they should read. They don't know what they don't know. Therefore learners might be sidetracked into some obscure corner of the information space instead or covering the important basic information" (Nielsen 190). They're like first-time visitors to a library -- but this library has constantly shifting aisles, more or less well-known pathways into specialty collections, fiercely competing groups of librarians, and it extends almost infinitely. Of course, the design of the available search and information tools plays an important role here, too -- far more than it is possible to explore at this point. Gay makes the general observation that "visual interfaces and navigational tools that allow quick browsing of information layout and database components are more effective at locating information ... than traditional index or text-based search tools. However, it should be noted that users are less secure in their findings. Users feel that they have not conducted complete searches when they use visual tools and interfaces" (185). Such technical difficulties (especially for novices) will slow take-up of and low satisfaction with the medium (and many negative views of the Web can probably be traced to this dissatisfaction with the result of searches -- in other words, to a lack of satisfaction of the desire for closure); while many novices eventually overcome their initial confusion and become more Web-savvy, others might disregard the medium as unsuitable for their needs. At the other extreme of the scale, the inherent lack for closure, in combination with the societally deeply ingrained desire for it, may also be a strong contributing factor for another negative phenomenon associated with the Internet: that of Net users becoming Net junkies, who spend every available moment online. 
Where the desire to know, to get to the bottom (or more to the point: to the end) of a topic, becomes overwhelming, and where the fundamental unattainability of this goal remains unrealised, the step to an obsession with finding information seems a small one; indeed, the neverending search for that piece of knowledge surpassing all previously found ones seems to have obvious similarities to drug addiction with its search for the high to better all previous highs. And most likely, the addiction is only heightened by the knowledge that on the Web, new pieces of information are constantly being added -- an endless, and largely free, supply of drugs... There is no easy solution to this problem -- in the end, it is up to the user to avoid becoming an addict, and to keep in mind that there is no such thing as total knowledge. Web designers and content providers can help, though: "there are ways of orienting the reader in an electronic document, but in any true hypertext the ending must remain tentative. An electronic text never needs to end" (Bolter 87). As Tennant & Heilmeier elaborate, "the coming ease-of-use problem is one of developing transparent complexity -- of revealing the limits and the extent of vast coverage to users, and showing how the many known techniques for putting it all together can be used most effectively -- of complexity that reveals itself as powerful simplicity" (122). We have been seeing, therefore, the emergence of a new class of Websites: resource centres which help their visitors to understand a certain topic and view it from all possible angles, which point them in the direction of further information on- and off-site, and which give them an indication of how much they need to know to understand the topic to a certain degree. In this, they must ideally be very transparent, as Tennant & Heilmeier point out -- having accepted that there is no such thing as objectivity, it is necessary for these sites to point out that their offered insight into the field is only one of many possible approaches, and that their presented choice of information is based on subjective editorial decisions. They may present preferred readings, but they must indicate that these readings are open for debate. They may help satisfy some of their readers' desire for closure, but they must at the same time point out that they do so by presenting a temporary ending beyond which a more general story continues. If, as suggested above, closure crucially depends on a linear mode of presentation, such sites in their arguments help trace one linear route through the network of knowledge available online; they impose a linear from-us-to-you model of transmission on the normally unordered many-to-many structure of the Net. In the face of much doomsaying about the broadcast media, then, here is one possible future for these linear transmission media, and it's no surprise that such Internet 'push' broad- or narrowcasting is a growth area of the Net -- simply put, it serves the apparent need of users to be told stories, to have their desire for closure satisfied through clear narrative progressions from exposition through development to end. (This isn't 'push' as such, really: it's more a kind of 'push on demand'.) But at the same time, this won't mean the end of the unstructured, networked information that the Web offers: even such linear media ultimately build on that networked pool of knowledge. The Internet has simply made this pool public -- passively as well as actively accessible to everybody. 
Now, however, Web designers (and this includes each and every one of us, ultimately) must work "with the users foremost in mind, making sure that at every point there is a clear, simple and focussed experience that hooks them into the welter of information presented" (Friedlander 164); they must play to the desire for closure. (As with any preferred reading, however, there is also a danger that that closure is premature, and that the users' process or meaning-making is contained and stifled rather than aided.) To return briefly to Friedlander's experience with the interactive museum exhibit: he draws the conclusion that visitors were simply overwhelmed by the sheer mass of information and were reluctant to continue accumulating facts without a guiding purpose, without some sense of how or why they could use all this material. The technology that delivers immense bundles of data does not simultaneously deliver a reason for accumulating so much information, nor a way for the user to order and make sense of it. That is the designer's task. The pressing challenge of multimedia design is to transform information into usable and useful knowledge. (163) Perhaps this transformation is exactly what is at the heart of fulfilling the desire for closure: we feel satisfied when we feel we know something, have learnt something from a presentation of information (no matter if it's a news report or a fictional story). Nonetheless, this satisfaction must of necessity remain intermediate -- there is always much more still to be discovered. "From the hypertext viewpoint knowledge is infinite: we can never know the whole extent of it but only have a perspective on it. ... Life is in real-time and we are forced to be selective, we decide that this much constitutes one node and only these links are worth representing" (Beardon & Worden 69). This is not inherently different from processes in other media, where bandwidth limitations may even force much stricter gatekeeping regiments, but as in many cases the Internet brings these processes out into the open, exposes their workings and stresses the fundamental subjectivity of information. Users of hypertext (as indeed users of any medium) must be aware of this: "readers themselves participate in the organisation of the encyclopedia. They are not limited to the references created by the editors, since at any point they can initiate a search for a word or phrase that takes them to another article. They might also make their own explicit references (hypertextual links) for their own purposes ... . It is always a short step from electronic reading to electronic writing, from determining the order of texts to altering their structure" (Bolter 95). Significantly, too, it is this potential for wide public participation which has made the Internet into the medium of the day, and led to the World Wide Web's exponential growth; as Bolter describes, "today we cannot hope for permanence and for general agreement on the order of things -- in encyclopedias any more than in politics and the arts. What we have instead is a view of knowledge as collections of (verbal and visual) ideas that can arrange themselves into a kaleidoscope of hierarchical and associative patterns -- each pattern meeting the needs of one class of readers on one occasion" (97). To those searching for some meaningful 'universal truth', this will sound defeatist, but ultimately it is closer to realism -- one person's universal truth is another one's escapist phantasy, after all. 
This doesn't keep most of us from hoping and searching for that deeper insight, however -- and from the preceding discussion, it seems likely that in this we are driven by the desire for closure that has been imprinted in us so deeply by the multitudes of narrative structures we encounter each day. It's no surprise, then, that, as Barrett writes, "the virtual environment is a place of longing. Cyberspace is an odyssey without telos, and therefore without meaning. ... Yet cyberspace is also the theatre of operations for the reconstruction of the lost body of knowledge, or, perhaps more correctly, not the reconstruction, but the always primary construction of a body of knowing. Thought and language in a virtual environment seek a higher synthesis, a re-imagining of an idea in the context of its truth" (xvi). And so we search on, following that by definition end-less quest to satisfy our desire for closure, and sticking largely to the narrative structures handed down to us through the generations. This article is no exception, of course -- but while you may gain some sense of closure from it, it is inevitable that there is a deeper feeling of a lack of closure, too, as the article takes its place in a wider hypertextual context, where so much more is still left unexplored: other articles in this issue, other issues of M/C, and further journals and Websites adding to the debate. Remember this, then: you decide when and where to stop. References Barrett, Edward, and Marie Redmont, eds. Contextual Media: Multimedia and Interpretation. Cambridge, Mass.: MIT P, 1995. Barrett, Edward. "Hiding the Head of Medusa: Objects and Desire in a Virtual Environment." Barrett & Redmont xi- vi. Beardon, Colin, and Suzette Worden. "The Virtual Curator: Multimedia Technologies and the Roles of Museums." Barrett & Redmont 63-86. Bolter, Jay David. Writing Space: The Computer, Hypertext, and the History of Writing. Hillsdale, N.J.: Lawrence Erlbaum Associates, 1991. Friedlander, Larry. "Spaces of Experience on Designing Multimedia Applications." Barrett & Redmont 163-74. Gay, Geri. "Issues in Accessing and Constructing Multimedia Documents." Barrett & Redmont 175-88. McKnight, Cliff, John Richardson, and Andrew Dillon. "The Authoring of Hypertext Documents." Hypertext: Theory into Practice. Ed. Ray McAleese. Oxford: Intellect, 1993. Nielsen, Jakob. Hypertext and Hypermedia. Boston: Academic Press, 1990. Smith, Anthony. Goodbye Gutenberg: The Newspaper Revolution of the 1980's [sic]. New York: Oxford UP, 1980. Snyder, Ilana. Hypertext: The ELectronic Labyrinth. Carlton South: Melbourne UP, 1996. Tennant, Harry, and George H. Heilmeier. "Knowledge and Equality: Harnessing the Truth of Information Abundance." Technology 2001: The Future of Computing and Communications. Ed. Derek Leebaert. Cambridge, Mass.: MIT P, 1991. Citation reference for this article MLA style: Axel Bruns. "What's the Story: The Unfulfilled Desire for Closure on the Web." M/C: A Journal of Media and Culture 2.5 (1999). [your date of access] <http://www.uq.edu.au/mc/9907/closure.php>. Chicago style: Axel Bruns, "What's the Story: The Unfulfilled Desire for Closure on the Web," M/C: A Journal of Media and Culture 2, no. 5 (1999), <http://www.uq.edu.au/mc/9907/closure.php> ([your date of access]). APA style: Axel Bruns. (1999) What's the story: the unfulfilled desire for closure on the Web. M/C: A Journal of Media and Culture 2(5). <http://www.uq.edu.au/mc/9907/closure.php> ([your date of access]).
APA, Harvard, Vancouver, ISO, and other styles
4

Maras, Steven. "Reflections on Adobe Corporation, Bill Viola, and Peter Ramus while Printing Lecture Notes." M/C Journal 8, no. 2 (June 1, 2005). http://dx.doi.org/10.5204/mcj.2338.

Full text
Abstract:
In March 2002, I was visiting the University of Southern California. One night, as sometimes happens on a vibrant campus, two interesting but very different public lectures were scheduled against one another. The first was by the co-chairman and co-founder of Adobe Systems Inc., Dr. John E. Warnock, talking about books. The second was a lecture by acclaimed video artist Bill Viola. The first event was clearly designed as a networking forum for faculty and entrepreneurs. The general student population was conspicuously absent. Warnock spoke of the future of Adobe, shared stories of his love of books, and in an embodiment of the democratising potential of Adobe software (and no doubt to the horror of archivists in the room) he invited the audience to handle extremely rare copies of early printed works from his personal library. In the lecture theatre where Viola was to speak the atmosphere was different. Students were everywhere; even at the price of ten dollars a head. Viola spoke of time and memory in the information age, of consciousness and existence, to an enraptured audience—and showed his latest work. The juxtaposition of these two events says something about our cultural moment, caught between a paradigm modelled on reverence toward the page, and a still emergent sense of medium, intensity and experimentation. But, the juxtaposition yields more. At one point in Warnock’s speech, in a demonstration of the ultra-high resolution possible in the next generation of Adobe products, he presented a scan of a manuscript, two pages, two columns per page, overflowing with detail. Fig. 1. Dr John E. Warnock at the Annenberg Symposium. Photo courtesy of http://www.annenberg.edu/symposia/annenberg/2002/photos.php Later, in Viola’s presentation, a fragment of a video work, Silent Mountain (2001) splits the screen in two columns, matching Warnock’s text: inside each a human figure struggles with intense emotion, and the challenges of bridging the relational gap. Fig. 2. Images from Bill Viola, Silent Mountain (2001). From Bill Viola, THE PASSIONS. The J. Paul Getty Museum, Los Angeles in Association with The National Gallery, London. Ed. John Walsh. p. 44. Both events are, of course, lectures. And although they are different in style and content, a ‘columnular’ scheme informs and underpins both, as a way of presenting and illustrating the lecture. Here, it is worth thinking about Pierre de la Ramée or Petrus (Peter) Ramus (1515-1572), the 16th century educational reformer who in the words of Frances Yates ‘abolished memory as a part of rhetoric’ (229). Ramus was famous for transforming rhetoric through the introduction of his method or dialectic. For Walter J. Ong, whose discussion of Ramism we are indebted to here, Ramus produced the paradigm of the textbook genre. But it is his method that is more noteworthy for us here, organised through definitions and divisions, the distribution of parts, ‘presented in dichotomized outlines or charts that showed exactly how the material was organised spatially in itself and in the mind’ (Ong, Orality 134-135). Fig. 3. Ramus inspired study of Medicine. Ong, Ramus 301. Ong discusses Ramus in more detail in his book Ramus: Method, and the Decay of Dialogue. Elsewhere, Sutton, Benjamin, and I have tried to capture the sense of Ong’s argument, which goes something like the following. In Ramus, Ong traces the origins of our modern, diagrammatic understanding of argument and structure to the 16th century, and especially the work of Ramus. 
Ong’s interest in Ramus is not as a great philosopher, nor a great scholar—indeed Ong sees Ramus’s work as a triumph of mediocrity of sorts. Rather, his was a ‘reformation’ in method and pedagogy. The Ramist dialectic ‘represented a drive toward thinking not only of the universe but of thought itself in terms of spatial models apprehended by sight’ (Ong, Ramus 9). The world becomes thought of ‘as an assemblage of the sort of things which vision apprehends—objects or surfaces’. Ramus’s teachings and doctrines regarding ‘discoursing’ are distinctive for the way they draw on geometrical figures, diagrams or lecture outlines, and the organization of categories through dichotomies. This sets learning up on a visual paradigm of ‘study’ (Ong, Orality 8-9). Ramus introduces a new organization for discourse. Prior to Ramus, the rhetorical tradition maintained and privileged an auditory understanding of the production of content in speech. Central to this practice was deployment of the ‘seats’, ‘images’ and ‘common places’ (loci communes), stock arguments and structures that had accumulated through centuries of use (Ong, Orality 111). These common places were supported by a complex art of memory: techniques that nourished the practice of rhetoric. By contrast, Ramism sought to map the flow and structure of arguments in tables and diagrams. Localised memory, based on dividing and composing, became crucial (Yates 230). For Ramus, content was structured in a set of visible or sight-oriented relations on the page. Ramism transformed the conditions of visualisation. In our present age, where ‘content’ is supposedly ‘king’, an archaeology of content bears thinking about. In it, Ramism would have a prominent place. With Ramus, content could be mapped within a diagrammatic page-based understanding of meaning. A container understanding of content arises. ‘In the post-Gutenberg age where Ramism flourished, the term “content”, as applied to what is “in” literary productions, acquires a status which it had never known before’ (Ong, Ramus 313). ‘In lieu of merely telling the truth, books would now in common estimation “contain” the truth, like boxes’ (313). For Ramus, ‘analysis opened ideas like boxes’ (315). The Ramist move was, as Ong points out, about privileging the visual over the audible. Alongside the rise of the printing press and page-based approaches to the word, the Ramist revolution sought to re-work rhetoric according to a new scheme. Although spatial metaphors had always had a ‘place’ in the arts of memory—other systems were, however, phonetically based—the notion of place changed. Specific figures such as ‘scheme’, ‘plan’, and ‘table’, rose to prominence in the now-textualised imagination. ‘Structure’ became an abstract diagram on the page disconnected from the total performance of the rhetor. This brings us to another key aspect of the Ramist reformation: that alongside a spatialised organisation of thought Ramus re-works style as presentation and embellishment (Brummett 449). A kind of separation of conception and execution is introduced in relation to performance. In Ramus’ separation of reason and rhetoric, arrangement and memory are distinct from style and delivery (Brummett 464). While both dialectic and rhetoric are re-worked by Ramus in light of divisions and definitions (see Ong, Ramus Chs. XI-XII), and dialectic remains a ‘rhetorical instrument’ (Ramus 290), rhetoric becomes a unique site for simplification in the name of classroom practicality. 
Dialectic circumscribes the space of learning of rhetoric; invention and arrangement (positioning) occur in advance (289). Ong’s work on the technologisation of the word is strongly focused on identifying the impact of literacy on consciousness. What Ong’s work on Ramus shows is that alongside the so-called printing revolution the Ramist reformation enacts an equally if not more powerful transformation of pedagogic space. Any serious consideration of print must not only look at the technologisation of the word, and the shifting patterns of literacy produced alongside it, but also a particular tying together of pedagogy and method that Ong traces back to Ramus. If, as is canvassed in the call for papers of this issue of M/C Journal, ‘the transitions in print culture are uneven and incomplete at this point’, then could it be in part due to the way Ramism endures and is extended in electronic and hypermedia contexts? Powerpoint presentations, outlining tools (Heim 139-141), and the scourge of bullet points, are the most obvious evidence of greater institutionalization of Ramist knowledge architecture. Communication, and the teaching of communication, is now embedded in a Ramist logic of opening up content like a box. Theories of communication draw on so-called ‘models’ that draw on the representation of the communication process through boxes that divide and define. Perhaps in a less obvious way, ‘spatialized processes of thought and communication’ (Ong, Ramus 314) are essential to the logic of flowcharting and tracking new information structures, and even teaching hypertext (see the diagram in Nielsen 7): a link puts the popular notion that hypertext is close to the way we truly think into an interesting perspective. The notion that we are embedded in print culture is not in itself new, even if the forms of our continual reintegration into print culture can be surprising. In the experience of printing, of the act of pressing the ‘Print’ button, we find ourselves re-integrated into page space. A mini-preview of the page re-assures me of an actuality behind the actualizations on the screen, of ink on paper. As I write in my word processing software, the removal of writing from the ‘element of inscription’ (Heim 136) —the frictionless ‘immediacy’ of the flow of text (152) — is conditioned by a representation called the ‘Page Layout’, the dark borders around the page signalling a kind of structures abyss, a no-go zone, a place, beyond ‘Normal’, from which where there is no ‘Return’. At the same time, however, never before has the technological manipulation of the document been so complex, a part of a docuverse that exists in three dimensions. It is a world that is increasingly virtualised by photocopiers that ‘scan to file’ or ‘scan to email’ rather than good old ‘xeroxing’ style copying. Printing gives way to scanning. In a perverse extension of printing (but also residually film and photography), some video software has a function called ‘Print to Video’. That these super-functions of scanning to file or email are disabled on my department photocopier says something about budgets, but also the comfort with which academics inhabit Ramist space. As I stand here printing my lecture plan, the printer stands defiantly separate from the photocopier, resisting its colonizing convergence even though it is dwarfed in size. Meanwhile, the printer demurely dispenses pages, one at a time, face down, in a gesture of discretion or perhaps embarrassment. 
For in the focus on the pristine page there is a Puritanism surrounding printing: a morality of blemishes, smudges, and stains; of structure, format and order; and a failure to match that immaculate, perfect argument or totality. (Ong suggests that ‘the term “method” was appropriated from the Ramist coffers and used to form the term “methodists” to designate first enthusiastic preachers who made an issue of their adherence to “logic”’ (Ramus 304).) But perhaps this avoidance of multi-functionality is less of a Ludditism than an understanding that the technological assemblage of printing today exists peripherally to the ideality of the Ramist scheme. A change in technological means does not necessarily challenge the visile language that informs our very understanding of our respective ‘fields’, or the ideals of competency embodied in academic performance and expression, or the notions of content we adopt. This is why I would argue some consideration of Ramism and print culture is crucial. Any ‘true’ breaking out of print involves, as I suggest, a challenge to some fundamental principles of pedagogy and method, and the link between the two. And of course, the very prospect of breaking out of print raises the issue of its desirability at a time when these forms of academic performance are culturally valued. On the surface, academic culture has been a strange inheritor of the Ramist legacy, radically furthering its ambitions, but also it would seem strongly tempering it with an investment in orality, and other ideas of performance, that resist submission to the Ramist ideal. Ong is pessimistic here, however. Ramism was after all born as a pedagogic movement, central to the purveying ‘knowledge as a commodity’ (Ong, Ramus 306). Academic discourse remains an odd mixture of ‘dialogue in the give-and-take Socratic form’ and the scheduled lecture (151). The scholastic dispute is at best a ‘manifestation of concern with real dialogue’ (154). As Ong notes, the ideals of dialogue have been difficult to sustain, and the dominant practice leans towards ‘the visile pole with its typical ideals of “clarity”, “precision”, “distinctness”, and “explanation” itself—all best conceivable in terms of some analogy with vision and a spatial field’ (151). Assessing the importance and after-effects of the Ramist reformation today is difficult. Ong describes it an ‘elusive study’ (Ramus 296). Perhaps Viola’s video, with its figures struggling in a column-like organization of space, structured in a kind of dichotomy, can be read as a glimpse of our existence in or under a Ramist scheme (interestingly, from memory, these figures emote in silence, deprived of auditory expression). My own view is that while it is possible to explore learning environments in a range of ways, and thus move beyond the enclosed mode of study of Ramism, Ramism nevertheless comprises an important default architecture of pedagogy that also informs some higher level assumptions about assessment and knowledge of the field. Software training, based on a process of working through or mimicking a linked series of screenshots and commands is a direct inheritor of what Ong calls Ramism’s ‘corpuscular epistemology’, a ‘one to one correspondence between concept, word and referent’ (Ong, Orality 168). My lecture plan, providing an at a glance view of my presentation, is another. The default architecture of the Ramist scheme impacts on our organisation of knowledge, and the place of performance with in it. 
Perhaps this is another area where Ong’s fascinating account of secondary orality—that orality that comes into being with television and radio—becomes important (Orality 136). Not only does secondary orality enable group-mindedness and communal exchange, it also provides a way to resist the closure of print and the Ramist scheme, adapting knowledge to new environments and story frameworks. Ong’s work in Orality and Literacy could thus usefully be taken up to discuss Ramism. But this raises another issue, which has to do with the relationship between Ong’s two books. In Orality and Literacy, Ong is careful to trace distinctions between oral, chirographic, manuscript, and print culture. In Ramus this progression is not as prominent— partly because Ong is tracking Ramus’ numerous influences in detail —and we find a more clear-cut distinction between the visile and audile worlds. Yates seems to support this observation, suggesting contra Ong that it is not the connection between Ramus and print that is important, but between Ramus and manuscript culture (230). The interconnections but also lack of fit between the two books suggests a range of fascinating questions about the impact of Ramism across different media/technological contexts, beyond print, but also the status of visualisation in both rhetorical and print cultures. References Brummett, Barry. Reading Rhetorical Theory. Fort Worth: Harcourt, 2000. Heim, Michael. Electric Language: A Philosophical Study of Word Processing. New Haven: Yale UP, 1987. Maras, Steven, David Sutton, and with Marion Benjamin. “Multimedia Communication: An Interdisciplinary Approach.” Information Technology, Education and Society 2.1 (2001): 25-49. Nielsen, Jakob. Multimedia and Hypertext: The Internet and Beyond. Boston: AP Professional, 1995. Ong, Walter J. Orality and Literacy: The Technologizing of the Word. London: Methuen, 1982. —. Ramus: Method, and the Decay of Dialogue. New York: Octagon, 1974. The Second Annual Walter H. Annenberg Symposium. 20 March 2002. http://www.annenberg.edu/symposia/annenberg/2002/photos.php> USC Annenberg Center of Communication and USC Annenberg School for Communication. 22 March 2005. Viola, Bill. Bill Viola: The Passions. Ed. John Walsh. London: The J. Paul Getty Museum, Los Angeles in Association with The National Gallery, 2003. Yates, Frances A. The Art of Memory. Harmondsworth: Penguin, 1969. Citation reference for this article MLA Style Maras, Steven. "Reflections on Adobe Corporation, Bill Viola, and Peter Ramus while Printing Lecture Notes." M/C Journal 8.2 (2005). echo date('d M. Y'); ?> <http://journal.media-culture.org.au/0506/05-maras.php>. APA Style Maras, S. (Jun. 2005) "Reflections on Adobe Corporation, Bill Viola, and Peter Ramus while Printing Lecture Notes," M/C Journal, 8(2). Retrieved echo date('d M. Y'); ?> from <http://journal.media-culture.org.au/0506/05-maras.php>.
APA, Harvard, Vancouver, ISO, and other styles
5

Losh, Elizabeth. "Artificial Intelligence." M/C Journal 10, no. 5 (October 1, 2007). http://dx.doi.org/10.5204/mcj.2710.

Full text
Abstract:
On the morning of Thursday, 4 May 2006, the United States House Permanent Select Committee on Intelligence held an open hearing entitled “Terrorist Use of the Internet.” The Intelligence committee meeting was scheduled to take place in Room 1302 of the Longworth Office Building, a Depression-era structure with a neoclassical façade. Because of a dysfunctional elevator, some of the congressional representatives were late to the meeting. During the testimony about the newest political applications for cutting-edge digital technology, the microphones periodically malfunctioned, and witnesses complained of “technical problems” several times. By the end of the day it seemed that what was to be remembered about the hearing was the shocking revelation that terrorists were using videogames to recruit young jihadists. The Associated Press wrote a short, restrained article about the hearing that only mentioned “computer games and recruitment videos” in passing. Eager to have their version of the news item picked up, Reuters made videogames the focus of their coverage with a headline that announced, “Islamists Using US Videogames in Youth Appeal.” Like a game of telephone, as the Reuters videogame story was quickly re-run by several Internet news services, each iteration of the title seemed less true to the exact language of the original. One Internet news service changed the headline to “Islamic militants recruit using U.S. video games.” Fox News re-titled the story again to emphasise that this alert about technological manipulation was coming from recognised specialists in the anti-terrorism surveillance field: “Experts: Islamic Militants Customizing Violent Video Games.” As the story circulated, the body of the article remained largely unchanged, in which the Reuters reporter described the digital materials from Islamic extremists that were shown at the congressional hearing. During the segment that apparently most captured the attention of the wire service reporters, eerie music played as an English-speaking narrator condemned the “infidel” and declared that he had “put a jihad” on them, as aerial shots moved over 3D computer-generated images of flaming oil facilities and mosques covered with geometric designs. Suddenly, this menacing voice-over was interrupted by an explosion, as a virtual rocket was launched into a simulated military helicopter. The Reuters reporter shared this dystopian vision from cyberspace with Western audiences by quoting directly from the chilling commentary and describing a dissonant montage of images and remixed sound. “I was just a boy when the infidels came to my village in Blackhawk helicopters,” a narrator’s voice said as the screen flashed between images of street-level gunfights, explosions and helicopter assaults. Then came a recording of President George W. Bush’s September 16, 2001, statement: “This crusade, this war on terrorism, is going to take a while.” It was edited to repeat the word “crusade,” which Muslims often define as an attack on Islam by Christianity. According to the news reports, the key piece of evidence before Congress seemed to be a film by “SonicJihad” of recorded videogame play, which – according to the experts – was widely distributed online. 
Much of the clip takes place from the point of view of a first-person shooter, seen as if through the eyes of an armed insurgent, but the viewer also periodically sees third-person action in which the player appears as a running figure wearing a red-and-white checked keffiyeh, who dashes toward the screen with a rocket launcher balanced on his shoulder. Significantly, another of the player’s hand-held weapons is a detonator that triggers remote blasts. As jaunty music plays, helicopters, tanks, and armoured vehicles burst into smoke and flame. Finally, at the triumphant ending of the video, a green and white flag bearing a crescent is hoisted aloft into the sky to signify victory by Islamic forces. To explain the existence of this digital alternative history in which jihadists could be conquerors, the Reuters story described the deviousness of the country’s terrorist opponents, who were now apparently modifying popular videogames through their wizardry and inserting anti-American, pro-insurgency content into U.S.-made consumer technology. One of the latest video games modified by militants is the popular “Battlefield 2” from leading video game publisher, Electronic Arts Inc of Redwood City, California. Jeff Brown, a spokesman for Electronic Arts, said enthusiasts often write software modifications, known as “mods,” to video games. “Millions of people create mods on games around the world,” he said. “We have absolutely no control over them. It’s like drawing a mustache on a picture.” Although the Electronic Arts executive dismissed the activities of modders as a “mustache on a picture” that could only be considered little more than childish vandalism of their off-the-shelf corporate product, others saw a more serious form of criminality at work. Testifying experts and the legislators listening on the committee used the video to call for greater Internet surveillance efforts and electronic counter-measures. Within twenty-four hours of the sensationalistic news breaking, however, a group of Battlefield 2 fans was crowing about the idiocy of reporters. The game play footage wasn’t from a high-tech modification of the software by Islamic extremists; it had been posted on a Planet Battlefield forum the previous December of 2005 by a game fan who had cut together regular game play with a Bush remix and a parody snippet of the soundtrack from the 2004 hit comedy film Team America. The voice describing the Black Hawk helicopters was the voice of Trey Parker of South Park cartoon fame, and – much to Parker’s amusement – even the mention of “goats screaming” did not clue spectators in to the fact of a comic source. Ironically, the moment in the movie from which the sound clip is excerpted is one about intelligence gathering. As an agent of Team America, a fictional elite U.S. commando squad, the hero of the film’s all-puppet cast, Gary Johnston, is impersonating a jihadist radical inside a hostile Egyptian tavern that is modelled on the cantina scene from Star Wars. Additional laughs come from the fact that agent Johnston is accepted by the menacing terrorist cell as “Hakmed,” despite the fact that he utters a series of improbable clichés made up of incoherent stereotypes about life in the Middle East while dressed up in a disguise made up of shoe polish and a turban from a bathroom towel. 
The man behind the “SonicJihad” pseudonym turned out to be a twenty-five-year-old hospital administrator named Samir, and what reporters and representatives saw was nothing more exotic than game play from an add-on expansion pack of Battlefield 2, which – like other versions of the game – allows first-person shooter play from the position of the opponent as a standard feature. While SonicJihad initially joined his fellow gamers in ridiculing the mainstream media, he also expressed astonishment and outrage about a larger politics of reception. In one interview he argued that the media illiteracy of Reuters potentially enabled a whole series of category errors, in which harmless gamers could be demonised as terrorists. It wasn’t intended for the purpose what it was portrayed to be by the media. So no I don’t regret making a funny video . . . why should I? The only thing I regret is thinking that news from Reuters was objective and always right. The least they could do is some online research before publishing this. If they label me al-Qaeda just for making this silly video, that makes you think, what is this al-Qaeda? And is everything al-Qaeda? Although Sonic Jihad dismissed his own work as “silly” or “funny,” he expected considerably more from a credible news agency like Reuters: “objective” reporting, “online research,” and fact-checking before “publishing.” Within the week, almost all of the salient details in the Reuters story were revealed to be incorrect. SonicJihad’s film was not made by terrorists or for terrorists: it was not created by “Islamic militants” for “Muslim youths.” The videogame it depicted had not been modified by a “tech-savvy militant” with advanced programming skills. Of course, what is most extraordinary about this story isn’t just that Reuters merely got its facts wrong; it is that a self-identified “parody” video was shown to the august House Intelligence Committee by a team of well-paid “experts” from the Science Applications International Corporation (SAIC), a major contractor with the federal government, as key evidence of terrorist recruitment techniques and abuse of digital networks. Moreover, this story of media illiteracy unfolded in the context of a fundamental Constitutional debate about domestic surveillance via communications technology and the further regulation of digital content by lawmakers. Furthermore, the transcripts of the actual hearing showed that much more than simple gullibility or technological ignorance was in play. Based on their exchanges in the public record, elected representatives and government experts appear to be keenly aware that the digital discourses of an emerging information culture might be challenging their authority and that of the longstanding institutions of knowledge and power with which they are affiliated. These hearings can be seen as representative of a larger historical moment in which emphatic declarations about prohibiting specific practices in digital culture have come to occupy a prominent place at the podium, news desk, or official Web portal. This environment of cultural reaction can be used to explain why policy makers’ reaction to terrorists’ use of networked communication and digital media actually tells us more about our own American ideologies about technology and rhetoric in a contemporary information environment. 
When the experts come forward at the Sonic Jihad hearing to “walk us through the media and some of the products,” they present digital artefacts of an information economy that mirrors many of the features of our own consumption of objects of electronic discourse, which seem dangerously easy to copy and distribute and thus also create confusion about their intended meanings, audiences, and purposes. From this one hearing we can see how the reception of many new digital genres plays out in the public sphere of legislative discourse. Web pages, videogames, and Weblogs are mentioned specifically in the transcript. The main architecture of the witnesses’ presentation to the committee is organised according to the rhetorical conventions of a PowerPoint presentation. Moreover, the arguments made by expert witnesses about the relationship of orality to literacy or of public to private communications in new media are highly relevant to how we might understand other important digital genres, such as electronic mail or text messaging. The hearing also invites consideration of privacy, intellectual property, and digital “rights,” because moral values about freedom and ownership are alluded to by many of the elected representatives present, albeit often through the looking glass of user behaviours imagined as radically Other. For example, terrorists are described as “modders” and “hackers” who subvert those who properly create, own, legitimate, and regulate intellectual property. To explain embarrassing leaks of infinitely replicable digital files, witness Ron Roughead says, “We’re not even sure that they don’t even hack into the kinds of spaces that hold photographs in order to get pictures that our forces have taken.” Another witness, Undersecretary of Defense for Policy and International Affairs, Peter Rodman claims that “any video game that comes out, as soon as the code is released, they will modify it and change the game for their needs.” Thus, the implication of these witnesses’ testimony is that the release of code into the public domain can contribute to political subversion, much as covert intrusion into computer networks by stealthy hackers can. However, the witnesses from the Pentagon and from the government contractor SAIC often present a contradictory image of the supposed terrorists in the hearing transcripts. Sometimes the enemy is depicted as an organisation of technological masterminds, capable of manipulating the computer code of unwitting Americans and snatching their rightful intellectual property away; sometimes those from the opposing forces are depicted as pre-modern and even sub-literate political innocents. In contrast, the congressional representatives seem to focus on similarities when comparing the work of “terrorists” to the everyday digital practices of their constituents and even of themselves. According to the transcripts of this open hearing, legislators on both sides of the aisle express anxiety about domestic patterns of Internet reception. Even the legislators’ own Web pages are potentially disruptive electronic artefacts, particularly when the demands of digital labour interfere with their duties as lawmakers. Although the subject of the hearing is ostensibly terrorist Websites, Representative Anna Eshoo (D-California) bemoans the difficulty of maintaining her own official congressional site. 
As she observes, “So we are – as members, I think we’re very sensitive about what’s on our Website, and if I retained what I had on my Website three years ago, I’d be out of business. So we know that they have to be renewed. They go up, they go down, they’re rebuilt, they’re – you know, the message is targeted to the future.” In their questions, lawmakers identify Weblogs (blogs) as a particular area of concern as a destabilising alternative to authoritative print sources of information from established institutions. Representative Alcee Hastings (D-Florida) compares the polluting power of insurgent bloggers to that of influential online muckrakers from the American political Right. Hastings complains of “garbage on our regular mainstream news that comes from blog sites.” Representative Heather Wilson (R-New Mexico) attempts to project a media-savvy persona by bringing up the “phenomenon of blogging” in conjunction with her questions about jihadist Websites in which she notes how Internet traffic can be magnified by cooperative ventures among groups of ideologically like-minded content-providers: “These Websites, and particularly the most active ones, are they cross-linked? And do they have kind of hot links to your other favorite sites on them?” At one point Representative Wilson asks witness Rodman if he knows “of your 100 hottest sites where the Webmasters are educated? What nationality they are? Where they’re getting their money from?” In her questions, Wilson implicitly acknowledges that Web work reflects influences from pedagogical communities, economic networks of the exchange of capital, and even potentially the specific ideologies of nation-states. It is perhaps indicative of the government contractors’ anachronistic worldview that the witness is unable to answer Wilson’s question. He explains that his agency focuses on the physical location of the server or ISP rather than the social backgrounds of the individuals who might be manufacturing objectionable digital texts. The premise behind the contractors’ working method – surveilling the technical apparatus not the social network – may be related to other beliefs expressed by government witnesses, such as the supposition that jihadist Websites are collectively produced and spontaneously emerge from the indigenous, traditional, tribal culture, instead of assuming that Iraqi insurgents have analogous beliefs, practices, and technological awareness to those in first-world countries. The residual subtexts in the witnesses’ conjectures about competing cultures of orality and literacy may tell us something about a reactionary rhetoric around videogames and digital culture more generally. According to the experts before Congress, the Middle Eastern audience for these videogames and Websites is limited by its membership in a pre-literate society that is only capable of abortive cultural production without access to knowledge that is archived in printed codices. Sometimes the witnesses before Congress seem to be unintentionally channelling the ideas of the late literacy theorist Walter Ong about the “secondary orality” associated with talky electronic media such as television, radio, audio recording, or telephone communication. Later followers of Ong extend this concept of secondary orality to hypertext, hypermedia, e-mail, and blogs, because they similarly share features of both speech and written discourse. 
Although Ong’s disciples celebrate this vibrant reconnection to a mythic, communal past of what Kathleen Welch calls “electric rhetoric,” the defence industry consultants express their profound state of alarm at the potentially dangerous and subversive character of this hybrid form of communication. The concept of an “oral tradition” is first introduced by the expert witnesses in the context of modern marketing and product distribution: “The Internet is used for a variety of things – command and control,” one witness states. “One of the things that’s missed frequently is how and – how effective the adversary is at using the Internet to distribute product. They’re using that distribution network as a modern form of oral tradition, if you will.” Thus, although the Internet can be deployed for hierarchical “command and control” activities, it also functions as a highly efficient peer-to-peer distributed network for disseminating the commodity of information. Throughout the hearings, the witnesses imply that unregulated lateral communication among social actors who are not authorised to speak for nation-states or to produce legitimated expert discourses is potentially destabilising to political order. Witness Eric Michael describes the “oral tradition” and the conventions of communal life in the Middle East to emphasise the primacy of speech in the collective discursive practices of this alien population: “I’d like to point your attention to the media types and the fact that the oral tradition is listed as most important. The other media listed support that. And the significance of the oral tradition is more than just – it’s the medium by which, once it comes off the Internet, it is transferred.” The experts go on to claim that this “oral tradition” can contaminate other media because it functions as “rumor,” the traditional bane of the stately discourse of military leaders since the classical era. The oral tradition now also has an aspect of rumor. A[n] event takes place. There is an explosion in a city. Rumor is that the United States Air Force dropped a bomb and is doing indiscriminate killing. This ends up being discussed on the street. It ends up showing up in a Friday sermon in a mosque or in another religious institution. It then gets recycled into written materials. Media picks up the story and broadcasts it, at which point it’s now a fact. In this particular case that we were telling you about, it showed up on a network television, and their propaganda continues to go back to this false initial report on network television and continue to reiterate that it’s a fact, even though the United States government has proven that it was not a fact, even though the network has since recanted the broadcast. In this example, many-to-many discussion on the “street” is formalised into a one-to many “sermon” and then further stylised using technology in a one-to-many broadcast on “network television” in which “propaganda” that is “false” can no longer be disputed. This “oral tradition” is like digital media, because elements of discourse can be infinitely copied or “recycled,” and it is designed to “reiterate” content. In this hearing, the word “rhetoric” is associated with destructive counter-cultural forces by the witnesses who reiterate cultural truisms dating back to Plato and the Gorgias. 
For example, witness Eric Michael initially presents “rhetoric” as the use of culturally specific and hence untranslatable figures of speech, but he quickly moves to an outright castigation of the entire communicative mode. “Rhetoric,” he tells us, is designed to “distort the truth,” because it is a “selective” assembly or a “distortion.” Rhetoric is also at odds with reason, because it appeals to “emotion” and a romanticised Weltanschauung oriented around discourses of “struggle.” The film by SonicJihad is chosen as the final clip by the witnesses before Congress, because it allegedly combines many different types of emotional appeal, and thus it conveniently ties together all of the themes that the witnesses present to the legislators about unreliable oral or rhetorical sources in the Middle East: And there you see how all these products are linked together. And you can see where the games are set to psychologically condition you to go kill coalition forces. You can see how they use humor. You can see how the entire campaign is carefully crafted to first evoke an emotion and then to evoke a response and to direct that response in the direction that they want. Jihadist digital products, especially videogames, are effective means of manipulation, the witnesses argue, because they employ multiple channels of persuasion and carefully sequenced and integrated subliminal messages. To understand the larger cultural conversation of the hearing, it is important to keep in mind that the related argument that “games” can “psychologically condition” players to be predisposed to violence is one that was important in other congressional hearings of the period, as well one that played a role in bills and resolutions that were passed by the full body of the legislative branch. In the witness’s testimony an appeal to anti-game sympathies at home is combined with a critique of a closed anti-democratic system abroad in which the circuits of rhetorical production and their composite metonymic chains are described as those that command specific, unvarying, robotic responses. This sharp criticism of the artful use of a presentation style that is “crafted” is ironic, given that the witnesses’ “compilation” of jihadist digital material is staged in the form of a carefully structured PowerPoint presentation, one that is paced to a well-rehearsed rhythm of “slide, please” or “next slide” in the transcript. The transcript also reveals that the members of the House Intelligence Committee were not the original audience for the witnesses’ PowerPoint presentation. Rather, when it was first created by SAIC, this “expert” presentation was designed for training purposes for the troops on the ground, who would be facing the challenges of deployment in hostile terrain. According to the witnesses, having the slide show showcased before Congress was something of an afterthought. Nonetheless, Congressman Tiahrt (R-KN) is so impressed with the rhetorical mastery of the consultants that he tries to appropriate it. As Tiarht puts it, “I’d like to get a copy of that slide sometime.” From the hearing we also learn that the terrorists’ Websites are threatening precisely because they manifest a polymorphously perverse geometry of expansion. 
For example, one SAIC witness before the House Committee compares the replication and elaboration of digital material online to a “spiderweb.” Like Representative Eshoo’s site, he also notes that the terrorists’ sites go “up” and “down,” but the consultant is left to speculate about whether or not there is any “central coordination” to serve as an organising principle and to explain the persistence and consistency of messages despite the apparent lack of a single authorial ethos to offer a stable, humanised, point of reference. In the hearing, the oft-cited solution to the problem created by the hybridity and iterability of digital rhetoric appears to be “public diplomacy.” Both consultants and lawmakers seem to agree that the damaging messages of the insurgents must be countered with U.S. sanctioned information, and thus the phrase “public diplomacy” appears in the hearing seven times. However, witness Roughhead complains that the protean “oral tradition” and what Henry Jenkins has called the “transmedia” character of digital culture, which often crosses several platforms of traditional print, projection, or broadcast media, stymies their best rhetorical efforts: “I think the point that we’ve tried to make in the briefing is that wherever there’s Internet availability at all, they can then download these – these programs and put them onto compact discs, DVDs, or post them into posters, and provide them to a greater range of people in the oral tradition that they’ve grown up in. And so they only need a few Internet sites in order to distribute and disseminate the message.” Of course, to maintain their share of the government market, the Science Applications International Corporation also employs practices of publicity and promotion through the Internet and digital media. They use HTML Web pages for these purposes, as well as PowerPoint presentations and online video. The rhetoric of the Website of SAIC emphasises their motto “From Science to Solutions.” After a short Flash film about how SAIC scientists and engineers solve “complex technical problems,” the visitor is taken to the home page of the firm that re-emphasises their central message about expertise. The maps, uniforms, and specialised tools and equipment that are depicted in these opening Web pages reinforce an ethos of professional specialisation that is able to respond to multiple threats posed by the “global war on terror.” By 26 June 2006, the incident finally was being described as a “Pentagon Snafu” by ABC News. From the opening of reporter Jake Tapper’s investigative Webcast, established government institutions were put on the spot: “So, how much does the Pentagon know about videogames? Well, when it came to a recent appearance before Congress, apparently not enough.” Indeed, the very language about “experts” that was highlighted in the earlier coverage is repeated by Tapper in mockery, with the significant exception of “independent expert” Ian Bogost of the Georgia Institute of Technology. If the Pentagon and SAIC deride the legitimacy of rhetoric as a cultural practice, Bogost occupies himself with its defence. In his recent book Persuasive Games: The Expressive Power of Videogames, Bogost draws upon the authority of the “2,500 year history of rhetoric” to argue that videogames represent a significant development in that cultural narrative. 
Given that Bogost and his Watercooler Games Weblog co-editor Gonzalo Frasca were actively involved in the detective work that exposed the depth of professional incompetence involved in the government’s line-up of witnesses, it is appropriate that Bogost is given the final words in the ABC exposé. As Bogost says, “We should be deeply bothered by this. We should really be questioning the kind of advice that Congress is getting.” Bogost may be right that Congress received terrible counsel on that day, but a close reading of the transcript reveals that elected officials were much more than passive listeners: in fact they were lively participants in a cultural conversation about regulating digital media. After looking at the actual language of these exchanges, it seems that the persuasiveness of the misinformation from the Pentagon and SAIC had as much to do with lawmakers’ preconceived anxieties about practices of computer-mediated communication close to home as it did with the contradictory stereotypes that were presented to them about Internet practices abroad. In other words, lawmakers found themselves looking into a fun house mirror that distorted what should have been familiar artefacts of American popular culture because it was precisely what they wanted to see. References ABC News. “Terrorist Videogame?” Nightline Online. 21 June 2006. 22 June 2006 http://abcnews.go.com/Video/playerIndex?id=2105341>. Bogost, Ian. Persuasive Games: Videogames and Procedural Rhetoric. Cambridge, MA: MIT Press, 2007. Game Politics. “Was Congress Misled by ‘Terrorist’ Game Video? We Talk to Gamer Who Created the Footage.” 11 May 2006. http://gamepolitics.livejournal.com/285129.html#cutid1>. Jenkins, Henry. Convergence Culture: Where Old and New Media Collide. New York: New York UP, 2006. julieb. “David Morgan Is a Horrible Writer and Should Be Fired.” Online posting. 5 May 2006. Dvorak Uncensored Cage Match Forums. http://cagematch.dvorak.org/index.php/topic,130.0.html>. Mahmood. “Terrorists Don’t Recruit with Battlefield 2.” GGL Global Gaming. 16 May 2006 http://www.ggl.com/news.php?NewsId=3090>. Morgan, David. “Islamists Using U.S. Video Games in Youth Appeal.” Reuters online news service. 4 May 2006 http://today.reuters.com/news/ArticleNews.aspx?type=topNews &storyID=2006-05-04T215543Z_01_N04305973_RTRUKOC_0_US-SECURITY- VIDEOGAMES.xml&pageNumber=0&imageid=&cap=&sz=13&WTModLoc= NewsArt-C1-ArticlePage2>. Ong, Walter J. Orality and Literacy: The Technologizing of the Word. London/New York: Methuen, 1982. Parker, Trey. Online posting. 7 May 2006. 9 May 2006 http://www.treyparker.com>. Plato. “Gorgias.” Plato: Collected Dialogues. Princeton: Princeton UP, 1961. Shrader, Katherine. “Pentagon Surfing Thousands of Jihad Sites.” Associated Press 4 May 2006. SonicJihad. “SonicJihad: A Day in the Life of a Resistance Fighter.” Online posting. 26 Dec. 2005. Planet Battlefield Forums. 9 May 2006 http://www.forumplanet.com/planetbattlefield/topic.asp?fid=13670&tid=1806909&p=1>. Tapper, Jake, and Audery Taylor. “Terrorist Video Game or Pentagon Snafu?” ABC News Nightline 21 June 2006. 30 June 2006 http://abcnews.go.com/Nightline/Technology/story?id=2105128&page=1>. U.S. Congressional Record. Panel I of the Hearing of the House Select Intelligence Committee, Subject: “Terrorist Use of the Internet for Communications.” Federal News Service. 4 May 2006. Welch, Kathleen E. Electric Rhetoric: Classical Rhetoric, Oralism, and the New Literacy. Cambridge, MA: MIT Press, 1999. 
Citation reference for this article MLA Style Losh, Elizabeth. "Artificial Intelligence: Media Illiteracy and the SonicJihad Debacle in Congress." M/C Journal 10.5 (2007). echo date('d M. Y'); ?> <http://journal.media-culture.org.au/0710/08-losh.php>. APA Style Losh, E. (Oct. 2007) "Artificial Intelligence: Media Illiteracy and the SonicJihad Debacle in Congress," M/C Journal, 10(5). Retrieved echo date('d M. Y'); ?> from <http://journal.media-culture.org.au/0710/08-losh.php>.
APA, Harvard, Vancouver, ISO, and other styles
6

Mallan, Kerry Margaret, and Annette Patterson. "Present and Active: Digital Publishing in a Post-print Age." M/C Journal 11, no. 4 (June 24, 2008). http://dx.doi.org/10.5204/mcj.40.

Full text
Abstract:
At one point in Victor Hugo’s novel, The Hunchback of Notre Dame, the archdeacon, Claude Frollo, looked up from a book on his table to the edifice of the gothic cathedral, visible from his canon’s cell in the cloister of Notre Dame: “Alas!” he said, “this will kill that” (146). Frollo’s lament, that the book would destroy the edifice, captures the medieval cleric’s anxiety about the way in which Gutenberg’s print technology would become the new universal means for recording and communicating humanity’s ideas and artistic expression, replacing the grand monuments of architecture, human engineering, and craftsmanship. For Hugo, architecture was “the great handwriting of humankind” (149). The cathedral as the material outcome of human technology was being replaced by the first great machine—the printing press. At this point in the third millennium, some people undoubtedly have similar anxieties to Frollo: is it now the book’s turn to be destroyed by yet another great machine? The inclusion of “post print” in our title is not intended to sound the death knell of the book. Rather, we contend that despite the enduring value of print, digital publishing is “present and active” and is changing the way in which research, particularly in the humanities, is being undertaken. Our approach has three related parts. First, we consider how digital technologies are changing the way in which content is constructed, customised, modified, disseminated, and accessed within a global, distributed network. This section argues that the transition from print to electronic or digital publishing means both losses and gains, particularly with respect to shifts in our approaches to textuality, information, and innovative publishing. Second, we discuss the Children’s Literature Digital Resources (CLDR) project, with which we are involved. This case study of a digitising initiative opens out the transformative possibilities and challenges of digital publishing and e-scholarship for research communities. Third, we reflect on technology’s capacity to bring about major changes in the light of the theoretical and practical issues that have arisen from our discussion. I. Digitising in a “post-print age” We are living in an era that is commonly referred to as “the late age of print” (see Kho) or the “post-print age” (see Gunkel). According to Aarseth, we have reached a point whereby nearly all of our public and personal media have become more or less digital (37). As Kho notes, web newspapers are not only becoming increasingly more popular, but they are also making rather than losing money, and paper-based newspapers are finding it difficult to recruit new readers from the younger generations (37). Not only can such online-only publications update format, content, and structure more economically than print-based publications, but their wide distribution network, speed, and flexibility attract advertising revenue. Hype and hyperbole aside, publishers are not so much discarding their legacy of print, but recognising the folly of not embracing innovative technologies that can add value by presenting information in ways that satisfy users’ needs for content to-go or for edutainment. As Kho notes: “no longer able to satisfy customer demand by producing print-only products, or even by enabling online access to semi-static content, established publishers are embracing new models for publishing, web-style” (42). 
Advocates of online publishing contend that the major benefits of online publishing over print technology are that it is faster, more economical, and more interactive. However, as Hovav and Gray caution, “e-publishing also involves risks, hidden costs, and trade-offs” (79). The specific focus for these authors is e-journal publishing and they contend that while cost reduction is in editing, production and distribution, if the journal is not open access, then costs relating to storage and bandwith will be transferred to the user. If we put economics aside for the moment, the transition from print to electronic text (e-text), especially with electronic literary works, brings additional considerations, particularly in their ability to make available different reading strategies to print, such as “animation, rollovers, screen design, navigation strategies, and so on” (Hayles 38). Transition from print to e-text In his book, Writing Space, David Bolter follows Victor Hugo’s lead, but does not ask if print technology will be destroyed. Rather, he argues that “the idea and ideal of the book will change: print will no longer define the organization and presentation of knowledge, as it has for the past five centuries” (2). As Hayles noted above, one significant indicator of this change, which is a consequence of the shift from analogue to digital, is the addition of graphical, audio, visual, sonic, and kinetic elements to the written word. A significant consequence of this transition is the reinvention of the book in a networked environment. Unlike the printed book, the networked book is not bound by space and time. Rather, it is an evolving entity within an ecology of readers, authors, and texts. The Web 2.0 platform has enabled more experimentation with blending of digital technology and traditional writing, particularly in the use of blogs, which have spawned blogwriting and the wikinovel. Siva Vaidhyanathan’s The Googlization of Everything: How One Company is Disrupting Culture, Commerce and Community … and Why We Should Worry is a wikinovel or blog book that was produced over a series of weeks with contributions from other bloggers (see: http://www.sivacracy.net/). Penguin Books, in collaboration with a media company, “Six Stories to Start,” have developed six stories—“We Tell Stories,” which involve different forms of interactivity from users through blog entries, Twitter text messages, an interactive google map, and other features. For example, the story titled “Fairy Tales” allows users to customise the story using their own choice of names for characters and descriptions of character traits. Each story is loosely based on a classic story and links take users to synopses of these original stories and their authors and to online purchase of the texts through the Penguin Books sales website. These examples of digital stories are a small part of the digital environment, which exploits computer and online technologies’ capacity to be interactive and immersive. As Janet Murray notes, the interactive qualities of digital environments are characterised by their procedural and participatory abilities, while their immersive qualities are characterised by their spatial and encyclopedic dimensions (71–89). These immersive and interactive qualities highlight different ways of reading texts, which entail different embodied and cognitive functions from those that reading print texts requires. 
As Hayles argues: the advent of electronic textuality presents us with an unparalleled opportunity to reformulate fundamental ideas about texts and, in the process, to see print as well as electronic texts with fresh eyes (89–90). The transition to e-text also highlights how digitality is changing all aspects of everyday life both inside and outside the academy. Online teaching and e-research Another aspect of the commercial arm of publishing that is impacting on academe and other organisations is the digitising and indexing of print content for niche distribution. Kho offers the example of the Mark Logic Corporation, which uses its XML content platform to repurpose content, create new content, and distribute this content through multiple portals. As the promotional website video for Mark Logic explains, academics can use this service to customise their own textbooks for students by including only articles and book chapters that are relevant to their subject. These are then organised, bound, and distributed by Mark Logic for sale to students at a cost that is generally cheaper than most textbooks. A further example of how print and digital materials can form an integrated, customised source for teachers and students is eFictions (Trimmer, Jennings, & Patterson). eFictions was one of the first print and online short story anthologies that teachers of literature could customise to their own needs. Produced as both a print text collection and a website, eFictions offers popular short stories in English by well-known traditional and contemporary writers from the US, Australia, New Zealand, UK, and Europe, with summaries, notes on literary features, author biographies, and, in one instance, a YouTube movie of the story. In using the eFictions website, teachers can build a customised anthology of traditional and innovative stories to suit their teaching preferences. These examples provide useful indicators of how content is constructed, customised, modified, disseminated, and accessed within a distributed network. However, the question remains as to how to measure their impact and outcomes within teaching and learning communities. As Harley suggests in her study on the use and users of digital resources in the humanities and social sciences, several factors warrant attention, such as personal teaching style, philosophy, and specific disciplinary requirements. However, in terms of understanding the benefits of digital resources for teaching and learning, Harley notes that few providers in her sample had developed any plans to evaluate use and users in a systematic way. In addition to the problems raised in Harley’s study, another relates to how researchers can be supported to take full advantage of digital technologies for e-research. The transformation brought about by information and communication technologies extends and broadens the impact of research, by making its outputs more discoverable and usable by other researchers, and its benefits more available to industry, governments, and the wider community. Traditional repositories of knowledge and information, such as libraries, are juggling the space demands of books and computer hardware alongside increasing reader demand for anywhere, anytime, anyplace access to information. 
Researchers’ expectations about online access to journals, eprints, bibliographic data, and the views of others through wikis, blogs, and associated social and information networking sites such as YouTube compete with the traditional expectations of the institutions that fund libraries for paper-based archives and book repositories. While university libraries are finding it increasingly difficult to purchase all hardcover books relevant to numerous and varied disciplines, a significant proportion of their budgets goes towards digital repositories (e.g., STORS), indexes, and other resources, such as full-text electronic specialised and multidisciplinary journal databases (e.g., Project Muse and Proquest); electronic serials; e-books; and specialised information sources through fast (online) document delivery services. An area that is becoming increasingly significant for those working in the humanities is the digitising of historical and cultural texts. II. Bringing back the dead: The CLDR project The CLDR project is led by researchers and librarians at the Queensland University of Technology, in collaboration with Deakin University, University of Sydney, and members of the AustLit team at The University of Queensland. The CLDR project is a “Research Community” of the electronic bibliographic database AustLit: The Australian Literature Resource, which is working towards the goal of providing a complete bibliographic record of the nation’s literature. AustLit offers users with a single entry point to enhanced scholarly resources on Australian writers, their works, and other aspects of Australian literary culture and activities. AustLit and its Research Communities are supported by grants from the Australian Research Council and financial and in-kind contributions from a consortium of Australian universities, and by other external funding sources such as the National Collaborative Research Infrastructure Strategy. Like other more extensive digitisation projects, such as Project Gutenberg and the Rosetta Project, the CLDR project aims to provide a centralised access point for digital surrogates of early published works of Australian children’s literature, with access pathways to existing resources. The first stage of the CLDR project is to provide access to digitised, full-text, out-of-copyright Australian children’s literature from European settlement to 1945, with selected digitised critical works relevant to the field. Texts comprise a range of genres, including poetry, drama, and narrative for young readers and picture books, songs, and rhymes for infants. Currently, a selection of 75 e-texts and digital scans of original texts from Project Gutenberg and Internet Archive have been linked to the Children’s Literature Research Community. By the end of 2009, the CLDR will have digitised approximately 1000 literary texts and a significant number of critical works. Stage II and subsequent development will involve digitisation of selected texts from 1945 onwards. A precursor to the CLDR project has been undertaken by Deakin University in collaboration with the State Library of Victoria, whereby a digital bibliographic index comprising Victorian School Readers has been completed with plans for full-text digital surrogates of a selection of these texts. These texts provide valuable insights into citizenship, identity, and values formation from the 1930s onwards. At the time of writing, the CLDR is at an early stage of development. 
An extensive survey of out-of-copyright texts has been completed and the digitisation of these resources is about to commence. The project plans to make rich content searchable, allowing scholars from children’s literature studies and education to benefit from the many advantages of online scholarship. What digital publishing and associated digital archives, electronic texts, hypermedia, and so forth foreground is the fact that writers, readers, publishers, programmers, designers, critics, booksellers, teachers, and copyright laws operate within a context that is highly mediated by technology. In his article on large-scale digitisation projects carried out by Cornell and University of Michigan with the Making of America collection of 19th-century American serials and monographs, Hirtle notes that when special collections’ materials are available via the Web, with appropriate metadata and software, then they can “increase use of the material, contribute to new forms of research, and attract new users to the material” (44). Furthermore, Hirtle contends that despite the poor ergonomics associated with most electronic displays and e-book readers, “people will, when given the opportunity, consult an electronic text over the print original” (46). If this preference is universally accurate, especially for researchers and students, then it follows that not only will the preference for electronic surrogates of original material increase, but preference for other kinds of electronic texts will also increase. It is with this preference for electronic resources in mind that we approached the field of children’s literature in Australia and asked questions about how future generations of researchers would prefer to work. If electronic texts become the reference of choice for primary as well as secondary sources, then it seems sensible to assume that researchers would prefer to sit at the end of the keyboard than to travel considerable distances at considerable cost to access paper-based print texts in distant libraries and archives. We considered the best means for providing access to digitised primary and secondary, full text material, and digital pathways to existing online resources, particularly an extensive indexing and bibliographic database. Prior to the commencement of the CLDR project, AustLit had already indexed an extensive number of children’s literature. Challenges and dilemmas The CLDR project, even in its early stages of development, has encountered a number of challenges and dilemmas that centre on access, copyright, economic capital, and practical aspects of digitisation, and sustainability. These issues have relevance for digital publishing and e-research. A decision is yet to be made as to whether the digital texts in CLDR will be available on open or closed/tolled access. The preference is for open access. As Hayles argues, copyright is more than a legal basis for intellectual property, as it also entails ideas about authorship, creativity, and the work as an “immaterial mental construct” that goes “beyond the paper, binding, or ink” (144). Seeking copyright permission is therefore only part of the issue. Determining how the item will be accessed is a further matter, particularly as future technologies may impact upon how a digital item is used. In the case of e-journals, the issue of copyright payment structures are evolving towards a collective licensing system, pay-per-view, and other combinations of print and electronic subscription (see Hovav and Gray). 
For research purposes, digitisation of items for CLDR is not simply a scan and deliver process. Rather it is one that needs to ensure that the best quality is provided and that the item is both accessible and usable by researchers, and sustainable for future researchers. Sustainability is an important consideration and provides a challenge for institutions that host projects such as CLDR. Therefore, items need to be scanned to a high quality and this requires an expensive scanner and personnel costs. Files need to be in a variety of formats for preservation purposes and so that they may be manipulated to be useable in different technologies (for example, Archival Tiff, Tiff, Jpeg, PDF, HTML). Hovav and Gray warn that when technology becomes obsolete, then content becomes unreadable unless backward integration is maintained. The CLDR items will be annotatable given AustLit’s NeAt funded project: Aus-e-Lit. The Aus-e-Lit project will extend and enhance the existing AustLit web portal with data integration and search services, empirical reporting services, collaborative annotation services, and compound object authoring, editing, and publishing services. For users to be able to get the most out of a digital item, it needs to be searchable, either through double keying or OCR (optimal character recognition). The value of CLDR’s contribution The value of the CLDR project lies in its goal to provide a comprehensive, searchable body of texts (fictional and critical) to researchers across the humanities and social sciences. Other projects seem to be intent on putting up as many items as possible to be considered as a first resort for online texts. CLDR is more specific and is not interested in simply generating a presence on the Web. Rather, it is research driven both in its design and implementation, and in its focussed outcomes of assisting academics and students primarily in their e-research endeavours. To this end, we have concentrated on the following: an extensive survey of appropriate texts; best models for file location, distribution, and use; and high standards of digitising protocols. These issues that relate to data storage, digitisation, collections, management, and end-users of data are aligned with the “Development of an Australian Research Data Strategy” outlined in An Australian e-Research Strategy and Implementation Framework (2006). CLDR is not designed to simply replicate resources, as it has a distinct focus, audience, and research potential. In addition, it looks at resources that may be forgotten or are no longer available in reproduction by current publishing companies. Thus, the aim of CLDR is to preserve both the time and a period of Australian history and literary culture. It will also provide users with an accessible repository of rare and early texts written for children. III. Future directions It is now commonplace to recognize that the Web’s role as information provider has changed over the past decade. New forms of “collective intelligence” or “distributed cognition” (Oblinger and Lombardi) are emerging within and outside formal research communities. Technology’s capacity to initiate major cultural, social, educational, economic, political and commercial shifts has conditioned us to expect the “next big thing.” We have learnt to adapt swiftly to the many challenges that online technologies have presented, and we have reaped the benefits. 
As the examples in this discussion have highlighted, the changes in online publishing and digitisation have provided many material, network, pedagogical, and research possibilities: we teach online units providing students with access to e-journals, e-books, and customized archives of digitised materials; we communicate via various online technologies; we attend virtual conferences; and we participate in e-research through a global, digital network. In other words, technology is deeply engrained in our everyday lives. In returning to Frollo’s concern that the book would destroy architecture, Umberto Eco offers a placatory note: “in the history of culture it has never happened that something has simply killed something else. Something has profoundly changed something else” (n. pag.). Eco’s point has relevance to our discussion of digital publishing. The transition from print to digital necessitates a profound change that impacts on the ways we read, write, and research. As we have illustrated with our case study of the CLDR project, the move to creating digitised texts of print literature needs to be considered within a dynamic network of multiple causalities, emergent technological processes, and complex negotiations through which digital texts are created, stored, disseminated, and used. Technological changes in just the past five years have, in many ways, created an expectation in the minds of people that the future is no longer some distant time from the present. Rather, as our title suggests, the future is both present and active. References Aarseth, Espen. “How we became Postdigital: From Cyberstudies to Game Studies.” Critical Cyber-culture Studies. Ed. David Silver and Adrienne Massanari. New York: New York UP, 2006. 37–46. An Australian e-Research Strategy and Implementation Framework: Final Report of the e-Research Coordinating Committee. Commonwealth of Australia, 2006. Bolter, Jay David. Writing Space: The Computer, Hypertext, and the History of Writing. Hillsdale, NJ: Erlbaum, 1991. Eco, Umberto. “The Future of the Book.” 1994. 3 June 2008 ‹http://www.themodernword.com/eco/eco_future_of_book.html>. Gunkel, David. J. “What's the Matter with Books?” Configurations 11.3 (2003): 277–303. Harley, Diane. “Use and Users of Digital Resources: A Focus on Undergraduate Education in the Humanities and Social Sciences.” Research and Occasional Papers Series. Berkeley: University of California. Centre for Studies in Higher Education. 12 June 2008 ‹http://www.themodernword.com/eco/eco_future_of_book.html>. Hayles, N. Katherine. My Mother was a Computer: Digital Subjects and Literary Texts. Chicago: U of Chicago P, 2005. Hirtle, Peter B. “The Impact of Digitization on Special Collections in Libraries.” Libraries & Culture 37.1 (2002): 42–52. Hovav, Anat and Paul Gray. “Managing Academic E-journals.” Communications of the ACM 47.4 (2004): 79–82. Hugo, Victor. The Hunchback of Notre Dame (Notre-Dame de Paris). Ware, Hertfordshire: Wordsworth editions, 1993. Kho, Nancy D. “The Medium Gets the Message: Post-Print Publishing Models.” EContent 30.6 (2007): 42–48. Oblinger, Diana and Marilyn Lombardi. “Common Knowledge: Openness in Higher Education.” Opening up Education: The Collective Advancement of Education Through Open Technology, Open Content and Open Knowledge. Ed. Toru Liyoshi and M. S. Vijay Kumar. Cambridge, MA: MIT Press, 2007. 389–400. Murray, Janet H. Hamlet on the Holodeck: The Future of Narrative in Cyberspace. Cambridge, MA: MIT Press, 2001. 
Trimmer, Joseph F., Wade Jennings, and Annette Patterson. eFictions. New York: Harcourt, 2001.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "IG. Information presentation: hypertext, hypermedia"

1

Guallar, Javier. "Curación de contenidos en el periodismo digital: conceptualización y propuesta de un sistema para la evaluación de la curación en medios de comunicación digitales." Doctoral thesis, Universitat Pompeu Fabra, 2021. http://hdl.handle.net/10803/672646.

Full text
Abstract:
El objetivo de esta investigación es estudiar la curación de contenidos en el ámbito del periodismo, desarrollar un sistema de análisis para evaluar su uso y analizar su implementación en medios de comunicación digitales. En relación con ello: a) se propone una definición del concepto de curación de contenidos en periodismo; b) se estudia el alcance del concepto de curación periodística, y se relaciona con el concepto cercano de documentación periodística; c) se sitúa la investigación sobre curación periodística en el contexto más general de la investigación sobre curación; d) se presenta por primera vez en la literatura un sistema para la evaluación de la curación en medios de comunicación digitales; e) se prueba y valida esta nueva herramienta de evaluación con una muestra de productos periodísticos, en concreto newsletters, en el primer estudio sistemático realizado sobre curación en medios de un país; f) se establecen las características principales de los productos periodísticos basados en curación; y g) se identifican buenas prácticas de curación periodística.
This research’s goal is to study content curation in the field of journalism, develop an analysis system to evaluate its use and analyse its implementation in digital media. In relation to this: a) a definition of the concept of content curation in journalism is proposed; b) the scope of the concept of journalistic curation is studied, and it is related to the close concept of journalistic documentation; c) research on journalistic curation is situated in the more general context of research on curation; d) a system for evaluating curation in digital media is presented for the first time in the literature; e) this new evaluation tool is tested and validated with a sample of journalistic products, specifically newsletters, in the first systematic study carried out on curation in the media of a country; f) the main characteristics of journalistic products based on curation are established; and g) good journalistic curation practices are identified.
APA, Harvard, Vancouver, ISO, and other styles
2

Romanello, Matteo. "E-scholia: Scenari Digitali per la Comunicazione Scientifica in Ambito Filologico." Thesis, 2008. http://eprints.rclis.org/12200/1/805189.pdf.

Full text
Abstract:
The thesis main goal is to suggest an innovative model for electronic web-based journals in the research field of classical studies and, in particular, of classical philology. In Italy, Humanities suffer some delay in the adoption of digital technologies with regard to publishing in scientific journals. However this delay could be avoided by taking advantage of the results obtained in the past recent years by projects started within other disciplines, in particular within physics and maths. The first chapter discusses the effects of the Web on scientific communication systems such as the journal crisis, the appearance of initiatives in collaborative writing (e.g. Wikipedia) and the Open Access Movement. In the second chapter the main specifics of the Classical Literature and Philology field have been identified in order to outline the requirements of an e-journal model which can solve existing problems with the use of electronic resources for research purposes. The third chapter is dedicated to the illustration of the implementation of the project development. In conclusion, the implemented prototype of an e-journal on classical philology highlighted how suitable the text encoding using an XML-compliant format is in order to provide scholars reading on-line publications with advanced features and value added services. Although the manual deep encoding of huge quantities of documents (journal articles, books, commentaries) is a task too expensive to perform with regard to both time and human resources necessary. Therefore it is need to develop in the next future some tools for the automatic extraction of semantic data from large corpora of unstructured discipline-specific texts.
APA, Harvard, Vancouver, ISO, and other styles
3

Gwizdka, Jacek Stanislaw. "Cognitive abilities, interfaces and tasks: effects on prospective information handling in email." Thesis, 2004. http://eprints.rclis.org/13653/1/JacekSGwizdka_PhDThesis_UofT_2004.pdf.

Full text
Abstract:
This dissertation is focused on new email user interfaces that may improve awareness and handling of task-laden messages in the inbox. The practical motivation for this research was to help email users process messages more effectively.Two user studies were conducted to test hypothesized benefits of the new visual representations and to examine the effects of different levels of selected cognitive abilities on task-laden message handling performance. Task performance on both of the new visual interfaces was faster than on the more traditional textual interfaces, but only when finding date-related information. Selected cognitive abilities were found to impact different dependent measures. Working memory and flexibility of closure had effects on performance time, while visual memory and working memory had effects on user interactions involving manipulation of the visual field, such as scrolling and sorting.A field study was conducted to examine email practices related to handling messages that refer to pending tasks. Individual differences in message handling style were observed, with one group of users transferring such messages out of their email programs to other applications (e.g., calendars), while the other group kept prospective messages in email and used the inbox as a reminder of future events.This research contributes to understanding interactions among cognitive abilities, user interfaces and tasks. These interactions are essential for developing two types of interface: inclusive interfaces that work for users with a wide range of cognitive abilities, and personalized and adaptive interfaces that are fitted to individual characteristics.Two novel graphical user interfaces were designed to facilitate monitoring and retrieval of prospective information from email messages. The TaskView interface displayed task-laden messages on a two-dimensional grid (with time on the horizontal axis). The WebTaskMail interface retained the two-dimensional grid, but extended the representation of pending tasks (added distinction between events and to-do's with deadlines), a vertical date reading line, and more space for email message headers.
APA, Harvard, Vancouver, ISO, and other styles
4

Kleiner, Eike. "Blended Shelf - Ein realitätsbasierter Ansatz zur Präsentation und Exploration von Bibliotheksbeständen." Thesis, 2013. http://eprints.rclis.org/22434/1/Master%20Thesis%20-%20Eike%20Kleiner.pdf.

Full text
Abstract:
The subject of this thesis is the user interface Blended Shelf, which provides a shelf browsing experience beyond the physical location of the library. Shelf browsing offers numerous advantages, and users apply it as a research strategy in libraries. Few usable and proven applications exist to provide shelf browsing in the digital domain, which would allow users time- and location-independent shelf access. The aim of this work is therefore to develop a user interface that offers the experience of digital shelf browsing without losing the essential advantages that are deeply rooted in the physical space. To accomplish this, the first part of the thesis constructs a collection of basic requirements that need to be fulfilled to emulate the shelf browsing experience. These requirements are grounded in the theoretical background of shelf browsing as well as in an analysis of library-specific aspects and user needs. The central parts of the work illustrate how the requirement collection serves as a foundation for the concrete implementation, the feature set and the reality-based interaction design of Blended Shelf. Finally, an evaluation in the form of a comprehensive field study examines whether the implementation meets the requirements and whether users perceive the interface as useful and usable. A description and discussion of the study design and results forms the last third of the thesis. A discussion of open questions and an outlook on future work conclude the thesis.
APA, Harvard, Vancouver, ISO, and other styles
5

Sánchez Villegas, Miguel Angel. "Creación de un servicio de información especializado en entorno Web para bibliotecarios : debiblioteconomia.com." Thesis, 2003. http://eprints.rclis.org/7530/1/portada.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Moreira, Walter. "Biblioteca tradicional x biblioteca virtual: modelos de recuperação da informação." Thesis, 1998. http://eprints.rclis.org/8353/1/BibliotecaTradicionalXBibliotecaVirtual_ModelosDeRecuperacaoDaInformacao.pdf.

Full text
Abstract:
Taking libraries as "technologies of intelligence", the author develops a reflection in which the traditional library is "opposed" to its virtual counterpart. Models of information retrieval are discussed within the two mentioned models of libraries. New virtual concepts such as "navigation" or "surfing" are compared with those present in more traditional information retrieval systems, such as "best match". Other opposing concepts are also discussed: fuzzy set theory versus Boolean logic; the interrelation between the foundations of information technology and the development of information retrieval tools is likewise under scrutiny. Finally, the author discusses the specificity of the hypermedia environment, which poses new questions for the development of information retrieval theory. The radical differences perceived between traditional and virtual libraries do not lead the author to a polarized view of the future. Although digitization is an irreversible tendency in the information world, the author concludes that the two models of libraries are complementary.
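To illustrate the "best match" and fuzzy-versus-Boolean contrast the abstract raises, here is a minimal sketch in Python; the toy documents and membership weights are invented for the example and are not drawn from the thesis.

```python
# A minimal sketch contrasting Boolean retrieval (a document either
# matches or it does not) with fuzzy retrieval (graded membership).
# The toy documents and term weights are illustrative assumptions.

docs = {
    "d1": {"library": 1.0, "virtual": 0.8},
    "d2": {"library": 0.4},
    "d3": {"virtual": 0.9, "hypermedia": 0.7},
}

def boolean_and(query, docs):
    """Return documents containing every query term (crisp match)."""
    return [d for d, terms in docs.items() if all(t in terms for t in query)]

def fuzzy_and(query, docs):
    """Rank documents by the minimum term membership (fuzzy AND)."""
    return sorted(
        ((d, min(terms.get(t, 0.0) for t in query)) for d, terms in docs.items()),
        key=lambda pair: pair[1],
        reverse=True,
    )

query = ["library", "virtual"]
print(boolean_and(query, docs))  # only d1 matches crisply
print(fuzzy_and(query, docs))    # every document receives a graded score
```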
APA, Harvard, Vancouver, ISO, and other styles
7

Miranda Soares, Filipi. "Princípios para a criação de uma extensão de metadados sobre interações ecológicas na agrobiodiversidade para o padrão Darwin Core." Thesis, 2019. http://eprints.rclis.org/43096/1/Filipi_Miranda_Soares_disserta%C3%A7%C3%A3o.pdf.

Full text
Abstract:
Information is an intrinsic element of human relations. When transmitted or stored in digital media, it needs to be well described so that it can be efficiently retrieved, accessed and interpreted by society. Among the themes of interest to society today, agrobiodiversity is a broad concept that involves organisms and ecosystems related to agricultural production and crops. In computational systems, the representation of information produced about agrobiodiversity can be done with metadata. However, existing metadata standards, such as the Darwin Core (DwC) standard, do not fully cover the representation of some agrobiodiversity concepts. The aim of this research was to present principles for the creation of a metadata extension for the DwC standard, having as its scope the ecological interactions in the context of agrobiodiversity. To achieve this aim, the methodology, characterized as exploratory, qualitative, applied and descriptive, was divided into two stages: 1) exploration of methodological and terminological inputs; 2) terminological definition and metadata modeling. The first stage was organized into four substeps: a) systematic analysis of the literature on ecological interactions; b) analysis of the main core of DwC terms; c) analysis of the extensions to the DwC metadata standard; d) correlated terminological analysis of the classes of the model, the DwC and the concepts of ecological interactions. The first substep resulted in a conceptual model of ecological interactions; the second resulted in the translation of the definitions of the terms of the DwC main core and the respective analyses; the third presented a summary of the content of the metadata extensions developed for DwC in other projects; and, finally, the fourth consisted of a correlated analysis of the classes of the model, which was developed by Embrapa in order to organize information on agrobiodiversity, the DwC term classes and their extensions, and the conceptual model of ecological interactions. The second stage of the methodology resulted in three metadata elements, created to represent the interaction of parasitism and expressed as a metadata record in Extensible Markup Language (XML). The greatest contribution of this research is considered to be a set of methodological principles for the creation of a metadata extension for the representation of ecological interactions in the context of agrobiodiversity, which can foster improvements in agricultural practices that matter to society as a whole, as well as to the field of information science.
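As a rough illustration of the kind of XML metadata record mentioned above, consider the following sketch in Python; the dwc:scientificName term and its namespace are genuine Darwin Core, while the three agroeco: extension elements and their namespace are hypothetical placeholders, since the abstract does not name the elements actually defined in the dissertation.

```python
# A minimal sketch of an XML record pairing a standard Darwin Core term
# with a hypothetical extension describing a parasitism interaction.
# dwc:scientificName and its namespace are real Darwin Core; the
# "agroeco:" elements and namespace are invented placeholders.
import xml.etree.ElementTree as ET

DWC = "http://rs.tdwg.org/dwc/terms/"
EXT = "http://example.org/agroeco/terms/"  # hypothetical extension namespace
ET.register_namespace("dwc", DWC)
ET.register_namespace("agroeco", EXT)

record = ET.Element("record")
ET.SubElement(record, f"{{{DWC}}}scientificName").text = "Aphidius colemani"
ET.SubElement(record, f"{{{EXT}}}interactionType").text = "parasitism"
ET.SubElement(record, f"{{{EXT}}}hostTaxon").text = "Aphis gossypii"
ET.SubElement(record, f"{{{EXT}}}interactionEvidence").text = "field observation"

print(ET.tostring(record, encoding="unicode"))
```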
APA, Harvard, Vancouver, ISO, and other styles
8

Cano-Torrecilla, Enrique. "Explotación de registros catalográficos de imágenes del IVAC superando las limitaciones del gestor documental actual mediante herramientas externas." Thesis, 2012. http://eprints.rclis.org/19457/1/memoria.pdf.

Full text
Abstract:
The document management system of the IVAC, implemented on top of a full-text DBMS, is not able to properly register data on the film copies filed in each of its records, nor does it allow navigating simultaneously across different types of item. In this paper the causes of the problem are analysed, and a procedure has been designed involving data extraction, analysis and conversion to a relational database, which allows the data to be represented accurately. The procedure is easy to repeat. The result is a web application for consulting the filmographic holdings.
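A minimal sketch of the extraction-and-conversion step might look like the following; the relational schema, column names and sample record are assumptions made for illustration, not the IVAC's actual data model.

```python
# A minimal sketch of converting flat full-text records into a relational
# schema: one table for films, one for the copies filed under each film.
# The schema and the sample record are illustrative assumptions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE film (id INTEGER PRIMARY KEY, title TEXT, year INTEGER);
    CREATE TABLE copy (
        id INTEGER PRIMARY KEY,
        film_id INTEGER REFERENCES film(id),
        format TEXT,
        condition TEXT
    );
""")

# One flat record extracted from the full-text system, split across tables.
film_id = conn.execute(
    "INSERT INTO film (title, year) VALUES (?, ?)", ("Sample film title", 1948)
).lastrowid
conn.execute(
    "INSERT INTO copy (film_id, format, condition) VALUES (?, ?, ?)",
    (film_id, "35mm", "good"),
)

# Copies can now be queried per film -- the navigation the old system lacked.
for row in conn.execute(
    "SELECT f.title, c.format, c.condition FROM film f JOIN copy c ON c.film_id = f.id"
):
    print(row)
```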
APA, Harvard, Vancouver, ISO, and other styles
9

Johnson, J. K. "Reworking Myth: Casting Lots for the Future of Library Workplaces." Thesis, 2009. http://eprints.rclis.org/19168/1/JohnsonJ0509.pdf.

Full text
Abstract:
The purpose of this work is to provide understanding regarding the future of library workplaces by, first, establishing the relationship between Joseph Campbell's functions of mythology in traditional cultures and workplace texts, and then showing libraries to be workplaces with such texts. With this framework in place, it is possible to pick out the fundamental cycle inherent in library workplace cosmology, highlight the pedagogical cycles inherent in library texts, and generate an informed understanding of future cosmological and pedagogical trends through educated extrapolation of those cycles. These steps all serve to lay further groundwork for understanding library workplace mythology and its sociological effects, and, using the relationship between the ever-moving cosmological and pedagogical cycles, it becomes possible to form an educated picture of future library sociology. In the end, library workplace mythology offers no new revelations about the direction of library workplace sociology, only new ways of dispelling predictions often made about the future of libraries and their workplaces. By looking at library workplaces as sites of mythology, this work leads us to expect that the same cycles inherent in past and present library workplaces will continue to outlast changes in the technological, political, and social constructs of future library workplaces.
APA, Harvard, Vancouver, ISO, and other styles
10

Guallar, Javier. "Curación de contenidos en el periodismo digital. Conceptualización y propuesta de un sistema para la evaluación de la curación en medios de comunicación digitales." Thesis, 2021. http://eprints.rclis.org/42065/1/TesisUPF2021_Guallar_Curacio%CC%81nContenidosPeriodismo%20OK.pdf.

Full text
Abstract:
This research's goal is to study content curation in the field of journalism, develop an analysis system to evaluate its use, and analyse its implementation in digital media. To that end: a) a definition of the concept of content curation in journalism is proposed; b) the scope of the concept of journalistic curation is studied and related to the neighbouring concept of journalistic documentation; c) research on journalistic curation is situated in the more general context of research on curation; d) a system for evaluating curation in digital media is presented for the first time in the literature; e) this new evaluation tool is tested and validated on a sample of journalistic products, specifically newsletters, in the first systematic study of curation in the media of an entire country; f) the main characteristics of curation-based journalistic products are established; and g) good journalistic curation practices are identified.
APA, Harvard, Vancouver, ISO, and other styles