Academic literature on the topic 'Computer-based hypermedia documents'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Computer-based hypermedia documents.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Computer-based hypermedia documents"

1

Schrader, U., R. Klar, and S. Schulz. "Computer-based Training and Electronic Publishing in the Health Sector: Tools and Trends." Methods of Information in Medicine 36, no. 2 (March 1997): 149–53. http://dx.doi.org/10.1055/s-0038-1634692.

Abstract:
Abstract:CBT (computer-based training) applications and hypermedia publications are two different approaches to the utilisation of computers in medical education.Medical CBT software continues to playa minor role in spite of the increasing availability, whereas hypermedia have become very popular through the World Wide Web (WWW). Based on the HTML format they can be designed by non-programmers using inexpensive tools while the production of CBT applications requires programming expertise. HTML documents can be easily developed to be distributed by a web-server or to run as local applications.In developed countries CBT and hypermedia have to compete with an abundance of printed or audio-visual media and a wealth of lectures, conferences, etc., whereas in developing countries these media are scarce and expensive. Here CBT programs, and hypermedia publications in particular, may be a cost-effective way to improve quality of education in the health sector.
2

Atzenbeck, Claus. "Interview with Beat Signer." ACM SIGWEB Newsletter, Winter (January 2021): 1–5. http://dx.doi.org/10.1145/3447879.3447881.

Abstract:
Beat Signer is Professor of Computer Science at the Vrije Universiteit Brussel (VUB) and co-director of the Web & Information Systems Engineering (WISE) research lab. He received a PhD in Computer Science from ETH Zurich, where he also led the Interactive Paper lab as a senior researcher for four years. He is an internationally distinguished expert in cross-media technologies and interactive paper solutions. His further research interests include human-information interaction, document engineering, data physicalisation, mixed reality, and multimodal interaction. He has published more than 100 papers on these topics in international conferences and journals, and received multiple best paper awards. Beat has 20 years of experience in research on cross-media information management and multimodal user interfaces. As part of his PhD research, he investigated the use of paper as an interactive user interface and developed the resource-selector-link (RSL) hypermedia metamodel. With the interactive paper platform (iPaper), he contributed strongly to the interdisciplinary European Paper++ and PaperWorks research projects, and this seminal research on paper-digital user interfaces led to innovative cross-media publishing solutions and novel forms of paper-based human-computer interaction. The RSL hypermedia metamodel is nowadays widely applied in his research lab and has, for example, been used for cross-media personal information management, an extensible cross-document link service, and the MindXpres presentation platform, as well as in a framework for cross-device and Internet of Things applications. For more details, please visit https://beatsigner.com.
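For readers unfamiliar with the resource-selector-link (RSL) idea mentioned in the interview, the sketch below models its three core notions as plain data classes. All class and field names here are illustrative assumptions; the published metamodel is considerably richer:

```python
# A minimal, assumption-laden sketch of the resource-selector-link triad.
from dataclasses import dataclass

@dataclass
class Resource:
    """Any addressable unit of information (document, image, web page...)."""
    uri: str

@dataclass
class Selector:
    """Addresses part of a resource, e.g. a text span or an image region."""
    resource: Resource
    fragment: str  # e.g. "chars=0-140" or "rect=40,60,300,120"

@dataclass
class Link:
    """An n-ary link between entities (whole resources or selectors)."""
    sources: list
    targets: list

# Usage: link a passage of a paper to a region of a scanned printed page.
paper = Resource("https://example.org/paper.pdf")
scan = Resource("https://example.org/page3.png")
cross_media_link = Link(sources=[Selector(paper, "chars=0-140")],
                        targets=[Selector(scan, "rect=40,60,300,120")])
```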
3

Chang, Kuo-En, Yao-Ting Sung, and Sheng-Kuang Chiou. "Use of Hierarchical Hyper Concept Map in Web-Based Courses." Journal of Educational Computing Research 27, no. 4 (December 2002): 335–53. http://dx.doi.org/10.2190/mtur-9bjq-fe33-qm0a.

Abstract:
The study proposes a hierarchical hyper concept map (HHCM) course system. An HHCM course consists of a navigation map, concept maps, and hypermedia documents. The navigation map is a guide to the course, illustrating how the course is composed of learning units. The concept map demonstrates the conceptual structure of each unit, and each node in the HHCM is linked to a hypermedia document, which illustrates the concept in more detail. Such a combination allows the HHCM course to be viewed as a three-dimensional structure of course representation. The effects of HHCM as a course representation were empirically tested. The experimental results found that students who learn from a course represented by HHCM achieve better learning than those who learn from a linearly represented course. Moreover, they learn more efficiently than those who learn from a course represented by navigation maps alone. These findings suggest that the HHCM has good potential as a device for designing Web-based courses.
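The three-level structure the abstract describes (navigation map, a concept map per unit, hypermedia documents per concept node) can be pictured as a simple tree. The sketch below is an invented illustration of that structure, not the authors' implementation; all names and URLs are assumptions:

```python
# An HHCM course as a three-level tree: navigation map -> units -> concepts.
from dataclasses import dataclass, field

@dataclass
class ConceptNode:
    name: str
    document_url: str  # hypermedia document detailing this concept
    children: list = field(default_factory=list)

@dataclass
class LearningUnit:
    title: str
    concept_map: ConceptNode  # root of this unit's concept map

@dataclass
class NavigationMap:
    course: str
    units: list = field(default_factory=list)

course = NavigationMap("Introduction to Hypermedia", units=[
    LearningUnit("Linking", ConceptNode("Link", "units/linking.html", children=[
        ConceptNode("Anchor", "units/anchor.html"),
        ConceptNode("Traversal", "units/traversal.html"),
    ])),
])
```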
4

Sierra, José Luis, Baltasar Fernández-Manjón, Alfredo Fernández-Valmayor, and Antonio Navarro. "Document-Oriented Development of Content-Intensive Applications." International Journal of Software Engineering and Knowledge Engineering 15, no. 6 (December 2005): 975–93. http://dx.doi.org/10.1142/s0218194005002634.

Abstract:
In this paper we promote a document-oriented approach to the development of content-intensive applications (i.e., applications that critically depend on the informational contents and on the characterization of the contents' structure). This approach is the result of our experience as developers in the educational and hypermedia domains, as well as in the domain of knowledge-based systems. The main reason for choosing the document-oriented approach is to make it easier for domain experts to comprehend the elements that represent the application's main features. Among these elements are: the application's contents, the application's customizable properties, including those of its interface, and the structure of all this information. Therefore, in our approach, these features are represented by means of a set of application documents, which are marked up using a suitable descriptive Domain-Specific Markup Language (DSML). If this goal is fully accomplished, the application itself can be automatically produced by processing those documents with a suitable processor for the DSML defined. Document-oriented development enhances the production and maintenance of content-intensive applications, because the applications' features are described in the form of human-readable and editable documents, understandable by domain experts and suitable for automatic processing. Nevertheless, the main drawbacks of the approach are the planning overhead of the whole production process and the costs of providing and maintaining the DSMLs and their processors. These drawbacks can be mitigated by adopting an incremental strategy for the production and maintenance of the applications, and also for the definition and operationalization of the DSMLs.
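To make the document-oriented approach concrete, here is a hedged sketch: the application's content lives in a descriptive domain-specific markup document, and a small processor generates the running artifact from it. The quiz language below is invented for illustration and is not one of the authors' DSMLs:

```python
# Process a DSML document into an application (here, a trivial HTML quiz page).
import xml.etree.ElementTree as ET

DSML_DOC = """
<quiz title="Hypermedia basics">
  <question text="What does HTML stand for?">
    <answer correct="true">HyperText Markup Language</answer>
    <answer>Hyperlink and Text Management Logic</answer>
  </question>
</quiz>
"""

def process(dsml: str) -> str:
    """The DSML processor: domain experts edit the document, not this code."""
    root = ET.fromstring(dsml)
    parts = [f"<h1>{root.get('title')}</h1>"]
    for question in root.iter("question"):
        parts.append(f"<p>{question.get('text')}</p><ul>")
        for answer in question.iter("answer"):
            parts.append(f"<li>{answer.text}</li>")
        parts.append("</ul>")
    return "\n".join(parts)

print(process(DSML_DOC))  # the generated application page
```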
5

Bruns, Axel. "What's the Story: The Unfulfilled Desire for Closure on the Web." M/C Journal 2, no. 5 (July 1, 1999). http://dx.doi.org/10.5204/mcj.1774.

Abstract:
Practically any good story follows certain narrative conventions in order to hold its readers' attention and leave them with a feeling of satisfaction -- this goes for fictional tales as well as for many news reports (we do tend to call them 'news stories', after all), for idle gossip as well as for academic papers. In the Western tradition of storytelling, it's customary to start with the exposition, build up to major events, and end with some form of narrative closure. Indeed, audience members will feel disturbed if there is no sense of closure at the end -- their desire for closure is a powerful one. From this brief description of narrative patterns it is also clear that such narratives depend crucially on linear progression through the story in order to work -- there may be flashbacks and flashforwards, but very few stories, it seems, could get away with beginning with their point of closure and working back to the exposition. Closure, as the word suggests, closes the story, and once reached, the audience is left with the feeling of now knowing the whole story, of having all the pieces necessary to understand its events. To understand how important the desire to reach this point is to the audience, just observe the discussions of holes in the plot which people have when they're leaving a cinema: they're trying to reach a better sense of closure than was afforded them by the movie itself. In linearly progressing media, this seems, if you'll pardon the pun, straightforward. Readers know when they've finished an article or a book, viewers know when a movie or a broadcast is over, and they'll be able to assess then if they've reached sufficient closure -- if their desires have been fulfilled. On the World Wide Web, this is much more difficult: "once we have it in our hands, the whole of a book is accessible to us readers. However, in front of an electronic read-only hypertext document we are at the mercy of the author since we will only be able to activate the links which the author has provided" (McKnight et al. 119). In many cases, it's not even clear whether we've reached the end of the text already: just where does a Website end? Does the question even make sense? Consider the following example, reported by Larry Friedlander: I watched visitors explore an interactive program in a museum, one that contained a vast amount of material -- pictures, film, historic explanations, models, simulations. I was impressed by the range of subject matter and by the ambitiousness and polish of the presentation. ... But to my surprise, as I watched visitors going down one pathway after another, I noticed a certain dispirited glaze spread over their faces. They seemed to lose interest quite quickly and, in fact, soon stopped their explorations. (163) Part of the problem here may just have been the location of the programme, of course -- when you're out in public, you might just not have the time to browse as extensively as you could from your computer at home. But there are other explanations, too: the sheer amount of options for exploration may have been overwhelming -- there may not have been any apparent purpose to aim for, any closure to arrive at. This is a problem inherent in hypertext, particularly in networked systems like the Web: it "changes our conception of an ending. Different readers can choose not only to end the text at different points but also to add to and extend it. In hypertext there is no final version, and therefore no last word: a new idea or reinterpretation is always possible. 
... By privileging intertextuality, hypertext provides a large number of points to which other texts can attach themselves" (Snyder 57). In other words, there will always be more out there than any reader could possibly explore, since new documents are constantly being added. There is no ending if a text is constantly extended. (In print media this problem appears only to a far more limited extent: there, intertextuality is mostly implicit, and even though new articles may constantly be added -- 'linked', if you will -- to a discourse, due to the medium's physical nature they're still very much separate entities, while Web links make intertextuality explicit and directly connect texts.) Does this mark the end of closure, then? Adding to the problem is the fact that it's not even possible to know how much of the hypertextual information available is still left unexplored, since there is no universal register of all the information available on the Web -- "the extent of hypertext is unknowable because it lacks clear boundaries and is often multi-authored" (Snyder 19). While reading a book you can check how many more pages you've got to go, but on the Web this is not an option. Our traditions of information transmission create this desire for closure, but the inherent nature of the medium prevents us from ever satisfying it. Barrett waxes lyrical in describing this dilemma: contexts presented online are often too limited for what we really want: an environment that delivers objects of desire -- to know more, see more, learn more, express more. We fear being caught in Medusa's gaze, of being transfixed before the end is reached; yet we want the head of Medusa safely on our shield to freeze the bitstream, the fleeting imagery, the unstoppable textualisations. We want, not the dead object, but the living body in its connections to its world, connections that sustain it, give it meaning. (xiv-xv) We want nothing less, that is, than closure without closing: we desire the knowledge we need, and the feeling that that knowledge is sufficient to really know about a topic, but we don't want to devalue that knowledge in the same process by removing it from its context and reducing it to trivial truisms. We want the networked knowledge base that the Web is able to offer, but we don't want to feel overwhelmed by the unfathomable dimensions of that network. This is increasingly difficult the more knowledge is included in that network -- "with the growth of knowledge comes decreasing certainty. The confidence that went with objectivity must give way to the insecurity that comes from knowing that all is relative" (Smith 206). The fact that 'all is relative' is one which predates the Net, of course, and it isn't the Internet or the World Wide Web that has destroyed objectivity -- objectivity has always been an illusion, no matter how strongly journalists or scientists have at times laid claims to it. Internet-based media have simply stripped away more of the pretences, and laid bare the subjective nature of all information; in the process, they have also uncovered the fact that the desire for closure must ultimately remain unfulfilled in any sufficiently non-trivial case. 
Nonetheless, the early history of the Web has seen attempts to connect all the information available (LEO, one of the first major German Internet resource centres, for example, took its initials from its mission to 'Link Everything Online') -- but as the amount of information on the Net exploded, more and more editorial choices of what to include and what to leave out had to be made, so that now even search engines like Yahoo! and Altavista quite clearly and openly offer only a selection of what they consider useful sites on the Web. Web browsers still hoping to find everything on a certain topic would be well-advised to check with all major search engines, as well as important resource centres in the specific field. The average Web user would probably be happy with picking the search engine, Web directory or Web ring they find easiest to use, and sticking with it. The multitude of available options here actually shows one strength of the Internet and similar networks -- "the computer permits many [organisational] structures to coexist in the same electronic text: tree structures, circles, and lines can cross and recross without obstructing one another. The encyclopedic impulse to organise can run riot in this new technology of writing" (Bolter 95). Still, this multitude of options is also likely to confuse some users: in particular, "novices do not know in which order they need to read the material or how much they should read. They don't know what they don't know. Therefore learners might be sidetracked into some obscure corner of the information space instead of covering the important basic information" (Nielsen 190). They're like first-time visitors to a library -- but this library has constantly shifting aisles, more or less well-known pathways into specialty collections, fiercely competing groups of librarians, and it extends almost infinitely. Of course, the design of the available search and information tools plays an important role here, too -- far more than it is possible to explore at this point. Gay makes the general observation that "visual interfaces and navigational tools that allow quick browsing of information layout and database components are more effective at locating information ... than traditional index or text-based search tools. However, it should be noted that users are less secure in their findings. Users feel that they have not conducted complete searches when they use visual tools and interfaces" (185). Such technical difficulties (especially for novices) will slow take-up of, and lower satisfaction with, the medium (and many negative views of the Web can probably be traced to this dissatisfaction with the result of searches -- in other words, to a lack of satisfaction of the desire for closure); while many novices eventually overcome their initial confusion and become more Web-savvy, others might disregard the medium as unsuitable for their needs. At the other extreme of the scale, the inherent lack of closure, in combination with the societally deeply ingrained desire for it, may also be a strong contributing factor for another negative phenomenon associated with the Internet: that of Net users becoming Net junkies, who spend every available moment online. 
Where the desire to know, to get to the bottom (or more to the point: to the end) of a topic, becomes overwhelming, and where the fundamental unattainability of this goal remains unrealised, the step to an obsession with finding information seems a small one; indeed, the neverending search for that piece of knowledge surpassing all previously found ones seems to have obvious similarities to drug addiction with its search for the high to better all previous highs. And most likely, the addiction is only heightened by the knowledge that on the Web, new pieces of information are constantly being added -- an endless, and largely free, supply of drugs... There is no easy solution to this problem -- in the end, it is up to the user to avoid becoming an addict, and to keep in mind that there is no such thing as total knowledge. Web designers and content providers can help, though: "there are ways of orienting the reader in an electronic document, but in any true hypertext the ending must remain tentative. An electronic text never needs to end" (Bolter 87). As Tennant & Heilmeier elaborate, "the coming ease-of-use problem is one of developing transparent complexity -- of revealing the limits and the extent of vast coverage to users, and showing how the many known techniques for putting it all together can be used most effectively -- of complexity that reveals itself as powerful simplicity" (122). We have been seeing, therefore, the emergence of a new class of Websites: resource centres which help their visitors to understand a certain topic and view it from all possible angles, which point them in the direction of further information on- and off-site, and which give them an indication of how much they need to know to understand the topic to a certain degree. In this, they must ideally be very transparent, as Tennant & Heilmeier point out -- having accepted that there is no such thing as objectivity, it is necessary for these sites to point out that their offered insight into the field is only one of many possible approaches, and that their presented choice of information is based on subjective editorial decisions. They may present preferred readings, but they must indicate that these readings are open for debate. They may help satisfy some of their readers' desire for closure, but they must at the same time point out that they do so by presenting a temporary ending beyond which a more general story continues. If, as suggested above, closure crucially depends on a linear mode of presentation, such sites in their arguments help trace one linear route through the network of knowledge available online; they impose a linear from-us-to-you model of transmission on the normally unordered many-to-many structure of the Net. In the face of much doomsaying about the broadcast media, then, here is one possible future for these linear transmission media, and it's no surprise that such Internet 'push' broad- or narrowcasting is a growth area of the Net -- simply put, it serves the apparent need of users to be told stories, to have their desire for closure satisfied through clear narrative progressions from exposition through development to end. (This isn't 'push' as such, really: it's more a kind of 'push on demand'.) But at the same time, this won't mean the end of the unstructured, networked information that the Web offers: even such linear media ultimately build on that networked pool of knowledge. The Internet has simply made this pool public -- passively as well as actively accessible to everybody. 
Now, however, Web designers (and this includes each and every one of us, ultimately) must work "with the users foremost in mind, making sure that at every point there is a clear, simple and focussed experience that hooks them into the welter of information presented" (Friedlander 164); they must play to the desire for closure. (As with any preferred reading, however, there is also a danger that that closure is premature, and that the users' process of meaning-making is contained and stifled rather than aided.) To return briefly to Friedlander's experience with the interactive museum exhibit: he draws the conclusion that visitors were simply overwhelmed by the sheer mass of information and were reluctant to continue accumulating facts without a guiding purpose, without some sense of how or why they could use all this material. The technology that delivers immense bundles of data does not simultaneously deliver a reason for accumulating so much information, nor a way for the user to order and make sense of it. That is the designer's task. The pressing challenge of multimedia design is to transform information into usable and useful knowledge. (163) Perhaps this transformation is exactly what is at the heart of fulfilling the desire for closure: we feel satisfied when we feel we know something, have learnt something from a presentation of information (no matter if it's a news report or a fictional story). Nonetheless, this satisfaction must of necessity remain intermediate -- there is always much more still to be discovered. "From the hypertext viewpoint knowledge is infinite: we can never know the whole extent of it but only have a perspective on it. ... Life is in real-time and we are forced to be selective, we decide that this much constitutes one node and only these links are worth representing" (Beardon & Worden 69). This is not inherently different from processes in other media, where bandwidth limitations may even force much stricter gatekeeping regimes, but as in many cases the Internet brings these processes out into the open, exposes their workings and stresses the fundamental subjectivity of information. Users of hypertext (as indeed users of any medium) must be aware of this: "readers themselves participate in the organisation of the encyclopedia. They are not limited to the references created by the editors, since at any point they can initiate a search for a word or phrase that takes them to another article. They might also make their own explicit references (hypertextual links) for their own purposes ... . It is always a short step from electronic reading to electronic writing, from determining the order of texts to altering their structure" (Bolter 95). Significantly, too, it is this potential for wide public participation which has made the Internet into the medium of the day, and led to the World Wide Web's exponential growth; as Bolter describes, "today we cannot hope for permanence and for general agreement on the order of things -- in encyclopedias any more than in politics and the arts. What we have instead is a view of knowledge as collections of (verbal and visual) ideas that can arrange themselves into a kaleidoscope of hierarchical and associative patterns -- each pattern meeting the needs of one class of readers on one occasion" (97). To those searching for some meaningful 'universal truth', this will sound defeatist, but ultimately it is closer to realism -- one person's universal truth is another one's escapist phantasy, after all. 
This doesn't keep most of us from hoping and searching for that deeper insight, however -- and from the preceding discussion, it seems likely that in this we are driven by the desire for closure that has been imprinted in us so deeply by the multitudes of narrative structures we encounter each day. It's no surprise, then, that, as Barrett writes, "the virtual environment is a place of longing. Cyberspace is an odyssey without telos, and therefore without meaning. ... Yet cyberspace is also the theatre of operations for the reconstruction of the lost body of knowledge, or, perhaps more correctly, not the reconstruction, but the always primary construction of a body of knowing. Thought and language in a virtual environment seek a higher synthesis, a re-imagining of an idea in the context of its truth" (xvi). And so we search on, following that by definition end-less quest to satisfy our desire for closure, and sticking largely to the narrative structures handed down to us through the generations. This article is no exception, of course -- but while you may gain some sense of closure from it, it is inevitable that there is a deeper feeling of a lack of closure, too, as the article takes its place in a wider hypertextual context, where so much more is still left unexplored: other articles in this issue, other issues of M/C, and further journals and Websites adding to the debate. Remember this, then: you decide when and where to stop.

References

Barrett, Edward, and Marie Redmond, eds. Contextual Media: Multimedia and Interpretation. Cambridge, Mass.: MIT P, 1995.
Barrett, Edward. "Hiding the Head of Medusa: Objects and Desire in a Virtual Environment." Barrett & Redmond xi-xvi.
Beardon, Colin, and Suzette Worden. "The Virtual Curator: Multimedia Technologies and the Roles of Museums." Barrett & Redmond 63-86.
Bolter, Jay David. Writing Space: The Computer, Hypertext, and the History of Writing. Hillsdale, N.J.: Lawrence Erlbaum Associates, 1991.
Friedlander, Larry. "Spaces of Experience: On Designing Multimedia Applications." Barrett & Redmond 163-74.
Gay, Geri. "Issues in Accessing and Constructing Multimedia Documents." Barrett & Redmond 175-88.
McKnight, Cliff, John Richardson, and Andrew Dillon. "The Authoring of Hypertext Documents." Hypertext: Theory into Practice. Ed. Ray McAleese. Oxford: Intellect, 1993.
Nielsen, Jakob. Hypertext and Hypermedia. Boston: Academic Press, 1990.
Smith, Anthony. Goodbye Gutenberg: The Newspaper Revolution of the 1980's [sic]. New York: Oxford UP, 1980.
Snyder, Ilana. Hypertext: The Electronic Labyrinth. Carlton South: Melbourne UP, 1996.
Tennant, Harry, and George H. Heilmeier. "Knowledge and Equality: Harnessing the Truth of Information Abundance." Technology 2001: The Future of Computing and Communications. Ed. Derek Leebaert. Cambridge, Mass.: MIT P, 1991.
6

Mallan, Kerry Margaret, and Annette Patterson. "Present and Active: Digital Publishing in a Post-print Age." M/C Journal 11, no. 4 (June 24, 2008). http://dx.doi.org/10.5204/mcj.40.

Abstract:
At one point in Victor Hugo’s novel, The Hunchback of Notre Dame, the archdeacon, Claude Frollo, looked up from a book on his table to the edifice of the gothic cathedral, visible from his canon’s cell in the cloister of Notre Dame: “Alas!” he said, “this will kill that” (146). Frollo’s lament, that the book would destroy the edifice, captures the medieval cleric’s anxiety about the way in which Gutenberg’s print technology would become the new universal means for recording and communicating humanity’s ideas and artistic expression, replacing the grand monuments of architecture, human engineering, and craftsmanship. For Hugo, architecture was “the great handwriting of humankind” (149). The cathedral as the material outcome of human technology was being replaced by the first great machine—the printing press. At this point in the third millennium, some people undoubtedly have similar anxieties to Frollo: is it now the book’s turn to be destroyed by yet another great machine? The inclusion of “post print” in our title is not intended to sound the death knell of the book. Rather, we contend that despite the enduring value of print, digital publishing is “present and active” and is changing the way in which research, particularly in the humanities, is being undertaken. Our approach has three related parts. First, we consider how digital technologies are changing the way in which content is constructed, customised, modified, disseminated, and accessed within a global, distributed network. This section argues that the transition from print to electronic or digital publishing means both losses and gains, particularly with respect to shifts in our approaches to textuality, information, and innovative publishing. Second, we discuss the Children’s Literature Digital Resources (CLDR) project, with which we are involved. This case study of a digitising initiative opens out the transformative possibilities and challenges of digital publishing and e-scholarship for research communities. Third, we reflect on technology’s capacity to bring about major changes in the light of the theoretical and practical issues that have arisen from our discussion.

I. Digitising in a “post-print age”

We are living in an era that is commonly referred to as “the late age of print” (see Kho) or the “post-print age” (see Gunkel). According to Aarseth, we have reached a point whereby nearly all of our public and personal media have become more or less digital (37). As Kho notes, web newspapers are not only becoming increasingly more popular, but they are also making rather than losing money, and paper-based newspapers are finding it difficult to recruit new readers from the younger generations (37). Not only can such online-only publications update format, content, and structure more economically than print-based publications, but their wide distribution network, speed, and flexibility attract advertising revenue. Hype and hyperbole aside, publishers are not so much discarding their legacy of print, but recognising the folly of not embracing innovative technologies that can add value by presenting information in ways that satisfy users’ needs for content to-go or for edutainment. As Kho notes: “no longer able to satisfy customer demand by producing print-only products, or even by enabling online access to semi-static content, established publishers are embracing new models for publishing, web-style” (42). 
Advocates of online publishing contend that the major benefits of online publishing over print technology are that it is faster, more economical, and more interactive. However, as Hovav and Gray caution, “e-publishing also involves risks, hidden costs, and trade-offs” (79). The specific focus for these authors is e-journal publishing, and they contend that while cost reduction is in editing, production, and distribution, if the journal is not open access, then costs relating to storage and bandwidth will be transferred to the user. If we put economics aside for the moment, the transition from print to electronic text (e-text), especially with electronic literary works, brings additional considerations, particularly in their ability to make available reading strategies different from print, such as “animation, rollovers, screen design, navigation strategies, and so on” (Hayles 38).

Transition from print to e-text

In his book, Writing Space, David Bolter follows Victor Hugo’s lead, but does not ask if print technology will be destroyed. Rather, he argues that “the idea and ideal of the book will change: print will no longer define the organization and presentation of knowledge, as it has for the past five centuries” (2). As Hayles noted above, one significant indicator of this change, which is a consequence of the shift from analogue to digital, is the addition of graphical, audio, visual, sonic, and kinetic elements to the written word. A significant consequence of this transition is the reinvention of the book in a networked environment. Unlike the printed book, the networked book is not bound by space and time. Rather, it is an evolving entity within an ecology of readers, authors, and texts. The Web 2.0 platform has enabled more experimentation with blending of digital technology and traditional writing, particularly in the use of blogs, which have spawned blogwriting and the wikinovel. Siva Vaidhyanathan’s The Googlization of Everything: How One Company is Disrupting Culture, Commerce and Community … and Why We Should Worry is a wikinovel or blog book that was produced over a series of weeks with contributions from other bloggers (see: http://www.sivacracy.net/). Penguin Books, in collaboration with a media company, “Six Stories to Start,” have developed six stories—“We Tell Stories,” which involve different forms of interactivity from users through blog entries, Twitter text messages, an interactive google map, and other features. For example, the story titled “Fairy Tales” allows users to customise the story using their own choice of names for characters and descriptions of character traits. Each story is loosely based on a classic story and links take users to synopses of these original stories and their authors and to online purchase of the texts through the Penguin Books sales website. These examples of digital stories are a small part of the digital environment, which exploits computer and online technologies’ capacity to be interactive and immersive. As Janet Murray notes, the interactive qualities of digital environments are characterised by their procedural and participatory abilities, while their immersive qualities are characterised by their spatial and encyclopedic dimensions (71–89). These immersive and interactive qualities highlight different ways of reading texts, which entail different embodied and cognitive functions from those that reading print texts requires. 
As Hayles argues: the advent of electronic textuality presents us with an unparalleled opportunity to reformulate fundamental ideas about texts and, in the process, to see print as well as electronic texts with fresh eyes (89–90). The transition to e-text also highlights how digitality is changing all aspects of everyday life both inside and outside the academy.

Online teaching and e-research

Another aspect of the commercial arm of publishing that is impacting on academe and other organisations is the digitising and indexing of print content for niche distribution. Kho offers the example of the Mark Logic Corporation, which uses its XML content platform to repurpose content, create new content, and distribute this content through multiple portals. As the promotional website video for Mark Logic explains, academics can use this service to customise their own textbooks for students by including only articles and book chapters that are relevant to their subject. These are then organised, bound, and distributed by Mark Logic for sale to students at a cost that is generally cheaper than most textbooks. A further example of how print and digital materials can form an integrated, customised source for teachers and students is eFictions (Trimmer, Jennings, & Patterson). eFictions was one of the first print and online short story anthologies that teachers of literature could customise to their own needs. Produced as both a print text collection and a website, eFictions offers popular short stories in English by well-known traditional and contemporary writers from the US, Australia, New Zealand, UK, and Europe, with summaries, notes on literary features, author biographies, and, in one instance, a YouTube movie of the story. In using the eFictions website, teachers can build a customised anthology of traditional and innovative stories to suit their teaching preferences. These examples provide useful indicators of how content is constructed, customised, modified, disseminated, and accessed within a distributed network. However, the question remains as to how to measure their impact and outcomes within teaching and learning communities. As Harley suggests in her study on the use and users of digital resources in the humanities and social sciences, several factors warrant attention, such as personal teaching style, philosophy, and specific disciplinary requirements. However, in terms of understanding the benefits of digital resources for teaching and learning, Harley notes that few providers in her sample had developed any plans to evaluate use and users in a systematic way. In addition to the problems raised in Harley’s study, another relates to how researchers can be supported to take full advantage of digital technologies for e-research. The transformation brought about by information and communication technologies extends and broadens the impact of research, by making its outputs more discoverable and usable by other researchers, and its benefits more available to industry, governments, and the wider community. Traditional repositories of knowledge and information, such as libraries, are juggling the space demands of books and computer hardware alongside increasing reader demand for anywhere, anytime, anyplace access to information. 
Researchers’ expectations about online access to journals, eprints, bibliographic data, and the views of others through wikis, blogs, and associated social and information networking sites such as YouTube compete with the traditional expectations of the institutions that fund libraries for paper-based archives and book repositories. While university libraries are finding it increasingly difficult to purchase all hardcover books relevant to numerous and varied disciplines, a significant proportion of their budgets goes towards digital repositories (e.g., STORS), indexes, and other resources, such as full-text electronic specialised and multidisciplinary journal databases (e.g., Project Muse and Proquest); electronic serials; e-books; and specialised information sources through fast (online) document delivery services. An area that is becoming increasingly significant for those working in the humanities is the digitising of historical and cultural texts.

II. Bringing back the dead: The CLDR project

The CLDR project is led by researchers and librarians at the Queensland University of Technology, in collaboration with Deakin University, University of Sydney, and members of the AustLit team at The University of Queensland. The CLDR project is a “Research Community” of the electronic bibliographic database AustLit: The Australian Literature Resource, which is working towards the goal of providing a complete bibliographic record of the nation’s literature. AustLit offers users a single entry point to enhanced scholarly resources on Australian writers, their works, and other aspects of Australian literary culture and activities. AustLit and its Research Communities are supported by grants from the Australian Research Council and financial and in-kind contributions from a consortium of Australian universities, and by other external funding sources such as the National Collaborative Research Infrastructure Strategy. Like other more extensive digitisation projects, such as Project Gutenberg and the Rosetta Project, the CLDR project aims to provide a centralised access point for digital surrogates of early published works of Australian children’s literature, with access pathways to existing resources. The first stage of the CLDR project is to provide access to digitised, full-text, out-of-copyright Australian children’s literature from European settlement to 1945, with selected digitised critical works relevant to the field. Texts comprise a range of genres, including poetry, drama, and narrative for young readers and picture books, songs, and rhymes for infants. Currently, a selection of 75 e-texts and digital scans of original texts from Project Gutenberg and Internet Archive have been linked to the Children’s Literature Research Community. By the end of 2009, the CLDR will have digitised approximately 1000 literary texts and a significant number of critical works. Stage II and subsequent development will involve digitisation of selected texts from 1945 onwards. A precursor to the CLDR project has been undertaken by Deakin University in collaboration with the State Library of Victoria, whereby a digital bibliographic index comprising Victorian School Readers has been completed with plans for full-text digital surrogates of a selection of these texts. These texts provide valuable insights into citizenship, identity, and values formation from the 1930s onwards. At the time of writing, the CLDR is at an early stage of development. 
An extensive survey of out-of-copyright texts has been completed and the digitisation of these resources is about to commence. The project plans to make rich content searchable, allowing scholars from children’s literature studies and education to benefit from the many advantages of online scholarship. What digital publishing and associated digital archives, electronic texts, hypermedia, and so forth foreground is the fact that writers, readers, publishers, programmers, designers, critics, booksellers, teachers, and copyright laws operate within a context that is highly mediated by technology. In his article on large-scale digitisation projects carried out by Cornell and University of Michigan with the Making of America collection of 19th-century American serials and monographs, Hirtle notes that when special collections’ materials are available via the Web, with appropriate metadata and software, then they can “increase use of the material, contribute to new forms of research, and attract new users to the material” (44). Furthermore, Hirtle contends that despite the poor ergonomics associated with most electronic displays and e-book readers, “people will, when given the opportunity, consult an electronic text over the print original” (46). If this preference is universally accurate, especially for researchers and students, then it follows that not only will the preference for electronic surrogates of original material increase, but preference for other kinds of electronic texts will also increase. It is with this preference for electronic resources in mind that we approached the field of children’s literature in Australia and asked questions about how future generations of researchers would prefer to work. If electronic texts become the reference of choice for primary as well as secondary sources, then it seems sensible to assume that researchers would prefer to sit at the end of the keyboard rather than travel considerable distances at considerable cost to access paper-based print texts in distant libraries and archives. We considered the best means for providing access to digitised primary and secondary, full-text material, and digital pathways to existing online resources, particularly an extensive indexing and bibliographic database. Prior to the commencement of the CLDR project, AustLit had already indexed an extensive body of children’s literature.

Challenges and dilemmas

The CLDR project, even in its early stages of development, has encountered a number of challenges and dilemmas that centre on access, copyright, economic capital, practical aspects of digitisation, and sustainability. These issues have relevance for digital publishing and e-research. A decision is yet to be made as to whether the digital texts in CLDR will be available on open or closed/tolled access. The preference is for open access. As Hayles argues, copyright is more than a legal basis for intellectual property, as it also entails ideas about authorship, creativity, and the work as an “immaterial mental construct” that goes “beyond the paper, binding, or ink” (144). Seeking copyright permission is therefore only part of the issue. Determining how the item will be accessed is a further matter, particularly as future technologies may impact upon how a digital item is used. In the case of e-journals, copyright payment structures are evolving towards a collective licensing system, pay-per-view, and other combinations of print and electronic subscription (see Hovav and Gray). 
For research purposes, digitisation of items for CLDR is not simply a scan-and-deliver process. Rather, it is one that needs to ensure that the best quality is provided and that the item is both accessible and usable by researchers, and sustainable for future researchers. Sustainability is an important consideration and provides a challenge for institutions that host projects such as CLDR. Therefore, items need to be scanned to a high quality, and this requires an expensive scanner and incurs personnel costs. Files need to be in a variety of formats for preservation purposes and so that they may be manipulated to be useable in different technologies (for example, archival TIFF, TIFF, JPEG, PDF, HTML). Hovav and Gray warn that when technology becomes obsolete, content becomes unreadable unless backward integration is maintained. The CLDR items will be annotatable given AustLit’s NeAT-funded project: Aus-e-Lit. The Aus-e-Lit project will extend and enhance the existing AustLit web portal with data integration and search services, empirical reporting services, collaborative annotation services, and compound object authoring, editing, and publishing services. For users to be able to get the most out of a digital item, it needs to be searchable, either through double keying or OCR (optical character recognition).

The value of CLDR’s contribution

The value of the CLDR project lies in its goal to provide a comprehensive, searchable body of texts (fictional and critical) to researchers across the humanities and social sciences. Other projects seem to be intent on putting up as many items as possible to be considered as a first resort for online texts. CLDR is more specific and is not interested in simply generating a presence on the Web. Rather, it is research driven both in its design and implementation, and in its focussed outcomes of assisting academics and students primarily in their e-research endeavours. To this end, we have concentrated on the following: an extensive survey of appropriate texts; best models for file location, distribution, and use; and high standards of digitising protocols. These issues that relate to data storage, digitisation, collections, management, and end-users of data are aligned with the “Development of an Australian Research Data Strategy” outlined in An Australian e-Research Strategy and Implementation Framework (2006). CLDR is not designed to simply replicate resources, as it has a distinct focus, audience, and research potential. In addition, it looks at resources that may be forgotten or are no longer available in reproduction by current publishing companies. Thus, the aim of CLDR is to preserve both the time and a period of Australian history and literary culture. It will also provide users with an accessible repository of rare and early texts written for children.

III. Future directions

It is now commonplace to recognize that the Web’s role as information provider has changed over the past decade. New forms of “collective intelligence” or “distributed cognition” (Oblinger and Lombardi) are emerging within and outside formal research communities. Technology’s capacity to initiate major cultural, social, educational, economic, political and commercial shifts has conditioned us to expect the “next big thing.” We have learnt to adapt swiftly to the many challenges that online technologies have presented, and we have reaped the benefits. 
As the examples in this discussion have highlighted, the changes in online publishing and digitisation have provided many material, network, pedagogical, and research possibilities: we teach online units providing students with access to e-journals, e-books, and customized archives of digitised materials; we communicate via various online technologies; we attend virtual conferences; and we participate in e-research through a global, digital network. In other words, technology is deeply engrained in our everyday lives. In returning to Frollo’s concern that the book would destroy architecture, Umberto Eco offers a placatory note: “in the history of culture it has never happened that something has simply killed something else. Something has profoundly changed something else” (n. pag.). Eco’s point has relevance to our discussion of digital publishing. The transition from print to digital necessitates a profound change that impacts on the ways we read, write, and research. As we have illustrated with our case study of the CLDR project, the move to creating digitised texts of print literature needs to be considered within a dynamic network of multiple causalities, emergent technological processes, and complex negotiations through which digital texts are created, stored, disseminated, and used. Technological changes in just the past five years have, in many ways, created an expectation in the minds of people that the future is no longer some distant time from the present. Rather, as our title suggests, the future is both present and active.

References

Aarseth, Espen. “How we became Postdigital: From Cyberstudies to Game Studies.” Critical Cyber-culture Studies. Ed. David Silver and Adrienne Massanari. New York: New York UP, 2006. 37–46.
An Australian e-Research Strategy and Implementation Framework: Final Report of the e-Research Coordinating Committee. Commonwealth of Australia, 2006.
Bolter, Jay David. Writing Space: The Computer, Hypertext, and the History of Writing. Hillsdale, NJ: Erlbaum, 1991.
Eco, Umberto. “The Future of the Book.” 1994. 3 June 2008 <http://www.themodernword.com/eco/eco_future_of_book.html>.
Gunkel, David J. “What's the Matter with Books?” Configurations 11.3 (2003): 277–303.
Harley, Diane. “Use and Users of Digital Resources: A Focus on Undergraduate Education in the Humanities and Social Sciences.” Research and Occasional Papers Series. Berkeley: University of California, Centre for Studies in Higher Education. 12 June 2008 <http://www.themodernword.com/eco/eco_future_of_book.html>.
Hayles, N. Katherine. My Mother was a Computer: Digital Subjects and Literary Texts. Chicago: U of Chicago P, 2005.
Hirtle, Peter B. “The Impact of Digitization on Special Collections in Libraries.” Libraries & Culture 37.1 (2002): 42–52.
Hovav, Anat, and Paul Gray. “Managing Academic E-journals.” Communications of the ACM 47.4 (2004): 79–82.
Hugo, Victor. The Hunchback of Notre Dame (Notre-Dame de Paris). Ware, Hertfordshire: Wordsworth Editions, 1993.
Kho, Nancy D. “The Medium Gets the Message: Post-Print Publishing Models.” EContent 30.6 (2007): 42–48.
Murray, Janet H. Hamlet on the Holodeck: The Future of Narrative in Cyberspace. Cambridge, MA: MIT Press, 2001.
Oblinger, Diana, and Marilyn Lombardi. “Common Knowledge: Openness in Higher Education.” Opening up Education: The Collective Advancement of Education Through Open Technology, Open Content and Open Knowledge. Ed. Toru Iiyoshi and M. S. Vijay Kumar. Cambridge, MA: MIT Press, 2007. 389–400.
Trimmer, Joseph F., Wade Jennings, and Annette Patterson. eFictions. New York: Harcourt, 2001.

Book chapters on the topic "Computer-based hypermedia documents"

1

Suh, Woojong, and Heeseok Lee. "Hypermedia Document Management." In Human Computer Interaction Development & Management, 71–92. IGI Global, 2002. http://dx.doi.org/10.4018/978-1-931777-13-1.ch005.

Abstract:
Recently, many organizations have attempted to build hypermedia systems to expand their working areas into Internet-based virtual workplaces. It is therefore important to manage corporate hypermedia documents effectively, and metadata plays a critical role in managing these documents. This chapter identifies metadata roles and components with which to build a metadata schema, and proposes a meta-information system, HyDoMiS (Hyperdocument Meta-information System), based on that schema. HyDoMiS performs three functions: metadata management, search, and reporting. The metadata management function covers workflow, documents, and databases. The system is likely to help implement and maintain hypermedia information systems effectively.
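As an illustration of the kind of metadata schema the chapter argues for, the sketch below models a hyperdocument record with workflow, document, and database facets, together with a trivial search over the records (one of HyDoMiS's three functions). All field names and sample values are invented assumptions, not the published HyDoMiS schema:

```python
# A hyperdocument metadata record with workflow, document, and database facets.
from dataclasses import dataclass, field

@dataclass
class HyperdocumentMetadata:
    doc_id: str
    title: str
    url: str
    owner: str            # workflow facet: who maintains the document
    workflow_task: str    # workflow facet: the business task it supports
    linked_docs: list = field(default_factory=list)    # document facet: links
    source_tables: list = field(default_factory=list)  # database facet

records = [
    HyperdocumentMetadata(
        doc_id="D-001", title="Order form", url="/orders/new",
        owner="sales", workflow_task="order entry",
        linked_docs=["D-002"], source_tables=["orders", "customers"]),
]

# Search: find every hyperdocument backed by the "orders" table.
hits = [r for r in records if "orders" in r.source_tables]
```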
