
Journal articles on the topic "Successione ed internet provider"



Listed below are the top 29 journal articles for research on the topic "Successione ed internet provider".




1

Papa, Anna. "La complessa realtà della Rete tra "creatività" dei fornitori di servizi Internet ed esigenze regolatorie pubbliche: la sottile linea di demarcazione tra provider di servizi "content" e di "hosting attivo"". ECONOMIA E DIRITTO DEL TERZIARIO, no. 2 (November 2012): 221–53. http://dx.doi.org/10.3280/ed2012-002004.

Abstract
The Internet presents itself as a complex reality in which the function of data transmission is joined, with ever greater importance, by functions tied to the widespread use of the tools of the information and communication society. The number of actors offering services related to these functions is also constantly growing, in particular connectivity providers and operators of applications that enable the communication and dissemination of news, opinions and content on the Internet. Faced with such a complex reality, national legislation on the matter, in line with European rules, is still not very sensitive to the different roles played by those operating on the Net, whether as service providers or as users. It assigns a central position, above all in terms of liability, to the Internet provider, regarded as the central actor in the use of Internet services, even within the tripartite classification now laid down by national law (following the EU framework), which distinguishes between access, caching and hosting. In fact, while the importance of service providers for the functioning (and control) of the Net is beyond doubt, it now seems clear that Internet service providers form a far more articulated and dynamic universe, with services that occupy the space created by the Net rather than being confined to the transmission or storage of data that is uploaded or produced. A first important consequence is the difficulty of distinguishing between "hosting" and "content" providers. It is above all the latter that have evolved considerably in recent years; the essay offers three examples: institutional websites, platform operators and curators of discussion spaces. In the absence of statutory regulation at the European and national level, case law is seeking a point of balance among the various interests involved that takes account of the characteristics and the innovative nature of the Net compared with pre-existing experiences and contexts. It is nonetheless clear that judicial action alone cannot stabilise or lend reliability to a sector that instead needs rules, the product of shared reflection, capable of guaranteeing a "regulated" mode of operation that respects individual rights and competition while at the same time supporting the profound innovation in information and communication that the Net is bringing about.
2

Ly, Sophia, Ricky Tsang, and Kendall Ho. "Patient Perspectives on the Digitization of Personal Health Information in the Emergency Department: Mixed Methods Study During the COVID-19 Pandemic". JMIR Medical Informatics 10, no. 1 (January 6, 2022): e28981. http://dx.doi.org/10.2196/28981.

Abstract
Background: Although the digitization of personal health information (PHI) has been shown to improve patient engagement in the primary care setting, patient perspectives on its impact in the emergency department (ED) are unknown. Objective: The primary objective was to characterize the views of ED users in British Columbia, Canada, on the impacts of PHI digitization on ED care. Methods: This was a mixed methods study consisting of an online survey followed by key informant interviews with a subset of survey respondents. ED users in British Columbia were asked about their ED experiences and attitudes toward PHI digitization in the ED. Results: A total of 108 participants submitted survey responses between January and April 2020. Most survey respondents were interested in the use of electronic health records (79/105, 75%) and patient portals (91/107, 85%) in the ED and were amenable to sharing their ED PHI with ED staff (up to 90% in emergencies), family physicians (up to 91%), and family caregivers (up to 75%). In addition, 16 survey respondents provided key informant interviews in August 2020. Interviewees expected PHI digitization in the ED to enhance PHI access by health providers, patient-provider relationships, patient self-advocacy, and postdischarge care management, although some voiced concerns about patient privacy risk and limited access to digital technologies (eg, smart devices, internet connection). Many participants thought the COVID-19 pandemic could provide momentum for the digitization of health care. Conclusions: Patients overwhelmingly support PHI digitization in the form of electronic health records and patient portals in the ED. The COVID-19 pandemic may represent a critical moment for the development and implementation of these tools.
3

Ssendikaddiwa, Joseph, and Ruth Lavergne. "Access to Primary Care and Internet Searches for Walk-In Clinics and Emergency Departments in Canada: Observational Study Using Google Trends and Population Health Survey Data". JMIR Public Health and Surveillance 5, no. 4 (November 18, 2019): e13130. http://dx.doi.org/10.2196/13130.

Abstract
Background: Access to primary care is a challenge for many Canadians. Models of primary care vary widely among provinces, including arrangements for same-day and after-hours access. Use of walk-in clinics and emergency departments (EDs) may also vary, but data sources that allow comparison are limited. Objective: We used Google Trends to examine the relative frequency of searches for walk-in clinics and EDs across provinces and over time in Canada. We correlated provincial relative search frequencies from Google Trends with survey responses about primary care access from the Commonwealth Fund’s 2016 International Health Policy Survey of Adults in 11 Countries and the 2016 Canadian Community Health Survey. Methods: We developed search strategies to capture the range of terms used for walk-in clinics (eg, urgent care clinic and after-hours clinic) and EDs (eg, emergency room) across Canadian provinces. We used Google Trends to determine the frequencies of these terms relative to total search volume within each province from January 2011 to December 2018. We calculated correlation coefficients and 95% CIs between provincial Google Trends relative search frequencies and survey responses. Results: Relative search frequency of walk-in clinic searches increased steadily, doubling in most provinces between 2011 and 2018. Relative frequency of walk-in clinic searches was highest in the western provinces of British Columbia, Alberta, Saskatchewan, and Manitoba. At the provincial level, higher walk-in clinic relative search frequency was strongly positively correlated with the percentage of survey respondents who reported being able to get same- or next-day appointments to see a doctor or a nurse and inversely correlated with the percentage of respondents who reported going to the ED for a condition that they thought could have been treated by providers at their usual place of care. Relative search frequency for walk-in clinics was also inversely correlated with the percentage of respondents who reported having a regular medical provider. ED relative search frequencies were more stable over time, and we did not observe a statistically significant correlation with survey data. Conclusions: Higher relative search frequency for walk-in clinics was positively correlated with the ability to get a same- or next-day appointment and inversely correlated with ED use for conditions treatable in the patient’s regular place of care and also with having a regular medical provider. Findings suggest that patient use of Web-based tools to search for more convenient or accessible care through walk-in clinics is increasing over time. Further research is needed to validate Google Trends data with administrative information on service use.
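The correlation analysis this abstract describes (provincial Google Trends relative search frequency against survey percentages, with 95% CIs) can be illustrated with standard statistical tooling. The Python sketch below is not the authors' code: the province values are invented placeholders, and it assumes a Pearson correlation with a Fisher z-based confidence interval, which may differ from the exact method used in the study.

    # Minimal sketch: Pearson r between Google Trends relative search frequency
    # and a survey-derived percentage, one pair per province (placeholder data).
    import math
    from scipy import stats

    trends_rsf = [62, 71, 68, 55, 43, 40, 38, 36, 33, 30]    # walk-in clinic relative search frequency (0-100)
    same_day_pct = [53, 57, 55, 48, 41, 39, 37, 35, 34, 31]  # % reporting same- or next-day appointments

    r, p = stats.pearsonr(trends_rsf, same_day_pct)

    # 95% CI from the Fisher z-transformation of r.
    n = len(trends_rsf)
    z = math.atanh(r)
    se = 1 / math.sqrt(n - 3)
    lo, hi = math.tanh(z - 1.96 * se), math.tanh(z + 1.96 * se)

    print(f"r = {r:.2f} (95% CI {lo:.2f} to {hi:.2f}), p = {p:.3f}")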
4

Imran Latif Saifi, Nasreen Akhter, and Lubna Salamat. "Covid-19 Pandemic Shutdown: Challenges of Hei’s Electronic Support Services in Teacher Education Programs". International Journal of Distance Education and E-Learning 6, no. 1 (January 14, 2021): 149–69. http://dx.doi.org/10.36261/ijdeel.v6i1.1427.

Abstract
Coronavirus disease 2019 (COVID-19) is caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). A pandemic is the worldwide outbreak of a disease. Electronic services (e-services) are technology-based and offer different electronic channels, i.e., e-learning and coaching (online learning), e-library, etc. This study was designed to explore the challenges of HEIs' electronic support services in teacher education during the COVID-19 pandemic shutdown. The objectives of the study were to explore the challenges of e-support services in teacher education programs arising from the pandemic shutdown and to propose a framework for the stakeholders of HEIs' e-support services. The study was descriptive and used the survey method. The teacher education program B.Ed (1.5 years) at two universities in Pakistan, one in formal mode and one in distance education online mode, was selected, and all prospective teachers of the 2nd semester were defined as the population of the study. The sample consisted of 150 students, selected conveniently. An online questionnaire with 15 close-ended statements on a 5-point Likert scale was used as the data collection tool. It was concluded that the facility to purchase internet bundles was not available to students during the COVID-19 pandemic shutdown, and that students and academic staff were not trained in online teaching and learning procedures. It was proposed that HEIs arrange internet bundles for students, academic staff and institutions in collaboration with internet provider companies, and also focus on training staff and students for online education, because the COVID-19 pandemic shutdown is expected to change future educational procedures until the situation returns to normal.
5

Burstein, Brett, Jocelyn Gravel, Paul Aronson, and Mark Neuman. "Emergency Department and Inpatient Clinical Decision Tools for the Management of Febrile Young Infants among Tertiary Pediatric Centers across Canada". Paediatrics & Child Health 23, suppl_1 (May 18, 2018): e7-e8. http://dx.doi.org/10.1093/pch/pxy054.019.

Abstract
BACKGROUND: With no nationally endorsed guidelines and newer diagnostic tools, there exists widespread practice variation in the management of febrile infants <90 days. OBJECTIVES: This study sought to evaluate the prevalence of clinical decision tools (CDTs) for the management of febrile young infants in the Emergency Department (ED) and inpatient settings among all tertiary paediatric centers across Canada. DESIGN/METHODS: A cross-sectional, Internet-based survey was distributed to both an ED and an inpatient physician representative at each of the 16 Canadian tertiary paediatric centers. Participants were asked to characterize their clinical settings, diagnostic test availability and institutional febrile young infant CDTs. Copies were requested of all febrile infant-specific materials for independent classification as clinical pathway, guideline or order set, and content review using list items determined a priori. The primary analysis was the proportion of settings that use a CDT for the management of febrile infants. Chi-square testing was used to compare proportions. RESULTS: Survey response rate was 100% (n = 32, 16 ED and 16 inpatient). Febrile young infant CDTs of any type were infrequently reported overall (9/32, 38%), and were more common in the ED than inpatient setting (50% vs. 6%, p=0.02). Prevalence of any CDT was not associated with hospital volume or physician training. Among EDs, clinical pathways, guidelines, and order sets were available at 6/16 (38%), 1/16 (6%), and 4/16 (25%) institutions, respectively. Among centers reporting existent CDTs, few reported ED or inpatient tracking of provider adherence or audits of impact (3/9, 33% overall). Review of existing CDTs revealed inter-center differences for inclusion ages, antibiotic treatment regimens, lumbar puncture recommendations, diagnostic testing and normal laboratory reference values. Despite wide availability reported at nearly all centers, C-reactive protein and respiratory viral testing were each rarely incorporated into existent CDTs (3/9, 33% for both). Procalcitonin testing was reported to be available at 2/16 (13%) centers, and was not incorporated into any existing CDTs. CONCLUSION: CDTs for the management of febrile young infants are infrequently available among Canadian tertiary paediatric centers, and when present, rarely contain information on newer diagnostic tests. The paucity of CDTs among paediatric academic training centers may in part underlie ongoing practice variation. Heterogeneity among existent CDTs highlights the need for the establishment of updated and unified ED and inpatient national guidelines.
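For readers unfamiliar with the chi-square comparison of proportions mentioned in the methods, the Python sketch below shows the generic form of such a test, using counts consistent with the reported 50% vs. 6% (8/16 ED vs. 1/16 inpatient). It is illustrative only and not the study's analysis code; with cells this small the authors may have used a continuity correction or an exact test instead.

    # Minimal sketch: chi-square test comparing CDT prevalence between settings.
    # scipy applies Yates' continuity correction by default for 2x2 tables.
    from scipy.stats import chi2_contingency

    table = [[8, 8],    # ED: CDT present, CDT absent
             [1, 15]]   # inpatient: CDT present, CDT absent

    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")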
6

Kaleem, Tasneem, and Robert Clell Miller. "Trends in cancer care with the Affordable Care Act." Journal of Clinical Oncology 34, no. 7_suppl (March 1, 2016): 46. http://dx.doi.org/10.1200/jco.2016.34.7_suppl.46.

Abstract
Background: Accountable Care Organizations (ACOs), as proposed by the Affordable Care Act, will change the delivery of health care in the United States. ACOs serve as a network of providers with primary care providers (PCPs) set up as gate-keepers for referrals to specialists. Within the next several years, many trends will emerge and drive progress of change, requiring oncologists to take a lead role in adapting to the evolving landscape of health care. Methods: Literature search of internet-based and academic sources for oncology and the Affordable Care Act, with a focus on ACO formation. Results: Four main expected trends and strategies to adapt to changes were formulated. Trend 1: Changes in referral patterns towards oncologists. Referral will be based on outcome data and ACO membership. Strategy: Increase communication and education to PCPs and other providers. Endorse multidisciplinary clinics, which have been shown to improve guideline compliance, coordination, and communication. Trend 2: Formation of large-scale oncology provider groups collaborating with PCPs/ACOs. Physicians will be able to provide around-the-clock care to patients with the goal of reducing hospital visits. Strategy: Establish oncology homes with the goal of reducing inpatient and ED visits by providing telephone symptom management, daily questionnaires and opportunities for end-of-life discussions. Trend 3: Reimbursement reform to oncologists based on quality measures. ACOs can bill on a fee-for-service basis and are eligible for bonus payments based on outcomes. Strategy: Adherence to evidence-based guidelines chosen by evaluating efficacy, toxicity and cost has been proven to increase the quality of patient care. Trend 4: Development of pathway-driven medicine. The ACO structure lends itself to a centralized governance committee responsible for choosing guidelines for treatment within an ACO. Strategy: Oncologists should provide a voice for the field and patients when different guidelines are chosen. Conclusions: In the context of the Affordable Care Act, oncology specialists are encouraged to participate in the new organization model to ensure the best outcomes for both physicians and patients. Awareness of future trends and ways to contribute will be the first step in adapting to implementation of the Affordable Care Act.
7

Situmorang, Mosgan. "Membangun Akuntabilitas Organisasi Bantuan Hukum". Jurnal Rechts Vinding: Media Pembinaan Hukum Nasional 2, no. 1 (April 30, 2013): 107. http://dx.doi.org/10.33331/rechtsvinding.v2i1.85.

Abstract
Law No. 16 of 2011 on Legal Aid states that a legal aid provider is a legal aid organization or community organization that provides legal aid services. The legal services provided to legal aid recipients are free of charge, in the sense that providers receive no fee from the people they assist; however, the government provides funding for each case handled, in an amount that depends on the type of case. This funding is not given to every legal aid organization, but only to those that meet the requirements of the Legal Aid Act. Because the funds come from the state budget, the legal aid organizations that receive them must of course be accountable to the public. This paper is a normative study: the data used are secondary data, consisting of primary material, namely laws and regulations, above all Law No. 16 of 2011 and other related legislation, and secondary material in the form of literature and data from the internet. The study concludes that the Legal Aid Act already anticipates the need for accountability of legal aid organizations, but that this still needs to be strengthened by making rules that support such accountability, above all rules setting standards for legal aid.
8

Britt, Deron, Udi Blankstein, Matthew Lenardis, Alexandra Millman, Ethan Grober, and Yonah Krakowsky. "Availability of platelet-rich plasma for treatment of erectile dysfunction and associated costs and efficacy: A review of current publications and Canadian data". Canadian Urological Association Journal 15, no. 6 (November 17, 2020). http://dx.doi.org/10.5489/cuaj.6947.

Abstract
Introduction: Platelet-rich plasma (PRP) is an increasingly used unconventional treatment option for erectile dysfunction (ED). The validity of PRP as a potential treatment for ED has been proposed in limited human trials. Furthermore, the costs associated with PRP for ED treatment are not readily promoted to patients. The goal of this review was to determine the efficacy and costs of PRP based on currently available literature and Canadian data. Methods: A comprehensive literature review of available PRP studies and current published data pertaining to cost, availability, and provider clinics globally was conducted using the PubMed database. Physicians offering genital PRP in Canada were identified using internet searches and PRP provider directories. Physician qualifications, clinic locations, and cost information were obtained from provider websites and telephone calls to identified clinics. Results: Availability of PRP injections offered for treating ED is increasing globally. There are currently no peer-reviewed publications to substantiate anecdotal evidence pertaining to the efficacy of PRP as a viable treatment option for ED patients. Our results indicate 19 providers for PRP injections in Canada, costing on average $1777 CAD per injection. No providers were affiliated with academic institutions and providers varied in their area of clinical speciality and training. Conclusions: To our knowledge, there is currently no research underway investigating the clinical efficacy of PRP for ED treatment despite its broad availability and significant cost. Patients should be informed of the lack of substantiated efficacy and safety data, as the reliability of PRP treatments requires further evaluation.
9

Green, Lelia. "Relating to Internet 'Audiences'". M/C Journal 3, no. 1 (March 1, 2000). http://dx.doi.org/10.5204/mcj.1826.

Abstract
Audiences are a contested domain with Ang and others desperate to analyse, anatomise, understand and describe them. They are particularly important for the commercialisation of any medium since advertisers like to know what they are getting for their money and, in the famous aphorism, 'the role of the commercial media is to deliver audiences to advertisers'. Marshall's concept of 'audience-commodity' continues this intellectual interrogation of the audience and its production by individual practices of media consumption. Mass media audiences have consumed much research attention over most of the past century with major consideration being paid to the displacement of other activities arising from the consumption of newly-introduced media, effects of the media and a succession of moral panics. It has only been in recent years that 'the audience' has been researched on (essentially) its own terms -- in the branch of media and culture studies enquiry called, conveniently, 'audience studies'. Well- known Australian examples of such studies often concern children and adolescents and include: Hodge & Tripp, Noble, and Palmer (now Gillard). Audience studies assumes that audience participants are sufficiently insightful and sufficiently cognisant of their various pleasures, desires and frustrations to be able to discuss their media consumption patterns with interested researchers. The paradigm takes as read that people have reasons for their behaviours, and sets out to uncover what these are through (often) a variety of interview and observation techniques. It accords audience membership an importance in people's lives. The nature of the 'general' audience is illuminated by specific comments and examples offered during the research process by specific audience members -- analysed and interpreted by the research team. What is clear from a cursory glance at the literature is that audiences do not talk about 'broadcasting' per se, they talk about specific programs and have a tendency to compare programs with others of the same type. Audiences perceive broadcasting as divided into genred broadcasting streams. Unless asked to do so, an audience member (and I've formally interviewed over two hundred such people) is unlikely to compare Home and Away with the ABC Evening News. Comparisons between Home and Away and Neighbours are commonplace, however. What genre is the Internet? A silly question, I know -- but one that is begged by the repeated discussions of Internet culture, Internet communications and information and Internet communities as 'the Internet'. It's a long time since media studies and popular culture academics have discussed 'broadcasting' generically because concern for the specifics of genred broadcasting (both in television and radio) have rendered generalised discussion ridiculously global and oversimplified. In broadcasting we talk about television and radio as if they were (since they are) significantly different. We recognise that the production values for soap opera, drama, sport, news and current affairs and light entertainment are dissimilar. It's only silly to ask 'what genre is the Internet' because, when we think about it, the Internet is multiply genred. Audiences that consume broadcast programmes can be differentiated from each other in terms of age, gender and socioeconomic status, and in terms of viewing place, viewing style, motivation and preferred programme genres. 
As Morley indicates in his 1986 treatise, Family Television: Cultural Power and Domestic Leisure, the domestic context is central to the everyday consumption of TV. He argues that "the social dimensions of 'watching television' -- the social relationships within which viewing is performed as an activity -- have to be brought more directly into focus if we are properly to understand television audiences' choices of, and responses to, their viewing" (15). That focus upon social relationships as the domestic context within which television is consumed is the substance of his book. Holmes suggests that much of the appeal of the Internet is a spurious one, viz. by selling "a new kind of community to those who have been disconnected from geographical communities" (35). He claims that society has been divided into a multitude of separate domestic spheres within which television is consumed, creating an isolation which the Internet is marketed as solving. "The Internet offers to the dispossessed the ability to remove some of the walls for brief periods of time in return for a time-charged fee" (35). A key to understanding the domestic consumption of television, however, is an understanding of the specifics of genre, and the pleasures associated with the consumption of the genre. Uses to which the broadcast material is put in daily life in interpersonal settings are essentially related to the broadcast material consumed. Discussion of soaps, and of finance reporting, may both be used to develop interpersonal networks and to display current knowledge, but these discussions are likely to occur in different domestic/work contexts. Have we had enough of generalised discussion of the global Internet? Can we move onto addressing whether it is genred; and if so, in which ways? Faced with the cacophony which is the Internet today -- let alone the projected manifestation of the Internet tomorrow -- we are forced to conclude that the Internet has the potential to mimic the features of all the media and genres that have preceded it, and more. It can operate as a mass medium, as a niche medium, and as one-to-one discrete communication -- Dayan's 'particularistic' media (103-13). Within all these categories it can (or has the potential to) work in audio, visual, audiovisual, text and data. On top of this complexity, it offers a variety of degrees of interactivity from simple access to full content creation as part of the communication exchange. You thought Media Studies was big? Watch out for the disciplinary field of Internet Studies! The concept of the active audience has been a staple of audience studies theory for a generation. Here the activity recognised in the 'active' audience is one of the audience actively engaging with programme content -- resisting, reformulating and recirculating the messages and meanings on offer. This is a different level of interactivity compared with that implicit in some aspects of the Internet (online community, for example). Internet interactivity recognises that the text is produced as part of the act of consumption. Have the audience activity characteristics of online community members been sufficiently differentiated from -- say -- the activity of accessing Encyclopaedia Britannica online? Are online community members more of a 'www.participants' than an 'audience'; should we see audiences as genred too? 
Television audiences (as my anonymous reviewer has helpfully remarked) are typically constituted via essentialising experiences' "generally domestic/familial setting, generally in the context of other activities, generally ritualised in terms of the serialisation of these experiences etc." We know that this is the case from detailed investigations into the consumption of television. Less is known about the experience of online participation, although Wilbur discusses "the strangely solitary work that many CMC [computer-mediated communications] researchers are engaged in, sitting alone at their computers, but surrounded by a global multitude" (6). He goes on to suggest seven definitions of 'virtual community' before concluding that the "multi-bladed, critical Swiss army knives" might offer an appropriate metaphor for the many uses of the Internet. 'Participation' in this culture is similarly hard to define, and (given that it is so individual and spatially private) expressive of individual difference. "For those who doubt the possibility of online intimacy, I can only speak of ... hours sitting at my keyboard with tears streaming down my face, or convulsed with laughter" (Wilbur 18). I wait for the ethnographic research before I venture further into definitions of 'www.participants'. Online community, I would argue, is a specifically genred stream of Internet activity. Further, it is particularly interesting to audience researchers because it has no clear precursor in the audiences and readerships of the traditional mass media. Holmes (32) has usefully differentiated between 'Communities of broadcast' (using the generic term, to offer an exception to the rule!) and 'Communities of interactivity', but he does so to highlight difference -- not to argue great similarity. The community of interest brought into being by the shared consumption and social circulation of elements of broadcast programming differs from the community of interactivity made visible through online community membership -- and both differ from Anderson's notion of the imagined community. Online communities are particularly problematic for audience studies theorists because the audience is the content producer. There is no content apart from the interactions and creativity of community members, and the contributions of new/casual online participants. For sites where 'hits' are enumerated, the simple act of access is also content production, and creates value and interest for others. Clearly the research is yet to be done in these areas. If we are to theorise cogently and in depth about people's activities and production/consumption patterns on the Internet, we need to identify genres and investigate specific audience/community members. Interactions with online community members suggest that age may offer a critical nexus of audience/participant distinction (Palandri & Green). Community members of 35+ have had to deliberately choose to learn the conventions of Internet interaction. They have experienced specific motivations. In affluent societies such as ours, on the other hand, for many people under 20, the required Internet skills and competencies have been normalised as part of an everyday social repertoire, in the same way that almost all of us have learned the conventions of television viewing. An understanding of the specifics of difference, and of congruence, will make discussions of Internet audiences/participants/content providers/community members that much more useful. Such research has an added frisson. 
I started this article with an acknowledgement of Ang's book Desperately Seeking the Audience. The research to be undertaken in the Internet genre of online community includes the need to seek desperately for the audience; the individual audience member; and (in many cases) the individual audience member's multiple identities -- each of which offers specific and different value to the researched community member. Identity is a key issue for Internet researchers, and a signal difference between communities of broadcast and communities of interactivity. As Holmes has usefully pointed out: "broadcast facilitates mass recognition ... with little reciprocity while the Internet facilitates reciprocity with little or no recognition" (31). We need to acknowledge, recognise and explore these differences in the next generation of audience studies research. References Anderson, B. Imagined Communities. 2nd ed. London: Verso, 1991. Ang, I. Desperately Seeking the Audience. London: Routledge, 1991. Dayan, D. "Particularistic Media and Diasporic Communications." Media, Ritual and Identity. Eds T. Liebes and J. Curran. London: Routledge, 1998. 103-13. Hodge, B., and D. Tripp. Children and Television: A Semiotic Approach. Cambridge: Polity Press, 1986. Holmes, D. "Virtual Identity: Communities of Broadcast, Communities of Interactivity." Virtual Politics: Identity and Community in Cyberspace. Ed. D. Holmes. London: Sage, 1997. 26-45. Morley, D. Family Television: Cultural Power and Domestic Leisure. London: Routledge, 1986. Noble, G. Children in Front of the Small Screen. London: Constable, 1975. Palandri, M., and L. Green. "Image Management in a Bondage, Discipline, Sadomasochist Subculture: A Cyber-Ethnographic Study." CyberPsychology and Behavior. USA: Mary Ann Liebert, forthcoming. <http://www.liebertpub.com/cpb/default.htm>. Palmer, P. Girls and Television. Sydney: NSW Ministry of Education, 1986. ---. The Lively Audience: A Study of Children around the TV Set. Sydney: Allen & Unwin, 1986. Wilbur, S.P. "An Archaeology of Cyberspaces: Virtuality, Community, Identity." Internet Culture. Ed. D. Porter. New York: Routledge, 1997. 5- 22. Citation reference for this article MLA style: Lelia Green. "Relating to Internet 'Audiences'." M/C: A Journal of Media and Culture 3.1 (2000). [your date of access] <http://www.uq.edu.au/mc/0003/internet.php>. Chicago style: Lelia Green, "Relating to Internet 'Audiences'," M/C: A Journal of Media and Culture 3, no. 1 (2000), <http://www.uq.edu.au/mc/0003/internet.php> ([your date of access]). APA style: Lelia Green. (2000) Relating to Internet 'Audiences'. M/C: A Journal of Media and Culture 3(1). <http://www.uq.edu.au/mc/0003/internet.php> ([your date of access]).
10

Campbell, Cynthia. "Familiars in a Strange Land". M/C Journal 3, no. 4 (August 1, 2000). http://dx.doi.org/10.5204/mcj.1864.

Abstract
As people spend increasing time interacting with others online, computer-mediated correspondence is rapidly becoming a common form of everyday communication. Computer-mediated communication ranges from text to voice to video, through a variety of technologies (e.g., e-mail, Web pages, listservs, chat). Real-time online 'conversation' occurs in group chat rooms and one-to-one instant messages, between people who may or may not know each other outside the cyber environment. Because of its emerging popularity, Internet chat has become a distinct form of discourse with characteristics unique to the medium. The purpose of this study is to apply research findings investigating Internet communication between strangers to the online chats of 'familiars', people who already know each other. Although some people may become familiars through prolonged cyberspace encounters without face-to-face contact (Parks and Floyd; Walther), this study investigates only the chats of people who had an existing face-to-face relationship prior to chatting online. The genesis of this project came from our personal experience as friends who later became Internet chatters because of geographical distance. Because our communications often detailed professional matters, we would save our discussions as text files as an efficient method of record-keeping. We noticed at times, while sending instant messages, the need to make special accommodations to reconcile misunderstandings and effectively deal with interruptions. To get ourselves 'back on track' in those instances, we had to 'talk about the talk' (i.e., metacommunicate) in order to make sense of how seemingly straightforward communication had gone astray. Continued instant message sending resulted in more observations suggesting that our online chats were qualitatively different from our face-to-face conversations. In the following paragraphs, three examples from our transcripts are analysed and discussed in relation to findings from research in computer-mediated communication: (a) meaning negotiation through metacommunication and shared history; (b) disinhibition and reconstruction of self; and (c) rule establishment. Note that, in these examples, "DRCSC" and "Wickmansa" are the respective online screen names of the co-authors. Example One: Meaning Negotiation through Metacommunication and Shared History The first example illustrates difficulty understanding the essence of a sent message without cues typically available in face-to-face contact. It is important to note that the following exchanges were preceded by dialogue concerning a stressful situation where Wickmansa had responded "I can see why [there is a problem]". Beginning in line 35 below, DRCSC jokingly attempts to parallel Wickmansa's "I can see why" with "I can see why there is panic disorder".
However, given the sober nature of the conversation up to that point, line 35's intended meaning is uncertain for Wickmansa (i.e., could be serious; could be sarcastic). The three lines subsequent to 35 represent deliberate attempts at meaning negotiation. February 4, 2000 The breakdown in communication between lines 35 and 36 required management and repair. To regain mutual understanding, we attempted to make sense of this misalignment in 37 and 38 and bring it back on track. Each line's query served two purposes: (a) to clarify the 'speaker's' previous statement; and (b) to request clarification about the other's meaning. Continuing the above dialogue, lines 39 through 48 below seem to work toward realignment through metacommunication. February 4, 2000 cont. It is noteworthy that in line 42, DRCSC strengthened the realignment by introducing a metaphor that invoked a shared deli counter experience (i.e., meaning negotiation through shared history). Wickmansa let DRCSC know that the reference was understood by building on the metaphor in line 44. In this way, shared history not only provided an efficient way to anchor meaning but also expedited realignment. Referencing shared history may be a distinct way familiars, unlike strangers, display social competence by demonstrating familiarity with both topic and person when negotiating meaning during Internet chat. The act of 'going meta' in lines 37-48 above seemed to renegotiate the initial misfires of 35 and 36, moving the conversation toward collaborative understanding. Moreover, it may be that part of the necessity for familiars to repair any perceived miscommunication is tied to consequences that do not exist for strangers over the Internet. Although strangers have the opportunity to reestablish 'anonymity' by creating a new screen name and/or persona (Kiesler, Siegal, and McGuire; Myers; Turkle), familiars are bound to the 'reality' of the Internet experience in later face-to-face contact. Consequently, familiars have a greater investment in the outcomes negotiated while chatting online. Example Two: Disinhibition and Reconstruction of Self As suggested by previous research, perceived anonymity between online strangers increases disinhibition and playfulness. This can lead to the formation of multiple 'selves' within a single individual that may bear little resemblance to the corporeal self (Balsamo; Turkle; Waskul, Douglass, and Edgely). Although it is impossible for familiars to realistically 'reinvent' who they are, we, as familiars, found ourselves enacting a version of anonymity by accentuating contextually favorable aspects of our personalities. This is to say that despite the seemingly restrictive nature of text-only media, chatting online provided a new forum to lightheartedly reveal, for example, humour. Thus, a witty and clever side that might not have been otherwise readily apparent was now 'viewable'. To illustrate, the following lines are extracted from a tangential discussion about the Internet service provider America Online. February 3, 2000 In response to Wickmansa's 178 above, DRCSC playfully interjected cleverness in line 179. Wickmansa's reply "LOL" in 180 could demonstrate his understanding of DRCSC's play on "-OL" words by embedding a similar intentionality within his response. 
However, without contextual cues normally available in face-to-face interaction, whether Wickmansa 'really' picked up on intentionality, competency, and playfulness -- or was merely using a common chat abbreviation -- was not certain to DRCSC. Note that "LOL" stands for 'laughing out loud' and is among the most common Internet chat expressions (Grossman). To better understand (i.e, negotiate meaning), DRCSC responded in line 181 by: (a) clarifying her previous position; and (b) indirectly requesting clarification from Wickmansa. In this way, regaining equilibrium paralleled the accommodations described in example one, despite different end goals. Continued cleverness may have been evidenced again in lines 182 and 183, albeit unintentionally, with a collaborative "poet, know it" follow-up, representative of the playful context which had been co-constructed. Example Three: Rule Establishment Internet chatters create and conform to norms and rules exclusive to the medium (Hayashibara; Postmes, Spears, and Lea; Rintel and Pittam), such as abbreviating common phrases, ignoring capitalisation, spelling phonetically, and using typed symbols to transform elements from the typist's behind-the-keyboard experience into the co-constructed 'cyber' reality. The chats below illustrate the co-creation of a rule as a way to efficiently convey an abrupt interruption in the external environment. April 10, 2000 April 11, 2000 The April 10 example began with metacommunication that suggested the need for a shorthand way to signify 'interruption -- do not send instant messages now'. This led to our creating a way to do so with the "/" symbol in lines 230-250. The rule's effectiveness was evidenced immediately by its usage in lines 251-253. Rule application again was seen on April 11 when DRCSC opened the chat with "/?" In effect, she parsimoniously asked 'Are you available for online chat?' with two key strokes. The "/" became a regular chat feature after it was established, exemplifying an idiosyncratic rule created by familiars for Internet chat. As familiars chatting online, we found ourselves using metacommunication and shared history to realign after a conversational breakdown, accentuating contextually favorable aspects of our personalities, and following global Internet chat norms while creating idiosyncratic rules to accommodate for missing sensory cues. Moreover, distinct from strangers, familiars may have a greater need to display social competence due to real-world consequences. Further research is recommended to investigate the generalisablility of our experience as familiars and to explore other characteristics unique to Internet chat. Moreover, it would be interesting to see if: (a) familiars' online chats are patterned with the same idiosyncratic features as their face-to-face and telephone interactions; or (b) the chat patterns of familiars who know each other in 'real life' contain significant differences from persons who know each other only through regular online encounters. References Balsamo, Ann. "The Virtual Body in Cyberspace." Research in Philosophy and Technology 13 (1993): 119. Baym, Nancy. "The Performance of Humor in Computer-Mediated Communication." Journal of Computer-Mediated Communication 1.2 (1995). 14 Aug. 2000 <http://www.ascusc.org/jcmc/vol1/issue2/baym.php>. Grossman, Steve. "Chatter's Jargon Dictionary." 14 Aug. 2000 <http://www.stevegrossman.com/jargpge.htm#Dictionary>. Danet, Brenda, Lucia Ruedenberg-Wright, and Yehudit Rosenbaum-Tamari. "Hmmm ... 
Where's That Smoke Coming From: Writing, Play and Performance on Internet Relay Chat." Journal of Computer-Mediated Communication 2.4 (1997). 14 Aug. 2000 <http://www.ascusc.org/jcmc/vol2/issue4/danet.php>. Hayashibara, Kammie Nobue. "Adolescent Communication on the Internet: Investigation of a Teenage Chat Room." Masters Abstracts International 37.01 (1998): 0009. Holland, Norman M. "The Internet Regression." 14 Aug. 2000 <http://www.shef.ac.uk/~psysc/rmy/holland.php>. Kiesler, Sara, Jane Siegal, and Timothy W. McGuire. "Social Psychological Aspects of Computer-Mediated Communication." American Psychologist 39 (1984): 1123-34. Myers, David. "'Anonymity Is Part of the Magic': Individual Manipulation of Computer-Mediated Communication Contexts." Qualitative Sociology 10.3 (1987): 251-66. Parks, Malcolm R., and Kory Floyd. "Making Friends in Cyberspace." Journal of Computer-Mediated Communication 2.4 (1997). 14 Aug. 2000 <http://www.ascusc.org/jcmc/vol2/issue4/index.php>. Postmes, Tom, Russell Spears, and Martin Lea. "The Formation of Group Norms in Computer-Mediated Communication." Human Communication Research 26 (in press). Rintel, E. Sean, and Jeffrey Pittam. "Strangers in a Strange Land: Interaction Management on Internet Relay Chat." Human Communication Research 23.4 (1997): 507-34. Spears, Russell, and Martin Lea. "Panacea or Panopticon? The Hidden Power in Computer-Mediated Communication." Communication Research 21.4 (1994): 427-59. Sproull, Lee, and Sara Kiesler. Connections: New Ways of Working in the Networked Organization. Cambridge, MA: MIT P, 1991. Thomas, Jim. "Introduction: A Debate About the Ethics of Fair Practices for Collecting Social Science Data in Cyberspace." The Information Society 12.2 (1996): 107-17. Turkle, Sherry. Life on the Screen: Identity in the Age of the Internet. New York: Simon and Schuster, 1995. Walther, Joseph. "Computer-Mediated Communication: Impersonal, Interpersonal and Hyperpersonal Interaction." Communication Research 23.1 (1996): 3-43. Walther, Joseph, Jeffrey Anderson, and David Park. "Interpersonal Effects in Computer-Mediated Interaction." Communication Research 21.4 (1994): 460-87. Waskul, Dennis, Mark Douglass, and Charles Edgley. "Cybersex: Outercourse and the Enselfment of the Body." Symbolic Interaction 24.4 (in press). Witmer, Diane. "Practicing Safe Computing: Why People Engage in Risky Computer-Mediated Communication." Network and Netplay: Virtual Groups on the Internet. Ed. Fay Sudweeks, Margaret L. McLaughlin, and Sheizaf Rafaeli, Menlo Park, CA: AAAI/MIT P, 1998. 127-46. Citation reference for this article MLA style: Cynthia Campbell, Scott A. Wickman. "Familiars in a Strange Land: A Case Study of Friends Chatting Online." M/C: A Journal of Media and Culture 3.4 (2000). [your date of access] <http://www.api-network.com/mc/0008/friends.php>. Chicago style: Cynthia Campbell, Scott A. Wickman, "Familiars in a Strange Land: A Case Study of Friends Chatting Online," M/C: A Journal of Media and Culture 3, no. 4 (2000), <http://www.api-network.com/mc/0008/friends.php> ([your date of access]). APA style: Cynthia Campbell, Scott A. Wickman. (2000) Familiars in a strange land: a case study of friends chatting online. M/C: A Journal of Media and Culture 3(4). <http://www.api-network.com/mc/0008/friends.php> ([your date of access]).
11

Ruch, Adam, and Steve Collins. "Zoning Laws: Facebook and Google+". M/C Journal 14, no. 5 (October 18, 2011). http://dx.doi.org/10.5204/mcj.411.

Abstract
As the single most successful social-networking Website to date, Facebook has caused a shift in both practice and perception of online socialisation, and its relationship to the offline world. While not the first online social networking service, Facebook’s user base dwarfs its nearest competitors. Mark Zuckerberg’s creation boasts more than 750 million users (Facebook). The currently ailing MySpace claimed a ceiling of 100 million users in 2006 (Cashmore). Further, the accuracy of this number has been contested due to a high proportion of fake or inactive accounts. Facebook by contrast, claims 50% of its user base logs in at least once a day (Facebook). The popular and mainstream uptake of Facebook has shifted social use of the Internet from various and fragmented niche groups towards a common hub or portal around which much everyday Internet use is centred. The implications are many, but this paper will focus on the progress what Mimi Marinucci terms the “Facebook effect” (70) and the evolution of lists as a filtering mechanism representing one’s social zones within Facebook. This is in part inspired by the launch of Google’s new social networking service Google+ which includes “circles” as a fundamental design feature for sorting contacts. Circles are an acknowledgement of the shortcomings of a single, unified friends list that defines the Facebook experience. These lists and circles are both manifestations of the same essential concept: our social lives are, in fact, divided into various zones not defined by an online/offline dichotomy, by fantasy role-play, deviant sexual practices, or other marginal or minority interests. What the lists and circles demonstrate is that even very common, mainstream people occupy different roles in everyday life, and that to be effective social tools, social networking sites must grant users control over their various identities and over who knows what about them. Even so, the very nature of computer-based social tools lead to problematic definitions of identities and relationships using discreet terms, in contrast to more fluid, performative constructions of an individual and their relations to others. Building the Monolith In 1995, Sherry Turkle wrote that “the Internet has become a significant social laboratory for experimenting with the constructions and reconstructions of self that characterize postmodern life” (180). Turkle describes the various deliberate acts of personnae creation possible online in contrast to earlier constraints placed upon the “cycling through different identities” (179). In the past, Turkle argues, “lifelong involvement with families and communities kept such cycling through under fairly stringent control” (180). In effect, Turkle was documenting the proliferation of identity games early adopters of Internet technologies played through various means. Much of what Turkle focused on were MUDs (Multi-User Dungeons) and MOOs (MUD Object Oriented), explicit play-spaces that encouraged identity-play of various kinds. Her contemporary Howard Rheingold focused on what may be described as the more “true to life” communities of the WELL (Whole Earth ‘Lectronic Link) (1–38). In particular, Rheingold explored a community established around the shared experience of parenting, especially of young children. While that community was not explicitly built on the notion of role-play, the parental identity was an important quality of community members. 
Unlike contemporary social media networks, these early communities were built on discreet platforms. MUDs, MOOs, Bulletin Board Systems, UseNet Groups and other early Internet communication platforms were generally hosted independently of one another, and even had to be dialled into via modem separately in some cases (such as the WELL). The Internet was a truly disparate entity in 1995. The discreetness of each community supported the cordoning off of individual roles or identities between them. Thus, an individual could quite easily be “Pete” a member of the parental WELL group and “Gorak the Destroyer,” a role-player on a fantasy MUD without the two roles ever being associated with each other. As Turkle points out, even within each MUD ample opportunity existed to play multiple characters (183–192). With only a screen name and associated description to identify an individual within the MUD environment, nothing technical existed to connect one player’s multiple identities, even within the same community. As the Internet has matured, however, the tendency has been shifting towards monolithic hubs, a notion of collecting all of “the Internet” together. From a purely technical and operational perspective, this has led to the emergence of the ISP (Internet service provider). Users can make a connection to one point, and then be connected to everything “on the Net” instead of individually dialling into servers and services one at a time as was the case in the early 1980s with companies such as Prodigy, the Source, CompuServe, and America On-Line (AOL). The early information service providers were largely walled gardens. A CompuServe user could only access information on the CompuServe network. Eventually the Internet became the network of choice and services migrated to it. Standards such as HTTP for Web page delivery and SMTP for email became established and dominate the Internet today. Technically, this has made the Internet much easier to use. The services that have developed on this more rationalised and unified platform have also tended toward monolithic, centralised architectures, despite the Internet’s apparent fundamental lack of a hierarchy. As the Internet replaced the closed networks, the wider Web of HTTP pages, forums, mailing lists and other forms of Internet communication and community thrived. Perhaps they required slightly more technological savvy than the carefully designed experience of walled-garden ISPs such as AOL, but these fora and IRC (Internet Relay Chat) rooms still provided the discreet environments within which to role-play. An individual could hold dozens of login names to as many different communities. These various niches could be simply hobby sites and forums where a user would deploy their identity as model train enthusiast, musician, or pet owner. They could also be explicitly about role-play, continuing the tradition of MUDs and MOOs into the new millennium. Pseudo- and polynymity were still very much part of the Internet experience. Even into the early parts of the so-called Web 2.0 explosion of more interactive Websites which allowed for easier dialog between site owner and viewer, a given identity would be very much tied to a single site, blog or even individual comments. There was no “single sign on” to link my thread from a music forum to the comments I made on a videogame blog to my aquarium photos at an image gallery site. Today, Facebook and Google, among others, seek to change all that. 
The Facebook Effect Working from a psychological background Turkle explored the multiplicity of online identities as a valuable learning, even therapeutic, experience. She assessed the experiences of individuals who were coming to terms with aspects of their own personalities, from simple shyness to exploring their sexuality. In “You Can’t Front on Facebook,” Mimi Marinucci summarizes an analysis of online behaviour by another psychologist, John Suler (67–70). Suler observed an “online disinhibition effect” characterised by users’ tendency to express themselves more openly online than offline (321). Awareness of this effect was drawn (no pun intended) into popular culture by cartoonist Mike Krahulik’s protagonist John Gabriel. Although Krahulik’s summation is straight to the point, Suler offers a more considered explanation. There are six general reasons for the online disinhibition effect: being anonymous, being invisible, the communications being out of sync, the strange sensation that a virtual interlocutor is all in the mind of the user, the general sense that the online world simply is not real and the minimisation of status and authority (321–325). Of the six, the notion of anonymity is most problematic, as briefly explored above in the case of AOL. The role of pseudonymity has been explored in more detail in Ruch, and will be considered with regard to Facebook and Google+ below. The Facebook effect, Marinucci argues, mitigates all six of these issues. Though Marinucci explains the mitigation of each factor individually, her final conclusion is the most compelling reason: “Facebook often facilitates what is best described as an integration of identities, and this integration of identities in turn functions as something of an inhibiting factor” (73). Ruch identifies this phenomenon as the “aggregation of identities” (219). Similarly, Brady Robards observes that “social network sites such as MySpace and Facebook collapse the entire array of social relationships into just one category, that of ‘Friend’” (20). Unlike earlier community sites, Ruch notes “Facebook rejects both the mythical anonymity of the Internet, but also the actual pseudo- or polynonymous potential of the technologies” (219). Essentially, Facebook works to bring the offline social world online, along with all the conventional baggage that accompanies the individual’s real-world social life. Facebook, and now Google+, present a hard, dichotomous approach to online identity: anonymous and authentic. Their socially networked individual is the “real” one, using a person’s given name, and bringing all (or as many as the sites can capture) their contacts from the offline world into the online one, regardless of context. The Facebook experience is one of “friending” everyone one has any social contact with into one homogeneous group. Not only is Facebook avoiding the multiple online identities that interested Turkle, but it is disregarding any multiplicity of identity anywhere, including any online/offline split. David Kirkpatrick reports Mark Zuckerberg’s rejection of this construction of identity is explained by his belief that “You have one identity … having two identities for yourself is an example of a lack of integrity” (199). Arguably, Zuckerberg’s calls for accountability through identity continue a perennial concern for anonymity online fuelled by “on the Internet no one knows you’re a dog” style moral panics. 
Over two decades ago, Lindsy Van Gelder recounted the now infamous case of “Joan and Alex” (533) and Julian Dibbell recounted “a rape in cyberspace” (11). More recent anxieties concern the hacking escapades of Anonymous and LulzSec. Zuckerberg’s approach has been criticised by Christopher Poole, the founder of 4Chan—a bastion of Internet anonymity. During his keynote presentation at South by SouthWest 2011, Poole argued that Zuckerberg “equates anonymity with a lack of authenticity, almost a cowardice.” Yet in spite of these objections, Facebook has mainstream appeal. From a social constructivist perspective, this approach to identity would be satisfying the (perceived?) need for a mainstream, context-free, general social space online to cater for the hundreds of millions of people who now use the Internet. There is no specific, pre-defined reason to join Facebook in the way there is a particular reason to join a heavy metal music message board. Facebook is catering to the need to bring “real” social life online generally, with “real” in this case meaning “offline and pre-existing.” Very real risks of missing “real life” social events (engagements, new babies, party invitations etc) that were shared primarily via Facebook became salient to large groups of individuals not consciously concerned with some particular facet of identity performance. The commercial imperatives towards monolithic Internet and identity are obvious. Given that both Facebook and Google+ are in the business of facilitating the sale of advertising, their core business value is the demographic information they can sell to various companies for targeted advertising. Knowing a user’s individual identity and tastes is extremely important to those in the business of selling consumers what they currently want as well as predicting their future desires. The problem with this is the dawning realisation that even for the average person, role-playing is part of everyday life. We simply aren’t the same person in all contexts. None of the roles we play need to be particularly scandalous for this to be true, but we have different comfort zones with people that are fuelled by context. Suler proposes and Marinucci confirms that inhibition may be just as much part of our authentic self as the uninhibited expression experienced in more anonymous circumstances. Further, different contexts will inform what we inhibit and what we express. It is not as though there is a simple binary between two different groups and two different personal characteristics to oscillate between. The inhibited persona one occupies at one’s grandmother’s home is a different inhibited self one plays at a job interview or in a heated discussion with faculty members at a university. One is politeness, the second professionalism, the third scholarly—yet they all restrain the individual in different ways. The Importance of Control over Circles Google+ is Google’s latest foray into the social networking arena. Its previous ventures Orkut and Google Buzz did not fare well; both were variously marred by legal issues concerning privacy, security, SPAM and hate groups. Buzz in particular fell afoul of associating Google accounts with users’ real life identities, and (as noted earlier), all the baggage that comes with it. “One user blogged about how Buzz automatically added her abusive ex-boyfriend as a follower and exposed her communications with a current partner to him. 
Other bloggers commented that repressive governments in countries such as China or Iran could use Buzz to expose dissidents” (Novak). Google+ takes a different approach to its predecessors and its main rival, Facebook. Facebook allows for the organisation of “friends” into lists. Individuals can span more than one list. This is an exercise analogous to what Erving Goffman refers to as “audience segregation” (139). According to the site’s own statistics, the average Facebook user has 130 friends; we anticipate it would be time-consuming to organise one’s friends according to real life social contexts. Yet without such organisation, Facebook overlooks the social structures and concomitant behaviours inherent in everyday life. Even broad groups offer little assistance. For example, an academic’s “Work People” list may include the Head of Department as well as numerous other lecturers with whom a workspace is shared. There are things one might share with immediate colleagues that should not be shared with the Head of Department. As Goffman states, “when audience segregation fails and an outsider happens upon a performance that was not meant for him, difficult problems in impression management arise” (139). By homogenising “friends” and social contexts, users are either inhibited or run the risk of some future awkward encounters. Google+ utilises “circles” as its method for organising contacts. The graphical user interface is intuitive, facilitated by an easy drag and drop function. Use of “circles” already exists in the vocabulary used to describe our social structures. “List”, by contrast, reduces the subject matter to simple data. The utility of Facebook’s friends lists is hindered by usability issues—an unintuitive and convoluted process that was added to Facebook well after its launch, perhaps a reaction to privacy concerns rather than a genuine attempt to emulate social organisation. For a cogent breakdown of these technical and design problems see Augusto Sellhorn. Organising friends into lists is a function offered by Facebook, but Google+ takes a different approach: organising friends in circles is a central feature; the whole experience is centred around attempting to mirror the social relations of real life. Google’s promotional video explains the centrality of emulating “real life relationships” (Google). Effectively, Facebook and Google+ have adopted two different systemic approaches to dealing with the same issue. Facebook places the burden of organising a homogeneous mass of “friends” into lists on the user as an afterthought of connecting with another user. In contrast, Google+ builds organisation into the act of connecting (a toy sketch of this contrast appears at the end of this entry). Whilst Google+’s approach is more intuitive and designed to facilitate social networking that more accurately reflects how real life social relationships are structured, it suffers from forcing direct correlation between an account and the account holder. That is, use of Google+ mandates bringing online the offline. Google+ operates a real names policy and on the weekend of 23 July 2011 suspended a number of accounts for violation of Google’s Community Standards. A suspension notice posted by Violet Blue reads: “After reviewing your profile, we determined the name you provided violates our Community Standards.” Open Source technologist Kirrily Robert polled 119 Google+ users about their experiences with the real names policy. 
The results posted on her blog reveal that users desire pseudonymity, many for reasons of privacy and/or safety rather than the lack of integrity alleged by Zuckerberg. boyd argues that Google’s real names policy is an abuse of power and poses a danger to those users employing “nicks” for reasons including being a government employee or the victim of stalking, rape or domestic abuse. A comprehensive list of those at risk has been posted to the Geek Feminism Wiki (ironically, the Wiki utilises “Connect”, Facebook’s attempt at a single sign on solution for the Web that connects users’ movements with their Facebook profile). Facebook has a culture of real names stemming from its early adopters drawn from trusted communities, and this culture became a norm for that service (boyd). But as boyd also points out, “[r]eal names are by no means universal on Facebook.” Google+ demands real names, a demand justified by rhetoric of designing a social networking system that is more like real life. “Real”, in this case, is represented by one’s given name—irrespective of the authenticity of one’s pseudonym or the complications and dangers of using one’s given name. Conclusion There is a multiplicity of issues concerning social networks and identities, privacy and safety. This paper has outlined the challenges involved in moving real life to the online environment and the contests in trying to designate zones of social context. Where some earlier research into the social Internet has had a positive (even utopian) feel, the contemporary Internet is increasingly influenced by powerful and competing corporations. As a result, the experience of the Internet is not necessarily as flexible as Turkle or Rheingold might have envisioned. Rather than conducting identity experimentation or exercising multiple personae, we are increasingly obligated to perform identity as it is defined by the monolithic service providers such as Facebook and Google+. This is not purely an indictment of Facebook or Google’s corporate drive, though they are obviously implicated, but has as much to do with the new social practice of “being online.” So, while there are myriad benefits to participating in this new social context, as Poole noted, the “cost of failure is really high when you’re contributing as yourself.” Areas for further exploration include the implications of Facebook positioning itself as a general-purpose user authentication tool whereby users can log into a wide array of Websites using their Facebook credentials. If Google were to take a similar action, the implications would be even more convoluted, given the range of other services Google offers, from GMail to the Google Checkout payment service. While the monolithic centralisation of these services will have obvious benefits, there will be many more subtle problems which must be addressed. References Blue, Violet. “Google Plus Deleting Accounts en Masse: No Clear Answers.” zdnet.com (2011). 10 Aug. 2011 ‹http://www.zdnet.com/blog/violetblue/google-plus-deleting-accounts-en-masse-no-clear-answers/56›. boyd, danah. “Real Names Policies Are an Abuse of Power.” zephoria.org (2011). 10 Aug. 2011 ‹http://www.zephoria.org/thoughts/archives/2011/08/04/real-names.html›. Cashmore, Pete. “MySpace Hits 100 Million Accounts.” mashable.com (2006). 10 Aug. 2011 ‹http://mashable.com/2006/08/09/myspace-hits-100-million-accounts›. Dibbell, Julian. My Tiny Life: Crime and Passion in a Virtual World. New York: Henry Holt & Company, 1998. Facebook. “Fact Sheet.” Facebook (2011). 
10 Aug. 2011 ‹http://www.facebook.com/press/info.php?statistic›. Geek Feminism Wiki. “Who Is Harmed by a Real Names Policy?” 2011. 10 Aug. 2011 ‹http://geekfeminism.wikia.com/wiki/Who_is_harmed_by_a_%22Real_Names%22_policy› Goffman, Erving. The Presentation of Self in Everyday Life. London: Penguin, 1959. Google. “The Google+ Project: Explore Circles.” Youtube.com (2011). 10 Aug. 2011 ‹http://www.youtube.com/watch?v=ocPeAdpe_A8›. Kirkpatrick, David. The Facebook Effect. New York: Simon & Schuster, 2010. Marinucci, Mimi. “You Can’t Front on Facebook.” Facebook and Philosophy. Ed. Dylan Wittkower. Chicago & La Salle, Illinois: Open Court, 2010. 65–74. Novak, Peter. “Privacy Commissioner Reviewing Google Buzz.” CBC News: Technology and Science (2010). 10 Aug. 2011 ‹http://www.cbc.ca/news/technology/story/2010/02/16/google-buzz-privacy.html›. Poole, Christopher. Keynote presentation. South by SouthWest. Texas, Austin, 2011. Robards, Brady. “Negotiating Identity and Integrity on Social Network Sites for Educators.” International Journal for Educational Integrity 6.2 (2010): 19–23. Robert, Kirrily. “Preliminary Results of My Survey of Suspended Google Accounts.” 2011. 10 Aug. 2011 ‹http://infotrope.net/2011/07/25/preliminary-results-of-my-survey-of-suspended-google-accounts/›. Rheingold, Howard. The Virtual Community: Homesteading on the Electronic Frontier. New York: Harper Perennial, 1993. Ruch, Adam. “The Decline of Pseudonymity.” Posthumanity. Eds. Adam Ruch and Ewan Kirkland. Oxford: Inter-Disciplinary.net Press, 2010: 211–220. Sellhorn, Augusto. “Facebook Friend Lists Suck When Compared to Google+ Circles.” sellmic.com (2011). 10 Aug. 2011 ‹http://sellmic.com/blog/2011/07/01/facebook-friend-lists-suck-when-compared-to-googleplus-circles›. Suler, John. “The Online Disinhibition Effect.” CyberPsychology and Behavior 7 (2004): 321–326. Turkle, Sherry. Life on the Screen: Identity in the Age of the Internet. New York: Simon & Schuster, 1995. Van Gelder, Lindsy. “The Strange Case of the Electronic Lover.” Computerization and Controversy: Value Conflicts and Social Choices Ed. Rob Kling. New York: Academic Press, 1996: 533–46.
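As an aside on the contrast drawn above between Facebook's after-the-fact friend lists and Google+'s built-in circles, the following minimal sketch (plain Python; names such as Profile, add_contact and share are hypothetical and belong to neither platform's actual API) models Goffman-style "audience segregation" as a data structure: contacts are assigned to circles at the moment of connection, and a post is visible only to the circles it is addressed to.

class Profile:
    def __init__(self, name):
        self.name = name
        self.circles = {}  # circle name -> set of contact names

    def add_contact(self, contact, circle):
        # Google+-style: choosing an audience is part of the act of connecting.
        self.circles.setdefault(circle, set()).add(contact)

    def share(self, post, audiences):
        # A post is addressed to named circles only; everyone else never sees it.
        readers = set()
        for circle in audiences:
            readers |= self.circles.get(circle, set())
        return {"post": post, "visible_to": sorted(readers)}

# Usage: colleagues never see what is shared with the "Friends" circle.
me = Profile("Ruth")
me.add_contact("Head of Department", "Work People")
me.add_contact("Old school friend", "Friends")
print(me.share("Weekend photos", audiences=["Friends"]))
# {'post': 'Weekend photos', 'visible_to': ['Old school friend']}

A Facebook-style model would instead append every new contact to a single undifferentiated set and leave any list-making as an optional, after-the-fact step, which is precisely the usability burden the article describes.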
12

Green, Lelia y Carmen Guinery. "Harry Potter and the Fan Fiction Phenomenon". M/C Journal 7, n.º 5 (1 de noviembre de 2004). http://dx.doi.org/10.5204/mcj.2442.

Resumen
The Harry Potter (HP) Fan Fiction (FF) phenomenon offers an opportunity to explore the nature of fame and the work of fans (including the second author, a participant observer) in creating and circulating cultural products within fan communities. Matt Hills comments (xi) that “fandom is not simply a ‘thing’ that can be picked over analytically. It is also always performative; by which I mean that it is an identity which is (dis-)claimed, and which performs cultural work”. This paper explores the cultural work of fandom in relation to FF and fame. The global HP phenomenon – in which FF lists are a small part – has made creator J K Rowling richer than the Queen of England, according to the 2003 ‘Sunday Times Rich List’. The books (five so far) and the films (three) continue to accelerate the growth in Rowling’s fortune, which quadrupled from 2001-3: an incredible success for an author unknown before the publication of Harry Potter and the Philosopher’s Stone in 1997. Even the on-screen HP lead actor, Daniel Radcliffe, is now Britain’s second wealthiest teenager (after England’s Prince Harry). There are other globally successful books, such as the Lord of the Rings trilogy, and the Narnia collection, but neither of these series has experienced the momentum of the HP rise to fame. (See Endnote for an indication of the scale of fan involvement with HP FF, compared with Lord of the Rings.) Contemporary ‘Fame’ has been critically defined in relation to the western mass media’s requirement for ‘entertaining’ content, and the production and circulation of celebrity as opposed to ‘hard news’(Turner, Bonner and Marshall). The current perception is that an army of publicists and spin doctors are usually necessary, but not sufficient, to create and nurture global fame. Yet the HP phenomenon started out with no greater publicity investment than that garnered by any other promising first novelist: and given the status of HP as children’s publishing, it was probably less hyped than equivalent adult-audience publications. So are there particular characteristics of HP and his creator that predisposed the series and its author to become famous? And how does the fame status relate to fans’ incorporation of these cultural materials into their lives? Accepting that it is no more possible to predict the future fame of an author or (fictional) character than it is to predict the future financial success of a book, film or album, there is a range of features of the HP phenomenon that, in hindsight, helped accelerate the fame momentum, creating what has become in hindsight an unparalleled global media property. J K Rowling’s personal story – in the hands of her publicity machine – itself constituted a magical myth: the struggling single mother writing away (in longhand) in a Scottish café, snatching odd moments to construct the first book while her infant daughter slept. (Comparatively little attention was paid by the marketers to the author’s professional training and status as a teacher, or to Rowling’s own admission that the first book, and the outline for the series, took five years to write.) Rowling’s name itself, with no self-evident gender attribution, was also indicative of ambiguity and mystery. The back-story to HP, therefore, became one of a quintessentially romantic endeavour – the struggle to write against the odds. Publicity relating to the ‘starving in a garret’ background is not sufficient to explain the HP/Rowling grip on the popular imagination, however. 
Instead it is arguable that the growth of HP fame and fandom is directly related to the growth of the Internet and to the middle class readers’ Internet access. If the production of celebrity is a major project of the conventional mass media, the HP phenomenon is a harbinger of the hyper-fame that can be generated through the combined efforts of the mass media and online fan communities. The implication of this – evident in new online viral marketing techniques (Kirby), is that publicists need to pique cyber-interest as well as work with the mass media in the construction of celebrity. As the cheer-leaders for online viral marketing make the argument, the technique “provides the missing link between the [bottom-up] word-of-mouth approach and the top-down, advertainment approach”. Which is not to say that the initial HP success was a function of online viral marketing: rather, the marketers learned their trade by analysing the magnifier impact that the online fan communities had upon the exponential growth of the HP phenomenon. This cyber-impact is based both on enhanced connectivity – the bottom-up, word-of-mouth dynamic, and on the individual’s need to assume an identity (albeit fluid) to participate effectively in online community. Critiquing the notion that the computer is an identity machine, Streeter focuses upon (649) “identities that people have brought to computers from the culture at large”. He does not deal in any depth with FF, but suggests (651) that “what the Internet is and will come to be, then, is partly a matter of who we expect to be when we sit down to use it”. What happens when fans sit down to use the Internet, and is there a particular reason why the Internet should be of importance to the rise and rise of HP fame? From the point of view of one of us, HP was born at more or less the same time as she was. Eleven years old in the first book, published in 1997, Potter’s putative birth year might be set in 1986 – in line with many of the original HP readership, and the publisher’s target market. At the point that this cohort was first spellbound by Potter, 1998-9, they were also on the brink of discovering the Internet. In Australia and many western nations, over half of (two-parent) families with school-aged children were online by the end of 2000 (ABS). Potter would notionally have been 14: his fans a little younger but well primed for the ‘teeny-bopper’ years. Arguably, the only thing more famous than HP for that age-group, at that time, was the Internet itself. As knowledge of the Internet grew stories about it constituted both news and entertainment and circulated widely in the mass media: the uncertainty concerning new media, and their impact upon existing social structures, has – over time – precipitated a succession of moral panics … Established commercial media are not noted for their generosity to competitors, and it is unsurprising that many of the moral panics circulating about pornography on the Net, Internet stalking, Web addiction, hate sites etc are promulgated in the older media. (Green xxvii) Although the mass media may have successfully scared the impressionable, the Internet was not solely constructed as a site of moral panic. Prior to the general pervasiveness of the Internet in domestic space, P. 
David Marshall discusses multiple constructions of the computer – seen by parents as an educational tool which could help future-proof their children; but which their children were more likely to conceptualise as a games machine, or (this was the greater fear) use for hacking. As the computer was to become a site for the battleground between education, entertainment and power, so too the Internet was poised to be colonised by teenagers for a variety of purposes their parents would have preferred to prevent: chat, pornography, game-playing (among others). Fan communities thrive on the power of the individual fan to project themselves and their fan identity as part of an ongoing conversation. Further, in constructing the reasons behind what has happened in the HP narrative, and in speculating what is to come, fans are presenting themselves as identities with whom others might agree (positive affirmation) or disagree (offering the chance for engagement through exchange). The genuinely insightful fans, who apparently predict the plots before they’re published, may even be credited in their communities with inspiring J K Rowling’s muse. (The FF mythology is that J K Rowling dare not look at the FF sites in case she finds herself influenced.) Nancy Baym, commenting on a soap opera fan Usenet group (Usenet was an early 1990s precursor to discussion groups) notes that: The viewers’ relationship with characters, the viewers’ understanding of socioemotional experience, and soap opera’s narrative structure, in which moments of maximal suspense are always followed by temporal gaps, work together to ensure that fans will use the gaps during and between shows to discuss with one another possible outcomes and possible interpretations of what has been seen. (143) In HP terms, The Philosopher’s Stone constructed a fan knowledge that J K Rowling’s project entailed at least seven books (one for each year at Hogwarts School) and this offered plentiful opportunities to speculate upon the future direction and evolution of the HP characters. With each speculation, each posting, the individual fan can refine and extend their identity as a member of the FF community. The temporal gaps between the books and the films – coupled with the expanding possibilities of Internet communication – mean that fans can feel both creative and connected while circulating the cultural materials derived from their engagement with the HP ‘canon’. Canon is used to describe the HP oeuvre as approved by Rowling, her publishers, and her copyright assignees (for example, Warner Bros). In contrast, ‘fanon’ is the name used by fans to refer to the body of work that results from their creative/subversive interactions with the core texts, such as “slash” (homo-erotic/romance) fiction. Differentiation between the two terms acknowledges the likelihood that J K Rowling or her assignees might not approve of fanon. The constructed identities of fans who deal solely with canon differ significantly from those who are engaged in fanon. The implicit (romantic) or explicit (full-action descriptions) sexualisation of HP FF is part of a complex identity play on behalf of both the writers and readers of FF. 
Further, given that the online communities are often nurtured and enriched by offline face to face exchanges with other participants, what an individual is prepared to read or not to read, or write or not write, says as much about that person’s public persona as does another’s overt consumption of pornography; or diet of art house films, in contrast to someone else’s enthusiasm for Friends. Hearn, Mandeville and Anthony argue that a “central assertion of postmodern views of consumption is that social identity can be interpreted as a function of consumption” (106), and few would disagree with them: herein lies the power of the brand. Noting that consumer culture centrally focuses upon harnessing ‘the desire to desire’, Streeter’s work (654, on the opening up of Internet connectivity) suggests a continuum from ‘desire provoked’; through anticipation, ‘excitement based on what people imagined would happen’; to a sense of ‘possibility’. All this was made more tantalising in terms of the ‘unpredictability’ of how cyberspace would eventually resolve itself (657). Thus a progression is posited from desire through to the thrill of comparing future possibilities with eventual outcomes. These forces clearly influence the HP FF phenomenon, where a section of HP fans have become impatient with the pace of the ‘official’/canon HP text. J K Rowling’s writing has slowed down to the point that Harry’s initial readership has overtaken him by several years. He’s about to enter his sixth year (of seven) at secondary school – his erstwhile-contemporaries have already left school or are about to graduate to University. HP is yet to have ‘a relationship’: his fans are engaged in some well-informed speculation as to a range of sexual possibilities which would likely take J K Rowling some light years from her marketers’ core readership. So the story is progressing more slowly than many fans would choose and with less spice than many would like (from the evidence of the web, at least). As indicated in the Endnote, the productivity of the fans, as they ‘fill in the gaps’ while waiting for the official narrative to resume, is prodigious. It may be that as the fans outstrip HP in their own social and emotional development they find his reactions in later books increasingly unbelievable, and/or out of character with the HP they felt they knew. Thus they develop an alternative ‘Harry’ in fanon. Some FF authors identify in advance which books they accept as canon, and which they have decided to ignore. For example, popular FF author Midnight Blue gives the setting of her evolving FF The Mirror of Maybe as “after Harry Potter and the Goblet of Fire and as an alternative to the events detailed in Harry Potter and the Order of the Phoenix, [this] is a Slash story involving Harry Potter and Severus Snape”. Some fans, tired of waiting for Rowling to get Harry grown up, ‘are doin’ it for themselves’. Alternatively, it may be that as they get older the first groups of HP fans are unwilling to relinquish their investment in the HP phenomenon, but are equally unwilling to align themselves uncritically with the anodyne story of the canon. Harry Potter, as Warner Bros licensed him, may be OK for pre-teens, but less cool for the older adolescent. The range of identities that can be constructed using the many online HP FF genres, however, permits wide scope for FF members to identify with dissident constructions of the HP narrative and helps to add to the momentum with which his fame increases. 
Latterly there is evidence that custodians of canon may be making subtle overtures to creators of fanon. Here, the viral marketers have a particular challenge – to embrace the huge market represented by fanon, while not disturbing those whose HP fandom is based upon the purity of canon. Some elements of fanon feel their discourses have been recognised within the evolving approved narrative . This sense within the fan community – that the holders of the canon have complimented them through an intertextual reference – is much prized and builds the momentum of the fame engagement (as has been demonstrated by Watson, with respect to the band ‘phish’). Specifically, Harry/Draco slash fans have delighted in the hint of a blown kiss from Draco Malfoy to Harry (as Draco sends Harry an origami bird/graffiti message in a Defence against the Dark Arts Class in Harry Potter and the Prisoner of Azkaban) as an acknowledgement of their cultural contribution to the development of the HP phenomenon. Streeter credits Raymond’s essay ‘The Cathedral and the Bazaar’ as offering a model for the incorporation of voluntary labour into the marketplace. Although Streeter’s example concerns the Open Source movement, derived from hacker culture, it has parallels with the prodigious creativity (and productivity) of the HP FF communities. Discussing the decision by Netscape to throw open the source code of its software in 1998, allowing those who use it to modify and improve it, Streeter comments that (659) “the core trope is to portray Linux-style software development like a bazaar, a real-life competitive marketplace”. The bazaar features a world of competing, yet complementary, small traders each displaying their skills and their wares for evaluation in terms of the product on offer. In contrast, “Microsoft-style software production is portrayed as hierarchical and centralised – and thus inefficient – like a cathedral”. Raymond identifies “ego satisfaction and reputation among other [peers]” as a specific socio-emotional benefit for volunteer participants (in Open Source development), going on to note: “Voluntary cultures that work this way are not actually uncommon [… for example] science fiction fandom, which unlike hackerdom has long explicitly recognized ‘egoboo’ (ego-boosting, or the enhancement of one’s reputation among other fans) as the basic drive behind volunteer activity”. This may also be a prime mover for FF engagement. Where fans have outgrown the anodyne canon they get added value through using the raw materials of the HP stories to construct fanon: establishing and building individual identities and communities through HP consumption practices in parallel with, but different from, those deemed acceptable for younger, more innocent, fans. The fame implicit in HP fandom is not only that of HP, the HP lead actor Daniel Radcliffe and HP’s creator J K Rowling; for some fans the famed ‘state or quality of being widely honoured and acclaimed’ can be realised through their participation in online fan culture – fans become famous and recognised within their own community for the quality of their work and the generosity of their sharing with others. The cultural capital circulated on the FF sites is both canon and fanon, a matter of some anxiety for the corporations that typically buy into and foster these mega-media products. As Jim Ward, Vice-President of Marketing for Lucasfilm comments about Star Wars fans (cited in Murray 11): “We love our fans. We want them to have fun. 
But if in fact someone is using our characters to create a story unto itself, that’s not in the spirit of what we think fandom is about. Fandom is about celebrating the story the way it is.” Slash fans would beg to differ, and for many FF readers and writers, the joy of engagement, and a significant engine for the growth of HP fame, is partly located in the creativity offered for readers and writers to fill in the gaps. Endnote HP FF ranges from posts on general FF sites (such as fanfiction.net >> books, where HP has 147,067 stories [on 4,490 pages of hotlinks] posted, compared with its nearest ‘rival’ Lord of the rings: with 33,189 FF stories). General FF sites exclude adult content, much of which is corralled into 18+ FF sites, such as Restrictedsection.org, set up when core material was expelled from general sites. As an example of one adult site, the Potter Slash Archive is selective (unlike fanfiction.net, for example) which means that only stories liked by the site team are displayed. Authors submitting work are asked to abide by a list of ‘compulsory parameters’, but ‘warnings’ fall under the category of ‘optional parameters’: “Please put a warning if your story contains content that may be offensive to some authors [sic], such as m/m sex, graphic sex or violence, violent sex, character death, major angst, BDSM, non-con (rape) etc”. Adult-content FF readers/writers embrace a range of unexpected genres – such as Twincest (incest within either of the two sets of twin characters in HP) and Weasleycest (incest within the Weasley clan) – in addition to mainstream romance/homo-erotica pairings, such as that between Harry Potter and Draco Malfoy. (NB: within the time frame 16 August – 4 October, Harry Potter FF writers had posted an additional 9,196 stories on the fanfiction.net site alone.) References ABS. 8147.0 Use of the Internet by Householders, Australia. http://www.abs.gov.au/ausstats/abs@.nsf/ e8ae5488b598839cca25682000131612/ ae8e67619446db22ca2568a9001393f8!OpenDocument, 2001, 2001>. Baym, Nancy. “The Emergence of Community in Computer-Mediated Communication.” CyberSociety: Computer-Mediated Communication and Community. Ed. S. Jones. Thousand Oaks, CA: Sage, 1995. 138-63. Blue, Midnight. “The Mirror of Maybe.” http://www.greyblue.net/MidnightBlue/Mirror/default.htm>. Coates, Laura. “Muggle Kids Battle for Domain Name Rights. Irish Computer. http://www.irishcomputer.com/domaingame2.html>. Fanfiction.net. “Category: Books” http://www.fanfiction.net/cat/202/>. Green, Lelia. Technoculture: From Alphabet to Cybersex. Sydney: Allen & Unwin. Hearn, Greg, Tom Mandeville and David Anthony. The Communication Superhighway: Social and Economic Change in the Digital Age. Sydney: Allen & Unwin, 1997. Hills, Matt. Fan Cultures. London: Routledge, 2002. Houghton Mifflin. “Potlatch.” Encyclopedia of North American Indians. http://college.hmco.com/history/readerscomp/naind/html/ na_030900_potlatch.htm>. Kirby, Justin. “Brand Papers: Getting the Bug.” Brand Strategy July-August 2004. http://www.dmc.co.uk/pdf/BrandStrategy07-0804.pdf>. Marshall, P. David. “Technophobia: Video Games, Computer Hacks and Cybernetics.” Media International Australia 85 (Nov. 1997): 70-8. Murray, Simone. “Celebrating the Story the Way It Is: Cultural Studies, Corporate Media and the Contested Utility of Fandom.” Continuum 18.1 (2004): 7-25. Raymond, Eric S. The Cathedral and the Bazaar. 2000. http://www.catb.org/~esr/writings/cathedral-bazaar/cathedral-bazaar/ar01s11.html>. Streeter, Thomas. 
“The Romantic Self and the Politics of Internet Commercialization.” Cultural Studies 17.5 (2003): 648-68. Turner, Graeme, Frances Bonner, and P. David Marshall. Fame Games: The Production of Celebrity in Australia. Melbourne: Cambridge UP. Watson, Nessim. “Why We Argue about Virtual Community: A Case Study of the Phish.net Fan Community.” Virtual Culture: Identity and Communication in Cybersociety. Ed. Steven G. Jones. London: Sage, 1997. 102-32. Citation reference for this article MLA Style Green, Lelia, and Carmen Guinery. "Harry Potter and the Fan Fiction Phenomenon." M/C Journal 7.5 (2004). <http://journal.media-culture.org.au/0411/14-green.php>. APA Style Green, L., and C. Guinery. (Nov. 2004) "Harry Potter and the Fan Fiction Phenomenon," M/C Journal, 7(5). Retrieved from <http://journal.media-culture.org.au/0411/14-green.php>.
13

Segerstad, Ylva Hard af. "Swedish Chat Rooms". M/C Journal 3, n.º 4 (1 de agosto de 2000). http://dx.doi.org/10.5204/mcj.1865.

Resumen
Most investigations of language use in the computer-mediated communication (CMC) systems colloquially known as 'chat rooms' are based on studies of chat rooms in which English is the predominant language. This study begins to redress that bias by investigating language use in a Swedish text-based chat room. Do Swedish chat participants just adopt strategies adapted to suit the needs of written online conversation, or is Swedish written language being developed in analogy with adaptations that can be observed in 'international' chat rooms? As is now well known, text-based chat rooms provide a means for people to converse in near real time with very little delay between messages. As a written form of interaction, there is no possibility of sending simultaneous non-verbal information, and while the minimal delay gives the interaction a more conversational feel, the conversants must struggle with the time pressure of combining a slow message production system with rapid transmission-reception. Several strategies have been developed in order to ease the strain of writing and to convey more information than written symbols normally allow (Werry; Witmer & Katzman; Hård af Segerstad, "Emoticons"). A number of strategies have been developed to suit the needs of CMC, some of which we recognise from traditional writing, but perhaps use more generously in the new environment. Well known and internationally recognised strategies used to compensate for the lack of non-verbal or non-vocal signals include providing analogies for vocalisations adopted in order to compensate for the effort of typing and time pressure: Smileys (or emoticons): Smileys are combinations of keyboard characters which attempt to resemble facial expressions, eg. ;) (or simple objects such as roses). These are mostly placed at the end of a sentence as an aid to interpret the emotional state of the sender; Surrounding words with *asterisks* (or a number of variants, such as underscores (_word_)). As with smileys, asterisks may be used to indicate the emotional state of the sender (eg. *smiles*, *s*), and also to convey an action (*waves*, *jumps up and down*); In some systems, different fonts and colours may be used to express emotions. Capitals, unorthodox spelling and mixing of cases in the middle of words and Extreme use of punctuation marks may all be used to convey analogies to prosodic phenomena such as intonation, tone of voice, emphasis ("you IDIOT"); Abbreviations and acronyms: some are traditional, others new to the medium; Omission of words: ellipsis, grammatical function words; and, Little correction of typographical errors -- orthography or punctuation -- and little traditional use of mixed cases (eg. capitals at the beginning of sentences), and punctuation. Method This study compares and contrasts data from a questionnaire and material from a logged chat channel. The investigation began with a questionnaire, inquiring into the habits and preferences of Swedish students communicating on the Internet. 333 students (164 females and 169 males) answered the questionnaire that was sent to five upper secondary schools (students aged 16-18), and two lower secondary schools (students aged 13-15). Subjects were asked for three kinds of information: (a) examples of the strategies mentioned above and whether they used these when chatting online, (b) which languages were used in everyday communication and in chat rooms, and (c) the names of favourite chat rooms. 
One of the most popular public chat rooms turned out to be one maintained by a Swedish newspaper. Permission was obtained to log material from this chat room. The room may be accessed at: <http://nychat.aftonbladet.se/webchat/oppenkanal/Entren.php>. A 'bot (from 'robot', a program that can act like a user on an IRC network) was used to log the time, sender and content of contributions in the room. In order to get a large data set and to record the spread of activity over the better part of a week, approximately 120 hours of logging occurred, six days and nights in succession. During this period 4 293 users ('unique pseudonyms') from 278 different domains provided 47 715 contributions in total (410 355 total utterances). The logged material was analysed using the automated search tool TRASA (developed by Leif Gronqvist -- Dept. of Linguistics, Göteborg University, Sweden). Results The language used in the chat room was mainly Swedish. Apart from loan words (in some cases with the English spelling intact, in other cases adapted to Swedish spelling), English phrases (often idiomatic) showed up occasionally, sometimes in the middle of a Swedish sentence. Some examples of contributions are shown, extracted from their original context. (Note: Instances of Nordic letters in the examples have been transformed into the letters 'a' and 'o' respectively.)
Table 1. Examples of nicknames and contributions taken from the Web chat material.
01.07.20 | Darth Olsson | Helloo allibadi hur e de i dag?
14:44:40 | G.B | Critical information check
01.11.40 | Little Boy Lost | fru hjarterdam...120 mil busstripp...Later hojdare om det...;)
18.10.30 | PeeWee | this sucks
22.17.12 | Ellen (16) | Whatever!
16.06.55 | Blackboy | Whats up
The above examples demonstrate that both nicknames and contributions consist of a mix either of Swedish and English, or of pure English. In answering the questionnaire, the subjects gave many examples of the more 'traditional strategies' used in international chat channels for overcoming the limitations of writing: traditional abbreviations, the use of all uppercase, asterisk-framed words, extreme use of punctuation and the simplest smileys (Hård af Segerstad, "Emoticons", "Expressing Emotions", "Strategies" and "Swedish Teenagers"). The questionnaire results also included examples of 'net-abbreviations' based on English words. However, while these were similar to those observed in international chat rooms, the most interesting finding was that Swedish teenagers do not just copy that behaviour from the international chat rooms that they have visited: the examples of creative new abbreviations are coined by analogy with the innovative English net-abbreviations, but are based on Swedish words. A number of different types of abbreviations emerged: Acronyms made up from the first letters in a phrase (eg. "istf", meaning "i stallet for" [trans. "instead of"]); Numbers representing the sound value of a syllable in combination with letters (eg. "3vligt" meaning "trevligt" [trans. "nice"]); and, Letters representing the sound value of a syllable in combination with other letters forming an abbreviated representation of a word (eg. "CS" meaning "(vi) ses" [trans. "see (you)"]). The logged material showed that all of the strategies, both Swedish and English, mentioned in the questionnaire were actually used online (a minimal, hypothetical sketch of this kind of tallying follows below).
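The logging and counting described above relied on a purpose-built IRC bot and the TRASA search tool, neither of which is reproduced here. As a minimal, hypothetical sketch of the same idea, the Python below tallies how often a given set of abbreviations appears in a time-stamped chat log; the file name, log format and abbreviation list are illustrative assumptions, not the study's actual data or code.

import re
from collections import Counter

# Illustrative subset of the abbreviations reported in Tables 2 and 3 below.
ABBREVIATIONS = {"oxa", "oki", "asg", "cs", "ql", "tebax", "gbg", "ngn", "ngt", "bla"}

def tally_abbreviations(log_path):
    # Each logged line is assumed to look like: "01.07.20 Darth Olsson Helloo allibadi ..."
    # Note: this sketch case-folds, whereas the study distinguishes e.g. Oxa from OXA.
    counts = Counter()
    with open(log_path, encoding="utf-8") as log:
        for line in log:
            words = re.findall(r"[\wåäö'.]+", line.lower())
            counts.update(w for w in words if w in ABBREVIATIONS)
    return counts

# Usage (hypothetical file name): print the most frequent abbreviations, as in Table 3.
if __name__ == "__main__":
    for abbrev, n in tally_abbreviations("webchat_log.txt").most_common():
        print(abbrev, n)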
The Swedish strategies mentioned in the questionnaire are illustrated in Table 2.
Table 2. Examples of innovative and traditional Swedish abbreviations given in the questionnaire.
Innovative abbreviation | Full phrase | Translation | Traditional abbreviation | Full phrase | Translation
Asg | Asgarvar | Laughs hard | ngn | nagon | someone
Iofs | i och for sig | Strictly speaking | Ngra | nagra | some ones
iaf, if | i allafall | Anyway | gbg | Göteborg | Göteborg
É | Ar | Is | sv | svenska | Swedish
D | Det | It | bla | bland annat | among other things
Cs | (vi) ses | See you | t.ex. | till exempel | for example
Lr | Eller | Or | ngt | nagot | something
B.S.D.V | Bara Sa Du Vet | Just To Let You Know | t.om | till och med | even
P | Pa | On, at | etc | et cetera |
QL (ql) | Kul | Fun | m.m | med mera | and more
3vligt | Trevligt | Nice | m.a.o. | med andra ord | in other words
Tebax | Tillbaka | Back | mkt | mycket | a lot
Oxa | Ocksa | Too | ibl | Ibland | sometimes
The table above shows examples of traditional and creative abbreviations developed to suit the limitations and advantages of written Swedish online. A comparison of the logged material with the examples given in the questionnaire shows that all innovative abbreviations exemplified were used, sometimes with slightly different orthography.
Table 3. The most frequent abbreviations used in the chat material
No. of occurrences | Innovative abbreviations | No. of occurrences | Traditional abbreviations
224 | Oxa | 74 | GBG
101 | Oki | 60 | gbg
62 | Oki | 56 | ngn
47 | É | 43 | mm
16 | P | 42 | Gbg
10 | Iofs | 37 | ngt
10 | If | 26 | bla
10 | D | 19 | tex
5 | Tebax | 19 | Tom
5 | OKI | 18 | etc
4 | É | 8 | MM
4 | Ql | 6 | Ngn
4 | P | 5 | BLA
4 | OXA | 4 | tom
4 | D | 4 | NGN
3 | Asg | 4 | Mm
3 | IF | 3 | TEX
2 | Oxa | 2 | TOM
1 | Cs | 2 | Ngt
1 | Tebax | 1 | ngra
1 | QL | 1 | bLA
1 | If | 1 | ASG
The limited space of this article does not allow for a full analysis of the material from the chat, but in short, data from both the questionnaire and the Web chat of this study suggest that Swedish teenagers conversing in electronic chat rooms draw on their previous knowledge of strategies used in traditional written language to minimise time and effort when writing/typing (cf. Ferrara et al.). They do not just copy behaviour and strategies that they observe in international chat rooms that they have visited, but adapt these to suit the Swedish language. As well as saving time and effort typing, and apart from conveying non-verbal information, it would appear that these communication strategies are also used as a way of signalling and identifying oneself as 'cyber-regulars' -- people who know the game, so to speak. At this stage of research, beyond the use of Swedish language by Swedish nationals, there is nothing to indicate that the adaptations found are significantly different to online adaptations of English or French (cf. Werry). This result calls for further research on the specifics of Swedish adaptations. References Allwood, Jens. "An Activity Based Approach to Pragmatics." Gothenburg Papers in Theoretical Linguistics 76. Dept. of Linguistics, University of Göteborg, 1995. Ferrara, K., H. Brunner, and G. Whittemore. "Interactive Written Discourse as an Emergent Register." Written Communication 8.1 (1991): 8-34. Hård af Segerstad, Ylva. "Emoticons -- A New Mode for the Written Language." Dept. of Linguistics, Göteborg University, Sweden. Unpublished paper, 1998. ---. "Expressing Emotions in Electronic Writing." Dept. of Linguistics, Göteborg University, Sweden. Unpublished paper, 1998. ---. "Strategies in Computer-Mediated Written Communication -- A Comparison between Two User Groups." Dept. of Linguistics, Göteborg University, Sweden. Unpublished paper, 1998. ---. "Swedish Teenagers' Written Conversation in Electronic Chat Environments." WebTalk -- Writing As Conversation. Ed. Diane Penrod. Mahwah, NJ: Lawrence Erlbaum Associates, Forthcoming. Witmer, Diane, and Sandra Lee Katzman. 
"On-Line Smiles: Does Gender Make A Difference in the Use of Graphic Accents?" Journal of Computer-Mediated Communication 2.4 (1997). 19 Aug. 2000 <http://www.ascusc.org/jcmc/vol2/issue4/witmer1.php>. Werry, Christopher, C. "Linguistic and Interactional Features of Internet Relay Chat." Computer-Mediated Communication: Linguistic, Social and Cross-Cultural Perspectives. Ed. Susan Herring. Amsterdam: John Benjamins, 1996. 47-63. Citation reference for this article MLA style: Ylva Hård af Segerstad. "Swedish Chat Rooms." M/C: A Journal of Media and Culture 3.4 (2000). [your date of access] <http://www.api-network.com/mc/0008/swedish.php>. Chicago style: Ylva Hård af Segerstad, "Swedish Chat Rooms," M/C: A Journal of Media and Culture 3, no. 4 (2000), <http://www.api-network.com/mc/0008/swedish.php> ([your date of access]). APA style: Ylva Hård af Segerstad. (2000) Swedish chat rooms. M/C: A Journal of Media and Culture 3(4). <http://www.api-network.com/mc/0008/swedish.php> ([your date of access]).
14

Kennedy, Jenny, Indigo Holcombe-James y Kate Mannell. "Access Denied". M/C Journal 24, n.º 3 (21 de junio de 2021). http://dx.doi.org/10.5204/mcj.2785.

Resumen
Introduction As social-distancing mandates in response to COVID-19 restricted in-person data collection methods such as participant observation and interviews, researchers turned to socially distant methods such as interviewing via video-conferencing technology (Lobe et al.). These were not new tools nor methods, but the pandemic muted any bias towards face-to-face data collection methods. Exemplified in crowd-sourced documents such as Doing Fieldwork in a Pandemic, researchers were encouraged to pivot to digital methods as a means of fulfilling research objectives, “specifically, ideas for avoiding in-person interactions by using mediated forms that will achieve similar ends” (Lupton). The benefits of digital methods for expanding participant cohorts and scope of research have been touted long before 2020 and COVID-19, and, as noted by Murthy, are “compelling” (“Emergent” 172). Research conducted by digital methods can expect to reap benefits such as “global datasets/respondents” and “new modalities for involving respondents” (Murthy, “Emergent” 172). The pivot to digital methods is not in and of itself an issue. What concerns us is that in the dialogues about shifting to digital methods during COVID-19, there does not yet appear to have been a critical consideration of how participant samples and collected data will be impacted upon or skewed towards recording the experiences of advantaged cohorts. Existing literature focusses on the time-saving benefits for the researcher, reduction of travel costs (Fujii), the minimal costs for users of specific platforms – e.g. Skype –, and presumes ubiquity of device access for participants (Cater). We found no discussion on data costs of accessing such services being potential barriers to participation in research, although Deakin and Wakefield did share our concern that: Online interviews may ... mean that some participants are excluded due to the need to have technological competence required to participate, obtain software and to maintain Internet connection for the duration of the discussion. In this sense, access to certain groups may be a problem and may lead to issues of representativeness. (605) We write this as a provocation to our colleagues conducting research at this time to consider the cultural and material capital of their participants and how that capital enables them to participate in digitally-mediated data gathering practices, or not, and to what extent. Despite highlighting the potential benefits of digital methods within a methodological tool kit, Murthy previously cautioned against the implications posed by digital exclusion, noting that “the drawback of these research options is that membership of these communities is inherently restricted to the digital ‘haves’ ... rather than the ‘have nots’” (“Digital” 845). In this article, we argue that while tools such as Zoom have indeed enabled fieldwork to continue despite COVID disruptions, this shift to online platforms has important and under-acknowledged implications for who is and is not able to participate in research. In making this argument, we draw on examples from the Connected Students project, a study of digital inclusion that commenced just as COVID-19 restrictions came into effect in the Australian state of Victoria at the start of 2020. We draw on the experiences of these households to illustrate the barriers that such cohorts face when participating in online research. 
We begin by providing details about the Connected Students project and then contextualising it through a discussion of research on digital inclusion. We then outline three areas in which households would have experienced (or still do experience) difficulties participating in online research: data, devices, and skills. We use these findings to highlight the barriers that disadvantaged groups may face when engaging in data collection activities over Zoom and question how this is impacting on who is and is not being included in research during COVID-19. The Connected Students Program The Connected Students program was conducted in Shepparton, a regional city located 180km north of Melbourne. The town itself has a population of around 30,000, while the Greater Shepparton region comprises around 64,000 residents. Shepparton was chosen as the program’s site because it is characterised by a unique combination of low-income and low levels of digital inclusion. First, Shepparton ranks in the lowest interval for the Australian Bureau of Statistics’ Socio-Economic Indexes for Areas (SEIFA) and the Index of Relative Socioeconomic Advantage and Disadvantage (IRSAD), as reported in 2016 (Australian Bureau of Statistics, “Census”; Australian Bureau of Statistics, “Index”). Although Shepparton has a strong agricultural and horticultural industry with a number of food-based manufacturing companies in the area, including fruit canneries, dairies, and food processing plants, the town has high levels of long-term and intergenerational unemployment and jobless families. Second, Shepparton is in a regional area that ranks in the lowest interval for the Australian Digital Inclusion Index (Thomas et al.), which measures digital inclusion across dimensions of access, ability, and affordability. Funded by Telstra, Australia’s largest telecommunications provider, and delivered in partnership with Greater Shepparton Secondary College (GSSC), the Connected Students program provided low-income households with a laptop and an unlimited broadband Internet connection for up to two years. Households were recruited to the project via GSSC. To be eligible, households needed to hold a health care card and have at least one child attending the school in year 10, 11, or 12. Both the student and a caregiver were required to participate in the project to be eligible. Additional household members were invited to take part in the research, but were not required to. (See Kennedy & Holcombe-James; and Kennedy et al., "Connected Students", for further details regarding household demographics.) The Australian Digital Inclusion Index identifies that affordability is a significant barrier to digital inclusion in Australia (Thomas et al.). The project’s objective was to measure how removing affordability barriers to accessing connectivity for households impacts on digital inclusion. By providing participating households with a free unlimited broadband internet connection for the duration of the research, the project removed the costs associated with digital access. Access alone is not enough to resolve the digital exclusion confronted by these low-income households. Digital exclusion in these instances is not derived simply from the cost of Internet access, but from the cost of digital devices. As a result, these households typically lacked sufficient digital devices. 
Each household was therefore provided both a high speed Internet connection, and a brand new laptop with built-in camera, microphone, and speakers (a standard tool kit for video conferencing). Data collection for the Connected Students project was intended to be conducted face-to-face. We had planned in-person observations including semi-structured interviews with household members conducted at three intervals throughout the project’s duration (beginning, middle, and end), and technology tours of each home to spatially and socially map device locations and uses (Kennedy et al., Digital Domesticity). As we readied to make our first research trip to commence the study, COVID-19 was wreaking havoc. It quickly became apparent we would not be travelling to work, much less travelling around the state. We thus pivoted to digital methods, with all our data collection shifting online to interviews conducted via digital platforms such as Zoom and Microsoft Teams. While the pivot to digital methods saved travel hours, allowing us to scale up the number of households we planned to interview, it also demonstrated unexpected aspects of our participants’ lived experiences of digital exclusion. In this article, we draw on our first round of interviews which were conducted with 35 households over Zoom or Microsoft Teams during lockdown. The practice of conducting these interviews reveals insights into the barriers that households faced to digital research participation. In describing these experiences, we use pseudonyms for individual participants and refer to households using the pseudonym for the student participant from that household. Why Does Digital Inclusion Matter? Digital inclusion is broadly defined as universal access to the technologies necessary to participate in social and civic life (Helsper; Livingstone and Helsper). Although recent years have seen an increase in the number of connected households and devices (Thomas et al., “2020”), digital inclusion remains uneven. As elsewhere, digital disadvantage in the Australian context falls along geographic and socioeconomic lines (Alam and Imran; Atkinson et al.; Blanchard et al.; Rennie et al.). Digitally excluded population groups typically experience some combination of education, employment, income, social, and mental health hardship; their predicament is compounded by a myriad of important services moving online, from utility payments, to social services, to job seeking platforms (Australian Council of Social Service; Chen; Commonwealth Ombudsman). In addition to challenges in using essential services, digitally excluded Australians also miss out on the social and cultural benefits of Internet use (Ragnedda and Ruiu). Digital inclusion – and the affordability of digital access – should thus be a key concern for researchers looking to apply online methods. Households in the lowest income quintile spend 6.2% of their disposable income on telecommunications services, almost three times more than wealthier households (Ogle). Those in the lowest income quintile pay a “poverty premium” for their data, almost five times more per unit of data than those in the highest income quintile (Ogle and Musolino). As evidenced by the Australian Digital Inclusion Index, this is driven in part by a higher reliance on mobile-only access (Thomas et al., “2020”). Low-income households are more likely to access critical education, business, and government services through mobile data rather than fixed broadband data (Thomas et al., “2020”). 
For low-income households, digital participation is the top expense after housing, food, and transport, and is higher than domestic energy costs (Ogle). In the pursuit of responsible and ethical research, we caution against assuming research participants are able to bear the brunt of access costs in terms of having a suitable device, expending their own data resources, and having adequate skills to be able to complete the activity without undue stress. We draw examples from the Connected Students project to support this argument below. Findings: Barriers to Research Participation for Digitally Excluded Households If the Connected Students program had not provided participating households with a technology kit, their preexisting conditions of digital exclusion would have limited their research participation in three key ways. First, households with limited Internet access (particularly those reliant on mobile-only connectivity, and who have a few gigabytes of data per month) would have struggled to provide the data needed for video conferencing. Second, households would have struggled to participate due to a lack of adequate devices. Third, and critically, although the Connected Students technology kit provided households with the data and devices required to participate in the digital ethnography, this did not necessarily resolve the skills gaps that our households confronted. Data Prior to receiving the Connected Students technology kit, many households in our sample had limited modes of connectivity and access to data. For households with comparatively less or lower quality access to data, digital participation – whether for the research discussed here, or in contemporary life – came with very real costs. This was especially the case for households that did not have a home Internet connection and instead relied solely on mobile data. For these households, who carefully managed their data to avoid running out, participating in research through extended video conferences would have been impossible unless adequate financial reimbursement was offered. Households with very limited Internet access used a range of practices to manage and extend their data access by shifting internet costs away from the household budget. This often involved making use of free public Wi-Fi or library internet services. Ellie’s household, for instance, spent their weekends at the public library so that she and her sister could complete their homework. While laborious, these strategies worked well for the families in everyday life. However, they would have been highly unsuitable for participating in research, particularly during the pandemic. On the most obvious level, the expectations of library use – if not silent, then certainly quiet – would have prohibited a successful interview. Further, during COVID-19 lockdowns, public libraries (and other places that provide public Internet) became inaccessible for significant periods of time. Lastly, for some research designs, the location of participants is important even when participation is occurring online. In the case of our own project, the house itself as the site of the interview was critical as our research sought to understand how the layout and materiality of the home impacts on experiences of digital inclusion. We asked participants to guide us around their home, showing where technologies and social activities are colocated. 
Using the data provided by the Connected Students technology kit, households with limited Internet were able to participate in interviews from within their own homes. For these families, participating in online research would have been near impossible without the Connected Students Internet. Devices Even with adequate Internet connections, many households would have struggled to participate due to a lack of suitable devices. Laptops, which generally provide the best video conferencing experience, were seen as prohibitively expensive for many families. As a result, many families did not have a laptop or were making do with a laptop that was excessively slow, unreliable, and/or had very limited functionality. Desktop computers were rare and generally outdated to the extent that they were not able to support video conferencing. One parent, Melissa, described their barely-functioning desktop as “like part of the furniture more than a computer”. Had the Connected Students program not provided a new laptop with video and audio capabilities, participation in video interviews would have been difficult. This is highlighted by the challenges students in these households faced in completing online schooling prior to receiving the Connected Students kit. A participating student, Mallory, for example, explained that she had previously not had a laptop, relying only on her phone and an old iPad: Interviewer: Were you able to do all your homework on those, or was it sometimes tricky? Mallory: Sometimes it was tricky, especially if they wanted to do a call or something ... . Then it got a bit hard because then I would use up all my data, and then didn’t have much left. Interviewer: Yeah. Right. Julia (Parent): ... But as far as schoolwork, it’s hard to do everything on an iPad. A laptop or a computer is obviously easier to manoeuvre around for different things. This example raises several common issues that would likely present barriers to research participation. First, Mallory’s household did not have a laptop before being provided with one through the Connected Students program. Second, while her household did prioritise purchasing tablets and smartphones, which could be used for video conferencing, these were more difficult to navigate for certain tasks and used up mobile data, which, as noted above, was often a limited resource. Lastly, it is worth noting that in households that did already own a functioning laptop, it was often shared between several household members. As one parent, Vanessa, noted, “yeah, until we got the [Connected Students] devices, we had one laptop between the four of us that are here. And Noel had the majority use of that because that was his school work took priority”. This lack of individuated access to a device would make participation in some research designs difficult, particularly those that rely on regular access to a suitable device. Skills Despite the Connected Students program’s provision of data and device access, this did not ensure successful research participation. Many households struggled to engage with video research interviews due to insufficient digital skills. While a household with Internet connectivity might be considered on the “right” side of the digital divide, connectivity alone does not ensure participation. People also need to have the knowledge and skills required to use online resources.
Brianna’s household, for example, had downloaded Microsoft Teams to their desktop computer in readiness for the interview, but had neglected to consider whether that device had video or audio capabilities. To work around this restriction, the household decided to complete the interview via the Connected Students laptop, but this too proved difficult. Neither Brianna nor her parents were confident in transferring the interview link between devices, whether by email or otherwise, and the researchers had to talk them through the steps required to log on, find, and send the link via email. While Brianna’s household faced digital skills challenges that affected both parent and student participants, in others, such as Ariel’s, these challenges were concentrated at the parental level. In these instances, the student participant provided a vital resource, helping adults navigate platforms and participate in the research. As Celeste, Ariel’s parent, explained: “it’s just new things that I get a bit – like, even on here, because your email had come through to me and I said to Ariel ‘We’re going to use your computer with Teams. How do we do this?’ So, yeah, worked it out. I just had to look up my email address, but I [initially thought] oh, my god; what am I supposed to do here?” Although helpful in our own research given its focus on school-aged young people, this dynamic of parents being helped by their dependents illustrates that the adults in our sample were often unfamiliar with the digital skills required for video conferencing. Research focussing only on adults, or on households in which students have not developed these skills through extended periods of online education such as occurred during the COVID-19 lockdowns, may find participants lacking the digital skills to participate in video interviews. Participation was also affected by participants’ lack of the more subtle digital skills around the norms and conventions of video conferencing. Several households, for example, conducted their interviews in less than ideal situations, such as from both moving and parked cars. A portion of the interview with Piper’s household was completed as they drove the 30 minutes from their home into Shepparton. Due to living out of town, this household often experienced poor reception. The interview was thus regularly disrupted as they dropped in and out of range, with the interview transcript peppered with interjections such as “we’re going through a bit of an Internet light spot ... we’re back ... sorry ...” (Karina, parent). Finally, Piper switched the device on which they were taking the interview to gain a better connection: “my iPad that we were meeting on has worse Internet than my phone Internet, so we kind of changed it around” (Karina). Choosing to participate in the research from locations other than the home provides evidence of the limited time available to these families, and the onerousness of research participation. These choices also indicate unfamiliarity with video conferencing norms. As digitally excluded households, these participants were likely not the target of popular discussions throughout the pandemic about optimising video conferences through careful consideration of lighting, background, make-up and positioning (e.g. Lasky; Niven-Phillips). This was often evident in how participants positioned themselves in front of the camera, frequently choosing not to sit squarely within the frame.
Sometimes this was because several household members were participating and struggled to all sit within view of the single device, but awkward camera positioning also occurred with only one or two people present. A number of interviews were initially conducted with shoulders, or foreheads, or ceilings rather than “whole” participants until we asked them to reposition the device so that the camera was pointing towards their faces. In noting this unfamiliarity we do not seek to criticise or apportion responsibility for accruing such skills to participating households, but rather to highlight the impact this had on the type of conversation between researcher and participant. Such practices offer valuable insight into how digital exclusion impacts on individual’s everyday lives as well as on their research participation. Conclusion Throughout the pandemic, digital methods such as video conferencing have been invaluable for researchers. However, while these methods have enabled fieldwork to continue despite COVID-19 disruptions, the shift to online platforms has important and under-acknowledged implications for who is and is not able to participate in research. In this article, we have drawn on our research with low-income households to demonstrate the barriers that such cohorts experience when participating in online research. Without the technology kits provided as part of our research design, these households would have struggled to participate due to a lack of adequate data and devices. Further, even with the kits provided, households faced additional barriers due to a lack of digital literacy. These experiences raise a number of questions that we encourage researchers to consider when designing methods that avoid in person interactions, and when reviewing studies that use similar approaches: who doesn’t have the technological access needed to participate in digital and online research? What are the implications of this for who and what is most visible in research conducted during the pandemic? Beyond questions of access, to what extent will disadvantaged populations not volunteer to participate in online research because of discomfort or unfamiliarity with digital tools and norms? When low-income participants are included, how can researchers ensure that participation does not unduly burden them by using up precious data resources? And, how can researchers facilitate positive and meaningful participation among those who might be less comfortable interacting through mediums like video conferencing? In raising these questions we acknowledge that not all research will or should be focussed on engaging with disadvantaged cohorts. Rather, our point is that through asking questions such as this, we will be better able to reflect on how data and participant samples are being impacted upon by shifts to digital methods during COVID-19 and beyond. As researchers, we may not always be able to adapt Zoom-based methods to be fully inclusive, but we can acknowledge this as a limitation and keep it in mind when reporting our findings, and later when engaging with the research that was largely conducted online during the pandemic. Lastly, while the Connected Students project focusses on impacts of affordability on digital inclusion, digital disadvantage intersects with many other forms of disadvantage. 
Thus, while our study focussed specifically on financial disadvantage, our call to be aware of who is and is not able to participate in Zoom-based research applies to digital exclusion more broadly, whatever its cause. Acknowledgements The Connected Students project was funded by Telstra. This research was also supported under the Australian Research Council's Discovery Early Career Researchers Award funding scheme (project number DE200100540). References Alam, Khorshed, and Sophia Imran. “The Digital Divide and Social Inclusion among Refugee Migrants: A Case in Regional Australia.” Information Technology & People 28.2 (2015): 344–65. Atkinson, John, Rosemary Black, and Allan Curtis. “Exploring the Digital Divide in an Australian Regional City: A Case Study of Albury”. Australian Geographer 39.4 (2008): 479–493. Australian Bureau of Statistics. “Census of Population and Housing: Socio-Economic Indexes for Areas (SEIFA), Australia, 2016.” 2016. <https://www.abs.gov.au/ausstats/abs@.nsf/Lookup/by%20Subject/2033.0.55.001~2016~Main%20Features~SOCIO-ECONOMIC%20INDEXES%20FOR%20AREAS%20(SEIFA)%202016~1>. ———. “Index of Relative Socio-Economic Advantage and Disadvantage (IRSAD).” 2016. <https://www.abs.gov.au/ausstats/abs@.nsf/Lookup/by%20Subject/2033.0.55.001~2016~Main%20Features~IRSAD~20>. Australian Council of Social Service. “The Future of Parents Next: Submission to Senate Community Affairs Committee.” 8 Feb. 2019. <http://web.archive.org/web/20200612014954/https://www.acoss.org.au/wp-content/uploads/2019/02/ACOSS-submission-into-Parents-Next_FINAL.pdf>. Beer, David. “The Social Power of Algorithms.” Information, Communication & Society 20.1 (2017): 1–13. Blanchard, Michelle, et al. “Rethinking the Digital Divide: Findings from a Study of Marginalised Young People’s Information Communication Technology (ICT) Use.” Youth Studies Australia 27.4 (2008): 35–42. Cater, Janet. “Skype: A Cost Effective Method for Qualitative Research.” Rehabilitation Counselors and Educators Journal 4.2 (2011): 10-17. Chen, Jesse. “Breaking Down Barriers to Digital Government: How Can We Enable Vulnerable Consumers to Have Equal Participation in Digital Government?” Sydney: Australian Communications Consumer Action Network, 2017. <http://web.archive.org/web/20200612015130/https://accan.org.au/Breaking%20Down%20Barriers%20to%20Digital%20Government.pdf>. Commonwealth Ombudsman. “Centrelink’s Automated Debt Raising and Recovery System: Implementation Report, Report No. 012019.” Commonwealth Ombudsman, 2019. <http://web.archive.org/web/20200612015307/https://www.ombudsman.gov.au/__data/assets/pdf_file/0025/98314/April-2019-Centrelinks-Automated-Debt-Raising-and-Recovery-System.pdf>. Deakin Hannah, and Kelly Wakefield. “Skype Interviewing: Reflections of Two PhD Researchers.” Qualitative Research 14.5 (2014): 603-616. Fujii, LeeAnn. Interviewing in Social Science Research: A Relational Approach. Routledge, 2018. Helsper, Ellen. “Digital Inclusion: An Analysis of Social Disadvantage and the Information Society.” London: Department for Communities and Local Government, 2008. Kennedy, Jenny, and Indigo Holcombe-James. “Connected Students Milestone Report 1: Project Commencement". Melbourne: RMIT, 2021. <https://apo.org.au/node/312817>. Kennedy, Jenny, et al. “Connected Students Milestone Report 2: Findings from First Round of Interviews". Melbourne: RMIT, 2021. <https://apo.org.au/node/312818>. Kennedy, Jenny, et al. Digital Domesticity: Media, Materiality, and Home Life. Oxford UP, 2020. Lasky, Julie. 
“How to Look Your Best on a Webcam.” New York Times, 25 Mar. 2020 <http://www.nytimes.com/2020/03/25/realestate/coronavirus-webcam-appearance.html>. Livingstone, Sonia, and Ellen Helsper. “Gradations in Digital Inclusion: Children, Young People and the Digital Divide.” New Media & Society 9.4 (2007): 671–696. Lobe, Bojana, David L. Morgan, and Kim A. Hoffman. “Qualitative Data Collection in an Era of Social Distancing.” International Journal of Qualitative Methods 19 (2020): 1–8. Lupton, Deborah. “Doing Fieldwork in a Pandemic (Crowd-Sourced Document).” 2020. <http://docs.google.com/document/d/1clGjGABB2h2qbduTgfqribHmog9B6P0NvMgVuiHZCl8/edit?ts=5e88ae0a#>. Murthy, Dhiraj. “Digital Ethnography: An Examination of the Use of New Technologies for Social Research”. Sociology 42.2 (2008): 837–855. ———. “Emergent Digital Ethnographic Methods for Social Research.” Handbook of Emergent Technologies in Social Research. Ed. Sharlene Nagy Hesse-Biber. Oxford UP, 2011. 158–179. Niven-Phillips, Lisa. “‘Virtual Meetings Aren’t Going Anywhere Soon’: How to Put Your Best Zoom Face Forward.” The Guardian, 27 Mar. 2021. <http://www.theguardian.com/fashion/2021/mar/27/virtual-meetings-arent-going-anywhere-soon-how-to-put-your-best-zoom-face-forward>. Ogle, Greg. “Telecommunications Expenditure in Australia: Fact Sheet.” Sydney: Australian Communications Consumer Action Network, 2017. <https://web.archive.org/web/20200612043803/https://accan.org.au/files/Reports/ACCAN_SACOSS%20Telecommunications%20Expenditure_web_v2.pdf>. Ogle, Greg, and Vanessa Musolino. “Connectivity Costs: Telecommunications Affordability for Low Income Australians.” Sydney: Australian Communications Consumer Action Network, 2016. <https://web.archive.org/web/20200612043944/https://accan.org.au/files/Reports/161011_Connectivity%20Costs_accessible-web.pdf>. Ragnedda, Massimo, and Maria Laura Ruiu. “Social Capital and the Three Levels of Digital Divide.” Theorizing Digital Divides. Eds. Massimo Ragnedda and Glenn Muschert. Routledge, 2017. 21–34. Rennie, Ellie, et al. “At Home on the Outstation: Barriers to Home Internet in Remote Indigenous Communities.” Telecommunications Policy 37.6 (2013): 583–93. Taylor, Linnet. “What Is Data Justice? The Case for Connecting Digital Rights and Freedoms Globally. Big Data & Society 4.2 (2017): 1–14. Thomas, Julian, et al. Measuring Australia’s Digital Divide: The Australian Digital Inclusion Index 2018. Melbourne: RMIT University, for Telstra, 2018. ———. Measuring Australia’s Digital Divide: The Australian Digital Inclusion Index 2019. Melbourne: RMIT University and Swinburne University of Technology, for Telstra, 2019. ———. Measuring Australia’s Digital Divide: The Australian Digital Inclusion Index 2020. Melbourne: RMIT University and Swinburne University of Technology, for Telstra, 2020. Zuboff, Shoshana. “Big Other: Surveillance Capitalism and the Prospects of an Information Civilization. Journal of Information Technology 30 (2015): 75–89.
15

Cinque, Toija. "A Study in Anxiety of the Dark". M/C Journal 24, no. 2 (27 Apr. 2021). http://dx.doi.org/10.5204/mcj.2759.

Abstract
Introduction This article is a study in anxiety with regard to social online spaces (SOS) conceived of as dark. There are two possible ways to define ‘dark’ in this context. The first is that communication is dark because it has limited distribution, is not open to all users (closed groups are a case example), or is hidden. The second definition, which follows from the first, is the way that communication via these means is interpreted and understood. Dark social spaces disrupt the accepted top-down flow by the ‘gazing elite’ (data aggregators including social media), but anxious users might need to strain to notice what is out there, and this in turn destabilises one’s reception of the scene. In an environment where surveillance technologies are proliferating, this article examines contemporary, dark, interconnected, and interactive communications for the entangled affordances that might be brought to bear. A provocation is that resistance through counterveillance or “sousveillance” is one possibility. An alternative (or addition) is retreating to or building ‘dark’ spaces that are less surveilled and (perhaps counterintuitively) less fearful. This article considers critically the notion of dark social online spaces via four broad socio-technical concerns connected to the big social media services that have helped increase a tendency for fearful anxiety produced by surveillance and the perceived implications for personal privacy. It also shines light on the aspect of darkness where some users are spurred to actively seek alternative, dark social online spaces. Since the 1970s, public-key cryptosystems typically preserved security for websites, emails, and sensitive health, government, and military data, but this protection is now weakened (Williams). We have seen such systems exploited via cyberattacks and misappropriated data acquired by affiliations such as Facebook-Cambridge Analytica for targeted political advertising during the 2016 US elections. Via the notion of “parasitic strategies”, such events can be described as news/information hacks “whose attack vectors target a system’s weak points with the help of specific strategies” (von Nordheim and Kleinen-von Königslöw, 88). In accord with Wilson and Serisier’s arguments (178), emerging technologies facilitate rapid data sharing, collection, storage, and processing wherein subsequent “outcomes are unpredictable”. This would also include the effect of acquiescence. In regard to our digital devices, for some, being watched overtly—from cameras encased in toys, computers, and closed-circuit television (CCTV) to digital street ads that determine the resonance of human emotions in public places including bus stops, malls, and train stations—is becoming normalised (McStay, Emotional AI). It might appear that consumers immersed within this Internet of Things (IoT) are themselves comfortable interacting with devices that record sound and capture images for easy analysis and distribution across the communications networks. A counter-claim is that mainstream social media corporations have cultivated a sense of digital resignation “produced when people desire to control the information digital entities have about them but feel unable to do so” (Draper and Turow, 1824). Careful consumers’ trust in mainstream media is waning, with readers observing a strong presence of big media players in the industry and carefully picking their publications and public intellectuals to follow (Mahmood, 6).
A number now also avoid the mainstream internet in favour of alternate dark sites. This is done by users with “varying backgrounds, motivations and participation behaviours that may be idiosyncratic (as they are rooted in the respective person’s biography and circumstance)” (Quandt, 42). By way of connection with dark internet studies via Biddle et al. (1; see also Lasica), the “darknet” is described as “a collection of networks and technologies used to share digital content … not a separate physical network but an application and protocol layer riding on existing networks. Examples of darknets are peer-to-peer file sharing, CD and DVD copying, and key or password sharing on email and newsgroups”. As we note from the quote above, the “dark web” uses existing public and private networks that facilitate communication via the Internet. Gehl (1220; see also Gehl and McKelvey) has detailed that this includes “hidden sites that end in ‘.onion’ or ‘.i2p’ or other Top-Level Domain names only available through modified browsers or special software. Accessing I2P sites requires a special routing program ... . Accessing .onion sites requires Tor [The Onion Router]”. For some, this gives rise to social anxiety, read here as stemming from that which is not known, and an exaggerated sense of danger, which makes fight or flight seem the only options. This is often justified or exacerbated by the changing media and communication landscape and depicted in popular documentaries such as The Social Dilemma or The Great Hack, which affect public opinion on the unknown aspects of internet spaces and the uses of personal data. The question for this article remains whether the fear of the dark is justified. Consider that most often one will choose to make one’s intimate bedroom space dark in order to have a good night’s rest. We might pleasurably escape into a cinema’s darkness for the stories told therein, or walk along a beach at night enjoying unseen breezes. Most do not avoid these experiences, choosing to actively seek them out. Drawing this thread, then, is the case made here that agency can also be found in the dark by resisting socio-political structural harms. 1. Digital Futures and Anxiety of the Dark “Fear of the dark / I have a constant fear that something’s always near / Fear of the dark / Fear of the dark / I have a phobia that someone’s always there.” In the lyrics to the song “Fear of the Dark” (1992) by British heavy metal group Iron Maiden is a sense that that which is unknown and unseen causes fear and anxiety. Holding a fear of the dark is not unusual and varies in degree for adults as it does for children (Fellous and Arbib). Such anxiety connected to the dark does not always concern darkness itself. It can also be a concern for the possible or imagined dangers that are concealed by the darkness itself as a result of cognitive-emotional interactions (McDonald, 16). Extending this claim is this article’s non-binary assertion that while for some, technology and what it can do is frequently misunderstood and shunned as a result, for others who embrace the possibilities and actively take it on, it is learning by attentively partaking. Mistakes, solecisms, and frustrations are part of the process. Such conceptual theorising falls along a continuum of thinking. Global interconnectivity of communications networks has certainly led to consequent concerns (Turkle Alone Together).
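The Gehl passage above notes that hidden sites ending in ‘.onion’ are reachable only through “modified browsers or special software” such as Tor. The following sketch, which is not drawn from the article, illustrates one common form that this special software takes: a local Tor client exposes a SOCKS5 proxy on its default port, and an ordinary HTTP library is pointed at it. The onion address used is a hypothetical placeholder.

```python
# Minimal sketch: fetching a Tor hidden service through the local Tor SOCKS proxy.
# Assumptions: a Tor client is already running on 127.0.0.1:9050 (its default
# SOCKS port), and the 'requests' library is installed with SOCKS support
# (pip install requests[socks]). The .onion address below is a placeholder.
import requests

proxies = {
    # 'socks5h' (rather than 'socks5') resolves hostnames inside Tor,
    # which is necessary for .onion names to resolve at all.
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}

response = requests.get(
    "http://examplehiddenservicexyz.onion/",  # hypothetical hidden service
    proxies=proxies,
    timeout=60,
)
print(response.status_code)
```

In everyday use this plumbing is bundled inside tools such as Tor Browser; the sketch simply makes concrete the point, from the Biddle et al. definition quoted above, that ‘dark’ sites ride on ordinary networks behind an extra routing layer.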
Much focus for anxiety has been on the impact upon social and individual inner lives, levels of media concentration, and power over and commercialisation of the internet. Of specific note is that increasing commercial media influence—such as Facebook and its acquisition of WhatsApp, Oculus VR, Instagram, CTRL-labs (translating movements and neural impulses into digital signals), LiveRail (video advertising technology), Chainspace (Blockchain)—regularly changes the overall dynamics of the online environment (Turow and Kavanaugh). This provocation was borne out recently when Facebook disrupted the delivery of news to Australian audiences via its service. Mainstream social online spaces (SOS) are platforms which provide more than the delivery of media alone and have been conceptualised predominantly in a binary light. On the one hand, they can be depicted as tools for the common good of society through notional widespread access and as places for civic participation and discussion, identity expression, education, and community formation (Turkle; Bruns; Cinque and Brown; Jenkins). This end of the continuum of thinking about SOS seems set hard against the view, on the other hand, that SOS are operating as businesses with strategies that manipulate consumers to generate revenue through advertising, data, venture capital for advanced research and development, and company profit. In between the two polar ends of this continuum lies the range of other possibilities, the shades of grey, that add contemporary nuance to understanding SOS in regard to what they facilitate, what the various implications might be, and for whom. By way of a brief summary, anxiety of the dark is steeped, first, in the practices of privacy-invasive social media giants such as Facebook and its ancillary companies. Second are the advertising technology companies, surveillance contractors, and intelligence agencies that collect and monitor our actions and related data; as well as the increased ease of use and interoperability brought about by Web 2.0 that has seen a disconnection between technological infrastructure and social connection that acts to limit user permissions and online affordances. Third are concerns for the negative effects associated with depressed mental health and wellbeing caused by “psychologically damaging social networks”, through sleep loss, anxiety, poor body image, real world relationships, and the fear of missing out (FOMO; Royal Society for Public Health (UK) and the Young Health Movement). Here the harms are both individual and societal. Fourth is the intended acceleration toward post-quantum IoT (Fernández-Caramés), as quantum computing’s digital components are continually being miniaturised. This is coupled with advances in electrical battery capacity and interconnected telecommunications infrastructures. The result is that the ontogenetic capacity of the powerfully advanced network/s affords supralevel surveillance. What this means is that through devices and the services that they provide, individuals’ data is commodified (Neff and Nafus; Nissenbaum and Patterson). Personal data is enmeshed in ‘things’, requiring that decisions that are overt, subtle, and/or hidden (dark) be scrutinised for the various ways they shape social norms and create consequences for public discourse, cultural production, and the fabric of society (Gillespie). Data and personal information are retrievable from devices, sharable in SOS, and potentially exposed across networks.
For these reasons, some have chosen to go dark by being “off the grid”, judiciously selecting their means of communications and their ‘friends’ carefully. 2. Is There Room for Privacy Any More When Everyone in SOS Is Watching? An interesting turn comes through counterarguments against overarching institutional surveillance that underscore the uses of technologies to watch the watchers. This involves a practice of counter-surveillance whereby technologies are tools of resistance to go ‘dark’ and are used by political activists in protest situations for both communication and avoiding surveillance. This is not new and has long existed in an increasingly dispersed media landscape (Cinque, Changing Media Landscapes). For example, counter-surveillance video footage has been accessed and made available via live-streaming channels, with commentary in SOS augmenting networking possibilities for niche interest groups or micropublics (Wilson and Serisier, 178). A further example is the Wordpress site Fitwatch, appealing for an end to what the site claims are issues associated with police surveillance (fitwatch.org.uk and endpolicesurveillance.wordpress.com). Users of these sites are called to post police officers’ identity numbers and photographs in an attempt to identify “cops” that might act to “misuse” UK Anti-terrorism legislation against activists during legitimate protests. Others that might be interested in doing their own “monitoring” are invited to reach out to identified personal email addresses or other private (dark) messaging software and application services such as Telegram (freeware and cross-platform). In their work on surveillance, Mann and Ferenbok (18) propose that there is an increase in “complex constructs between power and the practices of seeing, looking, and watching/sensing in a networked culture mediated by mobile/portable/wearable computing devices and technologies”. By way of critical definition, Mann and Ferenbok (25) clarify that “where the viewer is in a position of power over the subject, this is considered surveillance, but where the viewer is in a lower position of power, this is considered sousveillance”. It is the aspect of sousveillance that is empowering to those using dark SOS. One might consider that not all surveillance is “bad” nor institutionalised. It is neither overtly nor formally regulated—as yet. Like most technologies, many of the surveillant technologies are value-neutral until applied towards specific uses, according to Mann and Ferenbok (18). But this is part of the ‘grey area’ for understanding the impact of dark SOS in regard to which actors or what nations are developing tools for surveillance, where access and control lies, and with what effects into the future. 3. Big Brother Watches, So What Are the Alternatives: Whither the Gazing Elite in Dark SOS? By way of conceptual genealogy, consideration of contemporary perceptions of surveillance in a visually networked society (Cinque, Changing Media Landscapes) might be usefully explored through a revisitation of Jeremy Bentham’s panopticon, applied here as a metaphor for contemporary surveillance. Arguably, this is a foundational theoretical model for integrated methods of social control (Foucault, Surveiller et Punir, 192-211), realised in the “panopticon” (prison) in 1787 by Jeremy Bentham (Bentham and Božovič, 29-95) during a period of social reformation aimed at the improvement of the individual. 
Like the power for social control over the incarcerated in a panopticon, police power, in order that it be effectively exercised, “had to be given the instrument of permanent, exhaustive, omnipresent surveillance, capable of making all visible … like a faceless gaze that transformed the whole social body into a field of perception” (Foucault, Surveiller et Punir, 213–4). In grappling with the impact of SOS for the individual and the collective in post-digital times, we can trace out these early ruminations on the complex documentary organisation through state-controlled apparatuses (such as inspectors and paid observers including “secret agents”) via Foucault (Surveiller et Punir, 214; Subject and Power, 326-7) for comparison to commercial operators like Facebook. Today, artificial intelligence (AI), facial recognition technology (FRT), and closed-circuit television (CCTV) for video surveillance are used for social control of appropriate behaviours. Exemplified by governments and the private sector is the use of combined technologies to maintain social order, from ensuring citizens cross the street only on green lights, to putting rubbish in the correct recycling bin or be publicly shamed, to making cashless payments in stores. The actions see advantages for individual and collective safety, sustainability, and convenience, but also register forms of behaviour and attitudes with predictive capacities. This gives rise to suspicions about a permanent account of individuals’ behaviour over time. Returning to Foucault (Surveiller et Punir, 135), the impact of this finds a dissociation of power from the individual, whereby they become unwittingly impelled into pre-existing social structures, leading to a ‘normalisation’ and acceptance of such systems. If we are talking about the dark, anxiety is key for a Ministry of SOS. Following Foucault again (Subject and Power, 326-7), there is the potential for a crawling, creeping governance that was once distinct but is itself increasingly hidden and growing. A blanket call for some form of ongoing scrutiny of such proliferating powers might be warranted, but with it comes regulation that, while offering certain rights and protections, is not without consequences. For their part, a number of SOS platforms had little to no moderation for explicit content prior to December 2018, and in terms of power, notwithstanding important anxiety connected to arguments that children and the vulnerable need protections from those that would seek to take advantage, this was a crucial aspect of community building and self-expression that resulted in this freedom of expression. In unearthing the extent that individuals are empowered arising from the capacity to post sexual self-images, Tiidenberg ("Bringing Sexy Back") considered that through dark SOS (read here as unregulated) some users could work in opposition to the mainstream consumer culture that provides select and limited representations of bodies and their sexualities. This links directly to Mondin’s exploration of the abundance of queer and feminist pornography on dark SOS as a “counterpolitics of visibility” (288). This work resulted in a reasoned claim that the technological structure of dark SOS created a highly political and affective social space that users valued. What also needs to be underscored is that many users also believed that such a space could not be replicated on other mainstream SOS because of the differences in architecture and social norms. 
Cho (47) worked with this theory to claim that dark SOS are modern-day examples in a history of queer individuals having to rely on “underground economies of expression and relation”. Discussions such as these complicate what dark SOS might now become in the face of ‘adult’ content moderation and emerging tracking technologies to close sites or locate individuals who transgress social norms. Further, broader questions are raised about how content moderation fits in with the public space conceptualisations of SOS more generally. Increasingly, “there is an app for that” where being able to identify the poster of an image or an author of an unknown text is seen as crucial. While there is presently no standard approach, models for determining authorship attribution that combine instance-based and profile-based features with classifiers such as support vector machines (SVMs) are in development, with the result that potentially far less content will remain hidden in the future (Bacciu et al.). 4. There’s Nothing New under the Sun (Ecclesiastes 1:9) For some, “[the] high hopes regarding the positive impact of the Internet and digital participation in civic society have faded” (Schwarzenegger, 99). My participant observation over some years in various SOS, however, finds that critical concern has always existed. Views move along the spectrum of thinking from deep scepticism (Stoll, Silicon Snake Oil) to wondrous techno-utopian promises (Negroponte, Being Digital). Indeed, concerns about the (then) new technologies of wireless broadcasting can be compared with today’s anxiety over the possible effects of the internet and SOS. Inglis (7) recalls: “here, too, were fears that humanity was tampering with some dangerous force; might wireless wave be causing thunderstorms, droughts, floods? Sterility or strokes? Such anxieties soon evaporated; but a sense of mystery might stay longer with evangelists for broadcasting than with a laity who soon took wireless for granted and settled down to enjoy the products of a process they need not understand.” As the analogy above makes clear, just as audiences came to use ‘the wireless’ and later the internet regularly, it is reasonable to argue that dark SOS will also gain widespread understanding and find greater acceptance. Dark social spaces are simply the recent development of internet connectivity and communication more broadly. The dark SOS afford choice to be connected beyond mainstream offerings, which some users avoid for their perceived manipulation of content and user both. As part of the wider array of dark web services, the resilience of dark social spaces is reinforced by the proliferation of users as opposed to decentralised replication. Virtual Private Networks (VPNs) can be used for anonymity in parallel to TOR access, but they guarantee only anonymity to the client. A VPN cannot guarantee anonymity to the server or the internet service provider (ISP). While users may use pseudonyms rather than actual names as seen on Facebook and other SOS, users continue to take to the virtual spaces they inhabit their off-line, ‘real’ foibles, problems, and idiosyncrasies (Chenault). To varying degrees, however, people also take their best intentions to their interactions in the dark. The hyper-efficient tools now deployed can intensify this, which is the great advantage attracting some users.
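Earlier in this abstract the author notes that authorship-attribution models combining instance-based and profile-based features with classifiers such as SVMs are in development (Bacciu et al.). The sketch below is not their model; it is a minimal, generic illustration of the underlying idea, using character n-gram features and a linear SVM over toy texts with hypothetical author labels.

```python
# Minimal, generic sketch of SVM-based authorship attribution:
# character n-gram TF-IDF features feeding a linear SVM.
# The texts and author labels are toy placeholders, not real data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = [
    "the dark of the net is not the dark of the night",
    "hidden spaces afford quiet forms of association",
    "surveillance is cheap; attention is the scarce resource",
    "platforms monetise the traces that users leave behind",
]
authors = ["author_a", "author_a", "author_b", "author_b"]

model = make_pipeline(
    # Character n-grams within word boundaries are a common stylometric feature.
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LinearSVC(),
)
model.fit(texts, authors)

# Predict the (hypothetical) author of an unseen, unattributed text.
print(model.predict(["the net leaves traces in the dark"]))
```

In a fuller system, such instance-level classification would be combined with profile-based features (statistics aggregated over all of an author's known texts), which is the kind of combination the cited work explores.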
In balance, however, in regard to online information access and dissemination, critical examination of what is in the public’s interest, and whether content should be regulated or controlled versus allowing a free flow of information where users self-regulate their online behaviour, is fraught. O’Loughlin (604) was one of the first to claim that there will be voluntary loss through negative liberty or freedom from (freedom from unwanted information or influence) and an increase in positive liberty or freedom to (freedom to read or say anything); hence, freedom from surveillance and interference is a kind of negative liberty, consistent with both libertarianism and liberalism. Conclusion The early adopters of initial iterations of SOS were hopeful and liberal (utopian) in their beliefs about universality and ‘free’ spaces of open communication between like-minded others. This was a way of virtual networking using a visual motivation (led by images, text, and sounds) for consequent interaction with others (Cinque, Visual Networking). The structural transformation of the public sphere in a Habermasian sense—and now found in SOS and their darker, hidden or closed social spaces that might ensure a counterbalance to the power of those with influence—towards all having equal access to platforms for presenting their views, and doing so respectfully, is as ever problematised. Broadly, this is no more so, however, than for mainstream SOS or for communicating in the world. References Bacciu, Andrea, Massimo La Morgia, Alessandro Mei, Eugenio Nerio Nemmi, Valerio Neri, and Julinda Stefa. “Cross-Domain Authorship Attribution Combining Instance Based and Profile-Based Features.” CLEF (Working Notes). Lugano, Switzerland, 9-12 Sep. 2019. Bentham, Jeremy, and Miran Božovič. The Panopticon Writings. London: Verso Trade, 1995. Biddle, Peter, et al. “The Darknet and the Future of Content Distribution.” Proceedings of the 2002 ACM Workshop on Digital Rights Management. Vol. 6. Washington DC, 2002. Bruns, Axel. Blogs, Wikipedia, Second Life, and Beyond: From Production to Produsage. New York: Peter Lang, 2008. Chenault, Brittney G. “Developing Personal and Emotional Relationships via Computer-Mediated Communication.” CMC Magazine 5.5 (1998). 1 May 2020 <http://www.december.com/cmc/mag/1998/may/chenault.html>. Cho, Alexander. “Queer Reverb: Tumblr, Affect, Time.” Networked Affect. Eds. K. Hillis, S. Paasonen, and M. Petit. Cambridge, Mass.: MIT Press, 2015: 43-58. Cinque, Toija. Changing Media Landscapes: Visual Networking. London: Oxford UP, 2015. ———. “Visual Networking: Australia's Media Landscape.” Global Media Journal: Australian Edition 6.1 (2012): 1-8. Cinque, Toija, and Adam Brown. “Educating Generation Next: Screen Media Use, Digital Competencies, and Tertiary Education.” Digital Culture & Education 7.1 (2015). Draper, Nora A., and Joseph Turow. “The Corporate Cultivation of Digital Resignation.” New Media & Society 21.8 (2019): 1824-1839. Fellous, Jean-Marc, and Michael A. Arbib, eds. Who Needs Emotions? The Brain Meets the Robot. New York: Oxford UP, 2005. Fernández-Caramés, Tiago M. “From Pre-Quantum to Post-Quantum IoT Security: A Survey on Quantum-Resistant Cryptosystems for the Internet of Things.” IEEE Internet of Things Journal 7.7 (2019): 6457-6480. Foucault, Michel. Surveiller et Punir: Naissance de la Prison [Discipline and Punish—The Birth of The Prison]. Trans. Alan Sheridan. New York: Random House, 1977. Foucault, Michel. 
“The Subject and Power.” Michel Foucault: Power, the Essential Works of Michel Foucault 1954–1984. Vol. 3. Trans. R. Hurley and others. Ed. J.D. Faubion. London: Penguin, 2001. Gehl, Robert W. Weaving the Dark Web: Legitimacy on Freenet, Tor, and I2P. Cambridge, Massachusetts: MIT Press, 2018. Gehl, Robert, and Fenwick McKelvey. “Bugging Out: Darknets as Parasites of Large-Scale Media Objects.” Media, Culture & Society 41.2 (2019): 219-235. Gillespie, Tarleton. Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media. London: Yale UP, 2018. Habermas, Jürgen. The Structural Transformation of the Public Sphere: An Inquiry into a Category of Bourgeois Society. Trans. Thomas Burger with the assistance of Frederick Lawrence. Cambridge, Mass.: MIT Press, 1989. Inglis, Ken S. This Is the ABC: The Australian Broadcasting Commission 1932–1983. Melbourne: Melbourne UP, 1983. Iron Maiden. “Fear of the Dark.” London: EMI, 1992. Jenkins, Henry. Convergence Culture: Where Old and New Media Collide. New York: New York UP, 2006. Lasica, J. D. Darknet: Hollywood’s War against the Digital Generation. New York: John Wiley and Sons, 2005. Mahmood, Mimrah. “Australia's Evolving Media Landscape.” 13 Apr. 2021 <https://www.meltwater.com/en/resources/australias-evolving-media-landscape>. Mann, Steve, and Joseph Ferenbok. “New Media and the Power Politics of Sousveillance in a Surveillance-Dominated World.” Surveillance & Society 11.1/2 (2013): 18-34. McDonald, Alexander J. “Cortical Pathways to the Mammalian Amygdala.” Progress in Neurobiology 55.3 (1998): 257-332. McStay, Andrew. Emotional AI: The Rise of Empathic Media. London: Sage, 2018. Mondin, Alessandra. “‘Tumblr Mostly, Great Empowering Images’: Blogging, Reblogging and Scrolling Feminist, Queer and BDSM Desires.” Journal of Gender Studies 26.3 (2017): 282-292. Neff, Gina, and Dawn Nafus. Self-Tracking. Cambridge, Mass.: MIT Press, 2016. Negroponte, Nicholas. Being Digital. New York: Alfred A. Knopf, 1995. Nissenbaum, Helen, and Heather Patterson. “Biosensing in Context: Health Privacy in a Connected World.” Quantified: Biosensing Technologies in Everyday Life. Ed. Dawn Nafus. 2016. 68-79. O’Loughlin, Ben. “The Political Implications of Digital Innovations.” Information, Communication and Society 4.4 (2001): 595–614. Quandt, Thorsten. “Dark Participation.” Media and Communication 6.4 (2018): 36-48. Royal Society for Public Health (UK) and the Young Health Movement. “#Statusofmind.” 2017. 2 Apr. 2021 <https://www.rsph.org.uk/our-work/campaigns/status-of-mind.html>. Statista. “Number of IoT devices 2015-2025.” 27 Nov. 2020 <https://www.statista.com/statistics/471264/iot-number-of-connected-devices-worldwide/>. Schwarzenegger, Christian. “Communities of Darkness? Users and Uses of Anti-System Alternative Media between Audience and Community.” Media and Communication 9.1 (2021): 99-109. Stoll, Clifford. Silicon Snake Oil: Second Thoughts on the Information Highway. Anchor, 1995. Tiidenberg, Katrin. “Bringing Sexy Back: Reclaiming the Body Aesthetic via Self-Shooting.” Cyberpsychology: Journal of Psychosocial Research on Cyberspace 8.1 (2014). The Great Hack. Dirs. Karim Amer, Jehane Noujaim. Netflix, 2019. The Social Dilemma. Dir. Jeff Orlowski. Netflix, 2020. Turkle, Sherry. The Second Self: Computers and the Human Spirit. Cambridge, Mass.: MIT Press, 2005. Turkle, Sherry. Alone Together: Why We Expect More from Technology and Less from Each Other. UK: Hachette, 2017. Turow, Joseph, and Andrea L. 
Kavanaugh, eds. The Wired Homestead: An MIT Press Sourcebook on the Internet and the Family. Cambridge, Mass.: MIT Press, 2003. Von Nordheim, Gerret, and Katharina Kleinen-von Königslöw. “Uninvited Dinner Guests: A Theoretical Perspective on the Antagonists of Journalism Based on Serres’ Parasite.” Media and Communication 9.1 (2021): 88-98. Williams, Chris K. “Configuring Enterprise Public Key Infrastructures to Permit Integrated Deployment of Signature, Encryption and Access Control Systems.” MILCOM 2005-2005 IEEE Military Communications Conference. IEEE, 2005. Wilson, Dean, and Tanya Serisier. “Video Activism and the Ambiguities of Counter-Surveillance.” Surveillance & Society 8.2 (2010): 166-180.
16

Bruns, Axel. "Old Players, New Players". M/C Journal 1, no. 5 (1 Dec. 1998). http://dx.doi.org/10.5204/mcj.1729.

Abstract
If you have a look at the concert schedules around Australia (and elsewhere in the Western world) these days, you could be forgiven for thinking that you've suddenly been transported back in time: there is a procession of old players, playing (mainly) old songs. The Rolling Stones came through a while ago, as did the Eagles, Creedence Clearwater Revival's John Fogerty, and James Brown. Jimmy Page and Robert Plant played updated versions of Led Zeppelin's music, with some new songs strewn in on occasion. The Beach Boys served up a double blast from the past, touring with America ("Horse with No Name") as their opening act. Australian content in this trend is provided by the odd assortment of media darling John Farnham, ex-Grease girl Olivia Newton-John, and former Phantom of the Opera Anthony Warlow, who are touring under the unlikely name of 'The Main Event'; Australian rock legends Cold Chisel have also reformed recently, with a reunion tour to follow. On the more prestigious end of the pop mainstream, The Three Tenors have only had one concert in Australia recently, but publicity-savvy as they have proven themselves to be during the Football World Cup it's a fairly safe bet that they'll be rolling into Sydney Opera House in time for the last Olympics of this millennium, in the year 2000. Thankfully, we've so far been spared of a remaining-Beatles reunion and tour (they did release their Anthology CDs and videos, though), but it wouldn't really come as a surprise anymore. Why this wave of musical exhumations; why now? Admittedly, some of the reunions produced interesting results (Page & Plant's update of Led Zeppelin songs with world music elements comes to mind), but largely the bands involved have restricted themselves to playing old favourites or producing new music that is content with plagiarising older material, and so it's unlikely that the Beach Boys are touring, for example, because they have a strong desire to take surf music to the next level of art. A better explanation, it seems, can be found in the music industry and its structures, and in the way those structures are increasingly becoming inadequate for today's mediascape. For much of this century, popular music in the Western world -- while music itself is a global obsession, the marketing industry largely remains dominated by the West -- has come in waves: to give a broad overview, jazz was outdone by rock'n'roll, which was followed by the British invasion and the British blues revival, leading to the stadium rock of the 1970s (co-existing with disco), which in turn caused the punk revolution that fizzled out into New Wave and the new romantics, which were superseded by Alternative Rock and Britpop. Looking at this succession, it's not difficult to see that the waves have become smaller over time, though: recent styles have failed by far to reach the heights of interest and influence that earlier waves like rock'n'roll and the British invasion achieved. How many people will remember, say, Oasis in three decades; how many will The Beatles? The question seems unfair. This gradual decrease in wave amplitude over the years is directly linked to changes in the media structure in the Western world: earlier, new musical waves swept the few available channels of radio and TV to their full extent; severe bandwidth limitations forced the broadcasters to divert their entire attention to the latest trends, with no air time to be spared for the music of yesteryear. 
As the number of channels increased, however, so did the potential for variety; today, most cities of sufficient size at least have stations catering for listeners of classical music, over-40s easy listening, mainstream rock, and alternative rock, and perhaps there's also an open-access channel for the more obscure styles; stations for more specific tastes -- all-jazz, all-heavy metal, all-goth -- are now also viable in some cities. As new style waves come in, they might still sweep through the mainstream stations, but will only manage to cause some minor ripples amongst the less central channels. Similar trends exist among music stores and the music press. The mainstream might remain in the middle of the musical spectrum, therefore, but it's been narrowed considerably, with more and more music fans moving over to the more specialised channels. There is now "an increasingly fragmented international marketplace of popular musics" (Campbell Robinson et al. 272). In media-rich Western nations, this trend is strengthened further by changes to the mediascape brought on by the Internet: the Net is the ultimate expander of bandwidth, where anyone can add another channel if their needs aren't met by the existing ones. With an unlimited number of specialised channels, with fans deciding their musical diet for themselves instead of having radio DJs or music journalists do it for them, and with the continued narrowing of the mainstream as it loses more and more listeners, new waves of musical styles lose their impact almost immediately now. Whatever your specific tastes, you'll find like-minded people, specialty labels and CD retailers, perhaps even an Internet radio station -- there is now less need than ever to engage with outside trends. Whether that development is entirely desirable remains a point of debate, of course. The paradox for the big old players in the music industry is that the ongoing globalisation of their markets hasn't also led to a globalisation of musical tastes -- largely because of this exponential increase and diversification of channels. Music is a powerful instrument of community formation, and community formation implies first and foremost a drawing of boundaries against everything that isn't part of the community (Turner 2): as musical styles diversify, therefore, there are now more musical taste communities than anyone would care to list. Instead of turning to some mainstreamed, global style of music, listeners are found to turn to the local -- either to the music produced geographically local to them, or to a form of virtually local music, that is, the music of a geographically dispersed, but (through modern communications technologies) otherwise highly unified taste community (Bruns sect. 1 bite 8ff.). There certainly are more such groupings than the industry would care to cater for: the division of their resources in order to follow musical trends in a large number of separate communities is eating into the profits of the large multinationals, while small specialty labels are experiencing a resurgence (despite the major labels' attempts to discourage them). As Wallis & Malm note, "the transformation of the business side of the music industry into a number of giant concerns has not stopped small enterprises, often run by enthusiasts, from cropping up everywhere" (270). The large conglomerates are remarkably ill-prepared to deal with such a plurality of styles: everything in their structure is crying out for a unified market with few, major, and tightly controlled trends.
This is where we (and the industry) return to the Beach Boys & Co., then. Partly out of a desire for the good old times when the music business was simple, partly to see if a revival of the old marketing concepts may not reverse the tide once more, the industry majors have unleashed this procession of the musical undead (with only a few notable exceptions) upon us; it is a last-stand attempt to regather the remaining few serviceable battleships of the mainstream fleet to grab whatever riches are still to be found there. Judging by ticket prices alone (Page & Plant charged over A$110 per head), there still is money to be made, but these prices also indicate that such 'mainstream' acts are now largely a spectacle for well-to-do over-35s. Amongst younger audiences, the multinationals remain mostly clueless, despite a few efforts to create massively hyped but musically lobotomised lowest-common-denominator acts, from the Spice Girls to Céline Dion or U2. Most of the acts the major industry players cling to as their main attractions have quite simply lost relevance to all but the most gullible of audiences -- in this context, the advertisement of the travelling Farnham / Newton-John / Warlow show as 'The Main Event' seems almost touching in its denial of reality. It's not like the industry hasn't tried this strategy before, of course: reacting to the fragmented musical world of the early 1970s, with styles from folk to hard rock all equally vying for a share of the audience, the labels created stadium rock -- oversized concerts of overproduced bands who eventually became alienated from their audiences, causing the radical back-to-the-roots revolution of punk. Stadium rock mark II is bound to fail even more quickly and decisively: with most of its proponents not even creating any excitement in the all-important 'young adults' market in the first place, it's the wave that wasn't, and should properly be seen as the best sign yet of the industry's loss of touch with its fragmenting market(s). It's time for new, smaller, and more mobile players to take over from the multinationals, it seems. References Bruns, Axel. "'Every Home Is Wired': The Use of Internet Discussion Fora by a Subcultural Community." 1998. 17 Dec. 1998 <http://www.uq.net.au/~zzabruns/uni/honours/thesis.php>. Campbell Robinson, Deanna, et al. Music at the Margins: Popular Music and Global Cultural Diversity. Newbury Park, Calif.: Sage, 1991. Turner, Graeme. "Rock Music, National Culture and Cultural Policy." Rock Music: Politics and Policy. Ed. Tony Bennett. Brisbane: Institute for Cultural Policy Studies, Griffith U, 1988. 1-6. Wallis, Roger, and Krister Malm. Big Sounds from Small Peoples: The Music Industry in Small Countries. London: Constable, 1984. Citation reference for this article MLA style: Axel Bruns. "Old Players, New Players: The Main Event That Isn't." M/C: A Journal of Media and Culture 1.5 (1998). [your date of access] <http://www.uq.edu.au/mc/9812/main.php>. Chicago style: Axel Bruns, "Old Players, New Players: The Main Event That Isn't," M/C: A Journal of Media and Culture 1, no. 5 (1998), <http://www.uq.edu.au/mc/9812/main.php> ([your date of access]). APA style: Axel Bruns. (1998) Old players, new players: the Main Event that isn't. M/C: A Journal of Media and Culture 1(5). <http://www.uq.edu.au/mc/9812/main.php> ([your date of access]).
17

Arnold, Bruce, and Margalit Levin. "Ambient Anomie in the Virtualised Landscape? Autonomy, Surveillance and Flows in the 2020 Streetscape". M/C Journal 13, no. 2 (3 May 2010). http://dx.doi.org/10.5204/mcj.221.

Abstract
Our thesis is that the city’s ambience is now an unstable dialectic in which we are watchers and watched, mirrored and refracted in a landscape of iPhone auteurs, eTags, CCTV and sousveillance. Embrace ambience! Invoking Benjamin’s spirit, this article does not seek to limit understanding through restriction to a particular theme or theoretical construct (Buck-Morss 253). Instead, it offers snapshots of interactions at the dawn of the postmodern city. That bricolage also engages how people appropriate, manipulate, disrupt and divert urban spaces and strategies of power in their everyday life. Ambient information can both liberate and disenfranchise the individual. This article asks whether our era’s dialectics result in a new personhood or merely restate the traditional spectacle of ‘bright lights, big city’. Does the virtualized city result in ambient anomie and satiation or in surprise, autonomy and serendipity? (Gumpert 36) Since the steam age, ambience has been characterised in terms of urban sound, particularly the alienation attributable to the individual’s experience as a passive receptor of a cacophony of sounds – now soft, now loud, random and recurrent–from the hubbub of crowds, the crash and grind of traffic, the noise of industrial processes and domestic activity, factory whistles, fire alarms, radio, television and gramophones (Merchant 111; Thompson 6). In the age of the internet, personal devices such as digital cameras and iPhones, and urban informatics such as CCTV networks and e-Tags, ambience is interactivity, monitoring and signalling across multiple media, rather than just sound. It is an interactivity in which watchers observe the watched observing them and the watched reshape the fabric of virtualized cities merely by traversing urban precincts (Hillier 295; De Certeau 163). It is also about pervasive although unevenly distributed monitoring of individuals, using sensors that are remote to the individual (for example cameras or tag-readers mounted above highways) or are borne by the individual (for example mobile phones or badges that systematically report the location to a parent, employer or sex offender register) (Holmes 176; Savitch 130). That monitoring reflects what Doel and Clark characterized as a pervasive sense of ambient fear in the postmodern city, albeit fear that like much contemporary anxiety is misplaced–you are more at risk from intimates than from strangers, from car accidents than terrorists or stalkers–and that is ahistorical (Doel 13; Scheingold 33). Finally, it is about cooption, with individuals signalling their identity through ambient advertising: wearing tshirts, sweatshirts, caps and other apparel that display iconic faces such as Obama and Monroe or that embody corporate imagery such as the Nike ‘Swoosh’, Coca-Cola ‘Ribbon’, Linux Penguin and Hello Kitty feline (Sayre 82; Maynard 97). In the postmodern global village much advertising is ambient, rather than merely delivered to a device or fixed on a billboard. Australian cities are now seas of information, phantasmagoric environments in which the ambient noise encountered by residents and visitors comprises corporate signage, intelligent traffic signs, displays at public transport nodes, shop-window video screens displaying us watching them, and a plethora of personal devices showing everything from the weather to snaps of people in the street or neighborhood satellite maps. 
They are environments through which people traverse both as persons and abstractions, virtual presences on volatile digital maps and in online social networks. Spectacle, Anomie or Personhood The spectacular city of modernity is a meme of communication, cultural and urban development theory. It is spectacular in the sense of being large, artificial, even sublime. It is also spectacular because it is built around the gaze, whether the vistas of Haussmann’s boulevards, the towers of Manhattan and Chicago, the shopfront ‘sea of light’ and advertising pillars noted by visitors to Weimar Berlin or the neon ‘neo-baroque’ of Las Vegas (Schivelbusch 114; Fritzsche 164; Ndalianis 535). In the year 2010 it aspires to 2020 vision, a panoptic and panspectric gaze on the part of governors and governed alike (Kullenberg 38). In contrast to the timelessness of Heidegger’s hut and the ‘fixity’ of rural backwaters, spectacular cities are volatile domains where all that is solid continues to melt into air with the aid of jackhammers and the latest ‘new media’ potentially result in a hyperreality that makes it difficult to determine what is real and what is not (Wark 22; Berman 19). The spectacular city embodies a dialectic. It is anomic because it induces an alienation in the spectator, a fatigue attributable to media satiation and to a sense of being a mere cog in a wheel, a disempowered and readily-replaceable entity that is denied personhood–recognition as an autonomous individual–through subjection to a Fordist and post-Fordist industrial discipline or the more insidious imprisonment of being ‘a housewife’, one ant in a very large ant hill (Dyer-Witheford 58). People, however, are not automatons: they experience media, modernity and urbanism in different ways. The same attributes that erode the selfhood of some people enhance the autonomy and personhood of others. The spectacular city, now a matrix of digits, information flows and opportunities, is a realm in which people can subvert expectations and find scope for self-fulfillment, whether by wearing a hoodie that defeats CCTV or by using digital technologies to find and associate with other members of stigmatized affinity groups. One person’s anomie is another’s opportunity. Ambience and Virtualisation Eighty years after Fritz Lang’s Metropolis forecast a cyber-sociality, digital technologies are resulting in a ‘virtualisation’ of social interactions and cities. In post-modern cityscapes, the space of flows comprises an increasing number of electronic exchanges through physically disjointed places (Castells 2002). Virtualisation involves supplementation or replacement of face-to-face contact with hypersocial communication via new media, including SMS, email, blogging and Facebook. In 2010 your friends (or your boss or a bully) may always be just a few keystrokes away, irrespective of whether it is raining outside, there is a public transport strike or the car is in for repairs (Hassan 69; Baron 215). Virtualisation also involves an abstraction of bodies and physical movements, with the information that represents individual identities or vehicles traversing the virtual spaces comprised of CCTV networks (where viewers never encounter the person or crowd face to face), rail ticketing systems and road management systems (x e-Tag passed by this tag reader, y camera logged a specific vehicle onto a database using automated number-plate recognition software) (Wood 93; Lyon 253).
Surveillant Cities Pervasive anxiety is a permanent and recurrent feature of urban experience. Often navigated by an urgency to control perceived disorder, both physically and through cultivated dominant theory (early twentieth century gendered discourses to push women back into the private sphere; ethno-racial closure and control in the Black Metropolis of 1940s Chicago), history is punctuated by attempts to dissolve public debate and infringe minority freedoms (Wilson 1991). In the postmodern city unprecedented technological capacity generates a totalizing media vector whose plausible by-product is the perception of an ambient menace (Wark 3). Concurrent faith in technology as a cost-effective mechanism for public management (policing, traffic, planning, revenue generation) has resulted in the emergence of the surveillant city. It is both a social and architectural fabric whose infrastructure is dotted with sensors and whose people assume that they will be monitored by private/public sector entities and directed by interactive traffic management systems – from electronic speed signs and congestion indicators through to rail schedule displays – leveraging data collected through those sensors. The fabric embodies tensions between governance (at its crudest, enforcement of law by police and their surrogates in private security services) and the soft cage of digital governmentality, with people being disciplined through knowledge that they are being watched and that the observation may be shared with others in an official or non-official shaming (Parenti 51; Staples 41). Encounters with a railway station CCTV might thus result in exhibition of the individual in court or on broadcast television, whether in nightly news or in a ‘reality tv’ crime exposé built around ‘most wanted’ footage (Jermyn 109). Misbehaviour by a partner might merely result in scrutiny of mobile phone bills or web browser histories (which illicit content has the partner consumed, which parts of cyberspace have been visited), followed by a visit to the family court. It might instead result in digital vigilantism, with private offences being named and shamed on electronic walls across the global village, such as Facebook. iPhone Auteurism Activists have responded to pervasive surveillance by turning the cameras on ‘the watchers’ in an exercise of ‘sousveillance’ (Bennett 13; Huey 158). That mirroring might involve the meticulous documentation, often using the same geospatial tools deployed by public/private security agents, of the location of closed circuit television cameras and other surveillance devices. One outcome is the production of maps identifying who is watching and where that watching is taking place. As a corollary, people with anxieties about being surveilled, with a taste for street theatre or a receptiveness to a new form of urban adventure have used those maps to traverse cities via routes along which they cannot be identified by cameras, tags and other tools of the panoptic sort, or to simply adopt masks at particular locations. In 2020 can anyone aspire to be a protagonist in V for Vendetta? (iSee) Mirroring might take more visceral forms, with protestors for example increasingly making a practice of capturing images of police and private security services dealing with marches, riots and pickets.
The advent of 3G mobile phones with a still/video image capability and ongoing ‘dematerialisation’ of traditional video cameras (ie progressively cheaper, lighter, more robust, less visible) means that those engaged in political action can document interaction with authority. So can passers-by. That ambient imaging, turning the public gaze on power and thereby potentially redefining the ‘public’ (given that in Australia the community has been embodied by the state and discourse has been mediated by state-sanctioned media), poses challenges for media scholars and exponents of an invigorated civil society in which we are looking together – and looking at each other – rather than bowling alone. One challenge for consumers in construing ambient media is trust. Can we believe what we see, particularly when few audiences have forensic skills and intermediaries such as commercial broadcasters may privilege immediacy (the ‘breaking news’ snippet from participants) over context and verification. Social critics such as Baudelaire and Benjamin exalt the flaneur, the free spirit who gazed on the street, a street that was as much a spectacle as the theatre and as vibrant as the circus. In 2010 the same technologies that empower citizen journalism and foster a succession of velvet revolutions feed flaneurs whose streetwalking doesn’t extend beyond a keyboard and a modem. The US and UK have thus seen emergence of gawker services, with new media entrepreneurs attempting to build sustainable businesses by encouraging fans to report the location of celebrities (and ideally provide images of those encounters) for the delectation of people who are web surfing or receiving a tweet (Burns 24). In the age of ambient cameras, where the media are everywhere and nowhere (and micro-stock photoservices challenge agencies such as Magnum), everyone can join the paparazzi. Anyone can deploy that ambient surveillance to become a stalker. The enthusiasm with which fans publish sightings of celebrities will presumably facilitate attacks on bodies rather than images. Information may want to be free but so, inconveniently, do iconoclasts and practitioners of participatory panopticism (Dodge 431; Dennis 348). Rhetoric about ‘citizen journalism’ has been co-opted by ‘old media’, with national broadcasters and commercial enterprises soliciting still images and video from non-professionals, whether for free or on a commercial basis. It is a world where ‘journalists’ are everywhere and where responsibility resides uncertainly at the editorial desk, able to reject or accept offerings from people with cameras but without the industrial discipline formerly exercised through professional training and adherence to formal codes of practice. It is thus unsurprising that South Australia’s Government, echoed by some peers, has mooted anti-gawker legislation aimed at would-be auteurs who impede emergency services by stopping their cars to take photos of bushfires, road accidents or other disasters. The flipside of that iPhone auteurism is anxiety about the public gaze, expressed through moral panics regarding street photography and sexting. Apart from a handful of exceptions (notably photography in the Sydney Opera House precinct, in the immediate vicinity of defence facilities and in some national parks), Australian law does not prohibit ‘street photography’ which includes photographs or videos of streetscapes or public places. 
Despite periodic assertions that it is a criminal offence to take photographs of people–particularly minors–without permission from an official, parent/guardian or individual there is no general restriction on ambient photography in public spaces. Moral panics about photographs of children (or adults) on beaches or in the street reflect an ambient anxiety in which danger is associated with strangers and strangers are everywhere (Marr 7; Bauman 93). That conceptualisation is one that would delight people who are wholly innocent of Judith Butler or Andrea Dworkin, in which the gaze (ever pervasive, ever powerful) is tantamount to a violation. The reality is more prosaic: most child sex offences involve intimates, rather than the ‘monstrous other’ with the telephoto lens or collection of nastiness on his iPod (Cossins 435; Ingebretsen 190). Recognition of that reality is important in considering moves that would egregiously restrict legitimate photography in public spaces or happy snaps made by doting relatives. An ambient image–unposed, unpremeditated, uncoerced–of an intimate may empower both authors and subjects when little is solid and memory is fleeting. The same caution might usefully be applied in considering alarms about sexting, ie creation using mobile phones (and access by phone or computer monitor) of intimate images of teenagers by teenagers. Australian governments have moved to emulate their US peers, treating such photography as a criminal offence that can be conceptualized as child pornography and addressed through permanent inclusion in sex offender registers. Lifelong stigmatisation is inappropriate in dealing with naïve or brash 12 and 16 year olds who have been exchanging intimate images without an awareness of legal frameworks or an understanding of consequences (Shafron-Perez 432). Cameras may be everywhere among the e-generation but legal knowledge, like the future, is unevenly distributed. Digital Handcuffs Generations prior to 2008 lost themselves in the streets, gaining individuality or personhood by escaping the surveillance inherent in living at home, being observed by neighbours or simply surrounded by colleagues. Streets offered anonymity and autonomy (Simmel 1903), one reason why heterodox sexuality has traditionally been negotiated in parks and other beats and on kerbs where sex workers ply their trade (Dalton 375). Recent decades have seen a privatisation of those public spaces, with urban planning and digital technologies imposing a new governmentality on hitherto ambient ‘deviance’ and on voyeuristic-exhibitionist practice such as heterosexual ‘dogging’ (Bell 387). That governmentality has been enforced through mechanisms such as replacement of traditional public toilets with ‘pods’ that are conveniently maintained by global service providers such as Veolia (the unromantic but profitable rump of former media & sewers conglomerate Vivendi) and function as billboards for advertising groups such as JC Decaux. Faces encountered in the vicinity of the twenty-first century pissoir are thus likely to be those of supermodels selling yoghurt, low interest loans or sportsgear – the same faces sighted at other venues across the nation and across the globe. Visiting ‘the mens’ gives new meaning to the word ambience when you are more likely to encounter Louis Vuitton and a CCTV camera than George Michael. George’s face, or that of Madonna, Barack Obama, Kevin 07 or Homer Simpson, might instead be sighted on the tshirts or hoodies mentioned above. 
George’s music might also be borne on the bodies of people you see in the park, on the street, or in the bus. This is the age of ambient performance, taken out of concert halls and virtualised on iPods, Walkmen and other personal devices, music at the demand of the consumer rather than as rationed by concert managers (Bull 85). The cost of that ambience, liberation of performance from time and space constraints, may be a Weberian disenchantment (Steiner 434). Technology has also removed anonymity by offering digital handcuffs to employees, partners, friends and children. The same mobile phones used in the past to offer excuses or otherwise disguise the bearer’s movement may now be tied to an observer through location services that plot the person’s movement across Google Maps or the geospatial information of similar services. That tracking is an extension into the private realm of the identification we now take for granted when using taxis or logistics services, with corporate Australia for example investing in systems that allow accurate determination of where a shipment is located (on Sydney Harbour Bridge? the loading dock? accompanying the truck driver on unauthorized visits to the pub?) and a forecast of when it will arrive (Monmonier 76). Such technologies are being used on a smaller scale to enforce digital Fordism among the binary proletariat in corporate buildings and campuses, with ‘smart badges’ and biometric gateways logging an individual’s movement across institutional terrain (so many minutes in the conference room, so many minutes in the bathroom or lingering among the faux rainforest near the Vice Chancellery) (Bolt). Bright Lights, Blog City It is a truth universally acknowledged, at least by right-thinking Foucauldians, that modernity is a matter of coercion and anomie as all that is solid melts into air. If we are living in an age of hypersocialisation and hypercapitalism – movies and friends on tap, along with the panoptic sorting by marketers and pervasive scrutiny by both the ‘information state’ and public audiences (the million people or one person reading your blog) that is an inevitable accompaniment of the digital cornucopia–we might ask whether everyone is or should be unhappy. This article began by highlighting traditional responses to the bright lights, brashness and excitement of the big city. One conclusion might be that in 2010 not much has changed. Some people experience ambient information as liberating; others as threatening, productive of physical danger or of a more insidious anomie in which personal identity is blurred by an ineluctable electro-smog. There is disagreement about the professionalism (for which read ethics and inhibitions) of ‘citizen media’ and about a culture in which, as in the 1920s, audiences believe that they ‘own the image’ embodying the celebrity or public malefactor. Digital technologies allow you to navigate through the urban maze and allow officials, marketers or the hostile to track you. Those same technologies allow you to subvert both the governmentality and governance. You are free: Be ambient! References Baron, Naomi. Always On: Language in an Online and Mobile World. New York: Oxford UP, 2008. Bauman, Zygmunt. Liquid Modernity. Oxford: Polity Press, 2000. Bell, David. “Bodies, Technologies, Spaces: On ‘Dogging’.” Sexualities 9.4 (2006): 387-408. Bennett, Colin. The Privacy Advocates: Resisting the Spread of Surveillance. Cambridge: MIT Press, 2008. Berman, Marshall. 
All That Is Solid Melts into Air: The Experience of Modernity. London: Verso, 2001. Bolt, Nate. “The Binary Proletariat.” First Monday 5.5 (2000). 25 Feb 2010 ‹http://131.193.153.231/www/issues/issue5_5/bolt/index.html›. Buck-Morss, Susan. The Dialectics of Seeing: Walter Benjamin and the Arcades Project. Cambridge: MIT Press, 1991. Bull, Michael. Sounding Out the City: Personal Stereos and the Management of Everyday Life. Oxford: Berg, 2003. Bull, Michael. Sound Moves: iPod Culture and the Urban Experience. London: Routledge, 2008 Burns, Kelli. Celeb 2.0: How Social Media Foster Our Fascination with Popular Culture. Santa Barbara: ABC-CLIO, 2009. Castells, Manuel. “The Urban Ideology.” The Castells Reader on Cities and Social Theory. Ed. Ida Susser. Malden: Blackwell, 2002. 34-70. Cossins, Anne, Jane Goodman-Delahunty, and Kate O’Brien. “Uncertainty and Misconceptions about Child Sexual Abuse: Implications for the Criminal Justice System.” Psychiatry, Psychology and the Law 16.4 (2009): 435-452. Dalton, David. “Policing Outlawed Desire: ‘Homocriminality’ in Beat Spaces in Australia.” Law & Critique 18.3 (2007): 375-405. De Certeau, Michel. The Practice of Everyday Life. Berkeley: University of California P, 1984. Dennis, Kingsley. “Keeping a Close Watch: The Rise of Self-Surveillance and the Threat of Digital Exposure.” The Sociological Review 56.3 (2008): 347-357. Dodge, Martin, and Rob Kitchin. “Outlines of a World Coming into Existence: Pervasive Computing and the Ethics of Forgetting.” Environment & Planning B: Planning & Design 34.3 (2007): 431-445. Doel, Marcus, and David Clarke. “Transpolitical Urbanism: Suburban Anomaly and Ambient Fear.” Space & Culture 1.2 (1998): 13-36. Dyer-Witheford, Nick. Cyber-Marx: Cycles and Circuits of Struggle in High Technology Capitalism. Champaign: U of Illinois P, 1999. Fritzsche, Peter. Reading Berlin 1900. Cambridge: Harvard UP, 1998. Gumpert, Gary, and Susan Drucker. “Privacy, Predictability or Serendipity and Digital Cities.” Digital Cities II: Computational and Sociological Approaches. Berlin: Springer, 2002. 26-40. Hassan, Robert. The Information Society. Cambridge: Polity Press, 2008. Hillier, Bill. “Cities as Movement Economies.” Intelligent Environments: Spatial Aspects of the Information Revolution. Ed. Peter Drioege. Amsterdam: Elsevier, 1997. 295-342. Holmes, David. “Cybercommuting on an Information Superhighway: The Case of Melbourne’s CityLink.” The Cybercities Reader. Ed. Stephen Graham. London: Routledge, 2004. 173-178. Huey, Laura, Kevin Walby, and Aaron Doyle. “Cop Watching in the Downtown Eastside: Exploring the Use of CounterSurveillance as a Tool of Resistance.” Surveillance and Security: Technological Politics and Power in Everyday Life. Ed. Torin Monahan. London: Routledge, 2006. 149-166. Ingebretsen, Edward. At Stake: Monsters and the Rhetoric of Fear in Public Culture. Chicago: U of Chicago P, 2001. iSee. “Now More Than Ever”. 20 Feb 2010 ‹http://www.appliedautonomy.com/isee/info.html›. Jackson, Margaret, and Julian Ligertwood. "Identity Management: Is an Identity Card the Solution for Australia?” Prometheus 24.4 (2006): 379-387. Jermyn, Deborah. Crime Watching: Investigating Real Crime TV. London: IB Tauris, 2007. Kullenberg, Christopher. “The Social Impact of IT: Surveillance and Resistance in Present-Day Conflicts.” FlfF-Kommunikation 1 (2009): 37-40. Lyon, David. Surveillance as Social Sorting: Privacy, Risk and Digital Discrimination. London: Routledge, 2003. Marr, David. The Henson Case. Melbourne: Text, 2008. 
Maynard, Margaret. Dress and Globalisation. Manchester: Manchester UP, 2004. Merchant, Carolyn. The Columbia Guide to American Environmental History. New York: Columbia UP, 2002. Monmonier, Mark. “Geolocation and Locational Privacy: The ‘Inside’ Story on Geospatial Tracking.” Privacy and Technologies of Identity: A Cross-disciplinary Conversation. Ed. Katherine Strandburg and Daniela Raicu. Berlin: Springer, 2006. 75-92. Ndalianis, Angela. “Architecture of the Senses: Neo-Baroque Entertainment Spectacles.” Rethinking Media Change: The Aesthetics of Tradition. Ed. David Thorburn and Henry Jenkins. Cambridge: MIT Press, 2004. 355-374. Parenti, Christian. The Soft Cage: Surveillance in America. New York: Basic Books, 2003. Sayre, Shay. “T-shirt Messages: Fortune or Folly for Advertisers.” Advertising and Popular Culture: Studies in Variety and Versatility. Ed. Sammy Danna. New York: Popular Press, 1992. 73-82. Savitch, Henry. Cities in a Time of Terror: Space, Territory and Local Resilience. Armonk: Sharpe, 2008. Scheingold, Stuart. The Politics of Street Crime: Criminal Process and Cultural Obsession. Philadelphia: Temple UP, 1992. Schivelbusch, Wolfgang. Disenchanted Night: The Industrialization of Light in the Nineteenth Century. Berkeley: U of California Press, 1995. Shafron-Perez, Sharon. “Average Teenager or Sex Offender: Solutions to the Legal Dilemma Caused by Sexting.” John Marshall Journal of Computer & Information Law 26.3 (2009): 431-487. Simmel, Georg. “The Metropolis and Mental Life.” Individuality and Social Forms. Ed. Donald Levine. Chicago: University of Chicago P, 1971. Staples, William. Everyday Surveillance: Vigilance and Visibility in Postmodern Life. Lanham: Rowman & Littlefield, 2000. Steiner, George. George Steiner: A Reader. New York: Oxford UP, 1987. Thompson, Emily. The Soundscape of Modernity: Architectural Acoustics and the Culture of Listening in America. Cambridge: The MIT Press, 2004. Wark, McKenzie. Virtual Geography: Living with Global Media Events. Bloomington: Indiana UP, 1994. Wilson, Elizabeth. The Sphinx in the City: Urban Life, the Control of Disorder and Women. Berkeley: University of California P, 1991. Wood, David. “Towards Spatial Protocol: The Topologies of the Pervasive Surveillance Society.” Augmenting Urban Spaces: Articulating the Physical and Electronic City. Eds. Alessandro Aurigi and Fiorella de Cindio. Aldershot: Ashgate, 2008. 93-106.
18

Mac Con Iomaire, Máirtín. "Coffee Culture in Dublin: A Brief History". M/C Journal 15, no. 2 (2 May 2012). http://dx.doi.org/10.5204/mcj.456.

Full text
Abstract
IntroductionIn the year 2000, a group of likeminded individuals got together and convened the first annual World Barista Championship in Monte Carlo. With twelve competitors from around the globe, each competitor was judged by seven judges: one head judge who oversaw the process, two technical judges who assessed technical skills, and four sensory judges who evaluated the taste and appearance of the espresso drinks. Competitors had fifteen minutes to serve four espresso coffees, four cappuccino coffees, and four “signature” drinks that they had devised using one shot of espresso and other ingredients of their choice, but no alcohol. The competitors were also assessed on their overall barista skills, their creativity, and their ability to perform under pressure and impress the judges with their knowledge of coffee. This competition has grown to the extent that eleven years later, in 2011, 54 countries held national barista championships with the winner from each country competing for the highly coveted position of World Barista Champion. That year, Alejandro Mendez from El Salvador became the first world champion from a coffee producing nation. Champion baristas are more likely to come from coffee consuming countries than they are from coffee producing countries as countries that produce coffee seldom have a culture of espresso coffee consumption. While Ireland is not a coffee-producing nation, the Irish are the highest per capita consumers of tea in the world (Mac Con Iomaire, “Ireland”). Despite this, in 2008, Stephen Morrissey from Ireland overcame 50 other national champions to become the 2008 World Barista Champion (see, http://vimeo.com/2254130). Another Irish national champion, Colin Harmon, came fourth in this competition in both 2009 and 2010. This paper discusses the history and development of coffee and coffee houses in Dublin from the 17th century, charting how coffee culture in Dublin appeared, evolved, and stagnated before re-emerging at the beginning of the 21st century, with a remarkable win in the World Barista Championships. The historical links between coffeehouses and media—ranging from print media to electronic and social media—are discussed. In this, the coffee house acts as an informal public gathering space, what urban sociologist Ray Oldenburg calls a “third place,” neither work nor home. These “third places” provide anchors for community life and facilitate and foster broader, more creative interaction (Oldenburg). This paper will also show how competition from other “third places” such as clubs, hotels, restaurants, and bars have affected the vibrancy of coffee houses. Early Coffee Houses The first coffee house was established in Constantinople in 1554 (Tannahill 252; Huetz de Lemps 387). The first English coffee houses opened in Oxford in 1650 and in London in 1652. Coffee houses multiplied thereafter but, in 1676, when some London coffee houses became hotbeds for political protest, the city prosecutor decided to close them. The ban was soon lifted and between 1680 and 1730 Londoners discovered the pleasure of drinking coffee (Huetz de Lemps 388), although these coffee houses sold a number of hot drinks including tea and chocolate as well as coffee.The first French coffee houses opened in Marseille in 1671 and in Paris the following year. Coffee houses proliferated during the 18th century: by 1720 there were 380 public cafés in Paris and by the end of the century there were 600 (Huetz de Lemps 387). 
Café Procope opened in Paris in 1674 and, in the 18th century, became a literary salon with regular patrons: Voltaire, Rousseau, Diderot and Condorcet (Huetz de Lemps 387; Pitte 472). In England, coffee houses developed into exclusive clubs such as Crockford’s and the Reform, whilst elsewhere in Europe they evolved into what we identify as cafés, similar to the tea shops that would open in England in the late 19th century (Tannahill 252-53). Tea quickly displaced coffee in popularity in British coffee houses (Taylor 142). Pettigrew suggests two reasons why Great Britain became a tea-drinking nation while most of the rest of Europe took to coffee (48). The first was the power of the East India Company, chartered by Elizabeth I in 1600, which controlled the world’s biggest tea monopoly and promoted the beverage enthusiastically. The second was the difficulty England had in securing coffee from the Levant while at war with France at the end of the seventeenth century and again during the War of the Spanish Succession (1702-13). Tea also became the dominant beverage in Ireland and over a period of time became the staple beverage of the whole country. In 1835, Samuel Bewley and his son Charles dared to break the monopoly of The East India Company by importing over 2,000 chests of tea directly from Canton, China, to Ireland. His family would later become synonymous with the importation of coffee and with opening cafés in Ireland (see, Farmar for full history of the Bewley's and their activities). Ireland remains the highest per-capita consumer of tea in the world. Coffee houses have long been linked with social and political change (Kennedy, Politicks; Pincus). The notion that these new non-alcoholic drinks were responsible for the Enlightenment because people could now gather socially without getting drunk is rejected by Wheaton as frivolous, since there had always been alternatives to strong drink, and European civilisation had achieved much in the previous centuries (91). She comments additionally that cafés, as gathering places for dissenters, took over the role that taverns had long played. Pennell and Vickery support this argument adding that by offering a choice of drinks, and often sweets, at a fixed price and in a more civilized setting than most taverns provided, coffee houses and cafés were part of the rise of the modern restaurant. It is believed that, by 1700, the commercial provision of food and drink constituted the second largest occupational sector in London. Travellers’ accounts are full of descriptions of London taverns, pie shops, coffee, bun and chop houses, breakfast huts, and food hawkers (Pennell; Vickery). Dublin Coffee Houses and Later incarnations The earliest reference to coffee houses in Dublin is to the Cock Coffee House in Cook Street during the reign of Charles II (1660-85). Public dining or drinking establishments listed in the 1738 Dublin Directory include taverns, eating houses, chop houses, coffee houses, and one chocolate house in Fownes Court run by Peter Bardin (Hardiman and Kennedy 157). During the second half of the 17th century, Dublin’s merchant classes transferred allegiance from taverns to the newly fashionable coffee houses as places to conduct business. By 1698, the fashion had spread to country towns with coffee houses found in Cork, Limerick, Kilkenny, Clonmel, Wexford, and Galway, and slightly later in Belfast and Waterford in the 18th century. 
Maxwell lists some of Dublin’s leading coffee houses and taverns, noting their clientele: There were Lucas’s Coffee House, on Cork Hill (the scene of many duels), frequented by fashionable young men; the Phoenix, in Werburgh Street, where political dinners were held; Dick’s Coffee House, in Skinner’s Row, much patronized by literary men, for it was over a bookseller’s; the Eagle, in Eustace Street, where meetings of the Volunteers were held; the Old Sot’s Hole, near Essex Bridge, famous for its beefsteaks and ale; the Eagle Tavern, on Cork Hill, which was demolished at the same time as Lucas’s to make room for the Royal Exchange; and many others. (76) Many of the early taverns were situated around the Winetavern Street, Cook Street, and Fishamble Street area. (see Fig. 1) Taverns, and later coffee houses, became meeting places for gentlemen and centres for debate and the exchange of ideas. In 1706, Francis Dickson published the Flying Post newspaper at the Four Courts coffee house in Winetavern Street. The Bear Tavern (1725) and the Black Lyon (1735), where a Masonic Lodge assembled every Wednesday, were also located on this street (Gilbert v.1 160). Dick’s Coffee House was established in the late 17th century by bookseller and newspaper proprietor Richard Pue, and remained open until 1780 when the building was demolished. In 1740, Dick’s customers were described thus: Ye citizens, gentlemen, lawyers and squires, who summer and winter surround our great fires, ye quidnuncs! who frequently come into Pue’s, To live upon politicks, coffee, and news. (Gilbert v.1 174) There has long been an association between coffeehouses and publishing books, pamphlets and particularly newspapers. Other Dublin publishers and newspapermen who owned coffee houses included Richard Norris and Thomas Bacon. Until the 1850s, newspapers were burdened with a number of taxes: on the newsprint, a stamp duty, and on each advertisement. By 1865, these taxes had virtually disappeared, resulting in the appearance of 30 new newspapers in Ireland, 24 of them in Dublin. Most people read from copies which were available free of charge in taverns, clubs, and coffee houses (MacGiolla Phadraig). Coffee houses also kept copies of international newspapers. On 4 May 1706, Francis Dickson notes in the Dublin Intelligence that he held the Paris and London Gazettes, Leyden Gazette and Slip, the Paris and Hague Lettres à la Main, Daily Courant, Post-man, Flying Post, Post-script and Manuscripts in his coffeehouse in Winetavern Street (Kennedy, “Dublin”). Henry Berry’s analysis of shop signs in Dublin identifies 24 different coffee houses in Dublin, with the main clusters in Essex Street near the Custom House (Cocoa Tree, Bacon’s, Dempster’s, Dublin, Merchant’s, Norris’s, and Walsh’s), Cork Hill (Lucas’s, St Lawrence’s, and Solyman’s), Skinners’ Row (Bow’s, Darby’s, and Dick’s), Christ Church Yard (Four Courts, and London), College Green (Jack’s, and Parliament), and Crampton Court (Exchange, and Little Dublin). (see Figure 1, below, for these clusters and the locations of other Dublin coffee houses.) The earliest to be referenced is the Cock Coffee House in Cook Street during the reign of Charles II (1660-85), with Solyman’s (1691), Bow’s (1692), and Patt’s on High Street (1699), all mentioned in print before the 18th century. The name of one, the Cocoa Tree, suggests that chocolate was also served in this coffee house.
More evidence of the variety of beverages sold in coffee houses comes from Gilbert who notes that in 1730, one Dublin poet wrote of George Carterwright’s wife at The Custom House Coffee House on Essex Street: Her coffee’s fresh and fresh her tea, Sweet her cream, ptizan, and whea, her drams, of ev’ry sort, we find both good and pleasant, in their kind. (v. 2 161) Figure 1: Map of Dublin indicating Coffee House clusters 1 = Sackville St.; 2 = Winetavern St.; 3 = Essex St.; 4 = Cork Hill; 5 = Skinner's Row; 6 = College Green.; 7 = Christ Church Yard; 8 = Crampton Court.; 9 = Cook St.; 10 = High St.; 11 = Eustace St.; 12 = Werburgh St.; 13 = Fishamble St.; 14 = Westmorland St.; 15 = South Great George's St.; 16 = Grafton St.; 17 = Kildare St.; 18 = Dame St.; 19 = Anglesea Row; 20 = Foster Place; 21 = Poolbeg St.; 22 = Fleet St.; 23 = Burgh Quay. A = Cafe de Paris, Lincoln Place; B = Red Bank Restaurant, D'Olier St.; C = Morrison's Hotel, Nassau St.; D = Shelbourne Hotel, St. Stephen's Green; E = Jury's Hotel, Dame St. Some coffee houses transformed into the gentlemen’s clubs that appeared in London, Paris and Dublin in the 17th century. These clubs originally met in coffee houses, then taverns, until later proprietary clubs became fashionable. Dublin anticipated London in club fashions with members of the Kildare Street Club (1782) and the Sackville Street Club (1794) owning the premises of their clubhouse, thus dispensing with the proprietor. The first London club to be owned by the members seems to be Arthur’s, founded in 1811 (McDowell 4) and this practice became widespread throughout the 19th century in both London and Dublin. The origin of one of Dublin’s most famous clubs, Daly’s Club, was a chocolate house opened by Patrick Daly in c.1762–65 in premises at 2–3 Dame Street (Brooke). It prospered sufficiently to commission its own granite-faced building on College Green between Anglesea Street and Foster Place which opened in 1789 (Liddy 51). Daly’s Club, “where half the land of Ireland has changed hands”, was renowned for the gambling that took place there (Montgomery 39). Daly’s sumptuous palace catered very well (and discreetly) for honourable Members of Parliament and rich “bucks” alike (Craig 222). The changing political and social landscape following the Act of Union led to Daly’s slow demise and its eventual closure in 1823 (Liddy 51). Coincidentally, the first Starbucks in Ireland opened in 2005 in the same location. Once gentlemen’s clubs had designated buildings where members could eat, drink, socialise, and stay overnight, taverns and coffee houses faced competition from the best Dublin hotels which also had coffee rooms “in which gentlemen could read papers, write letters, take coffee and wine in the evening—an exiguous substitute for a club” (McDowell 17). There were at least 15 establishments in Dublin city claiming to be hotels by 1789 (Corr 1) and their numbers grew in the 19th century, an expansion which was particularly influenced by the growth of railways. By 1790, Dublin’s public houses (“pubs”) outnumbered its coffee houses with Dublin boasting 1,300 (Rooney 132). Names like the Goose and Gridiron, Harp and Crown, Horseshoe and Magpie, and Hen and Chickens—fashionable during the 17th and 18th centuries in Ireland—hung on decorative signs for those who could not read. Throughout the 20th century, the public house provided the dominant “third place” in Irish society, and the drink of choice for its predominantly male customers was a frothy pint of Guinness.
Newspapers were available in public houses and many newspapermen had their own favourite hostelries such as Mulligan’s of Poolbeg Street; The Pearl, and The Palace on Fleet Street; and The White Horse Inn on Burgh Quay. Any coffee served in these establishments prior to the arrival of the new coffee culture in the 21st century was, however, of the powdered instant variety. Hotels / Restaurants with Coffee Rooms From the mid-19th century, the public dining landscape of Dublin changed in line with London and other large cities in the United Kingdom. Restaurants did appear gradually in the United Kingdom and research suggests that one possible reason for this growth from the 1860s onwards was the Refreshment Houses and Wine Licences Act (1860). The object of this act was to “reunite the business of eating and drinking”, thereby encouraging public sobriety (Mac Con Iomaire, “Emergence” v.2 95). Advertisements for Dublin restaurants appeared in The Irish Times from the 1860s. Thom’s Directory includes listings for Dining Rooms from the 1870s and Refreshment Rooms are listed from the 1880s. This pattern continued until 1909, when Thom’s Directory first includes a listing for “Restaurants and Tea Rooms”. Some of the establishments that advertised separate coffee rooms include Dublin’s first French restaurant, the Café de Paris, The Red Bank Restaurant, Morrison’s Hotel, Shelbourne Hotel, and Jury’s Hotel (see Fig. 1). The pattern of separate ladies’ coffee rooms emerged in Dublin and London during the latter half of the 19th century and mixed sex dining only became popular around the last decade of the 19th century, partly influenced by Cesar Ritz and Auguste Escoffier (Mac Con Iomaire, “Public Dining”). Irish Cafés: From Bewley’s to Starbucks A number of cafés appeared at the beginning of the 20th century, most notably Robert Roberts and Bewley’s, both of which were owned by Quaker families. Ernest Bewley took over the running of the Bewley’s importation business in the 1890s and opened a number of Oriental Cafés: South Great Georges Street (1894), Westmoreland Street (1896), and what became the landmark Bewley’s Oriental Café in Grafton Street (1927). Drawing influence from the grand cafés of Paris and Vienna, oriental tearooms, and Egyptian architecture (inspired by the discovery in 1922 of Tutankhamen’s Tomb), the Grafton Street business brought a touch of the exotic into the newly formed Irish Free State. Bewley’s cafés became the haunt of many of Ireland’s leading literary figures, including Samuel Beckett, Sean O’Casey, and James Joyce who mentioned the café in his book, Dubliners. A full history of Bewley’s is available (Farmar). It is important to note, however, that pots of tea were sold in equal measure to mugs of coffee in Bewley’s. The cafés changed over time from waitress- to self-service and a failure to adapt to changing fashions led to the business being sold, with only the flagship café in Grafton Street remaining open in a revised capacity. It was not until the beginning of the 21st century that a new wave of coffee house culture swept Ireland. This was based around speciality coffee beverages such as espressos, cappuccinos, lattés, macchiatos, and frappuccinos. This new phenomenon coincided with the unprecedented growth in the Irish economy, during which Ireland became known as the “Celtic Tiger” (Murphy 3). One aspect of this period was a building boom and a subsequent growth in apartment living in the Dublin city centre.
The American sitcom Friends and its fictional coffee house, “Central Perk,” may also have helped popularise the use of coffee houses as “third spaces” (Oldenburg) among young apartment dwellers in Dublin. This was also the era of the “dotcom boom” when many young entrepreneurs, software designers, webmasters, and stock market investors were using coffee houses as meeting places for business and also as ad hoc office spaces. This trend is very similar to the situation in the 17th and early 18th centuries where coffeehouses became known as sites for business dealings. Various theories explaining the growth of the new café culture have circulated, with reasons ranging from a growth in Eastern European migrants, anti-smoking legislation, and returning sophisticated Irish emigrants, to increased affluence (Fenton). Dublin pubs, facing competition from the new coffee culture, began installing espresso coffee machines made by companies such as Gaggia to attract customers more interested in a good latté than a lager, and it is within this context that Irish baristas gained such success in the World Barista competition. In 2001 the Georges Street branch of Bewley’s was taken over by a chain called Café, Bar, Deli specialising in serving good food at reasonable prices. Many ex-Bewley’s staff members subsequently opened their own businesses, roasting coffee and running cafés. Irish-owned coffee chains such as Java Republic, Insomnia, and O’Brien’s Sandwich Bars continued to thrive despite the competition from coffee chains Starbucks and Costa Café. Indeed, so successful was the handmade Irish sandwich and coffee business that, before the economic downturn affected its business, Irish franchise O’Brien’s operated in over 18 countries. The Café, Bar, Deli group had also begun to franchise its operations in 2008 when it too became a victim of the global economic downturn. With the growth of the Internet, many newspapers have experienced falling sales of their printed format and rising uptake of their electronic versions. Most Dublin coffee houses today provide wireless Internet connections so their customers can read not only the local newspapers online, but also others from all over the globe, similar to Francis Dickson’s coffee house in Winetavern Street in the early 18th century. Dublin has become Europe’s Silicon Valley, housing the European headquarters for companies such as Google, Yahoo, eBay, PayPal, and Facebook. There are currently plans to provide free wireless connectivity throughout Dublin’s city centre in order to promote e-commerce; however, some coffee houses shut off the wireless Internet in their establishments at certain times of the week in order to promote more social interaction to ensure that these “third places” remain “great good places” at the heart of the community (Oldenburg). Conclusion Ireland is not a country that is normally associated with a coffee culture but coffee houses have been part of the fabric of that country since they emerged in Dublin in the 17th century. These Dublin coffee houses prospered in the 18th century, and survived strong competition from clubs and hotels in the 19th century, and from restaurants and public houses into the 20th century. In 2008, when Stephen Morrissey won the coveted title of World Barista Champion, Ireland’s place as a coffee consuming country was re-established. The first decade of the 21st century witnessed the birth of a new espresso coffee culture, which shows no signs of weakening despite Ireland’s economic travails.
References Berry, Henry F. “House and Shop Signs in Dublin in the Seventeenth and Eighteenth Centuries.” The Journal of the Royal Society of Antiquaries of Ireland 40.2 (1910): 81–98. Brooke, Raymond Frederick. Daly’s Club and the Kildare Street Club, Dublin. Dublin, 1930. Corr, Frank. Hotels in Ireland. Dublin: Jemma Publications, 1987. Craig, Maurice. Dublin 1660-1860. Dublin: Allen Figgis, 1980. Farmar, Tony. The Legendary, Lofty, Clattering Café. Dublin: A&A Farmar, 1988. Fenton, Ben. “Cafe Culture taking over in Dublin.” The Telegraph 2 Oct. 2006. 29 Apr. 2012 ‹http://www.telegraph.co.uk/news/uknews/1530308/cafe-culture-taking-over-in-Dublin.html›. Gilbert, John T. A History of the City of Dublin (3 vols.). Dublin: Gill and Macmillan, 1978. Girouard, Mark. Victorian Pubs. New Haven, Conn.: Yale UP, 1984. Hardiman, Nodlaig P., and Máire Kennedy. A Directory of Dublin for the Year 1738 Compiled from the Most Authentic of Sources. Dublin: Dublin Corporation Public Libraries, 2000. Huetz de Lemps, Alain. “Colonial Beverages and Consumption of Sugar.” Food: A Culinary History from Antiquity to the Present. Eds. Jean-Louis Flandrin and Massimo Montanari. New York: Columbia UP, 1999. 383–93. Kennedy, Máire. “Dublin Coffee Houses.” Ask About Ireland, 2011. 4 Apr. 2012 ‹http://www.askaboutireland.ie/reading-room/history-heritage/pages-in-history/dublin-coffee-houses›. ----- “‘Politicks, Coffee and News’: The Dublin Book Trade in the Eighteenth Century.” Dublin Historical Record LVIII.1 (2005): 76–85. Liddy, Pat. Temple Bar—Dublin: An Illustrated History. Dublin: Temple Bar Properties, 1992. Mac Con Iomaire, Máirtín. “The Emergence, Development, and Influence of French Haute Cuisine on Public Dining in Dublin Restaurants 1900-2000: An Oral History.” Ph.D. thesis, Dublin Institute of Technology, Dublin, 2009. 4 Apr. 2012 ‹http://arrow.dit.ie/tourdoc/12›. ----- “Ireland.” Food Cultures of the World Encylopedia. Ed. Ken Albala. Westport, CT: Greenwood Press, 2010. ----- “Public Dining in Dublin: The History and Evolution of Gastronomy and Commercial Dining 1700-1900.” International Journal of Contemporary Hospitality Management 24. Special Issue: The History of the Commercial Hospitality Industry from Classical Antiquity to the 19th Century (2012): forthcoming. MacGiolla Phadraig, Brian. “Dublin: One Hundred Years Ago.” Dublin Historical Record 23.2/3 (1969): 56–71. Maxwell, Constantia. Dublin under the Georges 1714–1830. Dublin: Gill & Macmillan, 1979. McDowell, R. B. Land & Learning: Two Irish Clubs. Dublin: The Lilliput P, 1993. Montgomery, K. L. “Old Dublin Clubs and Coffee-Houses.” New Ireland Review VI (1896): 39–44. Murphy, Antoine E. “The ‘Celtic Tiger’—An Analysis of Ireland’s Economic Growth Performance.” EUI Working Papers, 2000 29 Apr. 2012 ‹http://www.eui.eu/RSCAS/WP-Texts/00_16.pdf›. Oldenburg, Ray, ed. Celebrating the Third Place: Inspiring Stories About The “Great Good Places” At the Heart of Our Communities. New York: Marlowe & Company 2001. Pennell, Sarah. “‘Great Quantities of Gooseberry Pye and Baked Clod of Beef’: Victualling and Eating out in Early Modern London.” Londinopolis: Essays in the Cultural and Social History of Early Modern London. Eds. Paul Griffiths and Mark S. R. Jenner. Manchester: Manchester UP, 2000. 228–59. Pettigrew, Jane. A Social History of Tea. London: National Trust Enterprises, 2001. Pincus, Steve. “‘Coffee Politicians Does Create’: Coffeehouses and Restoration Political Culture.” The Journal of Modern History 67.4 (1995): 807–34. Pitte, Jean-Robert. 
“The Rise of the Restaurant.” Food: A Culinary History from Antiquity to the Present. Eds. Jean-Louis Flandrin and Massimo Montanari. New York: Columbia UP, 1999. 471–80. Rooney, Brendan, ed. A Time and a Place: Two Centuries of Irish Social Life. Dublin: National Gallery of Ireland, 2006. Tannahill, Reay. Food in History. St Albans, Herts.: Paladin, 1975. Taylor, Laurence. “Coffee: The Bottomless Cup.” The American Dimension: Cultural Myths and Social Realities. Eds. W. Arens and Susan P. Montague. Port Washington, N.Y.: Alfred Publishing, 1976. 14–48. Vickery, Amanda. Behind Closed Doors: At Home in Georgian England. New Haven: Yale UP, 2009. Wheaton, Barbara Ketcham. Savouring the Past: The French Kitchen and Table from 1300-1789. London: Chatto & Windus, Hogarth P, 1983. Williams, Anne. “Historical Attitudes to Women Eating in Restaurants.” Public Eating: Proceedings of the Oxford Symposium on Food and Cookery 1991. Ed. Harlan Walker. Totnes: Prospect Books, 1992. 311–14. World Barista Championship. “History–World Barista Championship”. 2012. 02 Apr. 2012 ‹http://worldbaristachampionship.com2012›. Acknowledgement: A warm thank you to Dr. Kevin Griffin for producing the map of Dublin for this article.
19

Goggin, Gerard. "Broadband". M/C Journal 6, no. 4 (1 August 2003). http://dx.doi.org/10.5204/mcj.2219.

Full text
Abstract
Connecting I’ve moved house on the weekend, closer to the centre of an Australian capital city. I had recently signed up for broadband, with a major Australian Internet company (my first contact, cf. Turner). Now I am the proud owner of a larger modem than I have ever owned: a white cable modem. I gaze out into our new street: two thick black cables cosseted in silver wire. I am relieved. My new home is located in one of those streets, double-cabled by Telstra and Optus in the data-rush of the mid-1990s. Otherwise, I’d be moth-balling the cable modem, and the thrill of my data percolating down coaxial cable. And it would be off to the computer supermarket to buy an ADSL modem, then to pick a provider, to squeeze some twenty-first century connectivity out of old copper (the phone network our grandparents and great-grandparents built). If I still lived in the country, or the outskirts of the city, or anywhere else more than four kilometres from the phone exchange, and somewhere that cable pay TV will never reach, it would be a dish for me — satellite. Our digital lives are premised upon infrastructure, the networks through which we shape what we do, fashion the meanings of our customs and practices, and exchange signs with others. Infrastructure is not simply the material or the technical (Lamberton), but it is the dense, fibrous knotting together of social visions, cultural resources, individual desires, and connections. No more can one easily discern between ‘society’ and ‘technology’, ‘carriage’ and ‘content’, ‘base’ and ‘superstructure’, or ‘infrastructure’ and ‘applications’ (or ‘services’ or ‘content’). To understand telecommunications in action, or the vectors of fibre, we need to consider the long and heterogeneous list of links among different human and non-human actors — the long networks, to take Bruno Latour’s evocative concept, that confect our broadband networks (Latour). The co-ordinates of our infrastructure still build on a century-long history of telecommunications networks, on the nineteenth-century centrality of telegraphy preceding this, and on the histories of the public and private so inscribed. Yet we are in the midst of a long, slow dismantling of the posts-telegraph-telephone (PTT) model of the monopoly carrier for each nation that dominated the twentieth century, with its deep colonial foundations. Instead our New World Information and Communication Order is not the decolonising UNESCO vision of the late 1970s and early 1980s (MacBride, Maitland). Rather it is the neoliberal, free trade, market access model, its symbol the 1984 US judicial decision to require the break-up of AT&T and the UK legislation in the same year that underpinned the Thatcherite twin move to privatize British Telecom and introduce telecommunications competition. Between 1984 and 1999, 110 telecommunications companies were privatized, and the ‘acquisition of privatized PTOs [public telecommunications operators] by European and American operators does follow colonial lines’ (Winseck 396; see also Mody, Bauer & Straubhaar). The competitive market has now been uneasily installed as the paradigm for convergent communications networks, not least with the World Trade Organisation’s 1994 General Agreement on Trade in Services and Annex on Telecommunications. As the citizen is recast as consumer and customer (Goggin, ‘Citizens and Beyond’), we rethink our cultural and political axioms as well as the axes that orient our understandings in this area.
Information might travel close to the speed of light, and we might fantasise about optical fibre to the home (or pillow), but our terrain, our band where the struggle lies today, is narrower than we wish. Begging for broadband, it seems, is a long way from warchalking for WiFi. Policy Circuits The dreary everyday business of getting connected plugs the individual netizen into a tangled mess of policy circuits, as much as tricky network negotiations. Broadband in mid-2003 in Australia is a curious chimera, welded together from a patchwork of technologies, old and newer communications industries, emerging economies and patterns of use. Broadband conjures up grander visions, however, of communication and cultural cornucopia. Broadband is high-speed, high-bandwidth, ‘always-on’, networked communications. People can send and receive video, engage in multimedia exchanges of all sorts, make the most of online education, realise the vision of home-based work and trading, have access to telemedicine, and entertainment. Broadband really entered the lexicon with the mass takeup of the Internet in the early to mid-1990s, and with the debates about something called the ‘information superhighway’. The rise of the Internet, the deregulation of telecommunications, and the involuted convergence of communications and media technologies saw broadband positioned at the centre of policy debates nearly a decade ago. In 1993-1994, Australia had its Broadband Services Expert Group (BSEG), established by the then Labor government. The BSEG was charged with inquiring into ‘issues relating to the delivery of broadband services to homes, schools and businesses’. Stung by criticisms of elite composition (a narrow membership, with only one woman among its twelve members, and no consumer or citizen group representation), the BSEG was prompted into wider public discussion and consultation (Goggin & Newell). The then Bureau of Transport and Communications Economics (BTCE), since transmogrified into the Communications Research Unit of the Department of Communications, Information Technology and the Arts (DCITA), conducted its large-scale Communications Futures Project (BTCE and Luck). The BSEG Final report posed the question starkly: As a society we have choices to make. If we ignore the opportunities we run the risk of being left behind as other countries introduce new services and make themselves more competitive: we will become consumers of other countries’ content, culture and technologies rather than our own. Or we could adopt new technologies at any cost…This report puts forward a different approach, one based on developing a new, user-oriented strategy for communications. The emphasis will be on communication among people... (BSEG v) The BSEG proposed a ‘National Strategy for New Communications Networks’ based on three aspects: education and community access, industry development, and the role of government (BSEG x). Ironically, while the nation, or at least its policy elites, pondered the weighty question of broadband, Australia’s two largest telcos were doing it. The commercial decision of Telstra/Foxtel and Optus Vision, and their various television partners, was to nail their colours (black) to the mast, or rather telegraph pole, and to lay cable in the major capital cities. In fact, they duplicated the infrastructure in cities such as Sydney and Melbourne, then deciding it would not be profitable to cable up even regional centres, let alone small country towns or settlements. 
As Terry Flew and Christina Spurgeon observe: This wasteful duplication contrasted with many other parts of the country that would never have access to this infrastructure, or to the social and economic benefits that it was perceived to deliver. (Flew & Spurgeon 72) The implications of this decision for Australia’s telecommunications and television were profound, but there was little, if any, public input into this. Then Minister Michael Lee was very proud of his anti-siphoning list of programs, such as national sporting events, that would remain on free-to-air television rather than screen on pay, but was unwilling, or unable, to develop policy on broadband and pay TV cable infrastructure (on the ironies of Australia’s television history, see Given’s masterly account). During this period also, it may be remembered, Australia’s Internet was being passed into private hands, with the tendering out of AARNET (see Spurgeon for discussion). No such national strategy on broadband really emerged in the intervening years, nor has the market provided integrated, accessible broadband services. In 1997, landmark telecommunications legislation was enacted that provided a comprehensive framework for competition in telecommunications, as well as consolidating and extending consumer protection, universal service, customer service standards, and other reforms (CLC). Carrier and reseller competition had commenced in 1991, and the 1997 legislation gave it further impetus. Effective competition is now well established in long distance telephone markets, and in mobiles. Rivalrous competition exists in the market for local-call services, though viable alternatives to Telstra’s dominance are still few (Fels). Broadband too is an area where there is symbolic rivalry rather than effective competition. This is most visible in advertised ADSL offerings in large cities, yet most of the infrastructure for these services consists of Telstra’s copper, fixed-line network. Facilities-based duopoly competition exists principally where Telstra/Foxtel and Optus cable networks have been laid, though there are quite a number of ventures underway by regional telcos, power companies, and, most substantial perhaps, the ACT government’s TransACT broadband network. Policymakers and industry have been greatly concerned about what they see as slow takeup of broadband, compared to other countries, and by barriers to broadband competition and access to ‘bottleneck’ facilities (such as Telstra or Optus’s networks) by potential competitors. The government has alternated between trying to talk up broadband benefits and rates of take-up and recognising the real difficulties Australia faces as a large country with a relatively small and dispersed population. In March 2003, Minister Alston directed the ACCC to implement new monitoring and reporting arrangements on competition in the broadband industry. A key site for discussion of these matters has been the competition policy institution, the Australian Competition and Consumer Commission, and its various inquiries, reports, and considerations (consult ACCC’s telecommunications homepage at http://www.accc.gov.au/telco/fs-telecom.htm). Another key site has been the Productivity Commission (http://www.pc.gov.au), while a third is the National Office on the Information Economy (NOIE - http://www.noie.gov.au/projects/access/access/broadband1.htm). Others have questioned whether even the most perfectly competitive market in broadband will actually provide access to citizens and consumers.
A great deal of work on this issue has been undertaken by DCITA, NOIE, the regulators, and industry bodies, not to mention consumer and public interest groups. Since 1997, there have been a number of governmental inquiries undertaken or in progress concerning the takeup of broadband and networked new media (for example, a House of Representatives Wireless Broadband Inquiry), as well as important inquiries into the still most strategically important of Australia’s companies in this area, Telstra. Much of this effort on an ersatz broadband policy has been piecemeal and fragmented. There are fundamental difficulties with the large size of the Australian continent and its harsh terrain, the small size of the Australian market, the number of providers, and the dominant position effectively still held by Telstra, as well as Singtel Optus (Optus’s previous overseas investors included Cable & Wireless and Bell South), and the larger telecommunications and Internet companies (such as Ozemail). Many consumers living in metropolitan Australia still face real difficulties in realising the slogan ‘bandwidth for all’, but the situation in parts of rural Australia is far worse. Satellite ‘broadband’ solutions are available, through Telstra Countrywide or other providers, but these offer limited two-way interactivity. Data can be received at reasonable speeds (though at far lower data rates than how ‘broadband’ used to be defined), but can only be sent at far slower rates (Goggin, Rural Communities Online). The cultural implications of these digital constraints may well be considerable. Computer gamers, for instance, are frustrated by slow return paths. In this light, the final report of the January 2003 Broadband Advisory Group (BAG) is very timely. The BAG report opens with a broadband rhapsody: Broadband communications technologies can deliver substantial economic and social benefits to Australia…As well as producing productivity gains in traditional and new industries, advanced connectivity can enrich community life, particularly in rural and regional areas. It provides the basis for integration of remote communities into national economic, cultural and social life. (BAG 1, 7) Its prescriptions include: Australia will be a world leader in the availability and effective use of broadband...and to capture the economic and social benefits of broadband connectivity...Broadband should be available to all Australians at fair and reasonable prices…Market arrangements should be pro-competitive and encourage investment...The Government should adopt a National Broadband Strategy (BAG 1) And, like its predecessor nine years earlier, the BAG report does make reference to a national broadband strategy aiming to maximise “choice in work and recreation activities available to all Australians independent of location, background, age or interests” (17). However, the idea of a national broadband strategy is not something the BAG really comes to grips with. The final report is keen on encouraging broadband adoption, but not explicit on how barriers to broadband can be addressed. Perhaps this is not surprising given that the membership of the BAG, dominated by representatives of large corporations and senior bureaucrats was even less representative than its BSEG predecessor. Some months after the BAG report, the Federal government did declare a broadband strategy. 
It did so, intriguingly enough, under the rubric of its response to the Regional Telecommunications Inquiry report (Estens), the second inquiry responsible for reassuring citizens nervous about the full-privatisation of Telstra (the first inquiry being Besley). The government’s grand $142.8 million National Broadband Strategy focusses on the ‘broadband needs of regional Australians, in partnership with all levels of government’ (Alston, ‘National Broadband Strategy’). Among other things, the government claims that the Strategy will result in “improved outcomes in terms of services and prices for regional broadband access; [and] the development of national broadband infrastructure assets.” (Alston, ‘National Broadband Strategy’) At the same time, the government announced an overall response to the Estens Inquiry, with specific safeguards for Telstra’s role in regional communications — a preliminary to the full Telstra sale (Alston, ‘Future Proofing’). Less publicised was the government’s further initiative in indigenous telecommunications, complementing its Telecommunications Action Plan for Remote Indigenous Communities (DCITA). Indigenous people, it can be argued, were never really contemplated as citizens with the ken of the universal service policy taken to underpin the twentieth-century government monopoly PTT project. In Australia during the deregulatory and re-regulatory 1990s, there was a great reluctance on the part of Labor and Coalition Federal governments, Telstra and other industry participants, even to research issues of access to and use of telecommunications by indigenous communicators. Telstra, and to a lesser extent Optus (who had purchased AUSSAT as part of their licence arrangements), shrouded the issue of indigenous communications in mystery that policymakers were very reluctant to uncover, let alone systematically address. Then regulator, the Australian Telecommunications Authority (AUSTEL), had raised grave concerns about indigenous telecommunications access in its 1991 Rural Communications inquiry. However, there was no government consideration of, nor research upon, these issues until Alston commissioned a study in 2001 — the basis for the TAPRIC strategy (DCITA). The elision of indigenous telecommunications from mainstream industry and government policy is all the more puzzling, if one considers the extraordinarily varied and significant experiments by indigenous Australians in telecommunications and Internet (not least in the early work of the Tanami community, made famous in media and cultural studies by the writings of anthropologist Eric Michaels). While the government’s mid-2003 moves on a ‘National Broadband Strategy’ attend to some details of the broadband predicament, they fall well short of an integrated framework that grasps the shortcomings of the neoliberal communications model. The funding offered is a token amount. The view from the seat of government is a glance from the rear-view mirror: taking a snapshot of rural communications in the years 2000-2002 and projecting this tableau into a safety-net ‘future proofing’ for the inevitable turning away of a fully-privately-owned Telstra from its previously universal, ‘carrier of last resort’ responsibilities. In this aetiolated, residualist policy gaze, citizens remain constructed as consumers in a very narrow sense in this incremental, quietist version of state securing of market arrangements. 
What is missing is any more expansive notion of citizens, their varied needs, expectations, uses, and cultural imaginings of ‘always on’ broadband networks. Hybrid Networks “Most people on earth will eventually have access to networks that are all switched, interactive, and broadband”, wrote Frances Cairncross in 1998. ‘Eventually’ is a very appropriate word to describe the parlous state of broadband technology implementation. Broadband is in a slow state of evolution and invention. The story of broadband so far underscores the predicament for Australian access to bandwidth, when we lack any comprehensive, integrated, effective, and fair policy in communications and information technology. We have only begun to experiment with broadband technologies and understand their evolving uses, cultural forms, and the sense in which they rework us as subjects. Our communications networks are not superhighways, to invoke an enduring artefact from an older technology. Nor any longer are they a single ‘public’ switched telecommunications network, like those presided over by the post-telegraph-telephone monopolies of old. Like roads themselves, or the nascent postal system of the sixteenth century, broadband is a patchwork quilt. The ‘fibre’ of our communications networks is hybrid. To be sure, powerful corporations dominate, like the Tassis or Taxis who served as postmasters to the Habsburg emperors (Briggs & Burke 25). Activating broadband today provides a perspective on the path dependency of technology history, and how we can open up new threads of a communications fabric. Our options for transforming our multitudinous networked lives emerge as much from everyday tactics and strategies as they do from grander schemes and unifying policies. We may care to reflect on the waning potential for nation-building technology, in the wake of globalisation. We no longer gather our imagined community around a Community Telephone Plan as it was called in 1960 (Barr, Moyal, and PMG). Yet we do require national and international strategies to get and stay connected (Barr), ideas and funding that concretely address the wider dimensions of access and use. We do need to debate the respective roles of Telstra, the state, community initiatives, and industry competition in fair telecommunications futures. Networks have global reach and require global and national integration. Here vision, co-ordination, and resources are urgently required for our commonweal and moral fibre. To feel the width of the band we desire, we need to plug into and activate the policy circuits. Thanks to Grayson Cooke, Patrick Lichty, Ned Rossiter, John Pace, and an anonymous reviewer for helpful comments. Works Cited Alston, Richard. ‘ “Future Proofing” Regional Communications.’ Department of Communications, Information Technology and the Arts, Canberra, 2003. 17 July 2003 <http://www.dcita.gov.au/Article/0,,0_1-2_3-4_115485,00.php> —. ‘A National Broadband Strategy.’ Department of Communications, Information Technology and the Arts, Canberra, 2003. 17 July 2003 <http://www.dcita.gov.au/Article/0,,0_1-2_3-4_115486,00.php>. Australian Competition and Consumer Commission (ACCC). Broadband Services Report March 2003. Canberra: ACCC, 2003. 17 July 2003 <http://www.accc.gov.au/telco/fs-telecom.htm>. —. Emerging Market Structures in the Communications Sector. Canberra: ACCC, 2003. 15 July 2003 <http://www.accc.gov.au/pubs/publications/utilities/telecommu... ...nications/Emerg_mar_struc.doc>. Barr, Trevor. 
new media.com: The Changing Face of Australia’s Media and Telecommunications. Sydney: Allen & Unwin, 2000. Besley, Tim (Telecommunications Service Inquiry). Connecting Australia: Telecommunications Service Inquiry. Canberra: Department of Information, Communications and the Arts, 2000. 17 July 2003 <http://www.telinquiry.gov.au/final_report.php>. Briggs, Asa, and Burke, Peter. A Social History of the Internet: From Gutenberg to the Internet. Cambridge: Polity, 2002. Broadband Advisory Group. Australia’s Broadband Connectivity: The Broadband Advisory Group’s Report to Government. Melbourne: National Office on the Information Economy, 2003. 15 July 2003 <http://www.noie.gov.au/publications/NOIE/BAG/report/index.htm>. Broadband Services Expert Group. Networking Australia’s Future: Final Report. Canberra: Australian Government Publishing Service (AGPS), 1994. Bureau of Transport and Communications Economics (BTCE). Communications Futures Final Project. Canberra: AGPS, 1994. Cairncross, Frances. The Death of Distance: How the Communications Revolution Will Change Our Lives. London: Orion Business Books, 1997. Communications Law Centre (CLC). Australian Telecommunications Regulation: The Communications Law Centre Guide. 2nd edition. Sydney: Communications Law Centre, University of NSW, 2001. Department of Communications, Information Technology and the Arts (DCITA). Telecommunications Action Plan for Remote Indigenous Communities: Report on the Strategic Study for Improving Telecommunications in Remote Indigenous Communities. Canberra: DCITA, 2002. Estens, D. Connecting Regional Australia: The Report of the Regional Telecommunications Inquiry. Canberra: DCITA, 2002. <http://www.telinquiry.gov.au/rti-report.php>, accessed 17 July 2003. Fels, Alan. ‘Competition in Telecommunications’, speech to Australian Telecommunications Users Group 19th Annual Conference. 6 March, 2003, Sydney. <http://www.accc.gov.au/speeches/2003/Fels_ATUG_6March03.doc>, accessed 15 July 2003. Flew, Terry, and Spurgeon, Christina. ‘Television After Broadcasting’. In The Australian TV Book. Ed. Graeme Turner and Stuart Cunningham. Allen & Unwin, Sydney. 69-85. 2000. Given, Jock. Turning Off the Television. Sydney: UNSW Press, 2003. Goggin, Gerard. ‘Citizens and Beyond: Universal service in the Twilight of the Nation-State.’ In All Connected?: Universal Service in Telecommunications, ed. Bruce Langtry. Melbourne: University of Melbourne Press, 1998. 49-77 —. Rural Communities Online: Networking to link Consumers to Providers. Melbourne: Telstra Consumer Consultative Council, 2003. Goggin, Gerard, and Newell, Christopher. Digital Disability: The Social Construction of Disability in New Media. Lanham, MD: Rowman & Littlefield, 2003. House of Representatives Standing Committee on Communications, Information Technology and the Arts (HoR). Connecting Australia!: Wireless Broadband. Report of Inquiry into Wireless Broadband Technologies. Canberra: Parliament House, 2002. <http://www.aph.gov.au/house/committee/cita/Wbt/report.htm>, accessed 17 July 2003. Lamberton, Don. ‘A Telecommunications Infrastructure is Not an Information Infrastructure’. Prometheus: Journal of Issues in Technological Change, Innovation, Information Economics, Communication and Science Policy 14 (1996): 31-38. Latour, Bruno. Science in Action: How to Follow Scientists and Engineers Through Society. Cambridge, MA: Harvard University Press, 1987. Luck, David. 
‘Revisiting the Future: Assessing the 1994 BTCE communications futures project.’ Media International Australia 96 (2000): 109-119. MacBride, Sean (Chair of International Commission for the Study of Communication Problems). Many Voices, One World: Towards a New More Just and More Efficient World Information and Communication Order. London: Kogan Page; Paris: UNESCO, 1980. Maitland Commission (Independent Commission on Worldwide Telecommunications Development). The Missing Link. Geneva: International Telecommunications Union, 1985. Michaels, Eric. Bad Aboriginal Art: Tradition, Media, and Technological Horizons. Sydney: Allen & Unwin, 1994. Mody, Bella, Bauer, Johannes M., and Straubhaar, Joseph D., eds. Telecommunications Politics: Ownership and Control of the Information Highway in Developing Countries. Mahwah, NJ: Erlbaum, 1995. Moyal, Ann. Clear Across Australia: A History of Telecommunications. Melbourne: Thomas Nelson, 1984. Postmaster-General’s Department (PMG). Community Telephone Plan for Australia. Melbourne: PMG, 1960. Productivity Commission (PC). Telecommunications Competition Regulation: Inquiry Report. Report No. 16. Melbourne: Productivity Commission, 2001. <http://www.pc.gov.au/inquiry/telecommunications/finalreport/>, accessed 17 July 2003. Spurgeon, Christina. ‘National Culture, Communications and the Information Economy.’ Media International Australia 87 (1998): 23-34. Turner, Graeme. ‘First Contact: coming to terms with the cable guy.’ UTS Review 3 (1997): 109-21. Winseck, Dwayne. ‘Wired Cities and Transnational Communications: New Forms of Governance for Telecommunications and the New Media’. In The Handbook of New Media: Social Shaping and Consequences of ICTs, ed. Leah A. Lievrouw and Sonia Livingstone. London: Sage, 2002. 393-409. World Trade Organisation. General Agreement on Trade in Services: Annex on Telecommunications. Geneva: World Trade Organisation, 1994. 17 July 2003 <http://www.wto.org/english/tratop_e/serv_e/12-tel_e.htm>. —. Fourth protocol to the General Agreement on Trade in Services. Geneva: World Trade Organisation. 17 July 2003 <http://www.wto.org/english/tratop_e/serv_e/4prote_e.htm>.
20

Rossiter, Ned. "Creative Industries and the Limits of Critique from". M/C Journal 6, no. 3 (1 June 2003). http://dx.doi.org/10.5204/mcj.2208.

Full text
Abstract
‘Every space has become ad space’. Steve Hayden, Wired Magazine, May 2003. Marshall McLuhan’s (1964) dictum that media technologies constitute a sensory extension of the body shares a conceptual affinity with Ernst Jünger’s notion of ‘“organic construction” [which] indicates [a] synergy between man and machine’ and Walter Benjamin’s exploration of the mimetic correspondence between the organic and the inorganic, between human and non-human forms (Bolz, 2002: 19). The logo or brand is co-extensive with various media of communication – billboards, TV advertisements, fashion labels, book spines, mobile phones, etc. Often the logo is interchangeable with the product itself or a way or life. Since all social relations are mediated, whether by communications technologies or architectonic forms ranging from corporate buildings to sporting grounds to family living rooms, it follows that there can be no outside for sociality. The social is and always has been in a mutually determining relationship with mediating forms. It is in this sense that there is no outside. Such an idea has become a refrain amongst various contemporary media theorists. Here’s a sample: There is no outside position anymore, nor is this perceived as something desirable. (Lovink, 2002a: 4) Both “us” and “them” (whoever we are, whoever they are) are all always situated in this same virtual geography. There’s no outside …. There is nothing outside the vector. (Wark, 2002: 316) There is no more outside. The critique of information is in the information itself. (Lash, 2002: 220) In declaring a universality for media culture and information flows, all of the above statements acknowledge the political and conceptual failure of assuming a critical position outside socio-technically constituted relations. Similarly, they recognise the problems inherent in the “ideology critique” of the Frankfurt School who, in their distinction between “truth” and “false-consciousness”, claimed a sort of absolute knowledge for the critic that transcended the field of ideology as it is produced by the culture industry. Althusser’s more complex conception of ideology, material practices and subject formation nevertheless also fell prey to the pretence of historical materialism as an autonomous “science” that is able to determine the totality, albeit fragmented, of lived social relations. One of the key failings of ideology critique, then, is its incapacity to account for the ways in which the critic, theorist or intellectual is implicated in the operations of ideology. That is, such approaches displace the reflexivity and power relationships between epistemology, ontology and their constitution as material practices within socio-political institutions and historical constellations, which in turn are the settings for the formation of ideology. Scott Lash abandons the term ideology altogether due to its conceptual legacies within German dialectics and French post-structuralist aporetics, both of which ‘are based in a fundamental dualism, a fundamental binary, of the two types of reason. One speaks of grounding and reconciliation, the other of unbridgeability …. Both presume a sphere of transcendence’ (Lash, 2002: 8). Such assertions can be made at a general level concerning these diverse and often conflicting approaches when they are reduced to categories for the purpose of a polemic. 
However, the work of “post-structuralists” such as Foucault, Deleuze and Guattari and the work of German systems theorist Niklas Luhmann is clearly amenable to the task of critique within information societies (see Rossiter, 2003). Indeed, Lash draws on such theorists in assembling his critical dispositif for the information age. More concretely, Lash (2002: 9) advances his case for a new mode of critique by noting the socio-technical and historical shift from ‘constitutive dualisms of the era of the national manufacturing society’ to global information cultures, whose constitutive form is immanent to informational networks and flows. Such a shift, according to Lash, needs to be met with a corresponding mode of critique: Ideologycritique [ideologiekritik] had to be somehow outside of ideology. With the disappearance of a constitutive outside, informationcritique must be inside of information. There is no outside any more. (2002: 10) Lash goes on to note, quite rightly, that ‘Informationcritique itself is branded, another object of intellectual property, machinically mediated’ (2002: 10). It is the political and conceptual tensions between information critique and its regulation via intellectual property regimes which condition critique as yet another brand or logo that I wish to explore in the rest of this essay. Further, I will question the supposed erasure of a “constitutive outside” to the field of socio-technical relations within network societies and informational economies. Lash is far too totalising in supposing a break between industrial modes of production and informational flows. Moreover, the assertion that there is no more outside to information too readily and simplistically assumes informational relations as universal and horizontally organised, and hence overlooks the significant structural, cultural and economic obstacles to participation within media vectors. That is, there certainly is an outside to information! Indeed, there are a plurality of outsides. These outsides are intertwined with the flows of capital and the imperial biopower of Empire, as Hardt and Negri (2000) have argued. As difficult as it may be to ascertain the boundaries of life in all its complexity, borders, however defined, nonetheless exist. Just ask the so-called “illegal immigrant”! This essay identifies three key modalities comprising a constitutive outside: material (uneven geographies of labour-power and the digital divide), symbolic (cultural capital), and strategic (figures of critique). My point of reference in developing this inquiry will pivot around an analysis of the importation in Australia of the British “Creative Industries” project and the problematic foundation such a project presents to the branding and commercialisation of intellectual labour. The creative industries movement – or Queensland Ideology, as I’ve discussed elsewhere with Danny Butt (2002) – holds further implications for the political and economic position of the university vis-à-vis the arts and humanities. Creative industries constructs itself as inside the culture of informationalism and its concomitant economies by the very fact that it is an exercise in branding. 
Such branding is evidenced in the discourses, rhetoric and policies of creative industries as adopted by university faculties, government departments and the cultural industries and service sectors seeking to reposition themselves in an institutional environment that is adjusting to ongoing structural reforms attributed to the demands by the “New Economy” for increased labour flexibility and specialisation, institutional and economic deregulation, product customisation and capital accumulation. Within the creative industries the content produced by labour-power is branded as copyrights and trademarks within the system of Intellectual Property Regimes (IPRs). However, as I will go on to show, a constitutive outside figures in material, symbolic and strategic ways that condition the possibility of creative industries. The creative industries project, as envisioned by the Blair government’s Department of Culture, Media and Sport (DCMS) responsible for the Creative Industry Task Force Mapping Documents of 1998 and 2001, is interested in enhancing the “creative” potential of cultural labour in order to extract a commercial value from cultural objects and services. Just as there is no outside for informationcritique, for proponents of the creative industries there is no culture that is worth its name if it is outside a market economy. That is, the commercialisation of “creativity” – or indeed commerce as a creative undertaking – acts as a legitimising function and hence plays a delimiting role for “culture” and, by association, sociality. And let us not forget, the institutional life of career academics is also at stake in this legitimating process. The DCMS cast its net wide when defining creative sectors and deploys a lexicon that is as vague and unquantifiable as the next mission statement by government and corporate bodies enmeshed within a neo-liberal paradigm. At least one of the key proponents of the creative industries in Australia is ready to acknowledge this (see Cunningham, 2003). The list of sectors identified as holding creative capacities in the CITF Mapping Document include: film, music, television and radio, publishing, software, interactive leisure software, design, designer fashion, architecture, performing arts, crafts, arts and antique markets, architecture and advertising. The Mapping Document seeks to demonstrate how these sectors consist of ‘... activities which have their origin in individual creativity, skill and talent and which have the potential for wealth and job creation through generation and exploitation of intellectual property’ (CITF: 1998/2001). The CITF’s identification of intellectual property as central to the creation of jobs and wealth firmly places the creative industries within informational and knowledge economies. Unlike material property, intellectual property such as artistic creations (films, music, books) and innovative technical processes (software, biotechnologies) are forms of knowledge that do not diminish when they are distributed. This is especially the case when information has been encoded in a digital form and distributed through technologies such as the internet. In such instances, information is often attributed an “immaterial” and nonrivalrous quality, although this can be highly misleading for both the conceptualisation of information and the politics of knowledge production. 
Intellectual property, as distinct from material property, operates as a scaling device in which the unit cost of labour is offset by the potential for substantial profit margins realised by distribution techniques availed by new information and communication technologies (ICTs) and their capacity to infinitely reproduce the digital commodity object as a property relation. Within the logic of intellectual property regimes, the use of content is based on the capacity of individuals and institutions to pay. The syndication of media content ensures that market saturation is optimal and competition is kept to a minimum. However, such a legal architecture and hegemonic media industry has run into conflict with other net cultures such as open source movements and peer-to-peer networks (Lovink, 2002b; Meikle, 2002), which is to say nothing of the digital piracy of software and digitally encoded cinematic forms. To this end, IPRs are an unstable architecture for extracting profit. The operation of Intellectual Property Regimes constitutes an outside within creative industries by alienating labour from its mode of information or form of expression. Lash is apposite on this point: ‘Intellectual property carries with it the right to exclude’ (Lash, 2002: 24). This principle of exclusion applies not only to those outside the informational economy and culture of networks as result of geographic, economic, infrastructural, and cultural constraints. The very practitioners within the creative industries are excluded from control over their creations. It is in this sense that a legal and material outside is established within an informational society. At the same time, this internal outside – to put it rather clumsily – operates in a constitutive manner in as much as the creative industries, by definition, depend upon the capacity to exploit the IP produced by its primary source of labour. For all the emphasis the Mapping Document places on exploiting intellectual property, it’s really quite remarkable how absent any elaboration or considered development of IP is from creative industries rhetoric. It’s even more astonishing that media and cultural studies academics have given at best passing attention to the issues of IPRs. Terry Flew (2002: 154-159) is one of the rare exceptions, though even here there is no attempt to identify the implications IPRs hold for those working in the creative industries sectors. Perhaps such oversights by academics associated with the creative industries can be accounted for by the fact that their own jobs rest within the modern, industrial institution of the university which continues to offer the security of a salary award system and continuing if not tenured employment despite the onslaught of neo-liberal reforms since the 1980s. Such an industrial system of traditional and organised labour, however, does not define the labour conditions for those working in the so-called creative industries. Within those sectors engaged more intensively in commercialising culture, labour practices closely resemble work characterised by the dotcom boom, which saw young people working excessively long hours without any of the sort of employment security and protection vis-à-vis salary, health benefits and pension schemes peculiar to traditional and organised labour (see McRobbie, 2002; Ross, 2003). 
During the dotcom mania of the mid to late 90s, stock options were frequently offered to people as an incentive for offsetting the often minimum or even deferred payment of wages (see Frank, 2000). It is understandable that the creative industries project holds an appeal for managerial intellectuals operating in arts and humanities disciplines in Australia, most particularly at Queensland University of Technology (QUT), which claims to have established the ‘world’s first’ Creative Industries faculty (http://www.creativeindustries.qut.com/). The creative industries provide a validating discourse for those suffering anxiety disorders over what Ruth Barcan (2003) has called the ‘usefulness’ of ‘idle’ intellectual pastimes. As a project that endeavours to articulate graduate skills with labour markets, the creative industries is a natural extension of the neo-liberal agenda within education as advocated by successive governments in Australia since the Dawkins reforms in the mid 1980s (see Marginson and Considine, 2000). Certainly there’s a constructive dimension to this: graduates, after all, need jobs and universities should display an awareness of market conditions; they also have a responsibility to do so. And on this count, I find it remarkable that so many university departments in my own field of communications and media studies are so bold and, let’s face it, stupid, as to make unwavering assertions about market demands and student needs on the basis of doing little more than sniffing the wind! Time for a bit of a reality check, I’d say. And this means becoming a little more serious about allocating funds and resources towards market research and analysis based on the combination of needs between students, staff, disciplinary values, university expectations, and the political economy of markets. However, the extent to which there should be a wholesale shift of the arts and humanities into a creative industries model is open to debate. The arts and humanities, after all, are a set of disciplinary practices and values that operate as a constitutive outside for creative industries. Indeed, in their creative industries manifesto, Stuart Cunningham and John Hartley (2002) loath the arts and humanities in such confused, paradoxical and hypocritical ways in order to establish the arts and humanities as a cultural and ideological outside. To this end, to subsume the arts and humanities into the creative industries, if not eradicate them altogether, is to spell the end of creative industries as it’s currently conceived at the institutional level within academe. Too much specialisation in one post-industrial sector, broad as it may be, ensures a situation of labour reserves that exceed market needs. One only needs to consider all those now unemployed web-designers that graduated from multi-media programs in the mid to late 90s. Further, it does not augur well for the inevitable shift from or collapse of a creative industries economy. Where is the standing reserve of labour shaped by university education and training in a post-creative industries economy? Diehard neo-liberals and true-believers in the capacity for perpetual institutional flexibility would say that this isn’t a problem. The university will just “organically” adapt to prevailing market conditions and shape their curriculum and staff composition accordingly. Perhaps. 
Arguably if the university is to maintain a modality of time that is distinct from the just-in-time mode of production characteristic of informational economies – and indeed, such a difference is a quality that defines the market value of the educational commodity – then limits have to be established between institutions of education and the corporate organisation or creative industry entity. The creative industries project is a reactionary model insofar as it reinforces the status quo of labour relations within a neo-liberal paradigm in which bids for industry contracts are based on a combination of rich technological infrastructures that have often been subsidised by the state (i.e. paid for by the public), high labour skills, a low currency exchange rate and the lowest possible labour costs. In this respect it is no wonder that literature on the creative industries omits discussion of the importance of unions within informational, networked economies. What is the place of unions in a labour force constituted as individualised units? The conditions of possibility for creative industries within Australia are at once its frailties. In many respects, the success of the creative industries sector depends upon the ongoing combination of cheap labour enabled by a low currency exchange rate and the capacity of students to access the skills and training offered by universities. Certainly in relation to matters such as these there is no outside for the creative industries. There’s a great need to explore alternative economic models to the content production one if wealth is to be successfully extracted and distributed from activities in the new media sectors. The suggestion that the creative industries project initiates a strategic response to the conditions of cultural production within network societies and informational economies is highly debateable. The now well documented history of digital piracy in the film and software industries and the difficulties associated with regulating violations to proprietors of IP in the form of copyright and trademarks is enough of a reason to look for alternative models of wealth extraction. And you can be sure this will occur irrespective of the endeavours of the creative industries. To conclude, I am suggesting that those working in the creative industries, be they content producers or educators, need to intervene in IPRs in such a way that: 1) ensures the alienation of their labour is minimised; 2) collectivising “creative” labour in the form of unions or what Wark (2001) has termed the “hacker class”, as distinct from the “vectoralist class”, may be one way of achieving this; and 3) the advocates of creative industries within the higher education sector in particular are made aware of the implications IPRs have for graduates entering the workforce and adjust their rhetoric, curriculum, and policy engagements accordingly. Works Cited Barcan, Ruth. ‘The Idleness of Academics: Reflections on the Usefulness of Cultural Studies’. Continuum: Journal of Media & Cultural Studies (forthcoming, 2003). Bolz, Norbert. ‘Rethinking Media Aesthetics’, in Geert Lovink, Uncanny Networks: Dialogues with the Virtual Intelligentsia. Cambridge, Mass.: MIT Press, 2002, 18-27. Butt, Danny and Rossiter, Ned. ‘Blowing Bubbles: Post-Crash Creative Industries and the Withering of Political Critique in Cultural Studies’. 
Paper presented at Ute Culture: The Utility of Culture and the Uses of Cultural Studies, Cultural Studies Association of Australia Conference, Melbourne, 5-7 December, 2002. Posted to fibreculture mailing list, 10 December, 2002, http://www.fibreculture.org/archives/index.html Creative Industry Task Force: Mapping Document, DCMS (Department of Culture, Media and Sport), London, 1998/2001. http://www.culture.gov.uk/creative/mapping.html Cunningham, Stuart. ‘The Evolving Creative Industries: From Original Assumptions to Contemporary Interpretations’. Seminar Paper, QUT, Brisbane, 9 May, 2003, http://www.creativeindustries.qut.com/research/cirac/documen... ...ts/THE_EVOLVING_CREATIVE_INDUSTRIES.pdf Cunningham, Stuart; Hearn, Gregory; Cox, Stephen; Ninan, Abraham and Keane, Michael. Brisbane’s Creative Industries 2003. Report delivered to Brisbane City Council, Community and Economic Development, Brisbane: CIRAC, 2003. http://www.creativeindustries.qut.com/research/cirac/documen... ...ts/bccreportonly.pdf Flew, Terry. New Media: An Introduction. Oxford: Oxford University Press, 2002. Frank, Thomas. One Market under God: Extreme Capitalism, Market Populism, and the End of Economic Democracy. New York: Anchor Books, 2000. Hartley, John and Cunningham, Stuart. ‘Creative Industries: from Blue Poles to fat pipes’, in Malcolm Gillies (ed.) The National Humanities and Social Sciences Summit: Position Papers. Canberra: DEST, 2002. Hayden, Steve. ‘Tastes Great, Less Filling: Ad Space – Will Advertisers Learn the Hard Lesson of Over-Development?’. Wired Magazine 11.06 (June, 2003), http://www.wired.com/wired/archive/11.06/ad_spc.html Hardt, Michael and Negri, Antonio. Empire. Cambridge, Mass.: Harvard University Press, 2000. Lash, Scott. Critique of Information. London: Sage, 2002. Lovink, Geert. Uncanny Networks: Dialogues with the Virtual Intelligentsia. Cambridge, Mass.: MIT Press, 2002a. Lovink, Geert. Dark Fiber: Tracking Critical Internet Culture. Cambridge, Mass.: MIT Press, 2002b. McLuhan, Marshall. Understanding Media: The Extensions of Man. London: Routledge and Kegan Paul, 1964. McRobbie, Angela. ‘Clubs to Companies: Notes on the Decline of Political Culture in Speeded up Creative Worlds’, Cultural Studies 16.4 (2002): 516-31. Marginson, Simon and Considine, Mark. The Enterprise University: Power, Governance and Reinvention in Australia. Cambridge: Cambridge University Press, 2000. Meikle, Graham. Future Active: Media Activism and the Internet. Sydney: Pluto Press, 2002. Ross, Andrew. No-Collar: The Humane Workplace and Its Hidden Costs. New York: Basic Books, 2003. Rossiter, Ned. ‘Processual Media Theory’, in Adrian Miles (ed.) Streaming Worlds: 5th International Digital Arts & Culture (DAC) Conference. 19-23 May. Melbourne: RMIT University, 2003, 173-184. http://hypertext.rmit.edu.au/dac/papers/Rossiter.pdf Sassen, Saskia. Losing Control? Sovereignty in an Age of Globalization. New York: Columbia University Press, 1996. Wark, McKenzie. ‘Abstraction’ and ‘Hack’, in Hugh Brown, Geert Lovink, Helen Merrick, Ned Rossiter, David Teh, Michele Willson (eds). Politics of a Digital Present: An Inventory of Australian Net Culture, Criticism and Theory. Melbourne: Fibreculture Publications, 2001, 3-7, 99-102. Wark, McKenzie. ‘The Power of Multiplicity and the Multiplicity of Power’, in Geert Lovink, Uncanny Networks: Dialogues with the Virtual Intelligentsia. Cambridge, Mass.: MIT Press, 2002, 314-325. 
21

Hollier, Scott, Katie M. Ellis, and Mike Kent. "User-Generated Captions: From Hackers, to the Disability Digerati, to Fansubbers". M/C Journal 20, no. 3 (21 June 2017). http://dx.doi.org/10.5204/mcj.1259.

Full text
Abstract
Writing in the American Annals of the Deaf in 1931, Emil S. Ladner Jr, a Deaf high school student, predicted the invention of words on screen to facilitate access to “talkies”. He anticipated: Perhaps, in time, an invention will be perfected that will enable the deaf to hear the “talkies”, or an invention which will throw the words spoken directly under the screen as well as being spoken at the same time. (Ladner, cited in Downey Closed Captioning) This invention would eventually come to pass and be known as captions. Captions as we know them today have become widely available because of a complex interaction between technological change, volunteer effort, legislative activism, as well as increasing consumer demand. This began in the late 1950s when the technology to develop captions began to emerge. Almost immediately, volunteers began captioning and distributing both film and television in the US via schools for the deaf (Downey, Constructing Closed-Captioning in the Public Interest). Then, between the 1970s and 1990s Deaf activists and their allies began to campaign aggressively for the mandated provision of captions on television, leading eventually to the passing of the Television Decoder Circuitry Act in the US in 1990 (Ellis). This act decreed that any television with a screen greater than 13 inches must be designed/manufactured to be capable of displaying captions. The Act was replicated internationally, with countries such as Australia adopting the same requirements with their Australian standards regarding television sets imported into the country. As other papers in this issue demonstrate, this market ultimately led to the introduction of broadcasting requirements. Captions are also vital to the accessibility of videos in today’s online and streaming environment—captioning is listed as the highest priority in the definitive World Wide Web Consortium (W3C) Web Content Accessibility Guidelines (WCAG) 2.0 standard (W3C, “Web Content Accessibility Guidelines 2.0”). This recognition of the requirement for captions online is further reflected in legislation, from both the US 21st Century Communications and Video Accessibility Act (CVAA) (2010) and from the Australian Human Rights Commission (2014). Television today is therefore much more freely available to a range of different groups. In addition to broadcast channels, captions are also increasingly available through streaming platforms such as Netflix and other subscription video on demand providers, as well as through user-generated video sites like YouTube. However, a clear discrepancy exists between guidelines, legislation and the industry’s approach. Guidelines such as the W3C are often resisted by industry until compliance is legislated. Historically, captions have been both unavailable (Ellcessor; Ellis) and inadequate (Ellis and Kent), and in many instances, they still are. For example, while the provision of captions in online video is viewed as a priority across international and domestic policies and frameworks, there is a stark contrast between the policy requirements and the practical implementation of these captions. This has led to the active development of a solution as part of an ongoing tradition of user-led development: user-generated captions.
However, within disability studies, research around the agency of this activity—and the media savvy users facilitating it—has gone significantly underexplored. Agency of Activity Information sharing has featured heavily throughout visions of the Web—from Vannevar Bush’s 1945 notion of the memex (Bush), to the hacker ethic, to Zuckerberg’s motivations for creating Facebook in his dorm room in 2004 (Vogelstein)—resulting in a wide agency of activity on the Web. Running through this development of first the Internet and then the Web as a place for a variety of agents to share information has been the hackers’ ethic that sharing information is a powerful, positive good (Raymond 234), that information should be free (Levey), and that to achieve these goals will often involve working around intended information access protocols, sometimes illegally and normally anonymously. From the hacker culture comes the digerati, the elite of the digital world, web users who stand out by their contributions, success, or status in the development of digital technology. In the context of access to information for people with disabilities, we describe those who find these workarounds—providing access to information through mainstream online platforms that are not immediately apparent—as the disability digerati. An acknowledged mainstream member of the digerati, Tim Berners-Lee, inventor of the World Wide Web, articulated a vision for the Web and its role in information sharing as inclusive of everyone: Worldwide, there are more than 750 million people with disabilities. As we move towards a highly connected world, it is critical that the Web be useable by anyone, regardless of individual capabilities and disabilities … The W3C [World Wide Web Consortium] is committed to removing accessibility barriers for all people with disabilities—including the deaf, blind, physically challenged, and cognitively or visually impaired. We plan to work aggressively with government, industry, and community leaders to establish and attain Web accessibility goals. (Berners-Lee) Berners-Lee’s utopian vision of a connected world where people freely shared information online has subsequently been embraced by many key individuals and groups. His emphasis on people with disabilities, however, is somewhat unique. While maintaining a focus on accessibility, in 2006 he shifted focus to who could actually contribute to this idea of accessibility when he suggested the idea of “community captioning” to video bloggers struggling with the notion of including captions on their videos: The video blogger posts his blog—and the web community provides the captions that help others. (Berners-Lee, cited in Outlaw) Here, Berners-Lee was addressing community captioning in the context of video blogging and user-generated content. However, the concept is equally significant for professionally created videos, and media savvy users can now also offer instructions to audiences about how to access captions and subtitles. This shift—from user-generated to user access—must be situated historically in the context of an evolving Web 2.0 and changing accessibility legislation and policy. In the initial accessibility requirements of the Web, there was little mention of captioning at all, primarily due to video being difficult to stream over a dial-up connection. This was reflected in the initial WCAG 1.0 standard (W3C, “Web Content Accessibility Guidelines 1.0”) in which there was no requirement for videos to be captioned.
WCAG 2.0 went some way in addressing this, making captioning online video an essential Level A priority (W3C, “Web Content Accessibility Guidelines 2.0”). However, there were few tools that could actually be used to create captions, and little interest from emerging online video providers in making this a priority. As a result, the possibility of user-generated captions for video content began to be explored by both developers and users. One initial captioning tool that gained popularity was MAGpie, produced by the WGBH National Center for Accessible Media (NCAM) (WGBH). While cumbersome by today’s standards, the arrival of MAGpie 2.0 in 2002 provided an affordable and professional captioning tool that allowed people to create captions for their own videos. However, at that point there was little opportunity to caption videos online, so the focus was more on captioning personal video collections offline. This changed with the launch of YouTube in 2005 and its later purchase by Google (CNET), leading to an explosion of user-generated video content online. However, while the introduction of YouTube closed captioned video support in 2006 ensured that captioned video content could be created (YouTube), the ability for users to create captions, save the output into one of the appropriate captioning file formats, upload the captions, and synchronise the captions to the video remained a difficult task. Improvements to the production and availability of user-generated captions arrived firstly through the launch of YouTube’s automated captions feature in 2009 (Google). This service meant that videos could be uploaded to YouTube and, if the user requested it, Google would caption the video within approximately 24 hours using its speech recognition software. While the introduction of this service was highly beneficial in terms of making captioning videos easier and ensuring that the timing of captions was accurate, the quality of captions ranged significantly. In essence, if the captions were not reviewed and errors not addressed, the automated captions were sometimes inaccurate to the point of hilarity (New Media Rock Stars). These inaccurate YouTube captions are colloquially described as craptions. A #nomorecraptions campaign was launched to address inaccurate YouTube captioning and call on YouTube to make improvements. The ability to create professional user-generated captions across a variety of platforms, including YouTube, arrived in 2010 with the launch of Amara Universal Subtitles (Amara). The Amara subtitle portal provides users with the opportunity to caption online videos, even if they are hosted by another service such as YouTube. The captioned file can be saved after its creation and then uploaded to the relevant video source if the user has access to the location of the video content. The arrival of Amara continues to provide ongoing benefits—it contains a professional captioning editing suite specifically catering for online video, the tool is free, and it can caption videos located on other websites. Furthermore, Amara offers the additional benefit of being able to address the issues of YouTube automated captions—users can benefit from the machine-generated captions of YouTube in relation to its timing, then download the captions for editing in Amara to fix the issues, then return the captions to the original video, saving a significant amount of time when captioning large amounts of video content.
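To make the “appropriate captioning file formats” mentioned above concrete, the sketch below (offered purely as an illustration in Python; the file name and cue text are invented examples and are not drawn from any of the tools discussed) writes a minimal SubRip (SRT) file, the plain-text cue format most commonly used for user-generated captions. The format itself is trivial; the labour lies in the transcription and synchronisation described above.

```python
# Minimal sketch of writing a SubRip (SRT) caption file, the plain-text
# format most commonly used for user-generated captions. The file name and
# cue text are invented examples, not drawn from the article.

cues = [
    ("00:00:01,000", "00:00:04,000", "Captions make online video accessible"),
    ("00:00:04,500", "00:00:07,200", "to viewers who are deaf or hard of hearing."),
]

with open("example_captions.srt", "w", encoding="utf-8") as srt:
    for number, (start, end, text) in enumerate(cues, start=1):
        # Each cue is: a sequence number, a "start --> end" timing line
        # (HH:MM:SS,mmm with a comma before the milliseconds), one or more
        # lines of caption text, then a blank line before the next cue.
        srt.write(f"{number}\n{start} --> {end}\n{text}\n\n")
```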
In recent years Google have also endeavoured to simplify the captioning process for YouTube users by including its own captioning editors, but these tools are generally considered inferior to Amara (Media Access Australia). Similarly, several crowdsourced caption services such as Viki (https://www.viki.com/community) have emerged to facilitate the provision of captions. However, most of these crowdsourcing captioning services can’t tap into commercial products, instead offering a service for people that have a video they’ve created, or one that already exists on YouTube. While Viki was highlighted as a useful platform in protests regarding Netflix’s lack of captions in 2009, commercial entertainment providers still have a responsibility to make improvements to their captioning. As we discuss in the next section, people have resorted to extreme measures to hack Netflix to access the captions they need. While the ability for people to publish captions on user-generated content has improved significantly, there is still a notable lack of captions for professionally developed videos, movies, and television shows available online. User-Generated Netflix Captions In recent years there has been a worldwide explosion of subscription video on demand service providers. Netflix epitomises the trend. As such, for people with disabilities, there has been significant focus on the availability of captions on these services (see Ellcessor, Ellis and Kent). Netflix, as the current leading provider of subscription video entertainment in the US and with a large market share in other countries, has been at the centre of these discussions. While Netflix offers a comprehensive range of captioned video on its service today, there are still videos that do not have captions, particularly in non-English regions. As a result, users have endeavoured to produce user-generated captions for personal use and to find workarounds to access these through the Netflix system. This has been achieved with some success. There are a number of ways in which captions or subtitles can be added to Netflix video content to improve its accessibility for individual users. An early guide in a 2011 blog post (Emil’s Celebrations) identified that when using the Netflix player with the Silverlight plug-in, it is possible to access a hidden menu which allows a subtitle file in the DFXP format to be uploaded to Netflix for playback. However, this does not appear to provide this file to all Netflix users, and is generally referred to as a “soft upload” just for the individual user. Another method to do this, generally credited as the “easiest” way, is to find an SRT file that already exists for the video title, edit the timing to line up with Netflix, use a third-party tool to convert it to the DFXP format, and then upload it using the hidden menu that requires a specific keyboard command to access. While this may be considered uncomplicated for some, there is still a certain amount of technical knowledge required to complete this action, and it is likely to be too complex for many users. However, constant developments in technology are assisting with making access to captions an easier process. Recently, Cosmin Vasile highlighted that captions and subtitle tracks can still be uploaded providing that the older Silverlight plug-in is used for playback instead of the new HTML5 player.
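The “edit the timing, then convert it to the DFXP format” step just described can also be sketched in code. The fragment below is only an illustrative assumption, not a description of any actual converter: it shifts every SRT cue by a fixed offset and writes a bare-bones DFXP/TTML document. The file names, the offset value and the TTML namespace are assumptions made for the example; the exact profile a particular player build expects may differ, which is precisely why dedicated tools emerged for this step.

```python
import re
from xml.sax.saxutils import escape

# Illustrative sketch only: shift SRT cue timings by a fixed offset and emit
# a bare-bones DFXP/TTML file. File names, the offset and the namespace are
# assumptions; player-specific profiles are left to real converter tools.

CUE = re.compile(
    r"\d+\s*\n"
    r"(\d{2}):(\d{2}):(\d{2}),(\d{3}) --> (\d{2}):(\d{2}):(\d{2}),(\d{3})\s*\n"
    r"(.*?)(?:\n\s*\n|\Z)",
    re.S,
)

def to_ms(h, m, s, ms):
    return ((int(h) * 60 + int(m)) * 60 + int(s)) * 1000 + int(ms)

def to_dfxp_time(total_ms):
    total_ms = max(total_ms, 0)  # clamp: cue times cannot be negative
    h, rest = divmod(total_ms, 3_600_000)
    m, rest = divmod(rest, 60_000)
    s, ms = divmod(rest, 1000)
    return f"{h:02}:{m:02}:{s:02}.{ms:03}"  # DFXP uses '.', SRT uses ','

def srt_to_dfxp(srt_path, dfxp_path, offset_ms=0):
    with open(srt_path, encoding="utf-8") as f:
        srt = f.read()
    paragraphs = []
    for h1, m1, s1, ms1, h2, m2, s2, ms2, text in CUE.findall(srt):
        begin = to_dfxp_time(to_ms(h1, m1, s1, ms1) + offset_ms)
        end = to_dfxp_time(to_ms(h2, m2, s2, ms2) + offset_ms)
        body = escape(text.strip()).replace("\n", "<br/>")
        paragraphs.append(f'      <p begin="{begin}" end="{end}">{body}</p>')
    document = (
        '<?xml version="1.0" encoding="utf-8"?>\n'
        '<tt xmlns="http://www.w3.org/ns/ttml">\n'
        "  <body>\n    <div>\n"
        + "\n".join(paragraphs)
        + "\n    </div>\n  </body>\n</tt>\n"
    )
    with open(dfxp_path, "w", encoding="utf-8") as out:
        out.write(document)

# Example usage: delay every caption by 1.5 seconds to line the file up
# with the video, then convert, before uploading through the hidden menu:
#   srt_to_dfxp("fansub.srt", "fansub.dfxp", offset_ms=1500)
```

Whether a given player accepts such a file is exactly the kind of moving target these workarounds describe, which is why browser plug-ins and hosted converters took over this step.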
Others add that it is technically possible to access the hidden feature in an HTML5 player, but an additional Super Netflix browser plug-in is required (Sommergirl). Further, while the procedure for uploading the file remains similar to the approach discussed earlier, there are some additional tools available online such as Subflicks which can provide a simple online conversion of the more common SRT file format to the DFXP format (Subflicks). However, while the ability to use a personal caption or subtitle file remains, the most common way to watch Netflix videos with alternative caption or subtitle files is through the use of the Smartflix service (Smartflix). Unlike other ad-hoc solutions, this service provides a simplified mechanism to bring alternative caption files to Netflix. The Smartflix website states that the service “automatically downloads and displays subtitles in your language for all titles using the largest online subtitles database.” This automatic download and sharing of captions online—known as fansubbing—facilitates easy access for all. For example, blog posts suggest that technology such as this creates important access opportunities for people who are deaf and hard of hearing. Nevertheless, they can be met with suspicion by copyright holders. For example, a recent case in the Netherlands ruled fansubbers were engaging in illegal activities and were encouraging people to download pirated videos. While the fansubbers, like the hackers discussed earlier, argued they were acting in the greater good, the Dutch antipiracy association (BREIN) maintained that subtitles are mainly used by people downloading pirated media and sought to outlaw the manufacture and distribution of third party captions (Anthony). The fansubbers took the issue to court in order to seek clarity about whether copyright holders can reserve exclusive rights to create and distribute subtitles. However, in a ruling against the fansubbers, the court agreed with BREIN that fansubbing violated copyright and incited piracy. What impact this ruling will have on the practice of user-generated captioning online, particularly around popular sites such as Netflix, is hard to predict; however, for people with disabilities who were relying on fansubbing to access content, it is of significant concern that the contention that the main users of user-generated subtitles (or captions) are engaging in illegal activities was so readily accepted. Conclusion This article has focused on user-generated captions and the types of platforms available to create these. It has shown that this desire to provide access, to set the information free, has resulted in the disability digerati finding workarounds to allow users to upload their own captions and make content accessible. Indeed, the Internet and then the Web as a place for information sharing is evident throughout this history of user-generated captioning online, from Berners-Lee’s conception of community captioning, to Emil and Vasile’s instructions to a Netflix community of captioners, to finally a group of fansubbers who took BREIN to court and lost. Therefore, while we have conceived of the disability digerati as a conflation of the hacker and the acknowledged digital influencer, these two positions may again part ways, and the disability digerati may—like the hackers before them—be driven underground. Captioned entertainment content offers a powerful, even vital, mode of inclusion for people who are deaf or hard of hearing.
Yet, despite Berners-Lee’s urging that everything online be made accessible to people with all sorts of disabilities, captions were not addressed in the first iteration of the WCAG, perhaps reflecting the limitations of the speed of the medium itself. This continues to be the case today—although it is no longer difficult to stream video online, and Netflix have reached global dominance, audiences who require captions still find themselves fighting for access. Thus, in this sense, user-generated captions remain an important—yet seemingly technologically and legislatively complicated—avenue for inclusion. References Anthony, Sebastian. “Fan-Made Subtitles for TV Shows and Movies Are Illegal, Court Rules.” Ars Technica UK (2017). 21 May 2017 <https://arstechnica.com/tech-policy/2017/04/fan-made-subtitles-for-tv-shows-and-movies-are-illegal/>. Amara. “Amara Makes Video Globally Accessible.” Amara (2010). 25 Apr. 2017 <https://amara.org/en/ 2010>. Berners-Lee, Tim. “World Wide Web Consortium (W3C) Launches International Web Accessibility Initiative.” Web Accessibility Initiative (WAI) (1997). 19 June 2010 <http://www.w3.org/Press/WAI-Launch.html>. Bush, Vannevar. “As We May Think.” The Atlantic (1945). 26 June 2010 <http://www.theatlantic.com/magazine/print/1969/12/as-we-may-think/3881/>. CNET. “YouTube Turns 10: The Video Site That Went Viral.” CNET (2015). 24 Apr. 2017 <https://www.cnet.com/news/youtube-turns-10-the-video-site-that-went-viral/>. Downey, Greg. Closed Captioning: Subtitling, Stenography, and the Digital Convergence of Text with Television. Baltimore: Johns Hopkins UP, 2008. ———. “Constructing Closed-Captioning in the Public Interest: From Minority Media Accessibility to Mainstream Educational Technology.” Info: The Journal of Policy, Regulation and Strategy for Telecommunications, Information and Media 9.2/3 (2007): 69–82. Ellcessor, Elizabeth. “Captions On, Off on TV, Online: Accessibility and Search Engine Optimization in Online Closed Captioning.” Television & New Media 13.4 (2012): 329–352. <http://tvn.sagepub.com/content/early/2011/10/24/1527476411425251.abstract?patientinform-links=yes&legid=sptvns;51v1>. Ellis, Katie. “Television’s Transition to the Internet: Disability Accessibility and Broadband-Based TV in Australia.” Media International Australia 153 (2014): 53–63. Ellis, Katie, and Mike Kent. “Accessible Television: The New Frontier in Disability Media Studies Brings Together Industry Innovation, Government Legislation and Online Activism.” First Monday 20 (2015). <http://firstmonday.org/ojs/index.php/fm/article/view/6170>. Emil’s Celebrations. “How to Add Subtitles to Movies Streamed in Netflix.” 16 Oct. 2011. 9 Apr. 2017 <https://emladenov.wordpress.com/2011/10/16/how-to-add-subtitles-to-movies-streamed-in-netflix/>. Google. “Automatic Captions in YouTube.” 2009. 24 Apr. 2017 <https://googleblog.blogspot.com.au/2009/11/automatic-captions-in-youtube.html>. Jaeger, Paul. “Disability and the Internet: Confronting a Digital Divide.” Disability in Society. Ed. Ronald Berger. Boulder, London: Lynne Rienner Publishers, 2012. Levy, Steven. Hackers: Heroes of the Computer Revolution. Sebastopol: O’Reilly Media, 1984. Media Access Australia. “How to Caption a YouTube Video.” 2017. 25 Apr. 2017 <https://mediaaccess.org.au/web/how-to-caption-a-youtube-video>. New Media Rock Stars. “YouTube’s 5 Worst Hilariously Catastrophic Auto Caption Fails.” 2013. 25 Apr. 2017 <http://newmediarockstars.com/2013/05/youtubes-5-worst-hilariously-catastrophic-auto-caption-fails/>. Outlaw.
“Berners-Lee Applies Web 2.0 to Improve Accessibility.” Outlaw News (2006). 25 June 2010 <http://www.out-law.com/page-6946>.Raymond, Eric S. The New Hacker’s Dictionary. 3rd ed. Cambridge: MIT P, 1996.Smartflix. “Smartflix: Supercharge Your Netflix.” 2017. 9 Apr. 2017 <https://www.smartflix.io/>.Sommergirl. “[All] Adding Subtitles in a Different Language?” 2016. 9 Apr. 2017 <https://www.reddit.com/r/netflix/comments/32l8ob/all_adding_subtitles_in_a_different_language/>.Subflicks. “Subflicks V2.0.0.” 2017. 9 Apr. 2017 <http://subflicks.com/>.Vasile, Cosmin. “Netflix Has Just Informed Us That Its Movie Streaming Service Is Now Available in Just About Every Country That Matters Financially, Aside from China, of Course.” 2016. 9 Apr. 2017 <http://news.softpedia.com/news/how-to-add-custom-subtitles-to-netflix-498579.shtml>.Vogelstein, Fred. “The Wired Interview: Facebook’s Mark Zuckerberg.” Wired Magazine (2009). 20 Jun. 2010 <http://www.wired.com/epicenter/2009/06/mark-zuckerberg-speaks/>.W3C. “Web Content Accessibility Guidelines 1.0.” W3C Recommendation (1999). 25 Jun. 2010 <http://www.w3.org/TR/WCAG10/>.———. “Web Content Accessibility Guidelines (WCAG) 2.0.” 11 Dec. 2008. 21 Aug. 2013 <http://www.w3.org/TR/WCAG20/>.WGBH. “Magpie 2.0—Free, Do-It-Yourself Access Authoring Tool for Digital Multimedia Released by WGBH.” 2002. 25 Apr. 2017 <http://ncam.wgbh.org/about/news/pr_05072002>.YouTube. “Finally, Caption Video Playback.” 2006. 24 Apr. 2017 <http://googlevideo.blogspot.com.au/2006/09/finally-caption-playback.html>.
22

Bauer, Kathy Anne. "How Does Taste In Educational Settings Influence Parent Decision Making Regarding Enrolment?" M/C Journal 17, no. 1 (17 March 2014). http://dx.doi.org/10.5204/mcj.765.

Full text
Abstract
Introduction Historically in Australia, there has been a growing movement behind the development of quality Early Childhood Education and Care Centres (termed ‘centres’ for this article). These centres are designed to provide care and education outside of the home for children from birth to five years old. In the mid 1980s, the then Labor Government of Australia promoted and funded the establishment of many centres to provide women who were at home with children the opportunity to move into the workplace. Centre fees were heavily subsidised to make this option viable in the hope that more women would become employed and Australia’s rising unemployment statistics would be reduced. The popularity of this system soon meant that there was a childcare centre shortage and parents were faced with long waiting lists to enrol their child into a centre. To alleviate this situation, independent centres were established that complied with Government rules and regulations. Independent, state, and local government funded centres had a certain degree of autonomy over facilities, staffing, qualifications, quality programmes, and facilities. This movement became part of the global increased focus on the importance of early childhood education. As part of that educational emphasis, the Melbourne Declaration on Educational Goals for Young Australians in 2008 set the direction for schooling for the next 10 years. This formed the basis of Australia’s Education Reforms (Department of Education, Employment and Workplace Relations). The reforms have influenced the management of early childhood education and care centres. All centres must comply with the National Quality Framework that mandates staff qualifications, facility standards, and the ratios of children to adults. From a parent’s perspective centres now look very much the same. All centres have indoor and outdoor playing spaces, separate rooms for differently aged children, playgrounds, play equipment, foyer and office spaces with similarly qualified staff. With these similarities in mind, the dilemma for parents is how to decide on a centre for their child to attend. Does it come down to parents’ taste about a centre? In the education context, how is taste conceptualised? This article will present research that conceptualises taste as being part of a decision-making process (DMP) that is used by parents when choosing a centre for their child and, in doing so, will introduce the term: parental taste. The Determining Factors of Taste A three phase, sequential, mixed methods study was used to determine how parents select one centre over another. Cresswell described this methodology as successive phases of data collection, where each builds on the previous, with the aim of addressing the research question. This process was seen as a method to identify parents’ varying tastes in centres considered for their child to attend. Phase 1 used a survey of 78 participants to gather baseline data to explore the values, expectations, and beliefs of the parents. It also determined the aspects of the centre important to parents, and gauged the importance of the socio-economic status and educational backgrounds of the participants in their decision making. Phase 2 built on the phase 1 data and included interviews with 20 interviewees exploring the details of the decision-making process (DMP). 
This phase also elaborated on the survey questions and responses, determined the variables that might impact on the DMP, and identified how parents access information about early learning centres. Phase 3 focussed on parental satisfaction with their choice of early learning setting. Again using 20 interviewees, these interviews investigated the DMP that had been undertaken, as well as any that might still be ongoing. This phase focused on parents' reflection on the DMP used and questioned them as to whether the same process would be used again in other areas of decision making. Thematic analysis of the data revealed that it usually fell to the mother to explore centre options and make the decision about enrolment. Along the way, she may have discussions with the father and, to a lesser extent, with the centre staff. Friends, relatives, the child, siblings, and other educational professionals did not rank highly when the decision was being considered. Interestingly, it was found that the mother began to consider childcare options and the need for care twelve months or more before care was required and a decision had to be made. A small number of parents (three from the 20) said that they thought about it while pregnant but felt silly because they “didn’t even have a baby yet.” All mothers said that it took quite a while to get their head around leaving their child with someone else, and this anxiety and concern increased the younger the child was. Two parents had criteria that they did not want their child in care until he/she could talk and walk, so that the child could look after him- or herself to some extent. This indicated some degree of scepticism that their child would be cared for appropriately. Parents who considered enrolling their child into care closer to when it was required generally chose to do this because they had selected a pre-determined age that their child would go into childcare. A small number of parents (two) decided that their child would not attend a centre until three years old, while other parents found employment and had to find care quickly in response. The survey results showed that the aspects of a centre that most influenced parental decision-making were the activities and teaching methods used by staff, centre reputation, play equipment inside and outside the centre, and the playground size and centre buildings. The interview responses added to this by suggesting that the type of playground facilities available were important, with a natural environment being preferred. Interestingly, the lowest aspect of importance reported was whether the child had friends or family already attending the centre. The results of the survey and interview data reflected the parents’ aspirations for their child and included the development of personal competencies of self-awareness, self-regulation, and motivation linking emotions to thoughts and actions (Gendron). The child’s experience in a centre was expected to develop and refine personal traits such as self-confidence, self-awareness, self-management, the ability to interact with others, and the involvement in educational activities to achieve learning goals. With these aspirations in mind, parents felt considerable pressure to choose the environment that would fit their child the best. During the interview stage of data collection, the term “taste” emerged. The term is commonly used in a food, fashion, or style context. 
In the education context, however, taste was conceptualised as the judgement of likes and dislikes regarding centre attributes. Gladwell writes that “snap judgements are, first of all, enormously quick: they rely on the thinnest slices of experience. But they are also unconscious” (50). The immediacy of determining one's taste refutes the neoliberal construction (Campbell, Proctor, Sherington) of the DMP as a rational decision-making process that systematically compares different options before making a decision. In the education context, taste can be reconceptualised as an alignment between a decision and inherent values and beliefs. A personal “backpack” of experiences, beliefs, values, ideas, and memories all play a part in forming a person’s taste related to their likes and dislikes. In turn, this effects the end decision made. Parents formulated an initial response to a centre linked to the identification of attributes that aligned with personal values, beliefs, expectations, and aspirations. The data analysis indicated that parents formulated their personal taste in centres very quickly after centres were visited. At that point, parents had a clear image of the preferred centre. Further information gathering was used to reinforce that view and confirm this “parental taste.” How Does Parental Taste about a Centre Influence the Decision-Making Process? All parents used a process of decision-making to some degree. As already stated, it was usually the mother who gathered information to inform the final decision, but in two of the 78 cases it was the father who investigated and decided on the childcare centre in which to enrol. All parents used some form of process to guide their decision-making. A heavily planned process sees the parent gather information over a period of time and included participating in centre tours, drive-by viewings, talking with others, web-based searches, and, checking locations in the phone book. Surprisingly, centre advertising was the least used and least effective method of attracting parents, with only one person indicating that advertising had played a part in her DMP. This approach applied to a woman who had just moved to a new town and was not aware of the care options. This method may also be a reflection of the personality of the parent or it may reflect an understanding that there are differences between services in terms of their focus on education and care. A lightly planned process occurred when a relatively swift decision was made with minimal information gathering. It could have been the selection of the closest and most convenient centre, or the one that parents had heard people talk about. These parents were happy to go to the centre and add their name to the waiting list or enrol straight away. Generally, the impression was that all services provide the same education and care. Parents appeared to use different criteria when considering a centre for their child. Aspects here included the physical environment, size of rooms, aesthetic appeal, clean buildings, tidy surrounds, and a homely feel. Other aspects that affected this parental taste included the location of the centre, the availability of places for the child, and the interest the staff showed in parent and child. The interviews revealed that parents placed an importance on emotions when they decided if a centre suited their tastes that in turn affected their DMP. The “vibe,” the atmosphere, and how the staff made the parents feel were the most important aspects of this process. 
The centre’s reputation was also central to decision making. What Constructs Underpin the Decision? Parental choice decisions can appear to be rational, but are usually emotionally connected to parental aspirations and values. In this way, parental choice and prior parental decision making processes reflect the bounded rationality described by Kahneman, and are based on factors relevant to the individual as supported by Ariely and Lindstrom. Ariely states that choice and the decision making process are emotionally driven and may be irrational-rational decisions. Gladwell supports this notion in that “the task of making sense of ourselves and our behaviour requires that we acknowledge there can be as much value in the blink of an eye as in months of rational analysis” (17). Reay’s research into social, cultural, emotional, and human capital to explain human behaviour was built upon to develop five constructs for decision making in this research. The R.O.P.E.S. constructs are domains that tie together to categorise the interaction of emotional connections that underpin the decision making process based on the parental taste in centres. The constructs emerged from the analysis of the data collected in the three phase approach. They were based on the responses from parents related to both their needs and their child’s needs in terms of having a valuable and happy experience at a centre. The R.O.P.E.S. constructs were key elements in the formation of parental taste in centres and eventual enrolment. The Reputational construct (R) included word of mouth, from friends, the cleaner, other staff from either the focus or another centre, and may or may not have aligned with parental taste and influenced the decision. Other constructs (O) included the location and convenience of the centre, and availability of spaces. Cost was not seen as an issue with the subsidies making each centre similar in fee structure. The Physical construct (P) included the facilities available such as the indoor and outdoor play space, whether these are natural or artificial environments, and the play equipment available. The Social construct (S) included social interactions—sharing, making friends, and building networks. It was found that the Emotional construct (E) was central to the process. It underpinned all the other constructs and was determined by the emotions that were elicited when the parent had the first and subsequent contact with the centre staff. This construct is pivotal in parental taste and decision making. Parents indicated that some centres did not have an abundance of resources but “the lady was really nice” (interview response) and the parent thought that her child would be cared for in that environment. Comments such as “the lady was really friendly and made us feel part of the place even though we were just looking around” (interview response) added to the emotional connection and construct for the DMP. The emotional connection with staff and the willingness of the director to take the time to show the parent the whole centre was a common comment from parents. Parents indicated that if they felt comfortable, and the atmosphere was warm and homelike, then they knew that their child would too. One centre particularly supported parental taste in a homely environment and had lounges, floor rugs, lamps for lighting, and aromatherapy oil burning that contributed to a home-like feel that appealed to parents and children. 
The professionalism of the staff who displayed an interest in the children, had interesting activities in their room, and were polite and courteous also added to the emotional construct. Staff speaking to the parent and child, rather than just the parent, was also valued. Interestingly, parents did not comment on the qualifications held by staff, indicating that parents assumed that to be employed staff must hold the required qualifications. Is There a Further Application of Taste in Decision Making? The third phase of data collection was related to additional questions being asked of the interviewee that required reflection of the DMP used when choosing a centre for their child to attend. Parents were asked to review the process and comment on any changes that they would make if they were in the same position again. The majority of parents said that they were content with their taste in centres and the subsequent decision made. A quarter of the parents indicated that they would make minor changes to their process. A common comment made was that the process used was indicative of the parent’s personality. A self confessed “worrier” enrolling her first child gathered a great deal of information and visited many centres to enable the most informed decision to be made. In contrast, a more relaxed parent enrolling a second or third child made a quicker decision after visiting or phoning one or two centres. Although parents considered their decision to be rationally considered, the impact of parental taste upon visiting the centre and speaking to staff was a strong indicator of the level of satisfaction. Taste was a precursor to the decision. When asked if the same process would be used if choosing a different service, such as an accountant, parents indicated that a similar process would be used, but perhaps not as in depth. The reasoning here was that parents were aware that the decision of selecting a centre would impact on their child and ultimately themselves in an emotional way. The parent indicated that if they spent time visiting centres and it appealed to their taste then the child would like it too. In turn this made the whole process of attending a centre less stressful and emotional. Parents clarified that not as much personal information gathering would occur if searching for an accountant. The focus would be centred on the accountant’s professional ability. Other instances were offered, such as purchasing a car, or selecting a house, dentist, or a babysitter. All parents suggested that additional information would be collected if their child of family would be directly impacted by the decision. Advertising of services or businesses through various multimedia approaches appeared not to rate highly when parents were in the process of decision making. Television, radio, print, Internet, and social networks were identified as possible modes of communication available for consideration by parents. The generational culture was evident in the responses from different parent age groups. The younger parents indicated that social media, Internet, and print may be used to ascertain the benefits of different services and to access information about the reputation of centres. In comparison, the older parents preferred word-of-mouth recommendations. Neither television nor radio was seen as media approaches that would attract clientele. Conclusion In the education context, the concept of parental taste can be seen to be an integral component to the decision making process. 
In this case, the attributes of an educational facility align with an individual’s personal “backpack” and form a like or a dislike, known as parental taste. The implications for the Directors of Early Childhood Education and Care Centres indicate that parental taste plays a role in a child’s enrolment into a centre. Parental taste is determined by the attributes of the centre that are aligned with the R.O.P.E.S. constructs, with the emotional element as the key component. A less rigorous DMP is used when a generic service is required. Media and cultural ways of looking at our society interpret how important decisions are made. A general assumption is that major decisions are made in a calm, considered and rational manner. This is a neoliberal view and is not supported by the research presented in this article. References Ariely, Dan. Predictably Irrational: The Hidden Forces That Shape Our Decisions. London: Harper, 2009. Australian Children’s Education, Care and Quality Authority (ACECQA). n.d. 14 Jan. 2014. ‹http://www.acecqa.gov.au›. Campbell, Craig, Helen Proctor, and Geoffrey Sherington. School Choice: How Parents Negotiate the New School Market in Australia. Crows Nest, N.S.W.: Allen and Unwin, 2009. Cresswell, John W. Research Design: Qualitative, Quantitative and Mixed Methods Approaches. 2nd ed. Los Angeles: Sage, 2003. Department of Education. 11 Oct. 2013. 14 Jan. 2014. ‹http://education.gov.au/national-quality-framework-early-childhood-education-and-care›. Department of Employment, Education and Workplace Relations (DEEWR). Education Reforms. Canberra, ACT: Australian Government Publishing Service, 2009. Gendron, Benedicte. “Why Emotional Capital Matters in Education and in Labour?: Toward an Optimal Exploitation of Human Capital and Knowledge Management.” Les Cahiers de la Maison des Sciences Economiques 113 (2004): 1–37. Gladwell, Malcolm. Blink: The Power of Thinking without Thinking. Harmondsworth, UK: Penguin, 2005. Kahneman, Daniel. Thinking, Fast and Slow. New York: Farrar, Straus and Giroux, 2011. Lindstrom, Martin. Buy-ology: How Everything We Believe About Why We Buy Is Wrong. London: Random House Business Books, 2009. Melbourne Declaration on Educational Goals for Young Australians. 14 Jan. 2014. ‹http://www.mceecdya.edu.au/mceecdya/melbourne_declaration,25979.html›. National Quality Framework. 14 Jan. 2014. ‹http://www.acecqa.gov.au›. Reay, Diane. A Useful Extension of Bourdieu’s Conceptual Framework?: Emotional Capital as a Way of Understanding Mothers’ Involvement in their Children’s Education? Oxford: Blackwell Publishers, 2000.
23

Jethani, Suneel. "New Media Maps as ‘Contact Zones’: Subjective Cartography and the Latent Aesthetics of the City-Text". M/C Journal 14, no. 5 (18 October 2011). http://dx.doi.org/10.5204/mcj.421.

Full text
Abstract
Any understanding of social and cultural change is impossible without a knowledge of the way media work as environments. —Marshall McLuhan. What is visible and tangible in things represents our possible action upon them. —Henri Bergson. Introduction: Subjective Maps as ‘Contact Zones’ Maps feature heavily in a variety of media; they appear in textbooks, on television, in print, and on the screens of our handheld devices. The production of cartographic texts is a process that is imbued with power relations and bound up with the production and reproduction of social life (Pinder 405). Mapping involves choices as to what information is and is not included. In their organisation, categorisation, modeling, and representation maps show and they hide. Thus “the idea that a small number of maps or even a single (and singular) map might be sufficient can only apply in a spatialised area of study whose own self-affirmation depends on isolation from its context” (Lefebvre 85–86). These isolations determine the way we interpret the physical, biological, and social worlds. The map can be thought of as a schematic for political systems within a confined set of spatial relations, or as a container for political discourse. Mapping contributes equally to the construction of experiential realities as to the representation of physical space, which also contains the potential to incorporate representations of temporality and rhythm to spatial schemata. Thus maps construct realities as much as they represent them and coproduce space as much as the political identities of people who inhabit them. Maps are active texts and have the ability to promote social change (Pickles 146). It is no wonder, then, that artists, theorists and activists alike readily engage in the conflicted praxis of mapping. This critical engagement “becomes a method to track the past, embody memories, explain the unexplainable” and manifest the latent (Ibarra 66). In this paper I present a short case study of Bangalore: Subjective Cartographies a new media art project that aims to model a citizen driven effort to participate in a critical form of cartography, which challenges dominant representations of the city-space. I present a critical textual analysis of the maps produced in the workshops, the artist statements relating to these works used in the exhibition setting, and statements made by the participants on the project’s blog. This “praxis-logical” approach allows for a focus on the project as a space of aggregation and the communicative processes set in motion within them. In analysing such projects we could (and should) be asking questions about the functions served by the experimental concepts under study—who has put it forward? Who is utilising it and under what circumstances? Where and how has it come into being? How does discourse circulate within it? How do these spaces as sites of emergent forms of resistance within global capitalism challenge traditional social movements? How do they create self-reflexive systems?—as opposed to focusing on ontological and technical aspects of digital mapping (Renzi 73). In de-emphasising the technology of digital cartography and honing in on social relations embedded within the text(s), this study attempts to complement other studies on digital mapping (see Strom) by presenting a case from the field of politically oriented tactical media. 
Bangalore: Subjective Cartographies has been selected for analysis, in this exploration of media as “zone.” It goes some way to incorporating subjective narratives into spatial texts. This is a three-step process where participants tapped into spatial subjectivities by data collection or environmental sensing led by personal reflection or ethnographic enquiry, documenting and geo-tagging their findings in the map. Finally they engaged an imaginative or ludic process of synthesising their data in ways not inherent within the traditional conventions of cartography, such as the use of sound and distortion to explicate the intensity of invisible phenomena at various coordinates in the city-space. In what follows I address the “zone” theme by suggesting that if we apply McLuhan’s notion of media as environment together with Henri Bergson’s assertion that visibility and tangibility constitutes the potential for action to digital maps, projects such as Bangalore: Subjective Cartographies constitute a “contact zone.” A type of zone where groups come together at the local level and flows of discourse about art, information communication, media, technology, and environment intersect with local histories and cultures within the cartographic text. A “contact zone,” then, is a site where latent subjectivities are manifested and made potentially politically potent. “Contact zones,” however, need not be spaces for the aggrieved or excluded (Renzi 82), as they may well foster the ongoing cumulative politics of the mundane capable of developing into liminal spaces where dominant orders may be perforated. A “contact zone” is also not limitless and it must be made clear that the breaking of cartographic convention, as is the case with the project under study here, need not be viewed as resistances per se. It could equally represent thresholds for public versus private life, the city-as-text and the city-as-social space, or the zone where representations of space and representational spaces interface (Lefebvre 233), and culture flows between the mediated and ideated (Appadurai 33–36). I argue that a project like Bangalore: Subjective Cartographies demonstrates that maps as urban text form said “contact zones,” where not only are media forms such as image, text, sound, and video are juxtaposed in a singular spatial schematic, but narratives of individual and collective subjectivities (which challenge dominant orders of space and time, and city-rhythm) are contested. Such a “contact zone” in turn may not only act as a resource for citizens in the struggle of urban design reform and a democratisation of the facilities it produces, but may also serve as a heuristic device for researchers of new media spatiotemporalities and their social implications. Critical Cartography and Media Tactility Before presenting this brief illustrative study something needs to be said of the context from which Bangalore: Subjective Cartographies has arisen. Although a number of Web 2.0 applications have come into existence since the introduction of Google Maps and map application program interfaces, which generate a great deal of geo-tagged user generated content aimed at reconceptualising the mapped city-space (see historypin for example), few have exhibited great significance for researchers of media and communications from the perspective of building critical theories relating to political potential in mediated spaces. The expression of power through mapping can be understood from two perspectives. 
The first—attributed largely to the Frankfurt School—seeks to uncover the potential of a society that is repressed by capitalist co-opting of the cultural realm. This perspective sees maps as a potential challenge to, and means of providing emancipation from, existing power structures. The second, less concerned with dispelling false ideologies, deals with the politics of epistemology (Crampton and Krygier 14). According to Foucault, power was not applied from the top down but manifested laterally in a highly diffused manner (Foucault 117; Crampton and Krygier 14). Foucault’s privileging of the spatial and epistemological aspects of power and resistance complements the Frankfurt School’s resistance to oppression in the local. Together the two perspectives orient power relative to spatial and temporal subjectivities, and thus fit congruently into cartographic conventions. In order to make sense of these practices the post-oppositional character of tactical media maps should be located within an economy of power relations where resistance is never outside of the field of forces but rather is its indispensable element (Renzi 72). Such exercises in critical cartography are strongly informed by the critical politico-aesthetic praxis of political/art collective The Situationist International, whose maps of Paris were inherently political. The Situationist International incorporated appropriated texts into, and manipulated, existing maps to explicate city-rhythms and intensities to construct imaginative and alternate representations of the city. Bangalore: Subjective Cartographies adopts a similar approach. The artists’ statement reads: We build our subjective maps by combining different methods: photography, film, and sound recording; […] to explore the visible and invisible […] city; […] we adopt psycho-geographical approaches in exploring territory, defined as the study of the precise effects of the geographical environment, consciously developed or not, acting directly on the emotional behaviour of individuals. The project proposals put forth by workshop participants also draw heavily from the Situationists’s A New Theatre of Operations for Culture. A number of Situationist theories and practices feature in the rationale for the maps created in the Bangalore Subjective Cartographies workshop. For example, the Situationists took as their base a general notion of experimental behaviour and permanent play where rationality was approached on the basis of whether or not something interesting could be created out of it (Wark 12). The dérive is the rapid passage through various ambiences with a playful-constructive awareness of the psychographic contours of a specific section of space-time (Debord). The dérive can be thought of as an exploration of an environment without preconceptions about the contours of its geography, but rather a focus on the reality of inhabiting a place. Détournement involves the re-use of elements from recognised media to create a new work with meaning often opposed to the original. Psycho-geography is taken to be the subjective ambiences of particular spaces and times. The principles of détournement and psycho-geography imply a unitary urbanism, which hints at the potential of achieving in environments what may be achieved in media with détournement. Bangalore: Subjective Cartographies carries Situationist praxis forward by attempting to exploit certain properties of information digitalisation to formulate textual representations of unitary urbanism. 
Bangalore: Subjective Cartographies is demonstrative of a certain media tactility that exists more generally across digital-networked media ecologies and channels this to political ends. This tactility of media is best understood through textual properties awarded by the process and logic of digitalisation described in Lev Manovich’s Language of New Media. These properties are: numerical representation in the form of binary code, which allows for the reification of spatial data in a uniform format that can be stored and retrieved in-silico as opposed to in-situ; manipulation of this code by the use of algorithms, which renders the scales and lines of maps open to alteration; modularity that enables incorporation of other textual objects into the map whilst maintaining each incorporated item’s individual identity; the removal to some degree of human interaction in terms of the translation of environmental data into cartographic form (whilst other properties listed here enable human interaction with the cartographic text), and the nature of digital code allows for changes to accumulate incrementally creating infinite potential for refinements (Manovich 49–63). The Subjective Mapping of Bangalore Bangalore is an interesting site for such a project given the recent and rapid evolution of its media infrastructure. As a “media city,” the first television sets appeared in Bangalore at some point in the early 1980s. The first Internet Service Provider (ISP), which served corporate clients only, commenced operating a decade later and then offered dial-up services to domestic clients in the mid-1990s. At present, however, Bangalore has the largest number of broadband Internet connections in India. With the increasing convergence of computing and telecommunications with traditional forms of media such as film and photography, Bangalore demonstrates well what Scott McQuire terms a media-architecture complex, the core infrastructure for “contact zones” (vii). Bangalore: Subjective Cartographies was a workshop initiated by French artists Benjamin Cadon and Ewen Cardonnet. It was conducted with a number of students at the Srishti School of Art, Design and Technology in November and December 2009. Using Metamap.fr (an online cartographic tool that makes it possible to add multimedia content such as texts, video, photos, sounds, links, location points, and paths to digital maps) students were asked to, in groups of two or three, collect and consult data on ‘felt’ life in Bangalore using an ethnographic, transverse geographic, thematic, or temporal approach. The objective of the project was to model a citizen driven effort to subvert dominant cartographic representations of the city. In doing so, the project and this paper posits that there is potential for such methods to be adopted to form new literacies of cartographic media and to render the cartographic imaginary politically potent. The participants’ brief outlined two themes. The first was the visible and symbolic city where participants were asked to investigate the influence of the urban environment on the behaviours and sensations of its inhabitants, and to research and collect signifiers of traditional and modern worlds. The invisible city brief asked participants to consider the latent environment and link it to human behaviour—in this case electromagnetic radiation linked to the cities telecommunications and media infrastructure was to be specifically investigated. 
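As a rough illustration of how such geo-tagged, multimedia annotations might be represented as data, the following Python sketch builds a small GeoJSON structure. It is hypothetical: the coordinates, property names, and media URLs are invented, and it does not describe Metamap.fr's actual data model; it simply shows how each media item can keep its own identity (Manovich's modularity) while being pinned to a point in the city-space.

```python
# A minimal sketch of how a subjective, geo-tagged annotation might be stored.
# Illustrative GeoJSON only: coordinates, media URLs, and property names are
# hypothetical, not Metamap.fr's actual data model.
import json

def make_annotation(lon, lat, title, note, media_url, media_type):
    """Return one geo-tagged annotation as a GeoJSON Feature."""
    return {
        "type": "Feature",
        "geometry": {"type": "Point", "coordinates": [lon, lat]},
        "properties": {
            "title": title,
            "note": note,  # subjective, first-person observation
            "media": {"url": media_url, "type": media_type},
        },
    }

subjective_map = {
    "type": "FeatureCollection",
    "features": [
        make_annotation(
            77.5946, 12.9716,  # hypothetical point in Bangalore
            "Street vendor water source",
            "Field recording and webcam microscopy sample taken here.",
            "https://example.org/media/sample-01.ogg", "audio",
        ),
        make_annotation(
            77.5800, 13.1000,
            "Electromagnetic reading",
            "Flow-meter reading logged during a walk through the suburb.",
            "https://example.org/media/reading-014.json", "data",
        ),
    ],
}

with open("subjective_map.geojson", "w", encoding="utf-8") as f:
    json.dump(subjective_map, f, indent=2)
```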
The Visible and Symbolic City During British rule many Indian cities functioned as dual entities where flow of people and commodities circulated between localised enclaves and the centralised British-built areas. Mirroring this was the dual mode of administration where power was shared between elected Indian legislators and appointed British officials (Hoselitz 432–33). Reflecting on this diarchy leads naturally to questions about the politics of civic services such as the water supply, modes of public communication and instruction, and the nature of the city’s administration, distribution, and manufacturing functions. Workshop participants approached these issues in a variety of ways. In the subjective maps entitled Microbial Streets and Water Use and Reuse, food and water sources of street vendors are traced with the aim to map water supply sources relative to the movements of street vendors operating in the city. Images of the microorganisms are captured using hacked webcams as makeshift microscopes. The data was then converted to audio using Pure Data—a real-time graphical programming environment for the processing audio, video and graphical data. The intention of Microbial Streets is to demonstrate how mapping technologies could be used to investigate the flows of food and water from source to consumer, and uncover some of the latencies involved in things consumed unhesitatingly everyday. Typographical Lens surveys Russell Market, an older part of the city through an exploration of the aesthetic and informational transformation of the city’s shop and street signage. In Ethni City, Avenue Road is mapped from the perspective of local goldsmiths who inhabit the area. Both these maps attempt to study the convergence of the city’s dual function and how the relationship between merchants and their customers has changed during the transition from localised enclaves, catering to the sale of particular types of goods, to the development of shopping precincts, where a variety of goods and services can be sought. Two of the project’s maps take a spatiotemporal-archivist approach to the city. Bangalore 8mm 1940s uses archival Super 8 footage and places digitised copies on the map at the corresponding locations of where they were originally filmed. The film sequences, when combined with satellite or street-view images, allow for the juxtaposition of present day visions of the city with those of the 1940s pre-partition era. Chronicles of Collection focuses on the relationship between people and their possessions from the point of view of the object and its pathways through the city in space and time. Collectors were chosen for this map as the value they placed on the object goes beyond the functional and the monetary, which allowed the resultant maps to access and express spatially the layers of meaning a particular object may take on in differing contexts of place and time in the city-space. The Invisible City In the expression of power through city-spaces, and by extension city-texts, certain circuits and flows are ossified and others rendered latent. Raymond Williams in Politics and Letters writes: however dominant a social system may be, the very meaning of its domination involves a limitation or selection of the activities it covers, so that by definition it cannot exhaust all social experience, which therefore always potentially contains space for alternative acts and alternative intentions which are not yet articulated as a social institution or even project. 
(252) The artists’ statement puts forward this possible response, an exploration of the latent aesthetics of the city-space: In this sense then, each device that enriches our perception for possible action on the real is worthy of attention. Even if it means the use of subjective methods, that may not be considered ‘evidence’. However, we must admit that any subjective investigation, when used systematically and in parallel with the results of technical measures, could lead to new possibilities of knowledge. Electromagnetic City maps the city’s sources of electromagnetic radiation, primarily from mobile phone towers, but also as a by-product of our everyday use of technologies, televisions, mobile phones, Internet Wi-Fi computer screens, and handheld devices. This map explores issues around how the city’s inhabitants hear, see, feel, and represent things that are a part of our environment but invisible, and asks: are there ways that the intangible can be oriented spatially? The intensity of electromagnetic radiation being emitted from these sources, which are thought to negatively influence the meditation of ancient sadhus (sages) also features in this map. This data was collected by taking electromagnetic flow meters into the suburb of Yelhanka (which is also of interest because it houses the largest milk dairy in the state of Karnataka) in a Situationist-like derive and then incorporated back into Metamap. Signal to Noise looks at the struggle between residents concerned with the placement of mobile phone towers around the city. It does so from the perspectives of people who seek information about their placement concerned about mobile phone signal quality, and others concerned about the proximity of this infrastructure to their homes due to to potential negative health effects. Interview footage was taken (using a mobile phone) and manipulated using Pure Data to distort the visual and audio quality of the footage in proportion to the fidelity of the mobile phone signal in the geographic area where the footage was taken. Conclusion The “contact zone” operating in Bangalore: Subjective Cartographies, and the underlying modes of social enquiry that make it valuable, creates potential for the contestation of new forms of polity that may in turn influence urban administration and result in more representative facilities of, and for, city-spaces and their citizenry. Robert Hassan argues that: This project would mean using tactical media to produce new spaces and temporalities that are explicitly concerned with working against the unsustainable “acceleration of just about everything” that our present neoliberal configuration of the network society has generated, showing that alternatives are possible and workable—in ones job, home life, family life, showing that digital [spaces and] temporality need not mean the unerring or unbending meter of real-time [and real city-space] but that an infinite number of temporalities [and subjectivities of space-time] can exist within the network society to correspond with a diversity of local and contextual cultures, societies and polities. (174) As maps and locative motifs begin to feature more prominently in media, analyses such as the one discussed in this paper may allow for researchers to develop theoretical approaches to studying newer forms of media. References Appadurai, Arjun. Modernity at Large: Cultural Dimensions of Globalisation. Minneapolis: U of Minnesota P, 1996. 
“Bangalore: Subjective Cartographies.” 25 July 2011 ‹http://bengaluru.labomedia.org/page/2/›. Bergson, Henri. Creative Evolution. New York: Henry Holt and Company, 1911. Crampton, Jeremy W., and John Krygier. “An Introduction to Critical Cartography.” ACME: An International E-Journal for Critical Geography 4 (2006): 11–13. Chardonnet, Ewen, and Benjamin Cadon. “Semaphore.” 25 July 2011 ‹http://semaphore.blogs.com/semaphore/spectral_investigations_collective/›. Debord, Guy. “Theory of the Dérive.” 25 July 2011 ‹http://www.bopsecrets.org/SI/2.derive.htm›. Foucault, Michel. Remarks on Marx. New York: Semiotext(e), 1991. Hassan, Robert. The Chronoscopic Society: Globalization, Time and Knowledge in the Networked Economy. New York: Lang, 2003. “Historypin.” 4 Aug. 2011 ‹http://www.historypin.com/›. Hoselitz, Bert F. “A Survey of the Literature on Urbanization in India.” India’s Urban Future. Ed. Roy Turner. Berkeley: U of California P, 1961. 425–43. Ibarra, Anna. “Cosmologies of the Self.” Elephant 7 (2011): 66–96. Lefebvre, Henri. The Production of Space. Oxford: Blackwell, 1991. Lovink, Geert. Dark Fibre. Cambridge: MIT Press, 2002. Manovich, Lev. The Language of New Media. Cambridge: MIT Press, 2000. “Metamap.fr.” 3 Mar. 2011 ‹http://metamap.fr/›. McLuhan, Marshall, and Quentin Fiore. The Medium Is the Massage. London: Penguin, 1967. McQuire, Scott. The Media City: Media, Architecture and Urban Space. London: Sage, 2008. Pickles, John. A History of Spaces: Cartographic Reason, Mapping and the Geo-Coded World. London: Routledge, 2004. Pinder, David. “Subverting Cartography: The Situationists and Maps of the City.” Environment and Planning A 28 (1996): 405–27. “Pure Data.” 6 Aug. 2011 ‹http://puredata.info/›. Renzi, Alessandra. “The Space of Tactical Media.” Digital Media and Democracy: Tactics in Hard Times. Ed. Megan Boler. Cambridge: MIT Press, 2008. 71–100. Situationist International. “A New Theatre of Operations for Culture.” 6 Aug. 2011 ‹http://www.blueprintmagazine.co.uk/index.php/urbanism/reading-the-situationist-city/›. Strom, Timothy Erik. “Space, Cyberspace and the Interface: The Trouble with Google Maps.” M/C Journal 4.3 (2011). 6 Aug. 2011 ‹http://journal.media-culture.org.au/index.php/mcjournal/article/viewArticle/370›. Wark, McKenzie. 50 Years of Recuperation of the Situationist International. New York: Princeton Architectural Press, 2008. Williams, Raymond. Politics and Letters: Interviews with New Left Review. London: New Left, 1979.
24

Hill, Benjamin Mako. "Revealing Errors". M/C Journal 10, no. 5 (1 October 2007). http://dx.doi.org/10.5204/mcj.2703.

Full text
Abstract
Introduction In The World Is Not a Desktop, Mark Weiser, the principal scientist and manager of the computer science laboratory at Xerox PARC, stated that “a good tool is an invisible tool.” Weiser cited eyeglasses as an ideal technology because with spectacles, he argued, “you look at the world, not the eyeglasses.” Although Weiser’s work at PARC played an important role in the creation of the field of “ubiquitous computing”, his ideal is widespread in many areas of technology design. Through repetition, and by design, technologies blend into our lives. While technologies, and communications technologies in particular, have a powerful mediating impact, many of the most pervasive effects are taken for granted by most users. When technology works smoothly, its nature and effects are invisible. But technologies do not always work smoothly. A tiny fracture or a smudge on a lens renders glasses quite visible to the wearer. The Microsoft Windows “Blue Screen of Death” on a subway display in Seoul (Photo credit: Wikimedia Commons). Anyone who has seen a famous “Blue Screen of Death”—the iconic signal of a Microsoft Windows crash—on a public screen or terminal knows how errors can thrust the technical details of previously invisible systems into view. Nobody knows that their ATM runs Windows until the system crashes. Of course, the operating system chosen for a sign or bank machine has important implications for its users. Windows, or an alternative operating system, creates affordances and imposes limitations. Faced with a crashed ATM, a consumer might ask herself whether, with its rampant viruses and security holes, she should really trust an ATM running Windows. Technologies make previously impossible actions possible and many actions easier. In the process, they frame and constrain possible actions. They mediate. Communication technologies allow users to communicate in new ways but constrain communication in the process. In a very fundamental way, communication technologies define what their users can say, to whom they say it, and how they can say it—and what, to whom, and how they cannot. Humanities scholars understand the power, importance, and limitations of technology and technological mediation. Weiser hypothesised that “to understand invisibility the humanities and social sciences are especially valuable, because they specialise in exposing the otherwise invisible.” However, technology activists, like those at the Free Software Foundation (FSF) and the Electronic Frontier Foundation (EFF), understand this power of technology as well. Largely constituted by technical members, both organisations, like humanists studying technology, have struggled to communicate their messages to a less-technical public. Before one can argue for the importance of individual control over who owns technology, as both FSF and EFF do, an audience must first appreciate the power and effect that their technology and its designers have. To understand the power that technology has over its users, users must first see the technology in question. Most users do not. Errors are under-appreciated and under-utilised in their ability to reveal technology around us. By painting a picture of how certain technologies facilitate certain mistakes, one can better show how technology mediates. By revealing errors, scholars and activists can reveal previously invisible technologies and their effects more generally.
Errors can reveal technology—and its power and can do so in ways that users of technologies confront daily and understand intimately. The Misprinted Word Catalysed by Elizabeth Eisenstein, the last 35 years of print history scholarship provides both a richly described example of technological change and an analysis of its effects. Unemphasised in discussions of the revolutionary social, economic, and political impact of printing technologies is the fact that, especially in the early days of a major technological change, the artifacts of print are often quite similar to those produced by a new printing technology’s predecessors. From a reader’s purely material perspective, books are books; the press that created the book is invisible or irrelevant. Yet, while the specifics of print technologies are often hidden, they are often exposed by errors. While the shift from a scribal to print culture revolutionised culture, politics, and economics in early modern Europe, it was near-invisible to early readers (Eisenstein). Early printed books were the same books printed in the same way; the early press was conceived as a “mechanical scriptorium.” Shown below, Gutenberg’s black-letter Gothic typeface closely reproduced a scribal hand. Of course, handwriting and type were easily distinguishable; errors and irregularities were inherent in relatively unsteady human hands. Side-by-side comparisons of the hand-copied Malmesbury Bible (left) and the black letter typeface in the Gutenberg Bible (right) (Photo credits Wikimedia Commons & Wikimedia Commons). Printing, of course, introduced its own errors. As pages were produced en masse from a single block of type, so were mistakes. While a scribe would re-read and correct errors as they transcribed a second copy, no printing press would. More revealingly, print opened the door to whole new categories of errors. For example, printers setting type might confuse an inverted n with a u—and many did. Of course, no scribe made this mistake. An inverted u is only confused with an n due to the technological possibility of letter flipping in movable type. As print moved from Monotype and Linotype machines, to computerised typesetting, and eventually to desktop publishing, an accidentally flipped u retreated back into the realm of impossibility (Mergenthaler, Swank). Most readers do not know how their books are printed. The output of letterpresses, Monotypes, and laser printers are carefully designed to produce near-uniform output. To the degree that they succeed, the technologies themselves, and the specific nature of the mediation, becomes invisible to readers. But each technology is revealed in errors like the upside-down u, the output of a mispoured slug of Monotype, or streaks of toner from a laser printer. Changes in printing technologies after the press have also had profound effects. The creation of hot-metal Monotype and Linotype, for example, affected decisions to print and reprint and changed how and when it is done. New mass printing technologies allowed for the printing of works that, for economic reasons, would not have been published before. While personal computers, desktop publishing software, and laser printers make publishing accessible in new ways, it also places real limits on what can be printed. Print runs of a single copy—unheard of before the invention of the type-writer—are commonplace. But computers, like Linotypes, render certain formatting and presentation difficult and impossible. 
Errors provide a space where the particulars of printing make technologies visible in their products. An inverted u exposes a human typesetter, a letterpress, and a hasty error in judgment. Encoding errors and botched smart quotation marks—a ? in place of a “—are only possible with a computer. Streaks of toner are only produced by malfunctioning laser printers. Dust can reveal the photocopied provenance of a document. Few readers reflect on the power or importance of the particulars of the technologies that produced their books. In part, this is because the technologies are so hidden behind their products. Through errors, these technologies and the power they have on the “what” and “how” of printing are exposed. For scholars and activists attempting to expose exactly this, errors are an under-exploited opportunity. Typing Mistyping While errors have a profound effect on media consumption, their effect is equally important, and perhaps more strongly felt, when they occur during media creation. Like all mediating technologies, input technologies make it easier or more difficult to create certain messages. It is, for example, much easier to write a letter with a keyboard than it is to type a picture. It is much more difficult to write in languages with frequent use of accents on an English language keyboard than it is on a European keyboard. But while input systems like keyboards have a powerful effect on the nature of the messages they produce, they are invisible to recipients of messages. Except when the messages contains errors. Typists are much more likely to confuse letters in close proximity on a keyboard than people writing by hand or setting type. As keyboard layouts switch between countries and languages, new errors appear. The following is from a personal email: hez, if there’s not a subversion server handz, can i at least have the root password for one of our machines? I read through the instructions for setting one up and i think i could do it. [emphasis added] The email was quickly typed and, in two places, confuses the character y with z. Separated by five characters on QWERTY keyboards, these two letters are not easily mistaken or mistyped. However, their positions are swapped on German and English keyboards. In fact, the author was an American typing in a Viennese Internet cafe. The source of his repeated error was his false expectations—his familiarity with one keyboard layout in the context of another. The error revealed the context, both keyboard layouts, and his dependence on a particular keyboard. With the error, the keyboard, previously invisible, was exposed as an inter-mediator with its own particularities and effects. This effect does not change in mobile devices where new input methods have introduced powerful new ways of communicating. SMS messages on mobile phones are constrained in length to 160 characters. The result has been new styles of communication using SMS that some have gone so far as to call a new language or dialect called TXTSPK (Thurlow). Yet while they are obvious to social scientists, the profound effects of text message technologies on communication is unfelt by most users who simply see the messages themselves. More visible is the fact that input from a phone keypad has opened the door to errors which reveal input technology and its effects. 
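A toy sketch of the layout swap behind the email example above may make the point concrete: a typist with QWERTY habits working on a German QWERTZ keyboard produces z where y was intended, and vice versa. Only that single swapped pair is modelled here; real layouts differ in many more positions.

```python
# Toy illustration of the keyboard-layout confusion described above.
# A QWERTY-habituated typist pressing the physical key where "y" sits on a
# QWERTY board gets "z" on a German QWERTZ board, and vice versa.
def type_on_qwertz_with_qwerty_habits(intended: str) -> str:
    swap = str.maketrans("yzYZ", "zyZY")  # only the swapped pair is modelled
    return intended.translate(swap)

for phrase in ("hey", "handy", "if there's not a subversion server handy"):
    print(f"{phrase!r} comes out as {type_on_qwertz_with_qwerty_habits(phrase)!r}")
# 'hey' comes out as 'hez' and 'handy' as 'handz' -- the errors in the email.
```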
In the standard method of SMS input, users press or hold buttons to cycle through the letters associated with numbers on a numeric keyboard (e.g., 2 represents A, B, and C; to produce a single C, a user presses 2 three times). This system makes it easy to confuse characters based on a shared association with a single number. Tegic’s popular T9 software allows users to type in words by pressing the number associated with each letter of each word in quick succession. T9 uses a database to pick the most likely word that maps to that sequence of numbers. While the system allows for quick input of words and phrases on a phone keypad, it also allows for the creation of new types of errors. A user trying to type me might accidentally write of because both words are mapped to the combination of 6 and 3 and because of is a more common word in English. T9 might confuse snow and pony while no human, and no other input method, would. Users composing SMS’s are constrained by its technology and its design. The fact that text messages must be short and the difficult nature of phone-based input methods has led to unique and highly constrained forms of communication like TXTSPK (Sutherland). Yet, while the influence of these input technologies is profound, users are rarely aware of it. Errors provide a situation where the particularities of a technology become visible and an opportunity for users to connect with scholars exposing the effect of technology and activists arguing for increased user control. Google News Denuded As technologies become more complex, they often become more mysterious to their users. While not invisible, users know little about the way that complex technologies work both because they become accustomed to them and because the technological specifics are hidden inside companies, behind web interfaces, within compiled software, and in “black boxes” (Latour). Errors can help reveal these technologies and expose their nature and effects. One such system, Google’s News, aggregates news stories and is designed to make it easy to read multiple stories on the same topic. The system works with “topic clusters” that attempt to group articles covering the same news event. The more items in a news cluster (especially from popular sources) and the closer together they appear in time, the higher confidence Google’s algorithms have in the “importance” of a story and the higher the likelihood that the cluster of stories will be listed on the Google News page. While the decision to include or remove individual sources is made by humans, the act of clustering is left to Google’s software. Because computers cannot “understand” the text of the articles being aggregated, clustering happens less intelligently. We know that clustering is primarily based on comparison of shared text and keywords—especially proper nouns. This process is aided by the widespread use of wire services like the Associated Press and Reuters which provide article text used, at least in part, by large numbers of news sources. Google has been reticent to divulge the implementation details of its clustering engine but users have been able to deduce the description above, and much more, by watching how Google News works and, more importantly, how it fails. For example, we know that Google News looks for shared text and keywords because text that deviates heavily from other articles is not “clustered” appropriately—even if it is extremely similar semantically. 
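Google has not published the implementation of this clustering engine, so the following Python sketch is only a toy reconstruction of the behaviour deduced above, not Google's algorithm. The function names, the use of capitalised tokens as a stand-in for proper nouns, and the 0.3 threshold are all assumptions made for illustration.

```python
# Toy illustration of keyword-overlap clustering (not Google News's actual code).
# Articles are grouped when their sets of capitalised tokens, a crude proxy for
# proper nouns, overlap strongly; shared wire-service copy makes such overlap likely.
import re

def keywords(text: str) -> set:
    """Capitalised words in the text, used here as a rough proper-noun proxy."""
    return set(re.findall(r"\b[A-Z][a-z]+\b", text))

def similarity(a: str, b: str) -> float:
    """Jaccard overlap of the two keyword sets (0.0 = nothing shared, 1.0 = identical)."""
    ka, kb = keywords(a), keywords(b)
    return len(ka & kb) / len(ka | kb) if ka | kb else 0.0

def cluster(articles: list, threshold: float = 0.3) -> list:
    """Greedy grouping: each article joins the first existing cluster it resembles."""
    clusters = []
    for article in articles:
        for group in clusters:
            if similarity(article, group[0]) >= threshold:
                group.append(article)
                break
        else:
            clusters.append([article])
    return clusters
```

On such a scheme, two lightly rewritten wire stories share names like Iran or Reuters and fall into one group, while a genuinely original piece that uses different terms for the same event slips below the threshold and is left out, which is precisely the failure mode discussed next.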
In this vein, blogger Philipp Lenssen gives advice to news sites who want to stand out in Google News: Of course, stories don’t have to be exactly the same to be matched—but if they are too different, they’ll also not appear in the same group. If you want to stand out in Google News search results, make your article be original, or else you’ll be collapsed into a cluster where you may or may not appear on the first results page. While a human editor has no trouble understanding that an article using different terms (and different, but equally appropriate, proper nouns) is discussing the same issue, the software behind Google News is more fragile. As a result, Google News fails to connect linked stories that no human editor would miss. A section of a screenshot of Google News clustering aggregation showcasing what appears to be an error. But just as importantly, Google News can connect stories that most human editors will not. Google News’s clustering of two stories by Al Jazeera on how “Iran offers to share nuclear technology,” and by the Guardian on how “Iran threatens to hide nuclear program,” seem at first glance to be a mistake. Hiding and sharing are diametrically opposed and mutually exclusive. But while it is true that most human editors would not cluster these stories, it is less clear that it is, in fact, an error. Investigation shows that the two articles are about the release of a single statement by the government of Iran on the same day. The spin is significant enough, and significantly different, that it could be argued that the aggregation of those stories was incorrect—or not. The error reveals details about the way that Google News works and about its limitations. It reminds readers of Google News of the technological nature of their news’ meditation and gives them a taste of the type of selection—and mis-selection—that goes on out of view. Users of Google News might be prompted to compare the system to other, more human methods. Ultimately it can remind them of the power that Google News (and humans in similar roles) have over our understanding of news and the world around us. These are all familiar arguments to social scientists of technology and echo the arguments of technology activists. By focusing on similar errors, both groups can connect to users less used to thinking in these terms. Conclusion Reflecting on the role of the humanities in a world of increasingly invisible technology for the blog, “Humanities, Arts, Science and Technology Advanced Collaboratory,” Duke English professor Cathy Davidson writes: When technology is accepted, when it becomes invisible, [humanists] really need to be paying attention. This is one reason why the humanities are more important than ever. Analysis—qualitative, deep, interpretive analysis—of social relations, social conditions, in a historical and philosophical perspective is what we do so well. The more technology is part of our lives, the less we think about it, the more we need rigorous humanistic thinking that reminds us that our behaviours are not natural but social, cultural, economic, and with consequences for us all. Davidson concisely points out the strength and importance of the humanities in evaluating technology. She is correct; users of technologies do not frequently analyse the social relations, conditions, and effects of the technology they use. Activists at the EFF and FSF argue that this lack of critical perspective leads to exploitation of users (Stallman). 
But users, and the technology they use, are only susceptible to this type of analysis when they understand the applicability of these analyses to their technologies. Davidson leaves open the more fundamental question: How will humanists first reveal technology so that they can reveal its effects? Scholars and activists must do more than contextualise and describe technology. They must first render invisible technologies visible. As the revealing nature of errors in printing systems, input systems, and “black box” software systems like Google News shows, errors represent a point where invisible technology is already visible to users. As such, these errors, and countless others like them, can be treated as the tip of an iceberg. They represent an important opportunity for humanists and activists to further expose technologies and the beginning of a process that aims to reveal much more. References Davidson, Cathy. “When Technology Is Invisible, Humanists Better Get Busy.” HASTAC (2007). 1 September 2007 <http://www.hastac.org/node/779>. Eisenstein, Elizabeth L. The Printing Press as an Agent of Change: Communications and Cultural Transformations in Early-Modern Europe. Cambridge, UK: Cambridge University Press, 1979. Latour, Bruno. Pandora’s Hope: Essays on the Reality of Science Studies. Harvard UP, 1999. Lenssen, Philipp. “How Google News Indexes.” Google Blogoscoped. 2006. 1 September 2007 <http://blogoscoped.com/archive/2006-07-28-n49.html>. Mergenthaler, Ottmar. The Biography of Ottmar Mergenthaler, Inventor of the Linotype. New ed. New Castle, Delaware: Oak Knoll Books, 1989. Monotype: A Journal of Composing Room Efficiency. Philadelphia: Lanston Monotype Machine Co, 1913. Stallman, Richard M. Free Software, Free Society: Selected Essays of Richard M. Stallman. Boston, Massachusetts: Free Software Foundation, 2002. Sutherland, John. “Cn u txt?” Guardian Unlimited. London, UK. 2002. Swank, Alvin Garfield, and United Typothetae America. Linotype Mechanism. Chicago, Illinois: Dept. of Education, United Typothetae America, 1926. Thurlow, C. “Generation Txt? The Sociolinguistics of Young People’s Text-Messaging.” Discourse Analysis Online 1.1 (2003). Weiser, Mark. “The World Is Not a Desktop.” ACM Interactions 1.1 (1994): 7-8. Citation reference for this article MLA Style: Hill, Benjamin Mako. “Revealing Errors.” M/C Journal 10.5 (2007). <http://journal.media-culture.org.au/0710/01-hill.php>. APA Style: Hill, B. (Oct. 2007). “Revealing Errors.” M/C Journal, 10(5). Retrieved from <http://journal.media-culture.org.au/0710/01-hill.php>.
25

Lyubchenko, Irina. "NFTs and Digital Art". M/C Journal 25, no. 2 (25 April 2022). http://dx.doi.org/10.5204/mcj.2891.

Full text
Abstract
Introduction This article is concerned with the recent rise in popularity of crypto art, the term given to digital artworks whose ownership and provenance are confirmed with a non-fungible token (NFT), making it possible to sell these works within decentralised cryptocurrency art markets. The goal of this analysis is to trace a genealogy of crypto art to Dada, an avant-garde movement that originated in the early twentieth century. My claim is that Dadaism in crypto art appears in its exhausted form that is a result of its revival in the 1950s and 1960s by the Neo Dada that reached the current age through Pop Art. Dada’s anti-art project of rejecting beauty and aesthetics has transformed into commercial success in the Neo Dada Pop Art movement. In turn, Pop Art produced its crypto version that explores not only the question of what art is and is not, but also when art becomes money. In what follows, I will provide a brief overview of NFT art and its three categories that could generally be found within crypto marketplaces: native crypto art, non-digital art, and digital distributed-creativity art. Throughout, I will foreground the presence of Dadaism in these artworks and provide art historical context. NFTs: Brief Overview A major technological component that made NFTs possible was developed in 1991, when cryptographers Stuart Haber and W. Scott Stornetta proposed a method for time-stamping data contained in digital documents shared within a distributed network of users (99). This work laid the foundation for what became known as blockchain and was further implemented in the development of Bitcoin, a digital currency invented by Satoshi Nakamoto in 2008. The original non-fungible tokens, Coloured Coins, were created in 2012. By “colouring” or differentiating bitcoins, Coloured Coins were assigned special properties and had a value independent of the underlying Bitcoin, allowing their use as commodity certificates, alternative currencies, and other financial instruments (Assia et al.). In 2014, fuelled by a motivation to protect digital artists from unsanctioned distribution of their work while also enabling digital art sales, media artist Kevin McCoy and tech entrepreneur Anil Dash saw the potential of blockchain to satisfy their goals and developed what became to be known as NFTs. This overnight invention was a result of McCoy and Dash’s participation in the Seven on Seven annual New York City event, a one-day creative collaboration that challenged seven pairs of artists and engineers to “make something” (Rhizome). McCoy and Dash did not patent their invention, nor were they able to popularise it, mentally archiving it as a “footnote in internet history”. Ironically, just a couple of years later NFTs exploded into a billion-dollar market, living up to an ironic name of “monetized graphics” that the pair gave to their invention. Crypto art became an international sensation in March 2021, when a digital artist Mike Winklemann, known as Beeple, sold his digital collage titled Everydays: The First 5000 Days for US$69.3 million, prompting Noah Davis, a curator who assisted with the sale at the Christie’s auction house, to proclaim: “he showed us this collage, and that was my eureka moment when I knew this was going to be extremely important. It was just so monumental and so indicative of what NFTs can do” (Kastrenakes). As a technology, a non-fungible token can create digital scarcity in an otherwise infinitely replicable digital space. 
Contrary to fungible tokens, which are easily interchangeable due to having an equal value, non-fungible tokens represent unique items for which one cannot find an equivalent. That is why we rely on the fungibility of money to exchange non-fungible unique goods, such as art. Employing non-fungible tokens allows owning and exchanging digital items outside of the context in which they originated. Now, one can prove one’s possession of a digital skin from a videogame, for example, and sell it on digital markets using crypto currency (“Bible”). Behind the technology of NFTs lies the use of a cryptographic hash function, which converts a digital artwork of any file size into a fixed-length hash, called message digest (Dooley 179). It is impossible to revert the process and arrive at the original image, a quality of non-reversibility that makes the hash function a perfect tool for creating a digital representation of an artwork proofed from data tampering. The issued or minted NFT enters a blockchain, a distributed database that too relies on cryptographic properties to guarantee fidelity and security of data stored. Once the NFT becomes a part of the blockchain, its transaction history is permanently recorded and publicly available. Thus, the NFT simultaneously serves as a unique representation of the artwork and a digital proof of ownership. NFTs are traded in digital marketplaces, such as SuperRare, KnownOrigin, OpenSea, and Rarible, which rely on a blockchain to sustain their operations. An analysis of these markets’ inventory can be summarised by the following list of roughly grouped types of artistic works available for purchase: native crypto art, non-digital art, distributed creativity art. Native Crypto Art In this category, I include projects that motivated the creation of NFT protocols. Among these projects are the aforementioned Colored Coins, created in 2012. These were followed by issuing other visual creations native to the crypto-world, such as LarvaLabs’s CryptoPunks, a series of 10,000 algorithmically generated 8-bit-style pixelated digital avatars originally available for free to anyone with an Ethereum blockchain account, gaining a cult status among the collectors when they became rare sought-after items. On 13 February 2022, CryptoPunk #5822 was sold for roughly $24 million in Ethereum, beating the previous record for such an NFT, CryptoPunk #3100, sold for $7.58 million. CryptoPunks laid the foundation for other collectible personal profile projects, such Bored Ape Yacht Club and Cool Cats. One of the ultimate collections of crypto art that demonstrates the exhaustion of original Dada motivations is titled Monas, an NFT project made up of 5,000 programmatically generated versions of a pixelated Mona Lisa by Leonardo da Vinci (c. 1503-1506). Each Monas, according to the creators, is “a mix of Art, history, and references from iconic NFTs” (“Monas”). Monas are a potpourri of meme and pop culture, infused with inside jokes and utmost silliness. Monas invariably bring to mind the historic Dadaist gesture of challenging bourgeois tastes through defacing iconic art historical works, such as Marcel Duchamp’s treatment of Mona Lisa in L.H.O.O.Q. In 1919, Duchamp drew a moustache and a goatee on a reproduction of La Joconde, as the French called the painting, and inscribed “L.H.O.O.Q.” that when pronounced sounds like “Elle a chaud au cul”, a vulgar expression indicating sexual arousal of the subject. 
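Before following the Dada thread further, the hashing step described above can be made concrete with a minimal Python sketch using the standard hashlib library. SHA-256 is chosen only as a typical cryptographic hash, and the function and file names are invented for the example; actual NFT standards and marketplaces differ in what exactly they hash and record on-chain.

```python
# A sketch of the step behind "minting": an artwork file of any size is reduced
# to a fixed-length message digest. The digest identifies the file, changes
# completely if even one byte of the file changes, and cannot be reversed to
# recover the image.
import hashlib

def artwork_digest(path: str) -> str:
    """Return the SHA-256 digest (64 hexadecimal characters) of a file of any size."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical usage: artwork_digest("everydays.jpg") returns a 64-character
# hex string whether the file is a few kilobytes or many gigabytes.
```

However large the source file, what enters the chain is this fixed-length reference, together with the transaction history that the article describes as the public proof of ownership.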
At the time of its creation, this Dada act was met with the utmost public contempt, as Mona Lisa was considered a sacred work of art and a patron of the arts, an almost religious symbol (Elger and Grosenick 82). Needless to say, the effect of Monas on public consciousness is far from causing disgust and, on the contrary, brings childish joy and giggles. As an NFT artist, Mankind, explains in his YouTube video on personal profile projects: “PFPs are built around what people enjoy. People enjoy memes, people enjoy status, people enjoy being a part of something bigger than themselves, the basic primary desire to mix digital with social and belong to a community”. Somehow, “being bigger than themselves” has come to involve collecting defaced images of Mona Lisa. Turning our attention to historical analysis will help trace this transformation of the Dada insult into a collectible NFT object. Dada and Its Legacy in Crypto Art Dada was founded in 1916 in Zurich, by Hugo Ball, Tristan Tzara, Hans Richter, and other artists who fled their homelands during the First World War (Hapgood and Rittner 63). One of Dada’s primary aspirations was to challenge the dominance of reason that brought about the tragedy of the First World War through attacking the postulates of culture this form of reason produced. Already in 1921, such artists as André Breton, Louis Aragon, and Max Ernst were becoming exhausted by Dada’s nihilist tendencies and rejection of all programmes for the arts, except for the one that called for the total freedom of expression. The movement was pronounced dead about May 1921, leaving no sense of regret since, in the words of Breton, “its omnipotence and its tyranny had made it intolerable” (205). An important event associated with Dada’s revival and the birth of the Neo Dada movement was the publication of The Dada Painters and Poets in 1951. This volume, the first collection of Dada writings in English and the most comprehensive anthology in any language, was introduced to the young artists at the New School by John Cage, who revived Tristan Tzara’s concept that “life is far more interesting” than art (Hapgood and Rittner 64). The 1950s were marked by a renewed interest in Dadaism that can also be evidenced in galleries and museums organising numerous exhibitions on the movement, such as Dada 1916 –1923 curated by Marcel Duchamp at the Sidney Janis Gallery in 1953. By the end of the decade, such artists as Jasper Johns and Robert Rauschenberg began exploring materials and techniques that can be attributed to Dadaism, which prompted the title of Neo Dada to describe this thematic return (Hapgood and Rittner 64). Among the artistic approaches that Neo Dada borrowed from Dada are Duchampian readymades that question the status of the art object, Kurt Schwitters’s collage technique of incorporating often banal scraps and pieces of the everyday, and the use of chance operations as a compositional device (Hapgood and Rittner 63–64). These approaches comprise the toolbox of crypto artists as well. Monas, CryptoPunks, and Bored Ape Yacht Club are digital collages made of scraps of pop culture and the everyday Internet life assembled into compositional configurations through chance operation made possible by the application of algorithmic generation of the images in each series. Art historian Helen Molesworth sees the strategies of montage, the readymade, and chance not only as “mechanisms for making art objects” but also as “abdications of traditional forms of artistic labor” (178). 
Molesworth argues that Duchamp’s invention of the readymade “substituted the act of (artistic) production with consumption” and “profoundly questioned the role, stability, nature, and necessity of the artist’s labor” (179). Together with questioning the need for artistic labour, Neo Dadaists inherited what an American art historian Jack D. Flam terms the “anything goes” attitude: Dada’s liberating destruction of rules and derision of art historical canon allowed anything and everything to be considered art (xii). The “anything goes” approach can also be traced to the contemporary crypto artists, such as Beeple, whose Everydays: The First 5000 Days was a result of assembling into a collage the first 5,000 of his daily training sketches created while teaching himself new digital tools (Kastrenakes). When asked whether he genuinely liked any of his images, Beeple explained that most digital art was created by teams of people working over the course of days or even weeks. When he “is pooping something out in 45 minutes”, it “is probably not gonna look that great comparatively” (Cieplak-Mayr von Baldegg). At the core of Dada was a spirit of absurdism that drove an attack on the social, political, artistic, and philosophical norms, constituting a radical movement against the Establishment (Flam xii). In Dada Art and Anti-Art, Hans Richter’s personal historical account of the Dada movement, the artist describes the basic principle of Dada as guided by a motivation “to outrage public opinion” (66). Richter’s writings also point out a desensitisation towards Dada provocations that the public experienced as a result of Dada’s repetitive assaults, demanding an invention of new methods to disgrace the public taste. Richter recounts: our exhibitions were not enough. Not everyone in Zurich came to look at our pictures, attending our meetings, read our poems and manifestos. The devising and raising of public hell was an essential function of any Dada movement, whether its goal was pro-art, non-art or anti-art. And the public (like insects or bacteria) had developed immunity to one of kind poison, we had to think of another. (66) Richter’s account paints a cultural environment in which new artistic provocations mutate into accepted norms in a quick succession, forming a public body that is immune to anti-art “poisons”. In the foreword to Dada Painters and Poets, Flam outlines a trajectory of acceptance and subjugation of the Dadaist spirit by the subsequent revival of the movement’s core values in the Neo Dada of the 1950s and 1960s. When Dadaism was rediscovered by the writers and artists in the 1950s, the Dada spirit characterised by absurdist irony, self-parody, and deadpan realism was becoming a part of everyday life, as if art entered life and transformed it in its own image. The Neo Dada artists, such as Jasper Johns, Robert Rauschenberg, Claes Oldenburg, Roy Lichtenstein, and Andy Warhol, existed in a culturally pluralistic space where the project of a rejection of the Establishment was quickly absorbed into the mainstream, mutating into the high culture it was supposedly criticising and bringing commercial success of which the original Dada artists would have been deeply ashamed (Flam xiii). Raoul Hausmann states: “Dada fell like a raindrop from heaven. The Neo-Dadaists have learnt to imitate the fall, but not the raindrop” (as quoted in Craft 129). 
With a similar sentiment, Richard Huelsenbeck writes: “Neo-Dada has turned the weapons used by Dada, and later by Surrealism, into popular ploughshares with which to till the fertile soil of sensation-hungry galleries eager for business” (as quoted in Craft 130). Marcel Duchamp, the forefather of the avant-garde, comments on the loss of Dada’s original intent: this Neo-Dada, which they call New Realism, Pop Art, Assemblage, etc., is an easy way out, and lives on what Dada did. When I discovered ready-mades I thought to discourage aesthetics. In Neo-Dada they have taken my ready-mades and found aesthetic beauty in them. I threw the bottle-rack and the urinal into their faces as a challenge and now they admire them for their aesthetic beauty. (Flam xiii) In Neo Dada, the original anti-art impulse of Dadaism was converted into its opposite, becoming an artistic stance and a form of aesthetics. Flam notes that these gradual transformations resulted in the shifts in public consciousness, which it was becoming more difficult to insult. Artists, among them Roy Lichtenstein, complained that it was becoming impossible to make anything despicable: even a dirty rug could be admired (Flam xiii). The audience lost their ability to understand when they were being mocked, attacked, or challenged. Writing in 1981, Flam proclaimed that “Dada spirit has become an inescapable condition of modern life” (xiv). I contend that the current crypto art thrives on the Dada spirit of absurdism, irony, and self-parody and continues to question the border between art and non-art, while fully subscribing to the “anything goes” approach. In the current iteration of Dada in the crypto world, the original subversive narrative can be mostly found in the liberating rhetoric promoted by the proponents of the decentralised economic system. While Neo Dada understood the futility of shocking the public and questioning their tastes, crypto art is ignorant of the original Dada as a form of outrage, a revolutionary movement ignited by a social passion. In crypto art, the ambiguous relationship that Pop Art, one of the Neo Dada movements, had with commercial success is transformed into the content of the artworks. As Tristan Tzara laconically explained, the Dada project was to “assassinate beauty” and with it all the infrastructure of the art market (as quoted in Danto 39). Ironically, crypto artists, the descendants of Dada, erected the monument to Value artificially created through scarcity made possible by blockchain technology in place of the denigrated Venus demolished by the Dadaists. After all, it is the astronomical prices for crypto art that are lauded the most. If in the pre-NFT age, artistic works were evaluated based on their creative merit that included considering the prominence of the artist within art historical canon, current crypto art is evaluated based on its rareness, to which the titles of the crypto art markets SuperRare and Rarible unambiguously refer (Finucane 28–29). In crypto art, the anti-art and anti-commercialism of Dada has fully transformed into its opposite. Another evidence for considering crypto art to be a descendant of Dada is the NFT artists’ concern for the question of what art is and is not, brought to the table by the original Dada artists. This concern is expressed in the manifesto-like mission statement of the first Museum of Crypto Art: at its core, the Museum of Crypto Art (M○C△) challenges, creates conflict, provokes. 
M○C△ puts forward a broad representation of perspectives meant to upend our sense of who we are. It poses two questions: “what is art?” and “who decides?” We aim to resolve these questions through a multi-stakeholder decentralized platform of art curation and exhibition. (The Museum of Crypto Art) In the past, the question regarding the definition of art was overtaken by the proponent of the institutional approach to art definition, George Dickie, who besides excluding aesthetics from playing a part in differentiating art from non-art famously pronounced that an artwork created by a monkey is art if it is displayed in an art institution, and non-art if it is displayed elsewhere (Dickie 256). This development might explain why decentralisation of the art market achieved through the use of blockchain technology still relies on the endorsing of the art being sold by the widely acclaimed art auction houses: with their stamp of approval, the work is christened as legitimate art, resulting in astronomical sales. Non-Digital Art It is not surprising that an NFT marketplace is an inviting arena for the investigation of questions of commercialisation tackled in the works of Neo Dada Pop artists, who made their names in the traditional art world. This brings us to a discussion of the second type of artworks found in NFT marketplaces: non-digital art sold as NFT and created by trained visual artists, such as Damien Hirst. In his recent NFT project titled Currency, Hirst explores “the boundaries of art and currency—when art changes and becomes a currency, and when currency becomes art” (“The Currency”). The project consists of 10,000 artworks on A4 paper covered in small, coloured dots, a continuation of the so-called “spot-paintings” series that Hirst and his assistants have been producing since the 1980s. Each artwork is painted on a hand-made paper that bears the watermark of the artist’s bust, adorned with a microdot that serves as a unique identification, and is made to look very similar to the others—visual devices used to highlight the ambiguous state of these artworks that simultaneously function as Hirst-issued currency. For Hirst, this project is an experiment: after the purchase of NFTs, buyers are given an opportunity to exchange the NFT for the original art, safely stored in a UK vault; the unexchanged artworks will be burned. Is art going to fully transform into currency? Will you save it? In Hirst’s project, the transformation of physical art into crypto value becomes the ultimate act of Dada nihilism, except for one big difference: if Dada wanted to destroy art as a way to invent it anew, Hirst destroys art to affirm its death and dissolution in currency. In an ironic gesture, the gif NFT artist Nino Arteiro, as if in agreement with Hirst, attempts to sell his work titled Art Is Not Synonymous of Profit, which contains a crudely written text “ART ≠ PROFIT!” for 0.13 Ether or US$350. Buying this art will negate its own statement and affirm its analogy with money. Distributed-Creativity Art When browsing through crypto art advertised in the crypto markets, one inevitably encounters works that stand out in their emphasis on aesthetic and formal qualities. More often than not, these works are created with the use of Artificial Intelligence (AI). To a viewer bombarded with creations unconcerned with the concept of beauty, these AI works may serve as a sensory aesthetic refuge. 
Among the most prominent artists working in this realm is Refik Anadol, whose Synthetic Dreams series at a first glance may appear as carefully composed works of a landscape painter. However, at a closer look nodal connections between points in rendered space provide a hint at the use of algorithmic processes. These attractive landscapes are quantum AI data paintings created from a data set consisting of 200 million raw images of landscapes from around the world, with each image having been computed with a unique quantum bit string (“Synthetic Dreams”). Upon further contemplation, Anadol’s work begins to remind of the sublime Romantic landscapes, revamped through the application of AI that turned fascination with nature’s unboundedness into awe in the face of the unfathomable amounts of data used in creation of Anadol’s works. These creations can be seen as a reaction against the crypto art I call exhausted Dada, or a marketing approach that targets a different audience. In either case, Anadol revives aesthetic concern and aligns himself with the history of sublimity in art that dates back to the writings of Longinus, becoming of prime importance in the nineteenth-century Romantic painting, and finding new expressions in what is considered the technological sublime, which, according to David E. Nye. concentrates “on the triumph of machines… over space and time” (as quoted in Butler et al. 8). In relation to his Nature Dreams project, Anadol writes: “the exhibition’s eponymous, sublime AI Data Sculpture, Nature Dreams utilizes over 300 million publicly available photographs of nature collected between 2018- 2021 at Refik Anadol Studio” (“Machine Hallucinations Nature Dreams”). From this short description it is evident that Anadol’s primary focus is on the sublimity of large sets of data. There is an issue with that approach: since experiencing the sublime involves loss of rational thinking (Longinus 1.4), these artworks cease the viewer’s ability to interrogate cultural adaptation of AI technology and stay within the realm of decorative ornamentations, demanding an intervention akin to that brought about by the historical avant-garde. Conclusions I hope that this brief analysis demonstrates the mechanisms by which the strains of Dada entered the vocabulary of crypto artists. It is probably also noticeable that I equate the nihilist project of the exhausted Dada found in such works as Hirst’s Cryptocurrency with a dead end similar to so many other dead ends in art history—one only needs to remember that the death of painting was announced a myriad of times, and yet it is still alive. Each announcement of its death was followed by its radiant return. It could be that using art as a visual package for monetary value, a death statement to art’s capacity to affect human lives, will ignite artists to affirm art’s power to challenge, inspire, and enrich. References Assia, Yoni et al. “Colored Coins Whitepaper.” 2012-13. <https://docs.google.com/document/d/1AnkP_cVZTCMLIzw4DvsW6M8Q2JC0lIzrTLuoWu2z1BE/edit>. Breton, André. “Three Dada Manifestoes, before 1924.” The Dada Painters and Poets: An Anthology, Ed. Robert Motherwell, Cambridge, Mass: Belknap Press of Harvard UP, 1989. 197–206. Butler, Rebecca P., and Benjamin J. Butler. “Examples of the American Technological Sublime.” TechTrends 57.1 (2013): 9–10. Craft, Catherine Anne. Constellations of Past and Present: (Neo-) Dada, the Avant- Garde, and the New York Art World, 1951-1965. 1996. PhD dissertation. University of Texas at Austin. 
Cieplak-Mayr von Baldegg, Kasia. “Creativity Is Hustle: Make Something Every Day.” The Atlantic, 7 Oct. 2011. 12 July 2021 <https://www.theatlantic.com/video/archive/2011/10/creativity-is-hustle-make-something-every-day/246377/#slide15>. Danto, Arthur Coleman. The Abuse of Beauty: Aesthetics and the Concept of Art. Chicago, Ill: Open Court, 2006. Dash, Anil. “NFTs Weren’t Supposed to End like This.” The Atlantic, 2 Apr. 2021. 16 Apr. 2022 <https://www.theatlantic.com/ideas/archive/2021/04/nfts-werent-supposed-end-like/618488/>. Dickie, George. “Defining Art.” American Philosophical Quarterly 6.3 (1969): 253–256. Dooley, John F. History of Cryptography and Cryptanalysis: Codes, Ciphers, and Their Algorithms. Cham: Springer, 2018. Elder, R. Bruce. Dada, Surrealism, and the Cinematic Effect. Waterloo: Wilfried Laurier UP, 2015. Elger, Dietmar, and Uta Grosenick. Dadaism. Köln: Taschen, 2004. Flam, Jack. “Foreword”. The Dada Painters and Poets: An Anthology. Ed. Robert Motherwell. Cambridge, Mass: Belknap Press of Harvard UP, 1989. xi–xiv. Finucane, B.P. Creating with Blockchain Technology: The ‘Provably Rare’ Possibilities of Crypto Art. 2018. Master’s thesis. University of British Columbia. Haber, Stuart, and W. Scott Stornetta. “How to Time-Stamp a Digital Document.” Journal of Cryptology 3.2 (1991): 99–111. Hapgood, Susan, and Jennifer Rittner. “Neo-Dada: Redefining Art, 1958-1962.” Performing Arts Journal 17.1 (1995): 63–70. Kastrenakes, Jacob. “Beeple Sold an NFT for $69 million: Through a First-of-Its-Kind Auction at Christie’s.” The Verge, 11 Mar. 2021. 14 July 2021 <https://www.theverge.com/2021/3/11/22325054/beeple-christies-nft-sale-cost-everydays-69-million>. Longinus. On the Sublime. Lewiston/Queenston: Edwin Mellen, 1987. Mankind, “What Are PFP NFTs”. YouTube. 2 Feb. 2022 <https://www.youtube.com/watch?v=Drh_fAV4XNM>. “Machine Hallucinations.” Refik Anadol. 20 Jan. 2022 <https://refikanadol.com/works/machine-hallucination/>. “Machine Hallucinations Nature Dreams.” Refik Anadol. 18 Apr. 2022 <https://refikanadol.com/works/machine-hallucinations-nature-dreams/>. Molesworth, Helen. “From Dada to Neo-Dada and Back Again.” October 105 (2003): 177–181. “Monas”. OpenSea. 17 Feb. 2022 <https://opensea.io/collection/monas>. Museum of Crypto Art. 23 Jan. 2022 <https://museumofcryptoart.com/>. Nakamoto, Satoshi. “Bitcoin: A Peer-to-Peer Electronic Cash System.” 2008. <https://bitcoin.org/bitcoin.pdf>. Richter, Hans. Dada: Art and Anti-Art. London: Thames and Hudson, 2016. Rhizome. “Seven on Seven 2019.” rhizome.org, 26 Mar. 2019. 16 Apr. 2022 <https://rhizome.org/editorial/2019/mar/26/announcing-seven-on-seven-2019-participants-details/>. “Synthetic Dreams.” OpenSea. 23 Jan. 2022 <https://opensea.io/collection/synthetic-dreams>. “The Currency.” OpenSea. 15 Feb. 2022 <https://opensea.io/collection/thecurrency>. “The Non-Fungible Token Bible: Everything You Need to Know about NFTs.” OpenSea Blog, 10 Jan. 2020. 10 June 2021 <https://blog.opensea.io/guides/non-fungible-tokens/>.
26

Hands, Joss. "Device Consciousness and Collective Volition". M/C Journal 16, no. 6 (6 November 2013). http://dx.doi.org/10.5204/mcj.724.

Full text
Abstract
The article will explore the augmentation of cognition with the affordances of mobile micro-blogging apps, specifically the most developed of these: Twitter. It will ask whether this is enabling new kinds of on-the-fly collective cognition, and in particular what will be referred to as ‘collective volition.’ It will approach this with an address to Bernard Stiegler’s concept of grammatisation, which he defines as “the history of the exteriorization of memory in all its forms: nervous and cerebral memory, corporeal and muscular memory, biogenetic memory” (New Critique 33). This will be explored in particular with reference to the human relation with the time of protention, that is an orientation to the future in the lived moment. The argument is that there is a new relation to technology, as a result of the increased velocity, multiplicity and ubiquity of micro-communications. As such this essay will serve as a speculative hypothesis, laying the groundwork for further research. The Context of Social Media The proliferation of social media, and especially its rapid shift onto diverse platforms, in particular to ‘apps’—that is dedicated software platforms available through multiple devices such as tablet computers and smart phones—has meant a pervasive and intensive form of communication has developed. The fact that these media are also generally highly mobile, always connected and operate through very sophisticated interfaces designed for maximum ease of use means that, at least for a significant number of users, social media has become a constant accompaniment to everyday life—a permanently unfolding self-narrative. It is against this background that multiple and often highly contradictory claims are being made about the effect of such media on cognition and group dynamics. We have seen claims for the birth of the smart mob (Rheingold) that opens up the realm of decisive action to multiple individuals and group dynamics, something akin to that which operates during moments of shared attention. For example, in the London riots of 2011 the use of Blackberry Messenger was apportioned a major role in the way mobs moved around the city, where they gathered and who turned up. Likewise in the Arab Spring there was significant speculation about the role of Twitter as a medium for mass organisation and collective action. Why such possibilities are mooted is clear in the basic affordances of the particular social media in question, and the devices through which these software platforms operate. In the case of Twitter it is clear that the simplicity of its interface as well as its brevity and speed are the most important affordances. The interface is easy to use and the action it affords—tweeting or scrolling through a feed—is simple and specific. The limitation of messages at 140 characters ensures that nothing takes more than a small bite of attention and that it is possible, and routine, to process many messages and to communicate with multiple interlocutors, if not simultaneously then in far faster succession than is possible in previous applications or technologies. This produces a form of distributed attention, casting a wide zone of social awareness, in which the brains of Twitter users process, and are able to respond to, the perspectives of others almost instantly. The speed of the feed, of course, means that, beyond a relatively small number of followed accounts, it becomes impossible to see anything but fragments.
This fragmentary character is also intensified by the inevitable limitation of the number of accounts being followed by any one user. In fact we can add a third factor of intensification to this when we consider the migration of social media into mobile smart phone apps using simple icons and even simpler interfaces, configured for ease of use on the move. Such design produces an even greater distribution of attention and temporal fragmentation, interspersed as they are with multiple everyday activities. Mnemotechnology: Spatial and Temporal Flux Attending to a Twitter feed thus places the user into an immediate relationship to the aggregate of the just passed and the passing through, a proximate moment of shared expression, but also one that is placed in a cultural short term memory. As such Twitter is thus a mnemotechnology par-excellence, in that it augments human memory, but in a very particular way. Its short termness distributes memory across and between users as much, if not more, than it does extend memory through time. While most recent media forms also enfold their own recording and temporal extension—print media, archived in libraries; film and television in video archives; sound and music in libraries—tweeting is closer to the form of face to face speech, in that while it is to an extent grammatised into the Twitter feed its temporal extension is far more ambiguous. With Twitter, while there is some cerebral/linguistic memory extension—over say a few minutes in a particular feed, or a number of days if a tweet is given a hash tag—beyond this short-term extension any further access becomes a question of paying for access (after a few days hash tags cease to be searchable, with large archives of tweets being available only at a monetary cost). The luxury of long-term memory is available only to those that can afford it. Grammatisation in Stiegler’s account tends to the solidifying extension of expression into material forms of greater duration, forming what he calls the pharmakon, that is an external object, which is both poison and cure. Stiegler employs Donald Winnicott’s concept of the transitional object as the first of such objects in the path to adulthood, that is the thing—be it blanket, teddy or so forth—that allows the transition from total dependency on a parent to separation and autonomy. In that sense the object is what allows for the transition to adulthood, but within which lies the danger of excessive attachment, dependency and is "destructive of autonomy and trust" (Stiegler, On Pharmacology 3). Writing, or hypomnesis, that is artificial memory, is also such a pharmakon, in as much as it operates as a salve; it allows cultural memory to be extended and shared, but also according to Plato it decays autonomy of thought, but in fact—taking his lead from Derrida—Stiegler tells us that “while Plato opposes autonomy and heteronomy, they in fact constantly compose” (2). The digital pharmakon, according to Stiegler, is the extension of this logic to the entire field of the human body, including in cognitive capitalism wherein "those economic actors who are without knowledge because they are without memory" (35). This is the essence of contemporary proletarianisation, extended into the realm of consumption, in which savour vivre, knowing how to live, is forgotten. In many ways we can see Twitter as a clear example of such a proletarianisation process, as hypomnesis, with its derivation of hypnosis; an empty circulation. 
This echoes Jodi Dean’s description of the flow of communicative capitalism as simply drive (Dean) in which messages circulate without ever getting where they are meant to go. Yet against this perhaps there is a gain, even in Stiegler’s own thought, as to the therapeutic or individuating elements of this process and within the extension of Tweets from an immediately bounded, but extensible and arbitrary distributed network, provides a still novel form of mediation that connects brains together; but going beyond the standard hyper-dyadic spread that is characteristic of viruses or memes. This spread happens in such a way that the expressed thoughts of others can circulate and mutate—loop—around in observable forms, for example in the form of replies, designation of favourite, as RTs (retweets) and in modified forms as MTs (modified tweets), followed by further iterations, and so on. So it is that the Twitter feeds of clusters of individuals inevitably start to show regularity in who tweets, and given the tendency of accounts to focus on certain issues, and for those with an interest in those issues to likewise follow each other, then we have groups of accounts/individuals intersecting with each other, re-tweeting and commenting on each other–forming clusters of shared opinion. The issue at stake here goes beyond the question of the evolution of such clusters at that level of linguistic exchange as, what might be otherwise called movements, or counter-publics, or issue networks—but that speed produces a more elemental effect on coordination. It is the speed of Twitter that creates an imperative to respond quickly and to assimilate vast amounts of information, to sort the agreeable from the disagreeable, divide that which should be ignored from that which should be responded to, and indeed that which calls to be acted upon. Alongside Twitter’s limited memory, its pharmacological ‘beneficial’ element entails the possibility that responses go beyond a purely linguistic or discursive interlocution towards a protection of ‘brain-share’. That is, to put it bluntly, the moment of knowing what others will think before they think it, what they will say before they say it and what they will do before they do it. This opens a capacity for action underpinned by confidence in a solidarity to come. We have seen this in numerous examples, in the actions of UK Uncut and other such groups and movements around the world, most significantly as the multi-media augmented movements that clustered in Tahrir Square, Zuccotti Park and beyond. Protention, Premediation, and Augmented Volition The concept of the somatic marker plays an important role in enabling this speed up. Antonio Damasio argues that somatic markers are emotional memories that are layered into our brains as desires and preferences, in response to external stimuli they become embedded in our unconscious brain and are triggered by particular situations or events. They produce a capacity to make decisions, to act in ways that our deliberate decision making is not aware of; given the pace of response that is needed for many decisions this is a basic necessity. The example of tennis players is often used in this context, wherein the time needed to process and react consciously to a serve is in excess of the processing time the conscious brain requires; that is there is at least a 0.5 second gap between the brain receiving a stimulus and the conscious mind registering and reacting to it. 
What this means is that elements of the brain are acting in advance of conscious volition—we preempt our volitions with the already inscribed emotional, or affective layer, protending beyond the immanent into the virtual. However, protention is still, according to Stiegler, a fundamental element of consciousness—it pushes forward into the brain’s awareness of continuity, contributing to its affective reactions, rooted in projection and risk. This aspect of protention therefore is a contributing element of volition as it rises into consciousness. Volition is the active conscious aspect of willing, and as such requires an act of protention to underpin it. Thus the element of protention, as Stiegler describes it, is inscribed in the flow of the Twitter feed, but also and more importantly, is written into the cognitive process that proceeds and frames it. But beyond this even is the affective and emotional element. This allows us to think then of the Twitter-brain assemblage to be something more than just a mechanism, a tool or simply a medium in the linear sense of the term, but something closer to a device—or a dispositif as defined by Michel Foucault (194) and developed by Giorgio Agamben. A dispositif gathers together, orders and processes, but also augments. Maurizio Lazzarato uses the term, explaining that: The machines for crystallizing or modulating time are dispositifs capable of intervening in the event, in the cooperation between brains, through the modulation of the forces engaged therein, thereby becoming preconditions for every process of constitution of whatever subjectivity. Consequently the process comes to resemble a harmonization of waves, a polyphony. (186) This is an excellent framework to consolidate the place of Twitter as just such a dispositif. In the first instance the place of Twitter in “crystallizing or modulating” time is reflected in its grammatisation of the immediate into a circuit that reframes the present moment in a series of ripples and echoes, and which resonates in the protentions of the followers and followed. This organising of thoughts and affections in a temporal multiplicity crosscuts events, to the extent that the event is conceived as something new that enters the world. So it is that the permanent process of sharing, narrating and modulating, changes the shape of events from pinpointed moments of impact into flat plains, or membranes, that intersect with the mental events. The brain-share, or what can be called a ‘brane’ of brains, unfolds both spatially and temporally, but within the limits already defined. This ‘brane’ of brains can be understood in Lazzarato’s terms precisely as a “harmonization of waves, a polyphony.” The dispositif produces this, in the first instance, modulated consciousness—this is not to say this is an exclusive form of consciousness—part of a distributed condition that provides for a cooperation between brains, the multifarious looping mentioned above, that in its protentions forms a harmony, which is a volition. It is therefore clear that this technological change needs to be understood together with notions such as ‘noopolitics’ and ‘neuropolitics’. Maurizio Lazzarato captures very well the notion of a noopolitics when he tells us that “We could say that noopolitics commands and reorganizes the other power relations because it operates at the most deterritorialized level (the virtuality of the action between brains)” (187). 
However, the danger here is well-defined in the writings of Stiegler, when he explains that: When technologically exteriorized, memory can become the object of sociopolitical and biopolitical controls through the economic investments of social organizations, which thereby rearrange psychic organizations through the intermediary of mnenotechnical organs, among which must be counted machine-tools. (New Critique 33) Here again, we find a proletarianisation, in which gestures, knowledge, how to, become—in the medium and long term—separated from the bodies and brains of workers and turned into mechanisms that make them forget. There is therefore a real possibility that the short term resonance and collective volition becomes a distorted and heightened state, with a rather unpalatable after-effect, in which the memories remain only as commodified digital data. The question is whether Twitter remembers it for us, thinks it for us and as such also, in its dislocations and short termism, obliterates it? A scenario wherein general intellect is reduced to a state of always already forgetting. The proletarian, we read in Gilbert Simondon, is a disindividuated worker, a labourer whose knowledge has passed into the machine in such a way that it is no longer the worker who is individuated through bearing tools and putting them into practice. Rather, the labourer serves the machine-tool, and it is the latter that has become the technical individual. (Stiegler, New Critique 37) Again, this pharmacological character is apparent—Stiegler says ‘the Internet is a pharmakon’ blurring both ‘distributed’ and ‘deep’ attention (Crogan 166). It is a marketing tool par-excellence, and here its capacity to generate protention operates to create not only a collective ‘volition’ but a more coercive collective disposition or tendency, that is the unconscious wiling or affective reflex. This is something more akin to what Richard Grusin refers to as premediation. In premediation the future has already happened, not in the sense that it has already actually happened but such is the preclusion of paths of possibility that cannot be conceived otherwise. Proletarianisation operates in this way through the app, writing in this mode is not as thoughtful exchange between skilled interlocutors, but as habitual respondents to a standard set of pre-digested codes (in the sense of both programming and natural language) ready to hand to be slotted into place. Here the role of the somatic marker is predicated on the layering of ideology, in its full sense, into the brain’s micro-level trained reflexes. In that regard there is a proletarianisation of the prosumer, the idealised figure of the Web 2.0 discourse. However, it needs to be reiterated that this is not the final say on the matter, that where there is volition, and in particular collective volition, there is also the possibility of a reactivated general will: a longer term common consciousness in the sense of class consciousness. Therefore the general claim being made here is that by taking hold of this device consciousness, and transforming it into an active collective volition we stand the best chance of finding “a political will capable of moving away from the economico-political complex of consumption so as to enter into the complex of a new type of investment, or in other words in an investment in common desire” (Stiegler, New Critique 6). 
In its most simplistic form this requires a new political economy of commoning, wherein micro-blogging services contribute to a broader augmented volition that is not captured within communicative capitalism, coded to turn volition into capital, but rather towards a device consciousness as common desire. Needless to say it is only possible here to propose such an aim as a possible path, but one that is surely worthy of further investigation. References Agamben, Giorgio. What Is an Apparatus? Palo Alto: Stanford University Press, 2009. Crogan, Patrick. “Knowledge, Care, and Transindividuation: An Interview with Bernard Stiegler.” Cultural Politics 6.2 (2010): 157-170. Damasio, Antonio. Self Comes to Mind. London: Heinemann, 2010. Dean, Jodi. Blog Theory. Cambridge: Polity Press, 2010. Foucault, Michel. “The Confession of the Flesh.” Power/Knowledge Selected Interviews and Other Writings. Ed. Colin Gordon. New York: Pantheon. 1980. Grusin, Richard. Pre-mediation. Basingstoke: Palgrave, 2011. Lazzarato, Maurizio. “Life and the Living in the Societies of Control.” Deleuze and the Social. Eds. Martin Fuglsang and Meier Sorensen Bent. Edinburgh: Edinburgh University Press, 2006. Rheingold, Howard. Smart Mobs. Cambridge, Mass.: Perseus Books, 2002. Stiegler, Bernard. For a New Critique of Political Economy. Cambridge: Polity Press, 2010. ———. What Makes Life Worth Living: On Pharmacology. Cambridge: Polity Press, 2013.
27

Ellis, Katie M., Mike Kent, and Kathryn Locke. "Indefinitely beyond Our Reach: The Case for Elevating Audio Description to the Importance of Captions on Australian Television". M/C Journal 20, no. 3 (21 June 2017). http://dx.doi.org/10.5204/mcj.1261.

Full text
Abstract
Introduction In a 2013 press release issued by Blind Citizens Australia, the advocacy group announced they were lodging a human rights complaint against the Australian government and the ABC over the lack of audio description available on the public broadcaster. Audio description is a track of narration included between the lines of dialogue which describes important visual elements of a television show, movie or performance. Audio description is broadly recognised as an essential feature to make television accessible to audiences who are blind or vision impaired (Utray et al.). Indeed, Blind Citizens Australia maintained that audio description was as important as captioning on Australian television: people who are blind have waited too long and are frustrated that audio description on television remains indefinitely beyond our reach. Our Deaf or hearing impaired peers have always seen great commitment from the ABC, but we continue to feel like second class citizens. While audio description as a technology was developed in the 1960s—around the same time as captions (Ellis, “Netflix Closed Captions”)—it is not as widely available on television and access is therefore often considered to be out of reach for this group. As a further comparison, in Australia, while the provision of captions was mandated in the Broadcasting Services Act (BSA) 1992 and television sets had clear Australian standards regarding their capability to display captions, there is no legislation for audio description and no consistency regarding the ability of television sets sold in Australia to display them (Ellis, “Television’s Transition”). While, as a technology, audio description is as old as captioning, it is not as widely available on television. This is despite the promise of technological advancements to facilitate its availability. For example, Cronin and King predicted that technological change such as the introduction of stereo sound on television would facilitate a more widespread availability of audio description; however, this has not eventuated. Similarly, in the lead-up to the transition from analogue to digital broadcasting in Australia, government policy documents predicted a more widespread availability of audio description as a result of increased bandwidth available via digital television (Ellis, “Television’s Transition”). While these predictions paved the way for an audio description trial, there has been no amendment to the BSA to mandate its provision. Audio description was experienced on Australian broadcast television in 2012, but only for a 14-week trial on ABC1. The trial report, and feedback from disability groups, identified several technical impediments and limitations which affected the experience of audio described content during this trial, including: the timing of the trial during a period in which the transition from analogue to digital television was still occurring (creating hardware compatibility issues for some consumers); the limitations of the “ad hoc” approach undertaken by the ABC and manual implementation of audio description; and the need for upgraded digital receivers (ABC “Trial of Audio Description”, 2). While advocacy groups acknowledged the technical complexities involved, the expected stakeholder discussions that were due to be held post-trial, in part to attempt to resolve the issues experienced, were never undertaken.
As a result of the lack of subsequent commitments to providing audio description, in 2013 advocacy group Blind Citizens Australia lodged their formal complaints of disability discrimination against the ABC and the Federal Government. Since the 2012 trial on ABC1, the ABC’s catch-up portal iView instigated another audio description trial in 2015. Through the iView trial it was further confirmed that audio description held considerable benefits for people with a vision impairment. They also demonstrated that audio description was technically feasible, with far less ‘technical difficulties’ than the experience of the 2012 broadcast-based trial. Over the 15 month trial on ABC iView 1,305 hours of audio described content was provided and played 158, 277 times across multiple platforms, including iOS, Android, the Freeview app and desktop computers (ABC, “ABC iView Audio Description Trial”).Yet despite repeated audio description trials and the lodgement of discrimination complaints, there remains no audio description on Australian broadcast television. Similarly, whereas 55 per cent of DVDs released in Australia have captions, only 25 per cent include an audio description track (Media Access Australia). At the time of writing, the only audio description available on Australian television is on Netflix Australia, a subscription video on demand provider.This article seeks to highlight the importance of television access for people with disability, with a specific focus on the provision of audio description for people with vision impairments. Research consistently shows that despite being a visual medium, people with vision impairments watch television at least once a day (Cronin and King; Ellis, “Netflix Closed Captions”). However, while television access has been a priority for advocates for people who are Deaf and hard of hearing (Downey), audiences advocating audio description are only recently making gains (Ellis, “Netflix Closed Captions”; Ellis and Kent). These gains are frequently attributed to technological change, particularly the digitisation of television and the introduction of subscription video on demand where users access television content online and are not constrained by broadcast schedules. This transformation of how we access television is also considered in the article, again with a focus on the provision–or lack thereof—of audio description.This article also reports findings of research conducted with Australians with disabilities accessing the emerging video on demand environment in 2016. The survey was run online from January to February 2016. Survey respondents included people with disability, their families, and carers, and were sourced through disability organisations and community groups as well as via disability-focused social media. A total of 145 people completed the survey and 12 people participated in follow-up interviews. Insights were gained into both how people with disability are currently using video on demand and their anticipated usage of services. 
Of note is that most subscription video on demand services (Netflix Australia, Stan, and Presto) had only been introduced in Australia in the year before the survey was carried out, with only Foxtel Play and Quickflix having been in operation for some time prior to that. Finally, the article ends by looking at past and current advocacy in this area, including a discussion on existing—albeit, to date, limited—political will.
Access to Television for People with Disabilities
Television can be disabling in different ways for people with impairments, yet several accessibility features exist to translate information. For example, people who are D/deaf or hard of hearing may require captions, while people with vision impairments prefer to make use of audio description (Alper et al.). Similarly, people with mobility and dexterity impairments found the transition to digital broadcasting difficult, particularly in relation to set top box set up (Carmichael et al.). As Joshua Robare has highlighted, even legislation has generally favoured the inclusion of audiences with hearing impairments, while disregarding those with vision impairments. Similarly, much of the literature in this area focuses on the provision of captions—a vital accessibility feature for people who are D/deaf or hard of hearing. Consequently, research into accessibility to television for a diversity of impairments, going beyond hearing impairments, remains deficient. In a study of Australian audiences with disability conducted between September and November 2013—during the final months of the analogue to digital simulcast period of Australian broadcast television—closed captions, clean audio, and large/colour-coded remote control keys emerged as the most desired access features (see Ellis, “Digital Television Flexibility”). Audio description barely registered in the top five. In a subsequent study, when disabled Australian audiences of video on demand were asked the same question, captions continued to dominate at 63.4 per cent; however, audio description was also seen to be a necessary feature for almost one third of respondents (see Ellis et al., Accessing Subscription Video). Robert Kingett, founder of the Accessible Netflix Project, participated in our research and told us in an interview that video on demand providers treat accessibility as an “afterthought”, particularly for blind people, whom most don’t think of as watching television. Yet research dating back to the 1990s shows almost 100 per cent of people with vision impairments watch television at least once a day (Cronin and King). Statistically, the number of Australians who identify as blind or vision impaired is not insignificant. Vision Australia estimates that over 357,000 Australians have a vision impairment, while one in five Australians have a disability of some form. With an ageing population, this number is expected to grow exponentially in the next ten years (Australian Network on Disability). Kingett therefore describes this lack of accessibility as evidence that video on demand is “stuck in the dark ages”, and advocates that people with vision impairments do use video on demand and therefore continue to have unmet access needs.
Video on Demand—Transforming Television
Subscription video on demand services have caused a major shift in the way television is used and consumed in Australia. Prior to 2015, there was a small subscription video on demand industry in this country. 
However, in 2015, following the launch of Netflix Australia, Stan, and Presto, Australia was described as having entered the “streaming wars” (Tucker) where consumers would benefit from the increased competition. As Netflix gained dominance in the video on demand market internationally, people with disability began to recognise the potential this service could have in transforming their access to television.For example, the growing availability of video on demand services continues to provide disruptive change to the way in which consumers enjoy information and entertainment. While traditional broadcast television has provided great opportunities for participation in news, events, and popular culture, both socially and in the workplace, the move towards video on demand services has seen a notable decline in traditional television viewing habits, with online continuing to increase at the expense of Australian free-to-air programming (C-Scott).For the general population, this always-on, always-available, and always-shareable nature of video on demand means that the experience is both convenient and instant. If a television show is of interest to friends and family, it can be quickly shared through popular social media with others, allowing everyone to join in the experience. For people with disability, the ability to both share and personalise the experience of television is critical to the popularity of video on demand services for this group. This gives them not only the same benefits as others but also ensures that people with disability are not unintentionally excluded from participation—it allows people with disability the choice as to whether or not to join in. However, exclusion from video on demand is a significant concern for people with disability due to the lack of accessibility features in popular subscription services. The lack of captions, audio description, and interfaces that do not comply with international Web accessibility standards are resulting in many people with disability being unable to fully participate in the preferred viewing platforms of family and friends.The impact of this expands beyond the consumption patterns of audiences, shifting the way the audience is defined and conceptualised. With an increasing distribution of audience attention to multiple channels, products, and services, the ability to, and strategies for, acquiring a large audience has changed (Napoli). As audience attention is distributed, it is broken up, into smaller, fragmented groups. The success, therefore, of a new provider may be to amass a large audience through the aggregation of smaller, niche audiences. This theory has significance for consumers who require audio description because they represent a viable target group. In this context, accessibility is reframed as a commercial opportunity rather than a cost (Ellis, “Netflix Closed Captions”).However, what this means for future provision of audio description in Australia is still unclear. Chris Mikul from Media Access Australia, author of Access on Demand, was interviewed as part of this research. He told us that the complete lack of audio description on local video on demand services can be attributed to the lack of Australian legislation requiring it. In an interview as part of this research he explained the central issue with audio description in this country as “the lack of audio description on broadcast TV, which is shocking in a world context”.International providers fare only slightly better. 
Robert Kingett established the Accessible Netflix Project in 2013 with the stated aim of advocating for the provision of audio description on Netflix. Netflix, despite a lack of a clear accessibility policy, are seen as being ahead in terms of overall accessibility—captions are available for most content. However, the provision of audio description was initially not considered to be of such importance, and Netflix were against the idea, citing technical difficulties. Nevertheless, in 2015—shortly after their Australian launch—they did eventually introduce audio description on original programming, describing the access feature as an option customers could choose, “just like choosing the soundtrack in a different language” (Wright). However, despite such successful trials, the issue in the Australian market remains the absence of legislation mandating the provision of audio description in Australia, and the other video on demand providers have not introduced audio description to compete with Netflix. As the Netflix example illustrates, both legislation and recognition of people with disability as a key audience demographic will result in a more accessible television environment for this group. Currently, it is debatable whether this increasingly competitive market, the shifting perception of audience attraction and retention, and the entry of multiple international video on demand providers have influenced how accessibility is viewed, both for broadcast television and video on demand. Although there is some evidence for an increasing consideration of people with disability as “valid” consumers—take, for example, the iView audio description trial, or the inclusion of audio description by Netflix—our research indicates accessibility is still inconsistently considered, designed for, and applied by current providers.
Survey Response: Key Issues Regarding Accessibility
Respondents were asked to provide an overall impression of video on demand services, and to tell us about their positive and negative experiences. Analysis of 68 extended responses, and the responses provided by the interview participants, identified a lack of availability of accessibility features such as audio description as a key problem. What our results indicate is that while customers with a disability are largely accommodating of the inaccessibility of providers—they use their own assistive technology to access content—they are keenly aware of the provisions that could be made. As one respondent put it: “they could do a lot better: talking menus, spoken subtitles, and also spoken messages on screen.” However, many expressed low expectations due to the continued absence of audio description on broadcast television: “so, the other thing is, my expectations are quite low because of years of not having audio descriptions. I have slightly different expectations to other people.” This reflection is important in considering both the shifting expectations regarding video on demand providers and the need for a clear communication of what features are available, so that providers can cater to—and therefore capture—niche markets. The survey identified captioning as the main accessibility problem of video on demand services. However, this may not accurately reflect the need for other accessibility features such as audio description. Rather, it may be indicative that this feature is often the only choice given to consumers. 
As Chris Mikul identified, “the only disability being catered for to any great extent is deafness/hearing impairment”. Kingett agreed, noting: “people who are deaf and hard of hearing are placed way before the rest because captions are beyond easy and cheap to create now. Please, there’s even companies that people use to crowd source captions so companies don’t have to do it anymore. This all came about because the deaf community has [banded] together … to achieve a cause. I know audio description isn’t as cheap to make as captions but, by these companies’ budgets, that’s like dropping a penny.”
Advocacy and Political Will
As noted above, it has been argued by some that accessibility features that address vision impairments have been neglected. The reason behind this is twofold—the perception that this disability is experienced by a minority of the population and that, because blind people “don’t watch television”, it is not an important accessibility feature. This points towards a need for both disability advocacy and political will by politicians to introduce legislation. As one survey respondent identified, the reality is that, in Australia, neither politicians nor people with vision impairments have yet addressed the issue of audio description in an organised or sustained way: “we have very little audio described content available in Australia. We don’t have the population of blind people nor the political will by politicians to force providers to provide for us.” However, Blind Citizens Australia—the coalition of television audiences with vision impairments who lodged the human rights complaint against the government and the ABC—suggest the tide is turning. Whereas advocates for people with vision impairments have traditionally focused on access to the workforce, the issue of television accessibility is increasingly gaining attention, particularly as a result of international activist efforts and the move towards video on demand (see Ellis and Kent). For example, Kingett’s Accessible Netflix Project in the US is considered one of the most successful accessibility movements towards the introduction of audio description. While its members are predominantly US-based, it does include several Australian members and continues to cover Netflix Australia’s stance on audio description, and to be covered by Australian media and organisations (including Media Access Australia and Life Hacker). When Netflix launched in Australia, Kingett encouraged Australians to become more involved in the project (Ellis and Kent). However, despite progress towards the mandating of audio description in parliament and the efforts made by advocacy groups (including Vision Australia and Blind Citizens Australia), the status of audio description remains uncertain. Whilst some support has been gained—specifically through motions made by Senator Siewert and the ABC iView audio description trials—significant change has been slow. For example, conciliation discussions are still ongoing regarding the now four-year-old complaint brought against the ABC and the Federal Government by Blind Citizens Australia. Meanwhile, although the Senate supported Senator Siewert’s motion to change the Broadcasting Services Act to include audio description, the Act has yet to be amended. The results of multiple ABC trials of audio description remain in discussion. 
Whilst the recently released report on the findings of the April 2015 to July 2016 iView trial states that the “trial has identified that those who utilised the audio description service found it a valuable enhancement to their media engagement and their social interactions” (ABC, “ABC iView Audio Description Trial” 18), it also cautioned that “any move to introduce AD services in Australia would have budgetary implications for the broadcasters in a constrained financial environment” and “broader legislative implications” (ABC, “ABC iView Audio Description Trial” 18). Indeed, although the trial was considered “successful”—in that experiences by users were generally positive and the benefits considerable (Media Access Australia, “New Report”)—the continuation of audio description on iView alone was characterised as representing “a systemic failure to provide people who are blind or have low vision with basic access to television now, given that iView is out of reach for many people in the blindness and low vision community” (Media Access Australia, “New Report”). Indeed, the relatively low numbers of plays of audio described content during the trial (158,277 plays, representing 0.58% of total program plays on iView) were likely a result of a lack of access to smartphones or Internet technology, prohibitive data speeds and/or general Internet costs, all factors which affect the accessibility of video on demand significantly more for people with disability (Ellis et al., “Access for Everyone?”). On a more positive note, the culmination of advocacy pressure, the ABC iView trial, political attention, and increasing academic literature on the accessibility of Australian media has resulted in the establishment of an Audio Description Working Group by the government. This group consists of industry representatives, advocacy group representatives, academics, and “consumer representatives”. The aims of the group are to: identify options to sustainably increase access to audio description services; identify any impediments to the implementation of audio description; provide expert advice on audio description implementation options; and develop a report on the findings, due at the end of 2017.
Conclusion
In the absence of audio description, people who are blind or vision impaired report a less satisfying television experience (Cronin and King; Kingett). However, with each technological advancement in the delivery of television, from stereo sound to digital television, this group has held hopes for a more accessible experience. The reality, however, has been a continued lack of audio description, particularly in broadcast television. Several commentators have compared the provision of audio description with closed captioning. They find that audio description is not as widely available, and reflect that this is likely a result of a lack of legislation (Robare; Ellis, “Digital Television Flexibility”)—for example, in the Australian context, whereas the provision of captions is mandated in the Broadcasting Services Act 1992, audio description is not. As a result, there have been limited trials of audio description in this country and inconsistent standards in how to display it. 
As discussed throughout this paper, people with vision impairments and their allies therefore often draw on the example of the widespread “acceptance” of captions to make the case that audio description should also be more widely available. However, following the introduction of subscription video on demand in Australia, and particularly Netflix, the issue of audio description is receiving greater attention. It has been argued that video on demand has transformed television, particularly the ways in which television is accessed. Video on demand could also potentially transform the way we think about accessibility for audiences with disability. While captions are a well-established accessibility feature facilitating television access for people with a range of disabilities, video on demand is raising the profile of the importance of audio description for audiences with vision impairments.
References
ABC. “Audio Description Trial on ABC Television: Report to the Minister for Broadband, Communications and the Digital Economy”. Dec. 2012. 8 Apr. 2017 <https://www.communications.gov.au/sites/g/files/net301/f/ABC-Audio-Description-Trial-Report2.pdf>.ABC. “ABC iView Audio Description Trial: Final Report to The Department of Communications and the Arts.” Oct. 2016. 6 Apr. 2017 <https://www.communications.gov.au/documents/final-report-trial-audio-description-abc-iview>.Alper, Meryl, et al. “Reimagining the Good Life with Disability: Communication, New Technology, and Humane Connections.” Communication and the Good Life. Ed. H. Wang. New York: Peter Lang, 2015.Australian Network on Disability. “Disability Statistics.” Mar. 2017. 30 Apr. 2017 <https://www.and.org.au/pages/disability-statistics.html>.Blind Citizens Australia. Government and ABC Fail to Deliver on Accessible TV for Australia’s Blind. Submission. 10 July 2013. 1 May 2017 <http://bca.org.au/submissions/>.C-Scott, Marc. “The Battle for Audiences as Free-TV Viewing Continues Its Decline.” Mumbrella 22 Apr. 2016. 24 May 2016 <https://mumbrella.com.au/the-battle-for-audiences-as-free-tv-viewing-continues-its-decline-362010>.Carmichael, Alex, et al. “Digital Switchover or Digital Divide: A Prognosis for Useable and Accessible Interactive Digital Television in the UK.” Universal Access in the Information Society 4 (2006): 400–16.Cronin, Barry J., and Sharon Robertson King. “The Development of the Descriptive Video Services.” National Center to Improve Practice in Special Education through Technology, Media and Materials. Sep. 1998. 8 May 2014 <https://www2.edc.org/NCIP/library/v&c/Cronin.htm>.Downey, G. “Constructing Closed-Captioning in the Public Interest: From Minority Media Accessibility to Mainstream Educational Technology.” Info 9.2–3 (2007): 69–82.Ellis, Katie. “Digital Television Flexibility: A Survey of Australians with Disability.” Media International Australia 150 (2014): 96.———. “Netflix Closed Captions Offer an Accessible Model for the Streaming Video Industry, But What about Audio Description?” Communication, Politics & Culture 47.3 (2015).———. “Television’s Transition to the Internet: Disability Accessibility and Broadband-Based TV in Australia.” Media International Australia 153 (2014): 53–63.Ellis, Katie, and Mike Kent. “Accessible Television: The New Frontier in Disability Media Studies Brings Together Industry Innovation, Government Legislation and Online Activism.” First Monday 20 (2015). <http://firstmonday.org/ojs/index.php/fm/article/view/6170>.Ellis, Katie, et al. 
Accessing Subscription Video on Demand: A Study of Disability and Streaming Television in Australia. Australian Communications Consumer Action Network. Aug. 2016. <https://accan.org.au/grants/current-grants/1066-accessing-video-on-demand-a-study-of-disability-and-streaming-television>.Ellis, Katie, et al. “Access for Everyone? Australia’s ‘Streaming Wars’ and Consumers with Disabilities.” Continuum (2017, publication pending).Kingett, Robert. “The Accessible Netflix Project Advocates Taking Steps to Ensure Netflix Accessibility for Everyone.” 2014. 30 Jan. 2014 <https://netflixproject.wordpress.com>.Media Access Australia. “Statistics on DVD Accessibility in Australia.” 2012. 21 Nov. 2014 <https://mediaaccess.org.au/dvds/Statistics%20on%20DVD%20accessibility%20in%20Australia>.———. “New Report on the Trial of A.D. on ABC iView.” 7 Mar. 2017. 30 Apr. 2017 <https://mediaaccess.org.au/latest_news/television/new-report-on-the-trial-of-ad-on-abc-iview>.Napoli, Philip M., ed. Audience Evolution: New Technologies and the Transformation of Media Audiences. New York: Columbia UP, 2011.Robare, Joshua S. “Television for All: Increasing Television Accessibility for the Visually Impaired through the FCC’s Ability to Regulate Video Description Technology.” Federal Communications Law Journal 63.2 (2011): 553–78.Tucker, Harry. “Netflix Leads the Streaming Wars, Followed by Foxtel’s Presto.” News.com.au 24 June 2016. 18 May 2016 <http://www.news.com.au/technology/home-entertainment/tv/netflix-leads-the-streaming-wars-followed-by-foxtels-presto/news-story/7adf45dcd7d9486ff47ec5ea5951287f>.Utray, Francisco, et al. “Monitoring Accessibility Services in Digital Television.” International Journal of Digital Multimedia Broadcasting (2012): 9.
APA, Harvard, Vancouver, ISO, and other citation styles
28

Mallan, Kerry Margaret y Annette Patterson. "Present and Active: Digital Publishing in a Post-print Age". M/C Journal 11, n.º 4 (24 de junio de 2008). http://dx.doi.org/10.5204/mcj.40.

Full text
Abstract
At one point in Victor Hugo’s novel, The Hunchback of Notre Dame, the archdeacon, Claude Frollo, looked up from a book on his table to the edifice of the gothic cathedral, visible from his canon’s cell in the cloister of Notre Dame: “Alas!” he said, “this will kill that” (146). Frollo’s lament, that the book would destroy the edifice, captures the medieval cleric’s anxiety about the way in which Gutenberg’s print technology would become the new universal means for recording and communicating humanity’s ideas and artistic expression, replacing the grand monuments of architecture, human engineering, and craftsmanship. For Hugo, architecture was “the great handwriting of humankind” (149). The cathedral as the material outcome of human technology was being replaced by the first great machine—the printing press. At this point in the third millennium, some people undoubtedly have similar anxieties to Frollo: is it now the book’s turn to be destroyed by yet another great machine? The inclusion of “post print” in our title is not intended to sound the death knell of the book. Rather, we contend that despite the enduring value of print, digital publishing is “present and active” and is changing the way in which research, particularly in the humanities, is being undertaken. Our approach has three related parts. First, we consider how digital technologies are changing the way in which content is constructed, customised, modified, disseminated, and accessed within a global, distributed network. This section argues that the transition from print to electronic or digital publishing means both losses and gains, particularly with respect to shifts in our approaches to textuality, information, and innovative publishing. Second, we discuss the Children’s Literature Digital Resources (CLDR) project, with which we are involved. This case study of a digitising initiative opens out the transformative possibilities and challenges of digital publishing and e-scholarship for research communities. Third, we reflect on technology’s capacity to bring about major changes in the light of the theoretical and practical issues that have arisen from our discussion. I. Digitising in a “post-print age” We are living in an era that is commonly referred to as “the late age of print” (see Kho) or the “post-print age” (see Gunkel). According to Aarseth, we have reached a point whereby nearly all of our public and personal media have become more or less digital (37). As Kho notes, web newspapers are not only becoming increasingly more popular, but they are also making rather than losing money, and paper-based newspapers are finding it difficult to recruit new readers from the younger generations (37). Not only can such online-only publications update format, content, and structure more economically than print-based publications, but their wide distribution network, speed, and flexibility attract advertising revenue. Hype and hyperbole aside, publishers are not so much discarding their legacy of print, but recognising the folly of not embracing innovative technologies that can add value by presenting information in ways that satisfy users’ needs for content to-go or for edutainment. As Kho notes: “no longer able to satisfy customer demand by producing print-only products, or even by enabling online access to semi-static content, established publishers are embracing new models for publishing, web-style” (42). 
Advocates of online publishing contend that the major benefits of online publishing over print technology are that it is faster, more economical, and more interactive. However, as Hovav and Gray caution, “e-publishing also involves risks, hidden costs, and trade-offs” (79). The specific focus for these authors is e-journal publishing and they contend that while cost reduction is in editing, production and distribution, if the journal is not open access, then costs relating to storage and bandwith will be transferred to the user. If we put economics aside for the moment, the transition from print to electronic text (e-text), especially with electronic literary works, brings additional considerations, particularly in their ability to make available different reading strategies to print, such as “animation, rollovers, screen design, navigation strategies, and so on” (Hayles 38). Transition from print to e-text In his book, Writing Space, David Bolter follows Victor Hugo’s lead, but does not ask if print technology will be destroyed. Rather, he argues that “the idea and ideal of the book will change: print will no longer define the organization and presentation of knowledge, as it has for the past five centuries” (2). As Hayles noted above, one significant indicator of this change, which is a consequence of the shift from analogue to digital, is the addition of graphical, audio, visual, sonic, and kinetic elements to the written word. A significant consequence of this transition is the reinvention of the book in a networked environment. Unlike the printed book, the networked book is not bound by space and time. Rather, it is an evolving entity within an ecology of readers, authors, and texts. The Web 2.0 platform has enabled more experimentation with blending of digital technology and traditional writing, particularly in the use of blogs, which have spawned blogwriting and the wikinovel. Siva Vaidhyanathan’s The Googlization of Everything: How One Company is Disrupting Culture, Commerce and Community … and Why We Should Worry is a wikinovel or blog book that was produced over a series of weeks with contributions from other bloggers (see: http://www.sivacracy.net/). Penguin Books, in collaboration with a media company, “Six Stories to Start,” have developed six stories—“We Tell Stories,” which involve different forms of interactivity from users through blog entries, Twitter text messages, an interactive google map, and other features. For example, the story titled “Fairy Tales” allows users to customise the story using their own choice of names for characters and descriptions of character traits. Each story is loosely based on a classic story and links take users to synopses of these original stories and their authors and to online purchase of the texts through the Penguin Books sales website. These examples of digital stories are a small part of the digital environment, which exploits computer and online technologies’ capacity to be interactive and immersive. As Janet Murray notes, the interactive qualities of digital environments are characterised by their procedural and participatory abilities, while their immersive qualities are characterised by their spatial and encyclopedic dimensions (71–89). These immersive and interactive qualities highlight different ways of reading texts, which entail different embodied and cognitive functions from those that reading print texts requires. 
As Hayles argues: the advent of electronic textuality presents us with an unparalleled opportunity to reformulate fundamental ideas about texts and, in the process, to see print as well as electronic texts with fresh eyes (89–90). The transition to e-text also highlights how digitality is changing all aspects of everyday life both inside and outside the academy. Online teaching and e-research Another aspect of the commercial arm of publishing that is impacting on academe and other organisations is the digitising and indexing of print content for niche distribution. Kho offers the example of the Mark Logic Corporation, which uses its XML content platform to repurpose content, create new content, and distribute this content through multiple portals. As the promotional website video for Mark Logic explains, academics can use this service to customise their own textbooks for students by including only articles and book chapters that are relevant to their subject. These are then organised, bound, and distributed by Mark Logic for sale to students at a cost that is generally cheaper than most textbooks. A further example of how print and digital materials can form an integrated, customised source for teachers and students is eFictions (Trimmer, Jennings, & Patterson). eFictions was one of the first print and online short story anthologies that teachers of literature could customise to their own needs. Produced as both a print text collection and a website, eFictions offers popular short stories in English by well-known traditional and contemporary writers from the US, Australia, New Zealand, UK, and Europe, with summaries, notes on literary features, author biographies, and, in one instance, a YouTube movie of the story. In using the eFictions website, teachers can build a customised anthology of traditional and innovative stories to suit their teaching preferences. These examples provide useful indicators of how content is constructed, customised, modified, disseminated, and accessed within a distributed network. However, the question remains as to how to measure their impact and outcomes within teaching and learning communities. As Harley suggests in her study on the use and users of digital resources in the humanities and social sciences, several factors warrant attention, such as personal teaching style, philosophy, and specific disciplinary requirements. However, in terms of understanding the benefits of digital resources for teaching and learning, Harley notes that few providers in her sample had developed any plans to evaluate use and users in a systematic way. In addition to the problems raised in Harley’s study, another relates to how researchers can be supported to take full advantage of digital technologies for e-research. The transformation brought about by information and communication technologies extends and broadens the impact of research, by making its outputs more discoverable and usable by other researchers, and its benefits more available to industry, governments, and the wider community. Traditional repositories of knowledge and information, such as libraries, are juggling the space demands of books and computer hardware alongside increasing reader demand for anywhere, anytime, anyplace access to information. 
Researchers’ expectations about online access to journals, eprints, bibliographic data, and the views of others through wikis, blogs, and associated social and information networking sites such as YouTube compete with the traditional expectations of the institutions that fund libraries for paper-based archives and book repositories. While university libraries are finding it increasingly difficult to purchase all hardcover books relevant to numerous and varied disciplines, a significant proportion of their budgets goes towards digital repositories (e.g., STORS), indexes, and other resources, such as full-text electronic specialised and multidisciplinary journal databases (e.g., Project Muse and Proquest); electronic serials; e-books; and specialised information sources through fast (online) document delivery services. An area that is becoming increasingly significant for those working in the humanities is the digitising of historical and cultural texts. II. Bringing back the dead: The CLDR project The CLDR project is led by researchers and librarians at the Queensland University of Technology, in collaboration with Deakin University, University of Sydney, and members of the AustLit team at The University of Queensland. The CLDR project is a “Research Community” of the electronic bibliographic database AustLit: The Australian Literature Resource, which is working towards the goal of providing a complete bibliographic record of the nation’s literature. AustLit offers users with a single entry point to enhanced scholarly resources on Australian writers, their works, and other aspects of Australian literary culture and activities. AustLit and its Research Communities are supported by grants from the Australian Research Council and financial and in-kind contributions from a consortium of Australian universities, and by other external funding sources such as the National Collaborative Research Infrastructure Strategy. Like other more extensive digitisation projects, such as Project Gutenberg and the Rosetta Project, the CLDR project aims to provide a centralised access point for digital surrogates of early published works of Australian children’s literature, with access pathways to existing resources. The first stage of the CLDR project is to provide access to digitised, full-text, out-of-copyright Australian children’s literature from European settlement to 1945, with selected digitised critical works relevant to the field. Texts comprise a range of genres, including poetry, drama, and narrative for young readers and picture books, songs, and rhymes for infants. Currently, a selection of 75 e-texts and digital scans of original texts from Project Gutenberg and Internet Archive have been linked to the Children’s Literature Research Community. By the end of 2009, the CLDR will have digitised approximately 1000 literary texts and a significant number of critical works. Stage II and subsequent development will involve digitisation of selected texts from 1945 onwards. A precursor to the CLDR project has been undertaken by Deakin University in collaboration with the State Library of Victoria, whereby a digital bibliographic index comprising Victorian School Readers has been completed with plans for full-text digital surrogates of a selection of these texts. These texts provide valuable insights into citizenship, identity, and values formation from the 1930s onwards. At the time of writing, the CLDR is at an early stage of development. 
An extensive survey of out-of-copyright texts has been completed and the digitisation of these resources is about to commence. The project plans to make rich content searchable, allowing scholars from children’s literature studies and education to benefit from the many advantages of online scholarship. What digital publishing and associated digital archives, electronic texts, hypermedia, and so forth foreground is the fact that writers, readers, publishers, programmers, designers, critics, booksellers, teachers, and copyright laws operate within a context that is highly mediated by technology. In his article on large-scale digitisation projects carried out by Cornell and University of Michigan with the Making of America collection of 19th-century American serials and monographs, Hirtle notes that when special collections’ materials are available via the Web, with appropriate metadata and software, then they can “increase use of the material, contribute to new forms of research, and attract new users to the material” (44). Furthermore, Hirtle contends that despite the poor ergonomics associated with most electronic displays and e-book readers, “people will, when given the opportunity, consult an electronic text over the print original” (46). If this preference is universally accurate, especially for researchers and students, then it follows that not only will the preference for electronic surrogates of original material increase, but preference for other kinds of electronic texts will also increase. It is with this preference for electronic resources in mind that we approached the field of children’s literature in Australia and asked questions about how future generations of researchers would prefer to work. If electronic texts become the reference of choice for primary as well as secondary sources, then it seems sensible to assume that researchers would prefer to sit at the end of the keyboard than to travel considerable distances at considerable cost to access paper-based print texts in distant libraries and archives. We considered the best means for providing access to digitised primary and secondary, full text material, and digital pathways to existing online resources, particularly an extensive indexing and bibliographic database. Prior to the commencement of the CLDR project, AustLit had already indexed an extensive number of children’s literature. Challenges and dilemmas The CLDR project, even in its early stages of development, has encountered a number of challenges and dilemmas that centre on access, copyright, economic capital, and practical aspects of digitisation, and sustainability. These issues have relevance for digital publishing and e-research. A decision is yet to be made as to whether the digital texts in CLDR will be available on open or closed/tolled access. The preference is for open access. As Hayles argues, copyright is more than a legal basis for intellectual property, as it also entails ideas about authorship, creativity, and the work as an “immaterial mental construct” that goes “beyond the paper, binding, or ink” (144). Seeking copyright permission is therefore only part of the issue. Determining how the item will be accessed is a further matter, particularly as future technologies may impact upon how a digital item is used. In the case of e-journals, the issue of copyright payment structures are evolving towards a collective licensing system, pay-per-view, and other combinations of print and electronic subscription (see Hovav and Gray). 
For research purposes, digitisation of items for CLDR is not simply a scan and deliver process. Rather it is one that needs to ensure that the best quality is provided and that the item is both accessible and usable by researchers, and sustainable for future researchers. Sustainability is an important consideration and provides a challenge for institutions that host projects such as CLDR. Therefore, items need to be scanned to a high quality and this requires an expensive scanner and personnel costs. Files need to be in a variety of formats for preservation purposes and so that they may be manipulated to be useable in different technologies (for example, Archival Tiff, Tiff, Jpeg, PDF, HTML). Hovav and Gray warn that when technology becomes obsolete, then content becomes unreadable unless backward integration is maintained. The CLDR items will be annotatable given AustLit’s NeAt funded project: Aus-e-Lit. The Aus-e-Lit project will extend and enhance the existing AustLit web portal with data integration and search services, empirical reporting services, collaborative annotation services, and compound object authoring, editing, and publishing services. For users to be able to get the most out of a digital item, it needs to be searchable, either through double keying or OCR (optimal character recognition). The value of CLDR’s contribution The value of the CLDR project lies in its goal to provide a comprehensive, searchable body of texts (fictional and critical) to researchers across the humanities and social sciences. Other projects seem to be intent on putting up as many items as possible to be considered as a first resort for online texts. CLDR is more specific and is not interested in simply generating a presence on the Web. Rather, it is research driven both in its design and implementation, and in its focussed outcomes of assisting academics and students primarily in their e-research endeavours. To this end, we have concentrated on the following: an extensive survey of appropriate texts; best models for file location, distribution, and use; and high standards of digitising protocols. These issues that relate to data storage, digitisation, collections, management, and end-users of data are aligned with the “Development of an Australian Research Data Strategy” outlined in An Australian e-Research Strategy and Implementation Framework (2006). CLDR is not designed to simply replicate resources, as it has a distinct focus, audience, and research potential. In addition, it looks at resources that may be forgotten or are no longer available in reproduction by current publishing companies. Thus, the aim of CLDR is to preserve both the time and a period of Australian history and literary culture. It will also provide users with an accessible repository of rare and early texts written for children. III. Future directions It is now commonplace to recognize that the Web’s role as information provider has changed over the past decade. New forms of “collective intelligence” or “distributed cognition” (Oblinger and Lombardi) are emerging within and outside formal research communities. Technology’s capacity to initiate major cultural, social, educational, economic, political and commercial shifts has conditioned us to expect the “next big thing.” We have learnt to adapt swiftly to the many challenges that online technologies have presented, and we have reaped the benefits. 
As the examples in this discussion have highlighted, the changes in online publishing and digitisation have provided many material, network, pedagogical, and research possibilities: we teach online units providing students with access to e-journals, e-books, and customized archives of digitised materials; we communicate via various online technologies; we attend virtual conferences; and we participate in e-research through a global, digital network. In other words, technology is deeply engrained in our everyday lives. In returning to Frollo’s concern that the book would destroy architecture, Umberto Eco offers a placatory note: “in the history of culture it has never happened that something has simply killed something else. Something has profoundly changed something else” (n. pag.). Eco’s point has relevance to our discussion of digital publishing. The transition from print to digital necessitates a profound change that impacts on the ways we read, write, and research. As we have illustrated with our case study of the CLDR project, the move to creating digitised texts of print literature needs to be considered within a dynamic network of multiple causalities, emergent technological processes, and complex negotiations through which digital texts are created, stored, disseminated, and used. Technological changes in just the past five years have, in many ways, created an expectation in the minds of people that the future is no longer some distant time from the present. Rather, as our title suggests, the future is both present and active. References Aarseth, Espen. “How we became Postdigital: From Cyberstudies to Game Studies.” Critical Cyber-culture Studies. Ed. David Silver and Adrienne Massanari. New York: New York UP, 2006. 37–46. An Australian e-Research Strategy and Implementation Framework: Final Report of the e-Research Coordinating Committee. Commonwealth of Australia, 2006. Bolter, Jay David. Writing Space: The Computer, Hypertext, and the History of Writing. Hillsdale, NJ: Erlbaum, 1991. Eco, Umberto. “The Future of the Book.” 1994. 3 June 2008 ‹http://www.themodernword.com/eco/eco_future_of_book.html>. Gunkel, David. J. “What's the Matter with Books?” Configurations 11.3 (2003): 277–303. Harley, Diane. “Use and Users of Digital Resources: A Focus on Undergraduate Education in the Humanities and Social Sciences.” Research and Occasional Papers Series. Berkeley: University of California. Centre for Studies in Higher Education. 12 June 2008 ‹http://www.themodernword.com/eco/eco_future_of_book.html>. Hayles, N. Katherine. My Mother was a Computer: Digital Subjects and Literary Texts. Chicago: U of Chicago P, 2005. Hirtle, Peter B. “The Impact of Digitization on Special Collections in Libraries.” Libraries & Culture 37.1 (2002): 42–52. Hovav, Anat and Paul Gray. “Managing Academic E-journals.” Communications of the ACM 47.4 (2004): 79–82. Hugo, Victor. The Hunchback of Notre Dame (Notre-Dame de Paris). Ware, Hertfordshire: Wordsworth editions, 1993. Kho, Nancy D. “The Medium Gets the Message: Post-Print Publishing Models.” EContent 30.6 (2007): 42–48. Oblinger, Diana and Marilyn Lombardi. “Common Knowledge: Openness in Higher Education.” Opening up Education: The Collective Advancement of Education Through Open Technology, Open Content and Open Knowledge. Ed. Toru Liyoshi and M. S. Vijay Kumar. Cambridge, MA: MIT Press, 2007. 389–400. Murray, Janet H. Hamlet on the Holodeck: The Future of Narrative in Cyberspace. Cambridge, MA: MIT Press, 2001. 
Trimmer, Joseph F., Wade Jennings, and Annette Patterson. eFictions. New York: Harcourt, 2001.
APA, Harvard, Vancouver, ISO, and other citation styles
29

Holmes, Ashley M. "Cohesion, Adhesion and Incoherence: Magazine Production with a Flickr Special Interest Group". M/C Journal 13, n.º 1 (22 de marzo de 2010). http://dx.doi.org/10.5204/mcj.210.

Full text
Abstract
This paper provides embedded, reflective practice-based insight arising from my experience collaborating to produce online and print-on-demand editions of a magazine showcasing the photography of members of haphazart! Contemporary Abstracts group (hereafter referred to as haphazart!). The group’s online visual, textual and activity-based practices via the photo sharing social networking site Flickr are portrayed as achieving cohesive visual identity. Stylistic analysis of pictures in support of this claim is not attempted. Rather negotiation, that Elliot has previously described in M/C Journal as innate in collaboration, is identified as the unifying factor. However, the collaborators’ adherence to Flickr’s communication platform proves problematic in the editorial context. Some technical incoherence with possible broader cultural implications is encountered during the process of repurposing images from screen to print. A Scan of Relevant Literature The photographic gaze perceives and captures objects which seem to ‘carry within them ready-made’ a work of art. But the reminiscences of the gaze are only made possible by knowing and associating with groups that define a tradition. The list of valorised subjects is not actually defined with reference to a culture, but rather by familiarity with a limited group. (Chamboredon 144) As part of the array of socio-cultural practices afforded by Web 2.0 interoperability, sites of produsage (Bruns) are foci for studies originating in many disciplines. Flickr provides a rich source of data that researchers interested in the interface between the technological and the social find useful to analyse. Access to the Flickr application programming interface enables quantitative researchers to observe a variety of means by which information is propagated, disseminated and shared. Some findings from this kind of research confirm the intuitive. For example, Negoecsu et al. find that “a large percentage of users engage in sharing with groups and that they do so significantly” ("Analyzing Flickr Groups" 425). They suggest that Flickr’s Groups feature appears to “naturally bring together two key aspects of social media: content and relations.” They also find evidence for what they call hyper-groups, which are “communities consisting of groups of Flickr groups” ("Flickr Hypergroups" 813). Two separate findings from another research team appear to contradict each other. On one hand, describing what they call “social cascades,” Cha et al. claim that “content in the form of ideas, products, and messages spreads across social networks like a virus” ("Characterising Social Cascades"). Yet in 2009 they claim that homocity and reciprocity ensure that “popularity of pictures is localised” ("Measurement-Driven Analysis"). Mislove et al. reflect that the affordances of Flickr influence the growth patterns they observe. There is optimism shared by some empiricists that through collation and analysis of Flickr tag data, the matching of perceptual structures of images and image annotation techniques will yield ontology-based taxonomy useful in automatic image annotation and ultimately, the Semantic Web endeavour (Kennedy et al.; Su et al.; Xu et al.). Qualitative researchers using ethnographic interview techniques also find Flickr a valuable resource. In concluding that the photo sharing hobby is for many a “serious leisure” activity, Cox et al. 
propose that “Flickr is not just a neutral information system but also value laden and has a role within a wider cultural order.” They also suggest that “there is genuinely greater scope for individual creativity, releasing the individual to explore their own identity in a way not possible with a camera club.” Davies claims that “online spaces provide an arena where collaboration over meanings can be transformative, impacting on how individuals locate themselves within local and global contexts” (550). She says that through shared ways of describing and commenting on images, Flickrites develop a common criticality in their endeavour to understand images, each other and their world (554).From a psychologist’s perspective, Suler observes that “interpersonal relationships rarely form and develop by images alone” ("Image, Word, Action" 559). He says that Flickr participants communicate in three dimensions: textual (which he calls “verbal”), visual, and via the interpersonal actions that the site affords, such as Favourites. This latter observation can surely be supplemented by including the various games that groups configure within the constraints of the discussion forums. These often include submissions to a theme and voting to select a winning image. Suler describes the place in Flickr where one finds identity as one’s “cyberpsychological niche” (556). However, many participants subscribe to multiple groups—45.6% of Flickrites who share images share them with more than 20 groups (Negoescu et al., "Analyzing Flickr Groups" 420). Is this a reflection of the existence of the hyper-groups they describe (2009) or, of the ranging that people do in search of a niche? It is also probable that some people explore more than a singular identity or visual style. Harrison and Bartell suggest that there are more interesting questions than why users create media products or what motivates them to do so: the more interesting questions center on understanding what users will choose to do ultimately with [Web2.0] capabilities [...] in what terms to define the success of their efforts, and what impact the opportunity for individual and collaborative expression will have on the evolution of communicative forms and character. (167) This paper addresseses such questions. It arises from a participatory observational context which differs from that of the research described above. It is intended that a different perspective about online group-based participation within the Flickr social networking matrix will avail. However, it will be seen that the themes cited in this introductory review prove pertinent. Context As a university teacher of a range of subjects in the digital media field, from contemporary photomedia to social media to collaborative multimedia practice, it is entirely appropriate that I embed myself in projects that engage, challenge and provide me with relevant first-hand experience. As an academic I also undertake and publish research. As a practicing new media artist I exhibit publically on a regular basis and consider myself semi-professional with respect to this activity. While there are common elements to both approaches to research, this paper is written more from the point of view of ‘reflective practice’ (Holmes, "Reconciling Experimentum") rather than ‘embedded ethnography’ (Pink). It is necessarily and unapologetically reflexive. Abstract Photography Hyper-Group A search of all Flickr groups using the query “abstract” is currently likely to return around 14,700 results. 
Given that one of Flickr’s largest groups, Black and White, currently has around 131,150 members and hosts 2,093,241 items in its pool, these abstract special-interest groups are relatively small. The largest, Abstract Photos, has 11,338 members and hosts 89,306 items in its pool. The group that is the focus of this paper, haphazart!, currently has 2,536 members who have submitted 53,309 items. The group pool is more like a constantly flowing river because the most recently added images are foremost. Older images become buried in an archive of pages which cannot be browsed backwards at a rate greater than the seven pages linked from the current view. A member’s presence is most immediate through images posted to a pool. This structural feature of Flickr promotes a desire for currency: a need to post regularly to maintain presence.

Negotiating Coherence to the Abstract

The self-managing social dynamics of groups have, as Suler proposes is the case for individuals, three dimensions: visual, textual and action. A group integrates the diverse elements, relationships and values which cumulatively constitute its identity with contributions from members in these dimensions. First impressions of that identity are usually derived from the group home page, which consists of these principal features: the group name, a selection of the twelve most recent posts to the pool, some kind of description, a selection of six of the most recent discussion topics, and a list of rules (if any). In some of these groups, what is considered to constitute an abstract photographic image is described on the group home page. In some it is left to be contested and becomes the topic of ongoing forum debates. In others the specific issue is not discussed; the images are left to speak for themselves. Administrators of some groups require that images are vetted for acceptance. In haphazart! particular administrators dutifully delete from the pool, on a regular basis, any images that they deem not to comply with the group ethic. Whether reasons are given or not is left to the individual prosecutor. Mostly, offending images just disappear from the group pool without trace. These are some of the ways that the coherence of a group’s visual identity is established and maintained.

Two groups out of the abstract photography hyper-group are noteworthy in that their discussion forums are particularly active. A discussion is just the start of a new thread and may have any number of posts under it. At the time of writing, Abstract Photos has 195 discussions and haphazart!, the most talkative by this measure, has 333. Haphazart! invites submissions of images to regularly changing themes. There is always lively and idiosyncratic banter in the forum over the selection of a theme. To be submitted, an image needs to be identified by a specific theme tag, as announced on the group home page. The tag can be added by the photographer themselves or by anyone else who deems the image appropriate to the theme. An exhibition process ensues. Participant curators search all Flickr items according to the theme tag and select from the results the images they deem to address the theme most appropriately and abstractly. Copies of the images, together with comments by the curators, are posted to a dedicated discussion board. Other members may also provide responses. This activity forms an ongoing record that may serve as a public indicator of the aesthetic that underlies the group’s identity.
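As an aside, the tag-driven curation just described maps onto a single API call. The sketch below is my own illustration rather than the group’s actual tooling; the theme tag “haphazarttheme” is hypothetical (the group’s real tags are not given here) and the API key is the same placeholder as above. It uses the flickr.photos.search method to gather candidate images carrying a theme tag from across all of Flickr.

```python
# Sketch of a curator's theme-tag trawl (my illustration, hypothetical tag).
import requests

FLICKR_API_KEY = "your-api-key-here"  # placeholder
REST_ENDPOINT = "https://api.flickr.com/services/rest/"

def search_by_theme_tag(tag, per_page=100):
    """Return (title, photo page URL) pairs for recent photos carrying the theme tag."""
    params = {
        "method": "flickr.photos.search",
        "api_key": FLICKR_API_KEY,
        "tags": tag,
        "sort": "date-posted-desc",
        "per_page": per_page,
        "format": "json",
        "nojsoncallback": 1,
    }
    photos = requests.get(REST_ENDPOINT, params=params).json()["photos"]["photo"]
    return [
        (p["title"], f"https://www.flickr.com/photos/{p['owner']}/{p['id']}")
        for p in photos
    ]

for title, url in search_by_theme_tag("haphazarttheme")[:10]:
    print(title, url)
```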
In Abstract Photos there is an ongoing discussion forum where one can submit an image and request that the moderators rule as to whether or not the image is ‘abstract’. The same group has ongoing discussions labelled “Hall of Appropriate”, where worthy images are reposted and celebrated, and “Hall of Inappropriate”, where images posted to the group pool have been removed and relegated because abstraction has been “so far stretched from its definition that it now resides in a parallel universe” (Askin). Reasons are mostly courteously provided.

In haphazart! a relatively small core of around twelve group members regularly contributes to the group discussion board. A curious aspect of this communication is that even though participants present visually with a ‘buddy icon’, and most with a screen name rather than their real name, it is usual practice to address each other in discussions by their real Christian names, even when these are not evident in a member’s profile. This seems to indicate a common desire for authenticity. The makeup of the core varies from time to time depending on other activities in a member’s life. Although one or two may be professionally or semi-professionally engaged as photographers, artists or academics, most of these people would likely consider themselves to be “serious amateurs” (Cox). They are internationally dispersed, with a bias towards the US, UK, Europe and Australia. English is the common language, though not the first language of some. The age range is approximately 35 to 65 and the gender mix 50/50. The group is three years old.

Where Do We Go to from Here?

In early January 2009 the haphazart! core was sparked into a frenzy of discussion by a post from a member headed “Where do we go to from here?” A proposal was mooted to produce a ‘book’ featuring images and texts representative of the group. Within three days a new public group with invited membership, dedicated to the idea, had been established. A smaller working party then retreated to a private Flickr group. Four months later Issue One of haphazart! magazine was available in print-on-demand and online formats. What follows, however, is a brief, critically reflective review of some of the collaborative curatorial, editorial and production processes for Issue Two, which commenced in early June 2009. Most of the team had also been involved with Issue One. I was the only newcomer and replaced the person who had undertaken the design for Issue One. I was not provided access to the prior private editorial ruminations, but apparently the collaborative curatorial and editorial decision-making practices the group had previously established persisted, and these took place entirely within the discussion forums of a new, dedicated private Flickr group. Over a five-month period there were 1066 posts in 54 discussions concerning matters such as: change of format from the previous issue; selection of themes, artists and images; conduct and editing of interviews; authoring of texts; copyright and reproduction.
The idiom of those communications can be described as discursive, sporadic, idiosyncratic, resourceful, collegial, cooperative, emphatic, earnest and purposeful. The selection process could not be said to follow anything close to a shared manifesto or articulation of style. It was established that there would be two primary themes: the square format and contributors’ use of colour. Selection progressed by way of visual presentation and counter-presentation until some kind of consensus was reached, often involving informal votes of preference.

Stretching the Limits of the Flickr Social Tools

The magazine’s editorial collaborators continue to use the facilities with which they are familiar from regular Flickr group participation. However, the strictly linear, vertical format of Flickr discussion threads is particularly unsuited to lengthy, complex, asynchronous, multithreaded discussion. For this purpose it causes unnecessary strain, fatigue and confusion. Where images are included, the forums have fixed maximum display sizes and cannot be flexibly configured into matrixes. Images cannot readily be communally changed or moved about like texts in a wiki. Likewise, the Flickrmail facility is of limited use for specialist editorial processes: attachments cannot be added. This opinion, expressed by a collaborator in the initial, open discussion for Issue One, prevailed among Issue Two participants:

do we want the members to go to another site to observe what is going on with the magazine? if that’s ok, then using google groups or something like that might make sense; if we want others to observe (and learn from) the process - we may want to do it here [in Flickr]. (Valentine)

The opinion appears socially constructive, but because the final editorial and production processes took place in a separate private forum, ultimately the suggested learning between one issue and the next did not take place. During Issue Two development the reluctance to try other online collaboration tools for the selection processes, which required comparative visual evaluation of images and trials of sequencing, persisted. A number of ingenious methods of working within Flickr were devised and deployed and, in my opinion, proved frustratingly impractical and inefficient. The digital layout, design, collation and formatting of images and texts all took place on my personal computer using professional software tools. Difficulties arose in progressively sharing this work for the purposes of review, appraisal and proofing. Eventually I ignored protests and insisted the team review demonstrations I had converted for sharing in Google Documents. But, with only one exception, I could not tempt collaborators to try commenting or editing in that environment. For example, instead of moving the sequence of images dynamically themselves, or even typing suggestions directly into Google Documents, they would post responses in Flickr.

To Share and to Hold

From the first imaginings of Issue One the need to have as an outcome something to hold in one’s hands was expressed, and this objective is apparently shared by all in the haphazart! core as an ongoing imperative. Various printing options have been nominated, discussed and evaluated. In the end one print-on-demand provider was selected on the basis of recommendation. The ethos of haphazart! is clearly not profit-making and conflicts with that of the printing organisation. Presumably to maintain an incentive to purchase the print copy, online preview is restricted to the first 15 pages.
To satisfy the co-requisite of making the full 120 pages available for free online viewing, a second host that specialises in the online presentation of publications is also utilised. In this way haphazart! members satisfy their common desires for sharing selected visual content and ideas with an online special-interest audience, and for a physical object of art to relish, with all the connotations of preciousness, fetish, talisman, trophy, and bookish notions of haptic pleasure and visual treasure. The irony of publishing a frozen chunk of the ever-flowing Flickriver, whose temporally changing nature is arguably one of its most interesting qualities, is not a consideration. Most of the collaborators profess to be simply satisfying their own desire for self-expression and would eschew any critical judgement as to whether this anarchic and discursive mode of operation results in a coherent statement about contemporary photographic abstraction. However, there remains a distinct possibility that a number of core haphazart!ists aspire to transcend popular taste, the discernment encouraged in camera clubs and the rhetoric of those involved professionally (Bourdieu et al.), and seek to engage with the “awareness of illegitimacy and the difficulties implied by the constitution of photography as an artistic medium” (Chamboredon 130).

Incoherence: A Technical Note

My personal experience of photography ranges from the filmic to the digital (Holmes, “Bridging Adelaide”). For a number of years I specialised in facsimile graphic reproduction of artwork. In those days I became aware that films were ‘blind’ to the psychophysical effect of a few particular paint pigments; they simply could not be reproduced. Even so, as I handled the dozens of images contributed to haphazart!2, converting them from the pixellated place where Flickr exists to the resolution and gamut of the ink-based colour space of books, I was surprised at the number of hue values that exist in the former but do not translate into the latter. In some cases the effect is subtle, so that judicious tweaking of colour levels or local colour adjustment will satisfy discerning comparison between the screenic original and the ‘soft proof’ that simulates the printed outcome. In other cases a conversion simply does not compute. I am moved to contemplate, along with Harrison and Barthel (op. cit.), just how much of the experience of media in the shared digital space is incomparably new.
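For readers unfamiliar with soft proofing, the following sketch illustrates the kind of screen-to-print check described above. It is my own illustration, not the workflow used for the magazine: it assumes Pillow built with LittleCMS support, a CMYK press profile on disk (the file name “USWebCoatedSWOP.icc” and the input file “contribution.jpg” are placeholders of my choosing), and it flags images whose colours drift noticeably when round-tripped from sRGB through the press colour space and back.

```python
# Sketch only: round-trip soft proof sRGB -> CMYK -> sRGB to expose out-of-gamut hues.
from PIL import Image, ImageChops, ImageCms

SRGB = ImageCms.createProfile("sRGB")
CMYK_PROFILE = "USWebCoatedSWOP.icc"  # assumed press profile; any CMYK ICC file will do

def soft_proof_drift(path, threshold=20):
    """Report how far an image's colours drift when proofed through the press profile."""
    original = Image.open(path).convert("RGB")
    # Convert into the press colour space; out-of-gamut hues are remapped here.
    cmyk = ImageCms.profileToProfile(original, SRGB, CMYK_PROFILE, outputMode="CMYK")
    # Convert back to sRGB to simulate, on screen, what the printed page can show.
    proof = ImageCms.profileToProfile(cmyk, CMYK_PROFILE, SRGB, outputMode="RGB")
    # Per-channel (min, max) differences between the original and its proof.
    extrema = ImageChops.difference(original, proof).getextrema()
    needs_correction = any(channel_max > threshold for _, channel_max in extrema)
    return extrema, needs_correction

extrema, flagged = soft_proof_drift("contribution.jpg")  # placeholder file name
print("per-channel drift (min, max):", extrema, "| needs hand correction:", flagged)
```

In practice a designer would eyeball the proof rather than trust a numeric threshold, but the round trip makes visible exactly which hues the ink-based gamut cannot carry.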
Acknowledgement

Acting on the advice of researchers experienced in cyberethnography (Bruckman; Suler, “Ethics”), I have obtained the consent of co-collaborators to comment freely on proceedings that took place in a private forum. They have been given the opportunity to review and suggest changes to this account.

References

Askin, Dean (aka dnskct). “Hall of Inappropriate.” Abstract Photos/Discuss/Hall of Inappropriate, 2010. 12 Jan. 2010 <http://www.flickr.com/groups/abstractphotos/discuss/72157623148695254/>.

Bourdieu, Pierre, Luc Boltanski, Robert Castel, Jean-Claude Chamboredon, and Dominique Schnapper. Photography: A Middle-Brow Art. 1965. Trans. Shaun Whiteside. Stanford: Stanford UP, 1990.

Bruckman, Amy. Studying the Amateur Artist: A Perspective on Disguising Data Collected in Human Subjects Research on the Internet. 2002. 12 Jan. 2010 <http://www.nyu.edu/projects/nissenbaum/ethics_bru_full.html>.

Bruns, Axel. “Towards Produsage: Futures for User-Led Content Production.” Proceedings: Cultural Attitudes towards Communication and Technology 2006. Perth: Murdoch U, 2006. 275–84.

———, and Mark Bahnisch. Social Media: Tools for User-Generated Content. Vol. 1 – “State of the Art.” Sydney: Smart Services CRC, 2009.

Cha, Meeyoung, Alan Mislove, Ben Adams, and Krishna P. Gummadi. “Characterizing Social Cascades in Flickr.” Proceedings of the First Workshop on Online Social Networks. ACM, 2008. 13–18.

———, Alan Mislove, and Krishna P. Gummadi. “A Measurement-Driven Analysis of Information Propagation in the Flickr Social Network.” WWW ’09: Proceedings of the 18th International Conference on World Wide Web. ACM, 2009. 721–730.

Chamboredon, Jean-Claude. “Mechanical Art, Natural Art: Photographic Artists.” Photography: A Middle-Brow Art. Pierre Bourdieu et al. 1965. Trans. Shaun Whiteside. Stanford: Stanford UP, 1990. 129–149.

Cox, A.M., P.D. Clough, and J. Marlow. “Flickr: A First Look at User Behaviour in the Context of Photography as Serious Leisure.” Information Research 13.1 (March 2008). 12 Dec. 2009 <http://informationr.net/ir/13-1/paper336.html>.

Davies, Julia. “Display, Identity and the Everyday: Self-Presentation through Online Image Sharing.” Discourse: Studies in the Cultural Politics of Education 28.4 (Dec. 2007): 549–564.

Elliott, Mark. “Stigmergic Collaboration: The Evolution of Group Work.” M/C Journal 9.2 (2006). 12 Jan. 2010 <http://journal.media-culture.org.au/0605/03-elliott.php>.

Harrison, Teresa M., and Brea Barthel. “Wielding New Media in Web 2.0: Exploring the History of Engagement with the Collaborative Construction of Media Products.” New Media & Society 11.1-2 (2009): 155–178.

Holmes, Ashley. “‘Bridging Adelaide 2001’: Photography and Hyperimage, Spanning Paradigms.” VSMM 2000 Conference Proceedings. International Society for Virtual Systems and Multimedia, 2000. 79–88.

———. “Reconciling Experimentum and Experientia: Reflective Practice Research Methodology for the Creative Industries.” Speculation & Innovation: Applying Practice-Led Research in the Creative Industries. Brisbane: QUT, 2006.

Kennedy, Lyndon, Mor Naaman, Shane Ahern, Rahul Nair, and Tye Rattenbury. “How Flickr Helps Us Make Sense of the World: Context and Content in Community-Contributed Media Collections.” MM ’07. ACM, 2007.

Miller, Andrew D., and W. Keith Edwards. “Give and Take: A Study of Consumer Photo-Sharing Culture and Practice.” Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2007. 347–356.

Mislove, Alan, Hema Swetha Koppula, Krishna P. Gummadi, Peter Druschel, and Bobby Bhattacharjee. “Growth of the Flickr Social Network.” Proceedings of the First Workshop on Online Social Networks. ACM, 2008. 25–30.

Negoescu, Radu-Andrei, and Daniel Gatica-Perez. “Analyzing Flickr Groups.” CIVR ’08: Proceedings of the 2008 International Conference on Content-Based Image and Video Retrieval. ACM, 2008. 417–426.

———, Brett Adams, Dinh Phung, Svetha Venkatesh, and Daniel Gatica-Perez. “Flickr Hypergroups.” MM ’09: Proceedings of the Seventeenth ACM International Conference on Multimedia. ACM, 2009. 813–816.

Pink, Sarah. Doing Visual Ethnography: Images, Media and Representation in Research. 2nd ed. London: Sage, 2007.

Su, Ja-Hwung, Bo-Wen Wang, Hsin-Ho Yeh, and Vincent S. Tseng. “Ontology-Based Semantic Web Image Retrieval by Utilizing Textual and Visual Annotations.” 2009 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology – Workshops. 2009.

Suler, John. “Ethics in Cyberspace Research: Consent, Privacy and Contribution.” The Psychology of Cyberspace. 1996. 12 Jan. 2010 <http://www-usr.rider.edu/~suler/psycyber/psycyber.html>.

———. “Image, Word, Action: Interpersonal Dynamics in a Photo-Sharing Community.” Cyberpsychology & Behavior 11.5 (2008): 555–560.

Valentine, Mark. “HAPHAZART! Magazine/Discuss/image selections…” [Discussion post]. 2009. 12 Jan. 2010 <http://www.flickr.com/groups/haphazartmagazin/discuss/72157613147017532/>.

Xu, Hongtao, Xiangdong Zhou, Mei Wang, Yu Xiang, and Baile Shi. “Exploring Flickr’s Related Tags for Semantic Annotation of Web Images.” CIVR ’09. ACM, 2009.