Journal articles on the topic 'A network representation of technological processes and limited resources'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'A network representation of technological processes and limited resources.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Kostyuchenko, Yuriy V. "On the Advanced Methodology of Risk-Based System Resilience Analysis." International Journal of Mathematical, Engineering and Management Sciences 6, no. 1 (October 29, 2020): 268–78. http://dx.doi.org/10.33889/ijmems.2021.6.1.017.

Full text
Abstract:
The paper considers the modern evolution of technological systems from hierarchical branching structures intended for the centralized transfer and distribution of limited resources toward multi-agent, interconnected, self-organized networks aimed at the production, transport, and consumption of resources. A model of such multi-agent, interconnected, self-organized adaptive networked systems is proposed, the network topology is considered, and a model of system functioning, including transient processes, is analyzed. A substantial limitation of the traditional reliability paradigm for this novel type of system is demonstrated. It is argued that optimization approaches in the context of “big data” utilization create a quasi-infinite space of unstructured decisions, which can be characterized as “big decisions”. A modified approach based on the “equally defended networked system” paradigm, together with a corresponding quantitative risk measure, is proposed.
2

Tomov, Pancho, and Lubomir Dimitrov. "THE ROLE OF DIGITAL INFORMATION MODELS FOR HORIZONTAL AND VERTICAL INTERACTION IN INTELLIGENT PRODUCTION." Facta Universitatis, Series: Mechanical Engineering 17, no. 3 (November 29, 2019): 397. http://dx.doi.org/10.22190/fume190422037t.

Full text
Abstract:
Intelligent production is the future of industrial production. It leads the way to a new industrial era and best embodies the concept of the Fourth Industrial Revolution. By obtaining real-time data on quality, resources and costs, it provides significant advantages over classical production systems. Intelligent production must be built on sustainable and service-oriented technological and business practices, characterized by flexibility, adaptability, self-learning, resilience to failures, and risk management. High levels of automation, on the other hand, become a mandatory standard, made possible by a flexible network of production-based systems that automatically monitor the production processes. Flexible systems and models that are capable of responding in real time allow internal processes to be radically optimized. Production benefits are not limited to one-off production conditions, and the capabilities include optimization through a global network of adaptive and self-regulating manufacturing components belonging to more than one operator.
3

Kondrat’ev, V. "World Economy as Global Value Chain’s Network." World Economy and International Relations, no. 3 (2015): 5–17. http://dx.doi.org/10.20542/0131-2227-2015-3-5-17.

Full text
Abstract:
World trade and production are increasingly structured around “global value chains” (GVCs). A value chain identifies the full range of activities that firms undertake to bring a product or a service from its conception to its end use by final consumers. Technological progress, cost, access to resources and markets and trade policy reforms have facilitated the geographical fragmentation of production processes across the globe according to the comparative advantage of the locations. This international fragmentation of production is a powerful source of increased efficiency and firm competitiveness. Today, more than half of world manufactured imports are intermediate goods (primary goods, parts and components, semi-finished products), and more than 70% of world services imports are intermediate services. The emergence of GVCs during the last two decades has implications in many areas, including trade, investment and industrial development. Some of these implications have been explored in recent OECD work but the empirical evidence on GVCs remains limited. The last few years have witnessed a growing number of case studies on the globally integrated value chains at the product level, but such analyses only depict the situation for a specific product. The main objective of the article is to provide more and better evidence allowing to examine the position of countries within international production networks. The author deals with quantitative indicators that give a more accurate picture of the integration and position of countries in GVCs. A detailed assessment of global value chains is provided in six industries: agriculture and food products, chemicals, electrical and computing machinery, motor vehicles, business services, financial services.
4

Lloret, Jaime, Sandra Sendra, Laura Garcia, and Jose M. Jimenez. "A Wireless Sensor Network Deployment for Soil Moisture Monitoring in Precision Agriculture." Sensors 21, no. 21 (October 30, 2021): 7243. http://dx.doi.org/10.3390/s21217243.

Full text
Abstract:
The use of precision agriculture is becoming more and more necessary to provide food for the world’s growing population, as well as to reduce environmental impact and enhance the usage of limited natural resources. One of the main drawbacks that hinder the use of precision agriculture is the cost of technological immersion in the sector. Farmers need low-cost, robust, and reliable systems. Toward this end, this paper presents a wireless sensor network of low-cost sensor nodes for soil moisture that can help farmers optimize irrigation processes in precision agriculture. Each wireless node is composed of four soil moisture sensors that measure the moisture at different depths. Each sensor consists of two coils wound onto a plastic pipe. The sensor operation is based on mutual induction between the coils, which allows monitoring of the percentage of water content in the soil. Several prototypes with different features have been tested. The prototype that offered the best results has a winding ratio of 1:2 with 15 and 30 spires, working at 93 kHz. We have also developed a specific communication protocol to improve the performance of the whole system. Finally, the wireless network was tested in a real, cultivated plot of citrus trees in terms of coverage and received signal strength indicator (RSSI) to check losses due to vegetation.
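As a purely hypothetical illustration of the node-side processing such a system implies (the calibration table, depths, and readings below are invented, not the paper's measurements), each node could map the normalized coil readout at its four depths to a volumetric water content before transmitting it to the gateway:

    # Hypothetical illustration of node-side processing implied by the abstract: each of
    # the four depth sensors' normalized coil readouts is mapped to a volumetric water
    # content via a lab calibration table. All numbers here are invented for the sketch.
    import numpy as np

    calib_readout = np.array([0.20, 0.35, 0.55, 0.80, 1.00])   # normalized coil readout
    calib_moisture = np.array([5.0, 12.0, 22.0, 33.0, 40.0])   # volumetric water content, %

    def moisture_percent(readout):
        """Interpolate a readout against the calibration curve."""
        return float(np.interp(readout, calib_readout, calib_moisture))

    node_readings = {"10 cm": 0.42, "20 cm": 0.57, "30 cm": 0.66, "40 cm": 0.74}
    payload = {depth: round(moisture_percent(r), 1) for depth, r in node_readings.items()}
    print(payload)   # moisture estimate per depth, transmitted to the gateway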
5

Garcia-Morales, Victor Jesus, Rodrigo Martín-Rojas, and María Esmeralda Lardón-López. "Influence of social media technologies on organizational performance through knowledge and innovation." Baltic Journal of Management 13, no. 3 (July 2, 2018): 345–67. http://dx.doi.org/10.1108/bjm-04-2017-0123.

Full text
Abstract:
Purpose: The purpose of this paper is to show how social media technologies (SMT) make the firm proficient at acting on business opportunities and reconfiguring business resources by encouraging networks to routinize the firm’s knowledge and innovation competencies. Design/methodology/approach: The paper analyzes data obtained from a sample of 201 technological firms located in Spain. Structural equation modeling with LISREL is used to test the hypotheses. Findings: This paper contributes to the literature by showing empirically, in a structural model, how SMT drive technological knowledge competencies to improve organizational performance directly and indirectly by leveraging the firm’s innovation capability. Research limitations/implications: The study has some limitations, among them the cross-sectional analysis of the constructs; the number of relationships analyzed is limited, as is the literature, which focuses on a digital vision from a social media point of view. Practical implications: Some implications for managers emerge. SMT both enable an emergent participatory culture through ubiquitous digital devices and social networks and balance the constant connectivity afforded by digital devices. Originality/value: Drawing on complexity science, the authors develop a conceptual framework to explain how social media, as emergent IS phenomena, help firms to create business value, leveraging network effects and knowledge flows and increasing innovative capability.
6

Popov, Aleksandr, Anna Zapol'skaya, and Tat'yana Popova. "MULTICHOICE APPROACH TO SOLUTION OF OPTIMIZATION PROBLEMS OF ALGORITHMIC DESCRIPTION FORMATION OF PROCESSING ROUTES." Bulletin of Bryansk State Technical University 2020, no. 11 (November 2, 2020): 18–25. http://dx.doi.org/10.30987/1999-8775-2020-11-18-25.

Full text
Abstract:
The paper addresses multi-level structural optimization of promising technological processes. It demonstrates the need for a thorough development of algorithms that form the best technological route for the machining of components and units, taking into account quality requirements and the choice of the lowest-cost characteristics of the industrial process. In solving optimization problems of technological process design, simulation models reflecting the basic principles of production operation are used. The search for an optimal solution was carried out by a mechanism of threshold structural optimization on network graphs. In the course of research on the introduction of automated production, simulation models were formed that take into account single-phase and multi-phase queueing systems, with cost parameters given priority in the assessment of technological process variants. This confirmed the hypothesis that it is expedient to determine the functional parameters of a technological process and then choose an optimal solution. Conclusions: reduced time, high quality, and financial profit are, for most real applications, a complex combination of objectives; therefore, most optimization methods try to find an ideal method for solving the limited-resources problem within various constraints.
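To illustrate the route-selection idea described in this abstract, the following minimal sketch (not the authors' implementation) treats a hypothetical processing-route network as a weighted graph, with operations as nodes and estimated costs as edge weights, and picks the cheapest route with a shortest-path search; all operation names and costs are invented.

    # Illustrative sketch only: a hypothetical processing-route graph in which nodes are
    # machining operations and edge weights are estimated costs; the cheapest route from
    # the blank to the finished part is found with Dijkstra's shortest-path search.
    import heapq

    routes = {  # operation -> list of (next operation, estimated cost)
        "blank":           [("rough_turning", 12.0), ("casting_cleanup", 9.5)],
        "rough_turning":   [("finish_turning", 7.0), ("milling", 8.5)],
        "casting_cleanup": [("milling", 10.0)],
        "finish_turning":  [("grinding", 5.0)],
        "milling":         [("grinding", 5.5)],
        "grinding":        [("finished", 3.0)],
        "finished":        [],
    }

    def cheapest_route(graph, start, goal):
        queue = [(0.0, start, [start])]            # (accumulated cost, node, path so far)
        settled = {}
        while queue:
            cost, node, path = heapq.heappop(queue)
            if node == goal:
                return cost, path
            if settled.get(node, float("inf")) <= cost:
                continue
            settled[node] = cost
            for nxt, edge_cost in graph[node]:
                heapq.heappush(queue, (cost + edge_cost, nxt, path + [nxt]))
        return float("inf"), []

    cost, path = cheapest_route(routes, "blank", "finished")
    print(f"cheapest route: {' -> '.join(path)} (total cost {cost:.1f})")

In a real application the edge weights would come from the cost and quality parameters the authors mention, and additional constraints would prune inadmissible routes.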
7

Sapio, Valentina. "Open Design." Academic Research Community publication 3, no. 4 (June 1, 2019): 78. http://dx.doi.org/10.21625/archive.v3i4.541.

Full text
Abstract:
The evolution of electronics, sustainable energy, digital technology and the web within the productive and entrepreneurial structure generated, in the second half of the twentieth century, the third industrial revolution. Scholars such as Chris Anderson and economic newspapers such as the "Financial Times" defined it as "a revolution in which the planner in general, and the designer in particular, have truly new technical, economic and above all formal language opportunities for the design of new elements". This phenomenon is still in full swing, yet we are already talking about Industry 4.0 as a synonym for a fourth industrial revolution that presents a new feature: a bidirectional relationship that re-examines two key players, producers and consumers. This complete connection has led to the creation of new products and services that improve the efficiency of life by making it more productive. Cyber-physics, the current technological science that integrates software and networking with new techniques of abstraction, modeling, design and analysis of the dynamics of physical processes, joins traditional design processes, generating a new stream of production, defined by Denis Santachiara, designer and professor at NABA in Milan, as «[...] a virtual representation of a manufacturing process in a software environment [...]». This new context presupposes inclusion within the Internet, "the network of networks", increasingly configured as a "Network Society", in which, to grasp the growing complexity of the digital revolution, new instruments leading to digital manufacturing are integrated. This determines an innovation in the language of designers, towards a new culture of the project, thanks to the resources developed by the new digital technologies. It is a new reality that becomes an opportunity for young designers, one that requires transversal and multidisciplinary figures with a heterogeneous design background, able to interact with the various facets of these means.
8

Xiong, Mingfu, Zhiyu Gao, Ruimin Hu, Jia Chen, Ruhan He, Hao Cai, and Tao Peng. "A Lightweight Efficient Person Re-Identification Method Based on Multi-Attribute Feature Generation." Applied Sciences 12, no. 10 (May 12, 2022): 4921. http://dx.doi.org/10.3390/app12104921.

Full text
Abstract:
Person re-identification (re-ID) technology has attracted extensive interest in critical applications of daily life, such as autonomous surveillance systems and intelligent control. However, lightweight and efficient person re-ID solutions are rare because limited computing resources cannot guarantee accuracy and efficiency in detecting person features, which inevitably results in a performance bottleneck in real-time applications. To address this research challenge, this study developed a lightweight framework for the generation of a multi-attribute person feature. The framework mainly consists of three sub-networks, each conforming to a convolutional neural network architecture: (1) the accessory attribute network (a-ANet) grasps the person's ornament information for an accessory descriptor; (2) the body attribute network (b-ANet) captures the person's region structure for a body descriptor; and (3) the color attribute network (c-ANet) forms the color descriptor to maintain the consistency of the color of the person(s). Inspired by the human visual processing mechanism, these descriptors (each “descriptor” corresponds to an attribute of an individual person) are integrated via a tree-based feature-selection method to construct a global “feature”, i.e., a multi-attribute descriptor of the person serving as the key to identify the person. Distance learning is then exploited to measure person similarity for the final re-identification. Experiments have been performed on four public datasets to evaluate the proposed framework: CUHK-01, CUHK-03, Market-1501, and VIPeR. The results indicate that (1) the multi-attribute feature outperforms most of the existing feature-representation methods by 5–10% at rank@1 in terms of the cumulative matching curve criterion; and (2) the time required for recognition is as low as O(n) for real-time person re-ID applications.
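The fusion and matching stage outlined above can be pictured with a short sketch; this is only an illustration assuming each sub-network emits a fixed-length descriptor (the dimensions and identities below are invented), not the authors' networks, feature selection, or metric learning.

    # Simplified illustration of the matching stage only (not the authors' networks):
    # accessory, body and color descriptors are concatenated into one multi-attribute
    # feature, and gallery identities are ranked by Euclidean distance to the query.
    import numpy as np

    def fuse(accessory_vec, body_vec, color_vec):
        """Concatenate the per-attribute descriptors and L2-normalize the result."""
        feat = np.concatenate([accessory_vec, body_vec, color_vec])
        return feat / (np.linalg.norm(feat) + 1e-12)

    rng = np.random.default_rng(0)
    query = fuse(rng.normal(size=64), rng.normal(size=128), rng.normal(size=32))
    gallery = {pid: fuse(rng.normal(size=64), rng.normal(size=128), rng.normal(size=32))
               for pid in ("person_01", "person_02", "person_03")}

    ranking = sorted(gallery, key=lambda pid: np.linalg.norm(query - gallery[pid]))
    print("rank list (closest first):", ranking)   # the top entry is the rank@1 candidate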
9

Hyla, Jakub, Wojciech Sułek, Weronika Izydorczyk, Leszek Dziczkowski, and Wojciech Filipowski. "Efficient LDPC Encoder Design for IoT-Type Devices." Applied Sciences 12, no. 5 (February 28, 2022): 2558. http://dx.doi.org/10.3390/app12052558.

Full text
Abstract:
Low-density parity-check (LDPC) codes are known to be one of the best error-correction coding (ECC) schemes in terms of correction performance. They have been utilized in many advanced data communication standards for which the codecs are typically implemented in custom integrated circuits (ICs). In this paper, we present a research work that shows that the LDPC coding scheme can also be applied in a system characterized by highly limited computational resources. We present a microcontroller-based application of an efficient LDPC encoding algorithm with efficient usage of memory resources for the code-parity-check matrix and the storage of the results of auxiliary computations. The developed implementation is intended for an IoT-type system, in which a low-complexity network node device encodes messages transmitted to a gateway. We present how the classic Richardson–Urbanke algorithm can be decomposed for the QC-LDPC subclass into cyclic shifts and GF(2) additions, directly corresponding to the CPU instructions. The experimental results show a significant gain in terms of memory usage and decoding timing of the proposed method in comparison with encoding with the direct parity check matrix representation. We also provide experimental comparisons with other known block codes (RS and BCH) showing that the memory requirements are not greater than for standard block codes, while the encoding time is reduced, which enables the energy consumption reduction. At the same time, the error-correction performance gain of LDPC codes is greater than for the mentioned standard block codes.
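As a toy sketch of the decomposition this abstract mentions (the circulant size, shift values, and message bits below are invented, not the code parameters used in the paper): in a QC-LDPC code every Z×Z circulant sub-matrix is a shifted identity, so multiplying a length-Z bit block by it is just a cyclic shift, and accumulating the contributions of one block row is GF(2) addition, i.e. XOR, which maps directly onto CPU shift and XOR instructions.

    # Toy sketch of QC-LDPC block-row processing via cyclic shifts and GF(2) additions;
    # the circulant size, shift values and message bits are invented for illustration.
    Z = 8  # circulant size (real codes use much larger Z)

    def cyclic_shift(block, shift):
        """Cyclically rotate a list of Z bits by 'shift' positions."""
        s = shift % Z
        return block[s:] + block[:s]

    def xor_blocks(a, b):
        """GF(2) addition of two bit blocks."""
        return [x ^ y for x, y in zip(a, b)]

    # One block row of the base matrix: circulant shift values (-1 marks an all-zero block).
    row_shifts = [0, 3, -1, 5]
    message_blocks = [
        [1, 0, 1, 1, 0, 0, 1, 0],
        [0, 1, 1, 0, 1, 0, 0, 1],
        [1, 1, 0, 0, 0, 1, 0, 1],
        [0, 0, 1, 0, 1, 1, 0, 0],
    ]

    accumulator = [0] * Z
    for shift, block in zip(row_shifts, message_blocks):
        if shift >= 0:                       # all-zero circulants contribute nothing
            accumulator = xor_blocks(accumulator, cyclic_shift(block, shift))
    print("block-row accumulation:", accumulator)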
10

Meadors, Patrick, Pamela Meadors, Beth York, and Declan Walsh. "Electronic distress screening for early symptom identification and management." Journal of Clinical Oncology 36, no. 34_suppl (December 1, 2018): 170. http://dx.doi.org/10.1200/jco.2018.36.34_suppl.170.

Full text
Abstract:
Background: Sustainable, comprehensive screening for cancer-related symptoms and distress is a challenge for large multisite cancer centers. Technological solutions are necessary to compile patient-reported outcomes at institutions with high volumes across broad geographic regions. Interdisciplinary collaboration and leadership engagement are required to effectively screen for and manage symptoms. Once integrated, electronic distress screening (EDS) produces large symptom databases for the management of cancer-related symptoms, strategic programmatic growth, and research. Methods: System-wide (n=44 clinic locations) implementation of EDS at consultation visits and simulations occurred in phases between November and December 2016. EDS content, clinical sensitivity thresholds, and referral processes were developed with input from the Supportive Oncology Department. Automatic email alerts based on the clinical thresholds were developed, and various symptom profiles were used to aid referrals to supportive oncology resources. Information and Analytic services ensured clinical integration into the EMR, delineation by clinic site, and troubleshooting. Ongoing completion rates were tracked and sent to all practice managers. Results: The network-wide EDS completion rate for January 2017-June 2018 was 69% (26,564 completed out of 38,435 eligible patient encounters). Mean completion time per patient was 8 minutes. 84% of clinic sites (n=37) screened >50% of eligible patients and 46% of clinics (n=20) screened >75%. Triggered referral rates were established for all supportive oncology sections (e.g., 60% of patients reported clinically significant distress and 18% indicated being at risk for malnutrition). All screening data are available for ongoing analysis. Conclusions: EDS can be integrated as a solution for distress and cancer-related symptoms in multisite cancer centers. Interdisciplinary collaboration is needed to ensure clinical relevance. Phased rollouts, structured education, and completion-rate dashboards help establish leader buy-in and consistent symptom screening. Such symptom databases allow large cancer hospital networks to strategically allocate limited resources based on highest volume/acuity/symptom profiles.
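The completion-rate tracking described in the results can be illustrated with a trivial sketch; the clinic names and counts below are invented and do not reproduce the study's data.

    # Toy sketch of completion-rate tracking for an EDS rollout; clinic names and counts
    # are invented and do not reproduce the study's data.
    clinics = {                      # clinic -> (completed screenings, eligible encounters)
        "clinic_A": (812, 1040),
        "clinic_B": (455, 980),
        "clinic_C": (230, 310),
    }

    completed = sum(done for done, _ in clinics.values())
    eligible = sum(total for _, total in clinics.values())
    print(f"network-wide completion rate: {completed / eligible:.0%}")

    for name, (done, total) in clinics.items():
        rate = done / total
        if rate > 0.75:
            status = "above 75% target"
        elif rate > 0.50:
            status = "above 50% threshold"
        else:
            status = "flag for practice manager follow-up"
        print(f"{name}: {rate:.0%} ({status})")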
11

Kowsher, Md, Md Shohanur Islam Sobuj, Md Fahim Shahriar, Nusrat Jahan Prottasha, Mohammad Shamsul Arefin, Pranab Kumar Dhar, and Takeshi Koshiba. "An Enhanced Neural Word Embedding Model for Transfer Learning." Applied Sciences 12, no. 6 (March 10, 2022): 2848. http://dx.doi.org/10.3390/app12062848.

Full text
Abstract:
Due to the expansion of data generation, more and more natural language processing (NLP) tasks need to be solved. For this, word representation plays a vital role. Computation-based word embedding is very useful in various high-resource languages. However, until now, low-resource languages such as Bangla have had very limited resources available in terms of models, toolkits, and datasets. Considering this fact, in this paper, an enhanced BanglaFastText word embedding model is developed using Python and two large pre-trained Bangla FastText models (skip-gram and CBOW). These pre-trained models were trained on a large collected Bangla corpus (around 20 million points of text data, in which every paragraph of text is considered a data point). BanglaFastText outperformed Facebook’s FastText by a significant margin. To evaluate and analyze the performance of these pre-trained models, the proposed work accomplished text classification on three popular textual Bangla datasets and developed models using various classical machine learning approaches, as well as a deep neural network. The evaluations showed superior performance over existing word embedding techniques and the Facebook Bangla FastText pre-trained model for Bangla NLP. In addition, relative to the performance reported in the original work on these textual datasets, the results are excellent. A Python toolkit is proposed, which is convenient for accessing the models and using them for word embedding, obtaining semantic relationships word-by-word or sentence-by-sentence, sentence embedding for classical machine learning approaches, and also the unsupervised fine-tuning of any Bangla linguistic dataset.
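The released models are described as FastText models accessible from Python; the sketch below shows typical skip-gram training and querying with the standard fasttext package, assuming a plain-text corpus file. The file names, dimension, and query word are hypothetical examples, not the BanglaFastText toolkit's own API.

    # Sketch of typical FastText skip-gram training and querying with the standard
    # `fasttext` package; the corpus file, output name, dimension and query word are
    # hypothetical and do not reproduce the BanglaFastText toolkit's interface.
    import fasttext

    # Train a skip-gram model on a raw-text Bangla corpus (one paragraph per line).
    model = fasttext.train_unsupervised("bangla_corpus.txt", model="skipgram", dim=300)
    model.save_model("bangla_skipgram.bin")

    # Reload and query: subword information lets FastText embed unseen words as well.
    model = fasttext.load_model("bangla_skipgram.bin")
    vector = model.get_word_vector("বাংলা")                    # 300-dimensional embedding
    neighbours = model.get_nearest_neighbors("বাংলা", k=5)     # (similarity, word) pairs
    print(neighbours)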
12

Hladiy, M. V., O. V. Kruglyak, and I. S. Martynyuk. "CRITERIA FOR ECONOMIC EVALUATION UNPRODUCTIVE COSTS FOR DAIRY CATTLE MAINTENANCE." Animal Breeding and Genetics 54 (November 29, 2017): 14–19. http://dx.doi.org/10.31073/abg.54.02.

Full text
Abstract:
Costs are an important economic category that have a decisive influence on the definition of pricing policies and the formation of financial performance of the enterprise, characterizing the level of organization of production and application of technologies. According to expediency of their spending, costs are divided into productive and unproductive. Unproductive costs arise in the event of a violation of technology, deficiencies in the organization of production, etc. Therefore, it is necessary to ensure that the economic evaluation of unproductive costs in breeding dairy cattle is carried out in order to determine their volume and specific weight in total expenses. An economic evaluation unproductive costs for dairy cattle maintenance should be carried out on the basis of criteria that take into account the main factors affecting the efficiency. These criteria include innovation, production, market, social and environmental. Their assessment under all criteria is conducted using the methods of economic analysis. The reasons and dynamics of their emergence should be the subject of a thorough study of management accounting and financial management of enterprises. According to the analysis of the structure of the cost of livestock production in state enterprises "Experimental farm "Niva" and "Experimental farm "Khrystynivske" that are part of the network of the Institute of Animal Breeding and Genetics nd. a. M.V.Zubets NAAS (Cherkasy region) (Table 1 ), in the structure of the cost of milk production the largest share is the cost of feed (42.6%) and labor remuneration combined with accrual (20.4%). In the production of live weight of cattle for feed and wages account for 76.6% of all costs. Rational use of feed is one of the main ways to strengthen the economy of the enterprise. In order to avoid unproductive costs, it is necessary to observe all technological stages of production, storage, distribution and feeding of forages. Provision for increasing the efficiency of feed costs is to increase the conversion of feed to products and reduce the cost of the diet and its individual components. In particular, finding suppliers with lower prices for concentrated feeds and milk replacers, growing fodder crops with higher nutrition. The system for keeping animals is also important. For example, with untied, labor costs by 67% lower than with tied, the profitability of milk production is increased by 4%. The most economic losses to farms are caused by diseases of the mammary gland, which lead to a decrease in the milk productivity of cows, the quality of milk; an increase in the incidence of calves, and the abandonment of livestock. At present, the total unproductive costs of udder disease are estimated at the equivalent of 5-8% of the gross annual income, or from 2.6 to 4.1 thousand UAH. per cow. Other unproductive costs of dairy cattle maintenance of productivity direction are financial losses from the reduction of genetic potential, prolongation of the interotional period, the reduction of duration of the economic use of cows, the low quality of milk and the non-equivalent milk sales prices. Comprehensive economic assessment of unproductive costs, determined in accordance with innovative production, market, social and environmental criteria, will allow, in conditions of relative limited resources, to ensure the effective management of production processes of breeding dairy cattle, aimed at increasing profits.
13

Bila, Svitlana. "Strategic priorities of social production digitalization: world experience." University Economic Bulletin, no. 48 (March 30, 2021): 40–55. http://dx.doi.org/10.31470/2306-546x-2021-48-40-55.

Full text
Abstract:
Actual importance of study. At the beginning of the 2020s developed world countries and countries which are the leaders of world economic development faced up the challenges of radical structural reformation of social production (from industry to service system) which is based on digitalization. Digital technologies in world science and business practice are considered essential part of a complex technological phenomenon like ‘Industry 4.0’. Digitalization should cover development of all business processes and management processes at micro-, meso- and microlevels, processes of social production management at national and world economy levels. In general, in the 21st century world is shifting rapidly to the strategies of digital technologies application. The countries which introduce these strategies will gain guaranteed competitive advantages: from reducing production costs and improved quality of goods and services to developing new sales market and making guaranteed super-profits. The countries which stand aside from digitalization processes are at risk of being among the outsiders of socio-economic development. Such problem statement highlights the actual importance of determining the directions, trends and strategic priorities of social production digitalization. This issue is really crucial for all world countries, including Ukraine which is in midst of profound structural reformation of all national production system. Problem statement. Digital economy shapes the ground for ‘Industry 4.0’, information, It technologies and large databases become the key technologies. The main asset of ‘Industry 4.0’ is information, the major tool of production is cyberphysical systems that lead to formation the single unified highly productive environmental system of collecting, analyzing and applying data to production and other processes. Cyberphysical systems provides ‘smart machines’ (productive machines, tools and equipment which are programmed) integration via their connection to the Internet, or creation special network, ‘Industrial Internet’ (IIoT) which is regarded as a productive analogue of ‘Internet of Things’ (IoT) that is focused on the consumers. ‘Internet of Things’ can be connected with ‘smart factories’ which use ‘Industrial Internet’ to adjust production processes quickly turning into account the changes in costs and availability of resources as well as demand for production made. One of the most essential tasks for current economics and researchers of systems and processes of organization future maintenance of world production is to determine the main strategic priorities of social production digitalization. Analysis of latest studies and publications. Valuable contribution to the study of the core and directions of strategic priorities concerning social production digitalization was made by such foreign scientists as the Canadian researcher Tapscott D [1], foreigners Sun, L., Zhao, L [2], Mcdowell, M. [3] and others. Yet, the study of issues concerning social production digitalization are mainly done by the team of authors as such issues are complicated and multihierarchical. Furthermore, the problem of social production digitalization is closely linked to the transition to sustainable development, which is reflected in the works by Ukrainian scholars like Khrapkin V., Ustimenko V., Kudrin O., Sagirov A. and others in the monograph “Determinants of sustainable economy development” [4]. 
The edition of the first in Ukraine inter-disciplinary textbook on Internet economy by a group of scientists like Tatomyr I., Kvasniy L., Poyda S. and others [5] should also be mentioned. But the challenges of social production digitalization are constantly focused on by theoretical scientists, analytics and practitioners of these processes. Determining unexplored parts of general problem. Defining strategic priorities of social production digitalization requires clear understanding of prospective spheres of their application, economic advantages and risks which mass transition of social production from traditional (industrial and post-industrial)to digital technologies bear. A new system of technological equipment (production digitalization, Internet-economy, technology ‘Industry 4.0’, NBIC- technologies and circular economy) has a number of economic advantages for commodity producers and countries, as well as leads to dramatical changes in the whole social security system, changes at labour market and reformation the integral system of social relations in the society. Tasks and objectives of the study. The objective of the study is to highlight the core and define the main strategic priorities of social production digitalization, as they cause the process of radical structural reformation of industrial production, services and social spheres of national economy of world countries and world economy in general. To achieve the objective set in the article the following tasks are determined and solved: - to define the main priorities of digital technologies development, which is radically modify all social production business processes; - to study the essence and the role of circular economy for transition to sustainable development taken EU countries as an example; - to identify the strategic priorities of robotization of production processes and priority spheres of industrial and service robots application; - to define the role of NBIC-technologies in the process of social production structural reformation and its transition to new digital technologies in the 21st century. Method and methodology of the study. While studying strategic priorities of social production digitalization theoretical and empirical methods of study are used, such as historical and logical, analysis and synthesis, abstract and specific, casual (cause-and-effect) ones. All of them helped to keep the track of digital technologies evolution and its impact on structural reformation of social production. Synergetic approach, method of expert estimates and casual methods are applied to ground system influence of digital technologies, ‘Industry 4.0’ and their materialization as ‘circular economy’ on the whole complicated and multihierarchical system of social production in general. Basic material (the results of the study). Digital economy, i.e. economy where it is virtual but not material or physical assets and transactions are of the greatest value, institutional environment in which business processes as well as all managerial processes are developed on the basis of digital computer technologies and information and communication technologies (ICT), lies as the ground for social production digitalization. ICT sphere involves production of electronic equipment, computing, hardware,.software and services. It also provides various information sevices. Information Technology serves as a material basis for digital economy and digital technologies development. 
Among the basic digital technologies the following ones play the profound role: technology ‘Blockchain’, 3D priniting, unmanned aerial vehicles and flying drones, virtual reality (VR). Augmented reality (AR), Internet of Things (IoT), Industrial Internet of Things (IIoT), Internet of Value (IoV) which is founded on IT and blockchain technology, Internet of Everything (IoE), Artificial Intelligence (AI), neuron networks and robots. These basic digital technologies in business processes and management practices are applied in synergy, complexity and system but not in a single way. System combination of digital technologies gives maximal economic effect from their practical application in all spheres of social production-from industry to all kinds of services. For instance, in education digital technologies promote illustrating and virtual supplement of study materials; in tourism trade they promote engagement of virtual guides, transport and logistics security of tourist routes, virtual adverts and trips arrangements, virtual guidebooks, virtual demonstration of services and IT brochures and leaflets. Digital technologies radically change gambling and show businesses, in particular, they provide virtual games with ‘being there’ effect. Digital technologies drastically modify the retail trade sphere, advertisement and publishing, management and marketing, as well as provide a lot of opportunities for collecting unbiased data concerning changes in market conditions in real time. Digital technologies lie as the basis for ‘circular economy’, whose essence rests with non-linear, secondary, circular use of all existing natural and material resources to provide the production and consumption without loss of quality and availability of goods and services developed on the grounds of innovations, IT-technology application and ‘Industry 4.0’. Among priorities of circular economy potential applications the following ones should be mentioned: municipal services, solid household wastes management and their recycling, mass transition to smart houses and smart towns, circular agriculture development, circular and renewable energy, The potential of circular economy fully and equally corresponds to the demands for energy efficiency and rational consumption of limited natural resources, so it is widely applied in EU countries while transiting to sustainable development. In the 21st century processes of social production robotization draw the maximal attention of the society. There is a division between industrial and service robots which combine artificial intelligence and other various digital technologies in synergy. Industrial robots are widely used in production, including automotive industry, processing industry, energetic, construction sectors and agriculture Services are applied in all other spheres and sectors of national and world economies –from military-industrial complex (for instance, for mining and demining the areas, military drones) to robots-cleaners (robots-vacuum cleaners), robots-taxis, robots engaged in health care service and served as nurses (provide the ill person with water, tidy up, bring meals). Social production robotization is proceeding apace. According to “World Robotic Report 2020”, within 2014 – 2019 the total quantity of industrial robots increased by 85 %. By 2020 in the world the share of robots in the sphere of automated industrial production had comprised 34 %, in electronics – 25%, in metallurgy – 10 %. 
These indicators are constantly growing which results in structural reformation of the whole system of economic and industrial processes, radical changes in world labour market and the social sphere of world economy in general. Alongside with generally recognized types of digital technologies and robotization processes, an innovation segment of digital economy – NBIC – technologies (Nanotechnology, Biotechnology, Information technology, Cognitive Science) are rapidly spread. Among the priorities of NBIC-technologies development the special place belongs to interaction between information and cognitive technologies. As a material basis for its synergy in NBIC-technologies creation of neuron networks, artificial intelligence, artificial cyber brain for robots are applied. It is estimated as one of the most prospective and important achievements of digital economy which determines basic, innovational vector of social production structural reformations in the 21st century. The sphere of results application. International economic relations and world economy, development of competitive strategies of national and social production digitalization of world economy in general. Conclusions. Digital technologies radically change all spheres of social production and social life, including business and managerial processes at all levels. Digital technologies are constantly developing and modifying, that promotes emergence of new spheres and new business activities and management. 21st century witnessed establishing digital economy, smart economy, circular economy, green economy and other various arrangements of social production which are based on digital technologies. Social production digitalization and innovative digital technologies promotes business with flexible systems of arrangement and management, production and sales grounded on processing large Big Data permanently, on the basis of online monitoring in real time. Grounded on digital technologies business in real time mode processes a massive Big Data and on their results makes smart decisions in all business spheres and business processes management. Radical shifts in social production digitalization provides businesses of the states which in practice introduce digital technologies with significant competitive advantages - from decrease in goods and services production cost to targeted meeting of specific needs of consumers. Whereas, rapid introduction of digital technologies in the countries-leaders of world economic development results in a set of system socio-economic and socio-political challenges, including the following: crucial reformatting the world labour market and rise in mass unemployment, shift from traditional export developing countries’ specialization, breakups of traditional production networks being in force since the end of the 20th century, so called ‘chains of additional value shaping’, breakups of traditional cooperation links among world countries and shaping the new ones based on ‘Industry 4.0’ and ‘Industrial Internet’. Socio-economic and political consequences of radical structural reformation of all spheres in national and world economy in the 21st century, undoubtedly, will be stipulated with the processes of social production digitalization. It will require further systemic and fundamental scientific studies on this complicated and multi hierarchical process.
14

Scafuto, Isabel Cristina, Priscila Rezende, and Marcos Mazzieri. "International Journal of Innovation - IJI completes 7 years." International Journal of Innovation 8, no. 2 (August 31, 2020): 137–43. http://dx.doi.org/10.5585/iji.v8i2.17965.

Full text
Abstract:
International Journal of Innovation - IJI completes 7 yearsInternational Journal of Innovation - IJI has now 7 years old! In this editorial comment, we not only want to talk about our evolution but get even closer to the IJI community. It is our first editorial comment, a new IJI's communication channel. Some of the changes are already described on our website.IJI is an innovation-focused journal that was created to support scientific research and thereby contribute to practice. Also, IJI was born internationally, receiving and supporting research from around the world. We welcome articles in Portuguese, English, and Spanish.We have published eight volumes in IJI since 2013, totaling 131 articles. Our journal is indexed in: Dialnet and Red Iberoamericana de Innovación y Conocimiento Científico; Ebsco Host; Erih Plus; Gale - Cengage Learning; Latindex; Proquest; Redalyc; Web of Science Core Collection (Emerging Sources Citation Index), among others. We provide free access “open access” to all its content. Articles can be read, downloaded, copied, distributed, printed and / or searched.We want to emphasize that none of this would be possible without the authors that recognized in IJI a relevant journal to publicize their work. Nor can we fail to mention the tireless and voluntary action of the reviewers, always contributing to the articles' improvement and skilling up our journal, more and more.All editors who passed through IJI have a fundamental role in this trajectory. And, none of this would be possible without the editorial team of Uninove. Everyone who passed and the current team. We want to express that our work as current editors of IJI would not be possible without you. Changes in the Intenational Journal of Innovation – IJIAs we mentioned earlier, IJI was born in 2013. And, over time, we are improving its structure always to improve it. In this section, we want to show some changes we made. We intend that editorial comments become a communication channel and that they can help our readers, authors, and reviewers to keep up with these changes.Although IJI is a comprehensive Innovation journal, one of the changes we want to inform you is that now, at the time of submission, the author will choose one of the available topics that best suit your article. The themes are: Innovative Entrepreneurship; Innovation and Learning; Innovation and Sustainability; Internationalization of Innovation; Innovation Systems; Emerging Innovation Themes and; Digital Transformation. Below, we present each theme so that everyone can get to know them:Innovative Entrepreneurship: emerging markets provided dynamic advantages for small businesses and their entrepreneurs to exploit the supply flows of resources, capacities, and knowledge-based on strategies oriented to the management of innovation. Topics covered in this theme include, for example: resources and capabilities that support innovative entrepreneurship; innovation habitats (Universities, Science and Technology Parks, Incubators and Accelerators) and their influences on the development of knowledge-intensive spin-offs and start-ups; open innovation, triple/quadruple helix, knowledge transfer, effectuation, bricolage and co-creation of value in knowledge-intensive entrepreneurship ecosystems; and adequate public policies to support innovative entrepreneurship.Innovation and Learning: discussions on this topic focus on the relationship between learning and innovation as topics with the potential to improve teaching and learning. 
They also focus on ways in which we acquire knowledge through innovation and how knowledge encourages new forms of innovation. Topics covered in this theme include, for example: innovative projects for learning; innovation-oriented learning; absorptive capacity; innovation in organizational learning and knowledge creation; unlearning and learning for technological innovation; new learning models; dynamics of innovation and learning; skills and innovation.Innovation and Sustainability: discussions on this topic seek to promote the development of innovation with a focus on sustainability, encouraging new ways of thinking about sustainable development issues. Topics covered in this theme include, for example: development of new sustainable products; circular economy; reverse logistic; smart cities; technological changes for sustainable development; innovation and health in the scope of sustainability; sustainable innovation and policies; innovation and education in sustainability and social innovation.Internationalization of Innovation: the rise of developing countries as an innovation center and their new nomenclature for emerging markets have occupied an important place in the international research agenda on global innovation and Research and Development (RD) strategies. Topics covered in this theme include, for example: resources and capabilities that support the internationalization of innovation and RD; global and local innovation and RD strategies; reverse innovation; internationalization of start-ups and digital companies; development of low-cost products, processes and services with a high-value offer internationalized to foreign markets; innovations at the base of the pyramid, disruptive and/or frugal developed and adopted in emerging markets and replicated in international markets; institutional factors that affect firms' innovation efforts in emerging markets.Innovation Systems: regulation and public policies define the institutional environment to drive innovation. Topics include industrial policy, technological trends and macroeconomic performance; investment ecosystem for the development and commercialization of new products, based on government and private investments; investment strategies related to new companies based on science or technology; Technology transfer to, from and between developing countries; technological innovation in all forms of business, political and economic systems. Topics such as triple helix, incubators, and other structures for cooperation, fostering and mobilizing innovation are expected in this section.Emerging Themes: from the applied themes, many emerging problems have a significant impact on management, such as industry 4.0, the internet of things, artificial intelligence or social innovations, or non-economic benefits. Intellectual property is treated as a cognitive database and can be understood as a technological library with the registration of the product of human creativity and invention. Social network analysis reveals the relationships between transforming agents and other elements; therefore, encouraged to be used in research and submitted in this section. The theoretical field not fully developed is not a barrier to explore any theme or question in this section.Digital Transformation: this interdisciplinary theme covers all the antecedents, intervening, and consequent effects of digital transformation in the field of technology-based companies and technology-based business ventures. 
The technological innovator (human side of innovation) as an entrepreneur, team member, manager, or employee is considered an object of study either as an agent of innovation or an element of the innovation process. Digital change or transformation is considered as a process that moves from the initial status to the new digital status, anchored in the theories of innovation, such as adoption, diffusion, push / pull of technology, innovation management, service innovation, disruptive innovation, innovation frugal innovation economy, organizational behavior, context of innovation, capabilities and transaction costs. Authors who submit to IJI will realize that they now need to make a structured summary at the time of submission. The summary must include the following information:(maximum of 250 words + title + keywords = Portuguese, English and Spanish).Title.Objective of the study (mandatory): Indicate the objective of the work, that is, what you want to demonstrate or describe.Methodology / approach (mandatory): Indicate the scientific method used in carrying out the study. In the case of theoretical essays, it is recommended that the authors indicate the theoretical approach adopted.Originality / Relevance (mandatory): Indicate the theoretical gap in which the study is inserted, also presenting the academic relevance of the discipline.Main results (mandatory): briefly indicate the main results achieved.Theoretical-methodological contributions (mandatory): Indicate the main theoretical and / or methodological implications that have been achieved with the results of the study.Social / managerial contributions (mandatory): Indicate the main managerial and / or social implications obtained through the results of the study.Keywords: between three and five keywords that characterize the work. Another change regarding the organization of the IJI concerns the types of work. In addition to the Editorial Comment and Articles, the journal will include Technological Articles, Perspectives, and Reviews. Thus, when submitting a study, authors will be able to choose from the available options for types of work. Throughout the next issues of the IJI, in the editorial comments, we will pass on pertinent information about every kind of work, to assist the authors in their submissions.Currently, the IJI is available to readers with new works three times a year (January-April; May-August; September-December) with publications in English, Portuguese and Spanish. From what comes next, we will have some changes in the periodicity. Next stepsAs editors, we want the IJI to continue with a national and international impact and increase its relevance in the indexing bases. For this, we will work together with the entire editorial team, reviewers, and authors to improve the work. We will do our best to give full support to the evaluators who are so dedicated to making constructive evaluations to the authors. We will also support authors with all the necessary information.With editorial comments, we intend to pass on knowledge to readers, authors, and reviewers to improve the articles gradually. We also aim to support classroom activities and content.Even with the changes reported here, we continue to accept all types of work, as long as they have an appropriate methodology. We also maintain our scope and continue to publish all topics involving innovation. We want to support academic events on fast tracks increasingly. 
About the articles in this edition of IJIThis issue is the first we consider the new organization of the International Journal of Innovation - IJI. We started with this editorial comment talking about the changes and improvements that we are making at IJI—as an example, showing the reader, reviewer, and author that the scope remains the same. However, at the time of submission, the author has to choose one of the proposed themes and have a mandatory abstract structured in three languages (English, Portuguese, and Spanish).In this issue, we have a section of perspectives that addresses the “Fake Agile” phenomenon. This phenomenon is related to the difficulties that companies face throughout the agile transformation, causing companies not to reach full agility and not return to their previous management model.Next, we publish the traditional section with scientific articles. The article “Critical success factors of the incubation network of enterprises of the IFES” brings critical success factors as the determining variables to keep business incubators competitive, improving their organizational processes, and ensuring their survival. Another published article, “The sharing economy dilemma: the response of incumbent firms to the rise of the sharing economy”, addresses the sharing economy in terms of innovation. The results of the study suggest that the current response to the sharing economy so far is moderate and limited. The article “Analysis of the provision for implementation of reverse logistics in the supermarket retail” made it possible to observe that through the variables that define retail characteristics, it is not possible to say whether a supermarket will implement the reverse logistics process. And the article “Capability building in fuzzy front end management in a high technology services company”, whose main objective was to assess the adherence among Fuzzy Front End (FFE) facilitators, was reported in the literature its application in the innovation process of a company, an innovative multinational high-tech services company.We also published the article “The evolution of triple helix movement: an analysis of scientific communications through bibliometric technique”. The study is a bibliometric review that brings essential contributions to the area. This issue also includes a literature review entitled “Service innovation tools: a literature review” that aimed to systematically review the frameworks proposed and applied by the literature on service innovation.The technological article “A model to adopt Enterprise Resource Planning (ERP) and Business Intelligence (BI) among Saudi SMEs”, in a new IJI publication section, addresses the main issues related to the intention to use ERPBI in the Saudi private sector.As we mentioned earlier in this editorial, IJI has a slightly different organization. With the new format, we intend to contribute to the promotion of knowledge in innovation. Also, we aim to increasingly present researchers and students with possibilities of themes and gaps for their research and bring insights to professionals in the field.Again, we thank the reviewers who dedicate their time and knowledge in the evaluations, always helping the authors. We wish you, readers, to enjoy the articles in this issue and feel encouraged to send your studies in innovation to the International Journal of Innovation - IJI.
15

Thirsk, R. B. "Health care for deep space explorers." Annals of the ICRP 49, no. 1_suppl (July 31, 2020): 182–84. http://dx.doi.org/10.1177/0146645320935288.

Full text
Abstract:
[Formula: see text] There is a growing desire amongst space-faring nations to venture beyond the Van Allen radiation belts to a variety of intriguing locations in our inner solar system. Mars is the ultimate destination. In two decades, we hope to vicariously share in the adventure of an intrepid crew of international astronauts on the first voyage to the red planet. This will be a daunting mission with an operational profile unlike anything astronauts have flown before. A flight to Mars will be a 50-million-kilometre journey. Interplanetary distances are so great that voice and data communications between mission control on Earth and a base on Mars will feature latencies up to 20 min. Consequently, the ground support team will not have real-time control of the systems aboard the transit spacecraft nor the surface habitat. As cargo resupply from Earth will be impossible, the onboard inventory of equipment and supplies must be planned strategically in advance. Furthermore, the size, amount, and function of onboard equipment will be constrained by limited volume, mass, and power allowances. With less oversight from the ground, all vehicle systems will need to be reliable and robust. They must function autonomously. Astronauts will rely on their own abilities and onboard resources to deal with urgent situations that will inevitably arise. The deep space environment is hazardous. Zero- and reduced-gravity effects will trigger deconditioning of the cardiovascular, musculoskeletal, and other physiological systems. While living for 2.5 years in extreme isolation, Mars crews will experience psychological stressors such as loss of privacy, reduced comforts of living, and distant relationships with family members and friends. Beyond Earth’s protective magnetosphere, the fluence of ionising radiation will be higher. Longer exposure of astronauts to galactic cosmic radiation could result in the formation of cataracts, impaired wound healing, and degenerative tissue diseases. Genetic mutations and the onset of cancer later in life are also possible. Acute radiation sickness and even death could ensue from a large and unpredictable solar particle event. There are many technological barriers that prevent us from carrying out a mission to Mars today. Before launching the first crew, we will need to develop processes for in-situ resource utilisation. Rather than bringing along large quantities of oxygen, water, and propellant from Earth, future astronauts will need to produce some of these consumables from local space-based resources. Ion propulsion systems will be needed to reduce travel times to interplanetary destinations, and we will need systems to land larger payloads (up to 40 tonnes of equipment and supplies for a human mission) on planetary surfaces. These and other innovations will be needed before humans venture into deep space. However, it is the delivery of health care that is regarded as one of the most important obstacles to be overcome. Physicians, biomedical engineers, human factors specialists, and radiation experts are re-thinking operational concepts of health care, crew performance, and life support. Traditional oversight of astronaut health by ground-based medical teams will no longer be possible, particularly in urgent situations. Aborting a deep space mission to medically evacuate an ill or injured crew member to Earth will not be an option. Future crews must have all of the capability and responsibility to monitor and manage their own health. 
Onboard medical resources must include imaging, surgery, and emergency care, as well as laboratory analysis of blood, urine, and other biospecimens. At least one member of the crew should be a broadly trained physician with experience in remote medicine. She/he will be supported by an onboard health informatics network that is artificial intelligence enabled to assist with monitoring, diagnosis, and treatment. In other words, health care in deep space will become more autonomous, intelligent, and point of care. The International Commission on Radiological Protection (ICRP) has dedicated a day of its 5th International Symposium in Adelaide to the theme of Mars exploration. ICRP has brought global experts together today to consider the pressing issues of radiation protection. There are many issues to be addressed: Can the radiation countermeasures currently used in low Earth orbit be adapted for deep space? Can materials of low atomic weight be integrated into the structure of deep space vehicles to shield the crew? In the event of a major solar particle event, could a safe haven shelter the crew adequately from high doses of radiation? Could Martian regolith be used as shielding material for subterranean habitats? Will shielding alone be sufficient to minimise exposure, or will biological and pharmacological countermeasures also be needed? Beyond this symposium, I will value the continued involvement of ICRP in space exploration. ICRP has recently established Task Group 115 to examine radiation effects on the health of astronaut crew and to recommend exposure limits. This work will be vital. Biological effects of radiation could not only impact the health, well-being, and performance of future explorers, but also the length and quality of their lives. While humanity has dreamed of travel to the red planet for decades, an actual mission is finally starting to feel like a possibility. How exciting! I thank ICRP for its ongoing work to protect radiation workers on Earth. In the future, we will depend on counsel from ICRP to protect extraterrestrial workers and to enable the exploration of deep space.
APA, Harvard, Vancouver, ISO, and other styles
16

KOZACHENKO, Anna. "EVALUATION OF THE STATE OF INTERNAL AGRICULTURAL CONTROL IN THE SYSTEM OF MANAGEMENT OF SCIENTIFIC RESEARCH INSTITUTIONS." "ECONOMY. FINANCES. MANAGEMENT: Topical issues of science and practical activity", no. 3 (43) (March 2019): 131–43. http://dx.doi.org/10.37128/2411-4413-2019-3-11.

Full text
Abstract:
The activity of scientific research institutions is not free of internal risks, most commonly connected with dishonest actions by officials and materially responsible persons, including instances of fraud and forgery. For this reason, the economic operations of such institutions are subject to internal control aimed at the targeted and most effective use of financial resources. The mechanisms for implementing internal control can be diverse and depend on the size and structure of the research households and the experimental and selection stations, and on how their administration is organised. As practice shows, internal control should not be limited to checking the legality of operations and the degree to which planned targets are met. Strategic direction and control over the state of economic processes at the successive stages of their implementation is the key to the stable functioning of scientific research establishments, and a correctly designed system of internal control can prevent a considerable volume of violations even without external influence. These problems define the relevance of the article and the purpose of further research. An assessment of the internal control system of the Institute of Bioenergy Crops and Sugar Beet (IBC&SB) NAAS, which has its own network of research stations and research households located in various agricultural regions, shows that, in order to specify control functions in the management process at the different stages of their implementation, this integrated research institution applies the following components (types) of control: 1. Administrative control – a set of procedures directed at the fulfilment of production and financial plans and of officials' duties; it is carried out automatically within the approved internal organisational and administrative documents. 2. Accounting control – a set of procedures for identifying deviations of actual indicators from planned ones (accounting records), safeguarding the assets of the enterprise, preventing shortcomings and eliminating risks in its activity, and ensuring the reliability of accounting data and financial statements. 3. Technological control – a set of procedures ensuring compliance with production technology (cultivation of grain crops (except rice), leguminous crops and oil-bearing seed crops, vegetables and cucurbits, root crops, bioenergy crops, and sugar beet), with norms and standards of raw material consumption, and with the quality of manufactured goods. The organisation of internal control, as a rule, comprises several stages, the identification of which depends on the purposes, tasks, and expediency of the check. The stages of internal control carried out at the Institute of Bioenergy Crops and Sugar Beet NAAS are the following: Stage I – planning the control of the efficiency of budgetary funds usage. Within this stage of internal control at IBC&SB NAAS, it is expedient to form a list of priority directions of control on the basis of the strategic objectives of the institution.
In addition, compliance with duty regulations governing the distribution of functional authority among heads of departments must be checked in order to determine whether duties are distributed adequately and whether personnel are competent. Stage II – carrying out control actions within the procedure for monitoring the effective management of budgetary funds. This stage includes checking: 1) the accuracy of the results of implementing the statement of estimates; 2) the accuracy of accounting for and use of the institution's non-current assets and supplies; 3) the accuracy of petty cash management and observance of cash discipline; 4) operations on accounts in banks and the treasury; 5) the reliability and reality of the accounting treatment of accounts receivable and accounts payable; 6) the accuracy of payroll accounting, payroll payment, and mandatory deductions; 7) the accuracy of the accounting treatment and use of the institution's funding; 8) expenditures on the creation of an innovative product; 9) documentation of operations on intellectual property items within capital assets and of expenses on research works. Stage III – summing up the control activity. It includes the approval of reports on the results of the check and control over the implementation of recommendations addressed directly to the control object. According to the conducted research, the efficiency of internal control at the Institute of Bioenergy Crops and Sugar Beet NAAS depends not only on the stages considered but also on measures that stimulate control over time. The analysis of the scientific literature confirms that most scholars classify control by implementation term (timing) into: 1) preliminary in-house control, carried out at the stage of adopting management decisions, prior to certain economic operations; 2) current internal control, which it is expedient to carry out during the reporting period in order to identify deviations and their causes in a timely and expeditious manner; 3) subsequent internal control, directed at checking the lawful and correct execution and recording of the financial and economic transactions presented in primary documents, account books, and reporting forms. The study of the activity of the Institute of Bioenergy Crops and Sugar Beet NAAS revealed the use of only one of these forms, namely subsequent control, which is carried out on the basis of the results of the budgetary institution's activity. The article shows that internal control is one of the most important functions of management, since it provides a check on the execution and efficiency of management decisions, control over the existence and use of resources, and the identification and elimination of deviations in the financial activity of institutions. That is, internal control in integrated research institutions has to ensure the timely introduction of information and allow crisis situations to be prevented quickly so that effective management decisions can be adopted.
The combination of the proposed stages of internal control implementation, methodical procedures, and time-related stimulation measures supports the stable functioning of institutions, including the Institute of Bioenergy Crops and Sugar Beet NAAS, with respect to the preservation and rational use of assets, observance of internal policy, prevention and exposure of violations, the accuracy of accounting records, and the preparation of reliable and accurate financial information.
APA, Harvard, Vancouver, ISO, and other styles
17

Khorin, Alexander, and Ekaterina Voronova. "The Role of Visuals in the Communication Process." Filosofija. Sociologija 32, no. 1 (January 24, 2021). http://dx.doi.org/10.6001/fil-soc.v32i1.4381.

Full text
Abstract:
Information flows, without which it is impossible to imagine modern society and its cultural environment, penetrate all spheres of an individual's life thanks to digital technologies. As a result, social processes undergo drastic changes in the ways and forms of cognition and transformation of the surrounding reality, as well as in the processes of formation and representation of the individual in society. With the development of digital information technologies, significant changes have occurred in the living space of a person of the 21st century and in their perception of time. The Internet has formed its own unique information space, with its own special social and technological characteristics. A virtual network is a space for global communication and high-speed data dissemination, as well as an environment for people's interaction that is not limited by the barriers of traditional mass media. All this has created a new type of culture – the culture of virtual reality, since our reality now consists largely of everyday virtual experiences.
APA, Harvard, Vancouver, ISO, and other styles
18

De Caro, Valerio. "Resilience as an investigation of the relationship between architecture and nature." ARSNET 2, no. 1 (April 30, 2022). http://dx.doi.org/10.7454/arsnet.v2i1.46.

Full text
Abstract:
Recent research on the architectural project seems to have brought renewed attention to natural elements and phenomena, highlighting a particular interest in interpreting the dynamic processes of the environment and of all the plant and natural entities that inhabit it. In a logic of resilience, which should lead to limiting the consumption of resources, the attempt to replace technological devices with natural mechanisms tested throughout natural history appears to be a losing battle from the start. Yet in the past humankind has shown that it is able to give interpretations of nature that are not limited to a mechanistic replacement of natural processes, but are based on the representation and reformulation of organisms in relation to matter and space – space understood as an architectural essence which, from its relationship with nature, gives back meaningful forms and experiences. The analysis of a series of case studies, from the Lascaux caves to the contemporary reinterpretations by Antón García-Abril and Terunobu Fujimori, demonstrates how nature can be a source of inspiration for innovative design research without necessarily sliding into a purely technological drift.
APA, Harvard, Vancouver, ISO, and other styles
19

Dieter, Michael. "Amazon Noir." M/C Journal 10, no. 5 (October 1, 2007). http://dx.doi.org/10.5204/mcj.2709.

Full text
Abstract:
There is no diagram that does not also include, besides the points it connects up, certain relatively free or unbounded points, points of creativity, change and resistance, and it is perhaps with these that we ought to begin in order to understand the whole picture. (Deleuze, “Foucault” 37) Monty Cantsin: Why do we use a pervert software robot to exploit our collective consensual mind? Letitia: Because we want the thief to be a digital entity. Monty Cantsin: But isn’t this really blasphemic? Letitia: Yes, but god – in our case a meta-cocktail of authorship and copyright – can not be trusted anymore. (Amazon Noir, “Dialogue”) In 2006, some 3,000 digital copies of books were silently “stolen” from online retailer Amazon.com by targeting vulnerabilities in the “Search inside the Book” feature from the company’s website. Over several weeks, between July and October, a specially designed software program bombarded the Search Inside!™ interface with multiple requests, assembling full versions of texts and distributing them across peer-to-peer networks (P2P). Rather than a purely malicious and anonymous hack, however, the “heist” was publicised as a tactical media performance, Amazon Noir, produced by self-proclaimed super-villains Paolo Cirio, Alessandro Ludovico, and Ubermorgen.com. While controversially directed at highlighting the infrastructures that materially enforce property rights and access to knowledge online, the exploit additionally interrogated its own interventionist status as theoretically and politically ambiguous. That the “thief” was represented as a digital entity or machinic process (operating on the very terrain where exchange is differentiated) and the emergent act of “piracy” was fictionalised through the genre of noir conveys something of the indeterminacy or immensurability of the event. In this short article, I discuss some political aspects of intellectual property in relation to the complexities of Amazon Noir, particularly in the context of control, technological action, and discourses of freedom. Software, Piracy As a force of distribution, the Internet is continually subject to controversies concerning flows and permutations of agency. While often directed by discourses cast in terms of either radical autonomy or control, the technical constitution of these digital systems is more regularly a case of establishing structures of operation, codified rules, or conditions of possibility; that is, of guiding social processes and relations (McKenzie, “Cutting Code” 1-19). Software, as a medium through which such communication unfolds and becomes organised, is difficult to conceptualise as a result of being so event-orientated. There lies a complicated logic of contingency and calculation at its centre, a dimension exacerbated by the global scale of informational networks, where the inability to comprehend an environment that exceeds the limits of individual experience is frequently expressed through desires, anxieties, paranoia. Unsurprisingly, cautionary accounts and moral panics on identity theft, email fraud, pornography, surveillance, hackers, and computer viruses are as commonplace as those narratives advocating user interactivity. 
When analysing digital systems, cultural theory often struggles to describe forces that dictate movement and relations between disparate entities composed by code, an aspect heightened by the intensive movement of informational networks where differences are worked out through the constant exposure to unpredictability and chance (Terranova, “Communication beyond Meaning”). Such volatility partially explains the recent turn to distribution in media theory, as once durable networks for constructing economic difference – organising information in space and time (“at a distance”), accelerating or delaying its delivery – appear contingent, unstable, or consistently irregular (Cubitt 194). Attributing actions to users, programmers, or the software itself is a difficult task when faced with these states of co-emergence, especially in the context of sharing knowledge and distributing media content. Exchanges between corporate entities, mainstream media, popular cultural producers, and legal institutions over P2P networks represent an ongoing controversy in this respect, with numerous stakeholders competing between investments in property, innovation, piracy, and publics. Beginning to understand this problematic landscape is an urgent task, especially in relation to the technological dynamics that organised and propel such antagonisms. In the influential fragment, “Postscript on the Societies of Control,” Gilles Deleuze describes the historical passage from modern forms of organised enclosure (the prison, clinic, factory) to the contemporary arrangement of relational apparatuses and open systems as being materially provoked by – but not limited to – the mass deployment of networked digital technologies. In his analysis, the disciplinary mode most famously described by Foucault is spatially extended to informational systems based on code and flexibility. According to Deleuze, these cybernetic machines are connected into apparatuses that aim for intrusive monitoring: “in a control-based system nothing’s left alone for long” (“Control and Becoming” 175). Such a constant networking of behaviour is described as a shift from “molds” to “modulation,” where controls become “a self-transmuting molding changing from one moment to the next, or like a sieve whose mesh varies from one point to another” (“Postscript” 179). Accordingly, the crisis underpinning civil institutions is consistent with the generalisation of disciplinary logics across social space, forming an intensive modulation of everyday life, but one ambiguously associated with socio-technical ensembles. The precise dynamics of this epistemic shift are significant in terms of political agency: while control implies an arrangement capable of absorbing massive contingency, a series of complex instabilities actually mark its operation. Noise, viral contamination, and piracy are identified as key points of discontinuity; they appear as divisions or “errors” that force change by promoting indeterminacies in a system that would otherwise appear infinitely calculable, programmable, and predictable. The rendering of piracy as a tactic of resistance, a technique capable of levelling out the uneven economic field of global capitalism, has become a predictable catch-cry for political activists. 
In their analysis of multitude, for instance, Antonio Negri and Michael Hardt describe the contradictions of post-Fordist production as conjuring forth a tendency for labour to “become common.” That is, as productivity depends on flexibility, communication, and cognitive skills, directed by the cultivation of an ideal entrepreneurial or flexible subject, the greater the possibilities for self-organised forms of living that significantly challenge its operation. In this case, intellectual property exemplifies such a spiralling paradoxical logic, since “the infinite reproducibility central to these immaterial forms of property directly undermines any such construction of scarcity” (Hardt and Negri 180). The implications of the filesharing program Napster, accordingly, are read as not merely directed toward theft, but in relation to the private character of the property itself; a kind of social piracy is perpetuated that is viewed as radically recomposing social resources and relations. Ravi Sundaram, a co-founder of the Sarai new media initiative in Delhi, has meanwhile drawn attention to the existence of “pirate modernities” capable of being actualised when individuals or local groups gain illegitimate access to distributive media technologies; these are worlds of “innovation and non-legality,” of electronic survival strategies that partake in cultures of dispersal and escape simple classification (94). Meanwhile, pirate entrepreneurs Magnus Eriksson and Rasmus Fleische – associated with the notorious Piratbyrn – have promoted the bleeding away of Hollywood profits through fully deployed P2P networks, with the intention of pushing filesharing dynamics to an extreme in order to radicalise the potential for social change (“Copies and Context”). From an aesthetic perspective, such activist theories are complemented by the affective register of appropriation art, a movement broadly conceived in terms of antagonistically liberating knowledge from the confines of intellectual property: “those who pirate and hijack owned material, attempting to free information, art, film, and music – the rhetoric of our cultural life – from what they see as the prison of private ownership” (Harold 114). These “unruly” escape attempts are pursued through various modes of engagement, from experimental performances with legislative infrastructures (i.e. Kembrew McLeod’s patenting of the phrase “freedom of expression”) to musical remix projects, such as the work of Negativland, John Oswald, RTMark, Detritus, Illegal Art, and the Evolution Control Committee. Amazon Noir, while similarly engaging with questions of ownership, is distinguished by specifically targeting information communication systems and finding “niches” or gaps between overlapping networks of control and economic governance. Hans Bernhard and Lizvlx from Ubermorgen.com (meaning ‘Day after Tomorrow,’ or ‘Super-Tomorrow’) actually describe their work as “research-based”: “we not are opportunistic, money-driven or success-driven, our central motivation is to gain as much information as possible as fast as possible as chaotic as possible and to redistribute this information via digital channels” (“Interview with Ubermorgen”). This has led to experiments like Google Will Eat Itself (2005) and the construction of the automated software thief against Amazon.com, as process-based explorations of technological action. 
Agency, Distribution Deleuze’s “postscript” on control has proven massively influential for new media art by introducing a series of key questions on power (or desire) and digital networks. As a social diagram, however, control should be understood as a partial rather than totalising map of relations, referring to the augmentation of disciplinary power in specific technological settings. While control is a conceptual regime that refers to open-ended terrains beyond the architectural locales of enclosure, implying a move toward informational networks, data solicitation, and cybernetic feedback, there remains a peculiar contingent dimension to its limits. For example, software code is typically designed to remain cycling until user input is provided. There is a specifically immanent and localised quality to its actions that might be taken as exemplary of control as a continuously modulating affective materialism. The outcome is a heightened sense of bounded emergencies that are either flattened out or absorbed through reconstitution; however, these are never linear gestures of containment. As Tiziana Terranova observes, control operates through multilayered mechanisms of order and organisation: “messy local assemblages and compositions, subjective and machinic, characterised by different types of psychic investments, that cannot be the subject of normative, pre-made political judgments, but which need to be thought anew again and again, each time, in specific dynamic compositions” (“Of Sense and Sensibility” 34). This event-orientated vitality accounts for the political ambitions of tactical media as opening out communication channels through selective “transversal” targeting. Amazon Noir, for that reason, is pitched specifically against the material processes of communication. The system used to harvest the content from “Search inside the Book” is described as “robot-perversion-technology,” based on a network of four servers around the globe, each with a specific function: one located in the United States that retrieved (or “sucked”) the books from the site, one in Russia that injected the assembled documents onto P2P networks and two in Europe that coordinated the action via intelligent automated programs (see “The Diagram”). According to the “villains,” the main goal was to steal all 150,000 books from Search Inside!™ then use the same technology to steal books from the “Google Print Service” (the exploit was limited only by the amount of technological resources financially available, but there are apparent plans to improve the technique by reinvesting the money received through the settlement with Amazon.com not to publicise the hack). In terms of informational culture, this system resembles a machinic process directed at redistributing copyright content; “The Diagram” visualises key processes that define digital piracy as an emergent phenomenon within an open-ended and responsive milieu. That is, the static image foregrounds something of the activity of copying being a technological action that complicates any analysis focusing purely on copyright as content. In this respect, intellectual property rights are revealed as being entangled within information architectures as communication management and cultural recombination – dissipated and enforced by a measured interplay between openness and obstruction, resonance and emergence (Terranova, “Communication beyond Meaning” 52). 
To understand data distribution requires an acknowledgement of these underlying nonhuman relations that allow for such informational exchanges. It requires an understanding of the permutations of agency carried along by digital entities. According to Lawrence Lessig’s influential argument, code is not merely an object of governance, but has an overt legislative function itself. Within the informational environments of software, “a law is defined, not through a statue, but through the code that governs the space” (20). These points of symmetry are understood as concretised social values: they are material standards that regulate flow. Similarly, Alexander Galloway describes computer protocols as non-institutional “etiquette for autonomous agents,” or “conventional rules that govern the set of possible behavior patterns within a heterogeneous system” (7). In his analysis, these agreed-upon standardised actions operate as a style of management fostered by contradiction: progressive though reactionary, encouraging diversity by striving for the universal, synonymous with possibility but completely predetermined, and so on (243-244). Needless to say, political uncertainties arise from a paradigm that generates internal material obscurities through a constant twinning of freedom and control. For Wendy Hui Kyong Chun, these Cold War systems subvert the possibilities for any actual experience of autonomy by generalising paranoia through constant intrusion and reducing social problems to questions of technological optimisation (1-30). In confrontation with these seemingly ubiquitous regulatory structures, cultural theory requires a critical vocabulary differentiated from computer engineering to account for the sociality that permeates through and concatenates technological realities. In his recent work on “mundane” devices, software and code, Adrian McKenzie introduces a relevant analytic approach in the concept of technological action as something that both abstracts and concretises relations in a diffusion of collective-individual forces. Drawing on the thought of French philosopher Gilbert Simondon, he uses the term “transduction” to identify a key characteristic of technology in the relational process of becoming, or ontogenesis. This is described as bringing together disparate things into composites of relations that evolve and propagate a structure throughout a domain, or “overflow existing modalities of perception and movement on many scales” (“Impersonal and Personal Forces in Technological Action” 201). Most importantly, these innovative diffusions or contagions occur by bridging states of difference or incompatibilities. Technological action, therefore, arises from a particular type of disjunctive relation between an entity and something external to itself: “in making this relation, technical action changes not only the ensemble, but also the form of life of its agent. Abstraction comes into being and begins to subsume or reconfigure existing relations between the inside and outside” (203). Here, reciprocal interactions between two states or dimensions actualise disparate potentials through metastability: an equilibrium that proliferates, unfolds, and drives individuation. While drawing on cybernetics and dealing with specific technological platforms, McKenzie’s work can be extended to describe the significance of informational devices throughout control societies as a whole, particularly as a predictive and future-orientated force that thrives on staged conflicts. 
Moreover, being a non-deterministic technical theory, it additionally speaks to new tendencies in regimes of production that harness cognition and cooperation through specially designed infrastructures to enact persistent innovation without any end-point, final goal or natural target (Thrift 283-295). Here, the interface between intellectual property and reproduction can be seen as a site of variation that weaves together disparate objects and entities by imbrication in social life itself. These are specific acts of interference that propel relations toward unforeseen conclusions by drawing on memories, attention spans, material-technical traits, and so on. The focus lies on performance, context, and design “as a continual process of tuning arrived at by distributed aspiration” (Thrift 295). This later point is demonstrated in recent scholarly treatments of filesharing networks as media ecologies. Kate Crawford, for instance, describes the movement of P2P as processual or adaptive, comparable to technological action, marked by key transitions from partially decentralised architectures such as Napster, to the fully distributed systems of Gnutella and seeded swarm-based networks like BitTorrent (30-39). Each of these technologies can be understood as a response to various legal incursions, producing radically dissimilar socio-technological dynamics and emergent trends for how agency is modulated by informational exchanges. Indeed, even these aberrant formations are characterised by modes of commodification that continually spillover and feedback on themselves, repositioning markets and commodities in doing so, from MP3s to iPods, P2P to broadband subscription rates. However, one key limitation of this ontological approach is apparent when dealing with the sheer scale of activity involved, where mass participation elicits certain degrees of obscurity and relative safety in numbers. This represents an obvious problem for analysis, as dynamics can easily be identified in the broadest conceptual sense, without any understanding of the specific contexts of usage, political impacts, and economic effects for participants in their everyday consumptive habits. Large-scale distributed ensembles are “problematic” in their technological constitution, as a result. They are sites of expansive overflow that provoke an equivalent individuation of thought, as the Recording Industry Association of America observes on their educational website: “because of the nature of the theft, the damage is not always easy to calculate but not hard to envision” (“Piracy”). The politics of the filesharing debate, in this sense, depends on the command of imaginaries; that is, being able to conceptualise an overarching structural consistency to a persistent and adaptive ecology. As a mode of tactical intervention, Amazon Noir dramatises these ambiguities by framing technological action through the fictional sensibilities of narrative genre. Ambiguity, Control The extensive use of imagery and iconography from “noir” can be understood as an explicit reference to the increasing criminalisation of copyright violation through digital technologies. However, the term also refers to the indistinct or uncertain effects produced by this tactical intervention: who are the “bad guys” or the “good guys”? Are positions like ‘good’ and ‘evil’ (something like freedom or tyranny) so easily identified and distinguished? 
As Paolo Cirio explains, this political disposition is deliberately kept obscure in the project: “it’s a representation of the actual ambiguity about copyright issues, where every case seems to lack a moral or ethical basis” (“Amazon Noir Interview”). While user communications made available on the site clearly identify culprits (describing the project as jeopardising arts funding, as both irresponsible and arrogant), the self-description of the artists as political “failures” highlights the uncertainty regarding the project’s qualities as a force of long-term social renewal: Lizvlx from Ubermorgen.com had daily shootouts with the global mass-media, Cirio continuously pushed the boundaries of copyright (books are just pixels on a screen or just ink on paper), Ludovico and Bernhard resisted kickback-bribes from powerful Amazon.com until they finally gave in and sold the technology for an undisclosed sum to Amazon. Betrayal, blasphemy and pessimism finally split the gang of bad guys. (“Press Release”) Here, the adaptive and flexible qualities of informatic commodities and computational systems of distribution are knowingly posited as critical limits; in a certain sense, the project fails technologically in order to succeed conceptually. From a cynical perspective, this might be interpreted as guaranteeing authenticity by insisting on the useless or non-instrumental quality of art. However, through this process, Amazon Noir illustrates how forces confined as exterior to control (virality, piracy, noncommunication) regularly operate as points of distinction to generate change and innovation. Just as hackers are legitimately employed to challenge the durability of network exchanges, malfunctions are relied upon as potential sources of future information. Indeed, the notion of demonstrating ‘autonomy’ by illustrating the shortcomings of software is entirely consistent with the logic of control as a modulating organisational diagram. These so-called “circuit breakers” are positioned as points of bifurcation that open up new systems and encompass a more general “abstract machine” or tendency governing contemporary capitalism (Parikka 300). As a consequence, the ambiguities of Amazon Noir emerge not just from the contrary articulation of intellectual property and digital technology, but additionally through the concept of thinking “resistance” simultaneously with regimes of control. This tension is apparent in Galloway’s analysis of the cybernetic machines that are synonymous with the operation of Deleuzian control societies – i.e. “computerised information management” – where tactical media are posited as potential modes of contestation against the tyranny of code, “able to exploit flaws in protocological and proprietary command and control, not to destroy technology, but to sculpt protocol and make it better suited to people’s real desires” (176). While pushing a system into a state of hypertrophy to reform digital architectures might represent a possible technique that produces a space through which to imagine something like “our” freedom, it still leaves unexamined the desire for reformation itself as nurtured by and produced through the coupling of cybernetics, information theory, and distributed networking. This draws into focus the significance of McKenzie’s Simondon-inspired cybernetic perspective on socio-technological ensembles as being always-already predetermined by and driven through asymmetries or difference. 
As Chun observes, consequently, there is no paradox between resistance and capture since “control and freedom are not opposites, but different sides of the same coin: just as discipline served as a grid on which liberty was established, control is the matrix that enables freedom as openness” (71). Why “openness” should be so readily equated with a state of being free represents a major unexamined presumption of digital culture, and leads to the associated predicament of attempting to think of how this freedom has become something one cannot not desire. If Amazon Noir has political currency in this context, however, it emerges from a capacity to recognise how informational networks channel desire, memories, and imaginative visions rather than just cultivated antagonisms and counterintuitive economics. As a final point, it is worth observing that the project was initiated without publicity until the settlement with Amazon.com. There is, as a consequence, nothing to suggest that this subversive “event” might have actually occurred, a feeling heightened by the abstractions of software entities. To the extent that we believe in “the big book heist,” that such an act is even possible, is a gauge through which the paranoia of control societies is illuminated as a longing or desire for autonomy. As Hakim Bey observes in his conceptualisation of “pirate utopias,” such fleeting encounters with the imaginaries of freedom flow back into the experience of the everyday as political instantiations of utopian hope. Amazon Noir, with all its underlying ethical ambiguities, presents us with a challenge to rethink these affective investments by considering our profound weaknesses to master the complexities and constant intrusions of control. It provides an opportunity to conceive of a future that begins with limits and limitations as immanently central, even foundational, to our deep interconnection with socio-technological ensembles. References “Amazon Noir – The Big Book Crime.” http://www.amazon-noir.com/>. Bey, Hakim. T.A.Z.: The Temporary Autonomous Zone, Ontological Anarchy, Poetic Terrorism. New York: Autonomedia, 1991. Chun, Wendy Hui Kyong. Control and Freedom: Power and Paranoia in the Age of Fibre Optics. Cambridge, MA: MIT Press, 2006. Crawford, Kate. “Adaptation: Tracking the Ecologies of Music and Peer-to-Peer Networks.” Media International Australia 114 (2005): 30-39. Cubitt, Sean. “Distribution and Media Flows.” Cultural Politics 1.2 (2005): 193-214. Deleuze, Gilles. Foucault. Trans. Seán Hand. Minneapolis: U of Minnesota P, 1986. ———. “Control and Becoming.” Negotiations 1972-1990. Trans. Martin Joughin. New York: Columbia UP, 1995. 169-176. ———. “Postscript on the Societies of Control.” Negotiations 1972-1990. Trans. Martin Joughin. New York: Columbia UP, 1995. 177-182. Eriksson, Magnus, and Rasmus Fleische. “Copies and Context in the Age of Cultural Abundance.” Online posting. 5 June 2007. Nettime 25 Aug 2007. Galloway, Alexander. Protocol: How Control Exists after Decentralization. Cambridge, MA: MIT Press, 2004. Hardt, Michael, and Antonio Negri. Multitude: War and Democracy in the Age of Empire. New York: Penguin Press, 2004. Harold, Christine. OurSpace: Resisting the Corporate Control of Culture. Minneapolis: U of Minnesota P, 2007. Lessig, Lawrence. Code and Other Laws of Cyberspace. New York: Basic Books, 1999. McKenzie, Adrian. Cutting Code: Software and Sociality. New York: Peter Lang, 2006. ———. 
“The Strange Meshing of Impersonal and Personal Forces in Technological Action.” Culture, Theory and Critique 47.2 (2006): 197-212. Parikka, Jussi. “Contagion and Repetition: On the Viral Logic of Network Culture.” Ephemera: Theory & Politics in Organization 7.2 (2007): 287-308. “Piracy Online.” Recording Industry Association of America. 28 Aug 2007. <http://www.riaa.com/physicalpiracy.php>. Sundaram, Ravi. “Recycling Modernity: Pirate Electronic Cultures in India.” Sarai Reader 2001: The Public Domain. Delhi, Sarai Media Lab, 2001. 93-99. <http://www.sarai.net>. Terranova, Tiziana. “Communication beyond Meaning: On the Cultural Politics of Information.” Social Text 22.3 (2004): 51-73. ———. “Of Sense and Sensibility: Immaterial Labour in Open Systems.” DATA Browser 03 – Curating Immateriality: The Work of the Curator in the Age of Network Systems. Ed. Joasia Krysa. New York: Autonomedia, 2006. 27-38. Thrift, Nigel. “Re-inventing Invention: New Tendencies in Capitalist Commodification.” Economy and Society 35.2 (2006): 279-306. Citation reference for this article MLA Style Dieter, Michael. "Amazon Noir: Piracy, Distribution, Control." M/C Journal 10.5 (2007). [your date of access] <http://journal.media-culture.org.au/0710/07-dieter.php>. APA Style Dieter, M. (Oct. 2007) "Amazon Noir: Piracy, Distribution, Control," M/C Journal, 10(5). Retrieved [your date of access] from <http://journal.media-culture.org.au/0710/07-dieter.php>.
APA, Harvard, Vancouver, ISO, and other styles
20

Sofoulis, Zoé. "Machinic Musings with Mumford." M/C Journal 2, no. 6 (September 1, 1999). http://dx.doi.org/10.5204/mcj.1781.

Full text
Abstract:
What is a machine? As part of his answer to this, historian and philosopher of technology Lewis Mumford cites a classic definition: "a machine is a combination of resistant bodies so arranged that by their means the mechanical forces of nature can be compelled to do work accompanied by certain determinant motions" (Reuleaux [1876], qtd. in Mumford, Technics and Civilisation 9). Mumford's own definition is focussed on machines as part of a technological continuum between human body and automaton: Machines have developed out of a complex of non-organic agents for converting energy, for performing work, for enlarging the mechanical or sensory capacities of the human body, or for reducing to a mensurable order and regularity the processes of life. The automaton is the last step in a process that began with the use of one part or another of the human body as a tool. (9-10) The tool and the machine can be distinguished along this technological continuum, with the tool more dependent on "the skill and motive power of the operator", subject to "manipulation", and potentially more flexible in its uses, whereas the machine lends itself more to "automatic action" of a specialised kind. However, it is difficult to ultimately separate them, since the embodied skill of the tool-user becomes more mechanical and reflexive with practice (Technics and Civilisation 10), while the machine also evolves along increasingly organic lines (367), and there are common examples of hybrid machine-tools like the lathe or drill, which combine "the accuracy of the finest machine ... with the skilled attendance of the workman" (10). A powerfully attractive feature of the computer is that it is an effective hybrid of machine and tool: like a machine it performs many specialised functions at super-human speed and accuracy on command, but like a tool it is flexible and adaptable (through add-on software and plug-in peripherals) to a seemingly endless variety of users and uses. Fascinating Assemblages The automatic machine ... involves the notion of an external source of power, a more or less complicated inter-relation of parts, and a limited kind of activity. From the beginning the machine was a sort of minor organism, designed to perform a single set of functions. (Mumford, Technics and Civilisation 11) The autonomy of the machine is perhaps its most fascinating aspect. That the machine is an assemblage of parts and restricted functions -- a "minor organism" as Mumford puts it -- suggests to us a body. There is something ineluctably erotic about scenes of lubricated pistons moving in and out of cylinders, or greased gear wheels moving around each other, and a masturbatory energy seems to be involved in the machine that repetitively and by itself performs the same limited actions over and over and over. While there are parallels between masculine masturbation and machinic repetition, there are also associations with femininity. As Andreas Huyssen pointed out, the modern machine became associated with a dangerous female sexuality and took the place of the early moderns' untamed Mother Nature as the principal representative of non-human forces with autonomy and agency that could evade human control. 
But arguably, expressed fears of machinic autonomy are the flip side of a wish for it, arising from masculine reproductive fantasies that have been played out in technoscience by generations of fictional and real-life Frankensteins fanatically seeking to create artificial life in the form of technoscientific brainchildren (who are nevertheless often neglected and left to run wild at birth). At a conscious level, machines express what may be interpreted as anal-sadistic desires for order, regularity and control, but unconsciously there is an element of masochistic pleasure in being passive, in yielding up control to the machine, in letting it set the scene and determine the actions and roles for the humans as well as non-humans (Sofia, "Contested Zones", and "Mythic Machine" 44-8). Machinic Zeal What is the use of conquering nature if we fall a prey to nature in the form of unbridled men? What is the use of equipping mankind with mighty powers to move and build and communicate, if the final result of this secure food supply and this excellent organisation is to enthrone the morbid impulses of a thwarted humanity? (Mumford, Technics and Civilisation 366) With his emphasis on the social context and drives towards technology, Mumford (Technics and Civilisation 364-5) suggests that while some kinds of machines have existed for thousands of years, what we have come to think of as the mechanical age only arose with the widespread adoption of the machine as a way of securing order, regularity and calculability of physical and human resources, coupled with the ideological shift which made the machine into "a goal of desire" and an object of almost obsessive veneration from the mid-18th century to the early 20th century. Now, he said (writing first in the early 1930s) faith in the machine has been somewhat shaken, and it is no longer seen as "the paragon of progress" but as "merely a series of instruments" to be used when useful; yet despite this loss of faith the machine in capitalist contexts continues to be "over-worked, over-enlarged, over-exploited because of the possibility of making money out of it" (Technics and Civilisation 367). Almost seventy years after Mumford was writing, the obsessive zeal for the machine still has not completely disappeared, but has been displaced from giant smoke-puffing steel assemblages, whirling cogs and gearwheels, or the motors driving trains, cars and planes, and onto the silicon, plastic and light of computers (whose machineries of production and assembly are largely hidden off-shore to the bulk of users, thereby producing the illusion of "post-industrial" societies). The computer is now the paragon of progress and has become the "defining technology" of our age (Bolter), its place reinforced by an actively boosterist popular press (e.g. popular computing magazines; regular computer supplements in newspapers). Sociotechnical Not Posthuman Mumford continually makes the point that questions posed by/in technology are never answerable only technologically. It always comes down to human choices, and even when the results of these "are uncontrollable they are not external" to human culture: Choice manifests itself in society in small increments and moment-to-moment decisions as well as in loud dramatic struggles; and he who does not see choice in the development of the machine merely betrays his incapacity to observe cumulative effects until they are bunched together so closely that they seem completely external and impersonal. 
(Mumford, Technics and Civilisation 6) In a certain way Mumford's perspective anticipates actor-network theory, which looks at artefacts -- including machines -- as parts of sociotechnical networks that involve human decisions, including about the distribution of agency to non-humans. Even in the most automated machine, Mumford argues "there must intervene somewhere, at the beginning and end of the process ... the conscious participation of a human agent" (10). Actor-network studies of the development of scientific and technological artefacts aim in part to critique the sense of the external, impersonal or inevitable in scientific and technical 'progress' by insisting that "things might have been otherwise" (Bijker & Law 3), not just at the beginning and end, but all the way through the process of an artefact's development and use. The artefact is studied as a particular outcome of a set of decisions and performances made in the midst of contingencies affecting human and non-human actors with conflicting goals and contested powers within a dynamic sociotechnical network. Although actor-network theory is very interested in non-human agents, it does not, as do some recent participants in and theorists of cyberculture, celebrate the so-called post-human. There can be no agentic machines without there having been human competencies downloaded into them; there can be no technical order that is not also social and cultural. As Latour argues, the modernist work of purification has tried vainly to impose a separation between the social and technical, denying their mutual inextricability. From this Latourian perspective, the notion of the "post-human" is not, as it appears to be, post modern, but thoroughly modern. It carries through the quintessentially modernist project of denying after the fact the human agency and capacities that have been invested in producing hybrid artefacts which are then proclaimed as extra-human; it denies the cumulative effects of sociotechnical choices and instead represents the machinic imperative as somehow impersonal and external to human affairs. The notion of the posthuman can readily reinforce the pervasive popular cultural myths of technological inevitability and dominance, conveniently for those humans and corporations who actually do profit from decisions they make about developing and marketing machines of increasing autonomy, intelligence and subtlety. Machines and Provision The role of the machine has been overemphasised in histories of technology, according to Mumford. For aside from tools and machines which perform dynamic actions, there are technologies of containment and supply, which he categorizes as utensils (like baskets or pots), apparatus (such as dye vats, brick kilns), utilities (reservoirs, aqueducts, roads, buildings) and the modern power utility (railroad tracks, electric transmission lines). Some of the most effective adaptations of the environment came, not from the invention of machines, but from the equally admirable invention of utensils, apparatus, and utilities. ... But since people's attention is directed most easily to the noisier and more active parts of the environment, the role of the utility and the apparatus has been neglected ... both [tool and utensil] have played an enormous part in the development of the modern environment and at no stage in history can the two means of adaptation be split apart. Every technological complex includes both: not least our modern one. (Technics and Civilisation, 11-2). 
The development of various utensils and apparatus for storage (urns, granaries) and flow (irrigation, aqueducts) was essential for the emergence of settled agricultural communities in the neolithic period (Mumford, Technics and Human Development 140-1). As I explore in a related article (Sofia, "Container"), Mumford finds a prudish sexism in the relative neglect of technologies evocative of the female organs of storage, nutrition and transformation, compared with the overemphasis on technologies that are extensions of the muscular masculine body (Technics and Human Development, 140). However, the contrast between dynamic, noisy, active and autonomous machines, and passive, quiet, backgrounded containers cannot be sustained. For one the utensil even in its most basic form, has something machinic about it: a container can perform its function autonomously, without needing manipulation like a tool. Further, it is arguable that holding or containing is not simply a property of a shaped space, but a form of action in itself. Moreover in practice there are many hybrids of machine and utensil or utility, for example in domestic technologies like the food processor, a container with a machine-driven blade, or the washing machine, featuring a tub with mechanical agitation and rotary motion. Although Mumford is primarily interested in the machine, he observes that as modern "neotechnics" proceeds to develop ever more sophisticated machinery, so does it evolve more complex technologies of containment, as described in this passage which depicts both machines and utilities as active agents: Behind the façade [of the crisp lines of steel and glass that define the modern built environment] are rows and rows of machines, weaving cotton, transporting coal ... [etc.], machines with steel fingers and lean muscular arms, with perfect reflexes, sometimes even with electric eyes. Alongside them are the new utilities -- the coke oven, the transformer, the dye vats -- chemically cooperating with these mechanical processes, assembling new qualities in chemical compounds and materials. Every effective part in this whole environment represents an effort of the collective mind to widen the province of order and control and provision. (Technics and Civilisation, 356) Another way of getting the over-emphasised machine back into proportion is to look more closely at what it is used for, what purposes it serves. Mumford writes of the machine as part of the effort to produce "order and regularity" into the processes of life (10); to "widen the province of order and control and provision" (356) or to produce a "secure food supply and ... excellent organisation" (366). In other words, the machine is serving the goals typically associated with utensils, utilities and apparatus: smoothing out fluctuations in supply and distributing resources more evenly. Likewise Mumford suggests that in the back of developments of machine and tool is the effort to adapt by extending the body's powers and/or by altering the environment, so that, for example, instead of a physiological adaptation to cold through hair growth or hibernation, "there is an environmental adaptation, such as that made possible by the use of clothes and the erection of shelters" (10). These technologies are not machines, but container technologies, in the province of what philosopher of technology Don Ihde would call "background technics". We can think of the shift in emphasis here in relation to the example of road works. 
The large machines for bulldozing a path and laying down layers of road surface are very impressive in their size, power and technical capacity. But the road surface could not be laid down without there being technologies (including hybrids of machine and container, like the pick-up truck) for transporting, storing and mixing the materials used. And when it is done, the big machines lumber off elsewhere, and what we have before us is a road, a utility which facilitates orderly communication, transport and the supply of people and materials. In other words, these machines have served the goal of provisioning. The machine can enthral us with its autonomy, its alterity, its thingness, but as Heidegger has claimed, even such a powerful and seemingly stand-alone machine as a plane on a runway ready for take-off is ultimately just a "completely unautonomous" element when considered as part of a global system ordered "to ensure the possibility of transportation" (17). Like other modern machines, its own objectness and machinic resistance is dissolved as it becomes part of the "standing reserve", which can be understood as a macro-technology of provisioning through a matrix of mobilisable human and non-human resources. In the broader project of which this piece is a fragment, I want to investigate more closely the role and relative importance of machines compared to other kinds of equipment, especially for containment, supply or provisioning in contemporary technoculture, on the suspicion that it is apparatus and utilities rather than machines that define our contemporary lifeworld. References Bijker, Wiebe E., and John Law. General Introduction. Shaping Technology/Building Society: Studies in Sociotechnical Change. Eds. Bijker and Law. Cambridge, Mass.: MIT P, 1992. Bolter, Jay David. "The Computer as a Defining Technology." Computers in the Human Context: Information Technology, Production, and People. Ed. Tom Forester. Oxford: Basil Blackwell, 1989. Heidegger, Martin. "The Question Concerning Technology." The Question Concerning Technology and Other Essays. Trans. William Lovitt. New York: Harper & Row, 1977. Andreas Huyssen. "The Vamp and the Machine: Technology and Sexuality in Fritz Lang's Metropolis." New German Critique 24-25 (1982), 221-37. Also in Huyssen. After the Great Divide. Bloomington: Indiana UP, 1986. Ihde, Don. Technology and the Lifeworld: From Garden to Earth. Bloomington: Indiana UP, 1990. Latour, Bruno. We Have Never Been Modern. Trans. Catherine Porter. Cambridge, Mass.: Harvard UP, 1993. Mumford, Lewis. Technics and Civilisation. New York: Harcourt Brace Jovanovich, 1962 [1934]. ---. Technics and Human Development. New York: Harcourt Brace & World, 1966. Sofia, Zoë. "Container Technologies." Hypatia, Spring 2000 (forthcoming). ---. "Contested Zones: Futurity and Technological Art." Leonardo: Journal of the International Society for the Arts, Sciences, and Technology 29.1 (1996): 59-66. ---. "The Mythic Machine: Gendered Irrationalities and Computer Culture." Education/Technology/Power: Educational Computing as a Social Practice. Eds. Hank Bromley and Michael W. Apple. Albany NY: SUNY, 1998. Citation reference for this article MLA style: Zoë Sofoulis. "Machinic Musings with Mumford." M/C: A Journal of Media and Culture 2.6 (1999). [your date of access] <http://www.uq.edu.au/mc/9909/mumford.php>. Chicago style: Zoë Sofoulis, "Machinic Musings with Mumford," M/C: A Journal of Media and Culture 2, no. 
6 (1999), <http://www.uq.edu.au/mc/9909/mumford.php> ([your date of access]). APA style: Zoë Sofoulis. (1999) Machinic musings with Mumford. M/C: A Journal of Media and Culture 2(6). <http://www.uq.edu.au/mc/9909/mumford.php> ([your date of access]).
APA, Harvard, Vancouver, ISO, and other styles
21

Madsen, Signe Mørk. "Gaining customer centric understanding of retail displays for future innovations." International Journal of Retail & Distribution Management ahead-of-print, ahead-of-print (December 16, 2020). http://dx.doi.org/10.1108/ijrdm-08-2019-0280.

Full text
Abstract:
Purpose: The aim of this research is to provide insights for future display design through understanding the processes of sensemaking of retail displays in digitised retail. Design/methodology/approach: The research applies media elicited interviews and engages thematic analysis to understand agency and advance mental models of retail display. Actor Network Theory (ANT) is engaged to flatten the ontology to traverse digital and physical realms as well as more semiotic sources. Findings: The article presents a system comprising sensemaking processes of displays in digitised retail and traces the blending traits of physical and digital displays, labelling an emerging display terminology applicable across realms. Research limitations/implications: The participating retail concepts' limited resources for technological innovations plus the customers all being local and recruited through the physical store represent this study's limitations. Practical implications: The developed system reveals a process for abandoning the familiar but obsolete understanding of retail displays to replace it with new insights to support the judgement and decision process for designing innovative future displays with a customer centric logic. Originality/value: The article is novel in flattening the ontology of retail displays to fit an organisational interface perception of the link between customer and retailer.
APA, Harvard, Vancouver, ISO, and other styles
22

Slyunyaev, Artem. "CLUSTERING OF THE NATIONAL ECONOMY IN THE DEVELOPMENT OF A NETWORK ECONOMY." Herald UNU. International Economic Relations And World Economy, no. 41 (2022). http://dx.doi.org/10.32782/2413-9971/2022-41-19.

Full text
Abstract:
The article is devoted to the processes of clustering in the conditions of the development of the network economy and post-industrial society. The production of intellectual product and new technologies is becoming more profitable and monopolized by developed countries. Today they are suppliers of a qualitatively new unlimited resource – information and knowledge, receiving at a deliberately reduced cost limited material resources from the countries of the world periphery. This division of labor creates a new mechanism for the formation and distribution of wealth. The hallmarks of an information society are: an increase in the role of information and knowledge in society; an increase in the share of information communications, products and services in the gross domestic product; and the creation of a global information space that provides effective information interaction of people, their access to world information resources and their needs for information products and services. The issues of the network economy as a conductor of post-industrial society and of clustering are revealed. States that do not join the technological process remain on the sidelines of global economic transformation and risk disappearing as independent states. The twentieth century was characterized by rapid transformation processes, which covered all groups of countries, had a different nature of implementation and, accordingly, different results. Such views once again confirm the need for countries to achieve self-organization, gaining the opportunity to develop progressively and to be dynamic. The foreign experience of clustering is studied, with attention paid to European states, where cluster initiatives are implemented in accordance with the decisions of the EU Lisbon Summit held in 2000 with the aim of introducing in the member states of the Union a knowledge economy capable of ensuring competitiveness, on the basis of innovation clusters, that exceeds the performance of the US and Japanese economies. In Ukraine, cluster potential exists in agriculture, food and metal production, oil, gas, transport services and logistics. The interdependence of the quantity and quality of the country's labor force and the information component in the context of clustering development is left for further research. Of course, the availability of skilled labor is an indicator of economic development, as is the availability of an active information space.
APA, Harvard, Vancouver, ISO, and other styles
23

Tymchyshyn, Yuliya. "ENERGY COMPONENT OF UKRAINIAN REGIONS’ ECONOMIC SECURITY." International scientific journal "Internauka". Series: "Economic Sciences", no. 9(41) (2017). http://dx.doi.org/10.25313/2520-2294-2020-9-6321.

Full text
Abstract:
The article highlights the basic concepts of the energy security of regions. Factors influencing the energy security of the regions are considered; they can be divided into two groups: factors that can be eliminated or localized in the current and strategic period and that lie directly in the field of management (these include energy shortages, man-made accidents and financial problems); and factors whose management period is based on the time frame of strategic programs (the period of hypotheses) and which are managed only indirectly through concepts of development (these include limited resources, environmental problems, etc.). The economic threats to the energy security of the regions are highlighted: lack of investment resources necessary for the development, modernization and technical support of the normal operation of the energy complex; financial instability in ensuring the functioning of the energy complex, the provision of fuel resources, materials and components to support technological processes, and the stability of payment of all current costs; breach of economic ties; inefficient use of fuel and material resources; excessively high prices for fuel and material resources; high levels of monopoly among producers, suppliers and distributors of energy and fuel resources; technical constraints arising from a lack of funds; and an imbalance of production and consumption of fuel and energy resources (FER), a shortage of energy capacity and insufficient network capacity. The main characteristics of the energy security system of the regions that should be emphasized in the formation of the management mechanism are: energy saving as a characteristic of the organizational and technical policy of energy security within the framework of the social responsibility of the subjects; energy efficiency as a characteristic of economic policy to ensure the optimal balance of energy supply, energy intensity and energy consumption of production systems; and energy competitiveness as a characteristic of the political regulation of the consumption of the energy resources of the region. The mechanism of the energy security management of regions, which should be based on the most important principles and their combination, is analyzed.
APA, Harvard, Vancouver, ISO, and other styles
24

Skrynnyk, Olena. "Towards Organizational Development In Digital Organizational Twin." SocioEconomic Challenges 5, no. 3 (2021). http://dx.doi.org/10.21272/sec.5(3).126-133.2021.

Full text
Abstract:
Sustained continuous monitoring and replication of organizational development in digital organizational twins is of particular importance for labour-intensive enterprises and also for those in which reciprocal relations between social, corporate, normative and performative aspects assume the leading role. The main purpose of the research is the development of a digital representation of organizational processes, which focuses on the performance, working activities, organizational issues, behaviour and interactions of the organizational members. Consequently, the objectives of the research include monitoring the current state of research and the concept and design of a digital twin. The implementation of a digital organizational twin should improve the timely optimization of proactive and reactive organizational development measures in the company in relation to the core variables of the 7S model. The created digital twin should map the dynamics of organizational development, as well as concomitant and deviating processes. The systematization of literary sources and approaches for the digital replication of organizational development issues indicates a lack of publications on the topic and a diffuse distribution of scientific interest. The initial design of organizational development in the digital twin is based on four main objects and limited to a certain number of investigated parameters. This paper compares the conventional and digitalized organizational development processes, explains the data flow in the digital organizational twin and the design of organizational development within it, provides an overview of the individual facets of organizational development, lists the parameterization models and exemplarily illustrates the visualization of selected parameters. The results of the research can be useful for the expansion of the tension bridge between organisational development and technologies and for the development of new potentials for the study of socio-technical effects in companies. This can be extended to include other facets of business management and supplemented by the connection of other technological resources.
APA, Harvard, Vancouver, ISO, and other styles
25

Trusova, Natalia V., Roman I. Oleksenko, Sergey V. Kalchenko, Denys V. Yeremenko, Stanislava R. Pasieka, and Svitlana A. Moroz. "Managing the Intellectual Potential in the Business-Network of Innovative Digital Technologies." Studies of Applied Economics 39, no. 5 (May 29, 2021). http://dx.doi.org/10.25115/eea.v39i5.4910.

Full text
Abstract:
The article considers the synergy of managing the intellectual potential of the enterprise in the business-network of innovative digital technologies. The paradigm of development of the intellectual potential of enterprises in the network system of innovative digital technologies is presented, which determines the spatial-technological, information-virtual and intellectual security of economic entities, taking into account the tools and components of inclusive growth in their transformation of business models and new markets. A methodological approach to the management of the effective interaction of enterprises is developed, which determines the shift in the entropy of the magnitude of their intellectual potential and affects the business system with a limited duration of the network of innovative digital technologies. The main value of integration processes in the business-network of innovative digital technologies of enterprises is formed due to their internal incentives: as innovators have market value and significant intellectual assets, they are always at the center of the economic environment. Mathematical tools for describing the functionality of an integrated network are systematized. The innovative digital technologies of enterprises with a significant amount of intellectual assets are modelled with differential equations. It is proved that the synergetic proximity and capacity for asymptotic development, i.e., the increase of intellectual assets in a complex business system, causes the participating company to change the network of innovative digital technologies and reorient to the terms of compensation for invested resources. Inconsistency and non-acceptance of the rules of cooperation, together with the weak influence of the network on the participant, will lead to its inefficiency, withdrawal from the network and the choice of an "off-network" vector of development. The analysis of the index of intellectual development and the volumes of intellectual capital in Ukraine and in the world is carried out. The volumes and structure of attracted investments in intellectual assets by the size of the enterprises of Ukraine are determined. The rating position of Ukraine in the world level of development of innovative digital technologies is investigated. The factors of effective cooperation of enterprises according to the criteria of intellectual potential management in innovative digital technologies are substantiated and the areas of interaction of the participants in the innovation process are determined.
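A brief illustrative note: the abstract above states that the growth of intellectual assets in the business-network is described with differential equations and exhibits asymptotic behaviour, but the equations themselves are not reproduced. A minimal sketch of the kind of bounded, asymptotic growth referred to is a logistic-type equation dI/dt = r*I*(1 - I/K); the parameter names r (growth rate) and K (network capacity), the numeric values and the Euler integration below are assumptions introduced here for illustration and are not taken from the study.

    # Illustrative only: a logistic-type differential equation showing asymptotic
    # growth of an "intellectual assets" level I; not the authors' actual model.
    def simulate_intellectual_assets(i0=1.0, r=0.4, k=100.0, dt=0.1, steps=200):
        """Euler integration of dI/dt = r * I * (1 - I / K)."""
        values = [i0]
        for _ in range(steps):
            i = values[-1]
            values.append(i + dt * r * i * (1.0 - i / k))
        return values

    if __name__ == "__main__":
        trajectory = simulate_intellectual_assets()
        # The level approaches the assumed capacity K asymptotically.
        print(f"asset level after {len(trajectory) - 1} steps: {trajectory[-1]:.2f}")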
APA, Harvard, Vancouver, ISO, and other styles
26

Wang, Pengwu. "A study on the intellectual capital management over cloud computing using analytic hierarchy process and partial least squares." Kybernetes ahead-of-print, ahead-of-print (September 23, 2021). http://dx.doi.org/10.1108/k-03-2021-0241.

Full text
Abstract:
Purpose: In the age of a knowledge-based economy and following extensive socio-economic changes, the success of organizations is not limited to gaining financial and material resources. Instead, it depends on the acquisition of intangible assets that can be used to achieve a sustainable competitive advantage. In the new strategic environment, organizations will thrive when they see themselves as a learning organization whose goal is to improve intellectual capital continually; an organization that cannot increase its intellectual capital cannot survive. The term intellectual capital is used to refer to the overlap of all assets, intangible resources and non-physical resources of an organization, including processes, innovation capacity and the implicit and explicit knowledge of its members and partner network. However, despite the growing importance of intellectual capital and cloud computing as vital resources for organizations' competitive advantage, there is a limited understanding of them. Simultaneously, the management of intellectual capital enables organizational managers to create, nurture, control and preserve a strong source of competitive advantage, one that competitors will not easily capture. So, the main objective of the present investigation is to examine the factors affecting the adoption of intellectual capital management systems based on cloud computing in hospitals. Design/methodology/approach: In the last two decades, we have moved toward an economy where investment in Information Technology (IT), human resources, development, research and advertising is essential to maintain competitive advantage and ensure the sustainability of organizations. Therefore, it can be stated that economic value lies in the creation and management of intangible assets, which are referred to as intellectual capital. On the other hand, cloud computing is presented as a new paradigm for hosting and providing services through the Internet. Cloud computing can bring many benefits to organizations, including cost reduction, flexibility and improved performance. The present article examines how optimal intellectual capital management can be achieved using cloud computing. Accordingly, seven hypotheses were developed through the dimensions of technology, environment, organization and innovation. In this study, the path analysis was performed using the Analytic Hierarchy Process (AHP) and Partial Least Squares (PLS). By reviewing the literature related to the technology, organization and environment model and innovation dissemination theory, four main criteria and 15 sub-criteria were identified based on the opinions of specialists, professors and IT experts, using the AHP and PLS methods. Findings: The results of this investigation confirmed all the hypotheses. The results illustrated that environmental and technological factors should be given more attention when adopting intellectual capital management systems based on cloud computing. The results also indicated that intellectual capital strongly influences improved performance. Furthermore, cloud apps, like other disruptive technologies, deliver superior benefits while still presenting a slew of realistic challenges that must be tackled. In order to draw a growing customer base to this business model, software vendors should resolve these concerns. The literature revealed that the computing industry is making tremendous strides around the world. Nevertheless, in order to achieve a faster and smoother adoption, newer and more advanced techniques are still required. Research limitations/implications: The research outcomes can significantly impact a wide range of organizations, such as health-related organizations. However, there are some limitations; for example, the sample is limited to one country. Therefore, future studies can test the findings of this study on different samples in different countries. Future researchers can also boost the model's predictive capability for adopting cloud computing in other organizations by adding environmental, organizational, innovation and other technical factors. Practical implications: Managers will use these emerging innovations to minimize costs and maximize profits in the intellectual capital management competition. An effective cloud computing solution based on an electronic human resource management system can significantly increase system performance in industries. The investigators expect that the results will direct clinicians and scholars toward a more advanced and developed age of cloud-based apps. Originality/value: Investigations on the impact of cloud computing on intellectual capital management are rare. Accordingly, this investigation provides a new experience in terms of intellectual capital in the field of cloud computing. This study fills the scientific research gap in understanding the factors affecting intellectual capital management systems based on cloud computing. It provides better insight into the power of organizational and environmental structure in adopting this technology in hospitals.
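A brief illustrative note: the methodology above combines the Analytic Hierarchy Process (AHP) with Partial Least Squares (PLS) path analysis over four main criteria and 15 sub-criteria, but no numerical procedure is shown in the abstract. The sketch below covers only the standard AHP step of deriving priority weights from a pairwise comparison matrix and checking consistency; the example matrix for the four criteria (technology, organization, environment, innovation) and all judgement values are invented for illustration and are not taken from the study.

    # Illustrative AHP step only (not the study's data): derive priority weights
    # from a positive reciprocal pairwise comparison matrix and check consistency.
    import numpy as np

    # Hypothetical pairwise judgements for four criteria:
    # technology, organization, environment, innovation.
    A = np.array([
        [1.0, 3.0, 2.0, 5.0],
        [1 / 3, 1.0, 1 / 2, 2.0],
        [1 / 2, 2.0, 1.0, 3.0],
        [1 / 5, 1 / 2, 1 / 3, 1.0],
    ])

    # Priority vector = normalised principal eigenvector of A.
    eigvals, eigvecs = np.linalg.eig(A)
    principal = np.argmax(eigvals.real)
    weights = np.abs(eigvecs[:, principal].real)
    weights /= weights.sum()

    # Saaty consistency ratio: CI / RI, with random index RI = 0.90 for n = 4.
    n = A.shape[0]
    ci = (eigvals.real[principal] - n) / (n - 1)
    cr = ci / 0.90

    print("weights:", np.round(weights, 3))
    print("consistency ratio:", round(cr, 3))  # below 0.10 is conventionally acceptable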
APA, Harvard, Vancouver, ISO, and other styles
27

Emilio Faroldi. "The architecture of differences." TECHNE - Journal of Technology for Architecture and Environment, May 26, 2021, 9–15. http://dx.doi.org/10.36253/techne-11023.

Full text
Abstract:
Following in the footsteps of the protagonists of the Italian architectural debate is a mark of culture and proactivity. The synthesis deriving from the artistic-humanistic factors, combined with the technical-scientific component, comprises the very root of the process that moulds the architect as an intellectual figure capable of governing material processes in conjunction with their ability to know how to skilfully select schedules, phases and actors: these are elements that – when paired with that magical and essential compositional sensitivity – have fuelled this profession since its origins. The act of X-raying the role of architecture through the filter of its “autonomy” or “heteronomy”, at a time when the hybridisation of different areas of knowledge and disciplinary interpenetration is rife, facilitates an understanding of current trends, allowing us to bring the fragments of a debate carved into our culture and tradition up to date. As such, heteronomy – as a condition in which an acting subject receives the norm of its action from outside itself: the matrix of its meaning, coming from ancient Greek, the result of the fusion of the two terms ἕτερος éteros “different, other” and νόμος nómos “law, ordinance” – suggests the existence of a dual sentiment now pervasive in architecture: the sin of self-reference and the strength of depending on other fields of knowledge. Difference, interpreted as a value, and the ability to establish relationships between different points of observation become moments of a practice that values the process and method of affirming architecture as a discipline. The term “heteronomy”, used in opposition to “autonomy”, has – from the time of Kant onwards – taken on a positive value connected to the mutual respect between reason and creativity, exact science and empirical approach, contamination and isolation, introducing the social value of its existence every time that it returns to the forefront. At the 1949 conference in Lima, Ernesto Nathan Rogers spoke on combining the principle of “Architecture is an Art” with the demands of a social dimension of architecture: «Alberti, in the extreme precision of his thought, admonishes us that the idea must be translated into works and that these must have a practical and moral purpose in order to adapt harmoniously ‘to the use of men’, and I would like to point out the use of the plural of ‘men’, society. The architect is neither a passive product nor a creator completely independent of his era: society is the raw material that he transforms, giving it an appearance, an expression, and the consciousness of those ideals that, without him, would remain implicit. Our prophecy, like that of the farmer, already contains the seeds for future growth, as our work also exists between heaven and earth. Poetry, painting, sculpture, dance and music, even when expressing the contemporary, are not necessarily limited within practical terms. But we architects, who have the task of synthesising the useful with the beautiful, must feel the fundamental drama of existence at every moment of our creative process, because life continually puts practical needs and spiritual aspirations at odds with one another. We cannot reject either of these necessities, because a merely practical or moralistic position denies the full value of architecture to the same extent that a purely aesthetic position would: we must mediate one position with the other» (Rogers, 1948). 
Rogers discusses at length the relationship between instinctive forces and knowledge acquired through culture, along with his thoughts on the role played by study in an artist’s training. It is in certain debates that have arisen within the “International Congresses of Modern Architecture” that the topic of architecture as a discipline caught between self-sufficiency and dependence acquires a certain centrality within the architectural context: in particular, in this scenario, the theme of the “autonomy” and “heteronomy” of pre-existing features of the environment plays a role of strategic importance. Arguments regarding the meaning of form in architecture and the need for liberation from heteronomous influences did not succeed in undermining the idea of an architecture capable of influencing the governing of society as a whole, thanks to an attitude very much in line with Rogers’ own writings. The idea of a project as the result of the fusion of an artistic idea and pre-existing features of an environment formed the translation of the push to coagulate the antithetical forces striving for a reading of the architectural work that was at once autonomous and heteronomous, as well as linked to geographical, cultural, sociological and psychological principles. The CIAM meeting in Otterlo was attended by Ignazio Gardella, Ernesto Nathan Rogers, Vico Magistretti and Giancarlo De Carlo as members of the Italian contingent: the architects brought one project each to share with the conference and comment on as a manifesto. Ernesto Nathan Rogers, who presented the Velasca Tower, and Giancarlo De Carlo, who presented a house in Matera in the Spine Bianche neighbourhood, were openly criticised as none of the principles established by the CIAM were recognisable in their work any longer, and De Carlo’s project represented a marked divergence from a consolidated method of designing and building in Matera. In this cultural condition, Giancarlo De Carlo – in justifying the choices he had made – even went so far as to say: «my position was not at all a flight from architecture, for example in sociology. I cannot stand those who, paraphrasing what I have said, dress up as politicians or sociologists because they are incapable of creating architecture. Architecture is – and cannot be anything other than – the organisation and form of physical space. It is not autonomous, it is heteronomous» (De Carlo, 2001). Even more so than in the past, it is not possible today to imagine an architecture encapsulated entirely within its own enclosure, autoimmune, averse to any contamination or relationships with other disciplinary worlds: architecture is the world and the world is the sum total of our knowledge. Architecture triggers reactions and phenomena: it is not solely and exclusively the active and passive product of a material work created by man. «We believed in the heteronomy of architecture, in its necessary dependence on the circumstances that produce it, in its intrinsic need to exist in harmony with history, with the happenings and expectations of individuals and social groups, with the arcane rhythms of nature. We denied that the purpose of architecture was to produce objects, and we argued that its fundamental role was to trigger processes of transformation of the physical environment that are capable of contributing to the improvement of the human condition» (De Carlo, 2001). 
Productive and cultural reinterpretations place the discipline of architecture firmly at the centre of the critical reconsideration of places for living and working. Consequently, new interpretative models continue to emerge which often highlight the instability of built architecture with the lack of a robust theoretical apparatus, demanding the sort of “technical rationality” capable of restoring the centrality of the act of construction, through the contribution of actions whose origins lie precisely in other subject areas. Indeed, the transformation of the practice of construction has resulted in direct changes to the structure of the nature of the knowledge of it, to the role of competencies, to the definition of new professional skills based on the demands emerging not just from the production system, but also from the socio-cultural system. The architect cannot disregard the fact that the making of architecture does not burn out by means of some implosive dynamic; rather, it is called upon to engage with the multiple facets and variations that the cognitive act of design itself implies, bringing into play a theory of disciplines which – to varying degrees and according to different logics – offer their significant contribution to the formation of the design and, ultimately, the work. As Álvaro Siza claims, «The architect is not a specialist. The sheer breadth and variety of knowledge that practicing design encompasses today – its rapid evolution and progressive complexity – in no way allow for sufficient knowledge and mastery. Establishing connections – pro-jecting [from Latin proicere, ‘to stretch out’] – is their domain, a place of compromise that is not tantamount to conformism, of navigation of the web of contradictions, the weight of the past and the weight of the doubts and alternatives of the future, aspects that explain the lack of a contemporary treatise on architecture. The architect works with specialists. The ability to chain things together, to cross bridges between fields of knowledge, to create beyond their respective borders, beyond the precarity of inventions, requires a specific education and stimulating conditions. [...] As such, architecture is risk, and risk requires impersonal desire and anonymity, starting with the merging of subjectivity and objectivity. In short, a gradual distancing from the ego. Architecture means compromise transformed into radical expression, in other words, a capacity to absorb the opposite and overcome contradiction. Learning this requires an education in search of the other within each of us» (Siza, 2008). We are seeing the coexistence of contrasting, often extreme, design trends aimed at recementing the historical and traditional mould of construction by means of the constant reproposal of the characteristics of “persistence” that long-established architecture, by its very nature, promotes, and at decrypting the evolutionary traits of architecture – markedly immaterial nowadays – that society promotes as phenomena of everyday living. Speed, temporariness, resilience, flexibility: these are just a few fragments. In other words, we indicate a direction which immediately composes and anticipates innovation as a characterising element, describing its stylistic features, materials, languages and technologies, and only later on do we tend to outline the space that these produce: what emerges is a largely anomalous path that goes from “technique” to “function” – by way of “form” – denying the circularity of the three factors at play. 
The threat of a short-circuit deriving from discourse that exceeds action – in conjunction with a push for standardisation aimed at asserting the dominance of construction over architecture, once again echoing the ideas posited by Rogers – may yet be able to find a lifeline cast through the attempt to merge figurative research with technology in a balanced way, in the wake of the still-relevant example of the Bauhaus or by emulating the thinking of certain masters of modern Italian architecture who worked during that post-war period so synonymous with physical – and, at the same time, moral – reconstruction. These architectural giants' aptitude for technical and formal transformation and adaptation can be held up as paradigmatic examples of methodological choice consistent with their high level of mastery over the design process and the rhythm of its phases. In all this exaltation of the outcome, the power of the process is often left behind in a haze: in the uncritical celebration of the architectural work, the method seems to dissolve entirely into the finished product. Technical innovation and disciplinary self-referentiality would seem to deny the concepts of continuity and transversality by means of a constant action of isolation and an insufficient relationship with itself: conversely, the act of designing, as an operation which involves selecting elements from a vast heritage of knowledge, cannot exempt itself from dealing in the variables of a functional, formal, material and linguistic nature – all of such closely intertwined intents – that have over time represented the energy of theoretical formulation and of the works created. For years, the debate in architecture has concentrated on the synergistic or contrasting dualism between cultural approaches linked to venustas and firmitas. Kenneth Frampton, with regard to the interpretative pair of "tectonics" and "form", notes the existence of a dual trend that is both identifiable and contrasting: namely the predisposition to favour the formal sphere as the predominant one, rejecting all implications on the construction, on the one hand; and the tendency to celebrate the constructive matrix as the generator of the morphological signature – emphasised by the ostentation of architectural detail, including that of a technological matrix – on the other. The design of contemporary architecture is enriched with sprawling values that are often fundamental, yet at times even damaging to the successful completion of the work: it should identify the moment of coagulation within which the architect goes in pursuit of balance between all the interpretative categories that make it up, espousing the Vitruvian meaning, according to which practice is «the continuous reflection on utility» and theory «consists of being able to demonstrate and explain the things made with technical ability in terms of the principle of proportion» (Vitruvius Pollio, 15 BC). Architecture will increasingly be forced to demonstrate how it represents an applied and intellectual activity of a targeted synthesis, of a complex system within which it is not only desirable, but indeed critical, for the cultural, social, environmental, climatic, energy-related, geographical and many other components involved in it to interact proactively, together with the more spatial, functional and material components that are made explicit in the final construction itself through factors borrowed from neighbouring fields that are not endogenous to the discipline of architecture alone.
Within a unitary vision that exists parallel to the transcalarity that said vision presupposes, the technology of architecture – as a discipline often called upon to play the role of a collagen of skills, binding them together – acts as an instrument of domination within which science and technology interpret the tools for the translation of man’s intellectual needs, expressing the most up-to-date principles of contemporary culture. Within the concept of tradition – as inferred from its evolutionary character – form, technique and production, in their historical “continuity” and not placed in opposition to one other, make up the fields of application by which, in parallel, research proceeds with a view to ensuring a conforming overall design. The “technology of architecture” and “technological design” give the work of architecture its personal hallmark: a sort of DNA to be handed down to future generations, in part as a discipline dedicated to amalgamating the skills and expertise derived from other dimensions of knowledge. In the exercise of design, the categories of urban planning, composition, technology, structure and systems engineering converge, the result increasingly accentuated by multidisciplinary nuances in search of a sense of balance between the parts: a setup founded upon simultaneity and heteronomous logic in the study of variables, by means of translations, approaches and skills as expressions of multifaceted identities. «Architects can influence society with their theories and works, but they are not capable of completing any such transformation on their own, and end up being the interpreters of an overbearing historical reality under which, if the strongest and most honest do not succumb, that therefore means that they alone represent the value of a component that is algebraically added to the others, all acting in the common field» (Rogers, 1951). Construction, in this context, identifies the main element of the transmission of continuity in architecture, placing the “how” at the point of transition between past and future, rather than making it independent of any historical evolution. Architecture determines its path within a heteronomous practice of construction through an effective distinction between the strength of the principles and codes inherent to the discipline – long consolidated thanks to sedimented innovations – and the energy of experimentation in its own right. Architecture will have to seek out and affirm its own identity, its validity as a discipline that is at once scientific and poetic, its representation in the harmonies, codes and measures that history has handed down to us, along with the pressing duty of updating them in a way that is long overdue. The complexity of the architectural field occasionally expresses restricted forms of treatment bound to narrow disciplinary areas or, conversely, others that are excessively frayed, tending towards an eclecticism so vast that it prevents the tracing of any discernible cultural perimeter. In spite of the complex phenomenon that characterises the transformations that involve the status of the project and the figure of the architect themselves, it is a matter of urgency to attempt to renew the interpretation of the activity of design and architecture as a coherent system rather than a patchwork of components. «Contemporary architecture tends to produce objects, even though its most concrete purpose is to generate processes. 
This is a falsehood that is full of consequences because it confines architecture to a very limited band of its entire spectrum; in doing so, it isolates it, exposing it to the risks of subordination and delusions of grandeur, pushing it towards social and political irresponsibility. The transformation of the physical environment passes through a series of events: the decision to create a new organised space, detection, obtaining the necessary resources, defining the organisational system, defining the formal system, technological choices, use, management, technical obsolescence, reuse and – finally – physical obsolescence. This concatenation is the entire spectrum of architecture, and each link in the chain is affected by what happens in all the others. It is also the case that the cadence, scope and intensity of the various bands can differ according to the circumstances and in relation to the balances or imbalances within the contexts to which the spectrum corresponds. Moreover, each spectrum does not conclude at the end of the chain of events, because the signs of its existence – ruins and memory – are projected onto subsequent events. Architecture is involved with the entirety of this complex development: the design that it expresses is merely the starting point for a far-reaching process with significant consequences» (De Carlo, 1978). The contemporary era proposes the dialectic between specialisation, the coordination of ideas and actions, the relationship between actors, phases and disciplines: the practice of the organisational culture of design circumscribes its own code in the coexistence and reciprocal exploitation of specialised fields of knowledge and the discipline of synthesis that is architecture. With the revival of the global economy on the horizon, the dematerialisation of the working practice has entailed significant changes in the productive actions and social relationships that coordinate the process. Despite a growing need to implement skills and means of coordination between professional actors, disciplinary fields and sectors of activity, architectural design has become the emblem of the action of synthesis. This is a representation of society which, having developed over the last three centuries, from the division of social sciences that once defined it as a “machine”, an “organism” and a “system”, is now defined by the concept of the “network” or, more accurately, by that of the “system of networks”, in which a person’s desire to establish relationships places them within a multitude of social spheres. The “heteronomy” of architecture, between “hybridisation” and “contamination of knowledge”, is to be seen not only an objective fact, but also, crucially, as a concept aimed at providing the discipline with new and broader horizons, capable of putting it in a position of serenity, energy and courage allowing it to tackle the challenges that the cultural, social and economic landscape is increasingly throwing at the heart of our contemporary world.
APA, Harvard, Vancouver, ISO, and other styles
28

Marcheva, Marta. "The Networked Diaspora: Bulgarian Migrants on Facebook." M/C Journal 14, no. 2 (November 17, 2010). http://dx.doi.org/10.5204/mcj.323.

Full text
Abstract:
The need to sustain and/or create a collective identity is regularly seen as one of the cultural priorities of diasporic peoples and this, in turn, depends upon the existence of a uniquely diasporic form of communication and connection with the country of origin. Today, digital media technologies provide easy information recording and retrieval, and mobile IT networks allow global accessibility and participation in the redefinition of identities. Vis-à-vis our understanding of the proximity and connectivity associated with globalisation, the role of ICTs cannot be underestimated and is clearly more than a simple instrument for the expression of a pre-existing diasporic identity. Indeed, the concept of “e-diaspora” is gaining popularity. Consequently, research into the role of ICTs in the lives of diasporic peoples contributes to a definition of the concept of diaspora, understood here as the result of the dispersal of all members of a nation in several countries. In this context, I will demonstrate how members of the Bulgarian diaspora negotiate not only their identities but also their identifications through one of the most popular community websites, Facebook. My methodology consists of the active observation of Bulgarian users belonging to the diaspora, the participation in groups and forums on Facebook, and the analysis of discourses produced online. This research was conducted for the first time between 1 August 2008 and 31 May 2009 through the largest 20 (of 195) Bulgarian groups on the French version of Facebook and 40 (of over 500) on the English one. It is important to note that the public considered to be predominantly involved in Facebook is a young audience in the age group of 18-35 years. Therefore, this article is focused on two generations of Bulgarian immigrants: mostly recent young and second-generation migrants. The observed users are therefore members of the Bulgarian diaspora who have little or no experience of communism, who don’t feel the weight of the past, and who have grown up as free and often cosmopolitan citizens. Communist hegemony in Bulgaria began on 9 September 1944, when the army and the communist militiamen deposed the country’s government and handed power over to an anti-fascist coalition. During the following decades, Bulgaria became the perfect Soviet satellite and the imposed Stalinist model led to sharp curtailing of the economic and social contacts with the free world beyond the Iron Curtain. In 1989, the fall of the Berlin Wall marked the end of the communist era and the political and economic structures that supported it. Identity, Internet, and Diaspora Through the work of Mead, Todorov, and boyd it is possible to conceptualise the subject in terms of both of internal and external social identity (Mead, Todorov, boyd). In this article, I will focus, in particular, on social and national identities as expressions of the process of sharing stories, experiences, and understanding between individuals. In this respect, the phenomenon of Facebook is especially well placed to mediate between identifications which, according to Freud, facilitate the plural subjectivities and the establishment of an emotional network of mutual bonds between the individual and the group (Freud). This research also draws on Goffman who, from a sociological point of view, demystifies the representation of the Self by developing a dramaturgical theory (Goffman), whereby identity is constructed through the "roles" that people play on the social scene. 
Social life is a vast stage where the actors are required to adhere to certain socially acceptable rituals and guidelines. It means that we can consider the presentation of Self, or Others, as a facade or a construction of socially accepted features. Among all the ICTs, the Internet is, by far, the medium most likely to facilitate free expression of identity through a multitude of possible actions and community interactions. Personal and national memories circulate in the transnational space of the Internet and are reshaped when framed from specific circumstances such as those raised by the migration process. In an age of globalisation marked by the proliferation of population movements, instant communication, and cultural exchanges across geographic boundaries, the phenomenon of the diaspora has caught the attention of a growing number of scholars. I shall be working with Robin Cohen’s definition of diaspora which highlights the following common features: (1) dispersal from an original homeland; (2) the expansion from a homeland in search of work; (3) a collective memory and myth about the homeland; (4) an idealisation of the supposed ancestral homeland; (5) a return movement; (6) a strong ethnic group consciousness sustained over a long time; (7) a troubled relationship with host societies; (8) a sense of solidarity with co-ethnic members in other countries; and (9) the possibility of a distinctive creative, enriching life in tolerant host countries (Cohen). Following on this earlier work on the ways in which diasporas give rise to new forms of subjectivity, the concept of “e-diaspora” is now rapidly gaining in popularity. The complex association between diasporic groups and ICTs has led to a concept of e-diasporas that actively utilise ICTs to achieve community-specific goals, and that have become critical for the formation and sustenance of an exilic community for migrant groups around the globe (Srinivasan and Pyati). Diaspora and the Digital Age Anderson points out two key features of the Internet: first, it is a heterogeneous electronic medium, with hardly perceptible contours, and is in a state of constant development; second, it is a repository of “imagined communities” without geographical or legal legitimacy, whose members will probably never meet (Anderson). Unlike “real” communities, where people have physical interactions, in the imagined communities, individuals do not have face-to-face communication and daily contact, but they nonetheless feel a strong emotional attachment to the nation. The Internet not only opens new opportunities to gain greater visibility and strengthen the sense of belonging to community, but it also contributes to the emergence of a transnational public sphere where the communities scattered in various locations freely exchange their views and ideas without fear of restrictions or censorship from traditional media (Appadurai, Bernal). As a result, the Web becomes a virtual diasporic space which opens up, to those who have left their country, a new means of confrontation and social participation. Within this new diasporic space, migrants are bound in their disparate geographical locations by a common vision or myth about the homeland (Karim). Thanks to the Internet, the computer has become a primary technological intermediary between virtual networks, bringing its members closer in a “global village” where everyone is immediately connected to others. 
Thus, today’s diasporas are not the diaspora of previous generations in that the migration is experienced and negotiated very differently: people in one country are now able to continue to participate actively in another country. In this context, the arrival of community sites has increased the capacity of users to create a network on the Internet, to rediscover lost links, and strengthen new ones. Unlike offline communities, which may weaken once their members have left the physical space, online communities that are no longer limited by the requirement of physical presence in the common space have the capacity to endure. Identity Strategies of New Generations of Bulgarian Migrants It is very difficult to quantify migration to or from Bulgaria. Existing data is not only partial and limited but, in some cases, give an inaccurate view of migration from Bulgaria (Soultanova). Informal data confirm that one million Bulgarians, around 15 per cent of Bulgaria’s entire population (7,620,238 inhabitants in 2007), are now scattered around the world (National Statistical Institute of Bulgaria). The Bulgarian migrant is caught in a system of redefinition of identity through the duration of his or her relocation. Emigrating from a country like Bulgaria implies a high number of contingencies. Bulgarians’ self-identification is relative to the inferiority complex of a poor country which has a great deal to do to catch up with its neighbours. Before the accession of Bulgaria to the European Union, the country was often associated with what have been called “Third World countries” and seen as a source of crime and social problems. Members of the Bulgarian diaspora faced daily prejudice due to the bad reputation of their country of origin, though the extent of the hostility depended upon the “host” nation (Marcheva). Geographically, Bulgaria is one of the most eastern countries in Europe, the last to enter the European Union, and its image abroad has not facilitated the integration of the Bulgarian diaspora. The differences between Bulgarian migrants and the “host society” perpetuate a sentiment of marginality that is now countered with an online appeal for national identity markers and shared experiences. Facebook: The Ultimate Social Network The Growing Popularity of Facebook With more than 500 million active members, Facebook is the most visited website in the world. In June 2007, Facebook experienced a record annual increase of 270 per cent of connections in one year (source: comScore World Metrix). More than 70 translations of the site are available to date, including the Bulgarian version. What makes it unique is that Facebook positively encourages identity games. Moreover, Facebook provides the symbolic building blocks with which to build a collective identity through shared forms of discourse and ways of thinking. People are desperate to make a good impression on the Internet: that is why they spend so much time managing their online identity. One of the most important aspects of Facebook is that it enables users to control and manage their image, leaving the choice of how their profile appears on the pages of others a matter of personal preference at any given time. Despite some limitations, we will see that Facebook offers the Bulgarian community abroad the possibility of an intense and ongoing interaction with fellow nationals, including the opportunity to assert and develop a complex new national/transnational identity. 
Facebook Experiences of the Bulgarian Diaspora Created in the United States in 2004 and extended to use in Europe two or three years later, Facebook was quickly adopted by members of the Bulgarian diaspora. Here, it is very important to note that, although the Internet per se has enabled Bulgarians across the globe to introduce Cyrillic script into the public arena, it is definitely Facebook that has made digital Cyrillic visible. Early in computer history, keyboards with the Cyrillic alphabet simply did not exist. Thus, Bulgarians were forced to translate their language into Latin script. Today, almost all members of the Bulgarian population who own a computer use a keyboard that combines the two alphabets, Latin and Cyrillic, and this allows alternation between the two. This is not the case for the majority of Bulgarians living abroad who are forced to use a keyboard specific to their country of residence. Thus, Bulgarians online have adopted a hybrid code to speak and communicate. Since foreign keyboards are not equipped with the same consonants and vowels that exist in the Bulgarian language, they use the Latin letters that best suit the Bulgarian phonetic. Several possible interpretations of these “encoded” texts exist which become another way for the Bulgarian migrants to distinguish and assert themselves. One of these encoded scripts is supplemented by figures. For example, the number “6” written in Bulgarian “шест” is applied to represent the Bulgarian letter “ш.” Bulgarian immigrants therefore employ very specific codes of communication that enhance the feeling of belonging to a community that shares the same language, which is often incomprehensible to others. As the ultimate social networking website, Facebook brings together Bulgarians from all over the world and offers them a space to preserve online memorials and digital archives. As a result, the Bulgarian diaspora privileges this website in order to manage the strong links between its members. Indeed, within months of coming into online existence, Facebook established itself as a powerful social phenomenon for the Bulgarian diaspora and, very soon, a virtual map of the Bulgarian diaspora was formed. It should be noted, however, that this mapping was focused on the new generation of Bulgarian migrants more familiar with the Internet and most likely to travel. By identifying the presence of online groups by country or city, I was able to locate the most active Bulgarian communities: “Bulgarians in UK” (524 members), “Bulgarians in Chicago” (436 members), “Bulgarians studying in the UK” (346 members), “Bulgarians in America” (333 members), “Bulgarians in the USA” (314 members), “Bulgarians in Montreal” (249 members), “Bulgarians in Munich” (241 members), and so on. These figures are based on the “Groups” Application of Facebook as updated in February 2010. Through those groups, a symbolic diasporic geography is imagined and communicated: the digital “border crossing,” as well as the real one, becomes a major identity resource. Thus, Bulgarian users of Facebook are connecting from the four corners of the globe in order to rebuild family links and to participate virtually in the marriages, births, and lives of their families. It sometimes seems that the whole country has an appointment on Facebook, and that all the photos and stories of Bulgarians are more or less accessible to the community in general. 
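A brief illustrative note on the hybrid Latin/digit code described above: a transliteration of this kind can be thought of as a simple substitution table. In the sketch below, only the "6" for "ш" pairing is taken from the abstract; the other entries are common informal conventions added here as assumptions.

    # Illustrative decoder for the hybrid Latin/digit writing described above.
    # Only "6" -> "ш" is documented in the abstract; the other pairs are assumed.
    SUBSTITUTIONS = {
        "6": "ш",   # from "шест" (six), as noted in the abstract
        "4": "ч",   # assumed: from "четири" (four)
        "sh": "ш",  # assumed Latin digraph
        "ch": "ч",  # assumed Latin digraph
    }

    def decode(text: str) -> str:
        """Replace known Latin/digit sequences with Cyrillic letters, longest first."""
        for latin in sorted(SUBSTITUTIONS, key=len, reverse=True):
            text = text.replace(latin, SUBSTITUTIONS[latin])
        return text

    print(decode("6apka"))  # -> "шapka" (partial decoding, for illustration only)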
Among its virtual initiatives, Facebook has made available to its users an effective mobilising tool, the Causes, which is used as a virtual noticeboard for activities and ideas circulating in “real life.” The members of the Bulgarian diaspora choose to adhere to different “causes” that may be local, national, or global, and that are complementary to the civic and socially responsible side of the identity they have chosen to construct online. Acting as a virtual realm in which distinct and overlapping trajectories coexist, Facebook thus enables users to articulate different stories and meanings and to foster a democratic imaginary about both the past and the future. Facebook encourages diasporas to produce new initiatives to revive or create collective memories and common values. Through photos and videos, scenes of everyday life are celebrated and manipulated as tools to reconstruct, reconcile, and display a part of the history and the identity of the migrant. By combating the feelings of disorientation, the consciousness of sharing the same national background and culture facilitates dialogue and neutralises the anxiety and loneliness of Bulgarian migrants. When cultural differences become more acute, the sense of isolation increases and this encourages migrants to look for company and solidarity online. As the number of immigrants connected and visible on Facebook gets larger, so the use of the Internet heightens their sense of a substantial collective identity. This is especially important for migrants during the early years of relocation when their sense of identity is most fragile. It can therefore be argued that, through the Internet, some Bulgarian migrants are replacing alienating face-to-face contact with virtual friends and enjoying the feeling of reassurance and belonging to a transnational community of compatriots. In this sense, Facebook is a propitious ground for the establishment of the three identity strategies defined by Herzfeld: cultural intimacy (or self-stereotypes); structural nostalgia (the evocation of a time when everything was going better); and the social poetic (the strategies aiming to retrieve a particular advantage and turn it into a permanent condition). In this way, the willingness to remain continuously in virtual contact with other Bulgarians often reveals a desire to return to the place of birth. Nostalgia and outsourcing of such sentiments help migrants to cope with feelings of frustration and disappointment. I observed that it is just after their return from summer holidays spent in Bulgaria that members of the Bulgarian diaspora are most active on the Bulgarian forums and pages on Facebook. The “return tourism” (Fourcade) during the summer or for the winter holidays seems to be a central theme in the forums on Facebook and an important source of emotional refuelling. Tensions between identities can also lead to creative formulations through Facebook’s pages. Thus, the group “You know you’re a Bulgarian when...”, which enjoys very active participation from the Bulgarian diaspora, is a space where everyone is invited to share, through a single sentence, some fact of everyday life with which all Bulgarians can identify. With humour and self-irony, this Facebook page demonstrates what is distinctive about being Bulgarian but also highlights frustration with certain prejudices and stereotypes. Frequently these profiles are characterised by seemingly “glocal” features. 
The same Bulgarian user could define himself as a Parisian, adhering to the group “You know you’re from Paris when...”, but also a native of a Bulgarian town (“You know you’re from Varna when...”). At the same time, he is an architect (“All architects on Facebook”), supporting the candidacy of Barack Obama, a fan of Japanese manga (“maNga”), of a French actor, an American cinema director, or Indian food. He joins a cause to save a wild beach on the Black Sea coast (“We love camping: Gradina Smokinia and Arapia”) and protests virtually against the slaughter of dolphins in the Faroe Islands (“World shame”). One month, the individual could identify as Bulgarian, but next month he might choose to locate himself in the country in which he is now resident. Thus, Facebook creates a virtual territory without borders for the cosmopolitan subject (Negroponte) and this confirms the premise that the Internet does not lead to the convergence of cultures, but rather confirms the opportunities for diversification and pluralism through multiple social and national affiliations. Facebook must therefore be seen as an advantageous space for the representation and interpretation of identity and for performance and digital existence. Bulgarian migrants bring together elements of their offline lives in order to construct, online, entirely new composite identities. The Bulgarians we have studied as part of this research almost never use pseudonyms and do not seem to feel the need to hide their material identities. This suggests that they are mature people who value their status as migrants of Bulgarian origin and who feel confident in presenting their natal identities rather than hiding behind a false name. Starting from this material social/national identity, which is revealed through the display of surname with a Slavic consonance, members of the Bulgarian diaspora choose to manage their complex virtual identities online. Conclusion Far from their homeland, beset with feelings of insecurity and alienation as well as daily experiences of social and cultural exclusion (much of it stemming from an ongoing prejudice towards citizens from ex-communist countries), it is no wonder that migrants from Bulgaria find relief in meeting up with compatriots in front of their screens. Although some migrants assume their Bulgarian identity as a mixture of different cultures and are trying to rethink and continuously negotiate their cultural practices (often through the display of contradictory feelings and identifications), others identify with an imagined community and enjoy drawing boundaries between what is “Bulgarian” and what is not. The indispensable daily visit to Facebook is clearly a means of forging an ongoing sense of belonging to the Bulgarian community scattered across the globe. Facebook makes possible the double presence of Bulgarian immigrants both here and there and facilitates the ongoing processes of identity construction that depend, more and more, upon new media. In this respect, the role that Facebook plays in the life of the Bulgarian diaspora may be seen as a facet of an increasingly dynamic transnational world in which interactive media may be seen to contribute creatively to the formation of collective identities and the deformation of monolithic cultures. References Anderson, Benedict. L’Imaginaire National: Réflexions sur l’Origine et l’Essor du Nationalisme. Paris: La Découverte, 1983. Appadurai, Ajun. Après le Colonialisme: Les Conséquences Culturelles de la Globalisation. Paris: Payot, 2001. 
Bernal, Victoria. “Diaspora, Cyberspace and Political Imagination: The Eritrean Diaspora Online.” Global Network 6 (2006): 161-79. boyd, danah. “Social Network Sites: Public, Private, or What?” Knowledge Tree (May 2007). Cohen, Robin. Global Diasporas: An Introduction. London: University College London Press, 1997. Goffman, Erving. La Présentation de Soi. Paris: Editions de Minuit, Collection Le Sens Commun, 1973. Fourcade, Marie-Blanche. “De l’Arménie au Québec: Itinéraires de Souvenirs Touristiques.” Ethnologies 27.1 (2005): 245-76. Freud, Sigmund. “Psychologie des Foules et Analyses du Moi.” Essais de Psychanalyse. Paris: Petite Bibliothèque Payot, 2001 (1921). Herzfeld, Michael. Intimité Culturelle. Presse de l’Université de Laval, 2008. Karim, Karim-Haiderali. The Media of Diaspora. Oxford: Routledge, 2003. Marcheva, Marta. “Bulgarian Diaspora and the Media Treatment of Bulgaria in the French, Italian and North American Press (1992–2007).” Unpublished PhD dissertation. Paris: University Panthéon – Assas Paris 2, 2010. Mead, George Herbert. L’Esprit, le Soi et la Société. Paris: PUF, 2006. Negroponte, Nicholas. Being Digital. Vintage, 2005. Soultanova, Ralitza. “Les Migrations Multiples de la Population Bulgare.” Actes du Colloque «La France et les Migrants des Balkans: Un État des Lieux». Paris: Courrier des Balkans, 2005. Srinivasan, Ramesh, and Ajit Pyati. “Diasporic Information Environments: Reframing Immigrant-Focused Information Research.” Journal of the American Society for Information Science and Technology 58.12 (2007): 1734-44. Todorov, Tzvetan. Nous et les Autres: La Réflexion Française sur la Diversité Humaine. Paris: Seuil, 1989.
APA, Harvard, Vancouver, ISO, and other styles
29

Holmes, Ashley M. "Cohesion, Adhesion and Incoherence: Magazine Production with a Flickr Special Interest Group." M/C Journal 13, no. 1 (March 22, 2010). http://dx.doi.org/10.5204/mcj.210.

Full text
Abstract:
This paper provides embedded, reflective practice-based insight arising from my experience collaborating to produce online and print-on-demand editions of a magazine showcasing the photography of members of haphazart! Contemporary Abstracts group (hereafter referred to as haphazart!). The group’s online visual, textual and activity-based practices via the photo sharing social networking site Flickr are portrayed as achieving cohesive visual identity. Stylistic analysis of pictures in support of this claim is not attempted. Rather negotiation, that Elliot has previously described in M/C Journal as innate in collaboration, is identified as the unifying factor. However, the collaborators’ adherence to Flickr’s communication platform proves problematic in the editorial context. Some technical incoherence with possible broader cultural implications is encountered during the process of repurposing images from screen to print. A Scan of Relevant Literature The photographic gaze perceives and captures objects which seem to ‘carry within them ready-made’ a work of art. But the reminiscences of the gaze are only made possible by knowing and associating with groups that define a tradition. The list of valorised subjects is not actually defined with reference to a culture, but rather by familiarity with a limited group. (Chamboredon 144) As part of the array of socio-cultural practices afforded by Web 2.0 interoperability, sites of produsage (Bruns) are foci for studies originating in many disciplines. Flickr provides a rich source of data that researchers interested in the interface between the technological and the social find useful to analyse. Access to the Flickr application programming interface enables quantitative researchers to observe a variety of means by which information is propagated, disseminated and shared. Some findings from this kind of research confirm the intuitive. For example, Negoecsu et al. find that “a large percentage of users engage in sharing with groups and that they do so significantly” ("Analyzing Flickr Groups" 425). They suggest that Flickr’s Groups feature appears to “naturally bring together two key aspects of social media: content and relations.” They also find evidence for what they call hyper-groups, which are “communities consisting of groups of Flickr groups” ("Flickr Hypergroups" 813). Two separate findings from another research team appear to contradict each other. On one hand, describing what they call “social cascades,” Cha et al. claim that “content in the form of ideas, products, and messages spreads across social networks like a virus” ("Characterising Social Cascades"). Yet in 2009 they claim that homocity and reciprocity ensure that “popularity of pictures is localised” ("Measurement-Driven Analysis"). Mislove et al. reflect that the affordances of Flickr influence the growth patterns they observe. There is optimism shared by some empiricists that through collation and analysis of Flickr tag data, the matching of perceptual structures of images and image annotation techniques will yield ontology-based taxonomy useful in automatic image annotation and ultimately, the Semantic Web endeavour (Kennedy et al.; Su et al.; Xu et al.). Qualitative researchers using ethnographic interview techniques also find Flickr a valuable resource. In concluding that the photo sharing hobby is for many a “serious leisure” activity, Cox et al. 
propose that “Flickr is not just a neutral information system but also value laden and has a role within a wider cultural order.” They also suggest that “there is genuinely greater scope for individual creativity, releasing the individual to explore their own identity in a way not possible with a camera club.” Davies claims that “online spaces provide an arena where collaboration over meanings can be transformative, impacting on how individuals locate themselves within local and global contexts” (550). She says that through shared ways of describing and commenting on images, Flickrites develop a common criticality in their endeavour to understand images, each other and their world (554). From a psychologist’s perspective, Suler observes that “interpersonal relationships rarely form and develop by images alone” (“Image, Word, Action” 559). He says that Flickr participants communicate in three dimensions: textual (which he calls “verbal”), visual, and via the interpersonal actions that the site affords, such as Favourites. This latter observation can surely be supplemented by including the various games that groups configure within the constraints of the discussion forums. These often include submissions to a theme and voting to select a winning image. Suler describes the place in Flickr where one finds identity as one’s “cyberpsychological niche” (556). However, many participants subscribe to multiple groups—45.6% of Flickrites who share images share them with more than 20 groups (Negoescu et al., “Analyzing Flickr Groups” 420). Is this a reflection of the existence of the hyper-groups they describe (2009) or of the ranging that people do in search of a niche? It is also probable that some people explore more than a singular identity or visual style. Harrison and Bartell suggest that there are more interesting questions than why users create media products or what motivates them to do so: the more interesting questions center on understanding what users will choose to do ultimately with [Web2.0] capabilities [...] in what terms to define the success of their efforts, and what impact the opportunity for individual and collaborative expression will have on the evolution of communicative forms and character. (167) This paper addresses such questions. It arises from a participatory observational context which differs from that of the research described above. It is intended that a different perspective about online group-based participation within the Flickr social networking matrix will avail. However, it will be seen that the themes cited in this introductory review prove pertinent. Context As a university teacher of a range of subjects in the digital media field, from contemporary photomedia to social media to collaborative multimedia practice, it is entirely appropriate that I embed myself in projects that engage, challenge and provide me with relevant first-hand experience. As an academic I also undertake and publish research. As a practicing new media artist I exhibit publicly on a regular basis and consider myself semi-professional with respect to this activity. While there are common elements to both approaches to research, this paper is written more from the point of view of ‘reflective practice’ (Holmes, “Reconciling Experimentum”) rather than ‘embedded ethnography’ (Pink). It is necessarily and unapologetically reflexive. Abstract Photography Hyper-Group A search of all Flickr groups using the query “abstract” is currently likely to return around 14,700 results.
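As an illustration (not drawn from the article itself), such a census of groups could be gathered with the documented flickr.groups.search method of the Flickr REST API. The Python sketch below assumes a placeholder API key, and the response field names should be verified against the current API documentation.

```python
# A minimal, illustrative sketch (not from the article): querying the public
# Flickr REST API for groups matching "abstract", roughly the kind of search
# described above. FLICKR_API_KEY is a hypothetical placeholder.
import requests

FLICKR_API_KEY = "YOUR_API_KEY"  # placeholder, not a real key
REST_ENDPOINT = "https://api.flickr.com/services/rest/"

def search_groups(query, per_page=100, page=1):
    """Return one page of Flickr groups whose names match the query."""
    params = {
        "method": "flickr.groups.search",
        "api_key": FLICKR_API_KEY,
        "text": query,
        "per_page": per_page,
        "page": page,
        "format": "json",
        "nojsoncallback": 1,
    }
    response = requests.get(REST_ENDPOINT, params=params, timeout=30)
    response.raise_for_status()
    groups = response.json().get("groups", {})
    print(f"Total matching groups reported: {groups.get('total')}")
    return groups.get("group", [])

if __name__ == "__main__":
    for group in search_groups("abstract"):
        # Each entry carries at least a group NSID and a group name.
        print(group.get("nsid"), group.get("name"))
```

The same REST endpoint exposes related methods, such as flickr.groups.pools.getPhotos, which could be used to sample a given group’s photo pool in a comparable way.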
However, only in around thirty of them does the group name, its stated rules and, the stream of images that flow through the pool arguably reflect a sense of collective concept and aesthetic that is coherently abstract. This loose complex of groups comprises a hyper-group. Members of these groups often have co-memberships, reciprocal contacts, and regularly post images to a range of groups and comment on others’ posts to be found throughout. Given that one of Flickr’s largest groups, Black and White, currently has around 131,150 members and hosts 2,093,241 items in its pool, these abstract special interest groups are relatively small. The largest, Abstract Photos, has 11,338 members and hosts 89,306 items in its pool. The group that is the focus of this paper, haphazart!, currently has 2,536 members who have submitted 53,309 items. The group pool is more like a constantly flowing river because the most recently added images are foremost. Older images become buried in an archive of pages which cannot be reverse accessed at a rate greater than the seven pages linked from a current view. A member’s presence is most immediate through images posted to a pool. This structural feature of Flickr promotes a desire for currency; a need to post regularly to maintain presence. Negotiating Coherence to the Abstract The self-managing social dynamics in groups has, as Suler proposes to be the case for individuals, three dimensions: visual, textual and action. A group integrates the diverse elements, relationships and values which cumulatively constitute its identity with contributions from members in these dimensions. First impressions of that identity are usually derived from the group home page which consists of principal features: the group name, a selection of twelve most recent posts to the pool, some kind of description, a selection of six of the most recent discussion topics, and a list of rules (if any). In some of these groups, what is considered to constitute an abstract photographic image is described on the group home page. In some it is left to be contested and becomes the topic of ongoing forum debates. In others the specific issue is not discussed—the images are left to speak for themselves. Administrators of some groups require that images are vetted for acceptance. In haphazart! particular administrators dutifully delete from the pool on a regular basis any images that they deem not to comply with the group ethic. Whether reasons are given or not is left to the individual prosecutor. Mostly offending images just disappear from the group pool without trace. These are some of the ways that the coherence of a group’s visual identity is established and maintained. Two groups out of the abstract photography hyper-group are noteworthy in that their discussion forums are particularly active. A discussion is just the start of a new thread and may have any number of posts under it. At time of writing Abstract Photos has 195 discussions and haphazart! — the most talkative by this measure—has 333. Haphazart! invites submissions of images to regularly changing themes. There is always lively and idiosyncratic banter in the forum over the selection of a theme. To be submitted an image needs to be identified by a specific theme tag as announced on the group home page. The tag can be added by the photographer themselves or by anyone else who deems the image appropriate to the theme. An exhibition process ensues. 
Participant curators search all Flickr items according to the theme tag and select from the outcome images they deem to most appropriately and abstractly address the theme. Copies of the images together with comments by the curators are posted to a dedicated discussion board. Other members may also provide responses. This activity forms an ongoing record that may serve as a public indicator of the aesthetic that underlies the group’s identity. In Abstract Photos there is an ongoing discussion forum where one can submit an image and request that the moderators rule as to whether or not the image is ‘abstract’. The same group has ongoing discussions labelled “Hall of Appropriate” where worthy images are reposted and celebrated and, “Hall of Inappropriate” where images posted to the group pool have been removed and relegated because abstraction has been “so far stretched from its definition that it now resides in a parallel universe” (Askin). Reasons are mostly courteously provided. In haphazart! a relatively small core of around twelve group members regularly contribute to the group discussion board. A curious aspect of this communication is that even though participants present visually with a ‘buddy icon’ and most with a screen name not their real name, it is usual practice to address each other in discussions by their real Christian names, even when this is not evident in a member’s profile. This seems to indicate a common desire for authenticity. The makeup of the core varies from time to time depending on other activities in a member’s life. Although one or two may be professionally or semi-professionally engaged as photographers or artists or academics, most of these people would likely consider themselves to be “serious amateurs” (Cox). They are internationally dispersed with bias to the US, UK, Europe and Australia. English is the common language though not the natural tongue of some. The age range is approximately 35 to 65 and the gender mix 50/50. The group is three years old. Where Do We Go to from Here? In early January 2009 the haphazart! core was sparked into a frenzy of discussion by a post from a member headed “Where do we go to from here?” A proposal was mooted to produce a ‘book’ featuring images and texts representative of the group. Within three days a new public group with invited membership dedicated to the idea had been established. A smaller working party then retreated to a private Flickr group. Four months later Issue One of haphazart! magazine was available in print-on-demand and online formats. Following however is a brief critically reflective review of some of the collaborative curatorial, editorial and production processes for Issue Two which commenced in early June 2009. Most of the team had also been involved with Issue One. I was the only newcomer and replaced the person who had undertaken the design for Issue One. I was not provided access to the prior private editorial ruminations but apparently the collaborative curatorial and editorial decision-making practices the group had previously established persisted, and these took place entirely within the discussion forums of a new dedicated private Flickr group. Over a five-month period there were 1066 posts in 54 discussions concerning matters such as: change of format from the previous; selection of themes, artists and images; conduct of and editing of interviews; authoring of texts; copyright and reproduction. 
The idiom of those communications can be described as: discursive, sporadic, idiosyncratic, resourceful, collegial, cooperative, emphatic, earnest and purposeful. The selection process could not be said to follow anything close to a shared manifesto, or articulation of style. It was established that there would be two primary themes: the square format and contributors’ use of colour. Selection progressed by way of visual presentation and counter presentation until some kind of consensus was reached often involving informal votes of preference. Stretching the Limits of the Flickr Social Tools The magazine editorial collaborators continue to use the facilities with which they are familiar from regular Flickr group participation. However, the strict vertically linear format of the Flickr discussion format is particularly unsuited to lengthy, complex, asynchronous, multithreaded discussion. For this purpose it causes unnecessary strain, fatigue and confusion. Where images are included, the forums have set and maximum display sizes and are not flexibly configured into matrixes. Images cannot readily be communally changed or moved about like texts in a wiki. Likewise, the Flickrmail facility is of limited use for specialist editorial processes. Attachments cannot be added. This opinion expressed by a collaborator in the initial, open discussion for Issue One prevailed among Issue Two participants: do we want the members to go to another site to observe what is going on with the magazine? if that’s ok, then using google groups or something like that might make sense; if we want others to observe (and learn from) the process - we may want to do it here [in Flickr]. (Valentine) The opinion appears socially constructive; but because the final editorial process and production processes took place in a separate private forum, ultimately the suggested learning between one issue and the next did not take place. During Issue Two development the reluctance to try other online collaboration tools for the selection processes requiring visual comparative evaluation of images and trials of sequencing adhered. A number of ingenious methods of working within Flickr were devised and deployed and, in my opinion, proved frustratingly impractical and inefficient. The digital layout, design, collation and formatting of images and texts, all took place on my personal computer using professional software tools. Difficulties arose in progressively sharing this work for the purposes of review, appraisal and proofing. Eventually I ignored protests and insisted the team review demonstrations I had converted for sharing in Google Documents. But, with only one exception, I could not tempt collaborators to try commenting or editing in that environment. For example, instead of moving the sequence of images dynamically themselves, or even typing suggestions directly into Google Documents, they would post responses in Flickr. To Share and to Hold From the first imaginings of Issue One the need to have as an outcome something in one’s hands was expressed and this objective is apparently shared by all in the haphazart! core as an ongoing imperative. Various printing options have been nominated, discussed and evaluated. In the end one print-on-demand provider was selected on the basis of recommendation. The ethos of haphazart! is clearly not profit-making and conflicts with that of the printing organisation. Presumably to maintain an incentive to purchase the print copy online preview is restricted to the first 15 pages. 
To satisfy the co-requisite to make available the full 120 pages for free online viewing a second host that specialises in online presentation of publications is also utilised. In this way haphazart! members satisfy their common desires for sharing selected visual content and ideas with an online special interest audience and, for a physical object of art to relish—with all the connotations of preciousness, fetish, talisman, trophy, and bookish notions of haptic pleasure and visual treasure. The irony of publishing a frozen chunk of the ever-flowing Flickriver, whose temporally changing nature is arguably one of its most interesting qualities, is not a consideration. Most of them profess to be simply satisfying their own desire for self expression and would eschew any critical judgement as to whether this anarchic and discursive mode of operation results in a coherent statement about contemporary photographic abstraction. However there remains a distinct possibility that a number of core haphazart!ists aspire to transcend: popular taste; the discernment encouraged in camera clubs; and, the rhetoric of those involved professionally (Bourdieu et al.); and seek to engage with the “awareness of illegitimacy and the difficulties implied by the constitution of photography as an artistic medium” (Chamboredon 130). Incoherence: A Technical Note My personal experience of photography ranges from the filmic to the digital (Holmes, "Bridging Adelaide"). For a number of years I specialised in facsimile graphic reproduction of artwork. In those days I became aware that films were ‘blind’ to the psychophysical affect of some few particular paint pigments. They just could not be reproduced. Even so, as I handled the dozens of images contributed to haphazart!2, converting them from the pixellated place where Flickr exists to the resolution and gamut of the ink based colour space of books, I was surprised at the number of hue values that exist in the former that do not translate into the latter. In some cases the affect is subtle so that judicious tweaking of colour levels or local colour adjustment will satisfy discerning comparison between the screenic original and the ‘soft proof’ that simulates the printed outcome. In other cases a conversion simply does not compute. I am moved to contemplate, along with Harrison and Bartell (op. cit.) just how much of the experience of media in the shared digital space is incomparably new? Acknowledgement Acting on the advice of researchers experienced in cyberethnography (Bruckman; Suler, "Ethics") I have obtained the consent of co-collaborators to comment freely on proceedings that took place in a private forum. They have been given the opportunity to review and suggest changes to the account. References Askin, Dean (aka: dnskct). “Hall of Inappropriate.” Abstract Photos/Discuss/Hall of Inappropriate, 2010. 12 Jan. 2010 ‹http://www.flickr.com/groups/abstractphotos/discuss/72157623148695254/>. Bourdieu, Pierre, Luc Boltanski, Robert Castel, Jean-Claude Chamboredeon, and Dominique Schnapper. Photography: A Middle-Brow Art. 1965. Trans. Shaun Whiteside. Stanford: Stanford UP, 1990. Bruckman, Amy. Studying the Amateur Artist: A Perspective on Disguising Data Collected in Human Subjects Research on the Internet. 2002. 12 Jan. 2010 ‹http://www.nyu.edu/projects/nissenbaum/ethics_bru_full.html>. Bruns, Axel. “Towards Produsage: Futures for User-Led Content Production.” Proceedings: Cultural Attitudes towards Communication and Technology 2006. Perth: Murdoch U, 2006. 275–84. 
———, and Mark Bahnisch. Social Media: Tools for User-Generated Content. Vol. 1 – “State of the Art.” Sydney: Smart Services CRC, 2009. Cha, Meeyoung, Alan Mislove, Ben Adams, and Krishna P. Gummadi. “Characterizing Social Cascades in Flickr.” Proceedings of the First Workshop on Online Social Networks. ACM, 2008. 13–18. ———, Alan Mislove, and Krishna P. Gummadi. “A Measurement-Driven Analysis of Information Propagation in the Flickr Social Network." WWW '09: Proceedings of the 18th International Conference on World Wide Web. ACM, 2009. 721–730. Cox, A.M., P.D. Clough, and J. Marlow. “Flickr: A First Look at User Behaviour in the Context of Photography as Serious Leisure.” Information Research 13.1 (March 2008). 12 Dec. 2009 ‹http://informationr.net/ir/13-1/paper336.html>. Chamboredon, Jean-Claude. “Mechanical Art, Natural Art: Photographic Artists.” Photography: A Middle-Brow Art. Pierre Bourdieu. et al. 1965. Trans. Shaun Whiteside. Stanford: Stanford UP, 1990. 129–149. Davies, Julia. “Display, Identity and the Everyday: Self-Presentation through Online Image Sharing.” Discourse: Studies in the Cultural Politics of Education 28.4 (Dec. 2007): 549–564. Elliott, Mark. “Stigmergic Collaboration: The Evolution of Group Work.” M/C Journal 9.2 (2006). 12 Jan. 2010 ‹http://journal.media-culture.org.au/0605/03-elliott.php>. Harrison, Teresa, M., and Brea Barthel. “Wielding New Media in Web 2.0: Exploring the History of Engagement with the Collaborative Construction of Media Products.” New Media & Society 11.1-2 (2009): 155–178. Holmes, Ashley. “‘Bridging Adelaide 2001’: Photography and Hyperimage, Spanning Paradigms.” VSMM 2000 Conference Proceedings. International Society for Virtual Systems and Multimedia, 2000. 79–88. ———. “Reconciling Experimentum and Experientia: Reflective Practice Research Methodology for the Creative Industries”. Speculation & Innovation: Applying Practice-Led Research in the Creative Industries. Brisbane: QUT, 2006. Kennedy, Lyndon, Mor Naaman, Shane Ahern, Rahul Nair, and Tye Rattenbury. “How Flickr Helps Us Make Sense of the World: Context and Content in Community-Contributed Media Collections.” MM’07. ACM, 2007. Miller, Andrew D., and W. Keith Edwards. “Give and Take: A Study of Consumer Photo-Sharing Culture and Practice.” Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2007. 347–356. Mislove, Alan, Hema Swetha Koppula, Krishna P. Gummadi, Peter Druschel and Bobby Bhattacharjee. “Growth of the Flickr Social Network.” Proceedings of the First Workshop on Online Social Networks. ACM, 2008. 25–30. Negoescu, Radu-Andrei, and Daniel Gatica-Perez. “Analyzing Flickr Groups.” CIVR '08: Proceedings of the 2008 International Conference on Content-Based Image and Video Retrieval. ACM, 2008. 417–426. ———, Brett Adams, Dinh Phung, Svetha Venkatesh, and Daniel Gatica-Perez. “Flickr Hypergroups.” MM '09: Proceedings of the Seventeenth ACM International Conference on Multimedia. ACM, 2009. 813–816. Pink, Sarah. Doing Visual Ethnography: Images, Media and Representation in Research. 2nd ed. London: Sage, 2007. Su, Ja-Hwung, Bo-Wen Wang, Hsin-Ho Yeh, and Vincent S. Tseng. “Ontology–Based Semantic Web Image Retrieval by Utilizing Textual and Visual Annotations.” 2009 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology – Workshops. 2009. Suler, John. “Ethics in Cyberspace Research: Consent, Privacy and Contribution.” The Psychology of Cyberspace. 1996. 12 Jan. 
2010 ‹http://www-usr.rider.edu/~suler/psycyber/psycyber.html›. ———. “Image, Word, Action: Interpersonal Dynamics in a Photo-Sharing Community.” Cyberpsychology & Behavior 11.5 (2008): 555–560. Valentine, Mark. “HAPHAZART! Magazine/Discuss/image selections…” [discussion post]. 2009. 12 Jan. 2010 ‹http://www.flickr.com/groups/haphazartmagazin/discuss/72157613147017532/›. Xu, Hongtao, Xiangdong Zhou, Mei Wang, Yu Xiang, and Baile Shi. “Exploring Flickr’s Related Tags for Semantic Annotation of Web Images.” CIVR ’09. ACM, 2009.
APA, Harvard, Vancouver, ISO, and other styles
30

Burgess, Jean, and Axel Bruns. "Twitter Archives and the Challenges of "Big Social Data" for Media and Communication Research." M/C Journal 15, no. 5 (October 11, 2012). http://dx.doi.org/10.5204/mcj.561.

Full text
Abstract:
Lists and Social MediaLists have long been an ordering mechanism for computer-mediated social interaction. While far from being the first such mechanism, blogrolls offered an opportunity for bloggers to provide a list of their peers; the present generation of social media environments similarly provide lists of friends and followers. Where blogrolls and other earlier lists may have been user-generated, the social media lists of today are more likely to have been produced by the platforms themselves, and are of intrinsic value to the platform providers at least as much as to the users themselves; both Facebook and Twitter have highlighted the importance of their respective “social graphs” (their databases of user connections) as fundamental elements of their fledgling business models. This represents what Mejias describes as “nodocentrism,” which “renders all human interaction in terms of network dynamics (not just any network, but a digital network with a profit-driven infrastructure).”The communicative content of social media spaces is also frequently rendered in the form of lists. Famously, blogs are defined in the first place by their reverse-chronological listing of posts (Walker Rettberg), but the same is true for current social media platforms: Twitter, Facebook, and other social media platforms are inherently centred around an infinite, constantly updated and extended list of posts made by individual users and their connections.The concept of the list implies a certain degree of order, and the orderliness of content lists as provided through the latest generation of centralised social media platforms has also led to the development of more comprehensive and powerful, commercial as well as scholarly, research approaches to the study of social media. Using the example of Twitter, this article discusses the challenges of such “big data” research as it draws on the content lists provided by proprietary social media platforms.Twitter Archives for ResearchTwitter is a particularly useful source of social media data: using the Twitter API (the Application Programming Interface, which provides structured access to communication data in standardised formats) it is possible, with a little effort and sufficient technical resources, for researchers to gather very large archives of public tweets concerned with a particular topic, theme or event. Essentially, the API delivers very long lists of hundreds, thousands, or millions of tweets, and metadata about those tweets; such data can then be sliced, diced and visualised in a wide range of ways, in order to understand the dynamics of social media communication. Such research is frequently oriented around pre-existing research questions, but is typically conducted at unprecedented scale. The projects of media and communication researchers such as Papacharissi and de Fatima Oliveira, Wood and Baughman, or Lotan, et al.—to name just a handful of recent examples—rely fundamentally on Twitter datasets which now routinely comprise millions of tweets and associated metadata, collected according to a wide range of criteria. What is common to all such cases, however, is the need to make new methodological choices in the processing and analysis of such large datasets on mediated social interaction.Our own work is broadly concerned with understanding the role of social media in the contemporary media ecology, with a focus on the formation and dynamics of interest- and issues-based publics. 
We have mined and analysed large archives of Twitter data to understand contemporary crisis communication (Bruns et al.), the role of social media in elections (Burgess and Bruns), and the nature of contemporary audience engagement with television entertainment and news media (Harrington, Highfield, and Bruns). Using a custom installation of the open source Twitter archiving tool yourTwapperkeeper, we capture and archive all the available tweets (and their associated metadata) containing a specified keyword (like “Olympics” or “dubstep”), name (Gillard, Bieber, Obama) or hashtag (#ausvotes, #royalwedding, #qldfloods). In their simplest form, such Twitter archives are commonly stored as delimited (e.g. comma- or tab-separated) text files, with each of the following values in a separate column:
text: contents of the tweet itself, in 140 characters or less
to_user_id: numerical ID of the tweet recipient (for @replies)
from_user: screen name of the tweet sender
id: numerical ID of the tweet itself
from_user_id: numerical ID of the tweet sender
iso_language_code: code (e.g. en, de, fr, ...) of the sender’s default language
source: client software used to tweet (e.g. Web, Tweetdeck, ...)
profile_image_url: URL of the tweet sender’s profile picture
geo_type: format of the sender’s geographical coordinates
geo_coordinates_0: first element of the geographical coordinates
geo_coordinates_1: second element of the geographical coordinates
created_at: tweet timestamp in human-readable format
time: tweet timestamp as a numerical Unix timestamp
In order to process the data, we typically run a number of our own scripts (written in the programming language Gawk) which manipulate or filter the records in various ways, and apply a series of temporal, qualitative and categorical metrics to the data, enabling us to discern patterns of activity over time, as well as to identify topics and themes, key actors, and the relations among them; in some circumstances we may also undertake further processes of filtering and close textual analysis of the content of the tweets. Network analysis (of the relationships among actors in a discussion; or among key themes) is undertaken using the open source application Gephi. While a detailed methodological discussion is beyond the scope of this article, further details and examples of our methods and tools for data analysis and visualisation, including copies of our Gawk scripts, are available on our comprehensive project website, Mapping Online Publics. In this article, we reflect on the technical, epistemological and political challenges of such uses of large-scale Twitter archives within media and communication studies research, positioning this work in the context of the phenomenon that Lev Manovich has called “big social data.” In doing so, we recognise that our empirical work on Twitter is concerned with a complex research site that is itself shaped by a complex range of human and non-human actors, within a dynamic, indeed volatile media ecology (Fuller), and using data collection and analysis methods that are in themselves deeply embedded in this ecology.
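As an illustration of the kind of processing described (the authors’ Gawk scripts are not reproduced here), the following Python sketch reads a tab-separated archive with the columns listed above and derives one simple temporal metric, tweets per hour. The file name is hypothetical, and the column names are taken directly from the field list quoted in the abstract.

```python
# A minimal sketch, not the authors' actual Gawk scripts: reading a
# tab-separated tweet archive with the columns listed above and counting
# tweets per hour. The file name "qldfloods.tsv" is hypothetical.
import csv
from collections import Counter
from datetime import datetime, timezone

COLUMNS = [
    "text", "to_user_id", "from_user", "id", "from_user_id",
    "iso_language_code", "source", "profile_image_url", "geo_type",
    "geo_coordinates_0", "geo_coordinates_1", "created_at", "time",
]

def tweets_per_hour(path):
    """Bucket tweets by hour, using the numerical Unix timestamp column."""
    counts = Counter()
    with open(path, newline="", encoding="utf-8") as handle:
        reader = csv.DictReader(handle, fieldnames=COLUMNS, delimiter="\t")
        for row in reader:
            try:
                stamp = datetime.fromtimestamp(int(row["time"]), tz=timezone.utc)
            except (TypeError, ValueError):
                continue  # skip header rows or malformed records
            counts[stamp.strftime("%Y-%m-%d %H:00")] += 1
    return counts

if __name__ == "__main__":
    for hour, count in sorted(tweets_per_hour("qldfloods.tsv").items()):
        print(hour, count)
```

Counting over the numerical time column mirrors the temporal metrics mentioned above, without attempting the qualitative or categorical measures the authors also apply.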
“Big Social Data”As Manovich’s term implies, the Big Data paradigm has recently arrived in media, communication and cultural studies—significantly later than it did in the hard sciences, in more traditionally computational branches of social science, and perhaps even in the first wave of digital humanities research (which largely applied computational methods to pre-existing, historical “big data” corpora)—and this shift has been provoked in large part by the dramatic quantitative growth and apparently increased cultural importance of social media—hence, “big social data.” As Manovich puts it: For the first time, we can follow [the] imaginations, opinions, ideas, and feelings of hundreds of millions of people. We can see the images and the videos they create and comment on, monitor the conversations they are engaged in, read their blog posts and tweets, navigate their maps, listen to their track lists, and follow their trajectories in physical space. (Manovich 461) This moment has arrived in media, communication and cultural studies because of the increased scale of social media participation and the textual traces that this participation leaves behind—allowing researchers, equipped with digital tools and methods, to “study social and cultural processes and dynamics in new ways” (Manovich 461). However, and crucially for our purposes in this article, many of these scholarly possibilities would remain latent if it were not for the widespread availability of Open APIs for social software (including social media) platforms. APIs are technical specifications of how one software application should access another, thereby allowing the embedding or cross-publishing of social content across Websites (so that your tweets can appear in your Facebook timeline, for example), or allowing third-party developers to build additional applications on social media platforms (like the Twitter user ranking service Klout), while also allowing platform owners to impose de facto regulation on such third-party uses via the same code. While platform providers do not necessarily have scholarship in mind, the data access affordances of APIs are also available for research purposes. As Manovich notes, until very recently almost all truly “big data” approaches to social media research had been undertaken by computer scientists (464). But as part of a broader “computational turn” in the digital humanities (Berry), and because of the increased availability to non-specialists of data access and analysis tools, media, communication and cultural studies scholars are beginning to catch up. Many of the new, large-scale research projects examining the societal uses and impacts of social media—including our own—which have been initiated by various media, communication, and cultural studies research leaders around the world have begun their work by taking stock of, and often substantially extending through new development, the range of available tools and methods for data analysis. The research infrastructure developed by such projects, therefore, now reflects their own disciplinary backgrounds at least as much as it does the fundamental principles of computer science. In turn, such new and often experimental tools and methods necessarily also provoke new epistemological and methodological challenges. 
The Twitter API and Twitter ArchivesThe Open API was a key aspect of mid-2000s ideas about the value of the open Web and “Web 2.0” business models (O’Reilly), emphasising the open, cross-platform sharing of content as well as promoting innovation at the margins via third-party application development—and it was in this ideological environment that the microblogging service Twitter launched and experienced rapid growth in popularity among users and developers alike. As José van Dijck cogently argues, however, a complex interplay of technical, economic and social dynamics has seen Twitter shift from a relatively open, ad hoc and user-centred platform toward a more formalised media business: For Twitter, the shift from being primarily a conversational communication tool to being a global, ad-supported followers tool took place in a relatively short time span. This shift did not simply result from the owner’s choice for a distinct business model or from the company’s decision to change hardware features. Instead, the proliferation of Twitter as a tool has been a complex process in which technological adjustments are intricately intertwined with changes in user base, transformations of content and choices for revenue models. (van Dijck 343)The specifications of Twitter’s API, as well as the written guidelines for its use by developers (Twitter, “Developer Rules”) are an excellent example of these “technological adjustments” and the ways they are deeply interwined with Twitter’s search for a viable revenue model. These changes show how the apparent semantic openness or “interpretive flexibility” of the term “platform” allows its meaning to be reshaped over time as the business models of platform owners change (Gillespie).The release of the API was first announced on the Twitter blog in September 2006 (Stone), not long after the service’s launch but after some popular third-party applications (like a mashup of Twitter with Google Maps creating a dynamic display of recently posted tweets around the world) had already been developed. Since then Twitter has seen a flourishing of what the company itself referred to as the “Twitter ecosystem” (Twitter, “Developer Rules”), including third-party developed client software (like Twitterific and TweetDeck), institutional use cases (such as large-scale social media visualisations of the London Riots in The Guardian), and parasitic business models (including social media metrics services like HootSuite and Klout).While the history of Twitter’s API rules and related regulatory instruments (such as its Developer Rules of the Road and Terms of Use) has many twists and turns, there have been two particularly important recent controversies around data access and control. First, the company locked out developers and researchers from direct “firehose” (very high volume) access to the Twitter feed; this was accompanied by a crackdown on free and public Twitter archiving services like 140Kit and the Web version of Twapperkeeper (Sample), and coincided with the establishment of what was at the time a monopoly content licensing arrangement between Twitter and Gnip, a company which charges commercial rates for high-volume API access to tweets (and content from other social media platforms). A second wave of controversy among the developer community occurred in August 2012 in response to Twitter’s release of its latest API rules (Sippey), which introduce further, significant limits to API use and usability in certain circumstances. 
In essence, the result of these changes to the Twitter API rules, announced without meaningful consultation with the developer community which created the Twitter ecosystem, is a forced rebalancing of development activities: on the one hand, Twitter is explicitly seeking to “limit” (Sippey) the further development of API-based third-party tools which support “consumer engagement activities” (such as end-user clients), in order to boost the use of its own end-user interfaces; on the other hand, it aims to “encourage” the further development of “consumer analytics” and “business analytics” as well as “business engagement” tools. Implicit in these changes is a repositioning of Twitter users (increasingly as content consumers rather than active communicators), but also of commercial and academic researchers investigating the uses of Twitter (as providing a narrow range of existing Twitter “analytics” rather than engaging in a more comprehensive investigation both of how Twitter is used, and of how such uses continue to evolve). The changes represent an attempt by the company to cement a certain, commercially viable and valuable, vision of how Twitter should be used (and analysed), and to prevent or at least delay further evolution beyond this desired stage. Although such attempts to “freeze” development may well be in vain, given the considerable, documented role which the Twitter user base has historically played in exploring new and unforeseen uses of Twitter (Bruns), it undermines scholarly research efforts to examine actual Twitter uses at least temporarily—meaning that researchers are increasingly forced to invest time and resources in finding workarounds for the new restrictions imposed by the Twitter API.Technical, Political, and Epistemological IssuesIn their recent article “Critical Questions for Big Data,” danah boyd and Kate Crawford have drawn our attention to the limitations, politics and ethics of big data approaches in the social sciences more broadly, but also touching on social media as a particularly prevalent site of social datamining. In response, we offer the following complementary points specifically related to data-driven Twitter research relying on archives of tweets gathered using the Twitter API.First, somewhat differently from most digital humanities (where researchers often begin with a large pre-existing textual corpus), in the case of Twitter research we have no access to an original set of texts—we can access only what Twitter’s proprietary and frequently changing API will provide. The tools Twitter researchers use rely on various combinations of parts of the Twitter API—or, more accurately, the various Twitter APIs (particularly the Search and Streaming APIs). As discussed above, of course, in providing an API, Twitter is driven not by scholarly concerns but by an attempt to serve a range of potentially value-generating end-users—particularly those with whom Twitter can create business-to-business relationships, as in their recent exclusive partnership with NBC in covering the 2012 London Olympics.The following section from Twitter’s own developer FAQ highlights the potential conflicts between the business-case usage scenarios under which the APIs are provided and the actual uses to which they are often put by academic researchers or other dataminers:Twitter’s search is optimized to serve relevant tweets to end-users in response to direct, non-recurring queries such as #hashtags, URLs, domains, and keywords. 
The Search API (which also powers Twitter’s search widget) is an interface to this search engine. Our search service is not meant to be an exhaustive archive of public tweets and not all tweets are indexed or returned. Some results are refined to better combat spam and increase relevance. Due to capacity constraints, the index currently only covers about a week’s worth of tweets. (Twitter, “Frequently Asked Questions”) Because external researchers do not have access to the full, “raw” data, against which we could compare the retrieved archives which we use in our later analyses, and because our data access regimes rely so heavily on Twitter’s APIs—each with its technical quirks and limitations—it is impossible for us to say with any certainty that we are capturing a complete archive or even a “representative” sample (whatever “representative” might mean in a data-driven, textualist paradigm). In other words, the “lists” of tweets delivered to us on the basis of a keyword search are not necessarily complete; and there is no way of knowing how incomplete they are. The total yield of even the most robust capture system (using the Streaming API and not relying only on Search) depends on a number of variables: rate limiting, the filtering and spam-limiting functions of Twitter’s search algorithm, server outages and so on; further, because Twitter prohibits the sharing of data sets it is difficult to compare notes with other research teams. In terms of epistemology, too, the primary reliance on large datasets produces a new mode of scholarship in media, communication and cultural studies: what emerges is a form of data-driven research which tends towards abductive reasoning; in doing so, it highlights tensions between the traditional research questions in discourse or text-based disciplines like media and communication studies, and the assumptions and modes of pattern recognition that are required when working from the “inside out” of a corpus, rather than from the outside in (for an extended discussion of these epistemological issues in the digital humanities more generally, see Dixon). Finally, even the heuristics of our analyses of Twitter datasets are mediated by the API: the datapoints that are hardwired into the data naturally become the most salient, further shaping the type of analysis that can be done. For example, a common process in our research is to use the syntax of tweets to categorise them as one of the following types of activity:
original tweets: tweets which are neither @reply nor retweet
retweets: tweets which contain RT @user… (or similar)
unedited retweets: retweets which start with RT @user…
edited retweets: retweets which do not start with RT @user…
genuine @replies: tweets which contain @user, but are not retweets
URL sharing: tweets which contain URLs
(Retweets which are made using the Twitter “retweet button,” resulting in verbatim passing-along without the RT @user syntax or an opportunity to add further comment during the retweet process, form yet another category, which cannot be tracked particularly effectively using the Twitter API.) These categories are driven by the textual and technical markers of specific kinds of interactions that are built into the syntax of Twitter itself (@replies or @mentions, RTs); and specific modes of referentiality (URLs).
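The syntax-based categories listed above can be approximated in code. The following Python sketch is an illustrative reconstruction rather than the authors’ implementation; the decision rules follow the list, while the regular expressions are an assumption and would need tuning against real archive data.

```python
# A rough sketch of the syntax-based categorisation listed above. The
# decision rules follow the abstract; the regular expressions are my own
# approximation and would need tuning against real archive data.
import re

RT_PATTERN = re.compile(r"\bRT @\w+", re.IGNORECASE)
MENTION_PATTERN = re.compile(r"@\w+")
URL_PATTERN = re.compile(r"https?://\S+")

def categorise(text):
    """Assign activity categories to a tweet, based on its syntax alone."""
    categories = []
    if RT_PATTERN.search(text):
        if RT_PATTERN.match(text):
            categories.append("unedited retweet")   # starts with RT @user
        else:
            categories.append("edited retweet")     # RT @user appears later
    elif MENTION_PATTERN.search(text):
        categories.append("genuine @reply")          # contains @user, not a retweet
    else:
        categories.append("original tweet")          # neither @reply nor retweet
    if URL_PATTERN.search(text):
        categories.append("URL sharing")             # may co-occur with the above
    return categories

if __name__ == "__main__":
    samples = [
        "RT @user: worth reading",
        "Good point RT @user: worth reading",
        "@user agreed entirely",
        "New post up: http://example.org/post",
    ]
    for tweet in samples:
        print(categorise(tweet), "<-", tweet)
```

As the parenthetical note indicates, button-based retweets carry no RT @user marker in the tweet text, so purely syntactic rules of this kind cannot capture them.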
All of them focus on (and thereby tend to privilege) more informational modes of communication, rather than the ephemeral, affective, or ambiently intimate uses of Twitter that can be illuminated more easily using ethnographic approaches: approaches that can actually focus on the individual user, their social contexts, and the broader cultural context of the traces they leave on Twitter. ConclusionsIn this article we have described and reflected on some of the sociotechnical, political and economic aspects of the lists of tweets—the structured Twitter data upon which our research relies—which may be gathered using the Twitter API. As we have argued elsewhere (Bruns and Burgess)—and, hopefully, have begun to demonstrate in this paper—media and communication studies scholars who are actually engaged in using computational methods are well-positioned to contribute to both the methodological advances we highlight at the beginning of this paper and the political debates around computational methods in the “big social data” moment on which the discussion in the second part of the paper focusses. One pressing issue in the area of methodology is to build on current advances to bring together large-scale datamining approaches with ethnographic and other qualitative approaches, especially including close textual analysis. More broadly, in engaging with the “big social data” moment there is a pressing need for the development of code literacy in media, communication and cultural studies. In the first place, such literacy has important instrumental uses: as Manovich argues, much big data research in the humanities requires costly and time-consuming (and sometimes alienating) partnerships with technical experts (typically, computer scientists), because the free tools available to non-programmers are still limited in utility in comparison to what can be achieved using raw data and original code (Manovich, 472).But code literacy is also a requirement of scholarly rigour in the context of what David Berry calls the “computational turn,” representing a “third wave” of Digital Humanities. Berry suggests code and software might increasingly become in themselves objects of, and not only tools for, research: I suggest that we introduce a humanistic approach to the subject of computer code, paying attention to the wider aspects of code and software, and connecting them to the materiality of this growing digital world. With this in mind, the question of code becomes increasingly important for understanding in the digital humanities, and serves as a condition of possibility for the many new computational forms that mediate our experience of contemporary culture and society. (Berry 17)A first step here lies in developing a more robust working knowledge of the conceptual models and methodological priorities assumed by the workings of both the tools and the sources we use for “big social data” research. Understanding how something like the Twitter API mediates the cultures of use of the platform, as well as reflexively engaging with its mediating role in data-driven Twitter research, promotes a much more materialist critical understanding of the politics of the social media platforms (Gillespie) that are now such powerful actors in the media ecology. ReferencesBerry, David M. “Introduction: Understanding Digital Humanities.” Understanding Digital Humanities. Ed. David M. Berry. London: Palgrave Macmillan, 2012. 1-20.boyd, danah, and Kate Crawford. 
“Critical Questions for Big Data.” Information, Communication & Society 15.5 (2012): 662-79.Bruns, Axel. “Ad Hoc Innovation by Users of Social Networks: The Case of Twitter.” ZSI Discussion Paper 16 (2012). 18 Sep. 2012 ‹https://www.zsi.at/object/publication/2186›.Bruns, Axel, and Jean Burgess. “Notes towards the Scientific Study of Public Communication on Twitter.” Keynote presented at the Conference on Science and the Internet, Düsseldorf, 4 Aug. 2012. 18 Sep. 2012 http://snurb.info/files/2012/Notes%20towards%20the%20Scientific%20Study%20of%20Public%20Communication%20on%20Twitter.pdfBruns, Axel, Jean Burgess, Kate Crawford, and Frances Shaw. “#qldfloods and @QPSMedia: Crisis Communication on Twitter in the 2011 South East Queensland Floods.” Brisbane: ARC Centre of Excellence for Creative Industries and Innovation, 2012. 18 Sep. 2012 ‹http://cci.edu.au/floodsreport.pdf›Burgess, Jean E. & Bruns, Axel (2012) “(Not) the Twitter Election: The Dynamics of the #ausvotes Conversation in Relation to the Australian Media Ecology.” Journalism Practice 6.3 (2012): 384-402Dixon, Dan. “Analysis Tool Or Research Methodology: Is There an Epistemology for Patterns?” Understanding Digital Humanities. Ed. David M. Berry. London: Palgrave Macmillan, 2012. 191-209.Fuller, Matthew. Media Ecologies: Materialist Energies in Art and Technoculture. Cambridge, Mass.: MIT P, 2005.Gillespie, Tarleton. “The Politics of ‘Platforms’.” New Media & Society 12.3 (2010): 347-64.Harrington, Stephen, Highfield, Timothy J., & Bruns, Axel (2012) “More than a Backchannel: Twitter and Television.” Ed. José Manuel Noguera. Audience Interactivity and Participation. COST Action ISO906 Transforming Audiences, Transforming Societies, Brussels, Belgium, pp. 13-17. 18 Sept. 2012 http://www.cost-transforming-audiences.eu/system/files/essays-and-interview-essays-18-06-12.pdfLotan, Gilad, Erhardt Graeff, Mike Ananny, Devin Gaffney, Ian Pearce, and danah boyd. “The Arab Spring: The Revolutions Were Tweeted: Information Flows during the 2011 Tunisian and Egyptian Revolutions.” International Journal of Communication 5 (2011): 1375-1405. 18 Sep. 2012 ‹http://ijoc.org/ojs/index.php/ijoc/article/view/1246/613›.Manovich, Lev. “Trending: The Promises and the Challenges of Big Social Data.” Debates in the Digital Humanities. Ed. Matthew K. Gold. Minneapolis: U of Minnesota P, 2012. 460-75.Mejias, Ulises A. “Liberation Technology and the Arab Spring: From Utopia to Atopia and Beyond.” Fibreculture Journal 20 (2012). 18 Sep. 2012 ‹http://twenty.fibreculturejournal.org/2012/06/20/fcj-147-liberation-technology-and-the-arab-spring-from-utopia-to-atopia-and-beyond/›.O’Reilly, Tim. “What is Web 2.0? Design Patterns and Business Models for the Next Generation of Software.” O’Reilly Network 30 Sep. 2005. 18 Sep. 2012 ‹http://www.oreillynet.com/pub/a/oreilly/tim/news/2005/09/30/what-is-web-20.html›.Papacharissi, Zizi, and Maria de Fatima Oliveira. “Affective News and Networked Publics: The Rhythms of News Storytelling on #Egypt.” Journal of Communication 62.2 (2012): 266-82.Sample, Mark. “The End of Twapperkeeper (and What to Do about It).” ProfHacker. The Chronicle of Higher Education 8 Mar. 2011. 18 Sep. 2012 ‹http://chronicle.com/blogs/profhacker/the-end-of-twapperkeeper-and-what-to-do-about-it/31582›.Sippey, Michael. “Changes Coming in Version 1.1 of the Twitter API.” 16 Aug. 2012. Twitter Developers Blog. 18 Sep. 2012 ‹https://dev.Twitter.com/blog/changes-coming-to-Twitter-api›.Stone, Biz. “Introducing the Twitter API.” Twitter Blog 20 Sep. 2006. 
18 Sep. 2012 ‹http://blog.Twitter.com/2006/09/introducing-Twitter-api.html›.Twitter. “Developer Rules of the Road.” Twitter Developers Website 17 May 2012. 18 Sep. 2012 ‹https://dev.Twitter.com/terms/api-terms›.Twitter. “Frequently Asked Questions.” 18 Sep. 2012 ‹https://dev.twitter.com/docs/faq›.Van Dijck, José. “Tracing Twitter: The Rise of a Microblogging Platform.” International Journal of Media and Cultural Politics 7.3 (2011): 333-48.Walker Rettberg, Jill. Blogging. Cambridge: Polity, 2008.Wood, Megan M., and Linda Baughman. “Glee Fandom and Twitter: Something New, or More of the Same Old Thing?” Communication Studies 63.3 (2012): 328-44.
APA, Harvard, Vancouver, ISO, and other styles
31

Egliston, Ben. "Building Skill in Videogames: A Play of Bodies, Controllers and Game-Guides." M/C Journal 20, no. 2 (April 26, 2017). http://dx.doi.org/10.5204/mcj.1218.

Full text
Abstract:
IntroductionIn his now-seminal book, Pilgrim in the Microworld (1983), David Sudnow details his process of learning to play the game Breakout on the Atari 2600. Sudnow develops an account of his graduation from a novice (having never played a videogame prior, and middle-aged at time of writing) to being able to fluidly perform the various configurative processes involved in an acclimated Breakout player’s repertoire.Sudnow’s account of videogame skill-development is not at odds with common-sense views on the matter: people become competent at videogames by playing them—we get used to how controllers work and feel, and to the timings of the game and those required of our bodies, through exposure. We learn by playing, failing, repeating, and ultimately internalising the game’s rhythms—allowing us to perform requisite actions. While he does not put it in as many words, Sudnow’s account affords parity to various human and nonhuman stakeholders involved in videogame-play: technical, temporal, and corporeal. Essentially, his point is that intertwined technical systems like software and human-interface devices—with their respective temporal rhythms, which coalesce and conflict with those of the human player—require management to play skilfully.The perspective Sudnow develops here is no doubt important, but modes of building competency cannot be strictly fixed around a player-videogame relationship; a relatively noncontroversial view in game studies. Videogame scholars have shown that there is currency in understanding how competencies in gameplay arise from engaging with ancillary objects beyond the thresholds of player-game relations; the literature to date casting a long shadow across a broad spectrum of materials and practices. Pursuing this thread, this article addresses the enterprise (and conceptualisation) of ‘skill building’ in videogames (taken as the ability to ‘beat games’ or cultivate the various competencies to do so) via the invocation of peripheral objects or practices. More precisely, this article develops the perspective that we need to attend to the impacts of ancillary objects on play—positioned as hybrid assemblage, as described in the work of writers like Sudnow. In doing so, I first survey how the intervention of peripheral game material has been researched and theorised in game studies, suggesting that many accounts deal too simply with how players build skill through these means—eliding the fact that play works as an engine of many moving parts. We do not simply become ‘better’ at videogames by engaging peripheral material. Furthering this view, I visit recent literature broadly associated with disciplines like post-phenomenology, which handles the hybridity of play and its extension across bodies, game systems, and other gaming material—attending to how skill building occurs; that is, through the recalibration of perceptual faculties operating in the bodily and temporal dimensions of videogame play. We become ‘better’ at videogames by drawing on peripheral gaming material to augment how we negotiate the rhythms of play.Following on from this, I conclude by mobilising post-phenomenological thinking to further consider skill-building through peripheral material, showing how such approaches can generate insights into important and emerging areas of this practice. 
Following recent games research, such as the work of James Ash, I adopt Bernard Stiegler’s formulation of technicity—pointing toward the conditioning of play through ancillary gaming objects: focusing particularly on the relationship between game skill, game guides, and embodied processes of memory and perception. In short, this article considers videogame skill-building, through means beyond the game, as a significant recalibration of embodied, temporal, and technical entanglements involved in play. Building Skill: From Guides to Bodies There is a handsome literature that has sought to conceptualise the influence of ancillary game material, which can be traced to earlier theories of media convergence (Jenkins). More incisive accounts (pointing directly at game-skill) have been developed since, through theoretical rubrics such as paratext and metagaming. A point of congruence is the theme of relation: the idea that the locus of understanding and meaning can be specified through things outside the game. For scholars like Mia Consalvo (who popularised the notion of paratext in game studies), paratexts are a central motor in play. As Consalvo suggests, paratexts are quite often primed to condition how we do things in and around videogames; there is a great instructive potential in material like walkthrough guides, gaming magazines and cheating devices. Subsequent work has since made productive use of the concept to investigate game-skill and peripheral material and practice. Worth noting is Chris Paul’s research on World of Warcraft (WoW). Paul suggests that players disseminate high-level strategies through a practice known as ‘Theorycraft’ in the game’s community: one involving the use of paratextual statistics applications to optimise play—the results then disseminated across Web-forums (see also: Nardi). Metagaming (Salen and Zimmerman 482) is another concept that is often used to position the various extrinsic objects or practices installed in play—a concept deployed by scholars to conceptualise skill building through both games and the things at their thresholds (Donaldson). Moreover, the ability to negotiate out-of-game material has been positioned as a form of skill in its own right (see also: Donaldson). Becoming familiar with paratextual resources and being able to parse this information could then constitute skill-building. Ancillary gaming objects are important, and as some have argued, central in gaming culture (Consalvo). However, critical areas are left unexamined with respect to skill-building, because scholars often fail to place paratexts or metagaming in the contexts in which they operate; that is, amongst the complex technical, embodied and temporal conjunctures of play—such as those described by Sudnow. Conceptually, much of what Sudnow says in Microworld undergirds the post-human, object-oriented, or post-phenomenological literature that has begun to populate game studies (and indeed media studies more broadly). This materially-inflected writing takes seriously the fact that technical objects (like videogames) and human subjects are caught up in the rhythms of each other; digital media exists “as a mode or cluster of operations in consort with matter”, as Anna Munster tells us (330). To return to videogames, Patrick Crogan and Helen Kennedy argue that gameplay is about a “technicity” between human and nonhuman things, irreducible to any sole actor. Play is a confluence of metastable forces and conditions, a network of distributed agencies (see also Taylor, Assemblage). 
Others like Brendan Keogh forward post-phenomenological approaches (operating under scholars like Don Ihde)—looking past the subject-centred nature of videogame research. Ultimately, these theorists situate play as an ‘exploded diagram’, challenging anthropocentric accounts. This position has proven productive in research on ‘skilled’ or ‘high-level’ play (fertile ground for considering competency-development). Emma Witkowski, T.L. Taylor (Raising), and Todd Harper have suggested that skilled play in games emerges from the management of complex embodied and technical rhythms (echoing the points raised prior by Sudnow). Placing Paratexts in Play While we have these varying accounts of how skill develops within and beyond player-game relationships, these two perspectives are rarely consolidated. That said, I address some of the limited body of work that has sought to place the paratext in the complex and distributed conjunctures of play; building a vocabulary and framework via encounters with what could loosely be called post-phenomenological thinking (not dissimilar to the just surveyed accounts). The strength of this work lies in its development of a more precise view of the operational reality of playing ‘with’ paratexts. The recent work of Darshana Jayemanne, Bjorn Nansen, and Thomas Apperley theorises the outward expansion of games and play, into diverse material, social, and spatial dimensions (147), as an ‘aesthetics of recruitment’. Consideration is given to ‘paratextual’ play and skill. For instance, they provide the example of players invoking the expertise they have witnessed broadcast through Websites like Twitch.tv or YouTube—skill-building operating here across various fronts, and through various modalities (155). Players are ‘recruited’, in different capacities, through expanded interfaces, which ultimately contour phenomenological encounters with games. Ash provides a fine-grained account in research on spatiotemporal perception and videogames—one much more focused on game-skill. Ash examines how high-level communities of players cultivate ‘spatiotemporal sensitivity’ in the game Street Fighter IV through—in Stiegler’s terms—‘exteriorising’ (Fault) game information into various data sets—producing what he calls ‘technicity’. In this way, Ash suggests that these paratextual materials don’t merely ‘influence play’ (Technology 200), but rather direct how players perceive time, and habituate exteriorised temporal rhythms into their embodied facility (a translation of high-level play). By doing so, the game can be played more proficiently. Following the broadly post-phenomenological direction of these works, I develop a brief account of two paratextual practices. Like Ash, I deploy the work of Stiegler (drawing also on Ash’s usage). I utilise Stiegler’s theoretical schema of technicity to roughly sketch how some other areas of skill-building via peripheral material can be placed within the context of play—looking particularly at the conditioning of embodied faculties of player anticipation, memory and perception through play and paratext alike. A Technicity of Paratext The general premise of Stiegler’s technicity is that the human cannot be thought of independent from their technical supplements—that is, ‘exterior’ technical objects which could include, but are not limited to, technologies (Fault). Stiegler argues that the human, and their fundamental memory structure is finite, and as such is reliant on technical prostheses, which register and transmit experience (Fault 17). 
This technical supplement is what Stiegler terms ‘tertiary retention’. In short, for Stiegler, technicity can be understood as the interweaving of ‘lived’ consciousness (Cinematic 21) with tertiary retentional apparatus—which is palpably felt in our orientations in and toward time (Fault) and space (including the ‘space’ of our bodies, see New Critique 11).To be more precise, tertiary retention conditions the relationship between perception, anticipation, and subjective memory (or what Stiegler—by way of phenomenologist Edmund Husserl, whose work he renovates—calls primary retention, protention, and secondary retention respectively). As Ash demonstrates (Technology), Stiegler’s framework is rich with potential in investigating the relationship between videogames and their peripheral materials. Invoking technicity, we can rethink—and expand on—commonly encountered forms of paratexts, such as game guides or walkthroughs (an example Consalvo gives in Cheating). Stiegler’s framework provides a means to assess the technical organisation (through both games and paratexts) of embodied and temporal conditions of ‘skilled play’. Following Stiegler, Consalvo’s example of a game guide is a kind of ‘exteriorisation of play’ (to the guide) that adjusts the embodied and temporal conditions of anticipation and memory (which Sudnow would tell us are key in skill-development). To work through an example, if I was playing a hard game (such as Dark Souls [From Software]), the general idea is that I would be playing from memories of the just experienced, and with expectations of what’s to come based on everything that’s happened prior (following Stiegler). There is a technicity in the game’s design here, as Ash would tell us (Technology 190-91). By way of Stiegler (and his reading of Heidegger), Ash argues a popular trend in game design is to force a technologically-mediated interplay between memory, anticipation, and perception by making videogames ‘about’ a “a future outside of present experience” (Technology 191), but hinging this on past-memory. Players then, to be ‘skilful’, and move forward through the game environment without dying, need to manage cognitive and somatic memory (which, in Dark Souls, is conventionally accrued through trial-and-error play; learning through error incentivised through punitive game mechanics, such as item-loss). So, if I was playing against one of the game’s ‘bosses’ (powerful enemies), I would generally only be familiar with the way they manoeuvre, the speed with which they do so, and where and when to attack based on prior encounter. For instance, my past-experience (of having died numerous times) would generally inform me that using a two-handed sword allows me to get in two attacks on a boss before needing to retreat to avoid fatal damage. Following Stiegler, we can understand the inscription of videogame experience in objects like game guides as giving rise to anticipation and memory—albeit based on a “past that I have not lived but rather inherited as tertiary retentions” (Cinematic 60). Tertiary retentions trigger processes of selection in our anticipations, memories, and perceptions. 
Where videogame technologies are traditionally the tertiary retentions in play (Ash, Technologies), the use of game-guides refracts anticipation, memory, and perception through joint systems of tertiary retention—resulting in the outcome of more efficiently beating a game.To return to my previous example of navigating Dark Souls: where I might have died otherwise, via the guide, I’d be cognisant to the timings within which I can attack the boss without sustaining damage, and when to dodge its crushing blows—allowing me to eventually defeat it and move toward the stage’s end (prompting somatic and cognitive memory shifts, which influence my anticipation in-game). Through ‘neurological’ accounts of technology—such as Stiegler’s technicity—we can think more closely about how playing with a skill-building apparatus (like a game guide) works in practice; allowing us to identify how various situations ingame can be managed via deferring functions of the player (such as memory) to exteriorised objects—shifting conditions of skill building. The prism of technicity is also useful in conceptualising some of the new ways players are building skill beyond the game. In recent years, gaming paratexts have transformed in scope and scale. Gaming has shifted into an age of quantification—with analytics platforms which harvest, aggregate, and present player data gaining significant traction, particularly in competitive and multiplayer videogames. These platforms perform numerous operations that assist players in developing skill—and are marketed as tools for players to improve by reflecting on their own practices and the practices of others (functioning similarly to the previously noted practice of TheoryCraft, but operating at a wider scale). To focus on one example, the WarCraftLogs application in WoW (Image 1) is a highly-sophisticated form of videogame analytics; the perspective of technicity providing insights into its functionality as skill-building apparatus.Image 1: WarCraftLogs. Image credit: Ben Egliston. Following Ash’s use of Stiegler (Technology), quantifying the operations that go into playing WoW can be conceptualised as what Stiegler calls a system of traces (Technology 196). Because of his central thesis of ‘technical existence’, Stiegler maintains that ‘interiority’ is coincident with technical support. As such, there is no calculation, no mental phenomena, that does not arise from internal manipulation of exteriorised symbols (Cinematic 52-54). Following on with his discussion of videogames, Ash suggests that in the exteriorisation of gameplay there is “no opposition between gesture, calculation and the representation of symbols” (Technology 196); the symbols working as an ‘abbreviation’ of gameplay that can be read as such. Drawing influence from this view, I show that ‘Big Data’ analytics platforms like WarCraftLogs similarly allow users to ‘read’ play as a set of exteriorised symbols—with significant outcomes for skill-building; allowing users to exteriorise their own play, examine the exteriorised play of others, and compare exteriorisations of their own play with those of others. Image 2: WarCraftLogs Gameplay Breakdown. Image credit: Ben Egliston.Image 2 shows a screenshot of the WarCraftLogs interface. Here we can see the exteriorisation of gameplay, and how the platform breaks down player inputs and in-game occurrences (written and numeric, like Ash’s game data). 
The screenshot shows a ‘raid boss’ (where players team up to defeat powerful computer-controlled enemies)—atomising the sequence of inputs a player has made over the course of the encounter. This is an accurate ledger of play—a readout that can speak to mechanical performance (specific ingame events occurred at a specific time), as well as caching and providing parses of somatic inputs and execution (e.g. ability to trace the rates at which players expend in-game resources can provide insights into rapidity of button presses). If information falls outside what is presented, players can work with an Application Programming Interface to develop customised readouts (this is encouraged through other game-data platforms, like OpenDota in Dota 2). Through this system, players can exteriorise their own input and output or view the play of others—both useful in building skill. The first point here—of exteriorising one’s own experience—resonates with Stiegler’s renovation of Husserl's ‘temporal object’—that is, an object that exists in and is formed through time—through temporal fluxes of what appears, what happens and what manifests itself in disappearing (Cinematic 14). Stiegler suggests that tertiary retentional apparatus (e.g. a gramophone) allow us to re-experience a temporal object (e.g. a melody) which would otherwise not be possible due to the finitude of human memory.To elaborate, Stiegler argues that primary memories recede into secondary memory (which is selective reactivation of perception), but through technologies of recording, (such as game-data) we can re-experience these things verbatim. So ultimately, games analytics platforms—as exteriorised technologies of recording—facilitate this after-the-fact interplay between primary and secondary memory where players can ‘audit’ their past performance, reflecting on well-played encounters or revising error. These platforms allow the detailed examination of responses to game mechanics, and provide readouts of the technical and embodied rhythms of play (which can be incorporated into future play via reading the data). Beyond self-reflection, these platforms allow the examination of other’s play. The aggregation and sorting of game-data makes expertise both visible and legible. To elaborate, players are ranked on their performance based on all submitted log-data, offering a view of how expertise ‘works’.Image 3: Top-Ranking Players in WarCraftLogs. Image credit: Ben Egliston.Image 3 shows the top-ranked players on an encounter (the top 10 of over 100,000 logs), which means that these players have performed most competently out of all gameplay parses (the metric being most damage dealt per-second in defeating a boss). Users of the platform can look in detail at the actions performed by top players in that encounter—reading and mobilising data in a similar manner to game-guides; markedly different, however, in terms of the scope (i.e. there are many available logs to draw from) and richness of the data (more detailed and current—with log rankings recalibrated regularly). Conceptually, we can also draw parallels with previous work (see: Ash, Technology)—where the habituation of expert game data can produce new videogame technicities; ways of ‘experiencing’ play as ‘higher-level’ organisation of space and time (Ash, Technology). So, if a player wanted to ‘learn from the experts’ they would restructure their own rhythms of play around high-level logs which provide an ordered readout of various sequences of inputs involved in playing well. 
Moreover, the platform allows players to compare their logs to those of others—so these various introspective and outward-facing uses can work together, conditioning anticipations with inscriptions of past-play and ‘prosthetic’ memories through other’s log-data. In my experience as a WoW player, I often performed better (or built skill) by comparing and contrasting my own detailed readouts of play to the inputs and outputs of the best players in the world. To summarise, through technicity, I have briefly shown how exteriorising play shifts the conditions of skill-building from recalibrating mnesic and anticipatory processes through ‘firsthand’ play, to reworking these functions through engaging both games and extrinsic objects, like game guides and analytics platforms. Additionally, by reviewing and adopting various usages of technicity, I have pointed out how we might more holistically situate the gaming paratext in skill building. Conclusion There is little doubt—as exemplified through both scholarly and popular interest—that paratextual videogame material reframes modes of building game skill. Following recent work, and by providing a brief account of two paratextual practices (venturing the framework of technicity, via Stiegler and Ash—showing the complication of memory, perception, and anticipation in skill-building), I have contended that videogame-skill building—via paratextual material—can be rendered a process of operating outside of, but still caught up in, the complex assemblages of time, bodies, and technical architectures described by Sudnow at this article’s outset. Additionally, by reviewing and adopting ideas associated with technics and post-phenomenology, this article has aimed to contribute to the development of more ‘complete’ accounts of the processes and practices comprising skill building regimens of contemporary videogame players. References Ash, James. “Technology, Technicity and Emerging Practices of Temporal Sensitivity in Videogames.” Environment and Planning A 44.1 (2012): 187-201. ———. “Technologies of Captivation: Videogames and the Attunement of Affect.” Body and Society 19.1 (2013): 27-51. Consalvo, Mia. Cheating: Gaining Advantage in Videogames. Cambridge: Massachusetts Institute of Technology P, 2007. Crogan, Patrick, and Helen Kennedy. “Technologies between Games and Culture.” Games and Culture 4.2 (2009): 107-14. Donaldson, Scott. “Mechanics and Metagame: Exploring Binary Expertise in League of Legends.” Games and Culture (2015). 4 Jun. 2015 <http://journals.sagepub.com/doi/abs/10.1177/1555412015590063>. From Software. Dark Souls. Playstation 3 Game. 2011. Harper, Todd. The Culture of Digital Fighting Games: Performance and Practice. New York: Routledge, 2014. Jayemanne, Darshana, Bjorn Nansen, and Thomas H. Apperley. “Postdigital Interfaces and the Aesthetics of Recruitment.” Transactions of the Digital Games Research Association 2.3 (2016): 145-72. Jenkins, Henry. Convergence Culture: Where Old and New Media Collide. New York: New York UP, 2006. Keogh, Brendan. “Across Worlds and Bodies.” Journal of Games Criticism 1.1 (2014). Jan. 2014 <http://gamescriticism.org/articles/keogh-1-1/>. Munster, Anna. “Materiality.” The Johns Hopkins Guide to Digital Media. Eds. Marie-Laure Ryan, Lori Emerson, and Benjamin J. Robertson. Baltimore: Johns Hopkins UP, 2014. 327-30. Nardi, Bonnie. My Life as Night Elf Priest: An Anthropological Account of World of Warcraft. Ann Arbor: Michigan UP, 2010. OpenDota. OpenDota. Web browser application. 2017. Paul, Christopher A. 
“Optimizing Play: How Theory Craft Changes Gameplay and Design.” Game Studies: The International Journal of Computer Game Research 11.2 (2011). May 2011 <http://gamestudies.org/1102/articles/paul>.Salen, Katie, and Eric Zimmerman. Rules of Play: Game Design Fundamentals. Cambridge: Massachusetts Institute of Technology P, 2004.Stiegler, Bernard. Technics and Time, 1: The Fault of Epimetheus. Stanford: Stanford UP, 1998.———. For a New Critique of Political Economy. Cambridge: Polity, 2010.———. Technics and Time, 3: Cinematic Time and the Question of Malaise. Stanford: Stanford UP, 2011.Sudnow, David. Pilgrim in the Microworld. New York: Warner Books, 1983.Taylor, T.L. “The Assemblage of Play.” Games and Culture 4.4 (2009): 331-39.———. Raising the Stakes: E-Sports and the Professionalization of Computer Gaming. Cambridge: Massachusetts Institute of Technology P, 2012.WarCraftLogs. WarCraftLogs. Web browser application. 2016.Witkowski, Emma. “On the Digital Playing Field: How We ‘Do Sport’ with Networked Computer Games.” Games and Culture 7.5 (2012): 349-74.
APA, Harvard, Vancouver, ISO, and other styles
32

Laak, Marin, and Piret Viires. "Kirjandus ja digitaalne tehnoloogia / Literature and Digital Technology." Methis. Studia humaniora Estonica 18, no. 23 (June 11, 2019). http://dx.doi.org/10.7592/methis.v18i23.14803.

Full text
Abstract:
Eesti kirjanduse ja digitehnoloogilise pöörde suhted ulatuvad juba enam kui kahekümne aasta tagusesse aega. Siinse artikliga antakse ülevaade, kuidas digitaalne tehnoloogia on mõjutanud Eestis kirjanduse, sh kirjandusajaloo üle mõtlemist ning nüüdisaegseid kirjanduslikke vorme. Tuuakse näiteid Eestis teostatud digihumanitaariaga seostatavatest projektidest ja digitaalse kirjanduse avaldumisvormidest. Samuti arutletakse artiklis digihumanitaaria mõiste üle ja selle üle, mida tähendab eesti kirjanduse uurimine digihumanitaaria kontekstis. Püstitatakse ka küsimus, kas digihumanitaaria muudab kirjandusuurimises midagi olemuslikult – kas ta on kirjandusuurimise tööriist/meetod või hoopis täiesti uus distsipliin. The relations between Estonian literature and the digital technological turn date back to more than twenty years. The aim of this article is to give an overview as to how digital technology has influenced re-thinking about literature and literary history in Estonia as well as has had impact on creating new digital literary genres. The authors of the article have a twenty-year experience both as researchers and practitioners in this field. The article introduces some examples of the projects created in Estonia that can be related to digital humanities and also some examples of Estonian digital literature. Also, the concept of digital humanities and its meaning for Estonian literary studies will be discussed below. The concept of digital humanities has been used actively during last decade. Although the field of digital humanities is quite broad, in Estonia this concept has been used rather as a synonym for methods of quantitative computer analysis: linguists have used it very productively in analysing text corpora and computer linguistics has developed it into an independent discipline. Up until now, there have been only a few attempts to analyse literary texts using quantitative computer analysis method in Estonia. However, the concept of digital humanities can be interpreted in a much broader sense. Susan Schreibman, Ray Siemens, John Unsworth (2016) find that digital humanities is not only computational modelling and analysis of humanities information, but also the cultural study of digital technologies, their creative possibilities, and their social impact (Schreibman et al 2016, xvii–xviii). It appears, then, that the concept of digital humanities is wider and it can be said that it encompasses also creative digital practices and analysing it, as well as creating, interpreting and analysing digital projects of literary historical narrative and cultural web resources. In Estonia, the research on the electronic new media and the application of digital technology in the field of literary studies can be traced back to the second half of the 1990s. Up to the present, the research has followed these three main directions: 1) New forms of literary genres in the electronic environment. Digitally born literature and the appearance of other new forms of art have been examined in Estonia since 1996, when the first hypertextual poems were created, followed by more complex works of digital literature combining different media (text, video, sound, image) and literature in social media. The article gives a short overview of this kind of literature in Estonia and poses a question about the limits of literature. What defines literature if digital literature is a hybrid artefact combining text, image, and video? 
Can we still talk about literature when it is created using the technology of virtual reality and has no traditional features at all? Is it still literature or rather a VR movie or a VR computer game? 2) Digitisation of earlier literature and the creation of digital bookshelves. These are literary environments created using digital technologies. As examples there can be mentioned the ongoing project “EWOD. Estonian Writers Online Dictionary” (University of Tartu) and the project for digitising earlier Estonian literature and creating a digital bookshelf “EEVA. The Text Corpus of Earlier Estonian Literature” (https://utlib.ut.ee/eeva/). The latter was created at the University of Tartu already in 2002; it makes accessible mostly the works of Baltic German writers. 3) Development of a new model of the literary historical narrative; application this model in the digital environment. Three large-scale projects for digital representation of Estonian literary history were initiated during the years 1997–2007, with the objective of developing a model of the new literary historical narrative for applying in the digital environment and creating new interactive information environments. The Estonian Literary Museum carried out an Estonian Tiger Leap project “ERNI. Estonian Literary History in texts 1924–25” in 1997–2001 (http://www2.kirmus.ee/erni/erni.html). The project tested the method of reception aesthetics in representing the Estonian literary history of the 1920s. Its objective was to use a relatively limited amount of well-studied material in testing a new type of literary historical narrative and it was based on the visualisation of the network of relations between literary texts and metatexts in the form of hypertext. At the University of Tartu, the project “The Estonian National Epic The Kalevipoeg” was developed within the framework of the project CULTOS (Cultural Tools of Learning: Tools and Services IST-2000-28134) in 2001–2003. Again, it was a project for visualising literary relationships, requiring the knowledge of the source text and intertexts and reproducing them in the form of a network of intertextual relations. The project “Kreutzwald’s Century: the Estonian Cultural History Web” (in progress) was created at the Estonian Literary Museum in 2004 with the objective of modelling and representing a new narrative of literary history (http://kreutzwald.kirmus.ee/). This was a hybrid project which synthesised the study of the classical narrative of literary history, the needs of the user of the digital new media theory, and the development of digital resources for memory institutions. The underlying idea of the project was to make all the works of fiction of one author, as well as their biography, archival sources, etc., dynamically visible for the reader on an interactive time axis. However, the final and so far open-ended question posed in the article is whether digital humanities is essentially a tool for literary research, or is it an entirely new research approach, a new discipline – Computational Literary Analyses or Digital Literary Studies. A further discussion is needed for finding answers to this challenging question. Regardless, it is clear that rapid developments in technology bring along also rapid changes in the humanities. Hence, the future of literature and literary research depends both on the developments regarding digital technology as well as the humanities and the mutual impacts of both domains.
APA, Harvard, Vancouver, ISO, and other styles
33

Muller, Vivienne. "Motherly Love." M/C Journal 5, no. 6 (November 1, 2002). http://dx.doi.org/10.5204/mcj.2008.

Full text
Abstract:
There is a humorously disturbing cartoon by Mary Leunig of a mother as a suffering Jesus figure crucified on the cross of motherhood. The image simultaneously evokes and countersigns the idealised portrait of mothers as serene and self-sacrificing Madonna figures. In Leunig’s cartoon the mother wears a crown of nappy pins; her children, inconsolable, bereft of the mother as a site of selfless love and nurture, look up at her on the cross. Over twenty years old, this cartoon haunts the viewer with its ironic/iconic motifs of motherhood, because it signifies the presence of an absence – the absence of the mother as a desiring subject, and a “space where the mother can be seen as woman” (Irigaray in Grosz 1989, 119). Steven Spielberg’s 2001 film AI (Artificial Intelligence) is, I think, an interesting attempt to speak/visualise that space, but it ultimately taps into our Western phallocentric society’s ongoing love affair with the woman as mother, and its denial of the mother as woman. Crucial to this denial is an erasure of the female body and love of self, both of which are sacrificed on the altar of the child’s needs and desires. Luce Irigaray’s concern for the absence of the mother’s subjectivity in Western discourses tempts her to claim that men are intent on giving birth to themselves, thus not only refuting the mother as woman, but disavowing even that limited, metonymic, signifying space allotted to the mother – the mother as the site of origin. She writes: in order to become men, they continue to consume … [the mother], draw on her resources and, at the same time, they deny her or disclaim her in their identification with and their belonging to the masculine world. They owed their existence, their body, their life and they forget or misrecognise this debt in order to set themselves up as powerful men, adults busying themselves with public affairs (Irigaray in Grosz 1989, 121) Developments in reproductive technology over the past twenty years have enabled the concept of the self-made man who disavows the maternal debt to become more of a reality. Mary Ann Doane, citing Andreas Huyssen, (in Wolmark, 1999) claims that the ultimate (male) fantasy of reproductive technology is the production of life without the mother (24). Doane also points out, however, that the technology/human reproduction interface in these processes “puts into crisis the very possibility of the question of origins, the Oedipal dilemma and the relation between subjectivity and knowledge that it supports” (26). Spielberg’s film AI (which screened to mixed reception world-wide) problematises the consequences of life without the biological mother due to male technological intervention, resolving this by reaffirming the importance of the woman as mother and celebrating the mother-child bond. In effect, one of the ‘messages’ in this film is that you cannot take the mother out of the woman, and I would suggest that part of the reviewers’ antipathy to the film is fed by an aversion to Spielberg’s focus on maternal space (Shepard, 2001). The film invites the viewer to be critical of the ways in which males use technology to tamper with nature, and suggests that such intrusions will have disastrous consequences. By revealing the robots as more moral and humane than their human counterparts, AI highlights a fundamental anxiety about the human “self”. 
However, in attempting to resolve these issues, AI reifies the woman as mother, aligning her with “nature” and positioning the mother and child as the precious casualties of technological intervention. Thus AI confirms that well rehearsed binarism that links women with nature/nurture and men with technology/culture, in the process endorsing a reductionist view of women’s bodies as essentially maternal (Grosz, 1987, 5-6) and subsuming the mother beneath a concern for the child (Walker, 1998). AI is set in a future society topographically ravaged by the effects of greenhouse gases. Major cities have been flooded and there are legal sanctions to strictly limit pregnancies. Robots called ‘mechas’ have been invented to serve the human race and to help conserve the limited resources. David is the perfect ‘mecha’ child who can be programmed to love – the realisation of the male scientist’s dreams. He is adopted by human parents, Monica and Martin, after it seems that Henry, their biological child, will never recover from a coma. The father Martin brings David home, a gesture that echoes the male scientist’s fantasy of the reproduction of life without a mother. Monica, the mother, falls in love with David, and it is Monica, not Martin, who programs David to love her in return. Thus it is suggested that motherhood is instinctive in women. The father retains an aloof detachment from the child: “You’re a toy,” he tells David, “I’m real”. This statement clearly distinguishes the technological from the human, culture from nature, and the gendered power hierarchy that these divisions uphold. Once David is programmed to love Monica, he calls her “Mommy”. David’s desire to be a ‘real live boy’ is fuelled by his potent love for the mother, which drives the drama and the narrative shape of his quest for reunion with her. The mother-as-woman disappears under the weight of nostalgia for the certainty of origins which mandates the woman-as-mother. Elizabeth Grosz, paraphrasing Luce Irigaray argues that, “with no access to social value in her own right, she becomes the mother who has only food (that is love) to give the child, a nurture that, in our culture, is deemed either excessive or inadequate”(1989, 119). Constructed by others, the mother here is both the phallic mother who satisfies all needs and desires, and the castrated mother who denies the self, both Freudian conceptualisations. AI further portrays motherhood without the mess, by transforming the female body into the ‘good and proper’ maternal body. It thus removes the potential for the abjectification of corporeal maternal space that occurs in many horror films dealing with the mother figure (see Creed’s work). The maternal body in AI is a sanitised space that is represented by the clean, bounded rooms of the family home, and the closed, groomed, feminine body of Monica. The film confirms the home as a traditional site of nurture and motherly love, a spatial marker of women’s identity as well as place. At the end of the film, David’s twofold wish of becoming a real live boy and being with his mother is finally granted, but not until after some 2000 years. In conferring human status on David, the narrative resolves any apprehension about loss of boundaries and the blurring of the distinctions between self and other, which is generated by the human/technology interface (Creed 1995, 147). When Monica is re-born by a male scientist figure in this future society, an event that echoes the immaculate conception birth of David, she is the woman-as-mother. 
Reunited in the family home with David, she comments that she feels strange and doesn’t quite know why she is there; but David’s presence and love reassures her, giving her access to her only identity, that of mother. The film closes on an idyllic scene in which David and the idealised mother are falling asleep together on the parental bed in the sanctuary of the home. Spielberg’s mother is not Mary Leunig’s suffering mother, but the film’s focus on the mother/child bond bears an interesting resemblance to feminist theories of the pre-oedipal as a space of potential disruption to the Oedipal narrative (Kristeva). In its demotion of the masculine and its criticism of technology, AI could be read as a celebration of the pre-oedipal, that phase of psycho-sexual identity formation in which the mother is privileged and which is usually devalued once the father intervenes in the mother-child dyad (as Martin does when he separates David from Monica, insisting he leaves after the real son Henry recovers). The phallic domain of the father, the symbolic, is spurned in this film, while the domain of the mother, the imaginary, and the pre-Oedipal, is valorised. This is visually enhanced in the film through the use of water imagery, the invigoration of the maternal figure (Monica/ Madonna), and the primacy given to the mother-child bond. Secure in the maternal space at the end of the film, David eschews the masculine: “No Henry, no Martin, no grief”. However such a reading still prioritises the woman-as-mother, not the mother-as-woman. The problem with singing the song of the pre-oedipal sublime (Helene Cixous’s ‘white ink’ and Julia Kristeva’s ‘semiotic’), is that it threatens to buy into the same idealisation of the mother as the phallocentric discourses that it attacks (Walker, 1998). I have argued here that the mother, mothering, motherhood and the maternal body continue to be represented in our culture as idealised experiences and states of being. Such textual representation allocates homogeneity and naturalness to motherhood, and a fixed conceptualisation, even obliteration, of women’s bodies and desires. Despite appearing to undermine the masculine and valorise the maternal, AI provides a metaphorical inscription of motherly love and love for the mother that fails to recognise the flesh behind the words. Works Cited Creed, Barbara. “Horror and the Carnivalesque.” Fields of Vision. Eds. Leslie Deveraux and Roger Hillman. Berkley: University of California Press, 1995. Doane, Mary Ann. “Technophilia: Technology, Representation, and the Feminine.” Cybersexualities: A reader on Feminist Theory, Cyborgs and Cyberspace. Ed. Jenny Wolmark. Edinburgh: Edinburgh University Press, 1999. Grosz, Elizabeth. “Notes Towards a Corporeal Feminism.” Australian Feminist Studies, 5 (1987): 1-9. Grosz, Elizabeth. Sexual Subversions: three French feminists. Sydney: Allen & Unwin, 1989. Spielberg, Steven. AI. Warner Bros. and Dreamworks Pictures, 2001. Shepard, Lucius. “AIEEEEEEEEEEEEE.” Fantasy & Science Fiction Vol 101 (Dec 2001): 112-117. Walker, Michelle. Philosophy and the maternal body: reading silence. London; New York: Routledge, 1998. Links http://www.benway.com/mary-leunig.shtml http://www.envf.port.ac.uk/illustration/images/vlsh/psycholo/irigaray.htm http://web.ukonline.co.uk/n.paradoxa/barnett.htm
APA, Harvard, Vancouver, ISO, and other styles
34

Henderson, Neil James. "Online Persona as Hybrid-Object: Tracing the Problems and Possibilities of Persona in the Short Film Noah." M/C Journal 17, no. 3 (June 10, 2014). http://dx.doi.org/10.5204/mcj.819.

Full text
Abstract:
Introduction The short film Noah (2013) depicts the contemporary story of an adolescent relationship breakdown and its aftermath. The film tells the story by showing events entirely as they unfold on the computer screen of Noah, the film’s teenaged protagonist. All of the characters, including Noah, appear on film solely via technological mediation. Although it is a fictional representation, Noah has garnered a lot of acclaim within an online public for the authenticity and realism of its portrayal of computer-mediated life (Berkowitz; Hornyak; Knibbs; Warren). Judging by the tenor of a lot of this commentary, the film has keyed in to a larger cultural anxiety around issues of communication and relationships online. Many reviewers and interested commentators have expressed concern at how closely Noah’s distracted, frenetic and problematic multitasking resembles their own computer usage (Beggs; Berkowitz; Trumbore). They frequently express the belief that it was this kind of behaviour that led to the relationship breakdown depicted in the film, as Noah proves to be “a lot better at opening tabs than at honest communication” (Knibbs para. 2). I believe that the cultural resonance of the film stems from the way in which the film is an implicit attempt to assess the nature of contemporary online persona. By understanding online persona as a particular kind of “hybrid object” or “quasi-object”—a combination of both human and technological creation (Latour We Have)—the sense of the overall problems, as well as the potential, of online persona as it currently exists, is traceable through the interactions depicted within the film. By understanding social relationships as constituted through dynamic interaction (Schutz), I understand the drama of Noah to stem principally from a tension in the operation of online persona between a) the technological automation of presentation that forms a core part of the nature of contemporary online persona, and b) the need for interaction in effective relationship development. However, any attempt to blame this tension on an inherent tendency in technology is itself problematised by the film’s presentation of an alternative type of online persona, in a Chatroulette conversation depicted in the film’s second half. Persona and Performance, Mediation and Delegation Marshall (“Persona Studies” 163) describes persona as “a new social construction of identity and public display.” This new type of social construction has become increasingly common due to a combination of “changes in work, transformation of our forms of social connection and networking via new technologies, and consequent new affective clusters and micropublics” (Marshall “Persona Studies” 166). New forms of “presentational” media play a key role in the construction of persona by providing the resources through which identity is “performed, produced and exhibited by the individual or other collectives” (Marshall “Persona Studies” 160). In this formulation of persona, it is not clear how performance and presentation interlink with the related concepts of production and exhibition. Marshall’s concept of “intercommunication” suggests a classificatory scheme for these multiple registers of media and communication that are possible in the contemporary media environment. However, Marshall’s primary focus has so far been on the relationship between existing mediated communication forms, and their historical transformation (Marshall “Intercommunication”). 
Marshall has not as yet made clear the theoretical link between performance, presentation, production and exhibition. Actor-Network Theory (ANT) can provide this theoretical link, and a way of understanding persona as it operates in an online context: as online persona. In ANT, everything that exists is an object. Objects are performative actors—the associations between objects produce the identity of objects and the way they perform. The performative actions of objects, equally, produce the nature of the associations between them (Latour Reassembling). Neither objects nor associations have a prior existence outside of their relationship to each other (Law). For Latour, the semiotic distinction between “human” and “non-human” is itself an outcome of the performances of objects and their associations. There are also objects, which Latour calls “quasi-objects” or “hybrids,” that do not fit neatly on one side of the human/non-human divide or the other (Latour We Have). Online persona is an example of such a hybrid or quasi-object: it is a combination of both human creation and technological mediation. Two concepts formulated by Latour provide some qualitative detail about the nature of the operation of Actor-Networks. Firstly, Latour emphasises that actors are also “mediators.” This name emphasises that when an actor acts to create a connection between two or more other objects, it actively transforms the way that objects encounter the performance of other objects (Latour Reassembling). This notion of mediation resembles Hassan’s definition of “media” as an active agent of transferral (Hassan). But Latour emphasises that all objects, not just communication technologies, act as mediators. Secondly, Latour describes how an actor can take on the actions originally performed by another actor. He refers to this process as “delegation.” Delegation, especially delegation of human action to a technological delegate, can render action more efficient in two ways. It can reduce the effort needed for action, causing “the transformation of a major effort into a minor one.” It can also reduce the time needed to exert effort in performing an action: the effort need not be ongoing, but can be “concentrated at the time of installation” (Latour “Masses” 229-31). Online persona, in the terminology of ANT, is a constructed, performative presentation of identity. It is constituted through a combination of human action, ongoing mediation of present human action, and the automation, through technological delegation, of previous actions. The action of the film Noah is driven by the changes in expected and actual interaction that these various aspects of persona encourage. The Problems and Potential of Online Persona By relaying the action entirely via a computer screen, the film Noah is itself a testament to how encounters with others solely via technological mediation can be genuinely meaningful. Relaying the action in this way is in fact creatively productive, providing new ways of communicating details about characters and relationships through the layout of the screen. For instance, the film introduces the character of Amy, Noah’s girlfriend, and establishes her importance to Noah through her visual presence as part of a photo on his desktop background at the start of the film. 
The film later communicates the end of the relationship when the computer boots up again, but this time with Amy’s photo notably absent from the background.However, the film deviates from a “pure” representation of a computer screen in a number of ways. Most notably, the camera frame is not static, and moves around the screen in order to give the viewer the sense that the camera is simulating Noah’s eye focus. According to the directors, the camera needed to show viewers where the focus of the action was as the story progressed. Without this indication of where to focus, it was hard to keep viewers engaged and interested in the story (Paulas).Within the story of the film itself, the sense of drama surrounding Noah’s actions similarly stem from the exploration of the various aspects of what it is and is not possible to achieve in the performance of persona – both the positive and the negative consequences. At the start of the film, Noah engages in a Skype conversation with his girlfriend Amy. While Noah is indeed “approximating being present” (Berkowitz para. 3) for the initial part of this conversation, once Noah hears an implication that Amy may want to break up with him, the audience sees his eye movements darting between Amy’s visible face in Skype and Amy’s Facebook profile, and nowhere else.It would be a mistake to think that this double focus means Noah is not fully engaging with Amy. Rather, he is engaging with two dimensions of Amy’s available persona: her Facebook profile, and her Skype presence. Noah is fully focusing on Amy at this point of the film, but the unitary persona he experiences as “Amy” is constructed from multiple media channels—one dynamic and real-time, the other comparatively stable and static. Noah’s experience of Amy is multiplexed, a unitary experience constructed from multiple channels of communication. This may actually enhance Noah’s affective involvement with Amy.It is true that at the very start of the Skype call, Noah is focusing on several unrelated activities, not just on Amy. The available technological mediators enable this division of attention. But more than that, the available technological mediators also assume in their functioning that the user’s attention can be and should be divided. Thus some of the distractions Noah experiences at this time are of his own making (e.g. the simple game he plays in a browser window), while others are to some degree configured by the available opportunity to divide one’s attention, and the assumption of others that the user will do so. One of the distractions faced by Noah comes in the form of repeated requests from his friend “Kanye East” to play the game Call of Duty. How socially obligated is Noah to respond to these requests as promptly as possible, regardless of what other important things (that his friend doesn’t know about) he may be doing?Unfortunately, and for reasons which the audience never learns, the Skype call terminates abruptly before Noah can fully articulate his concerns to Amy. With a keen eye, the audience can see that the image of Amy froze not long after Noah started talking to her in earnest. She did indeed appear to be having problems with her Skype, as her later text message suggested. But there’s no indication why Amy decided, as described in the same text message, to postpone the conversation after the Skype call failed.This is a fairly obvious example of the relatively common situation in which one actor unexpectedly refuses to co-operate with the purposes of another (Callon). 
Noah’s uncertainty at how to address this non-cooperation leads to the penultimate act of the film when he logs in to Amy’s Facebook account. In order to fully consider the ethical issues involved, a performative understanding of the self and of relationships is insufficient. Phenomenological understandings of the self and social relationships are more suited to ethical considerations. Online Persona and Social Relationships In the “phenomenological sociology” of Alfred Schutz, consciousness is inescapably temporal, constantly undergoing slight modification by the very process of progressing through time. The constitution of a social relationship, for Schutz, occurs when two (and only two) individuals share a community of space and time, simultaneously experiencing the same external phenomena. More importantly, it also requires that these two individuals have an ongoing, mutual and simultaneous awareness of each other’s progress and development through time. Finally, it requires that the individuals be mutually aware of the very fact that they are aware of each other in this ongoing, mutual and simultaneous way (Schutz). Schutz refers to this ideal-typical relationship state as the “We-relationship,” and the communal experience that constitutes it as “growing older together.” The ongoing awareness of constantly generated new information about the other is what constitutes a social relationship, according to Schutz. Accordingly, a lack of such information exchange will lead to a weaker social bond. In situations where direct interaction does not occur, Schutz claimed that individuals would construct their knowledge of the other through “typification”: pre-learned schemas of identity of greater or lesser generality, affixed to the other based on whatever limited information may be available. In the film, when Amy is no longer available via Skype, an aspect of her persona is still available for interrogation. After the failed Skype call, Noah repeatedly refreshes Amy’s Facebook profile, almost obsessively checking her relationship status to see if it has changed from reading “in a relationship.” In the process he discovers that, not long after their aborted Skype conversation, Amy has changed her profile picture—from one that had an image of the two of them together, to one that contains an image of Amy only. He also in the process discovers that someone he does not know named “Dylan Ramshaw” has commented on all of Amy’s current and previous profile pictures. Dylan’s Facebook profile proves resistant to interrogation—Noah’s repeated, frustrated attempts to click on Dylan’s profile picture to bring up more detail yield no results. In the absence of an aspect of persona that undergoes constant temporal change, any new information attained—a profile picture changed, a not-previously noticed regular commenter discovered—seems to gain heightened significance in defining not just the current relationship status with another, but the trajectory which that relationship is taking. The “typification” that Noah constructs of Amy is that of a guilty, cheating girlfriend. The penultimate act of the film occurs when Noah chooses to log in to Amy’s Facebook account using her password (which he knows), “just to check for sketchy shit,” or so he initially claims to Kanye East. His suspicions appear to be confirmed when he discovers private exchanges between Amy and Dylan which indicate that they had been meeting together without Noah’s knowledge. 
The suggestion to covertly read Amy’s private Facebook messages comes originally from Kanye East, when he asks Noah “have you lurked [covertly read] her texts or anything?” Noah’s response strongly suggests the normative uncertainty that the teenaged protagonist feels at the idea; his initial response to Kanye East reads “is that the thing to do now?” The operation of Facebook in this instance has two, somewhat contradictory, delegated tasks: let others feel connected to Amy and what she’s doing, but also protect Amy’s privacy. The success of the second goal interferes with Noah’s desire to achieve the first. And so he violates her privacy. The times that Noah’s mouse hovers and circles around a button that would send a message from Amy’s account or update Amy’s Facebook profile are probably the film’s most cringe-inducing moments. Ultimately Noah decides to update Amy’s relationship status to single. The feedback he receives to Amy’s account immediately afterwards seems to confirm his suspicions that this was what was going to happen anyway: one friend of Amy’s says “finally” in a private message, and the suspicious “Dylan” offers up a shoulder to cry on. Apparently believing that this reflects the reality of their relationship, Noah leaves the status on Amy’s Facebook profile as “single.” The tragedy of the film is that Noah’s assumptions were quite incorrect. Rather than reflecting their updated relationship status, the change revealed to Amy that he had violated her privacy. Dylan’s supposedly over-familiar messages were perfectly acceptable on the basis that Dylan was not actually heterosexual (and therefore a threat to Noah’s role as boyfriend), but gay. The Role of Technology: “It’s Complicated” One way to interpret the film would be to blame Noah’s issues on technology per se. This is far too easy. Rather, the film suggests that Facebook was to some degree responsible for Noah’s relationship issues and the problematic way in which he tried to address them. In the second half of the film, Noah engages in a very different form of online interaction via the communication service known as Chatroulette. This interaction stands in sharp contrast to the interactions that occurred via Facebook. Chatroulette is a video service that pairs strangers around the globe for a chat session. In the film, Noah experiences a fairly meaningful moment on Chatroulette with an unnamed girl on the service, who dismisses Facebook as “weird and creepy”. The sheer normative power of Facebook comes across when Noah initially refuses to believe the unnamed Chatroulette girl when she says she does not have a Facebook profile. She suggests, somewhat ironically, that the only way to have a real, honest conversation with someone is “with a stranger, in the middle of the night”, as just occurred on Chatroulette. Besides the explicit comparison between Facebook and Chatroulette in the dialogue, this scene also provides an implicit comparison between online persona as it is found on Facebook and as it is found on Chatroulette. The style of interaction on each service is starkly different. On Facebook, users largely present themselves and perform to a “micro-public” of their “friends.” They largely engage in static self-presentations, often “interacting” only through interrogating the largely static self-presentations of others. On Chatroulette, users interact with strangers chosen randomly by an algorithm. 
Users predominantly engage in dialogue one-on-one, and interaction tends to be a mutual, dynamic affair, much like “real life” conversation. Yet while the “real-time” dialogue possible on Chatroulette may seem more conducive to facilitating Schutz’ idea of “growing older together,” the service also has its issues. The randomness of connection with others is problematic, as the film frankly acknowledges in the uncensored shots of frontal male nudity that Noah experiences in his search for a chat partner. Also, the problematic lack of a permanent means of staying in contact with each other is illustrated by a further tragic moment in the film when the session with the unnamed girl ends, with Noah having no means of ever being able to find her again.
Conclusion
It is tempting to dismiss the problems that Noah encounters while interacting via mediated communication with the exhortation to “just go out and live […] life in the real world” (Trumbore para. 4), but this is also over-simplistic. Rather, what we can take away from the film is that there are trade-offs to be had in the technological mediation of self-presentation and communication. The questions that we need to address are: what prompts the choice of one form of technological mediation over another? And what are the consequences of this choice? Contemporary persona, as conceived by David Marshall, is motivated by the commodification of the self, and by the increased importance of affect in relationships (Marshall “Persona Studies”). In the realm of Facebook, the commodification of the self has to some degree flattened the available interactivity of the online self, in favour of what the unnamed Chatroulette girl derogatorily refers to as “a popularity contest.”
The short film Noah is to some degree a cultural critique of dominant trends in contemporary online persona, notably of the “commodification of the self” instantiated on Facebook. By conceiving of online persona in the terms of ANT outlined here, it becomes possible to envision alternatives to this dominant form of persona, including a concept of persona as commodification. Further, it is possible to do this in a way that avoids the trap of blaming technology for all problems, and that recognises both the advantages and disadvantages of different ways of constructing online persona. The analysis of Noah presented here can therefore provide a guide for more sophisticated and systematic examinations of the hybrid-object “online persona.”
References
Beggs, Scott. “Short Film: The Very Cool ‘Noah’ Plays Out Madly on a Teenager’s Computer Screen.” Film School Rejects 11 Sep. 2013. 3 Mar. 2014. Callon, M. “Some Elements of a Sociology of Translation: Domestication of the Scallops and the Fishermen of St Brieuc Bay.” Power, Action and Belief: A New Sociology of Knowledge? Ed. John Law. London, UK: Routledge & Kegan Paul, 1986. 196–223. Berkowitz, Joe. “You Need to See This 17-Minute Film Set Entirely on a Teen’s Computer Screen.” Fast Company 10 Sep. 2013. 1 Mar. 2014. Hassan, Robert. Media, Politics and the Network Society. Maidenhead: Open University Press, 2004. Hornyak, Tim. “Short Film ‘Noah’ Will Make You Think Twice about Facebook—CNET.” CNET 19 Sep. 2013. 2 Mar. 2014. Knibbs, Kate. “‘Have You Lurked Her Texts?’: How the Directors of ‘Noah’ Captured the Pain of Facebook-Era Dating.” Digital Trends 14 Sep. 2013. 9 Feb. 2014. Latour, Bruno. Reassembling the Social: An Introduction to Actor-Network Theory. Oxford University Press, 2005. Latour, Bruno. We Have Never Been Modern.
Cambridge, MA: Harvard University Press, 1993. Latour, Bruno. “Where Are the Missing Masses? The Sociology of a Few Mundane Artifacts.” Shaping Technology/Building Society: Studies in Sociotechnical Change. Ed. Wiebe E. Bijker and John Law. Cambridge, MA: MIT Press, 1992. 225–58. Law, John. “After ANT: Complexity, Naming and Topology.” Actor-Network Theory and After. Ed. John Law and John Hassard. Oxford: Blackwell Publishers, 1999. 1–14. Marshall, P. David. “Persona Studies: Mapping the Proliferation of the Public Self.” Journalism 15.2 (2014): 153–170. Marshall, P. David. “The Intercommunication Challenge: Developing a New Lexicon of Concepts for a Transformed Era of Communication.” ICA 2011: Proceedings of the 61st Annual ICA Conference. Boston, MA: International Communication Association, 2011. 1–25. Paulas, Rick. “Step inside the Computer Screen of ‘Noah.’” VICE 18 Jan. 2014. 8 Feb. 2014. Schutz, Alfred. The Phenomenology of the Social World. Trans. George Walsh and Frederick Lehnert. London, UK: Heinemann, 1972. Trumbore, Dave. “Indie Spotlight: NOAH - A 17-Minute Short Film from Patrick Cederberg and Walter Woodman.” Collider 2013. 2 Apr. 2014. Warren, Christina. “The Short Film That Takes Place Entirely inside a Computer.” Mashable 13 Sep. 2013. 9 Feb. 2014. Woodman, Walter, and Patrick Cederberg. Noah. 2013.
APA, Harvard, Vancouver, ISO, and other styles
35

Brown, Andrew R. "Code Jamming." M/C Journal 9, no. 6 (December 1, 2006). http://dx.doi.org/10.5204/mcj.2681.

Full text
Abstract:
Jamming culture has become associated with digital manipulation and reuse of materials. As well, the term jamming has long been used by musicians (and other performers) to mean improvisation, especially in collaborative situations. A practice that gets to the heart of both these meanings is live coding; where digital content (music and/or visuals predominantly) is created through computer programming as a performance. During live coding performances digital content is created and presented in real time. Normally the code from the performers screen is displayed via data projection so that the audience can see the unfolding process as well as see or hear the artistic outcome. This article will focus on live coding of music, but the issues it raises for jamming culture apply to other mediums also. Live coding of music uses the computer as an instrument, which is “played” by the direct construction and manipulation of sonic and musical processes. Gestural control involves typing at the computer keyboard but, unlike traditional “keyboard” instruments, these key gestures are usually indirect in their effect on the sonic result because they result in programming language text which is then interpreted by the computer. Some live coding performers, notably Amy Alexander, have played on the duality of the keyboard as direct and indirect input source by using it as both a text entry device, audio trigger, and performance prop. In most cases, keyboard typing produces notational description during live coding performances as an indirect music making, related to what may previously have been called composing or conducting; where sound generation is controlled rather than triggered. The computer system becomes performer and the degree of interpretive autonomy allocated to the computer can vary widely, but is typically limited to probabilistic choices, structural processes and use of pre-established sound generators. In live coding practices, the code is a medium of expression through which creative ideas are articulated. The code acts as a notational representation of computational processes. It not only leads to the sonic outcome but also is available for reflection, reuse and modification. The aspects of music described by the code are open to some variation, especially in relation to choices about music or sonic granularity. This granularity continuum ranges from a focus on sound synthesis at one end of the scale to the structural organisation of musical events or sections at the other end. Regardless of the level of content granularity being controlled, when jamming with code the time constraints of the live performance environment force the performer to develop succinct and parsimonious expressions and to create processes that sustain activity (often using repetition, iteration and evolution) in order to maintain a coherent and developing musical structure during the performance. As a result, live coding requires not only new performance skills but also new ways of describing the structures of and processes that create music. Jamming activities are additionally complex when they are collaborative. Live Coding performances can often be collaborative, either between several musicians and/or between music and visual live coders. Issues that arise in collaborative settings are both creative and technical. When collaborating between performers in the same output medium (e.g., two musicians) the roles of each performer need to be defined. 
When a pianist and a vocalist improvise, the harmonic and melodic roles are relatively obvious, but two laptop performers are more like a guitar duo where each can take any lead, supportive, rhythmic, harmonic, melodic, textual or other function. Prior organisation and sensitivity to the needs of the unfolding performance are required, as they have always been in musical improvisations. At the technical level it may be necessary for computers to be networked so that timing information, at least, is shared. Various network protocols, most commonly Open Sound Control (OSC), are used for this purpose. Another collaboration takes place in live coding, the one between the performer and the computer; especially where the computational processes are generative (as is often the case). This real-time interaction between musician and algorithmic process has been termed Hyperimprovisation by Roger Dean. Jamming cultures that focus on remixing often value the sharing of resources, especially through the movement and treatment of content artefacts such as audio samples and digital images. In live coding circles there is a similarly strong culture of resource sharing, but live coders are mostly concerned with sharing techniques, processes and tools. In recognition of this, it is quite common that when distributing works live coding artists will include descriptions of the processes used to create the work and even share the code. This practice is also common in the broader computational arts community, as evident in the sharing of Flash code on sites such as Levitated by Jared Tarbell, in the Processing site (Reas & Fry), or in publications such as Flash Math Creativity (Peters et al.). Also underscoring this culture of sharing is a prioritising of reputation above (or prior to) profit. As a result of these social factors most live coding tools are freely distributed. Live coding tools have become more common in the past few years. There are a number of personalised systems that utilise various different programming languages and environments. Some of the more polished programs that can be used widely include SuperCollider (McCartney), ChucK (Wang & Cook) and Impromptu (Sorensen). While these environments all use different languages and varying ways of dealing with sound structure granularity, they do share some common aspects that reveal the priorities and requirements of live coding. Firstly, they are dynamic environments where the musical/sonic processes are not interrupted by modifications to the code; changes can be made on the fly and code is modifiable at runtime. Secondly, they are text-based and quite general programming environments, which means that the full leverage of abstract coding structures can be applied during live coding performances. Thirdly, they all prioritise time, both at architectural and syntactic levels. They are designed for real-time performance where events need to occur reliably. The text-based nature of these tools means that using them in live performance is barely distinguishable from any other computer task, such as writing an email, and thus the practice of projecting the environment to reveal the live process has become standard in the live coding community as a way of communicating with an audience (Collins).
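To make these shared traits concrete, the following minimal sketch in Python (not one of the environments cited above) imitates the basic live coding move: a musical process keeps running while the performer redefines its material at runtime. It only prints note events rather than synthesising sound, and the names pattern and tempo_bpm are illustrative; in a real networked performance, timing would typically be shared between machines via OSC messages rather than a local variable.

# Illustrative sketch only: a repeating process whose material can be redefined
# while it keeps running, which is the core gesture of live coding.
import itertools
import random
import threading
import time

pattern = [60, 62, 64, 67]   # MIDI note numbers; the "material" edited during performance
tempo_bpm = 120              # also editable while the loop keeps running

def play_loop(stop_event: threading.Event) -> None:
    """Sustain activity by cycling the current pattern until told to stop."""
    for step in itertools.count():
        if stop_event.is_set():
            return
        note = pattern[step % len(pattern)]            # reads the *current* global pattern
        velocity = random.choice([64, 80, 96])         # a small probabilistic choice left to the machine
        print(f"note_on {note} velocity {velocity}")   # a real system would synthesise sound here
        time.sleep(60.0 / tempo_bpm / 2)               # eighth notes at the current tempo

stop_event = threading.Event()
threading.Thread(target=play_loop, args=(stop_event,), daemon=True).start()

time.sleep(4)                     # the process plays on its own for a while
pattern = [60, 63, 65, 70, 72]    # "live coding": swap the pitch material on the fly
tempo_bpm = 90                    # slow down; takes effect on the next step
time.sleep(4)
stop_event.set()                  # end of the performance

Succinctness matters here in the way the abstract describes: the whole running structure has to stay small enough to be rewritten under performance time constraints, and repetition and gradual variation are what keep the output coherent while the rewrite happens.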
It is interesting to reflect on how audiences respond to the projection of code as part of live coding performances. In the author’s experience as both an audience member and live coding performer, the reception has varied widely. Most people seem to find it curious and comforting. Even if they cannot follow the code, they understand or are reassured that the performance is being generated by the code. Those who understand the code often report a sense of increased anticipation as they see structures emerge, and sometimes opportunities missed. Some people dislike the projection of the code, and see it as a distasteful display of virtuosity or as a distraction to their listening experience. The live coding practitioners tend to see the projection of code as a way of revealing the underlying generative and gestural nature of their performance. For some, such as Julian Rohrhuber, code projection is a way of revealing ideas and their development during the performance. “The incremental process of livecoding really is what makes it an act of public reasoning” (Rohrhuber). For both audience and performer, live coding is an explicitly risky venture and this element of public risk taking has long been central to the appreciation of the performing arts (not to mention sport and other cultural activities). The place of live coding in the broader cultural setting is still being established. It certainly is a form of jamming, or improvisation; it also involves the generation of digital content and the remixing of cultural ideas and materials. In some ways it is also connected to instrument building. Live coding practices prioritise process and therefore have a link with conceptual visual art and serial music composition movements from the 20th century. Much of the music produced by live coding has aesthetic links, naturally enough, to electronic music genres including musique concrète, electronic dance music, glitch music, noise art and minimalism: a grouping that is not overly coherent beyond a shared concern for processes and systems. Live coding is receiving greater popular and academic attention, as is evident in recent articles in Wired (Andrews), ABC Online (Martin) and media culture blogs including The Teeming Void (Whitelaw 2006). Whatever its future profile in the broader cultural sector, the live coding community continues to grow and flourish amongst enthusiasts. The TOPLAP site is a hub of live coding activities and links prominent practitioners including Alex McLean, Nick Collins, Adrian Ward, Julian Rohrhuber, Amy Alexander, Frederick Olofsson, Ge Wang, and Andrew Sorensen. These people and many others are exploring live coding as a form of jamming in digital media and as a way of creating new cultural practices and works.
References
Andrews, R. “Real DJs Code Live.” Wired: Technology News 6 July 2006. http://www.wired.com/news/technology/0,71248-0.html. Collins, N. “Generative Music and Laptop Performance.” Contemporary Music Review 22.4 (2004): 67-79. Fry, Ben, and Casey Reas. Processing. http://processing.org/. Martin, R. “The Sound of Invention.” Catapult. ABC Online 2006. http://www.abc.net.au/catapult/indepth/s1725739.htm. McCartney, J. “SuperCollider: A New Real-Time Sound Synthesis Language.” The International Computer Music Conference. San Francisco: International Computer Music Association, 1996. 257-258. Peters, K., M. Tan, and M. Jamie. Flash Math Creativity. Berkeley, CA: Friends of ED, 2004. Reas, Casey, and Ben Fry. “Processing: A Learning Environment for Creating Interactive Web Graphics.” International Conference on Computer Graphics and Interactive Techniques. San Diego: ACM SIGGRAPH, 2003. 1. Rohrhuber, J. Post to a Live Coding email list. livecode@slab.org. 10 Sep. 2006. Sorensen, A. “Impromptu: An Interactive Programming Environment for Composition and Performance.” In Proceedings of the Australasian Computer Music Conference 2005. Eds. A. R. Brown and T. Opie. Brisbane: ACMA, 2005. 149-153. Tarbell, Jared. Levitated. http://www.levitated.net/daily/index.html. TOPLAP. http://toplap.org/. Wang, G., and P.R. Cook. “ChucK: A Concurrent, On-the-fly, Audio Programming Language.” International Computer Music Conference. ICMA, 2003. 219-226. Whitelaw, M. “Data, Code & Performance.” The Teeming Void 21 Sep. 2006. http://teemingvoid.blogspot.com/2006/09/data-code-performance.html. Citation reference for this article: MLA Style: Brown, Andrew R. “Code Jamming.” M/C Journal 9.6 (2006). <http://journal.media-culture.org.au/0612/03-brown.php>. APA Style: Brown, A. (Dec. 2006) “Code Jamming,” M/C Journal, 9(6). Retrieved from <http://journal.media-culture.org.au/0612/03-brown.php>.
APA, Harvard, Vancouver, ISO, and other styles
36

Lotti, Laura. "DIY Cheese-making and Individuation: Towards a Reconfiguration of Taste in Contemporary Computer Culture." M/C Journal 17, no. 1 (March 3, 2014). http://dx.doi.org/10.5204/mcj.757.

Full text
Abstract:
Introduction The trope of food is often used in the humanities to discuss aspects of a culture that are customarily overlooked by a textualist approach, for food embodies a kind of knowledge that comes from the direct engagement with materials and processes, and involves taste as an aesthetics that exceeds the visual concept of the “beautiful.” Moreover, cooking is one of the most ancient cultural practices, and is considered the habit that defines us as humans in comparison to other animals—not only culturally, but also physiologically (Wrangham). Today we have entered a post-human age in which technological augmentations, while promoting the erasure of embodiment in favour of intelligence (Hayles), create new assemblages between the organic and the digital, thus redefining what it means to be human. In this context, a reassessment of the practice of cooking as the manipulation of what constitutes food—both for thought and for the body—may promote a more nuanced approach to contemporary culture, in which the agency of the non-human (from synthetic materials to the digital) affects our modes of being and reflects on our aesthetic sensibility. In the 1980s, Guy Debord observed that the food industry's standardisation and automation of methods of production and consumption have anaesthetised the consumer palate with broader political and cultural implications. Today the Internet has extended the intertwinement of food and technology to the social and aesthetic spheres, thus further impacting on taste. For instance, cultural trends such as “foodism” and “slow food” thrive on blogs and social networks and, while promoting an artisanal style in food preparation and presentation, they paradoxically may also homogenise cooking techniques and the experience of sharing a meal. This leads to questions regarding the extent to which the digitalisation of culture might be hindering our capacity to taste. Or, given the new possibilities for connectivity, can this digitalisation also foster an aesthetic sensibility associated with different attitudes and approaches to food—one that transgresses both the grand narratives and the standardisation promoted by such gastronomic fashions? It also leads to the question of how such activities reflect on the collective sphere, considering the contagious character of networked communication. While foodism thrives online, the Internet has nevertheless prompted a renewed interest in DIY (do-it-yourself) cooking techniques. As a recent issue of M/C Journal testifies, today cookbooks are produced and consulted at an unprecedented rate—either in print or online (Brien and Wessell). Taking the example of the online diffusion of DIY cheese-making recipes, I will below trace the connections between cooking, computer culture, and taste with the support of Gilbert Simondon's metaphysics of technics. Although Simondon never extensively discussed food in relation to technology, the positioning of technicity at the heart of culture allows his work to be used to address the multifaceted nature of taste in the light of recent technological development, in particular of the Network. As a matter of fact, today cooking is not only a technical activity, in the sense that it requires a certain practical and theoretical skilfulness—it is also a technological matter, for the amount of networked machines that are increasingly used for food production and marketing. 
Specifically, this paper argues that by disentangling the human—albeit partially—from the capitalist cycle of production-marketing-consumption and by triggering an awareness of the increasingly dominant role technology plays in food processing and manufacturing, the online sharing of home-cooking advice may promote a reconfiguration of taste, which would translate into a more nuanced approach to contemporary techno-culture. In the first part of this discussion, I introduce Simondon’s philosophy and foreground the technical dimension of cooking by discussing cheese-making as a process of individuation. In the second, I focus on Simondon’s definition of technical objects and technical ensembles to position Internet culture in relation to cooking, and highlight how technicity folds back on taste as aesthetic impression. Ultimately, I conclude with some reflections on how such a culinary-aesthetic approach may find application in other techno-cultural fields by promoting an aesthetic sensibility that extends beyond the experience of the “social” to encompass an ethical component. Cooking as Individuation: The Networked Dimension of Taste Simondon is known as the thinker, and “tinkerer”, of technics. His project is concerned with ontogenesis—that is, the becoming of objects in relation to the terms that constitute them as individual. Simondon’s philosophy of individuation allows for a better understanding of how the Internet fosters certain attitudes to food, for it is grounded on a notion of “energetic materiality in movement” (Deleuze and Guattari 408) that explains how “immaterial” algorithms can affect individual experience and cultural production. For Simondon, individuation is the process that arises from objects being out-of-phase with themselves. Put differently, individuation allows for “the conservation of being through becoming” (Genesis 301). Likewise, individualisation is “the individuation of an individuated being, resulting from an individuation, [and creating] a new structuration within the individual” (L’Individuation 132). Individuation and individualisation are processes common to all kinds of being. Any individual operates an internal and an external resonance within the system in which it is enmeshed, and produces an “associated milieu” capable of entering into relation with other individuals within the system. Simondon maintains that nature consists of three regimes of individuation, that is, three possible phases of every being: the physical, the biological, and the psycho-social—that develop from a metastable pre-individual field. Technology traverses all three regimes and allows for further individualisation via transductive operations across such phases—that is, via operations of conversion of energy from one form to another. The recent online diffusion of DIY cheese-making recipes lends itself to be analysed with the support of Simondon’s philosophy. Today cheese dominates degustation menus beside the finest wines, and constitutes a common obsession among “foodies.” Although, as an object, cheese defies more traditional canons of beauty and pleasure—its usual pale yellow colour is not especially inviting and, generally speaking, the stinkier and mouldier it is, the more exclusive and expensive it usually is—it has played a sizeable role in the collective imagination since ancient times. 
Although the genesis of cheese predates archival memory, it is commonly assumed to be the fruit of the chemical reaction naturally occurring in the interaction of milk with the rennet inherently contained in the bladders made of ruminants’ stomachs in which milk was contained during the long transits undertaken by the nomadic cultures of Central Asia. Cheese is an invention that reportedly occurred without human intervention, and only the technical need to preserve milk in high temperature impelled humans to learn to produce it. Since World War II its production is most exclusively factory-based, even in the case of artisanal cheese (McGee), which makes the renewed concern for homemade cheese more significant from a techno-cultural perspective. Following Simondon, the individualisation of cheese—and of people in relation to cheese—depends on the different objects involved in its production, and whose associated milieu affects the outcome of the ontogenetic process via transductive operations. In the specific case of an industrial block of cheese, these may include: the more or less ethical breeding and milking of cows in a factory environment; the types of bacteria involved in the cheese-making process; the energy and costs inherent in the fabrication of the packaging material and the packaging process itself; the CO2 emissions caused by transportations; the physical and intellectual labour implied in marketing, retailing and selling; and, last but not least, the arguable nutritional value of the factory-produced cheese—all of which, in spite of their “invisibility” to the eyes of the consumer, affect physical conditions and moods when they enter into relation with the human body (Bennet). To these, we may add, with specific reference to the packaging: the RFID tags that electronically index food items into databases for a more efficient management of supplies, and the QR codes used for social media marketing purposes. In contrast, the direct engagement with the techno-material conditions at the basis of the home cookery process allows one to grasp how different operations may affect the outcome of the recipe. DIY cheese-making recipes are specifically addressed to laypeople and, because they hardly demand professional equipment, they entail a greater attunement with, and to, the objects and processes required by the recipe. For instance, one needs to “feel” when milk has reached the right temperature (specifically, 82 degrees centigrade, which means that the surface of the milk should be slightly bubbly but not fully boiling) and, with practice, one learns how the slightest movement of the hand can lead to different results, in terms of consistency and aspect. Ultimately, DIY cheese-making allows the cook to be creative with moulding, seasonings, and marinading. Indeed, by directly engaging with the undiscovered properties and potentials of ingredients, by understanding the role that energy (both in the sense of induction and “transduction”) plays on form and matter, and by developing—often via processes of trial and error—technics for stirring, draining, moulding, marinading, canning, and so forth, making cheese at home an exercise in speculative pragmatics. An experimental approach to cooking, as the negotiation between the rigid axioms that make up a recipe and the creative and experimental components inherent in the operations of mixing and blending, allows one to feel the ultimate outcome of the cooking process as an event. 
The taste of a homemade cheese is linked to a new kind of knowledge—that is, an epistemology based on continuous breakages that allow for the cooking process to carry on until the ultimate result. It is a knowledge that comes from a commitment to objects being out-of-phase, and from the acknowledgement of the network of technical operations that bring cheese to our tables. The following section discusses how another kind of object may affect the outcome of a recipe, with important implications for aesthetics, that is, technical objects. The Internet as Ingredient: Technical Objects, Aesthetics, and Invention The notion of technical objects complements Simondon’s theory of individuation to define the becoming of technology in relation to culture. To Simondon: “the technical object is not this or that thing, given hic et nunc, but that of which there is a genesis” (Du Mode 20). Technical objects, therefore, are not simply technological artifacts but are constituted by a series of events that determine their evolution (De Vries). Analogously to other kinds of individuals, they are constituted by transductive operations across the three aforementioned phases of being. The evolution of technical objects extends from the element to the individual, and ultimately to the technical ensemble. Elements are less than individualised technical objects, while individuals that are in a relation of interconnection are called ensembles. According to Simondon, technical ensembles fully individualise with the realisation of the cybernetic project. Simondon observes that: “there is something eternal in a technical ensemble [...] and it is that which is always present, and can be conserved in a thing” (Les Cahiers 87). The Internet, as a thing-network, could be regarded as an instance of such technical ensembles, however, a clarification needs to be made. Simondon explains that “true technical ensembles are not those that use technical individuals, but those that are a network of technical individuals in a relation of interconnection” (Du mode 126). To Simondon, humankind has ceased to be a technical individual with the industrialisation and automation of methods of production, and has consigned this function to machines (128). Expanding this line of thought, examples such as the viral spreading of memes, and the hypnotic power of online marketing campaigns, demonstrate how digital technology seems to have intensified this process of alienation of people from the functioning of the machine. In short, no one seems to know how or why things happen on the Internet, but we cannot help but use it. In order to constitute “real” technical ensembles, we need to incorporate technics again into culture, in a relation of reciprocity and complementarity with machines, under the aegis of a technical culture. Simondon specifies that such a reconfiguration of the relation between man and machines can only be achieved by means of an invention. An invention entails the individualisation of the technical ensemble as a departure from the mind of the inventor or designer that conceived it, in order to acquire its own autonomous existence (“Technical Mentality”). It refers to the origin of an operative solidarity between individual agents in a network, which provides the support for a human relation based on the “model of transidividuality” (Du Mode 247). A “transindividual relation” is a relation of relations that puts the individual in direct contact with a real collective. 
The notion of real collective is opposed to that of an interindividual community or social sphere, which is poisoned by the anxieties that stem from a defected relation with the technical ensemble culture is embedded in. In the specific context of the online sharing of DIY cheese-making recipes, rather than a fully individualised technical ensemble per se, the Internet can be regarded as one of the ingredients that make up the final recipe—together with human and the food—for the invention of a true technical ensemble. In such a framework, praxis, as linked to the kind of non-verbal knowledge associated with “making,” defines individuation together with the types of objects that make up the Network. While in the case of foodism, the practice of online marketing and communication homogenises culture by creating “social phenomena,” in the case of DIY cooking advice, it fosters a diversification of tastes, experiences, and flavours linked to individual modes of doing and cooking, that put the cook in a new relation with the culinary process, with food, and with the guests who have the pleasure to taste her meal. This is a qualitative change in the network that constitutes culture, rather than a mere quantitative shift in energy induction. The term “conviviality” (from the Latin con-vivere) specifically means this: a “living together,” rather than a mere dinner party. For Simondon, a real technical ensemble is an assemblage of humans, machines, tools, resources and milieus, which can only be éprouve—i.e., experienced, also in the sense of “experimented with”—rather than represented. A technical ensemble is first and foremost an aesthetic affair—it can only be perceived by experimenting with the different agents involved in the networked operations that constitute it. For Simondon “aesthetics comes after technicity [and] it also returns to us in the heart of technicity” (Michaud in De Boever et al. 122). Therefore, any object bears an aesthetic potential—even something as trivial as a homemade block of cheese. Simondon rejects the idea of an aesthetic object, but affirms the power of technicity to foreground an aesthetic impression, which operates a convergence between the diverging forces that constitute the mediation between man and world, in terms of an ethical treatment of technics. For Simondon, the beautiful is a process: “it is never, properly speaking, the object that is beautiful: it is the encounter operating a propos of the object between a real aspect of the world and a human gesture” (Du Mode 191 emphasis added). If an analysis of cooking as individuation already foregrounds an aesthetics that is both networked and technical, the relational capabilities afforded by networked media have the power to amplify the aesthetic potential of the human gesture implied in a block of homemade cheese—which today extends from searching for (or writing) a recipe online, to pouring the milk and seasoning the cheese, and which entails less environmental waste due to the less intensive processing and the lack of, or certainly a reduction in, packaging materials (Rastogi). The praise of technical creativity resounds throughout Simondon’s thought. 
By using the Internet in order to create (or indeed cook) something new, the online sharing of DIY cooking techniques like cheese-making, which partially disengages the human (and food itself) from the cycle of production-marketing-consumption that characterises the food industry in capitalist society by fostering an awareness of the networked operations that constitute her as individual, is an invention in its own right. Although the impact of these DIY activities on the global food industry is still very limited, such a hands-on approach, imbued with a dose of technical creativity, partially overcomes the alienation of the individual from the production process, by providing the conditions to “feel” how the individualisation of cheese (and the human) is inscribed in a larger metabolism. This does not stop within the economy of the body but encompasses the techno-cultural ensemble that forms capitalist society as a whole, and in which humans play only a small part. This may be considered a first step towards the reconciliation between humans and technical culture—a true technical ensemble. Indeed, eating involves “experiments in art and technology”—as the name of the infamous 1960s art collective (E.A.T.) evokes. Home-cooking in this sense is a technical-aesthetic experiment in its own right, in which aesthetics acquires an ethical nuance. Simondon’s philosophy highlights how the aesthetics involved in the home cooking process entails a political component, aimed at the disentanglement of the human from the “false” technical ensemble constituted by capitalist society, which is founded on the alienation from the production process and is driven by economic interests. Surely, an ethical approach to food would entail considering the biopolitics of the guts from the perspective of sourcing materials, and perhaps even building one’s own tools. These days, however, keeping a cow or goat in the backyard is unconceivable and/or impossible for most of us. The point is that the Internet can foster inventiveness and creativity among the participants to the Network, in spite of the fixity of the frame in which culture is increasingly inscribed (for instance, the standardised format of a Wordpress blog), and in this way, can trigger an aesthetic impression that comprises an ethical component, which translates into a political stand against the syncopated, schizophrenic rhythms of the market. Conclusion In this discussion, I have demonstrated that cooking can be considered a process of individuation inscribed in a techno-cultural network in which different transductive operations have the power to affect the final taste of a recipe. Simondon’s theory of individuation allows us to account for the impact of ubiquitous networked media on traditionally considered “human” practices, thus suggesting a new kind of humanism—a sort of technological humanism—on the basis of a new model of perception, which acknowledges the non-human actants involved in the process of individuation. I have shown that, in the case of the online sharing of cheese-making recipes, Simondon’s philosophy allows us to uncover a concept of taste that extends beyond the mere gustatory experience provided by foodism, and in this sense it may indeed affirm a reconfiguration of human culture based on an ethical approach towards the technical ensemble that envelops individuals of any kind—be they physical, living, or technical. 
Analogously, a “culinary” approach to techno-culture in terms of a commitment to the ontogenetic character of objects’ behaviours could be transposed to the digital realm in order to enlighten new perspectives for the speculative design of occasions of interaction among different beings—including humans—in ethico-aesthetic terms, based on a creative, experimental engagement with techniques and technologies. As a result, this can foreground a taste for life and culture that exceeds human-centred egotistic pleasure to encompass both technology and nature. Considering that a worryingly high percentage of digital natives both in Australia and the UK today believe that cheese and yogurt grow on trees (Howden; Wylie), perhaps cooking should indeed be taught in school alongside (rather than separate to, or instead of) programming. References Bennet, Jane. Vibrant Matter: a Political Ecology of Things. Durham: Duke UP, 2010 Brien, Donna Lee, and Adele Wessell. “Cookbook: A New Scholarly View.” M/C Journal 16.3 (2013). 7 Jan. 2014. ‹http://journal.media-culture.org.au/index.php/mcjournal/article/viewArticle/688›. Crary, Jonathan, and Sanford Kwinter. Incorporations. New York: Zone, 1992. De Boever, Arne, Alex Murray, Jon Roffe, and Ashley Woodward, eds. Gilbert Simondon: Being and Technology. Edinburgh: Edinburgh UP, 2012. De Vries, Marc. “Gilbert Simondon and the Dual Nature of Technical Artifacts.” Techné: Research in Philosophy and Technology 12.1 (2008). Debord, Guy. “Abat-Faim.” Encyclopedie des Nuisances 5 (1985) 2 Jan. 2014. ‹http://www.notbored.org/abat-faim.html›. Deleuze, Gilles and Felix Guattari. A Thousand Plateaus. London: Continuum, 2004. Hayles, N. Katherine. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago: The University of Chicago Press, 1999. Howden, Saffron. “Cultural Cringe: Schoolchildren Can’t See the Yoghurt for the Trees.” The Sydney Morning Herald 5 Mar. 2012. 5 Jan. 2014. ‹http://www.smh.com.au/national/education/cultural-cringe-schoolchildren-cant-see-the-yoghurt-for-the-trees-20120304-1ub55.html›. McGee, Harold. On Food and Cooking: The Science and Lore of the Kitchen. New York: Scribner, 2004. Michaud, Yves. “The Aesthetics of Gilbert Simondon: Anticipation of the Contemporary Aesthetic Experience.” Gilbert Simondon: Being and Technology. Eds. Arne De Boever, Alex Murray, Jon Roffe, and Ashley Woodward. Edinburgh: Edinburgh UP, 2012. 121–32. Rastogi, Nina. “Soft Cheese for a Clean Planet”. Slate 15 Dec. 2009. 25 Jan. 2014. ‹http://www.slate.com/articles/health_and_science/the_green_lantern/2009/12/soft_cheese_for_a_clean_planet.html›. Simondon, Gilbert. Du Mode d’Existence des Objets Techniques. Paris: Aubier, 2001. ---. L’Individuation a La Lumière Des Notions de Forme et d’Information. Grenoble: Millon, 2005. ---. “Les Cahiers du Centre Culturel Canadien” 4, 2ème Colloque Sur La Mécanologie. Paris, 1976. ---. “Technical Mentality.” Parrhesia 7 (2009): 17–27.---. “The Genesis of the Individual.” Incorporations. Eds. Jonathan Crary, and Sanford Kwinter. New York: Zone, 1992. 296–319. Wrangham, Richard. “Reason in the Roasting of Eggs.” Collapse: Philosophical Research and Development Volume VII. Eds. Reza Negarestani, and Robin Mackay. London: Urbanomic, 2011. 331–44. Wylie, Catherine. “Significant Number of Children Believe Cheese Comes from Plants, Reveals New Survey.” The Independent 3 Jun. 2013. 5 Jan. 2014. 
‹http://www.independent.co.uk/news/uk/home-news/significant-number-of-children-believe-cheese-comes-from-plants-reveals-new-survey-8641771.html›.
APA, Harvard, Vancouver, ISO, and other styles
37

Meese, James. "“It Belongs to the Internet”: Animal Images, Attribution Norms and the Politics of Amateur Media Production." M/C Journal 17, no. 2 (February 24, 2014). http://dx.doi.org/10.5204/mcj.782.

Full text
Abstract:
Cute pictures of animals feature as an inoffensive and adorable background to the contemporary online experience with cute content regularly shared on social media platforms. Indeed the demand for cuteness is so strong in the current cultural milieu that some animals become recognisable animal celebrities in the process (Hepola). However, despite the existence of this professionalisation in some sections of the cute economy, amateurs produce the majority of cute content that circulates online. This is largely because one of the central contributors to this steady stream of cute animal pictures is the subforum Aww, hosted on the online community Reddit. Aww is wholly dedicated to pictures of cute things and allows users to directly submit cute content directly to the site. Aww is one of the default subforums that new Reddit users are automatically subscribed to and is immensely popular, featuring over 4.2 million dedicated subscribers as well as untold casual visits. The section is self-described as: “Things that make you go AWW! -- like puppies, and bunnies, and so on...Feel free to post pictures, videos and stories of cute things” ("The cutest things on the internet!"). Users upload cute animal photos that they have taken and wait for the Reddit community to vote on their favourite pictures. The voting mechanism helps users to acknowledge their favourite posts, with the most popular featured on the front page of Aww (for a detailed critique of this process see van der Nagel 2013). The user-generated model of the site means that instead of visitors being confronted with a formally curated selection of cute animal photos, Aww offers a constantly changing mixture of amateur, semi-pro and professional content. Aww - and Reddit more generally - stand as an emblematic example of participatory culture (Jenkins 2006), with users playing an active role in the production and curation of online content. However, given the commercial nature of many user-generated content sites, this amateur media activity is becoming increasingly subject to intellectual property claims and conflicts (see Burgess; Kennedy). Across the internet there are growing tensions between website operators and amateur producers. As Jenny Kennedy (132) notes, while these platforms promote a public rhetoric of “sharing”, these corporate narratives “downplay their economic power” and imply “that they do not control the practices contained within their sites”. Subsequently, the expectations of users regarding how content is managed and organised can differ substantially from the corporate goals of social media companies. This paper contributes to the growing body of literature interested in the politics of amateur media production (see Hunter and Lastowka; Benkler; Burgess; Kennedy) by exploring the emergence of attribution norms and informal enforcement measures in and around the Aww online community. In contrast to professional content creators, amateurs often have fewer resources on hand to protect their copyrighted work and are also challenged by a pervasive online rhetoric that suggests that popular content essentially “belongs to the Internet” (Douglas). A number of communities on Reddit have questioned the company’s handling of amateur content with users suggesting that Reddit actively seeks to de-contextualise original content and not attribute original creators. 
By examining how amateur creators and online communities regulate content online, I interrogate the power relations that exist between social media platforms and users and explore how the corporate rhetoric of participatory culture interacts with the legal framework of copyright law. This article also contributes to existing legal scholarship on communities of practice and norms-based intellectual property systems. This literature has explored how social norms effectively regulate the protection of, among other things, recipes (Fauchart and Von Hippel), fashion design (Raustiala and Sprigman) and stand-up comedy routines (Oliar and Sprigman), in situations where copyright law does not function as an effective regulatory mechanism. Often these norms are in line with copyright law protections, but in other cases they diverge from these legal principles. In this paper I suggest that particular sections of Reddit function in a similar way, with their own set of self-governing norms, and that these norms largely align with the philosophical aims of copyright law. The paper begins by outlining a series of recent debates that have occurred between amateur media creators and Reddit, before exploring how norms are regulated on Reddit subforums Aww and Karma Court. I then offer some brief conclusions on the value of paying attention to how social norms structure forms of “sharing” (see Kennedy) and provide a useful way for amateur media producers to protect their content without going through formal legal processes. Introducing Reddit and the Confused Politics of Amateur Content Reddit is a social news site, a vibrant community and one of the most popular websites online. It stands as the most visible iteration of a long-standing tradition of user-generated and managed news, one that goes back to websites like Slashdot, which operated in the mid to late-90s. Founded in 2005 Reddit was launched after only one funding round of venture capital, receiving $100k in seed funding from Y Combinatory (Miller). Despite some early rivalry between Reddit and competitor site Digg, Reddit had enough potential to be purchased by Condé Nast for an estimated $20 million (Carr). Reddit’s audience numbers have grown exponentially in the last few years, with the site currently receiving over 5 billion page views and 114 million unique visitors per month (“About Reddit”). It has also changed focus significantly in the last few years with the site now “as much about posting interesting or funny pictures as it is about news” (Sepponen). Reddit hosts a number of individual subforums (called subreddits), which focus on a particular topic and function essentially like online bulletin boards. The front-page of Reddit showcases the most popular content from across the whole website, and user-generated content features heavily here. Amateur media cannot spread without the structural support of social media platforms, but this support is qualified in particular ways. Reddit stands as a paradigmatic case. Users on Reddit are “incentivized to submit direct links to images, because viewers can get to them more easily” (Douglas) and the website encourages amateur creators to use a preferred content server – Imgur – to host images. The Imgur service provides a direct public link to an image – even bypassing the Reddit discussion page – and with its free hosting and limited ads it has become a popular service and is used by most Reddit users (Slater-Robins). For the majority of Reddit users this is an unproblematic partnership. 
Imgur is free, effective and fast. However, a vocal minority of Reddit users and amateur creators claim that the partnership between Reddit and Imgur has created the equivalent of an online ghetto (Douglas).As Nick Douglas explains, when using services like Imgur there is no requirement to either provide an external link to a creators website or to attribute the creator, limiting the ability for an amateur creator to gain exposure. It also bypasses existing revenue streams that may have been set up by creators, including ad-supported websites or online stores offering merchandise. As a result creators have little opportunity to benefit either economically or reputationally from this system. This occurs to such an extent that “there are actually warnings against submitting your own [original] work” to particular subforums on Reddit (Douglas). For example, some forum moderators require submissions to either “link directly to a specific image file or to a website with minimal ads” (“Reddit Pics”). It is in this context, that the posting of original content without attribution is not actively policed. There are a number of complaints circulating within the Reddit community about these practices (see “Ok, look people. I know you heart Imgur, but webcomics? Just link to the freaking site”; “The problem with reddit”). Many creators have directly protested against this aspect of Reddit’s structural organisation. Blogger Benjamin Grelle (a.k.a The Frogman) and writer Chris Menning are two notable examples. Grelle’s protest was witty and dramatic. He wrote a blog post featuring a picture of an email he sent to Imgur offering the company a choice: send him a huge novelty check for $10,000 or alternatively, add a proper attribution system that allows artists, photographers and content creators to properly credit their work. Grelle estimates that his work generated around $20,000 in ad revenue for Imgur; however the structure of Reddit and Imgur meant he earned little income from the “viral” success of his content. Grelle claimed he was happy for his work to be shared, but attribution meant that it was more likely a fan would follow the link to his website and provide him with some financial recompense for his work. Unsurprisingly, Grelle didn’t receive a paycheck and so in response has developed a unique way to gain exposure. He has started to insert himself into his work, “[s]o when you see a stolen Frogman piece, you still see Ben Grelle’s face” (Douglas). Chris Menning posted a blog about being banned from Reddit, hoping to bring to light some of the inequalities that persist around Reddit’s current structure. He began by noting that he had received a significant amount of traffic from them in the past. He had responded in kind by looking to create original content for particular subforums, knowing what a particular community would enjoy. However, his habit of providing the link to his own website along with the content he posted saw him get labelled as a spammer and banned by administrators. Menning chose not to fight the ban:It seems that the only way I could avoid [getting banned] is if I were to relinquish any rights to my original content and post it exclusively to Imgur. In effect, reddit punishes the creation of original content, and rewards content theft (Menning). Instead he decided to quit Reddit, claiming that Reddit’s approach would carry long-term consequences as the platform provided little incentive for creators to produce wholly original content. 
It is worth noting that neither Menning nor Grelle turned to legal avenues in order to gain financial restitution. Considering the nature of the practices they were complaining about, compensation in the form of an injunction or damages would have certainly been possible. In Grelle’s case, a user had combined a number of his copyrighted works into one image and posted the image to Imgur without attribution; this infringed Grelle’s copyright in his work as well as his moral right to be attributed as the creator of the work. However, the public comments of both creators suggest that despite the possibility of legal success, their issue was not so much to do with their individual cases but rather the broader structural issues at play within Reddit. While they might gain individually from a successful legal challenge, over the long term Reddit would continue to be a fraught place for amateur and semi-professional content creators. Certain parts of the Reddit community appear to be sympathetic to these issues, and the complaints of dissenting users like Menning and Grelle have received active support from some users and moderators on the site. This has led to changes in the way content is being posted and managed on Aww, and has also driven the emergence of a satirical user-run court entitled Karma Court. In these spaces moderators and members establish community norms, regularly police the correct attribution of works and challenge the de-contextualisation of content overtly encouraged by Reddit, Imgur and other subforums. In the following section I will examine both Aww and Karma Court in order to explore how these norms are established and negotiated by both moderators and users alike.
reddit.com/r/aww: The Online Hub of Cute Animal Pictures
As we have seen, the design of Reddit and Imgur creates a number of problems for amateur creators who wish to protect their intellectual property. To address these shortcomings, the Aww community has created its own informal regulatory systems. Volunteer moderators play a crucial role: they establish informal codes of conduct for the Aww community and enforce various rules about how the site should be used. One of these rules relates to attribution. Users are asked to “post original content whenever possible or attribute original content creators” (“The cutest things on the internet!”). Due to the volunteer nature of the work and the size of the Aww sub-reddit, moderator enforcement is haphazard. Consequently, responsibility falls on the wider user community to self-police. Despite its informal nature, this process manages to facilitate a fairly consistent standard of attribution. In this way it functions as an informal method of intellectual property protection. It is worth noting however that this commitment to original content is not solely due to the moral character of Aww users. A significant motivation is the distribution of karma points amongst Reddit users. Karma, which represents your good standing within the Reddit community, can be earned through user likes and votes – these push the most popular content to the front page of each subforum. Thus karma stands as a numerical representation of a user’s value to Reddit. This ostensibly democratic system has the paradoxical effect of fuelling intellectual property violations on the site. Users often repost other users’ jpegs, animated gifs, and other content, in order to reap the social and cultural capital that comes with posting a popular picture.
In some cases they claim authorship of the content; in other cases they simply re-post content that they feel “belongs to the internet” (Douglas). Some content is so popular or pervasive online (content that is often described as “viral”) that users feel there is little reason or need to attribute it. This helps to explain the persistence of ownership and attribution conflicts on Reddit. In the eyes of some users and moderators the management of these rights and the correct distribution of karma are seen to be vital to the long-term functioning of the site. The karma system offers a numerical representation of each contributor’s value. Re-posting already successful content and claiming it as your own challenges the proper functioning of the karma system and potentially “inhibits the innovative potential of contributions” (Richterich). On Aww the re-posting of other users’ original content is viewed as a taboo act that breaches these norms. The poster is seen to have engaged in deceptive conduct in order to gain karma for their user profile. In addition there is a strong ethic that runs through these comment threads that the original creator deserves attribution. There is a presumption that this attribution is vital in order to increase the possible marketability of the posted content and to recognise and encourage creators within the community. This sort of community-driven regulation contrasts with the aforementioned site design of Reddit and Imgur, which frustrates effective authorship attribution practices. Aww users, in contrast, have shown a willingness to defend what they see as the intellectual property rights of content creators.
A series of recent examples outlines how this process works in practice. User “moonlikeme123” posted a picture of a cat with its hands on the steering wheel of a car. The picture was entitled “we don’t need to ask for directions, Helen”. During the same day, three separate users had identified the picture as a repost, with one noting that the same picture was already on the front page of Aww. “moonlikeme123” received no karma points for the picture. In a second example, the user “nibblur” posted a photo of a kitten “hunting” a toy mouse. Within a day, one enterprising user had identified the original photographer – “torode”, an amateur photographer – and linked to his Reddit profile (“ferocious cat hunting its prey: aww.”). One further example: on 15 July 2013 “Cuzacelmare” posted a picture of two dogs comforting each other – an image which had originally been posted by “lauface”. Again, users were quick to point out the lack of attribution and the attempt to claim someone else’s content as their own (“Comforting her sister during a storm: aww”). It is worth noting that some Reddit users consider attributing content to be entirely without benefit. Some deride karma as “meaningless” and suggest that as a significant amount of content online is regularly reposted elsewhere, there is little harm done in re-posting what is essentially amateur content destined to be lost in the bowels of the internet. For example, the comments that follow Cuzacelmare’s post reflect an ambivalence about reposting, suggesting that users weigh up the benefits of exposure gained by the re-posting against the lack of attribution granted and the increasingly decontextualised nature of the photo itself:
Why does everyone get so bitchy about reposts. Not everyone is on ALL the time or has been on Rreddit since it was created. I mean if you've seen it already ignore it.
It's just picture you aren't forced to click the link. [sic] ("Comforting her sister during a storm: aww") We're arguing semantics, but any content that gets attention can benefit the creator, whether it's reddit or Youtube ("Comforting her sister during a storm: aww") Such discussions are common on comment threads following re-posts by other users. They underline the conflicted status of this ephemeral media and the underlying frictions that are part of these processes. These discussions underline the fact that on Reddit the "sharing" (Kennedy) and "spreading" (Jenkins et al.) of content is not seen as an unquestioned positive but rather as a contestable structural feature that needs to be constantly negotiated and discussed. These informal methods of identification, post-hoc attribution and criticism in comment threads have been the long-standing method used to redress questions of attribution and ownership of content on Reddit. However, in recent times, Reddit users have turned to satirical methods of formal adjudication for particularly egregious cases. A sub-reddit, Karma Court, now functions as an informal tribunal in which punishment is meted out for "the abuse of karma and general contemptible actions heretofore identified as wrongdoing" ("Constitution and F.A.Q of the Karma Court"). Due to its double function as both an adjudicator and a satire of users overly invested in online debates, there is no limit to the possible "crimes" a user may be charged with. The following charges are only presented as guidelines and speak to common negative experiences online: (1) Douchebaggery - When one is being a douche. (2) Defamation - Tarnishing another redditor's [user's] username. (3) Public Indecency - When a user flexes his or her 'e-peen' with the intent to shame other users. (4) OhShit.exe - Intentional reposting that results in reddit Gold. (5) GrandTheft.jpg - Reposting while claiming credit for the post. (6) Obstruction of Justice - Impeding or interfering with an investigation, such as submitting false screenshots, deleting evidence, or providing false evidence to the court. (7) Other - Literally anything else you want. We like creative names for charges. ("Constitution and F.A.Q of the Karma Court") In Karma Court, legal representation can be sourced from a list of attorneys and judges, populated by users who volunteer to help adjudicate the case. They are required to have been a Reddit member for over six months. The only punishment is a public shaming. Interestingly, Karma Court has developed a fair reposting clause that attempts to manage the complex debates around reposting and attribution. Under the non-binding satirical clause, users are able to repost content if it has not featured on the front page of a sub-reddit for seven or more days, if the re-poster acknowledges in the title or description that they are re-posting, or if the original poster has less than 30,000 link karma (which means that the original poster has not substantially contributed to the Reddit community). If a re-poster does not abide by these rules and claims a re-post as their own original content (or "OC"), they can be charged with "grandtheft.jpg" and brought to trial by another Reddit user. As one of the most popular subforums, Aww has generated a number of these cases. The aforementioned re-poster "Cuzacelmare" ("I am bringing /U/ Cuzacelmare to trial …") was "charged" through this process and served with a summons after denying "cute and innocent animals of that subreddit of their much deserved karma".
Similar cases to do with re-posting without attribution on Aww involve "FreshCorio" ("Reddit vs. U/FreshCorio …") and "ninjacollin" ("People of Reddit vs. /U/ ninjacollin"), who were also brought to Karma Court. In each case, prosecutors were adamant that false authorship claims needed to be punished. With these mock trials run by volunteers, it takes time for arguments to be heard and judgment to occur; however, "ninjacollin" expedited the legal process by offering a full confession. As a new user, "ninjacollin" was reprimanded severely for his actions, and the users on Karma Court underlined the consequences of not identifying original content creators when re-posting content. Ownership and Attribution: Amateur Media, Distribution and Law The practices outlined above offer a number of alternate ways to think about amateur media and how it is distributed. An increasingly complex picture of content attribution and circulation emerges once we take into account the structural operation of Reddit, the intellectual property norms of users, and the various formal and informal systems of regulation that are appearing on the site. Such practices require users to negotiate complex questions of ownership between each other and in relation to corporate bodies. These negotiations often lead to informal agreements around a set of norms to regulate the spread of content within a particular community, suggesting that the lack of a formal legal process in these debates does not mean that there is an absence of regulation. As noted throughout this paper, the spread of online content often involves progressive de-contextualisation. Website design features often support this process in the hope of encouraging content to spread in a fashion amenable to corporate goals. Considering this tendency for content to be decontextualised online, the presence of attribution norms on subforums like Aww is significant. Instead of remixing, spreading and re-purposing content indiscriminately, users retain a concept of ownership and attribution that tracks closely to the basic principles of copyright law. Rather than users radically redefining concepts of attribution and ownership, as prefigured in some of the more utopian accounts of participatory media, the dominant norms of the Reddit community extend a discourse of copyright and ownership. As well as providing a greater level of detail to contemporary debates around amateur media and its viral or spreadable nature (Burgess; Jenkins; Jenkins et al.), this analysis offers some lessons for copyright law. The emergence of norms in particular Reddit subforums which govern the use of copyrighted content, and the use of a mock court structure, suggests that online communities have the capacity to engage in forms of redress for amateur creators. These organic forms of copyright management operate adjacent to formal legal structures of copyright law. However, they are more accessible and practical for amateur creators, who do not always have the money to hire lawyers, especially when the market value of their content might be negligible. The informal regulatory systems outlined above may not operate perfectly, but they reveal communities who are willing to engage in foundational conversations around the importance of attribution and ownership. Following the existing literature (Fauchart and Von Hippel; Raustiala and Sprigman; Schultz; Oliar and Sprigman), I suggest that these online social norms provide a useful form of alternative protection for amateur creators.
Acknowledgements Thanks to Ramon Lobato and Emily van der Nagel for comments and productive discussions around these issues. I am also grateful to the two anonymous peer reviewers for their assistance in developing this argument. References "About Reddit." Reddit, 2014. 29 Apr. 2014 ‹http://www.reddit.com/about/›. Benkler, Yochai. The Wealth of Networks: How Social Production Transforms Markets and Freedom. New Haven: Yale University Press, 2006. Burgess, Jean. "YouTube and the Formalisation of Amateur Media." Amateur Media: Social, Cultural and Legal Perspectives. Eds. Dan Hunter, Ramon Lobato, Megan Richardson, and Julian Thomas. Oxford: Routledge, 2012. Carr, Nicholas. "Left Alone by Its Owner, Reddit Soars." The New York Times: Business, 2 Sep. 2012. "Comforting Her Sister during a Storm: aww." reddit: the front page of the internet, 15 July 2013. "Constitution and F.A.Q of the Karma Court." reddit: the front page of the internet, 2014. Douglas, Nick. "Everything on the Internet Gets Stolen: Here's How You Should Feel about That." Slacktory, 8 Sep. 2009. Fauchart, Emmanuelle, and Eric von Hippel. "Norms-Based Intellectual Property Systems: The Case of French Chefs." Organization Science 19.2 (2008): 187-201. "Ferocious Cat Hunting Its Prey: aww." reddit: the front page of the internet, 4 Apr. 2013. 29 Apr. 2014 ‹http://www.reddit.com/r/aww/comments/1bobcp/ferocious_cat_hunting_its_prey/›. Hepola, Sarah. "The Internet Is Made of Kittens." Salon.com, 11 Feb. 2009. 29 Apr. 2014 ‹http://www.salon.com/2009/02/10/cat_internet/›. Hunter, Dan, and Greg Lastowka. "Amateur-to-Amateur." William & Mary Law Review 46 (2004): 951-1030. "I Am Bringing /U/ Cuzacelmare to Trial on the Basis of Being One of the Biggest _______ I've Ever Seen, by Reposting Cute Animal Pictures to /R/Awww. Feels.Jpg." reddit: the front page of the internet, 21 Mar. 2013. Jenkins, Henry. Convergence Culture: Where Old and New Media Collide. New York: New York University Press, 2006. Jenkins, Henry, Sam Ford, and Joshua Green. Spreadable Media: Creating Value and Meaning in a Networked Culture. New York: New York University Press, 2013. Menning, Chris. "So I Got Banned from Reddit." Modern Primate, 23 Aug. 2012. Miller, Kerry. "How Y Combinator Helped Shape Reddit." Bloomberg Businessweek, 26 Sep. 2007. 29 Apr. 2014 ‹http://www.businessweek.com/stories/2007-09-26/how-y-combinator-helped-shape-redditbusinessweek-business-news-stock-market-and-financial-advice›. "Ok, Look People. I Know You Heart Imgur, But Webcomics? Just Link to the Freaking Site." reddit: the front page of the internet, 22 Aug. 2011. Oliar, Dotan, and Christopher Sprigman. "There's No Free Laugh (Anymore): The Emergence of Intellectual Property Norms and the Transformation of Stand-Up Comedy." Virginia Law Review 94.8 (2009): 1787-1867. "People of reddit vs. /U/Ninjacollin for Grandtheft.jpg." reddit: the front page of the internet, 30 Jan. 2013. Raustiala, Kal, and Christopher Sprigman. "The Piracy Paradox: Innovation and Intellectual Property in Fashion Design." Virginia Law Review 92.8 (2006): 1687-1777. "Reddit v. U/FreshCorio. User Uploads Popular Repost Picture of R/AWW and Claims It Is His Sister's Cat. Falsely Claims It Is His Cakeday for Good Measure." reddit: the front page of the internet, 12 Apr. 2013. 29 Apr. 2014 ‹http://www.reddit.com/r/KarmaCourt/comments/1c7vxz/reddit_vs_ufreshcorio_user_uploads_popular_repost/›. "Reddit Pics." reddit: the front page of the internet, 2014. 29 Apr. 2014 ‹http://www.reddit.com/r/pics/›.
Richterich, Annika. "'Karma, Precious Karma!' Karmawhoring on Reddit and the Front Page's Econometrisation." Journal of Peer Production 4 (2014). 29 Apr. 2014 ‹http://peerproduction.net/issues/issue-4-value-and-currency/peer-reviewed-articles/karma-precious-karma/›. Schultz, Mark. "Fear and Norms and Rock & Roll: What Jambands Can Teach Us about Persuading People to Obey Copyright Law." Berkeley Technology Law Journal 21.2 (2006): 651-728. Sepponen, Bemmu. "Why Redditors Gave Imgur a Chance." Social Media Today, 20 July 2011. Slater-Robins, Max. "From Rags to Riches: The Story of Imgur." Neowin, 21 Apr. 2013. "The Cutest Things on the Internet!" reddit: the front page of the internet, n.d. "The Problem with reddit." reddit: the front page of the internet, 23 Aug. 2012. 29 Apr. 2014 ‹http://www.reddit.com/r/technology/comments/ypbe2/the_problem_with_reddit/›. Van der Nagel, Emily. "Faceless Bodies: Negotiating Technological and Cultural Codes on reddit gonewild." Scan: Journal of Media Arts Culture 10.2 (2013). "We Don't Need to Ask for Directions, Helen: aww." reddit: the front page of the internet, 30 June 2013. 29 Apr. 2014 ‹http://www.reddit.com/r/aww/comments/1heut6/we_dont_need_to_ask_for_directions_helen/›.
APA, Harvard, Vancouver, ISO, and other styles
38

Anaya, Ananya. "Minimalist Design in the Age of Archive Fever." M/C Journal 24, no. 4 (August 24, 2021). http://dx.doi.org/10.5204/mcj.2794.

Full text
Abstract:
In a listicle on becomingminimalist.com, Joshua Becker argues that advances in personal computing have contributed to the growing popularity of the minimalist lifestyle. Becker explains that computational media can efficiently absorb physical artefacts like books, photo albums, newspapers, clocks, calendars, and more. In Nawapol Thamrongrattanarit's Happy Old Year (2019, ฮาวทูทิ้ง ทิ้งอย่างไร..ไม่ให้เหลือเธอ), the protagonist Jean also argues that material possessions are wasteful and unnecessary in the era of cloud storage. In the film, she redesigns her old-fashioned and messy childhood home to create a minimalist home office. In decluttering their material possessions through a partial reliance on computational storage, Jean and Becker conveniently dispense with the materiality of informational infrastructures and digital archives. Information technology's ever-growing capacity for storage and circulation also intensifies anxieties about clutter. During our online interactions, we inadvertently leave an amassing trail of metadata behind that allows algorithms to "personalise" our interfaces. Consequently, our interfaces are "cluttered" with recommendations that range from toothpaste to news, movies, clothes, and more, based on a narrow and homophilic comparison of datasets. Notably, this hypertrophic trail of digital clutter threatens to overrepresent and blur personal identities. By mindfully reducing excessive consumption and discarding wasteful possessions, we can make our personal spaces tidy and coherent. On the other hand, there is little that individuals can do to control nonhuman forms of digital accumulation and the datafied archives that meticulously record and store our activities on a micro-temporal scale. In this essay, I explore archive fever as the prosthetic externalisation of memory across physical and digital spaces. Paying close attention to Sianne Ngai's work on vernacular aesthetic categories and Susanna Paasonen's exploration of equivocal affective sensations, I study how advocates of minimalist design seek to recuperate our fraught capacities for affective experience in the digital era. In particular, I examine how Thamrongrattanarit problematises minimalist design, prosthetic memory, and the precarious materiality of digital media in Happy Old Year and Mary Is Happy, Mary Is Happy (2013, แมรี่ อีส แฮปปี้, แมรี่ อีส แฮปปี้). Transmedial Minimalist Networks and Empty Spaces Marie Kondo famously teaches us how to segregate objects that spark joy from material possessions that can be discarded (Kondo). The KonMari method has a strong transmedial presence with Kondo's bestselling books, her blog and online store, a Netflix series, and sticky memes that feature her talking about objects that do not spark joy. It is interesting to note the rising popularity of prescriptive minimalist lifestyle blogs that utilise podcasts, video essays, tutorials, apps, and more to guide the mindful selection of essential material possessions from waste. Personal minimalism is presented as an antidote to late capitalist clutter as self-help gurus appear across our computational devices to teach us how we can curb our carbon footprints and reduce consumerist excess. Yet, as noted by Katherine Hayles, maximal networked media demands a form of hyper-attention that implicates us in multiple information streams at once.
There is a tension between the overwhelming simultaneity in the viewing experience of transmedial minimalist lifestyle networks and the rhetoric of therapeutic selection espoused in their content. In their ethnographic work with minimalists, Eun Jeong Cheon and Norman Makoto Su explore how mindfully constructed empty spaces can serve as a resource for technological design (Cheon and Su). Cheon and Su note how empty spaces possess a symbolic and functional value for their respondents. Decluttered empty spaces offer a sensuous experience for minimalists in coherently representing their identity and serve as a respite from congested and busy cities. Furthermore, empty spaces transform the home into a meaningful site of reflection about people's objects and values as minimalists actively work to reduce their ownership of physical artefacts and the space that material possessions occupy in their homes and minds: the notion of gazing upon empty spaces is not simply about reading or processing information for minimalists. Rather, gazing gives minimalists, a visual indicator of their identity, progress, and values. (Cheon and Su 10) Instead of seeking to fill and augment empty space, Cheon and Su ask what it might mean to design technology that appreciates the absence of information and the limitation of space. The Interestingness of "Total Design and Internet Plenitude" Sianne Ngai argues that in a world where we are constantly hailed as aesthetic subjects, our aesthetic experiences grow increasingly fragile and ineffectual (Ngai 2015). Ngai further contends that late capitalism makes the elite exaggeration of the autonomy of art (at auction houses, mega-exhibitions, biennales, and more) concurrently possible with the hyper-aestheticisation of everyday life. The increase in inconsequential aesthetic experiences mirrors a larger habituation to aesthetic novelty along with the loss of the traditional friction between art and the commodity form: in tandem with these seismic changes to longstanding ideas of art's vocation, weaker aesthetic categories crop up everywhere, testifying in their very proliferation to how, in a world of "total design and Internet plenitude", aesthetic experience while less rarefied also becomes less intense. (Ngai 21) Ngai offers us the cute, interesting, and zany as the key vernacular categories that describe aesthetic experience in "the hyper-commodified, information-saturated, and performance-driven conditions of late-capitalist culture" (1). Aesthetic experience no longer corresponds to a single exceptional feeling but is located in an ambiguous mixture of mundane affects. Susanna Paasonen notes how Ngai's analysis of an everyday aesthetic experience that is complex and equivocal helps explain how seemingly contradictory and irreconcilable affective tensions might in fact be mutually co-dependent (Paasonen). By critiquing the broad and binary generalisations about addiction and networked technologies, Paasonen emphasises the ambivalent and fleeting nature of affective formation in the era of networked media. Significantly, Paasonen explores how ubiquitous networked infrastructures bind us in dynamic sensations of attention and distraction, control and helplessness, and boredom and interest. For Ngai, the interesting is a "low, often hard-to-register flicker of affect accompanying our recognition of minor differences from a norm" (18). There is a discord between knowledge and feeling (and cognition and perception) at the heart of the interesting.
We are drawn to the interesting object after noticing something peculiar about it and yet, we are simultaneously at a loss of knowledge about the exact contents of that peculiarity. The "interesting" is embodied in the seriality of constant circulation and a temporal experience of in-betweenness and anticipation in a paradoxical era of routinised novelty. Ngai notes how in the 1960s, many minimalist conceptual artists were preoccupied with tracking the movement of objects and information by transport and communication technologies. In offering a representation of networks of circulation, "merely interesting" conceptual art disseminates information about itself and makes technologies of distribution central to its process of production. The interesting is a pervasive aesthetic judgment that also explains our affectively complex rapport with information in the context of networked technologies. Paasonen notes that, acclimatised to the repetitive tempos of internet browsing and circular refreshing, we often oscillate between boredom and interest during our usage of networked media. As Ngai explains, the interesting is "a discursive aesthetic about difference in the form of information and the pathways of its movement and exchange" (1). It is then "interesting" to explore how Thamrongrattanarit tracks the circulation of information and the pathways of transmedial exchange across Twitter and cinema in Mary Is Happy, Mary Is Happy. Digital Memory in MIHMIH Mary Is Happy, Mary Is Happy is adapted from a set of 410 consecutive tweets by Twitter user @marymaloney. The film instantiates the phatic, ephemeral flow of a Twitter feed through its deadpan and episodic narrative. The titular protagonist Mary is a fickle-headed high-school senior trying to design a minimalist yearbook for her school to preserve its important memories. Yet, the sudden entry of an autocratic principal forces her to follow the school administration's arbitrary demands and curtail her artistic instincts. Ultimately, Mary produces a thick yearbook that is filled with hagiographic information about the anonymous principal. Thamrongrattanarit offers cheeky commentary about Thailand's authoritarian royalist democracy, where the combination of sudden coups and unquestioning obedience has fostered a peculiar environment of political amnesia. Hagiographic and bureaucratic informational overload is presented as an important means to sustain this combination of veneration and paranoia. @marymaloney's haphazard tweets are superimposed in the film as intertitles, and every scene also draws inspiration from the tweet displayed in an offhand manner. We see Mary do several random and unexplained things – purchasing jellyfish, sleeping through a sudden trip to Paris, robbing a restaurant, and more – in rapid succession. The viewer is overwhelmed because of a synchronised engagement with two different informational currents. We simultaneously read the tweet and watch the scene. The durational tension between knowing and feeling draws our attention to the friction between conceptual interpretation and sensory perception. Like the conceptual artists of the 1960s, Thamrongrattanarit also shows "information in the act of being circulated" (Ngai 157). Throughout the film, we see Mary and her best friend Suri walk along emptied railway tracks that figuratively represent the routes of informational circulation across networked technologies.
With its quirky vignettes and episodic narrative progression, MIHMIH closely mirrors Paasonen's description of microevents and microflow-like movement on social media. The film also features several abrupt and spectacular "microshocks" that interrupt the narrative's linear flow. For example, there is a running gag about Mary's cheap and malfunctioning phone frequently exploding in the film while she is on a call. The repetitive explosions provide sudden jolts of deadpan humour. Notably, Mary also mentions how she uses bills of past purchases to document her daily thoughts rather than a notebook to save paper. The tweets are visually represented through the overwhelming accumulation of tiny bills that Mary often struggles to arrange in a coherent pattern. Thamrongrattanarit draws our attention to the fraught materiality of digital memory and microblogging that does not align with neat and orderly narrativisation. By encouraging a constant expression of thoughts within its distinctive character limit, Twitter promotes minimal writing and maximal fragmentation. Paasonen argues that our networked technologies take on a prosthetic function by externalising memory in their databases. This prosthetic reserve of datafied memory is utilised by the algorithmic unconscious of networked media for data mining. Our capacities for simultaneous multichannel attention and distraction are increasingly subsumed by capital's novel forms of value extraction. Mary's use of bills to document her diary takes on another "interesting" valence here as Thamrongrattanarit connects the circulation of information on social media with monetary transactions and the accumulation of debt. While memory in common parlance is normally associated with acts of remembrance and commemoration, digital memory refers to an address for storage and retrieval. Wendy Chun argues that software conflates storage with memory as the computer stores files in its memory (Chun). Furthermore, digital memory only endures through ephemeral processes of regeneration and degeneration. Even as our computational devices move towards planned obsolescence, digital memory paradoxically promises perpetual storage. The images of dusty and obsolete computers in MIHMIH recall the materiality of the devices whose databases formerly stored many prosthetic memories. For Wolfgang Ernst, digital archives displace cultural memory from a literary-based narrativised framework to a calculative and mathematical one as digital media environments increasingly control how a culture remembers. As Jussi Parikka notes, "we are miniarchivists ourselves in this information society, which could be more aptly called an information management society" (2). While traditional archives required the prudent selection and curation of important objects that would be preserved for future use on a macro temporal scale, the Internet is an agglomerative storage and retrieval database that records information on a micro temporal scale. The proliferation of agglomerative mini archives also creates anxieties about clutter, where the miniarchivists of the "information-management society" must contend with the effects of our ever-expanding digital trail. It is useful to note how processes of selection and curation that remain central to minimalist decluttering can be connected with the design of a personal archive.
Ernst further argues that digital memory cannot be visualised as a place where objects lie in static rest but is better understood as a collection of mini archives in motion that become perceptible because of dynamic signal-based processing. In MIHMIH, memory inscription is associated with the "minimalist" yearbook that Mary was trying to create along with the bills where she documents her tweets/thoughts. At one point, Mary tries to carefully arrange her overflowing bills across her wall in a pattern to make sense of her growing emotional crisis. Yet, she is overwhelmed by the impossibility of this task. Networked media's storage of prosthetic memory also makes self-representation ambiguous and messy. As a result, Mary's story does not align with cathartic and linear narrativisation but instead resembles a messy agglomerative database. Happy Old Year: Decluttering to Mend Prosthetic Memories Kylie Cardell argues that the KonMari method connects tidiness to the self-conscious design of a curated personal archive. Marie Kondo associates decluttering with self-representation. "As Kondo is acutely aware, making memories is not simply about recuperating and preserving symbolic objects of the past, but is a future-oriented process that positions subjects in a peculiar way" (Cardell 2). This narrative formation of personal identity involves carefully storing a limited number of physical artefacts that will spark joy for the future self. Yet, we must segregate these affectively charged objects from clutter. Kondo encourages us to make intuitive judgments of conviction by overcoming ambivalent feelings and attachments about the past that are distributed over a wide set of material possessions. Notably, this form of decluttering involves archiving the prosthetic memories that dwell in our (analogue) material possessions. In Happy Old Year, Jean struggles to curate her personal archive as she becomes painfully aware of the memories that reside in her belongings. Interestingly, the film's Thai title loosely translates as "How to Dump". Jean has an urgent deadline to declutter her home so that it can be redesigned into a minimalist home office. Nevertheless, she gradually realises that she cannot coldly "dump" all her things and decides to return some of the borrowed objects to her estranged friends. This form of decluttering helps assuage her guilt about letting go of the past and allows her to (awkwardly and) elegantly honour her prosthetic memories. HOY reverses the clichéd before-after progression of events since we begin with the minimalist home and go back in flashbacks to observe its inundated and messy state. HOY's after-before narrative, along with its peculiar title that substitutes 'new' with 'old', alludes to the clashing temporalities that Jean is caught up within. She is conflicted between deceptive nostalgic remembrance and her desire to start over with a minimalist blank slate that is purged of her past regrets. In many remarkable moments, HOY instantiates movement on computational screens to mirror digital media's dizzying speeds of circulation and storage. Significantly, the film begins with the machinic perspective of a phone screen capturing a set of minimalist designs from a book. Jean refuses to purchase (and store) the whole book since she only requires a few images that can be preserved in her phone's memory. As noted in the introduction, minimalist organisation can effectively draw on computational storage to declutter physical spaces.
In another subplot, Jean is forced to retrieve a photo that she took years ago for a friend. She grudgingly searches through a box of CDs (a cumbersome storage device in the era of clouds) but ultimately finds the image on her ex-boyfriend Aim's hard disk. As she browses through a folder titled 2013, her hesitant clicks display a montage of happy and intimate moments that the couple shared together. Aim notes how the computer often behaves like a time machine. Unlike Aim, Jean did not carefully organise and store her prosthetic memories and was even willing to discard the box of CDs that were emblematic of defunct and wasteful accumulation. Speaking about how memory is externalised in digital storage, Thamrongrattanarit notes: for me, in the digital era, we just changed the medium, but human relationships stay the same. … It's just more complicated because we can communicate from a distance, we can store a ton of memories, which couldn't have ever happened in the past. (emphasis added) When Jean "dumped" Aim to move to Sweden, she blocked him across channels of networked communication media to avoid any sense of ambient intimacy between them. As we digitise our prosthetic memories and maintain a sense of "connected presence" across social media, micro temporal databases make it nearly impossible to erase and forget our past actions. Minimalist organisation might help us craft a coherent and stable representation of personal identity through meticulous decluttering. Yet, late-capitalist clutter takes on a different character in our digital archives, where the algorithmic unconscious of networked media capitalises on prosthetic storage to make personal identity ambiguous and untidy. It is interesting to note that Jean initially gets in touch with Aim to return his old camera and apologise for their sudden breakup. The camera can record events to "freeze" them in time and space. Later in the film, Jean discovers a happy family photo that makes her reconsider whether she has been too harsh on her father because of how he "dumped" her family. Yet, Jean bitterly finds that her re-evaluation of her material possessions and their dated prosthetic memories is deceptive. In overidentifying with the frozen images and her affectively charged material possessions, she is misled by the overwhelming plenitude of nostalgic remembrance. Ultimately, Jean must "dump" all her things instead of trying to tidy up the jumbled temporal frictions. In the final sequence of HOY, Jean lies to her friend Pink about her relationship with Aim. She states that they are on good terms. Jean then unfriends Aim on Facebook, yet again rupturing any possibility of phatic and ambient intimacy between them. As they sit before her newly emptied house, Pink notes how Jean can do a lot with this expanded space. In a tight close-up, Jean gazes at her empty space with an ambiguous yet pained expression. Her plan to cathartically purge her regrets and fraught memories by recuperating her prosthetic memories has failed. With the remnants of her past self expunged as clutter, Jean is left with a set of empty spaces that will eventually resemble the blank slate that we see at the beginning of the film. The new year and blank slate signify a fresh beginning for her future self. However, this reverse transition from a minimalist blank slate to her chaotically inundated childhood home frames a set of deeply equivocal affective sensations.
Nonetheless, Jean must mislead Pink to sustain the notion of tidy and narrativised coherence that equivocally masks her fragmented sense of an indefinable loss. Conclusion MIHMIH and HOY explore the unresolvable and conflicting affective tensions that arise in an ecosystem of all-pervasive networked media. Paasonen argues that our ability to control networked technologies concurrently fosters our mundane and prosthetic dependency on them. Both Jean and Mary seek refuge in the simplicity of minimalist design to wrestle control over their overstimulating spaces and to tidy up their personal narratives. It is important to examine contemporary minimalist networks in conjunction with affective formation and aesthetic experience in the era of "total design and internet plenitude". In an information-management society where prosthetic memories haunt our physical and digital spaces, minimalist decluttering becomes a form of personal archiving that simultaneously empowers unambiguous aesthetic feeling and linear and stable autobiographical representation. The neatness of minimalist decluttering conjugates with an ideal self that can resolve ambivalent affective attachments about the past and have a coherent vision for the future. Yet, we cannot sort the clutter that resides in digital memory's micro temporal archives and drastically complicates our personal narratives. Significantly, the digital self is not compatible with neat and orderly narrativisation but instead resembles an unstable and agglomerative database. References Cardell, Kylie. "Modern Memory-Making: Marie Kondo, Online Journaling, and the Excavation, Curation, and Control of Personal Digital Data." a/b: Auto/Biography Studies 32.3 (2017): 499–517. DOI: 10.1080/08989575.2017.1337993. Cheon, Eun Jeong, and Norman Makoto Su. "The Value of Empty Space for Design." Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 2018. DOI: 10.1145/3173574.3173623. Chun, Wendy Hui Kyong. Programmed Visions: Software and Memory. MIT P, 2013. Ernst, Wolfgang, and Jussi Parikka. Digital Memory and the Archive. U of Minnesota P, 2013. Happy Old Year. Dir. Nawapol Thamrongrattanarit. Happy Ending Film, 2019. Hayles, N. Katherine. "How We Read: Close, Hyper, Machine." ADE Bulletin (2010): 62-79. DOI: 10.1632/ade.150.62. Kondo, Marie. The Life-Changing Magic of Tidying Up. Ten Speed Press, 2010. Mankowski, Lukasz. "Interview with Nawapol Thamrongrattanarit: Happy Old Year Is Me in 100% for the First Time." Asian Movie Pulse, 9 Feb. 2020. <http://asianmoviepulse.com/2020/02/interview-with-nawapol-thamrongrattanarit-2/>. Mary Is Happy, Mary Is Happy. Dir. Nawapol Thamrongrattanarit. Pop Pictures, 2013. Ngai, Sianne. Our Aesthetic Categories: Zany, Cute, Interesting. Harvard UP, 2015. Paasonen, Susanna. Dependent, Distracted, Bored: Affective Formations in Networked Media. MIT P, 2021. Stephens, Paul. The Poetics of Information Overload: From Gertrude Stein to Conceptual Writing. U of Minnesota P, 2015.
APA, Harvard, Vancouver, ISO, and other styles
39

Jones, Steve. "Seeing Sound, Hearing Image." M/C Journal 2, no. 4 (June 1, 1999). http://dx.doi.org/10.5204/mcj.1763.

Full text
Abstract:
“As the old technologies become automatic and invisible, we find ourselves more concerned with fighting or embracing what's new”—Dennis Baron, From Pencils to Pixels: The Stages of Literacy Technologies Popular music is firmly rooted within realist practice, or what has been called the "culture of authenticity" associated with modernism. As Lawrence Grossberg notes, the acceleration of the rate of change in modern life caused, in post-war youth culture, an identity crisis or "lived contradiction" that gave rock (particularly) and popular music (generally) a peculiar position in regard to notions of authenticity. Grossberg places rock's authenticity within the "difference" it maintains from other cultural forms, and notes that its difference "can be justified aesthetically or ideologically, or in terms of the social position of the audiences, or by the economics of its production, or through the measure of its popularity or the statement of its politics" (205-6). Popular music scholars have not adequately addressed issues of authenticity and individuality. Two of the most important questions to be asked are: How is authenticity communicated in popular music? What is the site of the interpretation of authenticity? It is important to ask about sound, technology, about the attempt to understand the ideal and the image, the natural and artificial. It is these that make clear the strongest connections between popular music and contemporary culture. Popular music is a particularly appropriate site for the study of authenticity as a cultural category, for several reasons. For one thing, other media do not follow us, as aural media do, into malls, elevators, cars, planes. Nor do they wait for us, as a tape player paused and ready to play. What is important is not that music is "everywhere" but, to borrow from Vivian Sobchack, that it creates a "here" that can be transported anywhere. In fact, we are able to walk around enveloped by a personal aural environment, thanks to a Sony Walkman.1 Also, it is more difficult to shut out the aural than the visual. Closing one's ears does not entirely shut out sound. There is, additionally, the sense that sound and music are interpreted from within, that is, that they resonate through and within the body, and as such engage with one's self in a fashion that coincides with Charles Taylor's claim that the "ideal of authenticity" is an inner-directed one. It must be noted that authenticity is not, however, communicated only via music, but via text and image. Grossberg noted the "primacy of sound" in rock music, and the important link between music, visual image, and authenticity: Visual style as conceived in rock culture is usually the stage for an outrageous and self-conscious inauthenticity... . It was here -- in its visual presentation -- that rock often most explicitly manifested both an ironic resistance to the dominant culture and its sympathies with the business of entertainment ... . The demand for live performance has always expressed the desire for the visual mark (and proof) of authenticity. (208) But that relationship can also be reversed: Music and sound serve in some instances to provide the aural mark and proof of authenticity. Consider, for instance, the "tear" in the voice that Jensen identifies in Hank Williams's singing, and in that of Patsy Cline.
For the latter, voicing, in this sense, was particularly important, as it meant more than a singing style, it also involved matters of self-identity, as Jensen appropriately associates with the move of country music from "hometown" to "uptown" (101). Cline's move toward a more "uptown" style involved her visual image, too. At a significant turning point in her career, Faron Young noted, Cline "left that country girl look in those western outfits behind and opted for a slicker appearance in dresses and high fashion gowns" (Jensen 101). Popular music has forged a link with visual media, and in some sense music itself has become more visual (though not necessarily less aural) the more it has engaged with industrial processes in the entertainment industry. For example, engagement with music videos and film soundtracks has made music a part of the larger convergence of mass media forms. Alongside that convergence, the use of music in visual media has come to serve as adjunct to visual symbolisation. One only need observe the increasingly commercial uses to which music is put (as in advertising, film soundtracks and music videos) to note ways in which music serves image. In the literature from a variety of disciplines, including communication, art and music, it has been argued that music videos are the visualisation of music. But in many respects the opposite is true. Music videos are the auralisation of the visual. Music serves many of the same purposes as sound does generally in visual media. One can find a strong argument for the use of sound as supplement to visual media in Silverman's and Altman's work. For Silverman, sound in cinema has largely been overlooked (pun intended) in favor of the visual image, but sound is a more effective (and perhaps necessary) element for willful suspension of disbelief. One may see this as well in the development of Dolby Surround Sound, and in increased emphasis on sound engineering among video and computer game makers, as well as the development of sub-woofers and high-fidelity speakers as computer peripherals. Another way that sound has become more closely associated with the visual is through the ongoing evolution of marketing demands within the popular music industry that increasingly rely on visual media and force image to the front. Internet technologies, particularly the WorldWideWeb (WWW), are also evidence of a merging of the visual and aural (see Hayward). The development of low-cost desktop video equipment and WWW publishing, CD-i, CD-ROM, DVD, and other technologies, has meant that visual images continue to form part of the industrial routine of the music business. The decrease in cost of many of these technologies has also led to the adoption of such routines among individual musicians, small/independent labels, and producers seeking to mimic the resources of major labels (a practice that has become considerably easier via the Internet, as it is difficult to determine capital resources solely from a WWW site). Yet there is another facet to the evolution of the link between the aural and visual. Sound has become more visual by way of its representation during its production (a representation, and process, that has largely been ignored in popular music studies). That representation has to do with the digitisation of sound, and the subsequent transformation sound and music can undergo after being digitised and portrayed on a computer screen. 
Once digitised, sound can be made visual in any number of ways, through traditional methods like music notation, through representation as audio waveform, by way of MIDI notation, bit streams, or through representation as shapes and colors (as in recent software applications particularly for children, like Making Music by Morton Subotnick). The impetus for these representations comes from the desire for increased control over sound (see Jones, Rock Formation) and such control seems most easily accomplished by way of computers and their concomitant visual technologies (monitors, printers). To make computers useful tools for sound recording it is necessary to employ some form of visual representation for the aural, and the flexibility of modern computers allows for new modes of predominately visual representation. Each of these connections between the aural and visual is in turn related to technology, for as audio technology develops within the entertainment industry it makes sense for synergistic development to occur with visual media technologies. Yet popular music scholars routinely analyse aural and visual media in isolation from one another. The challenge for popular music studies and music philosophy posed by visual media technologies, that they must attend to spatiality and context (both visual and aural), has not been taken up. Until such time as it is, it will be difficult, if not impossible, to engage issues of authenticity, because they will remain rootless instead of situated within the experience of music as fully sensual (in some cases even synaesthetic). Most of the traditional judgments of authenticity among music critics and many popular music scholars involve space and time, the former in terms of the movement of music across cultures and the latter in terms of history. None rely on notions of the "situatedness" of the listener or musicmaker in a particular aural, visual and historical space. Part of the reason for the lack of such an understanding arises from the very means by which popular music is created. We have become accustomed to understanding music as manipulation of sound, and so far as most modern music production is concerned such manipulation occurs as much visually as aurally, by cutting, pasting and otherwise altering audio waveforms on a computer screen. Musicians no more record music than they record fingering; they engage in sound recording. And recording engineers and producers rely less and less on sound and more on sight to determine whether a recording conforms to the demands of digital reproduction.2 Sound, particularly when joined with the visual, becomes a means to build and manipulate the environment, virtual and non-virtual (see Jones, "Sound"). Sound & Music As we construct space through sound, both in terms of audio production (e.g., the use of reverberation devices in recording studios) and in terms of everyday life (e.g., perception of aural stimuli, whether by ear or vibration in the body, from points surrounding us), we centre it within experience. Sound combines the psychological and physiological. Audio engineer George Massenburg noted that in film theaters: You couldn't utilise the full 360-degree sound space for music because there was an "exit sign" phenomena [sic]. If you had a lot of audio going on in the back, people would have a natural inclination to turn around and stare at the back of the room. 
(Massenburg 79-80) However, he went on to say, beyond observations of such reactions to multichannel sound technology, "we don't know very much". Research in psychoacoustics being used to develop virtual audio systems relies on such reactions and on a notion of human hardwiring for stimulus response (see Jones, "Sense"). But a major stumbling block toward the development of those systems is that none are able to account for individual listeners' perceptions. It is therefore important to consider the individual along with the social dimension in discussions of sound and music. For instance, the term "sound" is deployed in popular music to signify several things, all of which have to do with music or musical performance, but none of which is music. So, for instance, musical groups or performers can have a "sound", but it is distinguishable from what notes they play. Entire music scenes can have "sounds", but the music within such scenes is clearly distinct and differentiated. For the study of popular music this is a significant but often overlooked dimension. As Grossberg argues, "the authenticity of rock was measured by its sound" (207). Visually, he says, popular music is suspect and often inauthentic (sometimes purposefully so), and it is grounded in the aural. Similarly in country music Jensen notes that the "Nashville Sound" continually evoked conflicting definitions among fans and musicians, but that: The music itself was the arena in and through which claims about the Nashville Sound's authenticity were played out. A certain sound (steel guitar, with fiddle) was deemed "hard" or "pure" country, in spite of its own commercial history. (84) One should, therefore, attend to the interpretive acts associated with sound and its meaning. But why has not popular music studies engaged in systematic analysis of sound at the level of the individual as well as the social? As John Shepherd put it, "little cultural theoretical work in music is concerned with music's sounds" ("Value" 174). Why should this be a cause for concern? First, because Shepherd claims that sound is not "meaningful" in the traditional sense. Second, because it leads us to re-examine the question long set to the side in popular music studies: What is music? The structural homology, the connection between meaning and social formation, is a foundation upon which the concept of authenticity in popular music stands. Yet the ability to label a particular piece of music "good" shifts from moment to moment, and place to place. Frith understates the problem when he writes that "it is difficult ... to say how musical texts mean or represent something, and it is difficult to isolate structures of musical creation or control" (56). Shepherd attempts to overcome this difficulty by emphasising that: Music is a social medium in sound. What [this] means ... is that the sounds of music provide constantly moving and complex matrices of sounds in which individuals may invest their own meanings ... [however] while the matrices of sounds which seemingly constitute an individual "piece" of music can accommodate a range of meanings, and thereby allow for negotiability of meaning, they cannot accommodate all possible meanings. (Shepherd, "Art") It must be acknowledged that authenticity is constructed, and that in itself is an argument against the most common way to think of authenticity. 
If authenticity implies something about the "pure" state of an object or symbol then surely such a state is connected to some "objective" rendering, one not possible according to Shepherd's claims. In some sense, then, authenticity is autonomous, its materialisation springs not from any necessary connection to sound, image, text, but from individual acts of interpretation, typically within what in literary criticism has come to be known as "interpretive communities". It is not hard to illustrate the point by generalising and observing that rock's notion of authenticity is captured in terms of songwriting, but that songwriters are typically identified with places (e.g. Tin Pan Alley, the Brill Building, Liverpool, etc.). In this way there is an obvious connection between authenticity and authorship (see Jones, "Popular Music Studies") and geography (as well in terms of musical "scenes", e.g. the "Philly Sound", the "Sun Sound", etc.). The important thing to note is the resultant connection between the symbolic and the physical worlds rooted (pun intended) in geography. As Redhead & Street put it: The idea of "roots" refers to a number of aspects of the musical process. There is the audience in which the musician's career is rooted ... . Another notion of roots refers to music. Here the idea is that the sounds and the style of the music should continue to resemble the source from which it sprang ... . The issue ... can be detected in the argument of those who raise doubts about the use of musical high-technology by African artists. A final version of roots applies to the artist's sociological origins. (180) It is important, consequently, to note that new technologies, particularly ones associated with the distribution of music, are of increasing importance in regulating the tension between alienation and progress mentioned earlier, as they are technologies not simply of musical production and consumption, but of geography. That the tension they mediate is most readily apparent in legal skirmishes during an unsettled era for copyright law (see Brown) should not distract scholars from understanding their cultural significance. These technologies are, on the one hand, "liberating" (see Hayward, Young, and Marsh) insofar as they permit greater geographical "reach" and thus greater marketing opportunities (see Fromartz), but on the other hand they permit less commercial control, insofar as they permit digitised music to freely circulate without restriction or compensation, to the chagrin of copyright enthusiasts. They also create opportunities for musical collaboration (see Hayward) between performers in different zones of time and space, on a scale unmatched since the development of multitracking enabled the layering of sound. Most importantly, these technologies open spaces for the construction of authenticity that have hitherto been unavailable, particularly across distances that have largely separated cultures and fan communities (see Paul). The technologies of Internetworking provide yet another way to make connections between authenticity, music and sound. Community and locality (as Redhead & Street, as well as others like Sara Cohen and Ruth Finnegan, note) are the elements used by audience and artist alike to understand the authenticity of a performer or performance. The lived experience of an artist, in a particular nexus of time and space, is to be somehow communicated via music and interpreted "properly" by an audience. 
But technologies of Internetworking permit the construction of alternative spaces, times and identities. In no small way that has also been the situation with the mediation of music via most recordings. They are constructed with a sense of space, consumed within particular spaces, at particular times, in individual, most often private, settings. What the network technologies have wrought is a networked audience for music that is linked globally but rooted in the local. To put it another way, the range of possibilities when it comes to interpretive communities has widened, but the experience of music has not significantly shifted, that is, the listener experiences music individually, and locally. Musical activity, whether it is defined as cultural or commercial practice, is neither flat nor autonomous. It is marked by ever-changing tastes (hence not flat) but within an interpretive structure (via "interpretive communities"). Musical activity must be understood within the nexus of the complex relations between technical, commercial and cultural processes. As Jensen put it in her analysis of Patsy Cline's career: Those who write about culture production can treat it as a mechanical process, a strategic construction of material within technical or institutional systems, logical, rational, and calculated. But Patsy Cline's recording career shows, among other things, how this commodity production view must be linked to an understanding of culture as meaning something -- as defining, connecting, expressing, mattering to those who participate with it. (101) To achieve that type of understanding will require that popular music scholars understand authenticity and music in a symbolic realm. Rather than conceiving of authenticity as a limited resource (that is, there is only so much that is "pure" that can go around), it is important to foreground its symbolic and ever-changing character. Put another way, authenticity is not used by musician or audience simply to label something as such, but rather to mean something about music that matters at that moment. Authenticity therefore does not somehow "slip away", nor does a "pure" authentic exist. Authenticity in this regard is, as Baudrillard explains concerning mechanical reproduction, "conceived according to (its) very reproducibility ... there are models from which all forms proceed according to modulated differences" (56). Popular music scholars must carefully assess the affective dimensions of fans, musicians, and also record company executives, recording producers, and so on, to be sensitive to the deeply rooted construction of authenticity and authentic experience throughout musical processes. Only then will there emerge an understanding of the structures of feeling that are central to the experience of music. Footnotes For analyses of the Walkman's role in social settings and popular music consumption see du Gay; Hosokawa; and Chen. It has been thus since the advent of disc recording, when engineers would watch a record's grooves through a microscope lens as it was being cut to ensure grooves would not cross over one into another. References Altman, Rick. "Television/Sound." Studies in Entertainment. Ed. Tania Modleski. Bloomington: Indiana UP, 1986. 39-54. Baudrillard, Jean. Symbolic Exchange and Death. London: Sage, 1993. Brown, Ronald. Intellectual Property and the National Information Infrastructure: The Report of the Working Group on Intellectual Property Rights. Washington, DC: U.S. Department of Commerce, 1995. Chen, Shing-Ling.
"Electronic Narcissism: College Students' Experiences of Walkman Listening." Annual meeting of the International Communication Association. Washington, D.C. 1993. Du Gay, Paul, et al. Doing Cultural Studies. London: Sage, 1997. Frith, Simon. Sound Effects. New York: Pantheon, 1981. Fromartz, Steven. "Starts-ups Sell Garage Bands, Bowie on Web." Reuters newswire, 4 Dec. 1996. Grossberg, Lawrence. We Gotta Get Out of This Place. London: Routledge, 1992. Hayward, Philip. "Enterprise on the New Frontier." Convergence 1.2 (Winter 1995): 29-44. Hosokawa, Shuhei. "The Walkman Effect." Popular Music 4 (1984). Jensen, Joli. The Nashville Sound: Authenticity, Commercialisation and Country Music. Nashville, Vanderbilt UP, 1998. Jones, Steve. Rock Formation: Music, Technology and Mass Communication. Newbury Park, CA: Sage, 1992. ---. "Popular Music Studies and Critical Legal Studies" Stanford Humanities Review 3.2 (Fall 1993): 77-90. ---. "A Sense of Space: Virtual Reality, Authenticity and the Aural." Critical Studies in Mass Communication 10.3 (Sep. 1993), 238-52. ---. "Sound, Space & Digitisation." Media Information Australia 67 (Feb. 1993): 83-91. Marrsh, Brian. "Musicians Adopt Technology to Market Their Skills." Wall Street Journal 14 Oct. 1994: C2. Massenburg, George. "Recording the Future." EQ (Apr. 1997): 79-80. Paul, Frank. "R&B: Soul Music Fans Make Cyberspace Their Meeting Place." Reuters newswire, 11 July 1996. Redhead, Steve, and John Street. "Have I the Right? Legitimacy, Authenticity and Community in Folk's Politics." Popular Music 8.2 (1989). Shepherd, John. "Art, Culture and Interdisciplinarity." Davidson Dunston Research Lecture. Carleton University, Canada. 3 May 1992. ---. "Value and Power in Music." The Sound of Music: Meaning and Power in Culture. Eds. John Shepherd and Peter Wicke. Cambridge: Polity, 1993. Silverman, Kaja. The Acoustic Mirror. Bloomington: Indiana UP, 1988. Sobchack, Vivian. Screening Space. New York: Ungar, 1982. Young, Charles. "Aussie Artists Use Internet and Bootleg CDs to Protect Rights." Pro Sound News July 1995. Citation reference for this article MLA style: Steve Jones. "Seeing Sound, Hearing Image: 'Remixing' Authenticity in Popular Music Studies." M/C: A Journal of Media and Culture 2.4 (1999). [your date of access] <http://www.uq.edu.au/mc/9906/remix.php>. Chicago style: Steve Jones, "Seeing Sound, Hearing Image: 'Remixing' Authenticity in Popular Music Studies," M/C: A Journal of Media and Culture 2, no. 4 (1999), <http://www.uq.edu.au/mc/9906/remix.php> ([your date of access]). APA style: Steve Jones. (1999) Seeing Sound, Hearing Image: "Remixing" Authenticity in Popular Music Studies. M/C: A Journal of Media and Culture 2(4). <http://www.uq.edu.au/mc/9906/remix.php> ([your date of access]).
APA, Harvard, Vancouver, ISO, and other styles
40

Newman, James. "Save the Videogame! The National Videogame Archive: Preservation, Supersession and Obsolescence." M/C Journal 12, no. 3 (July 15, 2009). http://dx.doi.org/10.5204/mcj.167.

Full text
Abstract:
Introduction In October 2008, the UK’s National Videogame Archive became a reality and, after years of negotiation, preparation and planning, this partnership between Nottingham Trent University’s Centre for Contemporary Play research group and The National Media Museum accepted its first public donations to the collection. These first donations came from Sony Computer Entertainment Europe’s London Studios, who presented the original, pre-production PlayStation 2 EyeToy camera (complete with its hand-written #1 sticker), and Harmonix, who crossed the Atlantic to deliver prototypes of the Rock Band drum kit and guitar controllers along with a slew of games. Since then, we have been inundated with donations, enquiries and volunteers offering their services, and it is clear that we have exciting and challenging times ahead of us at the NVA as we seek to continue our collecting programme and preserve, conserve, display and interpret these vital parts of popular culture. This essay, however, is not so much a document of these possible futures for our research or the challenges we face in moving forward as it is a discussion of some of the issues that make game preservation a vital and timely undertaking. In briefly telling the story of the genesis of the NVA, I hope to draw attention to some of the peculiarities (in both senses) of the situation in which videogames currently exist. While considerable attention has been paid to the preservation and curation of new media arts (e.g. Cook et al.), comparatively little work has been undertaken in relation to games. Surprisingly, the games industry has been similarly neglectful of the histories of gameplay and gamemaking. Throughout our research, it has become abundantly clear that even those individuals and companies most intimately associated with the development of this form do not hold their corporate and personal histories in the high esteem we expected (see also Lowood et al.). And so, despite the well-worn bluster of an industry that proclaims itself to be as culturally significant as Hollywood, it is surprisingly difficult to find a definitive copy of the boxart of the final release of a Triple-A title, let alone any of the pre-production materials. Through our journeys in the past couple of years, we have encountered shoeboxes under CEOs’ desks and proud parents’ collections of tapes and press cuttings. These are the closest things to a formalised archive that we currently have for many of the biggest British game development and publishing companies. Not only is this problematic in and of itself, as we run the risk of losing titles and documents forever as well as the stories locked up in the memories of key individuals who grow ever older, but it is also symptomatic of an industry that, despite its public proclamations, neither places a high value on its products as popular culture nor truly recognises their impact on that culture. While a few valorised, still-ongoing franchises like the Super Mario and Legend of Zelda series are repackaged and (digitally) re-released so as to provide continuity with current releases, a huge number of games simply disappear from view once their short period of retail limelight passes.
Indeed, my argument in this essay rests to some extent on the admittedly polemical, and maybe even antagonistic, assertion that the past business and marketing practices of the videogames industry are partly to blame for the comparatively underdeveloped state of game preservation and the seemingly low cultural value placed on old games within the mainstream marketplace. Small wonder, then, that archives and formalised collections are not widespread. However antagonistic this point may seem, this essay does not set out merely to criticise the games industry. Indeed, it is important to recognise that the success and viability of projects such as the NVA is derived partly from close collaboration with industry partners. As such, it is my hope that in addition to contributing to the conversation about the importance and need for formalised strategies of game preservation, this essay goes some way to demonstrating the necessity of universities, museums, developers, publishers, advertisers and retailers tackling these issues in partnership. The Best Game Is the Next Game As will be clear from these opening paragraphs, this essay is primarily concerned with ‘old’ games. Perhaps surprisingly, however, we shall see that ‘old’ games are frequently not that old at all as even the shiniest, and newest of interactive experiences soon slip from view under the pressure of a relentless industrial and institutional push towards the forthcoming release and the ‘next generation’. More surprising still is that ‘old’ games are often difficult to come by as they occupy, at best, a marginalised position in the contemporary marketplace, assuming they are even visible at all. This is an odd situation. Videogames are, as any introductory primer on game studies will surely reveal, big business (see Kerr, for instance, as well as trade bodies such as ELSPA and The ESA for up-to-date sales figures). Given the videogame industry seems dedicated to growing its business and broadening its audiences (see Radd on Sony’s ‘Game 3.0’ strategy, for instance), it seems strange, from a commercial perspective if no other, that publishers’ and developers’ back catalogues are not being mercilessly plundered to wring the last pennies of profit from their IPs. Despite being cherished by players and fans, some of whom are actively engaged in their own private collecting and curation regimes (sometimes to apparently obsessive excess as Jones, among others, has noted), videogames have, nonetheless, been undervalued as part of our national popular cultural heritage by institutions of memory such as museums and archives which, I would suggest, have largely ignored and sometimes misunderstood or misrepresented them. Most of all, however, I wish to draw attention to the harm caused by the videogames industry itself. Consumers’ attentions are focused on ‘products’, on audiovisual (but mainly visual) technicalities and high-definition video specs rather than on the experiences of play and performance, or on games as artworks or artefact. Most damagingly, however, by constructing and contributing to an advertising, marketing and popular critical discourse that trades almost exclusively in the language of instant obsolescence, videogames have been robbed of their historical value and old platforms and titles are reduced to redundant, legacy systems and easily-marginalised ‘retro’ curiosities. 
The vision of inevitable technological progress that the videogames industry trades in reminds us of Paul Duguid’s concept of ‘supersession’ (see also Giddings and Kennedy, on the ‘technological imaginary’). Duguid identifies supersession as one of the key tropes in discussions of new media. The reductive idea that each new form subsumes and replaces its predecessor means that videogames are, to some extent, bound up in the same set of tensions that undermine the longevity of all new media. Chun rightly notes that, in contrast with more open terms like multimedia, ‘new media’ has always been somewhat problematic. Unaccommodating, ‘it portrayed other media as old or dead; it converged rather than multiplied; it did not efface itself in favor of a happy if redundant plurality’ (1). The very newness of new media and of videogames as the apotheosis of the interactivity and multimodality they promise (Newman, "In Search"), their gleam and shine, is quickly tarnished as they are replaced by ever-newer, ever more exciting, capable and ‘revolutionary’ technologies whose promise and moment in the limelight is, in turn, equally fleeting. As Franzen has noted, obsolescence and the trail of abandoned, superseded systems are a natural, even planned-for, product of an infatuation with the newness of new media. For Kline et al., the obsession with obsolescence leads to the characterisation of the videogames industry as a ‘perpetual innovation economy’ whose institutions ‘devote a growing share of their resources to the continual alteration and upgrading of their products’. However, it is my contention here that the supersessionary tendency exerts a more serious impact on videogames than on some other media, partly because the apparently natural logic of obsolescence and technological progress goes largely unchecked and partly because there remain few institutions dedicated to considering and acting upon game preservation. The simple fact, as Lowood et al. have noted, is that material damage is being done as a result of this manufactured sense of continual progress and immediate, irrefutable obsolescence. By focusing on the upcoming new release and the preview of what is yet to come, by exciting gamers about what is in development, and by demonstrating the manifest ways in which the sheen of the new inevitably tarnishes the old, the industry ensures that that which is replaced is fit only for the bargain bin or the budget-priced collection download. As such, it is my position that we are systematically undermining and perhaps even eradicating the possibility of a thorough and well-documented history for videogames. This is a situation that we at the National Videogame Archive, along with colleagues in the emerging field of game preservation (e.g. the International Game Developers Association Game Preservation Special Interest Group, and the Keeping Emulation Environments Portable project), are, naturally, keen to address. Chief amongst our concerns is better understanding how it has come to be that, in 2009, game studies scholars and colleagues from across the memory and heritage sectors are still only at the beginning of the process of considering game preservation. The IGDA Game Preservation SIG was founded only five years ago and its ‘White Paper’ (Lowood et al.) has only just been published.
Surprisingly, despite the importance of videogames within popular culture and the emergence and consolidation of the industry as a potent creative force, there remains comparatively little academic commentary or investigation into the specific situation and life-cycles of games or the demands that they place upon archivists and scholars of digital histories and cultural heritage. As I hope to demonstrate in this essay, one of the key tasks of the project of game preservation is to draw attention to the consequences of the concentration, even fetishisation, of the next generation, the new and the forthcoming. The focus on what I have termed ‘the lure of the imminent’ (e.g. Newman, Playing), the fixation on not only the present but also the as-yet-unreleased next generation, has contributed to the normalisation of the discourses of technological advancement and the inevitability and finality of obsolescence. The conflation of gameplay pleasure and cultural import with technological – and indeed, usually visual – sophistication gives rise to a context of endless newness, within which there appears to be little space for the ‘outdated’, the ‘superseded’ or the ‘old’. In a commercial and cultural space in which so little value is placed upon anything but the next game, we risk losing touch with the continuities of development and the practices of play while simultaneously robbing players and scholars of the critical tools and resources necessary for contextualised appreciation and analysis of game form and aesthetics, for instance (see Monnens, "Why", for more on the value of preserving ‘old’ games for analysis and scholarship). Moreover, we risk losing specific games, platforms, artefacts and products as they disappear into the bargain bucket or crumble to dust as media decay, deterioration and ‘bit rot’ (Monnens, "Losing") set in. Space does not here permit a discussion of the scope and extent of the preservation work required (for instance, the NVA sets its sights on preserving, documenting, interpreting and exhibiting ‘videogame culture’ in its broadest sense and recognises the importance of videogames as more than just code and as enmeshed within complex networks of productive, consumptive and performative practices). Neither is it my intention to discuss here the specific challenges and numerous issues associated with archival and exhibition tools such as emulation, which seek to rebirth code on up-to-date, manageable, well-supported hardware platforms but which are frequently insensitive to the specificities and nuances of the played experience (see Newman, "On Emulation", for some further notes on videogame emulation, archiving and exhibition and Takeshita’s comments in Nutt on the technologies and aesthetics of glitches, for instance). Each of these issues is vitally important and will, doubtless, become a part of the forthcoming research agenda for game preservation scholars. My focus here, however, is rather more straightforward and foundational and, though it is deliberately controversial, it is my hope that it casts some light on some ingrained assumptions about videogames and the magnitude and urgency of the game preservation project. Videogames Are Disappearing?
At a time when retailers’ shelves struggle under the weight of newly-released titles and digital distribution systems such as Steam, the PlayStation Network, Xbox Live Marketplace, WiiWare, DSiWare et al. bring new ways to purchase and consume playable content, it might seem strange to suggest that videogames are disappearing. In addition to what we have perhaps come to think of as the ‘usual suspects’ in the hardware and software publishing marketplace, over the past year or so Apple have, unexpectedly and perhaps even surprising themselves, carved out a new gaming platform with the iPhone/iPod Touch and have dramatically simplified the notoriously difficult process of distributing mobile content with the iTunes App Store. In the face of this apparent glut of games and the emergence and (re)discovery of new markets with the iPhone, Wii and Nintendo DS, videogames seem an ever more vital and visible part of popular culture. Yet, for all their commercial success and seeming penetration, the simple fact is that they are disappearing. And at an alarming rate. Addressing the IGDA community of game developers and producers, Henry Lowood makes the point with admirable clarity (see also Ruggill and McAllister): If we fail to address the problems of game preservation, the games you are making will disappear, perhaps within a few decades. You will lose access to your own intellectual property, you will be unable to show new developers the games you designed or that inspired you, and you may even find it necessary to re-invent a bunch of wheels. (Lowood et al. 1) For me, this point hit home most persuasively a few years ago when, along with Iain Simons, I was invited by the British Film Institute to contribute a book to their ‘Screen Guides’ series. 100 Videogames (Newman and Simons) was an intriguing prospect that provided us with the challenge and opportunity to explore some of the key moments in videogaming’s forty-year history. However, although the research and writing processes proved to be an immensely pleasurable and rewarding experience that we hope culminated in an accessible, informative volume offering insight into some well-known (and some less well-known) games, the project was ultimately tinged with more than a little disappointment and frustration. Assuming our book had successfully piqued the interest of our readers into rediscovering games previously played or perhaps investigating games for the first time, what could they then do? Where could they go to find these games in order to experience their delights (or their flaws and problems) at first hand? Had our volume been concerned with television or film, as most of the Screen Guides are, then online and offline retailers, libraries, and even archives for less widely-available materials, would have been obvious ports of call. For the student of videogames, however, the choices are not so much limited as practically non-existent. It is only comparatively recently that videogame retailers have shifted away from an almost exclusive focus on new releases and the zeitgeist platforms towards a recognition of old games and systems through the creation of the ‘pre-owned’ marketplace. The ‘pre-owned’ transaction is one in which old titles may be traded in for cash or against the purchase of new releases of hardware or software. Surely, then, this represents the commercial viability of classic games and is a recognition on the part of retail that the new release is not the only game in town.
Yet, if we consider the ‘pre-owned’ model more carefully, we find a few telling points. First, there is cold economic sense to the pre-owned business model. In their financial statements for FY08, GAME revealed that the service isn’t just a key part of its offer to consumers, but that it also represents an ‘attractive’ gross margin of 39 per cent (French). Second, and most important, the premise of the pre-owned business as it is communicated to consumers still offers nothing but primacy to the new release. That one would trade in one’s old games in order to consume these putatively better new ones speaks eloquently in the language of obsolescence and what Dovey and Kennedy have called the ‘technological imaginary’. The wire mesh buckets of old, pre-owned games are not displayed or coded as treasure troves for the discerning or completist collector but rather are nothing more than bargain bins. These are not classic games. These are cheap games. Cheap because they are old. Cheap because they have had their day. This is a curious situation that affects videogames most unfairly. Of course, my caricature of the videogame retailer is still incomplete, as a good deal of the instantly visible shopfloor space is dedicated neither to pre-owned nor new releases but rather to displays of empty boxes often sporting unfinalised, sometimes mocked-up, boxart flaunting titles available for pre-order. Titles you cannot even buy yet. In the videogames marketplace, even the present is not exciting enough. The best game is always the next game. Importantly, retail is not alone in manufacturing this sense of dissatisfaction with the past and even the present. The specialist videogames press plays at least as important a role in reinforcing and normalising the supersessionary discourse of instant obsolescence by fixing readers’ attentions and expectations on the just-visible horizon. Examining the pages of specialist gaming publications reveals them to be something akin to Futurist paeans, dedicating anything from 70 to 90% of their non-advertising pages to previews and interviews with developers about still-in-development titles (see Newman, Playing, for more on the specialist gaming press’ love affair with the next generation and the NDA scoop). Though a small number of publications specifically address retro titles (e.g. Imagine Publishing’s Retro Gamer), most titles are essentially vehicles to promote current and future product lines, with many magazines essentially operating as delivery devices for cover-mounted CDs/DVDs offering teaser videos or playable demos of forthcoming titles to further whet the appetite. Manufacturing a sense of excitement might seem wholly natural and perhaps even desirable in helping to maintain a keen interest in gaming culture, but the imbalance of popular coverage has a potentially deleterious effect on the status of superseded titles. Xbox World 360’s magnificently-titled ‘Anticip–O–Meter’ ™ does more than simply build anticipation. Like regular features that run under headings such as ‘The Next Best Game in The World Ever is…’, it seeks to author not so much excitement about the imminent release as dissatisfaction with the present, with which unfavourable comparisons are inevitably drawn. The current or previous crop of (once new, let us not forget) titles are not simply superseded but rather are reinvented as yardsticks to judge the prowess of the even newer and unarguably ‘better’.
As Ashton has noted, the continual promotion of the impressiveness of the next generation requires a delicate balancing act and a selective, institutionalised system of recall and forgetting that recovers the past as a suite of (often technical) benchmarks (twice as many polygons, higher resolution etc.) In the absence of formalised and systematic collecting, these obsoleted titles run the risk of being forgotten forever once they no longer serve the purpose of demonstrating the comparative advancement of the successors. The Future of Videogaming’s Past Even if we accept the myriad claims of game studies scholars that videogames are worthy of serious interrogation in and of themselves and as part of a multifaceted, transmedial supersystem, we might be tempted to think that the lack of formalised collections, archival resources and readily available ‘old/classic’ titles at retail is of no great significance. After all, as Jones has observed, the videogame player is almost primed to undertake this kind of activity as gaming can, at least partly, be understood as the act and art of collecting. Games such as Animal Crossing make this tendency most manifest by challenging their players to collect objects and artefacts – from natural history through to works of visual art – so as to fill the initially-empty in-game Museum’s cases. While almost all videogames from The Sims to Katamari Damacy can be considered to engage their players in collecting and collection management work to some extent, Animal Crossing is perhaps the most pertinent example of the indivisibility of the gamer/archivist. Moreover, the permeability of the boundary between the fan’s collection of toys, dolls, posters and the other treasured objects of merchandising and the manipulation of inventories, acquisitions and equipment lists that we see in the menus and gameplay imperatives of videogames ensures an extensiveness and scope of fan collecting and archival work. Similarly, the sociality of fan collecting and the value placed on private hoarding, public sharing and the processes of research ‘…bridges to new levels of the game’ (Jones 48). Perhaps we should be as unsurprised that their focus on collecting makes videogames similar to eBay as we are to the realisation that eBay with its competitiveness, its winning and losing states, and its inexorable countdown timer, is nothing if not a game? We should be mindful, however, of overstating the positive effects of fandom on the fate of old games. Alongside eBay’s veneration of the original object, p2p and bittorrent sites reduce the videogame to its barest. Quite apart from the (il)legality of emulation and videogame ripping and sharing (see Conley et al.), the existence of ‘ROMs’ and the technicalities of their distribution reveals much about the peculiar tension between the interest in old games and their putative cultural and economic value. (St)ripped down to the barest of code, ROMs deny the gamer the paratextuality of the instruction manual or boxart. In fact, divorced from its context and robbed of its materiality, ROMs perhaps serve to make the original game even more distant. More tellingly, ROMs are typically distributed by the thousand in zipped files. And so, in just a few minutes, entire console back-catalogues – every game released in every territory – are available for browsing and playing on a PC or Mac. 
The completism of the collections allows detailed scrutiny of differences in Japanese versus European releases, for instance, and can be seen as a vital investigative resource. However, that these ROMs are packaged into collections of many thousands speaks implicitly of these games’ perceived value. In a similar vein, the budget-priced retro re-release collection helps to diminish the value of each constituent game and serves to simultaneously manufacture and highlight the manifestly unfair comparison between these intriguingly retro curios and the legitimately full-priced games of now and next. Customer comments at Amazon.co.uk demonstrate the way in which historical and technological comparisons are now solidly embedded within the popular discourse (see also Newman 2009b). Leaving feedback on Sega’s PS3/Xbox 360 Sega MegaDrive Ultimate Collection, customers berate the publisher for the apparently meagre selection of titles on offer. Interestingly, this charge seems based less on the quality, variety or range of the collection than on jarring technological schisms and a clear sense of these titles being of necessarily and inevitably diminished monetary value. Comments range from outraged consternation, ‘Wtf, only 40 games?’, ‘I wont be getting this as one disc could hold the entire arsenal of consoles and games from commodore to sega saturn(Maybe even Dreamcast’ through to more detailed analyses that draw attention to the number of bits and bytes but that notably neglect any consideration of gameplay, experientiality, cultural significance or, heaven forbid, fun. “Ultimate” Collection? 32Mb of games on a Blu-ray disc?…here are 40 Megadrive games at a total of 31 Megabytes of data. This was taking the Michael on a DVD release for the PS2 (or even on a UMD for the PSP), but for a format that can store 50 Gigabytes of data, it’s an insult. Sega’s entire back catalogue of Megadrive games only comes to around 800 Megabytes - they could fit that several times over on a DVD. The ultimate consequence of these different but complementary attitudes to games that fix attentions on the future and package up decontextualised ROMs by the thousand or even collections of 40 titles on a single disc (selling for less than half the price of one of the original cartridges) is a disregard – perhaps even a disrespect – for ‘old’ games. Indeed, it is this tendency, this dominant discourse of inevitable, natural and unimpeachable obsolescence and supersession, that provided one of the prime motivators for establishing the NVA. As Lowood et al. note in the title of the IGDA Game Preservation SIG’s White Paper, we need to act to preserve and conserve videogames ‘before it’s too late’. References Ashton, D. ‘Digital Gaming Upgrade and Recovery: Enrolling Memories and Technologies as a Strategy for the Future.’ M/C Journal 11.6 (2008). 13 Jun 2009 ‹http://journal.media-culture.org.au/index.php/mcjournal/article/viewArticle/86›. Buffa, C. ‘How to Fix Videogame Journalism.’ GameDaily 20 July 2006. 13 Jun 2009 ‹http://www.gamedaily.com/articles/features/how-to-fix-videogame-journalism/69202/?biz=1›. ———. ‘Opinion: How to Become a Better Videogame Journalist.’ GameDaily 28 July 2006. 13 Jun 2009 ‹http://www.gamedaily.com/articles/features/opinion-how-to-become-a-better-videogame-journalist/69236/?biz=1›. ———. ‘Opinion: The Videogame Review – Problems and Solutions.’ GameDaily 2 Aug. 2006.
13 Jun 2009 ‹http://www.gamedaily.com/articles/features/opinion-the-videogame-review-problems-and-solutions/69257/?biz=1›. ———. ‘Opinion: Why Videogame Journalism Sucks.’ GameDaily 14 July 2006. 13 Jun 2009 ‹http://www.gamedaily.com/articles/features/opinion-why-videogame-journalism-sucks/69180/?biz=1›. Cook, Sarah, Beryl Graham, and Sarah Martin eds. Curating New Media, Gateshead: BALTIC, 2002. Duguid, Paul. ‘Material Matters: The Past and Futurology of the Book.’ In Gary Nunberg, ed. The Future of the Book. Berkeley, CA: University of California Press, 1996. 63–101. French, Michael. 'GAME Reveals Pre-Owned Trading Is 18% of Business.’ MCV 22 Apr. 2009. 13 Jun 2009 ‹http://www.mcvuk.com/news/34019/GAME-reveals-pre-owned-trading-is-18-per-cent-of-business›. Giddings, Seth, and Helen Kennedy. ‘Digital Games as New Media.’ In J. Rutter and J. Bryce, eds. Understanding Digital Games. London: Sage. 129–147. Gillen, Kieron. ‘The New Games Journalism.’ Kieron Gillen’s Workblog 2004. 13 June 2009 ‹http://gillen.cream.org/wordpress_html/?page_id=3›. Jones, S. The Meaning of Video Games: Gaming and Textual Strategies, New York: Routledge, 2008. Kerr, A. The Business and Culture of Digital Games. London: Sage, 2006. Lister, Martin, John Dovey, Seth Giddings, Ian Grant and Kevin Kelly. New Media: A Critical Introduction. London and New York: Routledge, 2003. Lowood, Henry, Andrew Armstrong, Devin Monnens, Zach Vowell, Judd Ruggill, Ken McAllister, and Rachel Donahue. Before It's Too Late: A Digital Game Preservation White Paper. IGDA, 2009. 13 June 2009 ‹http://www.igda.org/wiki/images/8/83/IGDA_Game_Preservation_SIG_-_Before_It%27s_Too_Late_-_A_Digital_Game_Preservation_White_Paper.pdf›. Monnens, Devin. ‘Why Are Games Worth Preserving?’ In Before It's Too Late: A Digital Game Preservation White Paper. IGDA, 2009. 13 June 2009 ‹http://www.igda.org/wiki/images/8/83/IGDA_Game_Preservation_SIG_-_Before_It%27s_Too_Late_-_A_Digital_Game_Preservation_White_Paper.pdf›. ———. ‘Losing Digital Game History: Bit by Bit.’ In Before It's Too Late: A Digital Game Preservation White Paper. IGDA, 2009. 13 June 2009 ‹http://www.igda.org/wiki/images/8/83/IGDA_Game_Preservation_SIG_-_Before_It%27s_Too_Late_-_A_Digital_Game_Preservation_White_Paper.pdf›. Newman, J. ‘In Search of the Videogame Player: The Lives of Mario.’ New Media and Society 4.3 (2002): 407-425.———. ‘On Emulation.’ The National Videogame Archive Research Diary, 2009. 13 June 2009 ‹http://www.nationalvideogamearchive.org/index.php/2009/04/on-emulation/›. ———. ‘Our Cultural Heritage – Available by the Bucketload.’ The National Videogame Archive Research Diary, 2009. 10 Apr. 2009 ‹http://www.nationalvideogamearchive.org/index.php/2009/04/our-cultural-heritage-available-by-the-bucketload/›. ———. Playing with Videogames, London: Routledge, 2008. ———, and I. Simons. 100 Videogames. London: BFI Publishing, 2007. Nutt, C. ‘He Is 8-Bit: Capcom's Hironobu Takeshita Speaks.’ Gamasutra 2008. 13 June 2009 ‹http://www.gamasutra.com/view/feature/3752/›. Radd, D. ‘Gaming 3.0. Sony’s Phil Harrison Explains the PS3 Virtual Community, Home.’ Business Week 9 Mar. 2007. 13 June 2009 ‹http://www.businessweek.com/innovate/content/mar2007/id20070309_764852.htm?chan=innovation_game+room_top+stories›. Ruggill, Judd, and Ken McAllister. ‘What If We Do Nothing?’ Before It's Too Late: A Digital Game Preservation White Paper. IGDA, 2009. 13 June 2009. 
‹http://www.igda.org/wiki/images/8/83/IGDA_Game_Preservation_SIG_-_Before_It%27s_Too_Late_-_A_Digital_Game_Preservation_White_Paper.pdf›. 16-19.
APA, Harvard, Vancouver, ISO, and other styles
41

Ibrahim, Yasmin. "Commodifying Terrorism." M/C Journal 10, no. 3 (June 1, 2007). http://dx.doi.org/10.5204/mcj.2665.

Full text
Abstract:
Introduction Figure 1 The counter-Terrorism advertising campaign of London’s Metropolitan Police commodifies some everyday items such as mobile phones, computers, passports and credit cards as having the potential to sustain terrorist activities. The process of ascribing cultural values and symbolic meanings to some everyday technical gadgets objectifies and situates Terrorism into the everyday life. The police, in urging people to look out for ‘the unusual’ in their normal day-to-day lives, juxtapose the everyday with the unusual, where day-to-day consumption, routines and flows of human activity can seemingly house insidious and atavistic elements. This again is reiterated in the Met police press release: Terrorists live within our communities making their plans whilst doing everything they can to blend in, and trying not to raise suspicions about their activities. (MPA Website) The commodification of Terrorism through uncommon and everyday objects situates Terrorism as a phenomenon which occupies a liminal space within the everyday. It resides, breathes and co-exists within the taken-for-granted routines and objects of ‘the everyday’ where it has the potential to explode and disrupt without warning. Since 9/11 and the 7/7 bombings Terrorism has been narrated through the disruption of mobility, whether in mid-air or in the deep recesses of the Underground. The resonant thread of disruption to human mobility evokes a powerful meta-narrative where acts of Terrorism can halt human agency amidst the backdrop of the metropolis, which is often a metaphor for speed and accelerated activities. If globalisation and the interconnected nature of the world are understood through discourses of risk, Terrorism bears the same footprint in urban spaces of modernity, narrating the vulnerability of the human condition in an inter-linked world where ideological struggles and resistance are manifested through inexplicable violence and destruction of lives, where the everyday is suspended to embrace the unexpected. As a consequence ambient fear “saturates the social spaces of everyday life” (Hubbard 2). The commodification of Terrorism through everyday items of consumption inevitably creates an intertextuality with real and media events, which constantly corrode the security of the metropolis. Paddy Scannell alludes to a doubling of place in our mediated world where “public events now occur simultaneously in two different places; the place of the event itself and that in which it is watched and heard. The media then vacillates between the two sites and creates experiences of simultaneity, liveness and immediacy” (qtd. in Moores 22). The doubling of place through media constructs a pervasive environment of risk and fear. Mark Danner (qtd. in Bauman 106) points out that the most powerful weapon of the 9/11 terrorists was that innocuous and “most American of technological creations: the television set” which provided a global platform to constantly replay and remember the dreadful scenes of the day, enabling the terrorist to appear invincible and to narrate fear as ubiquitous and omnipresent. Philip Abrams argues that ‘big events’ (such as 9/11 and 7/7) do make a difference in the social world for such events function as a transformative device between the past and future, forcing society to alter or transform its perspectives. 
David Altheide points out that since September 11 and the ensuing war on terror, a new discourse of Terrorism has emerged as a way of expressing how the world has changed and defining a state of constant alert through a media logic and format that shapes the nature of discourse itself. Consequently, the intensity and centralisation of surveillance in Western countries increased dramatically, placing the emphasis on expanding the forms of the already existing range of surveillance processes and practices that circumscribe and help shape our social existence (Lyon, Terrorism 2). Normalisation of Surveillance The role of technologies, particularly information and communication technologies (ICTs), and other infrastructures in unevenly distributing access to the goods and services necessary for modern life, while facilitating data collection on and control of the public, is a significant characteristic of modernity (Reiman; Graham and Marvin; Monahan). The embedding of technological surveillance into spaces and infrastructures not only augments social control but also redefines data as a form of capital which can be shared between public and private sectors (Gandy, Data Mining; O’Harrow; Monahan). The scale, complexity and limitations of omnipresent and omnipotent surveillance nevertheless offer room for subversion as well as for new forms of domination and oppression (Marx). In surveillance studies, Foucault’s analysis is often heavily employed to explain lines of continuity and change between earlier forms of surveillance and data assemblage and contemporary forms in the shape of closed-circuit television (CCTV) and other surveillance modes (Dee). It establishes the need to discern patterns of power and normalisation and the subliminal or obvious cultural codes and categories that emerge through these arrangements (Fopp; Lyon, Electronic; Norris and Armstrong). In their study of CCTV surveillance, Norris and Armstrong (cf. in Dee) point out that, when added to the daily minutiae of surveillance, CCTV cameras in public spaces, along with other camera surveillance in workplaces, capture human beings on a database constantly. The normalisation of surveillance, particularly with reference to CCTV, the popularisation of surveillance through television formats such as ‘Big Brother’ (Dee), and the expansion of online platforms to publish private images have created a contradictory, complex and contested set of spatial and power relationships in society. The UK, for example, has the most developed system of urban and public space cameras in the world and, as Lyon (Surveillance) points out, this growth of camera surveillance has been achieved with very little, if any, public debate as to its benefits or otherwise. There may now be as many as 4.2 million CCTV cameras in Britain (cf. Lyon, Surveillance). That is one for every fourteen people, and a person can be captured on over 300 cameras every day. An estimated £500m of public money has been invested in CCTV infrastructure over the last decade but, according to a Home Office study, CCTV schemes that have been assessed had little overall effect on crime levels (Wood and Ball). In spatial terms, these statistics reiterate Foucault’s emphasis on the power economy of the unseen gaze. Michel Foucault, in analysing the links between power, information and surveillance inspired by Bentham’s idea of the Panopticon, indicated that it is possible to sanction or reward an individual through the act of surveillance without their knowledge (155).
It is this unseen and unknown gaze of surveillance that is fundamental to the exercise of power. The design and arrangement of buildings can be engineered so that the “surveillance is permanent in its effects, even if it is discontinuous in its action” (Foucault 201). Lyon (Terrorism), in tracing the trajectory of surveillance studies, points out that much of the surveillance literature has focused on understanding it as a centralised bureaucratic relationship between the powerful and the governed. Invisible forms of surveillance have also been viewed as a class weapon in some societies. With the advancements in and proliferation of surveillance technologies as well as their convergence with other technologies, Lyon argues that it is no longer feasible to view surveillance as a linear or centralised process. In our contemporary globalised world, there is a need to reconcile the dialectical strands that mediate surveillance as a process. In acknowledging this, Gilles Deleuze and Felix Guattari have constructed surveillance as a rhizome that defies linearity to appropriate a more convoluted and malleable form where the coding of bodies and data can be enmeshed to produce intricate power relationships and hierarchies within societies. Latour draws on the notion of assemblage by propounding that data is amalgamated from scattered centres of calculation, where these can range from state and commercial institutions to scientific laboratories which scrutinise data to conceive governance and control strategies. Both the Latourian and Deleuzian ideas of surveillance highlight the disparate arrays of people, technologies and organisations that become connected to make “surveillance assemblages”, in contrast to the static, unidirectional Panopticon metaphor (Ball, “Organization” 93). In a similar vein, Gandy (Panoptic) infers that it is misleading to assume that surveillance in practice is as complete and totalising as the Panoptic ideal type would have us believe. Co-optation of Millions The Metropolitan Police’s counter-Terrorism strategy seeks to co-opt millions, where the corporeal body can complement the landscape of technological surveillance that already co-exists within modernity. In its press release, the role of civilian bodies in ensuring the security of the city is stressed: Keeping Londoners safe from Terrorism is not a job solely for governments, security services or police. If we are to make London the safest major city in the world, we must mobilise against Terrorism not only the resources of the state, but also the active support of the millions of people who live and work in the capital. (MPA Website). Surveillance is increasingly simulated through the millions of corporeal entities, where seeing in advance is the goal even before technology records and codes these images (William). Bodies understand and code risk and images through the cultural narratives which circulate in society. Compared to CCTV technology images, which require cultural and political interpretations and interventions, bodies as surveillance organisms implicitly code other bodies and activities. The travel bag in the Metropolitan Police poster reinforces the images of the 7/7 bombers and the renewed attempts to bomb the London Underground on the 21st of July. It reiterates the CCTV footage revealing images of the bombers wearing rucksacks. The image of the rucksack embodies both the everyday and the potential for evil in everyday objects.
It also inevitably reproduces the cultural biases and prejudices where the rucksack is subliminally associated with a specific type of body. The rucksack in these terms is a laden image which symbolically captures the context and culture of risk discourses in society. The co-optation of the population as a surveillance entity also recasts new forms of social responsibility within the democratic polity, where privacy is increasingly mediated by the greater need to monitor, trace and record the activities of one another. Nikolas Rose, in discussing the increasing ‘responsibilisation’ of individuals in modern societies, describes the process in which the individual accepts responsibility for personal actions across a wide range of fields of social and economic activity as in the choice of diet, savings and pension arrangements, health care decisions and choices, home security measures and personal investment choices (qtd. in Dee). While surveillance in individualistic terms is often viewed as a threat to privacy, Rose argues that the state of ‘advanced liberalism’ within modernity and post-modernity requires considerable degrees of self-governance, regulation and surveillance whereby the individual is constructed both as a ‘new citizen’ and a key site of self management. By co-opting and recasting the role of the citizen in the age of Terrorism, the citizen to a degree accepts responsibility for both surveillance and security. In our sociological imagination the body is constructed both as lived as well as a social object. Erving Goffman uses the word ‘umwelt’ to stress that human embodiment is central to the constitution of the social world. Goffman defines ‘umwelt’ as “the region around an individual from which signs of alarm can come” and employs it to capture how people as social actors perceive and manage their settings when interacting in public places (252). Goffman’s ‘umwelt’ can be traced to Immanuel Kant’s idea that it is the a priori categories of space and time that make it possible for a subject to perceive a world (Umiker-Sebeok; qtd. in Ball, “Organization”). Anthony Giddens adapted the term Umwelt to refer to “a phenomenal world with which the individual is routinely ‘in touch’ in respect of potential dangers and alarms which then formed a core of (accomplished) normalcy with which individuals and groups surround themselves” (244). Benjamin Smith, in considering the body as an integral component of the link between our consciousness and our material world, observes that the body is continuously inscribed by culture. These inscriptions, he argues, encompass a wide range of cultural practices and will imply knowledge of a variety of social constructs. The inscribing of the body will produce cultural meanings as well as create forms of subjectivity while locating and situating the body within a cultural matrix (Smith). Drawing on Derrida’s work, Pugliese employs the term ‘Somatechnics’ to conceptualise the body as a culturally intelligible construct and to address the techniques in and through which the body is formed and transformed (qtd. in Osuri). These techniques can encompass signification systems such as race and gender and equally technologies which mediate our sense of reality. These technologies of thinking, seeing, hearing, signifying, visualising and positioning produce the very conditions for the cultural intelligibility of the body (Osuri). The body is then continuously inscribed and interpreted through mediated signifying systems. 
Similarly, Hayles, while not intending to impose a Cartesian dichotomy between the physical body and its cognitive presence, contends that the use and interactions with technology incorporate the body as a material entity but it also equally inscribes it by marking, recording and tracing its actions in various terrains. According to Gayatri Spivak (qtd. in Ball, “Organization”) new habits and experiences are embedded into the corporeal entity which then mediates its reactions and responses to the social world. This means one’s body is not completely one’s own and the presence of ideological forces or influences then inscribe the body with meanings, codes and cultural values. In our modern condition, the body and data are intimately and intricately bound. Outside the home, it is difficult for the body to avoid entering into relationships that produce electronic personal data (Stalder). According to Felix Stalder our physical bodies are shadowed by a ‘data body’ which follows the physical body of the consuming citizen and sometimes precedes it by constructing the individual through data (12). Before we arrive somewhere, we have already been measured and classified. Thus, upon arrival, the citizen will be treated according to the criteria ‘connected with the profile that represents us’ (Gandy, Panoptic; William). Following September 11, Lyon (Terrorism) reveals that surveillance data from a myriad of sources, such as supermarkets, motels, traffic control points, credit card transactions records and so on, was used to trace the activities of terrorists in the days and hours before their attacks, confirming that the body leaves data traces and trails. Surveillance works by abstracting bodies from places and splitting them into flows to be reassembled as virtual data-doubles, and in the process can replicate hierarchies and centralise power (Lyon, Terrorism). Mike Dee points out that the nature of surveillance taking place in modern societies is complex and far-reaching and in many ways insidious as surveillance needs to be situated within the broadest context of everyday human acts whether it is shopping with loyalty cards or paying utility bills. Physical vulnerability of the body becomes more complex in the time-space distanciated surveillance systems to which the body has become increasingly exposed. As such, each transaction – whether it be a phone call, credit card transaction, or Internet search – leaves a ‘data trail’ linkable to an individual person or place. Haggerty and Ericson, drawing from Deleuze and Guattari’s concept of the assemblage, describe the convergence and spread of data-gathering systems between different social domains and multiple levels (qtd. in Hier). They argue that the target of the generic ‘surveillance assemblage’ is the human body, which is broken into a series of data flows on which surveillance process is based. The thrust of the focus is the data individuals can yield and the categories to which they can contribute. These are then reapplied to the body. In this sense, surveillance is rhizomatic for it is diverse and connected to an underlying, invisible infrastructure which concerns interconnected technologies in multiple contexts (Ball, “Elements”). The co-opted body in the schema of counter-Terrorism enters a power arrangement where it constitutes both the unseen gaze as well as the data that will be implicated and captured in this arrangement. 
It is capable of producing surveillance data for those in power while creating new data through its transactions and movements in its everyday life. The body is unequivocally constructed through this data and is also entrapped by it in terms of representation and categorisation. The corporeal body is therefore part of the machinery of surveillance while being vulnerable to its discriminatory powers of categorisation and victimisation. As Hannah Arendt (qtd. in Bauman 91) had warned, “we terrestrial creatures bidding for cosmic significance will shortly be unable to comprehend and articulate the things we are capable of doing” Arendt’s caution conveys the complexity, vulnerability as well as the complicity of the human condition in the surveillance society. Equally it exemplifies how the corporeal body can be co-opted as a surveillance entity sustaining a new ‘banality’ (Arendt) in the machinery of surveillance. Social Consequences of Surveillance Lyon (Terrorism) observed that the events of 9/11 and 7/7 in the UK have inevitably become a prism through which aspects of social structure and processes may be viewed. This prism helps to illuminate the already existing vast range of surveillance practices and processes that touch everyday life in so-called information societies. As Lyon (Terrorism) points out surveillance is always ambiguous and can encompass genuine benefits and plausible rationales as well as palpable disadvantages. There are elements of representation to consider in terms of how surveillance technologies can re-present data that are collected at source or gathered from another technological medium, and these representations bring different meanings and enable different interpretations of life and surveillance (Ball, “Elements”). As such surveillance needs to be viewed in a number of ways: practice, knowledge and protection from threat. As data can be manipulated and interpreted according to cultural values and norms it reflects the inevitability of power relations to forge its identity in a surveillance society. In this sense, Ball (“Elements”) concludes surveillance practices capture and create different versions of life as lived by surveilled subjects. She refers to actors within the surveilled domain as ‘intermediaries’, where meaning is inscribed, where technologies re-present information, where power/resistance operates, and where networks are bound together to sometimes distort as well as reiterate patterns of hegemony (“Elements” 93). While surveillance is often connected with technology, it does not however determine nor decide how we code or employ our data. New technologies rarely enter passive environments of total inequality for they become enmeshed in complex pre-existing power and value systems (Marx). With surveillance there is an emphasis on the classificatory powers in our contemporary world “as persons and groups are often risk-profiled in the commercial sphere which rates their social contributions and sorts them into systems” (Lyon, Terrorism 2). Lyon (Terrorism) contends that the surveillance society is one that is organised and structured using surveillance-based techniques recorded by technologies, on behalf of the organisations and governments that structure our society. This information is then sorted, sifted and categorised and used as a basis for decisions which affect our life chances (Wood and Ball). 
The emergence of pervasive, automated and discriminatory mechanisms for risk profiling and social categorising constitutes a significant mechanism for reproducing and reinforcing social, economic and cultural divisions in information societies. Such automated categorisation, Lyon (Terrorism) warns, has consequences for everyone, especially in the face of the new anti-terror measures enacted after September 11. In tandem with this, Bauman points out that a few suicidal murderers on the loose will be quite enough to recycle thousands of innocents into the “usual suspects”. In no time, a few iniquitous individual choices will be reprocessed into the attributes of a “category”; a category easily recognisable by, for instance, a suspiciously dark skin or a suspiciously bulky rucksack* *the kind of object which CCTV cameras are designed to note and passers-by are told to be vigilant about. And passers-by are keen to oblige. Since the terrorist atrocities on the London Underground, the volume of incidents classified as “racist attacks” rose sharply around the country. (122; emphasis added) Bauman, drawing on Lyon, asserts that the understandable desire for security combined with the pressure to adopt different kinds of systems “will create a culture of control that will colonise more areas of life with or without the consent of the citizen” (123). This means that inhabitants of the urban space, whether citizens, workers or consumers with no terrorist ambitions whatsoever, will discover that their opportunities are more circumscribed by the subject positions or categories which are imposed on them. Bauman cautions that for some these categories may be extremely prejudicial, restricting them from consumer choices because of credit ratings, or, more insidiously, relegating them to second-class status because of their colour or ethnic background (124). Joseph Pugliese, in linking visual regimes of racial profiling and the shooting of Jean Charles de Menezes in the aftermath of the 7/7 bombings in London, suggests that the discursive relations of power and visuality are inextricably bound. Pugliese argues that racial profiling creates a regime of visuality which fundamentally inscribes our physiology of perceptions with stereotypical images. He applies this analogy to Menezes running down the platform, in which the retina transforms him into the “hallucinogenic figure of an Asian Terrorist” (Pugliese 8). With globalisation and the proliferation of ICTs, borders and boundaries are no longer sacrosanct and, as such, risks are managed by enacting ‘smart borders’ through new technologies, with huge databases behind the scenes processing information about individuals and their journeys through the profiling of body parts with, for example, iris scans (Wood and Ball 31). Such body profiling technologies are used to create watch lists of dangerous passengers or identity groups who might be of greater ‘risk’. The body in a surveillance society can be dissected into parts and profiled and coded through technology. These disparate codings of body parts can be assembled (or selectively omitted) to construct and represent whole bodies in our information society to ascertain risk. The selection and circulation of knowledge will also determine who gets slotted into the various categories that a surveillance society creates. Conclusion When the corporeal body is subsumed into a web of surveillance it often raises questions about the deterministic nature of technology.
The question is a long-standing one in our modern consciousness. We are apprehensive about according technology too much power and yet it is implicated in the contemporary power relationships where it is suspended amidst human motive, agency and anxiety. The emergence of surveillance societies, the co-optation of bodies in surveillance schemas, as well as the construction of the body through data in everyday transactions, conveys both the vulnerabilities of the human condition as well as its complicity in maintaining the power arrangements in society. Bauman, in citing Jacques Ellul and Hannah Arendt, points out that we suffer a ‘moral lag’ in so far as technology and society are concerned, for often we ruminate on the consequences of our actions and motives only as afterthoughts without realising at this point of existence that the “actions we take are most commonly prompted by the resources (including technology) at our disposal” (91). References Abrams, Philip. Historical Sociology. Shepton Mallet, UK: Open Books, 1982. Altheide, David. “Consuming Terrorism.” Symbolic Interaction 27.3 (2004): 289-308. Arendt, Hannah. Eichmann in Jerusalem: A Report on the Banality of Evil. London: Faber & Faber, 1963. Bauman, Zygmunt. Liquid Fear. Cambridge, UK: Polity, 2006. Ball, Kristie. “Elements of Surveillance: A New Framework and Future Research Direction.” Information, Communication and Society 5.4 (2002): 573-90 ———. “Organization, Surveillance and the Body: Towards a Politics of Resistance.” Organization 12 (2005): 89-108. Dee, Mike. “The New Citizenship of the Risk and Surveillance Society – From a Citizenship of Hope to a Citizenship of Fear?” Paper Presented to the Social Change in the 21st Century Conference, Queensland University of Technology, Queensland, Australia, 22 Nov. 2002. 14 April 2007 http://eprints.qut.edu.au/archive/00005508/02/5508.pdf>. Deleuze, Gilles, and Felix Guattari. A Thousand Plateaus. Minneapolis: U of Minnesota P, 1987. Fopp, Rodney. “Increasing the Potential for Gaze, Surveillance and Normalization: The Transformation of an Australian Policy for People and Homeless.” Surveillance and Society 1.1 (2002): 48-65. Foucault, Michel. Discipline and Punish: The Birth of the Prison. London: Allen Lane, 1977. Giddens, Anthony. Modernity and Self-Identity. Self and Society in the Late Modern Age. Stanford: Stanford UP, 1991. Gandy, Oscar. The Panoptic Sort: A Political Economy of Personal Information. Boulder, CO: Westview, 1997. ———. “Data Mining and Surveillance in the Post 9/11 Environment.” The Intensification of Surveillance: Crime, Terrorism and War in the Information Age. Eds. Kristie Ball and Frank Webster. Sterling, VA: Pluto Press, 2003. Goffman, Erving. Relations in Public. Harmondsworth: Penguin, 1971. Graham, Stephen, and Simon Marvin. Splintering Urbanism: Networked Infrastructures, Technological Mobilities and the Urban Condition. New York: Routledge, 2001. Hier, Sean. “Probing Surveillance Assemblage: On the Dialectics of Surveillance Practices as Process of Social Control.” Surveillance and Society 1.3 (2003): 399-411. Hayles, Katherine. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature and Informatics. Chicago: U of Chicago P, 1999. Hubbard, Phil. “Fear and Loathing at the Multiplex: Everyday Anxiety in the Post-Industrial City.” Capital & Class 80 (2003). Latour, Bruno. Science in Action. Cambridge, Mass: Harvard UP, 1987 Lyon, David. The Electronic Eye – The Rise of Surveillance Society. Oxford: Polity Press, 1994. ———. 
“Terrorism and Surveillance: Security, Freedom and Justice after September 11 2001.” Privacy Lecture Series, Queens University, 12 Nov 2001. 16 April 2007 <http://privacy.openflows.org/lyon_paper.html>. ———. “Surveillance Studies: Understanding Visibility, Mobility and the Phonetic Fix.” Surveillance and Society 1.1 (2002): 1-7. Metropolitan Police Authority (MPA). “Counter Terrorism: The London Debate.” Press Release. 21 June 2006. 18 April 2007 <http://www.mpa.gov.uk.access/issues/comeng/Terrorism.htm>. Pugliese, Joseph. “Asymmetries of Terror: Visual Regimes of Racial Profiling and the Shooting of Jean Charles de Menezes in the Context of the War in Iraq.” Borderlands 5.1 (2006). 30 May 2007 <http://www.borderlandsejournal.adelaide.edu.au/vol15no1_2006/pugliese.htm>. Marx, Gary. “A Tack in the Shoe: Neutralizing and Resisting the New Surveillance.” Journal of Social Issues 59.2 (2003). 18 April 2007 <http://web.mit.edu/gtmarx/www/tack.html>. Moores, Shaun. “Doubling of Place.” Mediaspace: Place Scale and Culture in a Media Age. Eds. Nick Couldry and Anna McCarthy. London: Routledge, 2004. Monahan, Teri, ed. Surveillance and Security: Technological Politics and Power in Everyday Life. London: Routledge, 2006. Norris, Clive, and Gary Armstrong. The Maximum Surveillance Society: The Rise of CCTV. Oxford: Berg, 1999. O’Harrow, Robert. No Place to Hide. New York: Free Press, 2005. Osuri, Goldie. “Media Necropower: Australian Media Reception and the Somatechnics of Mamdouh Habib.” Borderlands 5.1 (2006). 30 May 2007 <http://www.borderlandsejournal.adelaide.edu.au/vol5no1_2006/osuri_necropower.htm>. Rose, Nikolas. “Government and Control.” British Journal of Criminology 40 (2000): 321–399. Scannell, Paddy. Radio, Television and Modern Life. Oxford: Blackwell, 1996. Smith, Benjamin. “In What Ways, and for What Reasons, Do We Inscribe Our Bodies?” 15 Nov. 1998. 30 May 2007 <http://www.bmezine.com/ritual/981115/Whatways.html>. Stalder, Felix. “Privacy Is Not the Antidote to Surveillance.” Surveillance and Society 1.1 (2002): 120-124. Umiker-Sebeok, Jean. “Power and the Construction of Gendered Spaces.” Indiana University-Bloomington. 14 April 2007 <http://www.slis.indiana.edu/faculty/umikerse/papers/power.html>. Bogard, William. The Simulation of Surveillance: Hypercontrol in Telematic Societies. Cambridge: Cambridge UP, 1996. Wood, David, and Kristie Ball, eds. “A Report on the Surveillance Society.” Surveillance Studies Network, UK, Sep. 2006. 14 April 2007 <http://www.ico.gov.uk/upload/documents/library/data_protection/practical_application/surveillance_society_full_report_2006.pdf>.
Citation reference for this article: MLA Style: Ibrahim, Yasmin. "Commodifying Terrorism: Body, Surveillance and the Everyday." M/C Journal 10.3 (2007). <http://journal.media-culture.org.au/0706/05-ibrahim.php>. APA Style: Ibrahim, Y. (June 2007). "Commodifying Terrorism: Body, Surveillance and the Everyday," M/C Journal, 10(3). Retrieved from <http://journal.media-culture.org.au/0706/05-ibrahim.php>.
APA, Harvard, Vancouver, ISO, and other styles
42

Jethani, Suneel. "Lists, Spatial Practice and Assistive Technologies for the Blind." M/C Journal 15, no. 5 (October 12, 2012). http://dx.doi.org/10.5204/mcj.558.

Full text
Abstract:
Introduction
Supermarkets are functionally challenging environments for people with vision impairments. A supermarket is likely to house an average of 45,000 products in a median floor-space of 4,529 square meters, and many visually impaired people are unable to shop without assistance, which greatly impedes personal independence (Nicholson et al.). The task of selecting goods in a supermarket is an “activity that is expressive of agency, identity and creativity” (Sutherland) from which many vision-impaired persons are excluded. In response to this, a number of proof of concept (demonstrating feasibility) and prototype assistive technologies are being developed which aim to use smart phones as potential sensorial aides for vision impaired persons. In this paper, I discuss two such prototypic technologies, Shop Talk and BlindShopping. I engage with this issue’s list theme by suggesting that, on the one hand, list making is a uniquely human activity that demonstrates our need for order and reliance on memory, reveals our idiosyncrasies, and provides insights into our private lives (Keaggy 12). On the other hand, lists feature in the creation of spatial inventories that represent physical environments (Perec 3-4, 9-10). The use of lists in the architecture of assistive technologies for shopping illuminates the interaction between these two modalities of list use, where items contained in a list are not only textual but also cartographic elements that link the material and immaterial in space and time (Haber 63). I argue that despite the emancipatory potential of assistive shopping technologies, their efficacy in practical situations is highly dependent on the extent to which they can integrate a number of lists to produce representations of space that are meaningful for vision impaired users. I suggest that the extent to which these prototypes may translate to becoming commercially viable, widely adopted technologies is heavily reliant upon commercial and institutional infrastructures, data sources, and regulation. Thus, their design, manufacture and adoption-potential are shaped by the extent to which certain data inventories are accessible and made interoperable. To overcome such constraints, it is important to better understand the “spatial syntax” associated with the shopping task for a vision impaired person; that is, the connected ordering of real and virtual spatial elements that result in a supermarket as a knowable space within which an assisted “spatial practice” of shopping can occur (Kellerman 148, Lefebvre 16). In what follows, I use the concept of lists to discuss the production of supermarket-space in relation to the enabling and disabling potentials of assistive technologies. First, I discuss mobile digital technologies relative to disability and impairment and describe how the shopping task produces a disabling spatial practice. Second, I present a case study showing how assistive technologies function in aiding vision impaired users in completing the task of supermarket shopping. Third, I discuss various factors that may inhibit the liberating potential of technology-assisted shopping by vision-impaired people.
Addressing Shopping as a Disabling Spatial Practice
Consider how a shopping list might inform one’s experience of supermarket space. The way shopping lists are written demonstrates the variability in the logic that governs list writing. 
As Bill Keaggy demonstrates in his found shopping list Web project and subsequent book, Milk, Eggs, Vodka, a shopping list may be written on a variety of materials, be arranged in a number of orientations, and the writer may use differing textual attributes, such as size or underlining, to show emphasis. The writer may use longhand, abbreviate, write neatly, scribble, and use an array of alternate spelling and naming conventions. For example, items may be listed based on knowledge of the location of products, they may be arranged on a list as a result of an inventory of a pantry or fridge, or they may be copied in the order they appear in a recipe. Whilst shopping, some may follow strictly the order of their list, crossing back and forth between aisles. Some may work through their list item-by-item, perhaps forward scanning to achieve greater economies of time and space. As a person shops, their memory may be stimulated by visual cues reminding them of products they need that may not be included on their list. For the vision impaired, this task is near impossible to complete without the assistance of a relative, friend, agency volunteer, or store employee. Such forms of assistance are often unsatisfactory, as delays may be caused due to the unavailability of an assistant, or the assistant having limited literacy, knowledge, or patience to adequately meet the shopper’s needs. Home delivery services, though readily available, impede personal independence (Nicholson et al.). Katie Ellis and Mike Kent argue that “an impairment becomes a disability due to the impact of prevailing ableist social structures” (3). It can be said, then, that supermarkets function as a disability-producing space for the vision impaired shopper. For the vision impaired, a supermarket is a “hegemonic modern visual infrastructure” where, for example, merchandisers may reposition items regularly to induce customers to explore areas of the shop that they would not usually visit, a move which adds to the difficulty faced by those customers with impaired vision who work on the assumption that items remain as they usually are (Schillmeier 161). In addressing this issue, much emphasis has been placed on the potential of mobile communications technologies in affording vision impaired users greater mobility and flexibility (Jolley 27). However, as Gerard Goggin argues, the adoption of mobile communication technologies has not necessarily “gone hand in hand with new personal and collective possibilities” given the limited access to standard features, even if the device is text-to-speech enabled (98). Issues with Digital Rights Management (DRM) limit the way a device accesses and reproduces information, and confusion over whether audio rights are needed to convert text to speech impedes the accessibility of mobile communications technologies for vision impaired users (Ellis and Kent 136). Accessibility and functionality issues like these arise out of the needs, desires, and expectations of the visually impaired as a user group being considered as an afterthought as opposed to a significant factor in the early phases of design and prototyping (Goggin 89). Thus, the development of assistive technologies for the vision impaired has been left to third parties who must adapt their solutions to fit within certain technical parameters. It is valuable to consider what is involved in the task of shopping in order to appreciate the considerations that must be made in the design of assistive technologies intended for shopping. 
Shopping generally consists of five sub-tasks: travelling to the store; finding items in-store; paying for and bagging items at the register; exiting the store and getting home; and, the often overlooked task of putting items away once at home. In this process, supermarkets exhibit a “trichotomous spatial ontology” consisting of locomotor space through which a shopper moves around the store, haptic space in the immediate vicinity of the shopper, and search space where individual products are located (Nicholson et al.). In completing these tasks, a shopper will constantly be moving through and switching between all three of these spaces. In the next section I examine how assistive technologies function in producing supermarkets as both enabling and disabling spaces for the vision impaired.
Assistive Technologies for Vision Impaired Shoppers
Jason Farman (43) and Adriana de Souza e Silva both argue that in many ways spaces have always acted as information interfaces where data of all types can reside. Global Positioning System (GPS), Radio Frequency Identification (RFID), and Quick Response (QR) codes all allow for practically every spatial encounter to be an encounter with information. Site-specific and location-aware technologies address the desire for meaningful representations of space for use in everyday situations by the vision impaired. Further, the possibility of an “always-on” connection to spatial information via a mobile phone with WiFi or 3G connections transforms spatial experience by “enfolding remote [and latent] contexts inside the present context” (de Souza e Silva). A range of GPS navigation systems adapted for vision-impaired users are currently on the market. Typically, these systems convert GPS information into text-to-speech instructions (a simplified sketch of this kind of conversion is given below) and are either standalone devices, such as the Trekker Breeze, or they use the compass, accelerometer, and 3G or WiFi functions found on most smart phones, such as Loadstone. Whilst both these products are adequate in guiding a vision-impaired user from their home to a supermarket, there are significant differences in their interfaces and data architectures. Trekker Breeze is a standalone hardware device that produces talking menus, maps, and GPS information. While its navigation functionality relies on a worldwide radio-navigation system that uses a constellation of 24 satellites to triangulate one’s position (May and LaPierre 263-64), its map and text-to-speech functionality relies on data on a DVD provided with the unit. Loadstone is an open source software system for Nokia devices that has been developed within the vision-impaired community. Loadstone is built on GNU General Public License (GPL) software and is developed from private and user-based funding; this overcomes the issue of Trekker Breeze’s reliance on trading policies and pricing models of the few global vendors of satellite navigation data. Both products have significant shortcomings if viewed in the broader context of the five sub-tasks involved in shopping described above. Trekker Breeze and Loadstone require that additional devices be connected to them. In the case of Trekker Breeze it is a tactile keypad, and with Loadstone it is an aftermarket screen reader. To function optimally, Trekker Breeze requires that routes be pre-recorded and, according to a review conducted by the American Foundation for the Blind, it requires a 30-minute warm-up time to properly orient itself. 
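To make the earlier point about converting GPS information into text-to-speech instructions concrete, the following Python sketch shows one simplified way such a conversion could work: given the user's current heading, a GPS fix and the next pre-recorded waypoint, it selects a "left", "right" or "forward" style instruction to be handed to a speech engine. This is a minimal illustration of the general approach, not code from Trekker Breeze or Loadstone; the coordinates, the thirty-degree tolerance and the ten-metre arrival radius are illustrative assumptions.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Approximate initial compass bearing, in degrees, from point 1 to point 2."""
    d_lon = math.radians(lon2 - lon1)
    p1, p2 = math.radians(lat1), math.radians(lat2)
    x = math.sin(d_lon) * math.cos(p2)
    y = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(d_lon)
    return math.degrees(math.atan2(x, y)) % 360

def distance_m(lat1, lon1, lat2, lon2):
    """Haversine distance in metres between two GPS fixes."""
    earth_radius = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * earth_radius * math.asin(math.sqrt(a))

def verbal_instruction(user_heading, user_pos, waypoint, arrival_radius_m=10):
    """Convert a GPS fix and the next pre-recorded waypoint into a spoken-style direction.

    The ten-metre arrival radius is an assumed value that mirrors the accuracy
    limit of consumer GPS: inside it, the system can only hand over to the
    user's own mobility skills.
    """
    if distance_m(*user_pos, *waypoint) <= arrival_radius_m:
        return "You are near the next point; proceed using your own orientation."
    offset = (bearing_deg(*user_pos, *waypoint) - user_heading + 540) % 360 - 180
    if abs(offset) < 30:  # roughly straight ahead
        return "Continue forward."
    if offset > 0:
        return "Turn right, then continue forward."
    return "Turn left, then continue forward."

# Hypothetical usage with illustrative coordinates; a real system would pass the
# returned string to a text-to-speech engine rather than printing it.
print(verbal_instruction(user_heading=90.0,
                         user_pos=(-37.8136, 144.9631),
                         waypoint=(-37.8134, 144.9640)))
```

Even in this toy form, the sketch makes visible why the final metres of a journey are problematic: once the distance to a waypoint falls within the error margin of the positioning system, the instruction can do little more than hand control back to the user's own mobility skills, a limitation taken up in the discussion of GPS accuracy that follows.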
Both Trekker Breeze and Loadstone allow users to create and share Points of Interest (POI) databases showing the location of various places along a given route. Non-standard or duplicated user-generated content in POI databases may, however, have a negative effect on usability (Ellis and Kent 2). Furthermore, GPS-based navigation systems are accurate to approximately ten metres, which means that users must rely on their own mobility skills when they are required to change direction or stop for traffic. This issue with GPS accuracy is more pronounced when a vision-impaired user is approaching a supermarket, where they are likely to encounter environmental hazards with greater frequency and both pedestrian and vehicular traffic in greater density. Here the relations between spaces defined and spaces poorly defined or undefined by the GPS device interact to produce the supermarket surrounds as a disabling space (Galloway).
Prototype Systems for Supermarket Navigation and Product Selection
In the discussion to follow, I look at two prototype systems using QR codes and RFID that are designed to be used in-store by vision-impaired shoppers. Shop Talk is a proof-of-concept system developed by researchers at Utah State University that uses synthetic verbal route directions to assist vision impaired shoppers with supermarket navigation, product search, and selection (Nicholson et al.). Its hardware consists of a portable computational unit, a numeric keypad, a wireless barcode scanner and base station, headphones for the user to receive the synthetic speech instructions, a USB hub to connect all the components, and a backpack to carry them (with the exception of the barcode scanner) which has been slightly modified with a plastic stabiliser to assist in correct positioning. Shop Talk represents the supermarket environment using two data structures. The first comprises two elements: a topological map of locomotor space that allows for directional labels of “left,” “right,” and “forward,” to be added to the supermarket floor plan; and, for navigation of haptic space, the supermarket inventory management system, which is used to create verbal descriptions of product information. The second data structure is a Barcode Connectivity Matrix (BCM), which associates each shelf barcode with several pieces of information such as aisle, aisle side, section, shelf, position, Universal Product Code (UPC) barcode, product description, and price. Nicholson et al. suggest that one of their “most immediate objectives for future work is to migrate the system to a more conventional mobile platform” such as a smart phone (see Mobile Shopping). The Personalisable Interactions with Resources on AMI-Enabled Mobile Dynamic Environments (PRIAmIDE) research group at the University of Deusto is also approaching Ambient Assisted Living (AAL) by exploring the smart phone’s sensing, communication, computing, and storage potential. As part of their work, the prototype system, BlindShopping, was developed to address the issue of assisted shopping using entirely off-the-shelf technology with minimal environmental adjustments to navigate the store and search, browse and select products (López-de-Ipiña et al. 34). BlindShopping’s architecture is based on three components. Firstly, a navigation system provides synthetic verbal instructions to the user via headphones connected to the smart phone, in order to guide them around the store. 
This requires an RFID reader to be attached to the tip of the user’s white cane and road-marking-like RFID tag lines to be distributed throughout the aisles. A smartphone application processes the RFID data that is received by the smart phone via Bluetooth, generating the verbal navigation commands as a result. Products are recognised by pointing a QR code reader enabled smart phone at an embossed code located on a shelf. The system is managed by a Rich Internet Application (RIA) interface, which operates in a Web browser, and is used to register the RFID tags situated in the aisles and the QR codes located on shelves (López-de-Ipiña et al. 37-38). A typical use-scenario for BlindShopping involves a user activating the system by tracing an “L” on the screen or issuing the “Location” voice command, which activates the supermarket navigation system which then asks the user to either touch an RFID floor marking with their cane or scan a QR code on a nearby shelf to orient the system. The application then asks the user to dictate the product or category of product that they wish to locate. The smart phone maintains a continuous Bluetooth connection with the RFID reader to keep track of user location at all times. By drawing a “P” or issuing the “Product” voice command, a user can switch the device into product recognition mode where the smart phone camera is pointed at an embossed QR code on a shelf to retrieve information about a product such as manufacturer, name, weight, and price, via synthetic speech (López-de-Ipiña et al. 38-39). A simplified sketch of this interaction pattern is given below. Despite both systems aiming to operate with as little environmental adjustment as possible, as well as to minimise the extent to which a supermarket would need to allocate infrastructural, administrative, and human resources to implementing assistive technologies for vision impaired shoppers, there will undoubtedly be significant establishment and maintenance costs associated with the adoption of production versions of systems resembling either prototype described in this paper. As both systems rely on data obtained from a server by invoking Web services, supermarkets would need to provide in-store WiFi. Further, both systems’ dependence on store inventory data would mean that commercial versions of either of these systems are likely to be supermarket specific or exclusive, given that there will be policies in place that forbid access by third parties to inventory systems, which contain pricing information. Secondly, an assumption in the design of both prototypes is that the shopping task ends with the user arriving at home; this overlooks the important task of being able to recognise products in order to put them away or to use at a later time. The BCM and QR product recognition components of both respective prototypic systems associate information with products in order to assist users in the product search and selection sub-tasks. However, information such as use-by dates, discount offers, country of manufacture, country of manufacturer’s origin, nutritional information, and the labelling of products as Halal, Kosher, containing alcohol, nuts, gluten, lactose, phenylalanine, and so on, create further challenges in how different data sources are managed within the devices’ software architecture. The reliance of both systems on existing smartphone technology is also problematic. Changes in the production and uptake of mobile communication devices, and the software that they operate on, occur rapidly. 
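As a rough illustration of the interaction pattern just described, the following Python sketch combines a Barcode Connectivity Matrix-style record (using the fields listed for Shop Talk above) with BlindShopping-like switching between a location command and a product-recognition command. It is a hypothetical sketch rather than code from either prototype: the product record, the shelf code, the command labels and the speak() stand-in for a text-to-speech engine are all invented for the example.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ShelfEntry:
    """One record in a Barcode Connectivity Matrix-style inventory (fields as listed above)."""
    aisle: int
    aisle_side: str       # "left" or "right"
    section: str
    shelf: int
    position: int
    upc: str
    description: str
    price: float

# Hypothetical inventory keyed by a shelf barcode / embossed QR payload.
BCM = {
    "SHELF-0420": ShelfEntry(aisle=4, aisle_side="left", section="breakfast foods",
                             shelf=2, position=7, upc="9300601234567",
                             description="Rolled oats, 750 g", price=3.20),
}

def speak(text: str) -> None:
    """Stand-in for a text-to-speech engine."""
    print(f"[TTS] {text}")

def handle_command(command: str, scanned_code: Optional[str] = None) -> None:
    """Switch between a location (navigation) mode and a product-recognition mode.

    'location' stands for tracing an "L" or saying "Location";
    'product' stands for tracing a "P" or saying "Product" and scanning a shelf code.
    """
    if command == "location":
        speak("Touch a floor tag with your cane or scan a nearby shelf code "
              "so the system can orient itself.")
    elif command == "product" and scanned_code:
        entry = BCM.get(scanned_code)
        if entry is None:
            speak("Product not found in the store inventory.")
        else:
            speak(f"{entry.description}, {entry.price:.2f} dollars, aisle {entry.aisle}, "
                  f"{entry.aisle_side} side, shelf {entry.shelf}, position {entry.position}.")
    else:
        speak("Command not recognised.")

handle_command("location")
handle_command("product", scanned_code="SHELF-0420")
```

Modest as it is, the sketch also makes concrete the data dependency noted above: every product announcement relies on access to a store-specific inventory, precisely the information that supermarkets are likely to fence off from third parties.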
Once a retail space has been fitted out with the instrumentation necessary to accommodate a particular system, that system is unlikely to be able to cater to the requirement for frequent upgrades, as built environments are less flexible in the upgrading of their technological infrastructure (Kellerman 148). This sets up a scenario where the supermarket may persist as a disabling space due to a gap between the functional capacities of applications designed for mobile communication devices and the environments in which they are to be used.
Lists and Disabling Spatial Practice
The development and provision of access to assistive technologies and the data they rely upon is a commercial issue (Ellis and Kent 7). The use of assistive technologies in supermarket-spaces that rely on the inter-functional coordination of multiple inventories may have the unintended effect of excluding people with disabilities from access to legitimate content (Ellis and Kent 7). With de Certeau, we can ask of supermarket-space “What spatial practices correspond, in the area where discipline is manipulated, to these apparatuses that produce a disciplinary space?” (96). In designing assistive technologies, such as those discussed in this paper, developers must strive to achieve integration across multiple data inventories. Software architectures must be optimised to overcome issues relating to intellectual property, cross-platform access, standardisation, fidelity, potential duplication, and mass-storage. This need for “cross sectioning,” however, “merely adds to the muddle” (Lefebvre 8). This is a predicament that only intensifies as space and objects in space become increasingly “representable” (Galloway), and as the impetus for the project of spatial politics for the vision impaired moves beyond representation to centre on access and meaning-making.
Conclusion
Supermarkets act as sites of hegemony, resistance, difference, and transformation, where the vision impaired and their allies resist the “repressive socialization of impaired bodies” through their own social movements relating to environmental accessibility and the technology-assisted spatial practice of shopping (Gleeson 129). It is undeniable that the prototype technologies described in this paper, and those like them, do indeed have a great deal of emancipatory potential. However, it should be understood that these devices produce representations of supermarket-space as a simulation within a framework that attempts to mimic the real, and these representations are pre-determined by the industrial, technological, and regulatory forces that govern their production (Lefebvre 8). Thus, the potential of assistive technologies is dependent upon a range of constraints relating to data accessibility, and the interaction of various kinds of lists across the geographic area that surrounds the supermarket, the locomotor, haptic, and search spaces of the supermarket, the home-space, and the internal spaces of a shopper’s imaginary. These interactions are important in contributing to the reproduction of disability in supermarkets through the use of assistive shopping technologies. The ways by which people make and read shopping lists complicate the relations between supermarket-space as location data and product inventories versus that which is intuited and experienced by a shopper (Sutherland). 
Not only should we be creating inventories of supermarket locomotor, haptic, and search spaces; the attention of developers working in this area of assistive technologies should also look beyond the challenges of spatial representation and move towards a focus on issues of interoperability and expanded access to spatial inventory databases and data within and beyond supermarket-space.
References
De Certeau, Michel. The Practice of Everyday Life. Berkeley: University of California Press, 1984. Print. De Souza e Silva, A. “From Cyber to Hybrid: Mobile Technologies As Interfaces of Hybrid Spaces.” Space and Culture 9.3 (2006): 261-78. Ellis, Katie, and Mike Kent. Disability and New Media. New York: Routledge, 2011. Farman, Jason. Mobile Interface Theory: Embodied Space and Locative Media. New York: Routledge, 2012. Galloway, Alexander. “Are Some Things Unrepresentable?” Theory, Culture and Society 28 (2011): 85-102. Gleeson, Brendan. Geographies of Disability. London: Routledge, 1999. Goggin, Gerard. Cell Phone Culture: Mobile Technology in Everyday Life. London: Routledge, 2006. Haber, Alex. “Mapping the Void in Perec’s Species of Spaces.” Tattered Fragments of the Map. Eds. Adam Katz and Brian Rosa. S.l.: Thelimitsoffun.org, 2009. Jolley, William M. When the Tide Comes in: Towards Accessible Telecommunications for People with Disabilities in Australia. Sydney: Human Rights and Equal Opportunity Commission, 2003. Keaggy, Bill. Milk Eggs Vodka: Grocery Lists Lost and Found. Cincinnati, Ohio: HOW Books, 2007. Kellerman, Aharon. Personal Mobilities. London: Routledge, 2006. Kleege, Georgia. “Blindness and Visual Culture: An Eyewitness Account.” The Disability Studies Reader. 2nd edition. Ed. Lennard J. Davis. New York: Routledge, 2006. 391-98. Lefebvre, Henri. The Production of Space. Oxford, UK: Blackwell, 1991. López-de-Ipiña, Diego, Tania Lorido, and Unai López. “Indoor Navigation and Product Recognition for Blind People Assisted Shopping.” Ambient Assisted Living. Eds. J. Bravo, R. Hervás, and V. Villarreal. Berlin: Springer-Verlag, 2011. 25-32. May, Michael, and Charles LaPierre. “Accessible Global Position System (GPS) and Related Orientation Technologies.” Assistive Technology for Visually Impaired and Blind People. Eds. Marion A. Hersh and Michael A. Johnson. London: Springer-Verlag, 2008. 261-88. Nicholson, John, Vladimir Kulyukin, and Daniel Coster. “Shoptalk: Independent Blind Shopping Through Verbal Route Directions and Barcode Scans.” The Open Rehabilitation Journal 2.1 (2009): 11-23. Perec, Georges. Species of Spaces and Other Pieces. Trans. and Ed. John Sturrock. London: Penguin Books, 1997. Schillmeier, Michael W. J. Rethinking Disability: Bodies, Senses, and Things. New York: Routledge, 2010. Sutherland, I. “Mobile Media and the Socio-Technical Protocols of the Supermarket.” Australian Journal of Communication 36.1 (2009): 73-84.
APA, Harvard, Vancouver, ISO, and other styles
43

Harju, Anu A. "A Relational Approach to the Digital Self: Plus-Sized Bloggers and the Double-Edged Sword of Market-Compromised Identity." M/C Journal 21, no. 2 (April 25, 2018). http://dx.doi.org/10.5204/mcj.1385.

Full text
Abstract:
Digital Articulations of the Relational Self
Identity continues to be one of the enduring topics in digital media research. This interdisciplinary take on the digital self extends the discussion in my dissertation (Harju) of contemporary articulations of the relational self in the digital context by focusing on the potentiality of the evolving self. I adopt a relational approach to being (Gergen Relational) where the self is seen as always already a product of relations, borne out of them as well as dependent on them (Gergen Realities). The self as fluid and processual is reflective of our liquid times (Bauman), of globalisation and digitalisation where we are surrounded by global flows of images, taste and trends (Appadurai). The view of the self as a process underlies future-oriented action, emphasising the becoming of the self. The process of becoming implies the potential of the self that can be narrated into existence. The relational view of the self, perhaps indirectly, also posits the self as a temporal interface between the present and the future, as a site where change unfolds. It is therefore important to critically reflect on the kinds of potentialities we can discover and engage with and the kinds of futures (Berardi) we can construct. Extending Gergen’s conceptualisation of the kinds of relations to include non-human actors (e.g. media technologies) as well as socio-cultural and economic forces allows me to explore the conflicting forces shaping the self, for example, the influence the market exerts on self-construction together with the media logics that guide digital self-production practices. Because of the market’s dominant position in today’s imagination, I seek to explore the relational processes of inclusion and exclusion that position individuals relative to, as well as in terms of, the market as more or less included or excluded subjects (Harju). The digital environment is a unique setting for identity projects as it provides spatial and temporal flexibility, the possibility for curation, consideration and reconstruction. At the same time, it lacks a certain historicity; as Smith and Watson note, the self constructed online lacks the narrative beginning and end that in “analog life writing [are] distinguishable by birth or death” (90). While it is tempting therefore to assume that self-construction online is free from all constraints, this is not necessarily so, as the self is nevertheless produced within the wider socio-cultural context in which it also needs to “make sense,” these conditions persisting across these modes of being. Self as a relational process inevitably connects what for analytical purposes may be called online and offline social spaces, as there is a processual linkage, a relational flow, that connects any online entity to a form outside the digital realm. Media institutions and the process of mediation (Rak Boom!) shape autobiographical practices (Poletti), and the notion of automedia was introduced as a way to incorporate images, text and technologies as constitutive in autobiographic accounts (Smith and Watson) and help see online life as life instead of mere representation (Rak "Life"). The automedial approach rejects essentialist accounts of the self, assuming rather that the self is called into being and constructed in and by the materiality of the medium, in the process of mediation. 
This furthermore entails a move beyond the literary in terms of autobiographies toward consideration of the enabling and restricting roles of media technologies in the kinds of selves that can be constructed (Maguire 74). Viewing the self as always already relationally emergent (Gergen Relational) and combining this view with the framework of automedial construction of the self allows us to bring into the examination of the digital self the socio-cultural and economic forces and the diverse discourses meeting at the site of the self. Importantly, the relational approach prioritises relations, and therefore the self is constituted in a relational flow in a process of becoming, placing importance on the kinds of relational configurations where the becoming of the self takes place. This paper explores how the digital self is forged under the joint pressures of consumerist logic and media logics in contemporary society, where “being a consumer” is the predominant subjectivity (Firat; Bauman). I draw on the sociology of consumption to examine the relational tensions shaping the identity construction of marginalised individuals. To empirically illustrate the discussion I draw on a previous study (Harju and Huovinen) on plus-sized fashion blogging and examine fatshion blogging as a form of automedia (see also Rak "Digital" on blogs).
Plus-Sized Fashion Bloggers and Market-Mediated Identity
Plus-sized fashion bloggers, “fatshionistas,” actively seek social and cultural inclusion by way of fashion. As a collective activity, plus-sized fashion blogging is more than diary writing (see also Rak "Digital") but also more than fashion blogging: the blogs constitute “networked, collective and active consumer resistance,” illuminating “marginalised consumers’ identity work at the intersection of commercial culture and the counter-representations of traditional femininity” (Harju and Huovinen 1603). Blogging resistant or subversive identities into being is thus also a form of activism and political action (Connell). As a form of automedia and autobiographical production, fatshion blogging has as its agenda the construction of alternative subjectivities and the carving out of a legitimate social space in the “fatosphere,” “a loosely interconnected network of online resources aimed at creating a safe space where individuals can counter fat prejudice, resist misconceptions of fat, engage in communal experiences and promote positive understandings of fat” (Gurrieri and Cherrier 279). Fashion blogs are rich in self-images portraying “fat fashion”: thus, not only fashion as a physical medium and the images representative of such materiality, but also the body acts as a medium. Plus-sized fashion bloggers feel marginalised as women due to body size but they also face rejection in and by the market. Normalised discourses around fashion and the female body as one that is fashioned render fashion blogging an avenue to normativity (Berlant): the symbolic power of taste (Bourdieu) embedded in fashion is harnessed to construct the desired self and to mobilise discourses of acceptable subjectivity. However, it is these very discourses that also construct the “state of being fat” as deviant and stigmatise the larger body as something falling outside the definition of good taste (LeBesco). The description on the Fatshionista! Livejournal page summarises an agenda that, despite the focus on fashion, carries political undertones: Welcome, fatshionistas! 
We are a diverse fat-positive, anti-racist, disabled-friendly, trans-inclusive, queer-flavored, non-gender-specific community, open to everyone. Here we will discuss the ins and outs of fat fashions, seriously and stupidly--but above all--standing tall, and with panache. We fatshionistas are self-accepting despite The Man's Saipan-made boot at our chubby, elegant throats. We are silly, and serious, and want shit to fit. In a previous study (Harju and Huovinen) on the conflicted identity construction of plus-sized fashion bloggers (see also Gurrieri and Cherrier; Limatius) we found that the complex performative tactics used in constructing the plus-sized blogger identity both resisted and embraced the market: the bloggers seek similarity via appeals to normativity (see also Coleman and Figueroa) yet underline difference by rejecting the demands of normative ideals. The bloggers’ similarity-seeking tactics (Harju and Huovinen) emphasise shared commonalities with the feminine ideals (ultra-femininity, posing and girliness) and on the face of it contribute to reproducing not only the gendered self but also the market-compromised self that endorses a very specific type of femininity. The plus-sized blogger identity, although inherently subversive as it seeks to challenge and expand the repertoire and imagery available to women, nevertheless seeks inclusion by way of the market, the very same market that rejected them as “consumers”. This relational tension is negotiated on the blogs, and resistance emerges through articulating difference. Thus, the bloggers’ diversity-asserting tactics (Harju and Huovinen) add to the complexity of the identity project and constitute explicit resistance, giving rise to a resistant consumer identity. Bodily differences are highlighted (e.g. the bigger body is embraced, skin and body revealed rather than concealed) as the bloggers take control of how they are represented, using media to challenge the market that defines acceptable femininity in ways that ostracise fat women. The contradictory processes at the site of the self give rise to relational tension (Gergen Relational), and blogging offers a site for collective negotiation. For the plus-sized bloggers, to be included means no longer occupying the margins: self-images displaying the fat body contribute to corporeal empowerment (Harju and Huovinen) where flaunting the fat body helps construct the identity of a “fatshionista” blogger liberated from the shame and stigma attached to the bigger body: I decided to start this blog after being a regular poster on the Fatshionista LiveJournal community. Finding that community changed my whole outlook on life, I was fat (still am) & unhappy with myself (not so much now). I was amazed to find a place where fat people celebrated their bodies, instead of being ashamed. (Harju and Huovinen 1614). The fatshion blog as a form of automedia is driven by the desire for change in the social circumstances where self-construction can take place, toward the future potential of the self, by diversifying acceptable subject positions and constructing novel identification points for fat women. 
The means are limited, however, and despite the explicit agenda of promoting body positivity, the collective aspirations are rooted in consumption and realised in the realm of fashion and the market. The question, therefore, is whether resistance outside the market is possible when so much of our social existence is bound up with the market and consumerist logic, or whether the desire for inclusion, manifest in aspirational normativity (Berlant) with the promise of social acceptance linked to a normative way of life, necessitates market participation and the adoption of consumer subjectivity. Consumer subjectivity offers normative intelligibility in the various expressions of identity, providing tools for the becoming of an included subject. However, it raises the question of whether resistant identity can occur outside the market and outside the logic of consumption when it seeks social inclusion. Market-compromised identity is a double-edged sword; while participation via the market may help construct a self that is intelligible, market participation also disciplines the subject to take part in a certain way, of becoming a certain type of consuming subject, all the time harnessing the self for the benefit of the market. With no beginning or an end, the digital self is in constant processual flux, responding to conflicting relational input. The market adds to this complexity as “the neoliberal subject is compelled to participate in society as both an enthusiastic consumer and as a self-controlled subject” (Guthman 193).
Social Imaginaries as Horizons of Constrained Possibility
Identity possibilities are inscribed in the popular imagination, and the concept of social imaginary (Castoriadis; Taylor) provides a useful lens through which to examine articulations of the digital self. Social imaginaries are not unitary constructions and different imaginaries are evoked in different contexts. Likewise, although often shared, they are nevertheless unique to the individual, presenting as a terrain of conceivable action befitting the individual engaged in the act of imagining. In our socially saturated times, relational input is greater than ever (Gergen Relational). Imagining now draws on a wider range of identity possibilities, the ways of imagining the self being reflective of the values of any given time. Both consumption and media infiltrate the social imagination, which today is not only compromised by market logic but has become constitutive of a terrain where the parameters for inclusion, change and resistance are limited. Practices of performing desirable femininity normalise a certain way of being and strike a constitutive boundary between what is desirable and what is not. Plus-sized fashion blogging makes visible the lack of diversity in the popular imagination (Harju and Huovinen) while fatshion blogging also reveals what possibilities there are for inclusion (i.e. via consumption and by mobilising normative femininity) and where the boundaries of identity work lie (see also Connell). The fat body is subjected to discipline (Giovanelli and Ostertag; LeBesco) and “becoming fat” is regularly viewed as a lack of control. Not limited to fat subjects, the prevalent discourses of the self emphasise control and responsibility for the self (rather than community), often masquerading as self-approval. The same discourses, however, highlight work on the self (McRobbie) and the cultivation of the self by various means of self-management or self-tracking (Rettberg). 
Such self-disciplining carries the implication of the self as somewhat lacking (Skeggs Imagining, Exchange), of being in some way unintelligible (Butler). In plus-sized blogging, the fat body needs to be subjected to fashioning to become intelligible within the dominant discourses in the public sphere. The fatshionista community is a politically oriented movement that rejects the normative demands governing the body, yet regimes of ‘self-improvement’ are evident on the individual blogs displaying the fashioned body, befitting the normative understandings of the female subject as sexualised, as something to be consumed (see also Maguire). Contrary to the discourses of fat female subjects where the dimension of sexuality is largely absent, this is also linked to the problematics related to the visibility of female subjects. The negotiation of relational tension is manifest as negotiation of competing discourses where bloggers adopt the hegemonic visual discourses to subvert the stigmatising discourses that construct the fat female subject as lacking. Utilising media logics (e.g. micro-celebrity) to gain visibility as fat subjects is an important aspect of the fatshionistas’ automedial self-construction. I argue that social imaginaries that feed into identity construction and offer pathways to normalcy cannot be seen simply and only as enabling; instead they construct horizons of constrained possibility (Harju), thereby imposing limitations on the kinds of acceptable identity positions marginalised individuals can seek. Digital productions form chains of symbolic entities and acquire their meaning by being interconnected as well as by being connected to popular social imaginaries. Thus, the narrative construction of the self in the digital production, and the recognition of the self in the becoming, is the very utility of the digital object. This is because through the digital artefact the individual becomes relationally linked to chains of significations (Harju). Through such linkages and subverted discourses, the disenfranchised may become enfranchised.
Toward Horizons of Potentiality and Possibility
The relational self is a process under continual change and thus always becoming. This approach opens up new avenues for exploring the complexities of the digital self that is never ‘just’ a reproduction. Automedia entails both the media about the maker (the subject) and the process of mediating the self (Rak "Life" 161). The relational approach helps overcome the binary distinction in modes of being (online versus offline), instead bringing into focus the relational flow between various articulations of the self in different relational scenarios. Then perhaps the question is not “what kinds of selves become or are borne digital” (Rak "Life" 177), but what kinds of selves are possible in the first place under the current conditions that include the digital as one mode of being, mediating the becoming, with the digital as one relational space of articulation of the self among many. Where in On ‘Being’ Online I discussed the constraining effects of market ideology embedded in social imaginaries on how the self can be articulated, Berardi in his book Futurability offers a more optimistic take, noting how the different paths we take allow different possibilities to be realised, producing different social realities in the future. The future is not a linear development from the present; rather, the present harbours the potential for multiple futures. 
Berardi notes how the “[f]uture is not prescribed but inscribed, so it must be selected and extracted through interpretation” (236). Despite the dominant code, which in our times is consumption (Baudrillard), hindering the process of interpretation, there is hope in Berardi’s notion of inscribed possibilities for resistance and change, for different ways of being and becoming. This is the space the plus-sized fashion bloggers occupy as they grasp the potentialities in the present and construct new ways of being that unfold as different social realities in the future. In blogging, platform affordances together with other media technologies are intertwined with future-oriented life narration in the construction of the fatshionista identity, which involves retrospective interpretation of life experiences as a fat woman as well as self-liberation in the form of conscious rejection of the dominant discourses around fat female subjects. The digital self is able to negotiate such diverse, even conflicting forces in the active shaping of the social reality of its existence. Blogging as automedia can constitute an act of carving out alternative futures not limited to the digital realm. Perhaps when freed from aspirational normativity (Berlant) we are able to recover hope in the inscribed possibilities that might also hide the potential for a transition from a subjectivity enslaved to the market logic (see Firat Violence) to a self actively engaged in changing the social circumstances and the conditions in which subjectivity is construed (see Firat and Dholakia). In the becoming, the digital self occupies a place between the present and the future, enmeshed in various discourses of aspiration, mediated by material practices of consumption and articulated within the limits of current media practices (Harju). A self in the making, it is variably responsive to the multitude of relational forces continually flowing at the site of it. Although the plus-sized bloggers’ identity work can be seen as an attempt to transform or discipline the self into something more intelligible that better fits the existing narratives of the self, they are also adding new narratives to the repertoire. If we adopt the view of self-conception as discourse about the self, that is, “the performance of languages available in the public sphere” (Gergen, Realities 185), whereby the self is made culturally intelligible by way of narration within ongoing relationships, we can see how the existing cultural discourses of the self are not only inclusive, but also alienating and othering. There is a need for identity politics that encourage the production of alternative discourses of the self for more inclusive practices of imagining. Blogging as automedia is not only a way of making visible that which occupies the margins; it also actively contributes to diversifying identification points in the public sphere that are not limited to the digital, but have implications regarding the production of social realities, regardless of the mode in which these are experienced.
References
Appadurai, Arjun. Modernity at Large: Cultural Dimensions of Globalization. Minneapolis: University of Minnesota P, 1996. Baudrillard, Jean. The Consumer Society: Myths and Structures. Trans. C. Turner. London: Sage, 1998 [1970]. Bauman, Zygmunt. “The Self in Consumer Society.” The Hedgehog Review: Critical Reflections on Contemporary Culture 1 (1999): 35-40. ———. Liquid Modernity. Cambridge: Polity, 2000. ———. 
“Consuming Life.” Journal of Consumer Culture 1 (2001): 9–29. ———, and Benedetto Vecchi. Identity: Conversations with Benedetto Vecchi. Cambridge: Polity, 2004. Berlant, Lauren. “Nearly Utopian, Nearly Normal: Post-Fordist Affect in La Promesse and Rosetta.” Public Culture 19 (2007): 273-301. Bourdieu, Pierre. Distinction: A Social Critique of the Judgement of Taste. London: Routledge, 1986. Butler, Judith. Gender Trouble: Feminism and the Subversion of Identity. London: Routledge, 2006 [1990]. Castoriadis, Cornelius. “Radical Imagination and the Social Instituting Imaginary.” Rethinking Imagination: Culture and Creativity. Eds. G. Robinson and J.F. Rundell. Abingdon: Routledge, 1994. 136-154. Coleman, Rebecca, and Mónica Moreno Figueroa. “Past and Future Perfect? Beauty, Affect and Hope.” Journal for Cultural Research 14 (2010): 357-373. Connell, Catherine. “Fashionable Resistance: Queer Fa(t)shion Blogging as Counterdiscourse.” Women’s Studies Quarterly 41 (2013): 209-224. Firat, Fuat A. “The Consumer in Postmodernity.” NA - Advances in Consumer Research 18 (1991): 70-76. ———. “Violence in/by the Market.” Journal of Marketing Management, 2018. Firat, Fuat A., and Nikhilesh Dholakia. “From Consumer to Construer: Travels in Human Subjectivity.” Journal of Consumer Culture 17 (2016): 504-522. Berardi, Franco “Bifo”. Futurability: The Age of Impotence and the Horizon of Possibility. London: Verso, 2017. Gergen, Kenneth J. Realities and Relationships: Soundings in Social Construction. Cambridge: Harvard University P, 1994. ———. Relational Being: Beyond Self and Community. New York: Oxford University P, 2009. Giovanelli, Dina, and Stephen Ostertag. “Controlling the Body: Media Representations, Body Size, and Self-Discipline.” Fat Studies Reader. Eds. E. Rothblum and S. Solovay. New York: New York University P, 2009. 289-296. Gurrieri, Lauren, and Hélène Cherrier. “Queering Beauty: Fatshionistas in the Fatosphere.” Qualitative Market Research: An International Journal 16 (2013): 276-295. Guthman, Julie. “Neoliberalism and the Constitution of Contemporary Bodies.” Fat Studies Reader. Eds. E. Rothblum and S. Solovay. New York: New York University P, 2009. 187-196. Harju, Anu A., and Annamari Huovinen. “Fashionably Voluptuous: Normative Femininity and Resistant Performative Tactics in Fatshion Blogs.” Journal of Marketing Management 31 (2015): 1602–1625. Harju, Anu A. On ‘Being’ Online: Insights on Contemporary Articulations of the Relational Self. Dissertation. Helsinki: Aalto University, 2017. <http://urn.fi/URN:ISBN:978-952-60-7434-4>. LeBesco, Kathleen. Revolting Bodies? The Struggle to Redefine Fat Identity. U of Massachusetts P, 2004. Limatius, Hanna. “‘There Really Is Nothing like Pouring Your Heart Out to a Fellow Fat Chick’: Constructing a Body Positive Blogger Identity in Plus-Size Fashion Blogs.” Token: A Journal of English Linguistics 6 (2017). Maguire, Emma. “Self-Branding, Hotness, and Girlhood in the Video Blogs of Jenna Marbles.” Biography 38.1 (2015): 72-86. McRobbie, Angela. “Post-Feminism and Popular Culture.” Feminist Media Studies 4 (2004): 255-264. Poletti, Anna. “What's Next? Mediation.” a/b: Auto/Biography Studies 32 (2017): 263-266. Rak, Julie. “The Digital Queer: Weblogs and Internet Identity.” Biography 28 (2005): 166-182. ———. Boom! Manufacturing Memoir for the Popular Market. Waterloo: Wilfrid Laurier UP, 2013. ———. “Life Writing versus Automedia: The Sims 3 Game as a Life Lab.” Biography 38 (2015): 155-180. Rettberg, Jill W. 
“Self-Representation in Social Media.” Sage Handbook of Social Media. Eds. J. Burgess, A. Marwick, and T. Poell, 2017. 5 Feb. 2018 <http://hdl.handle.net/1956/13073>. Skeggs, Beverley. “Exchange, Value and Affect: Bourdieu and ‘the Self’.” The Sociological Review 52 (2004): 75-95. ———. “Imagining Personhood Differently: Person Value and Autonomist Working-Class Value Practices.” The Sociological Review 59 (2011): 496-513. Smith, Sidonie, and Julia Watson. “Virtually Me.” Identity Technologies: Constructing the Self Online. Eds. A. Poletti and J. Rak. University of Wisconsin Press, 2014. 70-95. Taylor, Charles. “Modern Social Imaginaries.” Public Culture 14 (2002): 91-124.
APA, Harvard, Vancouver, ISO, and other styles
44

Goggin, Gerard. "Broadband." M/C Journal 6, no. 4 (August 1, 2003). http://dx.doi.org/10.5204/mcj.2219.

Full text
Abstract:
Connecting
I’ve moved house on the weekend, closer to the centre of an Australian capital city. I had recently signed up for broadband, with a major Australian Internet company (my first contact, cf. Turner). Now I am the proud owner of a larger modem than I have ever owned: a white cable modem. I gaze out into our new street: two thick black cables cosseted in silver wire. I am relieved. My new home is located in one of those streets, double-cabled by Telstra and Optus in the data-rush of the mid-1990s. Otherwise, I’d be moth-balling the cable modem, and the thrill of my data percolating down coaxial cable. And it would be off to the computer supermarket to buy an ADSL modem, then to pick a provider, to squeeze some twenty-first century connectivity out of old copper (the phone network our grandparents and great-grandparents built). If I still lived in the country, or on the outskirts of the city, or anywhere else more than four kilometres from the phone exchange, and somewhere that cable pay TV will never reach, it would be a dish for me — satellite. Our digital lives are premised upon infrastructure, the networks through which we shape what we do, fashion the meanings of our customs and practices, and exchange signs with others. Infrastructure is not simply the material or the technical (Lamberton), but it is the dense, fibrous knotting together of social visions, cultural resources, individual desires, and connections. No more can one easily distinguish between ‘society’ and ‘technology’, ‘carriage’ and ‘content’, ‘base’ and ‘superstructure’, or ‘infrastructure’ and ‘applications’ (or ‘services’ or ‘content’). To understand telecommunications in action, or the vectors of fibre, we need to consider the long and heterogeneous list of links among different human and non-human actors — the long networks, to take Bruno Latour’s evocative concept, that confect our broadband networks (Latour). The co-ordinates of our infrastructure still build on a century-long history of telecommunications networks, on the nineteenth-century centrality of telegraphy preceding this, and on the histories of the public and private so inscribed. Yet we are in the midst of a long, slow dismantling of the posts-telegraph-telephone (PTT) model of the monopoly carrier for each nation that dominated the twentieth century, with its deep colonial foundations. Instead, our New World Information and Communication Order is not the decolonising UNESCO vision of the late 1970s and early 1980s (MacBride, Maitland). Rather it is the neoliberal, free trade, market access model, its symbol the 1984 US judicial decision to require the break-up of AT&T and the UK legislation in the same year that underpinned the Thatcherite twin move to privatize British Telecom and introduce telecommunications competition. Between 1984 and 1999, 110 telecommunications companies were privatized, and the ‘acquisition of privatized PTOs [public telecommunications operators] by European and American operators does follow colonial lines’ (Winseck 396; see also Mody, Bauer & Straubhaar). The competitive market has now been uneasily installed as the paradigm for convergent communications networks, not least with the World Trade Organisation’s 1994 General Agreement on Trade in Services and Annex on Telecommunications. As the citizen is recast as consumer and customer (Goggin, ‘Citizens and Beyond’), we rethink our cultural and political axioms as well as the axes that orient our understandings in this area. 
Information might travel close to the speed of light, and we might fantasise about optical fibre to the home (or pillow), but our terrain, our band where the struggle lies today, is narrower than we wish. Begging for broadband, it seems, is a long way from warchalking for WiFi. Policy Circuits The dreary everyday business of getting connected plugs the individual netizen into a tangled mess of policy circuits, as much as tricky network negotiations. Broadband in mid-2003 in Australia is a curious chimera, welded together from a patchwork of technologies, old and newer communications industries, emerging economies and patterns of use. Broadband conjures up grander visions, however, of communication and cultural cornucopia. Broadband is high-speed, high-bandwidth, ‘always-on’, networked communications. People can send and receive video, engage in multimedia exchanges of all sorts, make the most of online education, realise the vision of home-based work and trading, have access to telemedicine, and entertainment. Broadband really entered the lexicon with the mass takeup of the Internet in the early to mid-1990s, and with the debates about something called the ‘information superhighway’. The rise of the Internet, the deregulation of telecommunications, and the involuted convergence of communications and media technologies saw broadband positioned at the centre of policy debates nearly a decade ago. In 1993-1994, Australia had its Broadband Services Expert Group (BSEG), established by the then Labor government. The BSEG was charged with inquiring into ‘issues relating to the delivery of broadband services to homes, schools and businesses’. Stung by criticisms of elite composition (a narrow membership, with only one woman among its twelve members, and no consumer or citizen group representation), the BSEG was prompted into wider public discussion and consultation (Goggin & Newell). The then Bureau of Transport and Communications Economics (BTCE), since transmogrified into the Communications Research Unit of the Department of Communications, Information Technology and the Arts (DCITA), conducted its large-scale Communications Futures Project (BTCE and Luck). The BSEG Final report posed the question starkly: As a society we have choices to make. If we ignore the opportunities we run the risk of being left behind as other countries introduce new services and make themselves more competitive: we will become consumers of other countries’ content, culture and technologies rather than our own. Or we could adopt new technologies at any cost…This report puts forward a different approach, one based on developing a new, user-oriented strategy for communications. The emphasis will be on communication among people... (BSEG v) The BSEG proposed a ‘National Strategy for New Communications Networks’ based on three aspects: education and community access, industry development, and the role of government (BSEG x). Ironically, while the nation, or at least its policy elites, pondered the weighty question of broadband, Australia’s two largest telcos were doing it. The commercial decision of Telstra/Foxtel and Optus Vision, and their various television partners, was to nail their colours (black) to the mast, or rather telegraph pole, and to lay cable in the major capital cities. In fact, they duplicated the infrastructure in cities such as Sydney and Melbourne, then deciding it would not be profitable to cable up even regional centres, let alone small country towns or settlements. 
As Terry Flew and Christina Spurgeon observe: This wasteful duplication contrasted with many other parts of the country that would never have access to this infrastructure, or to the social and economic benefits that it was perceived to deliver. (Flew & Spurgeon 72) The implications of this decision for Australia’s telecommunications and television were profound, but there was little, if any, public input into this. Then Minister Michael Lee was very proud of his anti-siphoning list of programs, such as national sporting events, that would remain on free-to-air television rather than screen on pay, but was unwilling, or unable, to develop policy on broadband and pay TV cable infrastructure (on the ironies of Australia’s television history, see Given’s masterly account). During this period also, it may be remembered, Australia’s Internet was being passed into private hands, with the tendering out of AARNET (see Spurgeon for discussion). No such national strategy on broadband really emerged in the intervening years, nor has the market provided integrated, accessible broadband services. In 1997, landmark telecommunications legislation was enacted that provided a comprehensive framework for competition in telecommunications, as well as consolidating and extending consumer protection, universal service, customer service standards, and other reforms (CLC). Carrier and reseller competition had commenced in 1991, and the 1997 legislation gave it further impetus. Effective competition is now well established in long distance telephone markets, and in mobiles. Rivalrous competition exists in the market for local-call services, though viable alternatives to Telstra’s dominance are still few (Fels). Broadband too is an area where there is symbolic rivalry rather than effective competition. This is most visible in advertised ADSL offerings in large cities, yet most of the infrastructure for these services consists of Telstra’s copper, fixed-line network. Facilities-based duopoly competition exists principally where Telstra/Foxtel and Optus cable networks have been laid, though there are quite a number of ventures underway by regional telcos, power companies, and, most substantial perhaps, the ACT government’s TransACT broadband network. Policymakers and industry have been greatly concerned about what they see as slow takeup of broadband, compared to other countries, and about barriers to broadband competition and access to ‘bottleneck’ facilities (such as Telstra or Optus’s networks) by potential competitors. The government has alternated between trying to talk up broadband benefits and rates of take-up and recognising the real difficulties Australia faces as a large country with a relatively small and dispersed population. In March 2003, Minister Alston directed the ACCC to implement new monitoring and reporting arrangements on competition in the broadband industry. A key site for discussion of these matters has been the competition policy institution, the Australian Competition and Consumer Commission, and its various inquiries, reports, and considerations (consult ACCC’s telecommunications homepage at http://www.accc.gov.au/telco/fs-telecom.htm). Another key site has been the Productivity Commission (http://www.pc.gov.au), while a third is the National Office on the Information Economy (NOIE - http://www.noie.gov.au/projects/access/access/broadband1.htm). Others have questioned whether even the most perfectly competitive market in broadband will actually provide access to citizens and consumers. 
A great deal of work on this issue has been undertaken by DCITA, NOIE, the regulators, and industry bodies, not to mention consumer and public interest groups. Since 1997, there have been a number of governmental inquiries undertaken or in progress concerning the takeup of broadband and networked new media (for example, a House of Representatives Wireless Broadband Inquiry), as well as important inquiries into the still most strategically important of Australia’s companies in this area, Telstra. Much of this effort on an ersatz broadband policy has been piecemeal and fragmented. There are fundamental difficulties with the large size of the Australian continent and its harsh terrain, the small size of the Australian market, the number of providers, and the dominant position effectively still held by Telstra, as well as Singtel Optus (Optus’s previous overseas investors included Cable & Wireless and Bell South), and the larger telecommunications and Internet companies (such as Ozemail). Many consumers living in metropolitan Australia still face real difficulties in realising the slogan ‘bandwidth for all’, but the situation in parts of rural Australia is far worse. Satellite ‘broadband’ solutions are available, through Telstra Countrywide or other providers, but these offer limited two-way interactivity. Data can be received at reasonable speeds (though at far lower data rates than how ‘broadband’ used to be defined), but can only be sent at far slower rates (Goggin, Rural Communities Online). The cultural implications of these digital constraints may well be considerable. Computer gamers, for instance, are frustrated by slow return paths. In this light, the final report of the January 2003 Broadband Advisory Group (BAG) is very timely. The BAG report opens with a broadband rhapsody: Broadband communications technologies can deliver substantial economic and social benefits to Australia…As well as producing productivity gains in traditional and new industries, advanced connectivity can enrich community life, particularly in rural and regional areas. It provides the basis for integration of remote communities into national economic, cultural and social life. (BAG 1, 7) Its prescriptions include: Australia will be a world leader in the availability and effective use of broadband...and to capture the economic and social benefits of broadband connectivity...Broadband should be available to all Australians at fair and reasonable prices…Market arrangements should be pro-competitive and encourage investment...The Government should adopt a National Broadband Strategy (BAG 1) And, like its predecessor nine years earlier, the BAG report does make reference to a national broadband strategy aiming to maximise “choice in work and recreation activities available to all Australians independent of location, background, age or interests” (17). However, the idea of a national broadband strategy is not something the BAG really comes to grips with. The final report is keen on encouraging broadband adoption, but not explicit on how barriers to broadband can be addressed. Perhaps this is not surprising given that the membership of the BAG, dominated by representatives of large corporations and senior bureaucrats was even less representative than its BSEG predecessor. Some months after the BAG report, the Federal government did declare a broadband strategy. 
It did so, intriguingly enough, under the rubric of its response to the Regional Telecommunications Inquiry report (Estens), the second inquiry responsible for reassuring citizens nervous about the full-privatisation of Telstra (the first inquiry being Besley). The government’s grand $142.8 million National Broadband Strategy focusses on the ‘broadband needs of regional Australians, in partnership with all levels of government’ (Alston, ‘National Broadband Strategy’). Among other things, the government claims that the Strategy will result in “improved outcomes in terms of services and prices for regional broadband access; [and] the development of national broadband infrastructure assets.” (Alston, ‘National Broadband Strategy’) At the same time, the government announced an overall response to the Estens Inquiry, with specific safeguards for Telstra’s role in regional communications — a preliminary to the full Telstra sale (Alston, ‘Future Proofing’). Less publicised was the government’s further initiative in indigenous telecommunications, complementing its Telecommunications Action Plan for Remote Indigenous Communities (DCITA). Indigenous people, it can be argued, were never really contemplated as citizens with the ken of the universal service policy taken to underpin the twentieth-century government monopoly PTT project. In Australia during the deregulatory and re-regulatory 1990s, there was a great reluctance on the part of Labor and Coalition Federal governments, Telstra and other industry participants, even to research issues of access to and use of telecommunications by indigenous communicators. Telstra, and to a lesser extent Optus (who had purchased AUSSAT as part of their licence arrangements), shrouded the issue of indigenous communications in mystery that policymakers were very reluctant to uncover, let alone systematically address. Then regulator, the Australian Telecommunications Authority (AUSTEL), had raised grave concerns about indigenous telecommunications access in its 1991 Rural Communications inquiry. However, there was no government consideration of, nor research upon, these issues until Alston commissioned a study in 2001 — the basis for the TAPRIC strategy (DCITA). The elision of indigenous telecommunications from mainstream industry and government policy is all the more puzzling, if one considers the extraordinarily varied and significant experiments by indigenous Australians in telecommunications and Internet (not least in the early work of the Tanami community, made famous in media and cultural studies by the writings of anthropologist Eric Michaels). While the government’s mid-2003 moves on a ‘National Broadband Strategy’ attend to some details of the broadband predicament, they fall well short of an integrated framework that grasps the shortcomings of the neoliberal communications model. The funding offered is a token amount. The view from the seat of government is a glance from the rear-view mirror: taking a snapshot of rural communications in the years 2000-2002 and projecting this tableau into a safety-net ‘future proofing’ for the inevitable turning away of a fully-privately-owned Telstra from its previously universal, ‘carrier of last resort’ responsibilities. In this aetiolated, residualist policy gaze, citizens remain constructed as consumers in a very narrow sense in this incremental, quietist version of state securing of market arrangements. 
What is missing is any more expansive notion of citizens, their varied needs, expectations, uses, and cultural imaginings of ‘always on’ broadband networks. Hybrid Networks “Most people on earth will eventually have access to networks that are all switched, interactive, and broadband”, wrote Frances Cairncross in 1998. ‘Eventually’ is a very appropriate word to describe the parlous state of broadband technology implementation. Broadband is in a slow state of evolution and invention. The story of broadband so far underscores the predicament for Australian access to bandwidth, when we lack any comprehensive, integrated, effective, and fair policy in communications and information technology. We have only begun to experiment with broadband technologies and understand their evolving uses, cultural forms, and the sense in which they rework us as subjects. Our communications networks are not superhighways, to invoke an enduring artefact from an older technology. Nor any longer are they a single ‘public’ switched telecommunications network, like those presided over by the post-telegraph-telephone monopolies of old. Like roads themselves, or the nascent postal system of the sixteenth century, broadband is a patchwork quilt. The ‘fibre’ of our communications networks is hybrid. To be sure, powerful corporations dominate, like the Tassis or Taxis who served as postmasters to the Habsburg emperors (Briggs & Burke 25). Activating broadband today provides a perspective on the path dependency of technology history, and how we can open up new threads of a communications fabric. Our options for transforming our multitudinous networked lives emerge as much from everyday tactics and strategies as they do from grander schemes and unifying policies. We may care to reflect on the waning potential for nation-building technology, in the wake of globalisation. We no longer gather our imagined community around a Community Telephone Plan as it was called in 1960 (Barr, Moyal, and PMG). Yet we do require national and international strategies to get and stay connected (Barr), ideas and funding that concretely address the wider dimensions of access and use. We do need to debate the respective roles of Telstra, the state, community initiatives, and industry competition in fair telecommunications futures. Networks have global reach and require global and national integration. Here vision, co-ordination, and resources are urgently required for our commonweal and moral fibre. To feel the width of the band we desire, we need to plug into and activate the policy circuits. Thanks to Grayson Cooke, Patrick Lichty, Ned Rossiter, John Pace, and an anonymous reviewer for helpful comments. Works Cited Alston, Richard. ‘ “Future Proofing” Regional Communications.’ Department of Communications, Information Technology and the Arts, Canberra, 2003. 17 July 2003 <http://www.dcita.gov.au/Article/0,,0_1-2_3-4_115485,00.php> —. ‘A National Broadband Strategy.’ Department of Communications, Information Technology and the Arts, Canberra, 2003. 17 July 2003 <http://www.dcita.gov.au/Article/0,,0_1-2_3-4_115486,00.php>. Australian Competition and Consumer Commission (ACCC). Broadband Services Report March 2003. Canberra: ACCC, 2003. 17 July 2003 <http://www.accc.gov.au/telco/fs-telecom.htm>. —. Emerging Market Structures in the Communications Sector. Canberra: ACCC, 2003. 15 July 2003 <http://www.accc.gov.au/pubs/publications/utilities/telecommu... ...nications/Emerg_mar_struc.doc>. Barr, Trevor. 
new media.com: The Changing Face of Australia’s Media and Telecommunications. Sydney: Allen & Unwin, 2000. Besley, Tim (Telecommunications Service Inquiry). Connecting Australia: Telecommunications Service Inquiry. Canberra: Department of Information, Communications and the Arts, 2000. 17 July 2003 <http://www.telinquiry.gov.au/final_report.php>. Briggs, Asa, and Burke, Peter. A Social History of the Internet: From Gutenberg to the Internet. Cambridge: Polity, 2002. Broadband Advisory Group. Australia’s Broadband Connectivity: The Broadband Advisory Group’s Report to Government. Melbourne: National Office on the Information Economy, 2003. 15 July 2003 <http://www.noie.gov.au/publications/NOIE/BAG/report/index.htm>. Broadband Services Expert Group. Networking Australia’s Future: Final Report. Canberra: Australian Government Publishing Service (AGPS), 1994. Bureau of Transport and Communications Economics (BTCE). Communications Futures Final Project. Canberra: AGPS, 1994. Cairncross, Frances. The Death of Distance: How the Communications Revolution Will Change Our Lives. London: Orion Business Books, 1997. Communications Law Centre (CLC). Australian Telecommunications Regulation: The Communications Law Centre Guide. 2nd edition. Sydney: Communications Law Centre, University of NSW, 2001. Department of Communications, Information Technology and the Arts (DCITA). Telecommunications Action Plan for Remote Indigenous Communities: Report on the Strategic Study for Improving Telecommunications in Remote Indigenous Communities. Canberra: DCITA, 2002. Estens, D. Connecting Regional Australia: The Report of the Regional Telecommunications Inquiry. Canberra: DCITA, 2002. <http://www.telinquiry.gov.au/rti-report.php>, accessed 17 July 2003. Fels, Alan. ‘Competition in Telecommunications’, speech to Australian Telecommunications Users Group 19th Annual Conference. 6 March, 2003, Sydney. <http://www.accc.gov.au/speeches/2003/Fels_ATUG_6March03.doc>, accessed 15 July 2003. Flew, Terry, and Spurgeon, Christina. ‘Television After Broadcasting’. In The Australian TV Book. Ed. Graeme Turner and Stuart Cunningham. Allen & Unwin, Sydney. 69-85. 2000. Given, Jock. Turning Off the Television. Sydney: UNSW Press, 2003. Goggin, Gerard. ‘Citizens and Beyond: Universal service in the Twilight of the Nation-State.’ In All Connected?: Universal Service in Telecommunications, ed. Bruce Langtry. Melbourne: University of Melbourne Press, 1998. 49-77 —. Rural Communities Online: Networking to link Consumers to Providers. Melbourne: Telstra Consumer Consultative Council, 2003. Goggin, Gerard, and Newell, Christopher. Digital Disability: The Social Construction of Disability in New Media. Lanham, MD: Rowman & Littlefield, 2003. House of Representatives Standing Committee on Communications, Information Technology and the Arts (HoR). Connecting Australia!: Wireless Broadband. Report of Inquiry into Wireless Broadband Technologies. Canberra: Parliament House, 2002. <http://www.aph.gov.au/house/committee/cita/Wbt/report.htm>, accessed 17 July 2003. Lamberton, Don. ‘A Telecommunications Infrastructure is Not an Information Infrastructure’. Prometheus: Journal of Issues in Technological Change, Innovation, Information Economics, Communication and Science Policy 14 (1996): 31-38. Latour, Bruno. Science in Action: How to Follow Scientists and Engineers Through Society. Cambridge, MA: Harvard University Press, 1987. Luck, David. 
‘Revisiting the Future: Assessing the 1994 BTCE communications futures project.’ Media International Australia 96 (2000): 109-119. MacBride, Sean (Chair of International Commission for the Study of Communication Problems). Many Voices, One World: Towards a New More Just and More Efficient World Information and Communication Order. Paris: UNESCO; London: Kogan Page, 1980. Maitland Commission (Independent Commission on Worldwide Telecommunications Development). The Missing Link. Geneva: International Telecommunications Union, 1985. Michaels, Eric. Bad Aboriginal Art: Tradition, Media, and Technological Horizons. Sydney: Allen & Unwin, 1994. Mody, Bella, Bauer, Johannes M., and Straubhaar, Joseph D., eds. Telecommunications Politics: Ownership and Control of the Information Highway in Developing Countries. Mahwah, NJ: Erlbaum, 1995. Moyal, Ann. Clear Across Australia: A History of Telecommunications. Melbourne: Thomas Nelson, 1984. Post-Master General’s Department (PMG). Community Telephone Plan for Australia. Melbourne: PMG, 1960. Productivity Commission (PC). Telecommunications Competition Regulation: Inquiry Report. Report No. 16. Melbourne: Productivity Commission, 2001. <http://www.pc.gov.au/inquiry/telecommunications/finalreport/>, accessed 17 July 2003. Spurgeon, Christina. ‘National Culture, Communications and the Information Economy.’ Media International Australia 87 (1998): 23-34. Turner, Graeme. ‘First Contact: coming to terms with the cable guy.’ UTS Review 3 (1997): 109-21. Winseck, Dwayne. ‘Wired Cities and Transnational Communications: New Forms of Governance for Telecommunications and the New Media’. In The Handbook of New Media: Social Shaping and Consequences of ICTs, ed. Leah A. Lievrouw and Sonia Livingstone. London: Sage, 2002. 393-409. World Trade Organisation. General Agreement on Trade in Services: Annex on Telecommunications. Geneva: World Trade Organisation, 1994. 17 July 2003 <http://www.wto.org/english/tratop_e/serv_e/12-tel_e.htm>. —. Fourth protocol to the General Agreement on Trade in Services. Geneva: World Trade Organisation. 17 July 2003 <http://www.wto.org/english/tratop_e/serv_e/4prote_e.htm>. Links http://www.accc.gov.au/pubs/publications/utilities/telecommunications/Emerg_mar_struc.doc http://www.accc.gov.au/speeches/2003/Fels_ATUG_6March03.doc http://www.accc.gov.au/telco/fs-telecom.htm http://www.aph.gov.au/house/committee/cita/Wbt/report.htm http://www.dcita.gov.au/Article/0,,0_1-2_3-4_115485,00.html http://www.dcita.gov.au/Article/0,,0_1-2_3-4_115486,00.html http://www.noie.gov.au/projects/access/access/broadband1.htm http://www.noie.gov.au/publications/NOIE/BAG/report/index.htm http://www.pc.gov.au http://www.pc.gov.au/inquiry/telecommunications/finalreport/ http://www.telinquiry.gov.au/final_report.html http://www.telinquiry.gov.au/rti-report.html http://www.wto.org/english/tratop_e/serv_e/12-tel_e.htm http://www.wto.org/english/tratop_e/serv_e/4prote_e.htm Citation reference for this article MLA Style Goggin, Gerard. "Broadband" M/C: A Journal of Media and Culture <http://www.media-culture.org.au/0308/02-featurebroadband.php>. APA Style Goggin, G. (2003, Aug 26). Broadband. M/C: A Journal of Media and Culture, 6, <http://www.media-culture.org.au/0308/02-featurebroadband.php>.
APA, Harvard, Vancouver, ISO, and other styles
45

Holleran, Samuel. "Better in Pictures." M/C Journal 24, no. 4 (August 19, 2021). http://dx.doi.org/10.5204/mcj.2810.

Full text
Abstract:
While the term “visual literacy” has grown in popularity in the last 50 years, its meaning remains nebulous. It is described variously as: a vehicle for aesthetic appreciation, a means of defence against visual manipulation, a sorting mechanism for an increasingly data-saturated age, and a prerequisite to civic inclusion (Fransecky 23; Messaris 181; McTigue and Flowers 580). Scholars have written extensively about the first three subjects but there has been less research on how visual literacy frames civic life and how it might help the public as a tool to address disadvantage and assist in removing social and cultural barriers. This article examines a forerunner to visual literacy in the push to create an international symbol language born out of popular education movements, a project that fell short of its goals but still left a considerable impression on graphic media. This article, then, presents an analysis of visual literacy campaigns in the early postwar era. These campaigns did not attempt to invent a symbolic language but posited that images themselves served as a universal language in which students could receive training. Of particular interest is how the concept of visual literacy has been mobilised as a pedagogical tool in design, digital humanities and in broader civic education initiatives promoted by Third Space institutions. Behind the creation of new visual literacy curricula is the idea that images can help anchor a world community, supplementing textual communication. Figure 1: Visual Literacy Yearbook. Montebello Unified School District, USA, 1973. Shedding Light: Origins of the Visual Literacy Frame The term “visual literacy” came to the fore in the early 1970s on the heels of mass literacy campaigns. The educators, creatives and media theorists who first advocated for visual learning linked this aim to literacy, an unassailable goal, to promote a more radical curricular overhaul. They challenged a system that had hitherto only acknowledged a very limited pathway towards academic success; pushing “language and mathematics”, courses “referred to as solids (something substantial) as contrasted with liquids or gases (courses with little or no substance)” (Eisner 92). This was deemed “a parochial view of both human ability and the possibilities of education” that did not acknowledge multiple forms of intelligence (Gardner). This change not only integrated elements of mass culture that had been rejected in education, notably film and graphic arts, but also encouraged the critique of images as a form of good citizenship, assuming that visually literate arbiters could call out media misrepresentations and manipulative political advertising (Messaris, “Visual Test”). This movement was, in many ways, reactive to new forms of mass media that began to replace newspapers as key forms of civic participation. Unlike simple literacy (being able to decipher letters as a mnemonic system), visual literacy involves imputing meanings to images where meanings are less fixed, yet still with embedded cultural signifiers. Visual literacy promised to extend enlightenment metaphors of sight (as in the German Aufklärung) and illumination (as in the French Lumières) to help citizens understand an increasingly complex marketplace of images. The move towards visual literacy was not so much a shift towards images (and away from books and oration) but an affirmation of the need to critically investigate the visual sphere. It introduced doubt to previously upheld hierarchies of perception. 
Sight, to Kant the “noblest of the senses” (158), was no longer the sense “least affected” by the surrounding world but an input centre that was equally manipulable. In Kant’s view of societal development, the “cosmopolitan” held the key to pacifying bellicose states and ensuring global prosperity and tranquillity. The process of developing a cosmopolitan ideology rests, according to Kant, on the gradual elimination of war and “the education of young people in intellectual and moral culture” (188-89). Transforming disparate societies into “a universal cosmopolitan existence” that would “at last be realised as the matrix within which all the original capacities of the human race may develop” would take well-funded educational institutions and, potentially, a new framework for imparting knowledge (Kant 51). To some, the world of the visual presented a baseline for shared experience. Figure 2: Exhibition by the Gesellschafts- und Wirtschaftsmuseum in Vienna, photograph c. 1927. An International Picture Language The quest to find a mutually intelligible language that could “bridge worlds” and solder together all of humankind goes back to the late nineteenth century and the Esperanto movement of Ludwig Zamenhof (Schor 59). The expression of this ideal in the world of the visual picked up steam in the interwar years with designers and editors like Fritz Kahn, Gerd Arntz, and Otto and Marie Neurath. Their work transposing complex ideas into graphic form has been rediscovered as an antecedent to modern infographics, but the symbols they deployed were intended not merely to explain, but also to aid education and build international fellowship unbounded by spoken language. The Neuraths in particular are celebrated for their international picture language or Isotypes. These pictograms (sometimes viewed as proto-emojis) can be used to represent data without text. Taken together they are an “intemporal, hieroglyphic language” that Neurath hoped would unite working-class people the world over (Lee 159). The Neuraths’ work was done in the explicit service of visual education with a popular socialist agenda and incubated in the social sphere of Red Vienna at the Gesellschafts- und Wirtschaftsmuseum (Social and Economic Museum), where Otto served as Director. The Wirtschaftsmuseum was an experiment in popular education, with multiple branches and late opening hours to accommodate “the working man [who] has time to see a museum only at night” (Neurath 72-73). The Isotype contained universalist aspirations for the “making of a world language, or a helping picture language—[that] will give support to international developments generally” and “educate by the eye” (Neurath 13). Figure 3: Gerd Arntz Isotype Images. (Source: University of Reading.) The Isotype was widely adopted in the postwar era in pre-packaged sets of symbols used in graphic design and wayfinding systems for buildings and transportation networks, but with the socialism of the Neuraths peeled away, leaving only the system of logos that we are familiar with from airport washrooms, charts, and public transport maps. Much of the uptake of this symbol language could be traced to increased mobility and tourism, particularly in countries that did not make use of a Roman alphabet. The 1964 Olympics in Tokyo helped pave the way when organisers, fearful of jumbling too many scripts together, opted instead for black and white icons to represent the program of sports that summer. 
The new focus on the visual was both technologically mediated—cheaper printing and broadcast technologies made the diffusion of image increasingly possible—but also ideologically supported by a growing emphasis on projects that transcended linguistic, ethnic, and national borders. The Olympic symbols gradually morphed into Letraset icons, and, later, symbols in the Unicode Standard, which are the basis for today’s emojis. Wordless signs helped facilitate interconnectedness, but only in the most literal sense; their application was limited primarily to sports mega-events, highway maps, and “brand building”, and they never fulfilled their role as an educational language “to give the different nations a common outlook” (Neurath 18). Universally understood icons, particularly in the form of emojis, point to a rise in visual communication but they have fallen short as a cosmopolitan project, supporting neither the globalisation of Kantian ethics nor the transnational socialism of the Neuraths. Figure 4: Symbols in use. Women's bathroom. 1964 Tokyo Olympics. (Source: The official report of the Organizing Committee.) Counter Education By mid-century, the optimism of a universal symbol language seemed dated, and focus shifted from distillation to discernment. New educational programs presented ways to study images, increasingly reproducible with new technologies, as a language in and of themselves. These methods had their roots in the fin-de-siècle educational reforms of John Dewey, Helen Parkhurst, and Maria Montessori. As early as the 1920s, progressive educators were using highly visual magazines, like National Geographic, as the basis for lesson planning, with the hopes that they would “expose students to edifying and culturally enriching reading” and “develop a more catholic taste or sensibility, representing an important cosmopolitan value” (Hawkins 45). The rise in imagery from previously inaccessible regions helped pupils to see themselves in relation to the larger world (although this connection always came with the presumed superiority of the reader). “Pictorial education in public schools” taught readers—through images—to accept a broader world but, too often, they saw photographs as a “straightforward transcription of the real world” (Hawkins 57). The images of cultures and events presented in Life and National Geographic for the purposes of education and enrichment were now the subject of greater analysis in the classroom, not just as “windows into new worlds” but as cultural products in and of themselves. The emerging visual curriculum aimed to do more than just teach with previously excluded modes (photography, film and comics); it would investigate how images presented and mediated the world. This gained wider appeal with new analytical writing on film, like Raymond Spottiswoode's Grammar of the Film (1950) which sought to formulate the grammatical rules of visual communication (Messaris 181), influenced by semiotics and structural linguistics; the emphasis on grammar can also be seen in far earlier writings on design systems such as Owen Jones’s 1856 The Grammar of Ornament, which also advocated for new, universalising methods in design education (Sloboda 228). The inventorying impulse is on display in books like Donis A. Dondis’s A Primer of Visual Literacy (1973), a text that meditates on visual perception but also functions as an introduction to line and form in the applied arts, picking up where the Bauhaus left off. 
Dondis enumerates the “syntactical guidelines” of the applied arts with illustrations that are in keeping with 1920s books by Kandinsky and Klee and analyse pictorial elements. However, at the end of the book she shifts focus with two chapters that examine “messaging” and visual literacy explicitly. Dondis predicts that “an intellectual, trained ability to make and understand visual messages is becoming a vital necessity to involvement with communication. It is quite likely that visual literacy will be one of the fundamental measures of education in the last third of our century” (33) and she presses for more programs that incorporate the exploration and analysis of images in tertiary education. Figure 5: Ideal spatial environment for the Blueprint charts, 1970. (Image: Inventory Press.) Visual literacy in education arrived in earnest with a wave of publications in the mid-1970s. They offered ways for students to understand media processes and for teachers to use visual culture as an entry point into complex social and scientific subject matter, tapping into the “visual consciousness of the ‘television generation’” (Fransecky 5). Visual culture was often seen as inherently democratising, a break from stuffiness, the “artificialities of civilisation”, and the “archaic structures” that set sensorial perception apart from scholarship (Dworkin 131-132). Many radical university projects and community education initiatives of the 1960s made use of new media in novel ways: from Maurice Stein and Larry Miller’s fold-out posters accompanying Blueprint for Counter Education (1970) to Emory Douglas’s graphics for The Black Panther newspaper. Blueprint’s text- and image-dense wall charts were made via assemblage and they were imagined less as charts and more as a “matrix of resources” that could be used—and added to—by youth to undertake their own counter education (Cronin 53). These experiments in visual learning helped to break down old hierarchies in education, but their aim was influenced more by countercultural notions of disruption than the universal ideals of cosmopolitanism. From Image as Text to City as Text For a brief period in the 1970s, thinkers like Marshall McLuhan (McLuhan et al., Massage) and artists like Bruno Munari (Tanchis and Munari) collaborated fruitfully with graphic designers to create books that mixed text and image in novel ways. Using new compositional methods, they broke apart traditional printing lock-ups to superimpose photographs, twist text, and bend narrative frames. The most famous work from this era is, undoubtedly, The Medium Is the Massage (1967), McLuhan’s team-up with graphic designer Quentin Fiore, but it was followed by dozens of other books intended to communicate theory and scientific ideas with popularising graphics. Following in the footsteps of McLuhan, many of these texts sought not just to explain an issue but to self-consciously reference their own method of information delivery. These works set the precedent for visual aids (and, to a lesser extent, audio) that launched a diverse, non-hierarchical discourse that was nonetheless bound to tactile artefacts. In 1977, McLuhan helped develop a media textbook for secondary school students called City as Classroom: Understanding Language and Media. It is notable for its direct address style and its focus on investigating spaces outside of the classroom (provocatively, a section on the third page begins with “Should all schools be closed?”). 
The book follows with a fine-grained analysis of advertising forms in which students are asked to first bring advertisements into class for analysis and later to go out into the city to explore “a man-made environment, a huge warehouse of information, a vast resource to be mined free of charge” (McLuhan et al., City 149). As a document City as Classroom is critical of existing teaching methods, in line with the radical “in the streets” pedagogy of its day. McLuhan’s theories proved particularly salient for the counter education movement, in part because they tapped into a healthy scepticism of advertisers and other image-makers. They also dovetailed with growing discontent with the ad-strew visual environment of cities in the 1970s. Budgets for advertising had mushroomed in the1960s and outdoor advertising “cluttered” cities with billboards and neon, generating “fierce intensities and new hybrid energies” that threatened to throw off the visual equilibrium (McLuhan 74). Visual literacy curricula brought in experiential learning focussed on the legibility of the cities, mapping, and the visualisation of urban issues with social justice implications. The Detroit Geographical Expedition and Institute (DGEI), a “collective endeavour of community research and education” that arose in the aftermath of the 1967 uprisings, is the most storied of the groups that suffused the collection of spatial data with community engagement and organising (Warren et al. 61). The following decades would see a tamed approach to visual literacy that, while still pressing for critical reading, did not upend traditional methods of educational delivery. Figure 6: Beginning a College Program-Assisting Teachers to Develop Visual Literacy Approaches in Public School Classrooms. 1977. ERIC. Searching for Civic Education The visual literacy initiatives formed in the early 1970s both affirmed existing civil society institutions while also asserting the need to better inform the public. Most of the campaigns were sponsored by universities, major libraries, and international groups such as UNESCO, which published its “Declaration on Media Education” in 1982. They noted that “participation” was “essential to the working of a pluralistic and representative democracy” and the “public—users, citizens, individuals, groups ... were too systematically overlooked”. Here, the public is conceived as both “targets of the information and communication process” and users who “should have the last word”. To that end their “continuing education” should be ensured (Study 18). Programs consisted primarily of cognitive “see-scan-analyse” techniques (Little et al.) for younger students but some also sought to bring visual analysis to adult learners via continuing education (often through museums eager to engage more diverse audiences) and more radical popular education programs sponsored by community groups. By the mid-80s, scores of modules had been built around the comprehension of visual media and had become standard educational fare across North America, Australasia, and to a lesser extent, Europe. There was an increasing awareness of the role of data and image presentation in decision-making, as evidenced by the surprising commercial success of Edward Tufte’s 1982 book, The Visual Display of Quantitative Information. Visual literacy—or at least image analysis—was now enmeshed in teaching practice and needed little active advocacy. 
Scholarly interest in the subject went into a brief period of hibernation in the 1980s and early 1990s, only to be reborn with the arrival of new media distribution technologies (CD-ROMs and then the internet) in classrooms and the widespread availability of digital imaging technology starting in the late 1990s; companies like Adobe distributed free and reduced-fee licences to schools and launched extensive teacher training programs. Visual literacy was reanimated but primarily within a circumscribed academic field of education and data visualisation. Figure 7: Visual Literacy; What Research Says to the Teacher, 1975. National Education Association. USA. Part of the shifting frame of visual literacy has to do with institutional imperatives, particularly in places where austerity measures forced strange alliances between disciplines. What had been a project in alternative education morphed into an uncontested part of the curriculum and a dependable budget line. This shift was already forecasted in 1972 by Harun Farocki who, writing in Filmkritik, noted that funding for new film schools would be difficult to obtain but money might be found for “training in media education … a discipline that could persuade ministers of education, that would at the same time turn the budget restrictions into an advantage, and that would match the functions of art schools” (98). Nearly 50 years later educators are still using media education (rebranded as visual or media literacy) to make the case for fine arts and humanities education. While earlier iterations of visual literacy education were often too reliant on the idea of cracking the “code” of images, they did promote ways of learning that were a deep departure from the rote methods of previous generations. Next-gen curricula frame visual literacy as largely supplemental—a resource, but not a program. By the end of the 20th century, visual literacy had changed from a scholarly interest to a standard resource in the “teacher’s toolkit”, entering into school programs and influencing museum education, corporate training, and the development of public-oriented media (Literacy). An appreciation of image culture was seen as key to creating empathetic global citizens, but its scope was increasingly limited. With rising austerity in the education sector (a shift that preceded the 2008 recession by decades in some countries), art educators, museum enrichment staff, and design researchers need to make a case for why their disciplines were relevant in pedagogical models that are increasingly aimed at “skills-based” and “job ready” teaching. Arts educators worked hard to insert their fields into learning goals for secondary students as visual literacy, with the hope that “literacy” would carry the weight of an educational imperative and not a supplementary field of study. Conclusion For nearly a century, educational initiatives have sought to inculcate a cosmopolitan perspective with a variety of teaching materials and pedagogical reference points. Symbolic languages, like the Isotype, looked to unite disparate people with shared visual forms; while educational initiatives aimed to train the eyes of students to make them more discerning citizens. The term ‘visual literacy’ emerged in the 1960s and has since been deployed in programs with a wide variety of goals. 
Countercultural initiatives saw it as a prerequisite for popular education from the ground up, but, in the years since, it has been formalised and brought into more staid curricula, often as a sort of shorthand for learning from media and pictures. The grand cosmopolitan vision of a complete ‘visual language’ has been scaled back considerably, but still exists in trace amounts. Processes of globalisation require images to universalise experiences, commodities, and more for people without shared languages. Emoji alphabets and globalese (brands and consumer messaging that are “visual-linguistic” amalgams “increasingly detached from any specific ethnolinguistic group or locality”) are a testament to a mediatised banal cosmopolitanism (Jaworski 231). In this sense, becoming “fluent” in global design vernacular means familiarity with firms and products, an understanding that is aesthetic, not critical. It is very much the beneficiaries of globalisation—both state and commercial actors—who have been able to harness increasingly image-based technologies for their benefit. To take a humorous but nonetheless consequential example, Spanish culinary boosters were able to successfully lobby for a paella emoji (Miller) rather than having a food symbol from a less wealthy country such as a Senegalese jollof or a Morrocan tagine. This trend has gone even further as new forms of visual communication are increasingly streamlined and managed by for-profit media platforms. The ubiquity of these forms of communication and their global reach has made visual literacy more important than ever but it has also fundamentally shifted the endeavour from a graphic sorting practice to a critical piece of social infrastructure that has tremendous political ramifications. Visual literacy campaigns hold out the promise of educating students in an image-based system with the potential to transcend linguistic and cultural boundaries. This cosmopolitan political project has not yet been realised, as the visual literacy frame has drifted into specialised silos of art, design, and digital humanities education. It can help bridge the “incomplete connections” of an increasingly globalised world (Calhoun 112), but it does not have a program in and of itself. Rather, an evolving visual literacy curriculum might be seen as a litmus test for how we imagine the role of images in the world. References Brown, Neil. “The Myth of Visual Literacy.” Australian Art Education 13.2 (1989): 28-32. Calhoun, Craig. “Cosmopolitanism in the Modern Social Imaginary.” Daedalus 137.3 (2008): 105–114. Cronin, Paul. “Recovering and Rendering Vital Blueprint for Counter Education at the California Institute for the Arts.” Blueprint for Counter Education. Inventory Press, 2016. 36-58. Dondis, Donis A. A Primer of Visual Literacy. MIT P, 1973. Dworkin, M.S. “Toward an Image Curriculum: Some Questions and Cautions.” Journal of Aesthetic Education 4.2 (1970): 129–132. Eisner, Elliot. Cognition and Curriculum: A Basis for Deciding What to Teach. Longmans, 1982. Farocki, Harun. “Film Courses in Art Schools.” Trans. Ted Fendt. Grey Room 79 (Apr. 2020): 96–99. Fransecky, Roger B. Visual Literacy: A Way to Learn—A Way to Teach. Association for Educational Communications and Technology, 1972. Gardner, Howard. Frames Of Mind. Basic Books, 1983. Hawkins, Stephanie L. “Training the ‘I’ to See: Progressive Education, Visual Literacy, and National Geographic Membership.” American Iconographic. U of Virginia P, 2010. 28–61. Jaworski, Adam. 
“Globalese: A New Visual-Linguistic Register.” Social Semiotics 25.2 (2015): 217-35. Kant, Immanuel. Anthropology from a Pragmatic Point of View. Cambridge UP, 2006. Kant, Immanuel. “Perpetual Peace.” Political Writings. Ed. H. Reiss. Cambridge UP, 1991 [1795]. 116–130. Kress, G., and T. van Leeuwen. Reading images: The Grammar of Visual Design. Routledge, 1996. Literacy Teaching Toolkit: Visual Literacy. Department of Education and Training (DET), State of Victoria. 29 Aug. 2018. 30 Sep. 2020 <https://www.education.vic.gov.au:443/school/teachers/teachingresources/discipline/english/literacy/ readingviewing/Pages/litfocusvisual.aspx>. Lee, Jae Young. “Otto Neurath's Isotype and the Rhetoric of Neutrality.” Visible Language 42.2: 159-180. Little, D., et al. Looking and Learning: Visual Literacy across the Disciplines. Wiley, 2015. Messaris, Paul. “Visual Literacy vs. Visual Manipulation.” Critical Studies in Mass Communication 11.2: 181-203. DOI: 10.1080/15295039409366894 ———. “A Visual Test for Visual ‘Literacy.’” The Annual Meeting of the Speech Communication Association. 31 Oct. to 3 Nov. 1991. Atlanta, GA. <https://files.eric.ed.gov/fulltext/ED347604.pdf>. McLuhan, Marshall. Understanding Media: The Extensions of Man. McGraw-Hill, 1964. McLuhan, Marshall, Quentin Fiore, and Jerome Agel. The Medium Is the Massage, Bantam Books, 1967. McLuhan, Marshall, Kathryn Hutchon, and Eric McLuhan. City as Classroom: Understanding Language and Media. Agincourt, Ontario: Book Society of Canada, 1977. McTigue, Erin, and Amanda Flowers. “Science Visual Literacy: Learners' Perceptions and Knowledge of Diagrams.” Reading Teacher 64.8: 578-89. Miller, Sarah. “The Secret History of the Paella Emoji.” Food & Wine, 20 June 2017. <https://www.foodandwine.com/news/true-story-paella-emoji>. Munari, Bruno. Square, Circle, Triangle. Princeton Architectural Press, 2016. Newfield, Denise. “From Visual Literacy to Critical Visual Literacy: An Analysis of Educational Materials.” English Teaching-Practice and Critique 10 (2011): 81-94. Neurath, Otto. International Picture Language: The First Rules of Isotype. K. Paul, Trench, Trubner, 1936. Schor, Esther. Bridge of Words: Esperanto and the Dream of a Universal Language. Henry Holt and Company, 2016. Sloboda, Stacey. “‘The Grammar of Ornament’: Cosmopolitanism and Reform in British Design.” Journal of Design History 21.3 (2008): 223-36. Study of Communication Problems: Implementation of Resolutions 4/19 and 4/20 Adopted by the General Conference at Its Twenty-First Session; Report by the Director-General. UNESCO, 1983. Tanchis, Aldo, and Bruno Munari. Bruno Munari: Design as Art. MIT P, 1987. Warren, Gwendolyn, Cindi Katz, and Nik Heynen. “Myths, Cults, Memories, and Revisions in Radical Geographic History: Revisiting the Detroit Geographical Expedition and Institute.” Spatial Histories of Radical Geography: North America and Beyond. Wiley, 2019. 59-86.
APA, Harvard, Vancouver, ISO, and other styles
46

Potts, Jason. "The Alchian-Allen Theorem and the Economics of Internet Animals." M/C Journal 17, no. 2 (February 18, 2014). http://dx.doi.org/10.5204/mcj.779.

Full text
Abstract:
Economics of Cute There are many ways to study cute: for example, neuro-biology (cute as adaptation); anthropology (cute in culture); political economy (cute industries, how cute exploits consumers); cultural studies (social construction of cute); media theory and politics (representation and identity of cute), and so on. What about economics? At first sight, this might point to a money-capitalism nexus (“the cute economy”), but I want to argue here that the economics of cute actually works through choice interacting with fixed costs and what economists call “the substitution effect”. Cute, in conjunction with the Internet, affects the trade-offs involved in choices people make. Let me put that more starkly: cute shapes the economy. This can be illustrated with internet animals, which at the time of writing means Grumpy Cat. I want to explain how that mechanism works – but to do so I will need some abstraction. This is not difficult – a simple application of a well-known economics model, namely the Allen-Alchian theorem, or the “third law of demand”. But I am going to take some liberties in order to represent that model clearly in this short paper. Specifically, I will model just two extremes of quality (“opera” and “cat videos”) to represent end-points of a spectrum. I will also assume that the entire effect of the internet is to lower the cost of cat videos. Now obviously these are just simplifying assumptions “for the purpose of the model”. And the purpose of the model is to illuminate a further aspect of how we might understand cute, by using an economic model of choice and its consequences. This is a standard technique in economics, but not so in cultural studies, so I will endeavour to explain these moments as we go, so as to avoid any confusion about analytic intent. The purpose of this paper is to suggest a way that a simple economic model might be applied to augment the cultural study of cute by seeking to unpack its economic aspect. This can be elucidated by considering the rise of internet animals as a media-cultural force, as epitomized by “cat videos”. We can explain this through an application of price theory and the theory of demand that was first proposed by Armen Alchian and William Allen. They showed how an equal fixed cost imposed on high-quality and low-quality goods alike causes a shift in consumption toward the higher-quality good, because it is now relatively cheaper. Alchian and Allen had in mind something like transport costs on agricultural goods (such as apples). But it is also true that the same effect works in reverse (Cowen), and the purpose of this paper is to develop that logic to contribute to explaining how certain structural shifts in production and consumption in digital media, particularly the rise of blog formats such as Tumblr, a primary supplier of kittens on the Internet, can be in part understood as a consequence of this economic mechanism. This argument rests on three key assumptions. The first is that the cost of the internet is independent of what it carries. This is certainly true at the level of machine code, and largely true at higher levels. What might be judged aesthetically high quality or low quality content – say a Bach cantata or a funny cat video – is treated the same way if both have the same file size. This is a physical and computational aspect of net-neutrality. The internet – or digitization – functions as a fixed cost imposed regardless of what cultural quality is moving across it. 
Second, while there are costs to using the internet (for example, in hardware or concerning digital literacy), these costs are lower than for previous analog forms of information and cultural production and dissemination. This is not an empirical claim, but a logical one (revealed preference): if it were not so, people would not have chosen it. The first two points – net neutrality and lowered cost – I want to take as working assumptions, although they can obviously be debated. But that is not the purpose of the paper, which is instead the third point – the “Alchian-Allen theorem”, or the third fundamental law of demand. The Alchian-Allen Theorem The Alchian-Allen theorem is an extension of the law of demand (Razzolini et al) to consider how the distribution of high quality and low quality substitutes of the same good (such as apples) is affected by the imposition of a fixed cost (such as transportation). It is also known as the “shipping the good apples out” theorem, after Borcherding and Silberberg explained why places that produce a lot of apples – such as Seattle in the US – often also have low supplies of high quality apples compared to places that do not produce apples, such as New York. The puzzle of “why can’t you get good apples in Seattle?” is a simple but clever application of price theory. When a place produces high quality and low quality items, it will be rational for those in faraway places to consume the high quality items, and it will be rational for the producers to ship them, leaving only the low quality items locally. Why? Assume preferences and incomes are the same everywhere and that transport cost is the same regardless of whether the item shipped is high or low quality. Both high quality and low quality apples are more expensive in New York compared to Seattle, but because the fixed transport cost applies to both, the high quality apples are relatively less expensive. Rational consumers in New York will consume more high quality apples. This makes fewer available in Seattle. Figure 1: Change in consumption ratio after the imposition of a fixed cost to all apples. Another example: Australians drink higher quality Californian wine than Californians, and vice versa, because it is only worth shipping the high quality wine out. A counter-argument is that learning effects dominate: with high quality local product, local consumers learn to appreciate quality, and have different preferences (Cowen and Tabarrok). The Alchian-Allen theorem applies to any fixed cost that applies generally. For example, consider illegal drugs (such as alcohol during the US prohibition, or marijuana or cocaine presently) and the implication of a fixed penalty – such as a fine, or prison sentence, which is like a cost – applied to trafficking or consumption. Alchian-Allen predicts a shift toward higher quality (or stronger) drugs, because with a fixed penalty and probability of getting caught, the relatively stronger substance is now relatively cheaper. Empirical work finds that this effect did occur during alcohol prohibition, and is currently occurring in narcotics (Thornton Economics of Prohibition, “Potency of illegal drugs”). Another application, proposed by Steven Cuellar, uses Alchian-Allen to explain a well-known statistical phenomenon: why women taking the contraceptive pill on average prefer “more masculine” men. This is once again a shift toward quality, predicated on a falling relative price based on a common ‘fixed price’ (taking the pill) of sexual activity. 
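A minimal numerical sketch may make the relative-price mechanism concrete. The prices below are invented for illustration (they are not figures from Alchian and Allen or from Borcherding and Silberberg); only the ratio between the two grades matters.

```python
# Hypothetical Alchian-Allen ("shipping the good apples out") arithmetic.
# All prices are invented for the example; only their ratios matter.

def relative_price(p_high: float, p_low: float, fixed_cost: float = 0.0) -> float:
    """Price of the high-quality good measured in units of the low-quality good,
    after the same fixed cost (e.g. per-apple shipping) is added to both."""
    return (p_high + fixed_cost) / (p_low + fixed_cost)

good_apple, ordinary_apple = 2.00, 1.00   # hypothetical per-apple prices at the orchard
shipping = 1.00                           # hypothetical fixed transport cost per apple, either grade

print(relative_price(good_apple, ordinary_apple))            # 2.0 -> in Seattle, a good apple costs two ordinary ones
print(relative_price(good_apple, ordinary_apple, shipping))  # 1.5 -> in New York, it costs only 1.5 ordinary ones
# Because the ratio falls when a common fixed cost is added, distant consumers
# substitute toward the high-quality good, and the good apples get shipped out.
```

Run with the fixed cost removed rather than added, the same function reproduces the reversal that the article goes on to apply to digitisation.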
Jean Eid et al show that the result also applies to racehorses (the good horses get shipped out), and Staten and Umbeck show it applies to students – the good students go to faraway universities, and the good students in those places do the same. So that’s apples, drugs, sex and racehorses. What about the Internet and kittens? Allen-Alchian Explains Why the Internet Is Made of Cats In analog days, before digitization and the Internet, the transactions costs involved with various consumption items, whether commodities or media, meant that the Alchian-Allen effect pushed in the direction of higher quality, bundled product. Any additional fixed costs, such as higher transport costs, or taxes or duties, or transactions costs associated with search and coordination and payment, i.e. costs that affected all substitutes in the same way, would tend to make the higher quality item relatively less expensive, increasing its consumption. But digitisation and the Internet reverse the direction of these transactions costs. Rather than adding a fixed cost, such as transport costs, the various aspects of the digital revolution are equivalent to a fall in fixed costs, particularly access. These factors are not just one thing, but a suite of changes that add up to lowered transaction costs in the production, distribution and consumption of media, culture and games. These include: the internet and world-wide-web and its unencumbered operation; the growth and increasing efficacy of search technology; growth of universal broadband for fast, wide band-width access; growth of mobile access (through smartphones and other appliances); growth of social media networks (Facebook, Twitter; Metcalfe’s law); growth of developer and distribution platforms (iPhone, android, iTunes); globally falling hardware and network access costs (Moore’s law); growth of e-commerce (Ebay, Amazon, Etsy) and e-payments (paypal, bitcoin); expansions of digital literacy and competence; and Creative Commons. These effects do not simply shift us down a demand curve for each given consumption item. This effect alone simply predicts that we consume more. But the Alchian-Allen effect makes a different prediction, namely that we consume not just more, but also different. These effects function to reduce the overall fixed costs or transactions costs associated with any consumption, sharing, or production of media, culture or games over the internet (or in digital form). With this overall fixed cost component now reduced, it represents a relatively larger decline in cost at the lower-quality, more bite-sized or unbundled end of the media goods spectrum. As such, this predicts a change in the composition of the overall consumption basket to reflect the changed relative prices that these above effects give rise to. See Figure 2 below (based on a blog post by James Oswald); a numerical sketch of the same reversed arithmetic follows at the end of this passage. The key to the economics of cute, in consequence of digitisation, is to follow through the qualitative change that, because of the Alchian-Allen effect, moves away from the high-quality, highly-bundled, high-value end of the media goods spectrum. The “pattern prediction” here is toward more, different, and lower quality: toward five minutes of “Internet animals”, rather than a full day at the zoo. Figure 2: Reducing transaction costs lowers the relative price of cat videos. Consider five dimensions in which this more and different tendency plays out. Consumption These effects make digital and Internet-based consumption cheaper, shifting us down a demand curve, so we consume more. 
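As flagged above, here is the reversed arithmetic: a minimal sketch, assuming (as the article does) that digitisation removes a fixed transaction cost that used to fall on both ends of the quality spectrum. The figures for "opera" and "cat videos" are invented placeholders, not data.

```python
# Hypothetical reversal of the Alchian-Allen effect: digitisation modelled as the
# removal of a fixed access cost shared by "opera" (high-quality bundle) and
# "cat videos" (bite-sized good). All numbers are invented for illustration.

def relative_price(p_high: float, p_low: float, fixed_cost: float = 0.0) -> float:
    return (p_high + fixed_cost) / (p_low + fixed_cost)

opera, cat_video = 50.0, 1.0   # invented per-item costs of the two quality extremes
analog_access = 20.0           # invented fixed transaction cost of the analog era

print(round(relative_price(opera, cat_video, analog_access), 2))  # 3.33 -> offline, opera costs about 3.3 cat videos
print(relative_price(opera, cat_video))                           # 50.0 -> online, opera costs 50 cat videos
# Removing the shared fixed cost makes the bite-sized, unbundled good relatively much
# cheaper, so the consumption basket tilts toward more, smaller, cheaper items.
```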
That’s the first law of demand in action: i.e., demand curves slope downwards. But a further effect – brilliantly set out in Cowen – is that we also consume lower-quality media. This is not a value judgment. These lower-quality media may well have much higher aesthetic value. They may be funnier, or more tragic and sublime; or faster, or not. This is not about absolute value; only about relative value. Digitization operating through Alchian-Allen skews consumption toward the lower quality ends in some dimensions: whether this is time, as in shorter – or cost, as in cheaper – or size, as in smaller – or transmission quality, as in gifs. This can also be seen as a form of unbundling, of dropping dimensions that are not valued in order to create a simplified product. So we consume different, with higher variance. We sample more than we used to. This means that we explore a larger information world. Consumption is bite-sized and assorted. This tendency is evident in the rise of apps and in the proliferation of media forms and devices and the value of interoperability. Production As consumption shifts (lower quality, greater variety), so must production. The production process has two phases: (1) figuring out what to do, or development; and (2) doing it, or making. The world of trade and globalization describes the latter part: namely efficient production. The main challenge is the world of innovation: the entrepreneurial and experimental world of figuring out what to do, and how. It is this second world that is radically transformed by the implications of lowered transaction costs. One implication is the growth of user communities based around collaborative media projects (such as open source software) and community-based platforms or common pool resources for sharing knowledge, such as the “Maker movement” (Anderson 2012). This phenomenon of user co-creation, or produsers, has been widely recognized as an important new phenomenon in the innovation and production process, particularly in those processes associated with new digital technologies. There are numerous explanations for this, particularly around preferences for cooperation, community-building, social learning and reputational capital, and entrepreneurial expectations (Quiggin and Potts; Banks and Potts). Business Models The Alchian-Allen effect on consumption and production follows through to business models. A business model is a way of extracting value that represents some strategic equilibrium between market forms, organizational structures, technological possibilities, institutional frameworks and environmental conditions, and that manifests in entrepreneurial patterns of business strategy and particular patterns of investment and organization. The discovery of effective business models is a key process of market capitalist development and competition. The Alchian-Allen effect impacts on the space of effective, viable business models. Business models that used to work will work less well, or not at all. And new business models will be required. It is a significant challenge to develop these “economic technologies”. Perhaps no less so than the development of physical technologies, new business models are produced through experimental trial and error. They cannot be known in advance or planned.
But business models will change, which will affect not only the constellation of existing companies and the value propositions that underlie them, but also the broader specializations based on these, in terms of the skill sets held and developed by people, the locations of businesses and people, and so on. New business models will emerge from a process of Schumpeterian creative destruction as it unfolds (Beinhocker). The large-production, high-development-cost, proprietary-intellectual-property, systems-based business model is not likely to survive, other than in niche areas. More experimental, discovery-focused, fast-development-then-scale-up business models are more likely to fit the new ecology. Social Network Markets & Novelty Bundling Markets The growth of variety and diversity of choice that comes with this change in the way media is consumed, reflecting a reallocation of consumption toward smaller, more bite-sized, lower-valued chunks (the Alchian-Allen effect), presents consumers with a problem: they have to make more choices over novelty. Choice over novelty is difficult for consumers because it is experimental and potentially costly due to the risk of mistakes (Earl), but it also presents entrepreneurs with an opportunity to seek to help solve that problem. The problem is a simple consequence of bounded rationality and time scarcity. It is equivalent to saying that the cost of choice rises monotonically with the number of choices, and that because there is no way to make a complete rational choice, agents will use decision or choice heuristics. These heuristics can be developed independently by the agents themselves through experience, or they can be copied or adopted from others (Earl and Potts). What Potts et al. call “social network markets” and what Potts calls “novelty bundling markets” are both instances of the latter process of copying and adoption of decision rules. Social network markets occur when agents use a “copy the most common” or “copy the highest rank” meta-level decision rule (Bentley et al.) to deal with uncertainty; a minimal simulation of such a rule is sketched below. Social network markets can be efficient aggregators of distributed information, but they can also be path-dependent, and usually lead to winner-take-all situations and dynamics. These can result in huge pay-off differentials between first and second or fifth place, even when the initial quality differentials are slight or random. Diversity, rapid experimentation, and “fast failure” are likely to be effective strategies. This also points to the role of trust and reputation in using adopted decision rules, and to the information economics that underlies that: namely that specialization and trade apply to the production and consumption of information as well as commodities. Novelty bundling markets are an entrepreneurial response to this problem, and are observable in a range of new media and creative industries contexts. These include arts, music or food festivals or fairs where entertainment and sociality are combined with low-opportunity-cost situations in which to try bundles of novelty and connect with experts. These are organised by agents who have developed expert preferences through investment and experience in consumption of the particular segment or domain. They are expert consumers and are selling their “decision rules” and not just the product. The more the production and consumption of media and digital information goods and services experience the Alchian-Allen effect, the greater the importance of novelty bundling markets.
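To illustrate the winner-take-all dynamics that a “copy the most common” rule can generate, here is a minimal simulation sketch. It is not from the paper; the number of products, number of agents and copying probability are arbitrary assumptions, and all products are identical in quality, so any skew in outcomes comes purely from the copying heuristic.

```python
# Minimal sketch (illustrative assumptions only) of a "copy the most common"
# decision rule producing winner-take-all dynamics among identical products.
import random
from collections import Counter

def simulate(n_products=20, n_agents=10_000, p_copy=0.7, seed=1):
    random.seed(seed)
    # Every product starts with one adoption, so early choices are near-random.
    adoptions = Counter({i: 1 for i in range(n_products)})
    for _ in range(n_agents):
        if random.random() < p_copy:
            # Social heuristic: copy the currently most-adopted product.
            choice = max(adoptions, key=adoptions.get)
        else:
            # Independent experimentation: pick any product at random.
            choice = random.randrange(n_products)
        adoptions[choice] += 1
    return adoptions.most_common()

if __name__ == "__main__":
    ranked = simulate()
    top, runner_up = ranked[0][1], ranked[1][1]
    total = sum(count for _, count in ranked)
    print("Top product share of adoptions: {:.1%}".format(top / total))
    print("Pay-off ratio, 1st vs 2nd place: {:.0f}x".format(top / runner_up))
```

Even though the products are indistinguishable, the option that happens to take an early lead ends up with the overwhelming share of adoptions, echoing the large pay-off differentials described above.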
Intellectual Property & Regulation A further implication is that rent-seeking solutions may also emerge. This can be seen in two dimensions: the pursuit of intellectual property (Boldrin and Levine), and the demand for regulations (Stigler). The Alchian-Allen-induced shift will affect markets and business models (and firms), and will therefore induce strategic defensive and aggressive responses from different organizations. Some organizations will seek to fight and adapt to this new world through innovative competition. Other firms will fight through political connections. Most incumbent firms will have substantial investments in IP or in the business model it supports. Yet the intellectual property model is optimized for high-quality, large-volume, centralized production and global sales of undifferentiated product. Much industrial and labour regulation is built on that model. How governments support such industries is predicated on the stability of this model. The Alchian-Allen effect threatens to upset that model. Political pushback will invariably take the form of opposing most new business models and the new entrants they carry. Conclusion I have presented here a lesser-known but important theorem in applied microeconomics – the Alchian-Allen effect – and explained why its inverse is central to understanding the evolution of new media industries, and also why cute animals proliferate on the Internet. The theorem states that when a fixed cost is added to substitute goods, consumers will shift to the higher quality item (now relatively less expensive). The theorem also holds in reverse: when a fixed cost is removed from substitute items, we expect a shift to lower quality consumption. The Internet has dramatically lowered the fixed costs of access to media consumption, and various development platforms have similarly lowered the costs of production. Alchian-Allen predicts a shift to lower-quality, “bittier”, cuter consumption (Cowen). References Alchian, Armen, and William Allen. Exchange and Production. 2nd ed. Belmont, CA: Wadsworth, 1967. Anderson, Chris. Makers. New York: Crown Business, 2012. Banks, John, and Jason Potts. "Consumer Co-Creation in Online Games." New Media and Society 12.2 (2010): 253-70. Beinhocker, Eric. Origin of Wealth. Cambridge, Mass.: Harvard University Press, 2005. Bentley, R., et al. "Regular Rates of Popular Culture Change Reflect Random Copying." Evolution and Human Behavior 28 (2007): 151-158. Borcherding, Thomas, and Eugene Silberberg. "Shipping the Good Apples Out: The Alchian and Allen Theorem Reconsidered." Journal of Political Economy 86.1 (1978): 131-6. Cowen, Tyler. Create Your Own Economy. New York: Dutton, 2009. (Also published as The Age of the Infovore: Succeeding in the Information Economy. Penguin, 2010.) Cowen, Tyler, and Alexander Tabarrok. "Good Grapes and Bad Lobsters: The Alchian and Allen Theorem Revisited." Economic Inquiry 33.2 (1995): 253-6. Cuellar, Steven. "Sex, Drugs and the Alchian-Allen Theorem." Unpublished paper, 2005. 29 Apr. 2014 ‹http://www.sonoma.edu/users/c/cuellar/research/Sex-Drugs.pdf›. Earl, Peter. The Economic Imagination. Cheltenham: Harvester Wheatsheaf, 1986. Earl, Peter, and Jason Potts. "The Market for Preferences." Cambridge Journal of Economics 28 (2004): 619-33. Eid, Jean, Travis Ng, and Terence Tai-Leung Chong. "Shipping the Good Horses Out." Working paper, 2012. ‹http://homes.chass.utoronto.ca/~ngkaho/Research/shippinghorses.pdf›. Potts, Jason, et al.
"Social Network Markets: A New Definition of Creative Industries." Journal of Cultural Economics 32.3 (2008): 166-185. Quiggin, John, and Jason Potts. "Economics of Non-Market Innovation & Digital Literacy." Media International Australia 128 (2008): 144-50. Razzolini, Laura, William Shughart, and Robert Tollison. "On the Third Law of Demand." Economic Inquiry 41.2 (2003): 292–298. Staten, Michael, and John Umbeck. “Shipping the Good Students Out: The Effect of a Fixed Charge on Student Enrollments.” Journal of Economic Education 20.2 (1989): 165-171. Stigler, George. "The Theory of Economic Regulation." Bell Journal of Economics 2.1 (1971): 3-22. Thornton, Mark. The Economics of Prohibition. Salt Lake City: University of Utah Press, 1991.Thornton, Mark. "The Potency of Illegal Drugs." Journal of Drug Issues 28.3 (1998): 525-40.
APA, Harvard, Vancouver, ISO, and other styles
47

Goggin, Gerard. "Innovation and Disability." M/C Journal 11, no. 3 (July 2, 2008). http://dx.doi.org/10.5204/mcj.56.

Full text
Abstract:
Critique of Ability In July 2008, we could be on the eve of an enormously important shift in disability in Australia. One sign of change is the entry into force on 3 May 2008 of the United Nations Convention on the Rights of Persons with Disabilities, which will now be adopted by the Rudd Labor government. Through this, and other proposed measures, the Rudd government has indicated its desire for a sea change in the area of disability. Bill Shorten MP, the new Parliamentary Secretary for Disabilities and Children’s Services, has been at pains to underline his commitment to a rights-based approach to disability. In his inaugural speech to Parliament, Shorten declared: I believe the challenge for government is not to fit people with disabilities around programs but for programs to fit the lives, needs and ambitions of people with disabilities. The challenge for all of us is to abolish once and for all the second-class status that too often accompanies Australians living with disabilities. (Shorten, “Address in reply”; see also Shorten, “Speaking up”) Yet if we listen to the voices of people with disability, we face fundamental issues of justice, democracy, equality and how we understand the deepest aspects of ourselves and our community. This is a situation that remains dire and palpably unjust, as many people with disabilities have attested. Elsewhere I have argued (Goggin and Newell) that disability constitutes a systemic form of exclusion and othering tantamount to a “social apartheid”. While there have been improvements and small gains since then, the system that reigns in Australia is still fundamentally oppressive. Nonetheless, I would suggest that through the rise of the many-stranded movements of disability, and the demographic, economic and social changes concerning impairment, we are seeing significant changes in how we understand impairment and ability (Barnes, Oliver and Barton; Goggin and Newell, Disability in Australia; Snyder, Brueggemann, and Garland-Thomson; Shakespeare; Stiker). There is now considerable, if still incomplete, recognition of disability as a category that is constituted through social, cultural, and political logics, as well as through complex facets of impairment, bodies (Corker and Shakespeare), experiences, discourses (Fulcher), and modes of materiality and subjectivity (Butler), identity and government (Tremain). There is also growing awareness of the imbrication of disability and other categories such as sex and gender (Fine and Asch; Thomas), race, age, culture, class and distribution of wealth (Carrier; Cole; Davis, Bending over Backwards, and Enforcing Normalcy; Oliver; Rosenblum and Travis), ecology and war (Bourke; Gerber; Muir). There are rich and wide-ranging debates that offer fundamental challenges to the suffocating grip of the dominant biomedical model of disability (that conceives disability as individual deficit – for early critiques see Borsay; Walker), as well as the still influential and important (if at times limiting) social model of disability (Oliver; Barnes and Mercer; Shakespeare). All in all, there have been many efforts to transform the social and political relations of disability. If disability has been subject to considerable examination, there has not yet been an extended, concomitant critique of ability. Nor have we witnessed a thoroughgoing recognition of unmarked, yet powerful operations of ability in our lives and thought, and the potential implications of challenging these.
Certainly there have been important attempts to reframe the relationship between “ability” and “disability” (for example, see Jones and Mark). And we are all familiar with the mocking response to some neologisms that seek to capture this, such as the awkward yet pointed “differently-abled.” Despite such efforts we lack still a profound critique of ability, an exploration of “able”, the topic that this special issue invites us to consider. If we think of the impact and significance of “whiteness”, as a way to open up space for how to critically think about and change concepts of race; or of “masculinity” as a project for thinking about gender and sexuality — we can see that this interrogation of the unmarked category of “able” and “ability” is much needed (for one such attempt, see White). In this paper I would like to make a small contribution to such a critique of ability, by considering what the concept of innovation and its contemporary rhetorics have to offer for reframing disability. Innovation is an important discourse in contemporary life. It offers interesting possibilities for rethinking ability — and indeed disability. And it is this relatively unexplored prospect that this paper seeks to explore. Beyond Access, Equity & Diversity In this scene of disability, there is attention being given to making long over-due reforms. Yet the framing of many of these reforms, such as the strengthening of national and international legal frameworks, for instance, also carry with them considerable problems. Disability is too often still seen as something in need of remediation, or special treatment. Access, equity, and anti-discrimination frameworks offer important resources for challenging this “special” treatment, so too do the diversity approaches which have supplemented or supplanted them (Goggin and Newell, “Diversity as if Disability Mattered”). In what new ways can we approach disability and policies relevant to it? In a surprisingly wide range of areas, innovation has featured as a new, cross-sectoral approach. Innovation has been a long-standing topic in science, technology and economics. However, its emergence as master-theme comes from its ability to straddle and yoke together previously diverse fields. Current discussions of innovation bring together and extend work on the information society, the knowledge economy, and the relationships between science and technology. We are now familiar for instance with arguments about how digital networked information and communications technologies and their consumption are creating new forms of innovation (Benkler; McPherson; Passiante, Elia, and Massari). Innovation discourse has extended to many other unfamiliar realms too, notably the area of social and community development, where a new concept of social innovation is now proposed (Mulgan), often aligned with new ideas of social entrepreneurship that go beyond earlier accounts of corporate social responsibility. We can see the importance of innovation in the ‘creative industries’ discourses and initiatives which have emerged since the 1990s. Here previously distinct endeavours of arts and culture have become reframed in a way that puts their central achievement of creativity to the fore, and recognises its importance across all sorts of service and manufacturing industries, in particular. 
More recently, theorists of creative industries, such as Cunningham, have begun to talk about “social network markets,” as a way to understand the new hybrid of creativity, innovation, digital technology, and new economic logics now being constituted (Cunningham and Potts). Innovation is being regarded as a cardinal priority for societies and their governments. Accordingly, the Australian government has commissioned a Review of The National Innovation System, led by Dr Terry Cutler, due to report in the second half of 2008. The Cutler review is especially focussed upon gaps and weaknesses in the Australian innovation system. Disability has the potential to figure very strongly in this innovation talk, however there has been little discussion of disability in the innovation discourse to date. The significance of disability in relation to innovation was touched upon some years ago, in a report on Disablism from the UK Demos Foundation (Miller, Parker and Gillinson). In a chapter entitled “The engine of difference: disability, innovation and creativity,” the authors discuss the area of inclusive design, and make the argument for the “involvement of disabled people to create a stronger model of user design”:Disabled people represented a market of 8.6 million customers at the last count and their experiences aren’t yet feeding through into processes of innovation. But the role of disabled people as innovators can and should be more active; we should include disabled people in the design process because they are good at it. (57) There are two reasons given for this expertise of disabled people in design. Firstly, “disabled people are often outstanding problem solvers because they have to be … life for disabled people at the moment is a series of challenges to be overcome” (57). Secondly, “innovative ideas are more likely to come from those who have a new or different angle on old problems” (57). The paradox in this argument is that as life becomes more equitable for people with disabilities, then these ‘advantages’ should disappear” (58). Accordingly, Miller et al. make a qualified argument, namely that “greater participation of disabled people in innovation in the short term may just be the necessary trigger for creating an altogether different, and better, system of innovation for everyone in the future” (58). The Demos Disablism report was written at a time when rhetorics of innovation were just beginning to become more generalized and mainstream. This was also at a time in the UK, when there was hope that new critical approaches to disability would see it become embraced as a part of the diverse society that Blair’s New Labor Britain had been indicating. The argument Disablism offers about disability and innovation is in some ways a more formalized version of vernacular theory (McLaughlin, 1996). In the disability movement we often hear, with good reason, that people with disability, by dint of their experience and knowledge are well positioned to develop and offer particular kinds of expertise. However, Miller et al. also gesture towards a more generalized account of disability and innovation, one that would intersect with the emerging frameworks around innovation. It is this possibility that I wish to take up and briefly explore here. I want to consider the prospects for a fully-fledged encounter between disability and innovation. I would like to have a better sense of whether this is worth pursuing, and what it would add to our understanding of both disability and innovation? 
Would the disability perspective be integrated as a long-term part of our systems of innovation rather than, as Miller et al. imply, deployed temporarily to develop better innovation systems? What pitfalls might be bound up with, or indeed be the conditions of, such a union between disability and innovation? The All-Too-Able User A leading area where disability figures profoundly in innovation is in the field of technology — especially digital technology. There is now a considerable literature and body of practice on disability and digital technology (Annable, Goggin, and Stienstra; Goggin and Newell, Digital Disability; National Council on Disability), however for my purposes here I would like to focus upon the user, the abilities ascribed to various kinds of users, and the user with disability in particular. Digital technologies are replete with challenges and opportunities; they are multi-layered, multi-media, and global in their manifestation and function. In Australia, Britain, Canada, the US, and Europe, there have been some significant digital technology initiatives which have resulted in improved accessibility for many users and populations (Annable, Goggin, and Stienstra; National Council on Disability) . There are a range of examples of ways in which users with disability are intervening and making a difference in design. There is also a substantial body of literature that clarifies why we need to include the perspective of the disabled if we are to be truly innovative in our design practices (Annable, Goggin and Stienstra; Goggin and Newell, “Disability, Identity and Interdependence”). I want to propose, however, that there is merit in going beyond recognition of the role of people with disability in technology design (vital and overlooked as it remains), to consider how disability can enrich contemporary discourses on innovation. There is a very desirable cross-over to be promoted between the emphasis on the user-as-expert in the sphere of disability and technology, and on the integral role of disability groups in the design process, on the one hand, and the rise of the user in digital culture generally, on the other. Surprisingly, such connections are nowhere near as widespread and systematic as they should be. It may be that contemporary debates about the user, and about the user as co-creator, or producer, of technology (Haddon et al.; von Hippel) actually reinstate particular notions of ability, and the able user, understood with reference to notions of disability. The current emphasis on the productive user, based as it is on changing understandings of ability and disability, provides rich material for critical revision of the field and those assumptions surrounding ability. It opens up possibilities for engaging more fully with disability and incorporating disability into the new forms and relations of digital technology that celebrate the user (Goggin and Newell, Digital Disability). While a more detailed consideration of these possibilities require more time than this essay allows, let us consider for a moment the idea of a genuine encounter between the activated user springing from the disability movement, and the much feted user in contemporary digital culture and theories of innovation. People with disability are using these technologies in innovative ways, so have much to contribute to wider discussions of digital technology (Annable, Goggin and Stienstra). 
The Innovation Turn Innovation policy, the argument goes, is important because it stands to increase productivity, which in turn leads to greater international competitiveness and economic benefit. Especially with the emergence of capitalism (Gleeson), productivity has strong links to particular notions of which types of production and produce are valued. Productivity is also strongly conditioned by how we understand ability and, last in a long chain of strong associations, how we as a society understand and value those kinds of people and bodies believed to contain and exercise the ordained and rewarded types of ability, produce, and productivity. Disability is often seen as antithetical to productivity (a revealing text on the contradictions of disability and productivity is the 2004 Productivity Commission Review of the Disability Discrimination Act). When we think about the history of disability, we quickly realize that productivity, and by extension, innovation, are strongly ideological. Ideological, that is, in the sense that these fields of human endeavour and our understanding of them are shaped by power relations, and are built upon implicit ‘ableist’ assumptions about productivity. In this case, the power relations of disability go right to the heart of the matter, highlighting who and what are perceived to be of value, contributing economically and in other ways to society, and who and what are considered as liabilities, as less valued and uneconomical. A stark recent example of this is the Howard government workplace and welfare reforms, which further disenfranchised, controlled, and impoverished people with disability. If we need to rethink our ideas of productivity and ability in the light of new notions of disability, then so too do we need to rethink our ideas about innovation and disability. Here the new discourses of innovation may actually be useful, but also contain limited formulations and assumptions about ability and disability that need to be challenged. The existing problems of a fresh approach to disability and innovation can be clearly observed in the touchstones of national science and technology “success.” Beyond One-Sided Innovation Disability does actually feature quite prominently in the annals of innovation. Take, for instance, the celebrated case of the so-called “bionic ear” (or cochlear implant) hailed as one of Australia’s great scientific inventions of the past few decades. This is something we can find on display in the Powerhouse Museum of Technology and Design, in Sydney. Yet the politics of the cochlear implant are highly controversial, not least as it is seen by many (for instance, large parts of the Deaf community) as not involving people with disabilities, nor being informed by their desires (Campbell, also see “Social and Ethical Aspects of Cochlear Implants”). A key problem with the cochlear implant and many other technologies is that they are premised on the abolition or overcoming of disability — rather than being shaped as technology that acknowledges and is informed by disabled users in their diverse guises. The failure to learn the lessons of the cochlear implant for disability and innovation can be seen in the fact that we are being urged now to band together to support the design of a “bionic eye” by the year 2020, as a mark of distinction of achieving a great nation (2020 Summit Initial Report). Again, there is no doubting the innovation and achievement in these artefacts and their technological systems. 
But their development has been marked by a distinct lack of consultation and engagement with people with disabilities; or rather the involvement has been limited to a framework that positions them as passive users of technology, rather than as “producer/users”. Further, what notions of disability and ability are inscribed in these technological systems, and what do they represent and symbolize in the wider political and social field? Unfortunately, such technologies have the effect of reproducing an ableist framework, “enforcing normalcy” (Davis), rather than building in, creating and contributing to new modes of living, which embrace difference and diversity. I would argue that this represents a one-sided logic of innovation. A two-sided logic of innovation, indeed what we might call a double helix (at least) of innovation would be the sustained, genuine interaction between different users, different notions of ability, disability and impairment, and the processes of design. If such a two-sided (or indeed many-sided logic) is to emerge there is good reason to think it could more easily do so in the field of digital cultures and technologies, than say, biotechnology. The reason for this is the emphasis in digital communication technologies on decentralized, participatory, user-determined governance and design, coming from many sources. Certainly this productive, democratic, participatory conception of the user is prevalent in Internet cultures. Innovation here is being reshaped to harness the contribution and knowledge of users, and could easily be extended to embrace pioneering efforts in disability. Innovating with Disability In this paper I have tried to indicate why it is productive for discourses of innovation to consider disability; the relationship between disability and innovation is rich and complex, deserving careful elaboration and interrogation. In suggesting this, I am aware that there are also fundamental problems that innovation raises in its new policy forms. There are the issues of what is at stake when the state is redefining its traditional obligations towards citizens through innovation frameworks and discourses. And there is the troubling question of whether particular forms of activity are normatively judged to be innovative — whereas other less valued forms are not seen as innovative. By way of conclusion, however, I would note that there are now quite basic, and increasingly accepted ways, to embed innovation in design frameworks, and while they certainly have been adopted in the disability and technology area, there is much greater scope for this. However, a few things do need to change before this potential for disability to enrich innovation is adequately realized. Firstly, we need further research and theorization to clarify the contribution of disability to innovation, work that should be undertaken and directed by people with disability themselves. Secondly, there is a lack of resources for supporting disability and technology organisations, and the development of training and expertise in this area (especially to provide viable career paths for experts with disability to enter the field and sustain their work). If this is addressed, the economic benefits stand to be considerable, not to mention the implications for innovation and productivity. 
Thirdly, we need to think about how we can intensify existing systems of participatory design, or, better still, introduce new user-driven approaches into strategically important places in the design processes of ICTs (and indeed in the national innovation system). Finally, there is an opportunity for new approaches to governance in ICTs at a general level, informed by disability. New modes of organising, networking, and governance associated with digital technology have attracted much attention, also featuring recently in the Australia 2020 Summit. Less well recognised are new ideas about governance that come from the disability community, such as the work of Queensland Advocacy Incorporated, Rhonda Galbally’s Our Community, disability theorists such as Christopher Newell (Newell), or the Canadian DIS-IT alliance (see, for instance, Stienstra). The combination of new ideas in governance from digital culture, new ideas from the disability movement and disability studies, and new approaches to innovation could be a very powerful cocktail indeed.Dedication This paper is dedicated to my beloved friend and collaborator, Professor Christopher Newell AM (1964-2008), whose extraordinary legacy will inspire us all to continue exploring and questioning the idea of able. References Abberley, Paul. “The Concept of Oppression and the Development of a Social Theory of Disability.” Disability, Handicap & Society 2.1 (1987): 5–20. Annable, Gary, Gerard Goggin, and Deborah Stienstra, eds. “Accessibility and Inclusion in Information Technologies.” Special issue of The Information Society 23.3 (2007): 145-147. Australia 2020 Summit. Australia 2020 Summit — Initial Report. Commonwealth of Australia 20 April 2008. 15 May 2008 ‹http://www.australia2020.gov.au/docs/2020_Summit_initial_report.doc›. Barnes, Colin, and Geoff Mercer, eds. Implementing the Social Model of Disability: Theory and Research. Leeds: The Disability Press, 2004. Barnes, Colin, Mike Oliver, and Len Barton, eds. Disability Studies Today. Cambridge: Polity Press, 2002. Benkler, Yochai. The Wealth of Networks: How Social Production Transforms Markets and Freedom. New Haven, CT: Yale University Press, 2006. Borsay, Anne. “Personal Trouble or Public Issue? Toward a Model of Policy for People with Physical and Mental Disabilities.” Disability, Handicap and Society 1.2 (1986): 179-195. Bourke, Joanna. Dismembering the Male: Men’s Bodies, Britain and the Great War. Chicago: University of Chicago Press, 1996. Butler, Judith. Bodies that Matter: On the Discursive Limits of “Sex.” London: Routledge, 1993. Campbell, Fiona. “Selling the Cochlear Implant.” Disability Studies Quarterly 25.3 (2005). ‹http://www.dsq-sds-archives.org/_articles_html/2005/summer/campbell.asp›. Carrier, James G. Learning Disability: Social Class and the Construction of Inequality in American Education. New York: Greenword Press, 1986. Cole, Mike, ed. Education, Equality and Human Rights: Issues of Gender, ‘Race’, Sexuality, Disability and Social Class. London and New York: Routledge, 2006. Corker, Mairean, and Tom Shakespeare, eds. Disability/Postmodernity: Embodying Disability Theory. London: Continuum, 2002. Davis, Lennard J. Bending Over Backwards: Disability, Dismodernism, and other Difficult Positions. New York, NY: New York University Press, 2002. ———. Enforcing Normalcy: Disability, Deafness and the Body. London: Verso, 1995. Fine, Michelle, and Adrienne Asch, eds. Women with Disabilities: Essays in Psychology, Culture, and Politics. 
Philadelphia: Temple University Press, 1988. Fulcher, Gillian. Disabling Policies? London: Falmer Press, 1989. Gerber, David A., ed. Disabled Veterans in History. Ann Arbor, MI: University of Michigan Press, 2000. Gleeson, Brendan. Geographies of Disability. London and New York: Routledge, 1999. Goggin, Gerard, and Christopher Newell. Digital Disability: The Social Construction of Disability in New Media. Lanham, MD: Rowman & Littlefield, 2003. ———. Disability in Australia: Exposing a Social Apartheid. Sydney: University of New South Wales Press, 2005. ———, eds. “Disability, Identity, and Interdependence: ICTs and New Social Forms.” Special issue of Information, Communication & Society 9.3 (2006). ———. “Diversity as if Disability Mattered.” Australian Journal of Communication 30.3 (2003): 1-6. ———, eds. “Technology and Disability.” Special double issue of Disability Studies Quarterly 25.2-3 (2005). Haddon, Leslie, Enid Mante, Bartolomeo Sapio, Kari-Hans Kommonen, Leopoldina Fortunati, and Annevi Kant, eds. Everyday Innovators: Researching the Role of Users in Shaping ICTs. London: Springer, 2005. Jones, Melinda, and Anne Basser Marks Lee, eds. Disability, Divers-ability and Legal Change. The Hague: Martinus Nijhoff, 1999. McLaughlin, Thomas. Street Smarts and Critical Theory: Listening to the Vernacular. Madison: University of Wisconsin Press, 1996. McPherson, Tara, ed. Digital Youth, Innovation, and the Unexpected. Cambridge, MA: MIT Press, 2008. Meekosha, Helen. “Drifting Down the Gulf Stream: Navigating the Cultures of Disability Studies.” Disability & Society 19.7 (2004): 721-733. Miller, Paul, Sophia Parker, and Sarah Gillinson. Disablism: How to Tackle the Last Prejudice. London: Demos, 2004. ‹http://www.demos.co.uk/publications/disablism›. Mulgan, Geoff. “The Process of Social Innovation.” Innovations 1.2 (2006): 145-62. Muir, Kristy. “‘That Bastard’s Following Me!’ Mentally Ill Australian Veterans Struggling to Maintain Control.” Social Histories of Disability and Deformity. Ed. in David M. Turner and Kevin Stagg. New York: Routledge. 161-74. National Council on Disability (NCD). Design for Inclusion: Creating a New Marketplace. Washington: NCD, 2004. Newell, Christopher. “Debates Regarding Governance: A Disability Perspective.” Disability & Society 13.2 (1998): 295-296. Oliver, Michael. The Politics of Disablement: A Sociological Approach. New York: St. Martin’s Press, 1990. Passiante, Giuseppina, Valerio Elia, and Tommaso Massari, eds. Digital Innovation: Innovation Processes in Virtual Clusters and Digital Regions. London: Imperial College Press, 2003. Productivity Commission. Review of the Disability Discrimination Act 1992. Melbourne: Productivity Commission, 2004. ‹http://www.pc.gov.au/inquiry/dda/docs/finalreport›. Shakespeare, Tom. Disability Rights and Wrongs. New York: Routledge, 2006. Shorten, Bill. Address-in-Reply, Governor-General’s Speech. Hansard 14 Feb. 2008: 328-333. ———. “Speaking Up for True Battlers.” Daily Telegraph 12 March 2008. ‹http://www.billshorten.com.au/press/index.cfm?Fuseaction=pressreleases_full&ID=1328›. Snyder, Sharon L., Brenda Brueggemann, and Rosemary Garland-Thomson, eds. Disability Studies: Enabling the Humanities. New York: Modern Language Association of America, 2002. Stienstra, Deborah. “The Critical Space Between: Access, Inclusion and Standards in Information Technologies.” Information, Communication & Society 9.3 (2006): 335-354. Stiker, Henri-Jacques. A History of Disability. Trans. William Sayers. 
Ann Arbor: University of Michigan Press, 1999. Thomas, Carol. Female Forms: Experiencing and Understanding Disability. Buckingham: Open University, 1999. Rosenblum, Karen E., and Toni-Michelle C. Travis, eds. The Meaning of Difference: American Constructions of Race, Sex and Gender, Social Class, Sexual Orientation, and Disability. New York, NY: McGraw-Hill, 2008. Von Hippel, Eric. Democratizing Innovation. Cambridge, MA: MIT Press, 2005. Walker, Alan. “The Social Origins of Impairment, Disability and Handicap.” Medicine and Society 6.2-3 (1980): 18-26. White, Michele. “Where Do You Want to Sit Today: Computer Programmers’ Static Bodies and Disability.” Information, Communication and Society 9.3 (2006): 396-416.
APA, Harvard, Vancouver, ISO, and other styles
48

Acland, Charles. "Matinees, Summers and Opening Weekends." M/C Journal 3, no. 1 (March 1, 2000). http://dx.doi.org/10.5204/mcj.1824.

Full text
Abstract:
Newspapers and the 7:15 Showing Cinemagoing involves planning. Even in the most impromptu instances, one has to consider meeting places, line-ups and competing responsibilities. One arranges child care, postpones household chores, or rushes to finish meals. One must organise transportation and think about routes, traffic, parking or public transit. And during the course of making plans for a trip to the cinema, whether alone or in the company of others, typically one turns to locate a recent newspaper. Consulting its printed page lets us ascertain locations, a selection of film titles and their corresponding show times. In preparing to feed a cinema craving, we burrow through a newspaper to an entertainment section, finding a tableau of information and promotional appeals. Such sections compile the mini-posters of movie advertisements, with their truncated credits, as well as various reviews and entertainment news. We see names of shopping malls doubling as names of theatres. We read celebrity gossip that may or may not pertain to the film selected for that occasion. We informally rank viewing priorities ranging from essential theatrical experiences to those that can wait for the videotape release. We attempt to assess our own mood and the taste of our filmgoing companions, matching up what we suppose are appropriate selections. Certainly, other media vie to supplant the newspaper's role in cinemagoing; many now access on-line sources and telephone services that offer the crucial details about start times. Nonetheless, as a campaign by the Newspaper Association of America in Variety aimed to remind film marketers, 80% of cinemagoers refer to newspaper listings for times and locations before heading out. The accuracy of that association's statistics notwithstanding, for the moment, the local daily or weekly newspaper has a secure place in the routines of cinematic life. A basic impetus for the newspaper's role is its presentation of a schedule of show times. Whatever the venue -- published, phone or on-line -- it strikes me as especially telling that schedules are part of the ordinariness of cinemagoing. To be sure, there are those who decide what film to see on site. Anecdotally, I have had several people comment recently that they no longer decide what movie to see, but where to see a (any) movie. Regardless, the schedule, coupled with the theatre's location, figures as a point of coordination for travel through community space to a site of film consumption. The choice of show time is governed by countless demands of everyday life. How often has the timing of a film -- not the film itself, the theatre at which it's playing, nor one's financial situation -- determined one's attendance? How familiar is the assessment that show times are such that one cannot make it, that the film begins a bit too early, that it will run too late for whatever reason, and that other tasks intervene to take precedence? I want to make several observations related to the scheduling of film exhibition. Most generally, it makes manifest that cinemagoing involves an exercise in the application of cinema knowledge -- that is, minute, everyday facilities and familiarities that help orchestrate the ordinariness of cultural life. Such knowledge informs what Michel de Certeau characterises as "the procedures of everyday creativity" (xiv). Far from random, the unexceptional decisions and actions involved with cinemagoing bear an ordering and a predictability.
Novelty in audience activity appears, but it is alongside fairly exact expectations about the event. The schedule of start times is essential to the routinisation of filmgoing. Displaying a Fordist logic of streamlining commodity distribution and the time management of consumption, audiences circulate through a machine that shapes their constituency, providing a set time for seating, departure, snack purchases and socialising. Even with the staggered times offered by multiplex cinemas, schedules still lay down a fixed template around which other activities have to be arrayed by the patron. As audiences move to and through the theatre, the schedule endeavours to regulate practice, making us the subjects of a temporal grid, a city context, a cinema space, as well as of the film itself. To be sure, one can arrive late and leave early, confounding the schedule's disciplining force. Most importantly, with or without such forms of evasion, it channels the actions of audiences in ways that consideration of the gaze cannot address. Taking account of the scheduling of cinema culture, and its implication of adjunct procedures of everyday life, points to dimensions of subjectivity neglected by dominant theories of spectatorship. To be the subject of a cinema schedule is to understand one assemblage of the parameters of everyday creativity. It would be foolish to see cinema audiences as cattle, herded and processed alone, in some crude Gustave LeBon fashion. It would be equally foolish not to recognise the manner in which film distribution and exhibition operates precisely by constructing images of the activity of people as demographic clusters and generalised cultural consumers. The ordinary tactics of filmgoing are supplemental to, and run alongside, a set of industrial structures and practices. While there is a correlation between a culture industry's imagined audience and the life that ensues around its offerings, we cannot neglect that, as attention to film scheduling alerts us, audiences are subjects of an institutional apparatus, brought into being for the reproduction of an industrial edifice. Streamline Audiences In this, film is no different from any culture industry. Film exhibition and distribution relies on an understanding of both the market and the product or service being sold at any given point in time. Operations respond to economic conditions, competing companies, and alternative activities. Economic rationality in this strategic process, however, only explains so much. This is especially true for an industry that must continually predict, and arguably give shape to, the "mood" and predilections of disparate and distant audiences. Producers, distributors and exhibitors assess which films will "work", to whom they will be marketed, as well as establish the very terms of success. Without a doubt, much of the film industry's attentions act to reduce this uncertainty; here, one need only think of the various forms of textual continuity (genre films, star performances, etc.) and the economies of mass advertising as ways to ensure box office receipts. Yet, at the core of the operations of film exhibition remains a number of flexible assumptions about audience activity, taste and desire. These assumptions emerge from a variety of sources to form a brand of temporary industry "commonsense", and as such are harbingers of an industrial logic. 
Ien Ang has usefully pursued this view in her comparative analysis of three national television structures and their operating assumptions about audiences. Broadcasters streamline and discipline audiences as part of their organisational procedures, with the consequence of shaping ideas about consumers as well as assuring the reproduction of the industrial structure itself. She writes, "institutional knowledge is driven toward making the audience visible in such a way that it helps the institutions to increase their power to get their relationship with the audience under control, and this can only be done by symbolically constructing 'television audience' as an objectified category of others that can be controlled, that is, contained in the interest of a predetermined institutional goal" (7). Ang demonstrates, in particular, how various industrially sanctioned programming strategies (programme strips, "hammocking" new shows between successful ones, and counter-programming to a competitor's strengths) and modes of audience measurement grow out of, and invariably support, those institutional goals. And, most crucially, her approach is not an effort to ascertain the empirical certainty of "actual" audiences; instead, it charts the discursive terrain in which the abstract concept of audience becomes material for the continuation of industry practices. Ang's work tenders special insight to film culture. In fact, television scholarship has taken full advantage of exploring the routine nature of that medium, the best of which deploys its findings to lay bare configurations of power in domestic contexts. One aspect has been television time and schedules. For example, David Morley points to the role of television in structuring everyday life, discussing a range of research that emphasises the temporal dimension. Alerting us to the non-necessary determination of television's temporal structure, he comments that we "need to maintain a sensitivity to these micro-levels of division and differentiation while we attend to the macro-questions of the media's own role in the social structuring of time" (265). As such, the negotiation of temporal structures implies that schedules are not monolithic impositions of order. Indeed, as Morley puts it, they "must be seen as both entering into already constructed, historically specific divisions of space and time, and also as transforming those pre-existing divisions" (266). Television's temporal grid has been addressed by others as well. Paddy Scannell characterises scheduling and continuity techniques, which link programmes, as a standardisation of use, making radio and television predictable, 'user friendly' media (9). John Caughie refers to the organization of flow as a way to talk about the national particularities of British and American television (49-50). All, while making their own contributions, appeal to a detailing of viewing context as part of any study of audience, consumption or experience; uncovering the practices of television programmers as they attempt to apprehend and create viewing conditions for their audiences is a first step in this detailing. Why has a similar conceptual framework not been applied with the same rigour to film? Certainly the history of film and television's association with different, at times divergent, disciplinary formations helps us appreciate such theoretical disparities. I would like to mention one less conspicuous explanation.
It occurs to me that one frequently sees a collapse in the distinction between the everyday and the domestic; in much scholarship, the latter term appears as a powerful trope of the former. The consequence has been the absenting of a myriad of other -- if you will, non-domestic -- manifestations of everyday-ness, unfortunately encouraging a rather literal understanding of the everyday. The impression is that the abstractions of the everyday are reduced to daily occurrences. Simply put, my minor appeal is for the extension of this vein of television scholarship to out-of-home technologies and cultural forms, that is, other sites and locations of the everyday. In so doing, we pay attention to extra-textual structures of cinematic life; other regimes of knowledge, power, subjectivity and practice appear. Film audiences require a discussion about the ordinary, the calculated and the casual practices of cinematic engagement. Such a discussion would chart institutional knowledge, identifying operating strategies and recognising the creativity and multidimensionality of cinemagoing. What are the discursive parameters in which the film industry imagines cinema audiences? What are the related implications for the structures in which the practice of cinemagoing occurs? Vectors of Exhibition Time One set of those structures of audience and industry practice involves the temporal dimension of film exhibition. In what follows, I want to speculate on three vectors of the temporality of cinema spaces (meaning that I will not address issues of diegetic time). Note further that my observations emerge from a close study of industrial discourse in the U.S. and Canada. I would be interested to hear how they are manifest in other continental contexts. First, the running times of films encourage turnovers of the audience during the course of a single day at each screen. The special event of lengthy anomalies has helped mark the epic, and the historic, from standard fare. As discussed above, show times coordinate cinemagoing and regulate leisure time. Knowing the codes of screenings means participating in an extension of the industrial model of labour and service management. Running times incorporate more texts than the feature presentation alone. Besides the history of double features, there are now advertisements, trailers for coming attractions, trailers for films now playing in neighbouring auditoriums, promotional shorts demonstrating new sound systems, public service announcements, reminders to turn off cell phones and pagers, and the exhibitor's own signature clips. A growing focal point for filmgoing, these introductory texts received a boost in 1990, when the Motion Picture Association of America changed its standards for the length of trailers, boosting it from 90 seconds to a full two minutes (Brookman). This intertextuality needs to be supplemented by a consideration of inter- media appeals. For example, advertisements for television began appearing in theatres in the 1990s. And many lobbies of multiplex cinemas now offer a range of media forms, including video previews, magazines, arcades and virtual reality games. Implied here is that motion pictures are not the only media audiences experience in cinemas and that there is an explicit attempt to integrate a cinema's texts with those at other sites and locations. 
Thus, an exhibitor's schedule accommodates an intertextual strip, offering a limited parallel to Raymond Williams's concept of "flow", which he characterised by stating -- quite erroneously -- "in all communication systems before broadcasting the essential items were discrete" (86-7). Certainly, the flow between trailers, advertisements and feature presentations is not identical to that of the endless, ongoing text of television. There are not the same possibilities for "interruption" that Williams emphasises with respect to broadcasting flow. Further, in theatrical exhibition, there is an end-time, a time at which there is a public acknowledgement of the completion of the projected performance, one that necessitates vacating the cinema. This end-time is a moment at which the "rental" of the space has come due; and it harkens a return to the street, to the negotiation of city space, to modes of public transit and the mobile privatisation of cars. Nonetheless, a schedule constructs a temporal boundary in which audiences encounter a range of texts and media in what might be seen as limited flow. Second, the ephemerality of audiences -- moving to the cinema, consuming its texts, then passing the seat on to someone else -- is matched by the ephemerality of the features themselves. Distributors' demand for increasing numbers of screens necessary for massive, saturation openings has meant that films now replace one another more rapidly than in the past. Films that may have run for months now expect weeks, with fewer exceptions. Wider openings and shorter runs have created a cinemagoing culture characterised by flux. The acceleration of the turnover of films has been made possible by the expansion of various secondary markets for distribution, most importantly videotape, splintering where we might find audiences and multiplying viewing contexts. Speeding up the popular in this fashion means that the influence of individual texts can only be truly gauged via cross-media scrutiny. Short theatrical runs are not axiomatically designed for cinemagoers anymore; they can also be intended to attract the attention of video renters, purchasers and retailers. Independent video distributors, especially, "view theatrical release as a marketing expense, not a profit center" (Hindes & Roman 16). In this respect, we might think of such theatrical runs as "trailers" or "loss leaders" for the video release, with selected locations for a film's release potentially providing visibility, even prestige, in certain city markets or neighbourhoods. Distributors are able to count on some promotion through popular consumer- guide reviews, usually accompanying theatrical release as opposed to the passing critical attention given to video release. Consequently, this shapes the kinds of uses an assessment of the current cinema is put to; acknowledging that new releases function as a resource for cinema knowledge highlights the way audiences choose between and determine big screen and small screen films. Taken in this manner, popular audiences see the current cinema as largely a rough catalogue to future cultural consumption. Third, motion picture release is part of the structure of memories and activities over the course of a year. New films appear in an informal and ever-fluctuating structure of seasons. The concepts of summer movies and Christmas films, or the opening weekends that are marked by a holiday, sets up a fit between cinemagoing and other activities -- family gatherings, celebrations, etc. 
Further, this fit is presumably resonant for both the industry and popular audiences alike, though certainly for different reasons. The concentration of new films around visible holiday periods results in a temporally defined dearth of cinemas; an inordinate focus upon three periods in the year in the U.S. and Canada -- the last weekend in May, June/July/August and December -- creates seasonal shortages of screens (Rice-Barker 20). In fact, the boom in theatre construction through the latter half of the 1990s was, in part, to deal with those short-term shortages and not some year-round inadequacy of seating. Configurations of releasing colour a calendar with the tactical manoeuvres of distributors and exhibitors. Releasing provides a particular shape to the "current cinema", a term I employ to refer to a temporally designated slate of cinematic texts characterised most prominently by their newness. Television arranges programmes to capitalise on flow, to carry forward audiences and to counter-programme competitors' simultaneous offerings. Similarly, distributors jostle with each other, with their films and with certain key dates, for the limited weekends available, hoping to match a competitor's film intended for one audience with one intended for another. Industry reporter Leonard Klady sketched some of the contemporary truisms of releasing based upon the experience of 1997. He remarks upon the success of moving Liar Liar (Tom Shadyac, 1997) to a March opening and the early May openings of Austin Powers: International Man of Mystery (Jay Roach, 1997) and Breakdown (Jonathan Mostow, 1997), generally seen as not desirable times of the year for premieres. He cautions against opening two films the same weekend, and thus competing with yourself, using the example of Fox's Soul Food (George Tillman, Jr., 1997) and The Edge (Lee Tamahori, 1997). While distributors seek out weekends clear of films that would threaten to overshadow their own, Klady points to the exception of two hits opening on the same date of December 19, 1997 -- Tomorrow Never Dies (Roger Spottiswoode, 1997) and Titanic (James Cameron, 1997). Though but a single opinion, Klady's observations are a peek into a conventional strain of strategising among distributors and exhibitors. Such planning for the timing and appearance of films is akin to the programming decisions of network executives. And I would venture to say that digital cinema, reportedly -- though unlikely -- just on the horizon and in which texts will be beamed to cinemas via satellite rather than circulated in prints, will only augment this comparison; releasing will become that much more like programming, or at least will be conceptualised as such. To summarise, the first vector of exhibition temporality is the scheduling and running time; the second is the theatrical run; the third is the idea of seasons and the "programming" of openings. These are just some of the forces streamlining filmgoers; the temporal structuring of screenings, runs and film seasons provides a material contour to the abstraction of audience. Here, what I have delineated are components of an industrial logic about popular and public entertainment, one that offers a certain controlled knowledge about and for cinemagoing audiences. Shifting Conceptual Frameworks A note of caution is in order. I emphatically resist an interpretation that we are witnessing the becoming-film of television and the becoming-tv of film. 
Underneath the "inversion" argument is a weak brand of technological determinism, as though each asserts its own essential qualities. Such a pat declaration seems more in line with the mythos of convergence, and its quasi-Darwinian "natural" collapse of technologies. Instead, my point here is quite the opposite, that there is nothing essential or unique about the scheduling or flow of television; indeed, one does not have to look far to find examples of less schedule-dependent television. What I want to highlight is that application of any term of distinction -- event/flow, gaze/glance, public/private, and so on -- has more to do with our thinking, with the core discursive arrangements that have made film and television, and their audiences, available to us as knowable and different. So, using empirical evidence to slide one term over to the other is a strategy intended to supplement and destabilise the manner in which we draw conclusions, and even pose questions, of each. What this proposes is, again following the contributions of Ien Ang, that we need to see cinemagoing in its institutional formation, rather than some stable technological, textual or experiential apparatus. The activity is not only a function of a constraining industrial practice or of wildly creative patrons, but of a complex inter-determination between the two. Cinemagoing is an organisational entity harbouring, reviving and constituting knowledge and commonsense about film commodities, audiences and everyday life. An event of cinema begins well before the dimming of an auditorium's lights. The moment a newspaper is consulted, with its local representation of an internationally circulating current cinema, its listings belie a scheduling, an orderliness, to the possible projections in a given location. As audiences are formed as subjects of the current cinema, we are also agents in the continuation of a set of institutions as well. References Ang, Ien. Desperately Seeking the Audience. New York: Routledge, 1991. Brookman, Faye. "Trailers: The Big Business of Drawing Crowds." Variety 13 June 1990: 48. Caughie, John. "Playing at Being American: Games and Tactics." Logics of Television: Essays in Cultural Criticism. Ed. Patricia Mellencamp. Bloomington: Indiana UP, 1990. De Certeau, Michel. The Practice of Everyday Life. Trans. Steve Rendall. Berkeley: U of California P, 1984. Hindes, Andrew, and Monica Roman. "Video Titles Do Pitstops on Screens." Variety 16-22 Sep. 1996: 11+. Klady, Leonard. "Hitting and Missing the Market: Studios Show Savvy -- or Just Luck -- with Pic Release Strategies." Variety 19-25 Jan. 1998: 18. Morley, David. Television, Audiences and Cultural Studies. New York: Routledge, 1992. Newspaper Association of America. "Before They See It Here..." Advertisement. Variety 22-28 Nov. 1999: 38. Rice-Barker, Leo. "Industry Banks on New Technology, Expanded Slates." Playback 6 May 1996: 19-20. Scannell, Paddy. Radio, Television and Modern Life. Oxford: Blackwell, 1996. Williams, Raymond. Television: Technology and Cultural Form. New York: Schocken, 1975. Citation reference for this article MLA style: Charles Acland. "Matinees, Summers and Opening Weekends: Cinemagoing Audiences as Institutional Subjects." M/C: A Journal of Media and Culture 3.1 (2000). [your date of access] <http://www.uq.edu.au/mc/0003/cinema.php>. Chicago style: Charles Acland, "Matinees, Summers and Opening Weekends: Cinemagoing Audiences as Institutional Subjects," M/C: A Journal of Media and Culture 3, no. 
APA, Harvard, Vancouver, ISO, and other styles
49

Moore, Christopher Luke. "Digital Games Distribution: The Presence of the Past and the Future of Obsolescence." M/C Journal 12, no. 3 (July 15, 2009). http://dx.doi.org/10.5204/mcj.166.

Full text
Abstract:
A common criticism of the rhythm video games genre, including series like Guitar Hero and Rock Band, is that playing musical simulation games is a waste of time when you could be playing an actual guitar and learning a real skill. A more serious criticism of games cultures draws attention to the degree of e-waste they produce. E-waste or electronic waste includes mobile phones, computers, televisions and other electronic devices, containing toxic chemicals and metals whose landfill, recycling and salvaging all produce distinct environmental and social problems. The e-waste produced by games like Guitar Hero is obvious in the regular flow of merchandise transforming computer and video games stores into simulation music stores, filled with replica guitars, drum kits, microphones and other products whose half-lives are short and whose obsolescence is anticipated in the annual cycles of consumption and disposal. This paper explores the connection between e-waste and obsolescence in the games industry, and argues for the further consideration of consumers as part of the solution to the problem of e-waste. It uses a case study of the PC digital distribution software platform, Steam, to suggest that the digital distribution of games may offer an alternative model to market-driven software and hardware obsolescence, and more generally, that such software platforms might be a place to support cultures of consumption that delay rather than promote hardware obsolescence and its inevitability as e-waste. The question is whether there exists a potential for digital distribution to be a means not only of eliminating the need to physically transport commodities (its current 'green' benefit), but also of supporting consumer practices that further reduce e-waste. The games industry relies on a rapid production and innovation cycle, one that actively enforces hardware obsolescence. Current video game consoles, including the PlayStation 3, the Xbox 360 and Nintendo Wii, are the seventh generation of home gaming consoles to appear within forty years, and each generation is accompanied by an immense international transportation of games hardware, software (in various storage formats) and peripherals. Obsolescence also occurs at the software or content level and is significant because the games industry as a creative industry is dependent on the extensive management of multiple intellectual properties. The computing and video games software industry operates in close partnership with the hardware industry, and as such, software obsolescence directly contributes to hardware obsolescence. The obsolescence of content and the redundancy of the methods of policing its scarcity in the marketplace have been accelerated and altered by the processes of disintermediation, with a range of outcomes (Flew). The music industry is perhaps the most advanced in terms of disintermediation, with digital distribution at the center of the conflict between legitimate and unauthorised access to intellectual property. This points to one issue with the hypothesis that digital distribution can lead to a reduction in hardware obsolescence, as the marketplace leader and key online distributor of music, Apple, is also the major producer of new media technologies and devices that are the paragon of stylistic obsolescence. Stylistic obsolescence, in which fashion changes products across seasons of consumption, has long been observed as the dominant form of scaled industrial innovation (Slade). 
Stylistic obsolescence is differentiated from mechanical or technological obsolescence, in which products are deliberately superseded by more advanced designs, better production techniques and other minor innovations. The line between stylistic and technological obsolescence is not always clear, especially as reduced durability has become a powerful market strategy (Fitzpatrick). This occurs where the design of technologies is subsumed within the discourses of manufacturing, consumption and the logic of planned obsolescence, in which the product or parts are intended to fail, degrade or underperform over time. It is especially the case with signature new media technologies such as laptop computers, mobile phones and portable games devices. Gamers are as guilty as other consumer groups in contributing to e-waste as participants in the industry's cycles of planned obsolescence, but some of them complicate discussions over the future of obsolescence and e-waste. Many gamers actively work to forestall the obsolescence of their games: they invest time in the play of older games (“retrogaming”); they donate labour and creative energy to the production of user-generated content as a means of sustaining involvement in gaming communities; and they produce entirely new game experiences for other users, based on existing software and hardware modifications known as 'mods'. With Guitar Hero and other 'rhythm' games it would be easy to argue that the hardware components of this genre have only one future: as waste. Alternatively, we could consider the actual lifespan of these objects (including their impact as e-waste) and the roles they play in the performances and practices of communities of gamers. For example, the Elmo Guitar Hero controller mod, the Tesla coil Guitar Hero controller interface, the Rock Band Speak n' Spellbinder mashup, the multiple and almost sacrilegious Fender Guitar Hero mods, the Guitar Hero Portable Turntable Mod and MAKE magazine's Trumpet Hero all indicate a significant diversity of user innovation, community formation and individual investment in the post-retail life of computer and video game hardware. Obsolescence is not just a problem for the games industry but for the computing and electronics industries more broadly, as direct contributors to the social and environmental cost of electrical waste and obsolete electrical equipment. Planned obsolescence has long been the experience of gamers and computer users, as the basis of a utopian mythology of upgrades (Dovey and Kennedy). For PC users the upgrade pathway is traversed by the consumption of further hardware and software after the initial purchase, in a cycle of endless consumption, acquisition and waste (as older parts are replaced and eventually discarded). The accumulation and disposal of these cultural artefacts do not devalue or accrue in space or time at the same rate (Straw), and many users persist for years, gradually upgrading, delaying obsolescence and even perpetuating the circulation of older cultural commodities. Flea markets and secondhand fairs are popular sites for the purchase of new, recent, old, and recycled computer hardware and peripherals. Such practices and parallel markets support the strategies of 'making do' described by De Certeau, but they also continue the cycle of upgrade and obsolescence, and they are still consumed as part of the promise of the 'new', and the desire for a purchase that will finally 'fix' the user's computer in a state of completion (29). 
The planned obsolescence of new media technologies is common, but its success is mixed; for example, support for Microsoft's operating system Windows XP was officially withdrawn in April 2009 (Robinson), but due to the popularity of low-cost PC 'netbooks' outfitted with an optimised XP operating system and a less than enthusiastic response to the 'next generation' Windows Vista, XP continues to be popular. Digital Distribution: A Solution? Gamers may be able to reduce the accumulation of e-waste by supporting the disintermediation of the games retail sector by means of online distribution. Disintermediation is the establishment of a direct relationship between the creators of content and their consumers through products and services offered by content producers (Flew 201). The move to digital distribution has already begun to reduce the need to physically handle commodities, but this currently signals only further support of planned, stylistic and technological obsolescence, increasing the rate at which the commodities for recording, storing, distributing and exhibiting digital content become e-waste. Digital distribution is sometimes overlooked as a potential means for promoting communities of user practice dedicated to e-waste reduction; at the same time, it is actively employed to reduce the potential for the unregulated appropriation of content and restrict post-purchase sales through Digital Rights Management (DRM) technologies. Distributors like Amazon.com continue to pursue commercial opportunities in linking the user to digital distribution of content via exclusive hardware and software technologies. The Amazon e-book reader, the Kindle, operates via a proprietary mobile network using a commercially run version of the wireless 3G protocols. The e-book reader is heavily encrypted with DRM technologies and exclusive digital book formats designed to enforce current copyright restrictions and eliminate second-hand sales, lending, and further post-purchase distribution. The success of this mode of distribution is connected to Amazon's ability to tap both the mainstream market and the consumer demand for the less-than-popular: those books, movies, music and television series that may not have been 'hits' at the time of release. The desire to revisit forgotten niches, such as B-sides, comics, books, and older video games, is, suggests Chris Anderson, linked with so-called “long tail” economics. Recently Webb has queried the economic impact of the Long Tail as a business strategy, but does not deny the underlying dynamics, which suggest that content does not obsolesce in any straightforward way. Niche markets for older content are nourished by participatory cultures and Web 2.0-style online services. A good example of the Long Tail phenomenon is the recent case of the 1971 book A Lion Called Christian, by Anthony Burke and John Rendall, republished after the authors' film of a visit to a resettled Christian in Africa was popularised on YouTube in 2008. Anderson's Long Tail theory suggests that over time a large number of items, each with unique rather than mass histories, will be subsumed as part of a larger community of consumers, including fans, collectors and everyday users with a long-term interest in their use and preservation. 
If digital distribution platforms can reduce e-waste, this potential can perhaps be fostered by ensuring that digital consumers are able to make morally and ethically aware consumer decisions, but also that they enjoy traditional consumer freedoms, such as the right to sell on and change or modify their property. For it is not only the fixation on the 'next generation' that contributes to obsolescence, but also technologies like DRM systems that discourage second-hand sales and restrict modification. The legislative upgrades, patches and amendments to copyright law that have attempted to maintain the law's effectiveness in competing with peer-to-peer networks have supported DRM and other intellectual property enforcement technologies, despite the difficulties that owners of intellectual property have encountered with the effectiveness of DRM systems (Moore, Creative). The games industry continues to experiment with DRM; however, this industry also stands out as one of the few to have significantly incorporated the user within the official modes of production (Moore, Commonising). Is the games industry able (or willing) to support a digital delivery system that attempts to minimise or even reverse software and hardware obsolescence? We can try to answer this question by looking in detail at the biggest digital distributor of PC games, Steam. Steam Figure 1: The Steam application user interface, retail section. Steam is a digital distribution system designed for the Microsoft Windows operating system and operated by the American video game developer and publisher Valve Corporation. Steam combines online games retail, DRM technologies and internet-based distribution services with social networking and multiplayer features (in-game voice and text chat, user profiles, etc.) and direct support for major games publishers, independent producers, and communities of user-contributors (modders). Steam, like the iTunes games store, Xbox Live and other digital distributors, provides consumers with direct digital downloads of new, recent and classic titles that can be accessed remotely by the user from any (internet-equipped) location. Steam was first packaged with the physical distribution of Half-Life 2 in 2004, and the platform's eventual popularity is tied to the success of that game franchise. Steam was not an optional component of the game's installation, and many gamers protested in various online forums, while the platform was treated with suspicion by the global PC games press. It did not help that Steam was at launch everything that gamers object to: a persistent and initially 'buggy' piece of software that sits in the PC's operating system and occupies limited memory resources at the cost of hardware performance. Regular updates to the Steam software platform introduced social network features just as mainstream sites like MySpace and Facebook were emerging, and its popularity has undergone rapid subsequent growth. Steam now eclipses competitors with more than 20 million user accounts (Leahy), and Valve Corporation makes it publicly known that Steam collects large amounts of data about its users. This information is available via the public player profile in the community section of the Steam application. It includes the average number of hours the user plays per week, and can even indicate the difficulty the user has in navigating game obstacles. 
Valve reports on the number of users on Steam every two hours via its web site, with a population on average between one and two million simultaneous users (Valve, Steam). We know these users' hardware profiles because Valve Corporation makes the results of its surveillance public knowledge via the Steam Hardware Survey. Valve's hardware survey itself conceptualises obsolescence in two ways. First, it uses the results to define the 'cutting edge' of PC technologies and publishes the standards of its own high-end production hardware on the company's blog. Second, the effect of the Survey is to subsequently define obsolescent hardware: for example, in the Survey results for April 2009, we can see that a slight majority of users maintained computers with two central processing units, while a significant proportion (almost one third) still maintained much older PCs with a single CPU. Both effects of the Survey appear to be well understood by Valve: "the Steam Hardware Survey automatically collects information about the community's computer hardware configurations and presents an aggregate picture of the stats on our web site. The survey helps us make better engineering and gameplay decisions, because it makes sure we're targeting machines our customers actually use, rather than measuring only against the hardware we've got in the office. We often get asked about the configuration of the machines we build around the office to do both game and Steam development. We also tend to turn over machines in the office pretty rapidly, at roughly every 18 months" (Valve, Team Fortress). Valve's support of older hardware might counter perceptions that older PCs have no use and begins to reverse decades of opinion regarding planned and stylistic obsolescence in the PC hardware and software industries. Equally significant to the extension of the lives of older PCs is Steam's support for mods and its promotion of user-generated content. By providing software for mod creation and distribution, Steam maximises what Postigo calls the development potential of fan-programmers. One of the 'payoffs' in the information/access exchange for the user with Steam is the degree to which Valve's End-User Licence Agreement (EULA) permits individuals and communities of 'modders' to appropriate its proprietary game content for use in the creation of new games and games materials for redistribution via Steam. These mods extend the play of older games by requiring their purchase via Steam in order for the individual user to participate in the modded experience. If Steam is able to encourage this kind of appropriation and community support for older content, then the potential exists for it to support cultures of consumption and practices of use that collaboratively maintain, extend, and prolong the life and use of games. Further, Steam incorporates the insights of “long tail” economics in a purely digital distribution model, in which the obsolescence of 'non-hit' game titles can be dramatically overturned. Published in November 2007, Unreal Tournament 3 (UT3) by Epic Games was unappreciated in a market saturated with games in the first-person shooter genre. Epic republished UT3 on Steam 18 months later, making the game available to play for free for one weekend, followed by discounted access to new content. 
The 2000 per cent increase in players over the game's 'free' trial weekend has translated into enough sales of the game for Epic to no longer consider the release a commercial failure: “It’s an incredible precedent to set: making a game a success almost 18 months after a poor launch. It’s something that could only have happened now, and with a system like Steam ... Something that silently updates a purchase with patches and extra content automatically, so you don’t have to make the decision to seek out some exciting new feature: it’s just there anyway. Something that, if you don’t already own it, advertises that game to you at an agreeably reduced price whenever it loads. Something that enjoys a vast community who are in turn plugged into a sea of smaller relevant communities. It’s incredibly sinister. It’s also incredibly exciting...” (Meer). Clearly, concerns exist about Steam's user privacy policy, but this also invites us to think about the economic relationship between gamers and games companies as it is reconfigured through the private contractual relationship established by the EULA which accompanies the digital distribution model. The games industry has established contractual and licensing arrangements with its consumer base in order to support and reincorporate emerging trends in user-generated cultures and other cultural formations within its official modes of production (Moore, "Commonising"). When we consider that Valve gets to tax sales of its virtual goods and can further sell the information farmed from its users to hardware manufacturers, it is reasonable to consider the relationship between the corporation and its gamers as exploitative. Gabe Newell, the Valve co-founder and managing director, conversely believes that people are willing to give up personal information if they feel it is being used to get better services (Leahy). If that sentiment is correct, then consumers may be willing to trade further for services that can reduce obsolescence and begin to address the problems of e-waste from the ground up. Conclusion Clearly, there is a potential for digital distribution to be a means not only of eliminating the need to physically transport commodities but also of supporting consumer practices that further reduce e-waste. For an industry where only a small proportion of the games made break even, the successful relaunch of older games content indicates Steam's capacity to ameliorate software obsolescence. Digital distribution extends the use of commercially released games by providing disintermediated access to older and user-generated content. For Valve, this occurs within a network of exchange, as access to user-generated content, social networking services, and support for the organisation and coordination of communities of gamers is traded for user information and repeat business. Evidence for whether this will actively translate to an equivalent decrease in the obsolescence of game hardware might be observed with indicators like the Steam Hardware Survey in the future. The degree of potential offered by digital distribution is disrupted by a range of technical, commercial and legal hurdles, chief among which is the deployment of DRM as part of a range of techniques designed to limit consumer behaviour post-purchase. 
While intervention in the form of legislation and radical change to the insidious nature of electronics production is crucial in order to achieve long-term reduction in e-waste, the user is currently considered only in terms of 'ethical' consumption and ultimately divested of responsibility through participation in corporate, state and civil recycling and e-waste management operations. The message is either 'careful what you purchase' or 'careful how you throw it away' and, like DRM, ignores the connections between product, producer and user, and the consumer support for environmentally, ethically and socially positive production, distribution, disposal and recycling. This article has adopted a different strategy, one that sees digital distribution platforms like Steam as capable, if not currently active, of supporting community practices that should be seriously considered in conjunction with a range of approaches to the challenge of obsolescence and e-waste. References Anderson, Chris. "The Long Tail." Wired Magazine 12.10 (2004). 20 Apr. 2009 ‹http://www.wired.com/wired/archive/12.10/tail.html›. De Certeau, Michel. The Practice of Everyday Life. Berkeley: U of California P, 1984. Dovey, Jon, and Helen Kennedy. Game Cultures: Computer Games as New Media. London: Open University Press, 2006. Fitzpatrick, Kathleen. The Anxiety of Obsolescence. Nashville: Vanderbilt UP, 2008. Flew, Terry. New Media: An Introduction. South Melbourne: Oxford UP, 2008. Leahy, Brian. "Live Blog: DICE 2009 Keynote - Gabe Newell, Valve Software." The Feed. G4TV 18 Feb. 2009. 16 Apr. 2009 ‹http://g4tv.com/thefeed/blog/post/693342/Live-Blog-DICE-2009-Keynote-–-Gabe-Newell-Valve-Software.html›. Meer, Alec. "Unreal Tournament 3 and the New Lazarus Effect." Rock, Paper, Shotgun 16 Mar. 2009. 24 Apr. 2009 ‹http://www.rockpapershotgun.com/2009/03/16/unreal-tournament-3-and-the-new-lazarus-effect/›. Moore, Christopher. "Commonising the Enclosure: Online Games and Reforming Intellectual Property Regimes." Australian Journal of Emerging Technologies and Society 3.2 (2005). 12 Apr. 2009 ‹http://www.swin.edu.au/sbs/ajets/journal/issue5-V3N2/abstract_moore.htm›. Moore, Christopher. "Creative Choices: Changes to Australian Copyright Law and the Future of the Public Domain." Media International Australia 114 (Feb. 2005): 71–83. Postigo, Hector. "Of Mods and Modders: Chasing Down the Value of Fan-Based Digital Game Modification." Games and Culture 2 (2007): 300-13. Robinson, Daniel. "Windows XP Support Runs Out Next Week." PC Business Authority 8 Apr. 2009. 16 Apr. 2009 ‹http://www.pcauthority.com.au/News/142013,windows-xp-support-runs-out-next-week.aspx›. Slade, Giles. Made to Break: Technology and Obsolescence in America. Cambridge: Harvard UP, 2006. Straw, Will. "Exhausted Commodities: The Material Culture of Music." Canadian Journal of Communication 25.1 (2000): 175. Valve. "Steam and Game Stats." 26 Apr. 2009 ‹http://store.steampowered.com/stats/›. Valve. "Team Fortress 2: The Scout Update." Steam Marketing Message 20 Feb. 2009. 12 Apr. 2009 ‹http://storefront.steampowered.com/Steam/Marketing/message/2269/›. Webb, Richard. "Online Shopping and the Harry Potter Effect." New Scientist 2687 (2008): 52-55. 16 Apr. 2009 ‹http://www.newscientist.com/article/mg20026873.300-online-shopping-and-the-harry-potter-effect.html?page=2›. With thanks to Dr Nicola Evans and Dr Frances Steel for their feedback and comments on drafts of this paper.
APA, Harvard, Vancouver, ISO, and other styles
50

Conti, Olivia. "Disciplining the Vernacular: Fair Use, YouTube, and Remixer Agency." M/C Journal 16, no. 4 (August 11, 2013). http://dx.doi.org/10.5204/mcj.685.

Full text
Abstract:
Introduction The research from which this piece derives explores political remix video (PRV), a genre in which remixers critique dominant discourses and power structures through guerrilla remixing of copyrighted footage (“What Is Political Remix Video?”). Specifically, I examined the works of political video remixer Elisa Kreisinger, whose queer remixes of shows such as Sex and the City and Mad Men received considerable attention between 2010 and the present. As a rhetoric scholar, I am attracted not only to the ways that remix functions discursively but also to the ways in which remixers are constrained in their ability to argue, and what recourse they have in these situations of legal and technological constraint. Ultimately, many of these struggles play out on YouTube. This is unsurprising: many studies of YouTube and other user-generated content (UGC) platforms focus on the fact that commercial sites cannot constitute utopian, democratic, or free environments (Hilderbrand; Hess; Van Dijck). However, I find that, contrary to popular belief, YouTube’s commercial interests are not the primary factor limiting remixer agency. Rather, United States copyright law as enacted on YouTube has the most potential to inhibit remixers. This has led many remixers to become advocates for fair use, the provision in the Copyright Act of 1976 that allows for limited use of copyrighted content. With this in mind, I decided to delve more deeply into the framing of fair use by remixers and other advocates such as the Electronic Frontier Foundation (EFF) and the Center for Social Media. In studying discourses of fair use as they play out in the remix community, I find that the framing of fair use bears a striking similarity to what rhetoric scholars have termed vernacular discourse—a discourse emanating from a small segment of the larger civic community (Ono and Sloop 23). The vernacular is often framed as that which integrates the institutional or mainstream while simultaneously asserting its difference through appropriation and subversion. A video qualifies as fair use if it juxtaposes source material in a new way for the purposes of critique. In turn, a vernacular text asserts its “vernacularity” by taking up parts of pre-existing dominant institutional discourses in a way that resonates with a smaller community. My argument is that this tension between institutional and vernacular gives political remix video a multivalent argument—one that presents itself both in the text of the video itself and in the video’s status as a fair use of copyrighted material. Just as fair use represents the assertion of creator agency against unfair copyright law, vernacular discourse represents the assertion of a localised community within a world dominated by institutional discourses. In this way, remixers engage rights holders and other institutions in a pleasurable game of cat and mouse, a struggle to expose the boundaries of draconian copyright law. YouTube’s Commercial Interests YouTube’s commercial interests operate at a level potentially invisible to the casual user. While users provide YouTube with content, they also provide the site with data—both metadata culled from their navigations of the site (page views, IP addresses) as well as member-provided data (such as real name and e-mail address). YouTube mines this data for a number of purposes—anything from interface optimisation to targeted advertising via Google’s AdSense. 
Users also perform a certain degree of labour to keep the site running smoothly, such as reporting videos that violate the Terms of Service, giving videos the thumbs up or thumbs down, and reporting spam comments. As such, users involved in YouTube’s participatory culture are also necessarily involved in the site’s commercial interests. While there are legitimate concerns regarding the privacy of personal information, especially after Google introduced policies in 2012 to facilitate a greater flow of information across all of its subsidiaries, it does not seem that this has diminished YouTube’s popularity (“Google: Privacy Policy”). Despite this, some make the argument that users provide the true benefit of UGC platforms like YouTube, yet reap few rewards, creating an exploitative dynamic (Van Dijck, 46). Two assumptions seem to underpin this argument: the first is that users do not desire to help these platforms prosper; the second is that users expect to profit from their efforts on the website. In response to these arguments, it’s worth calling attention to scholars who have used alternative economic models to account for user-platform coexistence. This is something that Henry Jenkins addresses in his recent book Spreadable Media, largely by focusing on assigning alternate sorts of value to user and fan labour—either the cultural worth of the gift, or the satisfaction of a job well done common to pre-industrial craftsmanship (61). However, there are still questions of how to account for participatory spaces in which labours of love coexist with massively profitable products. In service of this point, Jenkins calls up Lessig, who posits that many online networks operate as hybrid economies, which combine commercial and sharing economies. In a commercial economy, profit is the primary consideration, while a sharing economy is composed of participants who are there because they enjoy doing the work without any expectation of compensation (176). The strict separation between the two economies is, in Lessig’s estimation, essential to the hybrid economy’s success. While it would be difficult to incorporate these two economies together once each had been established, platforms like YouTube have always operated under the hybrid principle. YouTube’s users provide the site with its true value (through their uploading of content, provision of metadata, and use of the site), yet users do not come to YouTube with these tasks in mind—they come to YouTube because it provides an easy-to-use platform by which to share amateur creativity, and a community with whom to interact. Additionally, YouTube serves as the primary venue where remixers can achieve visibility and viral status—something Elisa Kreisinger acknowledged in our interviews (2012). However, users who are not concerned with broad visibility as much as with speaking to particular viewers may leave YouTube if they feel that the venue does not suit their content. Some feminist fan vidders, for instance, have withdrawn from YouTube due to what they perceived as a community that didn’t understand their work (Kreisinger, 2012). Additionally, Kreisinger ended up garnering many more views of her Queer Men remix on Vimeo due simply to the fact that the remix’s initial upload was blocked via YouTube’s Content ID feature. By the time Kreisinger had argued her case with YouTube, the Vimeo link had become the first stop for those viewing and sharing the remix, which has received 72,000 views to date (“Queer Men”). 
Fair Use, Copyright, and Content ID This instance points to the challenge that remixers face when dealing with copyright on YouTube, a site whose processes are not designed to accommodate fair use. Specifically, Title II, Section 512 of the DMCA (the Digital Millennium Copyright Act, passed in 1998) states that certain websites may qualify as “safe harbours” for copyright infringement if users upload the majority of the content to the site, or if the site is an information location service. These sites are insulated from copyright liability as long as they cooperate to some extent with rights holders. A common objection to Section 512 is that it requires media rights holders to police safe harbours in search of infringing content, rather than placing the onus on the platform provider (Meyers 939). In order to cooperate with Section 512 and rights holders, YouTube initiated the Content ID system in 2007. This system offers rights holders the ability to find and manage their content on the site by creating archives of footage against which user uploads are checked, allowing rights holders to automatically block, track, or monetise uses of their content (it is also worth noting that rights holders can make these responses country-specific) (“How Content ID Works”). At the current time, YouTube has over 15 million reference files against which it checks uploads (“Statistics - YouTube”). Thus, it’s fairly common for uploaded work to get flagged as a violation, especially when that work is a remix of popular institutional footage. If an upload is flagged by the Content ID system, the user can dispute the match, at which point the rights holder has the opportunity either to allow the video through or to issue a DMCA takedown notice. They can also sue at any point during this process (“A Guide to YouTube Removals”). Content ID matches are relatively easy to dispute and do not generally require legal intervention. However, disputing these automatic takedowns requires users to be aware of their rights to fair use, and requires rights holders to acknowledge a fair use (“YouTube Removals”). This is only compounded by the fact that fair use is not a clearly defined right, but rather a vague provision relying on a balance between four factors: the purpose and character of the use, the nature of the copyrighted work, the amount used, and the effect on the market value of the original (“US Copyright Office–Fair Use”). As Aufderheide and Jaszi observed in 2008, the rejection of videos for Content ID matches combined with the vagaries of fair use has a chilling effect on user-generated content. Rights Holders versus Remixers Rights holders’ objections to Section 512 illustrate the ruling power dynamic in current intellectual property disputes: power rests with institutional rights-holding bodies (the RIAA, the MPAA), which assert their dominance over DMCA safe harbours such as YouTube (which must cooperate to stay in business), which in turn exert power over remixers (the lowest on the food chain, so to speak). Beyond the observed chilling effect of Content ID, remix on YouTube is shot through with discursive struggle between these rights-holding bodies and remixers attempting to express themselves and reach new communities. However, this has led political video remixers to become especially vocal when arguing for their uses of content. For instance, in the spring of 2009, Elisa Kreisinger curated a show entitled “REMOVED: The Politics of Remix Culture” in which blocked remixes screened alongside the remixers’ correspondence with YouTube. 
Kreisinger writes that each of these exchanges illustrates the dynamic between rights holders and remixers: “Your video is no longer available because FOX [or another rights-holding body] has chosen to block it” (“Remixed/Removed”). Additionally, as Jenkins notes, even Content ID on YouTube is only made available to the largest rights holders—smaller companies must still go through an official DMCA takedown process to report infringement (Spreadable 51). In sum, though recent technological developments may give the appearance of democratising access to content, when it comes to policing UGC, technology has made it easier for the largest rights holders to stifle the creation of content. Additionally, it has been established that rights holders do occasionally use takedowns abusively, and recent court cases—specifically Lenz v. Universal Music Corp.—have established the need for rights holders to assess fair use in order to make a “good faith” assertion that users intend to infringe copyright prior to issuing a takedown notice. However, as Joseph M. Miller notes, the ruling fails to rebalance the burdens and incentives between rights holders and users (1723). This means that while rights holders are supposed to take fair use into account prior to issuing takedowns, there is no process in place that either effectively punishes rights holders who abuse copyright, or allows users to defend themselves without the possibility of massive financial loss (1726). As such, the system currently in place does not disallow or discourage features like Content ID, though cases like Lenz v. Universal indicate a push towards rebalancing the burden of determining fair use. In an effort to turn the tables, many have begun arguing for users’ rights and attempting to parse fair use for the layperson. The Electronic Frontier Foundation (EFF), for instance, has espoused an “environmental rhetoric” of fair use, casting intellectual property as a resource for users (Postigo 1020). Additionally, it has created practical guidelines for UGC creators dealing with DMCA takedowns and Content ID matches on YouTube. The Center for Social Media has also produced a number of fair use guides tailored to different use cases, one of which targeted online video producers. All of these efforts have a common goal: to educate content creators about the fair use of copyrighted content, and then to assert their use as fair in opposition to large rights-holding institutions (though they caution users against unfair uses of content or making risky legal moves that could lead to lawsuits). In relation to remix specifically, this means that remixers must differentiate themselves from institutional, commercial content producers, standing up both for the argument contained in their remix and for their fair use of copyrighted content. In their “Code of Best Practices for Fair Use in Online Video,” the Center for Social Media notes that an online video qualifies as a fair use if (among other things) it critiques copyrighted material and if it “recombines elements to make a new work that depends for its meaning on (often unlikely) relationships between the elements” (8). These two qualities are also two of the defining qualities of political remix video. For instance, they write that work meets the second criterion if it creates “new meaning by juxtaposition,” noting that in these cases “the recombinant new work has a cultural identity of its own and addresses an audience different from those for which its components were intended” (9). 
Remixes that use elements of familiar sources in unlikely combinations, such as those made by Elisa Kreisinger, generally seek to reach an audience who are familiar with the source content, but also object to it. Sex and the City, for instance, while it initially seemed willing to take on previously “taboo” topics in its exploration of dating in Manhattan, ended with each of the heterosexual characters paired with an opposite-sex partner, and forays from this heteronormative narrative were contained either within one-off episodes or within tokenised gay characters. For this reason, Kreisinger noted that the intended audience for Queer Carrie was the queer and feminist viewers of Sex and the City who felt that the show was overly normative and exclusionary (Kreisinger, Art:21). As a result, the target audience of these remixes is different from the target audience of the source material—though the full nuance of the argument is best understood by those familiar with the source. Thus, the remix affirms the segment of the viewing community who saw only tokenised representations of their identity in the source text, and in so doing offers a critique of the original’s heteronormative focus. Fair Use and the Vernacular Vernacular discourse, as broadly defined by Kent A. Ono and John M. Sloop, refers to discourses that “emerge from discussions between members of self-identified smaller communities within the larger civic community.” It operates partially through appropriating dominant discourses in ways better suited to the vernacular community, through practices of pastiche and cultural syncretism (23). In an effort to better describe the intricacies of this type of discourse, Robert Glenn Howard theorised a hybrid “dialectical vernacular” that oscillates between institutional and vernacular discourse. This hybridity arises from the fact that the institutional and the vernacular are fundamentally inseparable, the vernacular establishing its meaning by asserting itself against the institutional (Howard, Toward 331). When put into use online, this notion of a “dialectical vernacular” is particularly interesting as it refers not only to the content of vernacular messages but also to their means of production. Howard notes that discourse embodying the dialectical vernacular is by nature secondary to institutional discourse, that the institutional must be clearly “structurally prior” (Howard, Vernacular 499). With this in mind, it is unsurprising that political remix video—which asserts its secondary nature by calling upon pre-existing copyrighted content while simultaneously reaching out to smaller segments of the civic community—would qualify as a vernacular discourse. The notion of an institutional source’s structural prevalence also echoes throughout work on remix, both in practical guides such as the Center for Social Media’s “Best Practices” and in more theoretical takes on remix, like Eduardo Navas’ essay “Turbulence: Remixes + Bonus Beats,” in which he writes that: “In brief, the remix when extended as a cultural practice is a second mix of something pre-existent; the material that is mixed for a second time must be recognized, otherwise it could be misunderstood as something new, and it would become plagiarism […] Without a history, the remix cannot be Remix.” An elegant theoretical concept, this becomes muddier when considered in light of copyright law. 
If the history of remix is what gives it its meaning—the source text from which it is derived—then it is this same history that makes a fair use remix vulnerable to DMCA takedowns and other forms of discipline on YouTube. However, as per the criteria outlined by the Center for Social Media, it is also from this ironic juxtaposition of institutional sources that the remix object establishes its meaning, and thus its vernacularity. In this sense, the force of a political remix video’s argument is in many ways dependent on its status as an object in peril: vulnerable to the force of a law that has not yet swung in its favour, yet subversive nonetheless. With this in mind, YouTube and other UGC platforms represent a fraught layer of mediation between institutional and vernacular. As a site for the sharing of amateur video, YouTube has the potential to affirm small communities as users share similar videos, follow one particular channel together, or comment on videos posted by people in their networks. However, YouTube’s interface (rife with advertisements, constantly reminding users of its affiliation with Google) and cooperation with rights holders establish it as an institutional space. As such, remixes on the site are already imbued with the characteristic hybridity of the dialectical vernacular. This is especially true when the remixers (as in the case of PRV) have made the conscious choice to advocate for fair use at the same time that they distribute remixes dealing with other themes and resonating with other communities. Conclusion Political remix video sits at a fruitful juncture with regard to copyright as well as vernacularity. Like almost all remix, it makes its meaning through juxtaposing sources in a unique way, calling upon viewers to think about familiar texts in a new light. This creation invokes a new audience—a quality that makes it both vernacular and a fair use of content. Given that PRV is defined by the “guerrilla” use of copyrighted footage, it has the potential to stand as a political statement outside of the thematic content of the remix simply due to the nature of its composition. This gives PRV tremendous potential for multivalent argument, as a video can represent a marginalised community while simultaneously advocating for copyright reform. This is only reinforced by the fact that many political video remixers have become vocal in advocating for fair use, asserting the strength of their community and their common goal. In addition to this argumentative richness, PRV’s relation to fair use and vernacularity exposes the complexity of the remix form: it continually oscillates between institutional affiliations and smaller vernacular communities. However, the hybridity of these remixes produces tension, much of which manifests on YouTube, where videos are easily responded to and challenged by both institutional and vernacular authorities. In addition, a tension exists in the remix text itself between the source and the new, remixed message. Further research should attend to these areas of tension, while also exploring the tenacity of the remix community and its ability to advocate for itself while circumventing copyright law. References “About Political Remix Video.” Political Remix Video. 15 Feb. 2012. ‹http://www.politicalremixvideo.com/what-is-political-remix/›. Aufderheide, Patricia, and Peter Jaszi. Reclaiming Fair Use: How to Put Balance Back in Copyright. Chicago: U of Chicago P, 2008. 
Kindle. “Code of Best Practices for Fair Use in Online Video.” The Center for Social Media, 2008. Van Dijck, José. “Users like You? Theorizing Agency in User-Generated Content.” Media Culture Society 31 (2009): 41-58. “A Guide to YouTube Removals.” The Electronic Frontier Foundation, 15 June 2013 ‹https://www.eff.org/issues/intellectual-property/guide-to-YouTube-removals›. Hilderbrand, Lucas. “YouTube: Where Cultural Memory and Copyright Converge.” Film Quarterly 61.1 (2007): 48-57. Howard, Robert Glenn. “The Vernacular Web of Participatory Media.” Critical Studies in Media Communication 25.5 (2008): 490-513. Howard, Robert Glenn. “Toward a Theory of the World Wide Web Vernacular: The Case for Pet Cloning.” Journal of Folklore Research 42.3 (2005): 323-60. “How Content ID Works.” YouTube. 21 June 2013. ‹https://support.google.com/youtube/answer/2797370?hl=en›. Jenkins, Henry, Sam Ford, and Joshua Green. Spreadable Media: Creating Value and Meaning in a Networked Culture. New York: New York UP, 2013. Jenkins, Henry. Convergence Culture: Where Old and New Media Collide. New York: New York UP, 2006. Kreisinger, Elisa. Interview with Nick Briz. Art:21. Art:21, 30 June 2011. 21 June 2013. Kreisinger, Elisa. “Queer Video Remix and LGBTQ Online Communities.” Transformative Works and Cultures 9 (2012). 19 June 2013 ‹http://journal.transformativeworks.org/index.php/twc/article/view/395/264›. Kreisinger, Elisa. Pop Culture Pirate. ‹http://www.popculturepirate.com/›. Lessig, Lawrence. Remix: Making Art and Commerce Thrive in the Hybrid Economy. New York: Penguin Books, 2008. PDF. Meyers, B.G. “Filtering Systems or Fair Use? A Comparative Analysis of Proposed Regulations for User-Generated Content.” Cardozo Arts & Entertainment Law Journal 26.3: 935-56. Miller, Joseph M. “Fair Use through the Lenz of § 512(c) of the DMCA: A Preemptive Defense to a Premature Remedy?” Iowa Law Review 95 (2009-2010): 1697-1729. Navas, Eduardo. “Turbulence: Remixes + Bonus Beats.” New Media Fix 1 Feb. 2007. 10 June 2013 ‹http://newmediafix.net/Turbulence07/Navas_EN.html›. Ono, Kent A., and John M. Sloop. Shifting Borders: Rhetoric, Immigration and California’s Proposition 187. Philadelphia: Temple UP, 2002. “Privacy Policy – Policies & Principles.” Google. 19 June 2013 ‹http://www.google.com/policies/privacy/›. Postigo, Hector. “Capturing Fair Use for The YouTube Generation: The Digital Rights Movement, the Electronic Frontier Foundation, and the User-Centered Framing of Fair Use.” Information, Communication & Society 11.7 (2008): 1008-27. “Statistics – YouTube.” YouTube. 21 June 2013 ‹http://www.youtube.com/yt/press/statistics.html›. “US Copyright Office: Fair Use.” U.S. Copyright Office. 19 June 2013 ‹http://www.copyright.gov/fls/fl102.html›. “YouTube Help.” YouTube FAQ. 19 June 2013 ‹http://support.google.com/youtube/?hl=en&topic=2676339&rd=2›.
APA, Harvard, Vancouver, ISO, and other styles
