Journal articles on the topic 'Communication Networks and Services not elsewhere classified'

To see the other types of publications on this topic, follow the link: Communication Networks and Services not elsewhere classified.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 39 journal articles for your research on the topic 'Communication Networks and Services not elsewhere classified.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Lee, Chongdeuk. "Self-Detecting Traffic Interference Control for Multi-Zone Services under 5G-Based Cellular Networks." Sensors 21, no. 7 (March 31, 2021): 2409. http://dx.doi.org/10.3390/s21072409.

Abstract:
In this paper, we propose a multi-zone service control scheme to maximize the performance of each service zone when a large number of cellular service zones and Device-to-Device (D2D) service zones coexist in a 5G cellular network. The paper also improves the performance of each service zone by dividing traffic into real-time and non-real-time traffic in order to minimize traffic interference, since both traffic types have a significant impact on communication performance. We propose a new self-detecting traffic interference control technique, the Self-detecting Traffic Interference Control Scheme (STICS), to improve the Quality of Service (QoS) and throughput of D2D and Cellular-to-Device (C2D) communication in a cellular network. The STICS mechanism distinguishes between short-term and long-term traffic congestion processes according to traffic characteristics in order to detect and control traffic. When the proposed scheme is applied to a 5G-based cellular network environment, traffic types are expected to be classified efficiently by self-detecting the traffic on a per-flow basis. Traffic classified in this way interferes less with communication on the D2D and C2D links, thereby reducing traffic overload. We evaluate the performance of the proposed scheme through simulation and show that it is more efficient than the comparison schemes.
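The abstract does not spell out the STICS decision rules, but the underlying idea of separating real-time from non-real-time flows by their observed characteristics can be illustrated with a minimal sketch. The features (mean inter-arrival time, mean packet size) and thresholds below are illustrative assumptions, not the authors' method.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Flow:
    arrival_times: list   # packet arrival times in seconds
    packet_sizes: list    # packet sizes in bytes

def classify_flow(flow: Flow,
                  max_rt_interarrival=0.03,   # assumed: <=30 ms between packets
                  max_rt_packet_size=300):    # assumed: small packets (e.g., voice)
    """Label a flow 'real-time' or 'non-real-time' from simple traffic features."""
    gaps = [t2 - t1 for t1, t2 in zip(flow.arrival_times, flow.arrival_times[1:])]
    if not gaps:
        return "non-real-time"
    if mean(gaps) <= max_rt_interarrival and mean(flow.packet_sizes) <= max_rt_packet_size:
        return "real-time"
    return "non-real-time"

# A steady stream of small packets every 20 ms looks like voice/video.
voice_like = Flow(arrival_times=[i * 0.02 for i in range(50)], packet_sizes=[160] * 50)
bulk_like = Flow(arrival_times=[0.0, 0.5, 0.9, 1.6], packet_sizes=[1400] * 4)
print(classify_flow(voice_like))  # real-time
print(classify_flow(bulk_like))   # non-real-time
```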
2

Pazhani.A, Azhagu Jaisudhan, P. Gunasekaran, Vimal Shanmuganathan, Sangsoon Lim, Kaliappan Madasamy, Rajesh Manoharan, and Amit Verma. "Peer–Peer Communication Using Novel Slice Handover Algorithm for 5G Wireless Networks." Journal of Sensor and Actuator Networks 11, no. 4 (November 29, 2022): 82. http://dx.doi.org/10.3390/jsan11040082.

Abstract:
The goal of 5G wireless networks is to address the growing need for network services among users. User equipment has progressed to the point where users now expect diverse services from the network. The latency, reliability, and bandwidth requirements of users can all be classified. To fulfil the different needs of users in an economical manner, while guaranteeing network resources are resourcefully assigned to consumers, 5G systems plan to leverage technologies like Software Defined Networks, Network Function Virtualization, and Network Slicing. For the purpose of ensuring continuous handover among network slices, while catering to the advent of varied 5G application scenarios, new mobility management techniques must be adopted in Sliced 5G networks. Users want to travel from one region of coverage to another region without any fading in their network connection. Different network slices can coexist in 5G networks, with every slice offering services customized to various QoS demands. As a result, when customers travel from one region of coverage to another, the call can be transferred to a slice that caters to similar or slightly different requirements. The goal of this study was to develop an intra- and inter-slice algorithm for determining handover decisions in sliced 5G networks and to assess performance by comparing intra- and inter-slice handovers. The proposed work shows that an inter-slice handover algorithm offers superior quality of service when compared to an intra-slice algorithm.
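The paper's intra-/inter-slice algorithm is not given in the abstract; the sketch below only illustrates the general shape of an inter-slice handover decision, in which a moving user is re-admitted to whichever slice of the target cell satisfies its QoS profile. The slice attributes, the QoS fields, and the least-loaded tie-break are assumptions for illustration.

```python
# Illustrative inter-slice handover decision (not the authors' algorithm).
# A user entering a target cell is admitted to the slice that meets its
# latency/bandwidth needs with the most spare capacity.

def pick_target_slice(user_req, target_cell_slices):
    """user_req: dict with 'max_latency_ms' and 'min_rate_mbps'.
    target_cell_slices: list of dicts with 'name', 'latency_ms', 'rate_mbps', 'load'."""
    feasible = [s for s in target_cell_slices
                if s["latency_ms"] <= user_req["max_latency_ms"]
                and s["rate_mbps"] >= user_req["min_rate_mbps"]]
    if not feasible:
        return None  # no admission: handover blocked or best-effort fallback
    # Prefer the least-loaded feasible slice.
    return min(feasible, key=lambda s: s["load"])["name"]

urllc_user = {"max_latency_ms": 5, "min_rate_mbps": 2}
slices = [
    {"name": "eMBB",  "latency_ms": 20, "rate_mbps": 100, "load": 0.40},
    {"name": "URLLC", "latency_ms": 2,  "rate_mbps": 10,  "load": 0.25},
]
print(pick_target_slice(urllc_user, slices))  # URLLC
```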
3

Bi, Yuming, Lei Tian, Mengmeng Liu, Zhenzi Liu, and Wei Chen. "Research on Joint Handoff Algorithm in Vehicles Networks." Chinese Journal of Engineering 2016 (April 18, 2016): 1–10. http://dx.doi.org/10.1155/2016/3190264.

Abstract:
As communication services evolve from the fourth generation (4G) to the fifth generation (5G), the new network systems pose diverse challenges. On the one hand, seamless handoff is expected to integrate universal access across various network mechanisms. On the other hand, a variety of 5G technologies will complement each other to provide ubiquitous high-speed wireless connectivity. Because current wireless networks cannot flexibly support handoff among Wireless Access in Vehicular Environments (WAVE), WiMAX, and LTE, this paper proposes an advanced handoff algorithm to solve the problem. First, the received signal strength is classified, and the vehicle speed and data rate under different channel conditions are optimized; the optimal network is then selected for handoff. Simulation results show that the proposed algorithm adapts well to high-speed environments, guarantees flexible and reasonable vehicle access to a variety of networks, and effectively prevents ping-pong handoffs and link access failures.
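The abstract classifies received signal strength and then selects the optimal network among WAVE, WiMAX, and LTE, without giving the exact criteria. The sketch below shows one conventional way such a decision is made: per-technology RSS thresholds plus a hysteresis margin, which is the standard device for suppressing ping-pong handoffs. All thresholds are illustrative assumptions, not values from the paper.

```python
# Illustrative RSS-based network selection with hysteresis (not the paper's algorithm).

MIN_RSS_DBM = {"WAVE": -85, "WiMAX": -90, "LTE": -100}  # assumed usability thresholds
HYSTERESIS_DB = 5  # a candidate must beat the serving network by this margin

def select_network(current, rss_dbm):
    """current: name of serving network or None; rss_dbm: dict name -> measured RSS."""
    usable = {n: r for n, r in rss_dbm.items() if r >= MIN_RSS_DBM[n]}
    if not usable:
        return current  # nothing usable: stay put rather than fail the link
    best = max(usable, key=usable.get)
    if current in usable and usable[best] < usable[current] + HYSTERESIS_DB:
        return current  # margin too small: avoid a ping-pong handoff
    return best

print(select_network("WAVE", {"WAVE": -83, "WiMAX": -80, "LTE": -79}))  # stays on WAVE
print(select_network("WAVE", {"WAVE": -92, "WiMAX": -80, "LTE": -95}))  # hands off to WiMAX
```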
4

Zain ul Abideen, Muhammad, Shahzad Saleem, and Madiha Ejaz. "VPN Traffic Detection in SSL-Protected Channel." Security and Communication Networks 2019 (October 29, 2019): 1–17. http://dx.doi.org/10.1155/2019/7924690.

Abstract:
In recent times, secure web communication protocols such as HTTPS (Hypertext Transfer Protocol Secure) are widely used instead of plain protocols like HTTP (Hypertext Transfer Protocol). HTTPS provides end-to-end encryption between the user and the service. Nowadays, organizations use network firewalls and/or intrusion detection and prevention systems (IDPS) to analyze network traffic in order to detect and protect against attacks and vulnerabilities. Depending on the size of the organization, these devices may differ in their capabilities. Simple network intrusion detection systems (NIDS) and firewalls generally have no feature to inspect HTTPS or encrypted traffic, so they rely on unencrypted traffic to manage the encrypted payload of the network. Recent and powerful next-generation firewalls have a Secure Sockets Layer (SSL) inspection feature, but they are expensive and may not be suitable for every organization. A virtual private network (VPN) is a service that hides real traffic by creating an SSL-protected channel between the user and a server; every Internet activity is then performed inside the established SSL tunnel. A user inside the network may use VPN services with malicious intent or to hide his activity from the organization's network security administration. Any VPN service may be used to bypass the filters or signatures applied on network security devices. Such services may be the source of a new virus or worm injected into the network, or a gateway that facilitates information leakage. In this paper, we propose a novel approach to detect VPN activity inside the network. The proposed system analyzes the communication between the user and the server, extracts features from the network, transport, and application layers that are not encrypted, and classifies the incoming traffic as malicious (i.e., VPN traffic) or standard traffic. Network traffic is analyzed and classified using DNS (Domain Name System) packets and HTTPS-based traffic. Once traffic is classified, the connection is analyzed based on the server's IP, the connected TCP port, the domain name, and the server name inside the HTTPS connection. This helps verify legitimate connections and flags VPN-based traffic. We worked on the top five freely available VPN services and analyzed their traffic patterns; the results show successful detection of the VPN activity performed by the user. We analyzed the activity of five users inside the network who used some form of VPN service in their Internet activity. Out of a total of 729 connections made by different users, 329 connections were classified as legitimate activity, marking the remaining 400 connections as VPN-based. The proposed system is lightweight enough to keep overhead minimal, both in network and resource utilization, and requires no specialized hardware.
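The classifier described above relies on unencrypted metadata (DNS names, server IP, TCP port, and the server name carried in the TLS handshake). The fragment below is only a schematic of that idea: it flags a connection whose DNS name or SNI matches a list of known VPN providers. The provider list and field names are invented for illustration; real traffic would be parsed from packet captures and combined with the paper's other features.

```python
# Schematic VPN-connection flagging from unencrypted connection metadata
# (server IP, port, DNS name, TLS SNI). Not the authors' full feature set.

KNOWN_VPN_DOMAINS = {"vpnprovider.example", "freevpn.example"}  # hypothetical list

def label_connection(conn):
    """conn: dict with 'dst_ip', 'dst_port', 'dns_name', 'tls_sni' (any may be None)."""
    names = {n for n in (conn.get("dns_name"), conn.get("tls_sni")) if n}
    for name in names:
        # Match the name or its registered-domain suffix against the VPN list.
        if any(name == d or name.endswith("." + d) for d in KNOWN_VPN_DOMAINS):
            return "vpn"
    if conn.get("dst_port") not in (80, 443) and not names:
        return "suspect"   # encrypted traffic to an unnamed host on an odd port
    return "standard"

print(label_connection({"dst_ip": "203.0.113.7", "dst_port": 443,
                        "dns_name": "gw1.vpnprovider.example", "tls_sni": None}))  # vpn
```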
5

Ruliyanta, Ruliyanta, Mohd Riduan Ahmad, and Azmi Awang Md Isa. "Adaptive Wi-Fi offloading schemes in heterogeneous networks, the survey." Indonesian Journal of Electrical Engineering and Computer Science 28, no. 1 (October 1, 2022): 254. http://dx.doi.org/10.11591/ijeecs.v28.i1.pp254-268.

Abstract:
At present, demand for data traffic is growing tremendously. The growth of smartphone technology offers new applications. On the other hand, growth in cellular network access infrastructure has not been able to keep up with the increasing demand for data services. For this reason, Wi-Fi offloading is needed: cellular users use Wi-Fi access for their data needs. In 2016, global data communication traffic grew by 63%. Many researchers have proposed adaptive wireless fidelity (Wi-Fi) offloading (AAWO) algorithms to transfer data in heterogeneous networks. In this study, the proposed adaptive incentive schemes are classified to obtain adaptive schemes based on cost, energy, service quality, and other criteria. The survey results show that no adaptive algorithm based on quality of experience (QoE) has yet been proposed. This provides an opportunity for further research in which the Wi-Fi offloading scheme takes the user's perspective or user-experience options into account. In addition, open research directions include the use of artificial intelligence and machine learning methods as adaptive methods.
6

Khafidin, Ahmad, Tatyantoro Andrasto, and Suryono Suryono. "Implementation flow control to improve quality of service on computer networks." Indonesian Journal of Electrical Engineering and Computer Science 16, no. 3 (December 1, 2019): 1474. http://dx.doi.org/10.11591/ijeecs.v16.i3.pp1474-1481.

Abstract:
Quality of Service (QoS) is the collective effect of service performances, which determines the degree of satisfaction of a user of the service; in other words, QoS is the ability of a network to provide good service. QoS aims to provide different qualities of service for various needs in an IP network. The QoS parameters that can be used to analyze data communication services are jitter, packet loss, throughput, and delay. The quality of these parameters in a network is affected by congestion, which occurs when there is an excessive queue in the network. Congestion can be prevented by implementing flow control, a method of controlling the flow of data packets in a network; by controlling the packet flow, QoS can be improved. This study measures the QoS of the internet network at the Faculty of Engineering, State University of Semarang, using these QoS parameters, and then implements the token bucket method as a flow control mechanism to improve QoS. After measurement and data analysis, the internet network at the Faculty of Engineering had a QoS score of 3.5 (87.5%), classified in the "satisfying" category. During the measurements, performance degraded at an access point with a 150 Mbps data rate and many connected users: a delay of 9.0 ms, jitter of 0.046 ms, packet loss of 16.6%, and throughput of 1,293,407 bps. After the token bucket was applied as a flow control mechanism, simulated in Graphical Network Simulator 3, the internet network reached a QoS score of 3.75 (93.75%), again classified in the "satisfying" category. Furthermore, the throughput percentage obtained on the network with flow control was 62%, compared with 41% on the existing network.
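Since the token bucket is the flow-control mechanism applied in the study, a compact, generic token-bucket shaper is sketched below for reference. It is a textbook implementation, not the configuration the authors simulated in Graphical Network Simulator 3, and the rate and burst values are arbitrary.

```python
import time

class TokenBucket:
    """Generic token-bucket shaper: tokens accrue at `rate_bps` per second up to
    `capacity_bits`; a packet may pass only if enough tokens are available."""
    def __init__(self, rate_bps, capacity_bits):
        self.rate = rate_bps
        self.capacity = capacity_bits
        self.tokens = capacity_bits
        self.last = time.monotonic()

    def allow(self, packet_bits):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bits <= self.tokens:
            self.tokens -= packet_bits
            return True       # conforming packet: forward it
        return False          # non-conforming: queue or drop it

# Shape to ~1 Mbit/s with an 8 kbit burst allowance.
bucket = TokenBucket(rate_bps=1_000_000, capacity_bits=8_000)
print(bucket.allow(8_000 * 8))   # 8 kB packet (64 kbit) exceeds the burst -> False
print(bucket.allow(1_000 * 8))   # 1 kB packet fits within the burst -> True
```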
7

Kim, Min-Gu, Hoon Ko, and Sung Bum Pan. "A Study on User Recognition Using 2D ECG Image Based on Ensemble Networks for Intelligent Vehicles." Wireless Communications and Mobile Computing 2019 (February 3, 2019): 1–11. http://dx.doi.org/10.1155/2019/6458719.

Abstract:
The IoT-enabled smart car era is expected to begin in the near future as convergence between cars and IT accelerates. Current smart cars can provide various information and services needed by the occupants via wearable devices or the Vehicle-to-Everything (V2X) communication environment. In order to provide such services, a system that analyzes wearable device information on the smart car platform needs to be designed. This paper studies a real-time user recognition method using 2D ECG (electrocardiogram) images, a biometric signal that can be obtained from wearable devices. ECG signals can be classified either by a fiducial-point method that detects feature points or by a non-fiducial-point method based on changes over time. In the proposed algorithm, a CNN-based ensemble network was designed to improve performance by overcoming problems such as overfitting that occur in a single network. Test results show that 2D ECG image-based user recognition accuracy improved by 1%~1.7% for the fiducial-point method and by 0.9%~2% for the non-fiducial-point method. By showing 13% higher performance than a single network, in which the recognition rate drops because classes share similar characteristics, the proposed method demonstrated its suitability for a smart vehicle platform-based user recognition system that requires reliability.
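The exact ensemble architecture is not described in the abstract; the PyTorch sketch below only shows a common way to build such an ensemble, averaging the softmax outputs of several independently trained CNNs over 2D ECG images (soft voting). The layer sizes, image size, and number of classes are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallECGCNN(nn.Module):
    """Toy CNN over 1-channel 2D ECG images (64x64); sizes are illustrative."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, padding=1)
        self.fc = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)   # 64 -> 32
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)   # 32 -> 16
        return self.fc(x.flatten(1))

def ensemble_predict(models, images):
    """Average softmax probabilities of several CNNs (soft voting)."""
    with torch.no_grad():
        probs = torch.stack([F.softmax(m(images), dim=1) for m in models]).mean(dim=0)
    return probs.argmax(dim=1)

models = [SmallECGCNN() for _ in range(3)]   # normally each member is trained separately
batch = torch.randn(4, 1, 64, 64)            # 4 dummy ECG images
print(ensemble_predict(models, batch))       # predicted user IDs for the batch
```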
8

Keti, Faris, Salih M. S. Atroshey, and Jalil A. Hamadamin. "A REVIEW OF NEW IMPROVEMENTS IN RESOURCE ALLOCATION PROBLEM OPTIMIZATION IN 5G USING NON-ORTHOGONAL MULTIPLE ACCESS." Academic Journal of Nawroz University 11, no. 4 (November 8, 2022): 245–54. http://dx.doi.org/10.25007/ajnu.v11n4a1308.

Abstract:
Because of the emerging requirements of new networks (fifth generation and beyond), such as support for diverse Quality of Service (QoS), low latency, and high spectral efficiency, previous and traditional generations of communication systems are becoming inadequate. Furthermore, due to the huge connectivity and ever-growing demand for diverse services and high-data-rate applications, more effective radio access techniques are required for a full-scale implementation of fifth-generation (5G) and beyond wireless systems. Researchers are therefore looking for new mechanisms to meet these demands, and one of the key techniques proposed is NOMA, owing to its capability to enhance spectral efficiency. In NOMA-based systems, the information signals of various users are superimposed at the transmitter side, exploiting differences in channel gain to serve different users simultaneously. In this study, recent papers on the resource allocation problem based on power-domain NOMA (PD-NOMA) in 5G networks are reviewed, and the objective function, the optimization methods used, and the results obtained in each analyzed paper are examined. In addition, the discussed resource allocation problems are classified into optimal-rate problems and power/energy-efficiency problems, and the solutions proposed for each class are analyzed. Finally, this study highlights some of the present and future challenges in this field.
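As a concrete illustration of the power-domain superposition that PD-NOMA relies on, the textbook two-user downlink rate expressions (with successive interference cancellation at the strong user) are sketched below; they are standard formulas, not results from any specific paper covered by the review.

```latex
% Two-user downlink PD-NOMA with channel gains |h_1|^2 > |h_2|^2,
% power split a_1 + a_2 = 1 (more power a_2 to the weak user), total power P, noise N_0.
\begin{align}
  R_1 &= \log_2\!\left(1 + \frac{a_1 P |h_1|^2}{N_0}\right)
        &&\text{(strong user, after SIC removes user 2's signal)}\\
  R_2 &= \log_2\!\left(1 + \frac{a_2 P |h_2|^2}{a_1 P |h_2|^2 + N_0}\right)
        &&\text{(weak user, treats user 1's signal as interference)}
\end{align}
```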
9

Nawaz Jadoon, Rab, Mohsin Fayyaz, WuYang Zhou, Muhammad Amir Khan, and Ghulam Mujtaba. "PCOI: Packet Classification‐Based Optical Interconnect for Data Centre Networks." Mathematical Problems in Engineering 2020 (July 17, 2020): 1–11. http://dx.doi.org/10.1155/2020/2903157.

Abstract:
To support cloud services, Data Centre Networks (DCNs) are constructed to have many servers and network devices, thus increasing the routing complexity and energy consumption of the DCN. The introduction of optical technology in DCNs gives several benefits related to routing control and energy efficiency. This paper presents a novel Packet Classification based Optical interconnect (PCOI) architecture for DCN which simplifies the routing process by classifying the packet at the sender rack and reduces energy consumption by utilizing the passive optical components. This architecture brings some key benefits to optical interconnects in DCNs which include (i) routing simplicity, (ii) reduced energy consumption, (iii) scalability to large port count, (iv) packet loss avoidance, and (v) all-to-one communication support. The packets are classified based on destination rack and are arranged in the input queues. This paper presents the input and output queuing analysis of the PCOI architecture in terms of mathematical analysis, the TCP simulation in NS2, and the physical layer analysis by conducting simulation in OptiSystem. The packet loss in the PCOI has been avoided by adopting the input and output queuing model. The output queue of PCOI architecture represents an M/D/32 queue. The simulation results show that PCOI achieved a significant improvement in terms of throughput and low end-to-end delay. The eye-diagram results show that a good quality optical signal is received at the output, showing a very low Bit Error Rate (BER).
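The abstract models the output queue as M/D/32 (Poisson arrivals, deterministic service, 32 servers). Closed forms for multi-server deterministic service are awkward, so the sketch below estimates the mean waiting time by directly simulating a FCFS M/D/c queue; the arrival rate, service time, and load are made-up inputs, not the paper's parameters.

```python
import heapq
import random

def md_c_wait(lam, service_time, servers, n_customers=200_000, seed=1):
    """Mean FCFS waiting time of an M/D/c queue, estimated by simulation."""
    random.seed(seed)
    free_at = [0.0] * servers          # min-heap of times at which each server frees up
    heapq.heapify(free_at)
    t, total_wait = 0.0, 0.0
    for _ in range(n_customers):
        t += random.expovariate(lam)   # Poisson arrivals
        earliest = heapq.heappop(free_at)
        start = max(t, earliest)       # wait until the earliest server is free
        total_wait += start - t
        heapq.heappush(free_at, start + service_time)
    return total_wait / n_customers

# Example: 32 output ports, 1 us fixed transmission time, 80% load (lam = 0.8 * 32 / D).
print(md_c_wait(lam=0.8 * 32 / 1e-6, service_time=1e-6, servers=32))
```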
10

Durga, S., Esther Daniel, J. Andrew Onesimu, and Yuichi Sei. "Resource Provisioning Techniques in Multi-Access Edge Computing Environments: Outlook, Expression, and Beyond." Mobile Information Systems 2022 (December 19, 2022): 1–24. http://dx.doi.org/10.1155/2022/7283516.

Abstract:
Mobile cloud computing promises a research foundation in information and communication technology (ICT). Multi-access edge computing is an intermediate solution that reduces latency by delivering cloud computing services close to IoT and mobile clients (MCs), hence addressing the performance issues of mobile cloud computing. However, the provisioning of resources is a significant and challenging process in mobile cloud-based environments as it organizes the heterogeneous sensing and processing capacities to provide the customers with an elastic pool of resources. Resource provisioning techniques must meet quality of service (QoS) considerations such as availability, responsiveness, and reliability to avoid service-level agreement (SLA) breaches. This investigation is essential because of the unpredictable change in service demands from diverse regions and the limits of MEC’s available computing resources. In this study, resource provisioning approaches for mobile cloud computing are thoroughly and comparatively studied and classified as taxonomies of previous research. The paper concludes with an insightful summary that gives recommendations for future enhancements.
11

Zaruba, Viktor, and Irina Parfentenko. "Methods of using websites in integrated business promotion of enterprises." Economy of Industry 2, no. 94 (June 25, 2021): 125–40. http://dx.doi.org/10.15407/econindustry2021.02.125.

Abstract:
The use of digital tools in marketing communications has become an obvious precondition for a successful business. At the same time, the modern concept of holistic marketing establishes that the purpose of marketing communications is to promote business of a firm, which includes not only the promotion of its goods and services, but also the management of its relations with all stakeholders. These provisions fully apply to Ukrainian industrial enterprises that seek to develop foreign sales markets. In the paper the concept of a site was chosen as the key term for research of marketing online communications. From the standpoint of their purpose, sites represent different types of social media that make it easier for their users to communicate and exchange multimedia information with each other. Websites are viewed as marketing communication channels that use certain methods to promote business activities. This raises the problem of integrating promotion channels into a single marketing communications system. The paper is devoted to the analysis of methods of promoting business activities of enterprises using social media sites and their systematization to create integrated marketing communications. We have classified websites according to the role they can play in integrated business promotion. It has been established that one should highlight the official sites of an enterprise promoting its business activities and the sites used to obtain information services. Many enterprises have two official websites: a representative one for presenting their business activities and a transactional one for e-commerce. The developers of information services sites are their owners, for whom the provision of these services constitutes the content of their business activity. These sites include social media supporting email, social networks, blogs, instant messengers. The analysis of the main methods of promotion in various social media is carried out. It allows one to establish the necessary logical connections between the promotion processes through various communication channels for their integration into a single system.
12

Koufos, Konstantinos, Karim EI Haloui, Mehrdad Dianati, Matthew Higgins, Jaafar Elmirghani, Muhammad Ali Imran, and Rahim Tafazolli. "Trends in Intelligent Communication Systems: Review of Standards, Major Research Projects, and Identification of Research Gaps." Journal of Sensor and Actuator Networks 10, no. 4 (October 12, 2021): 60. http://dx.doi.org/10.3390/jsan10040060.

Abstract:
The increasing complexity of communication systems, following the advent of heterogeneous technologies, services and use cases with diverse technical requirements, provide a strong case for the use of artificial intelligence (AI) and data-driven machine learning (ML) techniques in studying, designing and operating emerging communication networks. At the same time, the access and ability to process large volumes of network data can unleash the full potential of a network orchestrated by AI/ML to optimise the usage of available resources while keeping both CapEx and OpEx low. Driven by these new opportunities, the ongoing standardisation activities indicate strong interest to reap the benefits of incorporating AI and ML techniques in communication networks. For instance, 3GPP has introduced the network data analytics function (NWDAF) at the 5G core network for the control and management of network slices, and for providing predictive analytics, or statistics, about past events to other network functions, leveraging AI/ML and big data analytics. Likewise, at the radio access network (RAN), the O-RAN Alliance has already defined an architecture to infuse intelligence into the RAN, where closed-loop control models are classified based on their operational timescale, i.e., real-time, near real-time, and non-real-time RAN intelligent control (RIC). Different from the existing related surveys, in this review article, we group the major research studies in the design of model-aided ML-based transceivers following the breakdown suggested by the O-RAN Alliance. At the core and the edge networks, we review the ongoing standardisation activities in intelligent networking and the existing works cognisant of the architecture recommended by 3GPP and ETSI. We also review the existing trends in ML algorithms running on low-power micro-controller units, known as TinyML. We conclude with a summary of recent and currently funded projects on intelligent communications and networking. This review reveals that the telecommunication industry and standardisation bodies have been mostly focused on non-real-time RIC, data analytics at the core and the edge, AI-based network slicing, and vendor inter-operability issues, whereas most recent academic research has focused on real-time RIC. In addition, intelligent radio resource management and aspects of intelligent control of the propagation channel using reflecting intelligent surfaces have captured the attention of ongoing research projects.
13

Chansanam, Wirapong, Kanyarat Kwiecien, Marut Buranarach, and Kulthida Tuamsuk. "A Digital Thesaurus of Ethnic Groups in the Mekong River Basin." Informatics 8, no. 3 (August 9, 2021): 50. http://dx.doi.org/10.3390/informatics8030050.

Abstract:
This research aimed to construct a thesaurus of the ethnic groups in the Mekong River Basin, a compilation of controlled vocabularies in both Thai and English, on a digital platform that enables semantic search and linked open data. The research method involved four steps: (1) organization of knowledge content; (2) construction of the thesaurus; (3) development of a digital thesaurus platform; and (4) evaluation. The concepts and theories used in the research comprised knowledge organization, thesaurus construction, digital platform development, and system evaluation. The tool for developing the digital thesaurus was the TemaTres web application. The research results are: (1) 4,273 principal terms related to the ethnic groups were compiled and classified into a hierarchy up to eight levels deep; 2,596 terms were found to have hierarchical relationships and 6,858 had associative relationships; (2) the digital thesaurus platform was able to manage the controlled vocabularies related to the Mekong ethnic groups by storing both Thai and English vocabularies. When a term is retrieved, its details are displayed: broader terms, narrower terms, related terms, cross references, and scope notes. Semantic search is thus viable through applications, linked open data technology, and web services.
14

Apreh Siaw, Gladys, and John Kahuthu Gitau. "Aspects of Electronic Customer Relationship Management and Guest Satisfaction: A Perspective of 4-Star Hotels in Nairobi County, Kenya." International Journal of Technology and Management Research 5, no. 1 (April 17, 2020): 55–71. http://dx.doi.org/10.47127/ijtmr.v5i1.81.

Abstract:
Key among the aims of many service organizations is to establish and maintain stronger relationships with their customers. In recent times, organizations have been building strong communication networks with their customers by means of new electronic technologies to facilitate this process; the ultimate aim is to develop customer satisfaction, loyalty, and retention. However, customer satisfaction and loyalty have been an issue for many hotels in the hospitality industry. The purpose of this study is therefore to establish the effect of aspects of e-CRM, such as trust, convenience, and security, on customer satisfaction among classified hotels in Nairobi City County, Kenya. A descriptive cross-sectional study of 384 customers was conducted through self-administered questionnaires. All variables were measured using constructs developed from the literature. Cronbach's alpha was used to assess the reliability of the constructs, and the Pearson correlation technique was used to establish interrelationships between the study variables. The findings revealed significant direct relationships between the trust, convenience, and security of online transactions and customer satisfaction. The study recommends that the management of classified 4-star hotels ensure that their online platforms include measures such as regular review of websites and protection of customers' privacy, so that services and transactions are believable and trusted. Citation: Gladys Apreh Siaw and John Kahuthu Gitau. Aspects of Electronic Customer Relationship Management and Guest Satisfaction: A Perspective of 4-Star Hotels in Nairobi County, Kenya, 2020; 5(1): 1-17. Received: February 13, 2020. Accepted: March 31, 2020.
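The study reports construct reliability with Cronbach's alpha; for readers unfamiliar with the statistic, a minimal computation using the usual item-variance formula is sketched below. The toy ratings are invented and unrelated to the study's data.

```python
import numpy as np

def cronbach_alpha(items):
    """items: (respondents x items) array of Likert scores.
    alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Five respondents answering a three-item "trust" construct on a 1-5 scale (made up).
scores = [[4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 3, 2], [4, 4, 5]]
print(round(cronbach_alpha(scores), 2))
```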
15

Lim, Myung-Jin, Moung-Ho Yi, and Ju-Hyun Shin. "Intrinsic Emotion Recognition Considering the Emotional Association in Dialogues." Electronics 12, no. 2 (January 8, 2023): 326. http://dx.doi.org/10.3390/electronics12020326.

Abstract:
Computer communication via text messaging or Social Networking Services (SNS) has become increasingly popular, and many studies are being conducted to analyze user information or opinions and to recognize emotions using large amounts of data. Current methods for emotion recognition in dialogues require an analysis of emotion keywords or vocabulary, and dialogue data are mostly classified with a single emotion. Recently, datasets labelled with multiple emotions have emerged, but most of them are in English. For accurate emotion recognition, a method for recognizing several emotions in one sentence is required, and multi-emotion recognition research on Korean dialogue datasets is also needed. Since dialogues are exchanges between speakers, one's feelings may be changed by the words of others, and feelings, once generated, may last for a long time. Emotions are expressed not only through vocabulary but also indirectly through the dialogue itself. To improve the performance of emotion recognition, it is necessary to analyze Emotional Association in Dialogues (EAD) so as to effectively reflect the various factors that induce emotions. Therefore, in this paper, we propose a more accurate emotion recognition method that overcomes the limitations of single-emotion recognition. We implement Intrinsic Emotion Recognition (IER) to understand the meaning of a dialogue and recognize complex emotions. In addition, conversations are classified according to their characteristics, and the correlations between IER results are analyzed to derive and apply Emotional Association in Dialogues (EAD). To verify the usefulness of the proposed technique, IER with EAD is tested and evaluated. The evaluation shows that the proposed method achieves the best Micro-F1 performance, with 74.8% accuracy. Using IER to assess the EAD proposed in this paper can improve the accuracy and performance of emotion recognition in dialogues.
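The paper evaluates multi-label emotion recognition with Micro-F1. The snippet below shows how Micro-F1 is computed by pooling true positives, false positives, and false negatives over all labels; the example labels are invented, not from the Korean dialogue dataset.

```python
def micro_f1(y_true, y_pred):
    """y_true, y_pred: lists of label sets (multi-label emotions per utterance)."""
    tp = sum(len(t & p) for t, p in zip(y_true, y_pred))
    fp = sum(len(p - t) for t, p in zip(y_true, y_pred))
    fn = sum(len(t - p) for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Made-up example: two dialogue turns, each carrying one or two emotions.
gold = [{"joy"}, {"sadness", "anger"}]
pred = [{"joy", "surprise"}, {"sadness"}]
print(round(micro_f1(gold, pred), 3))  # tp=2, fp=1, fn=1 -> precision=recall=2/3 -> 0.667
```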
16

Abdellaoui , Abdelkader. "Social changes, changes in spaces: what management strategy for cities;the case of the city of Laghouat(Algeria)." Sociology International Journal 6, no. 3 (May 16, 2022): 84–89. http://dx.doi.org/10.15406/sij.2022.06.000268.

Abstract:
With the communication highways, information is beginning, little by little, to escape from the traditional centres of political, military, business, or even religious power. In a society that believes it is gradually freeing itself from these powers, doubt begins to settle in, a number of questions arise, and a new breed of commentators, youtubers, influencers, interpreters, and preachers appears in all areas of social life, trying to draw you into their context, their culture, or their fashion. Moreover, the globalization of exchanges, relations, transport, financial networks, and marketing implicitly and slyly calls into question the concepts of State, Nation, and Community, and the very belonging of the individual to a "group" as part of a united whole. The citizen is becoming more and more an individual connected to his telephone and isolated from the rest. With the decrease in resources and the increase in constraints of all kinds, people wonder about their past, their future, their present, and the malaise they are going through. Society wonders about the modes of governance imposed on it, about the many problems it faces daily, and about the solutions that cannot be offered to it. It looks elsewhere, compares itself to other societies, and begins to dream, to hope and, very quickly, to express its anger. The viable geographical space that is shrinking from year to year, the anarchic and rapid expansion of agglomerations, and the resources that are running out are all sources of additional concern for societies that are searching for themselves and no longer recognize themselves in an organizational system they no longer understand, tossed as they are between a protective past, a tempting present, and an elusive future. This article attempts, for the city of Laghouat, to identify the difficulties to be overcome for a new management of spaces and for societal education. It then proposes the implementation of a management strategy based on the creation of a complete database of the physical spaces, the infrastructure, the networks (including urban transport networks), and the services involved in the day-to-day management of the city.
17

Jallouli, Rim, and Safa Kaabi. "Mapping Top Strategic E-commerce Technologies in the Digital Marketing Literature." Journal of Telecommunications and the Digital Economy 10, no. 3 (September 26, 2022): 149–64. http://dx.doi.org/10.18080/jtde.v10n3.554.

Abstract:
The increasing use of e-commerce technologies has been studied in several fields and from different perspectives: technological, economic, organizational and social: hence, the need for a literature review that provides a map of top strategic e-commerce technologies from a managerial perspective and, more specifically, for digital marketing research. This paper aims to provide researchers with a comprehensive overview of the range of e-commerce technologies that have had a significant role in shaping digital marketing strategies. Based on a thematic analysis, e-commerce technologies were classified through eleven categories. The objective is to reveal how each set of technologies affected the different digital marketing strategies. Both descriptive and clustering analyses show that the most evoked technologies in the digital marketing literature are Information and Communication Technologies and platforms. Results outline the growing interest in artificial intelligence technologies. Moreover, this literature review reveals how digital marketing research has focused on technology-enabled segmentation and targeting strategies, along with the use of social platforms and the development of new products and services. The scarcity of marketing papers studying the impact of cloud technologies, IoT, blockchain and data analytics orient researchers towards exploring further the potential of these technologies for digital strategies.
18

Adwan, Ahmad Al, and Raed Aladwan. "Use of artificial intelligence system to predict consumers’ behaviors." International Journal of Data and Network Science 6, no. 4 (2022): 1223–32. http://dx.doi.org/10.5267/j.ijdns.2022.6.011.

Abstract:
In online shopping enterprises, AI technology has been widely used to provide accurate and fast personalized consumer services. This research examines the use of AI technology in the e-commerce business, specifically in online enterprises, to determine its different effects. The study was conducted in Jordan and involved about 230 participants. It evaluated different impacts of AI, such as e-payment and the stimulation of consumers' sentiments. The study used the Stimulus-Organism-Response (SOR) empirical model, which holds that the examination of human processes differs from machine assessment. The model classified the AI technology experienced by customers when they shop online. Online purchasing behaviour can be influenced by insight, accuracy, and the interaction experience. Perceived value was used as a mediating variable, from the perspectives of perceived hedonic and utilitarian value. The research used empirical analysis tools such as SEM and SPSS to analyze the data on the effects of the three dimensions. The results indicated that the accuracy, interactive experience, and insight of the AI technology significantly affected customers' perceived hedonic and utilitarian values.
19

Redkina, N. S. "Global trends of libraries development: optimism vs pessimism (foreign literature review) Part 1." Bibliosphere, no. 4 (December 30, 2018): 87–94. http://dx.doi.org/10.20913/1815-3186-2018-4-87-94.

Abstract:
The dynamic development of the external technological environment, on the one hand, impacts libraries and calls their future existence into question; on the other hand, it helps libraries to work more productively, increases their competitiveness and efficiency, expands the range of social projects, and develops new ways and forms of working with users, taking into account their preferences in information and services. The review is based on over 500 articles retrieved from the world's largest databases (Google Scholar, Web of Science, Scopus, etc.) that discuss trends and the future development of libraries. The documents were then classified according to sections and types of libraries, as well as advanced technologies. Examples of information technologies were collected and reviewed, along with articles related to the implementation of information technologies in creating new services, with the emphasis on those that may affect libraries in the future. The latest information technologies that can be applied to the next-generation library have been studied. The material is structured in blocks and presented in two parts. The first part presents the following sections: 1) challenges of the external environment and the future of libraries; 2) modern information technologies in library development (mobile technologies and applications, cloud computing, big data, the internet of things, virtual and augmented reality, technical innovations, etc.); 3) the Library 4.0 concept as a new direction for library development. The second part of the review article (Bibliosphere, 2019, 1) will touch on the following issues: 1) user preferences and new library services (software for information literacy development, research data management, web archiving, etc.); 2) libraries as centres of intellectual leisure, communication platforms, and places for learning, co-working, renting equipment, creativity, work, scientific experiments and leisure; 3) smart buildings and smart libraries; 4) future optimism. Based on the content analysis of publications, it is concluded that libraries should not only accumulate resources and provide access to them, but also renew existing approaches to the forms and content of their activities, as well as their goals, missions and development prospects, using various hardware and software, cloud computing technologies, mobile technologies and apps, social networks, etc.
20

Chetty, Swarna Bindu, Hamed Ahmadi, Sachin Sharma, and Avishek Nag. "Virtual Network Function Embedding under Nodal Outage Using Deep Q-Learning." Future Internet 13, no. 3 (March 23, 2021): 82. http://dx.doi.org/10.3390/fi13030082.

Abstract:
With the emergence of various types of applications, such as delay-sensitive applications, future communication networks are expected to be increasingly complex and dynamic. Network Function Virtualization (NFV) provides the necessary support for efficient management of such complex networks by virtualizing network functions and placing them on shared commodity servers. However, one of the critical issues in NFV is resource allocation for highly complex services, a problem that is NP-hard. To address it, our work investigates the potential of Deep Reinforcement Learning (DRL) as a swift yet accurate approach (compared with integer linear programming) for deploying Virtualized Network Functions (VNFs) under several Quality-of-Service (QoS) constraints such as latency, memory, CPU, and failure recovery requirements. The failure recovery requirements focus on the node-outage problem, where an outage can be due either to a disaster or to the unavailability of network topology information (e.g., owing to proprietary and ownership issues). In DRL, we adopt a Deep Q-Learning (DQL) based algorithm in which the primary network estimates both the action-value function Q and its prediction target, which causes the Q-value updates to diverge. This divergence grows with larger action and state spaces, causing inconsistency in learning and an inaccurate output. To overcome it, our work adopts the well-known remedies of introducing a target neural network and experience replay into DQL. The constructed model is simulated for two real network topologies, the Netrail topology and the BtEurope topology, with various node capacities (e.g., CPU cores, VNFs per core), link capacities (e.g., bandwidth and latency), several VNF Forwarding Graph (VNF-FG) complexities, and degrees of nodal outage from 0% to 50%. We conclude that, as network density, nodal capacity, or VNF-FG complexity increases, the model takes considerably more computation time to reach the desired results; moreover, as VNF-FG complexity rises, resources are depleted much faster. In terms of nodal outage, our model provided a Service Acceptance Rate (SAR) of almost 70-90% even with a 50% nodal outage for certain combinations of scenarios.
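The remedy described above, a separate target network plus experience replay, is the standard DQN recipe; the sketch below shows that pattern in PyTorch against a dummy environment. The network sizes, toy transition model, and hyperparameters are assumptions and are unrelated to the VNF-embedding problem itself.

```python
import random
from collections import deque

import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS = 8, 4          # illustrative sizes, not the VNF state/action space

def make_qnet():
    return nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))

policy_net = make_qnet()
target_net = make_qnet()
target_net.load_state_dict(policy_net.state_dict())   # target starts as a frozen copy
optimizer = torch.optim.Adam(policy_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)                          # experience replay buffer
GAMMA, BATCH, EPSILON, SYNC_EVERY = 0.99, 64, 0.1, 200

def dummy_env_step(state, action):
    """Stand-in for the embedding environment: random next state, reward, done flag."""
    return torch.randn(STATE_DIM), random.random(), random.random() < 0.05

state = torch.randn(STATE_DIM)
for step in range(1, 2001):
    # epsilon-greedy action from the policy network
    if random.random() < EPSILON:
        action = random.randrange(N_ACTIONS)
    else:
        with torch.no_grad():
            action = int(policy_net(state).argmax())
    next_state, reward, done = dummy_env_step(state, action)
    replay.append((state, action, reward, next_state, done))
    state = torch.randn(STATE_DIM) if done else next_state

    if len(replay) >= BATCH:
        batch = random.sample(replay, BATCH)           # break temporal correlation
        s, a, r, s2, d = zip(*batch)
        s, s2 = torch.stack(s), torch.stack(s2)
        a = torch.tensor(a)
        r = torch.tensor(r, dtype=torch.float32)
        d = torch.tensor(d, dtype=torch.float32)
        q = policy_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
        with torch.no_grad():                          # targets come from the frozen copy
            target = r + GAMMA * (1 - d) * target_net(s2).max(1).values
        loss = nn.functional.mse_loss(q, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    if step % SYNC_EVERY == 0:                         # periodic target-network sync
        target_net.load_state_dict(policy_net.state_dict())
```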
21

Fushteі, Oksana. "SOCIAL PREVENTION OF CYBERBULING AMONG STUDENTS." Scientific Bulletin of Uzhhorod University. Series: «Pedagogy. Social Work», no. 1(50) (May 31, 2022): 294–97. http://dx.doi.org/10.24144/2524-0609.2022.50.294-297.

Abstract:
The problem of violence in virtual space is very relevant in Ukraine. The country's unstable situation and the lack of proper punishment for cyberbullying, and of its prevention, are causing an increase in cases of this phenomenon. Cyberbullying is a problem that is spreading rapidly and causing irreparable damage to young people. The purpose of the article is to highlight the problem of cyberbullying and to determine the main directions of a social worker's work on the social prevention of cyberbullying among students. To achieve the goal of the study, a set of methods was used: analysis of the socio-pedagogical, psychological, and pedagogical literature; comparison of scientific sources, which became the basis for determining the degree of scientific development of the problem; systematization, in order to define and clarify the basic concepts of the research and the existing scientific approaches to solving the specified problem; and generalization, to formulate the final provisions and conclusions. Theoretical study and empirical research allow us to draw conclusions regarding the concept of "cyberbullying" and to examine its occurrence, causes, consequences, and the main directions of its social prevention. Cyberbullying is a type of bullying whose specificity is the use of electronic communications, and it can occur in various forms. It can be classified into eight types: online disputes (flaming); constant debilitating attacks (harassment); slander and denigration; impersonation (posing as another person); disclosure and dissemination of confidential information; exclusion; cyber-harassment (cyberstalking); and happy slapping. Preventive work is carried out in two directions. The first is associated with the development of technical means that restrict unwanted content (filters, censorship), with the "complain" buttons provided by social networks and websites, and with privacy settings for personal accounts; it also includes creating a system of rapid response by content providers and services, law enforcement agencies, and communication operators to illegal activities on the Internet. The second direction of cyberbullying prevention involves teaching Internet users the basic rules of security and of correct behaviour towards other members of the community. On the basis of the theoretical study and empirical research, the main directions of the social prevention of cyberbullying were identified, and the project "Social prevention of cyberbullying among young people in the Internet space" was developed.
22

Nayyar, Anand, Rudra Rameshwar, and Piyush Kanti Dutta. "Special Issue on Recent Trends and Future of Fog and Edge Computing, Services and Enabling Technologies." Scalable Computing: Practice and Experience 20, no. 2 (May 2, 2019): iii—vi. http://dx.doi.org/10.12694/scpe.v20i2.1558.

Abstract:
Recent Trends and Future of Fog and Edge Computing, Services, and Enabling Technologies Cloud computing has been established as the most popular as well as suitable computing infrastructure providing on-demand, scalable and pay-as-you-go computing resources and services for the state-of-the-art ICT applications which generate a massive amount of data. Though Cloud is certainly the most fitting solution for most of the applications with respect to processing capability and storage, it may not be so for the real-time applications. The main problem with Cloud is the latency as the Cloud data centres typically are very far from the data sources as well as the data consumers. This latency is ok with the application domains such as enterprise or web applications, but not for the modern Internet of Things (IoT)-based pervasive and ubiquitous application domains such as autonomous vehicle, smart and pervasive healthcare, real-time traffic monitoring, unmanned aerial vehicles, smart building, smart city, smart manufacturing, cognitive IoT, and so on. The prerequisite for these types of application is that the latency between the data generation and consumption should be minimal. For that, the generated data need to be processed locally, instead of sending to the Cloud. This approach is known as Edge computing where the data processing is done at the network edge in the edge devices such as set-top boxes, access points, routers, switches, base stations etc. which are typically located at the edge of the network. These devices are increasingly being incorporated with significant computing and storage capacity to cater to the need for local Big Data processing. The enabling of Edge computing can be attributed to the Emerging network technologies, such as 4G and cognitive radios, high-speed wireless networks, and energy-efficient sophisticated sensors. Different Edge computing architectures are proposed (e.g., Fog computing, mobile edge computing (MEC), cloudlets, etc.). All of these enable the IoT and sensor data to be processed closer to the data sources. But, among them, Fog computing, a Cisco initiative, has attracted the most attention of people from both academia and corporate and has been emerged as a new computing-infrastructural paradigm in recent years. Though Fog computing has been proposed as a different computing architecture than Cloud, it is not meant to replace the Cloud. Rather, Fog computing extends the Cloud services to network edges for providing computation, networking, and storage services between end devices and data centres. Ideally, Fog nodes (edge devices) are supposed to pre-process the data, serve the need of the associated applications preliminarily, and forward the data to the Cloud if the data are needed to be stored and analysed further. Fog computing enhances the benefits from smart devices operational not only in network perimeter but also under cloud servers. Fog-enabled services can be deployed anywhere in the network, and with these services provisioning and management, huge potential can be visualized to enhance intelligence within computing networks to realize context-awareness, high response time, and network traffic offloading. Several possibilities of Fog computing are already established. For example, sustainable smart cities, smart grid, smart logistics, environment monitoring, video surveillance, etc. 
To design and implementation of Fog computing systems, various challenges concerning system design and implementation, computing and communication, system architecture and integration, application-based implementations, fault tolerance, designing efficient algorithms and protocols, availability and reliability, security and privacy, energy-efficiency and sustainability, etc. are needed to be addressed. Also, to make Fog compatible with Cloud several factors such as Fog and Cloud system integration, service collaboration between Fog and Cloud, workload balance between Fog and Cloud, and so on need to be taken care of. It is our great privilege to present before you Volume 20, Issue 2 of the Scalable Computing: Practice and Experience. We had received 20 Research Papers and out of which 14 Papers are selected for Publication. The aim of this special issue is to highlight Recent Trends and Future of Fog and Edge Computing, Services and Enabling technologies. The special issue will present new dimensions of research to researchers and industry professionals with regard to Fog Computing, Cloud Computing and Edge Computing. Sujata Dash et al. contributed a paper titled “Edge and Fog Computing in Healthcare- A Review” in which an in-depth review of fog and mist computing in the area of health care informatics is analysed, classified and discussed. The review presented in this paper is primarily focussed on three main aspects: The requirements of IoT based healthcare model and the description of services provided by fog computing to address then. The architecture of an IoT based health care system embedding fog computing layer and implementation of fog computing layer services along with performance and advantages. In addition to this, the researchers have highlighted the trade-off when allocating computational task to the level of network and also elaborated various challenges and security issues of fog and edge computing related to healthcare applications. Parminder Singh et al. in the paper titled “Triangulation Resource Provisioning for Web Applications in Cloud Computing: A Profit-Aware” proposed a novel triangulation resource provisioning (TRP) technique with a profit-aware surplus VM selection policy to ensure fair resource utilization in hourly billing cycle while giving the quality of service to end-users. The proposed technique use time series workload forecasting, CPU utilization and response time in the analysis phase. The proposed technique is tested using CloudSim simulator and R language is used to implement prediction model on ClarkNet weblog. The proposed approach is compared with two baseline approaches i.e. Cost-aware (LRM) and (ARMA). The response time, CPU utilization and predicted request are applied in the analysis and planning phase for scaling decisions. The profit-aware surplus VM selection policy used in the execution phase for select the appropriate VM for scale-down. The result shows that the proposed model for web applications provides fair utilization of resources with minimum cost, thus provides maximum profit to application provider and QoE to the end users. 
Akshi Kumar and Abhilasha Sharma, in the paper titled “Ontology driven Social Big Data Analytics for Fog enabled Sentic-Social Governance,” utilized a semantic knowledge model to investigate public opinion towards the adoption of fog-enabled services for governance and to comprehend the significance of the two s-components (sentic and social) in the aforesaid structure, which together visualize fog-enabled Sentic-Social Governance. Results using conventional TF-IDF (Term Frequency-Inverse Document Frequency) feature extraction are empirically compared with ontology-driven TF-IDF feature extraction to find the opinion mining model with optimal accuracy. The results show that ontology-driven feature extraction for polarity classification outperforms the traditional TF-IDF method when validated over baseline supervised learning algorithms, with an average improvement of 7.3% in accuracy and a reduction of approximately 38% in the number of features. Avinash Kaur and Pooja Gupta, in the paper titled “Hybrid Balanced Task Clustering Algorithm for Scientific workflows in Cloud Computing,” proposed a novel hybrid balanced task clustering algorithm that uses the impact factor of a workflow along with its structure; with this technique, tasks can be clustered either vertically or horizontally based on the value of the impact factor. The proposed algorithm was tested on WorkflowSim, an extension of CloudSim, using a DAG model of the workflow. It was evaluated on workflow execution time and performance gain and compared with four clustering methods: Horizontal Runtime Balancing (HRB), Horizontal Clustering (HC), Horizontal Distance Balancing (HDB) and Horizontal Impact Factor Balancing (HIFB). The results show that the proposed algorithm improves workflow makespan by roughly 5-10%, depending on the workflow used. Pijush Kanti Dutta Pramanik et al., in the paper titled “Green and Sustainable High-Performance Computing with Smartphone Crowd Computing: Benefits, Enablers and Challenges,” presented a comprehensive statistical survey of the various commercial CPUs, GPUs and SoCs for smartphones, confirming the capability of smartphone crowd computing (SCC) as an alternative to HPC. An exhaustive survey is also presented on the present state and promising future of research on smartphone batteries and alternative power sources, which will allow users to contribute their smartphones to SCC without worrying about the battery running out. Dhanapal and P. Nithyanandam, in the paper titled “The Slow HTTP Distributed Denial of Service (DDOS) Attack Detection in Cloud,” proposed a novel method to detect slow HTTP DDoS attacks in the cloud, which would otherwise consume all available server resources and make the service unavailable to real users. The proposed method is implemented on the OpenStack cloud platform with the slowHTTPTest tool, and the results show that it detects the attack efficiently. Mandeep Kaur and Rajni Mohana, in the paper titled “Static Load Balancing Technique for Geographically partitioned Public Cloud,” proposed a novel approach to load balancing in a partitioned public cloud that combines centralized and decentralized approaches, assuming the presence of a fog layer. A load balancer entity performs decentralized load balancing at each partition, while a controller entity balances the overall load across partitions at the centralized level. Results are compared with the First Come First Serve (FCFS) and Shortest Job First (SJF) algorithms (a minimal sketch of these baseline schedulers follows this abstract), comparing the waiting time, finish time and actual run time of tasks. To reduce the number of unhandled jobs, a new load state is introduced that checks load beyond the conventional load states. The major objective of this approach is to reduce the need for runtime virtual machine migration and the wastage of resources that may occur due to predefined threshold values. Mukta and Neeraj Gupta, in the paper titled “Analytical Available Bandwidth Estimation in Wireless Ad-Hoc Networks considering Mobility in 3-Dimensional Space,” propose an analytical approach named Analytical Available Bandwidth Estimation Including Mobility (AABWM) to estimate the available bandwidth (ABW) on a link. The major contributions of the work are: (i) mathematical models based on renewal theory to calculate the collision probability of data packets, which keeps the process simple and accurate, and (ii) consideration of mobility in 3-D space to predict link failure and provide accurate admission control. The researchers used the NS-2 simulator to compare AABWM with AODV, ABE, IAB and IBEM on throughput, packet loss ratio and data delivery; the results show that AABWM outperforms the other approaches. R. Sridharan and S. Domnic, in the paper titled “Placement Strategy for Intercommunicating Tasks of an Elastic Request in Fog-Cloud Environment,” proposed a novel heuristic algorithm, IcAPER (Inter-communication Aware Placement for Elastic Requests). The algorithm uses a machine in the network neighborhood for placement once the current resource is fully utilized by the application. The performance of IcAPER is compared with the First Come First Serve (FCFS), Random and First Fit Decreasing (FFD) algorithms using the CloudSim simulator on (a) resource utilization, (b) resource fragmentation and (c) the number of requests whose intercommunicating tasks are placed on the same PM. Simulation results show that IcAPER maps 34% more tasks onto the same PM, increases resource utilization by 13% and decreases resource fragmentation by 37.8% when compared with the other algorithms. Velliangiri S. et al., in the paper titled “Trust factor based key distribution protocol in Hybrid Cloud Environment,” proposed a novel security protocol comprising two stages. In the first stage, groups are created using trust factors and a key distribution security protocol is developed; this stage handles communication among the virtual machine nodes, with several groups created based on clustering and trust-factor methods. In the second stage, an ECC (Elliptic Curve Cryptography) based distribution security protocol is developed. The performance of the Trust Factor Based Key Distribution protocol is compared with the existing ECC and Diffie-Hellman key exchange techniques, and the results show that the proposed protocol provides more secure communication and better resource utilization than both in the hybrid cloud. Vivek Kumar Prasad et al., in the paper titled “Influence of Monitoring: Fog and Edge Computing,” discussed the various techniques involved in monitoring for edge and fog computing and their advantages, together with a case study based on a healthcare monitoring system. Avinash Kaur et al. elaborated a comprehensive view of the existing data placement schemes proposed in the literature for cloud computing; the schemes are further classified based on their capabilities and objectives, and a comparison of the data placement schemes is provided. Parminder Singh et al. presented a comprehensive review of auto-scaling techniques for web applications in cloud computing; a complete taxonomy of the reviewed articles is provided across varied parameters such as auto-scaling approach, resources, monitoring tool, experiment, workload and metric. Simar Preet Singh et al., in the paper titled “Dynamic Task Scheduling using Balanced VM Allocation Policy for Fog Computing Platform,” proposed a novel scheme to improve user contentment by improving the cost-to-operation-length ratio, reducing customer churn and boosting operational revenue. The proposed scheme reduces the queue size by effectively allocating resources, which results in quicker completion of user workflows. The results are evaluated against a state-of-the-art, non-power-aware task scheduling mechanism and analyzed using the parameters energy, SLA infringement and workflow execution delay. The performance of the proposed scheme was analyzed in experiments specifically designed to examine various aspects of workflow processing on the given fog resources. The LRR model (35.85 kWh) was found most efficient on the basis of average energy consumption in comparison to LR (34.86 kWh), THR (41.97 kWh), MAD (45.73 kWh) and IQR (47.87 kWh). The LRR model was also observed as the leader on the basis of the number of VM migrations, with LRR (2520 VMs) the best contender on the mean number of VM migrations in comparison with LR (2555 VMs), THR (4769 VMs), MAD (5138 VMs) and IQR (5352 VMs).
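The load-balancing study summarized above benchmarks against the classical FCFS and SJF schedulers and reports waiting and finish times. The following is a minimal, illustrative Python sketch of those two baseline policies on a hypothetical list of task runtimes; it is not the authors' implementation, and the task data are invented for illustration.

```python
# Minimal sketch of the FCFS and SJF baseline schedulers referenced above.
# Waiting time = time spent before a task starts; finish time = waiting time
# plus runtime (single server, all tasks already queued).

def schedule(runtimes, policy="FCFS"):
    """Return (waiting_times, finish_times) for a single-queue schedule."""
    order = sorted(runtimes) if policy == "SJF" else list(runtimes)
    waiting, finish, clock = [], [], 0
    for rt in order:
        waiting.append(clock)   # task waits until the queue reaches it
        clock += rt             # server is busy for the task's runtime
        finish.append(clock)
    return waiting, finish

if __name__ == "__main__":
    tasks = [8, 3, 10, 2, 5]    # hypothetical task runtimes (seconds)
    for policy in ("FCFS", "SJF"):
        waiting, finish = schedule(tasks, policy)
        print(policy,
              "avg waiting:", sum(waiting) / len(waiting),
              "avg finish:", sum(finish) / len(finish))
```

SJF lowers the average waiting time at the cost of needing runtime estimates, which is why these two policies serve as the waiting-time and finish-time baselines reported above.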
APA, Harvard, Vancouver, ISO, and other styles
23

Ben Brahim, Ghassen, Nazeeruddin Mohammad, Wassim El-Hajj, Gerard Parr, and Bryan Scotney. "Performance evaluation and comparison study of adaptive MANET service location and discovery protocols for highly dynamic environments." EURASIP Journal on Wireless Communications and Networking 2022, no. 1 (January 10, 2022). http://dx.doi.org/10.1186/s13638-021-02081-4.

Full text
Abstract:
A critical requirement in Mobile Ad Hoc Networks (MANETs) is the ability to automatically discover existing services as well as their locations. Several solutions have been proposed in various communication domains, which can be classified into two categories: (1) directory-based and (2) directory-less. The former is efficient but suffers from the volume of control messages exchanged to maintain all directories in an agile environment. The latter approach avoids the cost of maintaining directories by simply broadcasting messages to discover services, which is also undesirable in MANETs. This research work builds on our prior work (Nazeeruddin et al., IFIP/IEEE International Conference on Management of Multimedia Networks and Services, Springer, Berlin, 2006), where we introduced a new efficient protocol for service discovery in MANETs (MSLD): a lightweight, robust, scalable and flexible protocol that supports node heterogeneity and dynamically adapts to network changes while not flooding the network with extra protocol messages—a major challenge in today’s network environments, such as the Internet of Things (IoT). An extensive simulation study was conducted on MSLD to: (1) initially evaluate its performance in terms of latency, service availability and overhead messages, and then (2) compare its performance to the Dir-Based, Dir-less and PDP protocols under various network conditions. For most performance metrics, simulation results show that MSLD outperforms Dir-Based, Dir-less and PDP by matching or achieving higher service availability, lower service discovery latency and considerably less communication overhead.
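The abstract above contrasts directory-based discovery (cheap lookups, costly directory maintenance) with directory-less discovery (no directories, but broadcast floods). The sketch below is a deliberately simplified, hypothetical illustration of a directory-less flood over a static topology, counting the control messages such a broadcast generates; it does not reproduce MSLD or the protocols evaluated in the paper, and the topology and service placement are invented.

```python
# Hedged illustration of directory-less service discovery: a requester floods
# a service request with a TTL; every node forwards unseen requests once.

from collections import deque

TOPOLOGY = {                      # adjacency list of a small ad hoc network
    "A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D", "E"],
    "D": ["B", "C", "F"], "E": ["C"], "F": ["D"],
}
SERVICES = {"F": "printer"}       # node F hosts the sought-after service

def flood_discovery(source, service, ttl=3):
    """Flood a request from `source`; return (providers found, messages sent)."""
    seen, providers, messages = {source}, [], 0
    queue = deque([(source, ttl)])
    while queue:
        node, hops = queue.popleft()
        if SERVICES.get(node) == service:
            providers.append(node)
        if hops == 0:
            continue
        for neighbour in TOPOLOGY[node]:
            messages += 1         # each (re)broadcast reaches one neighbour
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append((neighbour, hops - 1))
    return providers, messages

if __name__ == "__main__":
    found, cost = flood_discovery("A", "printer")
    print("providers:", found, "control messages:", cost)
```

In a directory-based scheme the same lookup would be a single query to a directory node, but every node would also have to keep its directory entry fresh, which is where the control-message overhead described above shifts.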
APA, Harvard, Vancouver, ISO, and other styles
24

Tian, Hui, Fang Peng, Hanyu Quan, and Chin-Chen Chang. "Identity-Based Public Auditing for Cloud Storage of Internet-of-Vehicles Data." ACM Transactions on Internet Technology, March 9, 2022. http://dx.doi.org/10.1145/3433543.

Full text
Abstract:
The Internet of Vehicles (IoV), with the help of cloud computing, can provide rich and powerful application services for vehicles and drivers by sharing and analysing various IoV data. However, how to ensure the integrity of multi-source, diverse IoV data outsourced to the cloud is still an open challenge. To address this concern, this paper presents an identity-based public auditing scheme for cloud storage of IoV data, which fully achieves the essential functional and security requirements, such as classified auditing, multi-source auditing and privacy protection. In particular, we design a new authenticated data structure, called the data mapping table, to track the distribution of each type of IoV data and thereby ensure fine-grained and rapid audits. Moreover, our scheme reduces the overheads of both key management and the generation of block tags. We formally prove the security of the presented scheme and evaluate its performance through comprehensive comparisons with state-of-the-art schemes designed for traditional scenarios. The theoretical analyses and experimental results demonstrate that our scheme securely and efficiently realizes public auditing for IoV data, and outperforms previous schemes in both computation and communication overheads in most cases.
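The abstract describes an authenticated "data mapping table" that tracks where each type of IoV data resides so that audits can be scoped per type. The paper's layout of that structure is not given here, so the snippet below is only a hypothetical, drastically simplified sketch of such a per-type index (with no cryptographic tags or proofs), meant to illustrate the idea of routing an audit to just the blocks of one data type.

```python
# Hypothetical, non-cryptographic sketch of a per-type "data mapping table":
# each data type maps to the (vehicle_id, block_index) records an auditor
# would challenge when performing a classified (per-type) audit.

from collections import defaultdict

class DataMappingTable:
    def __init__(self):
        self._index = defaultdict(list)   # data_type -> [(vehicle_id, block_index)]

    def add_block(self, data_type, vehicle_id, block_index):
        self._index[data_type].append((vehicle_id, block_index))

    def blocks_for_audit(self, data_type):
        """Return only the blocks relevant to auditing one data type."""
        return list(self._index.get(data_type, []))

if __name__ == "__main__":
    table = DataMappingTable()
    table.add_block("gps_trace", vehicle_id="V042", block_index=7)
    table.add_block("camera_frame", vehicle_id="V042", block_index=8)
    table.add_block("gps_trace", vehicle_id="V117", block_index=3)
    print(table.blocks_for_audit("gps_trace"))   # challenge only GPS blocks
```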
APA, Harvard, Vancouver, ISO, and other styles
25

Brown, Brian. "Will Work For Free: The Biopolitics of Unwaged Digital Labour." tripleC: Communication, Capitalism & Critique. Open Access Journal for a Global Sustainable Information Society 12, no. 2 (September 1, 2014). http://dx.doi.org/10.31269/triplec.v12i2.538.

Full text
Abstract:
This paper begins with a survey of the literature regarding a particular, yet ever more consequential and profitable, typology of digital labour: ‘free labour’ (Terranova, 2000, 2004), ‘unwaged immaterial labour’ (Brown and Quan Haase, 2012; Brown, 2013), and/or immaterial labour 2.0 (Coté & Pybus, 2007), to name a few of the more common terms. It then moves on to proffer a critical synthesis of this body of work so as to conclude with a much more theoretically nuanced definition of unwaged digital labour than that which has thus far been provided. In sum, the author argues that there are five central facets to unwaged digital labour that defines and differentiates it from its waged brethren. The first is that unwaged digital labour is fundamentally and inherently autonomous. Free of management oversight, the cooperative and creative capacities of content-generators produce massive amounts of digital artefacts that in the majority of cases also yield massive amounts of profit for the owners of Web 2.0 sites and services. The surplus value produced by this first facet refracts into the second. Following the work of Fuchs (2010, 2011, 2012, 2013), unwaged digital labour is (in the majority of circumstances) hyper-exploited. As has been argued elsewhere (Brown, 2013), this hyper-exploitation is the primary cause for recurrent ‘user’ uproar on Web 2.0 sites and services. This kind of exploitation, then, is met with the third facet of digital labour considered herein: resistance or struggle. Facile recourse to nebulous conceptions regarding the invasion of one’s privacy on eminently social networks no longer suffices in explaining these instances of ‘user’ uproar. Thus, a more nuanced consideration of the forms of resistance that occur on social media sites and services is offered. Similar, yet different, to its waged genus, the fourth facet of unwaged digital labour is that it is intrinsically collaborative, cooperative, and generative of social relationships. The differences that obtain between the orientation of the social relationships constituted by waged and unwaged digital labour respectively are indicative of political potentials that have up until this point been under-theorized. Thus, building on the four aforementioned facets, as well as the arguments put forth by Hardt and Negri regarding the biopolitical dimensions of ‘immaterial labour’ (2000, 2004, 2009), the fifth and most theoretically provocative facet is that this kind of labour is inspired, guided, and regulated by a radically different amalgam of biopolitical power relationships that point to the potentials of a commons-based political economy existing beyond the hyper-exploitative dimensions of capital.
APA, Harvard, Vancouver, ISO, and other styles
26

Bukvova, Helena. "Scientists online: A framework for the analysis of Internet profiles." First Monday, September 27, 2011. http://dx.doi.org/10.5210/fm.v16i10.3584.

Full text
Abstract:
Many scientists use the Internet to present themselves and their work. The content they create could be used to improve the awareness and communication within the scientific community. This requires a sound understanding of the contents on scientists’ profiles, especially with regard to their structure. Existing literature offers mostly basic categorisation, focusing only on single platforms. This article presents a study of scientists’ profiles on institutional and private Web pages, social networking services, blogs, and microblogs. The aim of the study was to describe structures within the profile contents. For this purpose, 79 profiles belonging to 15 German scientists were identified and analysed using the constructivist grounded theory method. The result was a framework, suitable for structuring and further analysis of scientists’ profiles. The framework describes three levels for the study of profiles: profile networks, profile instances, and content units. The content on the profiles can be classified with regard to its type, verbosity, and placement. The developed framework serves as a basic structure for further research into scientists’ online self–presentation.
APA, Harvard, Vancouver, ISO, and other styles
27

Finquelievich, Susana, and Mariana Salgado. "Introduction." Journal of Community Informatics 9, no. 3 (May 2, 2013). http://dx.doi.org/10.15353/joci.v9i3.3151.

Full text
Abstract:
Launching our Call for papers on the role of users in socio-technical innovation has been similar to throwing a bottle with a message into the sea. Who would find it, and how many researchers, in the vast shores of social studies on Information Society, would answer it? What would be the “catch”, in terms of research results, of understanding about the routes in which individuals and groups appropriate and turn information and communication technology useful for their own specific practices? The Call for Papers itself turned into a kind of practical research on how users are relevant regarding socio-technical innovation. A number of colleagues have answered this Call, proving the deep interest which exists in the Community Informatics world about the analysis of the processes through which specialists observe the innovations carried on by communities or individuals and integrate them into new products. In the last decades studies and experience have shown that users matter in regards to technological innovation. Books such as "The Co-Construction of Users and Technology[1]” analyse the creative capacity of users to shape technology in all phases, from design to implementation. Lately, citizen´s labs are also trying to integrate individuals and communities to technological innovation. They try to combine the old “collaboratory” concept launched in the 1990s in academic environments, or virtual laboratories, where scientists collaborate though networking, with the concept of citizens´ networks, in which citizens collaborate in a digital environment for various uses, and that have become freshly popular through social networks such as Facebook or Twitter. Individuals, groups and community have actively participated in the process of technological innovation and are increasingly aware of their capacity for making and changing technologies. Internet - based social networks, open source software, content creation, redesign by use, citizens ´participation in living labs, are just a few examples of people actively enlarging the original uses of information and communication technologies (ICT). The goal of this special issue is to examine, using a variety of multidisciplinary approaches, the mutual interaction between ICT and users. The authors have reflected on the hypothesis that any understanding of users must take into consideration the multiplicity of roles they play, and that the conventional distinction between users and producers is largely formal and artificial. Contributing knowledge about the process in which individuals and communities appropriate and makes information and communication technology functional for their own specific purposes is the goal of this special issue of JOCI. The objective is to advance on the subject of how communities utilize technology, meanwhile creating innovative uses. The papers published in this issue consider how users consume, modify, domesticate, design, reconfigure, and resist technological development, as well as in which ways users are changed by ICT. The papers may be classified into three main categories: Social and Technological Networks, Technological and organizational tools for innovation, Living labs experiences. Some of the key issues that are reflected upon are: - Case studies about technology appropriation and modification of ICT changes by communities. - Alternative-use hunters: analysis of the processes through which experts perceive the changes by communities or individuals and incorporate them into the goods or services. 
- The follow-up and analysis of the framework of technological relationships between human and non-human agents. [1] http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=10755
APA, Harvard, Vancouver, ISO, and other styles
28

McCosker, Anthony, and Rowan Wilken. "Café Space, Communication, Creativity, and Materialism." M/C Journal 15, no. 2 (May 2, 2012). http://dx.doi.org/10.5204/mcj.459.

Full text
Abstract:
IntroductionCoffee, as a stimulant, and the spaces in which it is has been consumed, have long played a vital role in fostering communication, creativity, and sociality. This article explores the interrelationship of café space, communication, creativity, and materialism. In developing these themes, this article is structured in two parts. The first looks back to the coffee houses of the seventeenth and eighteenth centuries to give a historical context to the contemporary role of the café as a key site of creativity through its facilitation of social interaction, communication and information exchange. The second explores the continuation of the link between cafés, communication and creativity, through an instance from the mid-twentieth century where this process becomes individualised and is tied more intrinsically to the material surroundings of the café itself. From this, we argue that in order to understand the connection between café space and creativity, it is valuable to consider the rich polymorphic material and aesthetic composition of cafés. The Social Life of Coffee: London’s Coffee Houses While the social consumption of coffee has a long history, here we restrict our focus to a discussion of the London coffee houses of the seventeenth and eighteenth centuries. It was during the seventeenth century that the vogue of these coffee houses reached its zenith when they operated as a vibrant site of mercantile activity, as well as cultural and political exchange (Cowan; Lillywhite; Ellis). Many of these coffee houses were situated close to the places where politicians, merchants, and other significant people congregated and did business, near government buildings such as Parliament, as well as courts, ports and other travel route hubs (Lillywhite 17). A great deal of information was shared within these spaces and, as a result, the coffee house became a key venue for communication, especially the reading and distribution of print and scribal publications (Cowan 85). At this time, “no coffee house worth its name” would be without a ready selection of newspapers for its patrons (Cowan 173). By working to twenty-four hour diurnal cycles and heightening the sense of repetition and regularity, coffee houses also played a crucial role in routinising news as a form of daily consumption alongside other forms of habitual consumption (including that of coffee drinking). In Cowan’s words, “restoration coffee houses soon became known as places ‘dasht with diurnals and books of news’” (172). Among these was the short-lived but nonetheless infamous social gossip publication, The Tatler (1709-10), which was strongly associated with the London coffee houses and, despite its short publication life, offers great insight into the social life and scandals of the time. The coffee house became, in short, “the primary social space in which ‘news’ was both produced and consumed” (Cowan 172). The proprietors of coffee houses were quick to exploit this situation by dealing in “news mongering” and developing their own news publications to supplement their incomes (172). They sometimes printed news, commentary and gossip that other publishers were not willing to print. However, as their reputation as news providers grew, so did the pressure on coffee houses to meet the high cost of continually acquiring or producing journals (Cowan 173; Ellis 185-206). In addition to the provision of news, coffee houses were vital sites for other forms of communication. 
For example, coffee houses were key venues where “one might deposit and receive one’s mail” (Cowan 175), and the Penny Post used coffeehouses as vital pick-up and delivery centres (Lillywhite 17). As Cowan explains, “Many correspondents [including Jonathan Swift] used a coffeehouse as a convenient place to write their letters as well as to send them” (176). This service was apparently provided gratis for regular patrons, but coffee house owners were less happy to provide this for their more infrequent customers (Cowan 176). London’s coffee houses functioned, in short, as notable sites of sociality that bundled together drinking coffee with news provision and postal and other services to attract customers (Cowan; Ellis). Key to the success of the London coffee house of the seventeenth and eighteenth centuries was the figure of the virtuoso habitué (Cowan 105)—an urbane individual of the middle or upper classes who was skilled in social intercourse, skills that were honed through participation in the highly ritualised and refined forms of interpersonal communication, such as visiting the stately homes of that time. In contrast to such private visits, the coffee house provided a less formalised and more spontaneous space of sociality, but where established social skills were distinctly advantageous. A striking example of the figure of the virtuoso habitué is the philosopher, architect and scientist Robert Hooke (1635-1703). Hooke, by all accounts, used the opportunities provided by his regular visits to coffee houses “to draw on the knowledge of a wide variety of individuals, from servants and skilled laborers to aristocrats, as well as to share and display novel scientific instruments” (Cowan 105) in order to explore and develop his virtuoso interests. The coffee house also served Hooke as a place to debate philosophy with cliques of “like-minded virtuosi” and thus formed the “premier locale” through which he could “fulfil his own view of himself as a virtuoso, as a man of business, [and] as a man at the centre of intellectual life in the city” (Cowan 105-06). For Hooke, the coffee house was a space for serious work, and he was known to complain when “little philosophical work” was accomplished (105-06). Sociality operates in this example as a form of creative performance, demonstrating individual skill, and is tied to other forms of creative output. Patronage of a coffee house involved hearing and passing on gossip as news, but also entailed skill in philosophical debate and other intellectual pursuits. It should also be noted that the complex role of the coffee house as a locus of communication, sociality, and creativity was repeated elsewhere. During the 1600s in Egypt (and elsewhere in the Middle East), for example, coffee houses served as sites of intensive literary activity as well as the locations for discussions of art, sciences and literature, not to mention also of gambling and drug use (Hattox 101). While the popularity of coffee houses had declined in London by the 1800s, café culture was flowering elsewhere in mainland Europe. In the late 1870s in Paris, Edgar Degas and Edward Manet documented the rich café life of the city in their drawings and paintings (Ellis 216). Meanwhile, in Vienna, “the kaffeehaus offered another evocative model of urban and artistic modernity” (Ellis 217; see also Bollerey 44-81). Serving wine and dinners as well as coffee and pastries, the kaffeehaus was, like cafés elsewhere in Europe, a mecca for writers, artists and intellectuals. 
The Café Royal in London survived into the twentieth century, mainly through the patronage of European expatriates and local intellectuals such as Wyndham Lewis, Ezra Pound, T. S. Elliot, and Henri Bergson (Ellis 220). This pattern of patronage within specific and more isolated cafés was repeated in famous gatherings of literary identities elsewhere in Europe throughout the twentieth century. From this historical perspective, a picture emerges of how the social functions of the coffee house and its successors, the espresso bar and modern café, have shifted over the course of their histories (Bollerey 44-81). In the seventeenth and eighteenth centuries, the coffee house was an important location for vibrant social interaction and the consumption and distribution of various forms of communication such as gossip, news, and letters. However, in the years of the late nineteenth and early twentieth centuries, the café was more commonly a site for more restricted social interaction between discrete groups. Studies of cafés and creativity during this era focus on cafés as “factories of literature, inciters to art, and breeding places for new ideas” (Fitch, The Grand 18). Central in these accounts are bohemian artists, their associated social circles, and their preferred cafés de bohème (for detailed discussion, see Wilson; Fitch, Paris Café; Brooker; Grafe and Bollerey 4-41). As much of this literature on café culture details, by the early twentieth century, cafés emerge as places that enable individuals to carve out a space for sociality and creativity which was not possible elsewhere in the modern metropolis. Writing on the modern metropolis, Simmel suggests that the concentration of people and things in cities “stimulate[s] the nervous system of the individual” to such an extent that it prompts a kind of self-preservation that he terms a “blasé attitude” (415). This is a form of “reserve”, he writes, which “grants to the individual a [certain] kind and an amount of personal freedom” that was hitherto unknown (416). Cafés arguably form a key site in feeding this dynamic insofar as they facilitate self-protectionism—Fitch’s “pool of privacy” (The Grand 22)—and, at the same time, produce a sense of individual freedom in Simmel’s sense of the term. That is to say, from the early-to-mid twentieth century, cafés have become complex settings in terms of the relationships they enable or constrain between living in public, privacy, intimacy, and cultural practice. (See Haine for a detailed discussion of how this plays out in relation to working class engagement with Paris cafés, and Wilson as well as White on other cultural contexts, such as Japan.) Threaded throughout this history is a clear celebration of the individual artist as a kind of virtuoso habitué of the contemporary café. Café Jama Michalika The following historical moment, drawn from a powerful point in the mid-twentieth century, illustrates this last stage in the evolution of the relationship between café space, communication, and creativity. This particular historical moment concerns the renowned Polish composer and conductor Krzysztof Penderecki, who is most well-known for his avant-garde piece Threnody to the Victims of Hiroshima (1960), his Polymorphia (1961), and St Luke Passion (1963-66), all of which entailed new compositional and notation techniques. 
Poland, along with other European countries devastated by the Second World War, underwent significant rebuilding after the war, also investing heavily in the arts, musical education, new concert halls, and conservatoria (Monastra). In the immediate post-war period, Poland and Polish culture was under the strong ideological influence exerted by the Soviet Union. However, as Thomas notes, within a year of Stalin’s death in 1953, “there were flickering signs of moderation in Polish culture” (83). With respect to musical creativity, a key turning point was the Warsaw Autumn Music Festival of 1956. “The driving force” behind the first festival (which was to become an annual event), was Polish “composers’ overwhelming sense of cultural isolation and their wish to break the provincial nature of Polish music” at that time (Thomas 85). Penderecki was one of a younger generation of composers who participated in, and benefited from, these early festivals, making his first appearance in 1959 with his composition Strophes, and successive appearances with Dimensions of Time and Silence in 1960, and Threnody in 1961 (Thomas 90). Penderecki married in the 1950s and had a child in 1955. This, in combination with the fact that his wife was a pianist and needed to practice daily, restricted Penderecki’s ability to work in their small Krakow apartment. Nor could he find space at the music school which was free from the intrusion of the sound of other instruments. Instead, he frequented the café Jama Michalika off the central square of Krakow, where he worked most days between nine in the morning and noon, when he would leave as a pianist began to play. Penderecki states that because of the small space of the café table, he had to “invent [a] special kind of notation which allowed me to write the piece which was for 52 instruments, like Threnody, on one small piece of paper” (Krzysztof Penderecki, 2000). In this, Penderecki created a completely new set of notation symbols, which assisted him in graphically representing tone clustering (Robinson 6) while, in his score for Polymorphia, he implemented “novel graphic notation, comparable with medical temperature charts, or oscillograms” (Schwinger 29) to represent in the most compact way possible the dense layering of sounds and vocal elements that is developed in this particular piece. This historical account is valuable because it contributes to discussions on individual creativity that both depends on, and occurs within, the material space of the café. This relationship is explored in Walter Benjamin’s essay “Polyclinic”, where he develops an extended analogy between the writer and the café and the surgeon and his instruments. As Cohen summarises, “Benjamin constructs the field of writerly operation both in medical terms and as a space dear to Parisian intellectuals, as an operating table that is also the marble-topped table of a café” (179). At this time, the space of the café itself thus becomes a vital site for individual cultural production, putting the artist in touch with the social life of the city, as many accounts of writers and artists in the cafés of Paris, Prague, Vienna, and elsewhere in Europe attest. “The attraction of the café for the writer”, Fitch argues, “is that seeming tension between the intimate circle of privacy in a comfortable room, on the one hand, and the flow of (perhaps usable) information all around on the other” (The Grand 11). 
Penderecki talks about searching for a sound while composing in café Jama Michalika and, hearing the noise of a passing tram, subsequently incorporated it into his famous composition, Threnody (Krzysztof Penderecki, 2000). There is an indirect connection here with the attractions of the seventeenth century coffee houses in London, where news writers drew much of their gossip and news from the talk within the coffee houses. However, the shift is to a more isolated, individualistic habitué. Nonetheless, the aesthetic composition of the café space remains essential to the creative productivity described by Penderecki. A concept that can be used to describe this method of composition is contained within one of Penderecki’s best-known pieces, Polymorphia (1961). The term “polymorphia” refers not to the form of the music itself (which is actually quite conventionally structured) but rather to the multiple blending of sounds. Schwinger defines polymorphia as “many formedness […] which applies not […] to the form of the piece, but to the broadly deployed scale of sound, [the] exchange and simultaneous penetration of sound and noise, the contrast and interflow of soft and hard sounds” (131). This description also reflects the rich material context of the café space as Penderecki describes its role in shaping (both enabling and constraining) his creative output. Creativity, Technology, Materialism The materiality of the café—including the table itself for Penderecki—is crucial in understanding the relationship between the forms of creative output and the material conditions of the spaces that enable them. In Penderecki’s case, to understand the origins of the score and even his innovative forms of musical notation as artefacts of communication, we need to understand the material conditions under which they were created. As a fixture of twentieth and twenty-first century urban environments, the café mediates the private within the public in a way that offers the contemporary virtuoso habitué a rich, polymorphic sensory experience. In a discussion of the indivisibility of sensation and its resistance to language, writer Anna Gibbs describes these rich experiential qualities: sitting by the window in a café watching the busy streetscape with the warmth of the morning sun on my back, I smell the delicious aroma of coffee and simultaneously feel its warmth in my mouth, taste it, and can tell the choice of bean as I listen idly to the chatter in the café around me and all these things blend into my experience of “being in the café” (201). Gibbs’s point is that the world of the café is highly synaesthetic and infused with sensual interconnections. The din of the café with its white noise of conversation and overlaying sounds of often carefully chosen music illustrates the extension of taste beyond the flavour of the coffee on the palate. In this way, the café space provides the infrastructure for a type of creative output that, in Gibbs’s case, facilitates her explanation of expression and affect. The individualised virtuoso habitué, as characterised by Penderecki’s work within café Jama Michalika, simply describes one (celebrated) form of the material conditions of communication and creativity. An essential factor in creative cultural output is contained in the ways in which material conditions such as these come to be organised. 
As Elizabeth Grosz expresses it: Art is the regulation and organisation of its materials—paint, canvas, concrete, steel, marble, words, sounds, bodily movements, indeed any materials—according to self-imposed constraints, the creation of forms through which these materials come to generate and intensify sensation and thus directly impact living bodies, organs, nervous systems (4). Materialist and medium-oriented theories of media and communication have emphasised the impact of physical constraints and enablers on the forms produced. McLuhan, for example, famously argued that the typewriter brought writing, speech, and publication into closer association, one effect of which was the tighter regulation of spelling and grammar, a pressure toward precision and uniformity that saw a jump in the sales of dictionaries (279). In the poetry of E. E. Cummings, McLuhan sees the typewriter as enabling a patterned layout of text that functions as “a musical score for choral speech” (278). In the same way, the café in Penderecki’s recollections both constrains his ability to compose freely (a creative activity that normally requires ample flat surface), but also facilitates the invention of a new language for composition, one able to accommodate the small space of the café table. Recent studies that have sought to materialise language and communication point to its physicality and the embodied forms through which communication occurs. As Packer and Crofts Wiley explain, “infrastructure, space, technology, and the body become the focus, a move that situates communication and culture within a physical, corporeal landscape” (3). The confined and often crowded space of the café and its individual tables shape the form of productive output in Penderecki’s case. Targeting these material constraints and enablers in her discussion of art, creativity and territoriality, Grosz describes the “architectural force of framing” as liberating “the qualities of objects or events that come to constitute the substance, the matter, of the art-work” (11). More broadly, the design features of the café, the form and layout of the tables and the space made available for individual habitation, the din of the social encounters, and even the stimulating influences on the body of the coffee served there, can be seen to act as enablers of communication and creativity. Conclusion The historical examples examined above indicate a material link between cafés and communication. They also suggest a relationship between materialism and creativity, as well as the roots of the romantic association—or mythos—of cafés as a key source of cultural life as they offer a “shared place of composition” and an “environment for creative work” (Fitch, The Grand 11). We have detailed one example pertaining to European coffee consumption, cafés and creativity. While we believe Penderecki’s case is valuable in terms of what it can tell us about forms of communication and creativity, clearly other cultural and historical contexts may reveal additional insights—as may be found in the cases of Middle Eastern cafés (Hattox) or the North American diner (Hurley), and in contemporary developments such as the café as a source of free WiFi and the commodification associated with global coffee chains. Penderecki’s example, we suggest, also sheds light on a longer history of creativity and cultural production that intersects with contemporary work practices in city spaces as well as conceptualisations of the individual’s place within complex urban spaces. 
References Benjamin, Walter. “Polyclinic” in “One-Way Street.” One-Way Street and Other Writings. Trans. Edmund Jephcott and Kingsley Shorter. London: Verso, 1998: 88-9. Bollerey, Franziska. “Setting the Stage for Modernity: The Cosmos of the Coffee House.” Cafés and Bars: The Architecture of Public Display. Eds. Christoph Grafe and Franziska Bollerey. New York: Routledge, 2007. 44-81. Brooker, Peter. Bohemia in London: The Social Scene of Early Modernism. Houndmills, Hamps.: Palgrave Macmillan, 2007. Cohen, Margaret. Profane Illumination: Walter Benjamin and the Paris of Surrealist Revolution. Berkeley: U of California P, 1995. Cowan, Brian. The Social Life of Coffee: The Emergence of the British Coffeehouse. New Haven: Yale UP, 2005. Ellis, Markman. The Coffee House: A Cultural History. London: Weidenfeld & Nicholson, 2004. Fitch, Noël Riley. Paris Café: The Sélect Crowd. Brooklyn: Soft Skull Press, 2007. -----. The Grand Literary Cafés of Europe. London: New Holland Publishers (UK), 2006. Gibbs, Anna. “After Affect: Sympathy, Synchrony, and Mimetic Communication.” The Affect Theory Reader. Eds. Melissa Gregg and Gregory J. Siegworth. Durham: Duke University Press, 2010. 186-205. Grafe, Christoph, and Franziska Bollerey. “Introduction: Cafés and Bars—Places for Sociability.” Cafés and Bars: The Architecture of Public Display. Eds. Christoph Grafe and Franziska Bollerey. New York: Routledge, 2007. 4-41. Grosz, Elizabeth. Chaos, Territory, Art: Deleuze and the Framing of the Earth. New York: Columbia UP, 2008. Haine, W. Scott. The World of the Paris Café. Baltimore: Johns Hopkins UP, 1996. Hattox, Ralph S. Coffee and Coffeehouses: The Origins of a Social Beverage in the Medieval Near East. Seattle: U of Washington P, 1985. Hurley, Andrew. Diners, Bowling Alleys and Trailer Parks: Chasing the American Dream in the Postwar Consumer Culture. New York: Basic Books, 2001. Krzysztof Penderecki. Dir. Andreas Missler-Morell. Spektrum TV production and Telewizja Polska S.A. Oddzial W Krakowie for RM Associates and ZDF in cooperation with ARTE, 2000. Lillywhite, Bryant. London Coffee Houses: A Reference Book of Coffee Houses of the Seventeenth, Eighteenth, and Nineteenth Centuries. London: George Allen & Unwin, 1963. McLuhan, Marshall. Understanding Media: The Extensions of Man. London: Abacus, 1974. Monastra, Peggy. “Krzysztof Penderecki’s Polymorphia and Fluorescence.” Moldenhauer Archives, [US] Library of Congress. 12 Jan. 2012 ‹http://memory.loc.gov/ammem/collections/moldenhauer/2428143.pdf› Packer, Jeremy, and Stephen B. Crofts Wiley. “Introduction: The Materiality of Communication.” Communication Matters: Materialist Approaches to Media, Mobility and Networks. New York, Routledge, 2012. 3-16. Robinson, R. Krzysztof Penderecki: A Guide to His Works. Princeton, NJ: Prestige Publications, 1983. Schwinger, Wolfram. Krzysztof Penderecki: His Life and Work. Encounters, Biography and Musical Commentary. London: Schott, 1979. Simmel, Georg. The Sociology of Georg Simmel. Ed. and trans. Kurt H. Wolff. Glencoe, IL: The Free P, 1960. Thomas, Adrian. Polish Music since Szymanowski. Cambridge: Cambridge UP, 2005. White, Merry I. Coffee Life in Japan. Berkeley: U of California P, 2012. Wilson, Elizabeth. “The Bohemianization of Mass Culture.” International Journal of Cultural Studies 2.1 (1999): 11-32.
APA, Harvard, Vancouver, ISO, and other styles
29

Burgess, Jean, and Axel Bruns. "Twitter Archives and the Challenges of "Big Social Data" for Media and Communication Research." M/C Journal 15, no. 5 (October 11, 2012). http://dx.doi.org/10.5204/mcj.561.

Full text
Abstract:
Lists and Social MediaLists have long been an ordering mechanism for computer-mediated social interaction. While far from being the first such mechanism, blogrolls offered an opportunity for bloggers to provide a list of their peers; the present generation of social media environments similarly provide lists of friends and followers. Where blogrolls and other earlier lists may have been user-generated, the social media lists of today are more likely to have been produced by the platforms themselves, and are of intrinsic value to the platform providers at least as much as to the users themselves; both Facebook and Twitter have highlighted the importance of their respective “social graphs” (their databases of user connections) as fundamental elements of their fledgling business models. This represents what Mejias describes as “nodocentrism,” which “renders all human interaction in terms of network dynamics (not just any network, but a digital network with a profit-driven infrastructure).”The communicative content of social media spaces is also frequently rendered in the form of lists. Famously, blogs are defined in the first place by their reverse-chronological listing of posts (Walker Rettberg), but the same is true for current social media platforms: Twitter, Facebook, and other social media platforms are inherently centred around an infinite, constantly updated and extended list of posts made by individual users and their connections.The concept of the list implies a certain degree of order, and the orderliness of content lists as provided through the latest generation of centralised social media platforms has also led to the development of more comprehensive and powerful, commercial as well as scholarly, research approaches to the study of social media. Using the example of Twitter, this article discusses the challenges of such “big data” research as it draws on the content lists provided by proprietary social media platforms.Twitter Archives for ResearchTwitter is a particularly useful source of social media data: using the Twitter API (the Application Programming Interface, which provides structured access to communication data in standardised formats) it is possible, with a little effort and sufficient technical resources, for researchers to gather very large archives of public tweets concerned with a particular topic, theme or event. Essentially, the API delivers very long lists of hundreds, thousands, or millions of tweets, and metadata about those tweets; such data can then be sliced, diced and visualised in a wide range of ways, in order to understand the dynamics of social media communication. Such research is frequently oriented around pre-existing research questions, but is typically conducted at unprecedented scale. The projects of media and communication researchers such as Papacharissi and de Fatima Oliveira, Wood and Baughman, or Lotan, et al.—to name just a handful of recent examples—rely fundamentally on Twitter datasets which now routinely comprise millions of tweets and associated metadata, collected according to a wide range of criteria. What is common to all such cases, however, is the need to make new methodological choices in the processing and analysis of such large datasets on mediated social interaction.Our own work is broadly concerned with understanding the role of social media in the contemporary media ecology, with a focus on the formation and dynamics of interest- and issues-based publics. 
We have mined and analysed large archives of Twitter data to understand contemporary crisis communication (Bruns et al), the role of social media in elections (Burgess and Bruns), and the nature of contemporary audience engagement with television entertainment and news media (Harrington, Highfield, and Bruns). Using a custom installation of the open source Twitter archiving tool yourTwapperkeeper, we capture and archive all the available tweets (and their associated metadata) containing a specified keyword (like “Olympics” or “dubstep”), name (Gillard, Bieber, Obama) or hashtag (#ausvotes, #royalwedding, #qldfloods). In their simplest form, such Twitter archives are commonly stored as delimited (e.g. comma- or tab-separated) text files, with each of the following values in a separate column:
- text: contents of the tweet itself, in 140 characters or less
- to_user_id: numerical ID of the tweet recipient (for @replies)
- from_user: screen name of the tweet sender
- id: numerical ID of the tweet itself
- from_user_id: numerical ID of the tweet sender
- iso_language_code: code (e.g. en, de, fr, ...) of the sender’s default language
- source: client software used to tweet (e.g. Web, Tweetdeck, ...)
- profile_image_url: URL of the tweet sender’s profile picture
- geo_type: format of the sender’s geographical coordinates
- geo_coordinates_0: first element of the geographical coordinates
- geo_coordinates_1: second element of the geographical coordinates
- created_at: tweet timestamp in human-readable format
- time: tweet timestamp as a numerical Unix timestamp
In order to process the data, we typically run a number of our own scripts (written in the programming language Gawk) which manipulate or filter the records in various ways, and apply a series of temporal, qualitative and categorical metrics to the data, enabling us to discern patterns of activity over time, as well as to identify topics and themes, key actors, and the relations among them; in some circumstances we may also undertake further processes of filtering and close textual analysis of the content of the tweets. Network analysis (of the relationships among actors in a discussion; or among key themes) is undertaken using the open source application Gephi. While a detailed methodological discussion is beyond the scope of this article, further details and examples of our methods and tools for data analysis and visualisation, including copies of our Gawk scripts, are available on our comprehensive project website, Mapping Online Publics. In this article, we reflect on the technical, epistemological and political challenges of such uses of large-scale Twitter archives within media and communication studies research, positioning this work in the context of the phenomenon that Lev Manovich has called “big social data.” In doing so, we recognise that our empirical work on Twitter is concerned with a complex research site that is itself shaped by a complex range of human and non-human actors, within a dynamic, indeed volatile media ecology (Fuller), and using data collection and analysis methods that are in themselves deeply embedded in this ecology.
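The passage above lists the per-tweet columns of the delimited archive files and notes that the authors post-process them with Gawk scripts to derive temporal metrics. As a rough illustration only (the authors use Gawk, and the file name and data here are hypothetical), the Python sketch below reads such a tab-separated archive and counts tweets per hour from the Unix timestamp in the "time" column.

```python
# Illustrative reader for a yourTwapperkeeper-style archive: a tab-separated
# file whose header row contains the column names listed above, including a
# numerical Unix timestamp in the "time" column. The file name is hypothetical.

import csv
from collections import Counter
from datetime import datetime, timezone

def tweets_per_hour(path):
    """Count archived tweets per clock hour (UTC) using the 'time' column."""
    counts = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f, delimiter="\t"):
            ts = datetime.fromtimestamp(int(row["time"]), tz=timezone.utc)
            counts[ts.strftime("%Y-%m-%d %H:00")] += 1
    return counts

if __name__ == "__main__":
    for hour, n in sorted(tweets_per_hour("qldfloods_archive.tsv").items()):
        print(hour, n)
```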
“Big Social Data”As Manovich’s term implies, the Big Data paradigm has recently arrived in media, communication and cultural studies—significantly later than it did in the hard sciences, in more traditionally computational branches of social science, and perhaps even in the first wave of digital humanities research (which largely applied computational methods to pre-existing, historical “big data” corpora)—and this shift has been provoked in large part by the dramatic quantitative growth and apparently increased cultural importance of social media—hence, “big social data.” As Manovich puts it: For the first time, we can follow [the] imaginations, opinions, ideas, and feelings of hundreds of millions of people. We can see the images and the videos they create and comment on, monitor the conversations they are engaged in, read their blog posts and tweets, navigate their maps, listen to their track lists, and follow their trajectories in physical space. (Manovich 461) This moment has arrived in media, communication and cultural studies because of the increased scale of social media participation and the textual traces that this participation leaves behind—allowing researchers, equipped with digital tools and methods, to “study social and cultural processes and dynamics in new ways” (Manovich 461). However, and crucially for our purposes in this article, many of these scholarly possibilities would remain latent if it were not for the widespread availability of Open APIs for social software (including social media) platforms. APIs are technical specifications of how one software application should access another, thereby allowing the embedding or cross-publishing of social content across Websites (so that your tweets can appear in your Facebook timeline, for example), or allowing third-party developers to build additional applications on social media platforms (like the Twitter user ranking service Klout), while also allowing platform owners to impose de facto regulation on such third-party uses via the same code. While platform providers do not necessarily have scholarship in mind, the data access affordances of APIs are also available for research purposes. As Manovich notes, until very recently almost all truly “big data” approaches to social media research had been undertaken by computer scientists (464). But as part of a broader “computational turn” in the digital humanities (Berry), and because of the increased availability to non-specialists of data access and analysis tools, media, communication and cultural studies scholars are beginning to catch up. Many of the new, large-scale research projects examining the societal uses and impacts of social media—including our own—which have been initiated by various media, communication, and cultural studies research leaders around the world have begun their work by taking stock of, and often substantially extending through new development, the range of available tools and methods for data analysis. The research infrastructure developed by such projects, therefore, now reflects their own disciplinary backgrounds at least as much as it does the fundamental principles of computer science. In turn, such new and often experimental tools and methods necessarily also provoke new epistemological and methodological challenges. 
The Twitter API and Twitter ArchivesThe Open API was a key aspect of mid-2000s ideas about the value of the open Web and “Web 2.0” business models (O’Reilly), emphasising the open, cross-platform sharing of content as well as promoting innovation at the margins via third-party application development—and it was in this ideological environment that the microblogging service Twitter launched and experienced rapid growth in popularity among users and developers alike. As José van Dijck cogently argues, however, a complex interplay of technical, economic and social dynamics has seen Twitter shift from a relatively open, ad hoc and user-centred platform toward a more formalised media business: For Twitter, the shift from being primarily a conversational communication tool to being a global, ad-supported followers tool took place in a relatively short time span. This shift did not simply result from the owner’s choice for a distinct business model or from the company’s decision to change hardware features. Instead, the proliferation of Twitter as a tool has been a complex process in which technological adjustments are intricately intertwined with changes in user base, transformations of content and choices for revenue models. (van Dijck 343)The specifications of Twitter’s API, as well as the written guidelines for its use by developers (Twitter, “Developer Rules”) are an excellent example of these “technological adjustments” and the ways they are deeply interwined with Twitter’s search for a viable revenue model. These changes show how the apparent semantic openness or “interpretive flexibility” of the term “platform” allows its meaning to be reshaped over time as the business models of platform owners change (Gillespie).The release of the API was first announced on the Twitter blog in September 2006 (Stone), not long after the service’s launch but after some popular third-party applications (like a mashup of Twitter with Google Maps creating a dynamic display of recently posted tweets around the world) had already been developed. Since then Twitter has seen a flourishing of what the company itself referred to as the “Twitter ecosystem” (Twitter, “Developer Rules”), including third-party developed client software (like Twitterific and TweetDeck), institutional use cases (such as large-scale social media visualisations of the London Riots in The Guardian), and parasitic business models (including social media metrics services like HootSuite and Klout).While the history of Twitter’s API rules and related regulatory instruments (such as its Developer Rules of the Road and Terms of Use) has many twists and turns, there have been two particularly important recent controversies around data access and control. First, the company locked out developers and researchers from direct “firehose” (very high volume) access to the Twitter feed; this was accompanied by a crackdown on free and public Twitter archiving services like 140Kit and the Web version of Twapperkeeper (Sample), and coincided with the establishment of what was at the time a monopoly content licensing arrangement between Twitter and Gnip, a company which charges commercial rates for high-volume API access to tweets (and content from other social media platforms). A second wave of controversy among the developer community occurred in August 2012 in response to Twitter’s release of its latest API rules (Sippey), which introduce further, significant limits to API use and usability in certain circumstances. 
In essence, the result of these changes to the Twitter API rules, announced without meaningful consultation with the developer community which created the Twitter ecosystem, is a forced rebalancing of development activities: on the one hand, Twitter is explicitly seeking to “limit” (Sippey) the further development of API-based third-party tools which support “consumer engagement activities” (such as end-user clients), in order to boost the use of its own end-user interfaces; on the other hand, it aims to “encourage” the further development of “consumer analytics” and “business analytics” as well as “business engagement” tools. Implicit in these changes is a repositioning of Twitter users (increasingly as content consumers rather than active communicators), but also of commercial and academic researchers investigating the uses of Twitter (as providing a narrow range of existing Twitter “analytics” rather than engaging in a more comprehensive investigation both of how Twitter is used, and of how such uses continue to evolve). The changes represent an attempt by the company to cement a certain, commercially viable and valuable, vision of how Twitter should be used (and analysed), and to prevent or at least delay further evolution beyond this desired stage. Although such attempts to “freeze” development may well be in vain, given the considerable, documented role which the Twitter user base has historically played in exploring new and unforeseen uses of Twitter (Bruns), it undermines scholarly research efforts to examine actual Twitter uses at least temporarily—meaning that researchers are increasingly forced to invest time and resources in finding workarounds for the new restrictions imposed by the Twitter API.Technical, Political, and Epistemological IssuesIn their recent article “Critical Questions for Big Data,” danah boyd and Kate Crawford have drawn our attention to the limitations, politics and ethics of big data approaches in the social sciences more broadly, but also touching on social media as a particularly prevalent site of social datamining. In response, we offer the following complementary points specifically related to data-driven Twitter research relying on archives of tweets gathered using the Twitter API.First, somewhat differently from most digital humanities (where researchers often begin with a large pre-existing textual corpus), in the case of Twitter research we have no access to an original set of texts—we can access only what Twitter’s proprietary and frequently changing API will provide. The tools Twitter researchers use rely on various combinations of parts of the Twitter API—or, more accurately, the various Twitter APIs (particularly the Search and Streaming APIs). As discussed above, of course, in providing an API, Twitter is driven not by scholarly concerns but by an attempt to serve a range of potentially value-generating end-users—particularly those with whom Twitter can create business-to-business relationships, as in their recent exclusive partnership with NBC in covering the 2012 London Olympics.The following section from Twitter’s own developer FAQ highlights the potential conflicts between the business-case usage scenarios under which the APIs are provided and the actual uses to which they are often put by academic researchers or other dataminers:Twitter’s search is optimized to serve relevant tweets to end-users in response to direct, non-recurring queries such as #hashtags, URLs, domains, and keywords. 
The Search API (which also powers Twitter’s search widget) is an interface to this search engine. Our search service is not meant to be an exhaustive archive of public tweets and not all tweets are indexed or returned. Some results are refined to better combat spam and increase relevance. Due to capacity constraints, the index currently only covers about a week’s worth of tweets. (Twitter, “Frequently Asked Questions”) Because external researchers do not have access to the full, “raw” data, against which we could compare the retrieved archives which we use in our later analyses, and because our data access regimes rely so heavily on Twitter’s APIs—each with its technical quirks and limitations—it is impossible for us to say with any certainty that we are capturing a complete archive or even a “representative” sample (whatever “representative” might mean in a data-driven, textualist paradigm). In other words, the “lists” of tweets delivered to us on the basis of a keyword search are not necessarily complete; and there is no way of knowing how incomplete they are. The total yield of even the most robust capture system (using the Streaming API and not relying only on Search) depends on a number of variables: rate limiting, the filtering and spam-limiting functions of Twitter’s search algorithm, server outages and so on; further, because Twitter prohibits the sharing of data sets it is difficult to compare notes with other research teams. In terms of epistemology, too, the primary reliance on large datasets produces a new mode of scholarship in media, communication and cultural studies: what emerges is a form of data-driven research which tends towards abductive reasoning; in doing so, it highlights tensions between the traditional research questions in discourse or text-based disciplines like media and communication studies, and the assumptions and modes of pattern recognition that are required when working from the “inside out” of a corpus, rather than from the outside in (for an extended discussion of these epistemological issues in the digital humanities more generally, see Dixon). Finally, even the heuristics of our analyses of Twitter datasets are mediated by the API: the datapoints that are hardwired into the data naturally become the most salient, further shaping the type of analysis that can be done. For example, a common process in our research is to use the syntax of tweets to categorise them as one of the following types of activity:
- original tweets: tweets which are neither @reply nor retweet
- retweets: tweets which contain RT @user… (or similar)
- unedited retweets: retweets which start with RT @user…
- edited retweets: retweets which do not start with RT @user…
- genuine @replies: tweets which contain @user, but are not retweets
- URL sharing: tweets which contain URLs
(Retweets which are made using the Twitter “retweet button,” resulting in verbatim passing-along without the RT @user syntax or an opportunity to add further comment during the retweet process, form yet another category, which cannot be tracked particularly effectively using the Twitter API.) These categories are driven by the textual and technical markers of specific kinds of interactions that are built into the syntax of Twitter itself (@replies or @mentions, RTs); and specific modes of referentiality (URLs).
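The category definitions restored in the list above are purely syntactic, so they translate directly into code. The sketch below is a minimal Python rendering of those rules (not the authors' Gawk implementation); the sample tweet is invented.

```python
# Syntactic tweet categories as defined above: retweets contain "RT @user",
# unedited retweets *start* with it, genuine @replies mention a user without
# being retweets, and URL sharing is detected from "http" in the text.

import re

RT_PATTERN = re.compile(r"\bRT @\w+", re.IGNORECASE)

def categorise(text):
    """Return the set of activity categories a tweet's text falls into."""
    categories = set()
    is_retweet = bool(RT_PATTERN.search(text))
    if is_retweet:
        categories.add("retweet")
        categories.add("unedited retweet" if RT_PATTERN.match(text)
                       else "edited retweet")
    elif "@" in text:
        categories.add("genuine @reply")
    else:
        categories.add("original tweet")
    if "http" in text:
        categories.add("URL sharing")
    return categories

if __name__ == "__main__":
    print(categorise("Great summary! RT @user1 floods update http://example.com"))
    # -> {'retweet', 'edited retweet', 'URL sharing'}
```

As the abstract notes, button-based retweets carry no RT @user marker in the tweet text itself, so a purely syntactic classifier like this one cannot detect them.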
All of them focus on (and thereby tend to privilege) more informational modes of communication, rather than the ephemeral, affective, or ambiently intimate uses of Twitter that can be illuminated more easily using ethnographic approaches: approaches that can actually focus on the individual user, their social contexts, and the broader cultural context of the traces they leave on Twitter.
Conclusions
In this article we have described and reflected on some of the sociotechnical, political and economic aspects of the lists of tweets—the structured Twitter data upon which our research relies—which may be gathered using the Twitter API. As we have argued elsewhere (Bruns and Burgess)—and, hopefully, have begun to demonstrate in this paper—media and communication studies scholars who are actually engaged in using computational methods are well-positioned to contribute to both the methodological advances we highlight at the beginning of this paper and the political debates around computational methods in the “big social data” moment on which the discussion in the second part of the paper focusses. One pressing issue in the area of methodology is to build on current advances to bring together large-scale datamining approaches with ethnographic and other qualitative approaches, especially including close textual analysis. More broadly, in engaging with the “big social data” moment there is a pressing need for the development of code literacy in media, communication and cultural studies. In the first place, such literacy has important instrumental uses: as Manovich argues, much big data research in the humanities requires costly and time-consuming (and sometimes alienating) partnerships with technical experts (typically, computer scientists), because the free tools available to non-programmers are still limited in utility in comparison to what can be achieved using raw data and original code (Manovich, 472). But code literacy is also a requirement of scholarly rigour in the context of what David Berry calls the “computational turn,” representing a “third wave” of Digital Humanities. Berry suggests code and software might increasingly become in themselves objects of, and not only tools for, research: I suggest that we introduce a humanistic approach to the subject of computer code, paying attention to the wider aspects of code and software, and connecting them to the materiality of this growing digital world. With this in mind, the question of code becomes increasingly important for understanding in the digital humanities, and serves as a condition of possibility for the many new computational forms that mediate our experience of contemporary culture and society. (Berry 17) A first step here lies in developing a more robust working knowledge of the conceptual models and methodological priorities assumed by the workings of both the tools and the sources we use for “big social data” research. Understanding how something like the Twitter API mediates the cultures of use of the platform, as well as reflexively engaging with its mediating role in data-driven Twitter research, promotes a much more materialist critical understanding of the politics of the social media platforms (Gillespie) that are now such powerful actors in the media ecology.
References
Berry, David M. “Introduction: Understanding Digital Humanities.” Understanding Digital Humanities. Ed. David M. Berry. London: Palgrave Macmillan, 2012. 1-20.
boyd, danah, and Kate Crawford.
“Critical Questions for Big Data.” Information, Communication & Society 15.5 (2012): 662-79.
Bruns, Axel. “Ad Hoc Innovation by Users of Social Networks: The Case of Twitter.” ZSI Discussion Paper 16 (2012). 18 Sep. 2012 ‹https://www.zsi.at/object/publication/2186›.
Bruns, Axel, and Jean Burgess. “Notes towards the Scientific Study of Public Communication on Twitter.” Keynote presented at the Conference on Science and the Internet, Düsseldorf, 4 Aug. 2012. 18 Sep. 2012 ‹http://snurb.info/files/2012/Notes%20towards%20the%20Scientific%20Study%20of%20Public%20Communication%20on%20Twitter.pdf›.
Bruns, Axel, Jean Burgess, Kate Crawford, and Frances Shaw. “#qldfloods and @QPSMedia: Crisis Communication on Twitter in the 2011 South East Queensland Floods.” Brisbane: ARC Centre of Excellence for Creative Industries and Innovation, 2012. 18 Sep. 2012 ‹http://cci.edu.au/floodsreport.pdf›.
Burgess, Jean E., and Axel Bruns. “(Not) the Twitter Election: The Dynamics of the #ausvotes Conversation in Relation to the Australian Media Ecology.” Journalism Practice 6.3 (2012): 384-402.
Dixon, Dan. “Analysis Tool or Research Methodology: Is There an Epistemology for Patterns?” Understanding Digital Humanities. Ed. David M. Berry. London: Palgrave Macmillan, 2012. 191-209.
Fuller, Matthew. Media Ecologies: Materialist Energies in Art and Technoculture. Cambridge, Mass.: MIT P, 2005.
Gillespie, Tarleton. “The Politics of ‘Platforms’.” New Media & Society 12.3 (2010): 347-64.
Harrington, Stephen, Timothy J. Highfield, and Axel Bruns. “More than a Backchannel: Twitter and Television.” Audience Interactivity and Participation. Ed. José Manuel Noguera. Brussels: COST Action ISO906 Transforming Audiences, Transforming Societies, 2012. 13-17. 18 Sep. 2012 ‹http://www.cost-transforming-audiences.eu/system/files/essays-and-interview-essays-18-06-12.pdf›.
Lotan, Gilad, Erhardt Graeff, Mike Ananny, Devin Gaffney, Ian Pearce, and danah boyd. “The Arab Spring: The Revolutions Were Tweeted: Information Flows during the 2011 Tunisian and Egyptian Revolutions.” International Journal of Communication 5 (2011): 1375-1405. 18 Sep. 2012 ‹http://ijoc.org/ojs/index.php/ijoc/article/view/1246/613›.
Manovich, Lev. “Trending: The Promises and the Challenges of Big Social Data.” Debates in the Digital Humanities. Ed. Matthew K. Gold. Minneapolis: U of Minnesota P, 2012. 460-75.
Mejias, Ulises A. “Liberation Technology and the Arab Spring: From Utopia to Atopia and Beyond.” Fibreculture Journal 20 (2012). 18 Sep. 2012 ‹http://twenty.fibreculturejournal.org/2012/06/20/fcj-147-liberation-technology-and-the-arab-spring-from-utopia-to-atopia-and-beyond/›.
O’Reilly, Tim. “What is Web 2.0? Design Patterns and Business Models for the Next Generation of Software.” O’Reilly Network 30 Sep. 2005. 18 Sep. 2012 ‹http://www.oreillynet.com/pub/a/oreilly/tim/news/2005/09/30/what-is-web-20.html›.
Papacharissi, Zizi, and Maria de Fatima Oliveira. “Affective News and Networked Publics: The Rhythms of News Storytelling on #Egypt.” Journal of Communication 62.2 (2012): 266-82.
Sample, Mark. “The End of Twapperkeeper (and What to Do about It).” ProfHacker. The Chronicle of Higher Education 8 Mar. 2011. 18 Sep. 2012 ‹http://chronicle.com/blogs/profhacker/the-end-of-twapperkeeper-and-what-to-do-about-it/31582›.
Sippey, Michael. “Changes Coming in Version 1.1 of the Twitter API.” Twitter Developers Blog 16 Aug. 2012. 18 Sep. 2012 ‹https://dev.Twitter.com/blog/changes-coming-to-Twitter-api›.
Stone, Biz. “Introducing the Twitter API.” Twitter Blog 20 Sep. 2006. 18 Sep. 2012 ‹http://blog.Twitter.com/2006/09/introducing-Twitter-api.html›.
Twitter. “Developer Rules of the Road.” Twitter Developers Website 17 May 2012. 18 Sep. 2012 ‹https://dev.Twitter.com/terms/api-terms›.
Twitter. “Frequently Asked Questions.” 18 Sep. 2012 ‹https://dev.twitter.com/docs/faq›.
Van Dijck, José. “Tracing Twitter: The Rise of a Microblogging Platform.” International Journal of Media and Cultural Politics 7.3 (2011): 333-48.
Walker Rettberg, Jill. Blogging. Cambridge: Polity, 2008.
Wood, Megan M., and Linda Baughman. “Glee Fandom and Twitter: Something New, or More of the Same Old Thing?” Communication Studies 63.3 (2012): 328-44.
APA, Harvard, Vancouver, ISO, and other styles
30

Bruns, Axel. "Archiving the Ephemeral." M/C Journal 1, no. 2 (August 1, 1998). http://dx.doi.org/10.5204/mcj.1708.

Full text
Abstract:
They may have been obscured by the popular media's fascination with the World Wide Web, but for many Net users, Usenet newsgroups still constitute an equally important interactive tool. While Web pages present relatively static information that can be structured through hypertext links and searched using Yahoo! and similar services, newsgroups are fora for open, interactive discussion on virtually any conceivable subject, amongst participants from around the world -- more than any other part of the Net, they are instrumental in the formation of virtual communities by allowing like-minded individuals to get together, exchange their opinions, and organise themselves. Having emerged from email mailing-lists, newsgroups are among the oldest uses of computer networks for communication; based around a simple exchange of all new postings among the participants, they offer few of the comforts of more modern technologies, and so it is not surprising that a number of services have begun to provide powerful Web interfaces for newsgroup access. One of the oldest and most renowned amongst these sites is Deja News. Since its launch in May 1995, Deja News has expanded to cover around 50,000 different newsgroups, which are contained in a memory archive that is several hundred gigabytes in size, according to some reports (Woods, n. pag.); the company itself boasts that more than three million users each month access the hundreds of millions of articles it has archived ("Company Background", n. pag.). What makes Deja News attractive to so many users are the many search options the service has added to its newsgroup interface. Deja News visitors can search for any article subjects, keywords, newsgroups, or participants' names they are interested in: if you're looking for the absolute latest on the White House sex scandal, search for "Clinton + Lewinski" and limit your search to articles posted in the last few days or hours; if you want to know what newsgroups your co-worker is writing to in her lunch break, simply ask Deja News to find all postings by "Annabel Smith". If you can't quite remember the address for the latest Internet camera site, Deja News will. Putting such powerful research capabilities at the fingertips of any Web user raises any number of legal as well as ethical questions, however. To begin with, does Deja News even have the right to archive anyone's and everyone's postings to newsgroups? The simple fact that users' articles are posted openly, for all newsgroup participants to see, does not automatically imply that the article may be made available to a greater public elsewhere -- by analogy, if you have a casual conversation with a group of people, you aren't usually expecting to see your words in the history books the next day (but analogies between the Internet and 'real life' are always dangerous). Unless you spoke to someone with a photographic memory, you can usually rely on them gradually forgetting what you said, even if you made a fool out of yourself -- that's human. Appearing as the result of a Deja News search, articles lose their original context -- the newsgroup discussion they were part of -- and are given an entirely different one, a context in which by virtue of being so presented they may gain new and potentially questionable authority.
As is so often the case for information on Websites, the information in articles which can thus be found through services like Deja News cannot easily be verified or disproven; due to the loss of context, researchers cannot even gain a feel for a writer's trustworthiness in the same way that seasoned newsgroup members can. Neither may they always detect intentional irony and humour: playful exaggeration may easily appear as deliberate misinformation, friendly oneupmanship as an angry attack. The results of author-based searches may be even more potentially damaging, however. Deja News offers various mechanisms that can include searches restricted to the articles written by a particular user, culminating in the 'author profile' that can be used to list all posts ever made by a particular user. One does not need to be paranoid to imagine ways in which such a powerful research tool may be abused -- perhaps even (and most easily) by the Deja News company itself: since, following general Web etiquette, access to the service is free for normal users, Deja News relies on other avenues of income, and doubtlessly it would be tempting to sell the rights to exploit the Deja News database to professional spammers (Internet junk mailers). These could then aim their advertising emails directly towards the most promising target audience -- those who have in their newsgroups postings shown the most interest in a particular product or service. Indeed, Deja News notes, somewhat vaguely, that it "can provide efficient and effective Internet-based marketing for various types of online marketing goals, such as testing messages, building brand awareness, increasing Website traffic or generating leads" ("Company Background", n. pag.). While such uninvited advertising may be annoying to unsuspecting Internet users, more damaging and mean-spirited uses of the 'author profile' can also be imagined easily. What if your prospective new employer finds the comments you made about the company in a newsgroup article last year? What if they check your author profile for your attitude to drugs (did you ever write an article in rec.drugs.cannabis) or your sexual orientation (any posts to alt.sex.homosexual)? What if some extremist group targets you over your support for multiculturalism? Thanks to Deja News and similar services, anything you've ever said may be used against you. The virtual walls of cyberspace have ears, come with a perfect memory, and are prepared to share their knowledge with anyone. A valid line of argument against such criticism notes that we are all responsible for our own actions: newsgroups are, after all, public discussion fora that are open to all participants -- if you make a controversial statement in such a place, you must be prepared to suffer the consequences. However, the threat of being taken out of context must once more be emphasised at this point: while the articles that can be found through Deja News appear to accurately reflect their writers' views, the background against which such views were expressed is much more difficult to extract. Furthermore, only very few newsgroup participants will be aware that their postings are continually being archived, since newsgroups generally appear as a fairly ephemeral medium: only the last few days or, at most, weeks of newsgroup traffic are usually stored on news servers. An awareness of being archived would help writers protect themselves -- it may also serve to impoverish newsgroup discussion, however. 
Even more importantly, the already-digital, computer-mediated nature of newsgroup discussions has far-reaching implications. Dealing with units of data that come in a handy, easily stored format, services like Deja News tend to archive Usenet newsgroups indefinitely -- your first flames as a 'clueless newbie', years ago, may therefore today still be used to embarrass you. This is the most important new development: analogue, organic, human memory eventually fades; we tend to organise our memories, and remember those things we regard as most important, while others gradually vanish. Many modern legal systems reflect this process by gradually deleting minor and even major offences from a citizen's criminal records -- they forgive as they forget. Other than in cases of extreme Alzheimer's brought about by server crashes and hard disk defects, digital memory, on the other hand, is perfect for unlimited periods of time: once entered into a database, newsgroup articles may be stored forever; ephemeral discussions become matters of permanent record. The computer doesn't forget. While its many Internet accolades bear witness to the benefits users have found in Deja News, the ethical questions the service raises have hardly been addressed so far, least of all on the Deja News Website itself. While apparently the inclusion of a header line "X-noarchive: yes" may prevent articles from being archived by Deja News, this isn't advertised anywhere; neither are there any statements justifying the unauthorised archiving of newsgroups, or any easily accessible mechanisms for users to have their own articles deleted from the archives. As has often been the case on the Internet, a private organisation has therefore become a semi-official institution, simply by virtue of having thought of an idea first; ethics and laws are left behind by technological development, and find that they have some catching-up to do. Of course, none of this should be seen as condemning Deja News as a malevolent organisation out to spy on Internet participants -- in fact, the company so far appears to have shown admirable restraint in declining to exploit its database. Deja News is at the centre of this controversy only by virtue of having implemented the idea of a 'memory of Usenet' too perfectly: the problem is not Deja News itself, but those who would use, to their own and possibly sinister ends, information made available without charge by Deja News. Eventually, in any case, Deja News itself may be overwhelmed by its own creation: with the number of Internet users still continually increasing, and with newsgroup articles accumulating, it is gradually getting harder to find the few most important postings amongst a multitude of discussions (in a similar way, search engine users are beginning to have trouble locating the most relevant Websites on any specific topic amongst a large number of less useful 'vanity' homepages). In the end, then, this new 'perfect' digital memory may have to learn an important capability from its analogue human counterpart: Deja News and similar archives may have to learn how to forget articles of lesser significance. A very simple first step towards this process has already been made: since December 1997, junk mail postings ('spam') have been removed from the Deja News archives (Woods, n. pag.).
While such articles, whose uselessness is almost universally agreed upon by the Internet community at large, constitute a clear case for removal, however, any further deletions will mean a significant step away from the original Deja News goal of providing a complete archive of Usenet newsgroups, and towards increasingly controversial value-judgments -- who, after all, is to decide which postings are worth archiving, and which are irrelevant? If memory is to be selective, who will do the selecting? Eventually (and even if new memory management technologies help prevent outright deletion by relegating less important information to some sort of second-rate, less accessed memory space), it seems, the problem returns to being an ethical one -- of what is archived where and for how long, of who has access to these data, and of how newsgroup writers can regain control of their articles to protect themselves and prevent abuse. Deja News and the Internet community as a whole would be well-advised to address the problems raised by this perfect memory of originally ephemeral conversations before any major and damaging abuse can occur. References "Company Background." Deja News. 1998. 11 Aug. 1998 <http://www.dejanews.com/emarket/about/background.shtml> "Deja News Invites Internet Users to Search the World's Largest On-Line Newsgroup Archive." Deja News. 30 May 1996. 11 Aug. 1998 <http://www.dejanews.com/emarket/about/pr/1996/dnpr_960530.shtml>. Woods, Bob. "Deja News Cuts, Increases Content." Newsbytes 8 Dec. 1997. Citation reference for this article MLA style: Axel Bruns. "Archiving the Ephemeral: Deja News and the Ethics of Perfect Memory." M/C: A Journal of Media and Culture 1.2 (1998). [your date of access] <http://www.uq.edu.au/mc/9808/deja.php>. Chicago style: Axel Bruns, "Archiving the Ephemeral: Deja News and the Ethics of Perfect Memory," M/C: A Journal of Media and Culture 1, no. 2 (1998), <http://www.uq.edu.au/mc/9808/deja.php> ([your date of access]). APA style: Axel Bruns. (1998) Archiving the ephemeral: Deja News and the ethics of perfect memory. M/C: A Journal of Media and Culture 1(2). <http://www.uq.edu.au/mc/9808/deja.php> ([your date of access]).
APA, Harvard, Vancouver, ISO, and other styles
31

Munro, Ealasaid. "Developing the Rural Creative Economy ‘from Below’: Exploring Practices of Market-Building amongst Creative Entrepreneurs in Rural and Remote Scotland." M/C Journal 19, no. 3 (June 22, 2016). http://dx.doi.org/10.5204/mcj.1071.

Full text
Abstract:
IntroductionThis paper is concerned with recent attempts to develop the creative economy in rural Scotland. Research shows that the creative economy is far from self-organising, and that an appropriate institutional landscape is important to its development (Andersson and Henrekson). In Scotland, there is a proliferation of support mechanisms – from those designed to help creative entrepreneurs improve their business, management, or technical expertise, to infrastructure projects, to collective capacity-building. In rural Scotland, this support landscape is particularly cluttered. This article tackles the question: How do rural creative entrepreneurs negotiate this complex funding and support landscape, and how do they aid the development of the rural creative economy ‘from below’? From Creative Industries to the Creative EconomyThe creative industries have been central to the UK’s economic growth strategy since the 1990s. According to the Centre for Economics and Business Research the creative industries contributed £5.9bn to the economy in 2013 (CEBR 17). In the last five years there have been significant improvements in ICTs, leading to growth in digital creative production, distribution, and consumption. The established creative industries, along with the nascent ‘digital industries’ are often grouped together as a separate economic sector – the ‘creative economy’ (Nesta A Manifesto for the Creative Economy).Given its close association with creative city discourses (see Florida 2002), research on the creative economy remains overwhelmingly urban-focused. As a result of this urban bias, the rural creative economy is under-researched. Bell and Jayne (209) note that in the last decade a small body of academic work on the rural creative economy has emerged (Harvey et al.; White). In particular, the Australian context has generated a wealth of discussion as regards national and regional attempts to develop the rural creative economy, the contribution of ‘creativity’ to rural economic and social development, sustainability and resilience, and the role that individual creative practitioners play in developing the rural creative economy (see Argent et al.; Gibson, Gibson and Connell; Waitt and Gibson).In the absence of suitable infrastructure, such as: adequate transport infrastructure, broadband and mobile phone connectivity, workspaces and business support, it often falls to rural creative practitioners themselves to ‘patch the gaps’ in the institutional infrastructure. This paper is concerned with the ways in which rural creative practitioners attempt to contribute to the development of the creative economy ‘from below’. ICTs have great potential to benefit rural areas in this respect, by “connecting people and places, businesses and services” (Townsend et al. Enhanced Broadband Access 581).The Scottish InfrastructureSince 1998, cultural policy has been devolved to Scotland, and has fallen under the control of the Scottish Government and Parliament. In an earlier examination of a Scottish creative business support agency, I noted that the Scottish Government has adopted a creative industries development strategy broadly in line with that coming out of Westminster, and subsequently taken up worldwide, and that the Scottish institutional infrastructure is extremely complex (Schlesinger et al.). 
Crucially, the idea of ‘intervention’, or the availability of a draw-down programme of funding and support that will help creative practitioners develop a business from their talent, is key (Schlesinger). The main funder for Scottish artists and creative practitioners is Creative Scotland, who distribute money from the Scottish Government and the National Lottery. Highlands and Islands Enterprise (HIE) also offer funding and support for creative practitioners working in the Highlands and Islands region. Further general business support may be drawn down from Business Gateway (who work Scotland-wide but are not creative-industries specific), or Scottish Enterprise (who work Scotland-wide, are not creative-industries specific, and are concerned with businesses turning over more than £250,000 p.a.). Additionally, creative-sector specific advice and support may be sought from Cultural Enterprise Office (based in Glasgow and primarily serving the Central Belt), Creative Edinburgh, Dundee or Stirling (creative networks that serve their respective cities), the Creative Arts and Business Network (based in Dumfries, serving the Borders), and Emergents (based in Inverness, dealing with rural craftspeople and authors).
Methodology
The article draws on material gathered as part of three research projects, all concerned with the current support landscape for creative practitioners in Scotland. The first, ‘Supporting Creative Business’, was funded by the Arts and Humanities Research Council; the second, ‘Towards a model of support for the rural creative industries’, was funded by the University of Glasgow; and the third, ‘The effects of improved communications technology on rural creative entrepreneurs’, was funded by CREATe, the Research Councils UK Centre for the Study of Copyright and New Business Models in the Creative Economy. In all three cases, the research was theoretically and practically informed by the multi-sited ethnographies of cultural, creative and media work conducted by Moeran (Ethnography at Work, The Business of Ethnography) and Mould et al. Whilst the methodology for all three of my projects was ethnography, the methods utilised included interviews (n=23) – with interviewees drawn from across rural Scotland – participant and non-participant observation, and media and document analysis. Interviewees and study sites were accessed via snowball sampling, which was enabled by a measure of continuity between the three projects. This paper draws primarily on interview material and ethnographic ‘vignettes’. All individuals cited in the paper are anonymised in line with the University of Glasgow’s ethics guidelines.
Cities, Creativity, and ‘Buzz’
As noted earlier, cities are seen as the driving force behind the creative industries; and accordingly, much of the institutional infrastructure that supports the rural creative industries is modelled on urban systems of intervention. Cities are seen as breeding grounds for creativity by virtue of what Storper and Venables call their ‘buzz’ – consider, for example, the sheer numbers of creative practitioners that congregate in cities, the presence of art schools, work spaces and so on. Several of the creative practitioners I spoke to identified the lack of ‘buzz’ as one key difference between working in cities and working from rural places: It can be isolating out here. There are days when I miss art school, and my peers. I really valued their support and just the general chit chat and news. […] And having everything on your doorstep.
(Visual artist, Argyll)Of course, rural creatives didn’t equate the ‘buzz’ of activity in cities with personal or professional creative success. Rather, they felt that developing a creative business was made easier by the fact that most funders and support agencies were based in Scotland’s Central Belt. The creatives resident there were able to take advantage of that proximity and the relationships that it enabled them to build, but also, the institutional landscape was supplemented by the creative ‘buzz’, which was difficult to quantify and impossible to replicate in rural areas.Negotiating the Funding and Support LandscapeI spoke to rural creative practitioners about whether the institutional infrastructure – in this case, relevant policy at national and UK level, funding and support agencies, membership bodies etcetera – was adequate. A common perspective was that the institutional infrastructure was extremely complex, which acted as a barrier for creatives seeking funding and support:Everything works ok, the problem is that there’s so many different places to go to for advice, and so many different criteria that you have to meet if you wanted funding, and what’s your first port of call, and it’s just too complicated. I feel that as a rural artist I fall between the cracks […] am I a creative business, a rural creative business, or just a rural business? (Craftsperson, Shetland) Interviewees suggested that there were ‘gaps’ in the institutional infrastructure, caused not by the lack of appropriate policy, funders, or support agencies but rather by their proliferation and a sense of confusion about who to approach. Furthermore, funding agencies such as Creative Scotland have, in recent years, come under fire for the complexity of their funding and support systems:They have simplified their application process, but I just can’t be bothered trying to get anything out of Creative Scotland at the moment. I don’t find their support that useful and they directed me to Cultural Enterprise Office when I asked for advice on filling in the form and tailoring the application, and CEO were just so pushed for time, I couldn’t get a Skype with them. The issue with getting funding from anywhere is the teeny tiny likelihood of getting money, coupled with how time-consuming the application process is. So for now, I’m just trying to be self-sufficient without asking for any development funds. But I am not sure how sustainable that is. (Craftsperson, Skye, interview) There was a sense that ‘what works’ to enable urban creative practitioners to develop their practice is not necessarily sufficient to help rural creatives. Because most policymakers, funders and support bodies are based in the Central Belt, rural creatives feel that the challenges they face are poorly understood. One arts administrator summed up why, statingthe problem is that people in the Central Belt don’t get what we’re dealing with up here, unless they’ve actually lived here. The remoteness, poor transport links, internet and mobile access […] it impacts on your ability to develop your business. If I want to attend a course, some organisations will pay travel and accommodation. But they don’t account for the fact that if I travel from Eigg, I’ll need to work around the ferry times, which might mean two extra nights’ accommodation plus the cost of travel … we’re excluded from opportunities because of our location. 
(Arts administrator, the Small Isles) A further issue identified by several participants in this research is that funding and support agencies Scotland-wide tend to work to standardised definitions of the creative industries that privilege high-growth sectors (see Luckman). This led to many heritage and craft businesses feeling excluded. One local authority stakeholder told me,exactly what the creative industries are, well that might be obvious on paper but real life is a bit more complicated. Where do we put a craftsperson whose craft work is done in her spare time but pays just enough to stop her needing a second job? How do we tell people like this, who say they are in the creative industries, that they aren’t actually according to this criteria or that criteria? (Local authority stakeholder, Shetland, interview)Creating Virtual ‘Buzz’? The Potential of ICTsAccording to 2015 OFCOM figures (10-12), in rural Scotland 85.9% of households can receive broadband, and 6.3% can receive superfast. The Scottish Government’s ambition is to deliver superfast broadband to up to 90 per cent of premises in Scotland by March 2016, and to extend this to 95 per cent by 2017. Whilst the current landscape as regards broadband provision is far from ideal, there are signs that improved provision is profoundly affecting the way that rural creatives develop their practice, and the way they engage with the institutional infrastructure set up to support them.At an industry event run by HIE in July 2015, a diverse panel of rural creatives spoke of how they exploited the possibilities associated with improved ICTs in order to offset some of the aforementioned problems of working from rural and remote areas. As the event was conducted under Chatham House rules, the following is adapted from field notes,It was clear from the panel and the Q&A that followed that improved ICTs meant that creatives could access training and support in new ways–online courses and training materials, webinars, and one-on-one Skype coaching, training and mentoring. Whilst of course most people would prefer face-to-face contact in this respect, the willingness of training providers to offer online solutions was appreciated, and most of the creatives on the panel (and many in the audience) had taken advantage of these partial solutions. The rural creatives on the panel also detailed the tactics that they used in order to ‘patch the gaps’ in the institutional infrastructure:There were four things that emerged from the panel discussion, Q&A and subsequent conversations I had on how technology benefited rural creatives: peer support, proximity to decision-makers, marketing and sales, and heritage and provenance.In terms of peer support, the panel felt that improved connectivity allowed them to access ‘virtual’ peer support through the internet. This was particularly important in terms of seeking advice regarding funding, business support and training, generating new creative ideas, and seeking emotional support from others who were familiar with the strains of running a creative business.Rural creatives found that social media (in particular) meant that they had a closer relationship with ‘distant’ decision-makers. They felt able to join events via livestreaming, and took advantage of hash tagging to take part in events, ‘policy hacks’ and consultations. 
Attendees I spoke to also mentioned that prominent Government ministers and other decision-makers had a strong Twitter presence and made it clear that they were at times ‘open’ to direct communication. In this way, rural creatives felt that they could ‘make their voices heard’ in new ways.In terms of marketing and sales, panel members found social media invaluable in terms of building online ‘presence’. All of the panel members sold services and products through dedicated websites (and noted that improved broadband speeds and 3G meant that these websites were increasingly sophisticated, allowing them to upload photographs and video clips, or act as client ‘portals’), however they also sought out other local creatives, or creatives working in the same sector in order to build visible networks on social media such as Instagram, Twitter and Facebook. This echoes an interview I conducted with a designer from Orkney, who suggested that these online networks allowed designers to build a rapport with customers, but also to showcase their products and build virtual ‘buzz’ around their work (and the work of others) in the hope their designs would be picked up by bloggers, the fashion press and stylists.The designer on the panel also noted that social media allowed her to showcase the provenance of her products. As she spoke I checked her Twitter and Instagram feeds, as well as the feeds of other designers she was linked to; a large part of their ‘advertising’ through these channels entailed giving followers an insight into life on the islands. The visual nature of these media also allowed them to document how local histories of making had influenced their practice, and how their rural location had influenced their work. It struck me that this was a really effective way to capture consumers’ imaginations. As we can see, improved ICTs had a substantial impact on rural creatives’ practice. Not only did several of the panel members suggest that improved ICTs changed the nature of the products that they could produce (by enabling them to buy in different materials and tools, and cultivate longer and more complex supply chains), they also noted that improved ICTs enabled them to cultivate new markets, to build stronger networks and to participate more fully in discussions with ‘distant’ policymakers and decision makers. Furthermore, ICTs were seen as acting as a proxy for ‘buzz’ for rural creatives, that is, face-to-face communication was still preferred, but savvy use of ICTs went some way to mitigating the problems of a rural location. This extends Storper and Venables’s conceptualisation of the idea, which understands ‘buzz’ as the often-intangible benefits of face-to-face contact.Problematically however, as Townsend et al. state, “rural isolation is amplified by the technological landscape, with rural communities facing problems both in terms of broadband access technologies and willingness or ability of residents to adopt these” (Enhanced Broadband Access 5). As such, the development activities of rural creatives are hampered by poor provision and a slow ‘roll out’ of broadband and mobile coverage. ConclusionsThis paper is concerned with recent attempts to develop the rural creative economy in Scotland. The paper can be read in relation to a small but expanding body of work that seeks to understand the distinctive formation of the rural creative industries across Europe and elsewhere (Bell and Jayne), and how these can best be developed and supported (White). 
Recent, targeted intervention in the rural creative industries speaks to concerns about the emergence of a ‘two tier’ Europe, with remote and sparsely-populated rural regions with narrow economic bases falling behind more resilient cities and city-regions (Markusen and Gadwa; Wiggering et al.), yet exactly how the rural creative industries function and can be further developed is an underdeveloped research area.In order to contribute to this body of work, this paper has sketched out some of the problems associated with recent attempts to develop the creative economy in rural Scotland. On a Scotland-wide scale, there is a proliferation of policies, funding bodies, and support agencies designed to organise and regulate the creative economy. In rural areas, there is also an ‘overlap’ between Scotland-wide bodies and rural-specific bodies, meaning that many rural creatives feel as if they ‘fall through the cracks’ in terms of funding and support. Additionally, rural creatives noted that Central Belt-based funders and support agencies struggled to fully understand the difficulties associated with making a living from a rural location.The sense of being distant from decision makers and isolated in terms of practice meant that many rural creatives took it upon themselves to develop the creative economy ‘from below’. The creatives that I spoke to had an array of ‘tactics’ that they used, some of which I have detailed here. In this short paper I have focused on one issue articulated within interviews – the idea of exploiting ICTs in order to build stronger networks between creatives and between creatives and decision makers within funding bodies and support agencies. Problematically, however, it was recognised that these creative-led initiatives could only do so much to mitigate the effects of a cluttered, piecemeal funding and support landscape.My research suggests that as it stands, ‘importing’ models from urban contexts is alienating and frustrating for rural creatives and targeted, rural-specific intervention is required. Research demonstrates that creative practitioners often seek to bring about social and cultural impact through their work, rather than engaging in creative activities merely for economic gain (McRobbie Be Creative, Rethinking Creative Economies; Waitt and Gibson). Whilst this is true of creatives in both urban and rural areas, my research suggests that this is particularly important to rural creatives, who see themselves as contributing economically, social and culturally to the development of the communities within which they are embedded (see Duxbury and Campbell; Harvey et al.). ‘Joined up’ support for this broad-based set of aims would greatly benefit rural creatives and maximise the potential of the rural creative industries.ReferencesAndersson, Martin, and Magnus Henrekson. "Local Competiveness Fostered through Local Institutions for Entrepreneurship." Research Institute on Industrial Economics Work Paper Series (2014), 0-57. Argent, Neil, Matthew Tonts, Roy Jones and John Holmes. “A Creativity-Led Rural Renaissance? Amenity-Led Migration, the Creative Turn and the Uneven Development of Rural Australia.” Applied Geography 44 (2013): 88-98.Bell, David, and Mark Jayne. "The Creative Countryside: Policy and Practice in the UK Rural Cultural Economy." Journal of Rural Studies 26.3 (2010): 209-18.Centre for Economic and Business Research. The Contribution of the Arts and Culture to the National Economy. London: CEBR, 2013. 1-13.Duxbury, Nancy, and Heather Campbell. 
“Developing and Revitalizing Rural Communities through Arts and Culture.” Small Cities Imprint 3.1 (2011): 1-7.Florida, Richard. The Rise of the Creative Class: And How It's Transforming Work, Leisure, Community and Everyday Life. London: Basic Books, 2002.Gibson, Chris. “Cultural Economy: Achievements, Divergences, Future Prospects.” Geographical Research 50.3 (2012): 282-290.Gibson, Chris, and Jason Connell. “The Role of Festivals in Drought-Affected Australian Communities.” Event Management 19.4 (2015): 445-459.Harvey, David, Harriet Hawkins, and Nicola Thomas. "Thinking Creative Clusters beyond the City: People, Places and Networks." Geoforum 43.3 (2012): 529-39.Luckman, Susan. Locating Cultural Work: The Politics and Poetics of Rural, Regional and Remote Creativity. London: Palgrave Macmillan, 2012.McRobbie, Angela. Be Creative! London: Polity, 2016.———. “Rethinking Creative Economies as Radical Social Enterprise.” Variant 41 (2011): 32–33 Moeran, Brian. Ethnography at Work. London: A&C Black, 2007.———. The Business of Ethnography. London: Berg, 2005.Mould, Oliver, Tim Vorley, and Kai Liu. “Invisible Creativity? Highlighting the Hidden Impact of Freelancing in London's Creative Industries.” European Planning Studies 12 (2014): 2436-55.Nesta. Creative Industries and Rural Innovation. London: Nesta, 2007.———. A Manifesto for the Creative Economy. London: Nesta, 2013.Oakley, Kate. "Good Work? Rethinking Cultural Entrepreneurship." Handbook of Management and Creativity (2014): 145-59.O'Brien, Dave, and Peter Matthews. After Urban Regeneration: Communities, Policy and Place. London: Policy Press, 2015.Office of the Communications Regulator. Communications Market Report 2015. London: OFCOM, 2015. i-431.Schlesinger, Philip. “Foreword.” In Bob Last, Creativity, Value and Money. Glasgow: Cultural Enterprise Office, forthcoming 2016. 1-2.Schlesinger, Philip, Melanie Selfe, and Ealasaid Munro. Curators of Cultural Enterprise: A Critical Analysis of a Creative Business Intermediary. London: Springer, 2015. 1-134.Storper, Michael, and Anthony J. Venables. "Buzz: Face-to-Face Contact and the Urban Economy." Journal of Economic Geography 4.4 (2004): 351-70.Townsend, Leanne, Arjun Sathiaseelan, Gorry Fairhurst, and Claire Wallace. "Enhanced Broadband Access as a Solution to the Social and Economic Problems of the Rural Digital Divide." Local Economy 28.6 (2013): 580-95.Townsend, Leanne, Claire Wallace, Alison Smart, and Timothy Norman. “Building Virtual Bridges: How Rural Micro-Enterprises Develop Social Capital in Online and Face-to-Face Settings.” Sociologia Ruralis 56.1 (2016): 29-47.Waitt, Gordon, and Chris Gibson. “The Spiral Gallery: Non-Market Creativity and Belonging in an Australian Country Town.” Journal of Rural Studies 30 (2013): 75-85.White, Pauline. "Creative Industries in a Rural Region: Creative West: The Creative Sector in the Western Region of Ireland." Creative Industries Journal 3.1 (2010): 79-88.
APA, Harvard, Vancouver, ISO, and other styles
32

Rossiter, Ned. "Creative Industries and the Limits of Critique from." M/C Journal 6, no. 3 (June 1, 2003). http://dx.doi.org/10.5204/mcj.2208.

Full text
Abstract:
‘Every space has become ad space’. Steve Hayden, Wired Magazine, May 2003. Marshall McLuhan’s (1964) dictum that media technologies constitute a sensory extension of the body shares a conceptual affinity with Ernst Jünger’s notion of ‘“organic construction” [which] indicates [a] synergy between man and machine’ and Walter Benjamin’s exploration of the mimetic correspondence between the organic and the inorganic, between human and non-human forms (Bolz, 2002: 19). The logo or brand is co-extensive with various media of communication – billboards, TV advertisements, fashion labels, book spines, mobile phones, etc. Often the logo is interchangeable with the product itself or a way or life. Since all social relations are mediated, whether by communications technologies or architectonic forms ranging from corporate buildings to sporting grounds to family living rooms, it follows that there can be no outside for sociality. The social is and always has been in a mutually determining relationship with mediating forms. It is in this sense that there is no outside. Such an idea has become a refrain amongst various contemporary media theorists. Here’s a sample: There is no outside position anymore, nor is this perceived as something desirable. (Lovink, 2002a: 4) Both “us” and “them” (whoever we are, whoever they are) are all always situated in this same virtual geography. There’s no outside …. There is nothing outside the vector. (Wark, 2002: 316) There is no more outside. The critique of information is in the information itself. (Lash, 2002: 220) In declaring a universality for media culture and information flows, all of the above statements acknowledge the political and conceptual failure of assuming a critical position outside socio-technically constituted relations. Similarly, they recognise the problems inherent in the “ideology critique” of the Frankfurt School who, in their distinction between “truth” and “false-consciousness”, claimed a sort of absolute knowledge for the critic that transcended the field of ideology as it is produced by the culture industry. Althusser’s more complex conception of ideology, material practices and subject formation nevertheless also fell prey to the pretence of historical materialism as an autonomous “science” that is able to determine the totality, albeit fragmented, of lived social relations. One of the key failings of ideology critique, then, is its incapacity to account for the ways in which the critic, theorist or intellectual is implicated in the operations of ideology. That is, such approaches displace the reflexivity and power relationships between epistemology, ontology and their constitution as material practices within socio-political institutions and historical constellations, which in turn are the settings for the formation of ideology. Scott Lash abandons the term ideology altogether due to its conceptual legacies within German dialectics and French post-structuralist aporetics, both of which ‘are based in a fundamental dualism, a fundamental binary, of the two types of reason. One speaks of grounding and reconciliation, the other of unbridgeability …. Both presume a sphere of transcendence’ (Lash, 2002: 8). Such assertions can be made at a general level concerning these diverse and often conflicting approaches when they are reduced to categories for the purpose of a polemic. 
However, the work of “post-structuralists” such as Foucault, Deleuze and Guattari and the work of German systems theorist Niklas Luhmann is clearly amenable to the task of critique within information societies (see Rossiter, 2003). Indeed, Lash draws on such theorists in assembling his critical dispositif for the information age. More concretely, Lash (2002: 9) advances his case for a new mode of critique by noting the socio-technical and historical shift from ‘constitutive dualisms of the era of the national manufacturing society’ to global information cultures, whose constitutive form is immanent to informational networks and flows. Such a shift, according to Lash, needs to be met with a corresponding mode of critique: Ideologycritique [ideologiekritik] had to be somehow outside of ideology. With the disappearance of a constitutive outside, informationcritique must be inside of information. There is no outside any more. (2002: 10) Lash goes on to note, quite rightly, that ‘Informationcritique itself is branded, another object of intellectual property, machinically mediated’ (2002: 10). It is the political and conceptual tensions between information critique and its regulation via intellectual property regimes which condition critique as yet another brand or logo that I wish to explore in the rest of this essay. Further, I will question the supposed erasure of a “constitutive outside” to the field of socio-technical relations within network societies and informational economies. Lash is far too totalising in supposing a break between industrial modes of production and informational flows. Moreover, the assertion that there is no more outside to information too readily and simplistically assumes informational relations as universal and horizontally organised, and hence overlooks the significant structural, cultural and economic obstacles to participation within media vectors. That is, there certainly is an outside to information! Indeed, there are a plurality of outsides. These outsides are intertwined with the flows of capital and the imperial biopower of Empire, as Hardt and Negri (2000) have argued. As difficult as it may be to ascertain the boundaries of life in all its complexity, borders, however defined, nonetheless exist. Just ask the so-called “illegal immigrant”! This essay identifies three key modalities comprising a constitutive outside: material (uneven geographies of labour-power and the digital divide), symbolic (cultural capital), and strategic (figures of critique). My point of reference in developing this inquiry will pivot around an analysis of the importation in Australia of the British “Creative Industries” project and the problematic foundation such a project presents to the branding and commercialisation of intellectual labour. The creative industries movement – or Queensland Ideology, as I’ve discussed elsewhere with Danny Butt (2002) – holds further implications for the political and economic position of the university vis-à-vis the arts and humanities. Creative industries constructs itself as inside the culture of informationalism and its concomitant economies by the very fact that it is an exercise in branding. 
Such branding is evidenced in the discourses, rhetoric and policies of creative industries as adopted by university faculties, government departments and the cultural industries and service sectors seeking to reposition themselves in an institutional environment that is adjusting to ongoing structural reforms attributed to the demands by the “New Economy” for increased labour flexibility and specialisation, institutional and economic deregulation, product customisation and capital accumulation. Within the creative industries the content produced by labour-power is branded as copyrights and trademarks within the system of Intellectual Property Regimes (IPRs). However, as I will go on to show, a constitutive outside figures in material, symbolic and strategic ways that condition the possibility of creative industries. The creative industries project, as envisioned by the Blair government’s Department of Culture, Media and Sport (DCMS) responsible for the Creative Industry Task Force Mapping Documents of 1998 and 2001, is interested in enhancing the “creative” potential of cultural labour in order to extract a commercial value from cultural objects and services. Just as there is no outside for informationcritique, for proponents of the creative industries there is no culture that is worth its name if it is outside a market economy. That is, the commercialisation of “creativity” – or indeed commerce as a creative undertaking – acts as a legitimising function and hence plays a delimiting role for “culture” and, by association, sociality. And let us not forget, the institutional life of career academics is also at stake in this legitimating process. The DCMS cast its net wide when defining creative sectors and deploys a lexicon that is as vague and unquantifiable as the next mission statement by government and corporate bodies enmeshed within a neo-liberal paradigm. At least one of the key proponents of the creative industries in Australia is ready to acknowledge this (see Cunningham, 2003). The list of sectors identified as holding creative capacities in the CITF Mapping Document include: film, music, television and radio, publishing, software, interactive leisure software, design, designer fashion, architecture, performing arts, crafts, arts and antique markets, architecture and advertising. The Mapping Document seeks to demonstrate how these sectors consist of ‘... activities which have their origin in individual creativity, skill and talent and which have the potential for wealth and job creation through generation and exploitation of intellectual property’ (CITF: 1998/2001). The CITF’s identification of intellectual property as central to the creation of jobs and wealth firmly places the creative industries within informational and knowledge economies. Unlike material property, intellectual property such as artistic creations (films, music, books) and innovative technical processes (software, biotechnologies) are forms of knowledge that do not diminish when they are distributed. This is especially the case when information has been encoded in a digital form and distributed through technologies such as the internet. In such instances, information is often attributed an “immaterial” and nonrivalrous quality, although this can be highly misleading for both the conceptualisation of information and the politics of knowledge production. 
Intellectual property, as distinct from material property, operates as a scaling device in which the unit cost of labour is offset by the potential for substantial profit margins realised by distribution techniques availed by new information and communication technologies (ICTs) and their capacity to infinitely reproduce the digital commodity object as a property relation. Within the logic of intellectual property regimes, the use of content is based on the capacity of individuals and institutions to pay. The syndication of media content ensures that market saturation is optimal and competition is kept to a minimum. However, such a legal architecture and hegemonic media industry has run into conflict with other net cultures such as open source movements and peer-to-peer networks (Lovink, 2002b; Meikle, 2002), which is to say nothing of the digital piracy of software and digitally encoded cinematic forms. To this end, IPRs are an unstable architecture for extracting profit. The operation of Intellectual Property Regimes constitutes an outside within creative industries by alienating labour from its mode of information or form of expression. Lash is apposite on this point: ‘Intellectual property carries with it the right to exclude’ (Lash, 2002: 24). This principle of exclusion applies not only to those outside the informational economy and culture of networks as result of geographic, economic, infrastructural, and cultural constraints. The very practitioners within the creative industries are excluded from control over their creations. It is in this sense that a legal and material outside is established within an informational society. At the same time, this internal outside – to put it rather clumsily – operates in a constitutive manner in as much as the creative industries, by definition, depend upon the capacity to exploit the IP produced by its primary source of labour. For all the emphasis the Mapping Document places on exploiting intellectual property, it’s really quite remarkable how absent any elaboration or considered development of IP is from creative industries rhetoric. It’s even more astonishing that media and cultural studies academics have given at best passing attention to the issues of IPRs. Terry Flew (2002: 154-159) is one of the rare exceptions, though even here there is no attempt to identify the implications IPRs hold for those working in the creative industries sectors. Perhaps such oversights by academics associated with the creative industries can be accounted for by the fact that their own jobs rest within the modern, industrial institution of the university which continues to offer the security of a salary award system and continuing if not tenured employment despite the onslaught of neo-liberal reforms since the 1980s. Such an industrial system of traditional and organised labour, however, does not define the labour conditions for those working in the so-called creative industries. Within those sectors engaged more intensively in commercialising culture, labour practices closely resemble work characterised by the dotcom boom, which saw young people working excessively long hours without any of the sort of employment security and protection vis-à-vis salary, health benefits and pension schemes peculiar to traditional and organised labour (see McRobbie, 2002; Ross, 2003). 
During the dotcom mania of the mid to late 90s, stock options were frequently offered to people as an incentive for offsetting the often minimum or even deferred payment of wages (see Frank, 2000). It is understandable that the creative industries project holds an appeal for managerial intellectuals operating in arts and humanities disciplines in Australia, most particularly at Queensland University of Technology (QUT), which claims to have established the ‘world’s first’ Creative Industries faculty (http://www.creativeindustries.qut.com/). The creative industries provide a validating discourse for those suffering anxiety disorders over what Ruth Barcan (2003) has called the ‘usefulness’ of ‘idle’ intellectual pastimes. As a project that endeavours to articulate graduate skills with labour markets, the creative industries is a natural extension of the neo-liberal agenda within education as advocated by successive governments in Australia since the Dawkins reforms in the mid 1980s (see Marginson and Considine, 2000). Certainly there’s a constructive dimension to this: graduates, after all, need jobs and universities should display an awareness of market conditions; they also have a responsibility to do so. And on this count, I find it remarkable that so many university departments in my own field of communications and media studies are so bold and, let’s face it, stupid, as to make unwavering assertions about market demands and student needs on the basis of doing little more than sniffing the wind! Time for a bit of a reality check, I’d say. And this means becoming a little more serious about allocating funds and resources towards market research and analysis based on the combination of needs between students, staff, disciplinary values, university expectations, and the political economy of markets. However, the extent to which there should be a wholesale shift of the arts and humanities into a creative industries model is open to debate. The arts and humanities, after all, are a set of disciplinary practices and values that operate as a constitutive outside for creative industries. Indeed, in their creative industries manifesto, Stuart Cunningham and John Hartley (2002) loathe the arts and humanities in such confused, paradoxical and hypocritical ways in order to establish the arts and humanities as a cultural and ideological outside. To this end, to subsume the arts and humanities into the creative industries, if not eradicate them altogether, is to spell the end of creative industries as it’s currently conceived at the institutional level within academe. Too much specialisation in one post-industrial sector, broad as it may be, ensures a situation of labour reserves that exceed market needs. One only needs to consider all those now unemployed web-designers who graduated from multi-media programs in the mid to late 90s. Further, it does not augur well for the inevitable shift from or collapse of a creative industries economy. Where is the standing reserve of labour shaped by university education and training in a post-creative industries economy? Diehard neo-liberals and true-believers in the capacity for perpetual institutional flexibility would say that this isn’t a problem. The university will just “organically” adapt to prevailing market conditions and shape its curriculum and staff composition accordingly. Perhaps. 
Arguably if the university is to maintain a modality of time that is distinct from the just-in-time mode of production characteristic of informational economies – and indeed, such a difference is a quality that defines the market value of the educational commodity – then limits have to be established between institutions of education and the corporate organisation or creative industry entity. The creative industries project is a reactionary model insofar as it reinforces the status quo of labour relations within a neo-liberal paradigm in which bids for industry contracts are based on a combination of rich technological infrastructures that have often been subsidised by the state (i.e. paid for by the public), high labour skills, a low currency exchange rate and the lowest possible labour costs. In this respect it is no wonder that literature on the creative industries omits discussion of the importance of unions within informational, networked economies. What is the place of unions in a labour force constituted as individualised units? The conditions of possibility for creative industries within Australia are at once its frailties. In many respects, the success of the creative industries sector depends upon the ongoing combination of cheap labour enabled by a low currency exchange rate and the capacity of students to access the skills and training offered by universities. Certainly in relation to matters such as these there is no outside for the creative industries. There’s a great need to explore alternative economic models to the content production one if wealth is to be successfully extracted and distributed from activities in the new media sectors. The suggestion that the creative industries project initiates a strategic response to the conditions of cultural production within network societies and informational economies is highly debatable. The now well-documented history of digital piracy in the film and software industries and the difficulties associated with regulating violations to proprietors of IP in the form of copyright and trademarks is enough of a reason to look for alternative models of wealth extraction. And you can be sure this will occur irrespective of the endeavours of the creative industries. To conclude, I am suggesting that those working in the creative industries, be they content producers or educators, need to intervene in IPRs in such a way that: 1) ensures the alienation of their labour is minimised; 2) collectivising “creative” labour in the form of unions or what Wark (2001) has termed the “hacker class”, as distinct from the “vectoralist class”, may be one way of achieving this; and 3) the advocates of creative industries within the higher education sector in particular are made aware of the implications IPRs have for graduates entering the workforce and adjust their rhetoric, curriculum, and policy engagements accordingly. Works Cited Barcan, Ruth. ‘The Idleness of Academics: Reflections on the Usefulness of Cultural Studies’. Continuum: Journal of Media & Cultural Studies (forthcoming, 2003). Bolz, Norbert. ‘Rethinking Media Aesthetics’, in Geert Lovink, Uncanny Networks: Dialogues with the Virtual Intelligentsia. Cambridge, Mass.: MIT Press, 2002, 18-27. Butt, Danny and Rossiter, Ned. ‘Blowing Bubbles: Post-Crash Creative Industries and the Withering of Political Critique in Cultural Studies’. 
Paper presented at Ute Culture: The Utility of Culture and the Uses of Cultural Studies, Cultural Studies Association of Australia Conference, Melbourne, 5-7 December, 2002. Posted to fibreculture mailing list, 10 December, 2002, http://www.fibreculture.org/archives/index.html Creative Industry Task Force: Mapping Document, DCMS (Department of Culture, Media and Sport), London, 1998/2001. http://www.culture.gov.uk/creative/mapping.html Cunningham, Stuart. ‘The Evolving Creative Industries: From Original Assumptions to Contemporary Interpretations’. Seminar Paper, QUT, Brisbane, 9 May, 2003, http://www.creativeindustries.qut.com/research/cirac/documents/THE_EVOLVING_CREATIVE_INDUSTRIES.pdf Cunningham, Stuart; Hearn, Gregory; Cox, Stephen; Ninan, Abraham and Keane, Michael. Brisbane’s Creative Industries 2003. Report delivered to Brisbane City Council, Community and Economic Development, Brisbane: CIRAC, 2003. http://www.creativeindustries.qut.com/research/cirac/documents/bccreportonly.pdf Flew, Terry. New Media: An Introduction. Oxford: Oxford University Press, 2002. Frank, Thomas. One Market under God: Extreme Capitalism, Market Populism, and the End of Economic Democracy. New York: Anchor Books, 2000. Hartley, John and Cunningham, Stuart. ‘Creative Industries: from Blue Poles to fat pipes’, in Malcolm Gillies (ed.) The National Humanities and Social Sciences Summit: Position Papers. Canberra: DEST, 2002. Hayden, Steve. ‘Tastes Great, Less Filling: Ad Space – Will Advertisers Learn the Hard Lesson of Over-Development?’. Wired Magazine 11.06 (June, 2003), http://www.wired.com/wired/archive/11.06/ad_spc.html Hardt, Michael and Negri, Antonio. Empire. Cambridge, Mass.: Harvard University Press, 2000. Lash, Scott. Critique of Information. London: Sage, 2002. Lovink, Geert. Uncanny Networks: Dialogues with the Virtual Intelligentsia. Cambridge, Mass.: MIT Press, 2002a. Lovink, Geert. Dark Fiber: Tracking Critical Internet Culture. Cambridge, Mass.: MIT Press, 2002b. McLuhan, Marshall. Understanding Media: The Extensions of Man. London: Routledge and Kegan Paul, 1964. McRobbie, Angela. ‘Clubs to Companies: Notes on the Decline of Political Culture in Speeded up Creative Worlds’, Cultural Studies 16.4 (2002): 516-31. Marginson, Simon and Considine, Mark. The Enterprise University: Power, Governance and Reinvention in Australia. Cambridge: Cambridge University Press, 2000. Meikle, Graham. Future Active: Media Activism and the Internet. Sydney: Pluto Press, 2002. Ross, Andrew. No-Collar: The Humane Workplace and Its Hidden Costs. New York: Basic Books, 2003. Rossiter, Ned. ‘Processual Media Theory’, in Adrian Miles (ed.) Streaming Worlds: 5th International Digital Arts & Culture (DAC) Conference. 19-23 May. Melbourne: RMIT University, 2003, 173-184. http://hypertext.rmit.edu.au/dac/papers/Rossiter.pdf Sassen, Saskia. Losing Control? Sovereignty in an Age of Globalization. New York: Columbia University Press, 1996. Wark, McKenzie. ‘Abstraction’ and ‘Hack’, in Hugh Brown, Geert Lovink, Helen Merrick, Ned Rossiter, David Teh, Michele Willson (eds). Politics of a Digital Present: An Inventory of Australian Net Culture, Criticism and Theory. Melbourne: Fibreculture Publications, 2001, 3-7, 99-102. Wark, McKenzie. ‘The Power of Multiplicity and the Multiplicity of Power’, in Geert Lovink, Uncanny Networks: Dialogues with the Virtual Intelligentsia. Cambridge, Mass.: MIT Press, 2002, 314-325. 
APA, Harvard, Vancouver, ISO, and other styles
33

Barbour, Kim, P. David Marshall, and Christopher Moore. "Persona to Persona Studies." M/C Journal 17, no. 3 (June 17, 2014). http://dx.doi.org/10.5204/mcj.841.

Full text
Abstract:
Sometimes a particular concept—a simple term—is the spark to a series of ideas. It might be ostentatious and perhaps hubristic that the editors of an issue on persona might imagine that their choice of the term persona has provided this intellectual spark. Fully aware of that risk, we want to announce that it has. The response to the call for papers related to persona was our first sign that something special was being initiated. The sheer number and interdisciplinary breadth of the abstracts and ultimate submissions were evidence that the term ‘persona’ was the catalyst to an explosion of ideas. As the responses flowed into the journal and to us, we became aware of the meme-like qualities of the many interpretations and history of the term, each with its own idiosyncratic coding of patterned similarity. The reality of this development is that it was not entirely unexpected. The editors have been developing the concept of persona and persona studies over the past four years, and persona studies has emerged from a congruence in our collective research interests as an interdisciplinary investigation of the presentation of the self in the contemporary moment. Together, we have been involved in the development of the Persona Celebrity Publics Research Group (PCP) at Deakin University. Within that group, we have concentrated ourselves in the Persona Research cluster, made up of a group of 15 or so academics along with another smaller group from other institutions. Emerging from our work is the forthcoming book entitled Persona Studies: Celebrity, Identity, and the Transformation of Public Culture (forthcoming Wiley 2015). Both the book and the research group are intent on exploring what has been altering in our worlds, our cultures, and our communities that makes us think the new intensified play of the personal in public needs closer scrutiny. The impetus for us as a team of scholars is quite clearly linked to the uses of online culture and how greater aspects of our lives are now involved in public displays, mediated displays, and a peculiar new blend of interpersonal and presentational constructions of identities and selves. Persona as a specific area of inquiry has emerged from the close study of the public self. Its immediate intellectual past has its strongest links with research on celebrity. In The Celebrity Culture Reader collection, Marshall began forming the idea that a new public self was emerging through new media (New Media). In subsequent work, Marshall identifies celebrity culture as one of the pedagogic sources for how the wider population presented itself in online culture and social media (Marshall, Promotion). Barbour and Marshall expanded their thinking about the presentation of the self through a closer study of online academic persona and the different strategic ways individuals were managing and building reputations and prestige through these techniques. Terms such as the ‘comprehensive’, ‘networked’, and ‘uncontained’ self mapped the various kinds of public personalities that were emerging through the most prominent academics (Barbour and Marshall). In a similar vein, Barbour’s research has looked closely at the online and public personas that fringe artists—specifically tattoo artists, craftivists, performance poets and street artists—produce and maintain in the contemporary moment (Hiding; Finding). 
Her work has advanced the concept of “registers of performance” (Registers), offering a closer analysis of how the personal, the professional, and sometimes the intimate registers are constructed and deployed to produce a public persona that demonstrates ‘artistness’. By analysing persona through registers of performance, Barbour is able to differentiate between the types of identity-building activity that occur online. This provides insight into the ways that impression management occurs in spaces that suffer from context collapse due to the intersection of friends, family, fans, and followers. Moore’s work (Hats; Magic; Invigorating) on the player’s assembly of a networked online ‘gamer’ persona considers the intersection of social media and video game culture and contributes analysis of the affective dimensions of player-oriented game objects and their public curation and display. His recent research visualising Twitter and Flickr data (Screenshots, forthcoming) advances an understanding of the accumulation and online presentation of the self through digital game artefacts, specifically video game screenshots. He is currently researching the interaction of social media activity, reputation management, and everyday identity ‘play’ within public game cultures and the larger dynamics of production and consumption of games and play in the video game industry. Most recently, Marshall called for what he titled a “persona studies manifesto”: the public presentation of the self demands a more extensive analysis of the play and deployment of persona in contemporary culture. Beyond popular culture, the development of reputation and persona and its intersection with online culture especially needs to be explored in those professions, disciplines and activities where this form of investigation has never been attempted (Marshall, Persona Studies). The initiative of persona studies then is in some ways turning the cultural studies approach to the study of the audience on its head: it is a study of agency and the processes by which agency has been individualized and assembled across contemporary culture, but highly privileged in online culture (Marshall, Personifying). Persona studies involves a close investigation of the personalized and negotiated presentation of the self. So, what is persona? The articles here assume different, but connected, understandings of the term, each with levels of deference to writers such as Jung, Goffman, Butler, and Foucault, along with some expected allusions to the ancient Greeks and Romans who coined the term. The Greek origins identify persona as a mask, derived from performance and acting. In Hannah Arendt’s reading of the Greeks, this mask of public identity was not seen in a derogatory way; rather it was natural to assume a public/political persona that was quite removed from the private and home sphere. A political persona defined by citizenry was a clearly conscious separation from the household of activity. Jung’s take on persona is that it was designed for collective experience and for the outside world, and that therapy would lead to an understanding of the individual that delved beneath the persona. The resurgence in interest in Goffman’s dramaturgical analogy allows us to consider persona as an everyday performance, where the purpose of the presentation of self is to convince the audience (and at times, the performer) that the performance is genuine and authentic. 
All of us know what it is like to act in a role, to wear a uniform or costume, to create a profile. More than a few of us know what it is to suffer through the ‘individualising’ categories of a social networking sign-up survey that do not adequately account for distinctions. Persona is all these things, or rather, through the various everyday activities of our work, social, and online selves we contribute to the accretion of the identity at the base of its structure. Persona functions like the construct or automated script that we assemble to interact with the world on our behalf. This involves the technologies of computation and mediation and their interfaces that function to automate, produce and filter communication with us; email, blogs, Twitter accounts, and so on. These golems interconnect and can interact on their own in unpredictable ways on our behalf; connecting our Facebook account to a product, brand or petition; using Google as a portal to log in to other web-enabled services; or authorising an app to record our location. Then there are the traces that we leave scattered across digital networks, intranets, hard drives, and lost USB memory sticks, from scattered collections of digital photos to the contact lists of our mobile devices and the ‘achievements’ in our online gaming profiles. Persona can also be something that happens to us, as friends tag unflattering images via Facebook, or another Twitter user publicly addresses us with an unwanted, or unwarranted, commentary, using the ‘@’ and the ‘#’ functions. We have an extensive degree of control over the ways we assemble ourselves online and yet the contemporary experience is one of constant negotiation with forces that seek to disavow their responsibilities to us, and maximise the limitations under which we can act. Our personas serve as a buffer to these forces. We can strategically assemble our persona to participate in, influence and use the network to our advantage, transmitting messages across it and communicating a mediated form of ourselves. Accounting for the many forms of persona stands as a primary and apparently Sisyphean task for persona studies: no sooner might we assemble a complete topology of the many accounts, traditions, domains, methodologies and theories of the self than we will have arrived at a possibly entirely new way of conceptualising the presentation of online persona through some post-Facebook, Oculus Rift, or Google Glass augmented reality experience. One of the challenges of persona studies will be to provide a series of methodological and theoretical tools, as well as a common touchstone from which multiple perspectives may converge around the meme-like qualities of this dramatic term. It will be necessary to consider the future of the presentation of the self, as much as the past accounts for the understanding of the self and its compositions. In the contemporary moment we consider a series of common currents and features of the iterations of persona with which we might begin this endeavour. The collective objective of the ‘persona’ theme edition is to coalesce around the emerging significance of the public self, and to map that activity within disciplinary traditions, historical precedents and the cultural and technological predispositions that have made this kind of reading of the contemporary world valuable, important, and ultimately, sensible. This collection of articles on persona is innovative in terms of the diversity of issues it tackles through the term. 
Given the massive change in public identity that we have identified as an elemental part of online culture, it is not surprising that social media and online constructions of persona figure prominently throughout the issue. However, we are also pleased to include papers that consider fictional performances from both television and film and even character studies of public figures. Marshall’s feature article for the edition continues his theorisation of persona. Seriality is identified as one of the ways that a consistency of persona is developed and the article charts the usefulness of analogising how the construction of a serial character or ‘personnage’ for an actor/performer provides insights into the relationship between the person and persona in other settings that are emerging in the contemporary moment. In ‘Darkly Dreaming (in) Authenticity: The Self/Persona Opposition in Dexter,’ Glenn D'Cruz uses Dexter Morgan, the novelised serial killer and Showtime TV anti-hero, to examine the connections between self and persona and the discourse of authenticity. D’Cruz foresees a series of challenges for persona studies and considers key concerns ahead, in terms of the critical vocabulary and scholarly agenda, and addresses the need for a critical genealogy of the term ‘persona’. Talia Morag, in ‘Persons and Their Personas: Living with Yourself’, considers the tensions identified in the persona of the private domain, and examines the patterns of social interaction that work to affect an ‘endorsed’ private persona, compared to those patterns classified as ‘hidden’. She frames the negotiation of these tensions as a move to better understand the sphere of the private self, as well as those strains which arise on the private persona and the grounds from which they come to occupy our time. Together these two approaches predict the convergence of the private, the performed and the public persona which occupies Neil Henderson’s ‘The Contingency of Online Persona and Its Tension with Relationship Development’. Henderson’s engagement with the dimensions of online persona in the short film, Noah, takes a position at the crossroads between Marshall’s celebrity-inscribed approach to persona studies and the application of actor-network theory in order to map the potential pitfalls of identity management through ubiquitous technologies and broader critical questions about the play of our online selves in the everyday. Moving to the multi-user virtual environment of Second Life, Lesley Procter draws on the symbolic interactionist theories of social identity and the role of the avatar in ‘A Mirror without a Tain: Personae, Avatars, and Selves in a Multi-User Virtual Environment’. Procter’s contribution to persona studies highlights the actual and conceptual mirroring involved in the sense of the self formed in interaction with others online. Taina Bucher’s ‘About a Bot: Hoax, Fake, Performance Art’ is a revealing examination of the Twitter bot phenomenon. Bucher’s case study on ‘bot fakeness’ considers the automation and performance of persona and the interactions and relationships between people and bots. Brady Robards, in ‘Digital Traces of the Persona through Ten Years of Facebook’, offers a critical reading of the Facebook ‘look back’ video creation application made to celebrate the social network’s ten-year anniversary. 
As with Bucher and Procter, Robards is concerned with the ways persona is created through highly mediated social networking platforms, where the agency of the individual is countered by the intervention of the software itself. Robards considers in particular two functions of Facebook: first as a site for the performance of life narratives, and second as a location for reflection on public and private disclosure. Taking a specific element of this idea of disclosure—the sharing of one’s legal name—as a starting point, Ellen Moll’s ‘What’s in a Nym?: Gender, Race, Pseudonymity, and the Imagining of the Online Persona’ is a study of the reactions of feminist and anti-racist bloggers in the ongoing battles over pseudonymity online. Moll’s contribution centres on current concerns with the ‘real name policies’ of social media and web-based platforms and services. What is at stake here is the negotiation between individuals, technologies and institutions over the rights of self-determination and agency in digital and online environments. Narrowing the focus to a single case study, Emma Maguire’s study of the author website as a site of self-presentation in ‘Home, About, Shop, Contact: Constructing an Authorial Persona via the Author Website’ examines the authorial persona produced for consumption within literary markets. Framing the authorial website as an ‘automedial text’, rather than as a direct representation of a pre-existing self, Maguire employs authorship theory to understand the website as a genre of persona performance and textuality. Shifting away from the focus on social media, this issue concludes with a trio of character studies, each of which involves a detailed and critical account of the dimensions of a public assembly of a persona. Nathan Farrell’s ‘From Activist to Entrepreneur: Peace One Day and the Changing Persona of the Social Campaigner’ is the first, and considers the ways that an individual manages his persona for different audiences. Farrell’s focus is Jeremy Gilley, a documentary filmmaker and peace campaigner, and the paper speaks to the dimensions of overlapping audiences connected to an articulation of a socially aware entrepreneurial persona. Sally Totman and Mat Hardy have a very different figure in their contribution as they examine the many different public personas of Libya’s Colonel Muammar Qaddafi. In ‘The Many Personas of Colonel Qaddafi’, Totman and Hardy interrogate the multiple aspects of Qaddafi’s construction as a brotherly revolutionary, philosopher, liberator, leader, and clown. The authors chart the progression of his often conflicted and chaotic legacy, and of this political, ideological and even messianic presentation of the self to the Western and Arab worlds. Anastasia Denisova completes the triptych of persona case studies for this collection with ‘How Vladimir Putin's Divorce Story Was Constructed and Received, or When the President Divorced His Wife and Married the Country Instead’. Denisova contends that Vladimir Putin’s divorce is representative of the degree to which political and private persona are mediated and merged across often competing channels of communication. The analysis contends with online discourse, images, and texts, which reveal the extensive personification of politics in Putin’s public persona in an environment of reception by an audience which also considers the values and attributes of their own country as a national persona. 
Conclusion We have structured the narrative flow of articles in this issue on persona from the fictional through to the online transformations of the self and then even further into the analyses of the public and political dimensions that are part of the constitution of public selves. No doubt, you as a reader will see different connections and intersections and will play with what makes the idea of persona so meaningful and valuable in understanding the strategic construction of a public identity and so central to comprehending the contemporary moment. We invite you to engage with this further with the issue editors’ planned 2015 launch of a journal called Persona Studies. Until then, this issue of M/C Journal certainly represents the most comprehensive and, we think, interesting collection of writing on persona as we explore the implications behind the mask of public identity. We hope the issue stimulates discussion, and with that hope, we hope to hear from you. Acknowledgments The editors would like to thank Alison Bennett for creating an original gif for the cover image of this issue. More of Bennett's work, including her augmented reality images of tattoos from the internationally acclaimed exhibition Shifting Skin, can be found at her website, alisonbennett.com.au. Thanks also to Trent Griffiths for his copy-editing assistance. References Arendt, Hannah. The Human Condition. Charles R. Walgreen Foundation Lectures. Chicago, Ill.: University of Chicago Press, 1958. Barbour, Kim. “Hiding in Plain Sight: Street Artists Online.” Platform Journal of Media and Communication 5.1 (2013). Barbour, Kim. “Registers of Performance: Negotiating the Professional, Personal, and Intimate.” MeCCSA 2014. Bournemouth, 8-10 Jan. 2014. Barbour, Kim. “Finding the Edge: Online Persona Creation by Fringe Artists.” Contemporary Publics International Symposium. 24-25 Feb. 2014. Barbour, Kim, and P. David Marshall. "The Academic Online: Constructing Persona through the World Wide Web." First Monday 17.9 (2012). ‹http://firstmonday.org/htbin/cgiwrap/bin/ojs/index.php/fm/article/viewArticle/3969/3292›. Goffman, Erving. The Presentation of Self in Everyday Life. USA: Anchor Books, 1959. Jung, Carl Gustav. Two Essays on Analytical Psychology. Bollingen Series. 2nd ed. Princeton, N.J.: Princeton University Press, 1966. Marshall, P. David. "New Media New Self, the Changing Power of the Celebrity." The Celebrity Culture Reader. Ed. P. David Marshall. London: Routledge, 2006. 634-44. Marshall, P. David. "The Promotion and Presentation of the Self: Celebrity as Marker of Presentational Media." Celebrity Studies 1.1 (2010): 35-48. Marshall, P. David. "Personifying Agency: The Public–Persona–Place–Issue Continuum." Celebrity Studies 4.3 (2013): 369-71. Marshall, P. David. "Persona Studies: Mapping the Proliferation of the Public Self." Journalism 15.2 (2014): 153-70. Marshall, P. David, Chris Moore, and Kim Barbour. Persona Studies: Celebrity, Identity and the Transformation of Public Culture. Hoboken, NJ: Wiley, forthcoming 2015. Moore, Chris. “Hats of Affect: A Study of Affect, Achievements and Hats in Team Fortress 2.” Game Studies 11 (2011). ‹http://gamestudies.org/1101/articles/moore›. Moore, Chris. “The Magic Circle and the Mobility of Play.” Convergence 17 (2011): 373-387. Moore, Chris. “Invigorating Play: The Role of Affect in Online Multiplayer FPS Game.” Guns, Grenades, and Grunts: First-Person Shooter Games. Ed. Gerald A. Voorhees, Josh Call, and Katie Whitlock. London: Continuum, 2012. 341-363. Moore, Chris. 
“Screenshots as Virtual Photography: Cybernetics, Remediation and Affect.” Advancing Digital Humanities. Ed. Paul Longley Arthur and Katherine Bode. Palgrave Macmillan, forthcoming 2014.
APA, Harvard, Vancouver, ISO, and other styles
34

Goggin, Gerard. "Innovation and Disability." M/C Journal 11, no. 3 (July 2, 2008). http://dx.doi.org/10.5204/mcj.56.

Full text
Abstract:
Critique of Ability In July 2008, we could be on the eve of an enormously important shift in disability in Australia. One sign of change is the entry into force on 3 May 2008 of the United Nations Convention on the Rights of Persons with Disabilities, which will now be adopted by the Rudd Labor government. Through this, and other proposed measures, the Rudd government has indicated its desire for a sea change in the area of disability. Bill Shorten MP, the new Parliamentary Secretary for Disabilities and Children’s Services, has been at pains to underline his commitment to a rights-based approach to disability. In his inaugural speech to Parliament, Shorten declared: I believe the challenge for government is not to fit people with disabilities around programs but for programs to fit the lives, needs and ambitions of people with disabilities. The challenge for all of us is to abolish once and for all the second-class status that too often accompanies Australians living with disabilities. (Shorten, “Address in reply”; see also Shorten, “Speaking up”) Yet if we listen to the voices of people with disability, we face fundamental issues of justice, democracy, equality and how we understand the deepest aspects of ourselves and our community. This is a situation that remains dire and palpably unjust, as many people with disabilities have attested. Elsewhere I have argued (Goggin and Newell) that disability constitutes a systemic form of exclusion and othering tantamount to a “social apartheid”. While there have been improvements and small gains since then, the system that reigns in Australia is still fundamentally oppressive. Nonetheless, I would suggest that through the rise of the many-stranded movements of disability, the demographic, economic and social changes concerning impairment, we are seeing significant changes in how we understand impairment and ability (Barnes, Oliver and Barton; Goggin and Newell, Disability in Australia; Snyder, Brueggemann, and Garland-Thomson; Shakespeare; Stiker). There is now considerable, if still incomplete, recognition of disability as a category that is constituted through social, cultural, and political logics, as well as through complex facets of impairment, bodies (Corker and Shakespeare), experiences, discourses (Fulcher), and modes of materiality and subjectivity (Butler), identity and government (Tremain). Also there is growing awareness of the imbrication of disability and other categories such as sex and gender (Fine and Asch; Thomas), race, age, culture, class and distribution of wealth (Carrier; Cole; Davis, Bending over Backwards, and Enforcing Normalcy; Oliver; Rosenblum and Travis), ecology and war (Bourke; Gerber; Muir). There are rich and wide-ranging debates that offer fundamental challenges to the suffocating grip of the dominant biomedical model of disability (that conceives disability as individual deficit — for early critiques see: Borsay; Walker), as well as the still influential and important (if at times limiting) social model of disability (Oliver; Barnes and Mercer; Shakespeare). All in all, there have been many efforts to transform the social and political relations of disability. If disability has been subject to considerable examination, there has not yet been an extended, concomitant critique of ability. Nor have we witnessed a thoroughgoing recognition of unmarked, yet powerful operations of ability in our lives and thought, and the potential implications of challenging these. 
Certainly there have been important attempts to reframe the relationship between “ability” and “disability” (for example, see Jones and Mark). And we are all familiar with the mocking response to some neologisms that seek to capture this, such as the awkward yet pointed “differently-abled.” Despite such efforts we lack still a profound critique of ability, an exploration of “able”, the topic that this special issue invites us to consider. If we think of the impact and significance of “whiteness”, as a way to open up space for how to critically think about and change concepts of race; or of “masculinity” as a project for thinking about gender and sexuality — we can see that this interrogation of the unmarked category of “able” and “ability” is much needed (for one such attempt, see White). In this paper I would like to make a small contribution to such a critique of ability, by considering what the concept of innovation and its contemporary rhetorics have to offer for reframing disability. Innovation is an important discourse in contemporary life. It offers interesting possibilities for rethinking ability — and indeed disability. And it is this relatively unexplored prospect that this paper seeks to explore. Beyond Access, Equity & Diversity In this scene of disability, there is attention being given to making long over-due reforms. Yet the framing of many of these reforms, such as the strengthening of national and international legal frameworks, for instance, also carry with them considerable problems. Disability is too often still seen as something in need of remediation, or special treatment. Access, equity, and anti-discrimination frameworks offer important resources for challenging this “special” treatment, so too do the diversity approaches which have supplemented or supplanted them (Goggin and Newell, “Diversity as if Disability Mattered”). In what new ways can we approach disability and policies relevant to it? In a surprisingly wide range of areas, innovation has featured as a new, cross-sectoral approach. Innovation has been a long-standing topic in science, technology and economics. However, its emergence as master-theme comes from its ability to straddle and yoke together previously diverse fields. Current discussions of innovation bring together and extend work on the information society, the knowledge economy, and the relationships between science and technology. We are now familiar for instance with arguments about how digital networked information and communications technologies and their consumption are creating new forms of innovation (Benkler; McPherson; Passiante, Elia, and Massari). Innovation discourse has extended to many other unfamiliar realms too, notably the area of social and community development, where a new concept of social innovation is now proposed (Mulgan), often aligned with new ideas of social entrepreneurship that go beyond earlier accounts of corporate social responsibility. We can see the importance of innovation in the ‘creative industries’ discourses and initiatives which have emerged since the 1990s. Here previously distinct endeavours of arts and culture have become reframed in a way that puts their central achievement of creativity to the fore, and recognises its importance across all sorts of service and manufacturing industries, in particular. 
More recently, theorists of creative industries, such as Cunningham, have begun to talk about “social network markets,” as a way to understand the new hybrid of creativity, innovation, digital technology, and new economic logics now being constituted (Cunningham and Potts). Innovation is being regarded as a cardinal priority for societies and their governments. Accordingly, the Australian government has commissioned a Review of the National Innovation System, led by Dr Terry Cutler, due to report in the second half of 2008. The Cutler review is especially focussed upon gaps and weaknesses in the Australian innovation system. Disability has the potential to figure very strongly in this innovation talk; however, there has been little discussion of disability in the innovation discourse to date. The significance of disability in relation to innovation was touched upon some years ago, in a report on Disablism from the UK Demos Foundation (Miller, Parker and Gillinson). In a chapter entitled “The engine of difference: disability, innovation and creativity,” the authors discuss the area of inclusive design, and make the argument for the “involvement of disabled people to create a stronger model of user design”: Disabled people represented a market of 8.6 million customers at the last count and their experiences aren’t yet feeding through into processes of innovation. But the role of disabled people as innovators can and should be more active; we should include disabled people in the design process because they are good at it. (57) There are two reasons given for this expertise of disabled people in design. Firstly, “disabled people are often outstanding problem solvers because they have to be … life for disabled people at the moment is a series of challenges to be overcome” (57). Secondly, “innovative ideas are more likely to come from those who have a new or different angle on old problems” (57). The paradox in this argument is that “as life becomes more equitable for people with disabilities, then these ‘advantages’ should disappear” (58). Accordingly, Miller et al. make a qualified argument, namely that “greater participation of disabled people in innovation in the short term may just be the necessary trigger for creating an altogether different, and better, system of innovation for everyone in the future” (58). The Demos Disablism report was written at a time when rhetorics of innovation were just beginning to become more generalized and mainstream. This was also at a time in the UK when there was hope that new critical approaches to disability would see it become embraced as a part of the diverse society that Blair’s New Labour Britain had been indicating. The argument Disablism offers about disability and innovation is in some ways a more formalized version of vernacular theory (McLaughlin, 1996). In the disability movement we often hear, with good reason, that people with disability, by dint of their experience and knowledge are well positioned to develop and offer particular kinds of expertise. However, Miller et al. also gesture towards a more generalized account of disability and innovation, one that would intersect with the emerging frameworks around innovation. It is this possibility that I wish to take up and briefly explore here. I want to consider the prospects for a fully-fledged encounter between disability and innovation. I would like to have a better sense of whether this is worth pursuing, and what it would add to our understanding of both disability and innovation. 
Would the disability perspective be integrated as a long-term part of our systems of innovation rather than, as Miller et al. imply, deployed temporarily to develop better innovation systems? What pitfalls might be bound up with, or indeed be the conditions of, such a union between disability and innovation? The All-Too-Able User A leading area where disability figures profoundly in innovation is in the field of technology — especially digital technology. There is now a considerable literature and body of practice on disability and digital technology (Annable, Goggin, and Stienstra; Goggin and Newell, Digital Disability; National Council on Disability), however for my purposes here I would like to focus upon the user, the abilities ascribed to various kinds of users, and the user with disability in particular. Digital technologies are replete with challenges and opportunities; they are multi-layered, multi-media, and global in their manifestation and function. In Australia, Britain, Canada, the US, and Europe, there have been some significant digital technology initiatives which have resulted in improved accessibility for many users and populations (Annable, Goggin, and Stienstra; National Council on Disability). There are a range of examples of ways in which users with disability are intervening and making a difference in design. There is also a substantial body of literature that clarifies why we need to include the perspective of the disabled if we are to be truly innovative in our design practices (Annable, Goggin and Stienstra; Goggin and Newell, “Disability, Identity and Interdependence”). I want to propose, however, that there is merit in going beyond recognition of the role of people with disability in technology design (vital and overlooked as it remains), to consider how disability can enrich contemporary discourses on innovation. There is a very desirable cross-over to be promoted between the emphasis on the user-as-expert in the sphere of disability and technology, and on the integral role of disability groups in the design process, on the one hand, and the rise of the user in digital culture generally, on the other. Surprisingly, such connections are nowhere near as widespread and systematic as they should be. It may be that contemporary debates about the user, and about the user as co-creator, or producer, of technology (Haddon et al.; von Hippel) actually reinstate particular notions of ability, and the able user, understood with reference to notions of disability. The current emphasis on the productive user, based as it is on changing understandings of ability and disability, provides rich material for critical revision of the field and those assumptions surrounding ability. It opens up possibilities for engaging more fully with disability and incorporating disability into the new forms and relations of digital technology that celebrate the user (Goggin and Newell, Digital Disability). While a more detailed consideration of these possibilities requires more time than this essay allows, let us consider for a moment the idea of a genuine encounter between the activated user springing from the disability movement, and the much-feted user in contemporary digital culture and theories of innovation. People with disability are using these technologies in innovative ways, so have much to contribute to wider discussions of digital technology (Annable, Goggin and Stienstra). 
The Innovation Turn Innovation policy, the argument goes, is important because it stands to increase productivity, which in turn leads to greater international competitiveness and economic benefit. Especially with the emergence of capitalism (Gleeson), productivity has strong links to particular notions of which types of production and produce are valued. Productivity is also strongly conditioned by how we understand ability and, last in a long chain of strong associations, how we as a society understand and value those kinds of people and bodies believed to contain and exercise the ordained and rewarded types of ability, produce, and productivity. Disability is often seen as antithetical to productivity (a revealing text on the contradictions of disability and productivity is the 2004 Productivity Commission Review of the Disability Discrimination Act). When we think about the history of disability, we quickly realize that productivity, and by extension, innovation, are strongly ideological. Ideological, that is, in the sense that these fields of human endeavour and our understanding of them are shaped by power relations, and are built upon implicit ‘ableist’ assumptions about productivity. In this case, the power relations of disability go right to the heart of the matter, highlighting who and what are perceived to be of value, contributing economically and in other ways to society, and who and what are considered as liabilities, as less valued and uneconomical. A stark recent example of this is the Howard government workplace and welfare reforms, which further disenfranchised, controlled, and impoverished people with disability. If we need to rethink our ideas of productivity and ability in the light of new notions of disability, then so too do we need to rethink our ideas about innovation and disability. Here the new discourses of innovation may actually be useful, but also contain limited formulations and assumptions about ability and disability that need to be challenged. The existing problems of a fresh approach to disability and innovation can be clearly observed in the touchstones of national science and technology “success.” Beyond One-Sided Innovation Disability does actually feature quite prominently in the annals of innovation. Take, for instance, the celebrated case of the so-called “bionic ear” (or cochlear implant) hailed as one of Australia’s great scientific inventions of the past few decades. This is something we can find on display in the Powerhouse Museum of Technology and Design, in Sydney. Yet the politics of the cochlear implant are highly controversial, not least as it is seen by many (for instance, large parts of the Deaf community) as not involving people with disabilities, nor being informed by their desires (Campbell, also see “Social and Ethical Aspects of Cochlear Implants”). A key problem with the cochlear implant and many other technologies is that they are premised on the abolition or overcoming of disability — rather than being shaped as technology that acknowledges and is informed by disabled users in their diverse guises. The failure to learn the lessons of the cochlear implant for disability and innovation can be seen in the fact that we are being urged now to band together to support the design of a “bionic eye” by the year 2020, as a mark of distinction of achieving a great nation (2020 Summit Initial Report). Again, there is no doubting the innovation and achievement in these artefacts and their technological systems. 
But their development has been marked by a distinct lack of consultation and engagement with people with disabilities; or rather the involvement has been limited to a framework that positions them as passive users of technology, rather than as “producer/users”. Further, what notions of disability and ability are inscribed in these technological systems, and what do they represent and symbolize in the wider political and social field? Unfortunately, such technologies have the effect of reproducing an ableist framework, “enforcing normalcy” (Davis), rather than building in, creating and contributing to new modes of living, which embrace difference and diversity. I would argue that this represents a one-sided logic of innovation. A two-sided logic of innovation, indeed what we might call a double helix (at least) of innovation, would be the sustained, genuine interaction between different users, different notions of ability, disability and impairment, and the processes of design. If such a two-sided (or indeed many-sided) logic is to emerge, there is good reason to think it could more easily do so in the field of digital cultures and technologies than in, say, biotechnology. The reason for this is the emphasis in digital communication technologies on decentralized, participatory, user-determined governance and design, coming from many sources. Certainly this productive, democratic, participatory conception of the user is prevalent in Internet cultures. Innovation here is being reshaped to harness the contribution and knowledge of users, and could easily be extended to embrace pioneering efforts in disability. Innovating with Disability In this paper I have tried to indicate why it is productive for discourses of innovation to consider disability; the relationship between disability and innovation is rich and complex, deserving careful elaboration and interrogation. In suggesting this, I am aware that there are also fundamental problems that innovation raises in its new policy forms. There are the issues of what is at stake when the state is redefining its traditional obligations towards citizens through innovation frameworks and discourses. And there is the troubling question of whether particular forms of activity are normatively judged to be innovative — whereas other less valued forms are not seen as innovative. By way of conclusion, however, I would note that there are now quite basic, and increasingly accepted, ways to embed innovation in design frameworks, and while they certainly have been adopted in the disability and technology area, there is much greater scope for this. However, a few things do need to change before this potential for disability to enrich innovation is adequately realized. Firstly, we need further research and theorization to clarify the contribution of disability to innovation, work that should be undertaken and directed by people with disability themselves. Secondly, there is a lack of resources for supporting disability and technology organisations, and the development of training and expertise in this area (especially to provide viable career paths for experts with disability to enter the field and sustain their work). If this is addressed, the economic benefits stand to be considerable, not to mention the implications for innovation and productivity. 
Thirdly, we need to think about how we can intensify existing systems of participatory design, or, better still, introduce new user-driven approaches into strategically important places in the design processes of ICTs (and indeed in the national innovation system). Finally, there is an opportunity for new approaches to governance in ICTs at a general level, informed by disability. New modes of organising, networking, and governance associated with digital technology have attracted much attention, also featuring recently in the Australia 2020 Summit. Less well recognised are new ideas about governance that come from the disability community, such as the work of Queensland Advocacy Incorporated, Rhonda Galbally’s Our Community, disability theorists such as Christopher Newell (Newell), or the Canadian DIS-IT alliance (see, for instance, Stienstra). The combination of new ideas in governance from digital culture, new ideas from the disability movement and disability studies, and new approaches to innovation could be a very powerful cocktail indeed. Dedication This paper is dedicated to my beloved friend and collaborator, Professor Christopher Newell AM (1964-2008), whose extraordinary legacy will inspire us all to continue exploring and questioning the idea of able. References Abberley, Paul. “The Concept of Oppression and the Development of a Social Theory of Disability.” Disability, Handicap & Society 2.1 (1987): 5–20. Annable, Gary, Gerard Goggin, and Deborah Stienstra, eds. “Accessibility and Inclusion in Information Technologies.” Special issue of The Information Society 23.3 (2007): 145-147. Australia 2020 Summit. Australia 2020 Summit — Initial Report. Commonwealth of Australia 20 April 2008. 15 May 2008 ‹http://www.australia2020.gov.au/docs/2020_Summit_initial_report.doc›. Barnes, Colin, and Geoff Mercer, eds. Implementing the Social Model of Disability: Theory and Research. Leeds: The Disability Press, 2004. Barnes, Colin, Mike Oliver, and Len Barton, eds. Disability Studies Today. Cambridge: Polity Press, 2002. Benkler, Yochai. The Wealth of Networks: How Social Production Transforms Markets and Freedom. New Haven, CT: Yale University Press, 2006. Borsay, Anne. “Personal Trouble or Public Issue? Toward a Model of Policy for People with Physical and Mental Disabilities.” Disability, Handicap and Society 1.2 (1986): 179-195. Bourke, Joanna. Dismembering the Male: Men’s Bodies, Britain and the Great War. Chicago: University of Chicago Press, 1996. Butler, Judith. Bodies that Matter: On the Discursive Limits of “Sex.” London: Routledge, 1993. Campbell, Fiona. “Selling the Cochlear Implant.” Disability Studies Quarterly 25.3 (2005). ‹http://www.dsq-sds-archives.org/_articles_html/2005/summer/campbell.asp›. Carrier, James G. Learning Disability: Social Class and the Construction of Inequality in American Education. New York: Greenwood Press, 1986. Cole, Mike, ed. Education, Equality and Human Rights: Issues of Gender, ‘Race’, Sexuality, Disability and Social Class. London and New York: Routledge, 2006. Corker, Mairian, and Tom Shakespeare, eds. Disability/Postmodernity: Embodying Disability Theory. London: Continuum, 2002. Davis, Lennard J. Bending Over Backwards: Disability, Dismodernism, and other Difficult Positions. New York, NY: New York University Press, 2002. ———. Enforcing Normalcy: Disability, Deafness and the Body. London: Verso, 1995. Fine, Michelle, and Adrienne Asch, eds. Women with Disabilities: Essays in Psychology, Culture, and Politics. 
Philadelphia: Temple University Press, 1988. Fulcher, Gillian. Disabling Policies? London: Falmer Press, 1989. Gerber, David A., ed. Disabled Veterans in History. Ann Arbor, MI: University of Michigan Press, 2000. Gleeson, Brendan. Geographies of Disability. London and New York: Routledge, 1999. Goggin, Gerard, and Christopher Newell. Digital Disability: The Social Construction of Disability in New Media. Lanham, MD: Rowman & Littlefield, 2003. ———. Disability in Australia: Exposing a Social Apartheid. Sydney: University of New South Wales Press, 2005. ———, eds. “Disability, Identity, and Interdependence: ICTs and New Social Forms.” Special issue of Information, Communication & Society 9.3 (2006). ———. “Diversity as if Disability Mattered.” Australian Journal of Communication 30.3 (2003): 1-6. ———, eds. “Technology and Disability.” Special double issue of Disability Studies Quarterly 25.2-3 (2005). Haddon, Leslie, Enid Mante, Bartolomeo Sapio, Kari-Hans Kommonen, Leopoldina Fortunati, and Annevi Kant, eds. Everyday Innovators: Researching the Role of Users in Shaping ICTs. London: Springer, 2005. Jones, Melinda, and Lee Ann Basser Marks, eds. Disability, Divers-ability and Legal Change. The Hague: Martinus Nijhoff, 1999. McLaughlin, Thomas. Street Smarts and Critical Theory: Listening to the Vernacular. Madison: University of Wisconsin Press, 1996. McPherson, Tara, ed. Digital Youth, Innovation, and the Unexpected. Cambridge, MA: MIT Press, 2008. Meekosha, Helen. “Drifting Down the Gulf Stream: Navigating the Cultures of Disability Studies.” Disability & Society 19.7 (2004): 721-733. Miller, Paul, Sophia Parker, and Sarah Gillinson. Disablism: How to Tackle the Last Prejudice. London: Demos, 2004. ‹http://www.demos.co.uk/publications/disablism›. Mulgan, Geoff. “The Process of Social Innovation.” Innovations 1.2 (2006): 145-62. Muir, Kristy. “‘That Bastard’s Following Me!’ Mentally Ill Australian Veterans Struggling to Maintain Control.” Social Histories of Disability and Deformity. Ed. David M. Turner and Kevin Stagg. New York: Routledge. 161-74. National Council on Disability (NCD). Design for Inclusion: Creating a New Marketplace. Washington: NCD, 2004. Newell, Christopher. “Debates Regarding Governance: A Disability Perspective.” Disability & Society 13.2 (1998): 295-296. Oliver, Michael. The Politics of Disablement: A Sociological Approach. New York: St. Martin’s Press, 1990. Passiante, Giuseppina, Valerio Elia, and Tommaso Massari, eds. Digital Innovation: Innovation Processes in Virtual Clusters and Digital Regions. London: Imperial College Press, 2003. Productivity Commission. Review of the Disability Discrimination Act 1992. Melbourne: Productivity Commission, 2004. ‹http://www.pc.gov.au/inquiry/dda/docs/finalreport›. Shakespeare, Tom. Disability Rights and Wrongs. New York: Routledge, 2006. Shorten, Bill. Address-in-Reply, Governor-General’s Speech. Hansard 14 Feb. 2008: 328-333. ———. “Speaking Up for True Battlers.” Daily Telegraph 12 March 2008. ‹http://www.billshorten.com.au/press/index.cfm?Fuseaction=pressreleases_full&ID=1328›. Snyder, Sharon L., Brenda Brueggemann, and Rosemary Garland-Thomson, eds. Disability Studies: Enabling the Humanities. New York: Modern Language Association of America, 2002. Stienstra, Deborah. “The Critical Space Between: Access, Inclusion and Standards in Information Technologies.” Information, Communication & Society 9.3 (2006): 335-354. Stiker, Henri-Jacques. A History of Disability. Trans. William Sayers. 
Ann Arbor: University of Michigan Press, 1999. Thomas, Carol. Female Forms: Experiencing and Understanding Disability. Buckingham: Open University, 1999. Rosenblum, Karen E., and Toni-Michelle C. Travis, eds. The Meaning of Difference: American Constructions of Race, Sex and Gender, Social Class, Sexual Orientation, and Disability. New York, NY: McGraw-Hill, 2008. Von Hippel, Eric. Democratizing Innovation. Cambridge, MA: MIT Press, 2005. Walker, Alan. “The Social Origins of Impairment, Disability and Handicap.” Medicine and Society 6.2-3 (1980): 18-26. White, Michele. “Where Do You Want to Sit Today: Computer Programmers’ Static Bodies and Disability.” Information, Communication and Society 9.3 (2006): 396-416.
APA, Harvard, Vancouver, ISO, and other styles
35

Wolbring, Gregor. "A Culture of Neglect: Climate Discourse and Disabled People." M/C Journal 12, no. 4 (August 28, 2009). http://dx.doi.org/10.5204/mcj.173.

Full text
Abstract:
Introduction The scientific validity of climate change claims, how to intervene (if at all) in environmental, economic, political and social consequences of climate change, and the adaptation and mitigation needed with any given climate change scenario, are contested areas of public, policy and academic discourses. For marginalised populations, the climate discourses around adaptation, mitigation, vulnerability and resilience are of particular importance. This paper considers the silence around disabled people in these discourses. Marci Roth of the Spinal Cord Injury Association testified before Congress in regards to the Katrina disaster: [On August 29] Susan Daniels called me to enlist my help because her sister in-law, a quadriplegic woman in New Orleans, had been unsuccessfully trying to evacuate to the Superdome for two days. […] It was clear that this woman, Benilda Caixetta, was not being evacuated. I stayed on the phone with Benilda, for the most part of the day. […] She kept telling me she’d been calling for a ride to the Superdome since Saturday; but, despite promises, no one came. The very same paratransit system that people can’t rely on in good weather is what was being relied on in the evacuation. […] I was on the phone with Benilda when she told me, with panic in her voice “the water is rushing in.” And then her phone went dead. We learned five days later that she had been found in her apartment dead, floating next to her wheelchair. […] Benilda did not have to drown. (National Council on Disability, emphasis added) According to the Intergovernmental Panel on Climate Change (IPCC), adaptation is the “Adjustment in natural or human systems in response to actual or expected climatic stimuli or their effects, which moderates harm or exploits beneficial opportunities” (IPCC, Climate Change 2007). Adaptations can be anticipatory or reactive, and depending on their degree of spontaneity they can be autonomous or planned (IPCC, Fourth Assessment Report). Adaptations can be private or public (IPCC, Fourth Assessment Report), technological, behavioural, managerial and structural (National Research Council of Canada). Adaptation, in the context of human dimensions of global change, usually refers to a process, action or outcome in a system (household, community, group, sector, region, country) in order for that system to better cope with, manage or adjust to some changing condition, stress, hazard, risk or opportunity (Smit and Wandel). Adaptation can encompass national or regional strategies as well as practical steps taken at the community level or by individuals. According to Smit et al, a framework for systematically defining adaptations is based on three questions: (i) adaptation to what; (ii) who or what adapts; and (iii) how does adaptation occur? These are essential questions that have to be looked at from many angles including cultural and anthropological lenses as well as lenses of marginalised and highly vulnerable populations. Mitigation (to reduce or prevent changes in the climate system), vulnerability (the degree to which a system is susceptible to, and unable to cope with, the adverse effects of climate change), and resilience (the amount of change a system can undergo without changing state), are other important concepts within the climate change discourse. Non-climate stresses can increase vulnerability to climate change by reducing resilience and can also reduce adaptive capacity because of resource deployment to competing needs. 
Extending this to the context of disabled people, ableism (sentiment to expect certain abilities within humans) (Wolbring, “Is there an end to out-able?”) and disablism (the unwillingness to accommodate different needs) (Miller, Parker and Gillinson) are two concepts that will thus play themselves out in climate discourses. The “Summary for Policymakers” of the IPCC 2007 report, Climate Change 2007: Impacts, Adaptation and Vulnerability, states: “Poor communities can be especially vulnerable, in particular those concentrated in high-risk areas. They tend to have more limited adaptive capacities, and are more dependent on climate-sensitive resources such as local water and food supplies.” From this quote one can conclude that disabled people are particularly impacted, as the majority of disabled people live in poverty (Elwan). For instance, CARE International, a humanitarian organisation fighting global poverty, the UN Office for the Coordination of Humanitarian Affairs, and Maplecroft, a company that specialises in the calculation, analysis and visualisation of global risks, conclude: “The degree of vulnerability is determined by underlying natural, human, social, physical and financial factors and is a major reason why poor people—especially those in marginalised social groups like women, children, the elderly and people with disabilities—are most affected by disasters” (CARE International). The purpose of this paper is to expose the reader to (a) how disabled people are situated in the culture of the climate, adaptation, mitigation and resilience discourse; (b) how one would answer the three questions, (i) adaptation to what, (ii) who or what adapts, and (iii) how does adaptation occur (Smit et al), using a disabled people lens; and (c) what that reality of the involvement of disabled people within the climate change discourse might herald for other groups in the future. The paper contends that there is a pressing need for the climate discourse to be more inclusive and to develop a new social contract to modify existing dynamics of ableism and disablism so as to avoid the uneven distribution of evident burdens already linked to climate change. A Culture of Neglect: The Situation of Disabled People As climates changes, environmental events that are classified as natural disasters are expected to be more frequent. In the face of recent disaster responses, how effective have these efforts been as they relate to the needs and challenges faced by disabled people? Almost immediately after Hurricane Katrina devastated the Gulf Coast, the National Council on Disability (NCD) in the United States estimated that 155,000 people with disabilities lived in the three cities hardest hit by the hurricane (about 25 per cent of the cities’ populations). The NCD urged emergency managers and government officials to recognise that the need for basic necessities by hurricane survivors with disabilities was “compounded by chronic health conditions and functional impairments … [which include] people who are blind, people who are deaf, people who use wheelchairs, canes, walkers, crutches, people with service animals, and people with mental health needs.” The NCD estimated that a disproportionate number of fatalities were people with disabilities. 
They cited one statistic from the American Association of Retired Persons (AARP): “73 per cent of Hurricane Katrina-related deaths in New Orleans area were among persons age 60 and over, although they comprised only 15 per cent of the population in New Orleans.” As the NCD stated, “most of those individuals had medical conditions and functional or sensory disabilities that made them more vulnerable. Many more people with disabilities under the age of 60 died or were otherwise impacted by the hurricanes.” As these numbers are very likely linked to the impaired status of the elderly, it seems reasonable to assume similar numbers for non-elderly disabled people. Hurricane Katrina is but one example of how disabled people are neglected in a disaster (Hemingway and Priestley; Fjord and Manderson). Disabled people were also disproportionately impacted in other disasters, such as the 1995 Great Hanshin Earthquake in Japan (Nakamura) or the 2003 heatwave in France, where 63 per cent of heat-related deaths occurred in institutions, with a quarter of these in nursing homes (Holstein et al.). A review of 18 US heatwave response plans revealed that although people with mental or chronic illnesses and the homeless constitute a significant proportion of the victims in recent heatwaves, only one plan emphasised outreach to disabled persons, and only two addressed the shelter and water needs of the homeless (Ebi and Meehl; Bernhard and McGeehin). Presence of Disabled People in Climate Discourse Although climate change will disproportionately impact disabled people, and despite the less than stellar record of disaster adaptation and mitigation efforts towards disabled people, and despite the fact that other social groups (such as women, children, ‘the poor’, indigenous people, farmers and displaced people) are mentioned in climate-related reports such as the IPCC reports and the Human Development Report 2007/2008, the same reports do not mention disabled people. Even worse, the majority of the material generated by, and physically set up for, discourses on climate, is inaccessible for many disabled people (Australian Human Rights Commission). For instance, the IPCC report, Climate Change 2007: Impacts, Adaptation and Vulnerability, contains Box 8.2: Gender and natural disasters, which makes the following points: (a) “men and women are affected differently in all phases of a disaster, from exposure to risk and risk perception; to preparedness behaviour, warning communication and response; physical, psychological, social and economic impacts; emergency response; and ultimately to recovery and reconstruction”; (b) “natural disasters have been shown to result in increased domestic violence against, and post-traumatic stress disorders in, women”; and (c) “women make an important contribution to disaster reduction, often informally through participating in disaster management and acting as agents of social change. Their resilience and their networks are critical in household and community recovery.” The content of Box 8.2 acknowledges the existence of different perspectives and contributions to the climate discourse, and that it is beneficial to explore these differences. It seems reasonable to assume that differences in perspectives, contributions and impact may well also exist between people with and without disabilities, and that it may be likewise beneficial to explore these differences. 
Disabled people are differently affected in all phases of a disaster, from exposure to risk and risk perception; to preparedness behaviour, warning communication and response; physical, psychological, social and economic impacts; emergency response; and ultimately to recovery and reconstruction. Disabled people could also make an important contribution to disaster reduction, often informally through participating in disaster management and acting as agents of social change. Their resilience and their networks are critical in household and community recovery, important as distributors of relief efforts and in reconstruction design. The Bonn Declaration from the 2007 international conference, Disasters are always Inclusive: Persons with Disabilities in Humanitarian Emergency Situations, highlighted many problems disabled people are facing and gave recommendations for inclusive disaster preparedness planning, for inclusive response in acute emergency situations and immediate rehabilitation measures, and for inclusive post-disaster reconstruction and development measures. Many workshops were initiated by disabled people’s groups, such as Rehabilitation International. However, the disabled people disaster adaptation and mitigation discourse is not mainstreamed. Advocacy by people with disability for accessible transport and universal or “life-cycle” housing (among other things) shows how they can contribute significantly to more effective social systems and public facilities. These benefit everyone and help to shift public expectations towards accessible and flexible amenities and services—for example, emergency response and evacuation procedures are much easier for all if such facilities are universally accessible. Most suggestions by disabled people for a more integrative, accessible physical environment and societal attitude benefit everyone, and gain special importance with the ever-increasing proportion of elderly people in society. The IPCC Fourth Assessment Report is intended to be a balanced assessment of current knowledge on climate change mitigation. However, none of the 2007 IPCC reports mention disabled people. Does that mean that disabled people are not impacted by, or impact, climate change? Does no knowledge of adaptation, mitigation and adaptation capacity from a disabled people lens exist, or does the knowledge not reach the IPCC, or does the IPCC judge this knowledge as irrelevant? This culture of neglect and unbalanced assessment of knowledge evident in the IPCC reports was recognised before the rise of a ‘global’ climate discourse. For instance, a 2001 Canadian government document asked that research agendas be developed with the involvement of, among others, disabled people (Health Canada). The 2009 Nairobi Declaration on Africa’s response to climate change (paragraph 36) also asks for the involvement of disabled people (African Ministerial Conference on the Environment). However, so far nothing has trickled up to the international bodies, like the IPCC, or leading conferences such as the United Nations Climate Change Conference Copenhagen 2009. Where Will It End? In his essay, “We do not need climate change apartheid in adaptation”, in the Human Development Report 2007/2008, Archbishop Desmond Tutu suggests that we are drifting into a situation of global adaptation apartheid—that adaptation becomes a euphemism for social injustice on a global scale (United Nations Development Programme). 
He uses the term “adaptation apartheid” to highlight the inequality of support for adaptation capacity between high and low income countries: “Inequality in capacity to adapt to climate change is emerging as a potential driver of wider disparities in wealth, security and opportunities for human development”. I submit that “adaptation apartheid” also exists in regard to disabled people, with the invisibility of disabled people in the climate discourse being just one facet. The unwillingness to accommodate, to help the “other,” is nothing new for disabled people. The ableism that favours species-typical bodily functioning (Wolbring, “Is there an end to out-able?”; Wolbring, “Why NBIC?”) and disablism (Miller, Parker, and Gillinson)—the lack of accommodation enthusiasm for the needs of people with ‘below’ species-typical body abilities and the unwillingness to adapt to the needs of “others”—is a form of “adaptation apartheid,” of accommodation apartheid, of adaptation disablism that has been battled by disabled people for a long time. In a 2009 online survey of 2000 British people, 38 per cent believed that most people in British society see disabled people as a “drain on resources” (Scope). A majority of human geneticists concluded in a survey in 1999 that disabled people will never be given the support they need (Nippert and Wolff). Adaptation disablism is visible in the literature and studies around other disasters. The 1988 British Medical Association discussion document, Selection of casualties for treatment after nuclear attack, stated “casualties whose injuries were likely to lead to a permanent disability would receive lower priority than those expected to fully recover” (Sunday Morning Herald). Famine is seen to lead to increased infanticide, increased competitiveness and decreased collaboration (Participants of the Nuclear Winter: The Anthropology of Human Survival Session). Ableism and disablism notions experienced by disabled people can now be extended to include those challenges expected to arise from the need to adapt to climate change. It is reasonable to expect that ableism will prevail, expecting people to cope with certain forms of climate change, and that disablism will be extended, with the ones less affected being unwilling to accommodate the ones more affected beyond a certain point. This ableism/disablism will not only play itself out between high and low income countries, as Desmond Tutu described, but also within high income countries, as not every need will be accommodated. The disaster experience of disabled people is just one example. And there might be climate change consequences that one can only mitigate through high tech bodily adaptations that will not be available to many of the ones who are so far accommodated in high income countries. Desmond Tutu submits that adaptation apartheid might work for the fortunate ones in the short term, but will be destructive for them in the long term (United Nations Development Programme). Disability studies scholar Erik Leipoldt proposed that the disability perspective of interdependence is a practical guide from the margins for making new choices that may lead to a just and sustainable world—a concept that reduces the distance between each other and our environment (Leipoldt). This perspective rejects ableism and disablism as it plays itself out today, including adaptation apartheid. 
Planned adaptation involves four basic steps: information development and awareness-raising; planning and design; implementation; and monitoring and evaluation (Smit et al). Disabled people have important knowledge to contribute to these four basic steps that goes far beyond their community. Their understanding and acceptance of, for example, the concept of interdependence, is just one major contribution. Including the concept of interdependence within the set of tools that inform the four basic steps of adaptation and other facets of climate discourse has the potential to lead to a decrease of adaptation apartheid, and to increase the utility of the climate discourse for the global community as a whole. References African Ministerial Conference on the Environment. Nairobi Declaration on the African Process for Combating Climate Change. 2009. 26 Aug. 2009 ‹ http://www.unep.org/roa/Amcen/Amcen_Events/3rd_ss/Docs/nairobi-Decration-2009.pdf ›. American Association of Retired Persons. We Can Do Better: Lessons Learned for Protecting Older Persons in Disasters. 2009. 26 Aug. 2009 ‹ http://assets.aarp.org/rgcenter/il/better.pdf ›. Australian Human Rights Commission. “Climate Change Secretariat Excludes People with Disabilities.” 2008. 26 Aug. 2009 ‹ http://www.hreoc.gov.au/about/media/media_releases/2008/95_08.html ›. Bernhard, S., and M. McGeehin. “Municipal Heatwave Response Plans.” American Journal of Public Health 94 (2004): 1520-21. CARE International, the UN Office for the Coordination of Humanitarian Affairs, and Maplecroft. Humanitarian Implications of Climate Change: Mapping Emerging Trends and Risk Hotspots for Humanitarian Actors. CARE International, 2008. 26 Aug. 2009 ‹ http://www.careclimatechange.org/files/reports/Human_Implications_PolicyBrief.pdf ›, ‹ http://www.careclimatechange.org/files/reports/CARE_Human_Implications.pdf ›. "Disasters Are Always Inclusive: Persons with Disabilities in Humanitarian Emergency Situations." Bonn Declaration from the International Conference: Disasters Are Always Inclusive: Persons with Disabilities in Humanitarian Emergency Situations. 2007. 26 Aug. 2009 ‹ http://www.disabilityfunders.org/webfm_send/6, http://www.disabilityfunders.org/emergency_preparedness ›, ‹ http://bezev.de/bezev/aktuelles/index.htm ›. Ebi, K., and G. Meehl. Heatwaves and Global Climate Change: The Heat Is On: Climate Change and Heatwaves in the Midwest. 2007. 26 Aug. 2009 ‹ www.pewclimate.org/docUploads/Regional-Impacts-Midwest.pdf ›. Elwan, A. Poverty and Disability: A Survey of the Literature. Worldbank, Social Protection Discussion Paper Series (1999): 9932. 26 Aug. 2009 ‹ http://siteresources.worldbank.org/DISABILITY/Resources/Poverty/Poverty_and_Disability_A_Survey_of_the_Literature.pdf ›. Fjord, L., and L. Manderson. “Anthropological Perspectives on Disasters and Disability: An Introduction.” Human Organisation 68.1 (2009): 64-72. Health Canada. First Annual National Health and Climate Change Science and Policy Research Consensus Conference: How Will Climate Change Affect Priorities for Your Health Science and Policy Research? Health Canada, 2001. 26 Aug. 2009 ‹ http://www.hc-sc.gc.ca/ewh-semt/pubs/climat/research-agenda-recherche/population-eng.php ›. Hemingway, L., and M. Priestley. “Natural Hazards, Human Vulnerability and Disabling Societies: A Disaster for Disabled People?” The Review of Disability Studies (2006). 26 Aug. 2009 ‹ http://www.rds.hawaii.edu/counter/count.php?id=13 ›. Holstein, J., et al. 
“Were Less Disabled Patients the Most Affected by the 2003 Heatwave in Nursing Homes in Paris, France?” Journal of Public Health Advance 27.4 (2005): 359-65. Intergovernmental Panel on Climate Change. Climate Change 2007: Impacts, Adaptation and Vulnerability. 2007. 26 Aug. 2009 ‹ http://www.ipcc.ch/publications_and_data/publications_ipcc_fourth_assessment_report_wg2_report_impacts_adaptation_and_vulnerability.htm ›. Intergovernmental Panel on Climate Change. “Summary for Policymakers.” Eds. O. F. Canziani, J. P. Palutikof, P. J. van der Linden, C. E. Hanson, and M.L.Parry. Cambridge, UK: Cambridge University Press, 2007. 7-22. 26 Aug. 2009 ‹ http://www.ipcc.ch/pdf/assessment-report/ar4/wg2/ar4-wg2-spm.pdf ›. Intergovernmental Panel on Climate Change. IPCC Fourth Assessment Report Working Group III Report: Mitigation of Climate Change Glossary. 2007. 26 Aug. 2009 ‹ http://www.ipcc.ch/ipccreports/ar4-wg3.htm, http://www.ipcc.ch/pdf/assessment-report/ar4/wg3/ar4-wg3-annex1.pdf ›. Leipoldt, E. “Disability Experience: A Contribution from the Margins. Towards a Sustainable Future.” Journal of Futures Studies 10 (2006): 3-15. Miller, P., S. Parker and S. Gillinson. “Disablism: How to Tackle the Last Prejudice.” Demos, 2004. 26 Aug. 2009 ‹ http://www.demos.co.uk/files/disablism.pdf ›. Nakamura, K. “Disability, Destitution, and Disaster: Surviving the 1995 Great Hanshin Earthquake in Japan.” Human Organisation 68.1 (2009): 82-88. National Council on Disability, National Council on Independent Living, National Organization on Disability, and National Spinal Cord Injury Association and the Paralyzed Veterans of America. Emergency Management and People with Disabilities: before, during and after Congressional Briefing, 10 November 2005. 26 Aug. 2009 ‹ http://www.ncd.gov/newsroom/publications/2005/transcript_emergencymgt.htm ›. National Council on Disability. National Council on Disability on Hurricane Katrina Affected Areas. 2005. 26 Aug. 2009 ‹ http://www.ncd.gov/newsroom/publications/2005/katrina2.htm ›. National Research Council of Canada. From Impacts to Adaptation: Canada in a Changing Climate 2007. 26 Aug. 2009 ‹ http://adaptation.nrcan.gc.ca/assess/2007/pdf/full-complet_e.pdf ›. Nippert, I. and G. Wolff. “Ethik und Genetik: Ergebnisse der Umfrage zu Problemaspekten angewandter Humangenetik 1994-1996, 37 Länder.” Medgen 11 (1999): 53-61. Participants of the Nuclear Winter: The Anthropology of Human Survival Session. Proceedings of the 84th American Anthropological Association's Annual Meeting. Washington, D.C., 6 Dec. 1985. 26 Aug. 2009 ‹ http://www.fas.org/sgp/othergov/doe/lanl/lib-www/la-pubs/00173165.pdf ›. Scope. “Most Britons Think Others View Disabled People ‘As Inferior’.” 2009. 26 Aug. 2009 ‹ http://www.scope.org.uk/cgi-bin/np/viewnews.cgi?id=1244379033, http://www.comres.co.uk/resources/7/Social%20Polls/Scope%20PublicPoll%20Results%20May09.pdf ›. Smit, B., et al. “The Science of Adaptation: A Framework for Assessment.” Mitigation and Adaptation Strategies for Global Change 4 (1999): 199-213. Smit, B., and J. Wandel. “Adaptation, Adaptive Capacity and Vulnerability.” Global Environmental Change 16 (2006): 282-92. Sunday Morning Herald. “Who Lives and Dies in Britain after the Bomb.” Sunday Morning Herald 1988. 26 Aug. 2009 ‹ http://news.google.com/newspapers?nid=1301&dat=19880511&id=wFYVAAAAIBAJ&sjid=kOQDAAAAIBAJ&pg=3909,113100 ›. United Nations Development Programme. Human Development Report 2007/2008: Fighting Climate Change – Human Solidarity in a Divided World. 2008. 26 Aug. 
2009 ‹ http://hdr.undp.org/en/media/HDR_20072008_EN_Complete.pdf ›. Wolbring, Gregor. “Is There an End to Out-Able? Is There an End to the Rat Race for Abilities?” M/C Journal 11.3 (2008). 26 Aug. 2009 ‹ http://journal.media-culture.org.au/index.php/mcjournal/article/viewArticle/57 ›. Wolbring, Gregor. “Why NBIC? Why Human Performance Enhancement?” Innovation: The European Journal of Social Science Research 21.1 (2008): 25-40.
APA, Harvard, Vancouver, ISO, and other styles
36

Haupt, Adam. "Queering Hip-Hop, Queering the City: Dope Saint Jude’s Transformative Politics." M/C Journal 19, no. 4 (August 31, 2016). http://dx.doi.org/10.5204/mcj.1125.

Full text
Abstract:
This paper argues that artist Dope Saint Jude is transforming South African hip-hop by queering a genre that has predominantly been male and heteronormative. Specifically, I analyse the opening skit of her music video “Keep in Touch” in order to unpack the ways in which she revives Gayle, a gay language that adopted double-coded forms of speech during the apartheid era—a context in which homosexuals were criminalised. The use of Gayle and spaces close to the city centre of Cape Town (such as Salt River and Woodstock) speaks to the city as it was before it was transformed by the decline of industries due to the country’s adoption of neoliberal economics and, more recently, by the gentrification of these spaces. Dope Saint Jude therefore reclaims these city spaces through her use of gay modes of speech that have a long history in Cape Town and by positioning her work as hip-hop, which has been popular in the city for well over two decades. Her inclusion of transgender MC and DJ Angel Ho pushes the boundaries of hegemonic and binary conceptions of gender identity even further. In essence, Dope Saint Jude is transforming local hip-hop in a context that is shaped significantly by US cultural imperialism. The artist is also transforming our perspective of spaces that have been altered by neoliberal economics. Setting the Scene Dope Saint Jude (DSJ) is a queer MC from Elsies River, a working class township located on Cape Town's Cape Flats in South Africa. Elsies River was defined as a “coloured” neighbourhood under the apartheid state's Group Areas Act, which segregated South Africans racially. With the aid of the Population Registration Act, citizens were classified, not merely along the lines of white, Asian, or black—black subjects were also divided into further categories. The apartheid state also distinguished between black and “coloured” subjects. Michael MacDonald contends that segregation “ordained blacks to be inferior to whites; apartheid cast them to be indelibly different” (11). Apartheid declared “African claims in South Africa to be inferior to white claims” and effectively claimed that black subjects “belonged elsewhere, in societies of their own, because their race was different” (ibid). The term “coloured” defined people as “mixed race” to separate communities that might otherwise have identified as black in the broad and inclusive sense (Erasmus 16). Racial categorisation was used to create a racial hierarchy with white subjects at the top of that hierarchy and those classified as black receiving the least resources and benefits. This frustrated attempts to establish broad alliances of black struggles against apartheid. It is in this sense that race is socially and politically constructed and continues to have currency, despite the fact that biologically essentialist understandings of race have been discredited (Yudell 13–14). Thanks to apartheid town planning and resource allocation, many townships on the Cape Flats were poverty-stricken and plagued by gang violence (Salo 363). This continues to be the case because post-apartheid South Africa's embrace of neoliberal economics failed to address racialised class inequalities significantly (Haupt, Static 6–8). This is the '90s context in which socially conscious hip-hop crews, such as Prophets of da City or Black Noise, came together. 
They drew inspiration from Black Consciousness philosophy via their exposure to US hip-hop crews such as Public Enemy in order to challenge apartheid policies, including their racial interpellation as “coloured” as distinct from the more inclusive category, black (Haupt, “Black Thing” 178). Prophets of da City—whose co-founding member, Shaheen Ariefdien, also lived in Elsies River—was the first South African hip-hop outfit to record an album. Whilst much of their work was performed in English, they quickly transformed the genre by rapping in non-standard varieties of Afrikaans and by including MCs who rap in African languages (ibid). They therefore succeeded in addressing key issues related to race, language, and class disparities in relation to South Africa's transition to democracy (Haupt, “Black Thing”; Haupt, Stealing Empire). However, as is the case with mainstream US hip-hop, specifically gangsta rap (Clay 149), South African hip-hop has been largely dominated by heterosexual men. This includes the more commercial hip-hop scene, which is largely perceived to be located in Johannesburg, where male MCs like AKA and Cassper Nyovest became celebrities. However, certain female MCs have claimed the genre, notably EJ von Lyrik and Burni Aman who are formerly of Godessa, the first female hip-hop crew to record and perform locally and internationally (Haupt, Stealing Empire 166; Haupt, “Can a Woman in Hip-Hop”). DSJ therefore presents the exception to a largely heteronormative and male-dominated South African music industry and hip-hop scene as she transforms it with her queer politics. While queer hip-hop is not new in the US (Pabón and Smalls), this is new territory for South Africa. Writing about the US MC Jean Grae in the context of a “male-dominated music industry and genre,” Shanté Paradigm Smalls contends, “Heteronormativity blocks the materiality of the experiences of Black people. Yet, many Black people strive for a heteronormative effect if not ‘reality’. In hip hop, there is a particular emphasis on maintaining the rigidity of categories, even if those categories fail [sic]” (87). DSJ challenges these rigid categories. Keep in Touch DSJ's most visible entry onto the media landscape to date has been her appearance in an H&M recycling campaign with British Sri Lankan artist MIA (H&M), some fashion shoots, her new EP—Reimagine (Dope Saint Jude)—and recent Finnish, US and French tours as well as her YouTube channel, which features her music videos. As the characters’ theatrical costumes suggest, “Keep in Touch” is possibly the most camp and playful music video she has produced. It commences somewhat comically with Dope Saint Jude walking down Salt River main road to a public telephone, where she and a young woman in pig tails exchange dirty looks. Salt River is located at the foot of Devil's Peak not far from Cape Town's CBD. Many factories were located there, but the area is also surrounded by low-income housing, which was designated a “coloured” area under apartheid. After apartheid, neighbourhoods such as Salt River, Woodstock, and the Bo-Kaap became increasingly gentrified and, instead of becoming more inclusive, many parts of Cape Town continued to be influenced by policies that enable racialised inequalities. Dope Saint Jude calls Angel Ho: DSJ: Awêh, Angie! Yoh, you must check this kak sturvy girl here by the pay phone. [Turns to the girl, who walks away as she bursts a chewing gum bubble.] Ja, you better keep in touch. 
Anyway, listen here, what are you wys? Angel Ho: Ah, just at the salon getting my hair did. What's good? DSJ: Wanna catch on kak today? Angel Ho: Yes, honey. But, first, let me Gayle you this. By the jol by the art gallery, this Wendy, nuh. This Wendy tapped me on the shoulder and wys me, “This is a place of decorum.” DSJ: What did she wys? Angel Ho: De-corum. She basically told me this is not your house. DSJ: I know you told that girl to keep in touch! Angel Ho: Yes, Mama! I'm Paula, I told that bitch, “Keep in touch!” [Points index finger in the air.] (Saint Jude, Dope, “Keep in Touch”) Angel Ho's name is a play on the male name Angelo and refers to the trope of the ho (whore) in gangsta rap lyrics and in music videos that present objectified women as secondary to male, heterosexual narratives (Sharpley-Whiting 23; Collins 27). The queering of Angelo, along with Angel Ho’s non-binary styling in terms of hair, make-up, and attire, appropriates a heterosexist, sexualised stereotype of women in order to create room for a gender identity that operates beyond heteronormative male-female binaries. Angel Ho’s location in a hair salon also speaks to stereotypical associations of salons with women and gay subjects. In a discussion of gender stereotypes about hair salons, Kristen Barber argues that beauty work has traditionally been “associated with women and with gay men” and that “the body beautiful has been tightly linked to the concept of femininity” (455–56). During the telephonic exchange, Angel Ho and Dope Saint Jude code-switch between standard and non-standard varieties of English and Afrikaans, as the opening appellation, “Awêh,” suggests. In this context, the term is a friendly greeting, which intimates solidarity. “Sturvy” means pretentious, whilst “kak” means shit, but here it is used to qualify “sturvy” and means that the girl at the pay phone is very pretentious or “full of airs.” To be “wys” means to be wise, but it can also mean that you are showing someone something or educating them. The meanings of these terms shift, depending on the context. The language practices in this skit are in line with the work of earlier hip-hop crews, such as Prophets of da City and Brasse vannie Kaap, to validate black, multilingual forms of speech and expression that challenge the linguistic imperialism of standard English and Afrikaans in South Africa, which has eleven official languages (Haupt, “Black Thing”; Haupt, Stealing Empire; Williams). Henry Louis Gates’s research on African American speech varieties and literary practices emerging from the repressive context of slavery is essential to understanding hip-hop’s language politics. Hip-hop artists' multilingual wordplay creates parallel discursive universes that operate both on the syntagmatic axis of meaning-making and the paradigmatic axis (Gates 49; Haupt, “Stealing Empire” 76–77). Historically, these discursive universes were those of the slave masters and the slaves, respectively. While white hegemonic meanings are produced on the syntagmatic axis (which is ordered and linear), black modes of speech as seen in hip-hop word play operate on the paradigmatic axis, which is connotative and non-linear (ibid). Distinguishing between Signifyin(g) / Signification (upper case, meaning black expression) and signification (lower case, meaning white dominant expression), he argues that “the signifier ‘Signification’ has remained identical in spelling to its white counterpart to demonstrate [. . .] 
that a simultaneous, but negated, parallel discursive (ontological, political) universe exists within the larger white discursive universe” (Gates 49). The meanings of terms and expressions can change, depending on the context and manner in which they are used. It is therefore the shared experiences of speech communities (such as slavery or racist/sexist oppression) that determine the negotiated meanings of certain forms of expression. Gayle as a Parallel Discursive Universe DSJ and Angel Ho's performance of Gayle takes these linguistic practices further. Viewers are offered points of entry into Gayle via the music video’s subtitles. We learn that Wendy is code for a white person and that to keep in touch means exactly the opposite. Saint Jude explains that Gayle is “a very fun queer language that was used to kind of mask what people were saying [. . .] It hides meanings and it makes use of women's names [. . . .] But the thing about Gayle is it's constantly changing [. . .] So everywhere you go, you kind of have to pick it up according to the context that you're in” (Ovens, Saint Jude and Haupt). According to Kathryn Luyt, “Gayle originated as Moffietaal [gay language] in the coloured gay drag culture of the Western Cape as a form of slang amongst Afrikaans-speakers which over time, grew into a stylect used by gay English and Afrikaans-speakers across South Africa” (Luyt 8; Cage 4). Given that the apartheid state criminalised homosexuals, Gayle was coded to evade detection and to seek out other members of this speech community (Luyt 8). Luyt qualifies the term “language” by arguing, “The term ‘language’ here, is used not as a constructed language with its own grammar, syntax, morphology and phonology, but in the same way as linguists would discuss women’s language, as a way of speaking, a kind of sociolect” (Luyt 8; Cage 1). However, the double-coded nature of Gayle allows one to think of it as creating a parallel discursive universe as Gates describes it (49). Whereas African American and Cape Flats discursive practices function parallel to white, hegemonic discourses, gay modes of speech run parallel to heteronormative communication. Exclusion and Microaggressions The skit brings both discursive practices into play by creating room for one to consider that DSJ queers a male-dominated genre that is shaped by US cultural imperialism (Haupt, Stealing Empire 166) as a way of speaking back to intersectional forms of marginalisation (Crenshaw 1244), which are created by “white supremacist capitalist patriarchy” (hooks 116). This is significant in South Africa where “curative rape” of lesbians and other forms of homophobic violence are prominent (cf. Gqola; Hames; Msibi). Angel Ho's anecdote conveys a sense of the extent to which black individuals are subject to scrutiny. Ho's interpretation of the claim that the gallery “is a place of decorum” is correct: it is not Ho's house. Black queer subjects are not meant to feel at home or feel a sense of ownership. This functions as a racial microaggression: “subtle insults (verbal, nonverbal, and/or visual) directed toward people of color, often automatically or unconsciously” (Solorzano, Ceja, and Yosso 60). This speaks to DSJ's use of Salt River, Woodstock, and Bo-Kaap for the music video, which features black queer bodies in performance—all of these spaces are being gentrified, effectively pushing working class people of colour out of the city (cf. Didier, Morange, and Peyroux; Lemanski). 
Gustav Visser explains that gentrification has come to mean “a unit-by-unit acquisition of housing which replaces low-income residents with high-income residents, and which occurs independent of the structural condition, architecture, tenure or original cost level of the housing (although it is usually renovated for or by the new occupiers)” (81–82). In South Africa this inequity plays out along racial lines because its neoliberal economic policies created a small black elite without improving the lives of the black working class. Instead, the “new African bourgeoisie, because it shares racial identities with the bulk of the poor and class interests with white economic elites, is in position to mediate the reinforcing cleavages between rich whites and poor blacks without having to make more radical changes” (MacDonald 158). In a news article about a working class Salt River family of colour’s battle against an eviction, Christine Hogg explains, “Gentrification often means the poor are displaced as the rich move in or buildings are upgraded by new businesses. In Woodstock and Salt River both are happening at a pace.” Angel Ho’s anecdote, as told from a Woodstock hair salon, conveys a sense of what Woodstock’s transformation from a coloured, working class Group Area to an upmarket, trendy, and arty space would mean for people of colour, including black, queer subjects. One could argue that this reading of the video is undermined by DSJ’s work with global brand H&M. Was she snared by neoliberal economics? Perhaps, but one response is that the seeds of any subculture’s commercial co-option lie in the fact that it speaks through commodities (for example clothing, make-up, CDs, vinyl, or iTunes / mp3 downloads) (Hebdige 95; Haupt, Stealing Empire 144–45). Subcultures have a window period in which to challenge hegemonic ideologies before they are delegitimated or commercially co-opted. Hardt and Negri contend that the means that extend the reach of corporate globalisation could be used to challenge it from within (44–46; Haupt, Stealing Empire 26). DSJ utilises her H&M work, social media, the hip-hop genre, and international networks to exploit that window period to help mainstream black queer identity politics. Conclusion DSJ speaks back to processes of exclusion from the city, which was transformed by apartheid and, more recently, gentrification, by claiming it as a creative and playful space for queer subjects of colour. She uses Gayle to lay claim to the city as it has a long history in Cape Town. In fact, she says that she is not reviving Gayle, but is simply “putting it on a bigger platform” (Ovens, Saint Jude, and Haupt). The use of subtitles in the video suggests that she wants to mainstream queer identity politics. Saint Jude also transforms hip-hop heteronormativity by queering the genre and by locating her work within the history of Cape hip-hop’s multilingual wordplay. References Barber, Kristin. “The Well-Coiffed Man: Class, Race, and Heterosexual Masculinity in the Hair Salon.” Gender and Society 22.4 (2008): 455–76.Cage, Ken. “An Investigation into the Form and Function of Language Used by Gay Men in South Africa.” Rand Afrikaans University: MA thesis, 1999.Clay, Andreana. “‘I Used to Be Scared of the Dick’: Queer Women of Color and Hip-Hop Masculinity.” Home Girls Make Some Noise: Hip Hop Feminism Anthology. Ed. Gwendolyn D. Pough, Elain Richardson, Aisha Durham, and Rachel Raimist. California: Sojourns, 2007.Collins, Patricia Hill. 
Black Sexual Politics: African Americans, Gender, and the New Racism. New York: Routledge, 2005. Crenshaw, Kimberle. “Mapping the Margins: Intersectionality, Identity Politics, and Violence against Women of Color”. Stanford Law Review 43.6 (1991): 1241–299.Didier, Sophie, Marianne Morange, and Elisabeth Peyroux. “The Adaptative Nature of Neoliberalism at the Local Scale: Fifteen Years of City Improvement Districts in Cape Town and Johannesburg.” Antipode 45.1 (2012): 121–39.Erasmus, Zimitri. “Introduction.” Coloured by History, Shaped by Place. Ed. Zimitri Erasmus. Cape Town: Kwela Books & SA History Online, 2001. Gates, Henry Louis. The Signifying Monkey: A Theory of Afro-American Literary Criticism. Oxford: Oxford UP, 1988.Gqola, Pumla Dineo. Rape: A South African Nightmare. Johannesburg: Jacana, 2015.Hames, Mary. “Violence against Black Lesbians: Minding Our Language.” Agenda 25.4 (2011): 87–91.Hardt, Michael, and Antonio Negri. Empire. London: Harvard UP, 2000.Haupt, Adam. “Can a Woman in Hip Hop Speak on Her Own Terms?” Africa Is a Country. 23 Mar. 2015. <http://africasacountry.com/2015/03/the-double-consciousness-of-burni-aman-can-a-woman-in-hip-hop-speak-on-her-own-terms/>.Haupt, Adam. Static: Race & Representation in Post-Apartheid Music, Media & Film. Cape Town: HSRC Press, 2012. Haupt, Adam. Stealing Empire: P2P, Intellectual Property and Hip-Hop Subversion. Cape Town: HSRC Press, 2008. Haupt, Adam. “Black Thing: Hip-Hop Nationalism, ‘Race’ and Gender in Prophets of da City and Brasse vannie Kaap.” Coloured by History, Shaped by Place. Ed. Zimitri Erasmus. Cape Town: Kwela Books & SA History Online, 2001. Hebdige, Dick. Subculture: The Meaning of Style. London: Routledge, 1979.Hogg, Christine. “In Salt River Gentrification Often Means Eviction: Family Set to Lose Their Home of 11 Years.” Ground Up. 15 June 2016. <http://www.groundup.org.za/article/salt-river-gentrification-often-means-eviction/>.hooks, bell. Outlaw: Culture: Resisting Representations. New York: Routledge, 1994.Lemanski, Charlotte. “Hybrid Gentrification in South Africa: Theorising across Southern and Northern Cities.” Urban Studies 51.14 (2014): 2943–60.Luyt, Kathryn. “Gay Language in Cape Town: A Study of Gayle – Attitudes, History and Usage.” University of Cape Town: MA thesis, 2014.MacDonald, Michael. Why Race Matters in South Africa. University of Kwazulu-Natal Press: Scottsville, 2006.Msibi, Thabo. “Not Crossing the Line: Masculinities and Homophobic Violence in South Africa”. Agenda. 23.80 (2009): 50–54.Pabón, Jessica N., and Shanté Paradigm Smalls. “Critical Intimacies: Hip Hop as Queer Feminist Pedagogy.” Women & Performance: A Journal of Feminist Theory (2014): 1–7.Salo, Elaine. “Negotiating Gender and Personhood in the New South Africa: Adolescent Women and Gangsters in Manenberg Township on the Cape Flats.” Journal of European Cultural Studies 6.3 (2003): 345–65.Solórzano, Daniel, Miguel Ceja, and Tara Yosso. “Critical Race Theory, Racial Microaggressions, and Campus Racial Climate: The Experiences of African American College Students.” Journal of Negro Education 69.1/2 (2000): 60–73.Sharpley-Whiting, T. Denean. Pimps Up, Ho’s Down: Hip Hop’s Hold on Young Black Women. New York: New York UP, 2007.Smalls, Shanté Paradigm. “‘The Rain Comes Down’: Jean Grae and Hip Hop Heteronormativity.” American Behavioral Scientist 55.1 (2011): 86–95.Visser, Gustav. “Gentrification: Prospects for Urban South African Society?” Acta Academica Supplementum 1 (2003): 79–104.Williams, Quentin E. 
“Youth Multilingualism in South Africa’s Hip-Hop Culture: a Metapragmatic Analysis.” Sociolinguistic Studies 10.1 (2016): 109–33.Yudell, Michael. “A Short History of the Race Concept.” Race and the Genetic Revolution: Science, Myth, and Culture. Ed. Sheldon Krimsky and Kathleen Sloan. New York: Columbia UP, 2011.InterviewsOvens, Neil, Dope Saint Jude, and Adam Haupt. One FM Radio interview. Cape Town. 21 Apr. 2016.VideosSaint Jude, Dope. “Keep in Touch.” YouTube. 23 Feb. 2015. <https://www.youtube.com/watch?v=w2ux9R839lE>. H&M. “H&M World Recycle Week Featuring M.I.A.” YouTube. 11 Apr. 2016. <https://www.youtube.com/watch?v=f7MskKkn2Jg>. MusicSaint Jude, Dope. Reimagine. 15 June 2016. <https://dopesaintjude.bandcamp.com/album/reimagine>.
APA, Harvard, Vancouver, ISO, and other styles
37

Leaver, Tama. "Going Dark." M/C Journal 24, no. 2 (April 28, 2021). http://dx.doi.org/10.5204/mcj.2774.

Full text
Abstract:
The first two months of 2021 saw Google and Facebook ‘go dark’ in terms of news content on the Australian versions of their platforms. In January, Google ran a so-called “experiment” which removed or demoted current news in the search results available to a segment of Australian users. While Google was only darkened for some, in February news on Facebook went completely dark, with the company banning all news content and news sharing for users within Australia. Both of these instances of going dark occurred because of the imminent threat these platforms faced from the News Media Bargaining Code legislation that was due to be finalised by the Australian parliament. This article examines how both Google and Facebook responded to the draft Code, focussing on their threats to go dark, and the extent to which those threats were carried out. After exploring the context which produced the threats of going dark, this article looks at their impact, and how the Code was reshaped in light of those threats before it was finally legislated in early March 2021. Most importantly, this article outlines why Google and Facebook were prepared to go dark in Australia, and whether they succeeded in trying to prevent Australia setting the precedent of national governments dictating the terms by which digital platforms should pay for news content. From the Digital Platforms Inquiry to the Draft Code In July 2019, the Australian Treasurer released the Digital Platforms Inquiry Final Report which had been prepared by the Australian Competition and Consumer Commission (ACCC). It outlined a range of areas where Australian law, policies and practices were not keeping pace with the realities of a digital world of search giants, social networks, and streaming media. Analysis of the submissions made as part of the Digital Platforms Inquiry found that the final report was “primarily framed around the concerns of media companies, particularly News Corp Australia, about the impact of platform companies’ market dominance of content distribution and advertising share, leading to unequal economic bargaining relationships and the gradual disappearance of journalism jobs and news media publishers” (Flew et al. 13). As such, one of the most provocative recommendations made was the establishment of a new code that would “address the imbalance in the bargaining relationship between leading digital platforms and news media businesses” (Australian Competition and Consumer Commission, Digital Platforms Inquiry 16). The ACCC suggested such a code would assist Australian news organisations of any size in negotiating with Facebook, Google and others for some form of payment for news content. The report was released at a time when there was a greatly increased global appetite for regulating digital platforms. Thus the battle over the Code was watched across the world as legislation that had the potential to open the door for similar laws in other countries (Flew and Wilding). Initially the report suggested that the digital giants should be asked to develop their own codes of conduct for negotiating with news organisations. These codes would have then been enforced within Australia if suitably robust. However, after months of the big digital platforms failing to produce meaningful codes of their own, the Australian government decided to commission their own rules in this arena. The ACCC thus prepared the draft legislation that was tabled in July 2020 as the Australian News Media Bargaining Code. 
According to the ACCC the Code, in essence, tried to create a level playing field where Australian news companies could force Google and Facebook to negotiate a ‘fair’ payment for linking to, or showing previews of, their news content. Of course, many commentators, and the platforms themselves, retorted that they already bring significant value to news companies by referring readers to news websites. While there were earlier examples of Google and Facebook paying for news, these were largely framed as philanthropy: benevolent digital giants supporting journalism for the good of democracy. News companies and the ACCC argued this approach completely ignored the fact that Google and Facebook commanded more than 80% of the online advertising market in Australia at that time (Meade, “Google, Facebook and YouTube”). Nor did the digital giants acknowledge their disruptive power given the bulk of that advertising revenue used to flow to news companies. Some of the key features of this draft of the Code included (Australian Competition and Consumer Commission, “News Media Bargaining Code”): Facebook and Google would be the (only) companies initially ‘designated’ by the Code (i.e. specific companies that must abide by the Code), with Instagram included as part of Facebook. The Code applied to all Australian news organisations, and specifically mentioned how small, regional, and rural news media would now be able to meaningfully bargain with digital platforms. Platforms would have 11 weeks after first being contacted by a news organisation to reach a mutually negotiated agreement. Failure to reach agreements would result in arbitration (using a style of arbitration called final party arbitration which has both parties present a final offer or position, with an Australian arbiter simply choosing between the two offers in most cases). Platforms were required to give 28 days notice of any change to their algorithms that would impact on the ways Australian news was ranked and appeared on their platform. Penalties for not following the Code could be ten million dollars, or 10% of the platform’s annual turnover in Australia (whichever was greater). Unsurprisingly, Facebook, Google and a number of other platforms and companies reacted very negatively to the draft Code, with their formal submissions arguing: that the algorithm change notifications would give certain news companies an unfair advantage while disrupting the platforms’ core business; that charging for linking would break the underlying free nature of the internet; that the Code overstated the importance and reach of news on each platform; and many other objections were presented, including strong rejections of the proposed model of arbitration which, they argued, completely favoured news companies without providing any real or reasonable limit on how much news organisations could ask to be paid (Google; Facebook). Google extended their argument by making a second submission in the form of a report with the title ‘The Financial Woes of News Publishers in Australia’ (Shapiro et al.) that argued Australian journalism and news was financially unsustainable long before digital platforms came along. 
However, in stark contrast the Digital News Report: Australia 2020 found that Google and Facebook were where many Australians found their news; in 2020, 52% of Australians accessed news on social media (up from 46% the year before), with 39% of Australians getting news from Facebook, and that number jumping to 49% when specifically focusing on news seeking during the first COVID-19 pandemic peak in April 2020 (Park et al.). The same report highlighted that 43% of people distrust news found on social media (with a further 29% neutral, and only 28% of people explicitly trusting news found via social media). Moreover, 64% of Australians were concerned about misinformation online, and of all the platforms mentioned in the survey, respondents were most concerned about Facebook as a source of misinformation, with 36% explicitly indicating this was the place they were most concerned about encountering ‘fake news’. In this context Facebook and Google battled the Code by launching public relations campaigns, appealing directly to Australian consumers. Google Drives a Bus Across Australia Google’s initial response to the draft Code was a substantial public relations campaign which saw the technology company advocating against the Code but not necessarily the ideas behind it. Google instead posited their own alternative way of paying for journalism in Australia. On the main Google search landing page, the usually very white surrounds of the search bar included the text “Supporting Australian journalism: a constructive path forward” which linked to a Google page outlining their version of a ‘Fair Code’. Popup windows appeared across many of Google’s services and apps, noting Google “are willing to pay to support journalism”, with a button labelled ‘Hear our proposal’. Figure 1: Popup notification on Google Australia directing users to Google’s ‘A Fair Code’ proposal rebutting the draft Code. (Screen capture by author, 29 January 2021) Google’s popups and landing page links were visible for more than six months as the Code was debated. In September 2020, a Google blog post about the Code was accompanied by a YouTube video campaign featuring Australian comedian Greta Lee Jackson (Google Australia, Google Explains Arbitration). Jackson used the analogy of Google as a bus driver, who is forced to pay restaurants for delivering customers to them, and then pay part of the running costs of restaurants, too. The video reinforced Google’s argument that the draft Code was asking digital platforms to pay potentially enormous costs for news content without acknowledging the value of Google bringing readers to the news sites. However, the video opened with the line that “proposed laws can be confusing, so I'll use an analogy to break it down”, setting a tone that would seem patronising to many people. Moreover, the video, and Google’s main argument, completely ignored the personal data Google receives every time a user searches for, or clicks on, a news story via Google Search or any other Google service. If Google’s analogy was accurate, then the bus driver would be going through every passenger’s bag while they were on the bus, taking copies of all their documents from drivers’ licenses to loyalty cards, keeping a record of every time they use the bus, and then using this information to get advertisers to pay for a tailored advertisement on the back of the seat in front of every passenger, every time they rode the bus. 
Notably, by the end of March 2021, the video had only received 10,399 views, which suggests relatively few people actually clicked on it to watch. In early January 2021, at the height of the debate about the Code, Google ran what they called “an experiment” which saw around 1% of Australian users suddenly only receive “older or less relevant content” when searching for news (Barnet, “Google’s ‘Experiment’”). While ostensibly about testing options for when the Code became law, the unannounced experiment also served as a warning shot. Google very effectively reminded users and politicians about their important role in determining which news Australian users find, and what might happen if Google darkened what they returned as news results. On 21 January 2021, Mel Silva, the Managing Director and public face of Google in Australia and New Zealand, gave public testimony about the company’s position before a Senate inquiry. Silva confirmed that Google were indeed considering removing Google Search in Australia altogether if the draft Code was not amended to address their key concerns (Silva, “Supporting Australian Journalism: A Constructive Path Forward – An Update on the News Media Bargaining Code”). Google’s seemingly sudden escalation in their threat to go dark led to articles such as a New York Times piece entitled ‘An Australia with No Google? The Bitter Fight behind a Drastic Threat’ (Cave). Google also greatly amplified their appeal to the Australian public, with a video featuring Mel Silva appearing frequently on all Google sites in Australia to argue their position (Google Australia, An Update). By the end of March 2021, Silva’s video had been watched more than 2.2 million times on YouTube. Silva’s testimony, video and related posts from Google all characterised the Code as: breaking “how Google search works in Australia”; and creating a world where links online are paid for, thus both breaking Google and “undermin[ing] how the web works”. They also offered Google’s News Showcase as a viable alternative that, in Google’s view, was “a fair one” (Silva, “Supporting Australian Journalism”). Google emphasised submissions about the Code which backed their position, including that of World Wide Web inventor Tim Berners-Lee, who agreed that the idea of charging for links could have a more wide-reaching impact, challenging the idea of a free web (Leaver). Google also continued to release their News Showcase product in other parts of the world. They emphasised that there were existing arrangements for Showcase in Australia, but the current regulatory uncertainty meant it was paused there until the debates about the Code were resolved. In the interim, news media across Australia, and the globe, were filled with stories speculating what an Australia would look like if Google went completely dark (e.g. Cave; Smyth). Even Microsoft weighed in, supporting the Code and offering their search engine Bing as a viable alternative to fill the void if Google really did go dark (Meade, “Microsoft’s Bing”). In mid-February, the draft Code was tabled in the Australian parliament. Many politicians jumped at the chance to sing the Code’s praises and lament the power that Google and Facebook have across various spheres of Australian life. Yet as these speeches were happening, the Australian Treasurer Josh Frydenberg was holding weekend meetings with executives from Google and Facebook, trying to smooth the path toward the Code (Massola). 
In these meetings, a number of amendments were agreed to, including the Code more clearly taking into account any existing deals already on the table before it became law. In these meetings the Treasurer made it clear to Google that if the deals done prior to the Code were big enough, he would consider not designating Google under the Code, which in effect would mean Google would not be immediately subject to it (Samios and Visentin). With that concession in hand, Google swiftly signed deals with over 50 Australian news publishers, including Seven West Media, Nine, News Corp, The Guardian, the ABC, and some smaller publishers such as Junkee Media (Taylor; Meade, “ABC Journalism”). While the specific details of these deals were not made public, the deals with Seven West Media and Nine were both reported to be worth around $30 million Australian dollars (Dudley-Nicholson). In reacting to Google's deals, Frydenberg described them as “generous deals, these are fair deals, these are good deals for the Australian media businesses, deals that they are making off their own bat with the digital giants” (Snape, “‘These Are Good Deals’”). During the debates about the Code, Google had ultimately ensured that every Australian user was well aware that Google was, in their words, asking for a “fair” Code, and before the Code became law even the Treasurer was conceding that Google was offering a “fair deal” to Australian news companies. Facebook Goes Dark on News While Google never followed through on their threat to go completely dark, Facebook took a very different path, with a lot less warning. Facebook’s threat to remove all news from the platform for users in Australia was not made explicit in their formal submissions on the draft of the Code. However, to be fair, Facebook’s Managing Director in Australia and New Zealand, Will Easton, did make a blog post at the end of August 2020 in which he clearly stated: “assuming this draft code becomes law, we will reluctantly stop allowing publishers and people in Australia from sharing local and international news on Facebook and Instagram” (Easton). During the negotiations in late 2020, Instagram was removed as an initial target of the Code (just as YouTube was not included as part of Google) along with a number of other concessions, but Facebook were not sated. Yet Easton’s post about removing news received very little attention after it was made, and certainly Facebook made no obvious attempt to inform their millions of Australian users that news might be completely blocked. Hence most Australians were shocked when that was exactly what Facebook did. Facebook’s power has, in many ways, always been exercised by what the platform’s algorithms display to users, what content is most visible and equally what content is made invisible (Bucher). On the morning of Wednesday, 17 February 2021, Australian Facebook users awoke to find that all traditional news and journalism had been removed from the platform. Almost all pages associated with news organisations were similarly either disabled or wiped clean, and any attempt to share links to news stories was met with a notification: “this post can’t be shared”. The Australian Prime Minister Scott Morrison reacted angrily, publicly lamenting Facebook’s choice to “unfriend Australia”, adding their actions were “as arrogant as they were disappointing”, and vowing that Australia would “not be intimidated by big tech” (Snape, “Facebook Unrepentant”). 
Figure 2: Facebook notification appearing when Australians attempted to share news articles on the platform. (Screen capture by author, 20 February 2021) Facebook’s news ban in Australia was not limited to official news pages and news content. Instead, their ban initially included a range of pages and services such as the Australian Bureau of Meteorology, emergency services pages, health care pages, hospital pages, services providing vital information about the COVID-19 pandemic, and so forth. The breadth of the ban may have been purposeful, as one of Facebook’s biggest complaints was that the Code defined news too broadly (Facebook). Yet in the Australian context, where the country was wrestling with periodic lockdowns and the Coronavirus pandemic on one hand, and bushfires and floods on the other, the removal of these vital sources of information showed a complete lack of care or interest in Australian Facebook users. Beyond the immediate inconvenience of not being able to read or share news on Facebook, there were a range of other, immediate, consequences. As Barnet, amongst others, warned, a Facebook with all credible journalism banned would almost certainly open the floodgates to a tide of misinformation, with nothing left to fill the void; it made Facebook’s “public commitment to fighting misinformation look farcical” (Barnet, “Blocking Australian News”). Moreover, Bossio noted, “reputational damage from blocking important sites that serve Australia’s public interest overnight – and yet taking years to get on top of user privacy breaches and misinformation – undermines the legitimacy of the platform and its claimed civic intentions” (Bossio). If going dark and turning off news in Australia was supposed to win the sympathy of Australian Facebook users, then the plan largely backfired. Yet as with Google, the Australian Treasurer was meeting with Mark Zuckerberg and Facebook executives behind closed doors, which did eventually lead to changes before the Code was finally legislated (Massola). Facebook gained a number of concessions, including: a longer warning period before Facebook could be designated by the Code; a longer period before news organisations would be able to expect negotiations to be concluded; an acknowledgement that existing deals would be taken into account during negotiations; and, most importantly, a clarification that if Facebook were to once again block news, this would both prevent them being subject to the Code and would not be something the platform could be punished for. Like Google, though, Facebook’s biggest gain was again the Treasurer making it clear that by making deals in advance of the Code becoming law, it was likely that Facebook would not be designated, and thus not subject to the Code at all (Samios and Visentin). After these concessions the news standoff ended, and on 23 February the Australian Treasurer declared that, after tense negotiations, Facebook had “refriended Australia”; the company had “committed to entering into good-faith negotiations with Australian news media businesses and seeking to reach agreements to pay for content” (Visentin). Over the next month there were some concerns voiced about slow progress, but then major deals were announced between Facebook and News Corp Australia, and with Nine, with other deals following closely (Meade, “Rupert Murdoch”). Just over a week after the ban began, Facebook returned news to their platform in Australia. 
Facebook obviously felt they had won the battle, but Australian Facebook users were clearly cannon fodder, with their interests and wellbeing ignored. Who Won? The Immediate Aftermath of the Code After the showdowns with Google and Facebook, the final amendments to the Code were made and it was legislated as the News Media and Digital Platforms Mandatory Bargaining Code (Australian Treasury), going into effect on 2 March 2021. However, when it became legally binding, not one single company was ‘designated’, meaning that the Code did not immediately apply to anyone. Yet deals had been struck, money would flow to Australian news companies, and Facebook had returned news to its platform in Australia. At the outset, Google, Facebook, news companies in Australia and the Australian government all claimed to have won the battle over the Code. Having talked up their tough stance on big tech platforms when the Digital Platforms Inquiry landed in 2019, the Australian Government was under public pressure to deliver on that rhetoric. The debates and media coverage surrounding the Code involved a great deal of political posturing and gained much public attention. The Treasurer was delighted to see deals being struck that meant Facebook and Google would pay Australian news companies. He actively portrayed this as the government protecting Australia’s interests and democracy. The fact that the Code was leveraged as a threat does mean that the nuances of the Code are unlikely to be tested in a courtroom in the near future. Yet as a threat it was an effective one, and it does remain in the Treasurer’s toolkit, with the potential to be deployed in the future. While mostly outside the scope of this article, it should definitely be noted that the biggest winner in the Code debate was Rupert Murdoch and his News Corp. They were the strongest advocates of regulation forcing the digital giants to pay for news in the first place, and had the most to gain and least to lose in the process. Most large news organisations in Australia have fared well, too, with new revenue flowing in from Google and Facebook. However, one of the most important facets of the Code was the inclusion of mechanisms to ensure that regional and small news publishers in Australia would be able to negotiate with Facebook and Google. While some might be able to band together and strike terms (and some already have), it is likely that many smaller news companies in Australia will miss out, since the deals being struck with the bigger news companies appear to be big enough to ensure the platforms are not designated, and thus not subject to the Code (Purtill). A few weeks after the Code became law, ACCC Chair Rod Sims stated that the “problem we’re addressing with the news media code is simply that we wanted to arrest the decline in money going to journalism” (Kohler). On that front the Code succeeded. However, there is no guarantee the deals will mean money will support actual journalists, rather than disappearing as extra corporate profits. Nor is there any onus on Facebook or Google to inform news organisations about changes to their algorithms that might impact on news rankings. Also, as many Australian news companies are now receiving payments from Google and Facebook, there is a danger the news media will become dependent on that revenue, which may make it harder for journalists to report on the big tech giants without some perceptions of a conflict of interest. 
In a diplomatic post about the Code, Google thanked everyone who had voiced concerns with the initial drafts of the legislation, thanked Australian users, and celebrated that their newly launched Google News Showcase had “two million views of content” with more than 70 news partners signed up within Australia (Silva, “An Update”). Given that News Showcase had already begun rolling out elsewhere in the world, it is likely Google were already aware they were going to have to contribute to the production of journalism across the globe. The cost of paying for news in Australia may well have fallen within the parameters Google had already decided were acceptable and inevitable before the debate about the Code even began (Purtill). In the aftermath of the Code becoming legislation, Google also posted a cutting critique of Microsoft, arguing they were “making self-serving claims and are even willing to break the way the open web works in an effort to undercut a rival” (Walker). In doing so, Google implicitly claimed that the concessions and changes to the Code they had managed to negotiate effectively positioned them as having championed the free and open web. At the end of February 2021, in a much more self-congratulatory post-mortem of the Code entitled “The Real Story of What Happened with News on Facebook in Australia”, Facebook reiterated their assertion that they bring significant value to news publishers and that the platform receives no real value in return, stating that in 2020 Facebook provided “approximately 5.1 billion free referrals to Australian publishers worth an estimated AU$407 million to the news industry” (Clegg). Deploying one last confused metaphor, Facebook argued the original draft of the Code was “like forcing car makers to fund radio stations because people might listen to them in the car — and letting the stations set the price.” Of course, there was no mention that following that metaphor, Facebook would have bugged the car and used that information to plaster the internal surfaces with personalised advertising. Facebook also touted the success of their Facebook News product in the UK, albeit without setting a date for the rollout of the product in Australia. While Facebook did concede that “the decision to stop the sharing of news in Australia appeared to come out of nowhere”, what the company failed to do was apologise to Australian Facebook users for the confusion and inconvenience they experienced. Nevertheless, on Facebook’s own terms, they certainly positioned themselves as having come out winners. Future research will need to determine whether Facebook’s actions damaged their reputation or encouraged significant numbers of Australians to leave the platform permanently, but in the wake of a number of high-profile scandals, including Cambridge Analytica (Vaidhyanathan), it is hard to see how Facebook’s actions would not have further undermined consumer trust in the company and their main platform (Park et al.). In fighting the Code, Google and Facebook were not just battling the Australian government, but also the implication that if they paid for news in Australia, they likely would also have to do so in other countries. The Code was thus seen as a dangerous precedent far more than just a mechanism to compel payment in Australia. Since both companies ensured they made deals prior to the Code becoming law, neither was initially ‘designated’, and thus neither were actually subject to the Code at the time of writing. 
The value of the Code has been as a threat and a means to force action from the digital giants. How effective it is as a piece of legislation remains to be seen in the future if, indeed, any company is ever designated. For other countries, the exact wording of the Code might not be as useful as a template, but its utility to force action has surely been noted. Like the inquiry which initiated it, the Code set “the largest digital platforms, Google and Facebook, up against the giants of traditional media, most notably Rupert Murdoch’s News Corporation” (Flew and Wilding 50). Yet in a relatively unusual turn of events, both sides of that battle claim to have won. At the same time, EU legislators watched the battle closely as they considered an “Australian-style code” of their own (Dillon). Moreover, in the month immediately following the Code being legislated, both the US and Canada were actively pursuing similar regulation (Baier) with Facebook already threatening to remove news and go dark for Canadian Facebook users (van Boom). For Facebook, and Google, the battle continues, but fighting the Code has meant the genie of paying for news content is well and truly out of the bottle. References Australian Competition and Consumer Commission. Digital Platforms Inquiry: Final Report. 25 July 2019. <https://www.accc.gov.au/focus-areas/inquiries/digital-platforms-inquiry/final-report-executive-summary>. ———. “News Media Bargaining Code: Draft Legislation.” Australian Competition and Consumer Commission, 22 July 2020. <https://www.accc.gov.au/focus-areas/digital-platforms/news-media-bargaining-code/draft-legislation>. Australian Treasury. Treasury Laws Amendment (News Media and Digital Platforms Mandatory Bargaining Code) Act 2021. Attorney-General’s Department, 2 Mar. 2021. <https://www.legislation.gov.au/Details/C2021A00021/Html/Text>. Baier, Jansen. “US Could Allow News Distribution Fees for Google, Facebook.” MediaFile, 31 Mar. 2021. <http://www.mediafiledc.com/us-could-allow-news-distribution-fees-for-google-facebook/>. Barnet, Belinda. “Blocking Australian News Shows Facebook’s Pledge to Fight Misinformation Is Farcical.” The Guardian, 18 Feb. 2021. <http://www.theguardian.com/commentisfree/2021/feb/18/blocking-australian-news-shows-facebooks-pledge-to-fight-misinformation-is-farcical>. ———. “Google’s ‘Experiment’ Hiding Australian News Just Shows Its Inordinate Power.” The Guardian, 14 Jan. 2021. <http://www.theguardian.com/commentisfree/2021/jan/14/googles-experiment-hiding-australian-news-just-shows-its-inordinate-power>. Bossio, Diana. “Facebook Has Pulled the Trigger on News Content — and Possibly Shot Itself in the Foot.” The Conversation, 18 Feb. 2021. <http://theconversation.com/facebook-has-pulled-the-trigger-on-news-content-and-possibly-shot-itself-in-the-foot-155547>. Bucher, Taina. “Want to Be on the Top? Algorithmic Power and the Threat of Invisibility on Facebook.” New Media & Society 14.7 (2012): 1164–80. DOI:10.1177/1461444812440159. Cave, Damien. “An Australia with No Google? The Bitter Fight behind a Drastic Threat.” The New York Times, 22 Jan. 2021. <https://www.nytimes.com/2021/01/22/business/australia-google-facebook-news-media.html>. Clegg, Nick. “The Real Story of What Happened with News on Facebook in Australia.” About Facebook, 24 Feb. 2021. <https://about.fb.com/news/2021/02/the-real-story-of-what-happened-with-news-on-facebook-in-australia/>. Dillon, Grace. 
“EU Contemplates Australia-Style Media Bargaining Code; China Imposes New Antitrust Rules.” ExchangeWire.com, 9 Feb. 2021. <https://www.exchangewire.com/blog/2021/02/09/eu-contemplates-australia-style-media-bargaining-code-china-imposes-new-antitrust-rules/>. Dudley-Nicholson, Jennifer. “Google May Escape Laws after Spending Spree.” The Daily Telegraph, 17 Feb. 2021. <https://www.dailytelegraph.com.au/news/national/google-may-escape-tough-australian-news-laws-after-a-lastminute-spending-spree/news-story/d3b37406bf279ff6982287d281d1fbdd>. Easton, Will. “An Update about Changes to Facebook’s Services in Australia.” About Facebook, 1 Sep. 2020. <https://about.fb.com/news/2020/08/changes-to-facebooks-services-in-australia/>. Facebook. Facebook Response to the Australian Treasury Laws Amendment (News Media and Digital Platforms Mandatory Bargaining Code) Bill 2020. 28 Aug. 2020. <https://www.accc.gov.au/system/files/Facebook_0.pdf>. Flew, Terry, et al. “Return of the Regulatory State: A Stakeholder Analysis of Australia’s Digital Platforms Inquiry and Online News Policy.” The Information Society 37.2 (2021): 128–45. DOI:10.1080/01972243.2020.1870597. Flew, Terry, and Derek Wilding. “The Turn to Regulation in Digital Communication: The ACCC’s Digital Platforms Inquiry and Australian Media Policy.” Media, Culture & Society 43.1 (2021): 48–65. DOI:10.1177/0163443720926044. Google. Draft News Media and Platforms Mandatory Bargaining Code: Submissions in Response. 28 Aug. 2020. <https://www.accc.gov.au/system/files/Google_0.pdf>. Google Australia. An Update from Google on the News Media Bargaining Code. 2021. YouTube. <https://www.youtube.com/watch?v=dHypeuHePEI>. ———. Google Explains Arbitration under the News Media Bargaining Code. 2020. YouTube. <https://www.youtube.com/watch?v=6Io01W3migk>. Kohler, Alan. “The News Bargaining Code Is Officially Dead.” The New Daily, 16 Mar. 2021. <https://thenewdaily.com.au/news/2021/03/17/alan-kohler-news-bargaining-code-dead/>. Leaver, Tama. “Web’s Inventor Says News Media Bargaining Code Could Break the Internet. He’s Right — but There’s a Fix.” The Conversation, 21 Jan. 2021. <http://theconversation.com/webs-inventor-says-news-media-bargaining-code-could-break-the-internet-hes-right-but-theres-a-fix-153630>. Massola, James. “Frydenberg, Facebook Negotiating through the Weekend.” The Sydney Morning Herald, 20 Feb. 2021. <https://www.smh.com.au/politics/federal/frydenberg-facebook-negotiating-through-the-weekend-on-new-media-laws-20210219-p573zp.html>. Meade, Amanda. “ABC Journalism to Appear on Google’s News Showcase in Lucrative Deal.” The Guardian, 22 Feb. 2021. <http://www.theguardian.com/media/2021/feb/23/abc-journalism-to-appear-on-googles-showcase-in-lucrative-deal>. ———. “Google, Facebook and YouTube Found to Make Up More than 80% of Australian Digital Advertising.” The Guardian, 23 Oct. 2020. <http://www.theguardian.com/media/2020/oct/23/google-facebook-and-youtube-found-to-make-up-more-than-80-of-australian-digital-advertising>. ———. “Microsoft’s Bing Ready to Step in If Google Pulls Search from Australia, Minister Says.” The Guardian, 1 Feb. 2021. <http://www.theguardian.com/technology/2021/feb/01/microsofts-bing-ready-to-step-in-if-google-pulls-search-from-australia-minister-says>. ———. “Rupert Murdoch’s News Corp Strikes Deal as Facebook Agrees to Pay for Australian Content.” The Guardian, 15 Mar. 2021. <http://www.theguardian.com/media/2021/mar/16/rupert-murdochs-news-corp-strikes-deal-as-facebook-agrees-to-pay-for-australian-content>. 
Park, Sora, et al. Digital News Report: Australia 2020. Canberra: News and Media Research Centre, 16 June 2020. DOI:10.25916/5ec32f8502ef0. Purtill, James. “Facebook Thinks It Won the Battle of the Media Bargaining Code — but So Does the Government.” ABC News, 25 Feb. 2021. <https://www.abc.net.au/news/science/2021-02-26/facebook-google-who-won-battle-news-media-bargaining-code/13193106>. Samios, Zoe, and Lisa Visentin. “‘Historic Moment’: Treasurer Josh Frydenberg Hails Google’s News Content Deals.” The Sydney Morning Herald, 17 Feb. 2021. <https://www.smh.com.au/business/companies/historic-moment-treasurer-josh-frydenberg-hails-google-s-news-content-deals-20210217-p573eu.html>. Shapiro, Carl, et al. The Financial Woes of News Publishers in Australia. 27 Aug. 2020. <https://www.accc.gov.au/system/files/Google%20Annex.PDF>. Silva, Mel. “An Update on the News Media Bargaining Code.” Google Australia, 1 Mar. 2021. <http://www.google.com.au/google-in-australia/an-open-letter/>. ———. “Supporting Australian Journalism: A Constructive Path Forward – An Update on the News Media Bargaining Code.” Google Australia, 22 Jan. 2021. <https://about.google/intl/ALL_au/google-in-australia/jan-6-letter/>. Smyth, Jamie. “Australian Companies Forced to Imagine Life without Google.” Financial Times, 9 Feb. 2021. <https://www.ft.com/content/fa66e8dc-afb1-4a50-8dfa-338a599ad82d>. Snape, Jack. “Facebook Unrepentant as Prime Minister Dubs Emergency Services Block ‘Arrogant.’” ABC News, 18 Feb. 2021. <https://www.abc.net.au/news/2021-02-18/facebook-unrepentant-scott-morrison-dubs-move-arrogant/13169340>. ———. “‘These Are Good Deals’: Treasurer Praises Google News Deals amid Pressure from Government Legislation.” ABC News, 17 Feb. 2021. <https://www.abc.net.au/news/2021-02-17/treasurer-praises-good-deals-between-google-news-seven/13163676>. Taylor, Josh. “Guardian Australia Strikes Deal with Google to Join News Showcase.” The Guardian, 20 Feb. 2021. <http://www.theguardian.com/technology/2021/feb/20/guardian-australia-strikes-deal-with-google-to-join-news-showcase>. Vaidhyanathan, Siva. Antisocial Media: How Facebook Disconnects Us and Undermines Democracy. Oxford: Oxford UP, 2018. Van Boom, Daniel. “Facebook Could Block News in Canada like It Did in Australia.” CNET, 29 Mar. 2021. <https://www.cnet.com/news/facebook-could-block-news-in-canada-like-it-did-in-australia/>. Visentin, Lisa. “Facebook Refriends Australia after Last-Minute Changes to Media Code.” The Sydney Morning Herald, 23 Feb. 2021. <https://www.smh.com.au/politics/federal/government-agrees-to-last-minute-amendments-to-media-code-20210222-p574kc.html>. Walker, Kent. “Our Ongoing Commitment to Supporting Journalism.” Google, 12 Mar. 2021. <https://blog.google/products/news/google-commitment-supporting-journalism/>.
APA, Harvard, Vancouver, ISO, and other styles
38

Merchant, Melissa, Katie M. Ellis, and Natalie Latter. "Captions and the Cooking Show." M/C Journal 20, no. 3 (June 21, 2017). http://dx.doi.org/10.5204/mcj.1260.

Full text
Abstract:
While the television cooking genre has evolved in numerous ways to withstand competition and become a constant feature in television programming (Collins and College), it has been argued that audience demand for televisual cooking has always been high because of the daily importance of cooking (Hamada, “Multimedia Integration”). Early cooking shows were characterised by an instructional discourse, before quickly embracing an entertainment focus; modern cooking shows take on a more competitive, out-of-the-kitchen focus (Collins and College). The genre has continued to evolve, with celebrity chefs and ordinary people embracing transmedia affordances to return to the instructional focus of the early cooking shows. While the television cooking show is recognised for its broad cultural impacts related to gender (Ouellette and Hay), cultural capital (Ibrahim; Oren), television formatting (Oren), and even communication itself (Matwick and Matwick), its role in the widespread adoption of television captions is significantly underexplored. Even the fact that a cooking show was the first ever program captioned on American television is almost completely unremarked within cooking show histories and literature. A Brief History of Captioning Worldwide When captions were first introduced on US television in the early 1970s, programmers were guided by the general principle to make the captioned program “accessible to every deaf viewer regardless of reading ability” (Jensema, McCann and Ramsey 284). However, there were no exact rules regarding captioning quality and captions did not reflect verbatim what was said onscreen. According to Jensema, McCann and Ramsey (285), less than verbatim captioning continued for many years because “deaf people were so delighted to have captions that they accepted almost anything thrown on the screen” (see also Newell 266 for a discussion of the UK context). While the benefits of captions for people who are D/deaf or hard of hearing were immediate, their commercial applications also became apparent. When the moral argument that people who were D/deaf or hard of hearing had a right to access television via captions proved unsuccessful in the fight for legislation, advocates lobbied the US Congress about the mainstream commercial benefits such as in education and the benefits for people learning English as a second language (Downey). Activist efforts and hard-won legal battles meant D/deaf and hard of hearing viewers can now expect closed captions on almost all television content. With legislation in place to determine the provision of captions, attention began to focus on their quality. D/deaf viewers are no longer just delighted to accept anything thrown on the screen and have begun to demand verbatim captioning. At the same time, market-based incentives are capturing the attention of television executives seeking to make money, and the widespread availability of verbatim captions has been recognised for its multimedia—and therefore commercial—applications. These include its capacity for information retrieval (Miura et al.; Agnihotri et al.) and for creative repurposing of television content (Blankinship et al.). 
Captions and transcripts have been identified as being of particular importance to augmenting the information provided in cooking shows (Miura et al.; Oh et al.). Early Captions in the US: Julia Child’s The French Chef Julia Child is indicative of the early period of the cooking genre (Collins and College)—she has been described as “the epitome of the TV chef” (ray 53) and is often credited for making cooking accessible to American audiences through her onscreen focus on normalising techniques that she promised could be mastered at home (ray). She is still recognised for her mastery of the genre, and for her capacity to entertain in a way that stood out from her contemporaries (Collins and College; ray). Julia Child’s The French Chef originally aired on the US publicly-funded Public Broadcasting System (PBS) affiliate WGBH from 1963–1973. The captioning of television also began in the 1960s, with educators creating the captions themselves, mainly for educational use in deaf schools (Downey 70). However, there soon came calls for public television to also be made accessible for the deaf and hard of hearing—the debate focused on equality and pushed for recognition that deaf people were culturally diverse (Downey 70). The PBS therefore began a trial of captioning programs (Downey 71). These would be “open captions”—characters which were positioned on the screen as part of the normal image for all viewers to see (Downey 71). The trial was designed to determine both the number of D/deaf and hard of hearing people viewing the program, as well as to test if non-D/deaf and hard of hearing viewers would watch a program which had captions (Downey 71). The French Chef was selected for captioning by WGBH because it was their most popular television show in the early 1970s and in 1972 eight episodes of The French Chef were aired using open—albeit inconsistent—captions (Downey 71; Jensema et al. 284). There were concerns from some broadcasters that openly captioned programs would drive away the “hearing majority” (Downey 71). However, there was no explicit study carried out in 1972 on the viewers of The French Chef to determine if this was the case because WGBH ran out of funds to research this further (Downey 71). Nevertheless, Jensema, McCann and Ramsey (284) note that WGBH did begin to re-broadcast ABC World News Tonight in the 1970s with open captions and that this was the only regularly captioned show at the time. Due to changes in technology and fears that not everyone wanted to see captions onscreen, television’s focus shifted from open captions to closed captioning in the 1980s. Captions became encoded, with viewers needing a decoder to be able to access them. However, the high cost of the decoders meant that many could not afford to buy them and adoption of the technology was slow (Youngblood and Lysaght 243; Downey 71). In 1979, the US government set up the National Captioning Institute (NCI) with a mandate to develop and sell these decoders, and provide captioning services to the networks. This was initially government-funded but was designed to eventually be self-sufficient (Downey 73). PBS, ABC and NBC (but not CBS) had agreed to a trial (Downey 73). However, there was a reluctance on the part of broadcasters to pay to caption content when there was not enough evidence that the demand was high (Downey 73–74). The argument for the provision of captioned content therefore began to focus on the rights of all citizens to be able to access a public service. 
A complaint was lodged claiming that the Los Angeles station KCET, which was a PBS affiliate, did not provide captioned content that was available elsewhere (Downey 74). When Los Angeles PBS station KCET refused to air captioned episodes of The French Chef, the Greater Los Angeles Council on Deafness (GLAD) picketed the station until the decision was reversed. GLAD then focused on legislation and used the Rehabilitation Act to argue that television was federally assisted and, by not providing captioned content, broadcasters were in violation of the Act (Downey 74). GLAD also used the 1934 Communications Act in their argument. This Act had firstly established the Federal Communications Commission (FCC) and then assigned them the right to grant and renew broadcast licenses as long as those broadcasters served the “public interest, convenience, and necessity” (Michalik, cited in Downey 74). The FCC could, argued GLAD, therefore refuse to renew the licenses of broadcasters who did not air captioned content. However, rather than this argument working in their favour, the FCC instead changed its own procedures to avoid such legal actions in the future (Downey 75). As a result, although some stations began to voluntarily caption more content, it was not until 1996 that it became a legally mandated requirement with the introduction of the Telecommunications Act (Youngblood and Lysaght 244)—too late for The French Chef. My Kitchen Rules: Captioning Breach Whereas The French Chef presented instructional cooking programming from a kitchen set, more recently the food genre has moved away from the staged domestic kitchen set as an instructional space to use real-life domestic kitchens and more competitive multi-bench spaces. The Australian program My Kitchen Rules (MKR) straddles this shift in the cooking genre with the first half of each season occurring in domestic settings and the second half in Iron Chef-style studio competition (see Oren for a discussion of the influence of Iron Chef on contemporary cooking shows). All broadcast channels in Australia are mandated to caption 100 per cent of programs aired between 6am and midnight. However, the 2013 MKR Grand Final broadcast by Channel Seven Brisbane Pty Ltd and Channel Seven Melbourne Pty Ltd (Seven) failed to transmit 10 minutes of captions some 30 minutes into the 2-hour program. The Australian Communications and Media Authority (ACMA) received two complaints relating to this. The first complaint, received on 27 April 2013, the same evening as the program was broadcast, noted ‘[the D/deaf community] … should not have to miss out’ (ACMA, Report No. 3046 3). The second complaint, received on 30 April 2013, identified the crucial nature of the missing segment and its effect on viewers’ overall enjoyment of the program (ACMA, Report No. 3046 3). Seven explained that the relevant segment (approximately 10 per cent of the program) was missing from the captioning file, but that it had not appeared to be missing when Seven completed its usual captioning checks prior to broadcast (ACMA, Report No. 3046 4). The ACMA found that Seven had breached the conditions of their commercial television broadcasting licence by “failing to provide a captioning service for the program” (ACMA, Report No. 3046 12). 
The interruption of captioning was serious enough to constitute a breach due, in part, to the nature and characteristic of the program: the viewer is engaged in the momentum of the competitive process by being provided with an understanding of each of the competition stages; how the judges, guests and contestants interact; and their commentaries of the food and the cooking processes during those stages. (ACMA, Report No. 3046 6) These interactions have become a crucial part of the cooking genre, a genre often described as offering a way to acquire cultural capital via instructions in both cooking and ideological food preferences (Oren 31). Further, in relation to the uncaptioned MKR segment, ACMA acknowledged it would have been difficult to follow both the cooking process and the exchanges taking place between contestants (ACMA, Report No. 3046 8). ACMA considered these exchanges crucial to ‘a viewer’s understanding of, and secondly to their engagement with the different inter-related stages of the program’ (ACMA, Report No. 3046 7). An additional complaint was made with regard to the same program broadcast on Prime Television (Northern) Pty Ltd (Prime), a Seven Network affiliate. The complaint stated that the lack of captions was “Not good enough in prime time and for a show that is non-live in nature” (ACMA, Report No. 3124 3). Despite the fact that the ACMA found that “the fault arose from the affiliate, Seven, rather than from the licensee [Prime]”, Prime was also found to have breached their licence conditions by failing to provide a captioning service (ACMA, Report No. 3124 12). The following year, Seven launched captions for their online catch-up television platform. Although this was a result of discussions with a complainant over the broader lack of captioned online television content, it was also a step that re-established Seven’s credentials as a leader in commercial television access. The 2015 season of MKR also featured their first partially-deaf contestant, Emilie Biggar. Mainstreaming Captions — Inter-Platform Cooperation Over time, cooking shows on television have evolved from an informative style (The French Chef) to become more entertaining in their approach (MKR). As Oren identifies, this has seen a shift in the food genre “away from the traditional, instructional format and towards professionalism and competition” (Oren 25). The affordances of television itself as a visual medium have also been recognised as crucial in the popularity of this genre and its more recent transmedia turn. That is, following Joshua Meyrowitz’s medium theory regarding how different media can afford us different messages, televised cooking shows offer audiences stylised knowledge about food and cooking beyond the traditional cookbook (Oren; ray). In addition, cooking shows are taking their product beyond just television and increasing their inter-platform cooperation (Oren)—for example, MKR has a comprehensive companion website that viewers can visit to watch whole episodes, obtain full recipes, and view shopping lists. While this can be viewed as a modern take on Julia Child’s cookbook success, it must also be considered in the context of the increasing focus on multimedia approaches to cooking instructions (Hamada et al., “Multimedia Integration”, “Cooking Navi”; Oh et al.). Audiences today are more likely to attempt a recipe if they have seen it on television, and will use transmedia to download the recipe. 
As Oren explains: foodism’s ascent to popular culture provides the backdrop and motivation for the current explosion of food-themed formats that encourages audiences’ investment in their own expertise as critics, diners, foodies and even wanna-be professional chefs. FoodTV, in turn, feeds back into a web-powered, gastro-culture and critique-economy where appraisal outranks delight. (Oren 33) This explosion in popularity of the web-powered gastro-culture Oren refers to has led to an increase in appetite for step-by-step, easy-to-access instructions. These are being delivered using captions. As a result of the legislation and activism described throughout this paper, captions are more widely available and, in many cases, now describe what is said onscreen verbatim. In addition, the mainstream commercial benefits and uses of captions are being explored. Captions have therefore moved from a specialist assistive technology for people who are D/deaf or hard of hearing to become recognised as an important resource for creative television viewers regardless of their hearing (Blankinship et al.). With captions becoming more accessible, accurate, financially viable, and mainstreamed, their potential as an additional television resource is of interest. As outlined above, within the cooking show genre—especially with its current multimedia turn and the demand for captioned recipe instructions (Hamada et al., “Multimedia Integration”, “Cooking Navi”; Oh et al.)—this is particularly pertinent. Hamada et al. identify captions as a useful technology to use in the increasingly popular educational, yet entertaining, cooking show genre as the required information—ingredient lists, instructions, recipes—is in high demand (Hamada et al., “Multimedia Integration” 658). They note that cooking shows often present information out of order, making them difficult to follow, particularly if a recipe must be sourced later from a website (Hamada et al., “Multimedia Integration” 658-59; Oh et al.). Each step in a recipe must be navigated and coordinated, particularly if multiple recipes are being completed at the same time (Hamada et al., “Cooking Navi”), as is often the case on cooking shows such as MKR. Using captions as part of a software program to index cooking videos facilitates a number of search affordances for people wishing to replicate the recipe themselves. As Oh et al. explain: if food and recipe information are published as linked data with the scheme, it enables to search food recipe and annotate certain recipe by communities (sic). In addition, because of characteristics of linked data, information on food recipes can be connected to additional data source such as products for ingredients, and recipe websites can support users’ decision making in the cooking domain. (Oh et al. 2) The advantages of such a software program are many. For the audience there is easy access to desired information. For the commercial entities involved, this consumer desire facilitates endless marketing opportunities including product placement, increased ratings, and software development. 
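To make the indexing idea above a little more concrete, the following minimal Python sketch shows one naive way in which timestamped captions could be aligned with recipe steps through shared keywords. It is an illustrative assumption only: the data structures, the keyword-overlap rule, and all names are hypothetical and are not the published methods of the caption-recipe alignment systems cited above (Hamada et al.; Oh et al.).

from dataclasses import dataclass

@dataclass
class Caption:
    start_seconds: float   # when the caption appears in the broadcast
    text: str              # verbatim caption text

def index_recipe_steps(captions, recipe_steps):
    """Map each recipe step to caption timestamps whose text shares keywords.

    Returns a dict: step text -> list of start times (in seconds) where the
    step is plausibly being demonstrated on screen.
    """
    index = {}
    for step in recipe_steps:
        # crude keyword set: longer words only, punctuation stripped
        step_words = {w.lower().strip(".,") for w in step.split() if len(w) > 3}
        hits = [
            c.start_seconds
            for c in captions
            if step_words & {w.lower().strip(".,") for w in c.text.split()}
        ]
        index[step] = hits
    return index

if __name__ == "__main__":
    captions = [
        Caption(12.0, "First we dice the onions very finely"),
        Caption(95.5, "Now whisk the eggs until they are pale"),
        Caption(160.0, "Fold the whisked eggs through the onions"),
    ]
    steps = ["Dice the onions", "Whisk the eggs", "Combine and fold together"]
    for step, times in index_recipe_steps(captions, steps).items():
        print(f"{step!r} -> shown around {times} seconds")

In practice the cited systems rely on far richer text analysis and linked data than this keyword overlap, but even a toy alignment like this illustrates how verbatim captions can double as a searchable index into the video itself.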
Interestingly, all of this falls outside the “usual” parameters of captions as purely an assistive device for a few, and facilitates the mainstreaming—and perhaps beginnings of acceptance—of captions. Conclusion Captions are a vital accessibility feature for television viewers who are D/deaf or hard of hearing, not just from an informative or entertainment perspective but also to facilitate social inclusion for this culturally diverse group. The availability and quality of television captions have moved through three stages. These can be broadly summarised as early yet inconsistent captions, captions becoming more widely available and accurate—often as a direct result of activism and legislation—but not yet fully verbatim, and verbatim captions as adopted within mainstream software applications. This paper has situated these stages within the television cooking genre, a genre often remarked for its appeal towards inclusion and cultural capital. If television facilitates social inclusion, then food television offers vital cultural capital. While Julia Child’s The French Chef offered the first example of television captions via open captions in 1972, a lack of funding means we do not know how viewers (both hearing and not) actually received the program. However, at the time, captions that would be considered unacceptable today were received favourably (Jensema, McCann and Ramsey; Newell)—anything was deemed better than nothing. Increasingly, as the focus shifted to closed captioning and the cooking genre embraced a more competitive approach, viewers who required captions were no longer happy with missing or inconsistent captioning quality. This was particularly significant in Australia in 2013, when several viewers complained to the ACMA that captions were missing from the finale of MKR. These captions would have provided more than vital cooking instructions—their absence prevented viewers from understanding conflict within the program. Following this breach, Seven became the only Australian commercial television station to offer captions on their web-based catch-up platform. While this may have gone a long way to rehabilitate Seven amongst D/deaf and hard of hearing audiences, there is the potential too for commercial benefits. Caption technology is now being mainstreamed for use in cooking software applications developed from televised cooking shows. These allow viewers—both D/deaf and hearing—to access information in a completely new, and inclusive, way. References Agnihotri, Lalitha, et al. “Summarization of Video Programs Based on Closed Captions.” 4315 (2001): 599–607. Australian Communications and Media Authority (ACMA). Investigation Report No. 3046. 2013. 26 Apr. 2017 <http://www.acma.gov.au/~/media/Diversity%20Localism%20and%20Accessibility/Investigation%20reports/Word%20document/3046%20My%20Kitchen%20Rules%20Grand%20Final%20docx.docx>. ———. Investigation Report No. 3124. 2014. 26 Apr. 2017 <http://www.acma.gov.au/~/media/Diversity%20Localism%20and%20Accessibility/Investigation%20reports/Word%20document/3124%20NEN%20My%20Kitchen%20Rules%20docx.docx>. Blankinship, E., et al. “Closed Caption, Open Source.” BT Technology Journal 22.4 (2004): 151–59. Collins, Kathleen, and John Jay College. “TV Cooking Shows: The Evolution of a Genre”. Flow: A Critical Forum on Television and Media Culture (7 May 2008). 14 May 2017 <http://www.flowjournal.org/2008/05/tv-cooking-shows-the-evolution-of-a-genre/>. Downey, Greg. 
“Constructing Closed-Captioning in the Public Interest: From Minority Media Accessibility to Mainstream Educational Technology.” The Journal of Policy, Regulation and Strategy for Telecommunications, Information and Media 9.2/3 (2007): 69–82. DOI: 10.1108/14636690710734670. Hamada, Reiko, et al. “Multimedia Integration for Cooking Video Indexing.” Advances in Multimedia Information Processing-PCM 2004 (2005): 657–64. Hamada, Reiko, et al. “Cooking Navi: Assistant for Daily Cooking in Kitchen.” Proceedings of the 13th Annual ACM International Conference on Multimedia. ACM. Ibrahim, Yasmin. “Food Porn and the Invitation to Gaze: Ephemeral Consumption and the Digital Spectacle.” International Journal of E-Politics (IJEP) 6.3 (2015): 1–12. Jensema, Carl J., Ralph McCann, and Scott Ramsey. “Closed-Captioned Television Presentation Speed and Vocabulary.” American Annals of the Deaf 141.4 (1996): 284–292. Matwick, Kelsi, and Keri Matwick. “Inquiry in Television Cooking Shows.” Discourse & Communication 9.3 (2015): 313–30. Meyrowitz, Joshua. No Sense of Place: The Impact of Electronic Media on Social Behavior. New York: Oxford University Press, 1985. Miura, K., et al. “Automatic Generation of a Multimedia Encyclopedia from TV Programs by Using Closed Captions and Detecting Principal Video Objects.” Eighth IEEE International Symposium on Multimedia (2006): 873–80. Newell, A.F. “Teletext for the Deaf.” Electronics and Power 28.3 (1982): 263–66. Oh, K.J. et al. “Automatic Indexing of Cooking Video by Using Caption-Recipe Alignment.” 2014 International Conference on Behavioral, Economic, and Socio-Cultural Computing (BESC2014) (2014): 1–6. Oren, Tasha. “On the Line: Format, Cooking and Competition as Television Values.” Critical Studies in Television: The International Journal of Television Studies 8.2 (2013): 20–35. Ouellette, Laurie, and James Hay. “Makeover Television, Governmentality and the Good Citizen.” Continuum: Journal of Media & Cultural Studies 22.4 (2008): 471–84. ray, krishnendu. “Domesticating Cuisine: Food and Aesthetics on American Television.” Gastronomica 7.1 (2007): 50–63. Youngblood, Norman E., and Ryan Lysaght. “Accessibility and Use of Online Video Captions by Local Television News Websites.” Electronic News 9.4 (2015): 242–256.
APA, Harvard, Vancouver, ISO, and other styles
39

Ibrahim, Yasmin. "Commodifying Terrorism." M/C Journal 10, no. 3 (June 1, 2007). http://dx.doi.org/10.5204/mcj.2665.

Full text
Abstract:
Introduction The counter-Terrorism advertising campaign of London’s Metropolitan Police (Figure 1) commodifies some everyday items such as mobile phones, computers, passports and credit cards as having the potential to sustain terrorist activities. The process of ascribing cultural values and symbolic meanings to some everyday technical gadgets objectifies and situates Terrorism within everyday life. The police, in urging people to look out for ‘the unusual’ in their normal day-to-day lives, juxtapose the everyday with the unusual, where day-to-day consumption, routines and flows of human activity can seemingly house insidious and atavistic elements. This again is reiterated in the Met police press release: Terrorists live within our communities making their plans whilst doing everything they can to blend in, and trying not to raise suspicions about their activities. (MPA Website) The commodification of Terrorism through uncommon and everyday objects situates Terrorism as a phenomenon which occupies a liminal space within the everyday. It resides, breathes and co-exists within the taken-for-granted routines and objects of ‘the everyday’ where it has the potential to explode and disrupt without warning. Since 9/11 and the 7/7 bombings, Terrorism has been narrated through the disruption of mobility, whether in mid-air or in the deep recesses of the Underground. The resonant thread of disruption to human mobility evokes a powerful meta-narrative where acts of Terrorism can halt human agency amidst the backdrop of the metropolis, which is often a metaphor for speed and accelerated activities. If globalisation and the interconnected nature of the world are understood through discourses of risk, Terrorism bears the same footprint in urban spaces of modernity, narrating the vulnerability of the human condition in an inter-linked world where ideological struggles and resistance are manifested through inexplicable violence and destruction of lives, where the everyday is suspended to embrace the unexpected. As a consequence, ambient fear “saturates the social spaces of everyday life” (Hubbard 2). The commodification of Terrorism through everyday items of consumption inevitably creates an intertextuality with real and media events, which constantly corrode the security of the metropolis. Paddy Scannell alludes to a doubling of place in our mediated world where “public events now occur simultaneously in two different places; the place of the event itself and that in which it is watched and heard. The media then vacillates between the two sites and creates experiences of simultaneity, liveness and immediacy” (qtd. in Moores 22). The doubling of place through media constructs a pervasive environment of risk and fear. Mark Danner (qtd. in Bauman 106) points out that the most powerful weapon of the 9/11 terrorists was that innocuous and “most American of technological creations: the television set”, which provided a global platform to constantly replay and remember the dreadful scenes of the day, enabling the terrorist to appear invincible and to narrate fear as ubiquitous and omnipresent. Philip Abrams argues that ‘big events’ (such as 9/11 and 7/7) do make a difference in the social world, for such events function as a transformative device between the past and future, forcing society to alter or transform its perspectives. 
David Altheide points out that since September 11 and the ensuing war on terror, a new discourse of Terrorism has emerged as a way of expressing how the world has changed and defining a state of constant alert through a media logic and format that shapes the nature of discourse itself. Consequently, the intensity and centralisation of surveillance in Western countries increased dramatically, placing the emphasis on expanding the forms of the already existing range of surveillance processes and practices that circumscribe and help shape our social existence (Lyon, Terrorism 2). Normalisation of Surveillance The role of technologies, particularly information and communication technologies (ICTs), and other infrastructures in unevenly distributing access to the goods and services necessary for modern life, while facilitating data collection on and control of the public, is a significant characteristic of modernity (Reiman; Graham and Marvin; Monahan). The embedding of technological surveillance into spaces and infrastructures not only augments social control but also redefines data as a form of capital which can be shared between public and private sectors (Gandy, Data Mining; O’Harrow; Monahan). The scale, complexity and limitations of omnipresent and omnipotent surveillance, nevertheless, offer room for both subversion and new forms of domination and oppression (Marx). In surveillance studies, Foucault’s analysis is often heavily employed to explain lines of continuity and change between earlier forms of surveillance and data assemblage and contemporary forms in the shape of closed-circuit television (CCTV) and other surveillance modes (Dee). It establishes the need to discern patterns of power and normalisation and the subliminal or obvious cultural codes and categories that emerge through these arrangements (Fopp; Lyon, Electronic; Norris and Armstrong). In their study of CCTV surveillance, Norris and Armstrong (cf. in Dee) point out that when added to the daily minutiae of surveillance, CCTV cameras in public spaces, along with other camera surveillance in workplaces, capture human beings on a database constantly. The normalisation of surveillance, particularly with reference to CCTV, the popularisation of surveillance through television formats such as ‘Big Brother’ (Dee), and the expansion of online platforms to publish private images have created a contradictory, complex and contested nature of spatial and power relationships in society. The UK, for example, has the most developed system of both urban and public space cameras in the world, and, as Lyon (Surveillance) points out, this growth of camera surveillance has been achieved with very little, if any, public debate as to its benefits or otherwise. There may now be as many as 4.2 million CCTV cameras in Britain (cf. Lyon, Surveillance). That is one for every fourteen people, and a person can be captured on over 300 cameras every day. An estimated £500m of public money has been invested in CCTV infrastructure over the last decade but, according to a Home Office study, CCTV schemes that have been assessed had little overall effect on crime levels (Wood and Ball). In spatial terms, these statistics reiterate Foucault’s emphasis on the power economy of the unseen gaze. Michel Foucault, in analysing the links between power, information and surveillance inspired by Bentham’s idea of the Panopticon, indicated that it is possible to sanction or reward an individual through the act of surveillance without their knowledge (155). 
It is this unseen and unknown gaze of surveillance that is fundamental to the exercise of power. The design and arrangement of buildings can be engineered so that the “surveillance is permanent in its effects, even if it is discontinuous in its action” (Foucault 201). Lyon (Terrorism), in tracing the trajectory of surveillance studies, points out that much of surveillance literature has focused on understanding it as a centralised bureaucratic relationship between the powerful and the governed. Invisible forms of surveillance have also been viewed as a class weapon in some societies. With the advancements in and proliferation of surveillance technologies as well as convergence with other technologies, Lyon argues that it is no longer feasible to view surveillance as a linear or centralised process. In our contemporary globalised world, there is a need to reconcile the dialectical strands that mediate surveillance as a process. In acknowledging this, Gilles Deleuze and Felix Guattari have constructed surveillance as a rhizome that defies linearity to appropriate a more convoluted and malleable form where the coding of bodies and data can be enmeshed to produce intricate power relationships and hierarchies within societies. Latour draws on the notion of assemblage by propounding that data is amalgamated from scattered centres of calculation where these can range from state and commercial institutions to scientific laboratories which scrutinise data to conceive governance and control strategies. Both the Latourian and Deleuzian ideas of surveillance highlight the disparate arrays of people, technologies and organisations that become connected to make “surveillance assemblages” in contrast to the static, unidirectional Panopticon metaphor (Ball, “Organization” 93). In a similar vein, Gandy (Panoptic) infers that it is misleading to assume that surveillance in practice is as complete and totalising as the Panoptic ideal type would have us believe. Co-optation of Millions The Metropolitan Police’s counter-Terrorism strategy seeks to co-opt millions where the corporeal body can complement the landscape of technological surveillance that already co-exists within modernity. In its press release, the role of civilian bodies in ensuring the security of the city is stressed: Keeping Londoners safe from Terrorism is not a job solely for governments, security services or police. If we are to make London the safest major city in the world, we must mobilise against Terrorism not only the resources of the state, but also the active support of the millions of people who live and work in the capital. (MPA Website). Surveillance is increasingly simulated through the millions of corporeal entities where seeing in advance is the goal even before technology records and codes these images (William). Bodies understand and code risk and images through the cultural narratives which circulate in society. Compared to CCTV technology images, which require cultural and political interpretations and interventions, bodies as surveillance organisms implicitly code other bodies and activities. The travel bag in the Metropolitan Police poster reinforces the images of the 7/7 bombers and the renewed attempts to bomb the London Underground on the 21st of July. It reiterates the CCTV footage revealing images of the bombers wearing rucksacks. The image of the rucksack embodies both the everyday and the potential for evil in everyday objects. 
It also inevitably reproduces cultural biases and prejudices in which the rucksack is subliminally associated with a specific type of body. The rucksack in these terms is a laden image which symbolically captures the context and culture of risk discourses in society. The co-optation of the population as a surveillance entity also recasts new forms of social responsibility within the democratic polity, where privacy is increasingly mediated by the greater need to monitor, trace and record the activities of one another. Nikolas Rose, in discussing the increasing 'responsibilisation' of individuals in modern societies, describes the process in which the individual accepts responsibility for personal actions across a wide range of fields of social and economic activity, such as the choice of diet, savings and pension arrangements, health care decisions, home security measures and personal investment choices (qtd. in Dee). While surveillance in individualistic terms is often viewed as a threat to privacy, Rose argues that the state of 'advanced liberalism' within modernity and post-modernity requires considerable degrees of self-governance, regulation and surveillance, whereby the individual is constructed both as a 'new citizen' and as a key site of self-management. With the role of the citizen co-opted and recast in the age of Terrorism, the citizen to a degree accepts responsibility for both surveillance and security. In our sociological imagination, the body is constructed both as lived and as a social object. Erving Goffman uses the word 'umwelt' to stress that human embodiment is central to the constitution of the social world. Goffman defines 'umwelt' as "the region around an individual from which signs of alarm can come" and employs it to capture how people as social actors perceive and manage their settings when interacting in public places (252). Goffman's 'umwelt' can be traced to Immanuel Kant's idea that it is the a priori categories of space and time that make it possible for a subject to perceive a world (Umiker-Sebeok; qtd. in Ball, "Organization"). Anthony Giddens adapted the term 'umwelt' to refer to "a phenomenal world with which the individual is routinely 'in touch' in respect of potential dangers and alarms which then formed a core of (accomplished) normalcy with which individuals and groups surround themselves" (244). Benjamin Smith, in considering the body as an integral component of the link between our consciousness and our material world, observes that the body is continuously inscribed by culture. These inscriptions, he argues, encompass a wide range of cultural practices and imply knowledge of a variety of social constructs. The inscribing of the body produces cultural meanings as well as creating forms of subjectivity, while locating and situating the body within a cultural matrix (Smith). Drawing on Derrida's work, Pugliese employs the term 'somatechnics' to conceptualise the body as a culturally intelligible construct and to address the techniques in and through which the body is formed and transformed (qtd. in Osuri). These techniques can encompass signification systems such as race and gender, as well as technologies which mediate our sense of reality. These technologies of thinking, seeing, hearing, signifying, visualising and positioning produce the very conditions for the cultural intelligibility of the body (Osuri). The body is thus continuously inscribed and interpreted through mediated signifying systems.
Similarly, Hayles, while not intending to impose a Cartesian dichotomy between the physical body and its cognitive presence, contends that the use of and interaction with technology incorporate the body as a material entity while equally inscribing it by marking, recording and tracing its actions in various terrains. According to Gayatri Spivak (qtd. in Ball, "Organization"), new habits and experiences are embedded into the corporeal entity, which then mediates its reactions and responses to the social world. This means that one's body is not completely one's own, and the presence of ideological forces or influences then inscribes the body with meanings, codes and cultural values. In our modern condition, the body and data are intimately and intricately bound. Outside the home, it is difficult for the body to avoid entering into relationships that produce electronic personal data (Stalder). According to Felix Stalder, our physical bodies are shadowed by a 'data body' which follows the physical body of the consuming citizen and sometimes precedes it by constructing the individual through data (12). Before we arrive somewhere, we have already been measured and classified. Thus, upon arrival, the citizen will be treated according to the criteria 'connected with the profile that represents us' (Gandy, Panoptic; William). Following September 11, Lyon (Terrorism) reveals that surveillance data from a myriad of sources, such as supermarkets, motels, traffic control points, credit card transaction records and so on, was used to trace the activities of terrorists in the days and hours before their attacks, confirming that the body leaves data traces and trails. Surveillance works by abstracting bodies from places and splitting them into flows to be reassembled as virtual data-doubles, and in the process it can replicate hierarchies and centralise power (Lyon, Terrorism). Mike Dee points out that the nature of surveillance taking place in modern societies is complex, far-reaching and in many ways insidious, as surveillance needs to be situated within the broadest context of everyday human acts, whether shopping with loyalty cards or paying utility bills. The physical vulnerability of the body becomes more complex in the time-space distanciated surveillance systems to which it has become increasingly exposed. As such, each transaction – whether a phone call, credit card transaction or Internet search – leaves a 'data trail' linkable to an individual person or place. Haggerty and Ericson, drawing from Deleuze and Guattari's concept of the assemblage, describe the convergence and spread of data-gathering systems between different social domains and across multiple levels (qtd. in Hier). They argue that the target of the generic 'surveillance assemblage' is the human body, which is broken into a series of data flows on which the surveillance process is based. The focus is on the data individuals can yield and the categories to which they can contribute; these are then reapplied to the body. In this sense, surveillance is rhizomatic, for it is diverse and connected to an underlying, invisible infrastructure of interconnected technologies in multiple contexts (Ball, "Elements"). The co-opted body in the schema of counter-Terrorism enters a power arrangement where it constitutes both the unseen gaze and the data that will be implicated and captured in this arrangement.
It is capable of producing surveillance data for those in power while creating new data through its transactions and movements in everyday life. The body is unequivocally constructed through this data and is also entrapped by it in terms of representation and categorisation. The corporeal body is therefore part of the machinery of surveillance while being vulnerable to its discriminatory powers of categorisation and victimisation. As Hannah Arendt (qtd. in Bauman 91) warned, "we terrestrial creatures bidding for cosmic significance will shortly be unable to comprehend and articulate the things we are capable of doing". Arendt's caution conveys the complexity, vulnerability and complicity of the human condition in the surveillance society. Equally, it exemplifies how the corporeal body can be co-opted as a surveillance entity, sustaining a new 'banality' (Arendt) in the machinery of surveillance.
Social Consequences of Surveillance
Lyon (Terrorism) observed that the events of 9/11 and 7/7 in the UK have inevitably become a prism through which aspects of social structure and processes may be viewed. This prism helps to illuminate the already existing vast range of surveillance practices and processes that touch everyday life in so-called information societies. As Lyon (Terrorism) points out, surveillance is always ambiguous and can encompass genuine benefits and plausible rationales as well as palpable disadvantages. There are elements of representation to consider in terms of how surveillance technologies can re-present data that are collected at source or gathered from another technological medium, and these representations bring different meanings and enable different interpretations of life and surveillance (Ball, "Elements"). As such, surveillance needs to be viewed in a number of ways: as practice, as knowledge and as protection from threat. As data can be manipulated and interpreted according to cultural values and norms, it reflects the inevitability of power relations in forging identity in a surveillance society. In this sense, Ball ("Elements") concludes that surveillance practices capture and create different versions of life as lived by surveilled subjects. She refers to actors within the surveilled domain as 'intermediaries', where meaning is inscribed, where technologies re-present information, where power/resistance operates, and where networks are bound together to sometimes distort as well as reiterate patterns of hegemony ("Elements" 93). While surveillance is often connected with technology, it does not, however, determine or decide how we code or employ our data. New technologies rarely enter passive environments of total inequality, for they become enmeshed in complex, pre-existing power and value systems (Marx). With surveillance, there is an emphasis on classificatory powers in our contemporary world, "as persons and groups are often risk-profiled in the commercial sphere which rates their social contributions and sorts them into systems" (Lyon, Terrorism 2). Lyon (Terrorism) contends that the surveillance society is one that is organised and structured using surveillance-based techniques recorded by technologies, on behalf of the organisations and governments that structure our society. This information is then sorted, sifted and categorised, and used as a basis for decisions which affect our life chances (Wood and Ball).
The emergence of pervasive, automated and discriminatory mechanisms for risk profiling and social categorising constitutes a significant means of reproducing and reinforcing social, economic and cultural divisions in information societies. Such automated categorisation, Lyon (Terrorism) warns, has consequences for everyone, especially in the face of the new anti-terror measures enacted after September 11. In tandem with this, Bauman points out that a few suicidal murderers on the loose will be quite enough to recycle thousands of innocents into the "usual suspects". In no time, a few iniquitous individual choices will be reprocessed into the attributes of a "category"; a category easily recognisable by, for instance, a suspiciously dark skin or a suspiciously bulky rucksack (the kind of object which CCTV cameras are designed to note and passers-by are told to be vigilant about). And passers-by are keen to oblige. Since the terrorist atrocities on the London Underground, the volume of incidents classified as "racist attacks" rose sharply around the country (122; emphasis added). Bauman, drawing on Lyon, asserts that the understandable desire for security, combined with the pressure to adopt different kinds of systems, "will create a culture of control that will colonise more areas of life with or without the consent of the citizen" (123). This means that the inhabitants of the urban space, whether citizens, workers or consumers with no terrorist ambitions whatsoever, will discover that their opportunities are more circumscribed by the subject positions or categories which are imposed on them. Bauman cautions that for some these categories may be extremely prejudicial, restricting them from consumer choices because of credit ratings or, more insidiously, relegating them to second-class status because of their colour or ethnic background (124). Joseph Pugliese, in linking visual regimes of racial profiling and the shooting of Jean Charles de Menezes in the aftermath of the 7/7 bombings in London, suggests that the discursive relations of power and visuality are inextricably bound. Pugliese argues that racial profiling creates a regime of visuality which fundamentally inscribes our physiology of perceptions with stereotypical images. He applies this analogy to Menezes running down the platform, where the retina transforms him into the "hallucinogenic figure of an Asian Terrorist" (Pugliese 8). With globalisation and the proliferation of ICTs, borders and boundaries are no longer sacrosanct, and as such risks are managed by enacting 'smart borders' through new technologies, with huge databases behind the scenes processing information about individuals and their journeys through the profiling of body parts with, for example, iris scans (Wood and Ball 31). Such body profiling technologies are used to create watch lists of dangerous passengers or identity groups who might pose a greater 'risk'. The body in a surveillance society can be dissected into parts, profiled and coded through technology. These disparate codings of body parts can be assembled (or selectively omitted) to construct and represent whole bodies in our information society in order to ascertain risk. The selection and circulation of knowledge will also determine who gets slotted into the various categories that a surveillance society creates.
Conclusion
When the corporeal body is subsumed into a web of surveillance, it often raises questions about the deterministic nature of technology.
The question is a long-standing one in our modern consciousness. We are apprehensive about according technology too much power, and yet it is implicated in contemporary power relationships, where it is suspended amidst human motive, agency and anxiety. The emergence of surveillance societies, the co-optation of bodies in surveillance schemas, and the construction of the body through data in everyday transactions convey both the vulnerability of the human condition and its complicity in maintaining the power arrangements in society. Bauman, in citing Jacques Ellul and Hannah Arendt, points out that we suffer a 'moral lag' in so far as technology and society are concerned, for we often ruminate on the consequences of our actions and motives only as afterthoughts, without realising at this point of existence that the "actions we take are most commonly prompted by the resources (including technology) at our disposal" (91).
References
Abrams, Philip. Historical Sociology. Shepton Mallet, UK: Open Books, 1982.
Altheide, David. "Consuming Terrorism." Symbolic Interaction 27.3 (2004): 289-308.
Arendt, Hannah. Eichmann in Jerusalem: A Report on the Banality of Evil. London: Faber & Faber, 1963.
Bauman, Zygmunt. Liquid Fear. Cambridge, UK: Polity, 2006.
Ball, Kristie. "Elements of Surveillance: A New Framework and Future Research Direction." Information, Communication and Society 5.4 (2002): 573-90.
———. "Organization, Surveillance and the Body: Towards a Politics of Resistance." Organization 12 (2005): 89-108.
Dee, Mike. "The New Citizenship of the Risk and Surveillance Society – From a Citizenship of Hope to a Citizenship of Fear?" Paper presented to the Social Change in the 21st Century Conference, Queensland University of Technology, Queensland, Australia, 22 Nov. 2002. 14 April 2007 <http://eprints.qut.edu.au/archive/00005508/02/5508.pdf>.
Deleuze, Gilles, and Felix Guattari. A Thousand Plateaus. Minneapolis: U of Minnesota P, 1987.
Fopp, Rodney. "Increasing the Potential for Gaze, Surveillance and Normalization: The Transformation of an Australian Policy for People and Homeless." Surveillance and Society 1.1 (2002): 48-65.
Foucault, Michel. Discipline and Punish: The Birth of the Prison. London: Allen Lane, 1977.
Giddens, Anthony. Modernity and Self-Identity: Self and Society in the Late Modern Age. Stanford: Stanford UP, 1991.
Gandy, Oscar. The Panoptic Sort: A Political Economy of Personal Information. Boulder, CO: Westview, 1997.
———. "Data Mining and Surveillance in the Post 9/11 Environment." The Intensification of Surveillance: Crime, Terrorism and War in the Information Age. Eds. Kristie Ball and Frank Webster. Sterling, VA: Pluto Press, 2003.
Goffman, Erving. Relations in Public. Harmondsworth: Penguin, 1971.
Graham, Stephen, and Simon Marvin. Splintering Urbanism: Networked Infrastructures, Technological Mobilities and the Urban Condition. New York: Routledge, 2001.
Hier, Sean. "Probing Surveillance Assemblage: On the Dialectics of Surveillance Practices as Process of Social Control." Surveillance and Society 1.3 (2003): 399-411.
Hayles, Katherine. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature and Informatics. Chicago: U of Chicago P, 1999.
Hubbard, Phil. "Fear and Loathing at the Multiplex: Everyday Anxiety in the Post-Industrial City." Capital & Class 80 (2003).
Latour, Bruno. Science in Action. Cambridge, Mass.: Harvard UP, 1987.
Lyon, David. The Electronic Eye: The Rise of Surveillance Society. Oxford: Polity Press, 1994.
———. "Terrorism and Surveillance: Security, Freedom and Justice after September 11 2001." Privacy Lecture Series, Queen's University, 12 Nov. 2001. 16 April 2007 <http://privacy.openflows.org/lyon_paper.html>.
———. "Surveillance Studies: Understanding Visibility, Mobility and the Phenetic Fix." Surveillance and Society 1.1 (2002): 1-7.
Metropolitan Police Authority (MPA). "Counter Terrorism: The London Debate." Press Release. 21 June 2006. 18 April 2007 <http://www.mpa.gov.uk.access/issues/comeng/Terrorism.htm>.
Pugliese, Joseph. "Asymmetries of Terror: Visual Regimes of Racial Profiling and the Shooting of Jean Charles de Menezes in the Context of the War in Iraq." Borderlands 5.1 (2006). 30 May 2007 <http://www.borderlandsejournal.adelaide.edu.au/vol15no1_2006/pugliese.htm>.
Marx, Gary. "A Tack in the Shoe: Neutralizing and Resisting the New Surveillance." Journal of Social Issues 59.2 (2003). 18 April 2007 <http://web.mit.edu/gtmarx/www/tack.html>.
Moores, Shaun. "Doubling of Place." Mediaspace: Place, Scale and Culture in a Media Age. Eds. Nick Couldry and Anna McCarthy. London: Routledge, 2004.
Monahan, Torin, ed. Surveillance and Security: Technological Politics and Power in Everyday Life. London: Routledge, 2006.
Norris, Clive, and Gary Armstrong. The Maximum Surveillance Society: The Rise of CCTV. Oxford: Berg, 1999.
O'Harrow, Robert. No Place to Hide. New York: Free Press, 2005.
Osuri, Goldie. "Media Necropower: Australian Media Reception and the Somatechnics of Mamdouh Habib." Borderlands 5.1 (2006). 30 May 2007 <http://www.borderlandsejournal.adelaide.edu.au/vol5no1_2006/osuri_necropower.htm>.
Rose, Nikolas. "Government and Control." British Journal of Criminology 40 (2000): 321-399.
Scannell, Paddy. Radio, Television and Modern Life. Oxford: Blackwell, 1996.
Smith, Benjamin. "In What Ways, and for What Reasons, Do We Inscribe Our Bodies?" 15 Nov. 1998. 30 May 2007 <http://www.bmezine.com/ritual/981115/Whatways.html>.
Stalder, Felix. "Privacy Is Not the Antidote to Surveillance." Surveillance and Society 1.1 (2002): 120-124.
Umiker-Sebeok, Jean. "Power and the Construction of Gendered Spaces." Indiana University-Bloomington. 14 April 2007 <http://www.slis.indiana.edu/faculty/umikerse/papers/power.html>.
William, Bogard. The Simulation of Surveillance: Hypercontrol in Telematic Societies. Cambridge: Cambridge UP, 1996.
Wood, Kristie, and David M. Ball, eds. "A Report on the Surveillance Society." Surveillance Studies Network, UK, Sep. 2006. 14 April 2007 <http://www.ico.gov.uk/upload/documents/library/data_protection/practical_application/surveillance_society_full_report_2006.pdf>.
Citation reference for this article
MLA Style: Ibrahim, Yasmin. "Commodifying Terrorism: Body, Surveillance and the Everyday." M/C Journal 10.3 (2007). <http://journal.media-culture.org.au/0706/05-ibrahim.php>.
APA Style: Ibrahim, Y. (Jun. 2007). "Commodifying Terrorism: Body, Surveillance and the Everyday." M/C Journal, 10(3). Retrieved from <http://journal.media-culture.org.au/0706/05-ibrahim.php>.
APA, Harvard, Vancouver, ISO, and other styles
