
Journal articles on the topic 'Domain Name System over QUIC'

Consult the top 50 journal articles for your research on the topic 'Domain Name System over QUIC'.

1

Victors, Jesse, Ming Li, and Xinwen Fu. "The Onion Name System." Proceedings on Privacy Enhancing Technologies 2017, no. 1 (January 1, 2017): 21–41. http://dx.doi.org/10.1515/popets-2017-0003.

Abstract:
Tor onion services, also known as hidden services, are anonymous servers of unknown location and ownership that can be accessed through any Tor-enabled client. They have gained popularity over the years, but since their introduction in 2002 they still suffer from major usability challenges, primarily due to their cryptographically generated, non-memorable addresses. In response to this difficulty, in this work we introduce the Onion Name System (OnioNS), a privacy-enhanced decentralized name resolution service. OnioNS allows Tor users to reference an onion service by a meaningful, globally unique, verifiable domain name chosen by the onion service administrator. We construct OnioNS as an optional backwards-compatible plugin for Tor, simplify our design and threat model by embedding OnioNS within the Tor network, and provide mechanisms for authenticated denial-of-existence with minimal networking costs. We introduce a lottery-like system to reduce the threat of land rushes and domain squatting. Finally, we provide a security analysis, integrate our software with the Tor Browser, and conduct performance tests of our prototype.
2

Banadaki, Yaser M. "Detecting Malicious DNS over HTTPS Traffic in Domain Name System using Machine Learning Classifiers." Journal of Computer Sciences and Applications 8, no. 2 (August 20, 2020): 46–55. http://dx.doi.org/10.12691/jcsa-8-2-2.

3

Antic, Djordje, and Mladen Veinovic. "Implementation of DNSSEC-secured name servers for ni.rs zone and best practices." Serbian Journal of Electrical Engineering 13, no. 3 (2016): 369–80. http://dx.doi.org/10.2298/sjee1603369a.

Abstract:
As a backbone of all communications over the Internet, DNS (Domain Name System) is crucial for all entities that need to be visible and provide services outside their internal networks. Public administration is a prime example for various services that have to be provided to citizens. This manuscript presents one possible approach, implemented in the administration of the City of Nis, for improving the robustness and resilience of external domain space, as well as securing it with DNSSEC (DNS Security Extensions).
4

Singanamalla, Sudheesh, Suphanat Chunhapanya, Jonathan Hoyland, Marek Vavruša, Tanya Verma, Peter Wu, Marwan Fayed, Kurtis Heimerl, Nick Sullivan, and Christopher Wood. "Oblivious DNS over HTTPS (ODoH): A Practical Privacy Enhancement to DNS." Proceedings on Privacy Enhancing Technologies 2021, no. 4 (July 23, 2021): 575–92. http://dx.doi.org/10.2478/popets-2021-0085.

Abstract:
The Internet's Domain Name System (DNS) responds to client hostname queries with corresponding IP addresses and records. Traditional DNS is unencrypted and leaks user information to onlookers. Recent efforts to secure DNS using DNS over TLS (DoT) and DNS over HTTPS (DoH) have been gaining traction, ostensibly protecting DNS messages from third parties. However, the small number of available public large-scale DoT and DoH resolvers has reinforced DNS privacy concerns, specifically that DNS operators could use query contents and client IP addresses to link activities with identities. Oblivious DNS over HTTPS (ODoH) safeguards against these problems. In this paper we implement and deploy interoperable instantiations of the protocol, construct a corresponding formal model and analysis, and evaluate the protocol's performance with wide-scale measurements. Results suggest that ODoH is a practical privacy-enhancing replacement for DNS.
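Both DoH and the ODoH layering described in this abstract transport standard DNS wire-format messages (RFC 1035) in the HTTPS body (RFC 8484). As a minimal illustrative sketch (the transaction ID and hostname are arbitrary, and no network I/O is shown), such a query message can be built by hand:

```python
import struct

def build_dns_query(hostname: str, qtype: int = 1) -> bytes:
    """Build a minimal RFC 1035 wire-format DNS query (QTYPE 1 = A record).

    This is the same binary message a DoH client POSTs to a resolver
    with Content-Type application/dns-message (RFC 8484).
    """
    header = struct.pack(
        ">HHHHHH",
        0x1234,   # transaction ID (illustrative constant)
        0x0100,   # flags: standard query, recursion desired
        1,        # QDCOUNT: one question
        0, 0, 0,  # no answer/authority/additional records
    )
    question = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"                               # root label terminates the name
    question += struct.pack(">HH", qtype, 1)  # QTYPE, QCLASS = IN
    return header + question

msg = build_dns_query("example.com")
```

An ODoH client would then additionally encrypt this message to the target resolver's public key before handing it to the proxy.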
5

Nakatsuka, Yoshimichi, Andrew Paverd, and Gene Tsudik. "PDoT." Digital Threats: Research and Practice 2, no. 1 (March 2021): 1–22. http://dx.doi.org/10.1145/3431171.

Abstract:
Security and privacy of the Internet Domain Name System (DNS) have been longstanding concerns. Recently, there is a trend to protect DNS traffic using Transport Layer Security (TLS). However, at least two major issues remain: (1) How do clients authenticate DNS-over-TLS endpoints in a scalable and extensible manner? and (2) How can clients trust endpoints to behave as expected? In this article, we propose a novel Private DNS-over-TLS (PDoT) architecture. PDoT includes a DNS Recursive Resolver (RecRes) that operates within a Trusted Execution Environment. Using Remote Attestation, DNS clients can authenticate and receive strong assurance of the trustworthiness of the PDoT RecRes. We provide an open-source proof-of-concept implementation of PDoT and experimentally demonstrate that its latency and throughput match those of the popular Unbound DNS-over-TLS resolver.
6

Díaz-Sánchez, Daniel, Andrés Marín-Lopez, Florina Almenárez Mendoza, and Patricia Arias Cabarcos. "DNS/DANE Collision-Based Distributed and Dynamic Authentication for Microservices in IoT †." Sensors 19, no. 15 (July 26, 2019): 3292. http://dx.doi.org/10.3390/s19153292.

Abstract:
IoT devices provide real-time data to a rich ecosystem of services and applications. The volume of data and the involved subscribe/notify signaling will likely become a challenge also for access and core networks. To alleviate the core of the network, other technologies like fog computing can be used. On the security side, designers of IoT low-cost devices and applications often reuse old versions of development frameworks and software components that contain vulnerabilities. Many server applications today are designed using microservice architectures where components are easier to update. Thus, IoT can benefit from deploying microservices in the fog as it offers the required flexibility for the main players of ubiquitous computing: nomadic users. In such deployments, IoT devices need the dynamic instantiation of microservices. IoT microservices require certificates so they can be accessed securely. Thus, every microservice instance may require a newly-created domain name and a certificate. The DNS-based Authentication of Named Entities (DANE) extension to Domain Name System Security Extensions (DNSSEC) allows linking a certificate to a given domain name. Thus, the combination of DNSSEC and DANE provides microservices’ clients with secure information regarding the domain name, IP address, and server certificate of a given microservice. However, IoT microservices may be short-lived since devices can move from one local fog to another, forcing DNSSEC servers to sign zones whenever new changes occur. Considering DNSSEC and DANE were designed to cope with static services, coping with IoT dynamic microservice instantiation can throttle the scalability in the fog. To overcome this limitation, this article proposes a solution that modifies the DNSSEC/DANE signature mechanism using chameleon signatures and defining a new soft delegation scheme. 
Chameleon signatures are signatures computed over a chameleon hash, which has a key property: a secret trapdoor function can be used to compute collisions of the hash. Since the hash is maintained, the signature does not have to be computed again. In the soft delegation scheme, DNS servers obtain a trapdoor that allows performing changes in a constrained zone without affecting normal DNS operation. In this way, a server can receive this soft delegation and modify the DNS zone to cope with frequent changes such as dynamic microservice instantiation. Changes in the soft-delegated zone are much faster and do not require the intervention of the DNS primary servers of the zone.
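The collision mechanism can be sketched with a classic discrete-log chameleon hash. The parameters below are toy-sized and purely illustrative (a real deployment needs cryptographic group sizes), but the trapdoor algebra is the standard one: H(m, r) = g^m · h^r mod p with h = g^x, and knowledge of the trapdoor x lets the holder rewrite a record while keeping the hash, and hence the parent's signature over it, unchanged. The record strings are invented examples:

```python
import hashlib

# Toy discrete-log chameleon hash (illustrative parameters, NOT secure sizes).
# Public key: (p, q, g, h) with h = g^x mod p; x is the secret trapdoor.
p, q, g = 167, 83, 4   # p = 2q + 1; g generates the order-q subgroup
x = 17                 # trapdoor held by the soft-delegated DNS server
h = pow(g, x, p)

def digest(msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % q

def chameleon_hash(msg: bytes, r: int) -> int:
    return (pow(g, digest(msg), p) * pow(h, r, p)) % p

def find_collision(old: bytes, r_old: int, new: bytes) -> int:
    # Solve m_old + x*r_old = m_new + x*r_new (mod q) for r_new.
    return (r_old + (digest(old) - digest(new)) * pow(x, -1, q)) % q

old_record = b"svc1.fog.example. A 10.0.0.1"   # hypothetical zone entry
new_record = b"svc1.fog.example. A 10.0.0.9"   # updated microservice address
r_old = 29
r_new = find_collision(old_record, r_old, new_record)
# The hash (and hence any signature over it) is unchanged:
same = chameleon_hash(old_record, r_old) == chameleon_hash(new_record, r_new)
```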
7

Kadhim, Huda Yousif, Karim Hashim Al-saedi, and Mustafa Dhiaa Al-Hassani. "Mobile Phishing Websites Detection and Prevention Using Data Mining Techniques." International Journal of Interactive Mobile Technologies (iJIM) 13, no. 10 (September 25, 2019): 205. http://dx.doi.org/10.3991/ijim.v13i10.10797.

Abstract:
The widespread use of smartphones nowadays makes them vulnerable to phishing. Phishing is the process of trying to steal user information over the Internet by claiming to be a trusted entity, thereby accessing and stealing the victim's data (user name, password, and credit card details). Consequently, a mobile phishing detection system has become an urgent need, and this is what we introduce in this paper: a system to detect phishing websites on Android phones that predicts and prevents phishing websites from deceiving users, utilizing data mining techniques to predict whether a website is phishing or not, relying on a set of factors (URL-based features, HTML-based features, and domain-based features). The results show the system's effectiveness in predicting phishing websites, with 97% prediction accuracy.
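As a hedged sketch of the kind of URL-based factors such detectors use (this feature subset, the rule-based threshold, and the example URLs are illustrative, not the paper's actual feature set or trained model):

```python
import re
from urllib.parse import urlparse

def url_features(url: str) -> dict:
    """Extract a few URL-based features commonly used by phishing
    classifiers (illustrative subset only)."""
    host = urlparse(url).hostname or ""
    return {
        "uses_ip": bool(re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", host)),
        "has_at_symbol": "@" in url,
        "long_url": len(url) > 75,
        "many_subdomains": host.count(".") > 2,
        "no_https": not url.startswith("https://"),
    }

def looks_phishy(url: str, threshold: int = 2) -> bool:
    # Simple rule-based stand-in for a trained data-mining model.
    return sum(url_features(url).values()) >= threshold

f = url_features("http://192.168.0.5/login@secure.bank.com")
```

In the paper these factors feed a classifier rather than a fixed threshold; the sketch only shows the feature-extraction step.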
8

Stvan, Laurel Smith. "The contingent meaning of –ex brand names in English." Corpora 1, no. 2 (November 2006): 217–50. http://dx.doi.org/10.3366/cor.2006.1.2.217.

Abstract:
The –ex string found in English product and company names (e.g., Kleenex, Timex and Virex) is investigated to discover whether this ending has consistent meaning across coined words and to observe any constraints on its attachment and interpretation. Seven hundred and ninety-three –ex brand name types were collected and examined, derived from American English texts in the Brown and Frown corpora as well as over 600 submissions to the US Patent and Trademark Office's Trademark Electronic Search System database (TESS); American native English speakers were also surveyed to assess interpretations of –ex meaning in brands. Analysis of these coined terms reveals that –ex meaning is contingent, reflecting assumptions by a given speaker of a referent's domain in a given time, region and culture. Yet, despite ambiguities in its interpretation, the –ex form shows increasing use.
9

Zain ul Abideen, Muhammad, Shahzad Saleem, and Madiha Ejaz. "VPN Traffic Detection in SSL-Protected Channel." Security and Communication Networks 2019 (October 29, 2019): 1–17. http://dx.doi.org/10.1155/2019/7924690.

Abstract:
In recent times, secure communication protocols over the web such as HTTPS (Hypertext Transfer Protocol Secure) are widely used instead of plain web communication protocols like HTTP (Hypertext Transfer Protocol). HTTPS provides end-to-end encryption between the user and service. Nowadays, organizations use network firewalls and/or intrusion detection and prevention systems (IDPS) to analyze network traffic in order to detect and protect against attacks and vulnerabilities. Depending on the size of the organization, these devices may differ in their capabilities. Simple network intrusion detection systems (NIDS) and firewalls generally have no feature to inspect HTTPS or encrypted traffic, so they rely on unencrypted traffic to manage the encrypted payload of the network. Recent and powerful next-generation firewalls have a Secure Sockets Layer (SSL) inspection feature, but they are expensive and may not be suitable for every organization. A virtual private network (VPN) is a service which hides real traffic by creating an SSL-protected channel between the user and server. Every Internet activity is then performed under the established SSL tunnel. A user inside the network may use VPN services with malicious intent, or to hide their activity from the organization's network security administration, and thereby bypass the filters or signatures applied on network security devices. These services may be the source of a new virus or worm injected inside the network, or a gateway that facilitates information leakage. In this paper, we propose a novel approach to detect VPN activity inside the network. The proposed system analyzes the communication between user and server to extract features from the network, transport, and application layers which are not encrypted, and classifies the incoming traffic as malicious (i.e., VPN traffic) or standard traffic. Network traffic is analyzed and classified using DNS (Domain Name System) packets and HTTPS (Hypertext Transfer Protocol Secure) based traffic. Once traffic is classified, the connection is analyzed based on the server's IP, the TCP port connected, the domain name, and the server name inside the HTTPS connection. This helps in verifying legitimate connections and flags the VPN-based traffic. We worked on the top five freely available VPN services and analyzed their traffic patterns; the results show successful detection of VPN activity performed by users. We analyzed the activity of five users using some sort of VPN service in their Internet activity inside the network. Out of a total of 729 connections made by different users, 329 connections were classified as legitimate activity, marking the remaining 400 connections as VPN-based. The proposed system is lightweight enough to keep overhead minimal, both in network and resource utilization, and requires no specialized hardware.
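The correlation step described here (checking a connection's server IP and HTTPS server name against previously observed DNS answers) can be sketched as follows; the data shapes, labels, and single rule are simplified assumptions, not the paper's exact pipeline:

```python
# Sketch: a connection is flagged when the destination IP was never seen
# in observed DNS answers, or the TLS server name (SNI) does not match
# any name that resolved to that IP.
def classify_connection(dst_ip, sni, dns_log):
    names_for_ip = {name for name, ip in dns_log if ip == dst_ip}
    if not names_for_ip:
        return "vpn-suspect"   # no preceding DNS lookup for this IP
    if sni and sni not in names_for_ip:
        return "vpn-suspect"   # SNI disagrees with observed resolution
    return "legitimate"

# Hypothetical observed DNS traffic: (query name, answered IP) pairs.
dns_log = [("www.example.com", "93.184.216.34")]
v1 = classify_connection("93.184.216.34", "www.example.com", dns_log)
v2 = classify_connection("198.51.100.7", "vpn.node", dns_log)
```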
10

Ivanov, Ievgen, Artur Korniłowicz, and Mykola Nikitchenko. "On an Algorithmic Algebra over Simple-Named Complex-Valued Nominative Data." Formalized Mathematics 26, no. 2 (July 1, 2018): 149–58. http://dx.doi.org/10.2478/forma-2018-0012.

Abstract:
This paper continues the formalization in the Mizar system [2, 1] of basic notions of the composition-nominative approach to program semantics [14], which was started in [8, 12, 10]. The composition-nominative approach studies mathematical models of computer programs and data on various levels of abstraction and generality and provides tools for reasoning about their properties. In particular, data in computer systems are modeled as nominative data [15]. Besides formalization of semantics of programs, certain elements of the composition-nominative approach were applied to abstract systems in a mathematical systems theory [4, 6, 7, 5, 3]. In the paper we give a formal definition of the notions of a binominative function over given sets of names and values (i.e. a partial function which maps simple-named complex-valued nominative data to such data) and a nominative predicate (a partial predicate on simple-named complex-valued nominative data). The sets of such binominative functions and nominative predicates form the carrier of the generalized Glushkov algorithmic algebra for simple-named complex-valued nominative data [15]. This algebra can be used to formalize algorithms which operate on various data structures (such as multidimensional arrays, lists, etc.) and reason about their properties. In particular, we formalize the operations of this algebra which require a specification of a data domain and which include the existential quantifier, the assignment composition, the composition of superposition into a predicate, the composition of superposition into a binominative function, and the name-checking predicate. The details of the formalization of nominative data and the operations of the algorithmic algebra over them are described in [11, 13, 9].
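As a rough illustration only (the Mizar formalization is far more general and fully partial), simple-named complex-valued nominative data can be pictured as nested dictionaries, with naming, denaming, and assignment composition acting on them; the function names below are informal stand-ins for the paper's operations:

```python
# Nominative data: names are simple (flat keys), values may themselves
# be nominative data (nested dicts) or atomic values.
def naming(name, d):
    """Wrap data d under the name `name`."""
    return {name: d}

def denaming(name, d):
    """Partial retrieval of the value bound to `name`; None ~ undefined."""
    return d.get(name)

def assignment(name, f):
    """AS^name(f): evaluate binominative function f on the data and
    bind the result to `name`, leaving other bindings intact."""
    def composed(d):
        return {**d, name: f(d)}
    return composed

d = {"x": 2, "y": {"z": 3}}
inc_x = assignment("x", lambda data: data["x"] + 1)
```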
11

Al-Nawasrah, Ahmad, Ammar Ali Almomani, Samer Atawneh, and Mohammad Alauthman. "A Survey of Fast Flux Botnet Detection With Fast Flux Cloud Computing." International Journal of Cloud Applications and Computing 10, no. 3 (July 2020): 17–53. http://dx.doi.org/10.4018/ijcac.2020070102.

Abstract:
A botnet refers to a set of compromised machines controlled distantly by an attacker. Botnets are considered the basis of numerous security threats around the world. Command and control (C&C) servers are the backbone of botnet communications, in which bots send a report to the botmaster, and the latter sends attack orders to those bots. Botnets are also categorized according to their C&C protocols, such as internet relay chat (IRC) and peer-to-peer (P2P) botnets. A domain name system (DNS) method known as fast-flux is used by bot herders to cover malicious botnet activities and increase the lifetime of malicious servers by quickly changing the IP addresses of the domain names over time. Several methods have been suggested to detect fast-flux domains. However, these methods achieve low detection accuracy, especially for zero-day domains. They also entail a significantly long detection time and consume high memory storage. In this survey, we present an overview of the various techniques used to detect fast-flux domains according to solution scopes, namely, host-based, router-based, DNS-based, and cloud computing techniques. This survey provides an understanding of the problem, its current solution space, and the future research directions expected.
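A minimal sketch of the DNS-based detection idea the survey covers: a fast-flux domain rotates through many distinct IPs under short TTLs. The thresholds and lookup records here are illustrative, not drawn from any surveyed system:

```python
# Each lookup is a (ttl_seconds, answered_ip) pair observed for a domain.
def is_fast_flux(lookups, ip_threshold=5, ttl_threshold=300):
    """Flag a domain whose answers rotate through many distinct IPs
    while every record carries a short TTL."""
    ips = {ip for _ttl, ip in lookups}
    short_ttl = all(ttl <= ttl_threshold for ttl, _ip in lookups)
    return len(ips) >= ip_threshold and short_ttl

benign = [(3600, "203.0.113.10")] * 6                     # stable answer, long TTL
fluxy = [(60, f"198.51.100.{i}") for i in range(8)]       # 8 IPs, 60 s TTL
```

Real detectors in the survey add many more features (ASN diversity, zero-day handling, timing), which is precisely where the accuracy and detection-time trade-offs it discusses arise.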
12

Papadopoulos, Pavlos, Nikolaos Pitropakis, William J. Buchanan, Owen Lo, and Sokratis Katsikas. "Privacy-Preserving Passive DNS." Computers 9, no. 3 (August 12, 2020): 64. http://dx.doi.org/10.3390/computers9030064.

Abstract:
The Domain Name System (DNS) was created to resolve easily remembered names to the IP addresses of web servers. When it was initially created, security was not a major concern; nowadays, this lack of inherent security and trust has exposed the global DNS infrastructure to malicious actors. The passive DNS data collection process creates a database containing various DNS data elements, some of which are personal and need to be protected to preserve the privacy of the end users. To this end, we propose the use of distributed ledger technology. We use Hyperledger Fabric to create a permissioned blockchain, which only authorized entities can access. The proposed solution supports queries for storing and retrieving data from the blockchain ledger, allowing the use of the passive DNS database for further analysis, e.g., for the identification of malicious domain names. Additionally, it effectively protects the DNS personal data from unauthorized entities, including the administrators that can act as potential malicious insiders, and allows only the data owners to perform queries over these data. We evaluated our proposed solution by creating a proof-of-concept experimental setup that passively collects DNS data from a network and then uses the distributed ledger technology to store the data in an immutable ledger, thus providing a full historical overview of all the records.
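Independent of the ledger back-end, the privacy requirement above implies that personal elements of a passive-DNS record are transformed before storage. A hedged sketch of one such step (the field names and keyed-hash pseudonymisation are assumptions for illustration, not the paper's schema):

```python
import hashlib

def anonymise_record(record: dict, salt: bytes) -> dict:
    """Pseudonymise the querying client's IP with a keyed hash so
    analysts can still correlate queries without learning the address."""
    out = dict(record)
    out["client"] = hashlib.sha256(
        salt + record["client"].encode()
    ).hexdigest()[:16]
    return out

rec = {"client": "10.1.2.3", "qname": "example.com", "rdata": "93.184.216.34"}
anon = anonymise_record(rec, b"ledger-secret")
```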
13

Hong, Jeongkwan, Minho Won, and Hyunju Ro. "The Molecular and Pathophysiological Functions of Members of the LNX/PDZRN E3 Ubiquitin Ligase Family." Molecules 25, no. 24 (December 15, 2020): 5938. http://dx.doi.org/10.3390/molecules25245938.

Abstract:
The ligand of Numb protein-X (LNX) family, also known as the PDZRN family, is composed of four discrete RING-type E3 ubiquitin ligases (LNX1, LNX2, LNX3, and LNX4), and LNX5, which may not act as an E3 ubiquitin ligase owing to its lack of the RING domain. As the name implies, LNX1 and LNX2 were initially studied for exerting E3 ubiquitin ligase activity on their substrate Numb protein, whose stability was negatively regulated by LNX1 and LNX2 via the ubiquitin-proteasome pathway. LNX proteins may have versatile molecular, cellular, and developmental functions, considering the fact that, besides these proteins, no other E3 ubiquitin ligase has multiple PDZ (PSD95, DLGA, ZO-1) domains, which are regarded as important protein-interacting modules. Thus far, various proteins have been isolated as LNX-interacting proteins. Evidence from studies performed over the last two decades has suggested that members of the LNX family play various pathophysiological roles, primarily by modulating the function of substrate proteins involved in several different intracellular or intercellular signaling cascades. As the binding partners of RING-type E3s, a large number of substrates of LNX proteins undergo degradation through ubiquitin-proteasome system (UPS) dependent or lysosomal pathways, potentially altering key signaling pathways. In this review, we highlight recent and relevant findings on the molecular and cellular functions of the members of the LNX family and discuss the role of the erroneous regulation of these proteins in disease progression.
14

Facco Rodrigues, Vinicius, Ivam Guilherme Wendt, Rodrigo da Rosa Righi, Cristiano André da Costa, Jorge Luis Victória Barbosa, and Antonio Marcos Alberti. "Brokel: Towards enabling multi-level cloud elasticity on publish/subscribe brokers." International Journal of Distributed Sensor Networks 13, no. 8 (August 2017): 155014771772886. http://dx.doi.org/10.1177/1550147717728863.

Abstract:
Internet of Things networks, together with the data that flow between networked smart devices, are growing at unprecedented rates. Often brokers, or intermediary nodes, combined with the publish/subscribe communication model represent one of the most used strategies to enable Internet of Things applications. From a scalability viewpoint, cloud computing and its main feature, resource elasticity, appear as an alternative to the use of over-provisioned clusters, which normally present a fixed number of resources. However, we perceive that today the elasticity and Pub/Sub duet presents several limitations, mainly related to application rewriting, single-cloud elasticity limited to one level, and false-positive resource reorganization actions. Aiming at bypassing the aforesaid problems, this article proposes Brokel, a multi-level elasticity model for Pub/Sub brokers. Users, things, and applications use Brokel as a centralized messaging service broker, but in the back-end the middleware provides better performance and cost (used resources × performance) on message delivery using virtual machine (VM) replication. Our scientific contribution regards the multi-level orchestrator and broker, and the addition of a geolocation domain name system service to define the most suitable entry point in the Pub/Sub architecture. Different execution scenarios and metrics were employed to evaluate a Brokel prototype using VMs that encapsulate the functionalities of the Mosquitto and RabbitMQ brokers. The obtained results were encouraging in terms of application time, message throughput, and cost (application time × resource usage) when comparing elastic and non-elastic executions.
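The elasticity decision at the heart of such a model can be sketched as a simple threshold rule; the metric, thresholds, and interface below are assumptions for illustration, not Brokel's actual orchestrator logic:

```python
# Threshold-based elasticity sketch: an orchestrator watches broker load
# and decides whether to add or remove a VM replica.
def scale_decision(cpu_loads, upper=0.8, lower=0.3):
    """cpu_loads: per-replica CPU utilisation in [0, 1]."""
    avg = sum(cpu_loads) / len(cpu_loads)
    if avg > upper:
        return "scale-out"   # replicate another broker VM
    if avg < lower and len(cpu_loads) > 1:
        return "scale-in"    # consolidate replicas
    return "steady"

d1 = scale_decision([0.9, 0.85, 0.95])
d2 = scale_decision([0.1, 0.2])
d3 = scale_decision([0.5])
```

A multi-level design applies such decisions at more than one tier (e.g., per broker and per orchestrator), which is where the false-positive reorganization problem the abstract mentions becomes delicate.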
15

Klots, Y. P., I. V. Muliar, V. M. Cheshun, and O. V. Burdyug. "USE OF DISTRIBUTED HASH TABLES TO PROVIDE ACCESS TO CLOUD SERVICES." Collection of scientific works of the Military Institute of Kyiv National Taras Shevchenko University, no. 67 (2020): 85–95. http://dx.doi.org/10.17721/2519-481x/2020/67-09.

Abstract:
The article discusses the urgent problem of granting access to the services of a distributed cloud system; in particular, the peer-to-peer distributed cloud system is characterized. The process of interaction of the main components is described for accessing a web resource by domain name. The distribution of resources between nodes of a peer-to-peer distributed cloud system, with the subsequent provision of services on request, is implemented using the Kademlia protocol over a local network or the Internet, and comprises processes for publishing the resource by its owner at the initial stage, replication, and directly providing access to resources. The application of modern adaptive information security systems does not allow full control over the information flows of the cloud computing environment, since they function at the upper levels of the hierarchy. Therefore, to create effective mechanisms for protecting software in a cloud computing environment, it is necessary to develop new threat models and to create methods for detecting computer attacks that allow hidden and potentially dangerous processes of information interaction to be identified promptly. Access rules form the basis of the security policy and include restrictions on the mechanisms of process initialization. Under the developed operations model, the formalized description of hidden threats is reduced to the emergence of context-dependent transitions in the transaction multigraph. The method of granting access to the services of the distributed cloud system is substantiated. A Distributed Hash Table (DHT) infrastructure is used to find a replication node that holds a replica of the requested resource or part of it. The study also identifies the stages of node validation. The processes of adding a new node, validating authenticity, publishing a resource, and accessing a resource are described as step-by-step sequences of actions within the proposed method, with a graphical description of the information flows and of the interaction between processes and objects.
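Kademlia, the lookup protocol named above, routes queries by XOR distance between a resource key and node IDs: a lookup walks toward the numerically closest nodes, which is also where replicas are published. A minimal sketch with small illustrative IDs:

```python
# Kademlia's metric: the distance between two 160-bit IDs is their XOR,
# interpreted as an integer. Replicas of a key live on the k closest nodes.
def xor_distance(a: int, b: int) -> int:
    return a ^ b

def closest_nodes(key: int, node_ids, k: int = 2):
    """Return the k node IDs closest to `key` under the XOR metric."""
    return sorted(node_ids, key=lambda n: xor_distance(key, n))[:k]

nodes = [0b0001, 0b0111, 0b1010, 0b1100]   # toy 4-bit node IDs
nearest = closest_nodes(0b0110, nodes)
```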
16

Dos Santos, Diego Pinto, Candice Müller, Fabio D'Agostini, Maria Cristina F. De Castro, and Fernando C. De Castro. "Symbol Synchronization for OFDM Receivers with FFT Transport Delay Compensation." Journal of Circuits, Systems and Computers 24, no. 05 (April 8, 2015): 1550076. http://dx.doi.org/10.1142/s0218126615500760.

Full text
Abstract:
This paper proposes a new blind approach for time synchronization of orthogonal frequency division multiplexing (OFDM) receivers (RX). It is widely known that the OFDM technique has been successfully applied to a wide variety of digital communications systems over the past several years — IEEE 802.16 WiMax, 3GPP-LTE, IEEE 802.22, DVB-T/H and ISDB-T, to name a few. We focus on synchronization for the ISDB-T digital television system, currently adopted by several South American countries. The proposed approach uses coarse synchronization to estimate the initial time reference; the fine synchronization then keeps tracking the transmitter (TX) time reference. The innovation of the proposed approach lies in the closed-loop control stabilization of the fine synchronization. It uses a Smith predictor and a differential estimator, which estimates the difference between TX and RX clock frequencies. The proposed method allows the RX to track the TX time reference with high precision ([Formula: see text] sample fraction). Thus, the carrier phase-rotation issue due to an incorrect time reference is minimized, and it does not affect the proper demodulation of the RX IQ symbols. The RX internal time reference is adjusted based on pilot symbols, called scattered pilots (SPs) in the context of the ISDB-T standard, which are inserted in the frequency domain at the inverse fast Fourier transform (IFFT) input in the TX. The averaged progressive phase rotation of the received SPs at the fast Fourier transform (FFT) output is used to compute the time misalignment. This misalignment is used to adjust the RX fine time synchronism. Furthermore, the proposed method has been implemented in an ISDB-T RX. The FPGA-based receiver has been evaluated over several multipath, Doppler and AWGN channel models.
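The pilot-based misalignment computation described here can be sketched numerically: a timing error of d samples rotates subcarrier k by exp(−j2πkd/N), so the offset falls out of the average phase increment between adjacent scattered pilots. The FFT size, pilot spacing, and offset below are illustrative, not the ISDB-T parameters:

```python
import cmath
import math

N = 64  # illustrative FFT size

def pilot_phases(d, pilots):
    """Phase rotation a timing error of d samples imprints on each pilot."""
    return [cmath.exp(-2j * math.pi * k * d / N) for k in pilots]

def estimate_offset(rx_pilots, pilots):
    """Recover d from the average phase increment between adjacent pilots
    (valid while the increment stays within +/- pi)."""
    step = pilots[1] - pilots[0]
    incs = [rx_pilots[i + 1] * rx_pilots[i].conjugate()
            for i in range(len(rx_pilots) - 1)]
    avg = sum(incs) / len(incs)
    return -cmath.phase(avg) * N / (2 * math.pi * step)

pilots = [0, 12, 24, 36, 48]                    # evenly spaced pilot carriers
est = estimate_offset(pilot_phases(1.5, pilots), pilots)
```

Averaging the increments before taking the phase is what makes the estimate robust to per-carrier noise, mirroring the "averaged progressive phase rotation" in the abstract.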
17

Gornostaeva, Yuliya A. "Metaphorical Models of Discourse Designing of the Negative Image of Russia in Spanish Media." Vestnik of Northern (Arctic) Federal University. Series Humanitarian and Social Sciences, no. 6 (December 15, 2020): 47–53. http://dx.doi.org/10.37482/2687-1505-v062.

Abstract:
This article deals with the problem of discourse designing of the negative image of Russia in Spanish mass media using metaphorical models with the target domain “Russia”. Within the framework of this study, a metaphorical model is defined as a mental scheme of connections between different conceptual domains, which is formed and fixed in the minds of native speakers. The material includes Spanish articles from reputable sources (El País and BBC News in Spanish) mentioning Russia (over 500,000 characters in total). With the use of discourse, corpus and lexical-semantic analysis as well as conceptual analysis of metaphorical models, the author found that the negative image of Russia is constructed in Spanish political media discourse through the following metaphorical models: “Russia is the country of Putin”, “Russia is the heir to the USSR” and “Russia is a pseudo-saviour of the world”. The following frames can be distinguished within the structure of the model “Russia is the country of Putin”: 1) Russia belongs to Putin; 2) Putin is a lifelong president; 3) Putinism is an ideology of Russian citizens. The metaphorical model “Russia is the heir to the USSR” includes the following stereotypical scenarios: 1) authoritarian, undemocratic methods of governance; 2) “Soviet” way of living; 3) backward economy. The frame that exists within the model “Russia is a pseudo-saviour of the world” actualizes the scenario of providing humanitarian aid for show. Among the lexical and grammatical representatives of these metaphorical models are the following: prepositional construction with de and the proper name Putin; adjectives with the semes “infinitely long, eternal” and “entire”; adjectives containing semes of distance/separateness; lexeme soviético, etc. The nominative field of the considered image contains both direct nominations, represented by the toponyms Rusia, Moscú and Kremlin, and metaphors that are mainly related to the size of the state. 
The stereotypical features of the source domain make it possible to create a negative image of Russia as an authoritarian, undemocratic state with an almost monarchical system, outdated Soviet governance methods, saviour-of-the-world ambitions, and ostentatious behaviour in the international arena.
18

Jara, Antonio J., Miguel A. Zamora, and Antonio Skarmeta. "Glowbal IP: An Adaptive and Transparent IPv6 Integration in the Internet of Things." Mobile Information Systems 8, no. 3 (2012): 177–97. http://dx.doi.org/10.1155/2012/819250.

Abstract:
The Internet of Things (IoT) requires scalability, extensibility and a transparent integration of multiple technologies in order to reach an efficient support for global communications, discovery and look-up, as well as access to services and information. To achieve these goals, it is necessary to enable a homogeneous and seamless machine-to-machine (M2M) communication mechanism allowing global access to devices, sensors and smart objects. In this respect, the proposed answer to these technological requirements is called Glowbal IP, which is based on a homogeneous access to the devices/sensors offered by the IPv6 addressing and core network. Glowbal IP's main advantages with regard to 6LoWPAN/IPv6 are not only that it presents a low overhead to reach a higher performance on a regular basis, but also that it determines the session and identifies global access by means of a session layer defined over the application layer. Technologies without any native support for IP are thereby adaptable to IP, e.g., IEEE 802.15.4 and Bluetooth Low Energy. This extension towards the IPv6 network opens access to the features and methods of the devices through a homogeneous access based on Web services (e.g. RESTful/CoAP). In addition to this, Glowbal IP offers global interoperability among the different devices, and interoperability with external servers and users' applications. All in all, it allows the storage of information related to the devices in the network through the extension of the Domain Name System (DNS) from the IPv6 core network, by adding the Service Directory extension (DNS-SD) to store information about the sensors, their properties and functionality. A step forward in network-based information systems is thereby reached, allowing a homogeneous discovery of, and access to, the devices of the IoT. Thus, the IoT capabilities are exploited by allowing an easier and more transparent integration of end-user applications with sensors for future evaluations and use cases.
APA, Harvard, Vancouver, ISO, and other styles
19

Dahlberg, Rasmus, Tobias Pulls, Tom Ritter, and Paul Syverson. "Privacy-Preserving & Incrementally-Deployable Support for Certificate Transparency in Tor." Proceedings on Privacy Enhancing Technologies 2021, no. 2 (January 29, 2021): 194–213. http://dx.doi.org/10.2478/popets-2021-0024.

Full text
Abstract:
The security of the web improved greatly throughout the last couple of years. A large majority of the web is now served encrypted as part of HTTPS, and web browsers accordingly moved from positive to negative security indicators that warn the user if a connection is insecure. A secure connection requires that the server presents a valid certificate that binds the domain name in question to a public key. A certificate used to be valid if signed by a trusted Certificate Authority (CA), but web browsers like Google Chrome and Apple's Safari have additionally started to mandate Certificate Transparency (CT) logging to overcome the weakest-link security of the CA ecosystem. Tor and the Firefox-based Tor Browser have yet to enforce CT. In this paper, we present privacy-preserving and incrementally-deployable designs that add support for CT in Tor. Our designs go beyond the currently deployed CT enforcements that are based on blind trust: if a user of Tor Browser is man-in-the-middled over HTTPS, we probabilistically detect and disclose cryptographic evidence of CA and/or CT log misbehavior. The first design increment allows Tor to play a vital role in the overall goal of CT: detecting mis-issued certificates and holding CAs accountable. We achieve this by randomly cross-logging a subset of certificates into other CT logs. The final increments hold misbehaving CT logs accountable, initially assuming that some logs are benign and then without any such assumption. Given that the current CT deployment lacks strong mechanisms to verify whether log operators play by the rules, exposing misbehavior is important for the web in general and not just Tor. The full design turns Tor into a system for maintaining a probabilistically-verified view of the CT log ecosystem, available from Tor's consensus. Each increment leading up to it preserves privacy due to how we use Tor.
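The "randomly cross-log a subset of certificates" increment can be illustrated with a small sampling sketch. The sampling rate, log names and function shape are assumptions for illustration, not the paper's actual design.

```python
import random

# Illustrative sketch of probabilistic cross-logging: with some small
# probability, a certificate seen in one CT log is re-submitted to a
# different, randomly chosen log, so that misbehavior becomes detectable.

def select_cross_log(seen_in_log, all_logs, rate, rng):
    """With probability `rate`, pick a different CT log to cross-log into."""
    if rng.random() >= rate:
        return None                      # most certificates are not cross-logged
    candidates = [log for log in all_logs if log != seen_in_log]
    return rng.choice(candidates) if candidates else None

logs = ["log-A", "log-B", "log-C"]
choice = select_cross_log("log-A", logs, rate=1.0, rng=random.Random(1))
```

Because the decision is random and per-certificate, an attacker cannot predict which certificates will be cross-checked, which is the essence of the probabilistic detection described in the abstract.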
APA, Harvard, Vancouver, ISO, and other styles
20

Dahlquist, Kam D., John David N. Dionisio, Ben G. Fitzpatrick, Nicole A. Anguiano, Anindita Varshneya, Britain J. Southwick, and Mihir Samdarshi. "GRNsight: a web application and service for visualizing models of small- to medium-scale gene regulatory networks." PeerJ Computer Science 2 (September 12, 2016): e85. http://dx.doi.org/10.7717/peerj-cs.85.

Full text
Abstract:
GRNsight is a web application and service for visualizing models of gene regulatory networks (GRNs). A gene regulatory network (GRN) consists of genes, transcription factors, and the regulatory connections between them which govern the level of expression of mRNA and protein from genes. The original motivation came from our efforts to perform parameter estimation and forward simulation of the dynamics of a differential equations model of a small GRN with 21 nodes and 31 edges. We wanted a quick and easy way to visualize the weight parameters from the model which represent the direction and magnitude of the influence of a transcription factor on its target gene, so we created GRNsight. GRNsight automatically lays out either an unweighted or weighted network graph based on an Excel spreadsheet containing an adjacency matrix where regulators are named in the columns and target genes in the rows, a Simple Interaction Format (SIF) text file, or a GraphML XML file. When a user uploads an input file specifying an unweighted network, GRNsight automatically lays out the graph using black lines and pointed arrowheads. For a weighted network, GRNsight uses pointed and blunt arrowheads, and colors the edges and adjusts their thicknesses based on the sign (positive for activation or negative for repression) and magnitude of the weight parameter. GRNsight is written in JavaScript, with diagrams facilitated by D3.js, a data visualization library. Node.js and the Express framework handle server-side functions. GRNsight’s diagrams are based on D3.js’s force graph layout algorithm, which was then extensively customized to support the specific needs of GRNs. Nodes are rectangular and support gene labels of up to 12 characters. The edges are arcs, which become straight lines when the nodes are close together. Self-regulatory edges are indicated by a loop. When a user mouses over an edge, the numerical value of the weight parameter is displayed. 
Visualizations can be modified by sliders that adjust the force graph layout parameters and through manual node dragging. GRNsight is best-suited for visualizing networks of fewer than 35 nodes and 70 edges, although it accepts networks of up to 75 nodes or 150 edges. GRNsight has general applicability for displaying any small, unweighted or weighted network with directed edges for systems biology or other application domains. GRNsight serves as an example of following and teaching best practices for scientific computing and complying with FAIR principles, using an open and test-driven development model with rigorous documentation of requirements and issues on GitHub. An exhaustive unit testing framework using Mocha and the Chai assertion library consists of around 160 automated unit tests that examine nearly 530 test files to ensure that the program is running as expected. The GRNsight application (http://dondi.github.io/GRNsight/) and code (https://github.com/dondi/GRNsight) are available under the open source BSD license.
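GRNsight's mapping from a weighted adjacency matrix (regulators in columns, targets in rows) to styled edges can be approximated by a short sketch. The field names and the thickness scaling are illustrative, not GRNsight's actual code; only the sign convention (positive weight, pointed arrowhead for activation; negative weight, blunt arrowhead for repression) follows the abstract.

```python
def matrix_to_edges(regulators, targets, weights):
    """Convert a weighted adjacency matrix into styled directed edges,
    loosely mimicking GRNsight's rules: positive weight -> pointed
    arrowhead (activation), negative weight -> blunt (repression)."""
    edges = []
    for i, target in enumerate(targets):
        for j, regulator in enumerate(regulators):
            w = weights[i][j]
            if w == 0:
                continue                     # no regulatory connection
            edges.append({
                "source": regulator,
                "target": target,
                "weight": w,
                "arrowhead": "pointed" if w > 0 else "blunt",
                "thickness": min(abs(w), 1.0),   # illustrative scaling cap
            })
    return edges

edges = matrix_to_edges(
    regulators=["CIN5", "GLN3"],
    targets=["CIN5", "HMO1"],
    weights=[[0.5, 0.0], [-0.8, 0.3]],
)
```

Note that a nonzero diagonal entry (here CIN5 regulating itself) produces the self-regulatory loop edge mentioned in the abstract.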
APA, Harvard, Vancouver, ISO, and other styles
21

Xing, Fei, Yi Ping Yao, Zhi Wen Jiang, and Bing Wang. "Fine-Grained Parallel and Distributed Spatial Stochastic Simulation of Biological Reactions." Advanced Materials Research 345 (September 2011): 104–12. http://dx.doi.org/10.4028/www.scientific.net/amr.345.104.

Full text
Abstract:
To date, discrete event stochastic simulations of large-scale biological reaction systems are extremely compute-intensive and time-consuming. Besides, it is widely accepted that spatial factors play a critical role in the dynamics of most biological reaction systems. The NSM (Next Sub-Volume Method), a spatial variation of Gillespie's stochastic simulation algorithm (SSA), has been proposed for spatially resolved stochastic simulation of those systems. While able to expose a high degree of parallelism, the NSM is inherently sequential and still suffers from low simulation speed. Fine-grained parallel execution is an elegant way to speed up sequential simulations. Thus, based on the discrete event simulation framework JAMES II, we design and implement a PDES (Parallel Discrete Event Simulation) time warp (TW) simulator to enable fine-grained parallel execution of spatial stochastic simulations of biological reaction systems using the ANSM (Abstract NSM), a parallel variation of the NSM. The simulation results for the classical Lotka-Volterra biological reaction system show that our time warp simulator obtains a remarkable parallel speed-up over sequential execution of the NSM.

I. Introduction

The goal of systems biology is to obtain system-level investigations of the structure and behavior of biological reaction systems by integrating biology with systems theory, mathematics and computer science [1][3], since isolated knowledge of the parts cannot explain the dynamics of the whole system. As the complement of "wet-lab" experiments, stochastic simulation, often called the "dry-computational" experiment, plays an increasingly important role in computational systems biology [2]. Among the many methods explored in systems biology, discrete event stochastic simulation is of great importance [4][5][6], since a great number of studies have shown that stochasticity, or "noise", has a crucial effect on the dynamics of small-population biological reaction systems [4][7]. Furthermore, recent research shows that stochasticity is important not only in biological reaction systems with small populations but also in some moderate/large-population systems [7]. To date, Gillespie's SSA [8] is widely considered the most accurate way to capture the dynamics of biological reaction systems, in place of traditional mathematical methods [5][9]. However, SSA-based stochastic simulation is confronted with two main challenges. First, this type of simulation is extremely time-consuming: when the number of species and reactions in the biological system is large, the SSA requires a huge number of steps to sample these reactions. Second, the assumption that the system is spatially homogeneous, or well-stirred, is hardly met in most real biological systems, where spatial factors play a key role [19][20][21][22][23][24]. The next sub-volume method (NSM) [18] offers an elegant way to address the spatial problem via domain partitioning. Unfortunately, sequential stochastic simulation with the NSM is still very time-consuming, and the additionally introduced diffusion among neighboring sub-volumes makes things worse. On the other hand, the NSM exposes a very high degree of parallelism among sub-volumes, and parallelization is widely accepted as the most meaningful way to tackle the performance bottleneck of sequential simulations [26][27]. Thus, adapting parallel discrete event simulation (PDES) techniques to discrete event stochastic simulation is particularly promising.
Although a few attempts have been made [29][30][31], research in this field is still in its infancy and many issues need further discussion. The next section presents the background and related work in this domain. In Section III, we give the details of the design and implementation of the model interface of the LP paradigm and, more importantly, the time warp simulator based on the discrete event simulation framework JAMES II; the benchmark model and experiment results are shown in Section IV; in the last section, we conclude the paper with some future work.

II. Background and Related Work

A. Parallel Discrete Event Simulation (PDES)

The notion of a Logical Process (LP) was introduced to PDES as an abstraction of a physical process [26]: a system consisting of many physical processes is modeled by a set of LPs. An LP is the smallest unit that can be executed in PDES, and each LP holds a sub-partition of the whole system's state variables as its private state. When an LP processes an event, it may modify only its own state variables; if it needs to modify a neighbor's state, it has to schedule an event to that neighbor. In other words, exchanging event messages is the only way LPs interact. Because of the data dependences and interactions among LPs, synchronization protocols have to be introduced to guarantee the so-called local causality constraint (LCC) [26]. A large number of synchronization algorithms have been proposed, e.g. the null-message protocol [26], time warp (TW) [32] and breathing time warp (BTW) [33]. According to whether events of LPs may be processed optimistically, they are generally divided into two types: conservative algorithms and optimistic algorithms. Dematté and Mazza have theoretically pointed out the disadvantages of purely conservative parallel simulation for biochemical reaction systems [31].

B. NSM and ANSM

The NSM is a spatial variation of Gillespie's SSA which integrates the direct method (DM) [8] with the next reaction method (NRM) [25]. The NSM tackles the spatial aspect of biological systems by partitioning a spatially inhomogeneous system into many smaller "homogeneous" sub-volumes, which can be simulated by the SSA separately. However, the NSM is inherently tied to sequential semantics, and all sub-volumes share one common data structure for events or messages. Direct parallelization of the NSM is therefore confronted with the so-called boundary problem and the high cost of synchronized access to the common data structure [29]. To obtain higher parallel efficiency, a parallelization of the NSM has to, first, free the NSM from the sequential semantics and, second, partition the shared data structure into many "parallel" ones. One such approach is the abstract next sub-volume method (ANSM) [30]. In the ANSM, each sub-volume is modeled by a logical process (LP) following the LP paradigm of PDES, where each LP holds its own event queue and state variables (see Fig. 1); in addition, a so-called retraction mechanism was introduced (see Algorithm 1). Based on the ANSM, Wang et al. [30] experimentally tested the performance of several PDES algorithms on the platform YH-SUPE [27]. However, their platform is designed for general simulation applications and thus sacrifices some performance by not taking the characteristics of biological reaction systems into account. Using ideas similar to the ANSM, Dematté and Mazza designed and realized an optimistic simulator; however, they processed events in a time-stepped manner, which loses a certain degree of precision compared with the discrete event manner, and it is very hard to transfer a time-stepped simulation to a discrete event one. In addition, Jeschke et al. [29] designed and implemented a dynamic time-window simulator to execute the NSM in parallel in a grid computing environment, but focused mainly on analyzing communication costs and determining a better size for the time window.

Fig. 1: the variations from SSA to NSM and from NSM to ANSM

C. JAMES II

JAMES II is an open source discrete event simulation experiment framework developed at the University of Rostock in Germany. It focuses on high flexibility and scalability [11][13]. Based on its plug-in scheme [12], each function of JAMES II is defined as a specific plug-in type, and all plug-in types and plug-ins are declared in XML files [13]. Combined with the factory method pattern, JAMES II innovatively splits up model and simulator, which makes it very easy to add and reuse both models and simulators. In addition, JAMES II supports various modeling formalisms, e.g. cellular automata, the discrete event system specification (DEVS), SpacePi, StochasticPi, etc. [14]. Besides, a well-defined simulator selection mechanism can not only automatically choose a proper simulator according to the modeling formalism but also pick a specific simulator out of a series of simulators supporting the same formalism according to user settings [15].

III. The Model Interface and Simulator

As mentioned in Section II (part C), model and simulator are split into two separate parts. In this section, we introduce the design and implementation of the model interface of the LP paradigm and, more importantly, the time warp simulator.

A. The Model Interface of the LP Paradigm

JAMES II provides abstract model interfaces for different modeling formalisms, based on which Wang et al. designed and implemented a model interface of the LP paradigm [16]. However, this interface does not scale well for parallel and distributed simulation of larger systems.
In our implementation, we adapt the interface to parallel and distributed settings. First, a neighbor LP's reference is replaced by its name in the LP's neighbor queue, because it is improper, even dangerous, for a local LP to hold references to other LPs in remote memory spaces. In addition, (pseudo-)random numbers play a crucial role in obtaining valid and meaningful results in stochastic simulations, yet finding a good random number generator (RNG) is still a challenging task [34]. To stay focused on our problem, we introduce one of the uniform RNGs of JAMES II into this model interface, where each LP holds a private RNG so that the random number streams of different LPs are stochastically independent.

B. The Time Warp Simulator

Based on the simulator interface provided by JAMES II, we design and implement the time warp simulator, which contains the (master-)simulator and the (LP-)simulator. The simulator works strictly in the master/worker(s) paradigm for fine-grained parallel and distributed stochastic simulations. Communication costs are crucial to the performance of a fine-grained parallel and distributed simulation. Based on the Java remote method invocation (RMI) mechanism, P2P (peer-to-peer) communication is implemented among all (master- and LP-)simulators, where a simulator holds proxies of all target simulators running on remote workers. One advantage of this communication approach is that the PDES code can be moved to various hardware environments, such as clusters, grids and other distributed computing environments, with only little modification; another is that the RMI mechanism is easy to realize and independent of any non-Java libraries. Because of the straggler event problem, states have to be saved in order to roll back events that were processed optimistically. Each time it is modified, the state is cloned into a queue using the Java clone mechanism. The problem with this copy state saving approach is that it consumes a lot of memory; however, this can be compensated by a suitable GVT calculation mechanism. The GVT reduction scheme also has a significant impact on the performance of parallel simulators, since it marks the upper time bound of events that can be committed, so that the memory of fossils (processed events and states) below the GVT can be reclaimed. GVT calculation is made difficult by the notorious simultaneous reporting problem and the transient message problem. For our problem, another GVT algorithm, called Twice Notification (TN-GVT) (see Algorithm 2), is contributed to this already rich repository instead of implementing one of the GVT algorithms of references [26] and [28]. This algorithm resembles the synchronous algorithm described in reference [26] (p. 114), but they are essentially different: our algorithm never stops the simulators from processing events during GVT reduction, while the algorithm in reference [26] blocks all simulators for the GVT calculation. The transient message problem can be neglected in our implementation, because the RMI-based remote communication is synchronous, meaning that a simulator does not continue processing until its message reaches the destination. For the same reason, the high-cost message acknowledgement prevalent in many classical asynchronous GVT algorithms is not needed either, which benefits the overall performance of the time warp simulator.

IV. Benchmark Model and Experiment Results

A. The Lotka-Volterra Predator-Prey System

In our experiment, the spatial version of the Lotka-Volterra predator-prey system is introduced as the benchmark model (see Fig. 2).
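The copy state saving, rollback and fossil collection mechanism described earlier in this section can be sketched in a few lines. Class and method names below are illustrative and not taken from the JAMES II implementation.

```python
import copy

# Hypothetical sketch of optimistic time warp bookkeeping: before each
# event an LP snapshots its state; a straggler triggers a rollback to the
# latest snapshot older than the straggler's timestamp; snapshots below
# the GVT are fossil-collected to reclaim memory.

class LogicalProcess:
    def __init__(self, state):
        self.state = state
        self.saved = []                      # (timestamp, state snapshot)

    def process_event(self, timestamp, update):
        self.saved.append((timestamp, copy.deepcopy(self.state)))
        update(self.state)                   # optimistically apply the event

    def rollback(self, straggler_time):
        # undo every event processed at or after the straggler's timestamp
        while self.saved and self.saved[-1][0] >= straggler_time:
            _, snapshot = self.saved.pop()
            self.state = snapshot

    def fossil_collect(self, gvt):
        # snapshots older than GVT can never be rolled back: reclaim them
        self.saved = [(t, s) for t, s in self.saved if t >= gvt]

lp = LogicalProcess({"prey": 1000})
lp.process_event(1.0, lambda s: s.__setitem__("prey", 1001))
lp.process_event(2.0, lambda s: s.__setitem__("prey", 1002))
lp.rollback(2.0)                             # straggler with timestamp 2.0
```

The copy-on-modify approach trades memory for simplicity, which is why an effective GVT mechanism (such as the TN-GVT above) matters: it bounds how many snapshots must be retained.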
We choose this system for two reasons: 1) it is a classical experimental model that has been used in many related studies [8][30][31], so it is credible and the simulation results are comparable; 2) it is simple, yet sufficient to test the issues we are interested in. The space of the predator-prey system is partitioned into a 2D N×N grid, where N denotes the edge size of the grid. Initially the populations of grass, prey and predators are set to 1000 in each single sub-volume (LP). In Fig. 2, r1, r2 and r3 stand for the reaction constants of reactions 1, 2 and 3, respectively. We use dGrass, dPrey and dPredator for the diffusion rates of grass, prey and predator. Similar to reference [8], we also assume that the population of the grass remains stable, and dGrass is therefore set to zero.

R1: Grass + Prey -> 2 Prey (1)
R2: Predator + Prey -> 2 Predator (2)
R3: Predator -> NULL (3)
r1 = 0.01; r2 = 0.01; r3 = 10 (4)
dGrass = 0.0; dPrey = 2.5; dPredator = 5.0 (5)

Fig. 2: predator-prey system

B. Experiment Results

The simulation runs were executed on a Linux cluster with 40 computing nodes. Each computing node is equipped with two 64-bit 2.53 GHz Intel Xeon quad-core processors and 24 GB RAM, and the nodes are interconnected via Gigabit Ethernet. The operating system is Kylin Server 3.5, with kernel 2.6.18. Experiments were conducted on the benchmark model at different model sizes to investigate the execution time and speedup of the time warp simulator. As shown in Fig. 3, the execution times of simulations on a single processor with 8 cores are compared. The results show that simulating larger systems for the same simulated time takes more wall clock time, confirming that larger systems lead to more events in the same time interval. More importantly, the blue line shows that sequential simulation performance declines very fast as the model scale grows.
The bottleneck of the sequential simulator lies in the cost of scanning a long event queue to choose the next event. From the comparison between group 1 and group 2 in this experiment, we can also conclude that a high diffusion rate greatly increases the simulation time in both sequential and parallel simulations. This is because the LP paradigm has to split a diffusion into two events (diffusion-out and diffusion-in) for the two interacting LPs involved, and a high diffusion rate leads to a high proportion of diffusion events relative to reaction events. In the second step, shown in Fig. 4, the relationship between the speedup of time warp for two model sizes and the number of worker cores is demonstrated. The speedup is calculated against the sequential execution, using the NSM, of the spatial reaction-diffusion model with the same model size and parameters.

Fig. 4 compares the speedup of time warp on a 64×64 grid and a 100×100 grid. In the case of the 64×64 grid, using only one node, the lowest speedup (slightly above 1) is achieved with two cores, and the highest speedup (about 6) with 8 cores. The influence of the number of cores used in the parallel simulation was investigated: in most cases, more cores bring considerable improvements in parallel simulation performance. Comparing the two results in Fig. 4, the simulation of the larger model achieves better speedup. Combined with the time measurements (Fig. 3), we find that the sequential simulator's performance declines sharply as the model scale becomes very large, which correspondingly lets the time warp simulator attain better speedup.

Fig. 3: Execution time (wall clock time) of sequential and time warp simulation with respect to different model sizes (N = 32, 64, 100 and 128) and model parameters, on a single computing node with 8 cores. Results are grouped by diffusion rates (Group 1: Sequential 1 and Time Warp 1, dPrey = 2.5, dPredator = 5.0; Group 2: Sequential 2 and Time Warp 2, dPrey = 0.25, dPredator = 0.5).

Fig. 4: Speedup of time warp with respect to the number of worker cores and the model size (N = 64 and 100). Worker cores are chosen from one computing node. Diffusion rates are dPrey = 2.5, dPredator = 5.0 and dGrass = 0.0.

V. Conclusion and Future Work

In this paper, a time warp simulator based on the discrete event simulation framework JAMES II is designed and implemented for fine-grained parallel and distributed discrete event spatial stochastic simulation of biological reaction systems. Several challenges have been overcome, such as state saving, rollback and especially GVT reduction in the parallel execution of simulations. The Lotka-Volterra predator-prey system is chosen as the benchmark model to test the performance of our time warp simulator, and the best experimental results show that it obtains a speedup of about 6 over the sequential simulation. The domain this paper concerns is in its infancy, and many interesting issues are worthy of further investigation; for example, there are many other excellent optimistic PDES synchronization algorithms (e.g. BTW) that we would like to add to JAMES II as a next step. In addition, Gillespie approximation methods (tau-leaping [10] etc.) sacrifice some precision for higher simulation speed, but still do not address the spatial aspect of biological reaction systems. Combining the spatial element with approximation methods would be very interesting and promising; however, the parallel execution of tau-leap methods still has many obstacles to overcome.

Acknowledgment

This work is supported by the National Natural Science Foundation of China (NSF) Grant (No. 60773019) and the Ph.D. Programs Foundation of the Ministry of Education of China (No. 200899980004).
The authors would like to express their great gratitude to Dr. Jan Himmelspach and Dr. Roland Ewald at the University of Rostock, Germany, for their invaluable advice and kind help with JAMES II.

References

[1] H. Kitano, "Computational systems biology," Nature, vol. 420, no. 6912, pp. 206-210, November 2002.
[2] H. Kitano, "Systems biology: a brief overview," Science, vol. 295, no. 5560, pp. 1662-1664, March 2002.
[3] A. Aderem, "Systems biology: its practice and challenges," Cell, vol. 121, no. 4, pp. 511-513, May 2005. [Online]. Available: http://dx.doi.org/10.1016/j.cell.2005.04.020
[4] H. de Jong, "Modeling and simulation of genetic regulatory systems: a literature review," Journal of Computational Biology, vol. 9, no. 1, pp. 67-103, January 2002.
[5] C. W. Gardiner, Handbook of Stochastic Methods: for Physics, Chemistry and the Natural Sciences (Springer Series in Synergetics), 3rd ed. Springer, April 2004.
[6] D. T. Gillespie, "Simulation methods in systems biology," in Formal Methods for Computational Systems Biology, ser. Lecture Notes in Computer Science, M. Bernardo, P. Degano, and G. Zavattaro, Eds. Berlin, Heidelberg: Springer, 2008, vol. 5016, ch. 5, pp. 125-167.
[7] Y. Tao, Y. Jia, and G. T. Dewey, "Stochastic fluctuations in gene expression far from equilibrium: Omega expansion and linear noise approximation," The Journal of Chemical Physics, vol. 122, no. 12, 2005.
[8] D. T. Gillespie, "Exact stochastic simulation of coupled chemical reactions," Journal of Physical Chemistry, vol. 81, no. 25, pp. 2340-2361, December 1977.
[9] D. T. Gillespie, "Stochastic simulation of chemical kinetics," Annual Review of Physical Chemistry, vol. 58, no. 1, pp. 35-55, 2007.
[10] D. T. Gillespie, "Approximate accelerated stochastic simulation of chemically reacting systems," The Journal of Chemical Physics, vol. 115, no. 4, pp. 1716-1733, 2001.
[11] J. Himmelspach, R. Ewald, and A. M. Uhrmacher, "A flexible and scalable experimentation layer," in WSC '08: Proceedings of the 40th Conference on Winter Simulation, 2008, pp. 827-835.
[12] J. Himmelspach and A. M. Uhrmacher, "Plug'n simulate," in 40th Annual Simulation Symposium (ANSS '07). Washington, DC, USA: IEEE, March 2007, pp. 137-143.
[13] R. Ewald, J. Himmelspach, M. Jeschke, S. Leye, and A. M. Uhrmacher, "Flexible experimentation in the modeling and simulation framework JAMES II: implications for computational systems biology," Briefings in Bioinformatics, vol. 11, no. 3, January 2010.
[14] A. Uhrmacher, J. Himmelspach, M. Jeschke, M. John, S. Leye, C. Maus, M. Röhl, and R. Ewald, "One modelling formalism & simulator is not enough! A perspective for computational biology based on JAMES II," in Formal Methods in Systems Biology, ser. Lecture Notes in Computer Science, J. Fisher, Ed. Berlin, Heidelberg: Springer, 2008, vol. 5054, ch. 9, pp. 123-138. [Online]. Available: http://dx.doi.org/10.1007/978-3-540-68413-8_9
[15] R. Ewald, J. Himmelspach, and A. M. Uhrmacher, "An algorithm selection approach for simulation systems," in Proceedings of PADS, 2008, pp. 91-98.
[16] B. Wang, J. Himmelspach, R. Ewald, Y. Yao, and A. M. Uhrmacher, "Experimental analysis of logical process simulation algorithms in JAMES II," in Proceedings of the Winter Simulation Conference, M. D. Rossetti, R. R. Hill, B. Johansson, A. Dunkin, and R. G. Ingalls, Eds. IEEE, 2009, pp. 1167-1179.
[17] R. Ewald, J. Rössel, J. Himmelspach, and A. M. Uhrmacher, "A plug-in-based architecture for random number generation in simulation systems," in WSC '08: Proceedings of the 40th Conference on Winter Simulation, 2008, pp. 836-844.
[18] J. Elf and M. Ehrenberg, "Spontaneous separation of bi-stable biochemical systems into spatial domains of opposite phases," Systems Biology, vol. 1, no. 2, pp. 230-236, December 2004.
[19] K. Takahashi, S. Arjunan, and M. Tomita, "Space in systems biology of signaling pathways? Towards intracellular molecular crowding in silico," FEBS Letters, vol. 579, no. 8, pp. 1783-1788, March 2005.
[20] J. V. Rodriguez, J. A. Kaandorp, M. Dobrzynski, and J. G. Blom, "Spatial stochastic modelling of the phosphoenolpyruvate-dependent phosphotransferase (PTS) pathway in Escherichia coli," Bioinformatics, vol. 22, no. 15, pp. 1895-1901, August 2006.
[21] D. Ridgway, G. Broderick, and M. Ellison, "Accommodating space, time and randomness in network simulation," Current Opinion in Biotechnology, vol. 17, no. 5, pp. 493-498, October 2006.
[22] J. V. Rodriguez, J. A. Kaandorp, M. Dobrzynski, and J. G. Blom, "Spatial stochastic modelling of the phosphoenolpyruvate-dependent phosphotransferase (PTS) pathway in Escherichia coli," Bioinformatics, vol. 22, no. 15, pp. 1895-1901, August 2006.
[23] W. G. Wilson, A. M. DeRoos, and E. McCauley, "Spatial instabilities within the diffusive Lotka-Volterra system: individual-based simulation results," Theoretical Population Biology, vol. 43, no. 1, pp. 91-127, February 1993.
[24] K. Kruse and J. Elf, "Kinetics in spatially extended systems," in System Modeling in Cellular Biology: From Concepts to Nuts and Bolts, Z. Szallasi, J. Stelling, and V. Periwal, Eds. Cambridge, MA: MIT Press, 2006, pp. 177-198.
[25] M. A. Gibson and J. Bruck, "Efficient exact stochastic simulation of chemical systems with many species and many channels," The Journal of Physical Chemistry A, vol. 104, no. 9, pp. 1876-1889, March 2000.
[26] R. M. Fujimoto, Parallel and Distributed Simulation Systems (Wiley Series on Parallel and Distributed Computing). Wiley-Interscience, January 2000.
[27] Y. Yao and Y. Zhang, "Solution for analytic simulation based on parallel processing," Journal of System Simulation, vol. 20, no. 24, pp. 6617-6621, 2008.
[28] G. Chen and B. K. Szymanski, "DSIM: scaling time warp to 1,033 processors," in WSC '05: Proceedings of the 37th Conference on Winter Simulation, 2005, pp. 346-355.
[29] M. Jeschke, A. Park, R. Ewald, R. Fujimoto, and A. M. Uhrmacher, "Parallel and distributed spatial simulation of chemical reactions," in 2008 22nd Workshop on Principles of Advanced and Distributed Simulation. Washington, DC, USA: IEEE, June 2008, pp. 51-59.
[30] B. Wang, Y. Yao, Y. Zhao, B. Hou, and S. Peng, "Experimental analysis of optimistic synchronization algorithms for parallel simulation of reaction-diffusion systems," in International Workshop on High Performance Computational Systems Biology, October 2009, pp. 91-100.
[31] L. Dematté and T. Mazza, "On parallel stochastic simulation of diffusive systems," in Computational Methods in Systems Biology, M. Heiner and A. M. Uhrmacher, Eds. Berlin, Heidelberg: Springer, 2008, vol. 5307, ch. 16, pp. 191-210.
[32] D. R. Jefferson, "Virtual time," ACM Transactions on Programming Languages and Systems, vol. 7, no. 3, pp. 404-425, July 1985.
[33] J. S. Steinman, "Breathing time warp," SIGSIM Simulation Digest, vol. 23, no. 1, pp. 109-118, July 1993. [Online]. Available: http://dx.doi.org/10.1145/174134.158473
[34] S. K. Park and K. W. Miller, "Random number generators: good ones are hard to find," Communications of the ACM, vol. 31, no. 10, pp. 1192-1201, October 1988.
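The Lotka-Volterra reactions R1-R3 above, with the rate constants from equations (4)-(5), can be sketched with Gillespie's direct-method SSA for a single well-mixed sub-volume. This is an illustrative single-volume sketch (diffusion and the parallel ANSM machinery are omitted), not the paper's implementation.

```python
import math
import random

# Minimal direct-method SSA sketch for the well-mixed Lotka-Volterra
# benchmark: R1: Grass + Prey -> 2 Prey; R2: Predator + Prey ->
# 2 Predator; R3: Predator -> NULL, with r1 = r2 = 0.01 and r3 = 10.
# Grass is held constant at 1000, matching the paper's assumption.

def ssa_step(state, rng):
    grass, prey, pred = state
    a = [0.01 * grass * prey,    # propensity of R1
         0.01 * pred * prey,     # propensity of R2
         10.0 * pred]            # propensity of R3
    a0 = sum(a)
    if a0 == 0:
        return state, float("inf")           # absorbing state: nothing can fire
    tau = -math.log(rng.random()) / a0       # exponential time to next reaction
    r, pick = rng.random() * a0, 0
    while r >= a[pick]:                      # choose which reaction fires
        r -= a[pick]
        pick += 1
    if pick == 0:
        prey += 1                            # grass population stays constant
    elif pick == 1:
        prey, pred = prey - 1, pred + 1
    else:
        pred -= 1
    return (grass, prey, pred), tau

state, t, rng = (1000, 1000, 1000), 0.0, random.Random(42)
for _ in range(1000):
    state, tau = ssa_step(state, rng)
    t += tau
```

The NSM extends exactly this kernel with per-sub-volume event queues and diffusion events between neighboring sub-volumes, which is what the ANSM then parallelizes.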
APA, Harvard, Vancouver, ISO, and other styles
22

Herrmann, Dominik. "Privacy issues in the Domain Name System and techniques for self-defense." it - Information Technology 57, no. 6 (January 28, 2015). http://dx.doi.org/10.1515/itit-2015-0038.

Full text
Abstract:
There is a growing interest in retaining and analyzing metadata. Motivated by this trend, the dissertation studies the potential for surveillance via the Domain Name System, an important infrastructure service on the Internet. Three fingerprinting techniques are developed and evaluated. The first technique allows a resolver to infer the URLs of the websites a user visits by matching characteristic patterns in DNS requests. The second technique determines a user's operating system and browser based on behavioral features. Thirdly, and most importantly, it is demonstrated that the activities of users can be tracked over multiple sessions, even when the IP address of the user changes over time. In addition, the dissertation considers possible countermeasures. Obfuscating the desired hostnames by sending a large number of dummy requests (so-called range queries) turns out to be less effective than previously believed. Therefore, more appropriate techniques (mixing queries, pushing popular queries, and extended caching of results) are proposed in the thesis. The results raise awareness of an overlooked threat that infringes the privacy of Internet users, and they also contribute to the development of more usable and more effective privacy-enhancing technologies.
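The first fingerprinting technique described above matches the characteristic set of DNS requests a page load triggers against known per-site patterns. A minimal sketch of that idea, with made-up site names, patterns, and threshold (none of them from the thesis), could look like this:

```python
# Hypothetical sketch: a website is characterised by the set of hostnames
# its pages load, and an observed burst of DNS queries is matched against
# those sets by similarity. All names and the threshold are illustrative.

def jaccard(a, b):
    """Similarity between two sets of queried hostnames."""
    return len(a & b) / len(a | b)

def guess_website(observed_queries, known_patterns, threshold=0.5):
    """Return the best-matching site label, or None if nothing is close."""
    best_site, best_score = None, 0.0
    for site, pattern in known_patterns.items():
        score = jaccard(set(observed_queries), pattern)
        if score > best_score:
            best_site, best_score = site, score
    return best_site if best_score >= threshold else None

patterns = {
    "example-news.test": {"example-news.test", "cdn.example-news.test", "ads.tracker.test"},
    "example-shop.test": {"example-shop.test", "img.example-shop.test", "pay.gateway.test"},
}
print(guess_website(["example-news.test", "cdn.example-news.test", "ads.tracker.test"], patterns))
```

A real resolver-side attack would work on noisier data (caching, shared infrastructure), which is why the thesis evaluates such techniques empirically rather than assuming exact set matches.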
APA, Harvard, Vancouver, ISO, and other styles
23

Ян Коваленко. "EXTRAJUDICIAL REMEDIES FOR THE RIGHT TO A DOMAIN NAME." Theory and Practice of Intellectual Property, no. 3 (June 30, 2020). http://dx.doi.org/10.33731/32020.216583.

Full text
Abstract:
Today, members of the business community present their companies, products, and services on the World Wide Web. A company's main purpose on the internet is to create and use its own website to provide information about its products, to find potential buyers, and to demonstrate its advantages over competitors. The main purpose of registering a specific «domain name» is to create favorable (convenient) conditions for buyers who want to get acquainted with the company or its range: if a domain name is associatively similar to the name of a particular company, it is much easier for a buyer to find that company's website directly than through search engines. Business representatives do not always succeed in this, especially if the name (trademark, brand) of the company is widely known, or if the company has become a «victim» of unfair competition. This gives rise to disputes that the parties are interested in resolving as quickly and cheaply as possible. Such resolution is greatly facilitated by the work of the World Intellectual Property Organization and a number of documents, chief among them the «Principles for Dispute Resolution on Identical Domain Names» and the «Uniform Domain Name Dispute Resolution Policy» (UDRP) adopted by ICANN. However, their analysis, as well as the analysis of law-enforcement practice, reveals not only the effectiveness but also certain shortcomings of the ICANN procedure for resolving disputes over domain names, which lead to inconsistent dispute-resolution practice. The article analyzes the regulation of the protection of rights to a domain name against the prescriptions of the «Principles for Dispute Resolution on Identical Domain Names» and the «Uniform Domain Name Dispute Resolution Policy» (UDRP) adopted by ICANN.
An attempt is made to highlight the advantages and disadvantages of the out-of-court procedure for resolving disputes over domain names, and possible ways to improve such a system in Ukraine are suggested.
APA, Harvard, Vancouver, ISO, and other styles
24

Hyder, Muhammad Faraz, and Muhammad Ali Ismail. "Toward Domain Name System privacy enhancement using intent‐based Moving Target Defense framework over software defined networks." Transactions on Emerging Telecommunications Technologies, June 3, 2021. http://dx.doi.org/10.1002/ett.4318.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Heikkinen, Mikko, Anniina Kuusijärvi, Ville-Matti Riihikoski, and Leif Schulman. "Multi-domain Collection Management Simplified — the Finnish National Collection Management System Kotka." Biodiversity Information Science and Standards 4 (October 9, 2020). http://dx.doi.org/10.3897/biss.4.59119.

Full text
Abstract:
Many natural history museums share a common problem: a multitude of legacy collection management systems (CMS) and the difficulty of finding a new system to replace them. Kotka is a CMS developed starting in 2011 at the Finnish Museum of Natural History (Luomus) and Finnish Biodiversity Information Facility (FinBIF) (Heikkinen et al. 2019, Schulman et al. 2019) to solve this problem. It has grown into a national system used by all natural history museums in Finland, and currently contains over two million specimens from several domains (zoological, botanical, paleontological, microbial, tissue sample and botanic garden collections). Kotka is a web application where data can be entered, edited, searched and exported through a browser-based user interface. It supports designing and printing specimen labels, handling collection metadata and specimen transactions, and helps support Nagoya protocol compliance. Creating a shared system for multiple institutions and collection types is difficult due to differences in their current processes, data formats, future needs and opinions. The more independent actors there are involved, the more complicated the development becomes. Successful development requires some trade-offs. Kotka has chosen features and development principles that emphasize fast development into a multitude of different purposes. Kotka was developed using agile methods with a single person (a product owner) making development decisions, based on e.g., strategic objectives, customer value and user feedback. Technical design emphasizes efficient development and usage over completeness and formal structure of the data. It applies simple and pragmatic approaches and improves collection management by providing practical tools for the users. In these regards, Kotka differs in many ways from a traditional CMS. Kotka stores data in a mostly denormalized free text format and uses a simple hierarchical data model. 
This allows greater flexibility and makes it easy to add new data fields and structures based on user feedback. Data harmonization and quality assurance is a continuous process, instead of being done before entering data into the system. For example, specimen data with a taxon name can be entered into Kotka before the taxon name has been entered into the accompanying FinBIF taxonomy database. Example: simplified data about two specimens in Kotka, which have not been fully harmonized yet. Taxon: Corvus corone cornix; Country: FI; Collector: Doe, John; Coordinates: 668, 338; Coordinate system: Finnish uniform coordinate system. Taxon: Corvus cornix; Country: Finland; Collector: Doe, J.; Coordinates: 60.2442, 25.7201; Coordinate system: WGS84. Kotka’s data model does not follow standards, but has grown organically to reflect practical needs of its users. This is true particularly of data collected in research projects, which are often unique and complicated (e.g. complex relationships between species), requiring new data fields and/or storing data as free text. The majority of the data can be converted into simplified standard formats (e.g. Darwin Core) for sharing. The main challenge with this has been the vague definitions of many data-sharing formats (e.g. Darwin Core, the CETAF Specimen Preview Profile (CETAF 2020)), which allow different interpretations. Kotka trusts its users: it places very few limitations on what users can do, and has very simple user role management. Kotka stores the full history of all data, which allows any errors to be fixed and prevents data loss. Kotka is open source software, but is tightly coupled with the infrastructure of the Finnish Biodiversity Information Facility (FinBIF).
Currently, it is only offered as an online service (Software as a Service) hosted by FinBIF. However, it could be developed into a more modular system that could, for example, utilize multiple different database backends and taxonomy data sources.
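The continuous-harmonization workflow the abstract describes (store records as entered, clean them up later) can be sketched as a set of cleanup rules applied to a free-text record. The field names and mappings below are illustrative, not Kotka's actual schema:

```python
# A minimal sketch of continuous harmonisation: records are stored as
# entered, and known cleanup rules are applied afterwards. The mappings
# and field names here are illustrative, not Kotka's real data model.

COUNTRY_NAMES = {"FI": "Finland", "SE": "Sweden"}

def harmonise(record):
    """Return a cleaned copy of a record; the original stays untouched."""
    out = dict(record)
    # Expand country codes to full names where a mapping is known.
    out["country"] = COUNTRY_NAMES.get(record["country"], record["country"])
    # Normalise collector name to "Surname, Initial." form.
    surname, _, given = record["collector"].partition(", ")
    if given and len(given) > 2:
        out["collector"] = f"{surname}, {given[0]}."
    return out

raw = {"taxon": "Corvus corone cornix", "country": "FI",
       "collector": "Doe, John",
       "coordinate_system": "Finnish uniform coordinate system"}
clean = harmonise(raw)
print(clean["country"], "/", clean["collector"])
```

Keeping the raw record and deriving the harmonized view separately mirrors Kotka's design choice of storing full history rather than overwriting data before entry.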
APA, Harvard, Vancouver, ISO, and other styles
26

Rafferty, Dylan, and Kevin Curran. "The Role of Blockchain in Cyber Security." Semiconductor Science and Information Devices 3, no. 1 (May 21, 2021). http://dx.doi.org/10.30564/ssid.v3i1.3153.

Full text
Abstract:
Cyber security breaches are on the rise globally. Due to the introduction of legislation like the EU’s General Data Protection Regulation (GDPR), companies are now subject to further financial penalties if they fail to meet requirements in protecting user information. In 2018, 75% of CEOs and board members considered cyber security and technology acquisitions among their top priorities, and blockchain based solutions were among the most considered options. Blockchain is a decentralised structure that offers multiple security benefits over traditional, centralised network architectures. These two approaches are compared in this chapter in areas such as data storage, the Internet of Things (IoT) and Domain Name System (DNS) in order to determine blockchain’s potential in the future of cyber security.
APA, Harvard, Vancouver, ISO, and other styles
27

"Malicious Traffic Detection System using Publicly Available Blacklist’s." International Journal of Engineering and Advanced Technology 8, no. 6S (September 6, 2019): 356–61. http://dx.doi.org/10.35940/ijeat.f1075.0886s19.

Full text
Abstract:
With the increase in internet usage, communication has become much faster and easier, resulting in massive growth in digitalization, and cyber crimes are increasing day by day. Attackers employ every possible technique and trick to turn users' machines into zombies for malicious activities or crypto mining. In recent years we have faced issues with ransomware, which results in the loss of data integrity and confidentiality along with our privacy and anonymity, and malware can spread across an entire network in no time. Using antivirus programs alone to safeguard a network is bad practice because they filter traffic based on signatures: if the user is not up to date with the definitions from the antivirus provider, then he will be prone to attack. In this model, a system to track malicious trails in a network is built. It employs an online malware detection system (VirusTotal) and open-source dynamic blacklists which contain malware or suspicious programs, along with some static pre-compiled blacklists from different antivirus providers and our own block definitions, to filter the traffic. The system gives a detailed log report on suspicious trails, whether from a domain name, an IP address, or malicious scripts in a webpage.
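The core of the approach in this abstract is merging several blacklist feeds and checking indicators extracted from traffic against the merged set. A hedged sketch of that matching step, with made-up feed contents and function names, might look like:

```python
# Illustrative sketch of blacklist matching: several static and dynamic
# feeds are merged into one set, and indicators (domains, IPs) extracted
# from traffic are checked against it. Feed contents here are made up.

def build_blocklist(*sources):
    """Merge several blacklist feeds (iterables of indicators) into one set."""
    merged = set()
    for src in sources:
        merged.update(entry.strip().lower() for entry in src if entry.strip())
    return merged

def scan(indicators, blocklist):
    """Return the indicators seen in traffic that appear on the blocklist."""
    return sorted(i for i in set(x.lower() for x in indicators) if i in blocklist)

static_feed = ["evil.example", "198.51.100.7"]
dynamic_feed = ["malware-c2.example", ""]
blocklist = build_blocklist(static_feed, dynamic_feed)
print(scan(["evil.example", "good.example", "198.51.100.7"], blocklist))
```

A production system would refresh the dynamic feeds periodically and log each hit with the full context (source host, timestamp, matched feed), as the abstract's log-report feature suggests.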
APA, Harvard, Vancouver, ISO, and other styles
28

Yoder, Matthew, and Dmitry Dmitriev. "Nomenclature over 5 years in TaxonWorks: Approach, implementation, limitations and outcomes." Biodiversity Information Science and Standards 5 (September 20, 2021). http://dx.doi.org/10.3897/biss.5.75441.

Full text
Abstract:
We are now over four decades into digitally managing the names of Earth's species. As the number of federating (i.e., software that brings together previously disparate projects under a common infrastructure, for example TaxonWorks) and aggregating (e.g., International Plant Name Index, Catalog of Life (CoL)) efforts increase, there remains an unmet need for both the migration forward of old data, and for the production of new, precise and comprehensive nomenclatural catalogs. Given this context, we provide an overview of how TaxonWorks seeks to contribute to this effort, and where it might evolve in the future. In TaxonWorks, when we talk about governed names and relationships, we mean it in the sense of existing international codes of nomenclature (e.g., the International Code of Zoological Nomenclature (ICZN)). More technically, nomenclature is defined as a set of objective assertions that describe the relationships between the names given to biological taxa and the rules that determine how those names are governed. It is critical to note that this is not the same thing as the relationship between a name and a biological entity, but rather nomenclature in TaxonWorks represents the details of the (governed) relationships between names. Rather than thinking of nomenclature as changing (a verb commonly used to express frustration with biological nomenclature), it is useful to think of nomenclature as a set of data points, which grows over time. For example, when synonymy happens, we do not erase the past, but rather record a new context for the name(s) in question. The biological concept changes, but the nomenclature (names) simply keeps adding up. Behind the scenes, nomenclature in TaxonWorks is represented by a set of nodes and edges, i.e., a mathematical graph, or network (e.g., Fig. 1). 
Most names (i.e., nodes in the network) are what TaxonWorks calls "protonyms," monomial epithets that are used to construct, for example, binomial names (not to be confused with "protonym" sensu the ICZN). Protonyms are linked to other protonyms via relationships defined in NOMEN, an ontology that encodes governed rules of nomenclature. Within the system, all data, nodes and edges, can be cited, i.e., linked to a source and therefore anchored in time and tied to authorship, and annotated with a variety of annotation types (e.g., notes, confidence levels, tags). The actual building of the graphs is greatly simplified by multiple user-interfaces that allow scientists to review (e.g. Fig. 2), create, filter, and add to (again, not "change") the nomenclatural history. As in any complex knowledge-representation model, there are outlying scenarios, or edge cases that emerge, making certain human tasks more complex than others. TaxonWorks is no exception: it has limitations in terms of what and how some things can be represented. While many complex representations are hidden by simplified user-interfaces, some, for example, the handling of the ICZN's Family-group name, batch-loading of invalid relationships, and comparative syncing against external resources need more work to simplify the processes presently required to meet catalogers' needs. The depth at which TaxonWorks can capture nomenclature is only really valuable if it can be used by others. This is facilitated by the application programming interface (API) serving its data (https://api.taxonworks.org), serving text files, and by exports to standards like the emerging Catalog of Life Data Package. With reference to real-world problems, we illustrate different ways in which the API can be used, for example, as integrated into spreadsheets, through the use of command line scripts, and in the generation of public-facing websites.
Behind all this effort are an increasing number of people recording help videos, developing documentation, and troubleshooting software and technical issues. Major contributions have come from developers at many skill levels, from high school to senior software engineers, illustrating that TaxonWorks leads in enabling both technical and domain-based contributions. The health and growth of this community is a key factor in TaxonWorks's potential long-term impact in the effort to unify the names of Earth's species.
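The abstract's central data structure, nomenclature as an append-only graph of protonym nodes and cited relationship edges, can be sketched in a few lines. The class, relationship labels, and citations below are illustrative placeholders, not TaxonWorks's actual model or real taxonomic sources:

```python
# Hypothetical sketch: nomenclature as an append-only graph of protonyms
# (nodes) and governed, citable relationships (edges). Synonymy adds a new
# edge rather than erasing history. All labels and citations are made up.

class NomenclatureGraph:
    def __init__(self):
        self.protonyms = set()
        self.edges = []  # (subject, relationship, object, citation)

    def add_protonym(self, name):
        self.protonyms.add(name)

    def relate(self, subject, relationship, obj, citation):
        """Record a governed relationship; nothing is ever removed."""
        self.edges.append((subject, relationship, obj, citation))

    def history(self, name):
        """All recorded assertions mentioning a name, in order of entry."""
        return [e for e in self.edges if name in (e[0], e[2])]

g = NomenclatureGraph()
for n in ("cornix", "corone", "Corvus"):
    g.add_protonym(n)
g.relate("cornix", "original_combination_in", "Corvus", "Source A")
g.relate("cornix", "synonym_of", "corone", "Source B")
g.relate("cornix", "valid_species_in", "Corvus", "Source C")
print(len(g.history("cornix")))  # the name simply accumulates context
```

The point the abstract makes is visible here: each new assertion extends the history of "cornix" without deleting earlier edges, so the graph "keeps adding up" over time.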
APA, Harvard, Vancouver, ISO, and other styles
29

Al-Drees, Mohammed, Marwah M. Almasri, Mousa Al-Akhras, and Mohammed Alawairdhi. "Building a DNS Tunneling Dataset." International Journal of Sensors, Wireless Communications and Control 10 (November 24, 2020). http://dx.doi.org/10.2174/2210327910999201124205758.

Full text
Abstract:
Background: The Domain Name System (DNS) is considered the phone book of the Internet. Its main goal is to translate a domain name to an IP address that the computer can understand. However, DNS can be vulnerable to various kinds of attacks, such as DNS poisoning attacks and DNS tunneling attacks. Objective: The main objective of this paper is to allow researchers to identify DNS tunnel traffic using machine-learning algorithms. Training machine-learning algorithms to detect DNS tunnel traffic and determine which protocol was used will help the community to speed up the process of detecting such attacks. Method: In this paper, we consider the DNS tunneling attack. In addition, we discuss how attackers can exploit this protocol to exfiltrate data from the network. The attack starts by encoding data inside DNS queries sent outside the network. The malicious DNS server receives the small chunks of data, decodes the payloads, and reassembles them at the server. The main concern is that DNS is a fundamental service that is not usually blocked by a firewall and receives less attention from systems administrators due to the vast amount of traffic. Results: This paper investigates how this type of attack happens using a DNS tunneling tool, by setting up an environment consisting of a compromised DNS server and a compromised host with the Iodine tool installed on both machines. The generated dataset contains the traffic of HTTP, HTTPS, SSH, SFTP, and POP3 protocols over DNS. No features were removed from the dataset, so that researchers can utilize all features in the dataset. Conclusion: DNS tunneling remains a critical attack that needs more attention to address. The DNS tunneled environment allows us to understand how such an attack happens. We built the appropriate dataset by simulating various attack scenarios using different protocols.
The created dataset contains PCAP, JSON, and CSV files to allow researchers to use different methods to detect tunnel traffic.
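The encoding step the abstract describes (payload chunks smuggled out as query names under an attacker-controlled zone, as Iodine-style tools do) can be sketched as follows. The zone name and chunk size are invented for the example; real tools use more compact encodings and upstream data channels:

```python
# Illustrative sketch of DNS-tunnel exfiltration: payload bytes are split
# into chunks, base32-encoded, and carried as subdomain labels of an
# attacker-controlled zone. Domain and chunk size are made up.

import base64

TUNNEL_ZONE = "t.attacker.example"
CHUNK = 30  # keeps each encoded label under the 63-byte DNS label limit

def encode_queries(payload: bytes):
    """Yield the DNS names a tunneling client would look up for this payload."""
    for seq, i in enumerate(range(0, len(payload), CHUNK)):
        label = base64.b32encode(payload[i:i + CHUNK]).decode().rstrip("=").lower()
        yield f"{seq}.{label}.{TUNNEL_ZONE}"

def decode_queries(names):
    """Server side: strip the zone, re-pad, and reassemble the payload."""
    chunks = {}
    for name in names:
        seq, label, _zone = name.split(".", 2)
        pad = "=" * (-len(label) % 8)
        chunks[int(seq)] = base64.b32decode(label.upper() + pad)
    return b"".join(chunks[k] for k in sorted(chunks))

queries = list(encode_queries(b"secret database dump"))
assert decode_queries(queries) == b"secret database dump"
```

This also illustrates why the attack is hard to spot: each query is a syntactically valid DNS lookup, so detection (as in the paper's dataset) has to rely on traffic features rather than on blocking the protocol.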
APA, Harvard, Vancouver, ISO, and other styles
30

Wark, McKenzie. "Toywars." M/C Journal 6, no. 3 (June 1, 2003). http://dx.doi.org/10.5204/mcj.2179.

Full text
Abstract:
I first came across etoy in Linz, Austria in 1995. They turned up at Ars Electronica with their shaved heads, in their matching orange bomber jackets. They were not invited. The next year they would not have to crash the party. In 1996 they were awarded Ars Electronica’s prestigious Golden Nica for web art, and were on their way to fame and bitterness – the just rewards for their art of self-regard. As founding member Agent.ZAI says: “All of us were extremely greedy – for excitement, for drugs, for success.” (Wishart & Boschler: 16) The etoy story starts on the fringes of the squatters’ movement in Zurich. Disenchanted with the hard left rhetorics that permeate the movement in the 1980s, a small group look for another way of existing within a commodified world, without the fantasy of an ‘outside’ from which to critique it. What Antonio Negri and friends call the ‘real subsumption’ of life under the rule of commodification is something etoy grasps intuitively. The group would draw on a number of sources: David Bowie, the Sex Pistols, the Manchester rave scene, European Amiga art, rumors of the historic avant gardes from Dada to Fluxus. They came together in 1994, at a meeting in the Swiss resort town of Weggis on Lake Lucerne. While the staging of the founding meeting looks like a rerun of the origins of the Situationist International, the wording of the invitation might suggest the founding of a pop music boy band: “fun, money and the new world?” One of the – many – stories about the origins of the name Dada has it being chosen at random from a bilingual dictionary. The name etoy, in an update on that procedure, was spat out by a computer program designed to make four-letter words at random. Ironically, both Dada and etoy, so casually chosen, would inspire furious struggles over the ownership of these chancy 4-bit words. The group decided to make money by servicing the growing rave scene.
Being based in Vienna and Zurich, the group needed a way to communicate, and chose to use the internet. This was a far from obvious thing to do in 1994. Connections were slow and unreliable. Sometimes it was easier to tape a hard drive full of clubland graphics to the underside of a seat on the express train from Zurich to Vienna and simply email instructions to meet the train and retrieve it. The web was a primitive instrument in 1995 when etoy built its first website. They launched it with a party called etoy.FASTLANE, an optimistic title when the web was anything but. Coco, a transsexual model and tabloid sensation, sang a Japanese song while suspended in the air. She brought media interest, and was anointed etoy’s lifestyle angel. As Wishart and Bochsler write, “it was as if the Seven Dwarfs had discovered their Snow White.” (Wishart & Boschler: 33) The launch didn’t lead to much in the way of a music deal or television exposure. The old media were not so keen to validate the etoy dream of lifting themselves into fame and fortune by their bootstraps. And so etoy decided to be stars of the new media. The slogan was suitably revised: “etoy: the pop star is the pilot is the coder is the designer is the architect is the manager is the system is etoy.” (Wishart & Boschler: 34) The etoy boys were more than net.artists, they were artists of the brand. The brand was achieving a new prominence in the mid-90s. (Klein: 35) This was a time when capitalism was hollowing itself out in the overdeveloped world, shedding parts of its manufacturing base. Control of the circuits of commodification would rest less on the ownership of the means of production and more on maintaining a monopoly on the flows of information. The leading edge of the ruling class was becoming self-consciously vectoral. It controlled the flow of information about what to produce – the details of design, the underlying patents. 
It controlled the flows of information about what is produced – the brands and logos, the slogans and images. The capitalist class is supplanted by a vectoral class, controlling the commodity circuit through the vectors of information. (Wark) The genius of etoy was to grasp the aesthetic dimension of this new stage of commodification. The etoy boys styled themselves not so much as a parody of corporate branding and management groupthink, but as logical extension of it. They adopted matching uniforms and called themselves agents. In the dada-punk-hiphop tradition, they launched themselves on the world as brand new, self-created, self-named subjects: Agents Zai, Brainhard, Gramazio, Kubli, Esposto, Udatny and Goldstein. The etoy.com website was registered in 1995 with Network Solutions for a $100 fee. The homepage for this etoy.TANKSYSTEM was designed like a flow chart. As Gramazio says: “We wanted to create an environment with surreal content, to build a parallel world and put the content of this world into tanks.” (Wishart & Boschler: 51) One tank was a cybermotel, with Coco the first guest. Another tank showed you your IP number, with a big-brother eye looking on. A supermarket tank offered sunglasses and laughing gas for sale, but which may or may not be delivered. The underground tank included hardcore photos of a sensationalist kind. A picture of the Federal Building in Oklahoma City after the bombing was captioned in deadpan post-situ style “such work needs a lot of training.” (Wishart & Boschler: 52) The etoy agents were by now thoroughly invested in the etoy brand and the constellation of images they had built around it, on their website. Their slogan became “etoy: leaving reality behind.” (Wishart & Boschler: 53) They were not the first artists fascinated by commodification. It was Warhol who said “good art is good business.” (Warhol) But etoy reversed the equation: good business is good art.
And good business, in this vectoral age, is in its most desirable form an essentially conceptual matter of creating a brand at the center of a constellation of signifiers. Late in 1995, etoy held another group meeting, at the Zurich youth center Dynamo. The problem was that while they had built a hardcore website, nobody was visiting it. Agents Goldstein and Udatny thought that there might be a way of using the new search engines to steer visitors to the site. Zai and Brainhard helped secure a place at the Vienna Academy of Applied Arts where Udatny could use the computer lab to implement this idea. Udatny’s first step was to create a program that would go out and gather email addresses from the web. These addresses would form the lists for the early examples of art-spam that etoy would perpetrate. Udatny’s second idea was a bit more interesting. He worked out how to get the etoy.TANKSYSTEM page listed in search engines. Most search engines ranked pages by the frequency of the search term in the pages they had indexed, so etoy.TANKSYSTEM would contain pages of selected keywords. Porn sites were also discovering this method of creating free publicity. The difference was that etoy chose a very carefully curated list of 350 search terms, including: art, bondage, cyberspace, Doom, Elvis, Fidel, genx, heroin, internet, jungle and Kant. Users of search engines who searched for these terms would find dummy pages listed prominently in their search results that directed them, unsuspectingly, to etoy.com. They called this project Digital Hijack. To give the project a slightly political aura, the pages the user was directed to contained an appeal for the release of convicted hacker Kevin Mitnick. This was the project that won them a Golden Nica statuette at Ars Electronica in 1996, which Gramazio allegedly lost the same night playing roulette. It would also, briefly, require that they explain themselves to the police.
Digital Hijack also led to the first splits in the group, under the intense pressure of organizing it on a notionally collective basis, but with the zealous Agent Zai acting as de facto leader. When Udatny was expelled, Zai and Brainhard even repossessed his Toshiba laptop, bought with etoy funds. As Udatny recalls, “It was the lowest point in my life ever. There was nothing left; I could not rely on etoy any more. I did not even have clothes, apart from the etoy uniform.” (Wishart & Boschler: 104) Here the etoy story repeats a common theme from the history of the avant gardes as forms of collective subjectivity. After Digital Hijack, etoy went into a bit of a slump. It’s something of a problem for a group so dependent on recognition from the other of the media, that without a buzz around them, etoy would tend to collapse in on itself like a fading supernova. Zai spent the early part of 1997 working up a series of management documents, in which he appeared as the group’s managing director. Zai employed the current management theory rhetoric of employee ‘empowerment’ while centralizing control. Like any other corporate-Trotskyite, his line was that “We have to get used to reworking the company structure constantly.” (Wishart & Boschler: 132) The plan was for each member of etoy to register the etoy trademark in a different territory, linking identity to information via ownership. As Zai wrote “If another company uses our name in a grand way, I’ll probably shoot myself. And that would not be cool.” (Wishart & Boschler: 132) As it turned out, another company was interested – the company that would become eToys.com. Zai received an email offering “a reasonable sum” for the etoy.com domain name. Zai was not amused. “Damned Americans, they think they can take our hunting grounds for a handful of glass pearls….”.
(Wishart & Boschler: 133) On an invitation from Suzy Meszoly of C3, the etoy boys traveled to Budapest to work on “protected by etoy”, a work exploring internet security. They spent most of their time – and C3’s grant money – producing a glossy corporate brochure. The folder sported a blurb from Bjork: “etoy: immature priests from another world” – which was of course completely fabricated. When Artothek, the official art collection of the Austrian Chancellor, approached etoy wanting to buy work, the group had to confront the problem of how to actually turn their brand into a product. The idea was always that the brand was the product, but this doesn’t quite resolve the question of how to produce the kind of unique artifacts that the art world requires. Certainly the old Conceptual Art strategy of selling ‘documentation’ would not do. The solution was as brilliant as it was simple – to sell etoy shares. The ‘works’ would be ‘share certificates’ – unique objects, whose only value, on the face of it, would be that they referred back to the value of the brand. The inspiration, according to Wishart & Boschler, was David Bowie, ‘the man who sold the world’, who had announced the first rock and roll bond on the London financial markets, backed by future earnings of his back catalogue and publishing rights. Gramazio would end up presenting Chancellor Viktor Klima with the first ‘shares’ at a press conference. “It was a great start for the project”, he said, “A real hack.” (Wishart & Boschler: 142) For this vectoral age, etoy would create the perfect vectoral art. Zai and Brainhard took off next for Pasadena, where they got the idea of reverse-engineering the online etoy.TANKSYSTEM by building an actual tank in an orange shipping container, which would become etoy.TANK 17. This premiered at the San Francisco gallery Blasthaus in June 1998. Instant stars in the small world of San Francisco art, the group began once again to disintegrate. Brainhard and Esposito resigned.
Back in Europe in late 1998, Zai was preparing to graduate from the Vienna Academy of Applied Arts. His final project would recapitulate the life and death of etoy. It would exist from here on only as an online archive, a digital mausoleum. As Kubli says “there was no possibility to earn our living with etoy.” (Wishart & Boschler: 192) Zai emailed eToys.com and asked them if they would like to place a banner ad on etoy.com, to redirect any errant web traffic. Lawyers for eToys.com offered etoy $30,000 for the etoy.com domain name, which the remaining members of etoy – Zai, Gramazio, Kubli – refused. The offer went up to $100,000, which they also refused. Through their lawyer Peter Wild they demanded $750,000. In September 1999, while etoy were making a business presentation as their contribution to Ars Electronica, eToys.com lodged a complaint against etoy in the Los Angeles Superior Court. The company hired Bruce Wessel, of the heavyweight LA law firm Irell & Manella, who specialized in trademark, copyright and other intellectual property litigation. The complaint Wessel drafted alleged that etoy had infringed and diluted the eToys trademark, were practicing unfair competition and had committed “intentional interference with prospective economic damage.” (Wishart & Boschler: 199) Wessel demanded an injunction that would oblige etoy to cease using its trademark and take down its etoy.com website. The complaint also sought to prevent etoy from selling shares, and demanded punitive damages. Displaying the aggressive lawyering for which he was so handsomely paid, Wessel invoked the California Unfair Competition Act, which was meant to protect citizens from fraudulent business scams. Meant as a piece of consumer protection legislation, its sweeping scope made it available for inventive suits such as Wessel’s against etoy. Wessel was able to use pretty much everything from the archive etoy built against it.
As Wishart and Bochsler write, “The court papers were like a delicately curated catalogue of its practices.” (Wishart & Bochsler: 199) And indeed, legal documents in copyright and trademark cases may be the most perfect literature of the vectoral age. The Unfair Competition claim was probably aimed at getting the suit heard in a Californian rather than a Federal court, where intellectual property issues were less frequently litigated. The central aim of the eToys suit was the trademark infringement, but on that head their claims were not all that strong. According to the 1946 Lanham Act, similar trademarks do not infringe upon each other if they are for different kinds of business or in different geographical areas. The Act also says that the right to own a trademark depends on its use. So while etoy had not registered their trademark and eToys had, etoy were actually up and running before eToys, and could base their trademark claim on this fact. The eToys case rested on a somewhat selective reading of the facts. Wessel claimed that etoy was not using its trademark in the US when eToys was registered in 1997. Wessel did not dispute the fact that etoy existed in Europe prior to that time. He asserted that owning the etoy.com domain name was not sufficient to establish a right to the trademark. If the intention of the suit was to bully etoy into giving in, it had quite the opposite effect. It pissed them off. “They felt again like the teenage punks they had once been”, as Wishart & Bochsler put it. Their art imploded in on itself for lack of attention, but called upon by another, it flourished. Wessel and eToys.com unintentionally triggered a dialectic that worked in quite the opposite way to what they intended. The more pressure they put on etoy, the more valued – and valuable – they felt etoy to be. Conceptual business, like conceptual art, is about nothing but the management of signs within the constraints of given institutional forms of market.
That this conflict was about nothing made it a conflict about everything. It was a perfectly vectoral struggle. Zai and Gramazio flew to the US to fire up enthusiasm for their cause. They asked Wolfgang Staehle of The Thing to register the domain toywar.com, as a space for anti-eToys activities at some remove from etoy.com, and as a safe haven should eToys prevail with their injunction in having etoy.com taken down. The etoy defense was handled by Marcia Ballard in New York and Robert Freimuth in Los Angeles. In their defense, they argued that etoy had existed since 1994, had registered its globally accessible domain in 1995, and won an international art prize in 1996. To counter a claim by eToys that they had a prior trademark claim because they had bought a trademark from another company that went back to 1990, Ballard and Freimuth argued that this particular trademark only applied to the importation of toys from the previous owner’s New York base and thus had no relevance. They capped their argument by charging that eToys had not shown that its customers were really confused by the existence of etoy. With Christmas looming, eToys wanted a quick settlement, so they offered Zurich-based etoy lawyer Peter Wild $160,000 in shares and cash for the etoy domain. Kubli was prepared to negotiate, but Zai and Gramazio wanted to gamble – and raise the stakes. As Zai recalls: “We did not want to be just the victims; that would have been cheap. We wanted to be giants too.” (Wishart & Bochsler: 207) They refused the offer. The case was heard in November 1999 before Judge Rafeedie in the Federal Court. Freimuth, for etoy, argued that Federal Court was the right place for what was essentially a trademark matter. Robert Kleiger, for eToys, countered that it should stay where it was because of the claims under the California Unfair Competition Act. Judge Rafeedie took little time in agreeing with the eToys lawyer. Wessel’s strategy paid off and eToys won the first skirmish.
The first round of a quite different kind of conflict opened when etoy sent out their first ‘toywar’ mass mailing, drawing the attention of the net.art, activism and theory crowd to these events. This drew a report from Felix Stalder in Telepolis: “Fences are going up everywhere, molding what once seemed infinite space into an overcrowded and tightly controlled strip mall.” (Stalder) The positive feedback from the net only emboldened etoy. For the Los Angeles court, lawyers for etoy filed papers arguing that the sale of ‘shares’ in etoy was not really a stock offering. “The etoy.com website is not about commerce per se, it is about artist and social protest”, they argued. (Wishart & Bochsler: 209) They were obliged, in other words, to assert a difference that the art itself had intended to blur in order to escape eToys’ claims under the Unfair Competition Act. Moreover, etoy argued that there was no evidence of a victim. Nobody was claiming to have been fooled by etoy into buying something under false pretences. Ironically enough, art would turn out in hindsight to be a more straightforward transaction here, involving less simulation or dissimulation, than investing in a dot.com. Perhaps we have reached the age when art makes more, not less, claim than business to the rhetorical figure of ‘reality’. Having defended what appeared to be the vulnerable point under the Unfair Competition law, etoy went on the attack. It was the failure of eToys to do a proper search for other trademarks that created the problem in the first place. Meanwhile, in Federal Court, lawyers for etoy launched a counter-suit that reversed the claims against them made by eToys on the trademark question. While the suits and counter suits flew, eToys.com upped their offer to settle to a package of cash and shares worth $400,000. This rather puzzled the etoy lawyers. Those choosing to sue don’t usually try at the same time to settle.
Lawyer Peter Wild advised his clients to take the money, but the parallel tactics of eToys.com only encouraged them to dig in their heels. “We felt that this was a tremendous final project for etoy”, says Gramazio. As Zai says, “eToys was our ideal enemy – we were its worst enemy.” (Wishart & Bochsler: 210) Zai reported the offer to the net in another mass mail. Most people advised them to take the money, including Doug Rushkoff and Heath Bunting. Paul Garrin counseled fighting on. The etoy agents offered to settle for $750,000. The case came to court in late November 1999 before Judge Shook. The Judge accepted the plausibility of the eToys version of the facts on the trademark issue, which included the purchase of a registered trademark from another company that went back to 1990. He issued an injunction on their behalf, and added in his statement that he was worried about “the great danger of children being exposed to profane and hardcore pornographic issues on the computer.” (Wishart & Bochsler: 222) The injunction was all eToys needed to get Network Solutions to shut down the etoy.com domain. Zai sent out a press release in early December, which percolated through Slashdot, rhizome, nettime (Staehle) and many other networks, and catalyzed the net community into action. A debate of sorts started on investor websites such as fool.com. The eToys stock price started to slide, and etoy ‘warriors’ felt free to take the credit for it. The story made the New York Times on 9th December, Washington Post on the 10th, Wired News on the 11th. Network Solutions finally removed the etoy.com domain on the 10th December. Zai responded with a press release: “this is robbery of digital territory, American imperialism, corporate destruction and bulldozing in the way of the 19th century.” (Wishart & Bochsler: 237) RTMark set up a campaign fund for toywar, managed by Survival Research Laboratories’ Mark Pauline.
The RTMark press release promised a “new internet ‘game’ designed to destroy eToys.com.” (Wishart & Bochsler: 239) The release grabbed the attention of the Associated Press newswire. The eToys.com share price actually rose on December 13th. Goldman Sachs’ e-commerce analyst Anthony Noto argued that the previous declines in the eToys share price made it a good buy. Goldman Sachs was the lead underwriter of the eToys IPO. Noto’s writings may have been nothing more than the usual ‘IPOetry’ of the time, but the crash of the internet bubble was some months away yet. The RTMark campaign was called ‘The Twelve Days of Christmas’. It used the Floodnet technique that Ricardo Dominguez used in support of the Zapatistas. As Dominguez said, “this hysterical power-play perfectly demonstrates the intentions of the new net elite; to turn the World Wide Web into their own private home-shopping network.” (Wishart & Bochsler: 242) The Floodnet attack may have slowed the eToys.com server down a bit, but it was robust and didn’t crash. Ironically, it ran on open source software. Dominguez claims that the ‘Twelve Days’ campaign, which relied on individuals manually launching Floodnet from their own computers, was not designed to destroy the eToys site, but to make a protest felt. “We had a single-bullet script that could have taken down eToys – a tactical nuke, if you will. But we felt this script did not represent the presence of a global group of people gathered to bear witness to a wrong.” (Wishart & Bochsler: 245) While the eToys engineers did what they could to keep the site going, eToys also approached universities and businesses whose systems were being used to host Floodnet attacks. The Thing, which hosted Dominguez’s eToys Floodnet site, was taken offline by its ISP, Verio. After taking down the Floodnet scripts, The Thing was back up, restoring service to the 200-odd websites that The Thing hosted besides the offending Floodnet site.
About 200 people gathered on December 20th at a demonstration against eToys outside the Museum of Modern Art. Among the crowd were Santas bearing signs that said ‘Coal for eToys’. The rally, inside the Museum, was led by the Reverend Billy of the Church of Stop Shopping: “We are drowning in a sea of identical details”, he said. (Wishart & Bochsler: 249-250) Meanwhile etoy worked on the Toywar Platform, an online agitpop theater spectacle, in which participants could act as soldiers in the toywar. This would take some time to complete – ironically the dispute threatened to end before this last etoy artwork was ready, giving etoy further incentive to keep the dispute alive. The etoy agents had a new lawyer, Chris Truax, who was attracted to the case by the publicity it was generating. Through Truax, etoy offered to sell the etoy domain and trademark for $3.7 million. This may sound like an insane sum, but to put it in perspective, the business.com site changed hands for $7.5 million around this time. On December 29th, Wessel signaled that eToys was prepared to compromise. The problem was, the Toywar Platform was not quite ready, so etoy did what it could to drag out the negotiations. The site went live just before the scheduled court hearings, January 10th 2000. “TOYWAR.com is a place where all servers and all involved people melt and build a living system. In our eyes it is the best way to express and document what’s going on at the moment: people start to think about new ways to fight for their ideas, their lifestyle, contemporary culture and power relations.” (Wishart & Bochsler: 263) Meanwhile, in a California courtroom, Truax demanded that Network Solutions restore the etoy domain, that eToys pay the etoy legal expenses, and that the case be dropped without prejudice. No settlement was reached. Negotiations dragged on for another two weeks, with the etoy agents’ attention somewhat divided between two horizons – art and law. The dispute was settled on 25th January.
Both parties dismissed their complaints without prejudice. The eToys company would pay the etoy artists $40,000 for legal costs, and contact Network Solutions to reinstate the etoy domain. “It was a pleasure doing business with one of the biggest e-commerce giants in the world” ran the etoy press release. (Wishart & Bochsler: 265) That would make a charming end to the story. But what goes around comes around. Brainhard, still pissed off with Zai after leaving the group in San Francisco, filed for the etoy trademark in Austria. After that the internal etoy wranglings just get boring. But it was fun while it lasted. What etoy grasped intuitively was the nexus between the internet as a cultural space and the transformation of the commodity economy in a yet more abstract direction – its becoming-vectoral. They zeroed in on the heart of the new era of conceptual business – the brand. As Wittgenstein says of language, what gives words meaning is other words; so too for brands. What gives brands meaning is other brands. There is a syntax for brands as there is for words. What etoy discovered is how to insert a new brand into that syntax. The place of eToys as a brand depended on their business competition with other brands – with Toys ‘R’ Us, for example. For etoy, the syntax they discovered for relating their brand to another one was a legal opposition. What made etoy interesting was their lack of moral posturing. Their abandonment of leftist rhetorics opened them up to exploring the territory where media and business meet, but it also made them vulnerable to being consumed by the very dialectic that created the possibility of staging etoy in the first place. By abandoning obsolete political strategies, they discovered a media tactic, which collapsed for want of a new strategy, for the new vectoral terrain on which we find ourselves.

Works Cited

Negri, Antonio. Time for Revolution. Continuum, London, 2003.

Warhol, Andy. From A to B and Back Again.
Picador, New York, 1984.

Stalder, Felix. “Fences in Cyberspace: Recent Events in the Battle over Domain Names.” 19 June 2003 <http://felix.openflows.org/html/fences.php>.

Wark, McKenzie. “A Hacker Manifesto [version 4.0].” 19 June 2003 <http://subsol.c3.hu/subsol_2/contributors0/warktext.html>.

Klein, Naomi. No Logo. Harper Collins, London, 2000.

Wishart, Adam, and Regula Bochsler. Leaving Reality Behind: etoy vs eToys.com & Other Battles to Control Cyberspace. Ecco Books, 2003.

Staehle, Wolfgang. “<nettime> etoy.com shut down by US court.” 19 June 2003 <http://amsterdam.nettime.org/Lists-Archives/nettime-l-9912/msg00005.html>.

Citation reference for this article

MLA Style
Wark, McKenzie. “Toywars.” M/C: A Journal of Media and Culture 6 (2003). <http://www.media-culture.org.au/0306/02-toywars.php>.

APA Style
Wark, M. (2003, Jun 19). Toywars. M/C: A Journal of Media and Culture, 6. <http://www.media-culture.org.au/0306/02-toywars.php>.
APA, Harvard, Vancouver, ISO, and other styles
31

Saxena, Abhinav, Jose Celaya, Bhaskar Saha, Sankalita Saha, and Kai Goebel. "Metrics for Offline Evaluation of Prognostic Performance." International Journal of Prognostics and Health Management 1, no. 1 (March 22, 2021). http://dx.doi.org/10.36001/ijphm.2010.v1i1.1336.

Full text
Abstract:
Prognostic performance evaluation has gained significant attention in the past few years. Currently, prognostics concepts lack standard definitions and suffer from ambiguous and inconsistent interpretations. This lack of standards is in part due to the varied end-user requirements for different applications, time scales, available information, domain dynamics, and other factors. The research community has used a variety of metrics largely based on convenience and their respective requirements. Very little attention has been focused on establishing a standardized approach to compare different efforts. This paper presents several new evaluation metrics tailored for prognostics that were recently introduced and were shown to effectively evaluate various algorithms as compared to other conventional metrics. Specifically, this paper presents a detailed discussion on how these metrics should be interpreted and used. These metrics have the capability of incorporating probabilistic uncertainty estimates from prognostic algorithms. In addition to quantitative assessment they also offer a comprehensive visual perspective that can be used in designing the prognostic system. Several methods are suggested to customize these metrics for different applications. Guidelines are provided to help choose one method over another based on distribution characteristics. Various issues faced by prognostics and its performance evaluation are discussed followed by a formal notational framework to help standardize subsequent developments.
32

Nguyen, Nhung, Roselyn Gabud, and Sophia Ananiadou. "COPIOUS: A gold standard corpus of named entities towards extracting species occurrence from biodiversity literature." Biodiversity Data Journal 7 (January 22, 2019). http://dx.doi.org/10.3897/bdj.7.e29626.

Full text
Abstract:
Background Species occurrence records are very important in the biodiversity domain. While several available corpora contain only annotations of species names or habitats and geographical locations, there is no consolidated corpus that covers all types of entities necessary for extracting species occurrence from biodiversity literature. In order to alleviate this issue, we have constructed the COPIOUS corpus—a gold standard corpus that covers a wide range of biodiversity entities. Results Two annotators manually annotated the corpus with five categories of entities, i.e. taxon names, geographical locations, habitats, temporal expressions and person names. The overall inter-annotator agreement on 200 doubly-annotated documents is approximately 81.86% F-score. Amongst the five categories, the agreement on habitat entities was the lowest, indicating that this type of entity is complex. The COPIOUS corpus consists of 668 documents downloaded from the Biodiversity Heritage Library with over 26K sentences and more than 28K entities. Named entity recognisers trained on the corpus could achieve an F-score of 74.58%. Moreover, in recognising taxon names, our model performed better than two available tools in the biodiversity domain, namely the SPECIES tagger and Global Names Recognition and Discovery. More than 1,600 binary relations of Taxon-Habitat, Taxon-Person, Taxon-Geographical locations and Taxon-Temporal expressions were identified by applying a pattern-based relation extraction system to the gold standard. Based on the extracted relations, we can produce a knowledge repository of species occurrences. Conclusion The paper describes in detail the construction of a gold standard named entity corpus for the biodiversity domain. An investigation of the performance of named entity recognition (NER) tools trained on the gold standard revealed that the corpus is sufficiently reliable and sizeable for both training and evaluation purposes.
The corpus can be further used for relation extraction to locate species occurrences in literature—a useful task for monitoring species distribution and preserving the biodiversity.
33

Bates, Samantha, John Bowers, Shane Greenstein, Jordi Weinstock, Yunhan Xu, and Jonathan Zittrain. "Evidence of Decreasing Internet Entropy: The Lack of Redundancy in DNS Resolution by Major Websites and Services." Journal of Quantitative Description: Digital Media 1 (April 26, 2021). http://dx.doi.org/10.51685/jqd.2021.011.

Full text
Abstract:
The Internet, and the Web built on top of it, were intended to support an “entropic” physical and logical network map (Zittrain, 2013). That is, they have been designed to allow servers to be spread anywhere in the world in an ad hoc and evolving fashion, rather than a centralized one. Arbitrary distance among, and number of, servers causes no particular architectural problems, and indeed ensures that problems experienced by one data source remain unlinked to others. A Web page can be assembled from any number of upstream sources, through the use of various URLs, each pointing to a different location. To a user, the page looks unified. Over time, however, there are signs that the hosting and finding of Internet services has become more centralized. We explore and document one possible dimension of this centralization. We analyze the extent to which the Internet’s global domain name resolution (DNS) system has preserved its distributed resilience given the rise of cloud-based hosting and infrastructure. We offer evidence of the dramatic concentration of the DNS hosting market in the hands of a small number of cloud service providers over a period spanning from 2011-2018. In addition, we examine changes in domains’ tendency to “diversify” their pool of nameservers – how frequently domains employ DNS management services from multiple providers rather than just one provider. Throughout the paper, we use the catastrophic October 2016 attack on Dyn, a major DNS hosting provider, to illustrate the cybersecurity consequences of our analysis.
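The "diversification" measure described in this abstract can be sketched in a few lines: a domain counts as diversified when its NS records span more than one DNS hosting provider. The sketch below is illustrative only — the domain and provider names are invented, and reducing a nameserver hostname to its last two labels is a simplification, not the paper's actual provider-classification method.

```python
# Toy sketch of the nameserver "diversification" measure discussed in the
# abstract. All domain and provider names below are invented examples.

def provider_of(nameserver: str) -> str:
    """Approximate a nameserver's hosting provider by its registrable
    domain (last two DNS labels) -- a simplification for illustration."""
    return ".".join(nameserver.rstrip(".").split(".")[-2:])

def diversification_rate(ns_records: dict) -> float:
    """Fraction of domains whose NS records span more than one provider."""
    diversified = sum(
        1
        for servers in ns_records.values()
        if len({provider_of(s) for s in servers}) > 1
    )
    return diversified / len(ns_records)

if __name__ == "__main__":
    sample = {
        "example-a.com": ["ns1.provider-x.net", "ns2.provider-x.net"],  # one provider
        "example-b.com": ["ns1.provider-x.net", "ns1.provider-y.net"],  # two providers
        "example-c.com": ["ns1.provider-z.net", "ns2.provider-z.net"],  # one provider
    }
    # One of the three domains uses multiple providers.
    print(round(diversification_rate(sample), 3))  # prints 0.333
```

The paper's complementary concentration finding would amount to the inverse view of the same data: aggregating domains by provider and tracking the largest providers' shares over time.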
34

Liljeblad, Johan, Tapani Lahti, and Matts Djos. "Linked Data Tools for Managing Taxonomic Databases." Biodiversity Information Science and Standards 3 (June 21, 2019). http://dx.doi.org/10.3897/biss.3.37329.

Full text
Abstract:
Taxonomic information is dynamic, i.e. changes are made continuously, so scientific names are insufficient to track changes in taxon circumscription. The principles of Linked Open Data (LOD), as defined by the World Wide Web Consortium, can be applied for documenting the relationships of taxon circumscriptions over time and between checklists of taxa. In our scheme, each checklist and each taxon in the checklist is assigned a globally unique, persistent identifier. According to the LOD principles, HTTP Uniform Resource Identifiers (URIs) are used as identifiers, providing both human-readable (HTML) and machine-readable (XML) responses for client requests. Common vocabularies are needed in machine-readable responses to HTTP URIs. We use SKOS (Simple Knowledge Organization System) as a basic vocabulary for describing checklists as instances of class skos:ConceptScheme, and taxa as instances of class skos:Concept. Set relationships between taxon circumscriptions are described using the properties skos:broader and skos:narrower. Darwin Core vocabulary is used for describing taxon properties, such as scientific names, taxonomic ranks and authorship string, in the checklists. Instead of directly linking taxon circumscriptions between checklists, we define a HTTP URI for each unique circumscription. This common identifier is then mapped to internal checklist identifiers matching the circumscription using the property skos:exactMatch. For the management of the URIs, the domain name TAXONID.ORG has been registered. In a pilot study, our approach has been applied to linking taxon circumscriptions of selected taxa between the national checklists of Sweden and Finland. In the future, national checklists from other Nordic/Baltic countries (Norway, Denmark, Iceland, Estonia) can be easily linked together as well. The work is part of the NeIC DeepDive project (neic.no).
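The indirection at the heart of this scheme — mapping each checklist-local identifier to a shared circumscription URI via skos:exactMatch, rather than linking checklists to each other directly — can be sketched with plain data structures. All checklist names, local identifiers and URI paths below are invented for illustration; they are not drawn from the actual Swedish or Finnish checklists.

```python
# Sketch of the linking scheme described above: each (checklist, local id)
# pair maps via skos:exactMatch to a shared circumscription URI, so two
# national checklist entries denote the same taxon circumscription exactly
# when they resolve to the same URI. All identifiers here are invented.

EXACT_MATCH = {
    ("checklist-SE", "taxon/1001"): "http://taxonid.org/c/ursus-arctos-a",
    ("checklist-FI", "taxon/2042"): "http://taxonid.org/c/ursus-arctos-a",
    ("checklist-FI", "taxon/2077"): "http://taxonid.org/c/lutra-lutra-b",
}

def same_circumscription(entry_a, entry_b):
    """True iff both entries map to the same shared circumscription URI."""
    uri_a = EXACT_MATCH.get(entry_a)
    return uri_a is not None and uri_a == EXACT_MATCH.get(entry_b)

def entries_for(uri):
    """All checklist entries mapped to a given circumscription URI."""
    return sorted(e for e, u in EXACT_MATCH.items() if u == uri)

if __name__ == "__main__":
    print(same_circumscription(("checklist-SE", "taxon/1001"),
                               ("checklist-FI", "taxon/2042")))  # prints True
```

This indirection is what makes the scheme easy to extend: adding another national checklist only requires new skos:exactMatch mappings to existing (or newly minted) circumscription URIs, with no pairwise links between checklists.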
35

Wesch, Michael. "Creating "Kantri" in Central New Guinea: Relational Ontology and the Categorical Logic of Statecraft." M/C Journal 11, no. 5 (August 21, 2008). http://dx.doi.org/10.5204/mcj.67.

Full text
Abstract:
Since their first encounter with colonial administrators in 1963, approximately 2,000 indigenous people living in the Nimakot region of central New Guinea have been struggling with a tension between their indigenous way of life and the imperatives of the state. It is not just that they are on the international border between Papua New Guinea and Indonesia and therefore difficult to categorise into this or that country. It is that they do not habitually conceptualise themselves and others in categorical terms. They value and focus on relationships rather than categories. In their struggle to adapt the blooming buzzing complexities of their semi-nomadic lifestyle and relational logic to the strict and apparently static lines, grids, and coordinates of rationalistic statecraft they have become torn by duelling conceptions of “kantri” itself (Melanesian Tok Pisin for “country”). On the one hand, kantri invokes an unbroken rural landscape rich with personal and cultural memories that establish a firm and deep relationship with the land and the ancestors. Such a notion fits easily with local conceptions of kinship and land tenure. On the other hand, kantri is a bounded object, part of an often frustrating and mystifying system of categorization imposed by strict and rationalist mechanisms of statecraft. The following analyses this tension based on 22 months of intensive and intimate participant observation in the region from 1999-2006 with a special focus on the uses and impacts of writing and other new communication technologies. The categorical bias of statecraft is enabled, fostered, extended, and maintained by the technology of writing. Statecraft seeks (or makes) categories that are ideally stable, permanent, non-negotiable, and fit for the relative fixity of print, while the relationships emphasised by people of Nimakot are fluid, temporary, negotiable, contested and ambiguous. 
In contrast to the engaged, pragmatic, and personal view one finds in face-to-face relationships on the ground, the state’s knowledge of the local is ultimately mediated by what can be written into abstract categories that can be listed, counted and aggregated, producing a synoptic, distanced, and decontextualising perspective. By simplifying the cacophonous blooming buzzing complexities of life into legible categories, regularities, and rules, the pen and paper become both the eyes and the voice of the state (Scott 2). Even the writing of this paper is difficult. Many sentences would be easier to write if I could just name the group I am discussing. But the group of people I am writing about have no clear and uncontested name for themselves. More importantly, they do not traditionally think of themselves as a “group,” nor do they habitually conceptualise others in terms of bounded groups of individuals. The biggest challenge to statecraft’s attempts to create a sense of “country” here is the fact that most local people do not subjectively think of themselves in categorical terms. They do not imagine themselves to be part of “adjacent and competitive empires” (Strathern 102). This “group” is most widely known as the western “Atbalmin” though the name is not an indigenous term. “Atbalmin” is a word used by the neighbouring Telefol that means “people of the trees.” It was adopted by early patrol officers who were accompanied by Telefol translators. As these early patrols made their way through the “Atbalmin” region from east to west they frequently complained about names and their inability to pin or pen them down. Tribal names, clan names, even personal names seemed to change with each asking. While such flexibility and flux were perfectly at home in an oral face-to-face environment, it wasn’t suitable for the colonial administrators’ relatively fixed and static books. 
The “mysterious Kufelmin” (as the patrol reports refer to them) were even more frustrating for early colonial officers. Patrols heading west from Telefomin searched for decades for this mysterious group and never found them. To this day nobody has ever set foot in a Kufelmin village. In each valley heading west patrols were told that the Kufelmin were in the next valley to the west. But the Kufelmin were never there. They were always one more valley to the west. The problem was that the administrators wrongly assumed “Kufelmin” to be a tribal name as stable and categorical as the forms and maps they were using would accept. Kufelmin simply means “those people to the West.” It is a relational term, not a categorical one. The administration’s first contact with the people of Nimakot exposed even more fundamental differences and specific tensions between the local relational logic and the categorical bias of statecraft. Australian patrol officer JR McArthur crested the mountain overlooking Nimakot at precisely 1027 hours on 16 August 1963, a fact he dutifully recorded in his notebook (Telefomin Patrol Report 12 of 1962/63). He then proceeded down the mountain with pen and paper in hand, recording the precise moment he crossed the Sunim creek (1109 hours), came to Sunimbil (1117 hours), and likewise on and on to his final destination near the base of the present-day airstrip. Such recordings of precise times and locations were central to McArthur's main goal. Amidst the steep mountains painted with lush green gardens, sparkling waterfalls, and towering virgin rainforests McArthur busied himself examining maps and aerial photographs searching for the region’s most impressive, imposing, and yet altogether invisible feature: the 141st Meridian East of Greenwich, the international border. 
McArthur saw his work as one of fixing boundaries, taking names, and extending the great taxonomic system of statecraft that would ultimately “rationalise” and order even this remote corner of the globe. When he came to the conclusion that he had inadvertently stepped outside his rightful domain he promptly left, noting in his report that he purchased a pig just before leaving. The local understanding of this event is very different. While McArthur was busy making and obeying categories, the people of Nimakot were primarily concerned with making relationships. In this case, they hoped to create a relationship through which valuable goods, the likes of which they had never seen, would flow. The pig mentioned in McArthur's report was not meant to be bought or sold, but as a gift signifying the beginning of what locals hoped would be a long relationship. When McArthur insisted on paying for it and then promptly left with a promise that he would never return, locals interpreted his actions as an accusation of witchcraft. Witchcraft is the most visible and dramatic aspect of the local relational logic of being, what might be termed a relational ontology. Marilyn Strathern describes this ontology as being as much “dividual” as individual, pointing out that Melanesians tend to conceptualise themselves as defined and constituted by social relationships rather than independent from them (102). The person is conceptualised as socially and collectively constituted rather than individuated. A person’s strength, health, intelligence, disposition, and behaviour depend on the strength and nature of one’s relationships (Knauft 26). The impacts of this relational ontology on local life are far reaching. Unconditional kindness and sharing are constantly required to maintain healthy relations because unhealthy relations are understood to be the direct cause of sickness, infertility, and death. 
Where such misfortunes do befall someone, their explanations are sought in a complex calculus examining relational histories. Whoever has a bad relation with the victim is blamed for their misfortune. Modernists disparage such ideas as “witchcraft beliefs” but witchcraft accusations are just a small part of a much more pervasive, rich, and logical relational ontology in which the health and well-being of relations are conceptualised as influencing the health and well-being of things and people. Because of this logic, people of Nimakot are relationship experts who navigate the complex relational field with remarkable subtleness and tact. But even they cannot maintain the unconditional kindness and sharing that is required of them when their social world grows too large and complex. A village rarely grows to over 50 people before tensions lead to an irresolvable witchcraft accusation and the village splits up. In this way, the continuous negotiations inspired by the relational ontology lead to constant movement, changing of names, and shifting clan affiliations – nothing that fits very well on a static map or a few categories in a book. Over the past 45 years since McArthur first brought the mechanisms of statecraft into Nimakot, the tensions between this local relational ontology and the categorical logic of the state have never been resolved. One might think that a synthesis of the two forms would have emerged. Instead, to this day, all that becomes new is the form through which the tensions are expressed and the ways in which the tensions are exacerbated. The international border has been and continues to be the primary catalyst for these tensions to express themselves. As it turns out, McArthur had miscalculated. He had not crossed the international border before coming to Nimakot. 
It was later determined that the border runs right through the middle of Nimakot, inspiring one young local man to describe it to me as “that great red mark that cuts us right through the heart.” The McArthur encounter was a harbinger of what was to come; a battle for kantri as unbounded connected landscape, and a battle with kantri as a binding categorical system, set against a backdrop of witchcraft imagery. Locals soon learned the importance of the map and census for receiving state funds for construction projects, education, health care, and other amenities. In the early 1970s a charismatic local man convinced others to move into one large village called Tumolbil. The large population literally put Tumolbil “on the map,” dramatically increasing its visibility to government and foreign aid. Drawn by the large population, an airstrip, school, and aid post were built in the late 1970s and early 1980s. Locally this process is known as “namba tok,” meaning that “numbers (population, statistics, etc.) talk” to the state. The greater the number, the stronger the voice, so locals are now intent on creating large stable villages that are visible to the state and in line for services and development projects. Yet their way of life and relational cultural logics continue to betray their efforts to create such villages. Most people still navigate the complexities of their social relations by living in small, scattered, semi-nomadic hamlets. Even as young local men trained in Western schools become government officials in charge of the maps and census books themselves, they are finding that they are frustrated by the same characteristics of life that once frustrated colonial administrators. The tensions between the local relational ontology and the categorical imperatives of the state come to rest squarely on the shoulders of these young men. 
They want large stable villages that will produce a large number in the census book in order to bring development projects to their land. More importantly, they recognise that half of their land rests precariously west of that magical 141st Meridian. A clearly defined and distinct place on the map along with a solid number of names in the census book, have become essential to assuring their continued connection with their kantri. On several occasions they have felt threatened by the possibility that they would have to either abandon the land west of the meridian or become citizens of Indonesia. The first option threatens their sense of kantri as connection to their traditional land. The other violates their new found sense of kantri as nationalistic pride in the independent state of Papua New Guinea. In an attempt to resolve these increasingly pressing tensions, the officers designed “Operation Clean and Sweep” in 2003 – a plan to move people out of their small scattered hamlets and into one of twelve larger villages that had been recognised by Papua New Guinea in previous census and mapping exercises. After sending notice to hamlet residents, an operation team of over one hundred men marched throughout Nimakot, burning each hamlet along the way. Before each burning, officers gave a speech peppered with the phrase “namba tok.” Most people listened to the speeches with enthusiasm, often expressing their own eagerness to leave their hamlet behind to live in a large orderly village. In one hamlet they asked me to take a photo of them in front of their houses just before they cheerfully allowed government officers to enter their homes and light the thatch of their rooftops. “Finally,” the officer in charge exclaimed triumphantly, “we can put people where their names are.” If the tension between local relational logics and the categorical imperatives of the state had been only superficial, perhaps this plan would have ultimately resolved the tension. 
But the tension is not only expressed objectively in the need for large stable villages, but subjectively as well, in the state’s need for people to orient themselves primarily as citizens and individuals, doing what is best for the country as a categorical group rather than acting as relational “dividuals” and orienting their lives primarily towards the demands of kinship and other relations. This tension has been recognised in other contexts as well, and theorised in Craig Calhoun’s study of nationalism in which he marks out two related distinctions: “between networks of social relationships and categories of similar individuals, and between reproduction through directly interpersonal interactions and reproduction through the mediation of relatively impersonal agencies of large-scale cultural standardization and social organization” (29). The former in both of these distinctions make up the essential components of relational ontology, while the latter describe the mechanisms and logic of statecraft. To describe the form of personhood implicit in nationalism, Calhoun introduces the term “categorical identity” to designate “identification by similarity of attributes as a member of a set of equivalent members” (42). While locals are quick to understand the power of categorical entities in the cultural process of statecraft and therefore have eagerly created large villages on a number of occasions in order to “game” the state system, they do not readily assume a categorical identity, an identity with these categories, and the villages have consistently disintegrated over time due to relational tensions and witchcraft accusations born from the local relational ontology. Operation Clean and Sweep reached its crisis moment just two days after the burnings began. 
An influential man from one of the unmapped hamlets scheduled for burning came to the officers complaining that he would not move to the large government village because he would have to live too close to people who had bewitched and killed members of his family. Others echoed his fears of witchcraft in the large government villages. The drive for a categorical order came head to head with the local relational ontology. Moving people into large government villages and administering a peaceful, orderly, lawful society of citizens (a categorical identity) would take much more than eliminating hamlets and forced migration. It would require a complete transformation in their sense of being – a transformation that even the officers themselves have not fully undertaken. The officers did not see the relational ontology as the problem. They saw witchcraft as the problem. They announced plans to eradicate witchcraft altogether. For three months, witchcraft suspects were apprehended, interrogated, and asked to list names of other witches. With each interrogation, the list of witches grew longer and longer. The interrogations were violent at times, but not as violent or as devastating as the list itself. The violence of the list hid behind its simple elegance. Like a census book, it had a mystique of orderliness and rationality. It stripped away the ugliness and complexity of interrogations leaving nothing but pure categorical knowledge. In the interrogation room, the list became a powerful tool the officer in charge used to intimidate his suspects. He often began by reading from the list, as if to say, “we already have you right here.” But one might say it was the officer who was really trapped in the list. It ensnared him in its simple elegance, its clean straight lines and clear categories. He was not using the list as much as the list was using him. Traditionally it was not the witch that was of concern, but the act of witchcraft itself. 
If the relationship could be healed – thereby healing the victim – all was forgiven. The list transformed the accused from temporary, situational, and indefinite witches involved in local relational disputes to permanent, categorical witches in violation of state law. Traditional ways of dealing with witchcraft focused on healing relationships. The print culture of the state focuses on punishing the categorically “guilty” categorical individual. They were “sentenced” “by the book.” As an outsider, I was simply thought to be naïve about the workings of witchcraft. My protests were ignored (see Wesch). Ultimately it ended because making a list of witches proved to be even more difficult than making a list for the census. Along with the familiar challenges of shifting names and affiliations, the witch list made its own enemies. The moment somebody was listed all of their relations ceased recognising the list and those making it as authoritative. In the end, the same tensions that motivated Operation Clean and Sweep were only reproduced by the efforts to resolve them. The tensions demonstrated themselves to be more tenacious than anticipated, grounded as they are in pervasive self-sustaining cultural systems that do not overlap in a way that is significant enough to threaten their mutual existence. The relational ontology is embedded in rich and enduring local histories of gift exchange, marriage, birth, death, and conflict. Statecraft is embedded in a broader system of power, hierarchy, deadlines, roles, and rules. They are not simply matters of belief. In this way, the focus on witches and witchcraft could never resolve the tensions. 
Instead, the movement only exacerbated the relational tensions that inspire, extend, and maintain witchcraft beliefs, and once again people found themselves living in small, scattered hamlets, wishing they could somehow come together to live in large prosperous villages so their population numbers would be great enough to “talk” to the state, bringing in valuable services, and more importantly, securing their land and citizenship with Papua New Guinea. It is in this context that “kantri” not only embodies the tensions between local ways of life and the imperatives of the state, but also the persistent hope for resolution, and the haunting memories of previous failures.

References

Calhoun, Craig. Nationalism. Open UP, 1997.
Knauft, Bruce. From Primitive to Postcolonial in Melanesia and Anthropology. Ann Arbor: U Michigan P, 1999.
McArthur, J.R. Telefomin Patrol Report 12 of 1962/63.
Scott, James. Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed. Yale UP, 1998.
Strathern, Marilyn. The Gender of the Gift. U California P, 1988.
Wesch, Michael. “A Witch Hunt in New Guinea: Anthropology on Trial.” Anthropology and Humanism 32.1 (2007): 4-17.
36

Sachs, Joel, Jocelyn Pender, Beatriz Lujan-Toro, James Macklin, Peter Haase, and Robin Malik. "Using Wikidata and Metaphactory to Underpin an Integrated Flora of Canada." Biodiversity Information Science and Standards 3 (August 8, 2019). http://dx.doi.org/10.3897/biss.3.38627.

Full text
Abstract:
We are using Wikidata and Metaphactory to build an Integrated Flora of Canada (IFC). IFC will be integrated in two senses: First, it will draw on multiple existing floras (e.g. Flora of North America, Flora of Manitoba, etc.) for content. Second, it will be a portal to related resources such as annotations, specimens, literature, and sequence data.

Background

We had success using Semantic MediaWiki (SMW) as the platform for an on-line representation of the Flora of North America (FNA). We used Charaparser (Cui 2012) to extract plant structures (e.g. “stem”), characters (e.g. “external texture”), and character values (e.g. “glabrous”) from the semi-structured FNA treatments. We then loaded this data into SMW, which allows us to query for taxa based on their character traits, and enables a broad range of exploratory analysis, both for hypothesis generation and to provide support for or against specific scientific hypotheses.

Migrating to Wikidata/Wikibase

We decided to explore a migration from SMW to Wikibase for three main reasons: simplified workflow; triple-level provenance; and sustainability. Simplified workflow: Our workflow for our FNA-based portal includes Natural Language Processing (NLP) of coarse-grained XML to get the fine-grained XML, transforming this XML for input into SMW, and a custom SMW skin for displaying the data. We consider the coarse-grained XML to be canonical. When it changes (because we find an error, or we improve our NLP), we have to re-run the transformation and re-load the data, which is time-consuming. Ideally, our presentation would be based on API calls to the data itself, eliminating the need to transform and re-load after every change. Provenance: Wikidata's provenance model supports having multiple, conflicting assertions for the same character trait, which is something that inevitably happens when floristic data is integrated.
Sustainability: Wikidata has strong support from the Wikimedia Foundation, while SMW is increasingly seen as a legacy system.

Wikibase vs. Wikidata

Wikidata, however, is not a suitable home for the Integrated Flora of Canada. It is built upon a relatively small number of community-curated properties, while we have ~4500 properties for the Asteraceae family alone. The model we want to pursue is to use Wikidata for a small group of core properties (e.g. accepted name, parent taxon, etc.), and to use our own instance of Wikibase for the much larger number of specialized morphological properties (e.g. adaxial leaf colour, leaf external texture, etc.). Essentially, we will be running our own Wikidata, over which we would exercise full control. Miller (2018) describes deploying this curation model in another domain.

Metaphactory

Metaphactory is a suite of middleware and front-end interfaces for authoring, managing, and querying knowledge graphs, including mechanisms for faceted search and geospatial visualizations. It is also the software (together with Blazegraph) behind the Wikidata Query Service. Metaphactory provides us with a SPARQL endpoint; a templating mechanism that allows each taxonomic treatment to be rendered via a collection of SPARQL queries; reasoning capabilities (via an underlying graph database) that permit the organization of over 42,000 morphological properties; and a variety of search and discovery tools. There are a number of ways in which Wikidata and Metaphactory can work together, and we are still exploring open questions: Will provenance be managed via named graphs, or via the Wikidata snak model? How will data flow between the two platforms? We will report on our findings to date, and invite collaboration with related Wikimedia-based projects.
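The provenance requirement at the heart of this abstract — keeping multiple, possibly conflicting assertions for the same character trait, each attributable to its source flora — can be illustrated with a minimal data structure. The sketch below is not the actual Wikibase/Wikidata snak model; the class and method names (`TaxonRecord`, `assert_trait`, `values_for`) and the example trait values are illustrative assumptions only.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass(frozen=True)
class Statement:
    """One assertion about a taxon's character trait, plus its source flora."""
    prop: str    # e.g. "stem texture"
    value: str   # e.g. "glabrous"
    source: str  # e.g. "Flora of North America"

@dataclass
class TaxonRecord:
    name: str
    statements: List[Statement] = field(default_factory=list)

    def assert_trait(self, prop: str, value: str, source: str) -> None:
        # Conflicting values are kept side by side rather than overwritten,
        # so each flora's claim stays attributable to its source.
        self.statements.append(Statement(prop, value, source))

    def values_for(self, prop: str) -> List[Tuple[str, str]]:
        # Every (value, source) pair recorded for a property.
        return [(s.value, s.source) for s in self.statements if s.prop == prop]

record = TaxonRecord("Achillea millefolium")
record.assert_trait("stem texture", "glabrous", "Flora of North America")
record.assert_trait("stem texture", "pubescent", "Flora of Manitoba")
print(record.values_for("stem texture"))
# → [('glabrous', 'Flora of North America'), ('pubescent', 'Flora of Manitoba')]
```

The design point this mirrors is that integration across floras never forces a single "winning" value: a downstream query can surface all claims together with their provenance and let the reader (or a later curation pass) adjudicate.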
37

Gaby, Alice, Jonathon Lum, Thomas Poulton, and Jonathan Schlossberg. "What in the World Is North? Translating Cardinal Directions across Languages, Cultures and Environments." M/C Journal 20, no. 6 (December 31, 2017). http://dx.doi.org/10.5204/mcj.1276.

Full text
Abstract:
Introduction

For many, north is an abstract point on a compass, an arrow that tells you which way to hold up a map. Though north is scientifically defined according to the magnetic north pole and/or the earth’s axis of rotation, neither fact is necessarily discernible to the average person. Perhaps for this reason, the Oxford English Dictionary begins with reference to the far more mundane and accessible sun and features of the human body, defining north as: “in the direction of the part of the horizon on the left-hand side of a person facing the rising sun” (OED Online). Indeed, many of the words for ‘north’ around the world are etymologically linked to the left-hand side (for example Cornish clēth ‘north, left’). We shall see later that even in English, many speakers conceptualise ‘north’ in an egocentric way. Other languages define ‘north’ in opposition to an orthogonal east-west axis defined by the sun’s rising and setting points (see, e.g., the extensive survey of Brown).

Etymology aside, however, studies such as Brown’s presume a set of four cardinal directions which are available as primordial ontological categories which may (or may not) be labelled by the languages of the world. If we accept this premise, the fact that a word is translated as ‘north’ is sufficient to understand the direction it describes. There is good reason to reject this premise, however. We present data from three languages among which there is considerable variance in how the words translated as ‘north’ are typically used and understood. These languages are Kuuk Thaayorre (an Australian Aboriginal language spoken on Cape York Peninsula), Marshallese (an Oceanic language spoken in the Republic of the Marshall Islands), and Dhivehi (an Indo-Aryan language spoken in the Maldives).
Lastly, we consider the results of an experiment showing that Australian English speakers tend to interpret the word north according to the orientation of their own bodies and the objects they manipulate, rather than as a cardinal direction as such.

‘North’ in Kuuk Thaayorre

Kuuk Thaayorre is a Pama-Nyungan language spoken on the west coast of Australia’s Cape York Peninsula in the community of Pormpuraaw. The Kuuk Thaayorre words equivalent to north, south, east and west (hereafter, ‘directionals’) are both complex and frequently used. They are complex in the sense that they combine with prefixes and suffixes to form dozens of words which indicate not only the direction involved, but also the degree of distance, whether there is motion from, towards, to a fixed point, or within a bounded area in that location, proximity to the local river, and more. The ubiquity of these words is illustrated by the fact that the most common greeting formula involves one person asking nhunt wanthan pal yan? ‘where are you going’ and the other responding, for example, ngay yuurriparrop yan ‘I’m going a long way southwards towards the river’, or ngay iilungkarruw yan ‘I’m coming from the northwest’. Directional terms are strewn liberally throughout Kuuk Thaayorre speech. They are employed in the description of both large-scale and small-scale spaces, whether giving directions to a far-off town, asking another person to ‘move a little to the north’, or identifying the person ‘to the east’ of another in a photograph. Likewise, directional gestures are highly frequent, sometimes augmenting the information given in the speech stream, sometimes used in the absence of spoken directions, and other times redundantly duplicating the information given by a directional word.

The forms and meanings of directional words are described in detail in Gaby (Gaby 344–52).
At the core of this system are six directional roots referring to the north and south banks of the nearby Edward River as well as two intersecting axes. One of these axes is equivalent to the east-west axis familiar to English speakers, and is defined by the apparent diurnal trajectory of the sun. (At a latitude of 14 degrees 54 minutes south, the Kuuk Thaayorre homeland sees little variation in the location of sunrise and sunset through the year.) While the poles of the second axis are translated by the English terms north and south, from a Western perspective this axis is skewed such that Kuuk Thaayorre -ungkarr ‘~north’ lies approximately 35 degrees west of magnetic north. Rather than being defined by magnetic or polar north, this axis aligns with the local coastline. This is true even when the terms are used at inland locations where there is no visual access to the water or parallel sand ridges. How Kuuk Thaayorre speakers apply this system to environments further removed from this particular stretch of coast—especially in the presence of a differently-oriented coast—remains a topic for future research.

‘North’ in Marshallese

Marshallese is the language of the people of the Marshall Islands, an expansive archipelago consisting of 22 inhabited atolls and three inhabited non-atoll islands located in the Northern Pacific. The Marshallese have a long history as master navigators, a skill necessary to keep strong links between far-flung and disparate islands (Lewis; Genz).

Figure 1: The location of the Marshall Islands

As with other Pacific languages (e.g. Palmer; Ross; François), Marshallese deploys a complex system of geocentric references. Cardinal directions are historically derived from the Pacific trade winds, reflecting the importance of these winds for navigation and wayfinding. The etymologies of the Marshallese directions are shown in Table 1 below. The terms given in this table are in the Ralik dialect, spoken in the western Marshall Islands.
The terms used in the Ratak (eastern) dialect are related, but slightly different in form. See Schlossberg for more detailed discussion. Etymologies originally sourced from Bender et al. and Ross.

Table 1: Marshallese cardinal direction words with etymological source semantics

East: noun form rear (‘calm shore (of islet)’); verb modifier form ta (‘up(wind)’)
West: noun form rilik (‘rough shore (of islet)’); verb modifier form to (‘down(wind)’)
North: noun form iōn̄; verb modifier form nin̄a; both from ‘windy season’; ‘season of northerly winds’
South: noun form rōk; verb modifier form rōn̄a; both from ‘dry season’; ‘season of southerly winds’

As with many other Oceanic languages, Marshallese has three domains of spatial language use: the local domain, the inshore-maritime domain and the navigational domain. Cardinal directions are the sole strategy employed in the navigational domain, which occurs when sailing on the open ocean. In the inshore-maritime domain, which applies when sailing on the ocean or lagoon in sight of land, a land-sea axis is used (the question of whether, in fact, these directions form axes as such is considered further below). Similarly, when walking around an island, a calm side-rough side (of island) axis is employed. In both situations, either the cardinal north-south axis or east-west axis is used to form a secondary cross-axis to the topography-based axis. The cardinal axis parallel to the calm-rough or land-sea axis is rarely used. When the island is not oriented perfectly perpendicular to one of the cardinal axes, the cardinal axes rotate such that they are perpendicular to the primary axis. This can result in the orientation of iōn̄ ‘north’ being quite skewed away from ‘true’ north.
How the cardinal and topographic axes prototypically work is exemplified in Figure 2, which shows Jabor, an islet in Jaluit Atoll in the south-west Marshalls.

Figure 2: The geocentric directional system of Jabor, Jaluit Atoll

While cartographic cardinal directions comprise two perpendicular axes, this is not the case for many Marshallese. The clearest evidence for this is the directional system of Kili Island, a small non-atoll island approximately 50km west of Jaluit Atoll. The directional system of Kili is similar to that of Jabor, with one notable exception: the iōn̄-rōk ‘north-south’ and rear-rilik ‘east-west’ axes are not perpendicular but rather parallel (Figure 3). The rear-rilik axis takes precedence and the iōn̄-rōk axis is rarely used, showing the primacy of the east-west axis on Kili. This is a clear indication that the Western abstraction of crossed cardinal axes is not in play in the Marshall Islands; the iōn̄-rōk and rear-rilik axes can function completely independently of one another.

Figure 3: Geocentric system of spatial reference on Kili

Springdale is a small city in the north-west of the landlocked state of Arkansas. It hosts the largest number of expatriate Marshallese in the United States. Of 26 participants in an object placement task, four respondents were able to correctly identify the four cardinal points (Schlossberg). Aside from some who said they simply did not know, others gave a variety of answers, including that iōn̄, rōk, rilik and rear only exist in the Marshall Islands. Others imagined a canonical orientation derived from their home atoll and transposed this onto their current environment; one person who was facing the front door in their house in Springdale reported that they imagined they were in their house in the Marshall Islands, where when oriented towards the door, they were facing iōn̄ ‘north’, thus deriving an orientation with respect to a Marshallese cardinal direction.
Aside from the four participants who identified the directions correctly, a further six participants responded in a consistent—if incorrect—way, i.e. although the directions were not correctly identified, the responses were consistent with the conceptualisation of crossed cardinal axes, merely that the locations identified were rotated from their true referents. This leaves 16 of the 26 participants (62%) who did not display evidence of having a conceptual system of two crossed cardinal axes.

If one were to point in a direction and say ‘this is north’, most Westerners would easily be able to identify ‘south’ by pointing in the opposite direction. This is not the case with Marshallese speakers, many of whom are unable to do the same if given a Marshallese cardinal direction and asked to name its opposite (cf. Schlossberg). This demonstrates that for many Marshallese, these cardinal terms do not form axes at all, but rather are four unique locally-anchored points.

‘North’ in Dhivehi

Dhivehi is spoken in the Maldives, an archipelago to the southwest of India and Sri Lanka in the Indian Ocean (see Figure 4). Maldivians have a long history of sailing on the open waters, in order to fish and to trade. Traditionally, much of the adult male population would spend long periods of time on such voyages, riding the trade winds and navigating by the stars. For Maldivians, uturu ‘north’ is a direction of safety—the long axis of the Maldivian archipelago runs north to south, and so by sailing north, one has the best possible chance of reaching another island or (eventually) the mainlands of India or Sri Lanka.

Figure 4: Location of the Maldives

It is perhaps unsurprising, then, that many Maldivians are well attuned to the direction denoted by uturu ‘north’, as well as to the other cardinal directions.
In an object placement task performed by 41 participants in Laamu Atoll, 32 participants (78%) correctly placed a plastic block ‘to the north’ (uturaṣ̊) of another block when instructed to do so (Lum). The prompts dekonaṣ̊ ‘to the south’ and huḷangaṣ̊ ‘to the west’ yielded similarly high rates of correct responses, though as many as 37 participants (90%) responded correctly to the prompt iraṣ̊ ‘to the east’—this is perhaps because the term for ‘east’ also means ‘sun’ and is strongly associated with the sunrise, whereas the terms for the other cardinal directions are comparatively opaque. However, the path of the sun is not the only environmental cue that shapes the use of Dhivehi cardinal directions. As in Kuuk Thaayorre and Marshallese, cardinal directions in Dhivehi are often ‘calibrated’ according to the orientation of local coastlines. In Fonadhoo, for example, which is oriented northeast to southwest, the system of cardinal directions is rotated about 45 degrees clockwise: uturu ‘north’ points to what is actually northeast and dekona/dekunu ‘south’ to what is actually southwest (i.e., along the length of the island), while iru/iramati ‘east’ and huḷangu ‘west’ are perpendicular to shore (see Figure 5). However, despite this rotated system being in use, residents of Fonadhoo often comment that these are not the ‘real’ cardinal directions, which are determined by the path of the sun.

Figure 5: Directions in Fonadhoo, Laamu Atoll, Maldives

In addition to the four cardinal directions, Dhivehi possesses four intercardinal directions, which are compound terms: iru-uturu ‘northeast’, iru-dekunu ‘southeast’, huḷangu-uturu ‘northwest’, and huḷangu-dekunu ‘southwest’. Yet even a system of eight compass points is not sufficient for describing directions over long distances, especially on the open sea where there are no landmarks to refer to.
A system of 32 ‘sidereal’ compass directions (see Figure 6), based on the rising and setting points of stars in the night sky, is available for such purposes—for example, simāgu īran̊ ‘Arcturus rising’ points ENE or 67.5°, while simāgu astamān̊ ‘Arcturus setting’ points WNW or 292.5°. (These Dhivehi names for the sidereal directions are borrowings from Arabic, and were probably introduced by Arab seafarers in the medieval period; see Lum 174-79). Eight sidereal directions coincide with the basic (inter)cardinal directions of the solar compass described earlier. For example, gahā ‘Polaris’ in the sidereal compass corresponds exactly with uturu ‘north’ in the solar compass. Thus Dhivehi has both a sidereal ‘north’ and a solar ‘north’, though the latter is sometimes rotated according to local topography. However, the system of sidereal compass directions has largely fallen out of use, and is known only to older and some middle-aged men. This appears to be due to the diversification of the Maldivian economy in recent decades along with the modernisation of Maldivian fishing vessels, including the introduction of GPS technology. Nonetheless, fishermen and fishing communities use solar compass directions much more frequently than other groups in the Maldives (Lum; Palmer et al.), and some of the oldest men still use sidereal compass directions occasionally.

Figure 6: Dhivehi sidereal compass with directions in Thaana script (used with kind permission of Abdulla Rasheed and Abdulla Zuhury)

‘North’ in English

The traditional definition of north in terms of Magnetic North or Geographic North is well known to native English speakers and may appear relatively straightforward. In practice, however, the use and interpretation of north is more variable. English speakers generally draw on cardinal directions only in restricted circumstances, i.e.
in large-scale geographical or navigational contexts rather than, for example, small-scale configurations of manipulable objects (Majid et al. 108). Consequently, most English speakers do not need to maintain a mental compass to keep track of North at all times. So, if English speakers are generally unaware of where North is, how do they perform when required to use it?

A group of 36 Australian English speakers participated in an experimental task where they were presented with a stimulus object (in this case, a 10cm wide cube) while facing S72°E (Poulton). They were then handed another cube and asked to place it next to the stimulus cube in a particular direction (e.g. ‘put this cube to the north of that cube’). Participants completed a total of 48 trials, including each of the four cardinal directions as target, as well as expressions such as behind, in front of and to the left of. As shown in Figure 7, participants’ responses were categorised in one of three ways: correct, near-correct, or incorrect.

Figure 7: Possible responses to prompt of north: A = correct, B = near-correct (aligned with the side of stimulus object closest to north), C = incorrect.

Every participant placed their cube in alignment with the axes of the stimulus object (i.e. responses B and C in Figure 7). Orientation to Magnetic/Geographic North was thus insufficient to override the local cues of the task at hand. Nine percent of participants showed some awareness of the location of Magnetic/Geographic North, however, by making the near-correct response type B. No participants who behaved in such a way expressed certainty in their responses. Most commonly, they calculated the rough direction concerned by triangulating with local landmarks such as nearby roads, or the location of Melbourne’s CBD (as verbally expressed both during the task and during an informal interview afterwards).

The remaining 91% of participants’ responses were entirely incorrect.
Of these, 13.2% involved thought processes similar to the near-correct responses, but did not result in identification of the side of the stimulus closest to the instructed direction. However, 77.8% of the total participants interpreted north as the far side of the stimulus. While such responses were classified incorrect on the basis of Magnetic or Geographic North, they were consistent with one another and correct with respect to an alternative definition of English north in terms of the participant’s own body. One of the participants alludes to this alternative definition, asking “Do you mean my North or physical North?”. We refer to this alternative definition as Relative North. Relative North is not bound to any given point on the Earth or a derivation of the sun’s position; instead, it is entirely bound to the perceiver’s own orientation. This equates the north direction with forward, and the other cardinal points are derived from this reference point (see Figure 8). Map-reading practices likely support the development of the secondary, Relative sense of North.

Figure 8: Relative North and the Relative directions derived from it

Conclusion

We have compared the words closest in meaning to the English word north in four entirely unrelated languages. In the Australian Aboriginal language Kuuk Thaayorre, the ‘north’ direction aligns with the local coast, pointing in a direction 35 degrees west of Magnetic North. In Marshallese, the compass direction corresponding to ‘north’ is different for each island, being defined in opposition to an axis running between the ocean and lagoon sides of that island. The Dhivehi ‘north’ direction may be defined either in opposition to the (sun-based) east-west axis, calibrated to the configuration of the local island, as in Marshallese, or defined in terms of Polaris, the Pole star. In all these cases, though, the system of directions is anchored by properties of the external environment.
English speakers, by contrast, are shown (at least some of the time) to define north with reference to their own embodied perspective, as the direction extending outwards from the front of their bodies. These findings demonstrate that, far from being universal, ‘north’ is a culture-specific category. As such, great care must be taken when translating or drawing equivalencies between these concepts across languages.
References
Bender, Byron W., et al. “Proto-Micronesian Reconstructions: I.” Oceanic Linguistics 42.1 (2003): 1–110.
Brown, Cecil H. “Where Do Cardinal Direction Terms Come From?” Anthropological Linguistics 25.2 (1983): 121–161.
François, Alexandre. “Reconstructing the Geocentric System of Proto-Oceanic.” Oceanic Linguistics 43.1 (2004): 1–31.
Gaby, Alice R. A Grammar of Kuuk Thaayorre. Vol. 74. Berlin: De Gruyter Mouton, 2017.
Genz, Joseph. “Complementarity of Cognitive and Experiential Ways of Knowing the Ocean in Marshallese Navigation.” Ethos 42.3 (2014): 332–351.
Lewis, David Henry. We, the Navigators: The Ancient Art of Landfinding in the Pacific. 2nd ed. Honolulu: University of Hawai'i Press, 1994.
Lum, Jonathon. “Frames of Spatial Reference in Dhivehi Language and Cognition.” PhD Thesis. Melbourne: Monash University, 2018.
Majid, Asifa, et al. “Can Language Restructure Cognition? The Case for Space.” Trends in Cognitive Sciences 8.3 (2004): 108–114.
OED Online. “North, Adv., Adj., and N.” Oxford English Dictionary. Oxford: Oxford University Press. <http://www.oed.com.ezproxy.lib.monash.edu.au/view/Entry/128325>.
Palmer, Bill. “Absolute Spatial Reference and the Grammaticalisation of Perceptually Salient Phenomena.” Representing Space in Oceania: Culture in Language and Mind. Canberra: Pacific Linguistics, 2002. 107–133.
———, et al. “Sociotopography: The Interplay of Language, Culture, and Environment.” Linguistic Typology 21.3 (2017). DOI:10.1515/lingty-2017-0011.
Poulton, Thomas. “Exploring Space: Frame-of-Reference Selection in English.” Honours Thesis.
Melbourne: Monash University, 2016.
Ross, Malcolm D. “Talking about Space: Terms of Location and Direction.” The Lexicon of Proto-Oceanic: The Culture and Environment of Ancestral Oceanic Society: The Physical Environment. Eds. Malcolm D. Ross, Andrew Pawley, and Meredith Osmond. Vol. 2. Canberra: Pacific Linguistics, 2003. 229–294.
Schlossberg, Jonathan. Atolls, Islands and Endless Suburbia: Spatial Reference in Marshallese. PhD Thesis. Newcastle: University of Newcastle, in preparation.
APA, Harvard, Vancouver, ISO, and other styles
38

Charman, Suw, and Michael Holloway. "Copyright in a Collaborative Age." M/C Journal 9, no. 2 (May 1, 2006). http://dx.doi.org/10.5204/mcj.2598.

Full text
Abstract:
The Internet has connected people and cultures in a way that, just ten years ago, was unimaginable. Because of the net, materials once scarce are now ubiquitous. Indeed, never before in human history have so many people had so much access to such a wide variety of cultural material, yet far from heralding a new cultural nirvana, we are facing a creative lock-down. Over the last hundred years, copyright term has been extended time and again by a creative industry eager to hold on to the exclusive rights to its most lucrative materials. Previously, these rights guaranteed a steady income because the industry controlled supply and, in many cases, manufactured demand. But now culture has moved from being physical artefacts that can be sold or performances that can be experienced to being collections of 1s and 0s that can be easily copied and exchanged. People are revelling in the opportunity to acquire and experience music, movies, TV, books, photos, essays and other materials that they would otherwise have missed out on; and they are picking up the creative ball and running with it, making their own versions, remixes, mash-ups and derivative works. More importantly, people are producing and sharing their own cultural resources, publishing their own original photos, movies, music and writing. You name it, somewhere someone is making it, just for the love of it. Whilst the creative industries are using copyright law in every way they can to prosecute, shut down, and scare people away from even legitimate uses of cultural materials, the law itself is becoming increasingly inadequate. It can no longer deal with society’s demands and expectations, nor can it cope with modern forms of collaboration facilitated by technologies that the law makers could never have anticipated.
Understanding Copyright Copyright is a complex area of law, and even a seemingly simple task like determining whether a work is in or out of copyright can be a difficult calculation, as illustrated by flowcharts from Tim Padfield of the National Archives examining the British system, and from Bromberg & Sunstein LLP covering American works. Despite the complexity, understanding copyright is essential in our burgeoning knowledge economies. It is becoming increasingly clear that sharing knowledge, skills and expertise is of great importance not just within companies but also within communities and for individuals. There are many tools available today that allow people to work, synchronously or asynchronously, on creative endeavours via the Web, including: ccMixter, a community music site that helps people find material to remix; YouTube, which hosts movies; and JumpCut, which allows people to share and remix their movies. These tools are being developed because of the increasing number of cultural movements toward the appropriation and reuse of culture that are encouraging people to get involved. These movements vary in their constituencies and foci, and include the student movement FreeCulture.org, the Free Software Foundation, and the UK-based Remix Commons. Even big business has acknowledged the importance of cultural exchange and development, with Apple using the tagline ‘Rip. Mix. Burn.’ for its controversial 2001 advertising campaign. But creators—the writers, musicians, film-makers and remixers—frequently lose themselves in the maze of copyright legislation, a maze complicated by the international aspect of modern collaboration. Understanding of copyright law is at such a low ebb because current legislation is too complex and, in parts, out of step with modern technology and expectations.
Creators have neither the time nor the motivation to learn more—they tend to ignore potential issues and continue labouring under any misapprehensions they have acquired along the way. The authors believe that there is an urgent need for review, modernisation and simplification of intellectual property laws. Indeed, in the UK, intellectual property is currently being examined by a Treasury-level review lead by Andrew Gowers. The Gowers Review is, at the time of writing, accepting submissions from interested parties and is due to report in the Autumn of 2006. Internationally, however, the situation is likely to remain difficult, so creators must grasp the nettle, educate themselves about copyright, and ensure that they understand the legal ramifications of collaboration, publication and reuse. What Is Collaboration? Wikipedia, a free online encyclopaedia created and maintained by unpaid volunteers, defines collaboration as “all processes wherein people work together—applying both to the work of individuals as well as larger collectives and societies” (Wikipedia, “Collaboration”). These varied practices are some of our most common and basic tendencies and apply in almost every sphere of human behaviour; working together with others might be described as an instinctive, pragmatic or social urge. We know we are collaborating when we work in teams with colleagues or brainstorm an idea with a friend, but there are many less familiar examples of collaboration, such as taking part in a Mexican wave or standing in a queue. In creative works, the law expects collaborators to obtain permission to reuse work created by others before they embark upon that reuse. Yet this distinction between ‘my’ work and ‘your’ work is entirely a legal and social construct, as opposed to an absolute fact of human nature, and new technologies are blurring the boundaries between what is ‘mine’ and what is ‘yours’ whilst new cultural movements posit a third position, ‘ours’. 
Yochai Benkler coined the term ‘commons-based peer production’ (Benkler, Coase’s Penguin; The Wealth of Networks) to describe collaborative efforts, such as free and open-source software or projects such as Wikipedia itself, which are based on sharing information. Benkler posits this particular example of collaboration as an alternative model for economic development, in contrast to the ‘firm’ and the ‘market’. Benkler’s notion sits uncomfortably with the individualistic precepts of originality which dominate IP policy, but with examples of commons-based peer production on the increase, it cannot be ignored when considering how new technologies and ways of working interact with existing and future copyright legislation. The Development of Collaboration When we think of collaboration we frequently imagine academics working together on a research paper, or musicians jamming together to write a new song. In academia, researchers working on a project are expected to write papers for publication in journals on a regular basis. The motto ‘publish or die’ is well known to anyone who has worked in academic circles—publishing papers is the lifeblood of the academic career, forming the basis of a researcher’s status within the academic community and providing data and theses for other researchers to test and build upon. In these circumstances, copyright is often assigned by the authors to a journal and, because there is no direct commercial outcome for the authors, conflicts regarding copyright tend to be restricted to issues such as reuse and reproduction. Within the creative industries, however, the focus of the collaboration is to derive commercial benefit from the work, so copyright issues such as the division of fees and royalties, plagiarism, and rights for reuse carry much greater financial stakes and hence are more vigorously pursued. All of these issues are commonly discussed, documented and well understood.
Less well understood is the interaction between copyright and the types of collaboration that the Internet has facilitated over the last decade. Copyright and Wikis Ten years ago, Ward Cunningham invented the ‘wiki’—a Web page which could be edited in situ by anyone with a browser. A wiki allows multiple users to read and edit the same page and, in many cases, those users are either anonymous or identified only by a nickname. The most famous example of a wiki is Wikipedia, which was started by Jimmy Wales in 2001 and now has over a million articles and over 1.2 million registered users (Wikipedia, “Wikipedia Statistics”). The culture of online wiki collaboration is a gestalt—the whole is greater than the sum of the parts and the collaborators see the overall success of the project as more important than their contribution to it. The majority of wiki software records every single edit to every page, creating a perfect audit trail of who changed which page and when. Because copyright is granted for the expression of an idea, in theory, this comprehensive edit history would allow users to assert copyright over their contributions, but in practice it is not possible to delineate clearly between different people’s contributions and, even if it was possible, it would simply create a thicket of rights which could never be untangled. In most cases, wiki users do not wish to assert copyright and are not interested in financial gain, but when wikis are set up to provide a source of information for reuse, copyright licensing becomes an issue. In the UK, it is not possible to dedicate a piece of work to the public domain, nor can you waive your copyright in a work. When a copyright holder wishes to licence their work, they can only assign that licence to another person or a legal entity such as a company. 
This is because in the UK, the public domain is formed of the ‘leftovers’ of intellectual property—works for which copyright has expired or those aspects of creative works which do not qualify for protection. It cannot be formally added to, although it certainly can be reduced by, for example, extension of copyright term which removes work from the public domain by re-copyrighting previously unprotected material. So the question becomes, to whom does the content of a wiki belong? At this point traditional copyright doctrines are of little use. The concept of individuals owning their original contribution falls down when contributions become so entangled that it’s impossible to split one person’s work from another. In a corporate context, individuals have often signed an employment contract in which they assign copyright in all their work to their employer, so all material created individually or through collaboration is owned by the company. But in the public sphere, there is no employer, there is no single entity to own the copyright (the group of contributors not being in itself a legal entity), and therefore no single entity to give permission to those who wish to reuse the content. One possible answer would be if all contributors assigned their copyright to an individual, such as the owner of the wiki, who could then grant permission for reuse. But online communities are fluid, with people joining and leaving as the mood takes them, and concepts of ownership are not as straightforward as in the offline world. Instead, authors who wished to achieve the equivalent of assigning rights to the public domain would have to publish a free licence to ‘the world’ granting permission to do any act otherwise restricted by copyright in the work. Drafting such a licence so that it is legally binding is, however, beyond the skills of most and could be done effectively only by an expert in copyright. 
The majority of creative people, however, do not have the budget to hire a copyright lawyer, and pro bono resources are few and far between. Copyright and Blogs Blogs are a clearer-cut case. Blog posts are usually written by one person, even if the blog that they are contributing to has multiple authors. Copyright therefore resides clearly with the author. Even if the blog has a copyright notice at the bottom—© A.N. Other Entity—unless there has been an explicit or implied agreement to transfer rights from the writer to the blog owner, copyright resides with the originator. Simply putting a copyright notice on a blog does not constitute such an agreement. Equally, copyright in blog comments resides with the commenter, not the site owner. This reflects the state of copyright with personal letters—the copyright in a letter resides with the letter writer, not the recipient, and owning letters does not constitute a right to publish them. Obviously, by clicking the ‘submit’ button, commenters have decided themselves to publish, but it should be remembered that that action does not transfer copyright to the blog owner without specific agreement from the commenter. Copyright and Musical Collaboration Musical collaboration is generally accepted by legal systems, at least in terms of recording (duets, groups and orchestras) and writing (partnerships). The practice of sampling—taking a snippet of a recording for use in a new work—has, however, changed the nature of collaboration, shaking up the recording industry and causing a legal furore. Musicians have been borrowing directly from each other since time immemorial and the student of classical music can point to many examples of composers ‘quoting’ each other’s melodies in their own work. Folk musicians too have been borrowing words and music from each other for centuries. 
But sampling in its modern form goes back to the musique concrète movement of the 1940s, when musicians used portions of other recordings in their own new compositions. The practice developed through the 50s and 60s, with The Beatles’ “Revolution 9” (from The White Album) drawing heavily from samples of orchestral and other recordings along with speech incorporated live from a radio playing in the studio at the time. Contemporary examples of sampling are too common to pick highlights, but Paul D. Miller, a.k.a. DJ Spooky ‘that Subliminal Kid’, has written an analysis of what he calls ‘Rhythm Science’ which examines the phenomenon. To begin with, sampling was ignored as it was rare and commercially insignificant. But once rap artists started to make significant amounts of money using samples, legal action was taken by originators claiming copyright infringement. Notable cases of illegal sampling were “Pump Up the Volume” by M/A/R/R/S in 1987 and Vanilla Ice’s use of Queen/David Bowie’s “Under Pressure” in the early 90s. Where once artists would use a sample and sort out the legal mess afterwards, such high-profile litigation has forced artists to secure permission for (or ‘clear’) their samples before use, and record companies will now refuse to release any song with uncleared samples. As software and technology progress further, so sampling progresses along with it. Indeed, sampling has now spawned mash-ups, where two or more songs are combined to create a musical hybrid. Instead of using just a portion of a song in a new composition which may be predominantly original, mash-ups often use no original material and rely instead upon mixing together tracks creatively, often juxtaposing musical styles or lyrics in a humorous manner. One of the most illuminating examples of a mash-up is DJ Food Raiding the 20th Century which itself gives a history of sampling and mash-ups using samples from over 160 sources, including other mash-ups. 
Mash-ups are almost always illegal, and this illegality drives mash-up artists underground. Yet, despite the fact that good mash-ups can spread like wildfire on the Internet, bringing new interest to old and jaded tracks and, potentially, new income to artists whose work had been forgotten, this form of musical expression is aggressively demonised by the industry. Given the opportunity, the industry will prosecute for infringement. But clearing rights is a complex and expensive procedure well beyond the reach of the average mash-up artist. First, you must identify the owner of the sound recording, a task easier said than done. The name of the rights holder may not be included in the original recording’s packaging, and as rights regularly change hands when an artist’s contract expires or when a record label is sold, any indication as to the rights holder’s identity may be out of date. Online musical databases such as AllMusic can be of some use, but in the case of older or obscure recordings, it may not be possible to locate the rights holder at all. Works where there is no identifiable rights holder are called ‘orphaned works’, and the longer the term of copyright, the more works are orphaned. Once you know who the rights holder is, you can negotiate terms for your proposed usage. Standard fees are extremely high, especially in the US, and typically discourage use. This convoluted legal culture is an anachronism in desperate need of reform: sampling has produced some of the most culturally interesting and financially valuable recordings of the past thirty years, and so should be supported rather than marginalised. Unless the legal culture develops an acceptance for these practices, the associated financial and cultural benefits for society will not be realised. The irony is that there is already a successful model for simplifying licensing.
If a musician wishes to record a cover version of a song, then royalty terms are set by law and there is no need to seek permission. In this case, the lawmakers have recognised the social and cultural benefit of cover versions and created a workable solution to the permissions problem. There is no logical reason why a similar system could not be put in place for sampling. Alternatives to Traditional Copyright Copyright, in its default structure, is a disabling force. It says that you may not do anything with my work without my permission, and forces creators wishing to make a derivative work to contact me in order to obtain that permission in writing. This ‘permissions society’ has become the norm, but it is clear that it is not beneficial to society to hide away so much of our culture behind copyright, far beyond the reach of the individual creator. Fortunately there are fast-growing alternatives which simplify whilst encouraging creativity. Creative Commons is a global movement started by academic lawyers in the US who thought to write a set of more flexible copyright licences for creative works. These licences enable creators to precisely tailor the restrictions imposed on subsequent users of their work, prompting the tag-line ‘some rights reserved’. Creators decide if they will allow redistribution, commercial or non-commercial re-use, or require attribution, and can combine these permissions in whichever way they see fit. They may also choose to authorise others to sample their works. Built upon the foundation of copyright law, Creative Commons licences now apply to some 53 million works world-wide (Doctorow), and operate in over 60 jurisdictions. Their success is testament to the fact that collaboration and sharing is a fundamental part of human nature, and treating cultural output as property to be locked away goes against the grain for many people.
Creative Commons are now also helping scientists to share not just the results of their research, but also data and samples so that others can easily replicate experiments and verify or refute results. They have thus created Science Commons in an attempt to free up data and resources from unnecessary private control. Scientists have been sharing their work via personal Web pages and other Websites for many years, and additional tools which allow them to benefit from network effects are to be welcomed. Another example of functioning alternative practices is the Remix Commons, a grassroots network spreading across the UK that facilitates artistic collaboration. Their Website is a forum for exchange of cultural materials, providing a space for creators to both locate and present work for possible remixing. Any artistic practice which can reasonably be rendered online is welcomed in their broad church. The network’s rapid expansion is in part attributable to its developers’ understanding of the need for tangible, practicable examples of a social movement, as embodied by their ‘free culture’ workshops. Collaboration, Copyright and the Future There has never been a better time to collaborate. The Internet is providing us with ways to work together that were unimaginable even just a decade ago, and high broadband penetration means that exchanging large amounts of data is not only feasible, but also getting easier and easier. It is possible now to work with other artists, writers and scientists around the world without ever physically meeting. The idea that the Internet may one day contain the sum of human knowledge is to underestimate its potential. The Internet is not just a repository, it is a mechanism for new discoveries, for expanding our knowledge, and for making links between people that would previously have been impossible. Copyright law has, in general, failed to keep up with the amazing progress shown by technology and human ingenuity. 
It is time that the lawmakers learnt how to collaborate with the collaborators in order to bring copyright up to date.
References
Apple. “Rip. Mix. Burn.” Advertisement. 28 April 2006 <http://www.theapplecollection.com/Collection/AppleMovies/mov/concert_144a.html>.
Benkler, Yochai. Coase’s Penguin. Yale Law School, 1 Dec. 2002. 14 April 2006 <http://www.benkler.org/CoasesPenguin.html>.
———. The Wealth of Networks. New Haven: Yale UP, 2006.
Bromberg & Sunstein LLP. Flowchart for Determining when US Copyrights in Fixed Works Expire. 14 Apr. 2006 <http://www.bromsun.com/practices/copyright-portfolio-development/flowchart.htm>.
DJ Food. Raiding the 20th Century. 14 April 2006 <http://www.ubu.com/sound/dj_food.html>.
Doctorow, Cory. “Yahoo Finds 53 Million Creative Commons Licensed Works Online.” BoingBoing 5 Oct. 2005. 14 April 2006 <http://www.boingboing.net/2005/10/05/yahoo_finds_53_milli.html>.
Miller, Paul D. Rhythm Science. Cambridge, Mass.: MIT Press, 2004.
Padfield, Tim. “Duration of Copyright.” The National Archives. 14 Apr. 2006 <http://www.kingston.ac.uk/library/copyright/documents/DurationofCopyrightFlowchartbyTimPadfieldofTheNationalArchives_002.pdf>.
Wikipedia. “Collaboration.” 14 April 2006 <http://en.wikipedia.org/wiki/Collaboration>.
———. “Wikipedia Statistics.” 14 April 2006 <http://en.wikipedia.org/wiki/Special:Statistics>.
39

Goffey, Andrew. "Idiotypic Networks, Normative Networks." M/C Journal 6, no. 4 (August 1, 2003). http://dx.doi.org/10.5204/mcj.2235.

Full text
Abstract:
Health Health is a production, a process, and not a goal. It is a means and not an end state, required “to liberate life wherever it is imprisoned by and within man, by and within organisms and genera” (Deleuze). We live our health as a network, within networks, within social, technological, political and biological networks, but how does the network concept understand health? And how does the network concept implicate health within other networks, for better or for worse? Biopolitical Relations In its diverse forms, network thinking institutes a relational ontology, an ontology of connection and of connectedness. Whether the connections being explored are those governing the proverbial ‘six degrees of separation’, the small world in which “no-one is more than a few handshakes from anyone else”, the rhizomatic imperative that not only is everything connected but it must be, or even the ordinality of the mathematical regimen of belonging (Alain Badiou), one gains the impression that network thinking is the expression of a common world-view, a zeitgeist. Yet to think in this way is not only to lose sight of the important qualitative differences evident in the manifold conceptions of ‘network’ but also to overlook differences in descent in the genealogy of knowledges and hence the differential inscription of those knowledges in power relations (another network…). The case of immunology is analysed here as one line of descent in network thinking, selected for its susceptibility to exemplify a series of biopolitical implications which may not be so evident in other scientific fields. What follows is an attempt to address some of these implications for our understanding of the materiality of communications. Self - Nonself Since the groundbreaking work of Sir Frank Macfarlane Burnet in the 1940s and 1950s, immunology has become known as the ‘science of self-nonself discrimination’.
In the first half of the twentieth century, as Pauline Mazumdar has argued, immunology was caught up in a classificatory problematic of the nature of species and specificity. In the latter half of the twentieth century, it might be argued, this concern becomes a more general one of the nature of biological identity and the mechanisms of organic integrity. Yet it is licit to see in these innocently scientific concerns the play of another set of interests, another set of issues or, to put it slightly differently, another problematic. We can see in the autonymic definition of immunology as the ‘science of self-nonself discrimination’ a biopolitical concern with the nature, maintenance and protection of populations: a delegation of the health of the body to a set of autonomous biological mechanisms, an interiorisation of a social and political problematic parallel to the interiorisation of the social repression of desire traced out by Gilles Deleuze and Felix Guattari in their Anti-Oedipus. There are a number of points which are relevant here. The intellectual roots of immunology are to be found in Darwinian theory. Socially, however, immunology develops out of a set of public health practices, a set of public health reforms. Immunology locates the mechanisms for maintaining the integrity of the organism ‘under the skin’ and in a sense shifts the focal point of the problem of health to the internal workings of the organism. In this way, it reconfigures the field of social action. The enormous success of vaccination programmes and a concentration on the ‘serologic’ of immunisation focalises immunological research on outer-directed reactions. We can find a trace of the social field to which immunology is related in the name of the discipline itself. The term ‘immunology’ derives from the Latin term ‘immunitas’ which signified an exemption from public duty. 
The mechanisms of the immune system are routinely figured as weapons in a war against the enemy (Paul Ehrlich: “magic bullets”). And war, as Agamben has argued, exemplifies a state of exception. Given the way in which immunology shifts health inside the body, its enemies become ‘any enemies whatever’, microphysical forces with no apparent connection to the socius. The ability to combat any enemy whatever offers decisive evidence of the miraculous abilities of the self, the sovereignty of its powers. The self which the immune system protects is imagined to be defined anterior to that system, on a genetic basis, independent of the rules governing interactions in the system itself. The ability of the immune system to respond to and destroy any enemy whatever and thus maintain the organism’s sovereign identity demonstrates its ‘intolerance for foreign matter’ (Macfarlane Burnet). The molecular terrain on which its combat is waged is only apparently divorced from the socius. Idiotypy Network theory offers an interesting response to this set of ideas. Niels Jerne developed idiotypic network theory as a way of overcoming some of the difficulties in the accepted version of how the immune system works. The immune system possesses the remarkable ability to distinguish between everything which is a friend of the organism and everything which is an enemy. The key question which this poses is this: how and on what basis does the immune system not react to self, why does it posses what Paul Ehrlich called ‘horror autotoxicus’? The standard wisdom is to maintain that those elements which can react to self are firstly only very small in number and, secondly, eliminated by a process of learning (‘clonal deletion’). 
Yet this view is wrong on both counts – there is a far higher concentration of ‘auto-antibodies’ in the individual organism than the standard theory suggests, and an organism which develops in the absence of contact with ‘antigens’ originating in the environment can nevertheless develop a perfectly functional immune system. Jerne’s theory develops as a piece of self-organisational wisdom. Everything in the immune system is connected. The activities of all the elements in the system are regulated by the activities of every other. One type of cell specifically recognises and thus is stimulated into action (i.e. the production of clones) by another. However, this reaction is dampened down by another recognition event: the proliferation of clones of the first type of cell is regulated by the response of a third (also a production), and so on. This cascading chain of stimulus-response events is called an idiotypic network, by Jerne, a recurrent set of ‘eigen-behaviours’, and it reverses the conventional wisdom about the way in which the immune system operates: the destructive response to the other is no longer an exception but a limiting case in the auto-consistent behaviour of a self-organising network. An immune reaction is not a characteristic of the miraculous power of the immune system but a consequence of the network’s loss of plasticity. Autopoiesis Francisco Varela and the so-called ‘Paris School’ have managed to draw out the radical consequences of this way of looking at organic processes. The first point they make is that idiotypic network theory substitutes an autonomous conception of immunity for the predominantly heteronomous view of immunity as a set of defensive mechanisms. A variant on the more general autopoietic postulate of the circular causality inhering in living systems, the eigen-behaviour used to characterise immune networks attempts to move our understanding of biological processes away from the biopolitical problematic of defence and security. 
As Varela and Anspach put it, “to say that immunity is fundamentally defence is as distorted as saying that the brain is fundamentally concerned with defence and avoidance. We certainly defend ourselves and avoid attack, but this is hardly what cognition is about, that is flexibility in living”. An idiotypic network is thus conceptualised as a radically autonomous system, which effectively knows no outside. The idea that the immune network has defence as its prime function is argued by Varela to result from the epistemically relative nature of the claims made by the biologist: it is a claim which makes sense from the specific point of view of the observer but does not – cannot – explain what the immune network is doing in its own terms. The place of the observer in biology is fundamentally contingent. The assertion of the contingent nature of the observation in biology is not, however, accompanied by an analysis of the immanent implication of these observations in the socius. As Maturana himself has noted, “the fact that science as a cognitive domain is constituted and validated in the operational coherences of the praxis of living of the standard observers as they operate in their experiential domains without reference to an independent reality does not make scientific statements subjective”. Certainly not, if these statements can be demonstrated to belong to a specific set of discursive ‘regularities’. The argument that the immune network does not have defence as its primary function of course raises the question of what the immune network is actually for. The research carried out by Varela and his associates suggests, and this is the second point, that the immune network is responsible for the assertion of organic identity. 
Far from being a secondary mechanism for the protection of a sovereign identity defined elsewhere and otherwise, the organisation of the immune network as a recurrent set of mutually reinforcing chemical interactions (in which defence is instead the result of an excessive perturbation of the system) suggests that the network has a primary role in defining identity. To put it another way, the immune network is a means of individuation. The field of theoretical immunology more generally has explored the logic of the network constitution of individuality. Experimental evidence suggests that vertebrate organisms replace up to 20% of the chemical components constituting the immune network daily, thus demonstrating a highly productive processual character, but how does this activity cohere into the development of a consistent set? Theoretical immunologists use some of the arguments of complexity theory to show that even the continuous random production of notional molecular compounds (which would correspond to the elements of the immune network – B-cells, T-cells and so on) can yield an organised consistent set. They argue that this set or network of interactions forms a ‘cognitive field’ which determines the sensitivity of the network to any one of its elements at any moment in time. The sensitivity of the network – equally its degree of connectedness – determines the likelihood that any element will be integrated or rejected. The less connected the network to any element, the more likely that element will be rejected. Interestingly, the shape of the cognitive field of the network – what it is sensitive to – varies over time, and the network is more flexible, or plastic, at an earlier stage in its history than later. The crucial point, however, is that there are no necessarily enduring components to this network. A useful term to describe this is metastability: immune networks provide evidence for an ongoing process of individuation, itself a more or less chaotic process.
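The network dynamic described above can be sketched as a toy simulation. Everything in it is an illustrative assumption rather than a model drawn from the immunological literature – the scalar ‘shapes’, the affinity measure, the 0.4 threshold and the plasticity decay rate are all invented for the sketch. It shows only the qualitative claim: random production of elements, a connectivity-dependent sensitivity that decides integration or rejection, declining plasticity over time, and constant turnover yielding a metastable set.

```python
import random

random.seed(1)

def affinity(a, b):
    # Toy 'chemical affinity': closeness of two scalar shape parameters in [0, 1].
    return max(0.0, 1.0 - abs(a - b))

def simulate(steps=500, threshold=0.4, decay=0.999):
    network = [random.random()]      # existing repertoire of 'idiotypes'
    plasticity = 1.0                 # the network is more plastic early in its history
    integrated = rejected = 0
    for _ in range(steps):
        candidate = random.random()  # continuous random production of new elements
        # Sensitivity of the 'cognitive field': mean affinity of the network
        # to the candidate, scaled by the network's current plasticity.
        sensitivity = plasticity * sum(affinity(candidate, e) for e in network) / len(network)
        if sensitivity >= threshold:
            network.append(candidate)   # integrated into the consistent set
            integrated += 1
        else:
            rejected += 1               # 'immune response' as the limiting case
        plasticity *= decay             # gradual loss of plasticity over time
        # Constant turnover of components: no necessarily enduring elements.
        if len(network) > 5:
            network.pop(random.randrange(len(network)))
    return integrated, rejected, len(network)

if __name__ == "__main__":
    integrated, rejected, size = simulate()
    print(f"integrated={integrated} rejected={rejected} final size={size}")
```

Running the sketch shows rejection becoming more frequent as plasticity decays – a crude analogue of the claim that the destructive response is a limiting case of the network’s auto-consistent behaviour rather than its primary function.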
Such a view is far from gaining univocal adherence in the immunological community and yet it certainly offers an interesting and inventive way of looking at the anomalies of currently available experimental evidence, not least the difficulties standard theory has of grasping auto-immune diseases. But does the network conception of immunity displace the biopolitical problematic? As mentioned above, for Varela this view of the immune network as an autonomous, cognitive system offers a way out of the predominantly militaristic characterisation of the organism’s maintenance mechanisms, and thus permits the conceptualisation of what he calls ‘flexibility in living’. Yet, if the claim sketched out above concerning the link between immunology and biopolitics is correct, one is entitled to ask about the extent to which network thought as a way of grasping biological processes can really constitute a locus of resistance to contemporary biopolitical imperatives.

Pacification

To finish, it is worth noting firstly that with biopolitics, in the genealogy sketched out by Foucault, mutations in power are accompanied by a shift in its phenomenal manifestation: the noisy destructiveness of sovereignty, with its power over life and death, is replaced by the anonymity of the grey procedures of knowledge. Cognition could perhaps be another form of power. And power is for Foucault, of course, a network. Or, to take another view, contemporary power may be characterised by the state of the exception becoming the rule – the exceptional response of the sovereign has spread across the whole social fabric (Agamben) – or by the generalised diffusion of the death drive across the whole of the socius (Deleuze and Guattari). The diffuse cognitive qualities of the network conception of immunity might in this sense correspond to contemporary shifts in the nature of power and its exercise.
As François Ewald has put it in his discussion of Foucault’s Discipline and Punish, “[n]ormative knowledge appeals to nothing exterior to that which it works on, that which it makes visible. What precisely is the norm? It is the measure which simultaneously individualises, makes ceaseless individualisation possible and creates comparability”.

Works Cited

Giorgio Agamben, Homo Sacer (Stanford University Press, Stanford CA, 1998)
Albert-László Barabási, Linked: The New Science of Networks (Perseus, Cambridge MA, 2002)
Gilles Deleuze and Félix Guattari, Anti-Oedipus (Minnesota University Press, Minneapolis, 1983)
François Ewald, ‘A power without an exterior’ in T.J. Armstrong (ed.) Michel Foucault Philosopher (Harvester Wheatsheaf, Hemel Hempstead, 1992)
Pauline Mazumdar, Species and Specificity (Cambridge University Press, Cambridge, 1995)
Francisco Varela and Mark Anspach, ‘The Body Thinks: The Immune System in the Process of Somatic Individuation’ in Hans Ulrich Gumbrecht and K. Ludwig Pfeiffer (eds.) Materialities of Communication (Stanford University Press, Stanford CA, 1994)
Cary Wolfe, ‘In Search of Post-Humanist Theory: The Second-Order Cybernetics of Maturana and Varela’ in Cultural Critique (Spring 1995) 30:36
<http://www.santafe.edu/projects/immunology/>

Citation reference for this article

MLA Style: Goffey, Andrew. "Idiotypic Networks, Normative Networks." M/C: A Journal of Media and Culture <http://www.media-culture.org.au/0308/07-idiotypic.php>.
APA Style: Goffey, A. (2003, Aug 26). Idiotypic Networks, Normative Networks. M/C: A Journal of Media and Culture, 6, <http://www.media-culture.org.au/0308/07-idiotypic.php>.
APA, Harvard, Vancouver, ISO, and other styles
40

McGuire, Mark. "Ordered Communities." M/C Journal 7, no. 6 (January 1, 2005). http://dx.doi.org/10.5204/mcj.2474.

Full text
Abstract:
A rhetoric of freedom characterises much of the literature dealing with online communities: freedom from fixed identity and appearance, from the confines of geographic space, and from control. The prevailing view, a combination of futurism and utopianism, is that the lack of order in cyberspace enables the creation of social spaces that will enhance personal freedom and advance the common good. Sherry Turkle argues that computer-mediated communication allows us to create a new form of community, in which identity is multiple and fluid (15-17). Marcos Novak celebrates the possibilities of a dematerialized, ethereal virtual architecture in which the relationships between abstract elements are in a constant state of flux (250). John Perry Barlow employs the frontier metaphor to frame cyberspace as an unmapped, ungoverned territory in which a romantic and a peculiarly American form of individualism can be enjoyed by rough and ready pioneers (“Crime” 460). In his 1993 account as an active participant in The WELL (Whole Earth ‘Lectronic Link), one of the earliest efforts to construct a social space online, Howard Rheingold celebrates the freedom to create a “new kind of culture” and an “authentic community” in the “electronic frontier.” He worries, however, that the freedom enjoyed by early homesteaders may be short lived, because “big power and big money” might soon find ways to control the Internet, just as they have come to dominate and direct other communications media. “The Net,” he states, “is still out of control in fundamental ways, but it might not stay that way for long” (Virtual Community 2-5).

The uses of order and disorder

Some theorists have identified disorder as a necessary condition for the development of healthy communities. In The Uses of Disorder (1970), Richard Sennett argues that “the freedom to accept and to live with disorder” is integral to our search for community (xviii).
In his 1989 study of social space, Ray Oldenburg maintains that public hangouts, which constitute the heart of vibrant communities, support sociability best when activities are unplanned, unorganized, and unrestricted (33). He claims that without the constraints of preplanned control we will be more in control of ourselves and more aware of one another (198). More recently, Charles Landry suggests that “structured instability” and “controlled disruption,” resulting from competition, conflict, crisis, and debate, make cities less comfortable but more exciting. Further, he argues that “endemic structural disorder” requiring ongoing adjustments can generate healthy creative activity and stimulate continual innovation (156-58). Kevin Robins, too, believes that any viable social system must be prepared to accept a level of uncertainty, disorder, and fear. He observes, however, that techno-communities are “driven by the compulsion to neutralize,” and they therefore exclude these possibilities in favour of order and security (90-91). Indeed, order and security are the dominant characteristics that less idealistic observers have identified with cyberspace. Alexander Galloway explains how, despite its potential as a liberating development, the Internet is based on technologies of control. This control is exercised at the code level through technical protocols, such as TCP/IP, DNS, and HTML, that determine disconnections as well as connections (Galloway). Lawrence Lessig suggests that in our examination of the ownership, regulation, and governance of the virtual commons, we must take into account three distinct layers. As well as the “logical” or “code” layer that Galloway foregrounds, we should also consider the “physical” layer, consisting of the computers and wires that carry Internet communications, and the “content” layer, which includes everything that we see and hear over the network.
In principle, each of these layers could be free and unorganized, or privately owned and controlled (Lessig 23). Dan Schiller documents the increasing privatization of the Net and argues that corporate cyberspace extends the reach of the market, enabling it to penetrate into areas that have previously been considered to be part of the public domain. For Schiller, the Internet now serves as the main production and control mechanism of a global market system (xiv).

Checking into Habbo Hotel

Habbo Hotel is an example of a highly ordered and controlled online social space that uses community and game metaphors to suggest something much more open and playful. Designed to attract the teenage market, this graphically intensive cartoon-like hotel is like an interactive Legoland, in which participants assemble a toy-like “Habbo” character and chat, play games, and construct personal environments. The first Habbo Hotel opened its doors in the United Kingdom in 2000, and, by September 2004, localized sites were based in a dozen countries, including Canada, the United States, Finland, Japan, Switzerland and Spain, with further expansion planned. At that time, there were more than seventeen million registered Habbo characters worldwide with 2.3 million unique visitors each month (“Strong Growth”). The hotel contains thousands of private rooms and twenty-two public spaces, including a welcome lounge, three lobbies, cinema, game hall, café, pub, and an extensive hallway. Anyone can go to the Room-O-Matic and instantly create a free guest room. However, there are a limited number of layouts to choose from and the furnishings, which must be purchased, have to be chosen from a catalog of fixed offerings. All rooms are located on one of five floors, which categorize them according to use (parties, games, models, mazes, and trading). Paradoxically, the so-called public spaces are more restricted and less public than the private guest quarters.
The limited capacity of the rooms means that all of the public spaces are full most of the time. Priority is given to paying Habbo Club members and others are denied entry or are unceremoniously ejected from a room when it becomes full. Most visitors never make it into the front lobby. This rigid and restricted construction is far from Novak’s vision of a “liquid architecture” without barriers, that morphs in response to the constantly changing desires of individual inhabitants (Novak 250). Before entering the virtual hotel, individuals must first create a Lego-like avatar. Users choose a unique name for their Habbo (no foul language is allowed) and construct their online persona from a limited selection and colour of body parts. One of two different wardrobes is available, depending on whether “Boy” or “Girl” is chosen. The gender of every Habbo is easily recognizable and the restricted wardrobe results in remarkably similar looking young characters. The lack of differentiation encourages participants to treat other Habbos as generic “Boys” or “Girls” and it encourages limited and predictable conversations that fit the stereotype of male-female interactions in most chat sites. Contrary to Turkle’s contention that computer mediated communication technologies expose the fallacy of a single, fixed, identity, and free participants to experiment with alternative selves (15-17), Habbo characters are permitted just one unchangeable name, and are capable of only limited visual transformations. A fixed link between each Habbo character and its registered user (information that is not available to other participants) allows the hotel management to track members through the site and monitor their behavior. Habbo movements are limited to walking, waving, dancing and drinking virtual alcohol-free beverages. Movement between spaces is accomplished by entering a teleport booth, or by selecting a location by name from the hotel Navigator. 
Habbos cannot jump, fly or walk through objects or other Habbos. They have no special powers and only a limited ability to interact with objects in their environment. They cannot be hurt or otherwise affected by anything in their surroundings, including other Habbos. The emphasis is on safety and avoidance of conflict. Text chat in Habbo Hotel is limited to one sixty-one-character line, which appears above the Habbo, floats upward, and quickly disappears off the top of the screen. Text must be typed in real time while reading on-going conversations and it is not possible to archive a chat session or view past exchanges. There is no way of posting a message on a public board. Using the Habbo Console, shorter messages can also be exchanged between Habbos who may be occupying different rooms. The only other narratives available on the site are in the form of official news and promotions. Before checking into the hotel, Habbos can stop to read Habbo Today, which promotes current offers and activities, and HabboHood Happenings, which offers safety tips, information about membership benefits, jobs (paid in furniture), contest winners, and polls. According to Rheingold, a virtual community can form online when enough people participate in meaningful public discussions over an extended period of time and develop “webs of personal relationships” (Virtual Community 5). By restricting communication to short, fleeting messages between individual Habbos, the hotel frustrates efforts by members to engage in significant dialogue and create a viable social group. Although “community” is an important part of the Habbo Hotel brand, it is unlikely to be a substantial part of the actual experience. The virtual hotel is promoted as a safe, non-threatening environment suitable for the teenagers it is designed to attract. Parents’ concerns about the dangers of an unregulated chat space provide the hotel management with a justification for creating a highly controlled social space.
The hotel is patrolled twenty-four hours a day by professional moderators backed-up by a team of 180 volunteer “Hobbas,” or guides, who can issue warnings to misbehaving Habbos, or temporarily ban them from the site. All text keyed in by Habbos passes through an automated “Bobba Filter” that removes swearing, racist words, explicit sexual comments and “anything that goes against the ‘Habbo Way’” (“Bad Language”). Stick to the rules and you’ll have fun, Habbos are told, “break them and you’ll get yourself banned” (“Habbo Way”). In Big Brother fashion, messages are displayed throughout the hotel advising members to “Stay safe, read the Habbohood Watch,” “Never give out your details!” and “Obey the Habbo way and you’ll be OK.” This miniature surveillance society contradicts Barlow’s observation that cyberspace serves as “a perfect breeding ground for both outlaws and new ideas about liberty” (“Crime” 460). In his manifesto declaring the independence of cyberspace from government control, he maintains that the state has no authority in the electronic “global social space,” where, he asserts, “[w]e are forming our own Social Contract” based on the Golden Rule (“Declaration”). However, Habbo Hotel shows how the rule of the marketplace, which values profits more than social practices, can limit the freedoms of online civil society just as effectively as the most draconian government regulation.

Place your order

Far from permitting the “controlled disruption” advocated by Landry, the hotel management ensures that nothing is allowed to disrupt their control over the participants. Without conflict and debate, there are few triggers for creative activity in the site, which is designed to encourage consumption, not community.
Timo Soininen, the managing director of the company that designed the hotel, states that, because teenagers like to showcase their own personal style, “self-expression is the key to our whole concept.” However, since it isn’t possible to create a Habbo from scratch, or to import clothing or other objects from outside the site, the only way for members to effectively express themselves is by decorating and furnishing their room with items purchased from the Habbo Catalogue. “You see, this,” admits Soininen, “is where our revenue model kicks in” (Shalit). Real-world products and services are also marketed through ads and promotions that are integrated into chat, news, and games. The result, according to Habbo Ltd, is “the ideal vehicle for third party brands to reach this highly desired 12-18 year-old market in a cost-effective and creative manner” (“Habbo Company Profile”). Habbo Hotel is a good example of what Herbert Schiller describes as the corporate capture of sites of public expression. He notes that, when put at the service of growing corporate power, new technologies “provide the instrumentation for organizing and channeling expression” (5-6). In an afterword to a revised edition of The Virtual Community, published in 2000, Rheingold reports on the sale of the WELL to a privately owned corporation, and its decline as a lively social space when order was imposed from the top down. Although he believes that there is a place for commercial virtual communities on the Net, he acknowledges that as economic forces become more entrenched, “more controls will be instituted because there is more at stake.” While remaining hopeful that activists can leverage the power of many-to-many communications for the public good, he wonders what will happen when “the decentralized network infrastructure and freewheeling network economy collides with the continuing growth of mammoth, global, communication empires” (Virtual Community Rev. 375-7). 
Although the company that built Habbo Hotel is far from achieving global empire status, their project illustrates how the dominant ethos of privatization and the increasing emphasis on consumption results in gated virtual communities that are highly ordered, restricted, and controlled. The popularity of the hotel reflects the desire of millions of Habbos to express their identities and ideas in a playful environment that they are free to create and manipulate. However, they soon find that the rules are stacked against them. Restricted design options, severe communication limitations, and fixed architectural constraints mean that the only freedom left is the freedom to choose from a narrow range of provided options. In private cyberspaces like Habbo Hotel, the logic of the market rules out unrestrained many-to-many communications in favour of controlled commercial relationships. The liberating potential of the Internet that was recognized by Rheingold and others has been diminished as the forces of globalized commerce impose their order on the electronic frontier.

References

“Bad Language.” Habbo Hotel. 2004. Sulake UK Ltd. 15 Apr. 2004 <http://www.habbohotel.co.uk/habbo/en/help/safety/badlanguage/>.
Barlow, John Perry. “Crime and Puzzlement.” High Noon on the Electronic Frontier: Conceptual Issues in Cyberspace. Ed. Peter Ludlow. Cambridge, Mass.: MIT P, 1996. 459-86.
———. “A Declaration of the Independence of Cyberspace.” 8 Feb. 1996. 3 July 2004 <http://www.eff.org/~barlow/Declaration-Final.html>.
Galloway, Alexander R. Protocol: How Control Exists after Decentralization. Cambridge, Mass.: MIT P, 2004.
“Habbo Company Profile.” Habbo Hotel. 2002. Habbo Ltd. 20 Jan. 2003 <http://www.habbogroup.com>.
“The Habbo Way.” Habbo Hotel. 2004. Sulake UK Ltd. 15 Apr. 2004 <http://www.habbohotel.co.uk/habbo/en/help/safety/habboway/>.
Landry, Charles. The Creative City: A Toolkit for Urban Innovators. London: Earthscan, 2000.
Lessig, Lawrence. The Future of Ideas: The Fate of the Commons in a Connected World. New York: Random, 2001.
Novak, Marcos. “Liquid Architecture in Cyberspace.” Cyberspace: First Steps. Ed. Michael Benedikt. Cambridge, Mass.: MIT P, 1991. 225-54.
Oldenburg, Ray. The Great Good Place: Cafés, Coffee Shops, Community Centers, Beauty Parlors, General Stores, Bars, Hangouts and How They Get You through the Day. New York: Paragon, 1989.
Rheingold, Howard. The Virtual Community: Homesteading on the Electronic Frontier. New York: Harper, 1993.
———. The Virtual Community: Homesteading on the Electronic Frontier. Rev. ed. Cambridge, Mass.: MIT P, 2000.
Robins, Kevin. “Cyberspace and the World We Live In.” The Cybercultures Reader. Eds. David Bell and Barbara M. Kennedy. London: Routledge, 2000. 77-95.
Schiller, Dan. Digital Capitalism: Networking the Global Market System. Cambridge, Mass.: MIT P, 1999.
Schiller, Herbert I. Culture Inc.: The Corporate Takeover of Public Expression. New York: Oxford UP, 1991.
Sennett, Richard. The Uses of Disorder: Personal Identity & City Life. New York: Vintage, 1970.
Shalit, Ruth. “Welcome to the Habbo Hotel.” mpulse Magazine. Mar. 2002. Hewlett-Packard. 1 Apr. 2004 <http://www.cooltown.com/cooltown/mpulse/0302-habbo.asp>.
“Strong Growth in Sulake’s Revenues and Profit – Habbo Hotel Online Game Will Launch in the US in September.” 3 Sept. 2004. Sulake. Sulake Corp. 9 Jan. 2005 <http://www.sulake.com/>.
Turkle, Sherry. Life on the Screen: Identity in the Age of the Internet. New York: Simon, 1997.

Citation reference for this article

MLA Style: McGuire, Mark. "Ordered Communities." M/C Journal 7.6 (2005). <http://journal.media-culture.org.au/0501/06-mcguire.php>.
APA Style: McGuire, M. (Jan. 2005). "Ordered Communities." M/C Journal, 7(6). Retrieved from <http://journal.media-culture.org.au/0501/06-mcguire.php>.
APA, Harvard, Vancouver, ISO, and other styles
41

Pace, John. "The Yes Men." M/C Journal 6, no. 3 (June 1, 2003). http://dx.doi.org/10.5204/mcj.2190.

Full text
Abstract:
In a light-speed economy of communication, the only thing that moves faster than information is imagination. And in a time when, more than ever before, information is the currency of global politics, economics, conflict, and conquest, what better way to critique and crinkle the global-social than to combine the two - information and imagination - into an hilarious mockery of, and a brief incursion into, the vistas of the globalitarian order. This is precisely the reflexive and rhetorical pot-pourri that the group 'the Yes Men' (www.theyesmen.org) have formed. Beginning in 2000, the Yes Men describe themselves as a "network of impostors". Basically, the Yes Men (no they're not all men) fool organisations into believing they are representatives of the WTO (World Trade Organisation) and in turn receive, and accept, invitations to speak (as WTO representatives) at conferences, meetings, seminars, and all manner and locale of corporate pow-wows. At these meetings, the Yes Men deliver their own very special brand of WTO public address. Let's walk through a hypothetical situation. Ashley is organising a conference for a multinational adult entertainment company, at which the management might discuss ways in which it could cut costs from its dildo manufacturing sector by moving production to Indonesia where labour is cheap and tax non-existent (for some), rubber is in abundance, and where the workers' hands are slender enough so as to make even the "slimline-tickler" range appear gushingly large in annual report photographs. Ashley decides that a presentation from Supachai Panitchpakdi - head of the WTO body - on the virtues of unrestrained capitalism would be a great way to start the conference, and to build esprit de corps among participants - to summon some good vibrations, if you will. So Ashley jumps on the net.
After the obligatory four hours of trying to close the myriad porn site pop-ups that plague internet users of the adult entertainment industry, Ashley comes across the WTO site - or at least what looks like the WTO site - and, via the email link, goes about inviting Supachai Panitchpakdi to speak at the conference. What Ashley doesn't realise is that the site is a mirror site of the actual WTO site. This is not, however, grounds for Ashley's termination because it is only after careful and timely scrutiny that you can tell the difference - and in a hypercapitalist economy who has got time to carefully scrutinize? You see, the Yes Men own the domain name www.gatt.org (GATT [General Agreement on Tariffs and Trade] being the former, not so formalised and globally sanctioned incarnation of the WTO), so in the higgledy-piggledy cross-referencing infosphere of the internet, and its economy of keywords, unsuspecting WTO fans often find themselves perusing the Yes Men site. The Yes Men are sirens in both senses of the word. They raise alarm to rampant corporatism; and they sing the tunes of corporatism to lure their victims – they signal and seduce. The Yes Men are pull marketers, as opposed to the push tactics of logo based activism, and this is what takes them beyond logoism and its focus on the brand bullies. During the few years the Yes Men have been operating their ingenious rhetorical realignment of the WTO, they have pulled off some of the most golden moments in tactical media’s short history. In May 2002, after accepting an email invitation from conference organisers, the Yes Men hit an accountancy conference in Sydney. In his keynote speech, yes man Andy Bichlbaum announced that as of that day the WTO had decided to "effect a cessation of all operations, to be accomplished over a period of four months, culminating in September". He announced that "the WTO will reintegrate as a new trade body whose charter will be to ensure that trade benefits the poor" (ref).
The shocking news hit a surprisingly receptive audience and even sparked debate on the floor of the Canadian Parliament where questions were asked by MP John Duncan about "what impact this will have on our appeals on lumber, agriculture, and other ongoing trade disputes". The Certified Practicing Accountants (CPA) Australia reported that "[t]he changes come in response to recent studies which indicate strongly that the current free trade rules and policies have increased poverty, pollution, and inequality, and have eroded democratic principles, with a disproportionately large negative effect on the poorest countries" (CPA: 2002). In another Yes Men assault, this time at a Finnish textiles conference, yes man Hank Hardy Unruh gave a speech (instead of the then WTO head Mike Moore) arguing that the U.S. civil war (in which slavery became illegal) was a useless waste of time because the system of imported labour (slavery) has been supplanted now by a system of remote labour (sweatshops) – instead of bringing the "labour" to the dildos via ships from Africa, now we can take the dildos to the "labour", or more precisely, the idea of a dildo - or in biblical terms - take the mount'em to Mohammed, Mhemmet, or Ming. Unruh meandered through his speech to the usual complicit audience, happy to accept his bold assertions in the coma-like stride of a conference delegate, that is, until he ripped off his business suit (with help from an accomplice) to reveal a full-body golden leotard replete with a giant golden phallus which he proceeded to inflate with the aid of a small gas canister. He went on to describe to the audience that the suit, dubbed "the management leisure suit", was a new innovation in the remote labour control field. He informed the textiles delegates that located in the end of the phallus was a small video interface through which one could view workers in the Third World and administer, by remote control, electric shocks to those employees not working hard enough.
Apparently after the speech only one objection was forwarded and that was from a woman who complained that the phallus device was not appropriate because not only men can oppress workers in the third world. It is from the complicity of their audiences that the Yes Men derive their most virulent critique. They point out that the "aim is to get people to think more seriously about the sort of bullshit they are prepared to swallow, if and when the information comes from a suitably respected authority. By appearing, for example, in the name of the WTO, one could even make out a case for justifying homicide, irrespective of the target audience's training and intellect" (Yes Men), Unruh says. And this is the real statement that the Yes Men make: their real-life, real-time theatre hollows out the signifier of the WTO and injects its own signified to highlight the predominant role of language - rhetoric - in the globalising of the ideas of neo-liberalism. In speaking shit and having people, nay, experts, swallow it comfortably, the Yes Men punctuate that globalisation is as much a movement of ideas across societies as it is a movement of things through societies. It is a movement of ideals - a movement of meanings. Organisations like the WTO propagate these meanings, and propagandise a situation where there is no alternative to initiatives like free trade and the top-down, repressive regime espoused by neoliberal triumphalists. The Yes Men highlight that the seemingly immutable and inevitable charge of neoliberalism is in fact simply the dominance of a single way of structuring social life - one dictated by the market. Through their unique brand of semiotic puppetry, the Yes Men show that the project of unelected treaty organisations like the WTO and their push toward the globalisation of neoliberalism is not inevitable, it is not a fait accompli, but rather, that their claims of an inexorable movement toward a neo-liberal capitalism are simply more rhetorical than real.
By using the spin and speak of the WTO to suggest ideas like forcing the world's poor to recycle hamburgers to cure world hunger, the Yes Men demonstrate that the power of the WTO lies on the tip of their tongue, and in their ability to convince people the world over of the unquestionable legitimacy of that tongue-tip teetering power. But it is that same power that has threatened the future of the Yes Men. In November 2001, the owners of the gatt.org website received a call from the host of its webpage, Verio. The WTO had contacted Verio and asked them to shut down the gatt.org site for copyright violations. But the Yes Men came up with their own response - they developed freely available software which allows the user to mirror any site on the internet easily. Called "Reamweaver", the software allows the user to instantly "funhouse-mirror" anyone's website, copying the real-time "look and feel" but letting the user change any words, images, etc. that they choose. The thought of anyone being able to mimic any site on the internet is perhaps a little scary - especially in terms of e-commerce - imagine that "lizard-tongue belly button tickler" never arriving! Or thinking you had invited a bunch of swingers over to your house via a swingers website, only to find that you'd been duped by a rogue gang of fifteen tax accountants who had come to your house to give you a lecture on the issues associated with the inclusion of pro-forma information in preliminary announcements in East Europe 1955-1958. But seriously, I'm yet to critique the work of the Yes Men. Their brand of protest has come under fire most predictably from the WTO, and least surprisingly from their duped victims. But, really, in an era where the neo-liberal conservative right dominate the high-end operations of sociality, I am reticent to say a bad word about the Yes Men's light, creative, and refreshing style of dissent. 
I can hear the "free speech" cry coming from those who'd charge the Yes Men with denying their victims the right to freely express their ideas - and I suppose they are correct. But can supra-national institutions like the WTO and their ilk really complain about the Yes Men’s infringement on their rights to a fair communicative playing field when daily they ride roughshod over the rights of people and the people-defined "rights" of all else with which we share this planet? This is a hazardous junction for the dissent of the Yes Men because it is a point at which personal actions collide head-on with social ethics. The Yes Men’s brand of dissent is a form of direct action, and like direct action, the emphasis is on putting physical bodies between the oppressor and the oppressed – in this case between the subaltern and the supra-national. The Yes Men put their bodies between and within bodies – they penetrate the veneer of the brand to crawl around inside and mess with the mind of the host company body. Messing with anybody’s body is going to be bothersome. But while corporations enjoy the “rights” of embodied citizens, they are spared from the consequences citizens must endure. Take Worldcom’s fraudulent accounting (the biggest in US history), for instance: surely such a monumental deception necessitates more than a USD 500 million fine. When will “capital punishment” be introduced to apply to corporations? As in “killing off” the corporation and all its articles of association? Such inconsistencies in the citizenry praxis of corporations paint a pedestrian crossing at the junction where “body” activism meets the ethic (right?) of unequivocal free speech for all – and when we factor in crippling policies like structural adjustment, the ethically hazardous junction becomes shadowed by a glorious pedestrian overpass! 
Where logocentric activism literally concentrates on the apparel – the branded surface – the focus of groups like the Yes Men is on the body beneath – both corporate and corporeal. But are the tactics of the Yes Men enough? Does this step beyond logocentric-focused activism wade into the territory of substantive change? Of course the answer is a resounding no. The Yes Men are culture jammers - and culture jamming exists in the realm of ephemera. It asks a question, for a fleeting moment in the grand scheme of struggle, and then fades away. Fetishising the tactics of the Yes Men risks steering dissent into a form of entertainment - much like the entertainmentised politics it opposes. What the Yes Men do is creative and skilful, but it does not express the depth of commitment displayed by those activists working tirelessly on myriad - less glamorous - campaigns such as the free West Papua movement, and other broader issues of social activism like indigenous rights. If politics is entertainment, then the politics of the Yes Men celebrates the actor while ignoring the hard work of the production team. But having said that, I believe the Yes Men serve an important function in the complex mechanics of dissent. They are but one tactic; they cannot be expected to work with history - they exist in the moment, a transitory trance of reason. And provided the Yes Men continue to use their staged opportunities as platforms to suggest BETTER IDEAS, while also acknowledging the depth and complexity of the subject matter with which they deal, then their brand of protest is valid and effective. The Yes Men ride the cusp of a new style of contemporary social protest, and the more people who likewise use imagination to counter the globalitarian regime and its commodity logic, the better. 
Through intelligent satire and deft use of communication technologies, the Yes Men lay bare the internal illogic (in terms of human and ecological wellbeing) of the fetishistic charge to cut costs at all costs. Thank-Gatt for the Yes Men, the chastisers of the global eco-social pimps.

Works Cited

CPA. (2002). World Trade Organisation to Redefine Charter. http://theyesmen.org/tro/cpa.html
Yes Men: http://theyesmen.org/

* And thanks to Phil Graham for the “capital punishment” idea.

Links
http://theyesmen.org/
http://theyesmen.org/tro/cpa.html
http://www.gatt.org

Citation reference for this article

MLA Style
Pace, John. "The Yes Men." M/C: A Journal of Media and Culture <http://www.media-culture.org.au/0306/05-yesmen.php>.

APA Style
Pace, J. (2003, Jun 19). The Yes Men. M/C: A Journal of Media and Culture, 6. <http://www.media-culture.org.au/0306/05-yesmen.php>
APA, Harvard, Vancouver, ISO, and other styles
42

Glover, Stuart. "Failed Fantasies of Cohesion: Retrieving Positives from the Stalled Dream of Whole-of-Government Cultural Policy." M/C Journal 13, no. 1 (March 21, 2010). http://dx.doi.org/10.5204/mcj.213.

Full text
Abstract:
In mid-2001, in a cultural policy discussion at Arts Queensland, an Australian state government arts policy and funding apparatus, a senior arts bureaucrat seeking to draw a funding client’s gaze back to the bigger picture of what the state government was trying to achieve through its cultural policy settings excused his own abstracting comments with the phrase, “but then I might just be a policy ‘wank’”. There was some awkward laughter before one of his colleagues asked, “did you mean a policy ‘wonk’”? The incident was a misstatement of a term adopted in the 1990s to characterise the policy workers in the Clinton White House (Cunningham). This was not its exclusive use, but many saw Clinton as an exemplary wonk: less a pragmatic politician than one entertained by the elaboration of policy. The policy work of Clinton’s kitchen cabinet was, in part, driven by a pervasive rationalist belief in the usefulness of ordered policy processes as a method of producing social and economic outcomes, and, in part, by the seductions of policy-play: its ambivalences, its conundrums, and, in some sense, its aesthetics (Klein 193-94). There, far from being characterised as unproductive “self-abuse” of the body-politic, policy processes were alive as a pragmatic technology, an operationalisation of ideology, as an aestheticised field of play, but more than anything as a central rationalist tenet of government action. This final idea—the possibilities of policy for effecting change, promoting development, meeting government objectives—is at the centre of the bureaucratic imagination. Policy is effective. And a concomitant belief is that ordered or organised policy processes result in the best policy and the best outcomes. 
Starting with Harold Lasswell, policy theorists extended the general rationalist suppositions of Western representative democracies into executive government by arguing for the value of information/knowledge and the usefulness of ordered process in addressing thus identified policy problems. In the post-war period particularly, a case can be made for the usefulness of policy processes to government—although, in a paradox, these rationalist conceptions of the policy process were strangely irrational, even Utopian, in their view of the transformational possibilities of policy. The early policy scientists often moved beyond a view of policy science as a useful tool, to the advocacy of policy science and the policy scientist as panaceas for public ills (Parsons 18-19). The Utopian ambitions of policy science find one of their extremes in the contemporary interest in whole-of-government approaches to policy making. Whole-of-governmentalism, concern with co-ordination of policy and delivery across all areas of the state, can be seen as produced out of Western governments’ paradoxical concern with (on one hand) order, totality, and consistency, and (on the other) deconstructing existing mechanisms of public administration. Whole-of-governmentalism requires a horizontal purview of government goals, programs, outputs, processes, politics, and outcomes, alongside—and perhaps in tension with—the long-standing vertical purview that is fundamental to ministerial responsibility. This often presents a set of public management problems largely internal to government. Policy discussion and decision-making, while affecting community outcomes and stakeholder utility, are, in this circumstance, largely inter-agency in focus. Any eventual policy document may well have bureaucrats rather than citizens as its target readers—or at least as its closest readers. Internally, cohesion of objective, discourse, tool and delivery are pursued as prime interests of policy making. 
Failing at Policy

So what happens when whole-of-government policy processes, particularly cultural policy processes, break down or fail? Is there anything productive to be retrieved from a failed fantasy of policy cohesion? This paper examines the utility of a failure to cohere and order in cultural policy processes. I argue that the conditions of contemporary cultural policy-making, particularly the tension between the “boutique” scale of cultural policy-making bodies and the revised, near universal, remit of cultural policy, require policy work to be undertaken in an environment and in such a way that failure is almost inevitable. Coherence and cohesion are fundamental principles of whole-of-government policy, but cultural policy ambitions are necessarily too comprehensive to be achievable. This is especially so for the small arts or cultural offices of government that normally act as lead agencies for cultural policy development within government. Yet these failed processes can still give rise to positive outcomes, or positive intermediate outputs that can be taken up in a productive way in the ongoing cycle of policy work that characterises contemporary cultural governance. Herein, I detail the development of Building the Future, a cultural policy planning paper (and the name of a policy planning process) undertaken within Arts Queensland in 1999 and 2000. (While this process is now ten years in the past, it is only with a decade passed that, as a consultant, I am in a position to write about the material.) The abandonment of this process before the production of a public policy program allows something to be said about the utility and role of failure in cultural policy-making. 
The working draft of Building the Future never became a public document, but the eight months of its development helped produce a series of shifts in the discourse of Queensland Government cultural policy: from “arts” to “creative industries”; and from arts bureaucracy-centred cultural policy to whole-of-government policy frameworks. These concepts were then taken up and elaborated in the Creative Queensland policy statement published by Arts Queensland in October 2002, particularly the concern with creative industries; whole-of-government cultural policy; and the repositioning of Arts Queensland as a service agency to other potential cultural funding-bodies within government. Despite the failure of the Building the Future process, it had a role in the production of the policy document and policy processes that superseded it. This critique of cultural policy-making, rather than cultural policy texts, announcements and settings, is offered as part of a project to bring to cultural policy studies material and theoretical accounts of the particularities of making cultural policy. While directions in cultural policy have much to do with the overall directions of government—which might over the past decade be categorised as a focus on de-regulation and the out-sourcing of services—there are developments in cultural policy settings and in cultural policy processes that are particular to cultural policy and cultural policy-making. Central to the development of cultural policy studies and to cultural policy is a transformational broadening of the operant definition of culture within government (O'Regan). Following Raymond Williams, the domain of culture is broadened to include high culture, popular culture, folk culture and the culture of everyday life. Accordingly, in some sense, every issue of governance is deemed to have a cultural dimension—be it policy questions around urban space, tourism, community building and so on. 
Contemporary governments are required to act with a concern for cultural questions both within and across a number of long-persisting and otherwise discrete policy silos. This has implications for cultural policy makers and for program delivery. The definition of culture as “everyday life”, while truistically defendable, becomes unwieldy as an imprimatur or a container for administrative activity. Transforming cultural policy into a domain incorporating most social policy and significant elements of economic policy makes the domain titanically large. Potentially, it compromises usual government efforts to order policy activity through the division or apportionment of responsibility (Glover and Cunningham 19). The problem has given rise to a new mode of policy-making which attends to the co-ordination of policy across and between levels of government, known as whole-of-government policy-making (see O’Regan). Within the domain of cultural policy the task of whole-of-government cultural policy is complicated by the position of, and the limits upon, arts and cultural bureaux within state and federal governments. Dedicated cultural planning bureaux often operate as “boutique” agencies. They are usually discrete line agencies or line departments within government—only rarely are they part of the core policy function of departments of a Premier or a Prime Minister. Instead, like most line agencies, they lack the leverage within the bureaucracy or policy apparatus to deliver whole-of-government cultural policy change. In some sense, failure is the inevitable outcome of all policy processes, particularly when held up against the mechanistic representation of policy processes typical of policy handbooks (see Bridgman and Davis 42). Against such models, which describe policy as a series of discrete linear steps, all policy efforts fail. 
The rationalist assumptions of early policy models—and the rigid templates for policy process that arise from their assumptions—in retrospect condemn every policy process to failure or at least profound shortcoming. This is particularly so with whole-of-government cultural policy-making. To re-think this, it can be argued that the error then is not really in the failure of the process, which is invariably brought about by the difficulty for coherent policy process to survive exogenous complexity, but instead the error rests with the simplicity of policy models and assumptions about the possibility of cohesion. In some sense, mechanistic policy processes make failure endogenous. The contemporary experience of making policy has tended to erode any fantasies of order, clear process, or, even, clear-sightedness within government. Achieving a coherence to the policy message is nigh on impossible—likewise cohesion of the policy framework is unlikely. Yet, importantly, failed policy is not without value. The churn of policy work—the exercise of attempting coherent policy-making—constitutes, in some sense, the deliberative function of government, and potentially operates as a force (and site) of change. Policy briefings, reports, and draft policies—the constitution of ideas in the policy process and the mechanism for their dissemination within the body of government and perhaps to other stakeholders—are discursive acts in the process of extending the discourse of government and forming its later actions. For arts and cultural policy agencies in particular, who act without the leverage or resources of central agencies, the expansive ambitions of whole-of-government cultural policy make failure inevitable. In such a circumstance, retrieving some benefits at the margins of policy processes, through the churn of policy work towards cohesion, is an important consolation.

Case study: Cultural Policy 2000

The policy process I wish to examine is now complete. 
It ran over the period 1999–2002, although I wish to concentrate on my involvement in the process in early 2000, during which, as a consultant to Arts Queensland, I generated a draft policy document, Building the Future: A policy framework for the next five years (working draft). The imperative to develop a new state cultural policy followed the election of the first Beattie Labor government in July 1998. By 1999, senior Arts Queensland staff began to argue (within government at least) for the development of a new state cultural policy. The bureaucrats perceived policy development as one way of establishing “traction” in the process of bidding for new funds for the portfolio. Arts Minister Matt Foley was initially reluctant to “green-light” the policy process, but eventually in early 1999 he acceded to it on the advice of Arts Queensland, the industry, his own policy advisors and the Department of Premier. As stated above, this case study is offered now because the passing of time makes the analysis of relatively sensitive material possible. From the outset, an abbreviated timeframe for consultation and drafting seemed to guarantee a difficult birth for the policy document. This was compounded by a failure to clarify the aims and process of the project. In presenting the draft policy to the advisory group, it became clear that there was no agreed strategic purpose to the document: Was it to be an advertisement, a framework for policy ideas, an audit, or a report on achievements? Tied to this were questions about the audience for the policy statement. Was it aimed at the public, the arts industry, bureaucrats inside Arts Queensland, or, in keeping with the whole-of-government inflection to the document and its putative use in bidding for funds inside government, bureaucrats outside of Arts Queensland? My own conception of the document was as a cultural policy framework for the whole-of-government for the coming five years. 
It would concentrate on cultural policy in three realms: Arts Queensland; the arts instrumentalities; and other departments (particularly the cultural initiatives undertaken by the Department of Premier and the Department of State Development). In order to do this I articulated (for myself) a series of goals for the document. It needed to provide the philosophical underpinnings for a new arts and cultural policy, discuss the cultural significance of “community” in the context of the arts, outline expansion plans for the arts infrastructure throughout Queensland, advance ideas for increased employment in the arts and cultural industries, explore the development of new audiences and markets, address contemporary issues of technology, globalisation and culture commodification, promote a whole-of-government approach to the arts and cultural industries, address social justice and equity concerns associated with cultural diversity, and present examples of current and new arts and cultural practices. Five key strategies were identified: i) building strong communities and supporting diversity; ii) building the creative industries and the cultural economy; iii) developing audiences and telling Queensland’s stories; iv) delivering to the world; and v) a new role for government. While the second aim of building the creative industries and the cultural economy was an addition to the existing Australian arts policy discourse, it is the articulation of a new role for government that is most radical here. 
The document went to the length of explicitly suggesting a series of actions to enable Arts Queensland to re-position itself inside government: develop an ongoing policy cycle; position Arts Queensland as a lead agency for cultural policy development; establish a mechanism for joint policy planning across the arts portfolio; adopt a whole-of-government approach to policy-making and program delivery; use arts and cultural strategies to deliver on social and economic policy agendas; centralise some cultural policy functions and projects; maintain and develop mechanisms for peer assessment; establish long-term strategic relationships with the Commonwealth and local government; investigate new vehicles for arts and cultural investment; investigate partnerships between industry, community and government; and develop appropriate performance measures for the cultural industries. In short, the scope of the document was titanically large, and prohibitively expansive as a basis for policy change. A chief limitation of these aims is that they seem to place the cohesion and coherence of the policy discourse at the centre of the project—when it might have better privileged a concern with policy outputs and industry/community outcomes. The subsequent dismal fortunes of the document are instructive. The policy document went through several drafts over the first half of 2000. By August 2000, I had removed myself from the process and handed the drafting back to Arts Queensland, which then produced a shorter, less discursive version of my initial draft. However, by November 2000, it is reasonable to say that the policy document was abandoned. Significantly, after May 2000 the working drafts began to be used as internal discussion documents within government. Thus, despite the abandonment of the policy process, largely due to the unworkable breadth of its ambition, the document had a continued policy utility. 
The subsequent discussions helped organise future policy statements and structural adjustments by government. After the re-election of the Beattie government in January 2001, a more substantial policy process was commenced with the earlier policy documents as a starting point. By early 2002 the document was in substantial draft. The eventual policy, Creative Queensland, was released in October 2002. Significantly, this document sought to advance two ideas that I believe the earlier process did much to mobilise: a whole-of-government approach to culture; and a broader operant definition of culture. It is important not to see these as ideas merely existing “textually” in the earlier policy draft of Building the Future, but instead to see them as ideas that had begun to adhere to the cultural policy mechanism of government, and begun to be deployed in internal policy discussions and in program design, before finding an eventual home in a published policy text.

Analysis

The productive effects of the aborted policy process in which I participated are difficult to quantify. They are difficult, in fact, to separate out from governments’ ongoing processes of producing and circulating policy ideas. What is clear is that the effects of Building the Future were not entirely negated by it never becoming public. Instead, despite only circulating to a readership of bureaucrats, it represented the ideas of part of the bureaucracy at a point in time. In this instance, a “failed” policy process and its intermediate outcomes, the draft policy, through the churn of policy work, assisted government towards an eventual policy statement and a new form of governmental organisation. This suggests that processes of cultural policy discussion, or policy churn, can be as productive as the public “enunciation” of formal policy in helping to organise ideas within government and determine programs and the allocation of resources. 
This is even so where the Utopian idealism of the policy process is abandoned for something more graspable or politic. For the small arts or cultural policy bureau this is an important incremental benefit. Two final implications should be noted. The first is for models of policy process. Bridgman and Davis’s model of the Australian policy cycle, despite its mechanistic qualities, is ambiguous about where the policy process begins and ends. In one instance they represent it as linear but strictly circular, always coming back to its own starting point (27). Elsewhere, however, they represent it as linear, but not necessarily circular, passing through eight stages with a defined beginning and end: identification of issues; policy analysis; choosing policy instruments; consultation; co-ordination; decision; implementation; and evaluation (28–29). What is clear from the 1999–2002 policy process—if we take the full period between when Arts Queensland began to organise the development of a new arts policy and its publication as Creative Queensland in October 2002—is that the policy process was not a linear one progressing in an orderly fashion towards policy outcomes. Instead, Building the Future is a snapshot in time (namely early to mid-2000) of a fragmenting policy process; it reveals policy-making as involving a concurrency of policy activity rather than a progression through linear steps. Following Mark Considine’s conception of policy work as the state’s effort at “system-wide information exchange and policy transfer” (271), the document is concerned less with the ordering of resources than with the organisation of policy discourse. The churn of policy is the mobilisation of information, or for Considine: policy-making, when considered as an innovation system among linked or interdependent actors, becomes a learning and regulating web based upon continuous exchanges of information and skill. 
Learning occurs through regulated exchange, rather than through heroic insight or special legislative feats of the kind regularly described in newspapers. (269) The acceptance of this underpins a turn in contemporary accounts of policy (Considine 252-72) where policy processes become contingent and incomplete. The ordering of policy is something to be attempted rather than achieved. Policy becomes pragmatic and ad hoc. It is only coherent in as much as a policy statement represents a bringing together of elements of an agency or government’s objectives and program. The order, in some sense, arrives through the act of collection, narrativisation and representation. The second implication is more directly for cultural policy makers facing the prospect of whole-of-government cultural policy making. While it is reasonable for government to wish to make coherent totalising statements about its cultural interests, such ambitions bring the near certainty of failure for the small agency. Yet these failures of coherence and cohesion should be viewed as delivering incremental benefits through the effort and process of this policy “churn”. As was the case with the Building the Future policy process, while aborted it was not a totally wasted effort. Instead, Building the Future mobilised a set of ideas within Arts Queensland and within government. For the small arts or cultural bureaux approaching the enormous task of whole-of-government cultural policy making, such marginal benefits are important.

References

Arts Queensland. Creative Queensland: The Queensland Government Cultural Policy 2002. Brisbane: Arts Queensland, 2002.
Bridgman, Peter, and Glyn Davis. Australian Policy Handbook. St Leonards: Allen & Unwin, 1998.
Considine, Mark. Public Policy: A Critical Approach. South Melbourne: Palgrave Macmillan, 1996.
Cunningham, Stuart. "Willing Wonkers at the Policy Factory." Media Information Australia 73 (1994): 4-7.
Glover, Stuart, and Stuart Cunningham. "The New Brisbane." Artlink 23.2 (2003): 16-23.
Glover, Stuart, and Gillian Gardiner. Building the Future: A Policy Framework for the Next Five Years (Working Draft). Brisbane: Arts Queensland, 2000.
Klein, Joe. "Eight Years." New Yorker 16 & 23 Oct. 2000: 188-217.
O'Regan, Tom. "Cultural Policy: Rejuvenate or Wither?" AKCCMP, 26 July 2001. 9 Aug. 2001 <http://www.gu.edu.au/centre/cmp>.
Parsons, Wayne. Public Policy: An Introduction to the Theory and Practice of Policy Analysis. Aldershot: Edward Elgar, 1995.
Williams, Raymond. Keywords: A Vocabulary of Culture and Society. London: Fontana, 1976.
43

Hartley, John. "Lament for a Lost Running Order? Obsolescence and Academic Journals." M/C Journal 12, no. 3 (July 15, 2009). http://dx.doi.org/10.5204/mcj.162.

Full text
Abstract:
The academic journal is obsolete. In a world where there are more titles than ever, this is a comment on their form – especially the print journal – rather than their quantity. Now that you can get everything online, it doesn’t really matter what journal a paper appears in; certainly it doesn’t matter what’s in the same issue. The experience of a journal is rapidly obsolescing, for both editors and readers. I’m obviously not the first person to notice this (see, for instance, "Scholarly Communication"; "Transforming Scholarly Communication"; Houghton; Policy Perspectives; Teute), but I do have a personal stake in the process. For if the journal is obsolete then it follows that the editor is obsolete, and I am the editor of the International Journal of Cultural Studies. I founded the IJCS and have been sole editor ever since. Next year will see the fiftieth issue. So far, I have been responsible for over 280 published articles – over 2.25 million words of other people’s scholarship … and counting. We won’t say anything about the words that did not get published, except that the IJCS rejection rate is currently 87 per cent. Perhaps the first point that needs to be made, then, is that obsolescence does not imply lack of success. By any standard the IJCS is a successful journal, and getting more so. It has recently been assessed as a top-rating A* journal in the Australian Research Council’s journal rankings for ERA (Excellence in Research for Australia), the newly activated research assessment exercise. (In case you’re wondering, M/C Journal is rated B.) The ARC says of the ranking exercise: ‘The lists are a result of consultations with the sector and rigorous review by leading researchers and the ARC.’ The ARC definition of an A* journal is given as: Typically an A* journal would be one of the best in its field or subfield in which to publish and would typically cover the entire field/ subfield. Virtually all papers they publish will be of very high quality. 
These are journals where most of the work is important (it will really shape the field) and where researchers boast about getting accepted. Acceptance rates would typically be low and the editorial board would be dominated by field leaders, including many from top institutions. (Appendix I, p. 21; and see p. 4.) Talking of boasting, I love to prate about the excellent people we’ve published in the IJCS. We have introduced new talent to the field, and we have published new work by some of its pioneers – including Richard Hoggart and Stuart Hall. We’ve also published – among many others – Sara Ahmed, Mohammad Amouzadeh, Tony Bennett, Goran Bolin, Charlotte Brunsdon, William Boddy, Nico Carpentier, Stephen Coleman, Nick Couldry, Sean Cubitt, Michael Curtin, Daniel Dayan, Ben Dibley, Stephanie Hemelryk Donald, John Frow, Elfriede Fursich, Christine Geraghty, Mark Gibson, Paul Gilroy, Faye Ginsberg, Jonathan Gray, Lawrence Grossberg, Judith Halberstam, Hanno Hardt, Gay Hawkins, Joke Hermes, Su Holmes, Desmond Hui, Fred Inglis, Henry Jenkins, Deborah Jermyn, Ariel Heryanto, Elihu Katz, Senator Rod Kemp (Australian government minister), Youna Kim, Agnes Ku, Richard E. Lee, Jeff Lewis, David Lodge (the novelist), Knut Lundby, Eric Ma, Anna McCarthy, Divya McMillin, Antonio Menendez-Alarcon, Toby Miller, Joe Moran, Chris Norris, John Quiggin, Chris Rojek, Jane Roscoe, Jeffrey Sconce, Lynn Spigel, John Storey, Su Tong, the late Sako Takeshi, Sue Turnbull, Graeme Turner, William Uricchio, José van Dijck, Georgette Wang, Jing Wang, Elizabeth Wilson, Janice Winship, Handel Wright, Wu Jing, Wu Qidi (Chinese Vice-Minister of Education), Emilie Yueh-Yu Yeh, Robert Young and Zhao Bin. 
As this partial list makes clear, as well as publishing the top ‘hegemons’ we also publish work pointing in new directions, including papers from neighbouring disciplines such as anthropology, area studies, economics, education, feminism, history, literary studies, philosophy, political science, and sociology. We have sought to represent neglected regions, especially Chinese cultural studies, which has grown strongly during the past decade. And for quite a few up-and-coming scholars we’ve been the proud host of their first international publication. The IJCS was first published in 1998, already well into the internet era, but it was print-only at that time. Since then, all content, from volume 1:1 onwards, has been digitised and is available online (although vol 1:2 is unaccountably missing). The publishers, Sage Publications Ltd, London, have steadily added online functionality, so that now libraries can get the journal in various packages, including offering this title among many others in online-only bundles, and individuals can purchase single articles online. Thus, in addition to institutional and individual subscriptions, which remain the core business of the journal, income is derived by the publisher from multi-site licensing, incremental consortial sales income, single- and back-issue sales (print), pay-per-view, and deep back file sales (electronic). So what’s obsolete about it? In that boasting paragraph of mine (above), about what wonderful authors we’ve published, lies one of the seeds of obsolescence. For now that it is available online, ‘users’ (no longer ‘readers’!) can search for what they want and ignore the journal as such altogether. This is presumably how most active researchers experience any journal – they are looking for articles (or less: quotations; data; references) relevant to a given topic, literature review, thesis etc. 
They encounter a journal online through its ‘content’ rather than its ‘form.’ The latter is irrelevant to them, and may as well not exist.

The Cover

Some losses are associated with this change. First is the loss of the front cover. Now you, dear reader, scrolling through this article online, might well complain, why all the fuss about covers? Internet-generation journals don’t have covers, so all of the work that goes into them to establish the brand, the identity and even the ‘affect’ of a journal is now, well, obsolete. So let me just remind you of what’s at stake. Editors, designers and publishers all take a good deal of trouble over covers, since they are the point of intersection of editorial, design and marketing priorities. Thus, the IJCS cover contains the only ‘content’ of the journal for which we pay a fee to designers and photographers (usually the publisher pays, but in one case I did). Like any other cover, ours has three main elements: title, colour and image. Thought goes into every detail.

Title

I won’t say anything about the journal’s title as such, except that it was the result of protracted discussions (I suggested Terra Nullius at one point, but Sage weren’t having any of that). The present concern is with how a title looks on a cover. Our title-typeface is Frutiger. Originally designed by Adrian Frutiger for Charles de Gaulle Airport in Paris, it is suitably international, being used for the corporate identity of the UK National Health Service, Telefónica O2, the Royal Navy, the London School of Economics, the Canadian Broadcasting Corporation, the Conservative Party of Canada, Banco Bradesco of Brazil, the Finnish Defence Forces and on road signs in Switzerland (Wikipedia, "Frutiger"). Frutiger is legible, informal, and reads well in small copy. 
Sage’s designer and I corresponded on which of the words in our cumbersome name were most important, agreeing that ‘international’ combined with ‘cultural’ is the USP (Unique Selling Point) of the journal, so they should be picked out (in bold small-caps) from the rest of the title, which the designer presented in a variety of Frutiger fonts (regular, italic, and reversed – white on black), presumably to signify the dynamism and diversity of our content. The word ‘studies’ appears on a lozenge-shaped cartouche that is also used as a design element throughout the journal, for bullet points, titles and keywords.

Colour

We used to change this every two years, but since volume 7 it has stabilised with the distinctive Pantone 247, ‘new fuchsia.’ This colour arose from my own environment at QUT, where it was chosen (by me) for the new Creative Industries Faculty’s academic gowns and hoods, and thence as a detailing colour for the otherwise monochrome Creative Industries Precinct buildings. There’s a lot of it around my office, including on the wall and the furniture. New Fuchsia is – we are frequently told – a somewhat ‘girly’ colour, especially when contrasted with the Business Faculty’s blue or Law’s silver; its similarity to the Girlfriend/Dolly palette does introduce a mild ‘politics of prestige’ element, since it is determinedly pop culture, feminised, and non-canonical.

Image

Right at the start, the IJCS set out to signal its difference from other journals. At that time, all Sage journals had calligraphic covers – but I was insistent that we needed a photograph (I have ‘form’ in this respect: in 1985 I changed the cover of the Australian Journal of Cultural Studies from a line drawing (albeit by Sydney Nolan) to a photograph; and I co-designed the photo-cover of Cultural Studies in 1987). For IJCS I knew which photo I wanted, and Sage went along with the choice. I explained it in the launch issue’s editorial (Hartley, "Editorial"). 
That original picture, a goanna on a cattle grid in the outback, by Australian photographer Grant Hobson, lasted ten years. Since volume 11 – in time for our second decade – the goanna has been replaced with a picture by Italian-based photographer Patrick Nicholas, called ‘Reality’ (Hartley, "Cover Narrative"). We have also used two other photos as cover images, once each. They are: Daniel Meadows’s 1974 ‘Karen & Barbara’ (Hartley, "Who"); and a 1962 portrait of Richard Hoggart from the National Portrait Gallery in London (Owen & Hartley 2007). The choice of picture has involved intense – sometimes very tense – negotiations with Sage. Most recently, they were adamant the Daniel Meadows picture, which I wanted to use as the long-term replacement of the goanna, was too ‘English’ and they would not accept it. We exchanged rather sharp words before compromising. There’s no need to rehearse the dispute here; the point is that both sides, publisher and editor, felt that vital interests were at stake in the choice of a cover-image. Was it too obscure; too Australian; too English; too provocative (the current cover features, albeit in the deep background, a TV screen-shot of a topless Italian game-show contestant)?

Running Order

Beyond the cover, the next obsolete feature of a journal is the running order of articles. Obviously what goes in the journal is contingent upon what has been submitted and what is ready at a given time, so this is a creative role within a very limited context, which is what makes it pleasurable. Out of a limited number of available papers, a choice must be made about which one goes first, what order the other papers should follow, and which ones must be held over to the next issue. The first priority is to choose the lead article: like the ‘first face’ in a fashion show (if you don’t know what I mean by that, see FTV.com). It sets the look, the tone, and the standard for the issue. I always choose articles I like for this slot. 
It sends a message to the field – look at this! Next comes the running order. We have about six articles per issue. It is important to maintain the IJCS’s international mix, so I check for the country of origin, or failing that (since so many articles come from Anglosphere countries like the USA, UK and Australia), the location of the analysis. Attention also has to be paid to the gender balance among authors, and to the mix of senior and emergent scholars. Sometimes a weak article needs to be ‘hammocked’ between two good ones (these are relative terms – everything published in the IJCS is of a high scholarly standard). And we need to think about disciplinary mix, so as not to let the journal stray too far towards one particular methodological domain. Running order is thus a statement about the field – the disciplinary domain – rather than about an individual paper. It is a proposition about how different voices connect together in some sort of disciplinary syntax. One might even claim that the combination of cover and running order is a last vestige of collegiate collectivism in an era of competitive academic individualism. Now all that matters is the individual paper and author; the ‘currency’ is tenure, promotion and research metrics, not relations among peers. The running order is obsolete.

Special Issues

An extreme version of running order is the special issue. 
The IJCS has regularly published these; they are devoted to field-shaping initiatives, as follows:

- Radiocracy: Radio, Development and Democracy, eds. Amanda Hopkinson and Jo Tacchi (3.2, 2000)
- Television and Cultural Studies, ed. Graeme Turner (4.4, 2001)
- Cultural Studies and Education, eds. Karl Maton and Handel Wright (5.4, 2002)
- Re-Imagining Communities, eds. Sara Ahmed and Anne-Marie Fortier (6.3, 2003)
- The New Economy, Creativity and Consumption, ed. John Hartley (7.1, 2004)
- Creative Industries and Innovation in China, eds. Michael Keane and John Hartley (9.3, 2006)
- The Uses of Richard Hoggart, eds. Sue Owen and John Hartley (10.1, 2007)
- A Cultural History of Celebrity, ed. Liz Barry (11.3, 2008)
- Caribbean Media Worlds, eds. Anna Pertierra and Heather Horst (12.2, 2009)
- Co-Creative Labour, eds. Mark Deuze and John Banks (12.5, 2009)

It’s obvious that special issues have a place in disciplinary innovation – they can draw attention in a timely manner to new problems, neglected regions, or innovative approaches, and thus they advance the field. They are indispensable. But because of online publication, readers are not held to the ‘project’ of a special issue and can pick and choose whatever they want. And because of the peculiarities of research assessment exercises, editing special issues doesn’t count as research output. The incentive to do them is to that extent reduced, and some universities are quite heavy-handed about letting academics ‘waste’ time on activities that don’t produce ‘metrics.’ The special issue is therefore threatened with obsolescence too.

Refereeing

In many top-rating journals, the human side of refereeing is becoming obsolete. Increasingly this labour-intensive chore is automated and the labour is technologically outsourced from editors and publishers to authors and referees. You have to log on to some website and follow prompts in order to contribute both papers and the assessment of papers; interactions with editors are minimal. 
At the IJCS the process is still handled by humans – namely, journal administrator Tina Horton and me. We spend a lot of time checking how papers are faring, from trying to find the right referees through to getting the comments and then the author’s revisions completed in time for a paper to be scheduled into an issue. The volume of email correspondence is considerable. We get to know authors and referees. So we maintain a sense of an interactive and conversational community, albeit by correspondence rather than face to face. Doubtless, sooner or later, there will be a depersonalised Text Management System. But in the meantime we cling to the romantic notion that we are involved in refereeing for the sake of the field, for raising the standard of scholarship, for building a globally dispersed virtual college of cultural studies, and for giving everyone – from unfavoured countries and neglected regions to famous professors in old-money universities – the same chance to get their research published. In fact, these are largely delusional ideals, for as everyone knows, refereeing is part of the political economy of publicly-funded research. It’s about academic credentials, tenure and promotion for the individual, and about measurable research metrics for the academic organisation or funding agency (Hartley, "Death"). The IJCS has no choice but to participate: we do what is required to qualify as a ‘double-blind refereed journal’ because that is the only way to maintain repute, and thence the flow of submissions, not to mention subscriptions, without which there would be no journal. As with journals themselves, which proliferate even as the print form becomes obsolete, so refereeing is burgeoning as a practice. It’s almost an industry, even though the currency is not money but time: part gift-economy; part attention-economy; partly the payment of dues to the suzerain funding agencies. 
But refereeing is becoming obsolete in the sense of gathering an ‘imagined community’ of people one might expect to know personally around a particular enterprise. The process of dispersal and anonymisation of the field is exacerbated by blind refereeing, which we do because we must. This is suited to a scientific domain of objective knowledge, but everyone knows it’s not quite like that in the ‘new humanities’. The agency and identity of the researcher is often a salient fact in the research. The embedded positionality of the author, their reflexiveness about their own context and room-for-manoeuvre, and the radical contextuality of knowledge itself – these are all more or less axiomatic in cultural studies, but they’re not easily served by ‘double-blind’ refereeing. When refereeing is depersonalised to the extent that is now rife (especially in journals owned by international commercial publishers), it is hard to maintain a sense of contextualised productivity in the knowledge domain, much less a ‘common cause’ to which both author and referee wish to contribute. Even though refereeing can still be seen as altruistic, it is in the service of something much more general (‘scholarship’) and much more particular (‘my career’) than the kind of reviewing that wants to share and improve a particular intellectual enterprise. It is this mid-range altruism – something that might once have been identified as a politics of knowledge – that’s becoming obsolete, along with the printed journals that were the banner and rallying point for the cause. If I were to start a new journal (such as cultural-science.org), I would prefer ‘open refereeing’: uploading papers on an open site, subjecting them to peer-review and criticism, and archiving revised versions once they have received enough votes and comments. In other words I’d like to see refereeing shifted from the ‘supply’ or production side of a journal to the ‘demand’ or readership side. 
But of course, ‘demand’ for ‘blind’ refereeing doesn’t come from readers; it comes from the funding agencies.

The Reading Experience

Finally, the experience of reading a journal is obsolete. Two aspects of this seem worthy of note. First, reading is ‘out of time’ – it no longer needs to conform to the rhythms of scholarly publication, which are in any case speeding up. Scholarship is no longer seasonal, as it has been since the Middle Ages (with university terms organised around agricultural and ecclesiastical rhythms). Once you have a paper’s DOI number, you can read it any time, 24/7. It is no longer necessary even to wait for publication. With some journals in our field (e.g. Journalism Studies), assuming your Library subscribes, you can access papers as soon as they’re uploaded on the journal’s website, before the published edition is printed. Soon this will be the norm, just as it is for the top science journals, where timely publication, and thereby the ability to claim first discovery, is the basis of intellectual property rights. The IJCS doesn’t (yet) offer this service, but its frequency is speeding up. It was launched in 1998 with three issues a year. It went quarterly in 2001 and remained a quarterly for eight years. It has recently increased to six issues a year. That too causes changes in the reading experience. The excited ripping open of the package is less of a thrill the more often it arrives. Indeed, how many subscribers will admit that sometimes they don’t even open the envelope? Second, reading is ‘out of place’ – you never have to see the journal in which a paper appears, so you can avoid contact with anything that you haven’t already decided to read. This is more significant than might first appear, because it is affecting journalism in general, not just academic journals. As we move from the broadcast to the broadband era, communicative usage is shifting too, from ‘mass’ communication to customisation. This is a mixed blessing. 
One of the pleasures of old-style newspapers and the TV news was that you’d come across stories you did not expect to find. Indeed, an important attribute of the industrial form of journalism is its success in getting whole populations to read or watch stories about things they aren’t interested in, or things like wars and crises that they’d rather not know about at all. That historic textual achievement is in jeopardy in the broadband era, because ‘the public’ no longer needs to gather around any particular masthead or bulletin to get their news. With Web 2.0 affordances, you can exercise much more choice over what you attend to. This is great from the point of view of maximising individual choice, but sub-optimal in relation to what I’ve called ‘population-gathering’, especially the gathering of communities of interest around ‘tales of the unexpected’ – novelty or anomalies.

Obsolete: Collegiality, Trust and Innovation?

The individuation of reading choices may stimulate prejudice, because prejudice (literally, ‘pre-judging’) is built in when you decide only to access news feeds about familiar topics, stories or people in which you’re already interested. That sort of thing may encourage narrow-mindedness. It is certainly an impediment to chance discovery, unplanned juxtaposition, unstructured curiosity and thence, perhaps, to innovation itself. This is a worry for citizenship in general, but it is also an issue for academic ‘knowledge professionals,’ in our ever-narrower disciplinary silos. An in-close specialist focus on one’s own area of expertise need no longer be troubled by the concerns of the person in the next office, never mind the next department. Now, we don’t even have to meet on the page. One of the advantages of whole journals, then, is that each issue encourages ‘macro’ as well as ‘micro’ perspectives, and opens reading up to surprises. This willingness to ‘take things on trust’ describes a ‘we’ community – a community of trust. 
Trust too is obsolete in these days of performance evaluation. We’re assessed by an anonymous system that’s managed by people we’ll never meet. If the ‘population-gathering’ aspects of print journals are indeed obsolete, this may reduce collegiate trust and fellow-feeling, increase individualist competitiveness, and inhibit innovation. In the face of that prospect, I’m going to keep on thinking about covers, running orders, referees and reading until the role of editor is obsolete too.

References

Hartley, John. "'Cover Narrative': From Nightmare to Reality." International Journal of Cultural Studies 11.2 (2005): 131-137.
———. "Death of the Book?" Symposium of the National Scholarly Communication Forum & Australian Academy of the Humanities, Sydney Maritime Museum, 2005. 26 Apr. 2009 ‹http://www.humanities.org.au/Resources/Downloads/NSCF/RoundTables1-17/PDF/Hartley.pdf›.
———. "Editorial: With Goanna." International Journal of Cultural Studies 1.1 (1998): 5-10.
———. "'Who Are You Going to Believe – Me or Your Own Eyes?' New Decade; New Directions." International Journal of Cultural Studies 11.1 (2008): 5-14.
Houghton, John. "Economics of Scholarly Communication: A Discussion Paper." Center for Strategic Economic Studies, Victoria University, 2000. 26 Apr. 2009 ‹http://www.caul.edu.au/cisc/EconomicsScholarlyCommunication.pdf›.
Owen, Sue, and John Hartley, eds. The Uses of Richard Hoggart. International Journal of Cultural Studies (special issue) 10.1 (2007).
Policy Perspectives: To Publish and Perish. Special issue cosponsored by the Association of Research Libraries, Association of American Universities and the Pew Higher Education Roundtable, 7.4 (1998). 26 Apr. 2009 ‹http://www.arl.org/scomm/pew/pewrept.html›.
"Scholarly Communication: Crisis and Revolution." University of California Berkeley Library. N.d. 26 Apr. 2009 ‹http://www.lib.berkeley.edu/Collections/crisis.html›.
Teute, F. J. "To Publish or Perish: Who Are the Dinosaurs in Scholarly Publishing?" Journal of Scholarly Publishing 32.2 (2001). 26 Apr. 2009 ‹http://www.utpjournals.com/product/jsp/322/perish5.html›.
"Transforming Scholarly Communication." University of Houston Library. 2005. 26 Apr. 2009 ‹http://info.lib.uh.edu/scomm/transforming.htm›.
APA, Harvard, Vancouver, ISO, and other styles
44

Lovink, Geert. "Fragments on New Media Arts and Science." M/C Journal 6, no. 4 (August 1, 2003). http://dx.doi.org/10.5204/mcj.2242.

Full text
Abstract:
Of Motivational Art

“Live to be outstanding.” What is new media in the age of the ‘rock ‘n’ roll life coach’ Anthony Robbins? There is no need to be ‘spectacular’ anymore. The Situationist critique of the ‘spectacle’ has worn out. That would be my assessment of the Robbins Age we now live in. Audiences are no longer looking for empty entertainment; they need help. Art has to motivate, not question but assist. Today’s aesthetic experience ought to awaken the spiritual side of life. Aesthetics are not there for contemplation only. Art has to become (inter)active and take on the role of ‘coaching.’ In terms of the ‘self mastery’ discourse, the 21st Century artist helps to ‘unleash the power from within.’ No doubt this is going to be achieved with ‘positive energy.’ What is needed is “perverse optimism” (Tibor Kalman). Art has to create, not destroy. A visit to the museum or gallery has to fit into one’s personal development program. Art should consult, not criticize. In order to be a true Experience, the artwork has to initiate through a bodily experience, comparable to the fire walk. It has to be passionate, and should shed its disdain for the viewer, along with its postmodern strategies of irony, reversal and indifference. In short: artists have to take responsibility and stop their silly plays. The performance artist’s perfect day-job: the corporate seminar, ‘trust-building’ and distilling the firm’s ‘core values’ from its ‘human resources’. Self-management ideology builds on the 80s wave of political correctness, liberated from a critical negativism that only questioned existing power structures without giving guidance. As Tony says: “Live with passion!” Emotions have to flow. People want to be fired up and ‘move out of their comfort zone.’ Complex references to intellectual currents within art history are a waste of time. The art experience has to fit in and add to the ‘personal growth’ agenda. 
Art has to ‘leverage fears’ and promise ‘guaranteed success.’ Part therapist, part consultant, art no longer compensates for a colourless life. Instead it makes the most of valuable resources and is aware of the ‘attention economy’ it operates in. In order to reach such higher plains of awareness it seems unavoidable to admit and celebrate one’s own perverse Existenz. Everyone is a pile of shit and has got dirty hands. Or as Tibor Kalman said: “No one gets to work under ethically pure conditions.” (see Rick Poynor’s <http://www.undesign.org/tiborocity/>). It is at that Zizekian point that art as a counseling practice comes into being.

Mapping the Limits of New Media

To what extent has the ‘tech wreck’ and following scandals affected our understanding of new media? No doubt there will also be cultural fall-out. Critical new media practices have been slow to respond to both the rise and the fall of dotcommania. The world of IT firms and their volatile valuations on the world’s stock markets seemed light years away from the new media arts galaxy. The speculative hey-day of new media culture was the early-mid 90s, before the rise of the World Wide Web. Theorists and artists jumped eagerly at not-yet-existing and inaccessible technologies such as virtual reality. Cyberspace generated a rich collection of mythologies. Issues of embodiment and identity were fiercely debated. Only five years later, with Internet stocks going through the roof, not much was left of the initial excitement in intellectual and artistic circles. Experimental technoculture missed out on the funny money. Over the last few years there has been a steady stagnation of new media culture, its concepts and its funding. With hundreds of millions of new users flocking onto the Net, the arts could no longer keep up and withdrew to their own little world of festivals, mailing lists and workshops. 
Whereas new media arts institutions, begging for goodwill, still portray artists as working at the forefront of technological developments, collaborating with state of the art scientists, the reality is a different one. Multi-disciplinary goodwill is at an all time low. At best, the artist’s new media products are ‘demo design’ as described by Peter Lunenfeld in Snap to Grid. Often it does not even reach that level. New media art, as defined by its few institutions, rarely reaches audiences outside of its own subculture. What in positive terms could be described as the heroic fight for the establishment of a self-referential ‘new media arts system’ through a frantic differentiation of works, concepts and traditions, may as well be classified as a dead-end street. The acceptance of new media by leading museums and collectors will simply not happen. Why wait a few decades anyway? The majority of the new media art works on display at ZKM in Karlsruhe, the Linz Ars Electronica Center, ICC in Tokyo or the newly opened Australian Centre for the Moving Image are hopeless in their innocence, being neither critical nor radically utopian in approach. It is for that reason that the new media arts sector, despite its steady growth, is getting increasingly isolated, incapable of addressing the issues of today’s globalized world. It is therefore understandable that the contemporary (visual) arts world is continuing the decades old silent boycott of interactive new media works in galleries, biennales and shows such as Documenta. A critical reassessment of the role of arts and culture within today’s network society seems necessary. Let’s go beyond the ‘tactical’ intentions of the players involved. This is not a blame game. The artist-engineer, tinkering away on alternative human-machine interfaces, social software, or digital aesthetics has effectively been operating in a self-imposed vacuum. 
Over the last few decades both science and business have successfully ignored the creative community. Even worse, artists have actively been sidelined in the name of ‘usability’. The backlash movement against web design, led by usability guru Jakob Nielsen, is a good example of this trend. Other contributing factors may have been fear of corporate dominance by companies such as AOL/Time Warner and Microsoft. Lawrence Lessig argues that innovation of the Internet itself is in danger. In the meanwhile the younger generation is turning its back from new media arts questions and operates as anti-corporate activists, if at all engaged. Since the crash the Internet has rapidly lost its imaginative attraction. File swapping and cell phones can only temporarily fill the vacuum. It would be foolish to ignore this. New media have lost their magic spell; the once so glamorous gadgets are becoming part of everyday life. This long-term tendency, now in a phase of acceleration, seriously undermines the future claim of new media altogether. Another ‘taboo’ issue in new media is generationalism. With video and expensive interactive installations being the domain of the ‘68 baby boomers, the generation of ‘89 has embraced the free Internet. But the Net turned out to be a trap for them. Whereas real assets, positions and power remains in the hands of the ageing baby boomers, the gamble of its predecessors on the rise of new media did not materialize. After venture capital has melted away, there is still no sustainable revenue system in place for the Internet. The slow working education bureaucracies have not yet grasped the new media malaise. Universities are still in the process of establishing new media departments. But that will come to a halt at some point. The fifty-something tenured chairs and vice-chancellors must feel good about their persistent sabotage. ‘What’s so new about new media anyway? Technology was hype after all, promoted by the criminals of Enron and WorldCom. 
It’s enough for students to do a bit of email and web surfing, safeguarded within a filtered and controlled intranet…’ It is to counter this cynical reasoning that we urgently need to analyze the ideology of the greedy 90s and its techno-libertarianism. If we don’t disassociate new media quickly from that decade, if we continue with the same rhetoric, the isolation of the new media sector will sooner or later result in its death. Let’s transform the new media buzz into something more interesting altogether – before others do it for us.

The Will to Subordinate to Science

The dominant wing of Western ‘new media arts’ lacks a sense of superiority, sovereignty, determination and direction. One can witness a tendency towards ‘digital inferiority’ at virtually every cyber-event. Artists, critics and curators have made themselves subservient to technology – and ‘life science’ in particular. This ideological stand has grown out of an ignorance that cannot be explained easily. We’re talking here about a subtle mentality, almost a taboo. The cult practice between ‘domina’ science and its slaves the new media artists is taking place in backrooms of universities and art institutions, warmly supported by genuinely interested corporate bourgeois elements – board members, professors, science writers and journalists – that set the technocultural agenda. Here we’re not talking about some form of ‘techno celebration.’ New media art is not merely a servant to corporate interests. If only it was that simple. The reproach of new media arts ‘celebrating’ technology is a banality, only stated by outsiders; and the interest in life sciences can easily be sold as a (hidden) longing to take part in science’s supra-human ‘triumph of logos,’ but I won’t do that here. Scientists, for their part, are disdainfully looking down at the vaudeville interfaces and well-meant weirdness of biotech art. Not that they will say anything. 
But the weak smiles on their faces bespeak a cultural gap light years wide. An exquisite non-communication is at hand here. Performance artist Coco Fusco recently wrote a critique of biotech art on the Nettime mailing list (January 26, 2003). “Biotech artists have claimed that they are redefining art practice and therefore the old rules don't apply to them.” For Fusco bioart’s “heroic stance and imperviousness to criticism sounds a bit hollow and self-serving after a while, especially when the demand for inclusion in mainstream art institutions, art departments in universities, art curricula, art world money and art press is so strong.” From this marginal position, its post-human dreams of transcending the body could better be read as desires to transcend its own marginality, being neither recognized as ‘visual arts’ nor as ‘science.’ Coco Fusco: “I find the attempts by many biotech art endorsers to celebrate their endeavor as if it were just about a scientific or aesthetic pursuit to be disingenuous. Its very rhetoric of transcendence of the human is itself a violent act of erasure, a master discourse that entails the creation of ‘slaves’ as others that must be dominated.” OK, but what if all this remains but a dream, prototypes of human-machine interfaces that, like demo-design, are going nowhere? The isolated social position of the new media arts in this type of criticism is not taken into consideration. Biotech art has to be almighty in order for the Fusco rhetoric to function. Coco Fusco rightly points at artists that “attend meetings with ‘real’ scientists, but in that context they become advisors on how to popularize science, which is hardly what I would call a critical intervention in scientific institutions.” Artists are not ‘better scientists’ and the scientific process is not a better way of making art than any other, Fusco writes. 
She concludes: “Losing respect for human life is certainly the underbelly of any militaristic adventure, and lies at the root of the racist and classist ideas that have justified the violent use of science for centuries. I don't think there is any reason to believe that suddenly, that kind of science will disappear because some artists find beauty in biotech.” It remains an open question where radical criticism of (life) science has gone and why the new media (arts) canon is still in such a primitive, regressive stage. Links: http://www.undesign.org/tiborocity/. Citation reference for this article: MLA Style: Lovink, Geert. "Fragments on New Media Arts and Science." M/C: A Journal of Media and Culture 6 (2003). <http://www.media-culture.org.au/0308/10-fragments.php>. APA Style: Lovink, G. (2003, Aug. 26). Fragments on New Media Arts and Science. M/C: A Journal of Media and Culture, 6. <http://www.media-culture.org.au/0308/10-fragments.php>.
APA, Harvard, Vancouver, ISO, and other styles
45

Stalcup, Meg. "What If? Re-imagined Scenarios and the Re-Virtualisation of History." M/C Journal 18, no. 6 (March 7, 2016). http://dx.doi.org/10.5204/mcj.1029.

Full text
Abstract:
Image 1: “Oklahoma State Highway Re-imagined.” CC BY-SA 4.0 2015 by author, using Wikimedia image by Ks0stm (CC BY-SA 3.0, 2013). Introduction This article is divided into three major parts: first a scenario, second its context, and third, an analysis. The text draws on ethnographic research on security practices in the United States among police and parts of the intelligence community from 2006 through to the beginning of 2014. Real names are used when the material is drawn from archival sources, while individuals who were interviewed during fieldwork are referred to by their position, rank or title. For matters of fact not otherwise referenced, see the sources compiled on “The Complete 911 Timeline” at History Commons. First, a scenario. Oklahoma, 2001 It is 1 April 2001, in far western Oklahoma, warm beneath the late afternoon sun. Highway Patrol Trooper C.L. Parkins is about 80 kilometres from the border of Texas, watching trucks and cars speed along Interstate 40. The speed limit is around 110 kilometres per hour, and just then, his radar clocks a blue Toyota Corolla going 135 kph. The driver is not wearing a seatbelt. Trooper Parkins swung in behind the vehicle, and after a while signalled that the car should pull over. The driver was dark-haired and short; in Parkins’s memory, he spoke English without any problem. He asked the man to come sit in the patrol car while he did a series of routine checks—to see if the vehicle was stolen, if there were warrants out for his arrest, if his license was valid. Parkins said, “I visited with him a little bit but I just barely remember even having him in my car. You stop so many people that if […] you don't arrest them or anything […] you don't remember too much after a couple months” (Clay and Ellis). Nawaf Al Hazmi had a valid California driver’s license, with an address in San Diego, and the car’s registration had been legally transferred to him by his former roommate. 
Parkins’s inquiries to the National Crime Information Center returned no warnings, nor did anything seem odd in their interaction. So the officer wrote Al Hazmi two tickets totalling $138, one for speeding and one for failure to use a seat belt, and told him to be on his way. Al Hazmi, for his part, was crossing the country to a new apartment in a Virginia suburb of Washington, DC, and upon arrival he mailed the payment for his tickets to the county court clerk in Oklahoma. Over the next five months, he lived several places on the East Coast: going to the gym, making routine purchases, and taking a few trips that included Las Vegas and Florida. He had a couple more encounters with local law enforcement and these too were unremarkable. On 1 May 2001 he was mugged, and promptly notified the police, who documented the incident with his name and local address (Federal Bureau of Investigation, 139). At the end of June, having moved to New Jersey, he was involved in a minor traffic accident on the George Washington Bridge, and officers again recorded his real name and details of the incident. In July, Khalid Al Mihdhar, the previous owner of the car, returned from abroad, and joined Al Hazmi in New Jersey. The two were boyhood friends, and they went together to a library several times to look up travel information, and then, with Al Hazmi’s younger brother Selem, to book their final flight. On 11 September, the three boarded American Airlines flight 77 as part of the Al Qaeda team that flew the mid-sized jet into the west façade of the Pentagon. They died along with the piloting hijacker, all the passengers, and 125 people on the ground. Theirs was one of four airplanes hijacked that day, one of which was crashed by passengers, the others into significant sites of American power, by men who had been living for varying lengths of time all but unnoticed in the United States. 
No one thought that Trooper Parkins, or the other officers with whom the 9/11 hijackers crossed paths, should have acted differently. The Commissioner of the Oklahoma Department of Public Safety himself commented that the trooper “did the right thing” at that April traffic stop. And yet, interviewed by a local newspaper in January of 2002, Parkins mused to the reporter “it's difficult sometimes to think back and go: 'What if you had known something else?'" (Clay and Ellis). Missed Opportunities Image 2: “Hijackers Timeline (Redacted).” CC BY-SA 4.0 2015 by author, using the Federal Bureau of Investigation (FBI)’s “Working Draft Chronology of Events for Hijackers and Associates”. In fact, several of the men who would become the 9/11 hijackers were stopped for minor traffic violations. Mohamed Atta, usually pointed to as the ringleader, was given a citation in Florida that spring of 2001 for driving without a license. When he missed his court date, a bench warrant was issued (Wall Street Journal). Perhaps the warrant was not flagged properly, however, since nothing happened when he was pulled over again, for speeding. In the government inquiries that followed the attack, and in the press, these brushes with the law were “missed opportunities” to thwart the 9/11 plot (Kean and Hamilton, Report 353). Among a certain set of career law enforcement personnel, particularly those active in management and police associations, these missed opportunities were fraught with a sense of personal failure. Yet, in short order, they were to become a source of professional revelation. The scenarios—Trooper Parkins and Al Hazmi, other encounters in other states, the general fact that there had been chance meetings between police officers and the hijackers—were re-imagined in the aftermath of 9/11. Those moments were returned to and reversed, so that multiple potentialities could be seen, beyond or in addition to what had taken place. 
The deputy director of an intelligence fusion centre told me in an interview, “it is always a local cop who saw something” and he replayed how the incidents of contact had unfolded with the men. These scenarios offered a way to recapture the past. In the uncertainty of every encounter, whether a traffic stop or questioning someone taking photos of a landmark (and potential terrorist target), was also potential. Through a process of re-imagining, police encounters with the public became part of the government’s “national intelligence” strategy. Previously a division had been marked between foreign and domestic intelligence. While the phrase “national intelligence” had long been used, notably in National Intelligence Estimates, after 9/11 it became more significant. The overall director of the US intelligence community became the Director of National Intelligence, for instance, and the cohesive term marked the way that increasingly diverse institutional components, types of data and forms of action were evolving to address the collection of data and intelligence production (McConnell). In a series of working groups mobilised by members of major police professional organisations, and funded by the US Department of Justice, career officers and representatives from federal agencies produced detailed recommendations and plans for involving police in the new Information Sharing Environment. Among the plans drawn up during this period was what would eventually come to be the Nationwide Suspicious Activity Reporting Initiative, built principally around the idea of encounters such as the one between Parkins and Al Hazmi. Map 1: Map of pilot sites in the Nationwide Suspicious Activity Reporting Evaluation Environment in 2010 (courtesy of the author; no longer available online). Map 2: Map of participating sites in the Nationwide Suspicious Activity Reporting Initiative, as of 2014. 
In an interview, a fusion centre director who participated in this planning as well as its implementation, told me that his thought had been, “if we train state and local cops to understand pre-terrorism indicators, if we train them to be more curious, and to question more what they see,” this could feed into “a system where they could actually get that information to somebody where it matters.” In devising the reporting initiative, the working groups counter-actualised the scenarios of those encounters, and the kinds of larger plots to which they were understood to belong, in order to extract a set of concepts: categories of suspicious “activities” or “patterns of behaviour” corresponding to the phases of a terrorism event in the process of becoming (Deleuze, Negotiations). This conceptualisation of terrorism was standardised, so that it could be taught, and applied, in discerning and documenting the incidents comprising an event’s phases. In police officer training, the various suspicious behaviours were called “terrorism precursor activities” and were divided between criminal and non-criminal. “Functional Standards,” developed by the Los Angeles Police Department and then tested by the Department of Homeland Security (DHS), served to code the observed behaviours for sharing (via compatible communication protocols) up the federal hierarchy and also horizontally between states and regions. In the popular parlance of videos made for the public by local police departments and DHS, which would come to populate the internet within a few years, these categories were “signs of terrorism,” more specifically: surveillance, eliciting information, testing security, and so on. Image 3: “The Seven Signs of Terrorism (sometimes eight).” CC BY-SA 4.0 2015 by author, using materials in the public domain. 
If the problem of 9/11 had been that the men who would become hijackers had gone unnoticed, the basic idea of the Suspicious Activity Reporting Initiative was to create a mechanism through which the eyes and ears of everyone could contribute to their detection. In this vein, “If You See Something, Say Something™” was a campaign that originated with the New York City Metropolitan Transportation Authority, and was then licensed for use to DHS. The tips and leads such campaigns generated, together with the reports from officers on suspicious incidents that might have to do with terrorism, were coordinated in the Information Sharing Environment. Drawing on reports thus generated, the Federal Government would, in theory, communicate timely information on security threats to law enforcement so that they would be better able to discern the incidents to be reported. The cycle aimed to catch events in emergence, in a distinctively anticipatory strategy of counterterrorism (Stalcup). Re-imagination A curious fact emerges from this history, and it is key to understanding how this initiative developed. That is, there was nothing suspicious in the encounters. The soon-to-be terrorists’ licenses were up-to-date, the cars were legal, they were not nervous. Even Mohamed Atta’s warrant would have resulted in nothing more than a fine. It is not self-evident, given these facts, how a governmental technology came to be designed from these scenarios. How––if nothing seemed of immediate concern, if there had been nothing suspicious to discern––did an intelligence strategy come to be assembled around such encounters? Evidently, strident demands were made after the events of 9/11 to know, “what went wrong?” Policies were crafted and implemented according to the answers given: it was too easy to obtain identification, or to enter and stay in the country, or to buy airplane tickets and fly. But the trooper’s question, the reader will recall, was somewhat different. 
He had said, “It’s difficult sometimes to think back and go: ‘What if you had known something else?’” To ask “what if you had known something else?” is also to ask what else might have been. Janet Roitman shows that identifying a crisis tends to implicate precisely the question of what went wrong. Crisis, and its critique, take up history as a series of right and wrong turns, bad choices made between existing dichotomies (90): liberty-security, security-privacy, ordinary-suspicious. It is to say, what were the possibilities and how could we have selected the correct one? Such questions seek to retrospectively uncover latencies—systemic or structural, human error or a moral lapse (71)—but they ask of those latencies what false understanding of the enemy, of threat, of priorities, allowed a terrible thing to happen. “What if…?” instead turns to the virtuality hidden in history, through which missed opportunities can be re-imagined. Image 4: “The Cholmondeley Sisters and Their Swaddled Babies.” Anonymous, c. 1600-1610 (British School, 17th century); Deleuze and Parnet (150). CC BY-SA 4.0 2015 by author, using materials in the public domain. Gilles Deleuze, speaking with Claire Parnet, says, “memory is not an actual image which forms after the object has been perceived, but a virtual image coexisting with the actual perception of the object” (150). Re-imagined scenarios take up the potential of memory, so that as the trooper’s traffic stop was revisited, it also became a way of imagining what else might have been. As Immanuel Kant, among others, points out, “the productive power of imagination is […] not exactly creative, for it is not capable of producing a sense representation that was never given to our faculty of sense; one can always furnish evidence of the material of its ideas” (61). The “memory” of these encounters provided the material for re-imagining them, and thereby re-virtualising history. 
This was different from other governmental responses, such as examining past events in order to assess the probable risk of their repetition, or drawing on past events to imagine future scenarios, for use in exercises that identify vulnerabilities and remedy deficiencies (Anderson). Re-imagining scenarios of police-hijacker encounters through the question of “what if?” evoked what Erin Manning calls “a certain array of recognizable elastic points” (39), through which options for other movements were invented. The Suspicious Activity Reporting Initiative’s architects instrumentalised such moments as they designed new governmental entities and programs to anticipate terrorism. For each element of the encounter, an aspect of the initiative was developed: training, functional standards, a way to (hypothetically) get real-time information about threats. Suspicion was identified as a key affect, one which, if cultivated, could offer a way to effectively deal not with binary right or wrong possibilities, but with the potential which lies nestled in uncertainty. The “signs of terrorism” (that is, categories of “terrorism precursor activities”) served to maximise receptivity to encounters. Indeed, it can apparently create an oversensitivity, manifested, for example, in police surveillance of innocent people exercising their right to assemble (Madigan), or the confiscation of photographers’ equipment (Simon). “What went wrong?” and “what if?” were different interrogations of the same pre-9/11 incidents. The questions are of course intimately related. Moments where something went wrong are when one is likely to ask, what else might have been known? Moreover, what else might have been? The answers to each question informed and shaped the other, as re-imagined scenarios became the means of extracting categories of suspicious activities and patterns of behaviour that comprise the phases of an event in becoming. 
Conclusion The 9/11 Commission, after two years of investigation into the causes of the disastrous day, reported that “the most important failure was one of imagination” (Kean and Hamilton, Summary). The iconic images of 9/11 – such as airplanes being flown into symbols of American power – already existed, in guises ranging from fictive thrillers to the infamous FBI field memo sent to headquarters on Arab men learning to fly, but not land. In 1974 there had already been an actual (failed) attempt to steal a plane and kill the president by crashing it into the White House (Kean and Hamilton, Report Ch11 n21). The threats had been imagined, as Pat O’Malley and Philip Bougen put it, but not how to govern them, and because the ways to address those threats had not been imagined, they were discounted as matters for intervention (29). O’Malley and Bougen argue that one effect of 9/11, and the general rise of incalculable insecurities, was to make it necessary for the “merely imaginable” to become governable. Images of threats from the mundane to the extreme had to be conjured, and then imagination applied again, to devise ways to render them amenable to calculation, minimisation or elimination. In the words of the 9/11 Commission, the Government must bureaucratise imagination. There is a sense in which this led to more of the same. Re-imagining the early encounters reinforced expectations for officers to do what they already do, that is, to be on the lookout for suspicious behaviours. Yet, the images of threat brought forth, in their mixing of memory and an elastic “almost,” generated their own momentum and distinctive demands. Existing capacities, such as suspicion, were re-shaped and elaborated into specific forms of security governance. The question of “what if?” and the scenarios of police-hijacker encounter were particularly potent equipment for this re-imagining of history and its re-virtualisation. References Anderson, Ben. 
“Preemption, Precaution, Preparedness: Anticipatory Action and Future Geographies.” Progress in Human Geography 34.6 (2010): 777-98. Clay, Nolan, and Randy Ellis. “Terrorist Ticketed Last Year on I-40.” NewsOK, 20 Jan. 2002. 25 Nov. 2014 ‹http://newsok.com/article/2779124›. Deleuze, Gilles. Negotiations. New York: Columbia UP, 1995. Deleuze, Gilles, and Claire Parnet. Dialogues II. New York: Columbia UP, 2007 [1977]. Federal Bureau of Investigation. “Hijackers Timeline (Redacted) Part 01 of 02.” Working Draft Chronology of Events for Hijackers and Associates. 2003. 18 Apr. 2014 ‹https://vault.fbi.gov/9-11%20Commission%20Report/9-11-chronology-part-01-of-02›. Kant, Immanuel. Anthropology from a Pragmatic Point of View. Trans. Robert B. Louden. Cambridge: Cambridge UP, 2006. Kean, Thomas H., and Lee Hamilton. Executive Summary of the 9/11 Commission Report: Final Report of the National Commission on Terrorist Attacks upon the United States. 25 Oct. 2015 ‹http://www.9-11commission.gov/report/911Report_Exec.htm›. Kean, Thomas H., and Lee Hamilton. The 9/11 Commission Report: Final Report of the National Commission on Terrorist Attacks upon the United States. New York: W.W. Norton, 2004. McConnell, Mike. “Overhauling Intelligence.” Foreign Affairs, July/Aug. 2007. Madigan, Nick. “Spying Uncovered.” Baltimore Sun 18 Jul. 2008. 25 Oct. 2015 ‹http://www.baltimoresun.com/news/maryland/bal-te.md.spy18jul18-story.html›. Manning, Erin. Relationscapes: Movement, Art, Philosophy. Cambridge, MA: MIT P, 2009. O’Malley, P., and P. Bougen. “Imaginable Insecurities: Imagination, Routinisation and the Government of Uncertainty post 9/11.” Imaginary Penalities. Ed. Pat Carlen. Cullompton, UK: Willan, 2008. Roitman, Janet. Anti-Crisis. Durham, NC: Duke UP, 2013. Simon, Stephanie. “Suspicious Encounters: Ordinary Preemption and the Securitization of Photography.” Security Dialogue 43.2 (2012): 157-73. Stalcup, Meg. 
“Policing Uncertainty: On Suspicious Activity Reporting.” Modes of Uncertainty: Anthropological Cases. Eds. Limor Saminian-Darash and Paul Rabinow. Chicago: U of Chicago P, 2015. 69-87. Wall Street Journal. “A Careful Sequence of Mundane Dealings Sows a Day of Bloody Terror for Hijackers.” 16 Oct. 2001.
APA, Harvard, Vancouver, ISO, and other styles
46

Wallace, Derek. "'Self' and the Problem of Consciousness." M/C Journal 5, no. 5 (October 1, 2002). http://dx.doi.org/10.5204/mcj.1989.

Full text
Abstract:
Whichever way you look at it, self is bound up with consciousness, so it seems useful to review some of the more significant existing conceptions of this relationship. A claim by Mikhail Bakhtin can serve as an anchoring point for this discussion. He firmly predicates the formation of self not just on the existence of an individual consciousness, but on what might be called a double or social (or dialogic) consciousness. Summarising his argument, Pam Morris writes: 'A single consciousness could not generate a sense of its self; only the awareness of another consciousness outside the self can produce that image.' She goes on to say that, 'Behind this notion is Bakhtin's very strong sense of the physical and spatial materiality of bodily being,' and quotes directly from Bakhtin's essay as follows: This other human being whom I am contemplating, I shall always see and know something that he, from his place outside and over against me, cannot see himself: parts of his body that are inaccessible to his own gaze (his head, his face and its expression), the world behind his back . . . are accessible to me but not to him. As we gaze at each other, two different worlds are reflected in the pupils of our eyes . . . to annihilate this difference completely, it would be necessary to merge into one, to become one and the same person. This ever-present excess of my seeing, knowing and possessing in relation to any other human being, is founded in the uniqueness and irreplaceability of my place in the world. (Bakhtin in Morris 6) Recent investigations in neuroscience and the philosophy of mind lay down a challenge to this social conception of the self. Notably, it is a challenge that does not involve the restoration of any variant of Cartesian rationalism; indeed, it arguably over-privileges rationalism's subjective or phenomenological opposite. 
'Self' in this emerging view is a biologically generated but illusory construction, an effect of the operation of what are called 'neural correlates of consciousness' (NCC). Very briefly, an NCC refers to the distinct pattern of neurochemical activity, a 'neural representational system' – to some extent observable by modern brain-imaging equipment – that corresponds to a particular configuration of sense-phenomena, or 'content of consciousness' (a visual image, a feeling, or indeed a sense of self). Because this science is still largely hypothetical, with many alternative terms and descriptions, it would be better in this limited space to focus on one particular account – one that is particularly well developed in the area of selfhood and one that resonates with other conceptions included in this discussion. Thomas Metzinger begins by postulating the existence within each person (or 'system' in his terms) of a 'self-model', a representation produced by neural activity – what he calls a 'neural correlate of self-consciousness' – that the individual takes to be the actual self, or what Metzinger calls the 'phenomenal self'. 'A self-model is important,' Metzinger says, 'in enabling a system to represent itself to itself as an agent' (293). The individual is able to maintain this illusion because 'the self-model is the only representational structure that is anchored in the brain by a continuous source of internally generated input' (297). In a manner partly reminiscent of Bakhtin, he continues: 'The body is always there, and although its relational properties in space and in movement constantly change, the body is the only coherent perceptual object that constantly generates input.' The reason why the individual is able to jump from the self-model to the phenomenal self in the first place is because: We are systems that are not able to recognise their subsymbolic self-model as a model. 
For this reason, we are permanently operating under the conditions of a 'naïve-realistic self-misunderstanding': We experience ourselves as being in direct and immediate epistemic contact with ourselves. What we have in the past simply called a 'self' is not a non-physical individual, but only the content of an ongoing dynamical process – the process of transparent self-modeling. (Metzinger 299) The question that nonetheless arises is why it should be concluded that this self-model emerges from subjective neural activity and not, say, from socialisation. Why should a self-model be needed in the first place? Metzinger's response is to say that there is good evidence 'for some kind of innate "body prototype"' (298), and he refers to research that shows that even children born without limbs develop self-models which sometimes include limbs, or report phantom sensations in limbs that have never existed. To me, this still leaves open the possibility that such children are modelling their body image on strong identification with human others. But be that as it may, one of the things that remains unclear after this relatively rich account of contemporary or scientific phenomenology is the extent to which 'neural consciousness' is or can be supplemented by other kinds of consciousness, or indeed whether neural consciousness can be overridden by the 'self' acting on the basis of these other kinds of consciousness. The key stake in Metzinger's account is 'subjectivity'. The reason why the neural correlate of self-consciousness is so important to him is: 'Only if we find the neural and functional correlates of the phenomenal self will we be able to discover a more general theoretical framework into which all data can fit. Only then will we have a chance to understand what we are actually talking about when we say that phenomenal experience is a subjective phenomenon' (301). What other kinds of consciousness might there be? 
It is significant that, not only do NCC exponents have little to say about the interaction with other people, they rarely mention language, and they are unanimously and emphatically of the opinion that the thinking or processing that takes place in consciousness is not dependent on language, or indeed any signifying system that we know of (though conceivably, it occurs to me, the neural correlates may signify to, or 'call up', each other). And they show little 'consciousness' that a still influential body of opinion (informed latterly by post-structuralist thinking) has argued for the consciousness-shaping effects of 'discourse' – i.e. for socially and culturally generated patterns of language or other signification to order the processing of reality. We could usefully coin the term 'verbal correlates of consciousness' (VCC) to refer to these patterns of signification (words, proverbs, narratives, discourses). Again, however, the same sorts of questions apply, since few discourse theorists mention anything like neuroscience: To what extent is verbal consciousness supplemented by other forms of consciousness, including neural consciousness? These questions may never be fully answerable. However, it is interesting to work through the idea that NCC and VCC both exist and can be in some kind of relation even if the precise relationship is not measurable. This indeed is close to the case that Charles Shepherdson makes for psychoanalysis in attempting to retrieve it from the misunderstanding under which it suffers today: We are now familiar with debates between those who seek to demonstrate the biological foundations of consciousness and sexuality, and those who argue for the cultural construction of subjectivity, insisting that human life has no automatically natural form, but is always decisively shaped by contingent historical conditions. No theoretical alternative is more widely publicised than this, or more heavily invested today. 
And yet, this very debate, in which 'nature' and 'culture' are opposed to one another, amounts to a distortion of psychoanalysis, an interpretive framework that not only obscures its basic concepts, but erodes the very field of psychoanalysis as a theoretically distinct formation (2-3). There is not room here for an adequate account of Shepherdson's recuperation of psychoanalytic categories. A glimpse of the stakes involved is provided by Shepherdson's account, following Eugenie Lemoine-Luccione, of anorexia, which neither biomedical knowledge nor social constructionism can adequately explain. The further fact that anorexia is more common among women of the same family than in the general population, and among women rather than men, but in neither case exclusively so, thereby tending to rule out a genetic factor, allows Shepherdson to argue: [A]norexia can be understood in terms of the mother-daughter relation: it is thus a symbolic inheritance, a particular relation to the 'symbolic order', that is transmitted from one generation to another . . . we may add that this relation to the 'symbolic order' [which in psychoanalytic theory is not coextensive with language] is bound up with the symbolisation of sexual difference. One begins to see from this that the term 'sexual difference' is not used biologically, but also that it does not refer to general social representations of 'gender,' since it concerns a more particular formation of the 'subject' (12). An intriguing, and related, possibility, suggested by Foucault, is that NCC and VCC (or in Foucault's terms the 'visible' and the 'articulable'), operate independently of each other – that there is a 'disjunction' (Deleuze 64) or 'dislocation' (Shepherdson 166) between them that prevents any dialectical relation. Clearly, for Foucault, the lack of dialectical relation between the two modes does not mean that both are not at all times equally functional. 
But one can certainly speculate that, increasingly under postmodernity and media saturation, the verbal (i.e. the domain of signification in general) is influential. And if linguistic formations – discourses, narratives, etc. – can proliferate and feed on each other unconstrained by other aspects of reality, we get the sense of language 'running away with itself' and, at least for a time, becoming divorced from a more complete sense of reality. (This of course is basically the argument of Baudrillard.) The reverse may also be possible, in certain periods, although the idea that language could have no mediating effect at all on the production of reality (just inconsequential fluff on the surface of things) seems far-fetched in the wake of so much postmodern and media theory. However, the notion is consistent with the theories of hard-line materialists and genetic determinists. But we should at least consider the possibility that some sort of shaping interaction between NCC and VCC, without implicating the full conceptual apparatus of psychoanalysis, is continuously occurring. This possibility is, for me, best realised by Jacques Derrida when he writes of an irreducible interweaving of consciousness and language (the latter for Derrida being a cover term for any system of signification). This interweaving is such that the significatory superstructure 'reacts upon' the 'substratum of non-expressive acts and contents', and the name for this interweaving is 'text' (Mowitt 98). A further possibility is that provided by Pierre Bourdieu's notion of habitus – the socially inherited schemes of perception and selection, imparted by language and example, which operate for the most part below the level of consciousness but are available to conscious reflection by any individual lucky enough to learn how to recognise that possibility.
If the subjective representations of NCC exist, this habitus can be at best only partial; something denied by Bourdieu, whose theory of individual agency is founded in what he has referred to as 'the relation between two states of the social' – i.e. 'between history objectified in things, in the form of institutions, and history incarnate in the body, in the form of that system of durable dispositions I call habitus' (190). At the same time, much of Bourdieu's thinking about the habitus seems as though it could be consistent with the kind of predictable representations that might be produced by NCC. For example, there are the simple oppositions that structure much perception in Bourdieu's account. These range from the obvious phenomenological ones (dark/light; bright/dull; male/female; hard/soft, etc.) through to the more abstract, often analogical or metaphorical ones, such as those produced by teachers when assessing their students (bright/dull again; elegant/clumsy, etc.). It seems possible that NCC could provide the mechanism or realisation for the representation, storage, and reactivation of impressions constituting a social model-self. However, an entirely different possibility remains to be considered – which perhaps Bourdieu is also getting at – involving a radical rejection of both NCC and VCC. Any correlational or representational theory of the relationship between a self and his/her environment – which, according to Charles Altieri, includes the anti-logocentrism of Derrida – assumes that the primary focus for any consciousness is the mapping and remapping of this relationship rather than the actions and purposes of the agent in question. Referring to the later philosophy of Wittgenstein, Altieri argues: 'Consciousness is essentially not a way of relating to objects but of relating to actions we learn to perform . . . We do not usually think about objects, but about the specific form of activity which involves us with these objects at this time' (233).
Clearly, there is not yet any certainty in the arguments provided by neuroscience that neural activity performs a representational role. Is it not, then, possible that this activity, rather than being a 'correlate' of entities, is an accompaniment to, a registration of, action that the rest of the body is performing? In this view, self is an enactment, an expression (including but not restricted to language), and what self-consciousness is conscious of is this activity of the self, not the self as entity. In a way that again returns us towards Bakhtin, Altieri writes: 'From an analytical perspective, it seems likely that our normal ways of acting in the world provide all the criteria we need for a sense of identity. As Sidney Shoemaker has shown, the most important source of the sense of our identity is the way we use the spatio-temporal location of our body to make physical distinctions between here and there, in front and behind, and so on' (234). Reasonably consistent with the Wittgensteinian view – in its focus on self-activity – is that contemporary theorisation of the self that compares in influence with that posed by neuroscience. This is the self avowedly constructed by networked computer technology, as described by Mark Poster: [W]hat has occurred in the advanced industrial societies with increasing rapidity . . . is the dissemination of technologies of symbolisation, or language machines, a process that may be described as the electronic textualisation of daily life, and the concomitant transformations of agency, transformations of the constitution of individuals as fixed identities (autonomous, self-regulating, independent) into subjects that are multiple, diffuse, fragmentary. The old (modern) agent worked with machines on natural materials to form commodities, lived near other workers and kin in urban communities, walked to work or traveled by public transport, and read newspapers but engaged as a communicator mostly in face-to-face relations.
The new (postmodern) agent works mostly on symbols using computers, lives in isolation from other workers and kin, travels to work by car, and receives news and entertainment from television. . . . Individuals who have this experience do not stand outside the world of objects, observing, exercising rational faculties and maintaining a stable character. The individuals constituted by the new modes of information are immersed and dispersed in textualised practices where grounds are less important than moves. (44-45) Interestingly, Metzinger's theorisation of the model-self lends itself to the self-mutability – though not the diffusion – favoured by postmodernists like Poster. [I]t is . . . well conceivable that a system generates a number of different self-models which are functionally incompatible, and therefore modularised. They nevertheless could be internally coherent, each endowed with its own characteristic phenomenal content and behavioral profile . . . this does not have to be a pathological situation. Operating under different self-models in different situational contexts may be biologically as well as socially adaptive. Don't we all to some extent use multiple personalities to cope efficiently with different parts of our lives? (295-6) Poster's proposition is consistent with that of many in the humanities and social sciences today, influenced variously by postmodernism and social constructionism. What I believe remains at issue about his account is that it exchanges one form of externally constituted self ('fixed identity') for another (that produced by the 'modes of information'), and therefore remains locked in a logic of deterministic constitution. (There is a parallel here with Altieri's point about Derrida's inability to escape representation.) Furthermore, theorists like Poster may be too quickly generalising from the experience of adults in 'textualised environments'.
Until such time as human beings are born directly into virtual reality environments, each will, for a formative period of time, experience the world in the way described by Bakhtin – through 'a unified perception of bodily and personal being . . . characterised . . . as a loving gift mutually exchanged between self and other across the borderzone of their two consciousnesses' (cited in Morris 6). I suggest it is very unlikely that this emergent sense of being can ever be completely erased even when people subsequently encounter each other in electronic networked environments. It is clearly not the role of a brief survey like this to attempt any resolution of these matters. Indeed, my review has made all the more apparent how far from being settled the question of consciousness, and by extension the question of selfhood, remains. Even the classical notion of the homunculus (the 'little inner man' or the 'ghost in the machine') has been put back into play with Francis Crick and Christof Koch's (2000) neurobiological conception of the 'unconscious homunculus'. The balance of contemporary evidence and argument suggests that the best thing to do right now is to keep the questions open against any form of reductionism – whether social or biological. One way to do this is to explore the notions of self and consciousness as implicated in ongoing processes of complex co-adaptation between biology and culture – or their individual-level equivalents, brain and mind (Taylor Ch. 7). References Altieri, C. "Wittgenstein on Consciousness and Language: a Challenge to Derridean Literary Theory." Wittgenstein, Theory and the Arts. Ed. Richard Allen and Malcolm Turvey. New York: Routledge, 2001. Bourdieu, P. In Other Words: Essays Towards a Reflexive Sociology. Trans. Matthew Adamson. Stanford: Stanford University Press, 1990. Crick, F. and Koch, C. "The Unconscious Homunculus." Neural Correlates of Consciousness: Empirical and Conceptual Questions. Ed. Thomas Metzinger.
Cambridge, Mass.: MIT Press, 2000. Deleuze, G. Foucault. Trans. Sean Hand. Minneapolis: University of Minnesota Press, 1988. Metzinger, T. "The Subjectivity of Subjective Experience: A Representationalist Analysis of the First-Person Perspective." Neural Correlates of Consciousness: Empirical and Conceptual Questions. Ed. Thomas Metzinger. Cambridge, Mass.: MIT Press, 2000. Morris, P. (ed.). The Bakhtin Reader: Selected Writings of Bakhtin, Medvedev, Voloshinov. London: Edward Arnold, 1994. Mowitt, J. Text: The Genealogy of an Interdisciplinary Object. Durham: Duke University Press, 1992. Poster, M. Cultural History and Modernity: Disciplinary Readings and Challenges. New York: Columbia University Press, 1997. Shepherdson, C. Vital Signs: Nature, Culture, Psychoanalysis. New York: Routledge, 2000. Taylor, M. C. The Moment of Complexity: Emerging Network Culture. Chicago: University of Chicago Press, 2001. Citation reference for this article: Wallace, Derek. "'Self' and the Problem of Consciousness." M/C: A Journal of Media and Culture 5.5 (2002). <http://www.media-culture.org.au/mc/0210/Wallace.html>.
47

Bauer, Kathy Anne. "How Does Taste In Educational Settings Influence Parent Decision Making Regarding Enrolment?" M/C Journal 17, no. 1 (March 17, 2014). http://dx.doi.org/10.5204/mcj.765.

Full text
Abstract:
Introduction Historically in Australia, there has been a growing movement behind the development of quality Early Childhood Education and Care Centres (termed ‘centres’ for this article). These centres are designed to provide care and education outside of the home for children from birth to five years old. In the mid 1980s, the then Labor Government of Australia promoted and funded the establishment of many centres to provide women who were at home with children the opportunity to move into the workplace. Centre fees were heavily subsidised to make this option viable in the hope that more women would become employed and Australia’s rising unemployment statistics would be reduced. The popularity of this system soon meant that there was a childcare centre shortage and parents were faced with long waiting lists to enrol their child into a centre. To alleviate this situation, independent centres were established that complied with Government rules and regulations. Independent, state, and local government funded centres had a certain degree of autonomy over facilities, staffing, qualifications, quality programmes, and facilities. This movement became part of the global increased focus on the importance of early childhood education. As part of that educational emphasis, the Melbourne Declaration on Educational Goals for Young Australians in 2008 set the direction for schooling for the next 10 years. This formed the basis of Australia’s Education Reforms (Department of Education, Employment and Workplace Relations). The reforms have influenced the management of early childhood education and care centres. All centres must comply with the National Quality Framework that mandates staff qualifications, facility standards, and the ratios of children to adults. From a parent’s perspective centres now look very much the same. 
All centres have indoor and outdoor playing spaces, separate rooms for differently aged children, playgrounds, play equipment, foyer and office spaces with similarly qualified staff. With these similarities in mind, the dilemma for parents is how to decide on a centre for their child to attend. Does it come down to parents’ taste about a centre? In the education context, how is taste conceptualised? This article will present research that conceptualises taste as being part of a decision-making process (DMP) that is used by parents when choosing a centre for their child and, in doing so, will introduce the term: parental taste. The Determining Factors of Taste A three-phase, sequential, mixed methods study was used to determine how parents select one centre over another. Creswell described this methodology as successive phases of data collection, where each builds on the previous, with the aim of addressing the research question. This process was seen as a method to identify parents’ varying tastes in centres considered for their child to attend. Phase 1 used a survey of 78 participants to gather baseline data to explore the values, expectations, and beliefs of the parents. It also determined the aspects of the centre important to parents, and gauged the importance of the socio-economic status and educational backgrounds of the participants in their decision making. Phase 2 built on the phase 1 data and included interviews with 20 interviewees exploring the details of the decision-making process (DMP). This phase also elaborated on the survey questions and responses, determined the variables that might impact on the DMP, and identified how parents access information about early learning centres. Phase 3 focussed on parental satisfaction with their choice of early learning setting. Again using 20 interviewees, these interviews investigated the DMP that had been undertaken, as well as any that might still be ongoing.
This phase focused on parents' reflection on the DMP used and questioned them as to whether the same process would be used again in other areas of decision making. Thematic analysis of the data revealed that it usually fell to the mother to explore centre options and make the decision about enrolment. Along the way, she may have discussions with the father and, to a lesser extent, with the centre staff. Friends, relatives, the child, siblings, and other educational professionals did not rank highly when the decision was being considered. Interestingly, it was found that the mother began to consider childcare options and the need for care twelve months or more before care was required and a decision had to be made. A small number of parents (three of the 20) said that they thought about it while pregnant but felt silly because they “didn’t even have a baby yet.” All mothers said that it took quite a while to get their head around leaving their child with someone else, and this anxiety and concern increased the younger the child was. Two parents had set criteria that their child would not go into care until he/she could talk and walk, so that the child could look after him- or herself to some extent. This indicated some degree of scepticism that their child would be cared for appropriately. Parents who considered enrolling their child into care closer to when it was required generally chose to do this because they had selected a pre-determined age that their child would go into childcare. A small number of parents (two) decided that their child would not attend a centre until three years old, while other parents found employment and had to find care quickly in response. The survey results showed that the aspects of a centre that most influenced parental decision-making were the activities and teaching methods used by staff, centre reputation, play equipment inside and outside the centre, and the playground size and centre buildings.
The interview responses added to this by suggesting that the type of playground facilities available was important, with a natural environment being preferred. Interestingly, the lowest aspect of importance reported was whether the child had friends or family already attending the centre. The results of the survey and interview data reflected the parents’ aspirations for their child and included the development of personal competencies of self-awareness, self-regulation, and motivation linking emotions to thoughts and actions (Gendron). The child’s experience in a centre was expected to develop and refine personal traits such as self-confidence, self-awareness, self-management, the ability to interact with others, and the involvement in educational activities to achieve learning goals. With these aspirations in mind, parents felt considerable pressure to choose the environment that would fit their child the best. During the interview stage of data collection, the term “taste” emerged. The term is commonly used in a food, fashion, or style context. In the education context, however, taste was conceptualised as the judgement of likes and dislikes regarding centre attributes. Gladwell writes that “snap judgements are, first of all, enormously quick: they rely on the thinnest slices of experience. But they are also unconscious” (50). The immediacy of determining one's taste refutes the neoliberal construction (Campbell, Proctor, Sherington) of the DMP as a rational decision-making process that systematically compares different options before making a decision. In the education context, taste can be reconceptualised as an alignment between a decision and inherent values and beliefs. A personal “backpack” of experiences, beliefs, values, ideas, and memories all play a part in forming a person’s taste related to their likes and dislikes. In turn, this affects the end decision made.
Parents formulated an initial response to a centre linked to the identification of attributes that aligned with personal values, beliefs, expectations, and aspirations. The data analysis indicated that parents formulated their personal taste in centres very quickly after centres were visited. At that point, parents had a clear image of the preferred centre. Further information gathering was used to reinforce that view and confirm this “parental taste.” How Does Parental Taste about a Centre Influence the Decision-Making Process? All parents used a process of decision-making to some degree. As already stated, it was usually the mother who gathered information to inform the final decision, but in two of the 78 cases it was the father who investigated and decided on the childcare centre in which to enrol. All parents used some form of process to guide their decision-making. A heavily planned process saw the parent gather information over a period of time and included participating in centre tours, drive-by viewings, talking with others, web-based searches, and checking locations in the phone book. Surprisingly, centre advertising was the least used and least effective method of attracting parents, with only one person indicating that advertising had played a part in her DMP. This approach applied to a woman who had just moved to a new town and was not aware of the care options. This method may also be a reflection of the personality of the parent or it may reflect an understanding that there are differences between services in terms of their focus on education and care. A lightly planned process occurred when a relatively swift decision was made with minimal information gathering. It could have been the selection of the closest and most convenient centre, or the one that parents had heard people talk about. These parents were happy to go to the centre and add their name to the waiting list or enrol straight away.
Generally, the impression was that all services provide the same education and care. Parents appeared to use different criteria when considering a centre for their child. Aspects here included the physical environment, size of rooms, aesthetic appeal, clean buildings, tidy surrounds, and a homely feel. Other aspects that affected this parental taste included the location of the centre, the availability of places for the child, and the interest the staff showed in parent and child. The interviews revealed that parents placed an importance on emotions when they decided if a centre suited their tastes, which in turn affected their DMP. The “vibe,” the atmosphere, and how the staff made the parents feel were the most important aspects of this process. The centre’s reputation was also central to decision making. What Constructs Underpin the Decision? Parental choice decisions can appear to be rational, but are usually emotionally connected to parental aspirations and values. In this way, parental choice and prior parental decision making processes reflect the bounded rationality described by Kahneman, and are based on factors relevant to the individual as supported by Ariely and Lindstrom. Ariely states that choice and the decision making process are emotionally driven and may be irrational-rational decisions. Gladwell supports this notion in that “the task of making sense of ourselves and our behaviour requires that we acknowledge there can be as much value in the blink of an eye as in months of rational analysis” (17). Reay’s research into social, cultural, emotional, and human capital to explain human behaviour was built upon to develop five constructs for decision making in this research. The R.O.P.E.S. constructs are domains that tie together to categorise the interaction of emotional connections that underpin the decision making process based on the parental taste in centres. The constructs emerged from the analysis of the data collected in the three-phase approach.
They were based on the responses from parents related to both their needs and their child’s needs in terms of having a valuable and happy experience at a centre. The R.O.P.E.S. constructs were key elements in the formation of parental taste in centres and eventual enrolment. The Reputational construct (R) included word of mouth from friends, the cleaner, or other staff from either the focus centre or another centre, and may or may not have aligned with parental taste and influenced the decision. Other constructs (O) included the location and convenience of the centre, and availability of spaces. Cost was not seen as an issue with the subsidies making each centre similar in fee structure. The Physical construct (P) included the facilities available such as the indoor and outdoor play space, whether these are natural or artificial environments, and the play equipment available. The Social construct (S) included social interactions—sharing, making friends, and building networks. It was found that the Emotional construct (E) was central to the process. It underpinned all the other constructs and was determined by the emotions that were elicited when the parent had the first and subsequent contact with the centre staff. This construct is pivotal in parental taste and decision making. Parents indicated that some centres did not have an abundance of resources but “the lady was really nice” (interview response) and the parent thought that her child would be cared for in that environment. Comments such as “the lady was really friendly and made us feel part of the place even though we were just looking around” (interview response) added to the emotional connection and construct for the DMP. The emotional connection with staff and the willingness of the director to take the time to show the parent the whole centre was a common comment from parents. Parents indicated that if they felt comfortable, and the atmosphere was warm and homelike, then they knew that their child would too.
One centre particularly supported parental taste in a homely environment and had lounges, floor rugs, lamps for lighting, and aromatherapy oil burning that contributed to a home-like feel that appealed to parents and children. The professionalism of the staff who displayed an interest in the children, had interesting activities in their room, and were polite and courteous also added to the emotional construct. Staff speaking to the parent and child, rather than just the parent, was also valued. Interestingly, parents did not comment on the qualifications held by staff, indicating that parents assumed that to be employed staff must hold the required qualifications. Is There a Further Application of Taste in Decision Making? The third phase of data collection was related to additional questions being asked of the interviewee that required reflection on the DMP used when choosing a centre for their child to attend. Parents were asked to review the process and comment on any changes that they would make if they were in the same position again. The majority of parents said that they were content with their taste in centres and the subsequent decision made. A quarter of the parents indicated that they would make minor changes to their process. A common comment made was that the process used was indicative of the parent’s personality. A self-confessed “worrier” enrolling her first child gathered a great deal of information and visited many centres to enable the most informed decision to be made. In contrast, a more relaxed parent enrolling a second or third child made a quicker decision after visiting or phoning one or two centres. Although parents considered their decision to be rational, the impact of parental taste upon visiting the centre and speaking to staff was a strong indicator of the level of satisfaction. Taste was a precursor to the decision.
When asked if the same process would be used if choosing a different service, such as an accountant, parents indicated that a similar process would be used, but perhaps not as in depth. The reasoning here was that parents were aware that the decision of selecting a centre would impact on their child and ultimately themselves in an emotional way. The parent indicated that if they spent time visiting centres and it appealed to their taste then the child would like it too. In turn this made the whole process of attending a centre less stressful and emotional. Parents clarified that not as much personal information gathering would occur if searching for an accountant. The focus would be centred on the accountant’s professional ability. Other instances were offered, such as purchasing a car, or selecting a house, dentist, or a babysitter. All parents suggested that additional information would be collected if their child or family would be directly impacted by the decision. Advertising of services or businesses through various multimedia approaches appeared not to rate highly when parents were in the process of decision making. Television, radio, print, Internet, and social networks were identified as possible modes of communication available for consideration by parents. The generational culture was evident in the responses from different parent age groups. The younger parents indicated that social media, Internet, and print may be used to ascertain the benefits of different services and to access information about the reputation of centres. In comparison, the older parents preferred word-of-mouth recommendations. Neither television nor radio was seen as a medium that would attract clientele. Conclusion In the education context, the concept of parental taste can be seen to be an integral component of the decision making process.
In this case, the attributes of an educational facility align with an individual’s personal “backpack” and form a like or a dislike, known as parental taste. The implications for the Directors of Early Childhood Education and Care Centres indicate that parental taste plays a role in a child’s enrolment into a centre. Parental taste is determined by the attributes of the centre that are aligned to the R.O.P.E.S. constructs with the emotional element as the key component. A less rigorous DMP is used when a generic service is required. Media and cultural ways of looking at our society interpret how important decisions are made. A general assumption is that major decisions are made in a calm, considered and rational manner. This is a neoliberal view and is not supported by the research presented in this article. References Ariely, Dan. Predictably Irrational: The Hidden Forces That Shape Our Decisions. London: Harper, 2009. Australian Children’s Education, Care and Quality Authority (ACECQA). n.d. 14 Jan. 2014. ‹http://www.acecqa.gov.au›. Campbell, Craig, Helen Proctor, and Geoffrey Sherington. School Choice: How Parents Negotiate The New School Market In Australia. Crows Nest, N.S.W.: Allen and Unwin, 2009. Creswell, John W. Research Design: Qualitative, Quantitative and Mixed Methods Approaches. 2nd ed. Los Angeles: Sage, 2003. Department of Education. 11 Oct. 2013. 14 Jan. 2014. ‹http://education.gov.au/national-quality-framework-early-childhood-education-and-care›. Department of Employment, Education and Workplace Relations (DEEWR). Education Reforms. Canberra, ACT: Australian Government Publishing Service, 2009. Gendron, Benedicte. “Why Emotional Capital Matters in Education and in Labour?: Toward an Optimal Exploitation of Human Capital and Knowledge Management.” Les Cahiers de la Maison des Sciences Economiques 113 (2004): 1–37. Gladwell, Malcolm. Blink: The Power of Thinking without Thinking. Harmondsworth, UK: Penguin, 2005. Kahneman, Daniel.
Thinking, Fast and Slow. New York: Farrar, Straus and Giroux, 2011. Lindstrom, Martin. Buy-ology: How Everything We Believe About Why We Buy is Wrong. London: Random House Business Books, 2009. Melbourne Declaration on Educational Goals for Young Australians. 14 Jan. 2014. ‹http://www.mceecdya.edu.au/mceecdya/melbourne_declaration,25979.html›. National Quality Framework. 14 Jan. 2014. ‹http://www.acecqa.gov.au›. Reay, Diane. A Useful Extension of Bourdieu’s Conceptual Framework?: Emotional Capital as a Way of Understanding Mothers’ Involvement in their Children’s Education? Oxford: Blackwell Publishers, 2000.
48

Aly, Anne, and Lelia Green. "Less than Equal: Secularism, Religious Pluralism and Privilege." M/C Journal 11, no. 2 (June 1, 2008). http://dx.doi.org/10.5204/mcj.32.

Full text
Abstract:
In its preamble, The Western Australian Charter of Multiculturalism (WA) commits the state to becoming: “A society in which respect for mutual difference is accompanied by equality of opportunity within a framework of democratic citizenship”. One of the principles of multiculturalism, as enunciated in the Charter, is “equality of opportunity for all members of society to achieve their full potential in a free and democratic society where every individual is equal before and under the law”. An important element of this principle is the “equality of opportunity … to achieve … full potential”. The implication here is that those who start from a position of disadvantage when it comes to achieving that potential deserve more than ‘equal’ treatment. Implicitly, equality can be achieved only through the recognition of and response to differential needs and according to the likelihood of achieving full potential. This is encapsulated in Kymlicka’s argument that neutrality is “hopelessly inadequate once we look at the diversity of cultural membership which exists in contemporary liberal democracies” (903). Yet such a potential commitment to differential support might seem unequal to some, where equality is constructed as the same or equal treatment regardless of differing circumstances. Until the past half-century or more, this problematic has been a hotly-contested element of the struggle for Civil Rights for African-Americans in the United States, especially as these rights related to educational opportunity during the years of racial segregation. For some, providing resources to achieve equal outcomes (rather than be committed to equal inputs) may appear to undermine the very ethos of liberal democracy. In Australia, this perspective has been the central argument of Pauline Hanson and her supporters who denounce programs designed as measures to achieve equality for specific disadvantaged groups; including Indigenous Australians and humanitarian refugees. 
Nevertheless, equality for all on all grounds of legally-accepted difference: gender, race, age, family status, sexual orientation, political conviction, to name a few; is often held as the hallmark of progressive liberal societies such as Australia. In the matter of religious freedoms the situation seems much less complex. All that is required for religious equality, it seems, is to define religion as a private matter – carried out, as it were, between consenting parties away from the public sphere. This necessitates, effectively, the separation of state and religion. This separation of religious belief from the apparatus of the state is referred to as ‘secularism’ and it tends to be regarded as a cornerstone of a liberal democracy, given the general assumption that secularism is a necessary precursor to equal treatment of and respect for different religious beliefs, and the association of secularism with the Western project of the Enlightenment when liberty, equality and science replaced religion and superstition. By this token, western nations committed to equality are also committed to being liberal, democratic and secular in nature; and it is a matter of state indifference as to which religious faith a citizen embraces – Wiccan, Christian, Judaism, etc – if any. Historically, and arguably more so in the past decade, the terms ‘democratic’, ‘secular’, ‘liberal’ and ‘equal’ have all been used to inscribe characteristics of the collective ‘West’. Individuals and states whom the West ascribe as ‘other’ are therefore either or all of: not democratic; not liberal; or not secular – and failing any one of these characteristics (for any country other than Britain, with its parliamentary-established Church of England, headed by the Queen as Supreme Governor) means that that country certainly does not espouse equality. 
The West and the ‘Other’ in Popular Discourse

The constructed polarisation between the free, secular and democratic West that values equality; and the oppressive ‘other’ that perpetuates theocracies, religious discrimination and – at the ultimate – human rights abuses, is a common theme in much of the West’s media and popular discourse on Islam. The same themes are also applied in some measure to Muslims in Australia, in particular to constructions of the rights of Muslim women in Australia. Typically, Muslim women’s dress is deemed by some secular Australians to be a symbol of religious subjugation, rather than of free choice. Arguably, this polemic has come to the fore since the terrorist attacks on the United States in September 2001. However, as Aly and Walker note, the comparisons between the West and the ‘other’ are historically constructed and inherited (Said) and have tended latterly to focus western attention on the role and status of Muslim women as evidence of the West’s progression comparative to its antithesis, Eastern oppression. An examination of studies of the United States media coverage of the September 11 attacks, and the ensuing ‘war on terror’, reveals some common media constructions around good versus evil. There is no equal status between these. Good must necessarily triumph. In the media coverage, the evil ‘other’ is Islamic terrorism, personified by Osama bin Laden. Part of the justification for the war on terror is a perception that the West, as a force for good in this world, must battle evil and protect freedom and democracy (Erjavec and Volcic): to do otherwise is to allow the terror of the ‘other’ to seep into western lives. The war on terror becomes the defence of the west, and hence the defence of equality and freedom. A commitment to equality entails a defeat of all things constructed as denying the rights of people to be equal.
Hutcheson, Domke, Billeaudeaux and Garland analysed the range of discourses evident in Time and Newsweek magazines in the five weeks following September 11 and found that journalists replicated themes of national identity present in the communication strategies of US leaders and elites. The political and media response to the threat of the evil ‘other’ is to create a monolithic appeal to liberal values which are constructed as being a monopoly of the ‘free’ West. A brief look at just a few instances of public communication by US political leaders confirms Hutcheson et al.’s contention that the official construction of the 2001 attacks invoked discourses of good and evil reminiscent of the Cold War. In reference to the actions of the four teams of plane hijackers, US president George W Bush opened his Address to the Nation on the evening of September 11: “Today, our fellow citizens, our way of life, our very freedom came under attack in a series of deliberate and deadly terrorist acts” (“Statement by the President in His Address to the Nation”). After enjoining Americans to recite Psalm 23 in prayer for the victims and their families, President Bush ended his address with a clear message of national unity and a further reference to the battle between good and evil: “This is a day when all Americans from every walk of life unite in our resolve for justice and peace. America has stood down enemies before, and we will do so this time. None of us will ever forget this day. Yet, we go forward to defend freedom and all that is good and just in our world” (“Statement by the President in His Address to the Nation”). In his address to the joint houses of Congress shortly after September 11, President Bush implicated not just the United States in this fight against evil, but the entire international community stating: “This is the world’s fight. This is civilisation’s fight” (cited by Brown 295). 
Addressing the California Business Association a month later, in October 2001, Bush reiterated the notion of the United States as the leading nation in the moral fight against evil, and identified this as a possible reason for the attack: “This great state is known for its diversity – people of all races, all religions, and all nationalities. They’ve come here to live a better life, to find freedom, to live in peace and security, with tolerance and with justice. When the terrorists attacked America, this is what they attacked”. While the US media framed the events of September 11 as an attack on the values of democracy and liberalism as these are embodied in US democratic traditions, work by scholars analysing the Australian media’s representation of the attacks suggested that this perspective was echoed and internationalised for an Australian audience. Green asserts that global media coverage of the attacks positioned the global audience, including Australians, as ‘American’. The localisation of the discourses of patriotism and national identity for Australian audiences has mainly been attributed to the media’s use of the good versus evil frame that constructed the West as good, virtuous and moral and invited Australian audiences to subscribe to this argument as members of a shared Western democratic identity (Osuri and Banerjee). Further, where the ‘we’ are defenders of justice, equality and the rule of law; the opposing ‘others’ are necessarily barbaric.

Secularism and the Muslim Diaspora

Secularism is a historically laden term that has been harnessed to symbolise the emancipation of social life from the forced imposition of religious doctrine. The struggle between the essentially voluntary and private demands of religion, and the enjoyment of a public social life distinct from religious obligations, is historically entrenched in the cultural identities of many modern Western societies (Dallmayr).
The concept of religious freedom in the West has evolved into a principle based on the bifurcation of life into the objective public sphere and the subjective private sphere within which individuals are free to practice their religion of choice (Yousif), or no religion at all. Secularism, then, is contingent on the maintenance of a separation between the public (religion-free) and the private or non-public (which may include religion). The debate regarding the feasibility or lack thereof of maintaining this separation has been a matter of concern for democratic theorists for some time, and has been made somewhat more complicated with the growing presence of religious diasporas in liberal democratic states (Charney). In fact, secularism is often cited as a precondition for the existence of religious pluralism. By removing religion from the public domain of the state, religious freedom, in so far as it constitutes the ability of an individual to freely choose which religion, if any, to practice, is deemed to be ensured. However, as Yousif notes, the Western conception of religious freedom is based on a narrow notion of religion as a personal matter, possibly a private emotional response to the idea of God, separate from the rational aspects of life which reside in the public domain. Arguably, religion is conceived of as recognising (or creating) a supernatural dimension to life that involves faith and belief, and the suspension of rational thought. This Western notion of religion as separate from the state, dividing the private from the public sphere, is constructed as a necessary basis for the liberal democratic commitment to secularism, and the notional equality of all religions, or none. Rawls questioned how people with conflicting political views and ideologies can freely endorse a common political regime in secular nations.
The answer, he posits, lies in the conception of justice as a mechanism to regulate society independently of plural (and often opposing) religious or political conceptions. Thus, secularism can be constructed as an indicator of pluralism and justice; and political reason becomes the “common currency of debate in a pluralist society” (Charney 7). A corollary of this is that religious minorities must learn to use the language of political reason to represent and articulate their views and opinions in the public context, especially when talking with non-religious others. This imposes a need for religious minorities to support their views and opinions with political reason that appeals to the community at large as citizens, and not just to members of the minority religion concerned. The common ground becomes one of secularism, in which all speakers are deemed to be indifferent as to the (private) claims of religion upon believers. Minority religious groups, such as fundamentalist Mormons, invoke secular language of moral tolerance and civil rights to be acknowledged by the state, and to carry out their door-to-door ‘information’ evangelisation/campaigns. Right wing fundamentalist Christian groups and Catholics opposed to abortion couch their views in terms of an extension of the secular right to life, and in terms of the human rights and civil liberties of the yet-to-be-born. In doing this, these religious groups express an acceptance of the plurality of the liberal state and engage in debates in the public sphere through the language of political values and political principles of the liberal democratic state. The same principles do not apply within their own associations and communities where the language of the private religious realm prevails, and indeed is expected. This embracing of a political rhetoric for discussions of religion in the public sphere presents a dilemma for the Muslim diaspora in liberal democratic states. 
For many Muslims, religion is a complete way of life, incapable of compartmentalisation. The narrow Western concept of religious expression as a private matter is somewhat alien to Muslims who are either unable or unwilling to separate their religious needs from their needs as citizens of the nation state. Problems become apparent when religious needs challenge what seems to be publicly acceptable, and conflicts occur between what the state perceives to be matters of rational state interest and what Muslims perceive to be matters of religious identity. Muslim women’s groups in Western Australia for example have for some years discussed the desirability of a Sharia divorce court which would enable Muslims to obtain divorces according to Islamic law. It should be noted here that not all Muslims agree with the need for such a court and many – probably a majority – are satisfied with the existing processes that allow Muslim men and women to obtain a divorce through the Australian family court. For some Muslims however, this secular process does not satisfy their religious needs and it is perceived as having an adverse impact on their ability to adhere to their faith. A similar situation pertains to divorced Catholics who, according to a strict interpretation of their doctrine, are unable to take the Eucharist if they form a subsequent relationship (even if married according to the state), unless their prior marriage has been annulled by the Catholic Church or their previous partner has died. Whereas divorce is considered by the state as a public and legal concern, for some Muslims and others it is undeniably a religious matter. The suggestion by the Anglican Communion’s Archbishop of Canterbury, Dr Rowan Williams, that the adoption of certain aspects of Sharia law regarding marital disputes or financial matters is ultimately unavoidable, sparked controversy in Britain and in Australia. 
Attempts by some Australian Muslim scholars to elaborate on Dr Williams’s suggestions, such as an article by Anisa Buckley in The Herald Sun (Buckley), drew responses that, typically, called for Muslims to ‘go home’. A common theme in these responses is that proponents of Sharia law (and Islam in general) do not share a commitment to the Australian values of freedom and equality. The following excerpts from the online pages of Herald Sun Readers’ Comments (Herald Sun) demonstrate this perception: “These people come to Australia for freedoms they have never experienced before and to escape repression which is generally brought about by such ‘laws’ as Sharia! How very dare they even think that this would be an option. Go home if you want such a regime. Such an insult to want to come over to this country on our very goodwill and our humanity and want to change our systems and ways. Simply, No!” Posted 1:58am February 12, 2008 “Under our English derived common law statutes, the law is supposed to protect an individual’s rights to life, liberty and property. That is the basis of democracy in Australia and most other western nations. Sharia law does not adequately share these philosophies and principles, thus it is incompatible with our system of law.” Posted 12:55am February 11, 2008 “Incorporating religious laws in the secular legal system is just plain wrong. 
No fundamentalist religion (Islam in particular) is compatible with a liberal-democracy.” Posted 2:23pm February 10, 2008 “It should not be allowed in Australia the Muslims come her for a better life and we give them that opportunity but they still believe in covering them selfs why do they even come to Australia for when they don’t follow owe [our] rules but if we went to there [their] country we have to cover owe selfs [sic]” Posted 11:28am February 10, 2008 Conflicts similar to this one – over any overt or non-private religious practice in Australia – may also be observed in public debates concerning the wearing of traditional Islamic dress; the slaughter of animals for consumption; Islamic burial rites, and other religious practices which cannot be confined to the private realm. Such conflicts highlight the inability of the rational liberal approach to solve all controversies arising from religious traditions that enjoin a broader world view than merely private spirituality. In order to adhere to the liberal reduction of religion to the private sphere, Muslims in the West must negotiate some religious practices that are constructed as being at odds with the rational state and practice a form of Islam that is consistent with secularism. At the extreme, this Western-acceptable form is what the Australian government has termed ‘moderate Islam’. The implication here is that, for the state, ‘non-moderate Islam’ – Islam that pervades the public realm – is just a descriptor away from ‘extreme’. The divide between Christianity and Islam has been historically played out in European Christendom as a refusal to recognise Islam as a world religion, preferring instead to classify it according to race or ethnicity: a Moorish tendency, perhaps. The secular state prefers to engage with Muslims as an ethnic, linguistic or cultural group or groups (Yousif). 
Thus, in order to engage with the state as political citizens, Muslims must find ways to present their needs that meet the expectations of the state – ways that do not use their religious identity as a frame of reference. They can do this by utilizing the language of political reason in the public domain or by framing their needs, views and opinions exclusively in terms of their ethnic or cultural identity with no reference to their shared faith. Neither option is ideal, or indeed even viable. This is partly because many Muslims find it difficult if not impossible to separate their religious needs from their needs as political citizens; and also because the prevailing perception of Muslims in the media and public arena is constructed on the basis of an understanding of Islam as a religion that conflicts with the values of liberal democracy. In the media and public arena, little consideration is given to the vast differences that exist among Muslims in Australia, not only in terms of ethnicity and culture, but also in terms of practice and doctrine (Shia or Sunni). The dominant construction of Muslims in the Australian popular media is of religious purists committed to annihilating liberal, secular governments and replacing them with anti-modernist theocratic regimes (Brasted). It becomes a talking point for some, for example, to realise that there are international campaigns to recognise Gay Muslims’ rights within their faith (ABC) (in the same way that there are campaigns to recognise Gay Christians as full members of their churches and denominations and equally able to hold high office, as followers of the Anglican Communion will appreciate).

Secularism, Preference and Equality

Modood asserts that the extent to which a minority religious community can fully participate in the public and political life of the secular nation state is contingent on the extent to which religion is the primary marker of identity.
“It may well be the case therefore that if a faith is the primary identity of any community then that community cannot fully identify with and participate in a polity to the extent that it privileges a rival faith. Or privileges secularism” (60). Modood is not saying here that Islam has to be privileged in order for Muslims to participate fully in the polity; but that no other religion, nor secularism, should be so privileged. None should be first, or last, among equals. For such a situation to occur, Islam would have to be equally acceptable both with other religions and with secularism. Following a 2006 address by the former treasurer (and self-avowed Christian) Peter Costello to the Sydney Institute, in which Costello suggested that people who feel a dual claim from both Islamic law and Australian law should be stripped of their citizenship (Costello), the former Prime Minister, John Howard, affirmed what he considers to be Australia’s primary identity when he stated that ‘Australia’s core set of values flowed from its Anglo Saxon identity’ and that anyone who did not embrace those values should not be allowed into the country (Humphries). The (then) Prime Minister’s statement is an unequivocal assertion of the privileged position of the Anglo Saxon tradition in Australia, a tradition with which many Muslims and others in Australia find it difficult to identify.

Conclusion

Religious identity is increasingly becoming the identity of choice for Muslims in Australia, partly because it is perceived that their faith is under attack and that it needs defending (Aly). They construct the defence of their faith as a choice and an obligation; but also as a right that they have under Australian law as equal citizens in a secular state (Aly and Green).
Australian Muslims who have no difficulty in reconciling their core Australianness with their deep faith take it as a responsibility to live their lives in ways that model the reconciliation of each identity – civil and religious – with the other. In this respect, the political call to Australian Muslims to embrace a ‘moderate Islam’, where this is seen as an Islam without a public or political dimension, is constructed as treating their faith as less than equal. Religious identity is generally deemed to have no place in the liberal democratic model, particularly where that religion is constructed to be at odds with the principles and values of liberal democracy, namely tolerance and adherence to the rule of law. Indeed, it is as if the national commitment to secularism rules as out-of-bounds any identity that is grounded in religion, giving precedence instead to accepting and negotiating cultural and ethnic differences. Religion becomes a taboo topic in these terms, an affront against secularism and the values of the Enlightenment that include liberty and equality. In these circumstances, it is not the case that all religions are equally ignored in a secular framework. What is the case is that the secular framework has been constructed as a way of ‘privatising’ one religion, Christianity; leaving others – including Islam – as having nowhere to go. Islam thus becomes constructed as less than equal since it appears that, unlike Christians, Muslims are not willing to play the secular game. In fact, Muslims are puzzling over how they can play the secular game, and why they should play the secular game, given that – as is the case with Christians – they see no contradiction in performing ‘good Muslim’ and ‘good Australian’, if given an equal chance to embrace both.

Acknowledgements

This paper is based on the findings of an Australian Research Council Discovery Project, 2005-7, involving 10 focus groups and 60 in-depth interviews.
The authors wish to acknowledge the participation and contributions of WA community members.

References

ABC. “A Jihad for Love.” Life Matters (Radio National), 21 Feb. 2008. 11 March 2008 < http://www.abc.net.au/rn/lifematters/stories/2008/2167874.htm >.
Aly, Anne. “Australian Muslim Responses to the Discourse on Terrorism in the Australian Popular Media.” Australian Journal of Social Issues 42.1 (2007): 27-40.
Aly, Anne, and Lelia Green. “‘Moderate Islam’: Defining the Good Citizen.” M/C Journal 10.6/11.1 (2008). 13 April 2008 < http://journal.media-culture.org.au/0804/08aly-green.php >.
Aly, Anne, and David Walker. “Veiled Threats: Recurrent Anxieties in Australia.” Journal of Muslim Minority Affairs 27.2 (2007): 203-14.
Brasted, Howard V. “Contested Representations in Historical Perspective: Images of Islam and the Australian Press 1950-2000.” Muslim Communities in Australia. Eds. Abdullah Saeed and Shahram Akbarzadeh. Sydney: University of New South Wales Press, 2001. 206-28.
Brown, Chris. “Narratives of Religion, Civilization and Modernity.” Worlds in Collision: Terror and the Future of Global Order. Eds. Ken Booth and Tim Dunne. New York: Palgrave Macmillan, 2002. 293-324.
Buckley, Anisa. “Should We Allow Sharia Law?” Sunday Herald Sun 10 Feb. 2008. 8 March 2008 < http://www.news.com.au/heraldsun/story/0,21985,231869735000117,00.html >.
Bush, George W. “President Outlines War Effort: Remarks by the President at the California Business Association Breakfast.” California Business Association, 2001. 17 April 2007 < http://www.whitehouse.gov/news/releases/2001/10/20011017-15.html >.
———. “Statement by the President in His Address to the Nation.” Washington, 2001. 17 April 2007 < http://www.whitehouse.gov/news/releases/2001/09/20010911-16.html >.
Charney, Evan. “Political Liberalism, Deliberative Democracy, and the Public Sphere.” The American Political Science Review 92.1 (1998): 97-111.
Costello, Peter. “Worth Promoting, Worth Defending: Australian Citizenship, What It Means and How to Nurture It.” Address to the Sydney Institute, 23 February 2006. 24 Apr. 2008 < http://www.treasurer.gov.au/DisplayDocs.aspx?doc=speeches/2006/004.htm&pageID=05&min=phc&Year=2006&DocType=1 >.
Dallmayr, Fred. “Rethinking Secularism.” The Review of Politics 61.4 (1999): 715-36.
Erjavec, Karmen, and Zala Volcic. “‘War on Terrorism’ as Discursive Battleground: Serbian Recontextualisation of G. W. Bush’s Discourse.” Discourse and Society 18 (2007): 123-37.
Green, Lelia. “Did the World Really Change on 9/11?” Australian Journal of Communication 29.2 (2002): 1-14.
Herald Sun. “Readers’ Comments: Should We Allow Sharia Law?” Herald Sun Online Feb. 2008. 8 March 2008 < http://www.news.com.au/heraldsun/comments/0,22023,23186973-5000117,00.html >.
Humphries, David. “Live Here, Be Australian.” The Sydney Morning Herald 25 Feb. 2006, 1 ed.
Hutcheson, John S., David Domke, Andre Billeaudeaux, and Philip Garland. “U.S. National Identity, Political Elites, and Patriotic Press Following September 11.” Political Communication 21.1 (2004): 27-50.
Kymlicka, Will. “Liberal Individualism and Liberal Neutrality.” Ethics 99.4 (1989): 883-905.
Modood, Tariq. “Establishment, Multiculturalism and British Citizenship.” The Political Quarterly (1994): 53-74.
Osuri, Goldie, and Subhabrata B. Banerjee. “White Diasporas: Media Representations of September 11 and the Unbearable Whiteness of Being in Australia.” Social Semiotics 14.2 (2004): 151-71.
Rawls, John. A Theory of Justice. Cambridge: Harvard UP, 1971.
Said, Edward. Orientalism. New York: Vintage Books, 1978.
Western Australian Charter of Multiculturalism. WA: Government of Western Australia, Nov. 2004. 11 March 2008 < http://www.equalopportunity.wa.gov.au/pdf/wa_charter_multiculturalism.pdf >.
Yousif, Ahmad. “Islam, Minorities and Religious Freedom: A Challenge to Modern Theory of Pluralism.” Journal of Muslim Minority Affairs 20.1 (2000): 30-43.
APA, Harvard, Vancouver, ISO, and other styles
49

Stevens, Carolyn Shannon. "Cute But Relaxed: Ten Years of Rilakkuma in Precarious Japan." M/C Journal 17, no. 2 (March 3, 2014). http://dx.doi.org/10.5204/mcj.783.

Full text
Abstract:
Introduction

Japan has long been cited as a major source of cute (kawaii) culture as it has spread around the world, as encapsulated in Christine R. Yano’s phrase ‘Pink Globalization’. This essay charts recent developments in Japanese society through the cute character Rilakkuma, a character produced by San-X (a competitor to Sanrio, which produces the famed Hello Kitty). His name means ‘relaxed bear’, and Rilakkuma and friends are featured in comics, games and other products, called kyarakutā shōhin (also kyarakutā guzzu, which both mean ‘character goods’). Rilakkuma is pictured relaxing, sleeping, eating sweets, and listening to music; he is not only lazy, but he is also unproductive in socio-economic terms. Yet, he is never censured for this lifestyle. He provides visual pleasure to those who buy these goods, but more importantly, Rilakkuma’s story charitably portrays a lifestyle that is fully consumptive with very little, if any, productivity. Rilakkuma’s reified consumption is certainly in line with many earlier analyses of shōjo (young girl) culture in Japan, where consumerism is considered ‘detached from the productive economy of heterosexual reproduction’ (Treat, 281) and valued as an end in itself. Young girl culture in Japan has been both critiqued and celebrated in opposition to the economic productivity as well as the emotional emptiness and weakening social prestige of the salaried man (Roberson and Suzuki, 9-10). In recent years, ideal masculinity has been further critiqued with the rise of the sōshokukei danshi (‘grass-eating men’) image: today’s Japanese male youth appear to have no appetite for the ‘meat’ associated with heteronormative, competitively capitalistic male roles (Steger 2013). That is not to say all gender roles have vanished; instead, social and economic precarity has created a space for young people to subvert them.
Whether by design or by accident, Rilakkuma has come to represent a Japanese consumer maintaining some standard of emotional equilibrium in the face of the instability that followed the Tōhoku earthquake, tsunami and nuclear disaster in early 2011.

A Relaxed Bear in a Precarious Japan

Certainly much has been written about the ‘lost decade(s)’ in Japan, or the unraveling of the Japanese postwar miracle since the early 1990s in a variety of unsettling ways. The burst of the ‘bubble economy’ in 1991 led to a period of low or no economic growth, uncertain employment conditions and deflation. Because of Japan’s relative wealth and mature economic system, this was seen as a gradual process that Mark Driscoll calls a shift from the ‘so-called Japan Inc. of the 1980s’ to ‘“Japan Shrink” of the 2010s and 2020s’ (165). The Japanese economy was further troubled by the Global Financial Crisis of 2008, and then the Tōhoku disasters. These events have contributed to Japan’s state of ambivalence, as viewed by both its citizens and by external observers. Despite its relative wealth, the nation continues to struggle with deflation (and its corresponding stagnation of wages), a deepening chasm between the two-tier employment system of permanent and casual work, and a deepening public mistrust of corporate and governing authorities. Some of this story is not ‘new’; dual employment practices have existed throughout Japan’s postwar history. What has changed, however, are the attitudes of casual workers; it is now thought to be much more difficult, if not impossible, to shift from low paid, insecure casual labour to permanent, secure positions. The overall unemployment rate remains low precisely because the number of temporary and part time workers has increased, as much as one third of all workers in 2012 (The Japan Times).
The Japanese government now concedes that ‘the balance of working conditions between regular and non-regular workers have therefore become important issues’ (Ministry of Health, Labour and Welfare); many see this not only as a distinction between ‘haves’ and ‘have-nots’, but also as a generational shift between those who achieved secure positions before the ‘lost decade’, and those who came after. Economic, political, environmental and social insecurity have given rise to a certain level of public malaise, not conducive to a robust consumer culture. Enter Rilakkuma: he, like many other cute characters in Japan, entices the consumer to feel good about spending – or perhaps, to feel okay about spending? – in this precarious time of underemployment and uncertainty about the future.

‘Cute’ Characters: Attracting as Well as Attractive

Cute (‘kawaii’) culture in Japan is not just aesthetic; it includes ‘a turn to emotion and even sentimentality, in some of the least likely places’ (Yano, 7). Cute kyarakutā are not just sentimentally attractive; they are more precisely attracting images which are used to sell these character goods: toys, household objects, clothing and stationery. Occhi writes that many kyarakutā are the result of an ‘anthropomorphization’ of objects or creatures which ‘guide the user towards specific [consumer] behaviors’ (78). While kyarakutā would be created first to sell a product, in the end, the character’s popularity at times can eclipse the product’s value, and the character thus becomes ‘pure product’, as in the case of Hello Kitty (Yano, 10). Most characters, however, merely function as ‘specific representatives of a product or service rendered mentally “sticky” through narratives, wordplay and other specialized aspects of their design’ (Occhi, 86).
Miller refers to this phenomenon as ‘Japan’s zoomorphic urge’, and argues that the frequent use of cute and cuddly animals in place of humans in etiquette guides and public service posters is done to ‘render […] potentially dangerous or sensitive topics as safe and acceptable’ (69). Cuteness instrumentally turns away from negative aspects of society, whether it is the demonstration of etiquette rules in public, or the portrayal of an underemployed or unemployed person watching TV at home, as in Rilakkuma. Thus we see a revitalization of the cute zeitgeist in Japanese consumerism in products such as the Rilakkuma franchise, produced by San-X, a company that produces and distributes ‘stationary [sic], sundry goods, merchandises [sic], and paper products with original design’ (San-X Net).

Who Is Rilakkuma?

According to the company’s ‘fan’ books, written in response to the popularity of Rilakkuma’s character goods (Nakazawa), the background story of Rilakkuma is as follows: one day, a smallish bear found its way unexplained into the apartment of a Japanese OL (office lady) named Kaoru. He spends his time ‘being of no use to Kaoru, and is actually a pest by lying around all day doing nothing… his main concerns are meals and snacks. He seems to hate the summer [heat].’ Other activities include watching television, listening to music, taking long baths, and tossing balls of paper into the rubbish bin (Nakazawa, 4). His comrades are Korilakkuma (loosely translated as ‘Little Rilakkuma’) and Kiiroitori (simply, ‘Yellow Bird’). Korilakkuma is a smaller and paler version of Rilakkuma; like her friend, she appears in Kaoru’s apartment for no reason. She is described as liking to pull pranks (itazuradaisuki) and is comparatively more energetic (genki) than Rilakkuma; her main activities are imitating Rilakkuma and looking for someone with whom to play (6).
Lastly, Kiiroitori is a small yellow bird resembling a chick, and seems to be the only character of the three who has any ‘right’ to reside in Kaoru’s apartment. Kiiroitori was a pet bird residing in a cage before the appearance of the two bears, but after Rilakkuma and Korilakkuma set themselves up in her small apartment, Kiiroitori was liberated from his cage and flies in the faces of lazy Rilakkuma and mischievous Korilakkuma (7). Kiiroitori likes tidiness and is frequently cleaning up after the lazy bears, and he can be short tempered about this (ibid). Kiiroitori’s interests include the charming but rather thrifty ‘finding spare change while cleaning up’ and ‘bear climbing’, which is enjoyed primarily for its annoyance to the bears (ibid).

Fig. 1: Korilakkuma, Rilakkuma and Kiiroitori, in 10-year anniversary attire (photo by author).

The narrative behind these character goods is yet another aspect of their commodification (in other words, their management, distribution and copyright protection). The information presented – the minute details of the characters’ existence, illustrated with cute drawings and calligraphy – enriches the consumer process by deepening the consumers’ interaction with the product. How does the story become as attractive as the cute character? One of the striking characteristics of the ‘official’ Rilakkuma discourse is the sense of ‘ikinari yattekita’ (things happening ‘out of the blue’; Nakazawa 22), or ‘naru yō ni narimasu’ (‘whatever will be will be’; 23) reasoning behind the narrative. Buyers want to know how and why these cute characters come into being, but there is no answer. To some extent, this vagueness reflects the reality of authorship: the characters were first conceptualized by a designer at San-X named Kondō Aki, who left the company soon after Rilakkuma’s debut in 2003 (Akibako). But this ‘out of the blue’ quality of the characters strikes a chord in many consumers’ view of their own lives: why are we here?
what are we doing, and why do we do it? The existence of these characters and the reasons for their traits and preferences are inexplicable. There is no reason why or how Rilakkuma came to be – instead, readers are told to just relax, ‘go with the flow’, and remember that ‘what can be done today can always be done tomorrow’. Procrastination would normally be considered meiwaku, or bothersome to others who depend on you. In Productive Japan, this behavior is not valued. In Precarious Japan, however, underemployment and nonproductivity take the pressure away from individuals to judge this behavior as negative. Procrastination shifts from meiwaku to normality, and is transformed into kawaii culture, accepted and even celebrated as such. Rilakkuma is not the first Japanese pop cultural character to rub up against the hyper-productive, gambaru (fight!) attitude associated with previous generations, with their associated tropes of the juken jigoku (exam preparation hell) for students, or the karōshi (death from overwork) salaried worker. An early example of this would be Chibi Marukochan (‘Little Maruko’), a comic character created in 1986 whose popularity peaked in the 1990s. Maruko is an endearing but flawed primary school student who is cute and amusing, but also annoying and short tempered (Sakura). Flawed characters had frequently been featured in Japanese popular culture, but Maruko was one of the first to be featured as the heroine, not as a jester-like sidekick. As an early example of a cute, subversive Japanese character, Maruko was often annoying and lazy, but she at least aspired to traits such as doing well in school and being a good daughter in her extended family. Rilakkuma, perhaps, demonstrates the extension of this cute but subversive hero/ine: when the stakes are lower (or at their lowest), so is the need for stress and anxiety. Taking it easy is the best option.
Rilakkuma’s ‘charm point’ (chāmu pointo, which describes one’s personal appeal) is his transgressive cuteness, and this has paid off for San-X over the years in successful sales of his comic books as well as a variety of products (see fig. 2).

Fig. 2: An example of some of the goods for sale in early 2014: a fleecy blanket, a 3d puzzle, note pads and stickers, decorative toggles for a school bag or purse, comic and ‘fan’ books, and a toy car (photo by the author).

Over the decade between 2003 and 2013, San-X produced 51 volumes of Rilakkuma comics (Tonozuka, 37–42) and over 20 different series of stuffed animals (43–45), plus cushions, tote bags, tableware, stationery, and variety goods such as toilet paper holders, umbrellas and contact lens cases (46–52). At the Rilakkuma themed shop in Tokyo Station in October 2013, a newly featured and popular product was the Rilakkuma ‘onesie’, a unisex and multipurpose outfit for adults. This diversity of products is created to meet the consumer desires of Rilakkuma’s significant following in Japan; in a small-scale study of Japanese university students, researchers found that Rilakkuma was the number one nominated ‘favorite character’ (Nosu and Tanaka, 535). Furthermore, students claimed that the attractiveness of favorite characters was judged not just on appearance, but also on specific characteristics: ‘characters that are always idle, relaxed, stress-free’ and those ‘that have unusual behavior or stray from the right path’ (ibid) were cited as especially attractive/attracting. These researchers found that young Japanese people – the demographic perhaps most troubled by an insecure economic future – are drawn to characters like Rilakkuma, ones ‘that have flaws in some ways and are not merely cute’ (536).

Where to, Rilakkuma?
Miller, in her discussion of Japanese animal characters in a variety of cute cultural settings, writes:

Non-human animals emerge as useful metaphors for humans, yet […] it is this aesthetic load rather than the lesson or the ideology behind the image that often becomes the center of our attention. […] However, I think it is useful to separate our analysis of zoomorphic images as vehicles for cuteness from their other possible uses and possible utility in many areas of culture (70).

Similarly, we need to look beyond cute, and see what Miller terms ‘the lesson’ behind the ‘aesthetic load’: here, how cuteness disguises social malaise and eases the shift from ‘Japan Inc.’ to ‘Japan Shrink’. When particular goods are ‘tied’ to other products, the message behind the ‘aesthetic load’ is complicated and deepened. Rilakkuma’s recent commercial (in)activity has been characterized by a variety of ‘tai uppu’ (tie ups), or promotional links between the Rilakkuma image and other similarly aligned products. Traditionally, tie ups in Japan have been most successful when formed between products associated with similar audiences and similar aesthetic preferences. We have seen tie ups, for example, between Hello Kitty and McDonald’s (targeting youthful fast food customers) since 1999 (Yano, 129). In ‘Japan Shrink’s’ competitive consumer market, tie ups are becoming more strategic, and all the more interesting. One of the troubled markets in Japan, as elsewhere, is the music industry. Shrinking disposable income coupled with a variety of downloading practices means the traditional popular music industry (primarily in the form of CDs) is in decline. In 2009, Rilakkuma began a co-badged campaign with Tower Records Japan – after all, listening to music is one of Rilakkuma’s listed favourite pastimes. TRJ was by then independent from its failed US counterpart, and a major figure in the music retail scene despite disappointing CD sales since the late 1990s (Stevens, 85).
To stir up consumer interest, TRJ offered objects such as small dolls, towels and shopping bags, festooned with Rilakkuma images and phrases such as ‘Rilakkuma loves Tower Records’ and ‘Relaxed Tour 2012’ (Tonozuka, 72–73). Rilakkuma, in a familiar pose lying back with his arms crossed behind his head, but surrounded by musical notes and the phrase ‘No Music, No Life’ (72), presents a compact image of the consumer zeitgeist of the day: one’s ikigai (reason for living) is clearly contingent on personal enjoyment, despite Japan’s music industry woes. Rilakkuma also enjoys a close relationship with the ubiquitous convenience store Lawson, which has over 11,000 individual stores throughout Japan and hundreds more overseas (Lawson, Corporate Information). Japanese konbini (the Japanese term for convenience stores), unlike their North American or Australian counterparts, enjoy a higher consumer image in terms of the quality and variety of their products, and thus symbolize a certain relaxed lifestyle, as per Merry I. White’s description of the ‘no hands housewife’ breezing through the evening meal preparations thanks to ready-made dishes purchased at konbini (72). Japanese convenience stores sell a variety of products, but sweets (Rilakkuma’s favourite) take up a large proportion of shelf space in many stores. The most recent ‘Rilakkuma x Lawson’ campaign was undertaken between September and November 2013. During this period, customers earned points to receive a free teacup; certainly Rilakkuma’s cuteness motivated consumers to visit the store to get the prize. All was not well with this tie up, however; complaints about cracked teacups resulted in an external investigation. Finding no causal relationship between construction and fault, Lawson nevertheless apologized and offered to exchange any of the approximately 1.73 million cups with an alternate prize for any consumers who so wished (Lawson, An Apology).
The alternate prize was still cute in its pink colouring and kawaii character pattern, but it was a larger and much sturdier commuter-type mug. Here we see that while Rilakkuma is relaxed, he is still aware of corporate Japan’s increasing sense of corporate accountability and public health. One last tie up demonstrates an unusual alliance between the Rilakkuma franchise and other cultural icons. 2013 marked the ten-year anniversary of Rilakkuma and friends, and this was marked by several prominent campaigns. In Kyoto, we saw Rilakkuma and friends adorning o-mamori (religious amulets) at the famed Kinkakuji (Golden Pavilion), a major temple in Kyoto (see fig. 3a). The ‘languid dream’ of the lazy bear is a double-edged symbol, contrasting with the disciplined practice of Buddhism while complying with a Zen-like dream state of the beauty of the grounds. Another ten-year anniversary campaign was the tie up between Rilakkuma and the 50-year anniversary of JR’s Yamanote Line, the ‘city loop’ in Tokyo.

Fig. 3a: Kiiroitori sits atop Rilakkuma with Korilakkuma by their side at the Golden Pavilion, Kyoto. The top caption reads: ‘Relaxed bear, Languid at the Golden Pavilion; Languid Dream Travelogue’.

Fig. 3b: A key chain made to celebrate Rilakkuma’s appointment to the JR Line; still lazy, Rilakkuma lies on his side but wears a conductor’s cap.

This tie up was certainly a coup, for the Yamanote Line is a significant part of 13 million Tokyo residents’ lives, as well as a visible fixture in the cultural landscape since the early postwar period. The Yamanote, with its distinctive light green colouring (uguisuiro, which translates literally to ‘nightingale [bird] colour’), has its own aesthetic: as one of the first modern train lines in the capital, it runs through all the major leisure districts, is featured in many popular songs, and even has its own drinking game.
This nostalgia for the past, coupled with the masculine, super-efficient former national railway system, is thus juxtaposed with the lazy, feminized teddy bear (Rilakkuma is male, but his domain is feminine), linking a longing for the past with gendered images of production and consumption in the present. In figure 3b, we see Rilakkuma riding the Yamanote on his own terms (lying on his side, propped up by one elbow – a pose we would never see a JR employee take in public). This cheeky cuteness increases the iconic train’s appeal to its everyday consumers, for despite its efficiency, this line is severely overcrowded during peak hours and suffers from user malaise with respect to etiquette and safety issues. Life in contemporary Japan is no longer the bright, shiny ‘bubble’ of the 1980s. Japan is wrestling with internal and external demons: the nuclear crisis, the lagging economy, deteriorating relations with China, and a generation of young people who have never experienced the optimism of their parents’ generation. Dreamlike, Japan’s denizens move through the contours of their daily lives much as they have in the past, for major social structures remain for the most part intact; instead, it is the vision of the future that has altered. In this environment, we can argue that kawaii aesthetics are all the more important, for if we are uncomfortable thinking about negative or depressing topics such as industries in decline, questionable consumer safety standards, and overcrowded trains, a cute bear can make it much more ‘bear’-able.

References

Driscoll, Mark. “Debt and Denunciation in Post-Bubble Japan: On the Two Freeters.” Cultural Critique 65 (2007): 164-187.

Kondō Aki - akibako. “Profile [of Designer Aki Kondō].” 6 Feb. 2014 ‹http://www.akibako.jp/profile/›.

Lawson. “Kigyō Jōhō: Kaisha Gaiyō [Corporate Information: Company Overview].” Feb. 2013. 10 Feb. 2014 ‹http://www.lawson.co.jp/company/corporate/about.html/›.

Lawson.
“Owabi to Oshirase: Rōson aki no rilakkuma fea keihin ‘rilakkuma tei magu’ hason no osore [An Apology and Announcement: Lawson’s Autumn Rilakkuma Fair Giveaway ‘Rilakkuma Tea Mug’ Concern for Damage].” 2 Dec. 2013. 10 Feb. 2014 ‹http://www.lawson.co.jp/emergency/detail/detail_84331.html›.

Miller, Laura. “Japan’s Zoomorphic Urge.” ASIANetwork Exchange XVII.2 (2010): 69-82.

Ministry of Health, Labour and Welfare. “Employment Security.” 10 Feb. 2014 ‹http://www.mhlw.go.jp/english/policy/employ-labour/employment-security/dl/employment_security_bureau.pdf›.

Nakazawa Kumiko, ed. Rirakkuma Daradara Fuan Bukku [Relaxed Bear Leisurely Fan Book]. Tokyo: Kabushikigaisha Shufutoseikatsu, 2008.

Nosu, Kiyoshi, and Mai Tanaka. “Factors That Contribute to Japanese University Students’ Evaluations of the Attractiveness of Characters.” IEEJ Transactions on Electrical and Electronic Engineering 8.5 (2013): 535–537.

Occhi, Debra J. “Consuming Kyara ‘Characters’: Anthropomorphization and Marketing in Contemporary Japan.” Comparative Culture 15 (2010): 78–87.

Roberson, James E., and Nobue Suzuki. “Introduction.” In J. Roberson and N. Suzuki, eds., Men and Masculinities in Contemporary Japan: Dislocating the Salaryman Doxa. London: RoutledgeCurzon, 2003. 1-19.

Sakura, Momoko. Chibi Marukochan 1 [Little Maruko, vol. 1]. Tokyo: Shūeisha, 1987 [1990].

San-X Net. “Company Info.” 10 Feb. 2014 ‹http://www.san-x.jp/COMPANY_INFO.html›.

Steger, Brigitte. “Negotiating Gendered Space on Japanese Commuter Trains.” ejcjs 13.3 (2013). 29 Apr. 2014 ‹http://www.japanesestudies.org.uk/ejcjs/vol13/iss3/steger.html›.

Stevens, Carolyn S. Japanese Popular Music: Culture, Authenticity and Power. London: Routledge, 2008.

The Japan Times. “Nonregulars at Record 35.2% of Workforce.” 22 Feb. 2012. 6 Feb. 2014 ‹http://www.japantimes.co.jp/news/2012/02/22/news/nonregulars-at-record-35-2-of-workforce/#.UvMb-kKSzeM›.

Tonozuka Ikuo, ed. Rirakkuma Tsuzuki Daradara Fan Book [Relaxed Bear Leisurely Fan Book, Continued].
Tokyo: Kabushikigaisha Shufutoseikatsu, 2013.

Treat, John Whittier. “Yoshimoto Banana’s Kitchen, or The Cultural Logic of Japanese Consumerism.” In L. Skov and B. Moeran, eds., Women, Media and Consumption in Japan. Surrey: Curzon, 1995. 274-298.

White, Merry I. “Ladies Who Lunch: Young Women and the Domestic Fallacy in Japan.” In K. Cwiertka and B. Walraven, eds., Asian Food: The Global and the Local. Honolulu: University of Hawai’i Press, 2001. 63-75.

Yano, Christine R. Pink Globalization: Hello Kitty’s Trek across the Pacific. Durham, NC: Duke University Press, 2013.
50

Jethani, Suneel, and Robbie Fordyce. "Darkness, Datafication, and Provenance as an Illuminating Methodology." M/C Journal 24, no. 2 (April 27, 2021). http://dx.doi.org/10.5204/mcj.2758.

Full text
Abstract:
Data are generated and employed for many ends, including governing societies, managing organisations, leveraging profit, and regulating places. In all these cases, data are key inputs into systems that paradoxically are implemented in the name of making societies more secure, safe, competitive, productive, efficient, transparent and accountable, yet do so through processes that monitor, discipline, repress, coerce, and exploit people. (Kitchin, 165)

Introduction

Provenance refers to the place of origin or earliest known history of a thing. It refers to the custodial history of objects. It is a term that is commonly used in the art world but has also come into the language of other disciplines such as computer science. It has also been applied in reference to the transactional nature of objects in supply chains and circular economies. In an interview with Scotland’s Institute for Public Policy Research, Adam Greenfield suggests that provenance has a role to play in the “establishment of reliability”, given that a “transaction or artifact has a specified provenance, then that assertion can be tested and verified to the satisfaction of all parties” (Lawrence). Recent debates on the unrecognised effects of digital media have convincingly argued that data is fully embroiled within capitalism, but it is necessary to remember that data is more than just a transactable commodity. One challenge in bringing processes of datafication into critical light is how we understand what happens to data from its point of acquisition to the point where it becomes instrumental in the production of outcomes that are of ethical concern. All data gather their meaning through relationality, whether acting as a representation of an exterior world or representing relations between other data points. Data objectifies relations, and despite any higher-order complexities, at its core, data is involved in factualising a relation into a binary.
Assumptions like these about data shape reasoning, decision-making and evidence-based practice in private, personal and economic contexts. If processes of datafication are to be better understood, then we need to seek out conceptual frameworks that are adequate to the way that data is used and understood by its users. Deborah Lupton suggests that often we give data “other vital capacities because they are about human life itself, have implications for human life opportunities and livelihoods, [and] can have recursive effects on human lives (shaping action and concepts of embodiment ... selfhood [and subjectivity]) and generate economic value”. But when data are afforded such capacities, the analysis of their politics also calls for us to “consider context” and to make “the labour [of datafication] visible” (D’Ignazio and Klein). For Jenny L. Davis, getting beyond simply thinking about what data affords involves bringing to light how data continually and dynamically requests, demands, encourages, discourages, and refuses certain operations and interpretations. It is in this re-orientation of the question from what to how that “practical analytical tool[s]” (Davis) can be found. Davis writes:

requests and demands are bids placed by technological objects, on user-subjects. Encourage, discourage and refuse are the ways technologies respond to bids user-subjects place upon them. Allow pertains equally to bids from technological objects and the object’s response to user-subjects.
(Davis)

Building on Lupton, Davis, and D’Ignazio and Klein, we see three principles that we consider crucial for work on data, darkness and light: data is not simply a technological object that exists within sociotechnical systems without having undergone any priming or processing, so as a consequence the data-collecting entity imposes standards and ways of imagining data before it comes into contact with user-subjects; data is not neutral and does not possess qualities that make it equivalent to the things that it comes to represent; data is partial, situated, and contingent on technical processes, but the outcomes of its use afford it properties beyond those that are purely informational. This article builds from these principles and traces a framework for investigating the complications arising when data moves from one context to another. We draw on “data provenance” as it is applied in the computing and informational sciences, where it is used to query the location and accuracy of data in databases. In developing “data provenance”, we adapt provenance from an approach that solely focuses on the technical infrastructures and material processes that move data from one place to another, and turn to the sociotechnical, institutional, and discursive forces that bring about data acquisition, sharing, interpretation, and re-use. As data passes through open, opaque, and darkened spaces within sociotechnical systems, we argue that provenance can shed light on gaps and overlaps in technical, legal, ethical, and ideological forms of data governance.
Whether data becomes exclusive by moving from light to dark (as has happened with the removal of many pages and links from Facebook around the Australian news revenue-sharing bill), or is publicised by shifting from dark to light (such as the Australian government releasing investigative journalist Andie Fox’s welfare history to the press), or even recontextualised from one dark space to another (as with genetic data shifting from medical to legal contexts, or the theft of personal financial data), there is still a process of transmission here that we can assess and critique through provenance. These different modalities, which guide data acquisition, sharing, interpretation, and re-use, cascade and influence different elements and apparatuses within data-driven sociotechnical systems to different extents depending on context. Attempts to illuminate and make sense of these complex forces, we argue, expose data-driven practices as inherently political in terms of whose interests they serve.

Provenance in Darkness and in Light

When processes of data capture, sharing, interpretation, and re-use are obscured, it impacts on the extent to which we might retrospectively examine cases where malpractice in responsible data custodianship and stewardship has occurred, because it makes it difficult to see how things have been rendered real and knowable, changed over time, had causality ascribed to them, and to what degree of confidence a decision has been made based on a given dataset. To borrow from this issue’s concerns, the paradigm of dark spaces covers a range of different kinds of valences on the idea of private, secret, or exclusive contexts. We can parallel it with the idea of ‘light’ spaces, which equally holds a range of different concepts about what is open, public, or accessible.
For instance, in the use of social data garnered from online platforms, the practices of academic researchers and analysts working in the private sector often fall within a grey zone when it comes to consent and transparency. Here the binary notion of public and private is complicated by the passage of data from light to dark (and back to light). Writing in a different context, Michael Warner complicates the notion of publicness. He observes that the idea of something being public is in and of itself always sectioned off, divorced from being fully generalisable, and it is “just whatever people in a given context think it is” (11). Michael Hardt and Antonio Negri argue that publicness is already shadowed by an idea of state ownership, leaving us in a situation where public and private already both sit on the same side of the propertied/commons divide as if the “only alternative to the private is the public, that is, what is managed and regulated by states and other governmental authorities” (vii). The same can be said about the way data is conceived as a public good or common asset. These ideas of light and dark are useful categorisations for deliberately moving past the tensions that arise when trying to qualify different subspecies of privacy and openness. The problem with specific linguistic dyads of private vs. public, or open vs. closed, and so on, is that they are embedded within legal, moral, technical, economic, or rhetorical distinctions that already involve normative judgements on whether such categories are appropriate or valid. Data may be located in a dark space for legal reasons that fall under the legal domain of ‘private’ or it may be dark because it has been stolen. It may simply be inaccessible, encrypted away behind a lost password on a forgotten external drive. 
Equally, there are distinctions around lightness that can be glossed – the openness of Open Data (see: theodi.org) is of an entirely separate category to the AACS encryption key, which was illegally but enthusiastically shared across the internet in 2007 to the point where it is now accessible on Wikipedia. The language of light and dark spaces allows us to cut across these distinctions and discuss in deliberately loose terms the degree to which something is accessed, with any normative judgments reserved for the cases themselves. Data provenance, in this sense, can be used as a methodology to critique the way that data is recontextualised from light to dark, from dark to light, and even within these distinctions. Data provenance critiques the way that data is presented as if it were “there for the taking”. It also suggests that when data is used for some or another secondary purpose – generally for value creation – some form of closure or darkening is to be expected. Data in the public domain is more than simply a specific informational thing: there is always context, and this contextual specificity, we argue, extends far beyond anything that can be captured in a metadata schema or a licensing model. Even the transfer of data from one open, public, or light context to another will evoke new degrees of openness and luminosity that should not be assumed to be straightforward. And with this a new set of relations between data-user-subjects and stewards emerges. Because people leave behind a growing amount of personal information as they make use of increasingly digitised services in their everyday lives, data moves constantly between public and private contexts; data-motile processes occur behind the scenes – in darkness – where data comes into the view, or possession, of third parties without obvious mechanisms of consent, disclosure, or justification.
Given that there are “many hands” (D’Ignazio and Klein) involved in making data portable between light and dark spaces, there can equally be diversity in the approaches taken to generate critical literacies of these relations. There are two complexities that we argue are important for considering the ethics of data motility from light to dark, and this differs from the concerns that we might have when we think about other illuminating tactics such as open data publishing, freedom-of-information requests, or when data is anonymously leaked in the public interest. The first is that the terms of ethics must be communicable to individuals and groups whose data literacy may be low, effectively non-existent, or not oriented around the objective of upholding or generating data-luminosity as an element of a wider, more general form of responsible data stewardship. Historically, a productive approach to data literacy has been finding appropriate metaphors from adjacent fields that can help add depth – by way of analogy – to understanding data motility. Here we return to our earlier assertion that data is more than simply a transactable commodity. Consider the notion of “giving” and “taking” in the context of darkness and light. The analogy of giving and taking is deeply embedded in the notion of data acquisition and sharing by virtue of the etymology of the word data itself: in Latin, “things having been given”; likewise, the French données suggests a natural gift, perhaps one that is given to those that attempt capture for the purposes of empiricism – representation in quantitative form is a quality that is given to phenomena being brought into the light. However, in the contemporary parlance of “analytics”, data is “taken” in the form of recording, measuring, and tracking. Data is considered to be something valuable enough to give or take because of its capacity to stand in for real things.
The empiricist’s preferred method is to take rather than to accept what is given (Kitchin, 2); the data-capitalist’s is to incentivise the act of giving or to take what is already given (or yet to be taken). Because data-motile processes are not simply passive forms of reading what is contained within a dataset, the materiality and subjectivity of data extraction and interpretation is something that should not be ignored. These processes represent the recontextualisation of data from one space to another and are expressed in the landmark case of Cambridge Analytica, where a private research company extracted data from Facebook and used it to engage in psychometric analysis of unknowing users.

Table 1: Mechanisms of Data Capture

Historical: Information created, recorded, or gathered about people or things directly from the source or a delegate but accessed for secondary purposes.

Observational: Represents patterns and realities of everyday life, collected by subjects by their own choice and with some degree of discretion over the methods. Third parties access this data through reciprocal arrangement with the subject (e.g., in exchange for providing a digital service such as online shopping, banking, healthcare, or social networking).

Purposeful: Data gathered with a specific purpose in mind and collected with the objective to manipulate its analysis to achieve certain ends.

Integrative: Places less emphasis on specific data types but rather looks towards social and cultural factors that afford access to and facilitate the integration and linkage of disparate datasets.

There are ethical challenges associated with data that has been sourced from pre-existing sets, or that has been extracted from websites and online platforms through scraping and then enriched through cleaning, annotation, de-identification, aggregation, or linking to other data sources (tab. 1).
As a way to address this challenge, our suggestion of “data provenance” can be defined as where a data point comes from, how it came into being, and how it became valuable for some or another purpose. In developing this idea, we borrow from both the computational and biological sciences (Buneman et al.), where provenance, as a form of qualitative inquiry into data-motile processes, centres around understanding the origin of a data point as part of a broader, almost forensic, analysis of quality and error-potential in datasets. Provenance is an evaluation of a priori computational inputs and outputs from the results of database queries and audits. Provenance can also be applied to other contexts where data passes through sociotechnical systems, such as behavioural analytics, targeted advertising, machine learning, and algorithmic decision-making. Conventionally, data provenance is based on understanding where data has come from and why it was collected. Both these questions are concerned with the evaluation of the nature of a data point within the wider context of a database that is itself situated within a larger sociotechnical system where the data is made available for use. In its conventional sense, provenance is a means of ensuring that a data point is maintained as a single source of truth (Buneman, 89); by way of a reproducible mechanism that records a data point’s path through a set of technical processes, it affords an assessment of how reliable a system’s output might be, by sheer virtue of one’s ability to retrace the steps from point A to point B. “Where” and “why” questions are illuminating because they offer an ends-and-means view of the relation between the origins and ultimate uses of a given data point or set.
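The computational sense of provenance described above can be made concrete with a short sketch. This is our own illustration rather than code from Buneman et al., and every name and context in it is hypothetical; it simply shows how each transformation of a data point can append a record of where the data came from, why it was handled, and what was done, so that the path from point A to point B remains retraceable:

```python
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class ProvenanceEntry:
    """One step in a data point's custodial history."""
    operation: str  # how: what was done to the data
    source: str     # where: the context the data came from or moved into
    purpose: str    # why: the stated reason for the step

@dataclass
class TrackedData:
    value: Any
    provenance: list = field(default_factory=list)

    def apply(self, fn: Callable, operation: str, source: str, purpose: str) -> "TrackedData":
        # Each transformation copies the existing history and appends a new
        # entry, so the path from point A to B can later be retraced and audited.
        out = TrackedData(fn(self.value), list(self.provenance))
        out.provenance.append(ProvenanceEntry(operation, source, purpose))
        return out

# Hypothetical example: a raw survey response moves between contexts.
raw = TrackedData("age: 34", [ProvenanceEntry("collect", "survey platform", "service delivery")])
clean = raw.apply(lambda v: v.split(": ")[1], "extract", "research database", "cohort analysis")
linked = clean.apply(int, "coerce", "analytics pipeline", "targeted advertising")

# Retracing the custodial history of the final data point.
for step in linked.provenance:
    print(step.operation, "|", step.source, "|", step.purpose)
```

Retracing the accumulated entries recovers the full custodial history of the data point, including the shift in stated purpose from ‘cohort analysis’ to ‘targeted advertising’: precisely the kind of recontextualisation from light to dark that a provenance record is meant to expose.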
Provenance is interesting when studying data luminosity because means and ends have much to tell us about the origins and uses of data, in ways that gesture towards a more accurate and structured research agenda for data ethics that takes the emphasis away from individual moral patients and reorients it towards practices that occur within information management environments. Provenance offers researchers seeking to study data-driven practices a heuristic similar to a journalist’s line of questioning: who, what, when, where, why, and how? This last question of how can be incorporated into conventional models of provenance to make them useful in data ethics. The question of how data comes into being extends questions of power, legality, literacy, permission-seeking, and harm in an entangled way, and notes how these factors shape the nature of personal data as it moves between contexts. Forms of provenance accumulate from transaction to transaction, cascading along as a dataset ‘picks up’ the types of provenance that have led to its creation. This may involve multiple forms of overlapping provenance – methodological and epistemological, legal and illegal – which modulate different elements and apparatuses. Provenance, we argue, is an important methodological consideration for workers in the humanities and social sciences. Provenance provides a set of shared questions on which models of transparency, accountability, and trust may be established. It points us towards tactics that might help data-subjects understand privacy in a contextual manner (Nissenbaum) and even establish practices of obfuscation and “informational self-defence” against regimes of datafication (Brunton and Nissenbaum). Here provenance is not just a declaration of what the means and ends of data capture, sharing, linkage, and analysis are. We sketch the outlines of a provenance model in table 2 below.

What?
Metaphorical frame: The epistemological structure of a database determines the accuracy of subsequent decisions. Data must be consistent.
Dark: What data is asked of a person beyond what is strictly needed for service delivery.
Light: Data that is collected for a specific stated purpose with informed consent from the data-subject. How does the decision about what to collect disrupt existing polities and communities? What demands for conformity does the database make of its subjects?

Where?
Metaphorical frame: The contents of a database are important for making informed decisions. Data must be represented.
Dark: The parameters of inclusion/exclusion that create unjust risks or costs to people because of their inclusion or exclusion in a dataset.
Light: The parameters of inclusion or exclusion that afford individuals representation or acknowledgement by being included or excluded from a dataset. How are populations recruited into a dataset? What divides exist that systematically exclude individuals?

Who?
Metaphorical frame: Who has access to data, and how privacy is framed, is important for the security of data-subjects. Data access is political.
Dark: Access to the data by parties not disclosed to the data-subject.
Light: Who has collected the data and who has or will access it? How is the data made available to those beyond the data subjects?

How?
Metaphorical frame: Data is created with a purpose and is never neutral. Data is instrumental.
Dark: How the data is used, to what ends, discursively, practically, instrumentally. Is it a private record, a source of value creation, the subject of extortion or blackmail?
Light: How the data was intended to be used at the time that it was collected.

Why?
Metaphorical frame: Data is created by people who are shaped by ideological factors. Data has potential.
Dark: The political rationality that shapes data governance with regard to technological innovation.
Light: The trade-offs that are made known to individuals when they contribute data into sociotechnical systems over which they have limited control.
Table 2: Forms of Data Provenance

Conclusion

As an illuminating methodology, provenance offers a specific line of questioning for practices that take information through darkness and light. The emphasis that it places on a narrative for data assets themselves (asking what, when, who, how, and why) offers a mechanism for traceability and has potential for application across contexts and cases. It allows us to see data malpractice as something that can be productively generalised and understood as a series of ideologically driven technical events with social and political consequences, without being marred by perceptions of the exceptionality of individual, localised cases of data harm or data violence.

References

Brunton, Finn, and Helen Nissenbaum. "Political and Ethical Perspectives on Data Obfuscation." Privacy, Due Process and the Computational Turn: The Philosophy of Law Meets the Philosophy of Technology. Eds. Mireille Hildebrandt and Katja de Vries. New York: Routledge, 2013. 171-195.

Buneman, Peter, Sanjeev Khanna, and Wang-Chiew Tan. "Data Provenance: Some Basic Issues." International Conference on Foundations of Software Technology and Theoretical Computer Science. Berlin: Springer, 2000.

Davis, Jenny L. How Artifacts Afford: The Power and Politics of Everyday Things. Cambridge: MIT Press, 2020.

D'Ignazio, Catherine, and Lauren F. Klein. Data Feminism. Cambridge: MIT Press, 2020.

Hardt, Michael, and Antonio Negri. Commonwealth. Cambridge: Harvard UP, 2009.

Kitchin, Rob. "Big Data, New Epistemologies and Paradigm Shifts." Big Data & Society 1.1 (2014).

Lawrence, Matthew. "Emerging Technology: An Interview with Adam Greenfield. 'God Forbid That Anyone Stopped to Ask What Harm This Might Do to Us'." Institute for Public Policy Research, 13 Oct. 2017. <https://www.ippr.org/juncture-item/emerging-technology-an-interview-with-adam-greenfield-god-forbid-that-anyone-stopped-to-ask-what-harm-this-might-do-us>.

Lupton, Deborah. "Vital Materialism and the Thing-Power of Lively Digital Data." Social Theory, Health and Education. Eds. Deana Leahy, Katie Fitzpatrick, and Jan Wright. London: Routledge, 2018.

Nissenbaum, Helen F. Privacy in Context: Technology, Policy, and the Integrity of Social Life. Stanford: Stanford Law Books, 2020.

Warner, Michael. "Publics and Counterpublics." Public Culture 14.1 (2002): 49-90.