Journal articles on the topic 'Internet Engineering Task Force Case studies'

To see the other types of publications on this topic, follow the link: Internet Engineering Task Force Case studies.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 45 journal articles for your research on the topic 'Internet Engineering Task Force Case studies.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Eklund, Peter, Jeff Thom, Tim Wray, and Edward Dou. "Location Based Context-Aware Services in a Digital Ecosystem with Location Privacy." Journal of Cases on Information Technology 13, no. 2 (April 2011): 49–68. http://dx.doi.org/10.4018/jcit.2011040104.

Full text
Abstract:
This case discusses the architecture and application of privacy and trust issues in the Connected Mobility Digital Ecosystem (CMDE) for the University of Wollongong’s main campus community. The authors describe four mobile location-sensitive, context-aware applications (app(s)) that are designed for iPhones: a public transport passenger information app; a route-based private vehicle car-pooling app; an on-campus location-based social networking app; and a virtual art-gallery tour guide app. These apps are location-based and designed to augment user interactions within their physical environments. In addition, location data provided by the apps can be used to create value-added services and optimize overall system performance. The authors characterize this socio-technical system as a digital ecosystem and explain its salient features. Using the University of Wollongong’s campus and surrounds as the ecosystem’s community for the case studies, the authors present the architectures of these four applications (apps) and address issues concerning privacy, location-identity and uniform standards developed by the Internet Engineering Task Force (IETF).
APA, Harvard, Vancouver, ISO, and other styles
2

Mitchel, Christopher, Baraq Ghaleb, Safwan M. Ghaleb, Zakwan Jaroucheh, and Bander Ali Saleh Al-rimy. "The Impact of Mobile DIS and Rank-Decreased Attacks in Internet of Things Networks." International Journal of Engineering and Advanced Technology 10, no. 2 (December 30, 2020): 66–72. http://dx.doi.org/10.35940/ijeat.b1962.1210220.

Full text
Abstract:
With a predicted 50 billion devices by the end of 2020, the Internet of Things has grown exponentially in the last few years. This growth has seen an increasing demand for mobility support in low-power and lossy sensor networks, a type of network characterized by several limitations in terms of resources, including CPU, memory, and battery, causing manufacturers to push products out to the market faster, without the necessary security features. IoT networks rely on the Routing Protocol for Low-Power and Lossy Networks (RPL) for communication, designed by the Internet Engineering Task Force (IETF). This protocol has been proven to be efficient in handling routing in such constrained networks. However, research studies revealed that RPL was inherently designed for static networks, indicating poor handling of mobile or dynamic topologies, which is worsened when a mobile attacker is introduced. In this paper, two IoT routing attacks are evaluated under a mobile attacker with the aim of providing a critical evaluation of the impact the attacks have on the network in comparison to the case with a static attacker. The first attack is the Rank attack, in which the attacker announces false routing information to its neighbours, attracting them to forward their data via the attacker. The second attack is the DIS attack, in which the attacker floods the network with DIS messages, triggering its neighbours to reset their transmission timers and send messages more frequently. The comparison was conducted in terms of average power consumption and packet delivery ratio (PDR). Based on the results collected from the simulations, it was established that when an attacking node is mobile, there is an average increase of 36.6 in power consumption and a decrease of 14 in packet delivery ratio when compared to a static attacking node.
APA, Harvard, Vancouver, ISO, and other styles
3

Granata, Samuele, Marco Di Benedetto, Cristina Terlizzi, Riccardo Leuzzi, Stefano Bifaretti, and Pericle Zanchetta. "Power Electronics Converters for the Internet of Energy: A Review." Energies 15, no. 7 (April 2, 2022): 2604. http://dx.doi.org/10.3390/en15072604.

Full text
Abstract:
This paper presents a comprehensive review of multi-port power electronics converters used for application in AC, DC, or hybrid distribution systems in an Internet of Energy scenario. In particular, multi-port solid-state transformer (SST) topologies have been addressed and classified according to their isolation capabilities and their conversion-stage configurations. Non-conventional configurations have been considered. A comparison of the most relevant features and design specifications of popular topologies has been provided through a comprehensive and effective table. Potential benefits of SSTs in distribution applications have been highlighted, also with reference to their use as network active nodes. This review also highlights the standards and technical regulations in force for connecting SSTs to the electrical distribution system. Finally, two case studies of multi-port topologies have been presented and discussed. The first one is an isolated multi-port bidirectional dual active bridge DC-DC converter useful in fast-charging applications. The second case study deals with a three-port AC-AC multi-level power converter in H-bridge configuration able to replicate a network active node and capable of routing and controlling energy under different operating conditions.
APA, Harvard, Vancouver, ISO, and other styles
4

ELotmani, Fouad, Redouane Esbai, and Mohamed Atounti. "The Reverse Engineering of a Web Application Struts Based in the ADM Approach." International Journal of Online and Biomedical Engineering (iJOE) 16, no. 02 (February 12, 2020): 112. http://dx.doi.org/10.3991/ijoe.v16i02.11213.

Full text
Abstract:
Since web technologies are constantly evolving, the adaptation of legacy web applications to new paradigms such as rich internet applications (RIAs) has become a necessity. In line with this trend, we notice that several web leaders have already migrated their web applications to RIAs. However, this migration faces many challenges due to the variety of frameworks. In order to facilitate the migration process, it would be ideal to use tools that could help automatically generate, or ease the generation of, UML (Unified Modeling Language) models from legacy web applications. In this context, novel technical frameworks used for information integration and migration processes, such as the Architecture-Driven Modernization Task Force (ADMTF), were fashioned to describe specifications and promote industry accord on the modernization of existing applications. In this paper, we propose a process for migrating an application from Struts to a UML model using ADM standards and MoDisco. We then present a case study as an example illustrating the different steps of the proposed process. We validated the proposition within the Eclipse Modelling Framework, since a number of its tools and run-time environments are indeed aligned with ADM standards.
APA, Harvard, Vancouver, ISO, and other styles
5

Iqbal, Razib, Shervin Shirmohammadi, and Rasha Atwah. "A Dynamic Approach to Estimate Receiving Bandwidth for WebRTC." International Journal of Multimedia Data Engineering and Management 7, no. 3 (July 2016): 17–33. http://dx.doi.org/10.4018/ijmdem.2016070102.

Full text
Abstract:
Web Real-Time Communication (WebRTC), drafted by the World Wide Web Consortium (W3C) and the Internet Engineering Task Force (IETF), enables direct browser-to-browser real-time communication. As its congestion control mechanism, WebRTC uses the Google Congestion Control (GCC) algorithm. But using GCC will limit WebRTC's performance in cases of overuse, due to its use of a fixed decrease factor, known as alpha (α). In this paper, the authors propose a dynamic alpha model to reduce the receiving bandwidth estimate during overuse, as indicated by the overuse detector. Using their proposed model, the receiver can more efficiently estimate its receiving rate in cases of overuse. They implemented their model over both unconstrained and constrained networks. Experimental results show noticeable improvements in terms of higher incoming rate, lower round-trip time, and lower packet loss compared to the fixed-alpha model.
APA, Harvard, Vancouver, ISO, and other styles
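The decrease step discussed in the abstract above can be made concrete with a small sketch. The Python snippet below is illustrative only: the 0.85 constant and the dynamic_alpha rule are assumptions for demonstration, not values or formulas taken from the paper or from the GCC specification; the point is simply to show where a fixed versus an adapted factor enters the rate cut on an overuse signal.

```python
# Minimal sketch (not the paper's actual model): how an overuse signal from the
# delay-based detector could shrink the receiver-side bandwidth estimate.
# The estimate is cut multiplicatively; the abstract contrasts a fixed factor
# alpha with one adapted to the measured incoming rate.

def decrease_estimate(incoming_rate_bps: float, alpha: float) -> float:
    """On an 'overuse' signal, scale the receive-rate estimate by alpha (< 1)."""
    return alpha * incoming_rate_bps

def dynamic_alpha(incoming_rate_bps: float, estimate_bps: float,
                  lo: float = 0.80, hi: float = 0.95) -> float:
    """Hypothetical adaptation: pick alpha from the gap between the measured
    incoming rate and the current estimate (illustrative only)."""
    ratio = min(incoming_rate_bps / max(estimate_bps, 1.0), 1.0)
    return lo + (hi - lo) * ratio

estimate = 2_000_000.0   # current receive-rate estimate, 2 Mbps
measured = 1_200_000.0   # measured incoming rate during overuse
fixed = decrease_estimate(measured, alpha=0.85)
adaptive = decrease_estimate(measured, alpha=dynamic_alpha(measured, estimate))
print(f"fixed-alpha estimate: {fixed/1e6:.2f} Mbps, dynamic-alpha: {adaptive/1e6:.2f} Mbps")
```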
6

Richard, Paul, Georges Birebent, Philippe Coiffet, Grigore Burdea, Daniel Gomez, and Noshir Langrana. "Effect of Frame Rate and Force Feedback on Virtual Object Manipulation." Presence: Teleoperators and Virtual Environments 5, no. 1 (January 1996): 95–108. http://dx.doi.org/10.1162/pres.1996.5.1.95.

Full text
Abstract:
Research on virtual environments (VE) produced significant advances in computer hardware (graphics boards and i/o tools) and software (real-time distributed simulations). However, fundamental questions remain about how user performance is affected by such factors as graphics refresh rate, resolution, control latencies, and multimodal feedback. This article reports on two experiments performed to examine dextrous manipulation of virtual objects. The first experiment studies the effect of graphics frame rate and viewing mode (monoscopic vs. stereoscopic) on the time required to grasp a moving target. The second experiment studies the effect of direct force feedback, pseudoforce feedback, and redundant force feedback on grasping force regulation. The trials were performed using a partially-immersive environment (graphics workstation and LCD glasses), a DataGlove, and the Rutgers Master with force feedback. Results of the first experiment indicate that stereoscopic viewing is beneficial for low refresh rates (it reduced task completion time by about 50% vs. monoscopic graphics). Results of the second experiment indicate that haptic feedback increases performance and reduces error rates, as compared to the open loop case (with no force feedback). The best performance was obtained when both direct haptic and redundant auditory feedback were provided to the user. The large number of subjects participating in these experiments (over 160 male and female) indicates good statistical significance for the above results.
APA, Harvard, Vancouver, ISO, and other styles
7

Kim, Jiyoon, Daniel Gerbi Duguma, Sangmin Lee, Bonam Kim, JaeDeok Lim, and Ilsun You. "Scrutinizing the Vulnerability of Ephemeral Diffie–Hellman over COSE (EDHOC) for IoT Environment Using Formal Approaches." Mobile Information Systems 2021 (September 13, 2021): 1–18. http://dx.doi.org/10.1155/2021/7314508.

Full text
Abstract:
Most existing conventional security mechanisms are insufficient, mainly attributable to their requirements for heavy processing capacity, large protocol message size, and longer round trips, for resource-constrained devices operating in an Internet of Things (IoT) context. These devices necessitate efficient communication and security protocols that are cognizant of the severe resource restrictions regarding energy, computation, communication, and storage. To realize this, the IETF (Internet Engineering Task Force) is currently working towards standardizing an ephemeral key-based lightweight and authenticated key exchange protocol called EDHOC (Ephemeral Diffie–Hellman over COSE). The protocol’s primary purpose is to build an OSCORE (Object Security for Constrained RESTful Environments) security environment by supplying crucial security properties such as secure key exchange, mutual authentication, perfect forward secrecy, and identity protection. EDHOC will most likely dominate IoT security once it becomes a standard. It is, therefore, imperative to inspect the protocol for any security flaw. In this regard, two previous studies have shown different security vulnerabilities of the protocol using formal security verification methods. Yet, both missed the vital security flaws we found in this paper: resource exhaustion and privacy attacks. In finding these vulnerabilities, we leveraged BAN-Logic and AVISPA to formally verify both EDHOC protocol variants. Consequently, we described these security flaws together with the results of the related studies and put forward recommended solutions as part of our future work.
APA, Harvard, Vancouver, ISO, and other styles
8

Zhang, Kuiyuan, Mingzhi Pang, Yuqing Yin, Shouwan Gao, and Pengpeng Chen. "ARS: Adaptive Robust Synchronization for Underground Coal Wireless Internet of Things." Sensors 20, no. 17 (September 2, 2020): 4981. http://dx.doi.org/10.3390/s20174981.

Full text
Abstract:
Clock synchronization is still a vital and challenging task for underground coal wireless internet of things (IoT) due to the uncertainty of underground environment and unreliability of communication links. Instead of considering on-demand driven clock synchronization, this paper proposes a novel Adaptive Robust Synchronization (ARS) scheme with packets loss for mine wireless environment. A clock synchronization framework that is based on Kalman filtering is first proposed, which can adaptively adjust the sampling period of each clock and reduce the communication overhead in single-hop networks. The proposed scheme also solves the problem of outliers in data packets with time-stamps. In addition, this paper extends the ARS algorithm to multi-hop networks. Additionally, the upper and lower bounds of error covariance expectation are analyzed in the case of incomplete measurement. Extensive simulations are conducted in order to evaluate the performance. In the simulation environment, the clock accuracy of ARS algorithm is improved by 7.85% when compared with previous studies for single-hop networks. For multi-hop networks, the proposed scheme improves the accuracy by 12.56%. The results show that the proposed algorithm has high scalability, robustness, and accuracy, and can quickly adapt to different clock accuracy requirements.
APA, Harvard, Vancouver, ISO, and other styles
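As a rough illustration of the kind of Kalman-filter clock tracking the abstract above refers to, here is a minimal Python sketch; the state model, the noise covariances, and the packet-loss handling are assumptions for demonstration and do not reproduce the ARS algorithm.

```python
import numpy as np

# Minimal sketch (not the ARS algorithm): a 2-state Kalman filter that tracks
# clock offset and skew from noisy offset measurements, and simply skips the
# update step when a synchronization packet (measurement) is lost.

dt = 1.0                                   # sampling period (s), fixed here for simplicity
F = np.array([[1.0, dt], [0.0, 1.0]])      # offset drifts with skew
H = np.array([[1.0, 0.0]])                 # only the offset is measured
Q = np.diag([1e-6, 1e-8])                  # process noise (assumed)
R = np.array([[1e-4]])                     # measurement noise (assumed)

x = np.array([0.0, 0.0])                   # state: [offset (s), skew (s/s)]
P = np.eye(2)

true_offset, true_skew = 0.05, 2e-4
rng = np.random.default_rng(0)
for k in range(50):
    true_offset += true_skew * dt
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # a lost packet means no measurement this round
    if rng.random() < 0.2:
        continue
    z = true_offset + rng.normal(0, 0.01)
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([z]) - H @ x)
    P = (np.eye(2) - K @ H) @ P

print(f"estimated offset {x[0]*1e3:.1f} ms, skew {x[1]:.2e}")
```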
9

Sobral, José V. V., Joel J. P. C. Rodrigues, Ricardo A. L. Rabêlo, Jalal Al-Muhtadi, and Valery Korotaev. "Routing Protocols for Low Power and Lossy Networks in Internet of Things Applications." Sensors 19, no. 9 (May 9, 2019): 2144. http://dx.doi.org/10.3390/s19092144.

Full text
Abstract:
The emergence of the Internet of Things (IoT) and its applications has taken the attention of several researchers. In an effort to provide interoperability and IPv6 support for IoT devices, the Internet Engineering Task Force (IETF) proposed the 6LoWPAN stack. However, the particularities and hardware limitations of networks associated with IoT devices lead to several challenges, mainly for routing protocols. In its stack proposal, the IETF standardizes RPL (IPv6 Routing Protocol for Low-Power and Lossy Networks) as the routing protocol for Low-power and Lossy Networks (LLNs). RPL is a tree-based proactive routing protocol that creates acyclic graphs among the nodes to allow data exchange. Although widely considered and used by current applications, different recent studies have shown its limitations and drawbacks. Among these, it is possible to highlight the weak support for mobility and P2P traffic, restrictions on multicast transmissions, and poor adaptation to dynamic throughput. Motivated by these issues, several new solutions have emerged during recent years. The approaches range from the consideration of different routing metrics to entirely new solutions inspired by other routing protocols. In this context, this work aims to present an extensive survey of routing solutions for IoT/LLNs, not limited to RPL enhancements. In the course of the paper, the routing requirements of LLNs, the initial protocols, and the most recent approaches are presented. The IoT routing enhancements are divided according to their main objectives and then studied individually to point out their most important strengths and weaknesses. Furthermore, as the main contribution, this study presents a comprehensive discussion of the considered approaches, identifying the remaining open issues and suggesting future directions to be recognized by new proposals.
APA, Harvard, Vancouver, ISO, and other styles
10

Alizadeh, Mojtaba, Mohammad Hesam Tadayon, Kouichi Sakurai, Hiroaki Anada, and Alireza Jolfaei. "A Secure Ticket-Based Authentication Mechanism for Proxy Mobile IPv6 Networks in Volunteer Computing." ACM Transactions on Internet Technology 21, no. 4 (July 22, 2021): 1–16. http://dx.doi.org/10.1145/3407189.

Full text
Abstract:
Technology advances—such as improving processing power, battery life, and communication functionalities—contribute to making mobile devices an attractive research area. In 2008, in order to manage mobility, the Internet Engineering Task Force (IETF) developed Proxy Mobile IPv6, which is a network-based mobility management protocol to support seamless connectivity of mobile devices. This protocol can play a key role in volunteer computing paradigms as a user can seamlessly access computing resources. The procedure of user authentication is not defined in this standard; thus, many studies have been carried out to propose suitable authentication schemes. However, in the current authentication methods, with reduced latency and packet loss, some security and privacy considerations are neglected. In this study, we propose a secure and anonymous ticket-based authentication (SATA) method to protect mobile nodes against existing security and privacy issues. The proposed method reduces the overhead of handover authentication procedures using the ticket-based concept. We evaluated security and privacy strengths of the proposed method using security theorems and BAN logic.
APA, Harvard, Vancouver, ISO, and other styles
11

Alanezi, Mohammed A., Houssem R. E. H. Bouchekara, and Muhammad S. Javaid. "Optimizing Router Placement of Indoor Wireless Sensor Networks in Smart Buildings for IoT Applications." Sensors 20, no. 21 (October 30, 2020): 6212. http://dx.doi.org/10.3390/s20216212.

Full text
Abstract:
The Internet of Things (IoT) is characterized by a system of interconnected devices capable of communicating with each other to carry out specific useful tasks. The connection between these devices is ensured by routers distributed in a network. Optimizing the placement of these routers in a distributed wireless sensor network (WSN) in a smart building is a tedious task. Computer-Aided Design (CAD) programs and software can simplify this task since they provide a robust and efficient tool. At the same time, experienced engineers from different backgrounds must play a prominent role in the abovementioned task. Therefore, specialized companies rely on both: a useful CAD tool and the experience and flair of a sound expert/engineer to optimally place routers in a WSN. This paper aims to develop a new approach based on the interaction between an efficient CAD tool and an experienced engineer for the optimal placement of routers in smart buildings for IoT applications. The approach follows a step-by-step procedure to weave an optimal network infrastructure, having both automatic and designer-intervention modes. Several case studies have been investigated, and the obtained results show that the developed approach produces a synthesized network with full coverage and a reduced number of routers.
APA, Harvard, Vancouver, ISO, and other styles
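The underlying placement problem can be illustrated with a very small coverage heuristic. The sketch below is not the paper's CAD-and-engineer workflow, only a hedged greedy baseline on a toy grid; the coordinates and the coverage radius are chosen arbitrarily.

```python
# Illustrative only (not the paper's method): a greedy coverage heuristic that
# keeps adding the candidate router position covering the most still-uncovered
# sensor locations until everything reachable is covered.

from math import dist

def greedy_placement(sensors, candidates, radius):
    uncovered = set(range(len(sensors)))
    chosen = []
    while uncovered:
        best, best_gain = None, set()
        for c in candidates:
            gain = {i for i in uncovered if dist(sensors[i], c) <= radius}
            if len(gain) > len(best_gain):
                best, best_gain = c, gain
        if best is None:          # remaining sensors are unreachable
            break
        chosen.append(best)
        uncovered -= best_gain
    return chosen

sensors = [(1, 1), (2, 8), (8, 2), (9, 9), (5, 5)]
candidates = [(x, y) for x in range(0, 11, 2) for y in range(0, 11, 2)]
print(greedy_placement(sensors, candidates, radius=4.0))
```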
12

She, Yu, Siyang Song, Hai-jun Su, and Junmin Wang. "A Parametric Study of Compliant Link Design for Safe Physical Human–Robot Interaction." Robotica 39, no. 10 (February 3, 2021): 1739–59. http://dx.doi.org/10.1017/s0263574720001472.

Full text
Abstract:
SUMMARY: Next-generation robots physically interact with the world rather than being caged in a controlled area, and they need to make contact with the open-ended environment to perform their tasks. Compliant robot links offer intrinsic mechanical compliance for addressing the safety issue in physical human–robot interaction (pHRI). However, many important research questions are yet to be answered. For instance, how do system parameters (e.g., mechanical compliance, motor torque, and impact velocity) affect the impact force? How should the impact dynamics of compliant robots be formulated, and how should their geometric dimensions be sized to maximize impact force reduction? In this paper, we present a parametric study of compliant link (CL) design for safe pHRI. We first present a theoretical model of the pHRI system that comprises robot dynamics, an impact contact model, and dummy head dynamics. After experimentally validating the theoretical model, we then systematically study the effects of CL parameters on the impact force in more detail. Specifically, we explore how the design and actuation parameters affect the impact force of the pHRI system. Based on the parametric studies of the CL design, we propose a step-by-step process and a list of concrete guidelines for designing CLs with safety constraints in pHRI. We further conduct a simulation case study to validate this design process and the design guidelines.
APA, Harvard, Vancouver, ISO, and other styles
13

Rahmani, Amir Masoud, Rizwan Ali Naqvi, Saqib Ali, Seyedeh Yasaman Hosseini Mirmahaleh, Mohammed Alswaitti, Mehdi Hosseinzadeh, and Kamran Siddique. "An Astrocyte-Flow Mapping on a Mesh-Based Communication Infrastructure to Defective Neurons Phagocytosis." Mathematics 9, no. 23 (November 24, 2021): 3012. http://dx.doi.org/10.3390/math9233012.

Full text
Abstract:
In deploying Internet of Things (IoT) and Internet of Medical Things (IoMT)-based applications and infrastructures, researchers have faced many sensors and output values that are transferred between service requesters and servers. Some case studies addressed different methods and technologies, including machine learning algorithms, deep learning accelerators, Processing-In-Memory (PIM), and neuromorphic computing (NC) approaches, to support the data-processing complexity and communication between IoMT nodes. Inspired by the structure of the human brain, some researchers tackled the challenges of rising IoT- and IoMT-based applications and the simulation of neural structures. A defective device has destructive effects on the performance and cost of the applications, and its detection is challenging in a communication infrastructure with many devices. Inspired by astrocyte cells, we map the flow (AFM) of the Internet of Medical Things onto mesh-network processing elements (PEs) and detect defective devices based on a phagocytosis model. This study focuses on an astrocyte's cholesterol distribution into neurons and presents an algorithm that utilizes this pattern to distribute IoMT dataflow and detect defective devices. We researched Alzheimer's symptoms to understand astrocyte and phagocytosis functions against the disease and employ the COVID-19 vaccination dataset to define a set of task graphs. The study improves total runtime and energy by approximately 60.85% and 52.38%, respectively, after implementing AFM, compared with before astrocyte-flow mapping, which helps IoMT infrastructure developers to provide healthcare services to requesters with minimal cost and high accuracy.
APA, Harvard, Vancouver, ISO, and other styles
14

Bimatov, Vladimir I., Vladimir Yu Kudentsov, and Valery I. Trushlyakov. "Technique for the experimental determination of the force coefficient of the frontal co-resistance of unstable in flight bodies." Vestnik Tomskogo gosudarstvennogo universiteta. Matematika i mekhanika, no. 75 (2022): 67–72. http://dx.doi.org/10.17223/19988621/75/6.

Full text
Abstract:
One of the most important problems in the design of modern aircraft is the study of the force effects of high-energy flows on the elements of their structures and on the aircraft as a whole. Aeroballistic installations are widely used as a research tool. Determination of the drag force coefficient is the main task of experimental ballistics from which aerodynamic studies begin. The studies are designed to determine the drag coefficient of missiles having different aerodynamic shapes, which can be used in rocket science, artillery, and other areas of technology involved in the study of the movement of bodies in gaseous and liquid media. A feature of the aeroballistic method for determining the coefficient of drag force is that, in order to obtain values of CXO of a given accuracy, experiments can be carried out only with bodies that are stable during the whole time of movement in the studied section of the trajectory. The research is aimed at solving the problem of calculating the drag coefficient of bodies using trajectory data on their coaxial movement. During the ballistic test, the body is sequentially photographed relative to a fixed coordinate system, the coordinates of its characteristic points and the time between the moments of photographing are recorded, and the displacements of the characteristic points of the body relative to the moving coordinate system associated with the base body, which performs a rectilinear motion with a zero angle of attack, are simultaneously measured. In this case, the base body is axially and movably connected to the body under study. It is shown that the applied group motion effect makes it possible, within the framework of the accepted assumptions, to increase the accuracy and to simplify the determination of the drag force coefficient of bodies of a complex geometric shape.
APA, Harvard, Vancouver, ISO, and other styles
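For orientation, the standard aeroballistic relation between the deceleration recovered from distance-time data and the drag coefficient is reproduced below; this is the textbook form, not necessarily the exact formulation used by the authors.

```latex
% Textbook relation (illustrative; not necessarily the authors' exact formulation).
\[
  F_D = \tfrac{1}{2}\,\rho\, v^{2} S\, C_x = -\,m\,\frac{dv}{dt}
  \qquad\Longrightarrow\qquad
  C_x = \frac{2m}{\rho\, v^{2} S}\,\left|\frac{dv}{dt}\right|
\]
```

Here m is the body mass, S its reference area, ρ the air density, and v the instantaneous speed reconstructed from the photographed trajectory.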
15

Ryll, Markus, Giuseppe Muscio, Francesco Pierri, Elisabetta Cataldi, Gianluca Antonelli, Fabrizio Caccavale, Davide Bicego, and Antonio Franchi. "6D interaction control with aerial robots: The flying end-effector paradigm." International Journal of Robotics Research 38, no. 9 (June 26, 2019): 1045–62. http://dx.doi.org/10.1177/0278364919856694.

Full text
Abstract:
This paper presents a novel paradigm for physical interactive tasks in aerial robotics allowing reliability to be increased and weight and costs to be reduced compared with state-of-the-art approaches. By exploiting its tilted propeller actuation, the robot is able to control the full 6D pose (position and orientation independently) and to exert a full-wrench (force and torque independently) with a rigidly attached end-effector. Interaction is achieved by means of an admittance control scheme in which an outer loop control governs the desired admittance behavior (i.e., interaction compliance/stiffness, damping, and mass) and an inner loop based on inverse dynamics ensures full 6D pose tracking. The interaction forces are estimated by an inertial measurement unit (IMU)-enhanced momentum-based observer. An extensive experimental campaign is performed and four case studies are reported: a hard touch and slide on a wooden surface, called the sliding surface task; a tilted peg-in-hole task, i.e., the insertion of the end-effector in a tilted funnel; an admittance shaping experiment in which it is shown how the stiffness, damping, and apparent mass can be modulated at will; and, finally, the fourth experiment is to show the effectiveness of the approach also in the presence of time-varying interaction forces.
APA, Harvard, Vancouver, ISO, and other styles
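The admittance behaviour mentioned in the abstract above (compliance/stiffness, damping, and apparent mass shaped by the outer loop) is conventionally written in the generic form below; it is given for orientation only and may differ in detail from the paper's formulation.

```latex
% Generic admittance law (illustrative): the estimated external wrench drives a
% virtual mass-damper-spring that perturbs the reference pose x_r into the
% commanded pose x_d tracked by the inner loop.
\[
  M_a\,(\ddot{x}_d - \ddot{x}_r) + D_a\,(\dot{x}_d - \dot{x}_r) + K_a\,(x_d - x_r)
  = \hat{f}_{\mathrm{ext}}
\]
```

Here x_r is the reference pose, x_d the commanded pose tracked by the inner inverse-dynamics loop, f̂_ext the wrench estimate from the momentum-based observer, and M_a, D_a, K_a the apparent mass, damping, and stiffness shaped by the outer loop.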
16

Kamola, Mariusz. "Hybrid Approach to Design Optimisation: Preserve Accuracy, Reduce Dimensionality." International Journal of Applied Mathematics and Computer Science 17, no. 1 (March 1, 2007): 53–71. http://dx.doi.org/10.2478/v10006-007-0006-3.

Full text
Abstract:
The paper proposes a design procedure for the creation of a robust and effective hybrid algorithm, tailored to and capable of carrying out a given design optimisation task. In the course of algorithm creation, a small set of simple optimisation methods is chosen, out of which those performing best will constitute the hybrid algorithm. The simplicity of the method allows implementing ad-hoc modifications if unexpected adverse features of the optimisation problem are found. It is postulated to model a system that is smaller but conceptually equivalent, whose model is much simpler than the original one and can be used freely during algorithm construction. Successful operation of the proposed approach is presented in two case studies (power plant set-point optimisation and waveguide bend shape optimisation). The proposed methodology is intended to be used by those not having much knowledge of the system or modelling technology, but having the basic practice in optimisation. It is designed as a compromise between brute force optimisation and design optimisation preceded by a refined study of the underlying problem. Special attention is paid to cases where simulation failures (regardless of their nature) form big obstacles in the course of the optimisation process.
APA, Harvard, Vancouver, ISO, and other styles
17

Yang, Wenchao, Wenfeng Li, Yulian Cao, Yun Luo, and Lijun He. "An Information Theory Inspired Real-Time Self-Adaptive Scheduling for Production-Logistics Resources: Framework, Principle, and Implementation." Sensors 20, no. 24 (December 8, 2020): 7007. http://dx.doi.org/10.3390/s20247007.

Full text
Abstract:
The development of industrial-enabling technology, such as the industrial Internet of Things and physical network system, makes it possible to use real-time information in production-logistics scheduling. Real-time information in an intelligent factory is random, such as the arrival of customers’ jobs, and fuzzy, such as the processing time of Production-Logistics Resources. Besides, the coordination of production and logistic resources in a flexible workshop is also a hot issue. The availability of this information will enhance the quality of making scheduling decisions. However, when and how to use this information to realize the adaptive collaboration of Production-Logistics Resources are vital issues. Therefore, this paper studies the above problems by establishing a real-time reaction scheduling framework of Production-Logistics Resources dynamic cooperation. Firstly, a real-time task triggering strategy to maximize information utilization is proposed to explore when to use real-time information. Secondly, a collaborative method for Production-Logistics Resources is studied to explore how to use real-time information. Thirdly, a real-time self-adaptive scheduling algorithm based on information entropy is utilized to obtain a stable and feasible solution. Finally, the effectiveness and advancement of the proposed method are verified by a practical case.
APA, Harvard, Vancouver, ISO, and other styles
18

R, Sujatha, Aarthy SL, Jyotir Moy Chatterjee, A. Alaboudi, and NZ Jhanjhi. "A Machine Learning Way to Classify Autism Spectrum Disorder." International Journal of Emerging Technologies in Learning (iJET) 16, no. 06 (March 30, 2021): 182. http://dx.doi.org/10.3991/ijet.v16i06.19559.

Full text
Abstract:
In recent times Autism Spectrum Disorder (ASD) is picking up its force quicker than at any other time. Distinguishing autism characteristics through screening tests is over the top expensive and tedious. Screening of the same is a challenging task, and classification must be conducted with great care. Machine Learning (ML) can perform great in the classification of this problem. Most researchers have utilized the ML strategy to characterize patients and typical controls, among which support vector machines (SVM) are broadly utilized. Even though several studies have been done utilizing various methods, these investigations didn't give any complete decision about anticipating autism qualities regarding distinctive age groups. Accordingly, this paper plans to locate the best technique for ASD classification out of SVM, K-nearest neighbor (KNN), Random Forest (RF), Naïve Bayes (NB), Stochastic gradient descent (SGD), Adaptive boosting (AdaBoost), and CN2 Rule Induction using 4 ASD datasets taken from UCI ML repository. The classification accuracy (CA) we acquired after experimentation is as follows: in the case of the adult dataset SGD gives 99.7%, in the adolescent dataset RF gives 97.2%, in the child dataset SGD gives 99.6%, in the toddler dataset AdaBoost gives 99.8%. Autism spectrum quotients (AQs) varied among several scenarios for toddlers, adults, adolescents, and children that include positive predictive value for the scaling purpose. AQ questions referred to topics about attention to detail, attention switching, communication, imagination, and social skills.
APA, Harvard, Vancouver, ISO, and other styles
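The comparison described in the abstract above can be sketched in a few lines of scikit-learn. The snippet uses a synthetic placeholder dataset because the UCI ASD screening data is not bundled here, and it omits CN2 Rule Induction, which is an Orange component rather than a scikit-learn one; it is meant only to show the shape of such a benchmark, not to reproduce the paper's results.

```python
# Minimal sketch of a classifier comparison on a synthetic placeholder dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import SGDClassifier

X, y = make_classification(n_samples=700, n_features=20, random_state=0)

models = {
    "SVM": SVC(),
    "KNN": KNeighborsClassifier(),
    "RF": RandomForestClassifier(random_state=0),
    "NB": GaussianNB(),
    "SGD": SGDClassifier(random_state=0),
    "AdaBoost": AdaBoostClassifier(random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name:8s} CA = {acc:.3f}")
```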
19

Nigro, Michelangelo, Francesco Pierri, and Fabrizio Caccavale. "Control of an Omnidirectional UAV for Transportation and Manipulation Tasks." Applied Sciences 11, no. 22 (November 19, 2021): 10991. http://dx.doi.org/10.3390/app112210991.

Full text
Abstract:
This paper presents a motion control scheme for a new concept of omnidirectional aerial vehicle for transportation and manipulation tasks. The considered aerial platform is a novel quadrotor with the capability of providing multi-directional thrust by adding an actuated gimbal mechanism in charge of modifying the orientation of the frame on which the four rotors are mounted. The above mechanical design, differently from other omnidirectional unmanned aerial vehicles (UAVs) with tilted propellers, avoids internal forces and energy dissipation due to non-parallel propellers’ axes. The proposed motion controller is based on a hierarchical two-loop scheme. The external loop computes the force to be applied to the vehicle and the reference values for the additional joints, while the inner loop computes the joint torques and the moment to be applied to the multirotor. In order to make the system robust with respect to the external loads, a compensation of contact forces is introduced by exploiting the estimate provided by a momentum based observer. The stability of the motion control scheme is proven via Lyapunov arguments. Finally, two simulation case studies prove the capability of the omnidirectional UAV platform to track a 6-DoFs trajectory both in free motion and during a task involving grasping and transportation of an unknown object.
APA, Harvard, Vancouver, ISO, and other styles
20

Suponev, Vladimir, Vitaliy Ragulin, Serhii Kovalevskyi, Oleksandr Orel, Svyatoslav Kravets, and Anatolii Nechydiuk. "The influence of the kinematics of the working equipment of earthmoving machines on the process of deep vibratory cutting of bound soils." Bulletin of Kharkov National Automobile and Highway University, no. 99 (December 29, 2022): 84. http://dx.doi.org/10.30977/bul.2219-5548.2022.99.0.84.

Full text
Abstract:
Problem. The article presents the results of studies of the impact of kinematics of earthmoving equipment on the process of vibratory soil cutting during trenchless laying of linear underground communications and drainage systems. From the conducted review, it was established that the basis of the technology is the formation of a narrow gap in the soil with the help of a knife working body. This process requires significant energy costs and traction efforts. The task of reducing them is important and urgent, both from a scientific point of view and from a practical position. Goal. The purpose of the study is to develop scientifically based recommendations on reducing the resistance of deep cutting of the soil by the knife working body due to its vibrational and oscillatory movements. Methodology. Theoretical studies are based on ideas about the theory of soil mechanics, the theory of mechanisms and machines, and the influence of vibration on work processes. Results. Based on the established value of the path and speed of movement of the knife working body at any moment of time from its oscillation frequency, amplitude at fixed values of the movement speed, the direction of oscillations is determined depending on the ratio of translational and vibrational speeds, for which two typical cases were considered. In the first case, the movement of the knife working body will always be translational and the process will resemble the "pressing" of a stamp with variable effort in a certain period of time. During the second movement of the equipment, in some time segments, the values of the movement of the edge may become the reverse, in the direction of the movement of the machine. That is, the cutting process in this case is similar to "chopping" with a frequency of blows equal to the frequency of the knife's oscillations. Analytical research on the known indicators of the soil cutting process efficiency found that a decrease in the translational speed simultaneously reduces the static cutting power, which can affect the total energy consumption of the process as a whole, taking into account the power consumption of the vibration mechanism drive. Practical meaning. The analysis of the influence of the kinematics of the working equipment on the processes of deep cutting of soil made it possible to obtain the conditions for the effective use of the vibration drive, which is aimed at reducing the traction force and the energy consumption of the process.
APA, Harvard, Vancouver, ISO, and other styles
21

Mishra, Bhupesh Kumar, Keshav Dahal, and Zeeshan Pervez. "Dynamic Relief Items Distribution Model with Sliding Time Window in the Post-Disaster Environment." Applied Sciences 12, no. 16 (August 21, 2022): 8358. http://dx.doi.org/10.3390/app12168358.

Full text
Abstract:
In smart cities, relief items distribution is a complex task due to factors such as incomplete information, unpredictable exact demand, lack of resources, and casualty levels, to name a few. With the development of Internet of Things (IoT) technologies, dynamic data updates provide the scope for the distribution schedule to adopt changes with updates. Therefore, a dynamic relief items distribution schedule becomes necessary to generate humanitarian supply chain schedules as a smart city application. To address the disaster data updates in different time periods, a dynamic optimised model with a sliding time window is proposed that defines the distribution schedule of relief items from multiple supply points to different disaster regions. The proposed model not only considers the details of available resources dynamically but also introduces disaster region priority along with transportation route information updates for each scheduling time slot. Such an integrated optimised model delivers an effective distribution schedule to start with and updates it for each time slot. A set of numerical case studies is formulated to evaluate the performance of the optimised scheduling. The dynamic updates on relief item demands, travel paths, casualty levels, and available resources have been included as performance measures for optimising the distribution schedule. The models have been evaluated based on performance measures to reflect disaster scenarios. Evaluation of the proposed models in comparison to the other perspective static and dynamic relief items distribution models shows that adopting dynamic updates in the distribution model covers most of the major aspects of the relief items distribution task in a more realistic way for post-disaster relief management. The analysis has also shown that the proposed model has the adaptability to address the changing demand and resource availability along with disaster conditions. In addition, this model will also help the decision-makers to plan the post-disaster relief operations in more effective ways by covering the updates on disaster data in each time period.
APA, Harvard, Vancouver, ISO, and other styles
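At its core, a single time slot of such a distribution schedule resembles a transportation problem. The toy linear program below (SciPy) is only a hedged illustration with invented numbers; the paper's model additionally handles sliding time windows, region priorities, and dynamic data updates.

```python
# Toy transportation LP (illustrative only; the paper's dynamic model is far richer):
# ship relief items from 2 supply points to 3 disaster regions at minimum cost.
import numpy as np
from scipy.optimize import linprog

supply = [40, 60]            # relief items available at each supply point
demand = [30, 25, 45]        # items needed by each disaster region
cost = np.array([[4, 6, 9],  # travel cost from each supply point to each region
                 [5, 3, 7]])

c = cost.flatten()
# each region's demand must be met exactly
A_eq = np.zeros((3, 6))
for j in range(3):
    A_eq[j, [j, 3 + j]] = 1
b_eq = demand
# each supply point cannot ship more than it holds
A_ub = np.zeros((2, 6))
A_ub[0, :3] = 1
A_ub[1, 3:] = 1
b_ub = supply

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print(res.x.reshape(2, 3), res.fun)
```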
22

Jaramillo, Manuel, Diego Carrión, and Jorge Muñoz. "A Deep Neural Network as a Strategy for Optimal Sizing and Location of Reactive Compensation Considering Power Consumption Uncertainties." Energies 15, no. 24 (December 10, 2022): 9367. http://dx.doi.org/10.3390/en15249367.

Full text
Abstract:
This research proposes a methodology for the optimal location and sizing of reactive compensation in an electrical transmission system through a deep neural network (DNN), considering the smallest cost for compensation. An electrical power system (EPS) is subjected to unexpected increases in loads, which are physically translated as an increment of users in the EPS. This phenomenon decreases voltage profiles in the whole system, which also decreases the EPS's reliability. One strategy to face this problem is reactive compensation; however, finding the optimal location and sizing of this compensation is not an easy task. Different algorithms and techniques such as genetic algorithms and non-linear programming have been used to find an optimal solution for this problem; however, these techniques generally need big processing power and the processing time is usually considerable. That being stated, this paper's methodology aims to improve the voltage profile in the whole transmission system under scenarios in which a PQ load is randomly connected to any busbar of the system. The optimal location and sizing of reactive compensation are found through a DNN, which is capable of a relatively small processing time. The methodology is tested in three case studies: the IEEE 14, 30 and 118 busbar transmission systems. In each of these systems, a brute force algorithm (BFA) is implemented by connecting a PQ load composed of 80% active power and 20% reactive power (which varies from 1 MW to 100 MW) to every busbar; for each scenario, reactive compensation (which varies from 10 Mvar to 300 Mvar) is connected to every busbar. Power flows are then generated for each case, and by selecting the scenario which is closest to 90% of the original voltage profiles, the optimal scenario is selected and overcompensation (which would increase cost) is avoided. Through the BFA, the DNN is trained by selecting 70% of the generated data as training data and the other 30% as test data. Finally, the DNN is capable of achieving 100% accuracy for location (in all three case studies when compared with the BFA), and the objective deviation has a difference of 3.18%, 7.43% and 0% for the IEEE 14, 30 and 118 busbar systems, respectively (when compared with the BFA). With this methodology, it is possible to find the optimal location and sizing of reactive compensation for any transmission system under any PQ load increment, with almost no processing time (with the DNN trained, the algorithm takes seconds to find the optimal solution).
APA, Harvard, Vancouver, ISO, and other styles
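A minimal sketch of the training step described above (brute-force-generated samples, a 70/30 split, a small neural network) is given below; the features, targets, and network size are placeholders, not the paper's actual design, and the synthetic function merely stands in for the power-flow-derived data.

```python
# Minimal sketch: fit a neural network to brute-force-generated samples with a
# 70/30 train/test split, as the abstract describes (synthetic placeholder data;
# the paper's actual features, e.g. load size and busbar index, and its targets,
# e.g. compensation busbar and Mvar, are not reproduced here).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform([1, 1], [100, 14], size=(2000, 2))   # [load in MW, loaded busbar]
y = 0.3 * X[:, 0] + 2.0 * np.sin(X[:, 1])            # stand-in "optimal Mvar"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.30, random_state=0)
dnn = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
dnn.fit(X_tr, y_tr)
print("test R^2:", dnn.score(X_te, y_te))
```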
23

Smirnov, Oleksii, Anatoliy Narivskiy, Yevgen Smyrnov, Anastasiia Semenko, and Aleksei Verzilov. "Increasing the Dosing Accuracy of Magnetodynamic Foundry Equipment." Science and Innovation 17, no. 5 (October 12, 2021): 42–49. http://dx.doi.org/10.15407/scine17.05.042.

Full text
Abstract:
Introduction. The problem of combining continuous monitoring of the main informative process parameters (mass, temperature, melt consumption) and control of the pouring process is relevant for almost all filling devices today. Problem Statement. The development of pouring accuracy methods, particularly for small-dose pouring, is an important task for the foundry industry. Purpose. The purpose is to study the dependences of the flow characteristics of the magnetodynamic equipment on the supplied voltage in various conditions of its operation. Materials and Methods. Physical modelling has been applied for the study of dosing accuracy for small doses in the range of 1.5—3 kg. Results. The coefficient of the numerical dependence of instantaneous mass flow consumption of a modeling fluid in the trough on the instantaneous mass of a modeling fluid in the trough has been established based on experimental studies with the use of a physical model of magnetodynamic device (MDD). The studies of filling doses within the range from 1.5 to 3 kg have shown that this coefficient corresponds to the range of the electromagnet supply voltage from 12.3 to 16.3 V. There have been determined the efficient range of the poured-metal-mass to instantaneous-mass-flow-consumption ratio in the course of casting (2.20—2.25) and the corresponding range of the MDD electromagnet supply voltage to minimize the effect of jet pulsations on the dosing accuracy by reducing their amplitude. The dosing error does not exceed 1.5% by dose weight in the case of pouring small portions (1.5—3 kg). Conclusions. A new technical solution for MDD with an inclined weighting trough of a conventional design has been developed based on the electromagnetic transfer of a force proportional to the instantaneous melt mass in the trough. The implementation of this solution makes it possible to reduce the number of strain gauge power sensors for the instantaneous measurement of the melt mass, from four sensors installed under the melting pot of the MDD prototype to one placed directly under the trough.
APA, Harvard, Vancouver, ISO, and other styles
24

Sarac, Mine, Mehmet Alper Ergin, Ahmetcan Erdogan, and Volkan Patoglu. "AssistOn-Mobile: a series elastic holonomic mobile platform for upper extremity rehabilitation." Robotica 32, no. 8 (September 16, 2014): 1433–59. http://dx.doi.org/10.1017/s0263574714002367.

Full text
Abstract:
SUMMARY: We present the design, control, and human–machine interface of a series elastic holonomic mobile platform, AssistOn-Mobile, aimed to administer therapeutic table-top exercises to patients who have suffered injuries that affect the function of their upper extremities. The proposed mobile platform is a low-cost, portable, easy-to-use rehabilitation device targeted for home use. In particular, AssistOn-Mobile consists of a holonomic mobile platform with four actuated Mecanum wheels and a compliant, low-cost, multi-degrees-of-freedom series elastic element acting as its force sensing unit. Thanks to its series elastic actuation, AssistOn-Mobile is highly backdriveable and can provide assistance/resistance to patients, while performing omni-directional movements on the plane. AssistOn-Mobile also features Passive Velocity Field Control (PVFC) to deliver human-in-the-loop contour tracking rehabilitation exercises. PVFC allows patients to complete the contour-tracking tasks at their preferred pace, while providing the proper amount of assistance as determined by the therapists. PVFC not only minimizes the contour error but also does so by rendering the closed-loop system passive with respect to externally applied forces; hence, it ensures the coupled stability of the human-robot system. We evaluate the feasibility and effectiveness of AssistOn-Mobile with PVFC for rehabilitation and present experimental data collected during human subject experiments under three case studies. In particular, we utilize AssistOn-Mobile with PVFC (a) to administer contour following tasks where the pace of the tasks is left to the control of the patients, so that the patients can assume a natural and comfortable speed for the tasks; (b) to limit compensatory movements of the patients by integrating an RGB-D sensor into the system to continually monitor the movements of the patients and to modulate the task speeds to provide online feedback to the patients; and (c) to integrate a Brain–Computer Interface such that the brain activity of the patients is mapped to the robot speed along the contour following tasks, rendering an assist-as-needed protocol for the patients with severe disabilities. The feasibility studies indicate that AssistOn-Mobile holds promise in improving the accuracy and effectiveness of repetitive movement therapies, while also providing quantitative measures of patient progress.
APA, Harvard, Vancouver, ISO, and other styles
25

Tzafestas, Costas S., Kostas Birbas, Yiannis Koumpouros, and Dimitris Christopoulos. "Pilot Evaluation Study of a Virtual Paracentesis Simulator for Skill Training and Assessment: The Beneficial Effect of Haptic Display." Presence: Teleoperators and Virtual Environments 17, no. 2 (April 1, 2008): 212–29. http://dx.doi.org/10.1162/pres.17.2.212.

Full text
Abstract:
Effective, real-time training of health care professionals in invasive procedures is a challenging task. Furthermore, assessing in practice the acquisition of the dexterity and skills required to safely perform such operations is particularly difficult to perform objectively and reliably. The development of virtual reality (VR) simulators offers great potential toward these objectives, and can help bypass some of the difficulties associated with classical surgical training and assessment procedures. In this context, we have developed a prototype VR simulator platform for training in a class of invasive procedures, such as accessing central vessels. This paper focuses more particularly on a pilot study treating the specific application case of subclavian vein paracentesis. The simulation incorporates 3D models of all the human anatomy structures involved in this procedure, where collision detection and response algorithms are implemented to simulate most of the potential complications in accordance with the situations encountered in real clinical practice. Furthermore, haptic display is integrated using a typical force feedback device providing the user with a sense of touch during the simulated operations. Our main objective in this study was to obtain quantitative evaluation results regarding the effect of haptic display on performance. Two user groups participated in the study: (I) novice users and (II) experienced surgeons. The system automatically provides quantitative assessment scores of users' performance, applying a set of objective measures that also involve the optimality of the needle insertion path and indicators of maneuvering errors. Training and skill assessment performance of the system is evaluated in a twofold manner, regarding respectively: (a) the learning curve of novice users, and (b) the correlation of the system-generated scores with the actual surgical experience of the user. These performance indicators are assessed with respect to the activation of the haptic display and to whether this has any beneficial effect (or not). The experimental findings of this first pilot study provide quantitative evidence about the significance of haptic display, not only as a means to enhance the realism of the surgical simulation, but especially as an irreplaceable component for achieving objective and reliable skill assessment. Further larger-scale and long-term clinical studies are needed to validate the effectiveness of such platforms for actual training and dexterity enhancement, particularly when more complex sensorimotor skills are involved.
APA, Harvard, Vancouver, ISO, and other styles
26

Grantham, David, and Neville Hunt. "Web interfaces to enhance CAL materials: case studies from law and statistics." Research in Learning Technology 7, no. 3 (December 30, 2011). http://dx.doi.org/10.3402/rlt.v7i3.11566.

Full text
Abstract:
Both authors were seconded part-time to the Coventry University Teaching and Learning Task Force in the autumn of 1997. There are two main driving forces behind this Task Force. The first is the changing context of higher education, where increasing numbers of students, the shrinking unit of resource and an increasing emphasis on learning and teaching are generating increasing concern about the quality of the student experience. The second is to test how far communications and information technology (C&IT), highlighted in the Dearing Report (National Committee of Inquiry into Higher Education, 1997), could provide an important vehicle for more successful learning. At the same time, there is an ongoing expansion of global learning resources available through the Internet. Harnessing these resources in a way that will enrich student learning became a focal point for both projects. DOI: 10.1080/0968776990070309
APA, Harvard, Vancouver, ISO, and other styles
27

Santoso, Aris Prio Agus, Totok Wahyudi, Safitri Nur Rohmah, and Ary Rachman Haryadi. "Legal Protection of Health Workers in the Task Force for the Acceleration of Handling Covid-19 from a State Administrative Law Point of View." JISIP (Jurnal Ilmu Sosial dan Pendidikan) 5, no. 2 (March 1, 2021). http://dx.doi.org/10.58258/jisip.v5i2.1826.

Full text
Abstract:
Article 28D paragraph (1) of the 1945 Constitution states that every person has the right to recognition, guarantee, protection, and legal certainty that is just, and to equal treatment before the law. Article 57 letter a of Law No. 36 of 2014 concerning Health Personnel also states that health workers carrying out practice are entitled to legal protection as long as they carry out their duties in accordance with Professional Standards, Professional Service Standards, and Operational Procedure Standards; in implementation, however, such legal protection has not been seen to be carried out by office holders. The problems addressed in this study are how legal protection is provided to health workers in the task force for the acceleration of Covid-19 handling, and what constraints health workers face in obtaining guarantees of occupational safety and health in that task force, reviewed from the perspective of administrative law. This research uses a sociological juridical approach, collecting data from field studies and literature studies to find out how the legal protection of health workers in the task force for the acceleration of Covid-19 handling works from the perspective of administrative law. The data obtained were analysed qualitatively. Based on the results of the study, it was found that health workers get legal protection in the form of supervision and guidance, but the legal protection efforts provided still have weaknesses because some of the rights of health workers have not been fulfilled. In connection with the provision of occupational safety and health guarantees to health workers, there are still several obstacles, including the complicated bureaucracy of the Regional Government and the uneven distribution of PPE (Personal Protective Equipment). The government, in this case, has not been able to provide maximum legal protection and occupational health and safety insurance for health workers.
APA, Harvard, Vancouver, ISO, and other styles
28

Lu, Jinrong, Lunyuan Chen, Junjuan Xia, Fusheng Zhu, Maobin Tang, Chengyuan Fan, and Jiangtao Ou. "Analytical offloading design for mobile edge computing-based smart internet of vehicle." EURASIP Journal on Advances in Signal Processing 2022, no. 1 (May 23, 2022). http://dx.doi.org/10.1186/s13634-022-00867-2.

Full text
Abstract:
In this paper, we investigate how to design an analytical offloading strategy for a multiuser mobile edge computing (MEC)-based smart internet of vehicle (IoV), where there are multiple computational access points (CAPs) which can help compute tasks from the vehicular users. As it is difficult to derive an analytical offloading ratio for a general MEC-based IoV network, we turn to providing an analytical offloading scheme for some special MEC networks, including the one-to-one, one-to-two and two-to-one cases. For each case, we study the system performance by using a linear combination of latency and energy consumption, and derive the analytical offloading ratio by minimizing the system cost. Simulation results are finally presented to verify the proposed studies. In particular, the proposed analytical offloading scheme can achieve the optimal performance of the brute force (BF) scheme. The analytical results in this paper can serve as an important reference for the analytical offloading design for a general MEC-based IoV.
APA, Harvard, Vancouver, ISO, and other styles
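A toy version of the one-to-one case can show how the weighted latency-energy cost varies with the offloading ratio. Every parameter value below is assumed for illustration, and the brute-force sweep simply stands in for the analytical minimiser derived in the paper.

```python
# Toy one-to-one offloading example (all parameter values assumed; not the
# paper's model). A fraction rho of the task is offloaded, the rest runs
# locally in parallel, and the cost is a weighted sum of latency and energy,
# minimised here by a simple brute-force sweep over rho.
import numpy as np

C = 1e6                    # task workload in CPU cycles
D = 1e6                    # task input data in bits
f_loc, f_mec = 1e7, 1e9    # local and edge CPU speeds (cycles/s)
rate = 2e7                 # uplink rate (bits/s)
p_tx, kappa = 0.5, 1e-26   # transmit power (W), local chip energy constant
w = 0.5                    # weight between latency and energy

def cost(rho: float) -> float:
    t_local = (1 - rho) * C / f_loc
    t_off = rho * D / rate + rho * C / f_mec      # upload + edge execution
    latency = max(t_local, t_off)                 # local and offloaded parts overlap
    energy = kappa * f_loc**2 * (1 - rho) * C + p_tx * rho * D / rate
    return w * latency + (1 - w) * energy

rhos = np.linspace(0.0, 1.0, 1001)
best = min(rhos, key=cost)
print(f"best offloading ratio ~ {best:.3f}, cost = {cost(best):.4f}")
```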
29

Ng, Chun Keat, and Nur Anida Jumadi. "IoT-Based Instrumentation Development for Reaction Time, Kick Impact Force, and Flexibility Index Measurement." International Journal of Electrical and Electronic Engineering & Telecommunications, 2022, 82–87. http://dx.doi.org/10.18178/ijeetc.11.1.82-87.

Full text
Abstract:
This paper presents an Internet of Things (IoT)-based instrumentation development using a vibration sensor, a force sensor, and the Blynk app to obtain the reaction time, kick impact force, and flexibility index based on visual stimuli. Besides the Blynk app and the vibration and force sensors, the other main components of the prototype are the Arduino NodeMCU microcontroller and three LEDs (yellow, red, and green). The developed prototype was able to record the reaction time, kick impact force, and flexibility index. These outputs could be viewed not only on the Organic Light Emitting Diode (OLED) display but also on the smartphone via the Blynk app interface. To evaluate the performance of the developed prototype, four male Silat athletes weighing between 61 kg and 80 kg were recruited for this experiment. They were requested to undergo a Simple Reaction Time (SRT) task which required them to perform three trials of the front kick. From the SRT findings, it can be deduced that 75% of the participants reacted faster to the green LED than to the yellow and red LEDs, with the fastest reaction time recorded being 1485.2±126.7 ms. In the future, the design of the hardware in terms of circuitry and hardware casing will be improved for a better prototype presentation. Secondly, a larger sample of subjects and different branches of combat sports will be recruited to assist in further analysis. Furthermore, sound stimuli will be included in future studies. Lastly, further research should be conducted to assess the effect of colored stimuli on the reaction time of the athletes.
APA, Harvard, Vancouver, ISO, and other styles
30

Yang, Xin, and Nazanin Rahmani. "Task scheduling mechanisms in fog computing: review, trends, and perspectives." Kybernetes ahead-of-print, ahead-of-print (March 5, 2020). http://dx.doi.org/10.1108/k-10-2019-0666.

Full text
Abstract:
Purpose With the development of the internet of things (IoT), this paper considers fog computing (FC) as an efficient complement to the cloud for handling the IoT's computation and communication requirements. Broadly, FC places cloud-like computing, storage and networking resources close to IoT systems and sensors. Fog shares many similarities with the cloud; the key difference is location, in that fog devices sit very close to end-users and can therefore process requests and respond to clients in less time. This makes the approach useful for real-time streaming applications, sensor systems, and IoT deployments that need high speed and reliable internet connectivity, as well as for applications such as remote healthcare and medical cyber-physical systems where low latency is essential. To reduce the latency of FC, the task scheduler plays a vital role: task scheduling means assigning tasks to fog resources in an efficient way. Despite the importance of task scheduling techniques in FC, the authors found no existing review of this topic, so this paper offers a systematic literature review of the available task scheduling techniques. In addition, the advantages and disadvantages of different task scheduling approaches are considered, their main challenges are identified with a view to designing more efficient task schedulers, and directions for future work are provided. Design/methodology/approach The paper complies with the methodological requirements of systematic literature reviews (SLR). It investigates the most recent systems and studies their practical techniques in detail. The task scheduling mechanisms applied in FC are categorized into two major groups: heuristic and meta-heuristic. Findings The review addresses the principal aims, open problems, terminology, methods and approaches of task scheduling in fog settings. The authors tried to design the systematic review as precisely as possible, but it may still be subject to various threats to validity. Research limitations/implications This study aimed to be comprehensive but has some limitations. First, discussions of task scheduling in fog settings appear in many venues, such as editorial notes, academic publications, technical reports and Web pages; papers published in national journals were omitted, as were papers that mention task scheduling only in passing while focusing on other subjects, so the analysis should be read as covering studies published in the main international FC journals. Second, the research questions posed may not cover the whole task scheduling area, leaving further related questions to be formulated. Third, regarding research and publication bias, five electronic databases were chosen on the basis of prior review experience; while these databases should surface the most relevant and reliable studies, it cannot be guaranteed that all significant works were captured, and some effective studies may have been omitted during the selection process described in Section 3.
Finally, bias can arise at each step from the search string through to data extraction, and the authors tried to limit it by checking extracted information against the primary sources. Practical implications The results of this survey will be valuable for academics and can provide insight into future research areas in this domain. Moreover, the disadvantages and advantages of the surveyed systems have been studied, and their key issues have been emphasized, to support the development of more effective task scheduling mechanisms in FC. Originality/value The paper presents the state of the art in the fog task scheduling area, and its results should help researchers design more effective task scheduling approaches in fog settings.
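As a concrete illustration of the heuristic class of schedulers this review surveys, a minimal sketch (not any specific scheme from the paper) assigns each task to the fog node with the earliest estimated finish time, trading a little optimality for very low scheduling overhead. Node speeds, link delays and task sizes are made-up examples.

    # Minimal heuristic fog scheduler: earliest-estimated-finish-time assignment.
    from dataclasses import dataclass

    @dataclass
    class FogNode:
        name: str
        cycles_per_sec: float   # processing speed
        link_delay_s: float     # one-way latency from the IoT device
        busy_until: float = 0.0 # time at which the node becomes free

    def schedule(tasks, nodes):
        """Greedily map each task (in CPU cycles) to the node finishing it earliest."""
        plan = []
        for i, task_cycles in enumerate(tasks):
            def finish_time(n):
                start = max(n.busy_until, n.link_delay_s)
                return start + task_cycles / n.cycles_per_sec
            best = min(nodes, key=finish_time)
            done = finish_time(best)
            best.busy_until = done
            plan.append((f"task{i}", best.name, round(done, 3)))
        return plan

    if __name__ == "__main__":
        nodes = [FogNode("fog-A", 2e9, 0.002), FogNode("fog-B", 1e9, 0.001)]
        tasks = [4e8, 1e9, 2e8, 6e8]   # task sizes in CPU cycles
        for t, node, finish in schedule(tasks, nodes):
            print(f"{t} -> {node}, finishes at {finish}s")

Meta-heuristic schedulers, the other category in the review, would instead search over whole assignment plans (for example with genetic or particle-swarm methods) at higher scheduling cost.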
APA, Harvard, Vancouver, ISO, and other styles
31

Eden, Jonathan, Darwin Lau, Ying Tan, and Denny Oetomo. "Unilateral Manipulability Quality Indices: Generalized Manipulability Measures for Unilaterally Actuated Robots." Journal of Mechanical Design 141, no. 9 (July 19, 2019). http://dx.doi.org/10.1115/1.4043932.

Full text
Abstract:
The study of the relationship between the desired system dynamics and the actuation wrench producing those dynamics is important for robotic system analysis. For traditionally actuated robots, the quality indices of dexterity and manipulability quantify this relationship. However, for unilaterally actuated robots (UARs), such as grasping hands and cable-driven parallel robots (CDPRs), these indices cannot be applied due to the unilateral actuation constraint. In this paper, the quality indices of unilateral dexterity (UD) and unilateral maximum force amplification (UMFA) are established for UARs with an arbitrary number of actuators. It is shown that these quality indices provide task-independent quantifications of the physical properties of robustness and force amplification for UARs, and they can measure the mechanism's capability in both singular and nonsingular poses. With these indices, manipulability ellipsoid-derived measures can be applied to arbitrary UARs. The significance of the quality indices for robot synthesis and motion generation analysis is illustrated through two case studies: a five-fingered grasp selection problem and the workspace analysis of a spatial CDPR.
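For context, the classical (bilateral) measures that such unilateral indices generalize can be stated in standard form; the following is the textbook Yoshikawa formulation, not the UD/UMFA definitions introduced in the paper. For a robot with Jacobian $J(q)$ mapping joint velocities to task-space velocities, the manipulability ellipsoid is the image of the unit ball of joint velocities, and

$$
w(q) = \sqrt{\det\!\big(J(q)\,J(q)^{\mathsf T}\big)},
\qquad
\frac{1}{\kappa(q)} = \frac{\sigma_{\min}\big(J(q)\big)}{\sigma_{\max}\big(J(q)\big)},
$$

where $w$ is the manipulability measure, $1/\kappa$ the inverse condition number used as a dexterity index, and $\sigma_{\min}$, $\sigma_{\max}$ the extreme singular values of $J(q)$. Because tendons and cables can only pull, the admissible actuation of a UAR is restricted to a cone rather than the full space, which is why these ellipsoid-based measures cannot be applied directly and which motivates the unilateral indices.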
APA, Harvard, Vancouver, ISO, and other styles
32

Xiao, Qingyu, Mishek Musa, Isuru Godage, Hao Su, and Yue Chen. "Kinematics and Stiffness Modeling of Soft Robot With a Concentric Backbone." Journal of Mechanisms and Robotics, October 5, 2022, 1–13. http://dx.doi.org/10.1115/1.4055860.

Full text
Abstract:
Soft robots can undergo large elastic deformations and adapt to complex shapes. However, they lack the structural strength to withstand external loads due to the intrinsic compliance of fabrication materials (silicone or rubber). In this paper, we present a novel stiffness modulation approach that controls the robot's stiffness on-demand without permanently affecting the intrinsic compliance of the elastomeric body. Inspired by concentric tube robots, this approach uses a Nitinol tube as the backbone, which can be slid in and out of the soft robot body to achieve robot pose or stiffness modulation. To validate the proposed idea, we fabricated a tendon-driven concentric tube (TDCT) soft robot and developed the model based on Cosserat rod theory. The model is validated in different scenarios by varying the joint-space tendon input and task-space external contact force. Experimental results indicate that the model is capable of estimating the shape of the TDCT soft robot with an average root mean square error (RMSE) of 0.90 mm and an average tip error of 1.49 mm. Simulation studies demonstrate that the Nitinol backbone insertion can enhance the kinematic workspace and reduce the compliance of the TDCT soft robot by 57.7%. Two case studies (object manipulation and soft laparoscopic photodynamic therapy) are presented to demonstrate the potential application of the proposed design.
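The Cosserat rod model mentioned in the abstract is, in its static form, usually written as a set of ordinary differential equations along the arc length $s$; the following is the generic continuum-robot formulation, not the paper's specific TDCT model, and the tendon tensions and Nitinol backbone would enter through the distributed loads and the constitutive law:

$$
\frac{d\mathbf{p}}{ds} = \mathbf{R}\,\mathbf{v}, \qquad
\frac{d\mathbf{R}}{ds} = \mathbf{R}\,\widehat{\mathbf{u}}, \qquad
\frac{d\mathbf{n}}{ds} + \mathbf{f} = \mathbf{0}, \qquad
\frac{d\mathbf{m}}{ds} + \frac{d\mathbf{p}}{ds}\times\mathbf{n} + \mathbf{l} = \mathbf{0},
$$

where $\mathbf{p}$ and $\mathbf{R}$ are the backbone position and orientation, $\mathbf{v}$ and $\mathbf{u}$ the linear and angular strains (with $\widehat{\cdot}$ the skew-symmetric operator), $\mathbf{n}$ and $\mathbf{m}$ the internal force and moment, and $\mathbf{f}$, $\mathbf{l}$ the distributed external force and moment per unit length. Solving this boundary value problem for given tendon inputs yields the shape estimates against which the reported RMSE and tip errors are measured.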
APA, Harvard, Vancouver, ISO, and other styles
33

Caldwell, Nick, and Sean Aylward Smith. "Machine." M/C Journal 2, no. 6 (September 1, 1999). http://dx.doi.org/10.5204/mcj.1779.

Full text
Abstract:
Briareos: If it's so quiet, then why do they need machines like that? I thought it was supposed to be peacetime. I'll tell you why-eighty per cent of the people here are artificial. Genetic engineering's out of control! -- Shirow, Appleseed 1.5: 37. Welcome to the 'machine' issue of M/C, appropriately mediated to you through the global network of machines sometimes known as the Internet. The question of the machine might seem a curious one for media and cultural studies scholars -- after all, commonsense tells us that whilst machines may let us practice culture or produce media, they are by definition not cultural. The cultural, the human, the social, is that which is left after all the nonhuman components have been properly contained within a 'machine'. And yet, what is a machine? When we talk of the machine in everyday discourse, it is always, explicitly or implicitly, part of a binary opposition between humans and machines, between motive biology and immobile artefacts, between active subjects and passive objects. In a similar vein the machine is always spectral, always metaphoric: it is one of the defining tropes of our industrialised civilisation. Whether it is the machinic Gehenna of H. G. Wells's or Fritz Lang's nightmares, all demonic steam engines and inhuman cogs; the shadowy, fifth column clones of Appleseed or the invasive 'Alien' that is micro-computerisation, the Human Genome project and nanotechnology, the machine is a metaphor that structures all levels of our relationships to culture. In scientific thought, at least since Newton and Descartes, the world has been composed of machines -- instruments which transmit and direct force, and which can be investigated, manipulated, invented. In humanistic thought, the machine is the other, the Frankenstein, the binary category against which the human gains its definition. It is the gleaming machine against which unruly women have been defined, the efficient machine against which uncontrollable mobs have had no recourse, the starkly military machine against which disenfranchised and colonised peoples have had no recourse but death. The machine is thus a significant cultural category, organising and policing ideological, discursive, historical, spiritual and political boundaries. In this issue of M/C, a range of authors address from different perspectives this question of the machine and its imbricated other, the human. Our feature writer, Anna Munster, in "Love Machines" critiques contemporary writing about sexuality and sexual experiences in cyberspace, commenting that "an erotic relation with the technological is occluded in most accounts of the sexual in cyberspace and in many engagements with digital technologies", arguing instead that these accounts engage in a kind of "onanistic" logic instead. She begins to address ways around this unsatisfactory arrangement by using the theories of Félix Guattari and Avital Ronell. Our second feature article is by the feminist philosopher and sociologist of technology, Zoë Sofoulis. In an article entitled "Machinic Musings with Mumford", Sofoulis returns to historian of technology Lewis Mumford for ideas about the role and purpose of machines in cultural development. She suggests that the machine's fascinating autonomy may inspire notions of the 'post-human', which can be critiqued from Mumford's humanist position as well as Latour's "non modern" stance. 
Taking up Mumford's point about the over-emphasis on machines in technology studies, Sofoulis argues that it is important to look at machines not just as things in themselves, but also the purposes they serve. A concern with the work of Guattari returns in Andreas Broeckmann's article "Minor Media -- Heterogenic Machines". Broeckmann reads a series of works by contemporary artists through the prism of Guattari's writings on art, media and the machine to argue that the "line of flight" of such media experimentation "is the construction of new and strong forms of subjectivity". Such artistic practices, which draw upon the networking and transformative features of digital media, Broeckmann suggests, "point us in the direction of the positive potentials of post media". 'Desire' Issue Editor Laurie Johnson takes a slightly different tack in this issue by developing a reading of classic 50s SF film Forbidden Planet in order to come to terms with deleuzoguattarian theorisations of the machine. He asks the question of Deleuze and Guattari: "how can we use a concept of machine that claims to go beyond the concept of utility (or techné, the function of technical machines)?" Johnson's article "Félix and Gilles's Tempestuous, Monstrous Machines" develops through a sophisticated cross-reading of Forbidden Planet with its source text, The Tempest, and the analysis of civilization that Deleuze and Guattari elaborate in Anti-Oedipus: "what the film demonstrates instead is that stripping a civilisation of its instrumentalities produces something other than just a return of the primitive repressed". In "From Haptic Interfaces to Man-Machine Symbiosis", Sonja Kangas provides an historicised account of the ways in which humans and machines interact with one another, which develops into a discussion of the problems of interactivity from the paradigm of game interface design. Drawing on the notions of cultural capital as developed by Pierre Bourdieu and the theorisations of the cyborg by Donna Haraway, Susan Luckman in "XX @ MM: Cyborg Subjectivity as Millennial Fashion Statement" addresses "some of the ways in which the traditional determinants of class are being redefined in light of the so-called postmodern capitalist information economy". In doing so she brings in "nerd chic", cyborgs, rave culture and I-Macs. Paul Benneworth, in his article "The Machine as Mythology -- The Case of the Joyce-Loebl Microdensitometer", takes us on an archeology of technology, examining the history and the fate of a small, pre-digital measuring device, the microdensitometer. His social history locates the device within its geographic and historical -- as well as its technological -- milieux, to demonstrate that the existence of any machine depends upon far more than merely its technical characteristics; that every machine is a social machine. If the machinic philosophy of Félix Guattari is the refrain of this issue of M/C, then Donna Haraway's cyborg metaphor is its melody. Drawing on the work of Haraway, Sophie Taysom's article "True Love Is a Trued Wheel: Technopleasure in Mountain Biking" seeks to make clear some of the ways in which mountain biking magazines have an important role in negotiating the boundaries between the "inside" and "outside" of mountain biking practices. Taysom achieves this by conducting a semiotic reading of this magazine genre, to prise open the trope of technology and see how it is reconfigured both literally and figuratively within their pages. 
Frances Bonner takes to task the highly gendered division in Science Fiction of "hard" and "soft" SF through an analysis of the representations of the 'technological' in Lois McMaster Bujold's Vorkosigan saga. As Bonner explains, this series is not so easily placed in either category, mixing as it does the tropes of space adventure with explorations of the ramifications of advanced reproductive technology. The engagement with the work of Guattari returns as a coda in Belinda Barnet's article "Machinic Heterogenesis and Evolution: Collected Notes on Sound, Machines and Sonicform". Hotwiring together Guattari's machinic philosophy with the work of the complexity theorists such as Stuart Kauffman and Ilya Prigogine, and using the example of the self-organising and evolving Sonicform Web site/sound system, Barnet examines the use and spread of evolutionary metaphors beyond their scientific origins. As she points out, "evolution is not just conflict, competition, selfish genes, tree diagrams, living and non-living systems. It's not just something furry, crawling things do. It's music. It's a dance. Poetic-existential. Hybrid subjectivities. And all the wor(l)ds in between". Nick Caldwell, Sean Aylward Smith -- 'Machine' Issue Editors Citation reference for this article MLA style: Nick Caldwell, Sean Aylward Smith. "Editorial: 'Machine'." M/C: A Journal of Media and Culture 2.6 (1999). [your date of access] <http://www.uq.edu.au/mc/9909/edit.php>. Chicago style: Nick Caldwell, Sean Aylward Smith, "Editorial: 'Machine'," M/C: A Journal of Media and Culture 2, no. 6 (1999), <http://www.uq.edu.au/mc/9909/edit.php> ([your date of access]). APA style: Nick Caldwell, Sean Aylward Smith. (1999) Editorial: 'machine'. M/C: A Journal of Media and Culture 2(6). <http://www.uq.edu.au/mc/9909/edit.php> ([your date of access]).
APA, Harvard, Vancouver, ISO, and other styles
34

Khajehnejad, Moein, Julian García, and Bernd Meyer. "Explaining workers’ inactivity in social colonies from first principles." Journal of The Royal Society Interface 20, no. 198 (January 2023). http://dx.doi.org/10.1098/rsif.2022.0808.

Full text
Abstract:
Social insects are among the ecologically most successful collectively living organisms, with efficient division of labour a key feature of this success. Surprisingly, these efficient colonies often have a large proportion of inactive workers in their workforce, sometimes referred to as lazy workers. The dominant hypotheses explaining this are based on specific life-history traits, specific behavioural features or uncertain environments where inactive workers can provide a 'reserve' workforce that can spring into action quickly. While there are a number of experimental studies that show and investigate the presence of inactive workers, mathematical and computational models exploring specific hypotheses are not common. Here, using a simple mathematical model, we show that a parsimonious hypothesis can explain this puzzling social phenomenon. Our model incorporates social interactions and environmental influences into a game-theoretical framework and captures how individuals react to the environment by allocating their activity according to environmental conditions. This model shows that inactivity can emerge under specific environmental conditions as a by-product of the task allocation process. Our model confirms the empirical observation that, in the case of worker loss, prior homeostatic balance is re-established by replacing some of the lost workforce with previously inactive workers. Most importantly, our model shows that inactivity in social colonies can be explained without the need to assume an adaptive function for this phenomenon.
APA, Harvard, Vancouver, ISO, and other styles
35

Gupta, Sakshi, Anupam Agrawal, and Ekta Singla. "Muscle weakness assessment tool for automated therapy selection in elbow rehabilitation." Robotica, June 23, 2022, 1–17. http://dx.doi.org/10.1017/s0263574722000844.

Full text
Abstract:
Clinical observations and subjective judgements have traditionally been used to evaluate patients with muscular and neurological disorders. As a result, identifying and analyzing functional improvements is difficult, especially in the absence of expertise. Quantitative assessment, which serves as the motivation for this study, is an essential prerequisite for forecasting the task of the rehabilitation device in order to develop rehabilitation training. This work provides a quantitative assessment tool for muscle weakness in the human upper limbs for robotic-assisted rehabilitation. The goal is to map the assessment metrics to the recommended rehabilitation exercises. Measurable interaction forces and muscle correlation factors are the parameters selected to design a framework for detecting the condition of muscular nerve cells and selecting an appropriate limb trajectory. In this work, a data collection setup is designed for extracting muscle intervention and assessment measures using MyoMeter, goniometer and surface electromyography data for the upper limbs. Force signals and human physiological response data are evaluated and categorized to infer the relevant progress. Based on the most influential muscles, curve fitting is performed. Trajectory-based data points are collected through a scaled geometric OpenSim musculoskeletal model that fits the subject's anthropometric data. These data are found to be most suitable for prescribing relevant exercises and designing customized robotic assistance. Case studies demonstrate the approach's efficacy, including an optimally synthesized automated configuration for the desired trajectory.
APA, Harvard, Vancouver, ISO, and other styles
36

Rossiter, Ned. "Creative Industries and the Limits of Critique from." M/C Journal 6, no. 3 (June 1, 2003). http://dx.doi.org/10.5204/mcj.2208.

Full text
Abstract:
‘Every space has become ad space’. Steve Hayden, Wired Magazine, May 2003. Marshall McLuhan’s (1964) dictum that media technologies constitute a sensory extension of the body shares a conceptual affinity with Ernst Jünger’s notion of ‘“organic construction” [which] indicates [a] synergy between man and machine’ and Walter Benjamin’s exploration of the mimetic correspondence between the organic and the inorganic, between human and non-human forms (Bolz, 2002: 19). The logo or brand is co-extensive with various media of communication – billboards, TV advertisements, fashion labels, book spines, mobile phones, etc. Often the logo is interchangeable with the product itself or a way or life. Since all social relations are mediated, whether by communications technologies or architectonic forms ranging from corporate buildings to sporting grounds to family living rooms, it follows that there can be no outside for sociality. The social is and always has been in a mutually determining relationship with mediating forms. It is in this sense that there is no outside. Such an idea has become a refrain amongst various contemporary media theorists. Here’s a sample: There is no outside position anymore, nor is this perceived as something desirable. (Lovink, 2002a: 4) Both “us” and “them” (whoever we are, whoever they are) are all always situated in this same virtual geography. There’s no outside …. There is nothing outside the vector. (Wark, 2002: 316) There is no more outside. The critique of information is in the information itself. (Lash, 2002: 220) In declaring a universality for media culture and information flows, all of the above statements acknowledge the political and conceptual failure of assuming a critical position outside socio-technically constituted relations. Similarly, they recognise the problems inherent in the “ideology critique” of the Frankfurt School who, in their distinction between “truth” and “false-consciousness”, claimed a sort of absolute knowledge for the critic that transcended the field of ideology as it is produced by the culture industry. Althusser’s more complex conception of ideology, material practices and subject formation nevertheless also fell prey to the pretence of historical materialism as an autonomous “science” that is able to determine the totality, albeit fragmented, of lived social relations. One of the key failings of ideology critique, then, is its incapacity to account for the ways in which the critic, theorist or intellectual is implicated in the operations of ideology. That is, such approaches displace the reflexivity and power relationships between epistemology, ontology and their constitution as material practices within socio-political institutions and historical constellations, which in turn are the settings for the formation of ideology. Scott Lash abandons the term ideology altogether due to its conceptual legacies within German dialectics and French post-structuralist aporetics, both of which ‘are based in a fundamental dualism, a fundamental binary, of the two types of reason. One speaks of grounding and reconciliation, the other of unbridgeability …. Both presume a sphere of transcendence’ (Lash, 2002: 8). Such assertions can be made at a general level concerning these diverse and often conflicting approaches when they are reduced to categories for the purpose of a polemic. 
However, the work of “post-structuralists” such as Foucault, Deleuze and Guattari and the work of German systems theorist Niklas Luhmann is clearly amenable to the task of critique within information societies (see Rossiter, 2003). Indeed, Lash draws on such theorists in assembling his critical dispositif for the information age. More concretely, Lash (2002: 9) advances his case for a new mode of critique by noting the socio-technical and historical shift from ‘constitutive dualisms of the era of the national manufacturing society’ to global information cultures, whose constitutive form is immanent to informational networks and flows. Such a shift, according to Lash, needs to be met with a corresponding mode of critique: Ideologycritique [ideologiekritik] had to be somehow outside of ideology. With the disappearance of a constitutive outside, informationcritique must be inside of information. There is no outside any more. (2002: 10) Lash goes on to note, quite rightly, that ‘Informationcritique itself is branded, another object of intellectual property, machinically mediated’ (2002: 10). It is the political and conceptual tensions between information critique and its regulation via intellectual property regimes which condition critique as yet another brand or logo that I wish to explore in the rest of this essay. Further, I will question the supposed erasure of a “constitutive outside” to the field of socio-technical relations within network societies and informational economies. Lash is far too totalising in supposing a break between industrial modes of production and informational flows. Moreover, the assertion that there is no more outside to information too readily and simplistically assumes informational relations as universal and horizontally organised, and hence overlooks the significant structural, cultural and economic obstacles to participation within media vectors. That is, there certainly is an outside to information! Indeed, there are a plurality of outsides. These outsides are intertwined with the flows of capital and the imperial biopower of Empire, as Hardt and Negri (2000) have argued. As difficult as it may be to ascertain the boundaries of life in all its complexity, borders, however defined, nonetheless exist. Just ask the so-called “illegal immigrant”! This essay identifies three key modalities comprising a constitutive outside: material (uneven geographies of labour-power and the digital divide), symbolic (cultural capital), and strategic (figures of critique). My point of reference in developing this inquiry will pivot around an analysis of the importation in Australia of the British “Creative Industries” project and the problematic foundation such a project presents to the branding and commercialisation of intellectual labour. The creative industries movement – or Queensland Ideology, as I’ve discussed elsewhere with Danny Butt (2002) – holds further implications for the political and economic position of the university vis-à-vis the arts and humanities. Creative industries constructs itself as inside the culture of informationalism and its concomitant economies by the very fact that it is an exercise in branding. 
Such branding is evidenced in the discourses, rhetoric and policies of creative industries as adopted by university faculties, government departments and the cultural industries and service sectors seeking to reposition themselves in an institutional environment that is adjusting to ongoing structural reforms attributed to the demands by the “New Economy” for increased labour flexibility and specialisation, institutional and economic deregulation, product customisation and capital accumulation. Within the creative industries the content produced by labour-power is branded as copyrights and trademarks within the system of Intellectual Property Regimes (IPRs). However, as I will go on to show, a constitutive outside figures in material, symbolic and strategic ways that condition the possibility of creative industries. The creative industries project, as envisioned by the Blair government’s Department of Culture, Media and Sport (DCMS) responsible for the Creative Industry Task Force Mapping Documents of 1998 and 2001, is interested in enhancing the “creative” potential of cultural labour in order to extract a commercial value from cultural objects and services. Just as there is no outside for informationcritique, for proponents of the creative industries there is no culture that is worth its name if it is outside a market economy. That is, the commercialisation of “creativity” – or indeed commerce as a creative undertaking – acts as a legitimising function and hence plays a delimiting role for “culture” and, by association, sociality. And let us not forget, the institutional life of career academics is also at stake in this legitimating process. The DCMS cast its net wide when defining creative sectors and deploys a lexicon that is as vague and unquantifiable as the next mission statement by government and corporate bodies enmeshed within a neo-liberal paradigm. At least one of the key proponents of the creative industries in Australia is ready to acknowledge this (see Cunningham, 2003). The list of sectors identified as holding creative capacities in the CITF Mapping Document include: film, music, television and radio, publishing, software, interactive leisure software, design, designer fashion, architecture, performing arts, crafts, arts and antique markets, architecture and advertising. The Mapping Document seeks to demonstrate how these sectors consist of ‘... activities which have their origin in individual creativity, skill and talent and which have the potential for wealth and job creation through generation and exploitation of intellectual property’ (CITF: 1998/2001). The CITF’s identification of intellectual property as central to the creation of jobs and wealth firmly places the creative industries within informational and knowledge economies. Unlike material property, intellectual property such as artistic creations (films, music, books) and innovative technical processes (software, biotechnologies) are forms of knowledge that do not diminish when they are distributed. This is especially the case when information has been encoded in a digital form and distributed through technologies such as the internet. In such instances, information is often attributed an “immaterial” and nonrivalrous quality, although this can be highly misleading for both the conceptualisation of information and the politics of knowledge production. 
Intellectual property, as distinct from material property, operates as a scaling device in which the unit cost of labour is offset by the potential for substantial profit margins realised by distribution techniques availed by new information and communication technologies (ICTs) and their capacity to infinitely reproduce the digital commodity object as a property relation. Within the logic of intellectual property regimes, the use of content is based on the capacity of individuals and institutions to pay. The syndication of media content ensures that market saturation is optimal and competition is kept to a minimum. However, such a legal architecture and hegemonic media industry has run into conflict with other net cultures such as open source movements and peer-to-peer networks (Lovink, 2002b; Meikle, 2002), which is to say nothing of the digital piracy of software and digitally encoded cinematic forms. To this end, IPRs are an unstable architecture for extracting profit. The operation of Intellectual Property Regimes constitutes an outside within creative industries by alienating labour from its mode of information or form of expression. Lash is apposite on this point: ‘Intellectual property carries with it the right to exclude’ (Lash, 2002: 24). This principle of exclusion applies not only to those outside the informational economy and culture of networks as result of geographic, economic, infrastructural, and cultural constraints. The very practitioners within the creative industries are excluded from control over their creations. It is in this sense that a legal and material outside is established within an informational society. At the same time, this internal outside – to put it rather clumsily – operates in a constitutive manner in as much as the creative industries, by definition, depend upon the capacity to exploit the IP produced by its primary source of labour. For all the emphasis the Mapping Document places on exploiting intellectual property, it’s really quite remarkable how absent any elaboration or considered development of IP is from creative industries rhetoric. It’s even more astonishing that media and cultural studies academics have given at best passing attention to the issues of IPRs. Terry Flew (2002: 154-159) is one of the rare exceptions, though even here there is no attempt to identify the implications IPRs hold for those working in the creative industries sectors. Perhaps such oversights by academics associated with the creative industries can be accounted for by the fact that their own jobs rest within the modern, industrial institution of the university which continues to offer the security of a salary award system and continuing if not tenured employment despite the onslaught of neo-liberal reforms since the 1980s. Such an industrial system of traditional and organised labour, however, does not define the labour conditions for those working in the so-called creative industries. Within those sectors engaged more intensively in commercialising culture, labour practices closely resemble work characterised by the dotcom boom, which saw young people working excessively long hours without any of the sort of employment security and protection vis-à-vis salary, health benefits and pension schemes peculiar to traditional and organised labour (see McRobbie, 2002; Ross, 2003). 
During the dotcom mania of the mid to late 90s, stock options were frequently offered to people as an incentive for offsetting the often minimum or even deferred payment of wages (see Frank, 2000). It is understandable that the creative industries project holds an appeal for managerial intellectuals operating in arts and humanities disciplines in Australia, most particularly at Queensland University of Technology (QUT), which claims to have established the ‘world’s first’ Creative Industries faculty (http://www.creativeindustries.qut.com/). The creative industries provide a validating discourse for those suffering anxiety disorders over what Ruth Barcan (2003) has called the ‘usefulness’ of ‘idle’ intellectual pastimes. As a project that endeavours to articulate graduate skills with labour markets, the creative industries is a natural extension of the neo-liberal agenda within education as advocated by successive governments in Australia since the Dawkins reforms in the mid 1980s (see Marginson and Considine, 2000). Certainly there’s a constructive dimension to this: graduates, after all, need jobs and universities should display an awareness of market conditions; they also have a responsibility to do so. And on this count, I find it remarkable that so many university departments in my own field of communications and media studies are so bold and, let’s face it, stupid, as to make unwavering assertions about market demands and student needs on the basis of doing little more than sniffing the wind! Time for a bit of a reality check, I’d say. And this means becoming a little more serious about allocating funds and resources towards market research and analysis based on the combination of needs between students, staff, disciplinary values, university expectations, and the political economy of markets. However, the extent to which there should be a wholesale shift of the arts and humanities into a creative industries model is open to debate. The arts and humanities, after all, are a set of disciplinary practices and values that operate as a constitutive outside for creative industries. Indeed, in their creative industries manifesto, Stuart Cunningham and John Hartley (2002) loath the arts and humanities in such confused, paradoxical and hypocritical ways in order to establish the arts and humanities as a cultural and ideological outside. To this end, to subsume the arts and humanities into the creative industries, if not eradicate them altogether, is to spell the end of creative industries as it’s currently conceived at the institutional level within academe. Too much specialisation in one post-industrial sector, broad as it may be, ensures a situation of labour reserves that exceed market needs. One only needs to consider all those now unemployed web-designers that graduated from multi-media programs in the mid to late 90s. Further, it does not augur well for the inevitable shift from or collapse of a creative industries economy. Where is the standing reserve of labour shaped by university education and training in a post-creative industries economy? Diehard neo-liberals and true-believers in the capacity for perpetual institutional flexibility would say that this isn’t a problem. The university will just “organically” adapt to prevailing market conditions and shape their curriculum and staff composition accordingly. Perhaps. 
Arguably if the university is to maintain a modality of time that is distinct from the just-in-time mode of production characteristic of informational economies – and indeed, such a difference is a quality that defines the market value of the educational commodity – then limits have to be established between institutions of education and the corporate organisation or creative industry entity. The creative industries project is a reactionary model insofar as it reinforces the status quo of labour relations within a neo-liberal paradigm in which bids for industry contracts are based on a combination of rich technological infrastructures that have often been subsidised by the state (i.e. paid for by the public), high labour skills, a low currency exchange rate and the lowest possible labour costs. In this respect it is no wonder that literature on the creative industries omits discussion of the importance of unions within informational, networked economies. What is the place of unions in a labour force constituted as individualised units? The conditions of possibility for creative industries within Australia are at once its frailties. In many respects, the success of the creative industries sector depends upon the ongoing combination of cheap labour enabled by a low currency exchange rate and the capacity of students to access the skills and training offered by universities. Certainly in relation to matters such as these there is no outside for the creative industries. There’s a great need to explore alternative economic models to the content production one if wealth is to be successfully extracted and distributed from activities in the new media sectors. The suggestion that the creative industries project initiates a strategic response to the conditions of cultural production within network societies and informational economies is highly debateable. The now well documented history of digital piracy in the film and software industries and the difficulties associated with regulating violations to proprietors of IP in the form of copyright and trademarks is enough of a reason to look for alternative models of wealth extraction. And you can be sure this will occur irrespective of the endeavours of the creative industries. To conclude, I am suggesting that those working in the creative industries, be they content producers or educators, need to intervene in IPRs in such a way that: 1) ensures the alienation of their labour is minimised; 2) collectivising “creative” labour in the form of unions or what Wark (2001) has termed the “hacker class”, as distinct from the “vectoralist class”, may be one way of achieving this; and 3) the advocates of creative industries within the higher education sector in particular are made aware of the implications IPRs have for graduates entering the workforce and adjust their rhetoric, curriculum, and policy engagements accordingly. Works Cited Barcan, Ruth. ‘The Idleness of Academics: Reflections on the Usefulness of Cultural Studies’. Continuum: Journal of Media & Cultural Studies (forthcoming, 2003). Bolz, Norbert. ‘Rethinking Media Aesthetics’, in Geert Lovink, Uncanny Networks: Dialogues with the Virtual Intelligentsia. Cambridge, Mass.: MIT Press, 2002, 18-27. Butt, Danny and Rossiter, Ned. ‘Blowing Bubbles: Post-Crash Creative Industries and the Withering of Political Critique in Cultural Studies’. 
Paper presented at Ute Culture: The Utility of Culture and the Uses of Cultural Studies, Cultural Studies Association of Australia Conference, Melbourne, 5-7 December, 2002. Posted to fibreculture mailing list, 10 December, 2002, http://www.fibreculture.org/archives/index.html Creative Industry Task Force: Mapping Document, DCMS (Department of Culture, Media and Sport), London, 1998/2001. http://www.culture.gov.uk/creative/mapping.html Cunningham, Stuart. ‘The Evolving Creative Industries: From Original Assumptions to Contemporary Interpretations’. Seminar Paper, QUT, Brisbane, 9 May, 2003, http://www.creativeindustries.qut.com/research/cirac/documen... ...ts/THE_EVOLVING_CREATIVE_INDUSTRIES.pdf Cunningham, Stuart; Hearn, Gregory; Cox, Stephen; Ninan, Abraham and Keane, Michael. Brisbane’s Creative Industries 2003. Report delivered to Brisbane City Council, Community and Economic Development, Brisbane: CIRAC, 2003. http://www.creativeindustries.qut.com/research/cirac/documen... ...ts/bccreportonly.pdf Flew, Terry. New Media: An Introduction. Oxford: Oxford University Press, 2002. Frank, Thomas. One Market under God: Extreme Capitalism, Market Populism, and the End of Economic Democracy. New York: Anchor Books, 2000. Hartley, John and Cunningham, Stuart. ‘Creative Industries: from Blue Poles to fat pipes’, in Malcolm Gillies (ed.) The National Humanities and Social Sciences Summit: Position Papers. Canberra: DEST, 2002. Hayden, Steve. ‘Tastes Great, Less Filling: Ad Space – Will Advertisers Learn the Hard Lesson of Over-Development?’. Wired Magazine 11.06 (June, 2003), http://www.wired.com/wired/archive/11.06/ad_spc.html Hardt, Michael and Negri, Antonio. Empire. Cambridge, Mass.: Harvard University Press, 2000. Lash, Scott. Critique of Information. London: Sage, 2002. Lovink, Geert. Uncanny Networks: Dialogues with the Virtual Intelligentsia. Cambridge, Mass.: MIT Press, 2002a. Lovink, Geert. Dark Fiber: Tracking Critical Internet Culture. Cambridge, Mass.: MIT Press, 2002b. McLuhan, Marshall. Understanding Media: The Extensions of Man. London: Routledge and Kegan Paul, 1964. McRobbie, Angela. ‘Clubs to Companies: Notes on the Decline of Political Culture in Speeded up Creative Worlds’, Cultural Studies 16.4 (2002): 516-31. Marginson, Simon and Considine, Mark. The Enterprise University: Power, Governance and Reinvention in Australia. Cambridge: Cambridge University Press, 2000. Meikle, Graham. Future Active: Media Activism and the Internet. Sydney: Pluto Press, 2002. Ross, Andrew. No-Collar: The Humane Workplace and Its Hidden Costs. New York: Basic Books, 2003. Rossiter, Ned. ‘Processual Media Theory’, in Adrian Miles (ed.) Streaming Worlds: 5th International Digital Arts & Culture (DAC) Conference. 19-23 May. Melbourne: RMIT University, 2003, 173-184. http://hypertext.rmit.edu.au/dac/papers/Rossiter.pdf Sassen, Saskia. Losing Control? Sovereignty in an Age of Globalization. New York: Columbia University Press, 1996. Wark, McKenzie. ‘Abstraction’ and ‘Hack’, in Hugh Brown, Geert Lovink, Helen Merrick, Ned Rossiter, David Teh, Michele Willson (eds). Politics of a Digital Present: An Inventory of Australian Net Culture, Criticism and Theory. Melbourne: Fibreculture Publications, 2001, 3-7, 99-102. Wark, McKenzie. ‘The Power of Multiplicity and the Multiplicity of Power’, in Geert Lovink, Uncanny Networks: Dialogues with the Virtual Intelligentsia. Cambridge, Mass.: MIT Press, 2002, 314-325. 
Links http://hypertext.rmit.edu.au/dac/papers/Rossiter.pdf http://www.creativeindustries.qut.com/ http://www.creativeindustries.qut.com/research/cirac/documents/THE_EVOLVING_CREATIVE_INDUSTRIES.pdf http://www.creativeindustries.qut.com/research/cirac/documents/bccreportonly.pdf http://www.culture.gov.uk/creative/mapping.html http://www.fibreculture.org/archives/index.html http://www.wired.com/wired/archive/11.06/ad_spc.html Citation reference for this article Substitute your date of access for Dn Month Year etc... MLA Style Rossiter, Ned. "Creative Industries and the Limits of Critique from " M/C: A Journal of Media and Culture< http://www.media-culture.org.au/0306/11-creativeindustries.php>. APA Style Rossiter, N. (2003, Jun 19). Creative Industries and the Limits of Critique from . M/C: A Journal of Media and Culture, 6,< http://www.media-culture.org.au/0306/11-creativeindustries.php>
APA, Harvard, Vancouver, ISO, and other styles
37

Jr., Joseph Reagle. "Open Content Communities." M/C Journal 7, no. 3 (July 1, 2004). http://dx.doi.org/10.5204/mcj.2364.

Full text
Abstract:
In this brief essay I sketch the characteristics of an open content community by considering a number of prominent examples, reviewing sociological literature, teasing apart the concepts of open and voluntary implicit in most usages of the term, and I offer a definition in which the much maligned possibility of 'forking' is actually an integral aspect of openness. Introduction What is often meant by the term 'open' is a generalization from the Free Software, Open Source and open standards movements. Communities marshaling themselves under these banners cooperatively produce, in public view, software, technical standards, or other content that is intended to be widely shared. Free Software and Open Source The Free Software movement was begun by Richard Stallman at MIT in the 1980s. Previously, computer science operated within the scientific norm of collaboration and information sharing. When Stallman found it difficult to obtain the source code of a troublesome Xerox printer, he feared that the norms of freedom and openness were being challenged by a different, proprietary, conceptualization of information. To challenge this shift he created the GNU Project in 1984 (Stallman 1998), the Free Software Foundation (FSF) in 1985 (Stallman 1996), and the authored the GNU General Public License in 1989. The goal of the GNU Project was to create a free version of the UNIX computing environment with which many computer practitioners were familiar with, and even contributed to, but was increasingly being encumbered with proprietary claims. GNU is playful form of a recursive acronym: GNU is Not Unix. The computing environment was supposed to be similar to but independent of UNIX and include everything a user needed including an operating system kernel (e.g., Hurd) and common applications such as small utilities, text editors (e.g., EMACS) and software compilers (e.g,. GCC). The FSF is now the principle sponsor of the GNU Project and focuses on administrative issues such as copyright licenses, policy, and funding issues; software development and maintenance is still an activity of GNU. The GPL is the FSF's famous copyright license for 'free software'; it ensures that the 'freedom' associated with being able to access and modify software is maintained with the original software and its derivations. It has important safeguards, including its famous 'viral' provision: if you modify and distribute software obtained under the GPL license, your derivation also must be publicly accessible and licensed under the GPL. In 1991, Linus Torvalds started development of Linux: a UNIX like operating system kernel, the core computer program that mediates between applications and the underlying hardware. While it was not part of the GNU Project, and differed in design philosophy and aspiration from the GNU's kernel (Hurd), it was released under the GPL. While Stallman's stance on 'freedom' is more ideological, Torvalds approach is more pragmatic. Furthermore, other projects, such as the Apache web server, and eventually Netscape's Mozilla web browser, were being developed in open communities and under similar licenses except that, unlike the GPL, they often permit proprietary derivations. With such a license, a company may take open source software, change it, and include it in their product without releasing their changes back to the community. The tension between the ideology of free software and its other, additional, benefits led to the concept of Open Source in 1998. 
The Open Source Initiative (OSI) was founded when, "We realized it was time to dump the confrontational attitude that has been associated with 'free software' in the past and sell the idea strictly on the same pragmatic, business-case grounds that motivated Netscape" (OSI 2003). Since the open source label is intended to cover open communities and licenses beyond the GPL, they have developed a meta (more abstract) Open Source Definition (OSI 1997) which defines openness as: Free redistribution Accessible source code Permits derived works Ensures the integrity of the author's source code Prohibits discrimination against persons or groups Prohibits discrimination against fields of endeavor Prohibits NDA (Non-Disclosure Agreement) entanglements Ensures the license must not be specific to a product Ensures the license must not restrict other software Ensures the license must be technology-neutral A copyright license which is found by OSI to satisfy these requirements will be listed as a OSI certified/approved license, including the GPL of course. Substantively, Free Software and Open Source are not that different: the differences are of motivation, personality, and strategy. The FLOSS (Free/Libre and Open Source Software) survey of 2,784 Free/Open Source (F/OS) developers found that 18% of those that identified with the Free Software community and 9% of those that identified with the Open Source community considered the distinction to be 'fundamental' (Ghosh et al. 2002:55). Given the freedom of these communities, forking (a split of the community where work is taken in a different direction) is common to the development of the software and its communities. One can conceive of Open Source movement as having forked from Free Software movement. The benefits of openness are not limited to the development of software. The Internet Engineering Task Force (IETF) and World Wide Web Consortium (W3C) host the authoring of technical specifications that are publicly available and implemented by applications that must interoperably communicate over the Internet. For example, different Web servers and browsers should be able to work together using the technical specifications of HTML, which structures a Web page, and HTTP, which is used to request and send Web pages. The approach of these organizations is markedly different from the 'big S' (e.g., ISO) standards organizations which typically predicate membership on nationality and often only provide specifications for a fee. This model of openness has extended even to forms of cultural production beyond technical content. For example, the Wikipedia is a collaborative encyclopedia and the Creative Commons provides licenses and community for supporting the sharing of texts, photos, and music. Openness and Voluntariness Organization can be characterized along numerous criteria including size; public versus private ownership; criterion for membership; beneficiaries (cui bono); Hughes's voluntary, military, philanthropic, corporate, and family types; Parsons's social pattern variables; and Thompson and Tuden's decision making strategies, among others (Blau and Scott 1962:40). I posit that within the contemporary usage of the term 'open,' one can identify a number of defining characteristics as well as an implicit connotation of voluntariness. Openness The definition of an 'open' community in the previous section is extensional: describing the characteristics of Free/Open Software (F/OS), and open standards and content. 
While useful, this approach is incomplete because such a description is of products, not of the social organization of producers. For example, private firms do release F/OS software but this tells us little about how work is done 'in the open.' The approach of Tzouris was to borrow from the literature of 'epistemic' communities so as to provide four characteristics of 'free/open' communities: Shared normative and principled beliefs: refers to the shared understanding of the value-based rationale for contributing to the software. Shared causal beliefs: refers to the shared causal understanding or the reward structures. Therefore, shared causal beliefs have a coordinating effect on the development process. Shared notions of validity: refers to contributors' consensus that the adopted solution is a valid solution for the problem at hand. Common policy enterprise: refers to a common goal that can be achieved through contributing code to the software. In simple words, there is a mutual understanding, a common frame of reference of what to develop and how to do it. (Tzouris 2002:21) However, these criteria seem over-determined: it is difficult to imagine a coherent community ('open' or otherwise) that does not satisfy these requirements. Consequently, I provide an alternative set of criteria that also resists myopic notions of perfect 'openness' or 'democracy.' Very few organizations have completely homogeneous social structures. As argued in Why the Internet is Good: Community Governance That Works Well (Reagle 1999), even an organization like the IETF with the credo of, "We reject kings, presidents and voting. We believe in rough consensus and running code," has explicit authority roles and informal elders. Consequently, in the following definition of open communities there is some room for contention. An open community delivers or demonstrates: Open products: provides products which are available under licenses like those that satisfy the Open Source Definition. Transparency: makes its processes, rules, determinations, and their rationales available. Integrity: ensures the integrity of the processes and the participants' contributions. Non-discrimination: prohibits arbitrary discrimination against persons, groups, or characteristics not relevant to the community's scope of activity. Persons and proposals should be judged on their merits. Leadership should be based on meritocratic or representative processes. Non-interference: the linchpin of openness, if a constituency disagrees with the implementation of the previous three criteria, the first criteria permits them to take the products and commence work on them under their own conceptualization without interference. While 'forking' is often complained about in open communities -- it can create some redundancy/inefficiency -- it is an essential characteristic and major benefit of open communities as well. Voluntariness In addition to the models of organization referenced by Blau and Scott (1962), Amitai Etzioni describes three types of organizations: 'coercive' organizations that use physical means (or threats thereof), 'utilitarian' organizations that use material incentives, and 'normative' organizations that use symbolic awards and status. 
He also describes three types of membership: 'alienative members' feel negatively towards the organization and wish to leave, 'calculative members' weigh benefits and limitations of belonging, and 'moral members' feel positively towards the organization and may even sublimate their own needs in order to participate (Etzioni 1961). As noted by Jennifer Lois (1999:118) normative organizations are the most underrepresented type of organization discussed in the sociological literature. Even so, Etzioni's model is sufficient such that I define a -- voluntary -- community as a 'normative' organization of 'moral' members. I adopt this synonymous definition not only because it allows me to integrate the character of the members into the character of the organization, but to echo the importance of the sense of the collaborative 'gift' in discussions among members of the community. Yet, obviously, not all voluntary organizations are necessarily open according to the definition above. A voluntary community can produce proprietary products and have opaque processes -- collegiate secret societies are a silly but demonstrative example. However, like with openness, it is difficult to draw a clear line: one cannot exclusively locate open communities and their members strictly within the 'normative' and 'moral' categories, though they are dominant in the open communities I introduced. Many members of those open communities are volunteers, either because of a 'moral' inclination and/or informal 'calculative' concern with a sense of satisfaction and reputation. While the FLOSS survey concluded, "that this activity still resembles rather a hobby than salaried work" (Ghosh et al. 2002:67), 15.7% of their sample declared they do receive some renumeration for developing F/OS. Even at the IETF and W3C, where many engineers are paid to participate, it is not uncommon for some to endeavor to maintain their membership even when not employed or their employers change. The openness of these communities is perhaps dominant in describing the character of the organization, though the voluntariness is critical to understanding the moral/ideological light in which many of the members view their participation. Conclusion I've attempted to provide a definition for openness that reflects an understanding of contemporary usage. The popular connotation, and consequently the definition put forth in this essay, arises from well known examples that include -- at least in part -- a notion of voluntary effort. On further consideration, I believe we can identity a loose conceptualization of shared products, and a process of transparency, integrity, and non-discrimination. Brevity prevents me from considering variations of these characteristics and consequent claims of 'openness' in different communities. And such an exercise isn't necessary for my argument. A common behavior of an open community is the self-reflexive discourse of what it means to be open on difficult boundary cases; the test of an open community is if a constituency that is dissatisfied with the results of such a discussion can can fork (relocate) the work elsewhere. Works Cited Blau, Peter and W. Richard Scott. Formal organizations: a comparative approach. New York, NY: John Wiley, 1962. Etzioni, Amitai. Modern organizations. New York, NY: Free Press of Glencoe., 1961. Ghosh, Rishab, Ruediger Glott, Bernhard Krieger, and Gregorio Robles. Free/Libre and open source software: survey and study. 2002. http://www.infonomics.nl/FLOSS/report/ Lois, Jennifer. 
"Socialization to heroism: individualism and collectivism in a voluntary search and rescue group." Social Psychology Quarterly 62 (1999): 117-135. Nardi, Bonnie and Steve Whittaker. "The place of face-to-face communication in distributed work." Distributed Work. Ed. Pamela Hinds and Sara Kiesler. Boston, Ma: MIT Press., 2002. chapter 4. Reagle, Joseph. Why the Internet is good community governance that works well. 1999.http://cyber.law.harvard.edu/people/reagle/regulation-19990326.html Stallman, Richard. Free Software Foundation. 1996. http://www.gnu.org/fsf/fsf.html Stallman, Richard. Linux and the GNU project. 1997. http://www.gnu.org/gnu/linux-and-gnu.html Stallman, Richard. The GNU project. 1998. http://www.gnu.org/gnu/thegnuproject.html Tzouris, Menelaos. Software freedom, open software and the participant's motivation -- a multidisciplinary study. London, UK: London School of Economics and Political Science, 2002. Citation reference for this article MLA Style Reagle Jr., Joseph. "Open Content Communities." M/C Journal 7.3 (2004). <http://www.media-culture.org.au/0406/06_Reagle.rft.php>. APA Style Reagle Jr., J. (2004, Jul.) Open Content Communities, M/C Journal, 7(3), <http://www.media-culture.org.au/0406/06_Reagle.rft.php>.
APA, Harvard, Vancouver, ISO, and other styles
38

Liu, Vicky. "Seal Culture Still Remains in Electronic Commerce." M/C Journal 8, no. 2 (June 1, 2005). http://dx.doi.org/10.5204/mcj.2335.

Full text
Abstract:
History of Seal and Printing Cultures The four important Chinese inventions, the compass, gunpowder, papermaking, and printing, have far-reaching significance for human civilisation. The Chinese seal is intimately related to printing. Seals have the practical function of duplicating impressions of words or patterns. This process is conceptually very similar to printing on a small scale. Printing originated from the function of seals for making duplicated impressions, and for this reason Wang believes that seals constitute the prototype of printing. Seals in Traditional Commerce Seals in certain Asian countries, such as Taiwan and Japan, play a vital role similar to that played by signatures in Western society. In particular, the Chinese seal has been an integral part of Chinese heritage and culture. Wong states that seals usually symbolise tokens of promise in Chinese society. Ancient seals in their various forms have played a major role in information systems, in terms of authority, authentication, identification, certified proof, and authenticity, and have also been used for tamper-proofing, impression duplication, and branding purposes. To illustrate, clay sealing has been applied to folded documents to detect when sealed documents have been exposed or tampered with. Interestingly, one of the features of digital signature technology is designed to achieve the same purpose. Wong records that when the commodity economy began to develop and business transactions became more frequent, seals were used to prove that particular goods had been certified by customs. Moreover, when the goods were subject to tax by the government, seals were applied to the goods to prove the levy paid. Seals continue to be used in Chinese society as personal identification and in business transactions, official and legal documents, administrative warrants and charters. Paper-based Contract Signing with Seal Certificates In Taiwan and Japan, in certain circumstances, when two parties wish to formalise a contract, the seals of the two parties must be affixed to the contract. As Figure 1 illustrates, seal certificates are required to be attached to the signed and sealed contract for authentication, as well as for the statement of intent of a voluntary agreement, in Taiwan. Figure 1. Example of a contract attached with the seal certificates A person can have more than one seal; however, only one seal at a time is allowed to be registered with a jurisdictional registration authority. The purpose of seal registration is to prevent seal forgery and to prove the identity of the seal owner. Namely, the seal registration process aims to associate the identity of the seal owner with the seal owner’s nominated seal, through attestation by a jurisdictional registration authority. Upon confirmation of the seal registration, the registration authority issues a seal certificate bearing both the seal of the registration authority and that of the registration authority executive. Digital Signatures for Electronic Commerce Handwritten signatures and tangible ink seals are highly impractical within the electronic commerce environment. However, the shift towards electronic commerce by both the public and private sector is an inevitable trend. ‘Trust’ in electronic commerce is developed through the use of ‘digital signatures’ in conjunction with a trustworthy environment.
In principle, digital signatures are designed to simulate the functions of handwritten signatures and traditional seals for the purposes of authentication, data integrity, and non-repudiation within the electronic commerce environment. Various forms of Public Key Infrastructure (PKI) are employed to ensure the reliability of using digital signatures so as to ensure the integrity of the message. PKI does not, however, contribute in any way to the signatory’s ability to verify and approve the content of an electronic document prior to the affixation of his/her digital signature. Shortcomings of Digital Signature Scheme One of the primary problems with existing digital signatures is that a digital signature does not ’feel’ like, or resemble, a traditional seal or signature to the human observer; it does not have a recognisably individual or aesthetic quality. Historically, the authenticity of documents has always been verified by visual examination of the document. Often in legal proceedings, examination of both the affixed signature or seal as an integral part of the document will occur, as well as the detection of any possible modifications to the document. Yet, the current digital signature regime overlooks the importance of this sense of visualisation. Currently, digital signatures, such as the OpenPGP (Pretty Good Privacy) digital signature, are appended to an electronic document as a long, incomprehensible string of arbitrary characters. As shown in Figure 2, this offers no sense of identity or ownership by simple visual inspection. Figure 2. Example of a PGP signature To add to this confusion for the user, a digital signature will be different each time the user applies it. The usual digital signature is formed as an amalgam of the contents of the digital document and the user’s private key, meaning that a digital signature attached to an electronic document will vary with each document. This again represents a departure from the traditional use of the term ‘signature’. A digital signature application generates its output by firstly applying a hash algorithm over the contents of the digital document and then encrypting that hash output value using the user’s private cryptographic key of the normal dual-key pair provided by the Public Key cryptography systems. Therefore, digital signatures are not like traditional signatures which an individual can identify as being uniquely theirs, or as a recognisable identity attributable to an individual entity. New Visualised Digital Signature Scheme Liu et al. have developed the visualised digital signature scheme to enhance existing digital signature schemes through visualisation; namely, this scheme makes the intangible digital signature virtually tangible. Liu et al.’s work employs the visualised digital signature scheme with the aim of developing visualised signing and verification in electronic situations. The visualised digital signature scheme is sustained by the digital certificate containing both the certificate issuer’s and potential signer’s seal images. This thereby facilitates verification of a signer’s seal by reference to the appropriate certificate. The mechanism of ensuring the integrity and authenticity of seal images is to incorporate the signer’s seal image into an X.509 v3 certificate, as outlined in RFC 3280. Thus, visualised digital signature applications will be able to accept the visualised digital certificate for use. The data structure format of the visualised digital certificate is detailed in Liu. 
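The hash-then-sign mechanism described above can be illustrated with a short, hedged sketch in Python using the widely available cryptography package. This is only a minimal illustration of the general idea; the key pair, sample document bytes, and printed messages are hypothetical stand-ins and are not the implementation used by Liu et al.

```python
# Minimal sketch of the hash-then-sign mechanism described above, using the
# Python 'cryptography' package. Illustrative only: the key pair, sample
# document, and messages are hypothetical, not Liu et al.'s implementation.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Signer's key pair; in the visualised scheme the public key (and the seal
# images) would be carried in an X.509 v3 certificate issued by a CA.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

document = b"Example 'visually sealed' contract with an embedded seal image"

# Signing: hash the document (SHA-256 here) and encrypt the digest with the
# signer's private key, yielding the opaque signature string that the
# article describes as incomprehensible to a human reader.
signature = private_key.sign(document, padding.PKCS1v15(), hashes.SHA256())

# Verification: recompute the hash and check it against the signature with
# the signer's public key; any change to the document, including its
# embedded seal image, causes the check to fail.
try:
    public_key.verify(signature, document, padding.PKCS1v15(), hashes.SHA256())
    print("signature valid")
except InvalidSignature:
    print("signature invalid or document modified")
```

In the visualised scheme, this cryptographic check would be complemented by displaying the seal images carried in the signer's X.509 certificate, so that the verifier sees a recognisable seal rather than only the opaque signature string.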
The visualised signing and verification processes are intended to simulate traditional signing techniques incorporating visualisation. When the signer is signing the document, the user interface of the electronic contracting application should allow the signer to insert the seal from the seal image file location into the document. After the seal image object is embedded in the document, the document is referred to as a ’visually sealed’ document. The sealed document is then ready to be submitted to the digital signing process and transmitted with the signer’s digital certificate to the other party for verification. The visualised signature verification process is analogous to the traditional, sealed paper-based document with the seal certificate attached for verification. Historically, documents have always required visual evidence for verification, which highlights the need for visual evidence to facilitate rapid verification. The user interface of the electronic contracting application should display the visually sealed document together with the associated digital certificate for human verification. The verifier immediately perceives the claimed signer’s seal on the document, particularly when the signer’s seal is recognisable to the verifier. This would be the case particularly where regular business transactions between parties occur. Significantly, having both the issuing CA’s and the signer’s seal images on the digital certificate instils confidence that the signer’s public key is attested to by the CA, as shown in Figure 3. This is unlike the current digital signature verification process, which presents long, meaningless strings to the verifier. Figure 3. Example of a new digital certificate presentation Conclusions Seals have a long history accompanying the civilisation of mankind. In particular, certain business documents and government communities within seal-culture societies still require the imprints of the participating entities. Inevitably, the use of modern technologies will replace traditional seals and handwritten signatures. Many involved in implementing electronic government services and electronic commerce care little about the absence of imprints and/or signatures; however, there is concern that the population may experience difficulty in adapting to a new electronic commerce system where traditional practices have become obsolete. The purpose of the visualised digital signature scheme is to explore enhancements to existing digital signature schemes through the integration of culturally relevant features. This article highlights the experience of the use and development of Chinese seals, particularly visualised seals used in a recognition process. Importantly, seals in their various forms have played a major role in information systems for thousands of years. With the advent of electronic commerce, seal culture still remains in the digital signing environment. References Housley, R., et al. RFC 3280: Internet X.509 Public Key Infrastructure: Certificate and Certificate Revocation List (CRL) Profile. The Internet Engineering Task Force, 2002. Liu, V., et al. “Visually Sealed and Digital Signed Documents.” 27th Australasian Computer Science Conference. Dunedin, NZ: Australian Computer Science Communications, 2004. Liu, V. “Visually Sealed and Digital Signed Electronic Documents: Building on Asian Tradition.” Dissertation. Queensland University of Technology, 2004. Wang, P.Y. The Art of Seal Carving.
Taipei: Council for Cultural Planning and Development, Executive Yuan, 1991. Wong, Y.C., and H.W. Yau. The Art of Chinese Seals through the Ages. Hong Kong: The Zhejiang Provincial Museum and the Art Museum of the Chinese University of Hong Kong, 2000. Citation reference for this article MLA Style Liu, Vicky. "Seal Culture Still Remains in Electronic Commerce." M/C Journal 8.2 (2005). <http://journal.media-culture.org.au/0506/03-liu.php>. APA Style Liu, V. (Jun. 2005) "Seal Culture Still Remains in Electronic Commerce," M/C Journal, 8(2). Retrieved from <http://journal.media-culture.org.au/0506/03-liu.php>.
APA, Harvard, Vancouver, ISO, and other styles
39

Mules, Warwick. "Virtual Culture, Time and Images." M/C Journal 3, no. 2 (May 1, 2000). http://dx.doi.org/10.5204/mcj.1839.

Full text
Abstract:
Introduction The proliferation of electronic images and audiovisual forms, together with the recent expansion of Internet communication makes me wonder about the adequacy of present theoretical apparatus within the humanities and communication disciplines to explain these new phenomena and their effects on human life. As someone working roughly within a cultural and media studies framework, I have long harboured suspicions about the ability of concepts such as text, discourse and representation to give an account of the new media which does not simply reduce them to another version of earlier media forms. Many of these concepts were established during the 1970s and 80s, in the development of poststructuralism and its linguistic bias towards the analysis of literary and print media text. The application of these concepts to an electronic medium based on the visual image rather than the printed word seems somewhat perverse, and needs to be replaced by the application of other concepts drawn from a paradigm more suited for the purpose. In this brief essay, I want to explore some of the issues involved in thinking about a new cultural paradigm based on the photovisual/electronic image, to describe and critique the transformation of culture currently taking place through the accelerated uptake of new televisual, audiovisual and computer technologies. I am reminded here of the existential philosopher Heidegger's words about technology: 'the essence of technology is by no means anything technological' (Heidegger 4). For Heidegger, technology is part of the 'enframing' of the beingness which humans inhabit in various ways (Dasein). But technology itself does not constitute this beingness. This is good news for those of us (like myself) who have only a general and non-technical knowledge of the new technologies currently sweeping the globe, but who sense their profound effects on the human condition. Indeed, it suggests that technical knowledge in itself is insufficient and even inadequate to formulate appropriate questions about the relationship between technology and human being, and to the capacities of humans to respond to, and transform their technologically mediated situations. We need a new way of understanding human being as mediated by technologies, which takes into account the specific technological form in which mediation occurs today. To do this, we need new ways of conceptualising culture, and the specific kind of human subjectivity made possible within a culture conditioned by electronic media. From Material to Virtual Culture The concept of culture, as it has been predominantly understood in the humanities and associated disciplines, is based on the idea of physical presence. That is to say, culture is understood in terms of the various representations and practices that people experience within social and historical contexts defined by the living presence of one human being to another. The paradigm case here is speech-based linguistics in which all forms of communication are understood in terms of an innate subjectivity, expressed in the act of communicating something to someone else. Although privileging the site and moment of co-presence, this model does not require the speakers to be immediately present to each other in face-to-face situations, but asks only that co-presence be the ideal upon which successful acts of communication take place. 
As French philosopher Jacques Derrida has consistently argued over the last thirty years, all forms of western discourse, in one way or another, have been based on this kind of understanding of the way meanings and expressions of subject identity take place (Derrida 27ff.). A good case in point is the introductory essay by John Frow and Meaghan Morris to their edited text book Australian Cultural Studies: A Reader, where culture is defined as "a contested and conflictual set of practices of representation bound up with the processes of formation and re-formation of social groups" (xx). If culture is defined in terms of the agonistic formation of social groups through practices of representation, then there can be no way of thinking about culture outside the social as the privileged domain of human interaction. Culture is reduced to the social as a kind of paradigm limit, which is, in turn, characterised by the formation of social groups fixed in time and space. Even when an effort is made to indicate that social groups are themselves culturally constituted, as Frow and Morris go on to say, the social is nevertheless invoked again as an underlying presumption: "the social processes by which the categories of the real and of group existence are formed" (xx). In this model, social groups are formed by social processes. The task of representation and signification (the task of culture) is to draw the group together, no matter how widespread or dispersed, to make it coherent and identifiably different from other groups. Under these terms, the task of cultural analysis is to describe how this process takes place. This 'material' approach to culture normalises the social at the expense of the cultural, underpinned by a 'metaphysics of presence' whereby meaning and identity are established within a system of differential values (difference) by fixing human subjectivity in space and time. I argue that the uptake of new communication technologies makes this concept of culture obsolete. Culture now has to be understood in terms of 'virtual presence' in which the physical context of human existence is simultaneously 'doubled' and indeed proliferated into a virtual reality, with effective force in the 'real' world. From this perspective, we need to rethink culture so that it is no longer understood in terms of differential meanings, identities, texts, discourses and representational forms, but rather as a new kind of ontology involving the 'being' of human subjects and their relations to each other in deterritorialised fields of mediated co-presence, where the real and the virtual enmesh and interact. In this case, the laws governing physical presence no longer apply since it is possible to be 'here' and 'there' at the same time. We need a new approach and a new set of analytical terms to account for this new phenomenon. Virtual Culture and the Time of Human Presence In his well known critique of modern culture, Walter Benjamin invents the concept of the 'dialectical image' to define the visual concreteness of the everyday world and its effect on human consciousness. Dialectical images operate through an instantaneous flash of vision which breaks through everyday reality, allowing an influx of otherness to flood present awareness in a transformation of the past into the present: "the past can be seized only as an image which flashes up at the instant when it can be recognized and is never seen again" (Benjamin, Theses 255). 
Bypassing discourse, language and meaning, dialectical images invoke the eternal return -- the affirmation of the present as an ever-constant repetition of temporality -- as the 'ground' of history, progress and the future. Modern technology and its infinite power of reproduction has created the condition under which the image separates from its object, thereby releasing materiality from its moribund state in the past (Benjamin, The Work of Art). The ground of temporality is thus rendered virtual and evanescent, involving a 'deterritorialisation' of human experience from its ego-attachment to the present; an experience which Benjamin understands in repressed mythical terms. For Benjamin, the exemplary modern technology is photography. A photograph 'destroys' the originariness of the object, by robbing it of aura, or "the unique phenomenon of a distance, however close it may be" (Benjamin, The Work of Art 222). The photographic image is thus dialectical because it collapses the distance between the object and its image, thereby undermining the ontological space between the past and the present which might otherwise grant to the object a unique being in the presence of the viewer. But all 'things' also have their images, which can be separated and dispersed through space and time. Benjamin's approach to culture, where time surpasses space, and where the reproduced image takes priority over the real, now appears strangely prophetic. By suggesting that images are somehow directly and concretely affective in the constitution of human temporality, Benjamin has anticipated the current 'postmodern' condition in which the electronic image has become enmeshed in everyday life. As Paul Virilio argues, new communication technologies accelerate the transmission of images to such a rate that the past is collapsed into the present, creating an overpowering sense of immediacy: the speed of new optoelectronic and electroacoustic milieu becomes a final void (the void of the quick), a vacuum that no longer depends on the interval between places or things and so on the world's very extension, but on the interface of an instantaneous transmission of remote appearances, on a geographic and geometric retention in which all volume, all relief vanish. (33) Distance is now experienced in terms of its virtual proximity to the perceiving subject, in which space is no longer understood in terms of Newtonian extension, but as collapsed or compressed temporality, defined by the speed of light. In this Einsteinian world, human interaction is no longer governed by the law of non-contradiction which demands that one thing cannot be something else or somewhere else at the same time, and instead becomes 'interfacial', where the image-double enmeshes with its originary being as a co-extensive ontology based on "trans-appearance", or the effective appearance on a single horizon of two things from different space and time zones: "the direct transparence of space that enables each of us to perceive our immediate neighbours is completed by the indirect transparence of the speed-time of the electromagnetic waves that transmit our images and our voices" (Virilio 37). Like the light from some distant star which reaches earth millions of years after its explosive death, we now live in a world of remote and immediately past events, whose effects are constantly felt in real time. 
In this case the present is haunted by its past, creating a doppelgänger effect in which human being is doubled with its image in a co-extensive existence across space and time. Body Doubles Here we can no longer speak of the image as a representation, or even a signification, since the image is no longer secondary to the thing from which it is separated, nor is it a sign of anything else. Rather, we need to think of the possibility of a kind of 'image-event', incorporating both the physical reality of the human body and its image, stretched through time and space. French theorists Gilles Deleuze and Félix Guattari have developed an entire theoretical scheme to define and describe this kind of phenomenon. At one point in their magnum opus, A Thousand Plateaus: Capitalism and Schizophrenia, they introduce the concept of haecceity: a body is not defined by the form that determines it nor as a determinate substance or subject nor by the organs it possesses or the function it fulfils. On the plane of consistency, a body is defined by a longitude and a latitude: in other words the sum total of the material elements belonging to it under given relations of movement and rest, speed and slowness (longitude); the sum total of the intensive affects it is capable of at a given power or degree of potential (latitude). (260) This haecceity of the human body, as "trajectory", or "interassemblage" (262) denies the priority of an originating event or substance from which its constitutive elements could be derived. For instance photographs cease to be 'indexes' of things, and become instead part of an assemblage which includes living bodies and other forms of human presence (speech, writing, expressive signs), linked contingently into assemblages through space and time. A photographic image is just as much part of the 'beingness' of something as the thing itself; things and images are part of a perpetual process of becoming; a contingent linking of bricolage with different and diverging material expressions and effects. Thinking along these lines will get us around the problem of non-contradiction (that something cannot be both 'here' and 'there' at the same time), by extending the concept of 'thing' to include all the elements of its dispersal in time and space. Here we move from the idea of a thing as unique to itself (for instance the body as human presence) and hence subject to a logic of exchange based on scarcity and lack, to the idea of a thing as 'becoming', and subject to a logic of proliferation and excess. In this case, the unique phenomenon of human presence anchored in speech can no longer be used as a focal point to fix human subjectivity, its meanings and forms of expression, since there will be many different kinds of 'presencing' of human being, through the myriad trajectories traced out in all the practices and assemblages through time and space. A Practical Approach By thinking of culture in terms of virtual presence, we can no longer assume the existence of a bedrock foundation for human interaction based on the physical proximity of individuals to each other in time and space. Rather we need to think of culture in terms the emergence of new kinds of 'beingness', which deterritorialises human presence in different ways through the mediating power of photovisual and electronic imagery. These new kinds of beingness are not really new. 
Recent writers and cultural theorists have already described in detail the emergence of a virtual culture in the nineteenth century with the invention of photography and film, as well as various viewing devices such as the stereoscope and other staging apparatuses including the panorama and diorama (Friedberg, Batchen, Crary). Analysis of virtual culture needs to identify the various trajectories along which elements are assembled into an incessant and contingent 'becoming'. In terms of photovisual and electronic media, this can take place in different ways. By tracing the effective history of an image, it is possible to locate points at which transformations from one form to another occur, indicating different effects in different contexts through time. For instance by scanning through old magazines, you might be able to trace the 'destiny' of a particular type of image, and the kinds of meanings associated with it. Keeping in mind that an image is not a representation, but a form of affect, it might be possible to identify critical points where the image turns into its other (in fashion imagery we are now confronted with images of thin bodies suddenly becoming too thin, and hence dangerously subversive). Another approach concerns the phenomenon known as the media event, in which electronic images outstrip and overdetermine physical events in real time to which they are attached. In this case an analysis of a media event would involve the description of the interaction between events and their mediated presence, as mutually effective in real time. Recent examples here include the Gulf War and other international emergencies and conflicts in the Balkans and the 1986 coup in the Philippines, where media presence enabled images to have a direct effect on the decisions and deployment of troops and strategic activities. In certain circumstances, the conduct of warfare might now take place entirely in virtual reality (Kellner). But these 'peak events' don't really exhaust the ways in which the phenomenon of the media event inhabits and affects our everyday lives. Indeed, it might be better to characterise our entire lives as conditioned to various degrees by media eventness, as we become more and more attached and dependent on electronic imagery and communication to gain our sense of place in the world. An analysis of this kind of everyday interaction is long overdue. We can learn about the virtual through our own everyday experiences. Here I am not so much thinking of experiences to be had in futuristic apparatuses such as the virtual reality body suit and other computer generated digital environments, but the kinds of experiences of the virtual described by Benjamin in his wanderings through the streets of Berlin and Paris in the 1920s (Benjamin, One Way Street). A casual walk down the main street of any town, and a perfunctory gaze in the shop windows will trigger many interesting connections between specific elements and the assemblages through which their effects are made known. On a recent trip to Bundaberg, a country town in Queensland, I came across a mechanised doll in a jewellery store display, made up in the likeness of a watchmaker working at a miniature workbench. The constant motion of the doll's arm as it moved up and down on the bench in a simulation of work repeated the electromechanical movements of the dozens of clocks and watches displayed elsewhere in the store window, suggesting a link between the human and the machine. 
Here I was presented not only with a pleasant shop display, but also with the commodification of time itself, as an endless repetition of an interval between successive actions, acted out by the doll and its perpetual movement. My pleasure at the display was channelled through the doll and his work, as a fetishised enchantment or "fairy scene" of industrialised productivity, in which the idea of time is visualised in a specific image-material form. I can imagine many other such displays in other windows in other towns and cities, all working to reproduce this particular kind of assemblage, which constantly 'pushes' the idea-image of time as commodity into the future, so long as the displays and their associated apparatuses of marketing continue in this way rather than some other way. So my suggestion then, is to open our eyes to the virtual not as a futuristic technology, but as it already shapes and defines the world around us through time. By taking the visual appearance of things as immaterial forms with material affectivity, we allow ourselves to move beyond the limitations of physical presence, which demands that one thing cannot be something else, or somewhere else at the same time. The reduction of culture to the social should be replaced by an inquiry into the proliferation of the social through the cultural, as so many experiences of the virtual in time and space. References Bataille, Georges. Visions of Excess: Selected Writings, 1927-1939.Trans. Allan Stoekl. Minneapolis: Minnesota UP, 1985. Batchen, Geoffrey. "Spectres of Cyberspace." Afterimage 23.3. Benjamin, Walter. "Theses on the Philosophy of History." Illuminations: Essays and Reflections. Trans. Hannah Arendt. New York: Schocken, 1968. 253-64. ---. "The Work of Art in the Age of Electronic Reproduction." Illuminations: Essays and Reflections. Trans. Hannah Arendt. New York: Schocken, 1968. 217-51. ---. One Way Street and Other Writings. Trans. Edmund Jephcott and Kingsley Shorter. London: Verso, 1979. Buck-Morss, Susan. The Dialectics of Seeing: Walter Benjamin and the Arcades Project. Cambridge, Mass.: MIT P, 1997. Crary, Jonathan. Techniques of the Observer: On Vision and Modernity in the Nineteenth Century. Chicago: MIT P, 1992. Derrida, Jacques. Of Grammatology. Trans. Gayatri Spivak. Baltimore: Johns Hopkins UP, 1974. Friedberg, Anne. Window Shopping: Cinema and the Postmodern. Berkeley: U of California P, 1993. Frow, John. Time & Commodity Culture: Essays in Cultural Theory and Postmodernity. Oxford: Clarendon, 1997. Frow, John, and Meaghan Morris, eds. Australian Cultural Studies: A Reader. St. Leonards, NSW: Allen and Unwin, 1993. Heidegger, Martin. "The Question Concerning Technology." The Question Concerning Technology. Trans. William Lovitt. New York: Harper. 3-35. Kellner, Douglas. "Virilio, War and Technology." Theory, Culture & Society 16.5-6 (1999): 103-25. Sean Aylward Smith. "Where Does the Body End?" M/C: A Journal of Media and Culture 2.3 (1999). 30 Apr. 2000 <http://www.uq.edu.au/mc/9905/end.php>. Virilio, Paul. Open Sky. Trans. Julie Rose. London: Verso, 1997. Zimnik, Nina. "'Give Me a Body': Deleuze's Time Image and the Taxonomy of the Body in the Work of Gabriele Leidloff." Enculturation 2.1 (1998). <http://www.uta.edu/huma/enculturation/>. Citation reference for this article MLA style: Warwick Mules. "Virtual Culture, Time and Images: Beyond Representation." M/C: A Journal of Media and Culture 3.2 (2000). [your date of access] <http://www.api-network.com/mc/0005/images.php>. 
Chicago style: Warwick Mules, "Virtual Culture, Time and Images: Beyond Representation," M/C: A Journal of Media and Culture 3, no. 2 (2000), <http://www.api-network.com/mc/0005/images.php> ([your date of access]). APA style: Warwick Mules. (2000) Virtual culture, time and images: beyond representation. M/C: A Journal of Media and Culture 3(2). <http://www.api-network.com/mc/0005/images.php> ([your date of access]).
APA, Harvard, Vancouver, ISO, and other styles
40

Baker, Stephanie Alice, and Alexia Maddox. "From COVID-19 Treatment to Miracle Cure." M/C Journal 25, no. 1 (March 16, 2022). http://dx.doi.org/10.5204/mcj.2872.

Full text
Abstract:
Introduction Medical misinformation and conspiracies have thrived during the current infodemic as a result of the volume of information people have been exposed to during the disease outbreak. Given that SARS-CoV-2 (COVID-19) is a novel coronavirus discovered in 2019, much remains unknown about the disease. Moreover, a considerable amount of what was originally thought to be known has turned out to be inaccurate, incomplete, or based on an obsolete knowledge of the virus. It is in this context of uncertainty and confusion that conspiracies flourish. Michael Golebiewski and danah boyd’s work on ‘data voids’ highlights the ways that actors can work quickly to produce conspiratorial content to fill a void. The data void absent of high-quality data surrounding COVID-19 provides a fertile information environment for conspiracies to prosper (Chou et al.). Conspiracism is the belief that society and social institutions are secretly controlled by a powerful group of corrupt elites (Douglas et al.). Michael Barkun’s typology of conspiracy reveals three components: 1) the belief that nothing happens by accident or coincidence; 2) nothing is as it seems: the "appearance of innocence" is to be suspected; 3) the belief that everything is connected through a hidden pattern. At the heart of conspiracy theories is narrative storytelling, in particular plots involving influential elites secretly colluding to control society (Fenster). Conspiracies following this narrative playbook have flourished during the pandemic. Pharmaceutical corporations profiting from national vaccine rollouts, and the emergency powers given to governments around the world to curb the spread of coronavirus, have led some to cast these powerful commercial and State organisations as nefarious actors – 'big evil' drug companies and the ‘Deep State’ – in conspiratorial narratives. Several drugs believed to be potential treatments for COVID-19 have become entangled with conspiracy. At the start of the pandemic scientists experimented with repurposing existing drugs as potential treatments for COVID-19 because safe and effective vaccines were not yet available. A series of antimicrobials with potential activity against SARS-CoV-2 were tested in clinical trials, including lopinavir/ritonavir, favipiravir and remdesivir (Smith et al.). Only hydroxychloroquine and ivermectin transformed from potential COVID treatments into conspiracy objects. This article traces how the hydroxychloroquine and ivermectin conspiracy theories were amplified in the news media and online. It highlights how debunking processes contribute to amplification effects due to audience segmentation in the current media ecology. We conceive of these amplification and debunking processes as key components of a ‘Conspiracy Course’ (Baker and Maddox), identifying the interrelations and tensions between amplification and debunking practices as a conspiracy develops, particularly through mainstream news, social media and alternative media spaces. We do this in order to understand how medical claims about potential treatments for COVID-19 succumb to conspiracism and how we can intervene in their development and dissemination. In this article we present a commentary on how public discourse and actors surrounding two potential treatments for COVID-19: the anti-malarial drug hydroxychloroquine and the anti-parasitic drug ivermectin became embroiled in conspiracy. 
We examine public discourse and events surrounding these treatments over a 24-month period from January 2020, when the virus gained global attention, to January 2022, the time this article was submitted. Our analysis is contextually informed by an extended digital ethnography into medical misinformation, which has included social media monitoring and observational digital field work of social media sites, news media, and digital media such as blogs, podcasts, and newsletters. Our analysis focusses on the role that public figures and influencers play in amplifying these conspiracies, as well as their amplification by some wellness influencers, referred to as “alt.health influencers” (Baker), and those affiliated with the Intellectual Dark Web, many of whom occupy status in alternative media spaces. The Intellectual Dark Web (IDW) is a term used to describe an alternative influence network comprised of public intellectuals including the Canadian psychologist Jordan Peterson and the British political commentator Douglas Murray. The term was coined by the American mathematician and podcast host Eric Weinstein, who described the IDW as a group opposed to “the gated institutional narrative” of the mainstream media and the political establishment (Kelsey). As a consequence, many associated with the IDW use alternative media, including podcasts and newsletters, as an "eclectic conversational space" where those intellectual thinkers excluded from mainstream conversational spaces in media, politics, and academia can “have a much easier time talking amongst ourselves” (Kelsey). In his analysis of the IDW, Parks describes these figures as "organic" intellectuals who build identification with their audiences by branding themselves as "reasonable thinkers" and reinforcing dominant narratives of polarisation. Hence, while these influential figures are influencers in so far as they cultivate an online audience as a vocation in exchange for social, economic and political gain, they are distinct from earlier forms of micro-celebrity (Senft; Marwick) in that they do not merely achieve fame on social media among a niche community of followers, but appeal to those disillusioned with the mainstream media and politics. The IDW are contrasted not with mainstream celebrities, as is the case with earlier forms of micro-celebrity (Abidin Internet Celebrity), but with the mainstream media and politics. A public figure, on the other hand, is a “famous person” broadcast in the media. While celebrities are public figures, public figures are not necessarily celebrities; a public figure is ‘a person of great public interest or familiarity’, such as a government official, politician, entrepreneur, celebrity, or athlete. Analysis In what follows we explore the role of influencers and public figures in amplifying the hydroxychloroquine and ivermectin conspiracy theories during the pandemic. As part of this analysis, we consider how debunking processes can further amplify these conspiracies, raising important questions about how to most effectively respond to conspiracies in the current media ecology. Discussions around hydroxychloroquine and ivermectin as potential treatments for COVID-19 emerged in early 2020 at the start of the pandemic when people were desperate for a cure, and safe and effective vaccines for the virus were not yet publicly available. 
While claims concerning the promising effects of both treatments emerged in the mainstream, the drugs remained experimental COVID treatments and had not yet received widespread acceptance among scientific and medical professionals. Much of the hype around these drugs as COVID “cures” emerged from preprints not yet subject to peer review and scientific studies based on unreliable data, which were retracted due to quality issues (Mehra et al.). Public figures, influencers, and news media organisations played a key role in amplifying these narratives in the mainstream, thereby extending the audience reach of these claims. However, their transformation into conspiracy objects followed different amplification processes for each drug. Hydroxychloroquine, the “Game Changer” Hydroxychloroquine gained public attention on 17 March 2020 when the US tech entrepreneur Elon Musk shared a Google Doc with his 40 million followers on Twitter, proposing “maybe worth considering chloroquine for C19”. Musk’s tweet was liked over 50,200 times and received more than 13,500 retweets. The tweet was followed by several other tweets that day in which Musk shared a series of graphs and a paper alluding to the “potential benefit” of hydroxychloroquine in in vitro and early clinical data. Although Musk is not a medical expert, he is a public figure with status and large online following, which contributed to the hype around hydroxychloroquine as a potential treatment for COVID-19. Following Musk’s comments, search interest in chloroquine soared and mainstream media outlets covered his apparent endorsement of the drug. On 19 March 2020, the Fox News programme Tucker Carlson Tonight cited a study declaring hydroxychloroquine to have a “100% cure rate against coronavirus” (Gautret et al.). Within hours another public figure, the then-US President Donald Trump, announced at a White House Coronavirus Task Force briefing that the FDA would fast-track approval of hydroxychloroquine, a drug used to treat malaria and arthritis, which he said had, “tremendous promise based on the results and other tests”. Despite the Chief Medical Advisor to the President, Dr Anthony Fauci, disputing claims concerning the efficacy of hydroxychloroquine as a potential therapy for coronavirus as “anecdotal evidence”, Trump continued to endorse hydroxychloroquine describing the drug as a “game changer”: HYDROXYCHLOROQUINE & AZITHROMYCIN, taken together, have a real chance to be one of the biggest game changers in the history of medicine. He said that the drugs should be put in use IMMEDIATELY. PEOPLE ARE DYING, MOVE FAST, and GOD BLESS EVERYONE! Trump’s tweet was shared over 102,800 times and liked over 384,800 times. His statements correlated with a 2000% increase in prescriptions for the anti-malarial drugs hydroxychloroquine and chloroquine in the US between 15 and 21 March 2020, resulting in many lupus patients unable to source the drug. There were also reports of overdoses as individuals sought to self-medicate with the drug to treat the virus. Once Trump declared himself a proponent of hydroxychloroquine, scientific inquiry into the drug was eclipsed by an overtly partisan debate. An analysis by Media Matters found that Fox News had promoted the drug 109 times between 23 and 25 March 2020, with other right wing media outlets following suit. 
The drug was further amplified and politicised by conservative public figures including Trump’s attorney Rudy Giuliani, who claimed on 27 March 2020 that “hydroxychloroquine has been shown to have a 100% effective rate in treating COVID-19”, and Brazil’s President, Jair Bolsonaro, who shared a Facebook post on 8 July 2020 admitting to taking the drug to treat the virus: “I’m one more person for whom this is working. So I trust hydroxychloroquine”. In addition to these conservative political figures endorsing hydroxychloroquine, on 27 July 2020 the right-wing syndicated news outlet Breitbart livestreamed a video depicting America’s Frontline Doctors – a group of physicians backed by the Tea Party Patriots, a conservative political organisation supportive of Trump – at a press conference outside the US Supreme Court in Washington. In the video, Stella Immanuel, a primary care physician in Texas, said “You don’t need masks… There is prevention and there is a cure!”, explaining that Americans could resume their normal lives by preemptively taking hydroxychloroquine. The video was retweeted by public figures including President Trump and Trump’s son Donald Trump Jr., before going viral, reaching over 20 million users on Facebook. The video explicitly framed hydroxychloroquine as an effective “cure” for COVID-19 suppressed by “fake doctors”, thereby transferring it from potential treatment to a conspiracy object. These examples not only demonstrate the role of prominent public figures in amplifying conspiratorial claims about hydroxychloroquine as an effective cure for COVID-19, but also reveal how these figures converted the drug into an “article of faith” divorced from scientific evidence. Consequently, to believe in its efficacy as a cure for COVID-19 demonstrated support for Trump and ideological skepticism of the scientific and medical establishment. Ivermectin, the “Miracle Cure” Ivermectin followed a different amplification trajectory. The amplifying process was primarily led by influencers in alternative media spaces and those associated with the IDW, many of whom position themselves in contrast to the mainstream media and politics. Despite scientists conducting clinical trials for ivermectin in early 2020, the ivermectin conspiracy peaked much later that year. On 8 December 2020, the pulmonary and ICU specialist Dr. Pierre Kory testified to the US Senate Committee about I-MASK: a prevention and early outpatient treatment protocol for COVID-19. During the hearing, Kory claimed that “ivermectin is effectively a ‘miracle drug’ against COVID-19”, which could end the pandemic. Kory’s depiction of ivermectin as a panacea, and the subsequent media hype, elevated him as a public figure and led to an increase in public demand for ivermectin in early 2021. This resulted in supply issues and led some people to seek formulations of the drug designed for animals, which were in greater supply and easier to access. Several months later, in June 2021, Kory’s description of ivermectin as a “miracle cure” was amplified by a series of influencers, including Bret Weinstein and Joe Rogan, both of whom featured Kory on their podcasts as a key public figure in the fight against COVID. Conspiratorial associations with ivermectin were further amplified on 9 July 2021 when Bret Weinstein appeared on Fox Nation's Tucker Carlson Today claiming he had “been censored for raising concerns about the shots and the medical establishment's opposition to alternative treatments”.
The drug was embroiled in further controversy on 1 September 2021 when Joe Rogan shared an Instagram post explaining that he had taken ivermectin as one of many drugs to treat the virus. In the months that followed, Rogan featured several controversial scientists on his podcast who implied that ivermectin was an effective COVID “cure” suppressed as part of a global agenda to promote vaccine uptake. These public figures included Dr Robert Malone, an American physician who contributed to the development of mRNA technology, and Dr Peter McCullough, an American cardiologist with expertise in vaccines. As McCullough explained to Rogan in December 2021: it seemed to me early on that there was an intentional very comprehensive suppression of early treatment in order to promote fear, suffering, isolation, hospitalisation and death and it seemed to be completely organised and intentional in order to create acceptance for and then promote mass vaccination. McCullough went on to imply that the pandemic was planned and that vaccine manufacturers were engaged in a coordinated response to profit from mass vaccination. Consequently, whereas conservative public figures, such as Trump and Bolsonaro, played a primary role in amplifying the hype around hydroxychloroquine as a COVID cure and embroiling it in a political and conspiratorial narrative of collusion, influencers, especially those associated with alternative media and the IDW, were crucial in amplifying the ivermectin conspiracy online by platforming controversial scientists who espoused the drug as a “miracle cure”, which could allegedly end the pandemic but was being suppressed by the government and medical establishment. Debunking Debunking processes refuting the efficacy of these drugs as COVID “cures” contributed to the amplification of these conspiracies. In April 2020 the paper endorsing hydroxychloroquine that Trump tweeted about a week earlier was debunked. The debunking process for hydroxychloroquine involved a series of statements, papers, randomised clinical trials and retractions not only rejecting the efficacy of hydroxychloroquine, but suggesting it was unsafe and had the potential to cause harm (Boulware et al.; Mehra; Voss). In April 2020, the FDA released a statement cautioning against the use of hydroxychloroquine for COVID-19 outside of a hospital setting or a clinical trial due to risk of heart rhythm problems, and in June the FDA revoked its emergency use authorisation to treat COVID-19 in certain hospitalised patients. The debunking process was not limited to fact-based claims, it also involved satire and ridicule of those endorsing the drug as a treatment for COVID-19. Given the politicisation of the drug, much of this criticism was directed at Trump, as a key proponent of the drug, and Republicans in general, both of whom were cast as scientifically illiterate. The debunking process for ivermectin was similarly initiated by scientific and medical authorities who questioned the efficacy of ivermectin as a COVID-19 treatment due to reliability issues with trials and the quality of evidence (Lawrence). In response to claims that supply issues led people to seek formulations of the drug designed for animals, in April 2021 the FDA released a statement cautioning people not to take ivermectin to prevent or treat COVID-19: While there are approved uses for ivermectin in people and animals, it is not approved for the prevention or treatment of COVID-19 … . People should never take animal drugs … . 
Using these products in humans could cause serious harm. The CDC echoed this warning, claiming that “veterinary formulations intended for use in large animals such as horses, sheep, and cattle can be highly concentrated and result in overdoses when used by humans”. Many journalists and Internet users involved in debunking ivermectin reduced the drug to horse paste. Social media feeds debunking ivermectin were filled with memes ridiculing those consuming “horse dewormer”. Mockery of those endorsing ivermectin extended beyond social media, with the popular US sketch comedy show Saturday Night Live featuring a skit mocking Joe Rogan for consuming “horse medicine” to treat the virus. The skit circulated on social media in the following days, further deriding advocates of the drug as a COVID cure as not only irresponsible, but stupid. This type of ridicule, visually expressed in videos and Internet memes, fuelled polarisation. This polarisation was then weaponised by influencers associated with the IDW to sell ivermectin as a “miracle drug” suppressed by the medical and political establishment, thereby embroiling the drug further in conspiracy (Baker and Maddox). This type of opportunistic marketing is not intended for a mass audience. Instead, audiences are taking advantage of what Crystal Abidin refers to as “silosociality”, wherein content is tailored for specific subcommunities, which are not necessarily “accessible” or “legible” to outsiders (Abidin Refracted Publics 4). This dynamic both reflects and reinforces the audience segmentation that occurs in the current media ecology by virtue of alternative media with mockery and ridicule strengthening in- and out-group dynamics. Conclusion In this article we have traced how hydroxychloroquine and ivermectin moved from promising potential COVID-19 treatments to objects tainted by conspiracy. Despite common associations of conspiracy theories with the fringe, both the hydroxychloroquine and ivermectin conspiracy theories emerged in the mainstream, amplified across mainstream social networks with the help of influencers and public figures whose claims were further amplified by the news media commenting on their apparent endorsement of these drugs as COVID cures. Whereas hydroxychloroquine was politicised as a result of controversial public figures and right-wing media outlets endorsing the drug and the conspiratorial narrative espoused by America’s Frontline Doctors, notably much of the conspiracy around ivermectin shifted to alternative media spaces amplified by influencers disillusioned with the mainstream media. We have demonstrated how debunking processes, which sought to discredit these drugs as potential treatments for COVID-19, often ridiculed those who endorsed them, further polarising discussions involving these treatments and pushing advocates to the extreme. By encouraging proponents of these treatments to retreat to alternative media spaces, such as podcasts and newsletters, polarisation strengthened in-group dynamics, assisting the ability for opportunistic influencers to weaponise these conspiracies for social, economic, and political gain. These findings raise important questions about how to effectively counter conspiracies. When debunking not only refutes claims but ridicules advocates, debunking can have unintended consequences by strengthening in-group dynamics and fuelling the legitimacy of conspiratorial narratives. References Abidin, Crystal. Internet Celebrity: Understanding Fame Online. Emerald Group Publishing, 2018. 
Abidin, Crystal. "From ‘Networked Publics’ to ‘Refracted Publics’: A Companion Framework for Researching ‘below the Radar’ Studies." Social Media + Society 7.1 (2021). Baker, Stephanie Alice. "Alt.Health Influencers: How Wellness Culture and Web Culture Have Been Weaponised to Promote Conspiracy Theories and Far-Right Extremism during the COVID-19 Pandemic." European Journal of Cultural Studies 25.1 (2022): 3-24. Baker, Stephanie Alice, and Alexia Maddox. “COVID-19 Treatment or Miracle 'Cure'?: Tracking the Hydroxychloroquine, Remdesivir and Ivermectin Conspiracies on Social Media.” Paper presented at the BSA Annual Conference 2022: Building Equality and Justice Now, 20-22 April 2022. <https://www.britsoc.co.uk/media/25695/ac2022_draft_conf_prog.pdf>. Barkun, Michael. A Culture of Conspiracy. University of California Press, 2013. Boulware, David R., et al. "A Randomized Trial of Hydroxychloroquine as Postexposure Prophylaxis for Covid-19." New England Journal of Medicine 383.6 (2020): 517-525. Chou, Wen-Ying Sylvia, Anna Gaysynsky, and Robin C. Vanderpool. "The COVID-19 Misinfodemic: Moving beyond Fact-Checking." Health Education & Behavior 48.1 (2021): 9-13. Douglas, Karen M., et al. "Understanding Conspiracy Theories." Political Psychology 40 (2019): 3-35. Fenster, Mark. Conspiracy Theories: Secrecy and Power in American Culture. University of Minnesota Press, 1999. Gautret, Philippe, et al. "Hydroxychloroquine and Azithromycin as a Treatment of COVID-19: Results of an Open-Label Non-Randomized Clinical Trial." International Journal of Antimicrobial Agents 56.1 (2020): 105949. Golebiewski, Michael, and danah boyd. "Data Voids: Where Missing Data Can Easily Be Exploited." Data & Society (2019). Kelsey, Darren. "Archetypal Populism: The ‘Intellectual Dark Web’ and the ‘Peterson Paradox’." Discursive Approaches to Populism across Disciplines. Cham: Palgrave Macmillan, 2020. 171-198. Lawrence, Jack M., et al. "The Lesson of Ivermectin: Meta-Analyses Based on Summary Data Alone Are Inherently Unreliable." Nature Medicine 27.11 (2021): 1853-1854. Marwick, Alice E. Status Update. Yale University Press, 2013. Mehra, Mandeep R., et al. "RETRACTED: Hydroxychloroquine or Chloroquine with or without a Macrolide for Treatment of COVID-19: A Multinational Registry Analysis." (2020). Parks, Gabriel. "Considering the Purpose of ‘an Alternative Sense-Making Collective’: A Rhetorical Analysis of the Intellectual Dark Web." Southern Communication Journal 85.3 (2020): 178-190. Senft, Theresa M. Camgirls: Celebrity and Community in the Age of Social Networks. Peter Lang, 2008. Smith, Tim, et al. "COVID-19 Drug Therapy." Elsevier (2020). Voss, Andreas. “Official Statement from International Society of Antimicrobial Chemotherapy (ISAC).” International Society of Antimicrobial Chemotherapy 3 Apr. 2020. <https://www.isac.world/news-and-publications/official-isac-statement>.
APA, Harvard, Vancouver, ISO, and other styles
41

Dieter, Michael. "Amazon Noir." M/C Journal 10, no. 5 (October 1, 2007). http://dx.doi.org/10.5204/mcj.2709.

Full text
Abstract:
There is no diagram that does not also include, besides the points it connects up, certain relatively free or unbounded points, points of creativity, change and resistance, and it is perhaps with these that we ought to begin in order to understand the whole picture. (Deleuze, “Foucault” 37) Monty Cantsin: Why do we use a pervert software robot to exploit our collective consensual mind? Letitia: Because we want the thief to be a digital entity. Monty Cantsin: But isn’t this really blasphemic? Letitia: Yes, but god – in our case a meta-cocktail of authorship and copyright – can not be trusted anymore. (Amazon Noir, “Dialogue”) In 2006, some 3,000 digital copies of books were silently “stolen” from online retailer Amazon.com by targeting vulnerabilities in the “Search inside the Book” feature from the company’s website. Over several weeks, between July and October, a specially designed software program bombarded the Search Inside!™ interface with multiple requests, assembling full versions of texts and distributing them across peer-to-peer networks (P2P). Rather than a purely malicious and anonymous hack, however, the “heist” was publicised as a tactical media performance, Amazon Noir, produced by self-proclaimed super-villains Paolo Cirio, Alessandro Ludovico, and Ubermorgen.com. While controversially directed at highlighting the infrastructures that materially enforce property rights and access to knowledge online, the exploit additionally interrogated its own interventionist status as theoretically and politically ambiguous. That the “thief” was represented as a digital entity or machinic process (operating on the very terrain where exchange is differentiated) and the emergent act of “piracy” was fictionalised through the genre of noir conveys something of the indeterminacy or immensurability of the event. In this short article, I discuss some political aspects of intellectual property in relation to the complexities of Amazon Noir, particularly in the context of control, technological action, and discourses of freedom. Software, Piracy As a force of distribution, the Internet is continually subject to controversies concerning flows and permutations of agency. While often directed by discourses cast in terms of either radical autonomy or control, the technical constitution of these digital systems is more regularly a case of establishing structures of operation, codified rules, or conditions of possibility; that is, of guiding social processes and relations (McKenzie, “Cutting Code” 1-19). Software, as a medium through which such communication unfolds and becomes organised, is difficult to conceptualise as a result of being so event-orientated. There lies a complicated logic of contingency and calculation at its centre, a dimension exacerbated by the global scale of informational networks, where the inability to comprehend an environment that exceeds the limits of individual experience is frequently expressed through desires, anxieties, paranoia. Unsurprisingly, cautionary accounts and moral panics on identity theft, email fraud, pornography, surveillance, hackers, and computer viruses are as commonplace as those narratives advocating user interactivity. 
When analysing digital systems, cultural theory often struggles to describe forces that dictate movement and relations between disparate entities composed by code, an aspect heightened by the intensive movement of informational networks where differences are worked out through the constant exposure to unpredictability and chance (Terranova, “Communication beyond Meaning”). Such volatility partially explains the recent turn to distribution in media theory, as once durable networks for constructing economic difference – organising information in space and time (“at a distance”), accelerating or delaying its delivery – appear contingent, unstable, or consistently irregular (Cubitt 194). Attributing actions to users, programmers, or the software itself is a difficult task when faced with these states of co-emergence, especially in the context of sharing knowledge and distributing media content. Exchanges between corporate entities, mainstream media, popular cultural producers, and legal institutions over P2P networks represent an ongoing controversy in this respect, with numerous stakeholders competing between investments in property, innovation, piracy, and publics. Beginning to understand this problematic landscape is an urgent task, especially in relation to the technological dynamics that organised and propel such antagonisms. In the influential fragment, “Postscript on the Societies of Control,” Gilles Deleuze describes the historical passage from modern forms of organised enclosure (the prison, clinic, factory) to the contemporary arrangement of relational apparatuses and open systems as being materially provoked by – but not limited to – the mass deployment of networked digital technologies. In his analysis, the disciplinary mode most famously described by Foucault is spatially extended to informational systems based on code and flexibility. According to Deleuze, these cybernetic machines are connected into apparatuses that aim for intrusive monitoring: “in a control-based system nothing’s left alone for long” (“Control and Becoming” 175). Such a constant networking of behaviour is described as a shift from “molds” to “modulation,” where controls become “a self-transmuting molding changing from one moment to the next, or like a sieve whose mesh varies from one point to another” (“Postscript” 179). Accordingly, the crisis underpinning civil institutions is consistent with the generalisation of disciplinary logics across social space, forming an intensive modulation of everyday life, but one ambiguously associated with socio-technical ensembles. The precise dynamics of this epistemic shift are significant in terms of political agency: while control implies an arrangement capable of absorbing massive contingency, a series of complex instabilities actually mark its operation. Noise, viral contamination, and piracy are identified as key points of discontinuity; they appear as divisions or “errors” that force change by promoting indeterminacies in a system that would otherwise appear infinitely calculable, programmable, and predictable. The rendering of piracy as a tactic of resistance, a technique capable of levelling out the uneven economic field of global capitalism, has become a predictable catch-cry for political activists. 
In their analysis of multitude, for instance, Antonio Negri and Michael Hardt describe the contradictions of post-Fordist production as conjuring forth a tendency for labour to “become common.” That is, as productivity depends on flexibility, communication, and cognitive skills, directed by the cultivation of an ideal entrepreneurial or flexible subject, the greater the possibilities for self-organised forms of living that significantly challenge its operation. In this case, intellectual property exemplifies such a spiralling paradoxical logic, since “the infinite reproducibility central to these immaterial forms of property directly undermines any such construction of scarcity” (Hardt and Negri 180). The implications of the filesharing program Napster, accordingly, are read as not merely directed toward theft, but in relation to the private character of the property itself; a kind of social piracy is perpetuated that is viewed as radically recomposing social resources and relations. Ravi Sundaram, a co-founder of the Sarai new media initiative in Delhi, has meanwhile drawn attention to the existence of “pirate modernities” capable of being actualised when individuals or local groups gain illegitimate access to distributive media technologies; these are worlds of “innovation and non-legality,” of electronic survival strategies that partake in cultures of dispersal and escape simple classification (94). Meanwhile, pirate entrepreneurs Magnus Eriksson and Rasmus Fleische – associated with the notorious Piratbyrån – have promoted the bleeding away of Hollywood profits through fully deployed P2P networks, with the intention of pushing filesharing dynamics to an extreme in order to radicalise the potential for social change (“Copies and Context”). From an aesthetic perspective, such activist theories are complemented by the affective register of appropriation art, a movement broadly conceived in terms of antagonistically liberating knowledge from the confines of intellectual property: “those who pirate and hijack owned material, attempting to free information, art, film, and music – the rhetoric of our cultural life – from what they see as the prison of private ownership” (Harold 114). These “unruly” escape attempts are pursued through various modes of engagement, from experimental performances with legislative infrastructures (i.e. Kembrew McLeod’s patenting of the phrase “freedom of expression”) to musical remix projects, such as the work of Negativland, John Oswald, RTMark, Detritus, Illegal Art, and the Evolution Control Committee. Amazon Noir, while similarly engaging with questions of ownership, is distinguished by specifically targeting information communication systems and finding “niches” or gaps between overlapping networks of control and economic governance. Hans Bernhard and Lizvlx from Ubermorgen.com (meaning ‘Day after Tomorrow,’ or ‘Super-Tomorrow’) actually describe their work as “research-based”: “we not are opportunistic, money-driven or success-driven, our central motivation is to gain as much information as possible as fast as possible as chaotic as possible and to redistribute this information via digital channels” (“Interview with Ubermorgen”). This has led to experiments like Google Will Eat Itself (2005) and the construction of the automated software thief against Amazon.com, as process-based explorations of technological action. 
Agency, Distribution Deleuze’s “postscript” on control has proven massively influential for new media art by introducing a series of key questions on power (or desire) and digital networks. As a social diagram, however, control should be understood as a partial rather than totalising map of relations, referring to the augmentation of disciplinary power in specific technological settings. While control is a conceptual regime that refers to open-ended terrains beyond the architectural locales of enclosure, implying a move toward informational networks, data solicitation, and cybernetic feedback, there remains a peculiar contingent dimension to its limits. For example, software code is typically designed to remain cycling until user input is provided. There is a specifically immanent and localised quality to its actions that might be taken as exemplary of control as a continuously modulating affective materialism. The outcome is a heightened sense of bounded emergencies that are either flattened out or absorbed through reconstitution; however, these are never linear gestures of containment. As Tiziana Terranova observes, control operates through multilayered mechanisms of order and organisation: “messy local assemblages and compositions, subjective and machinic, characterised by different types of psychic investments, that cannot be the subject of normative, pre-made political judgments, but which need to be thought anew again and again, each time, in specific dynamic compositions” (“Of Sense and Sensibility” 34). This event-orientated vitality accounts for the political ambitions of tactical media as opening out communication channels through selective “transversal” targeting. Amazon Noir, for that reason, is pitched specifically against the material processes of communication. The system used to harvest the content from “Search inside the Book” is described as “robot-perversion-technology,” based on a network of four servers around the globe, each with a specific function: one located in the United States that retrieved (or “sucked”) the books from the site, one in Russia that injected the assembled documents onto P2P networks and two in Europe that coordinated the action via intelligent automated programs (see “The Diagram”). According to the “villains,” the main goal was to steal all 150,000 books from Search Inside!™ then use the same technology to steal books from the “Google Print Service” (the exploit was limited only by the amount of technological resources financially available, but there are apparent plans to improve the technique by reinvesting the money received through the settlement with Amazon.com not to publicise the hack). In terms of informational culture, this system resembles a machinic process directed at redistributing copyright content; “The Diagram” visualises key processes that define digital piracy as an emergent phenomenon within an open-ended and responsive milieu. That is, the static image foregrounds something of the activity of copying being a technological action that complicates any analysis focusing purely on copyright as content. In this respect, intellectual property rights are revealed as being entangled within information architectures as communication management and cultural recombination – dissipated and enforced by a measured interplay between openness and obstruction, resonance and emergence (Terranova, “Communication beyond Meaning” 52). 
To understand data distribution requires an acknowledgement of these underlying nonhuman relations that allow for such informational exchanges. It requires an understanding of the permutations of agency carried along by digital entities. According to Lawrence Lessig’s influential argument, code is not merely an object of governance, but has an overt legislative function itself. Within the informational environments of software, “a law is defined, not through a statute, but through the code that governs the space” (20). These points of symmetry are understood as concretised social values: they are material standards that regulate flow. Similarly, Alexander Galloway describes computer protocols as non-institutional “etiquette for autonomous agents,” or “conventional rules that govern the set of possible behavior patterns within a heterogeneous system” (7). In his analysis, these agreed-upon standardised actions operate as a style of management fostered by contradiction: progressive though reactionary, encouraging diversity by striving for the universal, synonymous with possibility but completely predetermined, and so on (243-244). Needless to say, political uncertainties arise from a paradigm that generates internal material obscurities through a constant twinning of freedom and control. For Wendy Hui Kyong Chun, these Cold War systems subvert the possibilities for any actual experience of autonomy by generalising paranoia through constant intrusion and reducing social problems to questions of technological optimisation (1-30). In confrontation with these seemingly ubiquitous regulatory structures, cultural theory requires a critical vocabulary differentiated from computer engineering to account for the sociality that permeates through and concatenates technological realities. In his recent work on “mundane” devices, software and code, Adrian McKenzie introduces a relevant analytic approach in the concept of technological action as something that both abstracts and concretises relations in a diffusion of collective-individual forces. Drawing on the thought of French philosopher Gilbert Simondon, he uses the term “transduction” to identify a key characteristic of technology in the relational process of becoming, or ontogenesis. This is described as bringing together disparate things into composites of relations that evolve and propagate a structure throughout a domain, or “overflow existing modalities of perception and movement on many scales” (“Impersonal and Personal Forces in Technological Action” 201). Most importantly, these innovative diffusions or contagions occur by bridging states of difference or incompatibilities. Technological action, therefore, arises from a particular type of disjunctive relation between an entity and something external to itself: “in making this relation, technical action changes not only the ensemble, but also the form of life of its agent. Abstraction comes into being and begins to subsume or reconfigure existing relations between the inside and outside” (203). Here, reciprocal interactions between two states or dimensions actualise disparate potentials through metastability: an equilibrium that proliferates, unfolds, and drives individuation. While drawing on cybernetics and dealing with specific technological platforms, McKenzie’s work can be extended to describe the significance of informational devices throughout control societies as a whole, particularly as a predictive and future-orientated force that thrives on staged conflicts. 
Moreover, being a non-deterministic technical theory, it additionally speaks to new tendencies in regimes of production that harness cognition and cooperation through specially designed infrastructures to enact persistent innovation without any end-point, final goal or natural target (Thrift 283-295). Here, the interface between intellectual property and reproduction can be seen as a site of variation that weaves together disparate objects and entities by imbrication in social life itself. These are specific acts of interference that propel relations toward unforeseen conclusions by drawing on memories, attention spans, material-technical traits, and so on. The focus lies on performance, context, and design “as a continual process of tuning arrived at by distributed aspiration” (Thrift 295). This latter point is demonstrated in recent scholarly treatments of filesharing networks as media ecologies. Kate Crawford, for instance, describes the movement of P2P as processual or adaptive, comparable to technological action, marked by key transitions from partially decentralised architectures such as Napster, to the fully distributed systems of Gnutella and seeded swarm-based networks like BitTorrent (30-39). Each of these technologies can be understood as a response to various legal incursions, producing radically dissimilar socio-technological dynamics and emergent trends for how agency is modulated by informational exchanges. Indeed, even these aberrant formations are characterised by modes of commodification that continually spill over and feed back on themselves, repositioning markets and commodities in doing so, from MP3s to iPods, P2P to broadband subscription rates. However, one key limitation of this ontological approach is apparent when dealing with the sheer scale of activity involved, where mass participation elicits certain degrees of obscurity and relative safety in numbers. This represents an obvious problem for analysis, as dynamics can easily be identified in the broadest conceptual sense, without any understanding of the specific contexts of usage, political impacts, and economic effects for participants in their everyday consumptive habits. Large-scale distributed ensembles are “problematic” in their technological constitution, as a result. They are sites of expansive overflow that provoke an equivalent individuation of thought, as the Recording Industry Association of America observes on their educational website: “because of the nature of the theft, the damage is not always easy to calculate but not hard to envision” (“Piracy”). The politics of the filesharing debate, in this sense, depends on the command of imaginaries; that is, being able to conceptualise an overarching structural consistency to a persistent and adaptive ecology. As a mode of tactical intervention, Amazon Noir dramatises these ambiguities by framing technological action through the fictional sensibilities of narrative genre. Ambiguity, Control The extensive use of imagery and iconography from “noir” can be understood as an explicit reference to the increasing criminalisation of copyright violation through digital technologies. However, the term also refers to the indistinct or uncertain effects produced by this tactical intervention: who are the “bad guys” or the “good guys”? Are positions like ‘good’ and ‘evil’ (something like freedom or tyranny) so easily identified and distinguished? 
As Paolo Cirio explains, this political disposition is deliberately kept obscure in the project: “it’s a representation of the actual ambiguity about copyright issues, where every case seems to lack a moral or ethical basis” (“Amazon Noir Interview”). While user communications made available on the site clearly identify culprits (describing the project as jeopardising arts funding, as both irresponsible and arrogant), the self-description of the artists as political “failures” highlights the uncertainty regarding the project’s qualities as a force of long-term social renewal: Lizvlx from Ubermorgen.com had daily shootouts with the global mass-media, Cirio continuously pushed the boundaries of copyright (books are just pixels on a screen or just ink on paper), Ludovico and Bernhard resisted kickback-bribes from powerful Amazon.com until they finally gave in and sold the technology for an undisclosed sum to Amazon. Betrayal, blasphemy and pessimism finally split the gang of bad guys. (“Press Release”) Here, the adaptive and flexible qualities of informatic commodities and computational systems of distribution are knowingly posited as critical limits; in a certain sense, the project fails technologically in order to succeed conceptually. From a cynical perspective, this might be interpreted as guaranteeing authenticity by insisting on the useless or non-instrumental quality of art. However, through this process, Amazon Noir illustrates how forces confined as exterior to control (virality, piracy, noncommunication) regularly operate as points of distinction to generate change and innovation. Just as hackers are legitimately employed to challenge the durability of network exchanges, malfunctions are relied upon as potential sources of future information. Indeed, the notion of demonstrating ‘autonomy’ by illustrating the shortcomings of software is entirely consistent with the logic of control as a modulating organisational diagram. These so-called “circuit breakers” are positioned as points of bifurcation that open up new systems and encompass a more general “abstract machine” or tendency governing contemporary capitalism (Parikka 300). As a consequence, the ambiguities of Amazon Noir emerge not just from the contrary articulation of intellectual property and digital technology, but additionally through the concept of thinking “resistance” simultaneously with regimes of control. This tension is apparent in Galloway’s analysis of the cybernetic machines that are synonymous with the operation of Deleuzian control societies – i.e. “computerised information management” – where tactical media are posited as potential modes of contestation against the tyranny of code, “able to exploit flaws in protocological and proprietary command and control, not to destroy technology, but to sculpt protocol and make it better suited to people’s real desires” (176). While pushing a system into a state of hypertrophy to reform digital architectures might represent a possible technique that produces a space through which to imagine something like “our” freedom, it still leaves unexamined the desire for reformation itself as nurtured by and produced through the coupling of cybernetics, information theory, and distributed networking. This draws into focus the significance of McKenzie’s Simondon-inspired cybernetic perspective on socio-technological ensembles as being always-already predetermined by and driven through asymmetries or difference. 
As Chun observes, consequently, there is no paradox between resistance and capture since “control and freedom are not opposites, but different sides of the same coin: just as discipline served as a grid on which liberty was established, control is the matrix that enables freedom as openness” (71). Why “openness” should be so readily equated with a state of being free represents a major unexamined presumption of digital culture, and leads to the associated predicament of attempting to think of how this freedom has become something one cannot not desire. If Amazon Noir has political currency in this context, however, it emerges from a capacity to recognise how informational networks channel desire, memories, and imaginative visions rather than just cultivated antagonisms and counterintuitive economics. As a final point, it is worth observing that the project was initiated without publicity until the settlement with Amazon.com. There is, as a consequence, nothing to suggest that this subversive “event” might have actually occurred, a feeling heightened by the abstractions of software entities. To the extent that we believe in “the big book heist,” that such an act is even possible, is a gauge through which the paranoia of control societies is illuminated as a longing or desire for autonomy. As Hakim Bey observes in his conceptualisation of “pirate utopias,” such fleeting encounters with the imaginaries of freedom flow back into the experience of the everyday as political instantiations of utopian hope. Amazon Noir, with all its underlying ethical ambiguities, presents us with a challenge to rethink these affective investments by considering our profound weaknesses to master the complexities and constant intrusions of control. It provides an opportunity to conceive of a future that begins with limits and limitations as immanently central, even foundational, to our deep interconnection with socio-technological ensembles. References “Amazon Noir – The Big Book Crime.” http://www.amazon-noir.com/>. Bey, Hakim. T.A.Z.: The Temporary Autonomous Zone, Ontological Anarchy, Poetic Terrorism. New York: Autonomedia, 1991. Chun, Wendy Hui Kyong. Control and Freedom: Power and Paranoia in the Age of Fibre Optics. Cambridge, MA: MIT Press, 2006. Crawford, Kate. “Adaptation: Tracking the Ecologies of Music and Peer-to-Peer Networks.” Media International Australia 114 (2005): 30-39. Cubitt, Sean. “Distribution and Media Flows.” Cultural Politics 1.2 (2005): 193-214. Deleuze, Gilles. Foucault. Trans. Seán Hand. Minneapolis: U of Minnesota P, 1986. ———. “Control and Becoming.” Negotiations 1972-1990. Trans. Martin Joughin. New York: Columbia UP, 1995. 169-176. ———. “Postscript on the Societies of Control.” Negotiations 1972-1990. Trans. Martin Joughin. New York: Columbia UP, 1995. 177-182. Eriksson, Magnus, and Rasmus Fleische. “Copies and Context in the Age of Cultural Abundance.” Online posting. 5 June 2007. Nettime 25 Aug 2007. Galloway, Alexander. Protocol: How Control Exists after Decentralization. Cambridge, MA: MIT Press, 2004. Hardt, Michael, and Antonio Negri. Multitude: War and Democracy in the Age of Empire. New York: Penguin Press, 2004. Harold, Christine. OurSpace: Resisting the Corporate Control of Culture. Minneapolis: U of Minnesota P, 2007. Lessig, Lawrence. Code and Other Laws of Cyberspace. New York: Basic Books, 1999. McKenzie, Adrian. Cutting Code: Software and Sociality. New York: Peter Lang, 2006. ———. 
“The Strange Meshing of Impersonal and Personal Forces in Technological Action.” Culture, Theory and Critique 47.2 (2006): 197-212. Parikka, Jussi. “Contagion and Repetition: On the Viral Logic of Network Culture.” Ephemera: Theory & Politics in Organization 7.2 (2007): 287-308. “Piracy Online.” Recording Industry Association of America. 28 Aug 2007. <http://www.riaa.com/physicalpiracy.php>. Sundaram, Ravi. “Recycling Modernity: Pirate Electronic Cultures in India.” Sarai Reader 2001: The Public Domain. Delhi: Sarai Media Lab, 2001. 93-99. <http://www.sarai.net>. Terranova, Tiziana. “Communication beyond Meaning: On the Cultural Politics of Information.” Social Text 22.3 (2004): 51-73. ———. “Of Sense and Sensibility: Immaterial Labour in Open Systems.” DATA Browser 03 – Curating Immateriality: The Work of the Curator in the Age of Network Systems. Ed. Joasia Krysa. New York: Autonomedia, 2006. 27-38. Thrift, Nigel. “Re-inventing Invention: New Tendencies in Capitalist Commodification.” Economy and Society 35.2 (2006): 279-306. Citation reference for this article MLA Style Dieter, Michael. "Amazon Noir: Piracy, Distribution, Control." M/C Journal 10.5 (2007). <http://journal.media-culture.org.au/0710/07-dieter.php>. APA Style Dieter, M. (Oct. 2007) "Amazon Noir: Piracy, Distribution, Control," M/C Journal, 10(5). Retrieved from <http://journal.media-culture.org.au/0710/07-dieter.php>.
APA, Harvard, Vancouver, ISO, and other styles
42

Guerra, Michele. "Cinema as a form of composition." TECHNE - Journal of Technology for Architecture and Environment, May 25, 2021, 51–57. http://dx.doi.org/10.36253/techne-10979.

Full text
Abstract:
Technique and creativity Having been called upon to provide a contribution to a publication dedicated to “Techne”, I feel it is fitting to start from the theme of technique, given that for too many years now, we have fruitlessly attempted to understand the inner workings of cinema whilst disregarding the element of technique. And this has posed a significant problem in our field of study, as it would be impossible to gain a true understanding of what cinema is without immersing ourselves in the technical and industrial culture of the 19th century. It was within this culture that a desire was born: to mould the imaginary through the new techniques of reproduction and transfiguration of reality through images. Studying the development of the so-called “pre-cinema” – i.e. the period up to the conventional birth of cinema on 28 December 1895 with the presentation of the Cinématographe Lumière – we discover that the technical history of cinema is not only almost more enthralling than its artistic and cultural history, but that it contains all the great theoretical, philosophical and scientific insights that we need to help us understand the social, economic and cultural impact that cinema had on the culture of the 20th century. At the 1900 Paris Exposition, when cinema had already existed in some form for a few years, when the first few short films of narrative fiction also already existed, the cinematograph was placed in the Pavilion of Technical Discoveries, to emphasise the fact that the first wonder, this element of unparalleled novelty and modernity, was still there, in technique, in this marvel of innovation and creativity. I would like to express my idea through the words of Franco Moretti, who claims in one of his most recent works that it is only possible to understand form through the forces that pulsate through it and press on it from beneath, finally allowing the form itself to come to the surface and make itself visible and comprehensible to our senses. As such, the cinematic form – that which appears on the screen, that which is now so familiar to us, that which each of us has now internalised, that has even somehow become capable of configuring our way of thinking, imagining, dreaming – that form is underpinned by forces that allow it to eventually make its way onto the screen and become artistic and narrative substance. And those forces are the forces of technique, the forces of industry, the economic, political and social forces without which we could never hope to understand cinema. One of the issues that I always make a point of addressing in the first few lessons with my students is that if they think that the history of cinema is made up of films, directors, narrative plots to be understood, perhaps even retold in some way, then they are entirely on the wrong track; if, on the other hand, they understand that it is the story of an institution with economic, political and social drivers within it that can, in some way, allow us to come to the great creators, the great titles, but that without a firm grasp of those drivers, there is no point in even attempting to explore it, then they are on the right track. 
As I see it, cinema in the twentieth century was a great democratic, interclassist laboratory such as no other art has ever been, and this occurred thanks to the fact that what underpinned it was an industrial reasoning: it had to respond to the capital invested in it, it had to make money, and as such, it had to reach the largest possible number of people, immersing it into a wholly unprecedented relational situation. The aim was to be as inclusive as possible, ultimately giving rise to the idea that cinema could not be autonomous, as other forms of art could be, but that it must instead be able to negotiate all the various forces acting upon it, pushing it in every direction. This concept of negotiation is one which has been explored in great detail by one of the greatest film theorists of our modern age, Francesco Casetti. In a 2005 book entitled “Eye of the Century”, which I consider to be a very important work, Casetti actually argues that cinema has proven itself to be the art form most capable of adhering to the complexity and fast pace of the short century, and that it is for this very reason that its golden age (in the broadest sense) can be contained within the span of just a hundred years. The fact that cinema was the true epistemological driving force of 20th-century modernity – a position now usurped by the Internet – is not, in my opinion, something that diminishes the strength of cinema, but rather an element of even greater interest. Casetti posits that cinema was the great negotiator of new cultural needs, of the need to look at art in a different way, of the willingness to adapt to technique and technology: indeed, the form of cinema has always changed according to the techniques and technologies that it has brought to the table or established a dialogue with on a number of occasions. Barry Salt, whose background is in physics, wrote an important book – publishing it at his own expense, as a mark of how difficult it is to work in certain fields – entitled “Film Style and Technology”, in which he calls upon us to stop writing the history of cinema starting from the creators, from the spirit of the time, from the great cultural and historical questions, and instead to start afresh by following the techniques available over the course of its development. Throughout the history of cinema, the creation of certain films has been the result of a particular set of technical conditions: having a certain type of film, a certain type of camera, only being able to move in a certain way, needing a certain level of lighting, having an entire arsenal of equipment that was very difficult to move and handle; and as the equipment, medium and techniques changed and evolved over the years, so too did the type of cinema that we were able to make. This means framing the history of cinema and film theory in terms of the techniques that were available, and starting from there: of course, whilst Barry Salt’s somewhat provocative suggestion by no means cancels out the entire cultural, artistic and aesthetic discourse in cinema – which remains fundamental – it nonetheless raises an interesting point, as if we fail to consider the methods and techniques of production, we will probably never truly grasp what cinema is. These considerations also help us to understand just how vast the “construction site” of cinema is – the sort of “factory” that lies behind the production of any given film. 
Erwin Panofsky wrote a single essay on cinema in the 1930s entitled “Style and Medium in the Motion Pictures” – a very intelligent piece, as one would expect from Panofsky – in which at a certain point, he compares the construction site of the cinema to those of Gothic cathedrals, which were also under an immense amount of pressure from different forces, namely religious ones, but also socio-political and economic forces which ultimately shaped – in the case of the Gothic cathedral and its development – an idea of the relationship between the earth and the otherworldly. The same could be said for cinema, because it also involves starting with something very earthly, very grounded, which is then capable of unleashing an idea of imaginary metamorphosis. Some scholars, such as Edgar Morin, will say that cinema is increasingly becoming the new supernatural, the world of contemporary gods, as religion gradually gives way to other forms of deification. Panofsky’s image is a very focused one: by making film production into a construction site, which to all intents and purposes it is, he leads us to understand that there are different forces at work, represented by a producer, a scriptwriter, a director, but also a workforce, the simple labourers, as is always the case in large construction sites, calling into question the idea of who the “creator” truly is. So much so that cinema, now more than ever before, is reconsidering the question of authorship, moving towards a “history of cinema without names” in an attempt to combat the “policy of the author” which, in the 1950s, especially in France, identified the director as the de facto author of the film. Today, we are still in that position, with the director still considered the author of the film, but that was not always so: back in the 1910s, in the United States, the author of the film was the scriptwriter, the person who wrote it (as is now the case for TV series, where they have once again taken pride of place as the showrunner, the creator, the true author of the series, and nobody remembers the names of the directors of the individual episodes); or at times, it can be the producer, as was the case for a long time when the Oscar for Best Picture, for example, was accepted by the producer in their capacity as the commissioner, as the “owner” of the work. As such, the theme of authorship is a very controversial one indeed, but one which helps us to understand the great meeting of minds that goes into the production of a film, starting with the technicians, of course, but also including the actors. Occasionally, a film is even attributed to the name of a star, almost as if to declare that that film is theirs, in that it is their body and their talent as an actor lending it a signature that provides far more of a draw to audiences than the name of the director does. In light of this, the theme of authorship, which Panofsky raised in the 1930s through the example of the Gothic cathedral, which ultimately does not have a single creator, is one which uses the image of the construction site to also help us to better understand what kind of development a film production can go through and to what extent this affects its critical and historical reception; as such, grouping films together based on their director means doing something that, whilst certainly not incorrect in itself, precludes other avenues of interpretation and analysis which could have favoured or could still favour a different reading of the “cinematographic construction site”. 
Design and execution The great classic Hollywood film industry was a model that, although it no longer exists in the same form today, unquestionably made an indelible mark at a global level on the history not only of cinema, but more broadly, of the culture of the 20th century. The industry involved a very strong vertical system resembling an assembly line, revolving around producers, who had a high level of decision-making autonomy and a great deal of expertise, often inclined towards a certain genre of film and therefore capable of bringing together the exact kinds of skills and visions required to make that particular film. The history of classic American cinema is one that can also be reconstructed around the units that these producers would form. The “majors”, along with the so-called “minors”, were put together like football teams, with a chairman flanked by figures whom we would nowadays refer to as a sporting director and a managing director, who built the team based on specific ideas, “buying” directors, scriptwriters, scenographers, directors of photography, and even actors and actresses who generally worked almost exclusively for their major – although they could occasionally be “loaned out” to other studios. This system led to a very marked characterisation and allowed for the film to be designed in a highly consistent, recognisable way in an age when genres reigned supreme and there was the idea that in order to keep the audience coming back, it was important to provide certain reassurances about what they would see: anyone going to see a Western knew what sorts of characters and storylines to expect, with the same applying to a musical, a crime film, a comedy, a melodrama, and so on. The star system served to fuel this working method, with these major actors also representing both forces and materials in the hands of an approach to the filmmaking which had the ultimate objective of constructing the perfect film, in which everything had to function according to a rule rooted in both the aesthetic and the economic. Gore Vidal wrote that from 1939 onwards, Hollywood did not produce a single “wrong” film: indeed, whilst certainly hyperbolic, this claim confirms that that system produced films that were never wrong, never off-key, but instead always perfectly in tune with what the studios wished to achieve. Whilst this long-entrenched system of yesteryear ultimately imploded due to certain historical phenomena that determined it to be outdated, the way of thinking about production has not changed all that much, with film design remaining tied to a professional approach that is still rooted within it. The overwhelming majority of productions still start from a system which analyses the market and the possible economic impact of the film, before even starting to tackle the various steps that lead up to the creation of the film itself. Following production systems and the ways in which they have changed, in terms of both the technology and the cultural contexts, also involves taking stock of the still considerable differences that exist between approaches to filmmaking in different countries, or indeed the similarities linking highly disparate economic systems (consider, for example, India’s “Bollywood” or Nigeria’s “Nollywood”: two incredibly strong film industries that we are not generally familiar with as they lack global distribution, although they are built very solidly). 
In other words, any attempt to study Italian cinema and American cinema – to stay within this double field – with the same yardstick is unthinkable, precisely because the context of their production and design is completely different. Composition and innovation Studying the publications on cinema in the United States in the early 1900s – which, from about 1911 to 1923, offers us a revealing insight into the attempts made to garner an in-depth understanding of how this new storytelling machine worked and the development of the first real cultural industry of the modern age – casts light on the centrality of the issues of design and composition. I remain convinced that without reading and understanding that debate, it is very difficult to understand why cinema is as we have come to be familiar with it today. Many educational works investigated the inner workings of cinema, and some, having understood them, suggested that they were capable of teaching others to do so. These publications have almost never been translated into Italian and remain seldom studied even in the US, and yet they are absolutely crucial for understanding how cinema established itself on an industrial and aesthetic level. There are two key words that crop up time and time again in these books, the first being “action”, one of the first words uttered when a film starts rolling: “lights, camera, action”. This collection of terms is interesting in that “motore” highlights the presence of a machine that has to be started up, followed by “action”, which expresses that something must happen at that moment in front of that machine, otherwise the film will not exist. As such, “action” – a term to which I have devoted some of my studies – is a fundamental word here in that it represents a sort of moment of birth of the film that is very clear – tangible, even. The other word is “composition”, and this is an even more interesting word with a history that deserves a closer look: the first professor of cinema in history, Victor Oscar Freeburg (I edited the Italian translation of his textbook “The Art of Photoplay Making”, published in 1918), took up his position at Columbia University in 1915 and, in doing so, took on the task of teaching the first ever university course in cinema. Whilst Freeburg was, for his time, a very well-educated and highly-qualified person, having studied at Yale and then obtained his doctorate in theatre at Columbia, cinema was not entirely his field of expertise. He was asked to teach a course entitled “Photoplay Writing”. At the time, a film was known as a “photoplay”, in that it was a photographed play of sorts, and the fact that the central topic of the course was photoplay writing makes it clear that back then, the scriptwriter was considered the main author of the work. From this point of view, it made sense to entrust the teaching of cinema to an expert in theatre, based on the idea that it was useful to first and foremost teach a sort of photographable dramaturgy. However, upon arriving at Columbia, Freeburg soon realised whilst preparing his course that “photoplay writing” risked misleading the students, as it is not enough to simply write a story in order to make a film; as such, he decided to change the title of his course to “photoplay composition”. This apparently minor alteration, from “writing” to “composition”, in fact marked a decisive conceptual shift in that it highlighted that it was no longer enough to merely write: one had to “compose”. 
So it was that the author of a film became, according to Freeburg, not the scriptwriter or director, but the “cinema composer” (a term of his own coinage), thus directing and broadening the concept of composition towards music, on the one hand, and architecture, on the other. We are often inclined to think that cinema has inherited expressive modules that come partly from literature, partly from theatre and partly from painting, but in actual fact, what Freeburg helps us to understand is that there are strong elements of music and architecture in a film, emphasising the lofty theme of the project. In his book, he explores at great length the relationship between static and dynamic forms in cinema, a topic that few have ever addressed in that way and that again, does not immediately spring to mind as applicable to a film. I believe that those initial intuitions were the result of a reflection unhindered by all the prejudices and preconceived notions that subsequently began to condition film studies as a discipline, and I feel that they are of great use to us today because they guide us, on the one hand, towards a symphonic idea of filmmaking, and on the other, towards an idea that preserves the fairly clear imprint of architecture. Space-Time In cinema as in architecture, the relationship between space and time is a crucial theme: in every textbook, space and time are amongst the first chapters to be studied precisely because in cinema, they undergo a process of metamorphosis – as Edgar Morin would say – which is vital to constructing the intermediate world of film. Indeed, from both a temporal and a spatial point of view, cinema provides a kind of ubiquitous opportunity to overlap different temporalities and spatialities, to move freely from one space to another, but above all, to construct new systems of time. The rules of film editing – especially so-called “invisible editing”, i.e. classical editing that conceals its own presence – are rules built upon specific and precise connections that hold together different spaces – even distant ones – whilst nonetheless giving the impression of unity, of contiguity, of everything that cinema never is in reality, because cinema is constantly fragmented and interrupted, even though we very often perceive it in continuity. As such, from both a spatial and a temporal perspective, there are technical studies that explain the rules of how to edit so as to give the idea of spatial continuity, as well as theoretical studies that explain how cinema has transformed our sense of space and time. To mark the beginning of Parma’s run as Italy’s Capital of Culture, an exhibition was organised entitled “Time Machine. Seeing and Experiencing Time”, curated by Antonio Somaini, with the challenge of demonstrating how cinema, from its earliest experiments to the digital age, has managed to manipulate and transform time, profoundly affecting our way of engaging with it. The themes of time and space are vital to understanding cinema, including from a philosophical point of view: in two of Gilles Deleuze’s seminal volumes, “The Movement-Image” and “The Time-Image”, the issues of space and time become the two great paradigms not only for explaining cinema, but also – as Deleuze himself says – for explaining a certain 20th-century philosophy. 
Deleuze succeeds in a truly impressive endeavour, namely linking cinema to philosophical reflection – indeed, making cinema into an instrument of philosophical thought; this heteronomy of filmmaking is then also transferred to its ability to become an instrument that goes beyond its own existence to become a reflection on the century that saw it as a protagonist of sorts. Don Ihde argues that every era has a technical discovery that somehow becomes what he calls an “epistemological engine”: a tool that opens up a system of thought that would never have been possible without that discovery. One of the many examples of this over the centuries is the camera obscura, but we could also name cinema as the defining discovery for 20th-century thought: indeed, cinema is indispensable for understanding the 20th century, just as the Internet is for understanding our way of thinking in the 21st century. Real-virtual Nowadays, the film industry is facing the crisis of cinema closures, ultimately caused by ever-spreading media platforms and the power of the economic competition that they are exerting by aggressively entering the field of production and distribution, albeit with a different angle on the age-old desire to garner audiences. Just a few days ago, Martin Scorsese was lamenting the fact that on these platforms, the artistic project is in danger of foundering, as excellent projects are placed in a catalogue alongside a series of products of varying quality, thus confusing the viewer. A few years ago, during the opening ceremony of the academic year at the University of Southern California, Steven Spielberg and George Lucas expressed the same concept about the future of cinema in a different way. Lucas argued that cinemas would soon have to become incredibly high-tech places where people can have an experience that is impossible to reproduce elsewhere, with a ticket price that takes into account the expanded and increased experiential value on offer thanks to the new technologies used. Spielberg, meanwhile, observed that cinemas will manage to survive if they manage to transform the cinemagoer from a simple viewer into a player, an actor of sorts. The history of cinema has always been marked by continuous adaptation to technological evolutions. I do not believe that cinema will ever end. Jean-Luc Godard, one of the great masters of the Nouvelle Vague, once said in an interview: «I am very sorry not to have witnessed the birth of cinema, but I am sure that I will witness its death». Godard, who was born in 1930, is still alive. Since its origins, cinema has always transformed rather than dying. Raymond Bellour says that cinema is an art that never finishes finishing, a phrase that encapsulates the beauty and the secret of cinema: an art that never quite finishes finishing is an art that is always on the very edge of the precipice but never falls off, although it leans farther and farther over that edge. This is undoubtedly down to cinema’s ability to continually keep up with technique and technology, and in doing so to move – even to a different medium – to relocate, as contemporary theorists say, even finally moving out of cinemas themselves to shift onto platforms and tablets, yet all without ever ceasing to be cinema. That said, we should give everything we’ve got to ensure that cinemas survive.
APA, Harvard, Vancouver, ISO, and other styles
43

Cesarini, Paul. "‘Opening’ the Xbox." M/C Journal 7, no. 3 (July 1, 2004). http://dx.doi.org/10.5204/mcj.2371.

Full text
Abstract:
“As the old technologies become automatic and invisible, we find ourselves more concerned with fighting or embracing what’s new”—Dennis Baron, From Pencils to Pixels: The Stage of Literacy Technologies What constitutes a computer, as we have come to expect it? Are they necessarily monolithic “beige boxes”, connected to computer monitors, sitting on computer desks, located in computer rooms or computer labs? In order for a device to be considered a true computer, does it need to have a keyboard and mouse? If this were 1991 or earlier, our collective perception of what computers are and are not would largely be framed by this “beige box” model: computers are stationary, slab-like, and heavy, and their natural habitats must be in rooms specifically designated for that purpose. In 1992, when Apple introduced the first PowerBook, our perception began to change. Certainly there had been other portable computers prior to that, such as the Osborne 1, but these were more luggable than portable, weighing just slightly less than a typical sewing machine. The PowerBook and subsequent waves of laptops, personal digital assistants (PDAs), and so-called smart phones from numerous other companies have steadily forced us to rethink and redefine what a computer is and is not, how we interact with them, and the manner in which these tools might be used in the classroom. However, this reconceptualization of computers is far from over, and is in fact steadily evolving as new devices are introduced, adopted, and subsequently adapted for uses beyond their original purpose. Pat Crowe’s Book Reader project, for example, has morphed Nintendo’s GameBoy and GameBoy Advance into a viable electronic book platform, complete with images, sound, and multi-language support. (Crowe, 2003) His goal was to take this existing technology previously framed only within the context of proprietary adolescent entertainment, and repurpose it for open, flexible uses typically associated with learning and literacy. Similar efforts are underway to repurpose Microsoft’s Xbox, perhaps the ultimate symbol of “closed” technology given Microsoft’s propensity for proprietary code, in order to make it a viable platform for Open Source Software (OSS). However, these efforts are not foregone conclusions, and are in fact typical of the ongoing battle over who controls the technology we own in our homes, and how open source solutions are often at odds with a largely proprietary world. In late 2001, Microsoft launched the Xbox with a multimillion dollar publicity drive featuring events, commercials, live models, and statements claiming this new console gaming platform would “change video games the way MTV changed music”. (Chan, 2001) The Xbox launched with the following technical specifications: 733MHz Pentium III; 64MB RAM; 8 or 10GB internal hard disk drive; CD/DVD-ROM drive (speed unknown); Nvidia graphics processor with HDTV support; 4 USB 1.1 ports (adapter required); AC3 audio; 10/100 Ethernet port; optional 56k modem (TechTV, 2001). While current computers dwarf these specifications in virtually all areas now, for 2001 these were roughly on par with many desktop systems. The retail price at the time was $299, but steadily dropped to nearly half that with additional price cuts anticipated. Based on these features, the preponderance of “off the shelf” parts and components used, and the relatively reasonable price, numerous programmers quickly became interested in seeing if it was possible to run Linux and additional OSS on the Xbox. 
In each case, the goal has been similar: exceed the original purpose of the Xbox, to determine if and how well it might be used for basic computing tasks. If these attempts prove to be successful, the Xbox could allow institutions to dramatically increase the student-to-computer ratio in select environments, or allow individuals who could not otherwise afford a computer to instead buy an Xbox, download and install Linux, and use this new device to write, create, and innovate. This drive to literally and metaphorically “open” the Xbox comes from many directions. Such efforts include Andrew Huang’s self-published “Hacking the Xbox” book in which, under the auspices of reverse engineering, Huang analyzes the architecture of the Xbox, detailing step-by-step instructions for flashing the ROM, upgrading the hard drive and/or RAM, and generally prepping the device for use as an information appliance. Additional initiatives include Lindows CEO Michael Robertson’s $200,000 prize to encourage Linux development on the Xbox, and the Xbox Linux Project at SourceForge. What is Linux? Linux is an alternative operating system initially developed in 1991 by Linus Benedict Torvalds. Linux was based off a derivative of the MINIX operating system, which in turn was a derivative of UNIX. (Hasan 2003) Linux is currently available for Intel-based systems that would normally run versions of Windows, PowerPC-based systems that would normally run Apple’s Mac OS, and a host of other handheld, cell phone, or so-called “embedded” systems. Linux distributions are based almost exclusively on open source software, graphic user interfaces, and middleware components. While there are commercial Linux distributions available, these mainly just package the freely available operating system with bundled technical support, manuals, some exclusive or proprietary commercial applications, and related services. Anyone can still download and install numerous Linux distributions at no cost, provided they do not need technical support beyond the community / enthusiast level. Typical Linux distributions come with open source web browsers, word processors and related productivity applications (such as those found in OpenOffice.org), and related tools for accessing email, organizing schedules and contacts, etc. Certain Linux distributions are more or less designed for network administrators, system engineers, and similar “power users” somewhat distanced from that of our students. However, several distributions including Lycoris, Mandrake, LindowsOS, and others are specifically tailored as regular, desktop operating systems, with regular, everyday computer users in mind. As Linux has no draconian “product activation key” method of authentication, or digital rights management-laden features associated with installation and implementation on typical desktop and laptop systems, Linux is becoming an ideal choice both individually and institutionally. It still faces an uphill battle in terms of achieving widespread acceptance as a desktop operating system. As Finnie points out in Desktop Linux Edges Into The Mainstream: “to attract users, you need ease of installation, ease of device configuration, and intuitive, full-featured desktop user controls. It’s all coming, but slowly. With each new version, desktop Linux comes closer to entering the mainstream. 
It’s anyone’s guess as to when critical mass will be reached, but you can feel the inevitability: There’s pent-up demand for something different.” (Finnie 2003) Linux is already spreading rapidly in numerous capacities, in numerous countries. Linux has “taken hold wherever computer users desire freedom, and wherever there is demand for inexpensive software.” Reports from technology research company IDG indicate that roughly a third of computers in Central and South America run Linux. Several countries, including Mexico, Brazil, and Argentina, have all but mandated that state-owned institutions adopt open source software whenever possible to “give their people the tools and education to compete with the rest of the world.” (Hills 2001) The Goal Less than a year after Microsoft introduced the Xbox, the Xbox Linux Project formed. The Xbox Linux Project has a goal of developing and distributing Linux for the Xbox gaming console, “so that it can be used for many tasks that Microsoft don’t want you to be able to do. ...as a desktop computer, for email and browsing the web from your TV, as a (web) server” (Xbox Linux Project 2002). Since the Linux operating system is open source, meaning it can freely be tinkered with and distributed, those who opt to download and install Linux on their Xbox can do so with relatively little overhead in terms of cost or time. Additionally, Linux itself looks very “windows-like”, making for a fairly low learning curve. To help increase overall awareness of this project and assist in diffusing it, the Xbox Linux Project offers step-by-step installation instructions, with the end result being a system capable of using common peripherals such as a keyboard and mouse, scanner, printer, a “webcam and a DVD burner, connected to a VGA monitor; 100% compatible with a standard Linux PC, all PC (USB) hardware and PC software that works with Linux.” (Xbox Linux Project 2002) Such a system could have tremendous potential for technology literacy. Pairing an Xbox with Linux and OpenOffice.org, for example, would provide our students essentially the same capability any of them would expect from a regular desktop computer. They could send and receive email, communicate using instant messaging, IRC, or newsgroup clients, and browse Internet sites just as they normally would. In fact, the overall browsing experience for Linux users is substantially better than that for most Windows users. Internet Explorer, the default browser on all systems running Windows-based operating systems, lacks basic features standard in virtually all competing browsers. Native blocking of “pop-up” advertisements is still not possible in Internet Explorer without the aid of a third-party utility. Tabbed browsing, which involves the ability to easily open and sort through multiple Web pages in the same window, often with a single mouse click, is also missing from Internet Explorer. The same can be said for a robust download manager, “find as you type”, and a variety of additional features. Mozilla, Netscape, Firefox, Konqueror, and essentially all other OSS browsers for Linux have these features. Of course, most of these browsers are also available for Windows, but Internet Explorer is still considered the standard browser for the platform. If the Xbox Linux Project becomes widely diffused, our students could edit and save Microsoft Word files in OpenOffice.org’s Writer program, and do the same with PowerPoint and Excel files in similar OpenOffice.org components. 
They could access instructor comments originally created in Microsoft Word documents, and in turn could add their own comments and send the documents back to their instructors. They could even perform many functions not yet possible in Microsoft Office, including saving files in PDF or Flash format without needing Adobe’s Acrobat product or Macromedia’s Flash Studio MX. Additionally, by way of this project, the Xbox can also serve as “a Linux server for HTTP/FTP/SMB/NFS, serving data such as MP3/MPEG4/DivX, or a router, or both; without a monitor or keyboard or mouse connected.” (Xbox Linux Project 2003) In a very real sense, our students could use these inexpensive systems previously framed only within the context of entertainment, for educational purposes typically associated with computer-mediated learning. Problems: Control and Access The existing rhetoric of technological control surrounding current and emerging technologies appears to be stifling many of these efforts before they can even be brought to the public. This rhetoric of control is largely typified by overly-restrictive digital rights management (DRM) schemes antithetical to education, and the Digital Millennium Copyright Act (DMCA). Combined, both are currently being used as technical and legal clubs against these efforts. Microsoft, for example, has taken a dim view of any efforts to adapt the Xbox to Linux. Microsoft CEO Steve Ballmer, who has repeatedly referred to Linux as a cancer and has equated OSS with being un-American, stated, “Given the way the economic model works - and that is a subsidy followed, essentially, by fees for every piece of software sold - our license framework has to do that.” (Becker 2003) Since the Xbox is based on a subsidy model, meaning that Microsoft actually sells the hardware at a loss and instead generates revenue from software sales, Ballmer launched a series of concerted legal attacks against the Xbox Linux Project and similar efforts. In 2002, Nintendo, Sony, and Microsoft simultaneously sued Lik Sang, Inc., a Hong Kong-based company that produces programmable cartridges and “mod chips” for the PlayStation 2, Xbox, and Game Cube. Nintendo states that it alone loses over $650 million each year due to piracy of its console gaming titles, which typically originate in China, Paraguay, and Mexico. (GameIndustry.biz) Currently, many attempts to “mod” the Xbox require the use of such chips. As Lik Sang was one of the only suppliers, initial efforts to adapt the Xbox to Linux slowed considerably. Although such chips can still be ordered and shipped here by less conventional means, this does not change the fact that the chips themselves would be illegal in the U.S. due to the anticircumvention clause in the DMCA itself, which is designed specifically to protect any DRM-wrapped content, regardless of context. The Xbox Linux Project then attempted to get Microsoft to officially sanction their efforts. They were not only rebuffed, but Microsoft then opted to hire programmers specifically to create technological countermeasures for the Xbox, to defeat additional attempts at installing OSS on it. Undeterred, the Xbox Linux Project eventually arrived at a method of installing and booting Linux without the use of mod chips, and have since taken a more defiant tone with Microsoft regarding their circumvention efforts. 
(Lettice 2002) They state that “Microsoft does not want you to use the Xbox as a Linux computer, therefore it has some anti-Linux-protection built in, but it can be circumvented easily, so that an Xbox can be used as what it is: an IBM PC.” (Xbox Linux Project 2003) Problems: Learning Curves and Usability In spite of the difficulties imposed by the combined technological and legal attacks on this project, it has succeeded in infiltrating this closed system with OSS. It has done so beyond the mere prototype level, too, as evidenced by the Xbox Linux Project now having both complete, step-by-step instructions available for users to modify their own Xbox systems, and an alternate plan catering to those who have the interest in modifying their systems, but not the time or technical inclinations. Specifically, this option involves users mailing their Xbox systems to community volunteers within the Xbox Linux Project, and basically having these volunteers perform the necessary software preparation or actually do the full Linux installation for them, free of charge (presumably not including shipping). This particular aspect of the project, dubbed “Users Help Users”, appears to be fairly new. Yet, it already lists over sixty volunteers capable and willing to perform this service, since “Many users don’t have the possibility, expertise or hardware” to perform these modifications. Amazingly enough, in some cases these volunteers are barely out of junior high school. One such volunteer stipulates that those seeking his assistance keep in mind that he is “just 14” and that when performing these modifications he “...will not always be finished by the next day”. (Steil 2003) In addition to this interesting if somewhat unusual level of community-driven support, there are currently several Linux-based options available for the Xbox. The two that are perhaps the most developed are GentooX, which is based on the popular Gentoo Linux distribution, and Ed’s Debian, based on the Debian GNU/Linux distribution. Both Gentoo and Debian are “seasoned” distributions that have been available for some time now, though Daniel Robbins, Chief Architect of Gentoo, refers to the product as actually being a “metadistribution” of Linux, due to its high degree of adaptability and configurability. (Gentoo 2004) Specifically, Robbins asserts that Gentoo is capable of being “customized for just about any application or need. ...an ideal secure server, development workstation, professional desktop, gaming system, embedded solution or something else—whatever you need it to be.” (Robbins 2004) He further states that the whole point of Gentoo is to provide a better, more usable Linux experience than that found in many other distributions. Robbins states that: “The goal of Gentoo is to design tools and systems that allow a user to do their work as pleasantly and efficiently as possible, as they see fit. Our tools should be a joy to use, and should help the user to appreciate the richness of the Linux and free software community, and the flexibility of free software. ...Put another way, the Gentoo philosophy is to create better tools. When a tool is doing its job perfectly, you might not even be very aware of its presence, because it does not interfere and make its presence known, nor does it force you to interact with it when you don’t want it to. 
The tool serves the user rather than the user serving the tool.” (Robbins 2004) There is also a so-called “live CD” Linux distribution suitable for the Xbox, called dyne:bolic, and an in-progress release of Slackware Linux. According to the Xbox Linux Project, the only difference between the standard releases of these distributions and their Xbox counterparts is that “...the install process – and naturally the bootloader, the kernel and the kernel modules – are all customized for the Xbox.” (Xbox Linux Project, 2003) Of course, even if Gentoo is as user-friendly as Robbins purports, even if the Linux kernel itself has become significantly more robust and efficient, and even if Microsoft again drops the retail price of the Xbox, is this really a feasible solution in the classroom? Does the Xbox Linux Project have an army of 14-year-olds willing to modify dozens, perhaps hundreds of these systems for use in secondary schools and higher education? Of course not. If such an institutional rollout were to be undertaken, it would require significant support from not only faculty, but Department Chairs, Deans, IT staff, and quite possibly Chief Information Officers. Disk images would need to be customized for each institution to reflect their respective needs, ranging from setting specific home pages on web browsers, to bookmarks, to custom back-up and/or disk re-imaging scripts, to network authentication. This would be no small task. Yet, the steps mentioned above are essentially no different from what would be required of any IT staff when creating a new disk image for a computer lab, be it one for a Windows-based system or a Mac OS X-based one. The primary difference would be Linux itself—nothing more, nothing less. The institutional difficulties in undertaking such an effort would likely be encountered prior to even purchasing a single Xbox, in that they would involve the same difficulties associated with any new hardware or software initiative: staffing, budget, and support. If the institution in question is either unwilling or unable to address these three factors, it would not matter if the Xbox itself was as free as Linux. An Open Future, or a Closed one? It is unclear how far the Xbox Linux Project will be allowed to go in their efforts to invade an essentially proprietary system with OSS. Unlike Sony, which has taken deliberate steps to commercialize similar efforts for its PlayStation 2 console, Microsoft appears resolute in fighting OSS on the Xbox by any means necessary. They will continue to crack down on any companies selling so-called mod chips, and will continue to employ technological protections to keep the Xbox “closed”. Despite clear evidence to the contrary, in all likelihood Microsoft will continue to equate any OSS efforts directed at the Xbox with piracy-related motivations. Additionally, Microsoft’s successor to the Xbox will likely include additional anticircumvention technologies that could set the Xbox Linux Project back by months or years, or could stop it cold. Of course, it is difficult to say with any degree of certainty how this “Xbox 2” (perhaps a more appropriate name might be “Nextbox”) will impact this project. Regardless of how this device evolves, there can be little doubt of the value of Linux, OpenOffice.org, and other OSS to teaching and learning with technology. This value exists not only in terms of price, but in increased freedom from policies and technologies of control. 
New Linux distributions from Gentoo, Mandrake, Lycoris, Lindows, and other companies are just now starting to focus their efforts on Linux as user-friendly, easy-to-use desktop operating systems, rather than just server or “techno-geek” environments suitable for advanced programmers and computer operators. While metaphorically opening the Xbox may not be for everyone, and may not be a suitable computing solution for all, I believe we as educators must promote and encourage such efforts whenever possible. I suggest this because I believe we need to exercise our professional influence and ultimately shape the future of technology literacy, either individually as faculty or collectively as departments, colleges, or institutions. Moran and Fitzsimmons-Hunter argue this very point in Writing Teachers, Schools, Access, and Change. One of the fundamental provisions they use to define “access” asserts that there must be a willingness for teachers and students to “fight for the technologies that they need to pursue their goals for their own teaching and learning.” (Taylor / Ward 160) Regardless of whether or not this debate is grounded in the “beige boxes” of the past, or the Xboxes of the present, much is at stake. Private corporations should not be in a position to control the manner in which we use legally-purchased technologies, regardless of whether or not these technologies are then repurposed for literacy uses. I believe the exigency associated with this control, and the ongoing evolution of what is and is not a computer, dictates that we assert ourselves more actively in this discussion. We must take steps to provide our students with the best possible computer-mediated learning experience, however seemingly unorthodox the technological means might be, so that they may think critically, communicate effectively, and participate actively in society and in their future careers. About the Author Paul Cesarini is an Assistant Professor in the Department of Visual Communication & Technology Education, Bowling Green State University, Ohio. Email: pcesari@bgnet.bgsu.edu Works Cited http://xbox-linux.sourceforge.net/docs/debian.php. Baron, Dennis. “From Pencils to Pixels: The Stages of Literacy Technologies.” Passions, Pedagogies and 21st Century Technologies. Hawisher, Gail E., and Cynthia L. Selfe, Eds. Utah: Utah State University Press, 1999. 15–33. Becker, David. “Ballmer: Mod Chips Threaten Xbox”. News.com. 21 Oct 2002. http://news.com.com/2100-1040-962797.php. http://news.com.com/2100-1040-978957.html?tag=nl. http://archive.infoworld.com/articles/hn/xml/02/08/13/020813hnchina.xml. http://www.neoseeker.com/news/story/1062/. http://www.bookreader.co.uk. Finnie, Scott. “Desktop Linux Edges Into The Mainstream”. TechWeb. 8 Apr 2003. http://www.techweb.com/tech/software/20030408_software. http://www.theregister.co.uk/content/archive/29439.html. http://gentoox.shallax.com/. http://ragib.hypermart.net/linux/. http://www.itworld.com/Comp/2362/LWD010424latinlinux/pfindex.html. http://www.xbox-linux.sourceforge.net. http://www.theregister.co.uk/content/archive/27487.html. http://www.theregister.co.uk/content/archive/26078.html. http://www.us.playstation.com/peripherals.aspx?id=SCPH-97047. http://www.techtv.com/extendedplay/reviews/story/0,24330,3356862,00.html. http://www.wired.com/news/business/0,1367,61984,00.html. http://www.gentoo.org/main/en/about.xml. http://www.gentoo.org/main/en/philosophy.xml. http://techupdate.zdnet.com/techupdate/stories/main/0,14179,2869075,00.html. 
http://xbox-linux.sourceforge.net/docs/usershelpusers.html. http://www.cnn.com/2002/TECH/fun.games/12/16/gamers.liksang/. Citation reference for this article MLA Style Cesarini, Paul. "“Opening” the Xbox." M/C: A Journal of Media and Culture <http://www.media-culture.org.au/0406/08_Cesarini.php>. APA Style Cesarini, P. (2004, Jul 1). “Opening” the Xbox. M/C: A Journal of Media and Culture, 7, <http://www.media-culture.org.au/0406/08_Cesarini.php>.
APA, Harvard, Vancouver, ISO, and other styles
44

Richardson, Nicholas. "A Curatorial Turn in Policy Development? Managing the Changing Nature of Policymaking Subject to Mediatisation." M/C Journal 18, no. 4 (August 7, 2015). http://dx.doi.org/10.5204/mcj.998.

Full text
Abstract:
There’s always this never-ending discussion about the curator who imposes meaning or imposes the concept of art, of what art is. I think this is the wrong opposition. Every artwork produces its concept, or a concept of what art is. And the role of the curator is not to produce a concept of art but to invent, to fabricate, elaborate reading grids or coexistence grids between them. (Nicolas Bourriaud quoted in Bourriaud, Lunghi, O’Neill, and Ruf 91–92) In 2010 at a conference in Rotterdam, Nicolas Bourriaud, Enrico Lunghi, Paul O’Neill, and Beatrix Ruf discussed the question, “Is the curator per definition a political animal?” This paper draws on their discussion when posing the reverse scenario—is the political animal per definition a curator in the context of the development of large-scale public policy? In exploring this question, I suggest that recent conceptual discussions centring on “the curatorial turn” in the arena of the creative arts provide a useful framework for understanding and managing opportunities and pitfalls in policymaking that is influenced by news media. Such a conceptual understanding is important. My empirical research has identified a transport policy arena that is changing due to news media scrutiny in Sydney, Australia. My findings are that the discourses arising and circulating in the public and the news media wield considerable influence. I posit in this paper the view that recent academic discussion of curatorial practices could identify more effective and successful approaches to policy development and implementation. I also question whether some of the key problems highlighted by commentary on the curatorial turn, such as the silencing of the voice of the artist, find parallels in policy as the influence of the bureaucrat or technical expert is diminished by the rise of the politician as curator in mediatised policy. The Political Animal Paul O’Neill defines a political animal: “to be a passionate and human visionary—someone who bridges gaps, negotiates the impossible in order to generate change, even slight change, movements, a shivering” (Bourriaud et al. 90). O’Neill’s definition differs from Aristotle’s famous assertion that humans (collectively) are the “political animal” because they are the only animals to possess speech (Danta and Vardoulakis 3). The essence of O’Neill’s definition shifts from the Aristotelian view that all humans are political, towards what Chris Danta and Dimitris Vardoulakis (4) refer to as “the consumption of the political by politics,” where the domain of the political is the realm of the elite few rather than innately human as Aristotle suggests. Moreover, there is a suggestion in O’Neill’s definition that the “political animal” is the consummate politician, creating change against great opposition. I suggest that this idea of struggle and adversity in O’Neill’s definition echoes policy development’s own “turn” of the early 1990s, “the argumentative turn in policy analysis and planning” (Fischer and Forester 43). The Argumentative Turn The argumentative turn in policy analysis and planning is premised on the assertion that “policy is made of language” (Majone 1). It represents a seismic shift in previously championed academic conceptions of policy analysis—decisionism, rationality, the economic model of choice, and other models that advocate measured, rational, and objective policy development processes. The argumentative turn highlights the importance of communication in policy development. 
Prior to this turn, policy analysts considered formal communication to be something that happened after policy elites had completed the scientific, objective, analytical, and rational work. Communication was perceived as being the process of “seducing” or the “‘mere words’ that add gloss to the important stuff” (Throgmorton 117–19). Communication had meant selling or “spinning” the policy—a task often left to the devices of the public relations industry by the “less scrupulous” policymaker (Dryzek 227). The new line of inquiry posits the alternative view that, far from communication being peripheral, “the policy process is constituted by and mediated through communicative practices” (Fischer and Gottweis 2). Thanks largely to the work of Deborah Stone and Giandomenico Majone, academics began to ask, “What if our language does not simply mirror or picture the world but instead profoundly shapes our view of it in the first place?” (Fischer and Forester 1). The importance of this turn to the argument, I posit in this paper, is illustrated by Stone when she contends that the communication of conflicting views and interests creates a world where paradoxical positions on policy are inevitable. Stone states, “Ask a politician to define a problem and he will probably draw a battlefield and tell you who stands on which side. The analytical language of politics includes ‘for and against,’ ‘supporters and enemies,’ ‘our side and their side’” (166). Stone describes a policymaking process that is inherently difficult. Her ideas echo O’Neill’s intonation that, in order for movement or even infinitesimal change, it is the negotiation of the impossible that makes a political animal. The Mediatisation of Sydney Transport Stone and Majone speak only cursorily of the media in policy development. However, in recent years academics have increasingly contended that “mediatisation” be recognised as referring to the increasing influence of media in social, cultural, and political spheres (Deacon and Stanyer; Strömbäck and Esser; Shehata and Strömbäck). My own research into the influence of mediatisation on transport policy and projects in Sydney has centred more specifically on the influence of news media. My focus has been a trend towards news media influence in Australian politics and policy that has been observed by academics for more than a decade (Craig; Young; Ward, PR State; Ward, Public Affair; Ward, Power). My research entailed two case study projects, the failed Sydney CBD Metro (SCM) rail line and a North West Rail Link (NWR) currently under construction. Data-gathering included a news media study of 180 relevant print articles; 30 expert interviews with respondents from politics, the bureaucracy, transport planning, news media, and public relations, whose work related to transport (with a number working on the case study projects); and surveys, interviews, and focus groups with 149 public respondents. The research identified projects whose contrasting fortunes tell a significant story in relation to the influence of news media. The SCM, despite being a project deemed to be of considerable merit by the majority of expert respondents, was, as stated by a transport planner who worked on the project, “poorly sold,” which “turned it into a project that was very easy to ridicule.” Following a resulting period of intense news media criticism, the SCM was abandoned. 
As a transport reporter for a daily newspaper asserts in an interview, the prevailing view in the news media is that the project “was done on the back of an envelope.” According to experts with knowledge of the SCM, the fact that years of planning had been undertaken was not properly presented to the public. Conversely, the experts I interviewed deem the NWR to be a low-priority project for Sydney. As a former chief of staff within both federal and state government departments, including transport, states, “if you are going to put money into anything in Sydney it would not be the NWR.” However, in the project’s favour is an overwhelmingly dominant public and media discourse that I label “the north-west of Sydney is overdue rail transport”. A communications respondent contends in an interview that because the NWR has “been talked about for so long” it holds “the right sighting, if you like, in people’s minds”. In other words, the media and the public have become used to the idea of the project. Ultimately, my findings, dealt with in more detail elsewhere (Richardson), suggest that powerful news media and public discourses, if not managed effectively, can be highly problematic for policymaking. This was found to be the case for the failure of the SCM. It is with this finding that I assert that the concept of curating the discourses surrounding a policy arena could hold considerable merit as a conceptual framework for discourse management. The Curatorial Turn in Policy Development? I was alerted to the idea of curating mediatised policy development during an expert interview for my empirical research. The respondent, chief editor of a Sydney newspaper, stated that, with an overwhelming mountain of information, news, views, and commentary being generated daily through the likes of the Internet and social media, the public needs curators to sift and sort the most important themes and arguments. The expert suggested this is now part of a journalist’s role. The idea of journalists as curators is far from new (Bakker 596). Nor is it the purpose of this paper. However, what struck me in this notion of curating was the critical role of sifting, sorting and ultimately selecting which themes, ideas, or pieces of information are privileged amid myriad choices. My own empirical research was indicating that the management of highly influential news media and public discourses surrounding transport infrastructure also involved a considerable level of selection. Therefore, I hypothesised that the concept of curating might aid the managing of discourses when it comes to communicating for successful policy and project development that is subject to news media scrutiny. A review of the scholarship indicates that the concept of “the curatorial turn” is significant to this hypothesis. Since the 1960s the role of curator in art exhibition has shifted from that of “caretaker” for a collection to the shaper of an exhibition (O’Neill, “Turn”; O’Neill, Culture). Central to this shift is “the changing perception of the curator as carer to a curator who has a more creative and active part to play within the production of art itself” (O’Neill, Turn 243). Some commentators go so far as to suggest that curators have become cultural agents that “participate in the production of cultural value” (244). The curator’s role in exhibition design has also been equated to that of an author or auteur that drives an exhibition’s meaning (251–52). Why is this important for policy development? 
It is my view that there is certainly merit to viewing a significant part of the role of the political animal in policymaking as the curator of public and media discourse. As Beatrix Ruf suggests, the role of the curator is to create a “freedom for things to happen” within “a societal context” that not only takes into account the needs of the “artist” but also the “audience” (Bourriaud et al. 91). If we were to substitute bureaucrat for artist and media/public for audience then Ruf’s suggestion seems particularly relevant for the communication of policy. To return to Bourriaud’s quote that began this paper, perhaps the role of the curator/policymaker is not solely to produce a policy “but to invent, to fabricate, elaborate reading grids or coexistence grids,” to manage the discourses that influence the policy arena (Bourriaud et al. 92). Furthermore, the answer to why the concept of the curatorial turn seems relevant to policy development requires consideration not only of the rise of the voice and influence of the curator/policymaker but also of those at whose expense this shift has occurred. Through the rise of the curator the voice of the artist has dimmed. As the exhibition is elevated to “the status of quasi-artwork,” individual artworks themselves become simply “a useful fragment” (O’Neill, “Turn” 253). One of the underlying tensions of the curatorial turn is the rise of actors that are not practicing artists themselves. In other words, the producers of art, the artists, have less influence over their own practice. In New South Wales (NSW), we have witnessed a similar scenario with the steady rise of the voice and influence of the politician (and political adviser), at the expense of the public service. This loss of bureaucratic power was embedded structurally in the mid-1970s when Premier Neville Wran established the Ministerial Advisory Unit (MAU) to oversee NSW state government decisions. A respondent for my research states that when he began his career as a public servant: politicians didn’t really have a lot of ideas about things … the public service really ran the place … [Premier Wran] said, ‘this isn’t good enough. I’m being manipulated by the government departments. I’m going to set up something called the MAU which is politically appointed as a countervailing force to the bureaucracy to get the advice that I want.’ The respondent implies a power grab by political actors to stymie the influence of the bureaucracy. This view is shared by several expert respondents for my research, as well as being substantiated by historian John Gunn (503). One of the clear results of the structural change has been that a politically driven media focus is now embedded in the structure of government policy and project decision-making. Instead of taking its lead from priorities emanating from the community, the bureaucracy is left with little choice but to look to the minister for guidance. As a project management consultant to government states in an interview: I think today the bureaucrat who makes the hard administrative decisions, the management decisions, is basically outweighed by communications, public relations, media relations director … the politicians are poll driven not policy driven. The respondent makes a point with which former politician Lindsay Tanner (Tanner) and academic Ian Ward (Ward, Power) agree—Australian politicians are increasingly structuring their operations around news media. The bureaucracy has become less relevant to policymaking as a result. 
My empirical research indicates this. The SCM and the NWR were highly publicised projects where the views of transport experts were largely ignored. They represent cases where the voice of the experts/artists had been completely suppressed by the voice of the politician/curator. I contend that this is where key questions of the role of the politician and the curator converge. Experts interviewed for my research express concerns that policymaking has been altered by structural changes to the bureaucracy. Similarly, some academics concerned with the rise of the curator question whether the shift will change the very nature of art (O’Neill, Cultures). A shared concern of the art world and those witnessing the policy arena in NSW is that the thoughts and ideas of those that do are being overshadowed by the views of those who talk. In terms of curatorial practice, O’Neill (Cultures) cites the views of Mick Wilson, who speaks of the rise of the “Foucauldian moment” and the “ubiquitous appeal of the term ‘discourse’ as a word to conjure and perform power,” where “even talking is doing something.” As O’Neill contends, “at this extreme, the discursive stands in the place of ‘doing’ within discourses on curatorial practice” (43). O’Neill submits Wilson’s point as an extreme view within the curatorial turn. However, the concern for the art world should be similar to the one experienced in the policy arena. Technical advice from the bureaucracy (doers) to ministers (talkers) has changed. In an interview with me, a partner in one of Australia’s leading architectural and planning practices contends that the technical advice of the bureaucracy to ministers is not as “fearless and robust” as it once was. Furthermore, he is concerned that planners have lost their influence as ministers now look to political advisers rather than technical advisers for direction. He states, “now what happens is most advisors to ministers are political advisers and they will give political advice … the planning advice hasn’t come from the planners.” The ultimate concern is that, through a silencing of the technical expert, policymaking is losing a vital layer of experience and knowledge that can only be to the detriment of the practice and its beneficiaries, the public. The closer one looks, the more evident the similarities between curating and policy development become. Acute budgetary limitations exist. There is an increased reliance on public funding. Large-scale curating, like policy development, involves “a negotiation of the relationship between public and private interests” (Ruf in Bourriaud et al. 90). There is also a tension between short- and long-term outlooks as well as local and global perspectives (Lunghi in Bourriaud et al. 97). And, significantly for my argument for the privileging of the concept of curating of discourse in policy, curating has also been called “a battlefield of ideas in which the public (or audience) has become ‘the big Other’” in that “everything that cannot find its audience, its public, is highly suspicious or very problematic” (Bourriaud in Bourriaud et al. 96–97). The closer the inspection, the starker the similarities of each pursuit. Lessons, Ramifications and Conclusions What can policymakers learn from the curatorial turn? 
For policymaking, it seems that the argumentative turn, the rise of news mediatisation, the strengthening of the power and influence of the politician, and the “Foucauldian moment” have produced the same rise of the discursive in place of doing that some quarters identify with the curatorial turn (O’Neill, Cultures). Therefore, it would be pertinent for policymakers to heed Bourriaud’s statement that began this paper: “the role of the curator is not to produce a concept of art (or policy) but to invent, to fabricate, elaborate reading grids or coexistence grids between them” (Bourriaud et al. 92). Is such a method of curating discourse the way forward for the political animal that seeks to achieve the politically “impossible” in policymaking? Perhaps for policymaking the importance of the concept of curating holds both opportunity and a warning. The opportunity, exemplified by the success of the NWR and the failure of the SCM projects in Sydney, is in accepting the role of media and public discourses in policy development so that they may be more thoroughly investigated and understood before being more effectively folded into the policymaking process. The warning lies in the concerns the curatorial turn has raised over the demise of the artist in light of the rise of discourse. The voice of the technical expert appears to be fading. How do we effectively curate discourses as well as restore the bureaucrat to former levels of robust fearlessness? I dare say it will take a political animal to do either. References Bakker, Piet. “Mr Gates Returns.” Journalism Studies 15.5 (2014): 596–606. Bourriaud, Nicolas, Enrico Lunghi, Paul O’Neill, and Beatrix Ruf. “Is the Curator per Definition a Political Animal?” Rotterdam Dialogues: The Critics, the Curators, the Artists. Eds. Zoe Gray, Miriam Kathrein, Nicolaus Schafhausen, Monika Szewczyk, and Ariadne Urlus. Rotterdam: Witte de With Publishers, 2010. 87–99. Craig, Geoffrey. The Media, Politics and Public Life. Crows Nest, NSW: Allen and Unwin, 2004. Danta, Chris, and Dimitris Vardoulakis. “The Political Animal.” SubStance 37.3 (2008): 3–6. Dryzek, John S. “Policy Analysis and Planning: From Science to Argument.” The Argumentative Turn in Policy Analysis and Planning. Eds. Frank Fischer and John Forester. Durham, NC: Duke UP, 1993. 213–32. Fischer, Frank, and John Forester. “Editors’ Introduction.” The Argumentative Turn in Policy Analysis and Planning. Eds. Frank Fischer and John Forester. Durham, NC: Duke UP, 1993. 1–17. Fischer, Frank, and Herbert Gottweis. Argumentative Turn Revisited: Public Policy as Communicative Practice. Durham, NC: Duke UP, 2012. Gunn, John. Along Parallel Lines: A History of the Railways of New South Wales. Carlton: Melbourne UP, 1989. Majone, Giandomenico. Evidence, Argument, and Persuasion in the Policy Process. New Haven: Yale UP, 1989. O’Neill, Paul. “The Curatorial Turn: From Practice to Discourse.” The Biennial Reader. Eds. Elena Filipovic, Marieke Van Hal, and Solvig Øvstebø. Bergen, Norway: Bergen Kunsthall, 2007. 240–59. ———. The Culture of Curating and the Curating of Cultures. Cambridge, MA: The MIT P, 2012. Richardson, Nicholas. “Political Upheaval in Australia: Media, Foucault and Shocking Policy.” Media International Australia. Forthcoming. Shehata, Adam, and Jesper Strömbäck. “Mediation of Political Realities: Media as Crucial Sources of Information.” Mediatization of Politics: Understanding the Transformation of Western Democracies. Eds. Frank Esser and Jesper Strömbäck. 
Basingstoke, Hampshire; New York, NY: Palgrave Macmillan, 2014. 93–112. Stone, Deborah. Policy Paradox and Political Reason. Glenview, Illinois: Scott, Foresman and Company, 1988. Strömbäck, Jesper, and Frank Esser. “Mediatization of Politics: Towards a Theoretical Framework.” Mediatization of Politics: Understanding the Transformation of Western Democracies. Eds. Frank Esser and Jesper Strömbäck. Basingstoke, Hampshire: Palgrave Macmillan, 2014. 3–28. Tanner, Lindsay. Sideshow: Dumbing Down Democracy. Carlton North, Victoria: Scribe, 2011. Throgmorton, James A. “Survey Research as Rhetorical Trope: Electric Power Planning in Chicago.” The Argumentative Turn in Policy Analysis and Planning. Eds. Frank Fischer and John Forester. Durham, NC: Duke UP, 1993. 117–44. Ward, Ian. “An Australian PR State?” Australian Journal of Communication 30.1 (2003): 25–42. ———. “Lobbying as a Public Affair: PR and Politics in Australia.” Communication, Creativity and Global Citizenship. ANZCA: Brisbane, 2009. 1039–56. ‹http://www.anzca.net/documents/anzca-09-1/refereed-proceedings-2009-1/79-lobbying-as-a-public-affair-pr-and-politics-in-australia-1/file.html›. ———. “The New and Old Media, Power and Politics.” Government, Politics, Power and Policy in Australia. Eds. Dennis Woodward, Andrew Parkin, and John Summers. Frenchs Forest, NSW: Pearson, 2010. 374–93. Young, Sally. “Killing Competition: Restricting Access to Political Communication Channels in Australia.” AQ: Journal of Contemporary Analysis 75.3 (2003): 9–15.
APA, Harvard, Vancouver, ISO, and other styles
45

Geoghegan, Hilary. "“If you can walk down the street and recognise the difference between cast iron and wrought iron, the world is altogether a better place”: Being Enthusiastic about Industrial Archaeology." M/C Journal 12, no. 2 (May 13, 2009). http://dx.doi.org/10.5204/mcj.140.

Full text
Abstract:
Introduction: Technology Enthusiasm Enthusiasts are people who have a passion, keenness, dedication or zeal for a particular activity or hobby. Today, there are enthusiasts for almost everything, from genealogy, costume dramas, and country houses, to metal detectors, coin collecting, and archaeology. But to be described as an enthusiast is not necessarily a compliment. Historically, the term “enthusiasm” was first used in England in the early seventeenth century to describe “religious or prophetic frenzy among the ancient Greeks” (Hanks, n.p.). This frenzy was ascribed to being possessed by spirits sent not only by God but also the devil. During this period, those who disobeyed the powers that be or claimed to have a message from God were considered to be enthusiasts (McLoughlin). Enthusiasm retained its religious connotations throughout the eighteenth century and was also used at this time to describe “the tendency within the population to be swept by crazes” (Mee 31). However, as part of the “rehabilitation of enthusiasm,” the emerging middle classes adopted the word to characterise the intensity of Romantic poetry. The language of enthusiasm was then used to describe the “literary ideas of affect” and “a private feeling of religious warmth” (Mee 2 and 34). While the notion of enthusiasm was embraced here in a more optimistic sense, attempts to disassociate enthusiasm from crowd-inciting fanaticism were largely unsuccessful. As such, enthusiasm has never quite managed to shake off its pejorative connotations. The 'enthusiasm' discussed in this paper is essentially a personal passion for technology. It forms part of a longer tradition of historical preservation in the United Kingdom and elsewhere in the world. From preserved railways to Victorian pumping stations, people have long been fascinated by the history of technology and engineering, manifesting their enthusiasm through their nostalgic longings and emotional attachment to its enduring material culture. Moreover, enthusiasts have been central to the collection, conservation, and preservation of this particular material record. Technology enthusiasm in this instance is about having a passion for the history and material record of technological development, specifically here industrial archaeology. Despite being a pastime that many participate in, technology enthusiasm is relatively under-explored within the academic literature. For the most part, scholarship has tended to focus on the intended users, formal spaces, and official narratives of science and technology (Adas, Latour, Mellström, Oldenziel). In recent years attempts have been made to remedy this imbalance, with researchers from across the social sciences examining the position of hobbyists, tinkerers and amateurs in scientific and technical culture (Ellis and Waterton, Haring, Saarikoski, Takahashi). Work from historians of technology has focussed on the computer enthusiast; for example, Saarikoski’s work on the Finnish personal computer hobby: The definition of the computer enthusiast varies historically. Personal interest, pleasure and entertainment are the most significant factors defining computing as a hobby. Despite this, the hobby may also lead to acquiring useful knowledge, skills or experience of information technology. Most often the activity takes place outside working hours but can still have links to the development of professional expertise or the pursuit of studies. In many cases it takes place in the home environment. 
On the other hand, it is characteristically social, and the importance of friends, clubs and other communities is greatly emphasised. In common with a number of other studies relating to technical hobbies, for example Takahashi, who argues tinkerers were behind the advent of the radio and television receiver, Saarikoski’s work focuses on the role these users played in shaping the technology in question. The enthusiasts encountered in this paper are important here not for their role in shaping the technology, but for keeping technological heritage alive. As historian of technology Haring reminds us, “there exist alternative ways of using and relating to technology” (18). Furthermore, the sociological literature on audiences (Abercrombie and Longhurst, Ang), fans (Hills, Jenkins, Lewis, Sandvoss) and subcultures (Hall, Hebdige, Schouten and McAlexander) has also been extended in order to account for the enthusiast. In Abercrombie and Longhurst’s Audiences, the authors locate ‘the enthusiast’ and ‘the fan’ at opposing ends of a continuum of consumption defined by questions of specialisation of interest, social organisation of interest and material productivity. Fans are described as: skilled or competent in different modes of production and consumption; active in their interactions with texts and in their production of new texts; and communal in that they construct different communities based on their links to the programmes they like. (127, emphasis in original) Based on this definition, Abercrombie and Longhurst argue that fans and enthusiasts differ in three ways: (1) enthusiasts’ activities are not based around media images and stars in the way that fans’ activities are; (2) enthusiasts can be hypothesized to be relatively light media users, particularly perhaps broadcast media, though they may be heavy users of the specialist publications which are directed towards the enthusiasm itself; (3) the enthusiasm would appear to be rather more organised than the fan activity. (132) What is striking about this attempt to differentiate between the fan and the enthusiast is that it is based on supposition rather than the actual experience and observation of enthusiasm. It is here that the ethnographic account of enthusiasm presented in this paper and elsewhere, for example works by Dannefer on vintage car culture, Moorhouse on American hot-rodding and Fuller on modified-car culture in Australia, can shed light on the subject. My own ethnographic study of groups with a passion for telecommunications heritage, early British computers and industrial archaeology takes the discussion of “technology enthusiasm” further still. Through in-depth interviews, observation and textual analysis, I have examined in detail the formation of enthusiast societies and their membership, the importance of the material record to enthusiasts (particularly at home) and the enthusiastic practices of collecting and hoarding, as well as the figure of the technology enthusiast in the public space of the museum, namely the Science Museum in London (Geoghegan). In this paper, I explore the culture of enthusiasm for the industrial past through the example of the Greater London Industrial Archaeology Society (GLIAS). Focusing on industrial sites around London, GLIAS meet five or six times a year for field visits, walks and a treasure hunt. The committee maintain a website and produce a quarterly newsletter. 
The title of my paper, “If you can walk down the street and recognise the difference between cast iron and wrought iron, the world is altogether a better place,” comes from an interview I conducted with the co-founder and present chairman of GLIAS. He was telling me about his fascination with the materials of industrialisation. In fact, he said even concrete is sexy. Some call it a hobby; others call it a disease. But enthusiasm for industrial archaeology is, as several respondents have themselves identified, “as insidious in its side effects as any debilitating germ. It dictates your lifestyle, organises your activity and decides who your friends are” (Frow and Frow 177, Gillespie et al.). Through the figure of the industrial archaeology enthusiast, I discuss in this paper what it means to be enthusiastic. I begin by reflecting on the development of this specialist subject area. I go on to detail the formation of the Society in the late 1960s, before exploring the Society’s fieldwork methods and some of the other activities they now engage in. I raise questions of enthusiast and professional knowledge and practice, as well as consider the future of this particular enthusiasm. Defining Industrial Archaeology The practice of 'industrial archaeology' is much contested. For a long time, enthusiasts and professional archaeologists have debated the meaning and use of the term (Palmer). On the one hand, there are those interested in the history, preservation, and recording of industrial sites. Examples include the grandfather figures of the subject, Kenneth Hudson and Angus Buchanan, both of whom published widely in the 1960s and 1970s in order to encourage publics to get involved in recording. Many members of GLIAS refer to Hudson’s Industrial Archaeology: An Introduction and Buchanan’s Industrial Archaeology in Britain, with their fine descriptions and photographs, as integral to their early interest in the subject. On the other hand, there are those within the academic discipline of archaeology who consider the study of remains produced by the Industrial Revolution to be too modern. Moreover, they find the activities of those calling themselves industrial archaeologists lacking sufficient attention to the understanding of past human activity to justify the name. As a result, the definition of 'industrial archaeology' is problematic for both enthusiasts and professionals. Even the early advocates of professional industrial archaeology felt uneasy about the subject’s methods and practices. In 1973, Philip Riden (described by one GLIAS member as the angry young man of industrial archaeology), the then president of the Oxford University Archaeology Society, wrote a damning article in Antiquity, calling for the subject to “shed the amateur train drivers and others who are not part of archaeology” (215–216). He decried the “appallingly low standard of some of the work done under the name of ‘industrial archaeology’” (211). He felt that if enthusiasts did not attempt to maintain high technical standards, publish their work in journals or back up their fieldwork with documentary investigation or join their county archaeological societies, then there was no value in the efforts of these amateurs. During this period, enthusiasts, academics, and professionals were divided. What was wrong with doing something for the pleasure it provides the participant? Although relations today between the so-called amateur (enthusiast) and professional archaeologies are less potent, some prejudice remains. 
Describing them as “barrow boys”, some enthusiasts suggest that what was once their much-loved pastime has been “hijacked” by professional archaeologists who, according to one respondent, are desperate to find subjects to get degrees in. So the whole thing has been hijacked by academia as it were. Traditional professional archaeologists in London at least are running head on into things that we have been doing for decades and they still don’t appreciate that this is what we do. A lot of assessments are handed out to professional archaeology teams who don’t necessarily have any knowledge of industrial archaeology. (James, GLIAS committee member) James went on to reveal that GLIAS receives numerous enquiries from professional archaeologists, developers and town planners asking what they know about particular sites across the city. Although the Society has compiled a detailed database covering some areas of London, it is by no means comprehensive. In addition, many active members often record and monitor sites in London for their own personal enjoyment. This leaves many questioning the need to publish their results for the gain of third parties. Canadian sociologist Stebbins discusses this situation in his research on “serious leisure”. He has worked extensively with amateur archaeologists in order to understand their approach to their leisure activity. He argues that amateurs are “neither dabblers who approach the activity with little commitment or seriousness, nor professionals who make a living from that activity” (55). Rather they pursue their chosen leisure activity to professional standards, a point echoed by Fine in his study of the cultures of mushrooming. But this is to get ahead of myself. How did GLIAS begin? GLIAS: The Group The 1960s have been described by respondents as a frantic period of “running around like headless chickens.” Enthusiasts of London’s industrial archaeology were witnessing incredible changes to the city’s industrial landscape. Individuals and groups like the Thames Basin Archaeology Observers Group were recording what they could, dashing around London taking photos to capture the city’s industrial legacy before it was lost forever. However, the final straw for many, in London at least, was the proposed and subsequent demolition of the “Euston Arch”. The Doric portico at Euston Station was completed in 1838 and stood as a symbol of the glory of railway travel. Despite strong protests from amenity societies, this Victorian symbol of progress was finally pulled down by British Railways in 1962 in order to make way for what enthusiasts have called a “monstrous concrete box”. In response to these changes, GLIAS was founded in 1968 by two engineers and a locomotive driver over afternoon tea in a suburban living room in Woodford, North-East London. They held their first meeting one Sunday afternoon in December at the Science Museum in London and attracted over 130 people. Firing the imagination of potential members with an exhibition of photographs of the industrial landscape taken by Eric de Maré, GLIAS’s first meeting was a success. Bringing together like-minded people who are motivated and enthusiastic about the subject, GLIAS currently has over 600 members in the London area and beyond. This makes it the largest industrial archaeology society in the UK and perhaps Europe. Drawing some of its membership from a series of evening classes hosted by various members of the Society’s committee, GLIAS initially had a quasi-academic approach. 
Some, though, preferred the hands-on practical element and were more, as described by one respondent, “your free-range enthusiast”. The society has an active committee, produces a newsletter and journal, and runs regular events for members. However, the Society is not simply about the study of London’s industrial heritage; over time the interest in industrial archaeology has developed for some members into long-term friendships. Sociability is central to organised leisure activities. It underpins and supports the performance of enthusiasm in groups and societies. For Fine, sociability does not always equal friendship, but it is the state from which people might become friends. Some GLIAS members have taken this one step further: there have even been a couple of marriages. Although not the subject of my paper, technical culture is heavily gendered. Industrial archaeology is a rare exception, attracting a mixture of male and female participants, usually retired husband and wife teams. Doing Industrial Archaeology: GLIAS’s Method and Practice In what has been described as GLIAS’s heyday, namely the 1970s to early 1980s, fieldwork was fundamental to the Society’s activities. The Society’s approach to fieldwork during this period was much the same as the one described by champion of industrial archaeology Arthur Raistrick in 1973: photographing, measuring, describing, and so far as possible documenting buildings, engines, machinery, lines of communication, still or recently in use, providing a satisfactory record for the future before the object may become obsolete or be demolished. (13) In the early years of GLIAS, and thanks to the committed efforts of two active Society members, recording parties were organised for extended lunch hours and weekends. The majority of this early fieldwork took place at the St Katherine Docks. The Docks were constructed in the 1820s by Thomas Telford. They became home to the world’s greatest concentration of portable wealth. Here GLIAS members learnt and employed practical (also professional) skills, such as measuring, triangulations and use of a “dumpy level”. For many members this was an incredibly exciting time. It was a chance to gain hands-on experience of industrial archaeology. Having been left derelict for many years, the Docks have since been redeveloped as part of the Docklands regeneration project. At this time the Society was also compiling data for what has become known to members as “The GLIAS Book”. The book was to have separate chapters on the various industrial histories of London with contributions from Society members about specific sites. Sadly the book’s editor died and the project lost impetus. Several years ago, the committee managed to digitise the data collected for the book and began to compile a database. However, the GLIAS database has been beset by problems. Firstly, there are often questions of consistency and coherence. There is a standard datasheet for recording industrial buildings – the Index Record for Industrial Sites. However, the quality of each record is different because of the experience level of the different authors. Some authors are automatically identified as good or expert record keepers. Secondly, getting access to the database in order to upload the information has proved difficult. As one of the respondents put it: “like all computer babies [the creator of the database], is finding it hard to give birth” (Sally, GLIAS member). 
As we have learnt, enthusiasm is integral to movements such as industrial archaeology – public historian Raphael Samuel described enthusiasts as the “invisible hands” of historical enquiry. Yet it is this very enthusiasm that has the potential to jeopardise projects such as the GLIAS book. Although the Society is active in its recording practices, the saga of the GLIAS book reflects one of the challenges encountered by enthusiast groups and societies. As other researchers studying amenity societies have found, such as Ellis and Waterton in their work with amateur naturalists, unlike the world of work, where people are paid to complete a task and are therefore meant to have a singular sense of purpose, the activities of an enthusiast group like GLIAS rely on the goodwill of members who volunteer their time, energy and expertise. When this goodwill is lost, for whatever reason, there is no requirement for any other member to take up that position. As such, levels of commitment vary between enthusiasts and can lead to the aforementioned difficulties, such as disputes between group members, the occasional miscommunication of ideas and an over-enthusiasm for some parts of the task in hand. On top of this, GLIAS and societies like it are confronted with changing health and safety policies and tightened security surrounding industrial sites, which has made the practical side of industrial archaeology increasingly difficult. As GLIAS member Bob explains:

For me to go on site now I have to wear site boots and borrow a hard hat and a high visibility jacket. Now we used to do incredibly dangerous things in the seventies and nobody batted an eyelid. You know we were exploring derelict buildings, which you are virtually not allowed in now because the floor might give way. Again the world has changed a lot there.

GLIAS: Today

GLIAS members continue to record sites across London. Some members are currently surveying the Lower Lea Valley, the site chosen as the location of the Olympic Games in London in 2012. They describe their activities at this site as “rescue archaeology”: GLIAS members are working against the clock, some important structures have already been demolished, and they often have time to complete only a quick flash survey. Armed with the information they collated in previous years, GLIAS is currently in discussions with the developer to orchestrate a detailed recording of the site. It is important to note here that GLIAS members are less interested in campaigning for the preservation of a site or building; they appreciate that sites must change. Instead they want to ensure that large swathes of industrial London are not lost without a trace. Some members regard this as their public duty.

Restricted by health and safety mandates and access disputes, GLIAS has had to adapt. The majority of practical recording sessions have given way to guided walks in the summer and public lectures in the winter. Some respondents have identified a difference between those members who call themselves “industrial archaeologists” and those who are just “ordinary members” of GLIAS: the walks are for those with a general interest, not serious members, and the talks are public lectures. Some audience researchers have used Bourdieu’s metaphor of “capital” to describe the experience, knowledge and skill required to be a fan, clubber or enthusiast. For Hills, fan status is built up through the demonstration of cultural capital: “where fans share a common interest while also competing over fan knowledge, access to the object of fandom, and status” (46).
A clear membership hierarchy can be seen within GLIAS based on levels of experience, knowledge and practical skill.

With a membership of over 600 and rising annually, the Society’s future is secure at present. However, some of the more serious members, although retaining their membership, are pursuing their enthusiasm elsewhere: through break-away recording groups in London; through active membership of other groups and societies, for example the national Association for Industrial Archaeology; and by heading off to North Wales in the summer for practical, hands-on industrial archaeology in Snowdonia’s slate quarries – described in the Ffestiniog Railway Journal as the “annual convention of slate nutters.”

Conclusions

GLIAS has changed since its foundation in the late 1960s. Its operation has been complicated by questions of health and safety, site access, an ageing membership, and the constant changes to London’s industrial archaeology. Previously dismissed by professional industrial archaeology as “limited in skill and resources” (Riden), enthusiasts are now approached by professional archaeologists, developers, planners and even museums interested in engaging in knowledge exchange programmes. As a recent report from the British think-tank Demos has argued, enthusiasts or pro-ams – “amateurs who work to professional standards” (Leadbeater and Miller 12) – are integral to future innovation and creativity; for example, computer pro-ams developed an operating system to rival Microsoft Windows. As such, the specialist knowledge, skill and practice of these communities are of increasing interest to policymakers, practitioners, and business. So the subject once described as “the ugly offspring of two parents that shouldn’t have been allowed to breed” (Hudson) – so-called “amateur” industrial archaeology – offers enthusiasts and professionals alike alternative ways of knowing, seeing and being in the recent and contemporary past.

Through the case study of GLIAS, I have described what it means to be enthusiastic about industrial archaeology. I have introduced a culture of collective and individual participation and friendship based on a mutual interest in and emotional attachment to industrial sites. As we have learnt in this paper, enthusiasm is about fun, pleasure and joy. The enthusiastic culture presented here advances themes such as passion in relation to less obvious communities of knowing, skilled practices, material artefacts and spaces of knowledge. Moreover, this paper has been about the affective narratives that are sometimes missing from academic accounts, overlooked for fear of sniggers at the back of a conference hall. Laughter and humour are a large part of what enthusiasm is. Enthusiastic cultures, then, are about the pleasure and joy experienced in doing things. Enthusiasm is clearly a potent force for active participation. I will leave the last word to GLIAS member John:

One meaning of enthusiasm is as a form of possession, madness. Obsession perhaps rather than possession, which I think is entirely true. It is a pejorative term probably. The railway enthusiast. But an awful lot of energy goes into what they do and achieve. Enthusiasm to my mind is an essential ingredient. If you are not a person who can muster enthusiasm, it is very difficult, I think, to get anything out of it. On the basis of the more you put in the more you get out. In terms of what has happened with industrial archaeology in this country, I think, enthusiasm is a very important aspect of it.
The movement needs people who can transmit that enthusiasm.

References

Abercrombie, N., and B. Longhurst. Audiences: A Sociological Theory of Performance and Imagination. London: Sage Publications, 1998.
Adas, M. Machines as the Measure of Men: Science, Technology and Ideologies of Western Dominance. Ithaca: Cornell UP, 1989.
Ang, I. Desperately Seeking the Audience. London: Routledge, 1991.
Bourdieu, P. Distinction: A Social Critique of the Judgement of Taste. London: Routledge, 1984.
Buchanan, R.A. Industrial Archaeology in Britain. Harmondsworth, Middlesex: Penguin, 1972.
Dannefer, D. “Rationality and Passion in Private Experience: Modern Consciousness and the Social World of Old-Car Collectors.” Social Problems 27 (1980): 392–412.
Dannefer, D. “Neither Socialization nor Recruitment: The Avocational Careers of Old-Car Enthusiasts.” Social Forces 60 (1981): 395–413.
Ellis, R., and C. Waterton. “Caught between the Cartographic and the Ethnographic Imagination: The Whereabouts of Amateurs, Professionals, and Nature in Knowing Biodiversity.” Environment and Planning D: Society and Space 23 (2005): 673–693.
Fine, G.A. “Mobilizing Fun: Provisioning Resources in Leisure Worlds.” Sociology of Sport Journal 6 (1989): 319–334.
Fine, G.A. Morel Tales: The Culture of Mushrooming. Champaign, Ill.: U of Illinois P, 2003.
Frow, E., and R. Frow. “Travels with a Caravan.” History Workshop Journal 2 (1976): 177–182.
Fuller, G. Modified: Cars, Culture, and Event Mechanics. Unpublished PhD Thesis, University of Western Sydney, 2007.
Geoghegan, H. The Culture of Enthusiasm: Technology, Collecting and Museums. Unpublished PhD Thesis, University of London, 2008.
Gillespie, D.L., A. Leffler, and E. Lerner. “‘If It Weren’t for My Hobby, I’d Have a Life’: Dog Sports, Serious Leisure, and Boundary Negotiations.” Leisure Studies 21 (2002): 285–304.
Hall, S., and T. Jefferson, eds. Resistance through Rituals: Youth Sub-Cultures in Post-War Britain. London: Hutchinson, 1976.
Hanks, P. “Enthusiasm and Condescension.” Euralex ’98 Proceedings. 1998. 18 Jul. 2005 ‹http://www.patrickhanks.com/papers/enthusiasm.pdf›.
Haring, K. “The ‘Freer Men’ of Ham Radio: How a Technical Hobby Provided Social and Spatial Distance.” Technology and Culture 44 (2003): 734–761.
Haring, K. Ham Radio’s Technical Culture. London: MIT Press, 2007.
Hebdige, D. Subculture: The Meaning of Style. London: Methuen, 1979.
Hills, M. Fan Cultures. London: Routledge, 2002.
Hudson, K. Industrial Archaeology. London: John Baker, 1963.
Jenkins, H. Textual Poachers: Television Fans and Participatory Culture. London: Routledge, 1992.
Latour, B. Aramis, or the Love of Technology. London: Harvard UP, 1996.
Leadbeater, C., and P. Miller. The Pro-Am Revolution: How Enthusiasts Are Changing Our Economy and Society. London: Demos, 2004.
Lewis, L.A., ed. The Adoring Audience: Fan Culture and Popular Media. London: Routledge, 1992.
McLoughlin, W.G. Revivals, Awakenings, and Reform: An Essay on Religion and Social Change in America, 1607–1977. London: U of Chicago P, 1977.
Mee, J. Romanticism, Enthusiasm, and Regulation: Poetics and the Policing of Culture in the Romantic Period. Oxford: Oxford UP, 2003.
Mellström, U. “Patriarchal Machines and Masculine Embodiment.” Science, Technology, & Human Values 27 (2002): 460–478.
Moorhouse, H.F. Driving Ambitions: A Social Analysis of American Hot Rod Enthusiasm. Manchester: Manchester UP, 1991.
Oldenziel, R. Making Technology Masculine: Men, Women and Modern Machines in America 1870–1945. Amsterdam: Amsterdam UP, 1999.
Palmer, M. “‘We Have Not Factory Bell’: Domestic Textile Workers in the Nineteenth Century.” The Local Historian 34 (2004): 198–213.
Raistrick, A. Industrial Archaeology. London: Granada, 1973.
Riden, P. “Post-Post-Medieval Archaeology.” Antiquity XLVII (1973): 210–216.
Rix, M. “Industrial Archaeology: Progress Report 1962.” The Amateur Historian 5 (1962): 56–60.
Rix, M. Industrial Archaeology. London: The Historical Association, 1967.
Saarikoski, P. The Lure of the Machine: The Personal Computer Interest in Finland from the 1970s to the Mid-1990s. Unpublished PhD Thesis, 2004. ‹http://users.utu.fi/petsaari/lure.pdf›.
Samuel, R. Theatres of Memory. London: Verso, 1994.
Sandvoss, C. Fans: The Mirror of Consumption. Cambridge: Polity, 2005.
Schouten, J.W., and J. McAlexander. “Subcultures of Consumption: An Ethnography of the New Bikers.” Journal of Consumer Research 22 (1995): 43–61.
Stebbins, R.A. Amateurs: On the Margin between Work and Leisure. Beverly Hills: Sage, 1979.
Stebbins, R.A. Amateurs, Professionals, and Serious Leisure. London: McGill-Queen’s UP, 1992.
Takahashi, Y. “A Network of Tinkerers: The Advent of the Radio and Television Receiver Industry in Japan.” Technology and Culture 41 (2000): 460–484.