Journal articles on the topic "Simple Object Access Protocol (Computer network protocol)"

Follow this link to see other types of publications on the topic: Simple Object Access Protocol (Computer network protocol).

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles


Consult the top 40 journal articles for your research on the topic "Simple Object Access Protocol (Computer network protocol)".

Next to each source in the list of references there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Hussein, Mahmoud, Ahmed I. Galal, Emad Abd-Elrahman, and Mohamed Zorkany. "Internet of Things (IoT) Platform for Multi-Topic Messaging". Energies 13, no. 13 (June 30, 2020): 3346. http://dx.doi.org/10.3390/en13133346.

Full text
Abstract
IoT-based applications operate in a client–server architecture, which requires a specific communication protocol. This protocol is used to establish the client–server communication model, allowing all clients of the system to perform specific tasks through internet communications. Many data communication protocols for the Internet of Things are used by IoT platforms, including message queuing telemetry transport (MQTT), advanced message queuing protocol (AMQP), MQTT for sensor networks (MQTT-SN), data distribution service (DDS), constrained application protocol (CoAP), and simple object access protocol (SOAP). These protocols only support single-topic messaging. Thus, in this work, an IoT message protocol that supports multi-topic messaging is proposed. This protocol will add a simple “brain” for IoT platforms in order to realize an intelligent IoT architecture. Moreover, it will enhance the traffic throughput by reducing the overheads of messages and the delay of multi-topic messaging. Most current IoT applications depend on real-time systems. Therefore, an RTOS (real-time operating system) as a famous OS (operating system) is used for the embedded systems to provide the constraints of real-time features, as required by these real-time systems. Using RTOS for IoT applications adds important features to the system, including reliability. Many of the undertaken research works into IoT platforms have only focused on specific applications; they did not deal with the real-time constraints under a real-time system umbrella. In this work, the design of the multi-topic IoT protocol and platform is implemented for real-time systems and also for general-purpose applications; this platform depends on the proposed multi-topic communication protocol, which is implemented here to show its functionality and effectiveness over similar protocols.
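The single-topic limitation called out in this abstract is easy to see with an ordinary MQTT client. Below is a minimal sketch, not the authors' multi-topic protocol: the broker address and topic names are illustrative assumptions, and fanning the same reading out to several topics costs one PUBLISH packet per topic, which is exactly the overhead the proposed protocol targets.

    # Minimal sketch, not the authors' protocol: plain single-topic MQTT with
    # paho-mqtt (1.x-style constructor; 2.x also expects a CallbackAPIVersion
    # argument). Broker address and topic names are illustrative assumptions.
    import paho.mqtt.client as mqtt

    BROKER = "broker.example.org"                           # hypothetical broker
    TOPICS = ["plant/line1/temp", "plant/line1/humidity"]   # hypothetical topics

    client = mqtt.Client()
    client.connect(BROKER, 1883, keepalive=60)

    payload = b'{"value": 21.5}'

    # Standard MQTT: each PUBLISH packet carries exactly one topic.
    client.publish(TOPICS[0], payload, qos=1)

    # "Multi-topic" delivery today means one packet per topic, which is the
    # per-message overhead the proposed protocol sets out to remove.
    for topic in TOPICS:
        client.publish(topic, payload, qos=1)

    client.disconnect()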
APA, Harvard, Vancouver, ISO, and other styles

2

HONG, PENGYU, SHENG ZHONG, and WING H. WONG. "UBIC2 — TOWARDS UBIQUITOUS BIO-INFORMATION COMPUTING: DATA PROTOCOLS, MIDDLEWARE, AND WEB SERVICES FOR HETEROGENEOUS BIOLOGICAL INFORMATION INTEGRATION AND RETRIEVAL". International Journal of Software Engineering and Knowledge Engineering 15, no. 03 (June 2005): 475–85. http://dx.doi.org/10.1142/s0218194005001951.

Full text
Abstract
The Ubiquitous Bio-Information Computing (UBIC2) project aims to disseminate protocols and software packages to facilitate the development of heterogeneous bio-information computing units that are interoperable and may run distributedly. UBIC2 specifies biological data in XML formats and queries data using XQuery. The UBIC2 programming library provides interfaces for integrating, retrieving, and manipulating heterogeneous biological data. Interoperability is achieved via Simple Object Access Protocol (SOAP) based web services. The documents and software packages of UBIC2 are available at .
APA, Harvard, Vancouver, ISO, and other styles

3

Selvan, Satiaseelan, and Manmeet Mahinderjit Singh. "Adaptive Contextual Risk-Based Model to Tackle Confidentiality-Based Attacks in Fog-IoT Paradigm". Computers 11, no. 2 (January 24, 2022): 16. http://dx.doi.org/10.3390/computers11020016.

Full text
Abstract
The Internet of Things (IoT) allows billions of physical objects to be connected to gather and exchange information to offer numerous applications. It has unsupported features such as low latency, location awareness, and geographic distribution that are important for a few IoT applications. Fog computing is integrated into IoT to aid these features to increase computing, storage, and networking resources to the network edge. Unfortunately, it is faced with numerous security and privacy risks, raising severe concerns among users. Therefore, this research proposes a contextual risk-based access control model for Fog-IoT technology that considers real-time data information requests for IoT devices and gives dynamic feedback. The proposed model uses Fog-IoT environment features to estimate the security risk associated with each access request using device context, resource sensitivity, action severity, and risk history as inputs for the fuzzy risk model to compute the risk factor. Then, the proposed model uses a security agent in a fog node to provide adaptive features in which the device’s behaviour is monitored to detect any abnormal actions from authorised devices. The proposed model is then evaluated against the existing model to benchmark the results. The fuzzy-based risk assessment model with enhanced MQTT authentication protocol and adaptive security agent showed an accurate risk score for seven random scenarios tested compared to the simple risk score calculations.
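As a rough illustration of the inputs the abstract lists (device context, resource sensitivity, action severity, risk history), a crisp weighted-sum stand-in might look like the sketch below; the weights and the grant threshold are invented, and this is not the authors' fuzzy inference model.

    # Simplified, crisp stand-in for a contextual risk score (NOT the paper's
    # fuzzy model): weights and the grant threshold are illustrative only.
    def risk_score(device_context, resource_sensitivity, action_severity, risk_history):
        """All inputs are normalised to [0, 1]; higher means riskier."""
        return (0.2 * device_context
                + 0.3 * resource_sensitivity
                + 0.3 * action_severity
                + 0.2 * risk_history)

    def access_decision(score, threshold=0.5):
        # Below the threshold the request is granted; otherwise it is denied
        # or escalated for closer monitoring by the security agent.
        return "grant" if score < threshold else "deny"

    print(access_decision(risk_score(0.2, 0.8, 0.4, 0.1)))  # prints 'grant'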
APA, Harvard, Vancouver, ISO, and other styles

4

Mitts, Håkan, and Harri Hansén. "A simple and efficient routing protocol for the UMTS access network". Mobile Networks and Applications 1, no. 2 (June 1996): 167–81. http://dx.doi.org/10.1007/bf01193335.

Full text
APA, Harvard, Vancouver, ISO, and other styles

5

Polgar, Jana. "Using WSRP 2.0 with JSR 168 and 286 Portlets". International Journal of Web Portals 2, no. 1 (January 2010): 45–57. http://dx.doi.org/10.4018/jwp.2010010105.

Full text
Abstract
WSRP—Web Services for Remote Portlets—specification builds on current standard technologies, such as WSDL (Web Services Definition Language), UDDI (Universal Description, Discovery and Integration), and SOAP (Simple Object Access Protocol). It aims to sol
APA, Harvard, Vancouver, ISO, and other styles

6

Zientarski, Tomasz, Marek Miłosz, Marek Kamiński, and Maciej Kołodziej. "APPLICABILITY ANALYSIS OF REST AND SOAP WEB SERVICES". Informatics Control Measurement in Economy and Environment Protection 7, no. 4 (December 21, 2017): 28–31. http://dx.doi.org/10.5604/01.3001.0010.7249.

Full text
Abstract
Web Services are common means to exchange data and information over the network. Web Services make themselves available over the Internet, where technology and platform are independent. These web services can be developed on the basis of two interaction styles such as Simple Object Access Protocol (SOAP) and Representational State Transfer Protocol (REST). In this study, a comparison of REST and SOAP web services is presented in terms of their applicability in diverse areas. It is concluded that in the past both technologies were equally popular, but during the rapid Internet development the REST technology has become the leading one in the area of access to Internet services.
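To make the stylistic difference concrete, a hedged sketch of the same "get customer 42" operation issued in both styles follows; the endpoint URLs, the SOAPAction value and the body schema are invented for illustration and do not refer to a real service.

    # Hedged illustration of the two interaction styles compared above. The
    # endpoint URLs, the SOAPAction header value and the body schema are
    # invented; they do not refer to a real service.
    import requests

    # REST: the resource is addressed by the URL, the HTTP verb carries the semantics.
    rest_resp = requests.get("https://api.example.org/customers/42", timeout=10)
    print(rest_resp.status_code, rest_resp.json())

    # SOAP: a single POST endpoint; the operation and arguments live in an XML envelope.
    soap_body = """<?xml version="1.0"?>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body>
        <GetCustomer xmlns="http://example.org/customers"><Id>42</Id></GetCustomer>
      </soap:Body>
    </soap:Envelope>"""
    soap_resp = requests.post(
        "https://api.example.org/soap",
        data=soap_body,
        headers={"Content-Type": "text/xml; charset=utf-8",
                 "SOAPAction": "http://example.org/customers/GetCustomer"},
        timeout=10,
    )
    print(soap_resp.status_code)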
APA, Harvard, Vancouver, ISO, and other styles

7

Ge, Jian Xia, and Wen Ya Xiao. "Network Layer Network Topology Discovery Algorithm Research". Applied Mechanics and Materials 347-350 (August 2013): 2071–76. http://dx.doi.org/10.4028/www.scientific.net/amm.347-350.2071.

Full text
Abstract
As the network information age develops, people depend more and more on computer networks, so the security and reliability of the network itself become very important and place higher demands on network management. This paper analyses two network-layer topology discovery algorithms, based on the SNMP and ICMP protocols respectively, and on that basis proposes an improved algorithm that combines the two. The resulting discovery process is simple, efficient and generalises well, and it resolves the problems of subnet identification and of recognising multi-access routers that arise during discovery.
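For readers unfamiliar with the SNMP half of such discovery algorithms, a minimal sketch of reading a router's ARP (ipNetToMedia) table with the classic pysnmp high-level API is shown below; the router address and community string are assumptions, and the ICMP part (for example a ping sweep) is omitted.

    # Minimal sketch of the SNMP half of such discovery: walk the router's ARP
    # table (ipNetToMediaPhysAddress, 1.3.6.1.2.1.4.22.1.2) with the classic
    # pysnmp high-level API. Router address and community string are assumptions.
    from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                              ContextData, ObjectType, ObjectIdentity, nextCmd)

    ROUTER = "192.0.2.1"      # hypothetical router
    COMMUNITY = "public"      # hypothetical read community

    for error_indication, error_status, _index, var_binds in nextCmd(
            SnmpEngine(),
            CommunityData(COMMUNITY),
            UdpTransportTarget((ROUTER, 161)),
            ContextData(),
            ObjectType(ObjectIdentity("1.3.6.1.2.1.4.22.1.2")),
            lexicographicMode=False):
        if error_indication or error_status:
            break
        for name, value in var_binds:
            # Each row maps an interface/IP index to a neighbour's MAC address.
            print(name.prettyPrint(), "=", value.prettyPrint())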
APA, Harvard, Vancouver, ISO, and other styles

8

Ge, Jian Xia, and Wen Ya Xiao. "Network Layer Network Topology Discovery Algorithm Research". Applied Mechanics and Materials 380-384 (August 2013): 1327–32. http://dx.doi.org/10.4028/www.scientific.net/amm.380-384.1327.

Full text
Abstract
As the network information age develops, people depend more and more on computer networks, so the security and reliability of the network itself become very important and place higher demands on network management. This paper analyses two network-layer topology discovery algorithms, based on the SNMP and ICMP protocols respectively, and on that basis proposes an improved algorithm that combines the two. The resulting discovery process is simple, efficient and generalises well, and it resolves the problems of subnet identification and of recognising multi-access routers that arise during discovery.
APA, Harvard, Vancouver, ISO, and other styles

9

Song, Min Su, Jae Dong Lee, Young-Sik Jeong, Hwa-Young Jeong, and Jong Hyuk Park. "DS-ARP: A New Detection Scheme for ARP Spoofing Attacks Based on Routing Trace for Ubiquitous Environments". Scientific World Journal 2014 (2014): 1–7. http://dx.doi.org/10.1155/2014/264654.

Full text
Abstract
Despite the convenience, ubiquitous computing suffers from many threats and security risks. Security considerations in the ubiquitous network are required to create enriched and more secure ubiquitous environments. The address resolution protocol (ARP) is a protocol used to identify the IP address and the physical address of the associated network card. ARP is designed to work without problems in general environments. However, since it does not include security measures against malicious attacks, in its design, an attacker can impersonate another host using ARP spoofing or access important information. In this paper, we propose a new detection scheme for ARP spoofing attacks using a routing trace, which can be used to protect the internal network. Tracing routing can find the change of network movement path. The proposed scheme provides high constancy and compatibility because it does not alter the ARP protocol. In addition, it is simple and stable, as it does not use a complex algorithm or impose extra load on the computer system.
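The general idea of spotting inconsistent ARP state can be illustrated with a far simpler check than the paper's routing-trace scheme: watch whether the MAC address recorded for the default gateway changes. The sketch below is only that illustration; it is Linux-specific (it reads /proc/net/arp) and the gateway address is an assumption.

    # Simplified illustration of spotting ARP inconsistencies: watch whether the
    # MAC recorded for the default gateway changes. This is NOT the paper's
    # routing-trace scheme. Linux-only (reads /proc/net/arp); the gateway
    # address is an assumption.
    import time

    GATEWAY_IP = "192.168.1.1"        # hypothetical default gateway

    def gateway_mac():
        with open("/proc/net/arp") as arp:
            next(arp)                 # skip the header row
            for line in arp:
                fields = line.split()
                if fields[0] == GATEWAY_IP:
                    return fields[3]  # HW address column
        return None

    baseline = gateway_mac()
    while True:
        current = gateway_mac()
        if baseline and current and current != baseline:
            print("Possible ARP spoofing: gateway MAC changed",
                  baseline, "->", current)
        time.sleep(5)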
APA, Harvard, Vancouver, ISO, and other styles

10

Tan, Wenken, and Jianmin Hu. "Design of the Wireless Network Hierarchy System of Intelligent City Industrial Data Management Based on SDN Network Architecture". Security and Communication Networks 2021 (November 10, 2021): 1–12. http://dx.doi.org/10.1155/2021/5732300.

Full text
Abstract
With the rapid development of the industrial Internet of Things and the comprehensive popularization of mobile intelligent devices, the construction of smart city and economic development of wireless network demand are increasingly high. SDN has the advantages of control separation, programmable interface, and centralized control logic. Therefore, integrating this technical concept into the smart city data management WLAN network not only can effectively solve the problems existing in the previous wireless network operation but also provide more functions according to different user needs. In this case, the traditional WLAN network is of low cost and is simple to operate, but it cannot guarantee network compatibility and performance. From a practical perspective, further network compatibility and security are a key part of industrial IoT applications. This paper designs the network architecture of smart city industrial IoT based on SDN, summarizes the access control requirements and research status of industrial IoT, and puts forward the access control requirements and objectives of industrial IoT based on SDN. The characteristics of the industrial Internet of Things are regularly associated with data resources. In the framework of SDN industrial Internet of Things, gateway protocol is simplified and topology discovery algorithm is designed. The access control policy is configured on the gateway. The access control rule can be dynamically adjusted in real time. An SDN-based intelligent city industrial Internet of Things access control function test platform was built, and the system was simulated. The proposed method is compared with other methods in terms of extension protocol and channel allocation algorithm. Experimental results verify the feasibility of the proposed scheme. Finally, on the basis of performance analysis, the practical significance of the design of a smart city wireless network hierarchical data management system based on SDN industrial Internet of Things architecture is expounded.
APA, Harvard, Vancouver, ISO, and other styles

11

Halili, Festim, and Erenis Ramadani. "Web Services: A Comparison of Soap and Rest Services". Modern Applied Science 12, no. 3 (February 28, 2018): 175. http://dx.doi.org/10.5539/mas.v12n3p175.

Full text
Abstract
Interest in Web services has grown rapidly in the few years since they came into use. A web service can be described as a method of exchanging or communicating information between devices over a network. Often, when deciding which service fits the architecture design of a product, the question arises: which service should be used, and when? SOAP (Simple Object Access Protocol) and REST (Representational State Transfer) are the two most widely used protocols for exchanging messages, and choosing one over the other brings its own advantages and disadvantages. In this paper we address the differences between them and the best practices for when to use one over the other.
APA, Harvard, Vancouver, ISO, and other styles

12

Lertsutthiwong, Monchai, Thinh Nguyen, and Alan Fern. "Scalable Video Streaming for Single-Hop Wireless Networks Using a Contention-Based Access MAC Protocol". Advances in Multimedia 2008 (2008): 1–21. http://dx.doi.org/10.1155/2008/928521.

Full text
Abstract
Limited bandwidth and high packet loss rate pose a serious challenge for video streaming applications over wireless networks. Even when packet loss is not present, the bandwidth fluctuation, as a result of an arbitrary number of active flows in an IEEE 802.11 network, can significantly degrade the video quality. This paper aims to enhance the quality of video streaming applications in wireless home networks via a joint optimization of video layer-allocation technique, admission control algorithm, and medium access control (MAC) protocol. Using an Aloha-like MAC protocol, we propose a novel admission control framework, which can be viewed as an optimization problem that maximizes the average quality of admitted videos, given a specified minimum video quality for each flow. We present some hardness results for the optimization problem under various conditions and propose some heuristic algorithms for finding a good solution. In particular, we show that a simple greedy layer-allocation algorithm can perform reasonably well, although it is typically not optimal. Consequently, we present a more expensive heuristic algorithm that guarantees to approximate the optimal solution within a constant factor. Simulation results demonstrate that our proposed framework can improve the video quality up to 26% as compared to those of the existing approaches.
APA, Harvard, Vancouver, ISO, and other styles

13

Zhang, Xiao. "Intranet Web System, a Simple Solution to Companywide Information-on-demand". Proceedings, annual meeting, Electron Microscopy Society of America 54 (August 11, 1996): 404–5. http://dx.doi.org/10.1017/s0424820100164489.

Full text
Abstract
Intranet, a private corporate network that mirrors the internet Web structure, is the new internal communication technology being embraced by more than 50% of large US companies. Within the intranet, computers using Web-server software store and manage documents built on the Web’s hypertext markup language (HTML) format. This emerging technology allows disparate computer systems within companies to “speak” to one another using the Internet’s TCP/IP protocol. A “fire wall” allows internal users Internet access, but denies external intruders intranet access. As industrial microscopists, how can we take advantage of this new technology? This paper is a preliminary summary of our recent progress in building an intranet Web system among GE microscopy labs. Applications and future development are also discussed.The intranet Web system is an inexpensive yet powerful alternative to other forms of internal communication. It can greatly improve communications, unlock hidden information, and transform an organization. The intranet microscopy Web system was built on the existing GE corporate-wide Ethernet link running Internet’s TCP/IP protocol (Fig. 1). Netscape Navigator was selected as the Web browser. Web’s HTML documentation was generated by using Microsoft® Internet Assistant software. Each microscopy lab has its own home page. The microscopy Web system is also an integrated part of GE Plastics analytical technology Web system.
APA, Harvard, Vancouver, ISO, and other styles

14

Chang, Chee Er, Azhar Kassim Mustapha, and Faisal Mohd-Yasin. "FPGA Prototyping of Web Service Using REST and SOAP Packages". Chips 1, no. 3 (December 5, 2022): 210–17. http://dx.doi.org/10.3390/chips1030014.

Full text
Abstract
This Communication reports on FPGA prototyping of an embedded web service that sends XML messages under two different packages, namely Simple Object Access Protocol (SOAP) and Representational State Transfer (REST). The request and response messages are communicated through a 100 Mbps local area network between a Spartan-3E FPGA board and washing machine simulator. The performances of REST-based and SOAP-based web services implemented on reconfigurable hardware are then compared. In general, the former performs better than the latter in terms of FPGA resource utilization (~12% less), message length (~57% shorter), and processing time (~4.5 μs faster). This work confirms the superiority of REST over SOAP for data transmission using reconfigurable computing, which paves the way for adoption of these low-cost systems for web services of consumer electronics such as home appliances.
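The message-length gap reported here is easy to reproduce in spirit by measuring an equivalent payload in both encodings; the washing-machine status fields below are invented, and only the relative sizes matter.

    # Back-of-the-envelope illustration of the message-length gap measured in
    # the article. The washing-machine status payload is invented; only the
    # relative size of the two encodings is the point.
    import json

    soap_message = """<?xml version="1.0"?>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body>
        <GetStatusResponse xmlns="http://example.org/washer">
          <State>RUNNING</State>
          <MinutesLeft>27</MinutesLeft>
        </GetStatusResponse>
      </soap:Body>
    </soap:Envelope>"""

    rest_message = json.dumps({"state": "RUNNING", "minutesLeft": 27})

    print("SOAP bytes:", len(soap_message.encode("utf-8")))
    print("REST bytes:", len(rest_message.encode("utf-8")))
    # The envelope and namespace boilerplate dominate the SOAP size, which is
    # consistent with the shorter messages the article reports for REST.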
APA, Harvard, Vancouver, ISO, and other styles

15

Comeau, Donald C., Chih-Hsuan Wei, Rezarta Islamaj Doğan, and Zhiyong Lu. "PMC text mining subset in BioC: about three million full-text articles and growing". Bioinformatics 35, no. 18 (January 31, 2019): 3533–35. http://dx.doi.org/10.1093/bioinformatics/btz070.

Full text
Abstract
Abstract Motivation Interest in text mining full-text biomedical research articles is growing. To facilitate automated processing of nearly 3 million full-text articles (in PubMed Central® Open Access and Author Manuscript subsets) and to improve interoperability, we convert these articles to BioC, a community-driven simple data structure in either XML or JavaScript Object Notation format for conveniently sharing text and annotations. Results The resultant articles can be downloaded via both File Transfer Protocol for bulk access and a Web API for updates or a more focused collection. Since the availability of the Web API in 2017, our BioC collection has been widely used by the research community. Availability and implementation https://www.ncbi.nlm.nih.gov/research/bionlp/APIs/BioC-PMC/.
APA, Harvard, Vancouver, ISO, and other styles

16

Al-Dailami, Abdulrahman, Chang Ruan, Zhihong Bao, and Tao Zhang. "QoS3: Secure Caching in HTTPS Based on Fine-Grained Trust Delegation". Security and Communication Networks 2019 (December 28, 2019): 1–16. http://dx.doi.org/10.1155/2019/3107543.

Full text
Abstract
With the ever-increasing concern in network security and privacy, a major portion of Internet traffic is encrypted now. Recent research shows that more than 70% of Internet content is transmitted using HyperText Transfer Protocol Secure (HTTPS). However, HTTPS encryption eliminates the advantages of many intermediate services like the caching proxy, which can significantly degrade the performance of web content delivery. We argue that these restrictions lead to the need for other mechanisms to access sites quickly and safely. In this paper, we introduce QoS3, which is a protocol that can overcome such limitations by allowing clients to explicitly and securely re-introduce in-network caching proxies using fine-grained trust delegation without compromising the integrity of the HTTPS content and modifying the format of Transport Layer Security (TLS). In QoS3, we classify web page contents into two types: (1) public contents that are common for all users, which can be stored in the caching proxies, and (2) private contents that are specific for each user. Correspondingly, QoS3 establishes two separate TLS connections between the client and the web server for them. Specifically, for private contents, QoS3 just leverages the original HTTPS protocol to deliver them, without involving any middlebox. For public contents, QoS3 allows clients to delegate trust to specific caching proxy along the path, thereby allowing the clients to use the cached contents in the caching proxy via a delegated HTTPS connection. Meanwhile, to prevent Man-in-the-Middle (MitM) attacks on public contents, QoS3 validates the public contents by employing Document object Model (DoM) object-level checksums, which are delivered through the original HTTPS connection. We implement a prototype of QoS3 and evaluate its performance in our testbed. Experimental results show that QoS3 provides acceleration on page load time ranging between 30% and 64% over traditional HTTPS with negligible overhead. Moreover, QoS3 is deployable since it requires just minor software modifications to the server, client, and the middlebox.
APA, Harvard, Vancouver, ISO, and other styles

17

Alasri, Abbas, and Rossilawati Sulaiman. "Protection of XML-Based Denail-of-Service and Httpflooding Attacks in Web Services Using the Middleware Tool". International Journal of Engineering & Technology 7, no. 4.7 (September 27, 2018): 322. http://dx.doi.org/10.14419/ijet.v7i4.7.20570.

Full text
Abstract
A web service is defined as the method of communication between web applications and clients. Web services are very flexible and scalable, as they are independent of both the hardware and software infrastructure. The lack of security protection offered by web services creates a gap that attackers can exploit. Web services are offered over the HyperText Transfer Protocol (HTTP) with the Simple Object Access Protocol (SOAP) as the underlying infrastructure, and they rely heavily on the Extensible Markup Language (XML). Hence, web services are most vulnerable to attacks that use XML as the attack parameter. Recently, a new type of XML-based Denial-of-Service (XDoS) attack has surfaced, which targets web services. The purpose of these attacks is to consume system resources by sending SOAP requests that contain malicious XML content. Unfortunately, these malicious requests go undetected beneath the network or transport layers of the Transmission Control Protocol/Internet Protocol (TCP/IP), as they appear to be legitimate packets. In this paper, a middleware tool is proposed to provide real-time detection and prevention of XDoS and HTTP flooding attacks in web services. The tool focuses on attacks on two layers of the Open Systems Interconnection (OSI) model: it detects and prevents XDoS attacks at the application layer and prevents flooding attacks at the network layer. A rule-based approach is used to classify requests as either normal or malicious in order to detect XDoS attacks. The experimental results have demonstrated that the rule-based technique efficiently detects and prevents XDoS and HTTP flooding attacks, such as oversized payload, coercive parsing and XML external entities, in close to real time (about 0.006 s) over the web services. The middleware tool provides close to 100% service availability for normal requests, hence protecting the web service against XDoS and distributed XDoS (DXDoS) attacks.
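A hedged sketch of the kind of rule-based screening described above (oversized payload, coercive parsing, external entities) is given below; the thresholds are invented, and this is not the authors' middleware.

    # Hedged sketch of rule-based screening for the XDoS patterns named above
    # (oversized payload, coercive parsing, external entities). The thresholds
    # are invented; this is not the authors' middleware.
    import io
    import xml.etree.ElementTree as ET

    MAX_BYTES = 64 * 1024     # oversized-payload rule
    MAX_DEPTH = 50            # coercive-parsing (deep nesting) rule

    def classify_soap_request(raw: bytes) -> str:
        if len(raw) > MAX_BYTES:
            return "malicious: oversized payload"
        if b"<!DOCTYPE" in raw or b"<!ENTITY" in raw:
            return "malicious: DTD / external entity declaration"
        depth = 0
        try:
            for event, _elem in ET.iterparse(io.BytesIO(raw), events=("start", "end")):
                depth += 1 if event == "start" else -1
                if depth > MAX_DEPTH:
                    return "malicious: coercive parsing (nesting too deep)"
        except ET.ParseError:
            return "malicious: malformed XML"
        return "normal"

    print(classify_soap_request(b"<a>" * 1000 + b"</a>" * 1000))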
APA, Harvard, Vancouver, ISO, and other styles

18

Cao, Jin, Hui Li, Maode Ma, and Fenghua Li. "UPPGHA: Uniform Privacy Preservation Group Handover Authentication Mechanism for mMTC in LTE-A Networks". Security and Communication Networks 2018 (2018): 1–16. http://dx.doi.org/10.1155/2018/6854612.

Full text
Abstract
Machine Type Communication (MTC), as one of the most important wireless communication technologies in the future wireless communication, has become the new business growth point of mobile communication network. It is a key point to achieve seamless handovers within Evolved-Universal Terrestrial Radio Access Network (E-UTRAN) for massive MTC (mMTC) devices in order to support mobility in the Long Term Evolution-Advanced (LTE-A) networks. When mMTC devices simultaneously roam from a base station to a new base station, the current handover mechanisms suggested by the Third-Generation Partnership Project (3GPP) require several handover signaling interactions, which could cause the signaling load over the access network and the core network. Besides, several distinct handover procedures are proposed for different mobility scenarios, which will increase the system complexity. In this paper, we propose a simple and secure uniform group-based handover authentication scheme for mMTC devices based on the multisignature and aggregate message authentication code (AMAC) techniques, which is to fit in with all of the mobility scenarios in the LTE-A networks. Compared with the current 3GPP standards, our scheme can achieve a simple authentication process with robust security protection including privacy preservation and thus avoid signaling congestion. The correctness of the proposed group handover authentication protocol is formally proved in the Canetti-Krawczyk (CK) model and verified based on the AVISPA and SPAN.
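For orientation, the AMAC primitive mentioned above can be illustrated generically as the XOR of per-device HMAC tags, in the spirit of the classic aggregate-MAC construction; the keys and messages below are invented, and this is not the paper's handover protocol.

    # Generic aggregate-MAC illustration (XOR of per-device HMAC tags), in the
    # spirit of the AMAC primitive cited above. NOT the paper's handover
    # protocol; keys and messages are invented and the group check is simplified.
    import hashlib
    import hmac

    def tag(key: bytes, message: bytes) -> bytes:
        return hmac.new(key, message, hashlib.sha256).digest()

    def aggregate(tags):
        agg = bytes(32)
        for t in tags:
            agg = bytes(a ^ b for a, b in zip(agg, t))
        return agg

    # Each mMTC device MACs its own handover request under its own key.
    devices = [(b"key-dev-%d" % i, b"handover-request-%d" % i) for i in range(3)]
    agg_tag = aggregate(tag(k, m) for k, m in devices)

    # The target base station, knowing every key and request, recomputes one
    # aggregate tag instead of verifying each device's tag separately.
    expected = aggregate(tag(k, m) for k, m in devices)
    print("group verified:", hmac.compare_digest(agg_tag, expected))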
APA, Harvard, Vancouver, ISO, and other styles

19

Shukla, Samiksha, D. K. Mishra, and Kapil Tiwari. "Performance Enhancement of Soap Via Multi Level Caching". Mapana - Journal of Sciences 9, no. 2 (November 30, 2010): 47–52. http://dx.doi.org/10.12723/mjs.17.6.

Full text
Abstract
Due to the complex infrastructure of web applications, the response time for the various service requests made by clients can be significantly long. The Simple Object Access Protocol (SOAP) is a recent and emerging technology in the field of web services which aims at replacing traditional methods of remote communication. The basic aim in designing SOAP was to increase interoperability among a broad range of programs and environments: SOAP allows applications written in different languages and installed on different platforms to communicate with each other over the network. Web services demand security, high performance and extensibility. SOAP provides various benefits for interoperability, but at the price of performance degradation and security, which makes plain SOAP a poor choice for high-performance web services. In this paper we present a new approach that enables multi-level caching on the client side as well as the server side. The paper describes an implementation based on the Apache Java SOAP client, which gives radically enhanced performance.
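A minimal sketch of the client-side level of such a cache, keyed by a hash of the serialized request, is shown below; send_soap_request is a hypothetical stand-in for the real transport, and this is not the paper's Apache-based implementation.

    # Minimal sketch of the client-side level of such a cache, keyed by a hash
    # of the serialized request. `send_soap_request` is a hypothetical stand-in
    # for the real transport (an HTTP POST of the envelope).
    import hashlib

    _cache = {}

    def send_soap_request(endpoint: str, envelope: str) -> str:
        # Placeholder for the actual network call.
        return "<soap:Envelope>...response...</soap:Envelope>"

    def cached_call(endpoint: str, envelope: str) -> str:
        key = hashlib.sha256((endpoint + envelope).encode("utf-8")).hexdigest()
        if key not in _cache:            # miss: pay the network and parsing cost once
            _cache[key] = send_soap_request(endpoint, envelope)
        return _cache[key]               # hit: served locally

    first = cached_call("https://svc.example.org/soap", "<Envelope>GetQuote</Envelope>")
    second = cached_call("https://svc.example.org/soap", "<Envelope>GetQuote</Envelope>")
    print(first == second)               # True: the second call is a cache hit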
APA, Harvard, Vancouver, ISO, and other styles

20

Yulianto, Budi. "Analisis Korelasi Faktor Perilaku Konsumen terhadap Keputusan Penggunaan Teknologi Komunikasi Voip". ComTech: Computer, Mathematics and Engineering Applications 5, no. 1 (June 30, 2014): 236. http://dx.doi.org/10.21512/comtech.v5i1.2619.

Full text
Abstract
The advancement of communication technology that is combined with computer and the Internet brings Internet Telephony or VoIP (Voice over Internet Protocol). Through VoIP technology, the cost of telecommunications in particular for international direct dialing (IDD) can be reduced. This research analyzes the growth rate of VoIP users, the correlation of the consumer behavior towards using VoIP, and cost comparisons of using telecommunication services between VoIP and other operators. This research is using descriptive analysis method to describe researched object through sampling data collection for hypothesis testing. This research will lead to the conclusion that the use of VoIP for international area will be more advantageous than the use of other operators of GSM (Global System for Mobile), CDMA (Code Division Multiple Access), or the PSTN (Public Switched Telephone Network).
APA, Harvard, Vancouver, ISO, and other styles

21

Al-Mashadani, Abdulrahman Khalid Abdullah, and Muhammad Ilyas. "Distributed Denial of Service Attack Alleviated and Detected by Using Mininet and Software Defined Network". Webology 19, no. 1 (January 20, 2022): 4129–44. http://dx.doi.org/10.14704/web/v19i1/web19272.

Full text
Abstract
Network security, and how to keep networks safe from malicious attacks, nowadays attracts huge interest from developers and cyber-security experts. A Software-Defined Network (SDN) is a simple network framework that allows programmability and monitoring, enabling operators to manage the entire network in a consistent and comprehensive manner; it can also be used to detect and alleviate DDoS attacks, and SDN is now the trend in the evolution of network security. Among the many threats that networks face is the distributed Denial of Service (DDoS) attack, which exploits architectural weaknesses of traditional networks; SDN uses a new architecture whose strength is the separation of the control and data planes. A DDoS attack prevents users from accessing network resources or introduces huge delays into the network. This paper shows the impact of DDoS attacks on SDN and how SDN applications written in Python, using the OpenFlow protocol, can automatically detect and resist attacks, with an average response time to an attack of between 95 and 145 seconds.
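The detection side of such SDN applications often reduces to rate-threshold logic over packet-in events. The controller-agnostic sketch below illustrates only that idea; the window, threshold and event feed are assumptions, not the authors' code.

    # Controller-agnostic sketch of the rate-threshold idea behind many SDN DDoS
    # detectors: count packet-in events per source over a sliding window and flag
    # sources that exceed a limit. Window and threshold are invented; a real
    # deployment would call on_packet_in from the controller's packet-in handler
    # and push OpenFlow drop rules for flagged sources.
    import time
    from collections import defaultdict, deque

    WINDOW_SECONDS = 10
    THRESHOLD = 500                  # packet-in events per source per window

    events = defaultdict(deque)      # source IP -> timestamps of recent packet-ins

    def on_packet_in(src_ip: str) -> bool:
        """Record one packet-in for src_ip; return True if it looks like a flood."""
        now = time.monotonic()
        queue = events[src_ip]
        queue.append(now)
        while queue and now - queue[0] > WINDOW_SECONDS:
            queue.popleft()
        return len(queue) > THRESHOLD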
APA, Harvard, Vancouver, ISO, and other styles

22

Bartlett, H., and R. Wong. "Modelling and Simulation of the Operational and Information Processing Functions of a Computer-Integrated Manufacturing Cell". Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture 209, no. 4 (August 1995): 245–59. http://dx.doi.org/10.1243/pime_proc_1995_209_081_02.

Full text
Abstract
This paper investigates the information processing function of a computer-integrated manufacturing (CIM) system, which is usually neglected by engineers when they study the performance of a manufacturing system. The feasibility of developing a complete simulation model which would include both the operational and information processing functions is therefore considered. In order to achieve this, a typical pick-and-place manufacturing cell was considered in which 46 devices were connected to a local area network (LAN). Two independent simulation tools, SIMAN* and L-NET†, were used in order to develop the complete model. The model was evaluated and used to study the performance of the pick-and-place cell using different communication protocols such as the IEEE 802.3 carrier sense multiple access with collision detection (CSMA/CD), IEEE 802.4 token bus and the IEEE 802.5 token ring. Results presented in this paper show that with careful design and simulation it is possible to develop a complete model which includes both operational and informational processing functions. Although the example of the pick-and-place cell is relatively simple the technique adopted could be applied to any CIM system. Results have also shown that for the pick-and-place cell considered in this paper an IEEE 802.3 CSMA/CD protocol operating at 10 Mbps would guarantee access to the network within the shortest station processing time.
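As a toy illustration of the CSMA/CD part of such a comparison, a truncated binary exponential backoff sketch follows; it is far simpler than the SIMAN and L-NET models used in the paper, and the station behaviour and attempt count are invented.

    # Toy truncated binary exponential backoff, a tiny stand-in for the CSMA/CD
    # part of the comparison above (the paper used SIMAN and L-NET models; the
    # station behaviour and attempt count here are invented).
    import random

    SLOT_TIME_US = 51.2              # classic 10 Mbps Ethernet slot time
    MAX_BACKOFF_EXP = 10

    def backoff_delay(collision_count: int) -> float:
        """Delay in microseconds after the n-th successive collision."""
        k = min(collision_count, MAX_BACKOFF_EXP)
        return random.randint(0, 2 ** k - 1) * SLOT_TIME_US

    # Two stations collide repeatedly; whichever draws the shorter delay transmits.
    for attempt in range(1, 4):
        a, b = backoff_delay(attempt), backoff_delay(attempt)
        winner = "A" if a < b else "B" if b < a else "collision again"
        print(f"attempt {attempt}: A waits {a:.1f} us, B waits {b:.1f} us -> {winner}")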
APA, Harvard, Vancouver, ISO, and other styles

23

Bujari, Armir, and Claudio Enrico Palazzi. "AirCache: A Crowd-Based Solution for Geoanchored Floating Data". Mobile Information Systems 2016 (2016): 1–12. http://dx.doi.org/10.1155/2016/3247903.

Full text
Abstract
The Internet edge has evolved from a simple consumer of information and data to an eager producer feeding sensed data at a societal scale. The crowdsensing paradigm is a representative example which has the potential to revolutionize the way we acquire and consume data. Indeed, especially in the era of smartphones, the geographical and temporal scope of data is often local. For instance, users’ queries are more and more frequently about a nearby object, event, person, location, and so forth. These queries could certainly be processed and answered locally, without the need for contacting a remote server through the Internet. In this scenario, the data is fed (sensed) by the users and, as a consequence, data lifetime is limited by human organizational factors (e.g., mobility). On this basis, data survivability in the Area of Interest (AoI) is crucial and, if not guaranteed, could undermine system deployment. Addressing this scenario, we discuss and contribute a novel protocol named AirCache, whose aim is to guarantee data availability in the AoI while at the same time reducing the data access costs at the network edges. We assess our proposal through a simulation analysis showing that our approach effectively fulfills its design objectives.
APA, Harvard, Vancouver, ISO, and other styles

24

Djibo, Moumouni, Wend Yam Serge Boris Ouedraogo, Ali Doumounia, Serge Roland Sanou, Moumouni Sawadogo, Idrissa Guira, Nicolas Koné, Christian Chwala, Harald Kunstmann, and François Zougmoré. "Towards Innovative Solutions for Monitoring Precipitation in Poorly Instrumented Regions: Real-Time System for Collecting Power Levels of Microwave Links of Mobile Phone Operators for Rainfall Quantification in Burkina Faso". Applied System Innovation 6, no. 1 (December 27, 2022): 4. http://dx.doi.org/10.3390/asi6010004.

Full text
Abstract
Since the 1990s, mobile telecommunication networks have gradually become denser around the world. Nowadays, large parts of their backhaul network consist of commercial microwave links (CMLs). Since CML signals are attenuated by rainfall, the exploitation of records of this attenuation is an innovative and an inexpensive solution for precipitation monitoring purposes. Performance data from mobile operators’ networks are crucial for the implementation of this technology. Therefore, a real-time system for collecting and storing CML power levels from the mobile phone operator “Telecel Faso” in Burkina Faso has been implemented. This new acquisition system, which uses the Simple Network Management Protocol (SNMP), can simultaneously record the transmitted and received power levels from all the CMLs to which it has access, with a time resolution of one minute. Installed at “Laboratoire des Matériaux et Environnement de l’Université Joseph KI-ZERBO (Burkina Faso)”, this acquisition system is dynamic and has gradually grown from eight, in 2019, to more than 1000 radio links of Telecel Faso’s network in 2021. The system covers the capital Ouagadougou and the main cities of Burkina Faso (Bobo Dioulasso, Ouahigouya, Koudougou, and Kaya) as well as the axes connecting Ouagadougou to these cities.
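A hedged sketch of such a one-minute SNMP polling loop is shown below; the transmit/receive power OIDs are vendor-specific and not given in the abstract, so placeholders are used, and the host and community string are assumptions as well.

    # Hedged sketch of a one-minute SNMP polling loop for CML power levels. The
    # OIDs are placeholders: real transmit/receive power OIDs are vendor-specific
    # and are not given in the abstract; host and community are assumptions too.
    import time
    from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                              ContextData, ObjectType, ObjectIdentity, getCmd)

    LINKS = [("198.51.100.10", "public")]        # hypothetical CML terminal(s)
    TX_POWER_OID = "1.3.6.1.4.1.99999.1.1.0"     # placeholder OID
    RX_POWER_OID = "1.3.6.1.4.1.99999.1.2.0"     # placeholder OID

    def poll(host, community):
        error_indication, error_status, _index, var_binds = next(getCmd(
            SnmpEngine(), CommunityData(community),
            UdpTransportTarget((host, 161)), ContextData(),
            ObjectType(ObjectIdentity(TX_POWER_OID)),
            ObjectType(ObjectIdentity(RX_POWER_OID))))
        if error_indication or error_status:
            return None
        return [value.prettyPrint() for _name, value in var_binds]

    while True:
        for host, community in LINKS:
            print(time.strftime("%Y-%m-%d %H:%M"), host, poll(host, community))
        time.sleep(60)                           # one-minute time resolution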
APA, Harvard, Vancouver, ISO, and other styles

25

Bristola, Glenn Arwin M. "Integrating of voice recognition email application system for visually impaired person using linear regression algorithm". South Asian Journal of Engineering and Technology 12, no. 1 (March 31, 2022): 74–83. http://dx.doi.org/10.26524/sajet.2022.12.12.

Full text
Abstract
The outcome of this study will surely help visually impaired people, who face difficulties in accessing the computer system; voice recognition will help them to access e-mail. This study also reduces the cognitive load placed on visually impaired users to remember and type characters using a keyboard. If this system is implemented, the self-esteem and the social and emotional well-being of visually impaired users will be lifted, for they will feel that they are valued in society and have fair treatment and access to technology. The main function of this study is to use the user's keyboard in a way that responds through voice. The purpose of this study is to help a visually impaired person use a modernized application to interact with a voice recognition system through email on different types of modern gadgets such as computers or mobile phones. In terms of the functionality of the application, the proponents will use a set of APIs (Application Program Interfaces) such as the Google speech-to-text and text-to-speech services, processed through the email system, and the SNMTP or Simple Network Management Protocol will be used for mailing services; in programming software, the proponent will be using PHP for the backend of the web interface, while HTML and CSS are the front-end technologies used to create the web-based user interface, with voice typing and dictation speech interaction models using the Windows dictation engine. The proponent used a descriptive research design in this study. Descriptive research design is used by the proponents to describe the characteristics of the population or phenomenon of visually impaired persons being studied. Descriptive research is mainly done because the researchers want to gain a better understanding of a topic; it focuses on providing information that is useful in development. The research is based on a mixed method focused on producing informative outcomes that can be used. Based on the results of the surveys, conclusions were drawn: the majority of the respondents were male adults ranging in age from 32 to 41, all working as massage therapists. The majority of the respondents rated the overall function of the application as Excellent and rated the level of security of the application as Secured.
APA, Harvard, Vancouver, ISO, and other styles

26

Bristol, Glenn Arwin M. "Integrating of Voice Recognition Email Application System for Visually Impaired Person using Linear Regression Algorithm". Proceedings of The International Halal Science and Technology Conference 14, no. 1 (March 10, 2022): 56–66. http://dx.doi.org/10.31098/ihsatec.v14i1.486.

Full text
Abstract
The outcome of this study will surely help visually impaired people who face difficulties in accessing the computer system; voice recognition will help them to access email. This study also reduces the cognitive load placed on visually impaired users to remember and type characters using a keyboard. If this system is implemented, the self-esteem and the social and emotional well-being of visually impaired users will be lifted, for they will feel that they are valued in society and have fair treatment and access to technology. The main function of this study is to use the user's keyboard in a way that responds through voice. The purpose of this study is to help a visually impaired person use the modern application to interact with voice recognition systems through email on different types of modern gadgets such as computers or mobile phones. In terms of the functionality of the application, the proponents will use a set of APIs (Application Program Interfaces) such as the Google speech-to-text and text-to-speech services, processed through the email system, and the SNMTP or Simple Network Management Protocol will be used for mailing services; in programming software, the proponent will be using PHP for the backend of a web interface. For the creation of a web-based UI, HTML and CSS will be used, with voice typing and dictation speech interaction models using the Windows dictation engine. The proponent used a descriptive research design in this study. Descriptive research design is used by the proponents to describe the characteristics of the population or phenomenon of visually impaired persons being studied. Descriptive research is mainly done because the researchers want to gain a better understanding of a topic; it focuses on providing information that is useful in development. The research is based on a mixed method focused on producing informative outcomes that can be used. Based on the results of the surveys, conclusions were drawn: the majority of the respondents were male adults ranging in age from 32 to 41, all working as massage therapists. The majority of the respondents rated the overall function of the application as Excellent and rated the level of security of the application as Secured.
APA, Harvard, Vancouver, ISO, and other styles

27

Glenn Arwin M Bristola and Joevilzon C Calderon. "Integrating of voice recognition email application system for visually impaired person using linear regression algorithm". South Asian Journal of Engineering and Technology 12, no. 1 (March 31, 2022): 74–83. http://dx.doi.org/10.26524/sajet.2022.12.012.

Full text
Abstract
The outcome of this study will surely help visually impaired people, who face difficulties in accessing the computer system; voice recognition will help them to access e-mail. This study also reduces the cognitive load placed on visually impaired users to remember and type characters using a keyboard. If this system is implemented, the self-esteem and the social and emotional well-being of visually impaired users will be lifted, for they will feel that they are valued in society and have fair treatment and access to technology. The main function of this study is to use the user's keyboard in a way that responds through voice. The purpose of this study is to help a visually impaired person use a modernized application to interact with a voice recognition system through email on different types of modern gadgets such as computers or mobile phones. In terms of the functionality of the application, the proponents will use a set of APIs (Application Program Interfaces) such as the Google speech-to-text and text-to-speech services, processed through the email system, and the SNMTP or Simple Network Management Protocol will be used for mailing services; in programming software, the proponent will be using PHP for the backend of the web interface, while HTML and CSS are the front-end technologies used to create the web-based user interface, with voice typing and dictation speech interaction models using the Windows dictation engine. The proponent used a descriptive research design in this study. Descriptive research design is used by the proponents to describe the characteristics of the population or phenomenon of visually impaired persons being studied. Descriptive research is mainly done because the researchers want to gain a better understanding of a topic; it focuses on providing information that is useful in development. The research is based on a mixed method focused on producing informative outcomes that can be used. Based on the results of the surveys, conclusions were drawn: the majority of the respondents were male adults ranging in age from 32 to 41, all working as massage therapists. The majority of the respondents rated the overall function of the application as Excellent and rated the level of security of the application as Secured.
APA, Harvard, Vancouver, ISO, and other styles

28

Abraham, Ajith, Sung-Bae Cho, Thomas Hite, and Sang-Yong Han. "Special Issue on Web Services Practices". Journal of Advanced Computational Intelligence and Intelligent Informatics 10, no. 5 (September 20, 2006): 703–4. http://dx.doi.org/10.20965/jaciii.2006.p0703.

Full text
Abstract
Web services – a new breed of self-contained, self-describing, modular applications published, located, and invoked across the Web – handle functions, from simple requests to complicated business processes. They are defined as network-based application components with a services-oriented architecture (SOA) using standard interface description languages and uniform communication protocols. SOA enables organizations to grasp and respond to changing trends and to adapt their business processes rapidly without major changes to the IT infrastructure. The Inaugural International Conference on Next-Generation Web Services Practices (NWeSP'05) attracted researchers who are also the world's most respected authorities on the semantic Web, Web-based services, and Web applications and services. NWeSP'05 was held in cooperation with the IEEE Computer Society Task Force on Electronic Commerce, the Technical Committee on Internet, and the Technical Committee on Scalable Computing. This special issue presents eight papers focused on different aspects of Web services and their applications. Papers were selected based on fundamental ideas and concepts rather than the thoroughness of techniques employed. Papers are organized as follows: Taher et al. present the first paper, on a Quality of Service Information and Computational framework (QoS-IC) supporting QoS-based service selection for SOA. The framework's functionality is expanded using a QoS constraints model that establishes an association relationship between different QoS properties and is used to govern QoS-based service selection in the underlying algorithm. Using a prototype implementation, the authors demonstrate how QoS constraints improve QoS-based service selection and save consumers valuable time. Due to the complex infrastructure of web applications, response times perceived by clients may be significantly longer than desired. To overcome some of the current problems, Vilas et al., in the second paper, propose a cache-based extension of the architecture that enhances the current web services architecture, which is mainly based on program-logic or protocol-dependent optimization. In the third paper, Jo and Yoo present authorization for securing XML sources on the Web. One of the disadvantages of existing access control is that the DOM tree must be loaded into memory while all XML documents are parsed to generate the DOM tree, such that a lot of memory is used in repetitive searches of the tree to authorize access to all nodes in the DOM tree. The complex authorization evaluation process required thus lowers system performance. Existing access control fails to consider information structure and semantics sufficiently due to basic HTML limitations. The authors overcome some of these limitations in the proposed model. In the fourth paper, Jung and Cho propose a novel behavior-network-based method for Web service composition. The behavior network selects services automatically through internal and external links with environmental information from sensors and goals. An optimal service is selected at each step, resulting in a globally optimal service sequence for achieving preset goals. The authors detail experimental results for the proposed model by comparing them with a rule-based system and user tests.
Kong et al. present an efficient method in the fifth paper for merging heterogeneous ontologies – no ontology building standard currently exists – and the many ontology-building tools available are based on different ontology languages, mostly focusing on how to create, edit and infer the ontology efficiently. Even ontologies about the same domain differ because ontology experts hold different viewpoints. For these reasons, interoperability between ontologies is very low. The authors propose merging heterogeneous domain ontologies by overcoming some of the above limitations. In the sixth paper, Chen and Che provide a polynomial-time tree pattern query minimization algorithm whose efficiency stems from two key observations: (i) Inherent redundant "components" usually exist inside the rudimentary query provided by the user, and (ii) nonredundant nodes may become redundant when constraints such as co-occurrence and required child/descendant are given. They show that the algorithm obtained by first augmenting the input tree pattern using constraints, then applying minimization, invariably finds a unique minimal equivalent to the original query. Chen and Che present a polynomial-time algorithm for tree pattern query (TPQ) minimization without XML constraints in the seventh paper. The two-part algorithm is a dynamic programming strategy for finding all matching subtrees within a TPQ. The algorithm consists of one part for subtree recognition and a second for subtree deletion. In the last paper, Bagchi et al. present the mobile distributed virtual memory (MDVM) concept and architecture for cellular networks containing server-groups (SG). They detail a two-round randomized distributed algorithm to elect a unique leader and co-leader of the SG that is free of any assumption about network topology and buffer space limitations and is based on dynamically elected coordinators eliminating single points of failure. As guest editors, we thank all authors featured in this special issue for their contributions and the referees for critically evaluating the papers within the short time allotted. We sincerely believe that readers will share our enjoyment of this special issue and find the information it presents both timely and useful.
APA, Harvard, Vancouver, ISO, and other styles

29

Nayyar, Anand, Pijush Kanti Dutta Pramankit, and Rajni Mohana. "Introduction to the Special Issue on Evolving IoT and Cyber-Physical Systems: Advancements, Applications, and Solutions". Scalable Computing: Practice and Experience 21, no. 3 (August 1, 2020): 347–48. http://dx.doi.org/10.12694/scpe.v21i3.1568.

Full text
Abstract
Internet of Things (IoT) is regarded as a next-generation wave of Information Technology (IT) after the widespread emergence of the Internet and mobile communication technologies. IoT supports information exchange and networked interaction of appliances, vehicles and other objects, making sensing and actuation possible in a low-cost and smart manner. On the other hand, cyber-physical systems (CPS) are described as the engineered systems which are built upon the tight integration of the cyber entities (e.g., computation, communication, and control) and the physical things (natural and man-made systems governed by the laws of physics). The IoT and CPS are not isolated technologies. Rather it can be said that IoT is the base or enabling technology for CPS and CPS is considered as the grownup development of IoT, completing the IoT notion and vision. Both are merged into closed-loop, providing mechanisms for conceptualizing, and realizing all aspects of the networked composed systems that are monitored and controlled by computing algorithms and are tightly coupled among users and the Internet. That is, the hardware and the software entities are intertwined, and they typically function on different time and location-based scales. In fact, the linking between the cyber and the physical world is enabled by IoT (through sensors and actuators). CPS that includes traditional embedded and control systems are supposed to be transformed by the evolving and innovative methodologies and engineering of IoT. Several applications areas of IoT and CPS are smart building, smart transport, automated vehicles, smart cities, smart grid, smart manufacturing, smart agriculture, smart healthcare, smart supply chain and logistics, etc. Though CPS and IoT have significant overlaps, they differ in terms of engineering aspects. Engineering IoT systems revolves around the uniquely identifiable and internet-connected devices and embedded systems; whereas engineering CPS requires a strong emphasis on the relationship between computation aspects (complex software) and the physical entities (hardware). Engineering CPS is challenging because there is no defined and fixed boundary and relationship between the cyber and physical worlds. In CPS, diverse constituent parts are composed and collaborated together to create unified systems with global behaviour. These systems need to be ensured in terms of dependability, safety, security, efficiency, and adherence to real‐time constraints. Hence, designing CPS requires knowledge of multidisciplinary areas such as sensing technologies, distributed systems, pervasive and ubiquitous computing, real-time computing, computer networking, control theory, signal processing, embedded systems, etc. CPS, along with the continuous evolving IoT, has posed several challenges. For example, the enormous amount of data collected from the physical things makes it difficult for Big Data management and analytics that includes data normalization, data aggregation, data mining, pattern extraction and information visualization. Similarly, the future IoT and CPS need standardized abstraction and architecture that will allow modular designing and engineering of IoT and CPS in global and synergetic applications. Another challenging concern of IoT and CPS is the security and reliability of the components and systems. 
Although IoT and CPS have attracted the attention of the research communities and several ideas and solutions are proposed, there are still huge possibilities for innovative propositions to make IoT and CPS vision successful. The major challenges and research scopes include system design and implementation, computing and communication, system architecture and integration, application-based implementations, fault tolerance, designing efficient algorithms and protocols, availability and reliability, security and privacy, energy-efficiency and sustainability, etc. It is our great privilege to present Volume 21, Issue 3 of Scalable Computing: Practice and Experience. We had received 30 research papers and out of which 14 papers are selected for publication. The objective of this special issue is to explore and report recent advances and disseminate state-of-the-art research related to IoT, CPS and the enabling and associated technologies. The special issue will present new dimensions of research to researchers and industry professionals with regard to IoT and CPS. Vivek Kumar Prasad and Madhuri D Bhavsar in the paper titled "Monitoring and Prediction of SLA for IoT based Cloud described the mechanisms for monitoring by using the concept of reinforcement learning and prediction of the cloud resources, which forms the critical parts of cloud expertise in support of controlling and evolution of the IT resources and has been implemented using LSTM. The proper utilization of the resources will generate revenues to the provider and also increases the trust factor of the provider of cloud services. For experimental analysis, four parameters have been used i.e. CPU utilization, disk read/write throughput and memory utilization. Kasture et al. in the paper titled "Comparative Study of Speaker Recognition Techniques in IoT Devices for Text Independent Negative Recognition" compared the performance of features which are used in state of art speaker recognition models and analyse variants of Mel frequency cepstrum coefficients (MFCC) predominantly used in feature extraction which can be further incorporated and used in various smart devices. Mahesh Kumar Singh and Om Prakash Rishi in the paper titled "Event Driven Recommendation System for E-Commerce using Knowledge based Collaborative Filtering Technique" proposed a novel system that uses a knowledge base generated from knowledge graph to identify the domain knowledge of users, items, and relationships among these, knowledge graph is a labelled multidimensional directed graph that represents the relationship among the users and the items. The proposed approach uses about 100 percent of users' participation in the form of activities during navigation of the web site. Thus, the system expects under the users' interest that is beneficial for both seller and buyer. The proposed system is compared with baseline methods in area of recommendation system using three parameters: precision, recall and NDGA through online and offline evaluation studies with user data and it is observed that proposed system is better as compared to other baseline systems. Benbrahim et al. 
in the paper titled "Deep Convolutional Neural Network with TensorFlow and Keras to Classify Skin Cancer" proposed a novel classification model to classify skin tumours in images using Deep Learning methodology and the proposed system was tested on HAM10000 dataset comprising of 10,015 dermatoscopic images and the results observed that the proposed system is accurate in order of 94.06\% in validation set and 93.93\% in the test set. Devi B et al. in the paper titled "Deadlock Free Resource Management Technique for IoT-Based Post Disaster Recovery Systems" proposed a new class of techniques that do not perform stringent testing before allocating the resources but still ensure that the system is deadlock-free and the overhead is also minimal. The proposed technique suggests reserving a portion of the resources to ensure no deadlock would occur. The correctness of the technique is proved in the form of theorems. The average turnaround time is approximately 18\% lower for the proposed technique over Banker's algorithm and also an optimal overhead of O(m). Deep et al. in the paper titled "Access Management of User and Cyber-Physical Device in DBAAS According to Indian IT Laws Using Blockchain" proposed a novel blockchain solution to track the activities of employees managing cloud. Employee authentication and authorization are managed through the blockchain server. User authentication related data is stored in blockchain. The proposed work assists cloud companies to have better control over their employee's activities, thus help in preventing insider attack on User and Cyber-Physical Devices. Sumit Kumar and Jaspreet Singh in paper titled "Internet of Vehicles (IoV) over VANETS: Smart and Secure Communication using IoT" highlighted a detailed description of Internet of Vehicles (IoV) with current applications, architectures, communication technologies, routing protocols and different issues. The researchers also elaborated research challenges and trade-off between security and privacy in area of IoV. Deore et al. in the paper titled "A New Approach for Navigation and Traffic Signs Indication Using Map Integrated Augmented Reality for Self-Driving Cars" proposed a new approach to supplement the technology used in self-driving cards for perception. The proposed approach uses Augmented Reality to create and augment artificial objects of navigational signs and traffic signals based on vehicles location to reality. This approach help navigate the vehicle even if the road infrastructure does not have very good sign indications and marking. The approach was tested locally by creating a local navigational system and a smartphone based augmented reality app. The approach performed better than the conventional method as the objects were clearer in the frame which made it each for the object detection to detect them. Bhardwaj et al. in the paper titled "A Framework to Systematically Analyse the Trustworthiness of Nodes for Securing IoV Interactions" performed literature on IoV and Trust and proposed a Hybrid Trust model that seperates the malicious and trusted nodes to secure the interaction of vehicle in IoV. To test the model, simulation was conducted on varied threshold values. And results observed that PDR of trusted node is 0.63 which is higher as compared to PDR of malicious node which is 0.15. And on the basis of PDR, number of available hops and Trust Dynamics the malicious nodes are identified and discarded. 
Saniya Zahoor and Roohie Naaz Mir, in the paper titled "A Parallelization Based Data Management Framework for Pervasive IoT Applications", surveyed recent studies and related work on data management for pervasive IoT applications with limited resources. The paper also proposes a parallelization-based data management framework for resource-constrained pervasive IoT applications. The proposed framework is compared with a sequential approach through simulations and empirical data analysis, and the results show an improvement in the energy, processing, and storage requirements for processing data on the IoT device compared with the sequential approach. Patel et al., in the paper titled "Performance Analysis of Video ON-Demand and Live Video Streaming Using Cloud Based Services", presented a review of video analysis over LVS and VoD applications. The researchers compared different messaging brokers that deliver each frame in a distributed pipeline, analysing the impact of two message brokers on video analysis for LVS and VoD using AWS Elemental services. In addition, the researchers analysed Kafka configuration parameters for reliability in full-service mode. Saniya Zahoor and Roohie Naaz Mir, in the paper titled "Design and Modeling of Resource-Constrained IoT Based Body Area Networks", presented the design and modeling of a resource-constrained BAN system and discussed various BAN scenarios in the context of resource constraints. The researchers also proposed an Advanced Edge Clustering (AEC) approach to manage resources such as the energy, storage, and processing of BAN devices while performing real-time capture of critical health parameters and detection of abnormal patterns. The AEC approach is compared with the Stable Election Protocol (SEP) through simulations and empirical data analysis, and the results show an improvement in the energy, processing time and storage requirements for processing data on BAN devices under AEC as compared to SEP. Neelam Saleem Khan and Mohammad Ahsan Chishti, in the paper titled "Security Challenges in Fog and IoT, Blockchain Technology and Cell Tree Solutions: A Review", outlined major authentication issues in IoT, mapped their existing solutions and tabulated Fog and IoT security loopholes. Furthermore, the paper presents blockchain, a decentralized distributed technology, as one of the solutions to authentication issues in IoT. In addition, the researchers discussed the strengths of blockchain technology, work done in this field and its adoption in the fight against COVID-19, and tabulated various challenges in blockchain technology. The researchers also proposed the Cell Tree architecture as another solution to some of the security issues in IoT, outlined its advantages over blockchain technology and listed future directions to stimulate further work in this area. Bhadwal et al., in the paper titled "A Machine Translation System from Hindi to Sanskrit Language Using Rule Based Approach", proposed a rule-based machine translation system to bridge the language barrier between Hindi and Sanskrit by converting any text in Hindi to Sanskrit. The results are presented in the form of two confusion matrices, wherein a total of 50 random sentences and 100 tokens (Hindi words or phrases) were used for system evaluation.
The semantic evaluation of 100 tokens produces an accuracy of 94%, while the pragmatic analysis of 50 sentences produces an accuracy of around 86%. Hence, the proposed system can be used to understand the whole translation process and can further be employed as a tool for learning as well as teaching. Moreover, the application can be embedded in local-communication-based assistive Internet of Things (IoT) devices such as Alexa or Google Assistant. Anshu Kumar Dwivedi and A. K. Sharma, in the paper titled "NEEF: A Novel Energy Efficient Fuzzy Logic Based Clustering Protocol for Wireless Sensor Network", proposed a deterministic, novel, energy-efficient fuzzy logic-based clustering protocol (NEEF) that considers primary and secondary factors in its fuzzy logic system while selecting cluster heads. After cluster heads are selected, non-cluster-head nodes use fuzzy logic for prudent selection of their cluster head during cluster formation. NEEF is simulated and compared with two recent state-of-the-art protocols, namely SCHFTL and DFCR, under two scenarios. Simulation results show better performance through load balancing, together with improvements in stability period, packets forwarded to the base station, average energy and network lifetime.
30

Abisa, Michael. "Meaningful Use and Electronic Laboratory Reporting: Challenges Health Information Technology Vendors Face in Kentucky". Online Journal of Public Health Informatics 9, no. 3 (December 30, 2017). http://dx.doi.org/10.5210/ojphi.v9i3.7491.

Full text
Abstract
Objectives: To explore the challenges Health Information Technology (HIT) vendors face in satisfying the requirements for Meaningful Use (MU) and Electronic Laboratory Reporting (ELR) of reportable diseases to public health departments in Kentucky. Methodology: A cross-sectional survey of Health Information Exchange (HIE) vendors in Kentucky was conducted through the Kentucky Health Information Exchange (KHIE). Data were collected between February and March 2014. Participants were recruited from KHIE vendors, received an online survey link by email, and were asked to submit their responses. Vendors' feedback was summarized and analyzed to identify their challenges. Of the 55 vendors who received the survey, 35 (63.64%) responded. Results: Of the seven transport protocol options for ELR, vendors selected virtual private network (VPN) as the most difficult to implement (31.7%). Secure File Transfer Protocol (SFTP) was selected as the preferred ELR transport protocol (31.4%). Most respondents (80%) reported no challenges with the Health Level 7 (HL7) standard implementation guide required by MU for 2014 ELR certification. Conclusion: The study found that the most difficult transport protocol to implement for ELR is VPN, and that if vendors had a preference they would use SFTP for ELR over KHIE's choice of VPN and Simple Object Access Protocol (SOAP). KHIE vendors do not see any variability in what is reportable by different jurisdictions, and it is not difficult for them to detect what is reportable in one jurisdiction versus another.
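For readers unfamiliar with the transport options compared in the survey, the sketch below suggests why SFTP is often the path of least resistance: a single authenticated file transfer, with no VPN tunnel or SOAP envelope to configure. It is a minimal illustration using the paramiko library; the host name, credentials and file paths are hypothetical, not KHIE's actual endpoints.

```python
import paramiko

# Hypothetical endpoint and credentials; a real ELR submission would use
# the jurisdiction's actual host, account, and directory conventions.
HOST = "sftp.example-hie.org"
USERNAME = "elr_submitter"
PASSWORD = "change-me"          # key-based authentication is more common in practice
LOCAL_FILE = "elr_batch.hl7"    # a batch of HL7 ELR messages
REMOTE_PATH = "/inbound/elr_batch.hl7"

def submit_elr_over_sftp():
    """Open an SSH session, then push the ELR batch file via SFTP."""
    client = paramiko.SSHClient()
    # For illustration only; production code should pin known host keys.
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(HOST, username=USERNAME, password=PASSWORD)
    try:
        sftp = client.open_sftp()
        sftp.put(LOCAL_FILE, REMOTE_PATH)   # a single call uploads the file
        sftp.close()
    finally:
        client.close()

if __name__ == "__main__":
    submit_elr_over_sftp()
```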
31

"Safety Measures and Auto Detection against SQL Injection Attacks". International Journal of Recent Technology and Engineering 8, n.º 4 (30 de diciembre de 2019): 2827–33. http://dx.doi.org/10.35940/ijeat.b3316.129219.

Full text
Abstract
A SQL injection attack (SQLIA) occurs when an attacker embeds malicious SQL code into a valid query statement via unvalidated input. As a result, the relational database management system executes the malicious query, causing the SQL injection attack. After successful execution, it may compromise the CIA (confidentiality, integrity and availability) of the web API. The vulnerability of a Web Application Programming Interface (API) is a primary concern for any program. A Web API is commonly based on the Simple Object Access Protocol (SOAP), which provides its own security, while Representational State Transfer (REST) is an architectural style whose security measures come from the transport layer. Developers, especially newer programmers, often do not follow safe programming standards and forget to validate the input fields in their forms. This vulnerability in the web API opens the door to threats and makes it easy for an attacker to exploit the database associated with the web API. The objective of the paper is to automate the detection of SQL injection attacks and to secure poorly coded web API access across large volumes of network traffic. The Snort and Moloch approaches are used to develop a hybrid model for automatic detection as well as analysis of SQL injection attacks in the prototype system.
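The mechanism the abstract describes, and the first line of defence against it, can be shown in a few lines. This is a generic Python/SQLite sketch, unrelated to the paper's Snort/Moloch prototype: a string-concatenated query executes attacker-supplied SQL, whereas a parameterized query treats the same input as a plain value.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE users (name TEXT, role TEXT)")
cur.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

user_input = "x' OR '1'='1"   # classic injection payload

# Vulnerable: the payload is concatenated into the SQL text, so the
# WHERE clause always evaluates to true and every row is returned.
unsafe_sql = "SELECT * FROM users WHERE name = '" + user_input + "'"
print("unsafe:", cur.execute(unsafe_sql).fetchall())

# Safe: a parameterized query treats the payload as a literal value,
# so no row matches the bogus name.
print("safe:  ", cur.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)).fetchall())
```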
32

Soni, Gulshan, and Kandasamy Selvaradjou. "Rational Allocation of Guaranteed Time Slots to support real-time traffic in Wireless Body Area Networks". International Journal of Sensors, Wireless Communications and Control 11 (March 8, 2021). http://dx.doi.org/10.2174/2210327911666210308155147.

Full text
Abstract
Background: The main requirement of a Wireless Body Area Network (WBAN) is the on-time delivery, to the central gateway, of vital signs sensed by delay-sensitive biological sensors implanted in the body of the patient being monitored. The Medium Access Control (MAC) protocol standard IEEE 802.15.4 supports real-time data delivery through its unique Guaranteed Time Slot (GTS) feature in beacon-enabled mode, and the protocol is considered suitable for the WBAN scenario. However, as per the standard, IEEE 802.15.4 uses a simple and straightforward First Come First Served (FCFS) mechanism to distribute GTS slots among the contending nodes. This blind allocation of GTS slots results in poor bandwidth utilization and prevents delay-sensitive sensor nodes from effectively utilizing the contention-free slots. Objective: The main objective of this work is to provide a solution for the unfair allocation of GTS slots in the beacon-enabled mode of the IEEE 802.15.4 standard in WBAN. Method: We propose a Rational Allocation of Guaranteed Time Slot (RAGTS) protocol that distributes the available GTS slots based on the delay-sensitivity of the contending nodes. Results: A series of simulation experiments was performed to assess the performance of the proposed RAGTS protocol. The simulations capture the dynamic nature of the real-time deadlines associated with sensor traffic and show that the proposed RAGTS protocol is more stable across various performance metrics than the FCFS GTS allocation technique. Conclusion: In this article, we introduced the RAGTS scheme, which enhances the real-time traffic support of the beacon-enabled mode of the IEEE 802.15.4 MAC.
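The contrast between FCFS and delay-sensitivity-based GTS allocation can be illustrated with a small sketch. This is generic Python, not the authors' RAGTS algorithm, and the node names and deadlines are hypothetical; it only shows the general idea of ordering slot requests by urgency. An IEEE 802.15.4 beacon-enabled superframe carries at most seven GTS slots, so contenders beyond that bound miss out.

```python
from dataclasses import dataclass

MAX_GTS_SLOTS = 7   # 802.15.4 limit on GTS slots per superframe

@dataclass
class GtsRequest:
    node_id: str
    arrival_order: int      # when the request reached the coordinator
    deadline_ms: float      # delay sensitivity of the node's traffic

def allocate_fcfs(requests):
    """Standard behaviour: first come, first served, regardless of deadlines."""
    ordered = sorted(requests, key=lambda r: r.arrival_order)
    return [r.node_id for r in ordered[:MAX_GTS_SLOTS]]

def allocate_by_delay_sensitivity(requests):
    """RAGTS-like idea, simplified: the most urgent traffic gets slots first."""
    ordered = sorted(requests, key=lambda r: r.deadline_ms)
    return [r.node_id for r in ordered[:MAX_GTS_SLOTS]]

requests = [
    GtsRequest("temperature", 1, 500.0),
    GtsRequest("ecg",         2,  20.0),   # tight deadline, but arrived later
    GtsRequest("spo2",        3,  50.0),
    # ... more contenders than available slots ...
]
print("FCFS     :", allocate_fcfs(requests))
print("deadline :", allocate_by_delay_sensitivity(requests))
```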
33

Peters, Andreas J., and Daniel C. van der Ster. "Evaluating CephFS Performance vs. Cost on High-Density Commodity Disk Servers". Computing and Software for Big Science 5, no. 1 (November 9, 2021). http://dx.doi.org/10.1007/s41781-021-00071-1.

Full text
Abstract
CephFS is a network filesystem built upon the Reliable Autonomic Distributed Object Store (RADOS). At CERN we have demonstrated its reliability and elasticity while operating several 100-to-1000 TB clusters which provide NFS-like storage to infrastructure applications and services. At the same time, our lab developed EOS to offer high-performance 100 PB-scale storage for the LHC at extremely low cost while also supporting the complete set of security and functional APIs required by the particle-physics user community. This work seeks to evaluate the performance of CephFS on this cost-optimized hardware when it is combined with EOS to support the missing functionalities. To this end, we have set up a proof-of-concept Ceph Octopus cluster on high-density JBOD servers (840 TB each) with 100 Gig-E networking. The system uses EOS to provide an overlaid namespace and protocol gateways for HTTP(S) and XROOTD, and uses CephFS as an erasure-coded object storage backend. The solution also enables operators to aggregate several CephFS instances and adds features such as third-party copy, SciTokens, and high-level user and quota management. Using simple benchmarks we measure the cost/performance trade-offs of different erasure-coding layouts, as well as the network overheads of these coding schemes. We demonstrate some relevant limitations of the CephFS metadata server and offer improved tunings which can be generally applicable. To conclude, we reflect on the advantages and drawbacks of this architecture, such as RADOS-level free space requirements and double-network penalties, and offer ideas for future improvements.
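The cost side of the erasure-coding trade-off mentioned above comes down to simple capacity arithmetic. The sketch below is generic Python; the k+m layouts and the 840 TB figure are used only as illustrative inputs, not the paper's benchmark results. It shows how usable capacity, write amplification and failure tolerance vary with the layout.

```python
def ec_overhead(k: int, m: int, raw_tb: float) -> dict:
    """Capacity arithmetic for a k data + m parity erasure-coded pool."""
    usable_fraction = k / (k + m)          # share of raw space holding data
    return {
        "layout": f"{k}+{m}",
        "usable_TB": round(raw_tb * usable_fraction, 1),
        "raw_overhead": round((k + m) / k, 2),   # raw bytes written per data byte
        "tolerated_failures": m,
    }

# One hypothetical 840 TB JBOD server, compared across example layouts.
for k, m in [(4, 2), (8, 3), (16, 4)]:
    print(ec_overhead(k, m, raw_tb=840.0))
```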
34

Dieter, Michael. "Amazon Noir". M/C Journal 10, no. 5 (October 1, 2007). http://dx.doi.org/10.5204/mcj.2709.

Full text
Abstract
There is no diagram that does not also include, besides the points it connects up, certain relatively free or unbounded points, points of creativity, change and resistance, and it is perhaps with these that we ought to begin in order to understand the whole picture. (Deleuze, “Foucault” 37) Monty Cantsin: Why do we use a pervert software robot to exploit our collective consensual mind? Letitia: Because we want the thief to be a digital entity. Monty Cantsin: But isn’t this really blasphemic? Letitia: Yes, but god – in our case a meta-cocktail of authorship and copyright – can not be trusted anymore. (Amazon Noir, “Dialogue”) In 2006, some 3,000 digital copies of books were silently “stolen” from online retailer Amazon.com by targeting vulnerabilities in the “Search inside the Book” feature from the company’s website. Over several weeks, between July and October, a specially designed software program bombarded the Search Inside!™ interface with multiple requests, assembling full versions of texts and distributing them across peer-to-peer networks (P2P). Rather than a purely malicious and anonymous hack, however, the “heist” was publicised as a tactical media performance, Amazon Noir, produced by self-proclaimed super-villains Paolo Cirio, Alessandro Ludovico, and Ubermorgen.com. While controversially directed at highlighting the infrastructures that materially enforce property rights and access to knowledge online, the exploit additionally interrogated its own interventionist status as theoretically and politically ambiguous. That the “thief” was represented as a digital entity or machinic process (operating on the very terrain where exchange is differentiated) and the emergent act of “piracy” was fictionalised through the genre of noir conveys something of the indeterminacy or immensurability of the event. In this short article, I discuss some political aspects of intellectual property in relation to the complexities of Amazon Noir, particularly in the context of control, technological action, and discourses of freedom. Software, Piracy As a force of distribution, the Internet is continually subject to controversies concerning flows and permutations of agency. While often directed by discourses cast in terms of either radical autonomy or control, the technical constitution of these digital systems is more regularly a case of establishing structures of operation, codified rules, or conditions of possibility; that is, of guiding social processes and relations (McKenzie, “Cutting Code” 1-19). Software, as a medium through which such communication unfolds and becomes organised, is difficult to conceptualise as a result of being so event-orientated. There lies a complicated logic of contingency and calculation at its centre, a dimension exacerbated by the global scale of informational networks, where the inability to comprehend an environment that exceeds the limits of individual experience is frequently expressed through desires, anxieties, paranoia. Unsurprisingly, cautionary accounts and moral panics on identity theft, email fraud, pornography, surveillance, hackers, and computer viruses are as commonplace as those narratives advocating user interactivity. 
When analysing digital systems, cultural theory often struggles to describe forces that dictate movement and relations between disparate entities composed by code, an aspect heightened by the intensive movement of informational networks where differences are worked out through the constant exposure to unpredictability and chance (Terranova, “Communication beyond Meaning”). Such volatility partially explains the recent turn to distribution in media theory, as once durable networks for constructing economic difference – organising information in space and time (“at a distance”), accelerating or delaying its delivery – appear contingent, unstable, or consistently irregular (Cubitt 194). Attributing actions to users, programmers, or the software itself is a difficult task when faced with these states of co-emergence, especially in the context of sharing knowledge and distributing media content. Exchanges between corporate entities, mainstream media, popular cultural producers, and legal institutions over P2P networks represent an ongoing controversy in this respect, with numerous stakeholders competing between investments in property, innovation, piracy, and publics. Beginning to understand this problematic landscape is an urgent task, especially in relation to the technological dynamics that organised and propel such antagonisms. In the influential fragment, “Postscript on the Societies of Control,” Gilles Deleuze describes the historical passage from modern forms of organised enclosure (the prison, clinic, factory) to the contemporary arrangement of relational apparatuses and open systems as being materially provoked by – but not limited to – the mass deployment of networked digital technologies. In his analysis, the disciplinary mode most famously described by Foucault is spatially extended to informational systems based on code and flexibility. According to Deleuze, these cybernetic machines are connected into apparatuses that aim for intrusive monitoring: “in a control-based system nothing’s left alone for long” (“Control and Becoming” 175). Such a constant networking of behaviour is described as a shift from “molds” to “modulation,” where controls become “a self-transmuting molding changing from one moment to the next, or like a sieve whose mesh varies from one point to another” (“Postscript” 179). Accordingly, the crisis underpinning civil institutions is consistent with the generalisation of disciplinary logics across social space, forming an intensive modulation of everyday life, but one ambiguously associated with socio-technical ensembles. The precise dynamics of this epistemic shift are significant in terms of political agency: while control implies an arrangement capable of absorbing massive contingency, a series of complex instabilities actually mark its operation. Noise, viral contamination, and piracy are identified as key points of discontinuity; they appear as divisions or “errors” that force change by promoting indeterminacies in a system that would otherwise appear infinitely calculable, programmable, and predictable. The rendering of piracy as a tactic of resistance, a technique capable of levelling out the uneven economic field of global capitalism, has become a predictable catch-cry for political activists. 
In their analysis of multitude, for instance, Antonio Negri and Michael Hardt describe the contradictions of post-Fordist production as conjuring forth a tendency for labour to “become common.” That is, as productivity depends on flexibility, communication, and cognitive skills, directed by the cultivation of an ideal entrepreneurial or flexible subject, the greater the possibilities for self-organised forms of living that significantly challenge its operation. In this case, intellectual property exemplifies such a spiralling paradoxical logic, since “the infinite reproducibility central to these immaterial forms of property directly undermines any such construction of scarcity” (Hardt and Negri 180). The implications of the filesharing program Napster, accordingly, are read as not merely directed toward theft, but in relation to the private character of the property itself; a kind of social piracy is perpetuated that is viewed as radically recomposing social resources and relations. Ravi Sundaram, a co-founder of the Sarai new media initiative in Delhi, has meanwhile drawn attention to the existence of “pirate modernities” capable of being actualised when individuals or local groups gain illegitimate access to distributive media technologies; these are worlds of “innovation and non-legality,” of electronic survival strategies that partake in cultures of dispersal and escape simple classification (94). Meanwhile, pirate entrepreneurs Magnus Eriksson and Rasmus Fleische – associated with the notorious Piratbyrn – have promoted the bleeding away of Hollywood profits through fully deployed P2P networks, with the intention of pushing filesharing dynamics to an extreme in order to radicalise the potential for social change (“Copies and Context”). From an aesthetic perspective, such activist theories are complemented by the affective register of appropriation art, a movement broadly conceived in terms of antagonistically liberating knowledge from the confines of intellectual property: “those who pirate and hijack owned material, attempting to free information, art, film, and music – the rhetoric of our cultural life – from what they see as the prison of private ownership” (Harold 114). These “unruly” escape attempts are pursued through various modes of engagement, from experimental performances with legislative infrastructures (i.e. Kembrew McLeod’s patenting of the phrase “freedom of expression”) to musical remix projects, such as the work of Negativland, John Oswald, RTMark, Detritus, Illegal Art, and the Evolution Control Committee. Amazon Noir, while similarly engaging with questions of ownership, is distinguished by specifically targeting information communication systems and finding “niches” or gaps between overlapping networks of control and economic governance. Hans Bernhard and Lizvlx from Ubermorgen.com (meaning ‘Day after Tomorrow,’ or ‘Super-Tomorrow’) actually describe their work as “research-based”: “we not are opportunistic, money-driven or success-driven, our central motivation is to gain as much information as possible as fast as possible as chaotic as possible and to redistribute this information via digital channels” (“Interview with Ubermorgen”). This has led to experiments like Google Will Eat Itself (2005) and the construction of the automated software thief against Amazon.com, as process-based explorations of technological action. 
Agency, Distribution Deleuze’s “postscript” on control has proven massively influential for new media art by introducing a series of key questions on power (or desire) and digital networks. As a social diagram, however, control should be understood as a partial rather than totalising map of relations, referring to the augmentation of disciplinary power in specific technological settings. While control is a conceptual regime that refers to open-ended terrains beyond the architectural locales of enclosure, implying a move toward informational networks, data solicitation, and cybernetic feedback, there remains a peculiar contingent dimension to its limits. For example, software code is typically designed to remain cycling until user input is provided. There is a specifically immanent and localised quality to its actions that might be taken as exemplary of control as a continuously modulating affective materialism. The outcome is a heightened sense of bounded emergencies that are either flattened out or absorbed through reconstitution; however, these are never linear gestures of containment. As Tiziana Terranova observes, control operates through multilayered mechanisms of order and organisation: “messy local assemblages and compositions, subjective and machinic, characterised by different types of psychic investments, that cannot be the subject of normative, pre-made political judgments, but which need to be thought anew again and again, each time, in specific dynamic compositions” (“Of Sense and Sensibility” 34). This event-orientated vitality accounts for the political ambitions of tactical media as opening out communication channels through selective “transversal” targeting. Amazon Noir, for that reason, is pitched specifically against the material processes of communication. The system used to harvest the content from “Search inside the Book” is described as “robot-perversion-technology,” based on a network of four servers around the globe, each with a specific function: one located in the United States that retrieved (or “sucked”) the books from the site, one in Russia that injected the assembled documents onto P2P networks and two in Europe that coordinated the action via intelligent automated programs (see “The Diagram”). According to the “villains,” the main goal was to steal all 150,000 books from Search Inside!™ then use the same technology to steal books from the “Google Print Service” (the exploit was limited only by the amount of technological resources financially available, but there are apparent plans to improve the technique by reinvesting the money received through the settlement with Amazon.com not to publicise the hack). In terms of informational culture, this system resembles a machinic process directed at redistributing copyright content; “The Diagram” visualises key processes that define digital piracy as an emergent phenomenon within an open-ended and responsive milieu. That is, the static image foregrounds something of the activity of copying being a technological action that complicates any analysis focusing purely on copyright as content. In this respect, intellectual property rights are revealed as being entangled within information architectures as communication management and cultural recombination – dissipated and enforced by a measured interplay between openness and obstruction, resonance and emergence (Terranova, “Communication beyond Meaning” 52). 
To understand data distribution requires an acknowledgement of these underlying nonhuman relations that allow for such informational exchanges. It requires an understanding of the permutations of agency carried along by digital entities. According to Lawrence Lessig’s influential argument, code is not merely an object of governance, but has an overt legislative function itself. Within the informational environments of software, “a law is defined, not through a statue, but through the code that governs the space” (20). These points of symmetry are understood as concretised social values: they are material standards that regulate flow. Similarly, Alexander Galloway describes computer protocols as non-institutional “etiquette for autonomous agents,” or “conventional rules that govern the set of possible behavior patterns within a heterogeneous system” (7). In his analysis, these agreed-upon standardised actions operate as a style of management fostered by contradiction: progressive though reactionary, encouraging diversity by striving for the universal, synonymous with possibility but completely predetermined, and so on (243-244). Needless to say, political uncertainties arise from a paradigm that generates internal material obscurities through a constant twinning of freedom and control. For Wendy Hui Kyong Chun, these Cold War systems subvert the possibilities for any actual experience of autonomy by generalising paranoia through constant intrusion and reducing social problems to questions of technological optimisation (1-30). In confrontation with these seemingly ubiquitous regulatory structures, cultural theory requires a critical vocabulary differentiated from computer engineering to account for the sociality that permeates through and concatenates technological realities. In his recent work on “mundane” devices, software and code, Adrian McKenzie introduces a relevant analytic approach in the concept of technological action as something that both abstracts and concretises relations in a diffusion of collective-individual forces. Drawing on the thought of French philosopher Gilbert Simondon, he uses the term “transduction” to identify a key characteristic of technology in the relational process of becoming, or ontogenesis. This is described as bringing together disparate things into composites of relations that evolve and propagate a structure throughout a domain, or “overflow existing modalities of perception and movement on many scales” (“Impersonal and Personal Forces in Technological Action” 201). Most importantly, these innovative diffusions or contagions occur by bridging states of difference or incompatibilities. Technological action, therefore, arises from a particular type of disjunctive relation between an entity and something external to itself: “in making this relation, technical action changes not only the ensemble, but also the form of life of its agent. Abstraction comes into being and begins to subsume or reconfigure existing relations between the inside and outside” (203). Here, reciprocal interactions between two states or dimensions actualise disparate potentials through metastability: an equilibrium that proliferates, unfolds, and drives individuation. While drawing on cybernetics and dealing with specific technological platforms, McKenzie’s work can be extended to describe the significance of informational devices throughout control societies as a whole, particularly as a predictive and future-orientated force that thrives on staged conflicts. 
Moreover, being a non-deterministic technical theory, it additionally speaks to new tendencies in regimes of production that harness cognition and cooperation through specially designed infrastructures to enact persistent innovation without any end-point, final goal or natural target (Thrift 283-295). Here, the interface between intellectual property and reproduction can be seen as a site of variation that weaves together disparate objects and entities by imbrication in social life itself. These are specific acts of interference that propel relations toward unforeseen conclusions by drawing on memories, attention spans, material-technical traits, and so on. The focus lies on performance, context, and design “as a continual process of tuning arrived at by distributed aspiration” (Thrift 295). This later point is demonstrated in recent scholarly treatments of filesharing networks as media ecologies. Kate Crawford, for instance, describes the movement of P2P as processual or adaptive, comparable to technological action, marked by key transitions from partially decentralised architectures such as Napster, to the fully distributed systems of Gnutella and seeded swarm-based networks like BitTorrent (30-39). Each of these technologies can be understood as a response to various legal incursions, producing radically dissimilar socio-technological dynamics and emergent trends for how agency is modulated by informational exchanges. Indeed, even these aberrant formations are characterised by modes of commodification that continually spillover and feedback on themselves, repositioning markets and commodities in doing so, from MP3s to iPods, P2P to broadband subscription rates. However, one key limitation of this ontological approach is apparent when dealing with the sheer scale of activity involved, where mass participation elicits certain degrees of obscurity and relative safety in numbers. This represents an obvious problem for analysis, as dynamics can easily be identified in the broadest conceptual sense, without any understanding of the specific contexts of usage, political impacts, and economic effects for participants in their everyday consumptive habits. Large-scale distributed ensembles are “problematic” in their technological constitution, as a result. They are sites of expansive overflow that provoke an equivalent individuation of thought, as the Recording Industry Association of America observes on their educational website: “because of the nature of the theft, the damage is not always easy to calculate but not hard to envision” (“Piracy”). The politics of the filesharing debate, in this sense, depends on the command of imaginaries; that is, being able to conceptualise an overarching structural consistency to a persistent and adaptive ecology. As a mode of tactical intervention, Amazon Noir dramatises these ambiguities by framing technological action through the fictional sensibilities of narrative genre. Ambiguity, Control The extensive use of imagery and iconography from “noir” can be understood as an explicit reference to the increasing criminalisation of copyright violation through digital technologies. However, the term also refers to the indistinct or uncertain effects produced by this tactical intervention: who are the “bad guys” or the “good guys”? Are positions like ‘good’ and ‘evil’ (something like freedom or tyranny) so easily identified and distinguished? 
As Paolo Cirio explains, this political disposition is deliberately kept obscure in the project: “it’s a representation of the actual ambiguity about copyright issues, where every case seems to lack a moral or ethical basis” (“Amazon Noir Interview”). While user communications made available on the site clearly identify culprits (describing the project as jeopardising arts funding, as both irresponsible and arrogant), the self-description of the artists as political “failures” highlights the uncertainty regarding the project’s qualities as a force of long-term social renewal: Lizvlx from Ubermorgen.com had daily shootouts with the global mass-media, Cirio continuously pushed the boundaries of copyright (books are just pixels on a screen or just ink on paper), Ludovico and Bernhard resisted kickback-bribes from powerful Amazon.com until they finally gave in and sold the technology for an undisclosed sum to Amazon. Betrayal, blasphemy and pessimism finally split the gang of bad guys. (“Press Release”) Here, the adaptive and flexible qualities of informatic commodities and computational systems of distribution are knowingly posited as critical limits; in a certain sense, the project fails technologically in order to succeed conceptually. From a cynical perspective, this might be interpreted as guaranteeing authenticity by insisting on the useless or non-instrumental quality of art. However, through this process, Amazon Noir illustrates how forces confined as exterior to control (virality, piracy, noncommunication) regularly operate as points of distinction to generate change and innovation. Just as hackers are legitimately employed to challenge the durability of network exchanges, malfunctions are relied upon as potential sources of future information. Indeed, the notion of demonstrating ‘autonomy’ by illustrating the shortcomings of software is entirely consistent with the logic of control as a modulating organisational diagram. These so-called “circuit breakers” are positioned as points of bifurcation that open up new systems and encompass a more general “abstract machine” or tendency governing contemporary capitalism (Parikka 300). As a consequence, the ambiguities of Amazon Noir emerge not just from the contrary articulation of intellectual property and digital technology, but additionally through the concept of thinking “resistance” simultaneously with regimes of control. This tension is apparent in Galloway’s analysis of the cybernetic machines that are synonymous with the operation of Deleuzian control societies – i.e. “computerised information management” – where tactical media are posited as potential modes of contestation against the tyranny of code, “able to exploit flaws in protocological and proprietary command and control, not to destroy technology, but to sculpt protocol and make it better suited to people’s real desires” (176). While pushing a system into a state of hypertrophy to reform digital architectures might represent a possible technique that produces a space through which to imagine something like “our” freedom, it still leaves unexamined the desire for reformation itself as nurtured by and produced through the coupling of cybernetics, information theory, and distributed networking. This draws into focus the significance of McKenzie’s Simondon-inspired cybernetic perspective on socio-technological ensembles as being always-already predetermined by and driven through asymmetries or difference. 
As Chun observes, consequently, there is no paradox between resistance and capture since “control and freedom are not opposites, but different sides of the same coin: just as discipline served as a grid on which liberty was established, control is the matrix that enables freedom as openness” (71). Why “openness” should be so readily equated with a state of being free represents a major unexamined presumption of digital culture, and leads to the associated predicament of attempting to think of how this freedom has become something one cannot not desire. If Amazon Noir has political currency in this context, however, it emerges from a capacity to recognise how informational networks channel desire, memories, and imaginative visions rather than just cultivated antagonisms and counterintuitive economics. As a final point, it is worth observing that the project was initiated without publicity until the settlement with Amazon.com. There is, as a consequence, nothing to suggest that this subversive “event” might have actually occurred, a feeling heightened by the abstractions of software entities. To the extent that we believe in “the big book heist,” that such an act is even possible, is a gauge through which the paranoia of control societies is illuminated as a longing or desire for autonomy. As Hakim Bey observes in his conceptualisation of “pirate utopias,” such fleeting encounters with the imaginaries of freedom flow back into the experience of the everyday as political instantiations of utopian hope. Amazon Noir, with all its underlying ethical ambiguities, presents us with a challenge to rethink these affective investments by considering our profound weaknesses to master the complexities and constant intrusions of control. It provides an opportunity to conceive of a future that begins with limits and limitations as immanently central, even foundational, to our deep interconnection with socio-technological ensembles. References “Amazon Noir – The Big Book Crime.” http://www.amazon-noir.com/>. Bey, Hakim. T.A.Z.: The Temporary Autonomous Zone, Ontological Anarchy, Poetic Terrorism. New York: Autonomedia, 1991. Chun, Wendy Hui Kyong. Control and Freedom: Power and Paranoia in the Age of Fibre Optics. Cambridge, MA: MIT Press, 2006. Crawford, Kate. “Adaptation: Tracking the Ecologies of Music and Peer-to-Peer Networks.” Media International Australia 114 (2005): 30-39. Cubitt, Sean. “Distribution and Media Flows.” Cultural Politics 1.2 (2005): 193-214. Deleuze, Gilles. Foucault. Trans. Seán Hand. Minneapolis: U of Minnesota P, 1986. ———. “Control and Becoming.” Negotiations 1972-1990. Trans. Martin Joughin. New York: Columbia UP, 1995. 169-176. ———. “Postscript on the Societies of Control.” Negotiations 1972-1990. Trans. Martin Joughin. New York: Columbia UP, 1995. 177-182. Eriksson, Magnus, and Rasmus Fleische. “Copies and Context in the Age of Cultural Abundance.” Online posting. 5 June 2007. Nettime 25 Aug 2007. Galloway, Alexander. Protocol: How Control Exists after Decentralization. Cambridge, MA: MIT Press, 2004. Hardt, Michael, and Antonio Negri. Multitude: War and Democracy in the Age of Empire. New York: Penguin Press, 2004. Harold, Christine. OurSpace: Resisting the Corporate Control of Culture. Minneapolis: U of Minnesota P, 2007. Lessig, Lawrence. Code and Other Laws of Cyberspace. New York: Basic Books, 1999. McKenzie, Adrian. Cutting Code: Software and Sociality. New York: Peter Lang, 2006. ———. 
“The Strange Meshing of Impersonal and Personal Forces in Technological Action.” Culture, Theory and Critique 47.2 (2006): 197-212. Parikka, Jussi. “Contagion and Repetition: On the Viral Logic of Network Culture.” Ephemera: Theory & Politics in Organization 7.2 (2007): 287-308. “Piracy Online.” Recording Industry Association of America. 28 Aug 2007. http://www.riaa.com/physicalpiracy.php>. Sundaram, Ravi. “Recycling Modernity: Pirate Electronic Cultures in India.” Sarai Reader 2001: The Public Domain. Delhi, Sarai Media Lab, 2001. 93-99. http://www.sarai.net>. Terranova, Tiziana. “Communication beyond Meaning: On the Cultural Politics of Information.” Social Text 22.3 (2004): 51-73. ———. “Of Sense and Sensibility: Immaterial Labour in Open Systems.” DATA Browser 03 – Curating Immateriality: The Work of the Curator in the Age of Network Systems. Ed. Joasia Krysa. New York: Autonomedia, 2006. 27-38. Thrift, Nigel. “Re-inventing Invention: New Tendencies in Capitalist Commodification.” Economy and Society 35.2 (2006): 279-306. Citation reference for this article MLA Style Dieter, Michael. "Amazon Noir: Piracy, Distribution, Control." M/C Journal 10.5 (2007). echo date('d M. Y'); ?> <http://journal.media-culture.org.au/0710/07-dieter.php>. APA Style Dieter, M. (Oct. 2007) "Amazon Noir: Piracy, Distribution, Control," M/C Journal, 10(5). Retrieved echo date('d M. Y'); ?> from <http://journal.media-culture.org.au/0710/07-dieter.php>.
35

Mackenzie, Adrian. "Making Data Flow". M/C Journal 5, no. 4 (August 1, 2002). http://dx.doi.org/10.5204/mcj.1975.

Full text
Abstract
Why has software code become an object of intense interest in several different domains of cultural life? In art (.net art or software art), in Open source software (Linux, Perl, Apache, et cetera (Moody; Himanen)), in tactical media actions (hacking of WEF Melbourne and Nike websites), and more generally, in the significance attributed to coding as work at the pinnacle of contemporary production of information (Negri and Hardt 298), code itself has somehow recently become significant, at least for some subcultures. Why has that happened? At one level, we could say that this happened because informatic interaction (websites, email, chat, online gaming, ecommerce, etc) has become mainstream to media production, organisational practice and indeed, quotidian life in developed and developing countries. As information production moves into the mainstream, working against mainstream control of flows of information means going upstream. For artists, tactical media groups and hackers, code seems to provide a way to, so to speak, reach over the shoulder of mainstream media channels and contest their control of information flows.1 A basic question is: does it? What code does We all see content flowing through the networks. Yet the expressive traits of the flows themselves are harder to grapple with, partly because they are largely infrastructural. When media and cultural theory discuss information-network society, cyberculture or new media, questions of flow specificity are usually downplayed in favour of high-level engagement with information as content. Arguably, the heightened attention to code attests to an increasing awareness that power relations are embedded in the generation and control of flow rather than just the meanings or contents that might be transported by flow. In this context, loops provide a really elementary and concrete way to explore how code participates in information flows. Loops structure almost every code object at a basic level. The programmed loop, a very mundane construct, can be found in any new media artist's or software engineer's coding toolkit. All programming languages have them. In popular programming and scripting languages such as FORTRAN, C, Pascal, C++, Java, Visual Basic, Perl, Python, JavaScript, ActionScript, etc, an almost identical set of looping constructs are found.2 Working with loops as material and as instrument constitutes an indispensable part of producing code-based objects. On the one hand, the loop is the most basic technical element of code as written text. On the other hand, as process executed by CPUs, and in ways that are not immediately obvious even to programmers themselves, loops of various kinds underpin the generative potential of code.3 Crucially, code is concerned with operationality rather than meaning (Lash 203). Code does not directly create meaning. It circulates, transforms, and reproduces messages and patterns of widely varying semantic and contextual richness. By definition, flow is something continuous. In the case of information, what flows are not things but patterns which can be rendered perceptible in different ways—as image, text, sound—on screen, display, and speaker. While the patterns become perceptible in a range of different spatio-temporal modes, their circulation is serialised. They are, as we know, composed of sequences of modulations (bits). Loops control the flow of patterns. 
Lev Manovich writes: programming involves altering the linear flow of data through control structures, such as 'if/then' and 'repeat/while'; the loop is the most elementary of these control structures (Manovich 189). Drawing on these constructs, programming or coding work gain traction in flows. Interactive looping Loops also generate flows by multiplying events. The most obvious example of how code loops generate and control flows comes from the graphic user interfaces (GUIs) provided by typical operating systems such as Windows, MacOs or one of the Linux desktop environments. These operating systems configure the visual space of millions of desktop screen according to heavily branded designs. Basically they all divide the screen into different framing areas—panels, dividing lines, toolbars, frames, windows—and then populate those areas with controls and indicators—buttons, icons, checkboxes, dropdown lists, menus, popup menus. Framing areas hold content—text, tables, images, video. Controls, usually clustered around the edge of the frame, transform the content displayed in the framed areas in many different ways. Visual controls are themselves hooked up via code to physical input devices such as keyboard, mouse, joystick, buttons and trackpad. The highly habituated and embodied experience of interacting with contemporary GUIs consists of moving in and out, within and between different framing areas, using visual controls that respond either to pointing (with the mouse) or keyboard command to change what is displayed, how it is displayed or indeed to move that content elsewhere (onto disk, across a network). Beneath the highly organised visual space of the GUI, lie hundreds if not thousands of loops. The work of coding these interfaces involves making loops, splicing loops together, and nesting loops within loops. At base, the so-called event loop means that the GUI in principle stands ready at any time to accept input from the physical interface devices. Depending on what that input is, it may translate into direct changes within the framed areas (for instance, keystrokes appear in a text field as letters) or changes affecting the controls (for instance, Control-Enter might signal send the text as an email). What we usually understand by interactivity stems from the way that a loop constantly accepts signals from the physical inputs, queues the signals as events, and deals with them one by one as discrete changes in what appears on screen. Within the GUI's basic event loop, many other loops are constantly starting and finishing. They are nested and unnested. They often affect some or other of the dozens of processes running at any one time within the operating system. Sometimes a command coming from the keyboard or a signal arriving from some other peripheral interface (the network interface card, the printer, a scanner, etc) will trigger the execution of a new process, itself composed of manifold loops. Hence loops often transiently interact with each other during execution of code. At base, the GUI shows something important, something that extends well beyond the domain of the GUI per se: the event loop generates and controls informations flows at the same time. People type on keyboards or manipulate game controllers. A single keypress or mouse click itself hardly constitutes a flow. Yet the event loop can amplify it into a cascade of thousands of events because it sets other loops in process. What we call information flow springs from the multiplicatory effect of loops. 
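A minimal sketch of the event-loop pattern described here (generic Python, not tied to any particular GUI toolkit): events are queued as they arrive and dispatched one at a time, and a single keypress can fan out into further queued events, which is the multiplicatory effect discussed above.

```python
from collections import deque

events = deque()                 # the event queue fed by input devices

def handle(event):
    """Dispatch one event; a handler may itself enqueue follow-up events."""
    kind, payload = event
    if kind == "keypress":
        events.append(("redraw", payload))     # one keypress fans out
        events.append(("autosave", payload))
    else:
        print("handled", kind, payload)

# Seed the queue as an input driver would, then run the loop until empty.
events.append(("keypress", "a"))
while events:                    # the event loop: take events one by one
    handle(events.popleft())
```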
A typology of looping

Information flows don't come from nowhere. They always go somewhere. Perhaps we could generalise a little from the mundane example of the GUI and say that the generation and control of information flows through loops is itself regulated by bounding conditions. A bounding condition determines the number of times and the sequence of operations carried out by a loop. They often come from outside the machine (interfaces of many different kinds) and from within it (other processes running at the same time, dependent on the operating system architecture and the hardware platform). Their regulatory role suggests the possibility of classifying loops according to boundary conditions.4 The following table classifies loops based on bounding conditions:

Type of loop | Bounding condition | Typical location
Simple & indefinite | No bounding conditions | Event loops in GUIs, servers ...
Simple & definite | Bounding conditions determined by a finite set of elements | Counting, sorting, input and output
Nested & definite | Multiple bounding conditions | Transforming grid and table structures
Recursive | Depth of possible recursion (memory or time) | Searching and sorting of tree or network structures
Result controlled | Loop ends when some goal has been reached | Goal-seeking algorithms
Interactive and indefinite | Bounding conditions change during the course of the loop | User interfaces or interaction

Although it risks simplifying something that is quite intricate in any actually executing process, this classification does stress that the distinguishing feature of loops may well be their bounding conditions. In practical terms, within program code, a bounding condition takes the form of some test carried out before, during or after each iteration of a loop. The bounding conditions for some loops relate to data that the code expects to come from other places—across networks, from the user interface, or some other devices. For other loops, the bounding conditions continually emerge in the course of the loop itself—the result of a calculation, finding some result in the course of searching a collection or receiving some new input in a flow of data from an interface or network connection. Based on the classification, we could suggest that loops not only generate flows, but they generate those flows within particular spatio-temporal manifolds. Put less abstractly, if we accept that flows don't come from nowhere, we then need to say what kind of places they do come from. The classification shows that they do not come from homogeneous spaces. In fact they relate to different topologies, to the hugely diverse orderings of signs and gestures within mediatic cultures. To take a mundane example, why has the table become such an important element in the HTML coding of webpages? Clearly tables provide an easy way to organise a page. Tables as classifying and visual ordering devices are nothing new. Along with lists, they have been used for centuries. However, the table as onscreen spatial entity also maps very directly onto a nested loop: the inner loop generates the horizontal row contents; the outer loop places the output of the inner loop in vertical order. As web-designers quickly discovered during the 1990s, HTML tables are rendered quickly by browsers and can easily position different contents—images, headings, text, lines, spaces—in proximity. In short, nested loops can quickly turn a table into a serial flow or quickly render a table out of a serial flow.
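The nested-loop reading of the HTML table can be made concrete with a short sketch (generic Python, not from the article): the inner loop emits the cells of one row, the outer loop stacks rows vertically, turning a flat serial flow of values into a grid.

```python
# A flat, serial flow of values (e.g. read from a stream or database cursor).
flow = ["a", "b", "c", "d", "e", "f"]
COLUMNS = 3

rows_html = []
for start in range(0, len(flow), COLUMNS):        # outer loop: vertical order
    cells = ""
    for value in flow[start:start + COLUMNS]:     # inner loop: one row's cells
        cells += f"<td>{value}</td>"
    rows_html.append(f"<tr>{cells}</tr>")

table = "<table>" + "".join(rows_html) + "</table>"
print(table)   # <table><tr><td>a</td><td>b</td><td>c</td></tr>...</table>
```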
Implications We started with the observation that artists, writers, hackers and media activists are working with code in order to reposition themselves in relation to information flows. Through technical elements such as loops, they reappropriate certain facets of the production of information and communication. Working with these and other elements, they look for different points of entry into the flows, attempting to move upstream of the heavily capitalised sites of mainstream production such as the Windows GUI, eCommerce websites or blockbuster game titles. The proliferation of information objects in music, in visual culture, in database and net-centred forms of interactivity ranging from computer games to chat protocols, suggests that the coding work can trigger powerful shifts in the cultures of circulation. Analysis of loops also suggests that the notion of data or information flow, understood as the continuous gliding of bits through systems of communication, needs revision. Rather than code simply controlling flow, code generates flows as well. What might warrant further thought is just how different kinds of bounding conditions generate different spatio-temporal patterns and modes of inclusion within flows. The diversity of loops within information objects imply a variety of topologically complicated places. It would be possible to work through the classification describing how each kind of loop maps into different spatial and temporal orderings. In particular, we might want to focus on how more complicated loops—result controlled, recursive, or interactive and indefinite types—map out more topologically complicated spaces and times. For my purposes, the important point is that bounding conditions not only regulate loops, they bring different kinds of spatio-temporal manifold into the seriality of flow. They imprint spatial and temporal ordering. Here the operationality of code begins to display a generative dimension that goes well beyond merely transporting or communicating content. Notes 1. At a more theoretical level, for a decade or so fairly abstract notions of virtuality have dominated media and cultural studies approaches to new media. While that domination has been increasingly contested by more fine grained studies of how the Internet is enmeshed with different places (Miller and Slater), attention to code is justified on the grounds that it constitutes an increasingly important form of expression within information flows. 2. Detailed discussion of these looping constructs can be found in any programming textbook or introductory computer science course, so I will not be going through them in any detail. 3. For instance, the cycles of the clock chip are absolutely irreducible. Virtually all programs implicitly rely on a clock chip to regulate execution of their instructions. 4. A classification can act as a symptomatology, that is, as something that sets out the various signs of the existence of a particular condition (Deleuze 368), in this case, the operationality of code. References Appadurai, Arjun. Modernity at Large: Cultural Dimensions of Globalization. Minneapolis: U of Minnesota P, 1996. Deleuze, Gilles. The Brain is the Screen. An Interview with Gilles Deleuze. The Brain is the Screen. Deleuze and the Philosophy of Cinema. Ed Gregory Flaxman. Minneapolis: U of Minnesota P, 2000. 365-68. Hardt, Michael and Antonio Negri. Empire. Cambridge, MA: Harvard U P, 2000. Himanen, Pekka. The Hacker Ethic and the Spirit of the Information Age. 
London: Secker and Warburg, 2001. Lash, Scott. Critique of Information. London: Sage, 2002. Manovich, Lev. What is Digital Cinema? Ed. Peter Lunenfeld. The Digital Dialectic: New Essays on New Media. Cambridge, MA: MIT, 1999. 172-92. Miller, Daniel, and Don Slater. The Internet: An Ethnographic Approach. Oxford: Berg, 2000. Moody, Glyn. Rebel Code: Linux and the Open Source Revolution. Middlesworth: Penguin, 2001. Citation reference for this article MLA Style Mackenzie, Adrian. "Making Data Flow" M/C: A Journal of Media and Culture 5.4 (2002). [your date of access] < http://www.media-culture.org.au/mc/0208/data.php>. Chicago Style Mackenzie, Adrian, "Making Data Flow" M/C: A Journal of Media and Culture 5, no. 4 (2002), < http://www.media-culture.org.au/mc/0208/data.php> ([your date of access]). APA Style Mackenzie, Adrian. (2002) Making Data Flow. M/C: A Journal of Media and Culture 5(4). < http://www.media-culture.org.au/mc/0208/data.php> ([your date of access]).
APA, Harvard, Vancouver, ISO, and other citation styles
36

Hinner, Kajetan. "Statistics of Major IRC Networks". M/C Journal 3, no. 4 (1 August 2000). http://dx.doi.org/10.5204/mcj.1867.

Full text
Abstract
Internet Relay Chat (IRC) is a text-based computer-mediated communication (CMC) service in which people can meet and chat in real time. Most chat occurs in channels named for a specific topic, such as #usa or #linux. A user can take part in several channels when connected to an IRC network. For a long time the only major IRC network available was EFnet, founded in 1990. Over the 1990s three other major IRC networks developed, Undernet (1993), DALnet (1994) and IRCnet (which split from EFnet in June 1996). Several causes led to the separate development of IRC networks: fast growth of user numbers, poor scalability of the IRC protocol and content disagreements, like allowing or prohibiting 'bot programs. Today we are experiencing the development of regional IRC networks, such as BrasNet for Brazilian users, and increasing regionalisation of the global networks -- IRCnet users are generally European, EFnet users generally from the Americas and Australia. All persons connecting to an IRC network at one time create that IRC network's user space. People are constantly signing on and off each network. The total number of users who have ever been to a specific IRC network could be called its 'social space' and an IRC network's social space is by far larger than its user space at any one time. Although there has been research on IRC almost from its beginning (it was developed in 1988, and the first research was made available in late 1991 (Reid)), resources on quantitative development are rare. To rectify this situation, a quantitative data logging 'bot program -- Socip -- was created and set to run on various IRC networks. Socip has been running for almost two years on several IRC networks, giving Internet researchers empirical data of the quantitative development of IRC.
Methodology
Any approach to gathering quantitative data on IRC needs to fulfil the following tasks: Store the number of users that are on an IRC network at a given time, e.g. every five minutes; Store the number of channels; and, Store the number of servers. It is possible to get this information using the '/lusers' command on an IRC-II client, entered by hand. This approach yields results as in Table 1.
Table 1: Number of IRC users on January 31st, 1995
Date | Time | Users | Invisible | Servers | Channels
31.01.95 | 10:57 | 2737 | 2026 | 93 | 1637
During the first months of 1995, it was even possible to get all user information using the '/who **' command. However, on current major IRC networks with greater than 50000 users this method is denied by the IRC Server program, which terminates the connection because it is too slow to accept that amount of data. Added to this problem is the fact that collecting these data manually is an exhausting and repetitive task, better suited to automation. Three approaches to automation were attempted in the development process.
The 'Eggdrop' approach
The 'Eggdrop' 'bot is one of the best-known IRC 'bot programs. Once programmed, 'bots can act autonomously on an IRC network, and Eggdrop was considered particularly convenient because customised modules could be easily installed. However, testing showed that the Eggdrop 'bot was unsuitable for two reasons. The first was technical: for reasons undetermined, all Eggdrop modules created extensive CPU usage, making it impossible to run several Eggdrops simultaneously to research a number of IRC networks. The second reason had to do with the statistics to be obtained. 
The objective was to get a snapshot of current IRC users and IRC channel use every five minutes, written into an ASCII file. It was impossible to extend Eggdrop's possibilities in a way that it would periodically submit the '/lusers' command and write the received data into a file. For these reasons, and some security concerns, the Eggdrop approach was abandoned.
The ircII script approach
IrcII was a UNIX IRC client with its own scripting language, making it possible to write command files which periodically submit the '/lusers' command to any chosen IRC server and log the command's output. Four different scripts were used to monitor IRCnet, EFnet, DALnet and Undernet from January to October 1998. These scripts were named Socius_D, Socius_E, Socius_I and Socius_U (depending on the network). Every hour each script stored the number of users and channels in a logfile (examinable using another script written in the Perl language). There were some drawbacks to the ircII script approach. While the need for a terminal to run on could be avoided using the 'screen' package -- making it possible to start ircII, run the scripts, detach, and log off again -- it was impossible to restart ircII and the scripts using an automatic task-scheduler. Thus periodic manual checks were required to find out if the scripts were still running and restart them if needed (e.g. if the server connection was lost). These checks showed that at least one script would not be running after 10 hours. Additional disadvantages were the lengthy log files and the necessity of providing a second program to extract the log file data and write it into a second file from which meaningful graphs could be created. The failure of the Eggdrop and ircII scripting approaches led to the solution still in use today.
Perl script-only approach
Perl is a powerful script language for handling file-oriented data when speed is not extremely important. Its version 5 flavour can be extended with a large number of modules, including the Net::IRC package. The object-oriented Perl interface enables Perl scripts to connect to an IRC server, and use the basic IRC commands. The Socip.pl program includes all server definitions needed to create connections. Socip is currently monitoring ten major IRC networks, including DALnet, EFnet, IRCnet, the Microsoft Network, Talkcity, Undernet and Galaxynet. When run, "Social science IRC program" selects a nickname from its list corresponding to the network -- for EFnet, the first nickname used is Socip_E1. It then functions somewhat like a 'bot. Using that nickname, Socip tries to create an IRC connection to a server of the given network. If there is no failure, handlers are set up which take care of proper reactions to IRC server messages (such as Ping-pong, message output and reply). Socip then joins the channel #hose (the name has no special meaning), a maintenance channel with the additional effect of real persons meeting the 'bot and trying to interact with it every now and then. Those interactions are logged too. Sitting in that channel, the script sleeps periodically and checks if a certain time span has passed (the default is five minutes). After that, the '/lusers' command's output is stored in a data file for each IRC network and the IRC network's RRD (Round Robin database) file is updated (a minimal sketch of this polling loop follows below).
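The Socip program itself is written in Perl on top of the Net::IRC module and is not reproduced in the article. Purely as an illustration of the polling pattern just described, the following Python sketch connects under a nickname, answers server PINGs, issues the LUSERS query every five minutes, and appends the numeric replies to a log file; the server name, nickname and file name are placeholders, not values taken from the article.

```python
# Illustrative sketch only (not the Perl/Net::IRC Socip program): poll an IRC
# server for user/channel/server counts at a fixed interval and log the replies.
import socket
import time

SERVER, PORT = "irc.example.net", 6667     # placeholder server
NICK = "Socip_X1"                          # placeholder nickname
LOGFILE = "lusers.log"                     # placeholder log file
INTERVAL = 300                             # five minutes, as described above

def send(sock, line):
    sock.sendall((line + "\r\n").encode("utf-8"))

sock = socket.create_connection((SERVER, PORT))
sock.settimeout(5)
send(sock, f"NICK {NICK}")
send(sock, f"USER {NICK} 0 * :IRC statistics sketch")

buffer, last_poll = b"", 0.0
while True:
    if time.time() - last_poll >= INTERVAL:
        send(sock, "LUSERS")               # ask the server for its current counts
        last_poll = time.time()
    try:
        data = sock.recv(4096)
    except socket.timeout:
        continue
    if not data:
        break                              # connection closed; a real bot would reconnect
    buffer += data
    while b"\r\n" in buffer:
        raw, buffer = buffer.split(b"\r\n", 1)
        line = raw.decode("utf-8", errors="replace")
        if line.startswith("PING"):        # keep the connection alive
            send(sock, "PONG " + line.split(" ", 1)[1])
        elif any(f" {num} " in line for num in ("251", "252", "254", "255", "265", "266")):
            with open(LOGFILE, "a") as log:  # numeric replies carrying the LUSERS data
                log.write(f"{time.strftime('%Y-%m-%d %H:%M:%S')} {line}\n")
```

A production version would also need the reconnection and nickname-rotation behaviour described in the next paragraph.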
This database, which is organised chronologically, offers great detail for recent events and more condensed information for older events. User and channel information younger than 10 days is stored in five-minute detail. If older than two years, the same information is automatically averaged and stored in a per-day resolution (a simplified sketch of this kind of consolidation appears at the end of this passage). In case of network problems, Socip acts as necessary. For example, it recognises a connection termination and tries to reconnect after pausing by using the next nickname on the list. This prevents nickname collision problems. If the IRC server does not respond to '/luser' commands three times in a row, the next server on the list is accessed. Special (crontab-invoked) scripts take care of restarting Socip when necessary, as in termination of script because of network problems, IRC operator kill or power failure. After a reboot all scripts are automatically restarted. All monitoring is done on a Linux machine (Pentium 120, 32 MB, Debian Linux 2.1) which is up all the time. Processor load is not extensive, and this machine also acts as the Sociology Department's WWW-Server.
Graph creation
Graphs can be created from the data in Socip's RRD files. This task is done using the MRTG (Multi Router Traffic Grapher) program by Tobias Oetiker. A script updates all IRC graphs four times a day. Usage of each IRC network is visualised through five graphs: Daily, Weekly and Monthly users and channels, accompanied by two graphs showing all known data users/channels and servers. All this information is continuously published on the World Wide Web at http://www.hinner.com/ircstat.
Figures
The following samples demonstrate what information can be produced by Socip. As already mentioned, graphs of all monitored networks are updated four times a day, with five graphs for each IRC network. Figure 1 shows the rise of EFnet users from about 40000 in November 1998 to 65000 in July 2000. Sampled data oscillates around an average value, a result of users being spread across different time zones.
Fig. 1: EFnet - Users and Channels since November 1998
Figure 2 illustrates the decrease of interconnected EFnet servers over the years. Each server is now handling more and more users. Reasons for taking IRC servers off the net are security concerns (attacks on the server by malicious persons), new payment schemes, maintenance and cost effort.
Fig. 2: EFnet - Servers since November 1998
A nice example of a heavily changing weekly graph is Figure 3, which shows peaks shortly before 6pm CEST and almost no users shortly after midnight.
Fig. 3: Galaxynet: Weekly Graph (July, 15th-22nd, 2000)
The daily graph portrays usage variations with even more detail. Figure 4 is taken from Undernet user and channel data. The vertical gap in the graph indicates missing data, caused either by a net split or other network problems.
Fig. 4: Undernet: Daily Graph: July, 22nd, 2000
The final example (Figure 5) shows a weekly graph of the Webchat (http://www.webchat.org) network. It can be seen that every day the user count varies from 5000 to nearly 20000, and that channel numbers fluctuate in concert, from 2500 to 5000.
Fig. 5: Webchat: Monthly graph, Week 24-29, 2000
Not every IRC user is connected all the time to an IRC network. This figure may have increased lately with more and more flatrates and cheap Internet access offers, but in general most users will sign off the network after some time. This is why IRC is a very dynamic society, with its membership constantly in flux. 
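The consolidation behaviour of the RRD files mentioned above can be pictured with a small sketch. This is not the RRDtool format or API, and it collapses the real tool's intermediate resolutions into just two tiers; it only illustrates the idea of keeping five-minute samples for recent data and per-day averages for older data. All numbers and names are invented.

```python
# Illustration only: a two-tier simplification of round-robin (RRD-style) storage.
# Recent samples keep their five-minute resolution; older samples are averaged per day.
from collections import defaultdict
from datetime import datetime, timedelta

DETAIL_WINDOW = timedelta(days=10)   # keep full detail for the most recent 10 days

def consolidate(samples, now):
    """samples: iterable of (timestamp, user_count) pairs."""
    recent, older = [], defaultdict(list)
    for ts, users in samples:
        if now - ts <= DETAIL_WINDOW:
            recent.append((ts, users))           # five-minute detail
        else:
            older[ts.date()].append(users)       # group everything older by day
    daily_averages = {day: sum(vals) / len(vals) for day, vals in older.items()}
    return recent, daily_averages

# Tiny usage example with made-up values.
now = datetime(2000, 7, 22, 12, 0)
samples = [(now - timedelta(days=40, minutes=5 * i), 40000 + 10 * i) for i in range(6)]
samples += [(now - timedelta(minutes=5 * i), 60000 + 10 * i) for i in range(6)]
recent, daily = consolidate(samples, now)
print(len(recent), "detailed samples;", len(daily), "daily averages")
```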
Maximum user counts only give the highest number of members who were simultaneously online at some point, and one could only guess at the number of total users of the network -- that is, including those who are using that IRC service but are not signed on at that time. To answer these questions, more thorough investigation is necessary. Then inflows and outflows might be more readily estimated. Table 2 shows the all time maximum user counts of seven IRC networks, compared to the average numbers of IRC users of the four major IRC networks during the third quarter 1998 (based on available data).
Table 2: Maximum user counts of selected IRC networks
 | DALnet | EFnet | Galaxy Net | IRCnet | MS Chat | Undernet | Webchat
Max. 2000 | 64276 | 64309 | 15253 | 65340 | 17392 | 60210 | 19793
3rd Q. 1998 | 21000 | 37000 | n/a | 24500 | n/a | 24000 | n/a
Compared with the 200-300 users in 1991 and the 7000 IRC-chatters in 1994, the recent growth is certainly extraordinary: it adds up to a total of 306573 users across all monitored networks. It can be expected that the 500000 IRC user threshold will be passed some time during the year 2001. As a final remark, it should be said that obviously Web-based chat systems will be more and more common in the future. These chat services do not use standard IRC protocols, and will be very hard to monitor. Given that these systems are already quite popular, the actual number of chat users in the world could have already passed the half million landmark.
References
Reid, Elizabeth. "Electropolis: Communications and Community on Internet Relay Chat." Unpublished Honours Dissertation. U of Melbourne, 1991. The Socip program can be obtained at no cost from http://www.hinner.com. Most IRC networks can be accessed with the original Net::Irc Perl extension, but for some special cases (e.g. Talkcity) an extended version is needed, which can also be found there. Citation reference for this article MLA style: Kajetan Hinner. "Statistics of Major IRC Networks: Methods and Summary of User Count." M/C: A Journal of Media and Culture 3.4 (2000). [your date of access] <http://www.api-network.com/mc/0008/count.php>. Chicago style: Kajetan Hinner, "Statistics of Major IRC Networks: Methods and Summary of User Count," M/C: A Journal of Media and Culture 3, no. 4 (2000), <http://www.api-network.com/mc/0008/count.php> ([your date of access]). APA style: Kajetan Hinner. (2000) Statistics of major IRC networks: methods and summary of user count. M/C: A Journal of Media and Culture 3(4). <http://www.api-network.com/mc/0008/count.php> ([your date of access]).
APA, Harvard, Vancouver, ISO, and other citation styles
37

Mallan, Kerry Margaret and Annette Patterson. "Present and Active: Digital Publishing in a Post-print Age". M/C Journal 11, no. 4 (24 June 2008). http://dx.doi.org/10.5204/mcj.40.

Full text
Abstract
At one point in Victor Hugo’s novel, The Hunchback of Notre Dame, the archdeacon, Claude Frollo, looked up from a book on his table to the edifice of the gothic cathedral, visible from his canon’s cell in the cloister of Notre Dame: “Alas!” he said, “this will kill that” (146). Frollo’s lament, that the book would destroy the edifice, captures the medieval cleric’s anxiety about the way in which Gutenberg’s print technology would become the new universal means for recording and communicating humanity’s ideas and artistic expression, replacing the grand monuments of architecture, human engineering, and craftsmanship. For Hugo, architecture was “the great handwriting of humankind” (149). The cathedral as the material outcome of human technology was being replaced by the first great machine—the printing press. At this point in the third millennium, some people undoubtedly have similar anxieties to Frollo: is it now the book’s turn to be destroyed by yet another great machine? The inclusion of “post print” in our title is not intended to sound the death knell of the book. Rather, we contend that despite the enduring value of print, digital publishing is “present and active” and is changing the way in which research, particularly in the humanities, is being undertaken. Our approach has three related parts. First, we consider how digital technologies are changing the way in which content is constructed, customised, modified, disseminated, and accessed within a global, distributed network. This section argues that the transition from print to electronic or digital publishing means both losses and gains, particularly with respect to shifts in our approaches to textuality, information, and innovative publishing. Second, we discuss the Children’s Literature Digital Resources (CLDR) project, with which we are involved. This case study of a digitising initiative opens out the transformative possibilities and challenges of digital publishing and e-scholarship for research communities. Third, we reflect on technology’s capacity to bring about major changes in the light of the theoretical and practical issues that have arisen from our discussion. I. Digitising in a “post-print age” We are living in an era that is commonly referred to as “the late age of print” (see Kho) or the “post-print age” (see Gunkel). According to Aarseth, we have reached a point whereby nearly all of our public and personal media have become more or less digital (37). As Kho notes, web newspapers are not only becoming increasingly more popular, but they are also making rather than losing money, and paper-based newspapers are finding it difficult to recruit new readers from the younger generations (37). Not only can such online-only publications update format, content, and structure more economically than print-based publications, but their wide distribution network, speed, and flexibility attract advertising revenue. Hype and hyperbole aside, publishers are not so much discarding their legacy of print, but recognising the folly of not embracing innovative technologies that can add value by presenting information in ways that satisfy users’ needs for content to-go or for edutainment. As Kho notes: “no longer able to satisfy customer demand by producing print-only products, or even by enabling online access to semi-static content, established publishers are embracing new models for publishing, web-style” (42). 
Advocates of online publishing contend that the major benefits of online publishing over print technology are that it is faster, more economical, and more interactive. However, as Hovav and Gray caution, “e-publishing also involves risks, hidden costs, and trade-offs” (79). The specific focus for these authors is e-journal publishing and they contend that while cost reduction is in editing, production and distribution, if the journal is not open access, then costs relating to storage and bandwidth will be transferred to the user. If we put economics aside for the moment, the transition from print to electronic text (e-text), especially with electronic literary works, brings additional considerations, particularly in their ability to make available different reading strategies to print, such as “animation, rollovers, screen design, navigation strategies, and so on” (Hayles 38). Transition from print to e-text In his book, Writing Space, David Bolter follows Victor Hugo’s lead, but does not ask if print technology will be destroyed. Rather, he argues that “the idea and ideal of the book will change: print will no longer define the organization and presentation of knowledge, as it has for the past five centuries” (2). As Hayles noted above, one significant indicator of this change, which is a consequence of the shift from analogue to digital, is the addition of graphical, audio, visual, sonic, and kinetic elements to the written word. A significant consequence of this transition is the reinvention of the book in a networked environment. Unlike the printed book, the networked book is not bound by space and time. Rather, it is an evolving entity within an ecology of readers, authors, and texts. The Web 2.0 platform has enabled more experimentation with blending of digital technology and traditional writing, particularly in the use of blogs, which have spawned blogwriting and the wikinovel. Siva Vaidhyanathan’s The Googlization of Everything: How One Company is Disrupting Culture, Commerce and Community … and Why We Should Worry is a wikinovel or blog book that was produced over a series of weeks with contributions from other bloggers (see: http://www.sivacracy.net/). Penguin Books, in collaboration with a media company, “Six Stories to Start,” have developed six stories—“We Tell Stories,” which involve different forms of interactivity from users through blog entries, Twitter text messages, an interactive google map, and other features. For example, the story titled “Fairy Tales” allows users to customise the story using their own choice of names for characters and descriptions of character traits. Each story is loosely based on a classic story and links take users to synopses of these original stories and their authors and to online purchase of the texts through the Penguin Books sales website. These examples of digital stories are a small part of the digital environment, which exploits computer and online technologies’ capacity to be interactive and immersive. As Janet Murray notes, the interactive qualities of digital environments are characterised by their procedural and participatory abilities, while their immersive qualities are characterised by their spatial and encyclopedic dimensions (71–89). These immersive and interactive qualities highlight different ways of reading texts, which entail different embodied and cognitive functions from those that reading print texts requires. 
As Hayles argues: the advent of electronic textuality presents us with an unparalleled opportunity to reformulate fundamental ideas about texts and, in the process, to see print as well as electronic texts with fresh eyes (89–90). The transition to e-text also highlights how digitality is changing all aspects of everyday life both inside and outside the academy. Online teaching and e-research Another aspect of the commercial arm of publishing that is impacting on academe and other organisations is the digitising and indexing of print content for niche distribution. Kho offers the example of the Mark Logic Corporation, which uses its XML content platform to repurpose content, create new content, and distribute this content through multiple portals. As the promotional website video for Mark Logic explains, academics can use this service to customise their own textbooks for students by including only articles and book chapters that are relevant to their subject. These are then organised, bound, and distributed by Mark Logic for sale to students at a cost that is generally cheaper than most textbooks. A further example of how print and digital materials can form an integrated, customised source for teachers and students is eFictions (Trimmer, Jennings, & Patterson). eFictions was one of the first print and online short story anthologies that teachers of literature could customise to their own needs. Produced as both a print text collection and a website, eFictions offers popular short stories in English by well-known traditional and contemporary writers from the US, Australia, New Zealand, UK, and Europe, with summaries, notes on literary features, author biographies, and, in one instance, a YouTube movie of the story. In using the eFictions website, teachers can build a customised anthology of traditional and innovative stories to suit their teaching preferences. These examples provide useful indicators of how content is constructed, customised, modified, disseminated, and accessed within a distributed network. However, the question remains as to how to measure their impact and outcomes within teaching and learning communities. As Harley suggests in her study on the use and users of digital resources in the humanities and social sciences, several factors warrant attention, such as personal teaching style, philosophy, and specific disciplinary requirements. However, in terms of understanding the benefits of digital resources for teaching and learning, Harley notes that few providers in her sample had developed any plans to evaluate use and users in a systematic way. In addition to the problems raised in Harley’s study, another relates to how researchers can be supported to take full advantage of digital technologies for e-research. The transformation brought about by information and communication technologies extends and broadens the impact of research, by making its outputs more discoverable and usable by other researchers, and its benefits more available to industry, governments, and the wider community. Traditional repositories of knowledge and information, such as libraries, are juggling the space demands of books and computer hardware alongside increasing reader demand for anywhere, anytime, anyplace access to information. 
Researchers’ expectations about online access to journals, eprints, bibliographic data, and the views of others through wikis, blogs, and associated social and information networking sites such as YouTube compete with the traditional expectations of the institutions that fund libraries for paper-based archives and book repositories. While university libraries are finding it increasingly difficult to purchase all hardcover books relevant to numerous and varied disciplines, a significant proportion of their budgets goes towards digital repositories (e.g., STORS), indexes, and other resources, such as full-text electronic specialised and multidisciplinary journal databases (e.g., Project Muse and Proquest); electronic serials; e-books; and specialised information sources through fast (online) document delivery services. An area that is becoming increasingly significant for those working in the humanities is the digitising of historical and cultural texts. II. Bringing back the dead: The CLDR project The CLDR project is led by researchers and librarians at the Queensland University of Technology, in collaboration with Deakin University, University of Sydney, and members of the AustLit team at The University of Queensland. The CLDR project is a “Research Community” of the electronic bibliographic database AustLit: The Australian Literature Resource, which is working towards the goal of providing a complete bibliographic record of the nation’s literature. AustLit offers users a single entry point to enhanced scholarly resources on Australian writers, their works, and other aspects of Australian literary culture and activities. AustLit and its Research Communities are supported by grants from the Australian Research Council and financial and in-kind contributions from a consortium of Australian universities, and by other external funding sources such as the National Collaborative Research Infrastructure Strategy. Like other more extensive digitisation projects, such as Project Gutenberg and the Rosetta Project, the CLDR project aims to provide a centralised access point for digital surrogates of early published works of Australian children’s literature, with access pathways to existing resources. The first stage of the CLDR project is to provide access to digitised, full-text, out-of-copyright Australian children’s literature from European settlement to 1945, with selected digitised critical works relevant to the field. Texts comprise a range of genres, including poetry, drama, and narrative for young readers and picture books, songs, and rhymes for infants. Currently, a selection of 75 e-texts and digital scans of original texts from Project Gutenberg and Internet Archive have been linked to the Children’s Literature Research Community. By the end of 2009, the CLDR will have digitised approximately 1000 literary texts and a significant number of critical works. Stage II and subsequent development will involve digitisation of selected texts from 1945 onwards. A precursor to the CLDR project has been undertaken by Deakin University in collaboration with the State Library of Victoria, whereby a digital bibliographic index comprising Victorian School Readers has been completed with plans for full-text digital surrogates of a selection of these texts. These texts provide valuable insights into citizenship, identity, and values formation from the 1930s onwards. At the time of writing, the CLDR is at an early stage of development. 
An extensive survey of out-of-copyright texts has been completed and the digitisation of these resources is about to commence. The project plans to make rich content searchable, allowing scholars from children’s literature studies and education to benefit from the many advantages of online scholarship. What digital publishing and associated digital archives, electronic texts, hypermedia, and so forth foreground is the fact that writers, readers, publishers, programmers, designers, critics, booksellers, teachers, and copyright laws operate within a context that is highly mediated by technology. In his article on large-scale digitisation projects carried out by Cornell and University of Michigan with the Making of America collection of 19th-century American serials and monographs, Hirtle notes that when special collections’ materials are available via the Web, with appropriate metadata and software, then they can “increase use of the material, contribute to new forms of research, and attract new users to the material” (44). Furthermore, Hirtle contends that despite the poor ergonomics associated with most electronic displays and e-book readers, “people will, when given the opportunity, consult an electronic text over the print original” (46). If this preference is universally accurate, especially for researchers and students, then it follows that not only will the preference for electronic surrogates of original material increase, but preference for other kinds of electronic texts will also increase. It is with this preference for electronic resources in mind that we approached the field of children’s literature in Australia and asked questions about how future generations of researchers would prefer to work. If electronic texts become the reference of choice for primary as well as secondary sources, then it seems sensible to assume that researchers would prefer to sit at the end of the keyboard than to travel considerable distances at considerable cost to access paper-based print texts in distant libraries and archives. We considered the best means for providing access to digitised primary and secondary, full text material, and digital pathways to existing online resources, particularly an extensive indexing and bibliographic database. Prior to the commencement of the CLDR project, AustLit had already indexed an extensive amount of children’s literature. Challenges and dilemmas The CLDR project, even in its early stages of development, has encountered a number of challenges and dilemmas that centre on access, copyright, economic capital, and practical aspects of digitisation, and sustainability. These issues have relevance for digital publishing and e-research. A decision is yet to be made as to whether the digital texts in CLDR will be available on open or closed/tolled access. The preference is for open access. As Hayles argues, copyright is more than a legal basis for intellectual property, as it also entails ideas about authorship, creativity, and the work as an “immaterial mental construct” that goes “beyond the paper, binding, or ink” (144). Seeking copyright permission is therefore only part of the issue. Determining how the item will be accessed is a further matter, particularly as future technologies may impact upon how a digital item is used. In the case of e-journals, copyright payment structures are evolving towards a collective licensing system, pay-per-view, and other combinations of print and electronic subscription (see Hovav and Gray). 
For research purposes, digitisation of items for CLDR is not simply a scan and deliver process. Rather it is one that needs to ensure that the best quality is provided and that the item is both accessible and usable by researchers, and sustainable for future researchers. Sustainability is an important consideration and provides a challenge for institutions that host projects such as CLDR. Therefore, items need to be scanned to a high quality and this requires an expensive scanner and personnel costs. Files need to be in a variety of formats for preservation purposes and so that they may be manipulated to be useable in different technologies (for example, Archival Tiff, Tiff, Jpeg, PDF, HTML). Hovav and Gray warn that when technology becomes obsolete, then content becomes unreadable unless backward integration is maintained. The CLDR items will be annotatable given AustLit’s NeAt funded project: Aus-e-Lit. The Aus-e-Lit project will extend and enhance the existing AustLit web portal with data integration and search services, empirical reporting services, collaborative annotation services, and compound object authoring, editing, and publishing services. For users to be able to get the most out of a digital item, it needs to be searchable, either through double keying or OCR (optical character recognition); a minimal illustration of OCR appears at the end of this passage. The value of CLDR’s contribution The value of the CLDR project lies in its goal to provide a comprehensive, searchable body of texts (fictional and critical) to researchers across the humanities and social sciences. Other projects seem to be intent on putting up as many items as possible to be considered as a first resort for online texts. CLDR is more specific and is not interested in simply generating a presence on the Web. Rather, it is research driven both in its design and implementation, and in its focussed outcomes of assisting academics and students primarily in their e-research endeavours. To this end, we have concentrated on the following: an extensive survey of appropriate texts; best models for file location, distribution, and use; and high standards of digitising protocols. These issues that relate to data storage, digitisation, collections, management, and end-users of data are aligned with the “Development of an Australian Research Data Strategy” outlined in An Australian e-Research Strategy and Implementation Framework (2006). CLDR is not designed to simply replicate resources, as it has a distinct focus, audience, and research potential. In addition, it looks at resources that may be forgotten or are no longer available in reproduction by current publishing companies. Thus, the aim of CLDR is to preserve both the time and a period of Australian history and literary culture. It will also provide users with an accessible repository of rare and early texts written for children. III. Future directions It is now commonplace to recognize that the Web’s role as information provider has changed over the past decade. New forms of “collective intelligence” or “distributed cognition” (Oblinger and Lombardi) are emerging within and outside formal research communities. Technology’s capacity to initiate major cultural, social, educational, economic, political and commercial shifts has conditioned us to expect the “next big thing.” We have learnt to adapt swiftly to the many challenges that online technologies have presented, and we have reaped the benefits. 
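The double keying or OCR step mentioned above is the point at which a scanned page gains a searchable text layer. As a minimal illustration only (not the CLDR workflow), the open-source Tesseract engine can be driven from Python through the pytesseract wrapper roughly as follows; the file names are placeholders.

```python
# Illustration only: adding a searchable text layer to a scanned page with OCR.
# Requires the Tesseract engine plus the pytesseract and Pillow packages;
# the file names are placeholders, not items from the CLDR collection.
from PIL import Image
import pytesseract

def ocr_page(image_path):
    """Return the recognised plain text for one scanned page image."""
    return pytesseract.image_to_string(Image.open(image_path))

if __name__ == "__main__":
    text = ocr_page("scanned_page_001.tif")
    with open("scanned_page_001.txt", "w", encoding="utf-8") as out:
        out.write(text)   # keep the text layer alongside the archival scan
```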
As the examples in this discussion have highlighted, the changes in online publishing and digitisation have provided many material, network, pedagogical, and research possibilities: we teach online units providing students with access to e-journals, e-books, and customized archives of digitised materials; we communicate via various online technologies; we attend virtual conferences; and we participate in e-research through a global, digital network. In other words, technology is deeply engrained in our everyday lives. In returning to Frollo’s concern that the book would destroy architecture, Umberto Eco offers a placatory note: “in the history of culture it has never happened that something has simply killed something else. Something has profoundly changed something else” (n. pag.). Eco’s point has relevance to our discussion of digital publishing. The transition from print to digital necessitates a profound change that impacts on the ways we read, write, and research. As we have illustrated with our case study of the CLDR project, the move to creating digitised texts of print literature needs to be considered within a dynamic network of multiple causalities, emergent technological processes, and complex negotiations through which digital texts are created, stored, disseminated, and used. Technological changes in just the past five years have, in many ways, created an expectation in the minds of people that the future is no longer some distant time from the present. Rather, as our title suggests, the future is both present and active. References Aarseth, Espen. “How we became Postdigital: From Cyberstudies to Game Studies.” Critical Cyber-culture Studies. Ed. David Silver and Adrienne Massanari. New York: New York UP, 2006. 37–46. An Australian e-Research Strategy and Implementation Framework: Final Report of the e-Research Coordinating Committee. Commonwealth of Australia, 2006. Bolter, Jay David. Writing Space: The Computer, Hypertext, and the History of Writing. Hillsdale, NJ: Erlbaum, 1991. Eco, Umberto. “The Future of the Book.” 1994. 3 June 2008 ‹http://www.themodernword.com/eco/eco_future_of_book.html>. Gunkel, David. J. “What's the Matter with Books?” Configurations 11.3 (2003): 277–303. Harley, Diane. “Use and Users of Digital Resources: A Focus on Undergraduate Education in the Humanities and Social Sciences.” Research and Occasional Papers Series. Berkeley: University of California. Centre for Studies in Higher Education. 12 June 2008 ‹http://www.themodernword.com/eco/eco_future_of_book.html>. Hayles, N. Katherine. My Mother was a Computer: Digital Subjects and Literary Texts. Chicago: U of Chicago P, 2005. Hirtle, Peter B. “The Impact of Digitization on Special Collections in Libraries.” Libraries & Culture 37.1 (2002): 42–52. Hovav, Anat and Paul Gray. “Managing Academic E-journals.” Communications of the ACM 47.4 (2004): 79–82. Hugo, Victor. The Hunchback of Notre Dame (Notre-Dame de Paris). Ware, Hertfordshire: Wordsworth editions, 1993. Kho, Nancy D. “The Medium Gets the Message: Post-Print Publishing Models.” EContent 30.6 (2007): 42–48. Oblinger, Diana and Marilyn Lombardi. “Common Knowledge: Openness in Higher Education.” Opening up Education: The Collective Advancement of Education Through Open Technology, Open Content and Open Knowledge. Ed. Toru Liyoshi and M. S. Vijay Kumar. Cambridge, MA: MIT Press, 2007. 389–400. Murray, Janet H. Hamlet on the Holodeck: The Future of Narrative in Cyberspace. Cambridge, MA: MIT Press, 2001. 
Trimmer, Joseph F., Wade Jennings, and Annette Patterson. eFictions. New York: Harcourt, 2001.
APA, Harvard, Vancouver, ISO, and other citation styles
38

Cinque, Toija. "A Study in Anxiety of the Dark". M/C Journal 24, no. 2 (27 April 2021). http://dx.doi.org/10.5204/mcj.2759.

Full text
Abstract
Introduction
This article is a study in anxiety with regard to social online spaces (SOS) conceived of as dark. There are two possible ways to define ‘dark’ in this context. The first is that communication is dark because it either has limited distribution, is not open to all users (closed groups are a case example), or is hidden. The second definition, linked as a result of the first, is the way that communication via these means is interpreted and understood. Dark social spaces disrupt the accepted top-down flow by the ‘gazing elite’ (data aggregators including social media), but anxious users might need to strain to notice what is out there, and this in turn destabilises one’s reception of the scene. In an environment where surveillance technologies are proliferating, this article examines contemporary, dark, interconnected, and interactive communications for the entangled affordances that might be brought to bear. A provocation is that resistance through counterveillance or “sousveillance” is one possibility. An alternative (or addition) is retreating to or building ‘dark’ spaces that are less surveilled and (perhaps counterintuitively) less fearful. This article considers critically the notion of dark social online spaces via four broad socio-technical concerns connected to the big social media services that have helped increase a tendency for fearful anxiety produced by surveillance and the perceived implications for personal privacy. It also shines light on the aspect of darkness where some users are spurred to actively seek alternative, dark social online spaces. Since the 1970s, public-key cryptosystems typically preserved security for websites, emails, and sensitive health, government, and military data, but this is now reduced (Williams). We have seen such systems exploited via cyberattacks and misappropriated data acquired by affiliations such as Facebook-Cambridge Analytica for targeted political advertising during the 2016 US elections. Via the notion of “parasitic strategies”, such events can be described as news/information hacks “whose attack vectors target a system’s weak points with the help of specific strategies” (von Nordheim and Kleinen-von Königslöw, 88). In accord with Wilson and Serisier’s arguments (178), emerging technologies facilitate rapid data sharing, collection, storage, and processing wherein subsequent “outcomes are unpredictable”. This would also include the effect of acquiescence. In regard to our digital devices, for some, being watched overtly—through cameras encased in toys, computers, and closed-circuit television (CCTV) to digital street ads that determine the resonance of human emotions in public places including bus stops, malls, and train stations—is becoming normalised (McStay, Emotional AI). It might appear that consumers immersed within this Internet of Things (IoT) are themselves comfortable interacting with devices that record sound and capture images for easy analysis and distribution across the communications networks. A counter-claim is that mainstream social media corporations have cultivated a sense of digital resignation “produced when people desire to control the information digital entities have about them but feel unable to do so” (Draper and Turow, 1824). Careful consumers’ trust in mainstream media is waning, with readers observing a strong presence of big media players in the industry and carefully picking their publications and public intellectuals to follow (Mahmood, 6). 
A number now also avoid the mainstream internet in favour of alternate dark sites. This is done by users with “varying backgrounds, motivations and participation behaviours that may be idiosyncratic (as they are rooted in the respective person’s biography and circumstance)” (Quandt, 42). By way of connection with dark internet studies via Biddle et al. (1; see also Lasica), the “darknet” is a collection of networks and technologies used to share digital content … not a separate physical network but an application and protocol layer riding on existing networks. As we note from the quote above, the “dark web” uses existing public and private networks that facilitate communication via the Internet. Gehl (1220; see also Gehl and McKelvey) has detailed that this includes “hidden sites that end in ‘.onion’ or ‘.i2p’ or other Top-Level Domain names only available through modified browsers or special software. Accessing I2P sites requires a special routing program ... . Accessing .onion sites requires Tor [The Onion Router]”. (A small illustration of routing a request through Tor appears at the end of this passage.) For some, this gives rise to social anxiety, read here as stemming from that which is not known, and an exaggerated sense of danger, which makes fight or flight seem the only options. This is often justified or exacerbated by the changing media and communication landscape and depicted in popular documentaries such as The Social Dilemma or The Great Hack, which affect public opinion on the unknown aspects of internet spaces and the uses of personal data. The question for this article remains whether the fear of the dark is justified. Consider that most often one will choose to make one’s intimate bedroom space dark in order to have a good night’s rest. We might pleasurably escape into a cinema’s darkness for the stories told therein, or walk along a beach at night enjoying unseen breezes. Most do not avoid these experiences, choosing to actively seek them out. Drawing this thread, then, is the case made here that agency can also be found in the dark by resisting socio-political structural harms.
1. Digital Futures and Anxiety of the Dark
Fear of the dark
I have a constant fear that something's always near
Fear of the dark
Fear of the dark
I have a phobia that someone's always there
In the lyrics to the song “Fear of the Dark” (1992) by British heavy metal group Iron Maiden is a sense that that which is unknown and unseen causes fear and anxiety. Holding a fear of the dark is not unusual and varies in degree for adults as it does for children (Fellous and Arbib). Such anxiety connected to the dark does not always concern darkness itself. It can also be a concern for the possible or imagined dangers that are concealed by the darkness itself as a result of cognitive-emotional interactions (McDonald, 16). Extending this claim is this article’s non-binary assertion that while for some technology and what it can do is frequently misunderstood and shunned as a result, for others who embrace the possibilities and actively take it on it is learning by attentively partaking. Mistakes, solecisms, and frustrations are part of the process. Such conceptual theorising falls along a continuum of thinking. Global interconnectivity of communications networks has certainly led to consequent concerns (Turkle Alone Together). 
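By way of a small, purely technical aside on the Gehl point above: with a local Tor client running (its SOCKS proxy conventionally listens on port 9050) and the requests library installed with SOCKS support, a request can be routed through Tor roughly as follows. The onion address is a placeholder, and none of this is drawn from the article itself.

```python
# Illustration only: routing one HTTP request through a locally running Tor client.
# Assumes Tor's SOCKS proxy is listening on 127.0.0.1:9050 (the conventional default)
# and that requests has SOCKS support installed (pip install "requests[socks]").
# The .onion address below is a placeholder, not a real service.
import requests

TOR_PROXY = {
    "http": "socks5h://127.0.0.1:9050",    # socks5h: let Tor resolve the .onion name
    "https": "socks5h://127.0.0.1:9050",
}

response = requests.get("http://exampleonionaddress.onion/", proxies=TOR_PROXY, timeout=60)
print(response.status_code)
```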
Much focus for anxiety has been on the impact upon social and individual inner lives, levels of media concentration, and power over and commercialisation of the internet. Of specific note is that increasing commercial media influence—such as Facebook and its acquisition of WhatsApp, Oculus VR, Instagram, CTRL-labs (translating movements and neural impulses into digital signals), LiveRail (video advertising technology), Chainspace (Blockchain)—regularly changes the overall dynamics of the online environment (Turow and Kavanaugh). This provocation was borne out recently when Facebook disrupted the delivery of news to Australian audiences via its service. Mainstream social online spaces (SOS) are platforms which provide more than the delivery of media alone and have been conceptualised predominantly in a binary light. On the one hand, they can be depicted as tools for the common good of society through notional widespread access and as places for civic participation and discussion, identity expression, education, and community formation (Turkle; Bruns; Cinque and Brown; Jenkins). This end of the continuum of thinking about SOS seems set hard against the view that SOS are operating as businesses with strategies that manipulate consumers to generate revenue through advertising, data, venture capital for advanced research and development, and company profit, on the other hand. In between the two polar ends of this continuum are the range of other possibilities, the shades of grey, that add contemporary nuance to understanding SOS in regard to what they facilitate, what the various implications might be, and for whom. By way of a brief summary, anxiety of the dark is steeped in the practices of privacy-invasive social media giants such as Facebook and its ancillary companies. Second are the advertising technology companies, surveillance contractors, and intelligence agencies that collect and monitor our actions and related data; as well as the increased ease of use and interoperability brought about by Web 2.0 that has seen a disconnection between technological infrastructure and social connection that acts to limit user permissions and online affordances. Third are concerns for the negative effects associated with depressed mental health and wellbeing caused by “psychologically damaging social networks”, through sleep loss, anxiety, poor body image, real world relationships, and the fear of missing out (FOMO; Royal Society for Public Health (UK) and the Young Health Movement). Here the harms are both individual and societal. Fourth is the intended acceleration toward post-quantum IoT (Fernández-Caramés), as quantum computing’s digital components are continually being miniaturised. This is coupled with advances in electrical battery capacity and interconnected telecommunications infrastructures. The result of such is that the ontogenetic capacity of the powerfully advanced network/s affords supralevel surveillance. What this means is that through devices and the services that they provide, individuals’ data is commodified (Neff and Nafus; Nissenbaum and Patterson). Personal data is enmeshed in ‘things’ requiring that the decisions that are both overt, subtle, and/or hidden (dark) are scrutinised for the various ways they shape social norms and create consequences for public discourse, cultural production, and the fabric of society (Gillespie). Data and personal information are retrievable from devices, sharable in SOS, and potentially exposed across networks. 
For these reasons, some have chosen to go dark by being “off the grid”, judiciously selecting their means of communications and their ‘friends’ carefully. 2. Is There Room for Privacy Any More When Everyone in SOS Is Watching? An interesting turn comes through counterarguments against overarching institutional surveillance that underscore the uses of technologies to watch the watchers. This involves a practice of counter-surveillance whereby technologies are tools of resistance to go ‘dark’ and are used by political activists in protest situations for both communication and avoiding surveillance. This is not new and has long existed in an increasingly dispersed media landscape (Cinque, Changing Media Landscapes). For example, counter-surveillance video footage has been accessed and made available via live-streaming channels, with commentary in SOS augmenting networking possibilities for niche interest groups or micropublics (Wilson and Serisier, 178). A further example is the Wordpress site Fitwatch, appealing for an end to what the site claims are issues associated with police surveillance (fitwatch.org.uk and endpolicesurveillance.wordpress.com). Users of these sites are called to post police officers’ identity numbers and photographs in an attempt to identify “cops” that might act to “misuse” UK Anti-terrorism legislation against activists during legitimate protests. Others that might be interested in doing their own “monitoring” are invited to reach out to identified personal email addresses or other private (dark) messaging software and application services such as Telegram (freeware and cross-platform). In their work on surveillance, Mann and Ferenbok (18) propose that there is an increase in “complex constructs between power and the practices of seeing, looking, and watching/sensing in a networked culture mediated by mobile/portable/wearable computing devices and technologies”. By way of critical definition, Mann and Ferenbok (25) clarify that “where the viewer is in a position of power over the subject, this is considered surveillance, but where the viewer is in a lower position of power, this is considered sousveillance”. It is the aspect of sousveillance that is empowering to those using dark SOS. One might consider that not all surveillance is “bad” nor institutionalised. It is neither overtly nor formally regulated—as yet. Like most technologies, many of the surveillant technologies are value-neutral until applied towards specific uses, according to Mann and Ferenbok (18). But this is part of the ‘grey area’ for understanding the impact of dark SOS in regard to which actors or what nations are developing tools for surveillance, where access and control lies, and with what effects into the future. 3. Big Brother Watches, So What Are the Alternatives: Whither the Gazing Elite in Dark SOS? By way of conceptual genealogy, consideration of contemporary perceptions of surveillance in a visually networked society (Cinque, Changing Media Landscapes) might be usefully explored through a revisitation of Jeremy Bentham’s panopticon, applied here as a metaphor for contemporary surveillance. Arguably, this is a foundational theoretical model for integrated methods of social control (Foucault, Surveiller et Punir, 192-211), realised in the “panopticon” (prison) in 1787 by Jeremy Bentham (Bentham and Božovič, 29-95) during a period of social reformation aimed at the improvement of the individual. 
Like the power for social control over the incarcerated in a panopticon, police power, in order that it be effectively exercised, “had to be given the instrument of permanent, exhaustive, omnipresent surveillance, capable of making all visible … like a faceless gaze that transformed the whole social body into a field of perception” (Foucault, Surveiller et Punir, 213–4). In grappling with the impact of SOS for the individual and the collective in post-digital times, we can trace out these early ruminations on the complex documentary organisation through state-controlled apparatuses (such as inspectors and paid observers including “secret agents”) via Foucault (Surveiller et Punir, 214; Subject and Power, 326-7) for comparison to commercial operators like Facebook. Today, artificial intelligence (AI), facial recognition technology (FRT), and closed-circuit television (CCTV) for video surveillance are used for social control of appropriate behaviours. Exemplified by governments and the private sector is the use of combined technologies to maintain social order, from ensuring citizens cross the street only on green lights, to putting rubbish in the correct recycling bin or be publicly shamed, to making cashless payments in stores. The actions see advantages for individual and collective safety, sustainability, and convenience, but also register forms of behaviour and attitudes with predictive capacities. This gives rise to suspicions about a permanent account of individuals’ behaviour over time. Returning to Foucault (Surveiller et Punir, 135), the impact of this finds a dissociation of power from the individual, whereby they become unwittingly impelled into pre-existing social structures, leading to a ‘normalisation’ and acceptance of such systems. If we are talking about the dark, anxiety is key for a Ministry of SOS. Following Foucault again (Subject and Power, 326-7), there is the potential for a crawling, creeping governance that was once distinct but is itself increasingly hidden and growing. A blanket call for some form of ongoing scrutiny of such proliferating powers might be warranted, but with it comes regulation that, while offering certain rights and protections, is not without consequences. For their part, a number of SOS platforms had little to no moderation for explicit content prior to December 2018, and in terms of power, notwithstanding important anxiety connected to arguments that children and the vulnerable need protections from those that would seek to take advantage, this was a crucial aspect of community building and self-expression that resulted in this freedom of expression. In unearthing the extent that individuals are empowered arising from the capacity to post sexual self-images, Tiidenberg ("Bringing Sexy Back") considered that through dark SOS (read here as unregulated) some users could work in opposition to the mainstream consumer culture that provides select and limited representations of bodies and their sexualities. This links directly to Mondin’s exploration of the abundance of queer and feminist pornography on dark SOS as a “counterpolitics of visibility” (288). This work resulted in a reasoned claim that the technological structure of dark SOS created a highly political and affective social space that users valued. What also needs to be underscored is that many users also believed that such a space could not be replicated on other mainstream SOS because of the differences in architecture and social norms. 
Cho (47) worked with this theory to claim that dark SOS are modern-day examples in a history of queer individuals having to rely on “underground economies of expression and relation”. Discussions such as these complicate what dark SOS might now become in the face of ‘adult’ content moderation and emerging tracking technologies to close sites or locate individuals that transgress social norms. Further, broader questions are raised about how content moderation fits in with the public space conceptualisations of SOS more generally. Increasingly, “there is an app for that” where being able to identify the poster of an image or an author of an unknown text is seen as crucial. While there is presently no standard approach, models for combining instance-based and profile-based features such as SVM for determining authorship attribution are in development, with the result that potentially far less content will remain hidden in the future (Bacciu et al.); a generic sketch of this kind of classification appears at the end of this passage. 4. There’s Nothing New under the Sun (Ecclesiastes 1:9) For some, “[the] high hopes regarding the positive impact of the Internet and digital participation in civic society have faded” (Schwarzenegger, 99). My participant observation over some years in various SOS, however, finds that critical concern has always existed. Views move along the spectrum of thinking from deep scepticisms (Stoll, Silicon Snake Oil) to wondrous techno-utopian promises (Negroponte, Being Digital). Indeed, concerns about the (then) new technologies of wireless broadcasting can be compared with today’s anxiety over the possible effects of the internet and SOS. Inglis (7) recalls, here, too, were fears that humanity was tampering with some dangerous force; might wireless wave be causing thunderstorms, droughts, floods? Sterility or strokes? Such anxieties soon evaporated; but a sense of mystery might stay longer with evangelists for broadcasting than with a laity who soon took wireless for granted and settled down to enjoy the products of a process they need not understand. As the analogy above makes clear, just as audiences came to use ‘the wireless’ and later the internet regularly, it is reasonable to argue that dark SOS will also gain widespread understanding and find greater acceptance. Dark social spaces are simply the recent development of internet connectivity and communication more broadly. The dark SOS afford choice to be connected beyond mainstream offerings, which some users avoid for their perceived manipulation of content and user both. As part of the wider array of dark web services, the resilience of dark social spaces is reinforced by the proliferation of users as opposed to decentralised replication. Virtual Private Networks (VPNs) can be used for anonymity in parallel to TOR access, but they guarantee only anonymity to the client. A VPN cannot guarantee anonymity to the server or the internet service provider (ISP). While users may use pseudonyms rather than actual names as seen on Facebook and other SOS, users continue to take to the virtual spaces they inhabit their off-line, ‘real’ foibles, problems, and idiosyncrasies (Chenault). To varying degrees, however, people also take their best intentions to their interactions in the dark. The hyper-efficient tools now deployed can intensify this, which is the great advantage attracting some users. 
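Returning to the authorship-attribution point above: the sketch below is not the combined instance-based and profile-based model of Bacciu et al., only a generic illustration of SVM-based attribution over character n-gram features using scikit-learn, with invented texts and author labels.

```python
# Illustration only: a generic SVM authorship-attribution baseline, not the
# Bacciu et al. model. Character n-grams are a common stylometric feature;
# the training texts and author labels here are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

train_texts = [
    "the dark web is not a place so much as a set of protocols",
    "protocols, like places, can be inhabited and contested",
    "i reckon surveillance is just attention with a budget",
    "attention, budgets, and the slow creep of the gazing elite",
]
train_authors = ["author_a", "author_a", "author_b", "author_b"]

model = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),  # character n-gram features
    LinearSVC(),
)
model.fit(train_texts, train_authors)

print(model.predict(["the slow creep of protocols with a budget"]))
```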
On balance, however, in regard to online information access and dissemination, critical examination of what is in the public’s interest, and whether content should be regulated or controlled versus allowing a free flow of information where users self-regulate their online behaviour, is fraught. O’Loughlin (604) was one of the first to claim that there will be voluntary loss through negative liberty or freedom from (freedom from unwanted information or influence) and an increase in positive liberty or freedom to (freedom to read or say anything); hence, freedom from surveillance and interference is a kind of negative liberty, consistent with both libertarianism and liberalism. Conclusion The early adopters of initial iterations of SOS were hopeful and liberal (utopian) in their beliefs about universality and ‘free’ spaces of open communication between like-minded others. This was a way of virtual networking using a visual motivation (led by images, text, and sounds) for consequent interaction with others (Cinque, Visual Networking). The structural transformation of the public sphere in a Habermasian sense—now found in SOS and their darker, hidden or closed social spaces that might ensure a counterbalance to the power of those with influence—towards all having equal access to platforms for presenting their views, and doing so respectfully, remains as problematised as ever. Broadly, this is no more so, however, than for mainstream SOS or for communicating in the world. References Bacciu, Andrea, Massimo La Morgia, Alessandro Mei, Eugenio Nerio Nemmi, Valerio Neri, and Julinda Stefa. “Cross-Domain Authorship Attribution Combining Instance Based and Profile-Based Features.” CLEF (Working Notes). Lugano, Switzerland, 9-12 Sep. 2019. Bentham, Jeremy, and Miran Božovič. The Panopticon Writings. London: Verso Trade, 1995. Biddle, Peter, et al. “The Darknet and the Future of Content Distribution.” Proceedings of the 2002 ACM Workshop on Digital Rights Management. Vol. 6. Washington DC, 2002. Bruns, Axel. Blogs, Wikipedia, Second Life, and Beyond: From Production to Produsage. New York: Peter Lang, 2008. Chenault, Brittney G. “Developing Personal and Emotional Relationships via Computer-Mediated Communication.” CMC Magazine 5.5 (1998). 1 May 2020 <http://www.december.com/cmc/mag/1998/may/chenault.html>. Cho, Alexander. “Queer Reverb: Tumblr, Affect, Time.” Networked Affect. Eds. K. Hillis, S. Paasonen, and M. Petit. Cambridge, Mass.: MIT Press, 2015: 43-58. Cinque, Toija. Changing Media Landscapes: Visual Networking. London: Oxford UP, 2015. ———. “Visual Networking: Australia's Media Landscape.” Global Media Journal: Australian Edition 6.1 (2012): 1-8. Cinque, Toija, and Adam Brown. “Educating Generation Next: Screen Media Use, Digital Competencies, and Tertiary Education.” Digital Culture & Education 7.1 (2015). Draper, Nora A., and Joseph Turow. “The Corporate Cultivation of Digital Resignation.” New Media & Society 21.8 (2019): 1824-1839. Fellous, Jean-Marc, and Michael A. Arbib, eds. Who Needs Emotions? The Brain Meets the Robot. New York: Oxford UP, 2005. Fernández-Caramés, Tiago M. “From Pre-Quantum to Post-Quantum IoT Security: A Survey on Quantum-Resistant Cryptosystems for the Internet of Things.” IEEE Internet of Things Journal 7.7 (2019): 6457-6480. Foucault, Michel. Surveiller et Punir: Naissance de la Prison [Discipline and Punish—The Birth of The Prison]. Trans. Alan Sheridan. New York: Random House, 1977. Foucault, Michel. 
“The Subject and Power.” Michel Foucault: Power, the Essential Works of Michel Foucault 1954–1984. Vol. 3. Trans. R. Hurley and others. Ed. J.D. Faubion. London: Penguin, 2001. Gehl, Robert W. Weaving the Dark Web: Legitimacy on Freenet, Tor, and I2P. Cambridge, Massachusetts: MIT Press, 2018. Gehl, Robert, and Fenwick McKelvey. “Bugging Out: Darknets as Parasites of Large-Scale Media Objects.” Media, Culture & Society 41.2 (2019): 219-235. Gillespie, Tarleton. Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media. London: Yale UP, 2018. Habermas, Jürgen. The Structural Transformation of the Public Sphere: An Inquiry into a Category of Bourgeois Society. Trans. Thomas Burger with the assistance of Frederick Lawrence. Cambridge, Mass.: MIT Press, 1989. Inglis, Ken S. This Is the ABC: The Australian Broadcasting Commission 1932–1983. Melbourne: Melbourne UP, 1983. Iron Maiden. “Fear of the Dark.” London: EMI, 1992. Jenkins, Henry. Convergence Culture: Where Old and New Media Collide. New York: New York UP, 2006. Lasica, J. D. Darknet: Hollywood’s War against the Digital Generation. New York: John Wiley and Sons, 2005. Mahmood, Mimrah. “Australia's Evolving Media Landscape.” 13 Apr. 2021 <https://www.meltwater.com/en/resources/australias-evolving-media-landscape>. Mann, Steve, and Joseph Ferenbok. “New Media and the Power Politics of Sousveillance in a Surveillance-Dominated World.” Surveillance & Society 11.1/2 (2013): 18-34. McDonald, Alexander J. “Cortical Pathways to the Mammalian Amygdala.” Progress in Neurobiology 55.3 (1998): 257-332. McStay, Andrew. Emotional AI: The Rise of Empathic Media. London: Sage, 2018. Mondin, Alessandra. “‘Tumblr Mostly, Great Empowering Images’: Blogging, Reblogging and Scrolling Feminist, Queer and BDSM Desires.” Journal of Gender Studies 26.3 (2017): 282-292. Neff, Gina, and Dawn Nafus. Self-Tracking. Cambridge, Mass.: MIT Press, 2016. Negroponte, Nicholas. Being Digital. New York: Alfred A. Knopf, 1995. Nissenbaum, Helen, and Heather Patterson. “Biosensing in Context: Health Privacy in a Connected World.” Quantified: Biosensing Technologies in Everyday Life. Ed. Dawn Nafus. 2016. 68-79. O’Loughlin, Ben. “The Political Implications of Digital Innovations.” Information, Communication and Society 4.4 (2001): 595–614. Quandt, Thorsten. “Dark Participation.” Media and Communication 6.4 (2018): 36-48. Royal Society for Public Health (UK) and the Young Health Movement. “#Statusofmind.” 2017. 2 Apr. 2021 <https://www.rsph.org.uk/our-work/campaigns/status-of-mind.html>. Statista. “Number of IoT devices 2015-2025.” 27 Nov. 2020 <https://www.statista.com/statistics/471264/iot-number-of-connected-devices-worldwide/>. Schwarzenegger, Christian. “Communities of Darkness? Users and Uses of Anti-System Alternative Media between Audience and Community.” Media and Communication 9.1 (2021): 99-109. Stoll, Clifford. Silicon Snake Oil: Second Thoughts on the Information Highway. Anchor, 1995. Tiidenberg, Katrin. “Bringing Sexy Back: Reclaiming the Body Aesthetic via Self-Shooting.” Cyberpsychology: Journal of Psychosocial Research on Cyberspace 8.1 (2014). The Great Hack. Dirs. Karim Amer, Jehane Noujaim. Netflix, 2019. The Social Dilemma. Dir. Jeff Orlowski. Netflix, 2020. Turkle, Sherry. The Second Self: Computers and the Human Spirit. Cambridge, Mass.: MIT Press, 2005. Turkle, Sherry. Alone Together: Why We Expect More from Technology and Less from Each Other. UK: Hachette, 2017. Turow, Joseph, and Andrea L. 
Kavanaugh, eds. The Wired Homestead: An MIT Press Sourcebook on the Internet and the Family. Cambridge, Mass.: MIT Press, 2003. Von Nordheim, Gerret, and Katharina Kleinen-von Königslöw. “Uninvited Dinner Guests: A Theoretical Perspective on the Antagonists of Journalism Based on Serres’ Parasite.” Media and Communication 9.1 (2021): 88-98. Williams, Chris K. “Configuring Enterprise Public Key Infrastructures to Permit Integrated Deployment of Signature, Encryption and Access Control Systems.” MILCOM 2005-2005 IEEE Military Communications Conference. IEEE, 2005. Wilson, Dean, and Tanya Serisier. “Video Activism and the Ambiguities of Counter-Surveillance.” Surveillance & Society 8.2 (2010): 166-180.
APA, Harvard, Vancouver, ISO, and other citation styles
39

Deck, Andy. "Treadmill Culture". M/C Journal 6, no. 2 (April 1, 2003). http://dx.doi.org/10.5204/mcj.2157.

Full text
Abstract
Since the first days of the World Wide Web, artists like myself have been exploring the new possibilities of network interactivity. Some good tools and languages have been developed and made available free for the public to use. This has empowered individuals to participate in the media in ways that are quite remarkable. Nonetheless, the future of independent media is clouded by legal, regulatory, and organisational challenges that need to be addressed. It is not clear to what extent independent content producers will be able to build upon the successes of the 90s – it is yet to be seen whether their efforts will be largely nullified by the anticyclones of a hostile media market. Not so long ago, American news magazines were covering the Browser War. Several real wars later, the terms of surrender are becoming clearer. Now both of the major Internet browsers are owned by huge media corporations, and most of the states (and Reagan-appointed judges) that were demanding the break-up of Microsoft have given up. A curious about-face occurred in U.S. Justice Department policy when John Ashcroft decided to drop the federal case. Maybe Microsoft's value as a partner in covert activity appealed to Ashcroft more than free competition. Regardless, Microsoft is now turning its wrath on new competitors, people who are doing something very, very bad: sharing the products of their own labour. This practice of sharing source code and building free software infrastructure is epitomised by the continuing development of Linux. Everything in the Linux kernel is free, publicly accessible information. As a rule, the people building this "open source" operating system software believe that maintaining transparency is important. But U.S. courts are not doing much to help. In a case brought by the Motion Picture Association of America against Eric Corley, a federal district court blocked the distribution of source code that enables these systems to play DVDs. In addition to censoring Corley's journal, the court ruled that any programmer who writes a program that plays a DVD must comply with a host of license restrictions. In short, an established and popular media format (the DVD) cannot be used under open source operating systems without sacrificing the principle that software source code should remain in the public domain. Should the contents of operating systems be tightly guarded secrets, or subject to public review? If there are capable programmers willing to create good, free operating systems, should the law stand in their way? The question concerning what type of software infrastructure will dominate personal computers in the future is being answered as much by disappointing legal decisions as it is by consumer choice. Rather than ensuring the necessary conditions for innovation and cooperation, the courts permit a monopoly to continue. Rather than endorsing transparency, secrecy prevails. Rather than aiming to preserve a balance between the commercial economy and the gift-economy, sharing is being undermined by the law. Part of the mystery of the Internet for a lot of newcomers must be that it seems to disprove the old adage that you can't get something for nothing. Free games, free music, free pornography, free art. Media corporations are doing their best to change this situation. The FBI and trade groups have blitzed the American news media with alarmist reports about how children don't understand that sharing digital information is a crime. 
Teacher Gail Chmura, the star of one such media campaign, says of her students, "It's always been interesting that they don't see a connection between the two. They just don't get it" (Hopper). Perhaps the confusion arises because the kids do understand that digital duplication lets two people have the same thing. Theft is at best a metaphor for the copying of data, because the original is not stolen in the same sense as a material object. In the effort to liken all copying to theft, legal provisions for the fair use of intellectual property are neglected. Teachers could just as easily emphasise the importance of sharing and the development of an electronic commons that is free for all to use. The values advanced by the trade groups are not beyond question and are not historical constants. According to Donald Krueckeberg, Rutgers University Professor of Urban Planning, native Americans tied the concept of property not to ownership but to use. "One used it, one moved on, and use was shared with others" (qtd. in Batt). Perhaps it is necessary for individuals to have dominion over some private data. But who owns the land, wind, sun, and sky of the Internet – the infrastructure? Given that publicly-funded research and free software have been as important to the development of the Internet as have business and commercial software, it is not surprising that some ambiguity remains about the property status of the dataverse. For many the Internet is as much a medium for expression and the interplay of languages as it is a framework for monetary transaction. In the case involving DVD software mentioned previously, there emerged a grass-roots campaign in opposition to censorship. Dozens of philosophical programmers and computer scientists asserted the expressive and linguistic bases of software by creating variations on the algorithm needed to play DVDs. The forbidden lines of symbols were printed on T-shirts, translated into different computer languages, translated into legal rhetoric, and even embedded into DNA and pictures of MPAA president Jack Valenti (see e.g. Touretzky). These efforts were inspired by a shared conviction that important liberties were at stake. Supporting the MPAA's position would do more than protect movies from piracy. The use of the algorithm was not clearly linked to an intent to pirate movies. Many felt that outlawing the DVD algorithm, which had been experimentally developed by a Norwegian teenager, represented a suppression of gumption and ingenuity. The court's decision rejected established principles of fair use, denied the established legality of reverse engineering software to achieve compatibility, and asserted that journalists and scientists had no right to publish a bit of code if it might be misused. In a similar case in April 2000, a U.S. court of appeals found that First Amendment protections did apply to software (Junger). Noting that source code has both an expressive feature and a functional feature, this court held that First Amendment protection is not reserved only for purely expressive communication. Yet in the DVD case, the court opposed this view and enforced the inflexible demands of the Digital Millennium Copyright Act. Notwithstanding Ted Nelson's characterisation of computers as literary machines, the decision meant that the linguistic and expressive aspects of software would be subordinated to other concerns. A simple series of symbols were thereby cast under a veil of legal secrecy. 
Although the symbols were easy to discover, and capable of being committed to memory or translated to other languages, fair use and other intuitive freedoms were deemed expendable. These sorts of legal obstacles are serious challenges to the continued viability of free software like Linux. The central value proposition of Linux-based operating systems – free, open source code – is threatening to commercial competitors. Some corporations are intent on stifling further development of free alternatives. Patents offer another vulnerability. The writing of free software has become a minefield of potential patent lawsuits. Corporations have repeatedly chosen to pursue patent litigation years after the alleged infringements have been incorporated into widely used free software. For example, although it was designed to avoid patent problems by an array of international experts, the image file format known as JPEG (Joint Photographic Experts Group) has recently been dogged by patent infringement charges. Despite good intentions, low-budget initiatives and ad hoc organisations are ill equipped to fight profiteering patent lawsuits. One wonders whether software innovation is directed more by lawyers or computer scientists. The present copyright and patent regimes may serve the needs of the larger corporations, but it is doubtful that they are the best means of fostering software innovation and quality. Orwell wrote in his Homage to Catalonia, “There was a new rule that censored portions of the newspaper must not be left blank but filled up with other matter; as a result it was often impossible to tell when something had been cut out.” The development of the Internet has a similar character: new diversions spring up to replace what might have been so that the lost potential is hardly felt. The process of retrofitting Internet software to suit ideological and commercial agendas is already well underway. For example, Microsoft has announced recently that it will discontinue support for the Java language in 2004. The problem with Java, from Microsoft's perspective, is that it provides portable programming tools that work under all operating systems, not just Windows. With Java, programmers can develop software for the large number of Windows users, while simultaneously offering software to users of other operating systems. Java is an important piece of the software infrastructure for Internet content developers. Yet, in the interest of coercing people to use only their operating systems, Microsoft is willing to undermine thousands of existing Java-language projects. Their marketing hype calls this progress. The software industry relies on sales to survive, so if it means laying waste to good products and millions of hours of work in order to sell something new, well, that's business. The consequent infrastructure instability keeps software developers, and other creative people, on a treadmill. From Progressive Load by Andy Deck, artcontext.org/progload. As an Internet content producer, one does not appeal directly to the hearts and minds of the public; one appeals through the medium of software and hardware. Since most people are understandably reluctant to modify the software running on their computers, the software installed initially is a critical determinant of what is possible. Unconventional, independent, and artistic uses of the Internet are diminished when the media infrastructure is effectively established by decree. 
Unaccountable corporate control over infrastructure software tilts the playing field against smaller content producers who have neither the advance warning of industrial machinations, nor the employees and resources necessary to keep up with a regime of strategic, cyclical obsolescence. It seems that independent content producers must conform to the distribution technologies and content formats favoured by the entertainment and marketing sectors, or else resign themselves to occupying the margins of media activity. It is no secret that highly diversified media corporations can leverage their assets to favour their own media offerings and confound their competitors. Yet when media giants AOL and Time-Warner announced their plans to merge in 2000, the claim of CEOs Steve Case and Gerald Levin that the merged companies would "operate in the public interest" was hardly challenged by American journalists. Time-Warner has since fought to end all ownership limits in the cable industry; and Case, who formerly championed third-party access to cable broadband markets, changed his tune abruptly after the merger. Now that Case has been ousted, it is unclear whether he still favours oligopoly. According to Levin, “global media will be and is fast becoming the predominant business of the 21st century ... more important than government. It's more important than educational institutions and non-profits. We're going to need to have these corporations redefined as instruments of public service, and that may be a more efficient way to deal with society's problems than bureaucratic governments. Corporate dominance is going to be forced anyhow because when you have a system that is instantly available everywhere in the world immediately, then the old-fashioned regulatory system has to give way” (Levin). It doesn't require a lot of insight to understand that this "redefinition," this sleight of hand, does not protect the public from abuses of power: the dissolution of the "old-fashioned regulatory system" does not serve the public interest. From Lexicon by Andy Deck, artcontext.org/lexicon. As an artist who has adopted telecommunications networks and software as his medium, it disappoints me that a mercenary vision of electronic media's future seems to be the prevailing blueprint. The giantism of media corporations, and the ongoing deregulation of media consolidation (Ahrens), underscore the critical need for independent media sources. If it were just a matter of which cola to drink, it would not be of much concern, but media corporations control content. In this hyper-mediated age, content – whether produced by artists or journalists – crucially affects what people think about and how they understand the world. Content is not impervious to the software, protocols, and chicanery that surround its delivery. It is about time that people interested in independent voices stop believing that laissez faire capitalism is building a better media infrastructure. The German writer Hans Magnus Enzensberger reminds us that the media tyrannies that affect us are social products. The media industry relies on thousands of people to make the compromises necessary to maintain its course. The rapid development of the mind industry, its rise to a key position in modern society, has profoundly changed the role of the intellectual. He finds himself confronted with new threats and new opportunities. 
Whether he knows it or not, whether he likes it or not, he has become the accomplice of a huge industrial complex which depends for its survival on him, as he depends on it for his own. He must try, at any cost, to use it for his own purposes, which are incompatible with the purposes of the mind machine. What it upholds he must subvert. He may play it crooked or straight, he may win or lose the game; but he would do well to remember that there is more at stake than his own fortune (Enzensberger 18). Some cultural leaders have recognised the important role that free software already plays in the infrastructure of the Internet. Among intellectuals there is undoubtedly a genuine concern about the emerging contours of corporate, global media. But more effective solidarity is needed. Interest in open source has tended to remain superficial, leading to trendy, cosmetic, and symbolic uses of terms like "open source" rather than to a deeper commitment to an open, public information infrastructure. Too much attention is focussed on what's "cool" and not enough on the road ahead. Various media specialists – designers, programmers, artists, and technical directors – make important decisions that affect the continuing development of electronic media. Many developers have failed to recognise (or care) that their decisions regarding media formats can have long reaching consequences. Web sites that use media formats which are unworkable for open source operating systems should be actively discouraged. Comparable technologies are usually available to solve compatibility problems. Going with the market flow is not really giving people what they want: it often opposes the work of thousands of activists who are trying to develop open source alternatives (see e.g. Greene). Average Internet users can contribute to a more innovative, free, open, and independent media – and being conscientious is not always difficult or unpleasant. One project worthy of support is the Internet browser Mozilla. Currently, many content developers create their Websites so that they will look good only in Microsoft's Internet Explorer. While somewhat understandable given the market dominance of Internet Explorer, this disregard for interoperability undercuts attempts to popularise standards-compliant alternatives. Mozilla, written by a loose-knit group of activists and programmers (some of whom are paid by AOL/Time-Warner), can be used as an alternative to Microsoft's browser. If more people use Mozilla, it will be harder for content providers to ignore the way their Web pages appear in standards-compliant browsers. The Mozilla browser, which is an open source initiative, can be downloaded from http://www.mozilla.org/. While there are many people working to create real and lasting alternatives to the monopolistic and technocratic dynamics that are emerging, it takes a great deal of cooperation to resist the media titans, the FCC, and the courts. Oddly enough, corporate interests sometimes overlap with those of the public. Some industrial players, such as IBM, now support open source software. For them it is mostly a business decision. Frustrated by the coercive control of Microsoft, they support efforts to develop another operating system platform. For others, including this writer, the open source movement is interesting for the potential it holds to foster a more heterogeneous and less authoritarian communications infrastructure. 
Many people can find common cause in this resistance to globalised uniformity and consolidated media ownership. The biggest challenge may be to get people to believe that their choices really matter, that by endorsing certain products and operating systems and not others, they can actually make a difference. But it's unlikely that this idea will flourish if artists and intellectuals don't view their own actions as consequential. There is a troubling tendency for people to see themselves as powerless in the face of the market. This paralysing habit of mind must be abandoned before the media will be free. Works Cited Ahrens, Frank. "Policy Watch." Washington Post (23 June 2002): H03. 30 March 2003 <http://www.washingtonpost.com/ac2/wp-dyn/A27015-2002Jun22?language=printer>. Batt, William. "How Our Towns Got That Way." 7 Oct. 1996. 31 March 2003 <http://www.esb.utexas.edu/drnrm/WhatIs/LandValue.htm>. Chester, Jeff. "Gerald Levin's Negative Legacy." Alternet.org 6 Dec. 2001. 5 March 2003 <http://www.democraticmedia.org/resources/editorials/levin.php>. Enzensberger, Hans Magnus. "The Industrialisation of the Mind." Raids and Reconstructions. London: Pluto Press, 1975. 18. Greene, Thomas C. "MS to Eradicate GPL, Hence Linux." 25 June 2002. 5 March 2003 <http://www.theregus.com/content/4/25378.php>. Hopper, D. Ian. "FBI Pushes for Cyber Ethics Education." Associated Press 10 Oct. 2000. 29 March 2003 <http://www.billingsgazette.com/computing/20001010_cethics.php>. Junger v. Daley. U.S. Court of Appeals for 6th Circuit. 00a0117p.06. 2000. 31 March 2003 <http://pacer.ca6.uscourts.gov/cgi-bin/getopn.pl?OPINION=00a0117p.06>. Levin, Gerald. "Millennium 2000 Special." CNN 2 Jan. 2000. Touretzky, D. S. "Gallery of CSS Descramblers." 2000. 29 March 2003 <http://www.cs.cmu.edu/~dst/DeCSS/Gallery>. Links http://artcontext.org/lexicon/ http://artcontext.org/progload http://pacer.ca6.uscourts.gov/cgi-bin/getopn.pl?OPINION=00a0117p.06 http://www.billingsgazette.com/computing/20001010_cethics.html http://www.cs.cmu.edu/~dst/DeCSS/Gallery http://www.democraticmedia.org/resources/editorials/levin.html http://www.esb.utexas.edu/drnrm/WhatIs/LandValue.htm http://www.mozilla.org/ http://www.theregus.com/content/4/25378.html http://www.washingtonpost.com/ac2/wp-dyn/A27015-2002Jun22?language=printer Citation reference for this article Substitute your date of access for Dn Month Year etc... MLA Style Deck, Andy. "Treadmill Culture." M/C: A Journal of Media and Culture <http://www.media-culture.org.au/0304/04-treadmillculture.php>. APA Style Deck, A. (2003, Apr 23). Treadmill Culture. M/C: A Journal of Media and Culture, 6, <http://www.media-culture.org.au/0304/04-treadmillculture.php>
APA, Harvard, Vancouver, ISO, and other citation styles
40

Goggin, Gerard. "‘mobile text’". M/C Journal 7, no. 1 (January 1, 2004). http://dx.doi.org/10.5204/mcj.2312.

Full text
Abstract
Mobile In many countries, more people have mobile phones than they do fixed-line phones. Mobile phones are one of the fastest growing technologies ever, outstripping even the internet in many respects. With the advent and widespread deployment of digital systems, mobile phones were used by an estimated 1,158,254,300 people worldwide in 2002 (up from approximately 91 million in 1995), 51.4% of total telephone subscribers (ITU). One of the reasons for this is mobility itself: the ability for people to talk on the phone wherever they are. The communicative possibilities opened up by mobile phones have produced new uses and new discourses (see Katz and Aakhus; Brown, Green, and Harper; and Plant). Contemporary soundscapes now feature not only voice calls in previously quiet public spaces such as buses or restaurants but also the aural irruptions of customised polyphonic ringtones identifying whose phone is ringing by the tune downloaded. The mobile phone plays an important role in contemporary visual and material culture as fashion item and status symbol. Most tragically one might point to the tableau of people in the twin towers of the World Trade Centre, or aboard a plane about to crash, calling their loved ones to say good-bye (Galvin). By contrast, one can look on at the bathos of Australian cricketer Shane Warne’s predilection for pressing his mobile phone into service to arrange wanted and unwanted assignations while on tour. In this article, I wish to consider another important and so far also under-theorised aspect of mobile phones: text. Of contemporary textual and semiotic systems, mobile text is only a recent addition. Yet it already produces millions of inscriptions each day, and promises to be of far-reaching significance. Txt Txt msg ws an acidnt. no 1 expcted it. Whn the 1st txt msg ws sent, in 1993 by Nokia eng stdnt Riku Pihkonen, the telcom cpnies thought it ws nt important. SMS – Short Message Service – ws nt considrd a majr pt of GSM. Like mny teks, the *pwr* of txt — indeed, the *pwr* of the fon — wz discvrd by users. In the case of txt mssng, the usrs were the yng or poor in the W and E. (Agar 105) As Jon Agar suggests in Constant Touch, textual communication through the mobile phone was an after-thought. Mobile phones use radio waves, operating on a cellular system. The first such mobile service went live in Chicago in December 1978, in Sweden in 1981, in January 1985 in the United Kingdom (Agar), and in the mid-1980s in Australia. Mobile cellular systems allowed efficient sharing of scarce spectrum, improvements in handsets and quality, drawing on advances in science and engineering. In the first instance, technology designers, manufacturers, and mobile phone companies had been preoccupied with transferring telephone capabilities and culture to the mobile phone platform. With the growth in data communications from the 1960s onwards, consideration had been given to the data capabilities of the mobile phone. One difficulty, however, had been the poor quality and slow transfer rates of data communications over mobile networks, especially with first-generation analogue and early second-generation digital mobile phones. As the internet was widely and wildly adopted in the early to mid-1990s, mobile phone proponents looked at mimicking internet and online data services possibilities on their hand-held devices. 
What could work on a computer screen, it was thought, could be reinvented in miniature for the mobile phone — and hence much money was invested in the wireless application protocol (or WAP), which spectacularly flopped. The future of mobiles as a material support for text culture was not to lie, at first at least, in aping the world-wide web for the phone. It came from an unexpected direction: cheap, simple letters, spelling out short messages with strange new ellipses. SMS was built into the European Global System for Mobile (GSM) standard as an insignificant, additional capability. A number of telecommunications manufacturers thought so little of SMS that they did not design or even offer the equipment needed (the servers, for instance) for the distribution of the messages. The character sets were limited, the keyboards small, the typeface displays rudimentary, and there was no acknowledgement that messages were actually received by the recipient. Yet SMS was cheap, and it offered one-to-one, or one-to-many, text communications that could be read at leisure, or more often, immediately. SMS was avidly taken up by young people, forming a new culture of media use. Sending a text message offered a relatively cheap and affordable alternative to the still expensive timed calls of voice mobile. In its early beginnings, mobile text can be seen as a subcultural activity. The text culture featured compressed, cryptic messages, with users devising their own abbreviations and grammar. One of the reasons young people took to texting was a tactic of consolidating and shaping their own shared culture, in distinction from the general culture dominated by their parents and other adults. Mobile texting became involved in a wider reworking of youth culture, involving other new media forms and technologies, and cultural developments (Butcher and Thomas). Another subculture that was also in the vanguard of SMS was the Deaf ‘community’. Though Alexander Graham Bell, celebrated as the inventor of the telephone, very much had his hearing-impaired wife in mind in devising a new form of communication, Deaf people have been systematically left off the telecommunications network since this time. Deaf people pioneered an earlier form of text communications based on the Baudot standard, used for telex communications. Known as teletypewriter (TTY), or telecommunications device for the Deaf (TDD) in the US, this technology allowed Deaf people to communicate with each other by connecting such devices to the phone network. The addition of a relay service (established in Australia in the mid-1990s after much government resistance) allows Deaf people to communicate with hearing people without TTYs (Goggin & Newell). Connecting TTYs to mobile phones has been a vexed issue, however, because the digital phone network in Australia does not allow compatibility. For this reason, and because of other features, Deaf people have become avid users of SMS (Harper). An especially favoured device in Europe has been the Nokia Communicator, with its hinged keyboard. The move from a ‘restricted’, ‘subcultural’ economy to a ‘general’ economy sees mobile texting become incorporated in the semiotic texture and prosaic practices of everyday life. Many users were already familiar with the conventions developed around electronic mail, with shorter, crisper messages sent and received — more conversation-like than other correspondence. Unlike phone calls, email is asynchronous. 
The recipient can respond immediately, and the reply will be received within seconds. However, they can also choose to reply at their leisure. Similarly, for the adept user, SMS offers considerable advantages over voice communications, because it makes textual production mobile. Writing and reading can take place wherever a mobile phone can be turned on: in the street, on the train, in the club, in the lecture theatre, in bed. The body writes differently too. Writing with a pen takes a finger and thumb. Typing on a keyboard requires between two and ten fingers. The mobile phone uses the ‘fifth finger’ — the thumb. Always too early, and too late, to speculate on contemporary culture (Morris), it is worth analyzing the textuality of mobile text. Theorists of media, especially television, have insisted on understanding the specific textual modes of different cultural forms. We are familiar with this imperative, and other methods of making visible and decentring structures of text, and the institutions which animate and frame them (whether author or producer; reader or audience; the cultural expectations encoded in genre; the inscriptions in technology). In formal terms, mobile text can be described as involving elision, great compression, and open-endedness. Its channels of communication physically constrain the composition of a very long single text message. Imagine sending James Joyce’s Finnegans Wake in one text message. How long would it take to key in this exemplar of the disintegration of the cultural form of the novel? How long would it take to read? How would one navigate the text? Imagine sending the Courier-Mail or Financial Review newspaper over a series of text messages. The concept of the ‘news’, with all its cultural baggage, is being reconfigured by mobile text — more along the lines of the older technology of the telegraph, perhaps: a few words suffice to signify what is important. Mobile textuality, then, involves a radical fragmentation and unpredictable seriality of text lexia (Barthes). Sometimes a mobile text looks singular: saying ‘yes’ or ‘no’, or sending your name and ID number to obtain your high school or university results. Yet, like a telephone conversation, or any text perhaps, its structure is always predicated upon, and haunted by, the other. Its imagined reader always has a mobile phone too, little time, no fixed address (except that hailed by the network’s radio transmitter), and a finger poised to respond. Mobile text has structure and channels. Yet, like all text, our reading and writing of it reworks those fixities and destabilizes our ‘clear’ communication. After all, mobile textuality has a set of new pre-conditions and fragilities. It introduces new sorts of ‘noise’ and signal problems to annoy those theorists cleaving to the Shannon and Weaver linear model of communication: signals often drop out; there is a network confirmation (and message displayed) that text messages have been sent, but no system guarantee that they have been received. Our friend or service provider might text us back, but how do we know that they got our text message? Commodity We are familiar now with the pleasures of mobile text, the smile of alerting a friend to our arrival, celebrating good news, jilting a lover, making a threat, firing a worker, flirting and picking-up. Text culture has a new vector of mobility, invented by its users, but now coveted and commodified by businesses who did not see it coming in the first place. 
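As a minimal sketch of the length constraint discussed above (not drawn from the article itself), the lines below count how many segments a message would occupy under the GSM standard's 7-bit alphabet, assuming every character fits that alphabet: a single SMS carries 160 characters, and a longer, concatenated message is split into segments of 153 characters each.

    # Illustrative sketch: segments needed to send a text as GSM 7-bit SMS.
    import math

    def sms_segments(text: str) -> int:
        n = len(text)          # assumes all characters are in the GSM 7-bit alphabet
        if n <= 160:           # a single SMS holds 160 seven-bit characters
            return 1
        return math.ceil(n / 153)  # concatenated messages lose 7 characters per segment to headers

    print(sms_segments("c u at 8? bring ur fone"))   # 1 segment
    print(sms_segments("x" * 1000))                  # 7 segments for a 1,000-character message

Seen this way, the compressed spellings and ellipses of txt culture read as a practical response to the channel as much as a stylistic choice.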
Nimble in its keystrokes, rich in expressivity and cultural invention, but relatively rudimentary in its technical characteristics, mobile text culture has finally registered in the boardrooms of communications companies. Not only is SMS the preferred medium of mobile phone users to keep in touch with each other, SMS has insinuated itself into previously separate communication industries arenas. In 2002-2003 SMS became firmly established in television broadcasting. Finally, interactive television had arrived after many years of prototyping and being heralded. The keenly awaited back-channel for television arrives courtesy not of cable or satellite television, nor an extra fixed-phone line. It’s the mobile phone, stupid! Big Brother was not only a watershed in reality television, but also in convergent media. Less obvious perhaps than supplementary viewing, or biographies, or chat on Big Brother websites around the world was the use of SMS for voting. SMS is now routinely used by mainstream television channels for viewer feedback, contest entry, and program information. As well as its widespread deployment in broadcasting, mobile text culture has been the language of prosaic, everyday transactions. Slipping into a café at Bronte Beach in Sydney, why not pay your parking meter via SMS? You’ll even receive a warning when your time is up. The mobile is becoming the ‘electronic purse’, with SMS providing its syntax and sentences. The belated ingenuity of those fascinated by the economics of mobile text has also coincided with a technological reworking of its capabilities, with new implications for its semiotic possibilities. Multimedia messaging (MMS) has now been deployed, on capable digital phones (an instance of what has been called 2.5 generation [G] digital phones) and third-generation networks. MMS allows images, video, and audio to be communicated. At one level, this sort of capability can be user-generated, as in the popularity of mobiles that take pictures and send these to other users. Television broadcasters are also interested in the capability to send video clips of favourite programs to viewers. Not content with the revenues raised from millions of standard-priced SMS, and now MMS transactions, commercial participants along the value chain are keenly awaiting the deployment of what is called ‘premium rate’ SMS and MMS services. These services will involve the delivery of desirable content via SMS and MMS, and be priced at a premium. Products and services are likely to include: one-to-one textchat; subscription services (content delivered on handset); multi-party text chat (such as chat rooms); adult entertainment services; multi-part messages (such as text communications plus downloads); download of video or ringtones. In August 2003, one text-chat service charged $4.40 for a pair of SMS. Pwr At the end of 2003, we have scarcely registered the textual practices and systems in mobile text, a culture that sprang up in the interstices of telecommunications. It may be urgent that we do think about the stakes here, as SMS is being extended and commodified. There are obvious and serious policy issues in premium rate SMS and MMS services, and questions concerning the political economy in which these are embedded. Yet there are cultural questions too, with intricate ramifications. How do we understand the effects of mobile textuality, rewriting the telephone book for this new cultural form (Ronell)? What are the new genres emerging? 
And what are the implications for cultural practice and policy? Does it matter, for instance, that new MMS and 3rd generation mobile platforms are not being designed or offered with any-to-any capabilities in mind: allowing any user to upload and send multimedia communications to any other? True, as the example of SMS shows, the inventiveness of users is difficult to foresee and predict, and so new forms of mobile text may have all sorts of relationships with content and communication. However, there are worrying signs of these developing mobile circuits being programmed for narrow channels of retail purchase of cultural products rather than open-source, open-architecture, publicly usable nodes of connection. Works Cited Agar, Jon. Constant Touch: A Global History of the Mobile Phone. Cambridge: Icon, 2003. Barthes, Roland. S/Z. Trans. Richard Miller. New York: Hill & Wang, 1974. Brown, Barry, Green, Nicola, and Harper, Richard, eds. Wireless World: Social, Cultural, and Interactional Aspects of the Mobile Age. London: Springer Verlag, 2001. Butcher, Melissa, and Thomas, Mandy, eds. Ingenious: Emerging youth cultures in urban Australia. Melbourne: Pluto, 2003. Galvin, Michael. ‘September 11 and the Logistics of Communication.’ Continuum: Journal of Media and Cultural Studies 17.3 (2003): 303-13. Goggin, Gerard, and Newell, Christopher. Digital Disability: The Social Construction of Disability in New Media. Lanham, MD: Rowman & Littlefield, 2003. Harper, Phil. ‘Networking the Deaf Nation.’ Australian Journal of Communication 30.3 (2003), in press. International Telecommunications Union (ITU). ‘Mobile Cellular, subscribers per 100 people.’ World Telecommunication Indicators <http://www.itu.int/ITU-D/ict/statistics/> accessed 13 October 2003. Katz, James E., and Aakhus, Mark, eds. Perpetual Contact: Mobile Communication, Private Talk, Public Performance. Cambridge: Cambridge U P, 2002. Morris, Meaghan. Too Soon, Too Late: History in Popular Culture. Bloomington and Indianapolis: U of Indiana P, 1998. Plant, Sadie. On the Mobile: The Effects of Mobile Telephones on Social and Individual Life. <http://www.motorola.com/mot/documents/0,1028,296,00.pdf> accessed 5 October 2003. Ronell, Avital. The Telephone Book: Technology—schizophrenia—electric speech. Lincoln: U of Nebraska P, 1989. Citation reference for this article MLA Style Goggin, Gerard. "‘mobile text’" M/C: A Journal of Media and Culture <http://www.media-culture.org.au/0401/03-goggin.php>. APA Style Goggin, G. (2004, Jan 12). ‘mobile text’. M/C: A Journal of Media and Culture, 7, <http://www.media-culture.org.au/0401/03-goggin.php>
APA, Harvard, Vancouver, ISO, and other citation styles
