Follow this link to see other types of publications on the topic: Simple Object Access Protocol (Computer network protocol).

Journal articles on the topic "Simple Object Access Protocol (Computer network protocol)"

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 47 journal articles for your research on the topic "Simple Object Access Protocol (Computer network protocol)".

Next to every source in the list of references, there is an "Add to bibliography" button. Press on it, and we will generate automatically the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read online the abstract of the work if it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Hussein, Mahmoud, Ahmed I. Galal, Emad Abd-Elrahman, and Mohamed Zorkany. "Internet of Things (IoT) Platform for Multi-Topic Messaging". Energies 13, no. 13 (June 30, 2020): 3346. http://dx.doi.org/10.3390/en13133346.

Full text of the source
Abstract:
IoT-based applications operate in a client–server architecture, which requires a specific communication protocol to establish the client–server communication model, allowing all clients of the system to perform specific tasks through internet communications. Many data communication protocols for the Internet of Things are used by IoT platforms, including message queuing telemetry transport (MQTT), advanced message queuing protocol (AMQP), MQTT for sensor networks (MQTT-SN), data distribution service (DDS), constrained application protocol (CoAP), and simple object access protocol (SOAP). These protocols only support single-topic messaging. Thus, in this work, an IoT message protocol that supports multi-topic messaging is proposed. This protocol adds a simple "brain" to IoT platforms in order to realize an intelligent IoT architecture, and it enhances traffic throughput by reducing message overheads and the delay of multi-topic messaging. Most current IoT applications depend on real-time systems, so a real-time operating system (RTOS) is commonly used on embedded systems to provide the real-time guarantees these applications require; using an RTOS also adds important features to the system, including reliability. Much of the existing research on IoT platforms focuses on specific applications and does not address real-time constraints under a real-time system umbrella. In this work, the multi-topic IoT protocol and platform are designed and implemented both for real-time systems and for general-purpose applications; the platform depends on the proposed multi-topic communication protocol, which is implemented here to show its functionality and effectiveness over similar protocols.
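Where the abstract contrasts single-topic protocols with the proposed multi-topic design, a rough sketch helps fix the idea. The snippet below is a minimal illustration, assuming a standard MQTT broker and the paho-mqtt client; the bundle format is hypothetical and is not the authors' protocol:

```python
# Sketch: emulating multi-topic messaging over standard MQTT (paho-mqtt).
# The bundle format (topic list + per-topic payloads in one JSON message)
# is a hypothetical illustration, not the protocol proposed in the paper.
import json
import paho.mqtt.client as mqtt

BROKER = "broker.example.org"   # hypothetical broker address

def publish_multi_topic(client, topics_payloads):
    """Send one message covering several topics instead of N single-topic
    publishes, reducing per-message overhead."""
    bundle = json.dumps({"topics": list(topics_payloads.keys()),
                         "data": topics_payloads})
    client.publish("bundle/ingest", bundle, qos=1)

client = mqtt.Client()  # paho-mqtt 1.x style constructor
client.connect(BROKER, 1883)
publish_multi_topic(client, {"temp": 21.5, "humidity": 48, "door": "closed"})
```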
ABNT, Harvard, Vancouver, APA, etc. styles

2

HONG, PENGYU, SHENG ZHONG, and WING H. WONG. "UBIC2 — TOWARDS UBIQUITOUS BIO-INFORMATION COMPUTING: DATA PROTOCOLS, MIDDLEWARE, AND WEB SERVICES FOR HETEROGENEOUS BIOLOGICAL INFORMATION INTEGRATION AND RETRIEVAL". International Journal of Software Engineering and Knowledge Engineering 15, no. 03 (June 2005): 475–85. http://dx.doi.org/10.1142/s0218194005001951.

Full text of the source
Abstract:
The Ubiquitous Bio-Information Computing (UBIC2) project aims to disseminate protocols and software packages to facilitate the development of heterogeneous bio-information computing units that are interoperable and may run distributedly. UBIC2 specifies biological data in XML formats and queries data using XQuery. The UBIC2 programming library provides interfaces for integrating, retrieving, and manipulating heterogeneous biological data. Interoperability is achieved via Simple Object Access Protocol (SOAP) based web services. The documents and software packages of UBIC2 are available at .
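As an aside on the SOAP interoperability the abstract mentions, a minimal Python sketch of calling such a service is shown below, assuming the zeep SOAP client; the WSDL URL and operation name are hypothetical placeholders, not UBIC2's actual interface:

```python
# Sketch: calling a SOAP web service from Python with zeep.
# The WSDL URL and operation name are hypothetical placeholders;
# UBIC2's actual service interface is not reproduced here.
from zeep import Client

client = Client("http://ubic2.example.org/service?wsdl")  # hypothetical WSDL
# zeep generates proxy methods from the WSDL; the SOAP envelope,
# serialization, and transport are handled behind this call.
result = client.service.RetrieveRecord(recordId="GDS1234")
print(result)
```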
ABNT, Harvard, Vancouver, APA, etc. styles

3

Al-Musawi, Nawras A., and Dhiah Al-Shammary. "Static Hilbert convex set clustering for web services aggregation". Indonesian Journal of Electrical Engineering and Computer Science 32, no. 1 (October 1, 2023): 372. http://dx.doi.org/10.11591/ijeecs.v32.i1.pp372-380.

Full text of the source
Abstract:
Web services' high levels of duplicate textual structures have caused network bottlenecks and congestion. Clustering and then aggregating similar web services as one compressed message can potentially reduce network traffic. In this paper, a static Hilbert clustering is proposed as a new model for clustering web services based on convex set similarity. Mathematically, the proposed model calculates similarity among simple object access protocol (SOAP) messages and then clusters them based on higher similarity values. Next, each cluster is aggregated as a compact message. The experiments show that the proposed model outperforms conventional clustering strategies in both compression ratio and clustering time. The best compression ratios achieved by the proposed model reach up to 15 with fixed-length encoding and up to 21 with Huffman.
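To make the cluster-then-aggregate idea concrete, here is a minimal sketch assuming a simple string-similarity stand-in (difflib ratio) in place of the paper's Hilbert/convex-set similarity:

```python
# Sketch: grouping similar SOAP messages before aggregation.
# difflib's ratio is used as a stand-in similarity measure; the paper's
# Hilbert/convex-set similarity is not reproduced here.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a, b).ratio()

def cluster(messages, threshold=0.8):
    """Greedy clustering: each message joins the first cluster whose
    representative is similar enough, else starts a new cluster."""
    clusters = []
    for msg in messages:
        for c in clusters:
            if similarity(c[0], msg) >= threshold:
                c.append(msg)
                break
        else:
            clusters.append([msg])
    return clusters

soap_msgs = ['<soap:Envelope><GetQuote symbol="IBM"/></soap:Envelope>',
             '<soap:Envelope><GetQuote symbol="AAPL"/></soap:Envelope>',
             '<soap:Envelope><PlaceOrder id="7"/></soap:Envelope>']
print([len(c) for c in cluster(soap_msgs)])  # e.g. [2, 1]
```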
ABNT, Harvard, Vancouver, APA, etc. styles

4

Al-Musawi, Nawras A., and Dhiah Al-Shammary. "Dynamic Hilbert clustering based on convex set for web services aggregation". International Journal of Electrical and Computer Engineering (IJECE) 13, no. 6 (December 1, 2023): 6654. http://dx.doi.org/10.11591/ijece.v13i6.pp6654-6662.

Full text of the source
Abstract:
In recent years, web services run by big corporations and application-specific data centers have been embraced by companies worldwide. Web services provide several benefits compared to other communication technologies; however, they still suffer from congestion and bottlenecks as well as significant delay, due to the tremendous load caused by a large number of web service requests from end users. Clustering and then aggregating similar web services as one compressed message can potentially reduce network traffic. This paper proposes a dynamic Hilbert clustering as a new model for clustering web services based on convex set similarity. Mathematically, the suggested model computes the degree of similarity between simple object access protocol (SOAP) messages and then clusters them into groups with high similarity. Next, each cluster is aggregated as a compact message that is finally encoded by fixed-length or Huffman coding. The experimental results show that the suggested model performs better than conventional clustering techniques in terms of compression ratio, reaching up to 15 with fixed-length encoding and up to 20 with Huffman.
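Since both papers compare fixed-length and Huffman encodings of the aggregated message, a compact Huffman sketch may help; it is standard textbook Huffman coding, not the authors' implementation:

```python
# Sketch: Huffman coding of an aggregated message, as one of the two
# encodings (fixed-length vs. Huffman) compared in the paper.
import heapq
from collections import Counter

def huffman_code(text: str) -> dict:
    """Build a Huffman code table: frequent symbols get shorter codes."""
    heap = [[freq, [sym, ""]] for sym, freq in Counter(text).items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        for pair in lo[1:]:
            pair[1] = "0" + pair[1]
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return {sym: code for sym, code in heap[0][1:]}

msg = "<Envelope><Body><GetQuote/></Body></Envelope>"
table = huffman_code(msg)
bits = sum(len(table[ch]) for ch in msg)
print(f"{len(msg) * 8} bits raw -> {bits} bits Huffman")
```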
ABNT, Harvard, Vancouver, APA, etc. styles

5

Selvan, Satiaseelan, and Manmeet Mahinderjit Singh. "Adaptive Contextual Risk-Based Model to Tackle Confidentiality-Based Attacks in Fog-IoT Paradigm". Computers 11, no. 2 (January 24, 2022): 16. http://dx.doi.org/10.3390/computers11020016.

Full text of the source
Abstract:
The Internet of Things (IoT) allows billions of physical objects to be connected to gather and exchange information for numerous applications. On its own, it lacks features such as low latency, location awareness, and geographic distribution that are important for some IoT applications. Fog computing is integrated into IoT to provide these features by extending computing, storage, and networking resources to the network edge. Unfortunately, it faces numerous security and privacy risks, raising severe concerns among users. Therefore, this research proposes a contextual risk-based access control model for Fog-IoT technology that considers real-time data information requests for IoT devices and gives dynamic feedback. The proposed model uses Fog-IoT environment features to estimate the security risk associated with each access request, using device context, resource sensitivity, action severity, and risk history as inputs for a fuzzy risk model that computes the risk factor. The model then uses a security agent in a fog node to provide adaptive features in which the device's behaviour is monitored to detect any abnormal actions from authorised devices. The proposed model is evaluated against an existing model to benchmark the results. The fuzzy-based risk assessment model with enhanced MQTT authentication protocol and adaptive security agent produced an accurate risk score for the seven random scenarios tested, compared to simple risk score calculations.
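For readers who want a feel for how such a risk factor could be combined from the four inputs, here is a deliberately crisp (non-fuzzy) sketch with assumed weights and threshold; a real fuzzy model would use membership functions and inference rules:

```python
# Sketch: a crisp, weighted stand-in for the paper's fuzzy risk model.
# Inputs and weights are illustrative assumptions; a real fuzzy system
# would use membership functions and inference rules instead.
def risk_score(device_context: float, resource_sensitivity: float,
               action_severity: float, risk_history: float) -> float:
    """Each input is normalized to [0, 1]; returns a risk factor in [0, 1]."""
    weights = {"context": 0.2, "sensitivity": 0.35,
               "severity": 0.3, "history": 0.15}   # assumed weights
    return (weights["context"] * device_context
            + weights["sensitivity"] * resource_sensitivity
            + weights["severity"] * action_severity
            + weights["history"] * risk_history)

def decide(score: float, threshold: float = 0.6) -> str:
    return "deny" if score >= threshold else "grant"

print(decide(risk_score(0.3, 0.9, 0.7, 0.2)))  # "deny": sensitive resource, severe action
```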
ABNT, Harvard, Vancouver, APA, etc. styles

6

Polgar, Jana. "Using WSRP 2.0 with JSR 168 and 286 Portlets". International Journal of Web Portals 2, no. 1 (January 2010): 45–57. http://dx.doi.org/10.4018/jwp.2010010105.

Full text of the source
Abstract:
WSRP—Web Services for Remote Portlets—specification builds on current standard technologies, such as WSDL (Web Services Definition Language), UDDI (Universal Description, Discovery and Integration), and SOAP (Simple Object Access Protocol). It aims to sol
ABNT, Harvard, Vancouver, APA, etc. styles

7

Mitts, Håkan, and Harri Hansén. "A simple and efficient routing protocol for the UMTS access network". Mobile Networks and Applications 1, no. 2 (June 1996): 167–81. http://dx.doi.org/10.1007/bf01193335.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles

8

Zientarski, Tomasz, Marek Miłosz, Marek Kamiński, and Maciej Kołodziej. "APPLICABILITY ANALYSIS OF REST AND SOAP WEB SERVICES". Informatics Control Measurement in Economy and Environment Protection 7, no. 4 (December 21, 2017): 28–31. http://dx.doi.org/10.5604/01.3001.0010.7249.

Full text of the source
Abstract:
Web services are a common means to exchange data and information over the network. They make themselves available over the Internet, independently of technology and platform. Web services can be developed following two interaction styles: Simple Object Access Protocol (SOAP) and Representational State Transfer (REST). In this study, a comparison of REST and SOAP web services is presented in terms of their applicability in diverse areas. It is concluded that in the past both technologies were equally popular, but with the rapid development of the Internet, REST has become the leading technology for access to Internet services.
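A minimal sketch of the stylistic difference, assuming the Python requests library and hypothetical endpoints: REST addresses a resource by URL, while SOAP posts an XML envelope to a single endpoint:

```python
# Sketch: the same logical call issued REST-style and SOAP-style with
# the requests library. Endpoint URLs and the SOAP action are hypothetical.
import requests

# REST: the resource is addressed by URL, the verb carries the semantics.
r = requests.get("https://api.example.org/quotes/IBM", timeout=5)

# SOAP: one POST endpoint; the operation lives inside an XML envelope.
envelope = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body><GetQuote><symbol>IBM</symbol></GetQuote></soap:Body>
</soap:Envelope>"""
s = requests.post("https://api.example.org/soap", data=envelope,
                  headers={"Content-Type": "text/xml; charset=utf-8",
                           "SOAPAction": "GetQuote"}, timeout=5)
print(r.status_code, s.status_code)
```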
ABNT, Harvard, Vancouver, APA, etc. styles

9

Ge, Jian Xia, and Wen Ya Xiao. "Network Layer Network Topology Discovery Algorithm Research". Applied Mechanics and Materials 347-350 (August 2013): 2071–76. http://dx.doi.org/10.4028/www.scientific.net/amm.347-350.2071.

Full text of the source
Abstract:
With the development of the network information age, people depend more and more on computer networks, so the security and reliability of the network itself have become very important and place higher demands on network management. This paper analyzes two network-layer topology discovery algorithms based on the SNMP and ICMP protocols and, on this basis, proposes an improved algorithm that combines the two. The resulting discovery process is simple and efficient, generalizes well, and solves the problems of subnet identification and recognition of multi-access routers encountered during discovery.
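A single SNMP step of such discovery might look like the following sketch, assuming the pysnmp library, a placeholder router address, and the standard sysDescr OID; the paper's combined SNMP+ICMP algorithm is not reproduced:

```python
# Sketch: one step of SNMP-based discovery, reading sysDescr (OID
# 1.3.6.1.2.1.1.1.0) from a router; community string and address are
# placeholders.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

errInd, errStat, errIdx, varBinds = next(getCmd(
    SnmpEngine(),
    CommunityData("public", mpModel=1),          # SNMP v2c
    UdpTransportTarget(("192.0.2.1", 161)),      # placeholder router
    ContextData(),
    ObjectType(ObjectIdentity("1.3.6.1.2.1.1.1.0"))))

if errInd:
    print("SNMP error:", errInd)
else:
    for name, value in varBinds:
        print(f"{name} = {value}")
```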
ABNT, Harvard, Vancouver, APA, etc. styles

10

Ge, Jian Xia, and Wen Ya Xiao. "Network Layer Network Topology Discovery Algorithm Research". Applied Mechanics and Materials 380-384 (August 2013): 1327–32. http://dx.doi.org/10.4028/www.scientific.net/amm.380-384.1327.

Full text of the source
Abstract:
With the development of the network information age, people depend more and more on computer networks, so the security and reliability of the network itself have become very important and place higher demands on network management. This paper analyzes two network-layer topology discovery algorithms based on the SNMP and ICMP protocols and, on this basis, proposes an improved algorithm that combines the two. The resulting discovery process is simple and efficient, generalizes well, and solves the problems of subnet identification and recognition of multi-access routers encountered during discovery.
ABNT, Harvard, Vancouver, APA, etc. styles

11

Datsenko, Serhii, and Heorhii Kuchuk. "BIOMETRIC AUTHENTICATION UTILIZING CONVOLUTIONAL NEURAL NETWORKS". Advanced Information Systems 7, no. 2 (June 12, 2023): 87–91. http://dx.doi.org/10.20998/2522-9052.2023.2.12.

Full text of the source
Abstract:
Relevance. Cryptographic algorithms and protocols are important tools in modern cybersecurity. They are used in various applications, from simple software for encrypting computer information to complex information and telecommunications systems that implement various electronic trust services. Developing complete biometric cryptographic systems will allow using personal biometric data as a unique secret parameter instead of needing to remember cryptographic keys or using additional authentication devices. The object of research is the process of generating cryptographic keys from biometric images of a person's face with the implementation of fuzzy extractors. The subject of the research is the means and methods of building a neural network using modern technologies. The purpose of this paper is to study new methods for generating cryptographic keys from biometric images using convolutional neural networks and histograms of oriented gradients. Research results. The proposed technology allows for the implementation of a new cryptographic mechanism: a technology for generating reliable cryptographic passwords from biometric images for further use as attributes for access to secure systems, as well as a source of keys for existing cryptographic algorithms.
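As a loose illustration of deriving key material from histogram-of-oriented-gradients features, here is a sketch assuming scikit-image; it ignores the fuzzy-extractor step the paper relies on, so the derived key would not tolerate capture noise:

```python
# Sketch: deriving key material from a face image via HOG features and a
# hash. The fuzzy-extractor step is omitted, so this key is NOT stable
# across noisy captures; illustration only.
import hashlib
import numpy as np
from skimage.feature import hog
from skimage import io, color, transform

img = color.rgb2gray(io.imread("face.png"))        # placeholder input file
img = transform.resize(img, (128, 128))
features = hog(img, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))
# Quantize features so tiny numeric noise does not flip every bit,
# then hash down to a 256-bit key.
quantized = np.round(features, 1).tobytes()
key = hashlib.sha256(quantized).hexdigest()
print(key)
```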
ABNT, Harvard, Vancouver, APA, etc. styles

12

Song, Min Su, Jae Dong Lee, Young-Sik Jeong, Hwa-Young Jeong, and Jong Hyuk Park. "DS-ARP: A New Detection Scheme for ARP Spoofing Attacks Based on Routing Trace for Ubiquitous Environments". Scientific World Journal 2014 (2014): 1–7. http://dx.doi.org/10.1155/2014/264654.

Full text of the source
Abstract:
Despite the convenience, ubiquitous computing suffers from many threats and security risks. Security considerations in the ubiquitous network are required to create enriched and more secure ubiquitous environments. The address resolution protocol (ARP) is a protocol used to identify the IP address and the physical address of the associated network card. ARP is designed to work without problems in general environments. However, since it does not include security measures against malicious attacks in its design, an attacker can impersonate another host using ARP spoofing or access important information. In this paper, we propose a new detection scheme for ARP spoofing attacks using a routing trace, which can be used to protect the internal network. Tracing the route can detect changes in the network path. The proposed scheme provides high constancy and compatibility because it does not alter the ARP protocol. In addition, it is simple and stable, as it does not use a complex algorithm or impose extra load on the computer system.
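A simpler ARP-consistency check than the paper's routing-trace scheme can be sketched with scapy, flagging an IP whose MAC binding changes; this is an illustrative stand-in, not DS-ARP itself:

```python
# Sketch: a minimal ARP-consistency monitor with scapy. It flags an IP
# whose MAC binding changes, a simpler check than the paper's
# routing-trace scheme. Requires root privileges to sniff.
from scapy.all import sniff, ARP

bindings = {}  # ip -> first MAC seen

def check(pkt):
    if pkt.haslayer(ARP) and pkt[ARP].op == 2:     # op 2 = ARP reply
        ip, mac = pkt[ARP].psrc, pkt[ARP].hwsrc
        if ip in bindings and bindings[ip] != mac:
            print(f"[!] possible ARP spoofing: {ip} moved "
                  f"{bindings[ip]} -> {mac}")
        bindings.setdefault(ip, mac)

sniff(filter="arp", prn=check, store=False)
```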
ABNT, Harvard, Vancouver, APA, etc. styles

13

Tan, Wenken, and Jianmin Hu. "Design of the Wireless Network Hierarchy System of Intelligent City Industrial Data Management Based on SDN Network Architecture". Security and Communication Networks 2021 (November 10, 2021): 1–12. http://dx.doi.org/10.1155/2021/5732300.

Full text of the source
Abstract:
With the rapid development of the industrial Internet of Things and the widespread adoption of mobile intelligent devices, the construction of smart cities places increasingly high demands on wireless networks. SDN has the advantages of control-plane separation, programmable interfaces, and centralized control logic. Integrating this technical concept into the WLAN used for smart-city data management can not only effectively solve the problems of previous wireless network operation but also provide more functions according to different user needs. Traditional WLANs are low-cost and simple to operate, but they cannot guarantee network compatibility and performance; from a practical perspective, compatibility and security are a key part of industrial IoT applications. This paper designs a network architecture for the smart-city industrial IoT based on SDN, summarizes the access control requirements and research status of the industrial IoT, and puts forward access control requirements and objectives for the SDN-based industrial IoT. Within the SDN industrial IoT framework, the gateway protocol is simplified and a topology discovery algorithm is designed. The access control policy is configured on the gateway, and access control rules can be dynamically adjusted in real time. An SDN-based smart-city industrial IoT access control test platform was built and the system simulated. The proposed method is compared with other methods in terms of the extension protocol and the channel allocation algorithm, and experimental results verify the feasibility of the proposed scheme. Finally, on the basis of the performance analysis, the practical significance of the SDN-based hierarchical data management design for smart-city wireless networks is discussed.
ABNT, Harvard, Vancouver, APA, etc. styles

14

Halili, Festim, and Erenis Ramadani. "Web Services: A Comparison of Soap and Rest Services". Modern Applied Science 12, no. 3 (February 28, 2018): 175. http://dx.doi.org/10.5539/mas.v12n3p175.

Full text of the source
Abstract:
Interest in web services has been growing rapidly in the few years since they came into use. A web service can be described as a method for exchanging/communicating information between devices over a network. When deciding which service fits an architecture design, the question arises: which service should be used, and when? SOAP (Simple Object Access Protocol) and REST (Representational State Transfer) are the two most used protocols for exchanging messages, and choosing one over the other has its own advantages and disadvantages. In this paper we address the differences between them and the best practices for when to use one over the other.
ABNT, Harvard, Vancouver, APA, etc. styles

15

Zhang, Xiao. "Intranet Web System, a Simple Solution to Companywide Information-on-demand". Proceedings, annual meeting, Electron Microscopy Society of America 54 (August 11, 1996): 404–5. http://dx.doi.org/10.1017/s0424820100164489.

Full text of the source
Abstract:
Intranet, a private corporate network that mirrors the internet Web structure, is the new internal communication technology being embraced by more than 50% of large US companies. Within the intranet, computers using Web-server software store and manage documents built on the Web’s hypertext markup language (HTML) format. This emerging technology allows disparate computer systems within companies to “speak” to one another using the Internet’s TCP/IP protocol. A “fire wall” allows internal users Internet access, but denies external intruders intranet access. As industrial microscopists, how can we take advantage of this new technology? This paper is a preliminary summary of our recent progress in building an intranet Web system among GE microscopy labs. Applications and future development are also discussed.The intranet Web system is an inexpensive yet powerful alternative to other forms of internal communication. It can greatly improve communications, unlock hidden information, and transform an organization. The intranet microscopy Web system was built on the existing GE corporate-wide Ethernet link running Internet’s TCP/IP protocol (Fig. 1). Netscape Navigator was selected as the Web browser. Web’s HTML documentation was generated by using Microsoft® Internet Assistant software. Each microscopy lab has its own home page. The microscopy Web system is also an integrated part of GE Plastics analytical technology Web system.
ABNT, Harvard, Vancouver, APA, etc. styles

16

Lertsutthiwong, Monchai, Thinh Nguyen, and Alan Fern. "Scalable Video Streaming for Single-Hop Wireless Networks Using a Contention-Based Access MAC Protocol". Advances in Multimedia 2008 (2008): 1–21. http://dx.doi.org/10.1155/2008/928521.

Full text of the source
Abstract:
Limited bandwidth and high packet loss rate pose a serious challenge for video streaming applications over wireless networks. Even when packet loss is not present, the bandwidth fluctuation, as a result of an arbitrary number of active flows in an IEEE 802.11 network, can significantly degrade the video quality. This paper aims to enhance the quality of video streaming applications in wireless home networks via a joint optimization of video layer-allocation technique, admission control algorithm, and medium access control (MAC) protocol. Using an Aloha-like MAC protocol, we propose a novel admission control framework, which can be viewed as an optimization problem that maximizes the average quality of admitted videos, given a specified minimum video quality for each flow. We present some hardness results for the optimization problem under various conditions and propose some heuristic algorithms for finding a good solution. In particular, we show that a simple greedy layer-allocation algorithm can perform reasonably well, although it is typically not optimal. Consequently, we present a more expensive heuristic algorithm that guarantees to approximate the optimal solution within a constant factor. Simulation results demonstrate that our proposed framework can improve the video quality up to 26% as compared to those of the existing approaches.
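The flavor of a greedy layer-allocation step can be sketched as follows, with made-up per-layer costs and quality gains; the paper's admission control and MAC model are not reproduced:

```python
# Sketch: a greedy layer-allocation step. Quality gains and bandwidth
# costs per layer are made-up numbers; this illustrates why greedy
# allocation is reasonable but not always optimal.
def greedy_allocate(flows, capacity):
    """flows: {name: [(bandwidth_cost, quality_gain) per extra layer]}.
    Repeatedly grant the layer with the best gain per unit bandwidth."""
    candidates = [(gain / cost, cost, name, idx)
                  for name, layers in flows.items()
                  for idx, (cost, gain) in enumerate(layers)]
    allocation = {name: 0 for name in flows}
    for _, cost, name, idx in sorted(candidates, reverse=True):
        # only take layer idx if all lower layers were already taken
        if idx == allocation[name] and cost <= capacity:
            allocation[name] += 1
            capacity -= cost
    return allocation

flows = {"camera1": [(2, 10), (3, 6), (4, 3)],
         "camera2": [(2, 9), (2, 5)]}
print(greedy_allocate(flows, capacity=7))  # {'camera1': 1, 'camera2': 2}
```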
ABNT, Harvard, Vancouver, APA, etc. styles

17

Chang, Chee Er, Azhar Kassim Mustapha, and Faisal Mohd-Yasin. "FPGA Prototyping of Web Service Using REST and SOAP Packages". Chips 1, no. 3 (December 5, 2022): 210–17. http://dx.doi.org/10.3390/chips1030014.

Full text of the source
Abstract:
This Communication reports on FPGA prototyping of an embedded web service that sends XML messages under two different packages, namely Simple Object Access Protocol (SOAP) and Representational State Transfer (REST). The request and response messages are communicated through a 100 Mbps local area network between a Spartan-3E FPGA board and a washing machine simulator. The performances of REST-based and SOAP-based web services implemented on reconfigurable hardware are then compared. In general, the former performs better than the latter in terms of FPGA resource utilization (~12% less), message length (~57% shorter), and processing time (~4.5 μs faster). This work confirms the superiority of REST over SOAP for data transmission using reconfigurable computing, which paves the way for adoption of these low-cost systems for web services of consumer electronics such as home appliances.
ABNT, Harvard, Vancouver, APA, etc. styles

18

Comeau, Donald C., Chih-Hsuan Wei, Rezarta Islamaj Doğan, and Zhiyong Lu. "PMC text mining subset in BioC: about three million full-text articles and growing". Bioinformatics 35, no. 18 (January 31, 2019): 3533–35. http://dx.doi.org/10.1093/bioinformatics/btz070.

Full text of the source
Abstract:
Motivation: Interest in text mining full-text biomedical research articles is growing. To facilitate automated processing of nearly 3 million full-text articles (in the PubMed Central® Open Access and Author Manuscript subsets) and to improve interoperability, we convert these articles to BioC, a community-driven simple data structure in either XML or JavaScript Object Notation format for conveniently sharing text and annotations. Results: The resultant articles can be downloaded via both File Transfer Protocol for bulk access and a Web API for updates or a more focused collection. Since the availability of the Web API in 2017, our BioC collection has been widely used by the research community. Availability and implementation: https://www.ncbi.nlm.nih.gov/research/bionlp/APIs/BioC-PMC/.
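Fetching one article through the Web API might look like the sketch below; the endpoint pattern is an assumption based on the service documentation linked above:

```python
# Sketch: pulling one article from the BioC-PMC web API with requests.
# Treat the exact endpoint path as an assumption taken from the service
# documentation; field names follow the BioC JSON schema.
import requests

pmcid = "PMC5334499"
url = ("https://www.ncbi.nlm.nih.gov/research/bionlp/RESTful/"
       f"pmcoa.cgi/BioC_json/{pmcid}/unicode")    # assumed endpoint pattern
resp = requests.get(url, timeout=30)
resp.raise_for_status()
data = resp.json()
collection = data[0] if isinstance(data, list) else data  # may be wrapped in a list
# BioC wraps text and annotations in documents -> passages.
for doc in collection.get("documents", []):
    for passage in doc.get("passages", [])[:3]:
        print(passage.get("text", "")[:80])       # e.g. title and abstract text
```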
ABNT, Harvard, Vancouver, APA, etc. styles

19

Al-Dailami, Abdulrahman, Chang Ruan, Zhihong Bao, and Tao Zhang. "QoS3: Secure Caching in HTTPS Based on Fine-Grained Trust Delegation". Security and Communication Networks 2019 (December 28, 2019): 1–16. http://dx.doi.org/10.1155/2019/3107543.

Full text of the source
Abstract:
With the ever-increasing concern in network security and privacy, a major portion of Internet traffic is encrypted now. Recent research shows that more than 70% of Internet content is transmitted using HyperText Transfer Protocol Secure (HTTPS). However, HTTPS encryption eliminates the advantages of many intermediate services like the caching proxy, which can significantly degrade the performance of web content delivery. We argue that these restrictions lead to the need for other mechanisms to access sites quickly and safely. In this paper, we introduce QoS3, which is a protocol that can overcome such limitations by allowing clients to explicitly and securely re-introduce in-network caching proxies using fine-grained trust delegation without compromising the integrity of the HTTPS content and modifying the format of Transport Layer Security (TLS). In QoS3, we classify web page contents into two types: (1) public contents that are common for all users, which can be stored in the caching proxies, and (2) private contents that are specific for each user. Correspondingly, QoS3 establishes two separate TLS connections between the client and the web server for them. Specifically, for private contents, QoS3 just leverages the original HTTPS protocol to deliver them, without involving any middlebox. For public contents, QoS3 allows clients to delegate trust to a specific caching proxy along the path, thereby allowing the clients to use the cached contents in the caching proxy via a delegated HTTPS connection. Meanwhile, to prevent Man-in-the-Middle (MitM) attacks on public contents, QoS3 validates the public contents by employing Document Object Model (DOM) object-level checksums, which are delivered through the original HTTPS connection. We implement a prototype of QoS3 and evaluate its performance in our testbed. Experimental results show that QoS3 provides acceleration on page load time ranging between 30% and 64% over traditional HTTPS with negligible overhead. Moreover, QoS3 is deployable since it requires just minor software modifications to the server, client, and the middlebox.
ABNT, Harvard, Vancouver, APA, etc. styles

20

Zulkarnain, Hafid, Joseph Dedy Irawan, and Renaldi Primaswara Prasetya. "DESIGN OF A TRAFFIC RATE PRIORITY NETWORK CONNECTION MANAGEMENT SYSTEM ON A COMPUTER NETWORK". International Journal of Computer Science and Information Technology 1, no. 1 (January 18, 2024): 28–34. http://dx.doi.org/10.36040/ijcomit.v1i1.6720.

Full text of the source
Abstract:
The practicum laboratory is one of the university facilities that supports studies, where students can practice their understanding of learning concepts through tests or practical exercises. In this sense the role of the laboratory is very important, because the laboratory is a center for teaching and learning and is used for experiments and research. To expedite the teaching and learning process, adequate facilities are needed in the laboratory, one of which is the network. However, problems occur when students download: Zoom and browsing activity become slow, which makes the teaching and learning process less efficient. Based on the problems above, the authors conducted research on providing different bandwidth allocations and managing user bandwidth based on priority. The mangle method is used to mark data packets based on port, protocol, and source/destination addresses, and the simple queue method is used to set the amount of bandwidth and the priority for each user. From the results of functional testing, it can be concluded that the mangle method marks the packets to be restricted based on the configured destination port, and the simple queue method sets the priority given to each user, so that when users access Zoom, global traffic, games, and other services, they can use the internet network smoothly, because traffic rate priority and bandwidth management have been applied.
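A RouterOS-style sketch of the mangle-plus-simple-queue setup described above; the subnet, ports, and rate limits are placeholders (Zoom media commonly uses UDP ports 8801-8810):

```
# Sketch only: placeholder subnet, ports, and limits, not the paper's exact rules.
/ip firewall mangle add chain=forward protocol=udp dst-port=8801-8810 action=mark-packet new-packet-mark=zoom passthrough=no comment="mark Zoom media packets"
/queue simple add name=zoom-priority packet-marks=zoom target=192.168.10.0/24 max-limit=10M/10M priority=1/1 comment="highest priority for marked traffic"
/queue simple add name=bulk target=192.168.10.0/24 max-limit=10M/10M priority=8/8 comment="default priority for everything else"
```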
ABNT, Harvard, Vancouver, APA, etc. styles

21

Alasri, Abbas, and Rossilawati Sulaiman. "Protection of XML-Based Denail-of-Service and Httpflooding Attacks in Web Services Using the Middleware Tool". International Journal of Engineering & Technology 7, no. 4.7 (September 27, 2018): 322. http://dx.doi.org/10.14419/ijet.v7i4.7.20570.

Full text of the source
Abstract:
A web service is defined as the method of communication between web applications and clients. Web services are very flexible and scalable as they are independent of both the hardware and software infrastructure. The lack of security protection offered by web services creates a gap which attackers can make use of. Web services are offered over the HyperText Transfer Protocol (HTTP) with Simple Object Access Protocol (SOAP) as the underlying infrastructure, and they rely heavily on the Extensible Markup Language (XML). Hence, web services are most vulnerable to attacks which use XML as the attack parameter. Recently, a new type of XML-based Denial-of-Service (XDoS) attack has surfaced, which targets web services. The purpose of these attacks is to consume system resources by sending SOAP requests that contain malicious XML content. Unfortunately, these malicious requests go undetected underneath the network or transport layers of the Transfer Control Protocol/Internet Protocol (TCP/IP), as they appear to be legitimate packets. In this paper, a middleware tool is proposed to provide real-time detection and prevention of XDoS and HTTP flooding attacks in web services. This tool focuses on attacks on two layers of the Open System Interconnection (OSI) model: it detects and prevents XDoS attacks at the application layer and prevents flooding attacks at the network layer. A rule-based approach is used to classify requests as either normal or malicious in order to detect XDoS attacks. The experimental results demonstrate that the rule-based technique efficiently detects and prevents XDoS and HTTP flooding attacks, such as oversized payloads, coercive parsing, and XML external entities, close to real time (about 0.006 s) over the web services. The middleware tool provides close to 100% service availability to normal requests, hence protecting the web service against XDoS and distributed XDoS (DXDoS) attacks.
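The three rule checks named in the abstract (oversized payload, coercive parsing, external entities) can be sketched as a pre-filter; the thresholds are illustrative assumptions, not the paper's tuned values:

```python
# Sketch: three rule-based checks applied before a SOAP request reaches
# the service: oversized payload, deep nesting (coercive parsing), and
# XML external entities. Thresholds are illustrative assumptions.
import xml.etree.ElementTree as ET
from io import BytesIO

MAX_BYTES = 64 * 1024   # assumed payload ceiling
MAX_DEPTH = 32          # assumed nesting ceiling

def is_malicious(raw: bytes) -> bool:
    if len(raw) > MAX_BYTES:                       # oversized payload
        return True
    if b"<!DOCTYPE" in raw or b"<!ENTITY" in raw:  # external entities
        return True
    depth = 0
    try:
        for event, _ in ET.iterparse(BytesIO(raw), events=("start", "end")):
            depth += 1 if event == "start" else -1
            if depth > MAX_DEPTH:                  # coercive parsing
                return True
    except ET.ParseError:
        return True                                # malformed XML is rejected too
    return False

print(is_malicious(b"<a>" * 100 + b"</a>" * 100))  # True: nested too deep
```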
ABNT, Harvard, Vancouver, APA, etc. styles

22

Cao, Jin, Hui Li, Maode Ma, and Fenghua Li. "UPPGHA: Uniform Privacy Preservation Group Handover Authentication Mechanism for mMTC in LTE-A Networks". Security and Communication Networks 2018 (2018): 1–16. http://dx.doi.org/10.1155/2018/6854612.

Full text of the source
Abstract:
Machine Type Communication (MTC), as one of the most important technologies for future wireless communication, has become the new business growth point of the mobile communication network. Achieving seamless handovers within the Evolved-Universal Terrestrial Radio Access Network (E-UTRAN) for massive MTC (mMTC) devices is key to supporting mobility in Long Term Evolution-Advanced (LTE-A) networks. When mMTC devices simultaneously roam from one base station to another, the current handover mechanisms suggested by the Third-Generation Partnership Project (3GPP) require several handover signaling interactions, which could cause signaling load over the access network and the core network. Besides, several distinct handover procedures are proposed for different mobility scenarios, which increases system complexity. In this paper, we propose a simple and secure uniform group-based handover authentication scheme for mMTC devices based on multisignature and aggregate message authentication code (AMAC) techniques, designed to fit all of the mobility scenarios in LTE-A networks. Compared with the current 3GPP standards, our scheme achieves a simple authentication process with robust security protection, including privacy preservation, and thus avoids signaling congestion. The correctness of the proposed group handover authentication protocol is formally proved in the Canetti-Krawczyk (CK) model and verified with AVISPA and SPAN.
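The aggregate-MAC ingredient can be sketched in a few lines: each device's HMAC tag is XORed into one short tag the verifier can recompute. Key handling here is simplified, and this is not the paper's full handover protocol:

```python
# Sketch: the aggregate-MAC idea (XOR of per-device HMAC tags), which lets
# a verifier check one short tag for a whole group instead of N tags.
# Toy key derivation; not the paper's full protocol.
import hashlib
import hmac
from functools import reduce

def tag(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

def aggregate(tags: list[bytes]) -> bytes:
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), tags)

devices = {b"dev-1": b"handover-req-1", b"dev-2": b"handover-req-2"}
keys = {d: hashlib.sha256(d).digest() for d in devices}   # toy key derivation

agg = aggregate([tag(keys[d], m) for d, m in devices.items()])
# The verifier, knowing all keys and messages, recomputes the same XOR:
expected = aggregate([tag(keys[d], m) for d, m in devices.items()])
print(hmac.compare_digest(agg, expected))  # True
```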
ABNT, Harvard, Vancouver, APA, etc. styles

23

Shukla, Samiksha, D. K. Mishra, and Kapil Tiwari. "Performance Enhancement of Soap Via Multi Level Caching". Mapana - Journal of Sciences 9, no. 2 (November 30, 2010): 47–52. http://dx.doi.org/10.12723/mjs.17.6.

Full text of the source
Abstract:
Due to the complex infrastructure of web applications, the response time for different service requests by clients can be significantly large. Simple Object Access Protocol (SOAP) is a recent and emerging technology in the field of web services, which aims at replacing traditional methods of remote communication. The basic aim of designing SOAP was to increase interoperability among a broad range of programs and environments; SOAP allows applications written in different languages and installed on different platforms to communicate with each other over the network. Web services demand security, high performance, and extensibility. SOAP provides various benefits for interoperability, but at the price of performance degradation and security, which makes SOAP a poor choice for high-performance web services. In this paper we present a new approach that enables multi-level caching at the client side as well as the server side. The paper describes an implementation based on the Apache Java SOAP client, which gives radically enhanced performance.
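The client-side level of such a cache reduces to memoizing responses on a hash of the request envelope, as in this sketch (the server-side level and any invalidation policy are omitted):

```python
# Sketch: a client-side response cache keyed by a hash of the SOAP request,
# the basic ingredient of multi-level caching. Server-side caching and
# cache invalidation are omitted.
import hashlib

class SoapCache:
    def __init__(self, send_fn):
        self._send = send_fn          # real transport, e.g. an HTTP POST
        self._store = {}

    def call(self, envelope: str) -> str:
        key = hashlib.sha256(envelope.encode()).hexdigest()
        if key not in self._store:    # miss: pay the network cost once
            self._store[key] = self._send(envelope)
        return self._store[key]       # hit: served locally

calls = []
def fake_send(env):
    calls.append(env)
    return "<soap:Envelope>...response...</soap:Envelope>"

cache = SoapCache(fake_send)
cache.call("<soap:Envelope>...getQuote IBM...</soap:Envelope>")
cache.call("<soap:Envelope>...getQuote IBM...</soap:Envelope>")
print(len(calls))  # 1: the second identical request never hit the network
```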
ABNT, Harvard, Vancouver, APA, etc. styles

24

Yulianto, Budi. "Analisis Korelasi Faktor Perilaku Konsumen terhadap Keputusan Penggunaan Teknologi Komunikasi Voip". ComTech: Computer, Mathematics and Engineering Applications 5, no. 1 (June 30, 2014): 236. http://dx.doi.org/10.21512/comtech.v5i1.2619.

Full text of the source
Abstract:
The advancement of communication technology, combined with the computer and the Internet, brings Internet telephony, or VoIP (Voice over Internet Protocol). Through VoIP technology, the cost of telecommunications, in particular for international direct dialing (IDD), can be reduced. This research analyzes the growth rate of VoIP users, the correlation of consumer behavior with the decision to use VoIP, and cost comparisons of telecommunication services between VoIP and other operators. The research uses a descriptive analysis method to describe the studied object through sample data collection for hypothesis testing. It leads to the conclusion that using VoIP for international calls is more advantageous than using GSM (Global System for Mobile), CDMA (Code Division Multiple Access), or PSTN (Public Switched Telephone Network) operators.
ABNT, Harvard, Vancouver, APA, etc. styles

25

Al-Mashadani, Abdulrahman Khalid Abdullah, and Muhammad Ilyas. "Distributed Denial of Service Attack Alleviated and Detected by Using Mininet and Software Defined Network". Webology 19, no. 1 (January 20, 2022): 4129–44. http://dx.doi.org/10.14704/web/v19i1/web19272.

Full text of the source
Abstract:
Network security and how to keep networks safe from malicious attacks nowadays attracts huge interest from developers and cyber-security experts. Software-Defined Networking (SDN) is a simple framework for networks that allows programmability and monitoring, enabling operators to manage the entire network in a consistent and comprehensive manner; it can also be used to detect and alleviate DDoS attacks, and it is the current trend in network security evolution. Among the many threats that networks face is the Distributed Denial of Service (DDoS) attack, which exploits architectural weaknesses in traditional networks. SDN uses a new architecture whose point of strength is the separation of the control and data planes. A DDoS attack prevents users from accessing network resources or introduces huge delays in the network. This paper shows the impact of DDoS attacks on SDN and how SDN applications written in Python, using the OpenFlow protocol, can automatically detect and resist attacks, with an average response time of 95-145 seconds.
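The core of a rate-based detector of this kind is a sliding-window packet counter per source, sketched below in controller-agnostic Python; the threshold and window are assumptions, and a real SDN app would go on to install OpenFlow drop rules:

```python
# Sketch: a sliding-window, per-source packet-rate detector of the kind
# written as SDN controller apps. Threshold and window are illustrative;
# a real app would install OpenFlow drop rules when a source is flagged.
import time
from collections import defaultdict, deque

WINDOW = 5.0       # seconds
THRESHOLD = 500    # packets per source per window (assumed)

history = defaultdict(deque)   # src_ip -> timestamps of recent packets

def on_packet(src_ip, now=None):
    """Record one packet; return True if src_ip should be blocked."""
    now = time.time() if now is None else now
    q = history[src_ip]
    q.append(now)
    while q and now - q[0] > WINDOW:   # slide the window
        q.popleft()
    return len(q) > THRESHOLD

# Simulated flood: 600 packets in one second from the same source.
flagged = any(on_packet("10.0.0.9", now=1000.0 + i / 600) for i in range(600))
print(flagged)  # True
```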
ABNT, Harvard, Vancouver, APA, etc. styles

26

Bartlett, H., and R. Wong. "Modelling and Simulation of the Operational and Information Processing Functions of a Computer-Integrated Manufacturing Cell". Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture 209, no. 4 (August 1995): 245–59. http://dx.doi.org/10.1243/pime_proc_1995_209_081_02.

Full text of the source
Abstract:
This paper investigates the information processing function of a computer-integrated manufacturing (CIM) system, which is usually neglected by engineers when they study the performance of a manufacturing system. The feasibility of developing a complete simulation model which would include both the operational and information processing functions is therefore considered. In order to achieve this, a typical pick-and-place manufacturing cell was considered in which 46 devices were connected to a local area network (LAN). Two independent simulation tools, SIMAN* and L-NET†, were used in order to develop the complete model. The model was evaluated and used to study the performance of the pick-and-place cell using different communication protocols such as the IEEE 802.3 carrier sense multiple access with collision detection (CSMA/CD), IEEE 802.4 token bus and the IEEE 802.5 token ring. Results presented in this paper show that with careful design and simulation it is possible to develop a complete model which includes both operational and informational processing functions. Although the example of the pick-and-place cell is relatively simple the technique adopted could be applied to any CIM system. Results have also shown that for the pick-and-place cell considered in this paper an IEEE 802.3 CSMA/CD protocol operating at 10 Mbps would guarantee access to the network within the shortest station processing time.
ABNT, Harvard, Vancouver, APA, etc. styles

27

Bujari, Armir, and Claudio Enrico Palazzi. "AirCache: A Crowd-Based Solution for Geoanchored Floating Data". Mobile Information Systems 2016 (2016): 1–12. http://dx.doi.org/10.1155/2016/3247903.

Full text of the source
Abstract:
The Internet edge has evolved from a simple consumer of information and data to an eager producer feeding sensed data at a societal scale. The crowdsensing paradigm is a representative example which has the potential to revolutionize the way we acquire and consume data. Indeed, especially in the era of smartphones, the geographical and temporal scope of data is often local. For instance, users' queries are more and more frequently about a nearby object, event, person, location, and so forth. These queries could certainly be processed and answered locally, without the need for contacting a remote server through the Internet. In this scenario, the data is supplied (sensed) by the users and, as a consequence, data lifetime is limited by human organizational factors (e.g., mobility). On this basis, data survivability in the Area of Interest (AoI) is crucial and, if not guaranteed, could undermine system deployment. Addressing this scenario, we discuss and contribute a novel protocol named AirCache, whose aim is to guarantee data availability in the AoI while at the same time reducing the data access costs at the network edges. We assess our proposal through a simulation analysis showing that our approach effectively fulfills its design objectives.
ABNT, Harvard, Vancouver, APA, etc. styles

28

Djibo, Moumouni, Wend Yam Serge Boris Ouedraogo, Ali Doumounia, Serge Roland Sanou, Moumouni Sawadogo, Idrissa Guira, Nicolas Koné, Christian Chwala, Harald Kunstmann, and François Zougmoré. "Towards Innovative Solutions for Monitoring Precipitation in Poorly Instrumented Regions: Real-Time System for Collecting Power Levels of Microwave Links of Mobile Phone Operators for Rainfall Quantification in Burkina Faso". Applied System Innovation 6, no. 1 (December 27, 2022): 4. http://dx.doi.org/10.3390/asi6010004.

Full text of the source
Abstract:
Since the 1990s, mobile telecommunication networks have gradually become denser around the world. Nowadays, large parts of their backhaul network consist of commercial microwave links (CMLs). Since CML signals are attenuated by rainfall, the exploitation of records of this attenuation is an innovative and an inexpensive solution for precipitation monitoring purposes. Performance data from mobile operators’ networks are crucial for the implementation of this technology. Therefore, a real-time system for collecting and storing CML power levels from the mobile phone operator “Telecel Faso” in Burkina Faso has been implemented. This new acquisition system, which uses the Simple Network Management Protocol (SNMP), can simultaneously record the transmitted and received power levels from all the CMLs to which it has access, with a time resolution of one minute. Installed at “Laboratoire des Matériaux et Environnement de l’Université Joseph KI-ZERBO (Burkina Faso)”, this acquisition system is dynamic and has gradually grown from eight, in 2019, to more than 1000 radio links of Telecel Faso’s network in 2021. The system covers the capital Ouagadougou and the main cities of Burkina Faso (Bobo Dioulasso, Ouahigouya, Koudougou, and Kaya) as well as the axes connecting Ouagadougou to these cities.
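A one-minute SNMP polling loop of the kind described might be sketched as follows, assuming pysnmp; the OID, community string, and device address are placeholders for vendor-specific Tx/Rx power objects:

```python
# Sketch: a one-minute polling loop for CML power levels over SNMP.
# The OID, community string, and device address are placeholders; real
# deployments would read vendor-specific Tx/Rx power OIDs per link.
import time
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

RX_POWER_OID = "1.3.6.1.4.1.99999.1.2.0"   # hypothetical vendor OID

def poll(host):
    errInd, errStat, _, varBinds = next(getCmd(
        SnmpEngine(), CommunityData("public", mpModel=1),
        UdpTransportTarget((host, 161)), ContextData(),
        ObjectType(ObjectIdentity(RX_POWER_OID))))
    if errInd or errStat:
        return None
    return varBinds[0][1]

while True:
    value = poll("192.0.2.10")                 # placeholder link address
    print(time.strftime("%H:%M"), "rx_power:", value)
    time.sleep(60)    # one-minute resolution, as in the deployed system
```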
ABNT, Harvard, Vancouver, APA, etc. styles

29

Rouissat, Mehdi, Mohammed Belkheir, Ibrahim S. Alsukayti, and Allel Mokaddem. "A Lightweight Mitigation Approach against a New Inundation Attack in RPL-Based IoT Networks". Applied Sciences 13, no. 18 (September 16, 2023): 10366. http://dx.doi.org/10.3390/app131810366.

Full text of the source
Abstract:
Internet of Things (IoT) networks are being widely deployed for a broad range of critical applications. Without effective security support, such a trend would open the doors to notable security challenges. Due to their inherent constrained characteristics, IoT networks are highly vulnerable to the adverse impacts of a wide scope of IoT attacks. Among these, flooding attacks would cause great damage given the limited computational and energy capacity of IoT devices. However, IETF-standardized IoT routing protocols, such as the IPv6 Routing Protocol for Low Power and Lossy Networks (RPL), have no relevant security-provision mechanism. Different variants of the flooding attack can be easily initiated in RPL networks to exhaust network resources and degrade overall network performance. In this paper, a novel variant referred to as the Destination Information Object Flooding (DIOF) attack is introduced. The DIOF attack involves an internal malicious node disseminating falsified information to instigate excessive transmissions of DIO control messages. The results of the experimental evaluation demonstrated the significant adverse impact of DIOF attacks on control overhead and energy consumption, which increased by more than 500% and 210%, respectively. A reduction of more than 32% in Packet Delivery Ratio (PDR) and an increase of more than 192% in latency were also experienced. These were more evident in cases in which the malicious node was in close proximity to the sink node. To effectively address the DIOF attack, we propose a new lightweight approach based on a collaborative and distributed security scheme referred to as DIOF-Secure RPL (DSRPL). It provides an effective solution, enhancing RPL network resilience against DIOF attacks with only simple in-protocol modifications. As the experimental results indicated, DSRPL guaranteed responsive detection and mitigation of the DIOF attacks in a matter of a few seconds. Compared to RPL attack scenarios, it also succeeded in reducing network overhead and energy consumption by more than 80% while maintaining QoS performance at satisfactory levels.
ABNT, Harvard, Vancouver, APA, etc. styles

30

Zhang, Liujing, Jin Li, Wenyang Guan, and Xiaoqin Lian. "Optimization of User Service Rate with Image Compression in Edge Computing-Based Vehicular Networks". Mathematics 12, no. 4 (February 12, 2024): 558. http://dx.doi.org/10.3390/math12040558.

Full text of the source
Abstract:
The prevalence of intelligent transportation systems in alleviating traffic congestion and reducing the number of traffic accidents has risen in recent years owing to the rapid advancement of information and communication technology (ICT). Nevertheless, the increase in Internet of Vehicles (IoV) users has led to massive data transmission, resulting in significant delays and network instability during vehicle operation due to limited bandwidth resources. This poses serious security risks to the traffic system and endangers the safety of IoV users. To alleviate the computational load on the core network and provide more timely, effective, and secure data services to proximate users, this paper proposes the deployment of edge servers utilizing edge computing technologies. The massive image data of users are processed using an image compression algorithm, revealing a positive correlation between the compression quality factor and the image’s spatial occupancy. A performance analysis model for the ADHOC MAC (ADHOC Medium Access Control) protocol is established, elucidating a positive correlation between the frame length and the number of service users, and a negative correlation between the service user rate and the compression quality factor. The optimal service user rate, within the constraints of compression that does not compromise detection accuracy, is determined by using the target detection result as a criterion for effective compression. The simulation results demonstrate that the proposed scheme satisfies the object detection accuracy requirements in the IoV context. It enables the number of successfully connected users to approach the total user count, and increases the service rate by up to 34%, thereby enhancing driving safety, stability, and efficiency.
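The positive correlation between the JPEG quality factor and encoded size is easy to reproduce with Pillow, as in this sketch (placeholder image; the vehicular-network model itself is not reproduced):

```python
# Sketch: measuring how the JPEG quality factor drives encoded size, the
# relationship the service-rate analysis builds on. Pillow only.
from io import BytesIO
from PIL import Image

img = Image.new("RGB", (320, 240))           # placeholder frame
for x in range(320):                         # add a gradient so sizes differ
    for y in range(240):
        img.putpixel((x, y), (x % 256, y % 256, (x + y) % 256))

for quality in (10, 30, 50, 70, 90):
    buf = BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    print(f"q={quality:2d} -> {buf.tell():6d} bytes")  # size grows with quality
```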
ABNT, Harvard, Vancouver, APA, etc. styles

31

Khudair Madhloom, Jamal, Hussein Najm Abd Ali, Haifaa Ahmed Hasan, Oday Ali Hassen, and Saad Mohamed Darwish. "A Quantum-Inspired Ant Colony Optimization Approach for Exploring Routing Gateways in Mobile Ad Hoc Networks". Electronics 12, no. 5 (February 28, 2023): 1171. http://dx.doi.org/10.3390/electronics12051171.

Full text of the source
Abstract:
Establishing internet access for mobile ad hoc networks (MANET) is a job that is both vital and complex. MANETs are used to build a broad range of applications, both commercial and non-commercial, with the majority of these applications obtaining access to internet resources. Since the gateways (GWs) are the central nodes in a MANET's ability to connect to the internet, it is common practice to deploy numerous GWs to increase the capabilities of a MANET. Current routing methods have been adapted and optimized for use with MANET through the use of both conventional routing techniques and tree-based network architectures. Exploring new GWs or recovering from failed ones also increases network overhead, which matters because MANET is a dynamic and complicated network. To handle these issues, the work presented in this paper presents a modified gateway discovery approach inspired by the quantum swarm intelligence technique. The suggested approach follows the non-root tree-based GW discovery category to reduce broadcasting in the process of exploring GWs and uses quantum-inspired ant colony optimization (QACO) for constructing new paths. Due to the sequential method of execution of the algorithms, the complexity of ACO grows dramatically with the rise in the number of paths explored and the number of iterations required to obtain better performance. The exploration of a huge optimization problem's solution space may be made much more efficient with the help of quantum parallelization and entanglement of quantum states. Compared to other broad evolutionary algorithms, QACOs have more promise for tackling large-scale issues due to their ability to prevent premature convergence with a simple implementation. The experimental results using benchmarked datasets reveal the feasibility of the suggested approach for improving the processes of exploring new GWs, testing and maintaining existing paths to GWs, exploring different paths to existing GWs, detecting any connection failure in any route, and attempting to fix that failure by discovering an alternative optimal path. Furthermore, the comparative study demonstrates that the utilized QACO is valid and outperforms the discrete binary ACO algorithm (AntHocNet protocol) in terms of time to discover new GWs (27% improvement on average), time that a recently inserted node takes to discover all GWs (on average, 70% improvement), routing overhead (53% improvement on average), and gateway overhead (on average, 60% improvement).
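For background, the classical ACO pheromone update that quantum-inspired variants build on is the following; the paper's quantum-amplitude encoding is not shown:

```latex
\tau_{ij} \leftarrow (1-\rho)\,\tau_{ij} + \sum_{k=1}^{m} \Delta\tau_{ij}^{k},
\qquad
\Delta\tau_{ij}^{k} =
\begin{cases}
  Q / L_k & \text{if ant } k \text{ traversed edge } (i,j),\\
  0       & \text{otherwise,}
\end{cases}
```

where ρ is the pheromone evaporation rate, Q a constant, and L_k the length of the path built by ant k.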
ABNT, Harvard, Vancouver, APA, etc. styles

32

Bristola, Glenn Arwin M. "Integrating of voice recognition email application system for visually impaired person using linear regression algorithm". South Asian Journal of Engineering and Technology 12, no. 1 (March 31, 2022): 74–83. http://dx.doi.org/10.26524/sajet.2022.12.12.

Full text of the source
Abstract:
The outcome of this study will surely help visually impaired people, who face difficulties in accessing computer systems. Voice recognition will help them access e-mail. This study also reduces the cognitive load on visually impaired users of remembering and typing characters on a keyboard. If this system is implemented, the self-esteem and the social and emotional well-being of visually impaired users will be lifted, for they will feel valued in society and fairly treated in their access to technology. The main function of this study is a user keyboard that responds through voice. The purpose of this study is to help visually impaired people use a modern application to interact with a voice recognition system through email on different types of modern gadgets like computers or mobile phones. In terms of functionality, the proponents use a set of APIs (Application Program Interfaces) such as Google speech-to-text and text-to-speech, processed through the email system, with SMTP (Simple Mail Transfer Protocol) used for mailing services. In terms of programming software, the proponent uses PHP for the backend of the web interface, HTML and CSS as the front end for the web-based user interface, and voice typing and dictation speech interaction models using the Windows dictation engine. The proponent used a descriptive research design in this study to describe the characteristics of the population of visually impaired persons being studied; descriptive research is mainly done to gain a better understanding of a topic and focuses on providing information useful for development. The research is based on a mixed method focused on producing informative outcomes. Based on the results of the surveys, conclusions were drawn: the majority of the respondents were male adults aged 32-41, all working as massage therapists. The majority of the respondents rated the overall function of the application Excellent and rated its level of security Secured.
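The mailing step (SMTP) can be sketched with Python's standard library as below; server, port, and credentials are placeholders, and the speech-recognition front end is omitted:

```python
# Sketch: sending the dictated message over SMTP with the standard library.
# Server, port, and credentials are placeholders; the speech-to-text front
# end that produces the body text is omitted.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "user@example.org"
msg["To"] = "friend@example.org"
msg["Subject"] = "Dictated message"
msg.set_content("Body text produced by the speech-to-text engine.")

with smtplib.SMTP("smtp.example.org", 587) as server:
    server.starttls()                        # encrypt the session
    server.login("user@example.org", "app-password")
    server.send_message(msg)
```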
ABNT, Harvard, Vancouver, APA, etc. styles

33

Bristol, Glenn Arwin M. "Integrating of Voice Recognition Email Application System for Visually Impaired Person using Linear Regression Algorithm". Proceedings of The International Halal Science and Technology Conference 14, no. 1 (March 10, 2022): 56–66. http://dx.doi.org/10.31098/ihsatec.v14i1.486.

Full text of the source
Abstract:
The outcome of this study will surely help visually impaired people who face difficulties in accessing computer systems. Voice recognition will help them access email. This study also reduces the cognitive load on visually impaired users of remembering and typing characters on a keyboard. If this system is implemented, the self-esteem and the social and emotional well-being of visually impaired users will be lifted, for they will feel valued in society and fairly treated in their access to technology. The main function of this study is a user keyboard that responds through voice. The purpose of this study is to help visually impaired people use a modern application to interact with voice recognition systems through email on different types of modern gadgets, like computers or mobile phones. In terms of functionality, the proponents use a set of APIs (Application Program Interfaces) such as Google speech-to-text and text-to-speech, processed through the email system, with SMTP (Simple Mail Transfer Protocol) used for mailing services. In terms of programming software, the proponent uses PHP for the backend of the web interface, HTML and CSS for the creation of the web-based UI, and voice typing and dictation speech interaction models using the Windows dictation engine. The proponent used a descriptive research design in this study to describe the characteristics of the population of visually impaired persons being studied; descriptive research is mainly done to gain a better understanding of a topic and focuses on providing information useful for development. The research is based on a mixed method focused on producing informative outcomes. Based on the results of the surveys, conclusions were drawn: the majority of the respondents were male adults aged 32-41, all working as massage therapists. The majority of the respondents rated the overall function of the application as Excellent and rated its level of security as Secured.
ABNT, Harvard, Vancouver, APA, etc. styles

34

Glenn Arwin M. Bristola, and Joevilzon C. Calderon. "Integrating of voice recognition email application system for visually impaired person using linear regression algorithm". South Asian Journal of Engineering and Technology 12, no. 1 (March 31, 2022): 74–83. http://dx.doi.org/10.26524/sajet.2022.12.012.

Full text of the source
Abstract:
The outcome of this study will surely help visually impaired people, who face difficulties in accessing computer systems. Voice recognition will help them access e-mail. This study also reduces the cognitive load on visually impaired users of remembering and typing characters on a keyboard. If this system is implemented, the self-esteem and the social and emotional well-being of visually impaired users will be lifted, for they will feel valued in society and fairly treated in their access to technology. The main function of this study is a user keyboard that responds through voice. The purpose of this study is to help visually impaired people use a modern application to interact with a voice recognition system through email on different types of modern gadgets like computers or mobile phones. In terms of functionality, the proponents use a set of APIs (Application Program Interfaces) such as Google speech-to-text and text-to-speech, processed through the email system, with SMTP (Simple Mail Transfer Protocol) used for mailing services. In terms of programming software, the proponent uses PHP for the backend of the web interface, HTML and CSS as the front end for the web-based user interface, and voice typing and dictation speech interaction models using the Windows dictation engine. The proponent used a descriptive research design in this study to describe the characteristics of the population of visually impaired persons being studied; descriptive research is mainly done to gain a better understanding of a topic and focuses on providing information useful for development. The research is based on a mixed method focused on producing informative outcomes. Based on the results of the surveys, conclusions were drawn: the majority of the respondents were male adults aged 32-41, all working as massage therapists. The majority of the respondents rated the overall function of the application Excellent and rated its level of security Secured.
Estilos ABNT, Harvard, Vancouver, APA, etc.
35

Bezprozvannych, G. V., e O. A. Pushkar. "Ensuring standardized parameters for the transmission of digital signals by twisted pairs at the technological stage of manufacturing cables for industrial operating technologies". Electrical Engineering & Electromechanics, n.º 4 (27 de junho de 2023): 57–64. http://dx.doi.org/10.20998/2074-272x.2023.4.09.

Texto completo da fonte
Resumo:
Introduction. Building monitoring and control systems use many simple devices – sensors that detect light, heat, movement, smoke, humidity and pressure, and mechanisms for actuating and controlling switches, closing devices, alarms, etc. – collectively called «operating technologies» (OT). Different communication protocols and fieldbus technologies, such as Modbus for conditioning systems, BACnet for access control and LonWorks for lighting, have traditionally been used to connect them. This network fragmentation makes protocol-translating gateways necessary when creating a single automation system, which complicates the implementation of complex control systems for any object. Information networks, by contrast, are unified, but their Ethernet protocol has, for various technological and cost reasons, not been widely adopted for operating technologies. Owing to its high bandwidth compared to existing fieldbus networks, industrial Ethernet can significantly increase flexibility in implementing additional OT functions. Modern industrial Ethernet networks are based on unshielded and shielded twisted-pair category 5e cables. The additional metal screens in the shielded twisted-pair construction increase the electrical resistance of the conductors through the proximity effect, and also increase the capacitance and the attenuation over the broadband signal transmission range. Purpose. Substantiation of the range of settings of technological equipment that ensures standardized values of the attenuation coefficient and noise immunity, based on an analysis of measurements, over a wide frequency range, of the electrical parameters of shielded and unshielded cables for industrial operating technologies. Methodology. Experimental studies were performed on the statistically averaged electrical transmission parameters of the pairs for 10 and 85 samples, each 305 meters long, of unshielded and shielded category 5e cables, respectively. In the frequency range from 1 to 10 MHz, the unshielded cables show lower values of the attenuation coefficient; above 30 MHz, the shielded cables show lower attenuation owing to the aluminium-polymer tape screen. The pair correlation coefficient between the resistance and capacitance asymmetries of the twisted pairs is 0.9735 for unshielded and 0.9257 for shielded cables, and resistance asymmetry of the pairs is shown to have the greater impact on the noise immunity of the cables. The influence of noise interference on the deviation of the diameter and capacitance of the insulated conductor from their nominal values in the stochastic technological process is analyzed, and a strategy for setting up the technological process to ensure the required attenuation and noise immunity in the high-frequency range is substantiated. Practical value. Multiplicative interference caused by random changes in the stochastic technological process can lead to the diameter deviating from its nominal value by a factor of 2 with a probability of 50 %. The settings of the technological equipment must hold the coefficient of variation of the insulated conductor's capacitance at 0.3 % to guarantee a high level of noise immunity.
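The link the authors report between resistance asymmetry and capacitance asymmetry is an ordinary pair (Pearson) correlation, so the statistic is easy to reproduce on any set of per-pair measurements. The Python sketch below uses synthetic illustrative values only; the quoted coefficients of 0.9735 and 0.9257 come from the authors' cable samples, not from this code.

```python
import statistics

def pearson(xs, ys):
    """Pearson pair correlation coefficient of two equal-length samples."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Synthetic per-pair asymmetries (in percent), for illustration only.
resistance_asym  = [0.8, 1.1, 0.5, 1.4, 0.9, 1.2]
capacitance_asym = [0.6, 0.9, 0.4, 1.2, 0.8, 1.0]
print(f"r = {pearson(resistance_asym, capacitance_asym):.4f}")
```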
Estilos ABNT, Harvard, Vancouver, APA, etc.
36

Abraham, Ajith, Sung-Bae Cho, Thomas Hite e Sang-Yong Han. "Special Issue on Web Services Practices". Journal of Advanced Computational Intelligence and Intelligent Informatics 10, n.º 5 (20 de setembro de 2006): 703–4. http://dx.doi.org/10.20965/jaciii.2006.p0703.

Texto completo da fonte
Resumo:
Web services – a new breed of self-contained, self-describing, modular applications published, located, and invoked across the Web – handle functions from simple requests to complicated business processes. They are defined as network-based application components with a services-oriented architecture (SOA) using standard interface description languages and uniform communication protocols. SOA enables organizations to grasp and respond to changing trends and to adapt their business processes rapidly without major changes to the IT infrastructure. The Inaugural International Conference on Next-Generation Web Services Practices (NWeSP'05) attracted researchers who are also the world's most respected authorities on the semantic Web, Web-based services, and Web applications and services. NWeSP'05 was held in cooperation with the IEEE Computer Society Task Force on Electronic Commerce, the Technical Committee on Internet, and the Technical Committee on Scalable Computing. This special issue presents eight papers focused on different aspects of Web services and their applications. Papers were selected based on fundamental ideas and concepts rather than the thoroughness of techniques employed. The papers are organized as follows. Taher et al. present the first paper, on a Quality of Service Information and Computational framework (QoS-IC) supporting QoS-based service selection for SOA. The framework's functionality is expanded using a QoS constraints model that establishes an association relationship between different QoS properties and is used to govern QoS-based service selection in the underlying algorithm. Using a prototype implementation, the authors demonstrate how QoS constraints improve QoS-based service selection and save consumers valuable time. Due to the complex infrastructure of web applications, response times perceived by clients may be significantly longer than desired. To overcome some of the current problems, Vilas et al., in the second paper, propose a cache-based extension that enhances the current web services architecture, which is mainly based on program-logic or protocol-dependent optimization. In the third paper, Jo and Yoo present authorization for securing XML sources on the Web. One disadvantage of existing access control is that the DOM tree must be loaded into memory while all XML documents are parsed to generate it, so a great deal of memory is consumed in repeatedly searching the tree to authorize access to all of its nodes; the complex authorization evaluation process required thus lowers system performance. Existing access control also fails to consider information structure and semantics sufficiently due to basic HTML limitations. The authors overcome some of these limitations in the proposed model. In the fourth paper, Jung and Cho propose a novel behavior-network-based method for Web service composition. The behavior network selects services automatically through internal and external links with environmental information from sensors and goals. An optimal service is selected at each step, resulting in a globally optimal service sequence for achieving preset goals. The authors detail experimental results for the proposed model, comparing it with rule-based systems and through user tests.
Kong et al. present an efficient method in the fifth paper for merging heterogeneous ontologies. No ontology-building standard currently exists, and the many ontology-building tools available are based on different ontology languages, mostly focusing on how to create, edit and infer the ontology efficiently. Even ontologies about the same domain differ because ontology experts hold different viewpoints. For these reasons, interoperability between ontologies is very low. The authors propose merging heterogeneous domain ontologies by overcoming some of the above limitations. In the sixth paper, Chen and Che provide a polynomial-time tree pattern query minimization algorithm whose efficiency stems from two key observations: (i) inherent redundant "components" usually exist inside the rudimentary query provided by the user, and (ii) nonredundant nodes may become redundant when constraints such as co-occurrence and required child/descendant are given. They show that the algorithm obtained by first augmenting the input tree pattern using constraints, then applying minimization, invariably finds a unique minimal equivalent to the original query. Chen and Che present a polynomial-time algorithm for tree pattern query (TPQ) minimization without XML constraints in the seventh paper. The two-part algorithm is a dynamic programming strategy for finding all matching subtrees within a TPQ, with one part for subtree recognition and a second for subtree deletion. In the last paper, Bagchi et al. present the mobile distributed virtual memory (MDVM) concept and architecture for cellular networks containing server-groups (SG). They detail a two-round randomized distributed algorithm for electing a unique leader and co-leader of the SG that is free of any assumptions about network topology or buffer space limitations, and that is based on dynamically elected coordinators, eliminating single points of failure. As guest editors, we thank all authors featured in this special issue for their contributions and the referees for critically evaluating the papers within the short time allotted. We sincerely believe that readers will share our enjoyment of this special issue and find the information it presents both timely and useful.
Estilos ABNT, Harvard, Vancouver, APA, etc.
37

Nayyar, Anand, Pijush Kanti Dutta Pramankit e Rajni Mohana. "Introduction to the Special Issue on Evolving IoT and Cyber-Physical Systems: Advancements, Applications, and Solutions". Scalable Computing: Practice and Experience 21, n.º 3 (1 de agosto de 2020): 347–48. http://dx.doi.org/10.12694/scpe.v21i3.1568.

Texto completo da fonte
Resumo:
Internet of Things (IoT) is regarded as a next-generation wave of Information Technology (IT) after the widespread emergence of the Internet and mobile communication technologies. IoT supports information exchange and networked interaction of appliances, vehicles and other objects, making sensing and actuation possible in a low-cost and smart manner. On the other hand, cyber-physical systems (CPS) are described as engineered systems built upon the tight integration of cyber entities (e.g., computation, communication, and control) and physical things (natural and man-made systems governed by the laws of physics). The IoT and CPS are not isolated technologies. Rather, it can be said that IoT is the base or enabling technology for CPS, and CPS is considered the grown-up development of IoT, completing the IoT notion and vision. Both merge into a closed loop, providing mechanisms for conceptualizing and realizing all aspects of the networked composed systems that are monitored and controlled by computing algorithms and are tightly coupled among users and the Internet. That is, the hardware and the software entities are intertwined, and they typically function on different time and location scales. In fact, the linking between the cyber and the physical world is enabled by IoT (through sensors and actuators). CPS, which includes traditional embedded and control systems, is expected to be transformed by the evolving and innovative methodologies and engineering of IoT. Several application areas of IoT and CPS are smart building, smart transport, automated vehicles, smart cities, smart grid, smart manufacturing, smart agriculture, smart healthcare, smart supply chain and logistics, etc. Though CPS and IoT have significant overlaps, they differ in terms of engineering aspects. Engineering IoT systems revolves around uniquely identifiable, internet-connected devices and embedded systems, whereas engineering CPS requires a strong emphasis on the relationship between computational aspects (complex software) and physical entities (hardware). Engineering CPS is challenging because there is no defined and fixed boundary or relationship between the cyber and physical worlds. In CPS, diverse constituent parts are composed and collaborate to create unified systems with global behaviour. These systems must be assured in terms of dependability, safety, security, efficiency, and adherence to real-time constraints. Hence, designing CPS requires knowledge of multidisciplinary areas such as sensing technologies, distributed systems, pervasive and ubiquitous computing, real-time computing, computer networking, control theory, signal processing, embedded systems, etc. CPS, along with the continuously evolving IoT, has posed several challenges. For example, the enormous amount of data collected from physical things makes Big Data management and analytics, including data normalization, data aggregation, data mining, pattern extraction and information visualization, difficult. Similarly, the future IoT and CPS need standardized abstractions and architectures that will allow modular design and engineering of IoT and CPS in global and synergetic applications. Another challenging concern of IoT and CPS is the security and reliability of the components and systems.
Although IoT and CPS have attracted the attention of the research communities and several ideas and solutions have been proposed, there are still huge possibilities for innovative propositions to make the IoT and CPS vision successful. The major challenges and research scopes include system design and implementation, computing and communication, system architecture and integration, application-based implementations, fault tolerance, designing efficient algorithms and protocols, availability and reliability, security and privacy, energy-efficiency and sustainability, etc. It is our great privilege to present Volume 21, Issue 3 of Scalable Computing: Practice and Experience. We received 30 research papers, of which 14 were selected for publication. The objective of this special issue is to explore and report recent advances and disseminate state-of-the-art research related to IoT, CPS and the enabling and associated technologies. The special issue will present new dimensions of research to researchers and industry professionals with regard to IoT and CPS. Vivek Kumar Prasad and Madhuri D Bhavsar, in the paper titled "Monitoring and Prediction of SLA for IoT based Cloud", described mechanisms for monitoring, using the concept of reinforcement learning, and for prediction of cloud resources, implemented using LSTM; these form critical parts of cloud expertise in support of controlling and evolving IT resources. Proper utilization of the resources will generate revenue for the provider and also increase the trust factor of the provider of cloud services. For experimental analysis, four parameters have been used, i.e. CPU utilization, disk read/write throughput and memory utilization. Kasture et al., in the paper titled "Comparative Study of Speaker Recognition Techniques in IoT Devices for Text Independent Negative Recognition", compared the performance of features used in state-of-the-art speaker recognition models and analysed variants of Mel frequency cepstrum coefficients (MFCC), predominantly used in feature extraction, which can be further incorporated and used in various smart devices. Mahesh Kumar Singh and Om Prakash Rishi, in the paper titled "Event Driven Recommendation System for E-Commerce using Knowledge based Collaborative Filtering Technique", proposed a novel system that uses a knowledge base generated from a knowledge graph to identify the domain knowledge of users, items, and the relationships among these; a knowledge graph is a labelled multidimensional directed graph that represents the relationships among the users and the items. The proposed approach uses nearly 100 percent of users' participation in the form of activities during navigation of the web site; thus, the system captures the users' interests, which is beneficial for both seller and buyer. The proposed system is compared with baseline methods in the area of recommendation systems using three parameters, precision, recall and NDCG, through online and offline evaluation studies with user data, and it is observed that the proposed system performs better than the other baseline systems. Benbrahim et al.,
in the paper titled "Deep Convolutional Neural Network with TensorFlow and Keras to Classify Skin Cancer" proposed a novel classification model to classify skin tumours in images using Deep Learning methodology and the proposed system was tested on HAM10000 dataset comprising of 10,015 dermatoscopic images and the results observed that the proposed system is accurate in order of 94.06\% in validation set and 93.93\% in the test set. Devi B et al. in the paper titled "Deadlock Free Resource Management Technique for IoT-Based Post Disaster Recovery Systems" proposed a new class of techniques that do not perform stringent testing before allocating the resources but still ensure that the system is deadlock-free and the overhead is also minimal. The proposed technique suggests reserving a portion of the resources to ensure no deadlock would occur. The correctness of the technique is proved in the form of theorems. The average turnaround time is approximately 18\% lower for the proposed technique over Banker's algorithm and also an optimal overhead of O(m). Deep et al. in the paper titled "Access Management of User and Cyber-Physical Device in DBAAS According to Indian IT Laws Using Blockchain" proposed a novel blockchain solution to track the activities of employees managing cloud. Employee authentication and authorization are managed through the blockchain server. User authentication related data is stored in blockchain. The proposed work assists cloud companies to have better control over their employee's activities, thus help in preventing insider attack on User and Cyber-Physical Devices. Sumit Kumar and Jaspreet Singh in paper titled "Internet of Vehicles (IoV) over VANETS: Smart and Secure Communication using IoT" highlighted a detailed description of Internet of Vehicles (IoV) with current applications, architectures, communication technologies, routing protocols and different issues. The researchers also elaborated research challenges and trade-off between security and privacy in area of IoV. Deore et al. in the paper titled "A New Approach for Navigation and Traffic Signs Indication Using Map Integrated Augmented Reality for Self-Driving Cars" proposed a new approach to supplement the technology used in self-driving cards for perception. The proposed approach uses Augmented Reality to create and augment artificial objects of navigational signs and traffic signals based on vehicles location to reality. This approach help navigate the vehicle even if the road infrastructure does not have very good sign indications and marking. The approach was tested locally by creating a local navigational system and a smartphone based augmented reality app. The approach performed better than the conventional method as the objects were clearer in the frame which made it each for the object detection to detect them. Bhardwaj et al. in the paper titled "A Framework to Systematically Analyse the Trustworthiness of Nodes for Securing IoV Interactions" performed literature on IoV and Trust and proposed a Hybrid Trust model that seperates the malicious and trusted nodes to secure the interaction of vehicle in IoV. To test the model, simulation was conducted on varied threshold values. And results observed that PDR of trusted node is 0.63 which is higher as compared to PDR of malicious node which is 0.15. And on the basis of PDR, number of available hops and Trust Dynamics the malicious nodes are identified and discarded. 
Saniya Zahoor and Roohie Naaz Mir, in the paper titled "A Parallelization Based Data Management Framework for Pervasive IoT Applications", reviewed recent studies and related information on data management for pervasive IoT applications with limited resources, and proposed a parallelization-based data management framework for resource-constrained pervasive IoT applications. The proposed framework is compared with a sequential approach through simulations and empirical data analysis; the results show an improvement in the energy, processing, and storage requirements for processing data on the IoT device in the proposed framework as compared to the sequential approach. Patel et al., in the paper titled "Performance Analysis of Video ON-Demand and Live Video Streaming Using Cloud Based Services", presented a review of video analysis over live video streaming (LVS) and video-on-demand (VoD) applications. The researchers compared different message brokers, which help deliver each frame in a distributed pipeline, analysing the impact of two message brokers on video analysis for LVS and VoD using AWS Elemental services; in addition, they analysed the Kafka configuration parameters for reliability in full-service mode. Saniya Zahoor and Roohie Naaz Mir, in the paper titled "Design and Modeling of Resource-Constrained IoT Based Body Area Networks", presented the design and modeling of a resource-constrained BAN system and discussed various BAN scenarios in the context of resource constraints. The researchers also proposed an Advanced Edge Clustering (AEC) approach to manage the resources, such as energy, storage, and processing, of BAN devices while performing real-time capture of critical health parameters and detection of abnormal patterns. The AEC approach is compared with the Stable Election Protocol (SEP) through simulations and empirical data analysis; the results show an improvement in energy, processing time and storage requirements for processing data on BAN devices under AEC as compared to SEP. Neelam Saleem Khan and Mohammad Ahsan Chishti, in the paper titled "Security Challenges in Fog and IoT, Blockchain Technology and Cell Tree Solutions: A Review", outlined major authentication issues in IoT, mapped their existing solutions and tabulated Fog and IoT security loopholes. Furthermore, the paper presents blockchain, a decentralized distributed technology, as one of the solutions to authentication issues in IoT; the researchers discussed the strengths of blockchain technology, work done in this field and its adoption in the COVID-19 fight, and tabulated various challenges in blockchain technology. The researchers also proposed a Cell Tree architecture as another solution to address some of the security issues in IoT, outlined its advantages over blockchain technology, and tabulated future directions to stimulate attempts in this area. Bhadwal et al., in the paper titled "A Machine Translation System from Hindi to Sanskrit Language Using Rule Based Approach", proposed a rule-based machine translation system to bridge the language barrier between Hindi and Sanskrit by converting any text in Hindi to Sanskrit. The results are presented in the form of two confusion matrices, wherein a total of 50 random sentences and 100 tokens (Hindi words or phrases) were taken for system evaluation.
The semantic evaluation of the 100 tokens produces an accuracy of 94%, while the pragmatic analysis of the 50 sentences produces an accuracy of around 86%. Hence, the proposed system can be used to understand the whole translation process and can further be employed as a tool for learning as well as teaching. Further, this application can be embedded in local-communication-based assisting Internet of Things (IoT) devices like Alexa or Google Assistant. Anshu Kumar Dwivedi and A.K. Sharma, in the paper titled "NEEF: A Novel Energy Efficient Fuzzy Logic Based Clustering Protocol for Wireless Sensor Network", proposed a deterministic, novel, energy-efficient fuzzy-logic-based clustering protocol (NEEF) that considers primary and secondary factors in the fuzzy logic system while selecting cluster heads. After the selection of cluster heads, non-cluster-head nodes use fuzzy logic for prudent selection of their cluster head for cluster formation. NEEF is simulated and compared with two recent state-of-the-art protocols, namely SCHFTL and DFCR, under two scenarios. Simulation results show better performance through load balancing and improvements in terms of stability period, packets forwarded to the base station, average energy, and network lifetime.
Estilos ABNT, Harvard, Vancouver, APA, etc.
38

Abisa, Michael. "Meaningful Use and Electronic Laboratory Reporting: Challenges Health Information Technology Vendors Face in Kentucky". Online Journal of Public Health Informatics 9, n.º 3 (30 de dezembro de 2017). http://dx.doi.org/10.5210/ojphi.v9i3.7491.

Texto completo da fonte
Resumo:
Objectives: To explore the challenges Health Information Technology (HIT) vendors face in satisfying the requirements for Meaningful Use (MU) and Electronic Laboratory Reporting (ELR) of reportable diseases to the public health departments in Kentucky. Methodology: A cross-sectional survey of Health Information Exchange (HIE) vendors in Kentucky was conducted through the Kentucky Health Information Exchange (KHIE). Data were collected between February and March 2014. Participants were recruited from KHIE vendors; they received an online survey link by email and were asked to submit their responses. Vendors' feedback was summarized and analyzed to identify their challenges. Out of the 55 vendors who received the survey, 35 (63.64%) responded. Results: Of the seven transport protocol options for ELR, vendors selected virtual private network (VPN) as the most difficult to implement (31.7%), while Secure File Transfer Protocol (SFTP) was selected as the preferred ELR transport protocol (31.4%). Most of the respondents (80%) reported having no challenges with the Health Level 7 (HL7) standard implementation guide required by MU for 2014 ELR certification. Conclusion: The study found that the most difficult transport protocol to implement for ELR is VPN, and that if vendors had a preference, they would use SFTP for ELR over KHIE's choice of VPN and Simple Object Access Protocol (SOAP). KHIE vendors do not see any variability in what is reportable by different jurisdictions, and it is not difficult for them to detect what is reportable in one jurisdiction versus another.
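Since SFTP was the transport respondents preferred, the pathway is easy to picture: an HL7 v2 ELR message is dropped as a file on the receiving agency's SFTP endpoint. The Python sketch below is a hedged illustration using the third-party paramiko library; the host, credentials, and paths are placeholders, and the one-segment HL7 fragment is illustrative rather than a conformant message from the MU implementation guide.

```python
import paramiko  # third-party SSH/SFTP library, assumed available

# Illustrative (not conformant) HL7 v2 ELR fragment.
hl7_message = "MSH|^~\\&|LAB|HOSP|ELR|KHIE|20140301||ORU^R01|MSG0001|P|2.5.1\r"

def upload_elr(host: str, user: str, password: str,
               remote_path: str, payload: str) -> None:
    """Drop an ELR message file onto a public-health SFTP endpoint."""
    transport = paramiko.Transport((host, 22))
    try:
        transport.connect(username=user, password=password)
        sftp = paramiko.SFTPClient.from_transport(transport)
        with sftp.open(remote_path, "w") as fh:
            fh.write(payload.encode("utf-8"))  # SFTP file objects take bytes
    finally:
        transport.close()

upload_elr("sftp.example.org", "labuser", "secret",
           "/inbound/elr_20140301.hl7", hl7_message)
```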
Estilos ABNT, Harvard, Vancouver, APA, etc.
39

"Safety Measures and Auto Detection against SQL Injection Attacks". International Journal of Recent Technology and Engineering 8, n.º 4 (30 de dezembro de 2019): 2827–33. http://dx.doi.org/10.35940/ijeat.b3316.129219.

Texto completo da fonte
Resumo:
An SQL injection attack (SQLIA) occurs when an attacker integrates malicious SQL code into a valid query statement via unvalidated input. As a result, the relational database management system executes the malicious query, producing the SQL injection attack; after successful execution, it may compromise the CIA (confidentiality, integrity and availability) of the web API. The vulnerability of the Web Application Programming Interface (API) is a primary concern for any programming effort. Web APIs are mainly based on the Simple Object Access Protocol (SOAP), which provides its own security, or on Representational State Transfer (REST), an architectural style whose security measures come from the transport layer. Much of the time, developers or new programmers do not follow the standards of safe programming and forget to validate the input fields in their forms. This vulnerability in the web API opens the door to threats and makes it a cakewalk for an attacker to exploit the database associated with the web API. The objective of the paper is to automate the detection of SQL injection attacks and to secure poorly coded web API access across large volumes of network traffic. The Snort and Moloch approaches are used to develop a hybrid model for auto-detection as well as analysis of SQL injection attacks in the prototype system.
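The unsafe pattern the authors describe, splicing raw input into SQL text, and its standard remedy, parameterized queries, can be shown in a few lines. This is a minimal Python sketch using the standard sqlite3 module, not the paper's Snort/Moloch detection pipeline; the table and the payload are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "' OR '1'='1"   # classic SQLIA payload

# VULNERABLE: the payload is spliced into the SQL text itself,
# so the WHERE clause always evaluates true and leaks every row.
unsafe = f"SELECT * FROM users WHERE name = '{user_input}'"
print("unsafe:", conn.execute(unsafe).fetchall())

# SAFE: a placeholder keeps the payload as plain data, so no row matches.
safe = "SELECT * FROM users WHERE name = ?"
print("safe:  ", conn.execute(safe, (user_input,)).fetchall())
```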
Estilos ABNT, Harvard, Vancouver, APA, etc.
40

Soni, Gulshan, e Kandasamy Selvaradjou. "Rational Allocation of Guaranteed Time Slots to support real-time traffic in Wireless Body Area Networks". International Journal of Sensors, Wireless Communications and Control 11 (8 de março de 2021). http://dx.doi.org/10.2174/2210327911666210308155147.

Texto completo da fonte
Resumo:
Background: The main requirement of a Wireless Body Area Network (WBAN) is on-time delivery of vital signs, sensed by delay-sensitive biological sensors implanted in the body of the monitored patient, to the central gateway. The Medium Access Control (MAC) protocol standard IEEE 802.15.4 supports real-time data delivery through its unique feature called the Guaranteed Time Slot (GTS) under its beacon-enabled mode, and is considered suitable for the WBAN scenario. However, as per the standard, IEEE 802.15.4 uses a simple and straightforward First Come First Served (FCFS) mechanism to distribute GTS slots among the contender nodes. This kind of blind allocation of GTS slots results in poor utilization of bandwidth and also prevents delay-sensitive sensor nodes from effectively utilizing the contention-free slots. Objective: The main objective of this work is to provide a solution to the unfair allocation of GTS slots in the beacon-enabled mode of the IEEE 802.15.4 standard in WBAN. Method: We propose a Rational Allocation of Guaranteed Time Slots (RAGTS) protocol that distributes the available GTS slots based on the delay sensitivity of the contender nodes. Results: A series of simulation experiments was performed to assess the performance of the proposed RAGTS protocol; the simulations capture the dynamic nature of the real-time deadlines associated with sensor traffic. Through simulations, we show that the proposed RAGTS protocol is more stable, in terms of various performance metrics, than the FCFS-based GTS allocation technique. Conclusion: In this article, we introduced the RAGTS scheme, which enhances the real-time traffic feature of the beacon-enabled mode of IEEE 802.15.4 MAC.
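The gap between FCFS and delay-aware GTS allocation is easy to sketch. IEEE 802.15.4 permits at most seven GTS slots per superframe; the Python sketch below ranks requests by deadline (an earliest-deadline-first flavour) as a hedged illustration of delay-sensitivity-based allocation, not the exact RAGTS algorithm, whose internals the abstract does not give.

```python
from dataclasses import dataclass

MAX_GTS_SLOTS = 7  # IEEE 802.15.4 allows at most seven GTS slots per superframe

@dataclass
class GtsRequest:
    node_id: int
    arrival_order: int   # position in the request queue
    deadline_ms: float   # delay bound of the node's sensor traffic

def allocate_fcfs(requests):
    """Standard behaviour: grant slots strictly in arrival order."""
    return sorted(requests, key=lambda r: r.arrival_order)[:MAX_GTS_SLOTS]

def allocate_delay_aware(requests):
    """Deadline-aware sketch: the tightest deadlines win the slots."""
    return sorted(requests, key=lambda r: r.deadline_ms)[:MAX_GTS_SLOTS]

deadlines = [250, 20, 120, 15, 300, 45, 500, 30]   # eight contenders, ms
reqs = [GtsRequest(i, i, d) for i, d in enumerate(deadlines)]

# FCFS refuses the late-arriving but urgent 30 ms request (node 7),
# while deadline ordering refuses the laxest request (node 6, 500 ms).
print("FCFS grants:", [r.node_id for r in allocate_fcfs(reqs)])
print("EDF grants :", [r.node_id for r in allocate_delay_aware(reqs)])
```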
Estilos ABNT, Harvard, Vancouver, APA, etc.
41

Vieira Junior, Ivanilson França, Jorge Granjal e Marilia Curado. "RT-Ranked: Towards Network Resiliency by Anticipating Demand in TSCH/RPL Communication Environments". Journal of Network and Systems Management 32, n.º 1 (3 de janeiro de 2024). http://dx.doi.org/10.1007/s10922-023-09796-3.

Texto completo da fonte
Resumo:
Abstract: Time-Slotted Channel Hopping (TSCH) Medium Access Control (MAC) was specified to target the needs of the Industrial Internet of Things. This MAC balances energy, bandwidth, and latency for deterministic communications in unreliable wireless environments. Building a distributed or autonomous TSCH schedule is arduous because each node negotiates cells with its neighbours based on queue occupancy, latency, and consumption metrics. The Minimal TSCH Configuration defined by RFC 8180 was specified for bootstrapping a 6TiSCH network and details the configurations that must be supported. In particular, it adopts the Routing Protocol for Low-Power and Lossy Networks (RPL) in Non-Storing mode, which reduces a node's network awareness. Dealing with unpredicted traffic originating far from the forwarding node is difficult with such limited network information, and anticipating unexpected flows from multiple network regions is essential because they can turn the forwarding node into a network bottleneck, leading to high latency, packet discards, or disconnections, forcing RPL to change the topology. To cope with this, this work proposes a new mechanism that implements an RPL control message option, embedded in Destination-Oriented Directed Acyclic Graph (DODAG) Information Object (DIO) and Destination Advertisement Object (DAO) control messages, for passing a node's cell demand forward, allowing the node to anticipate the cell allocation needed to support traffic originating from nodes far from the forwarding point. Implementing this mechanism in a distributed TSCH scheduler developed in Contiki-NG yielded promising results in supporting unforeseen traffic bursts, and it has the potential to significantly improve the performance and reliability of TSCH schedules in challenging network environments.
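Independently of the Contiki-NG implementation, the core idea, propagating downstream cell demand up the DODAG so a forwarding node can reserve cells before the traffic arrives, reduces to a recursive sum. The Python sketch below is a hedged illustration; the one-cell-per-packet assumption and the field names are ours, not the RPL option encoding proposed in the paper.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    name: str
    own_demand: int                    # packets/slotframe generated locally
    children: List["Node"] = field(default_factory=list)

def aggregate_demand(node: Node) -> int:
    """Total cells this node must forward: its own traffic plus
    everything advertised upward by its sub-DODAG (DAO-style)."""
    return node.own_demand + sum(aggregate_demand(c) for c in node.children)

leaf_a, leaf_b = Node("a", 2), Node("b", 3)
relay = Node("relay", 1, [leaf_a, leaf_b])
root = Node("root", 0, [relay])

# The relay must reserve cells for 1 + 2 + 3 = 6 packets per slotframe,
# even though its local traffic alone would suggest only 1.
print(aggregate_demand(relay))   # -> 6
```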
Estilos ABNT, Harvard, Vancouver, APA, etc.
42

Peters, Andreas J., e Daniel C. van der Ster. "Evaluating CephFS Performance vs. Cost on High-Density Commodity Disk Servers". Computing and Software for Big Science 5, n.º 1 (9 de novembro de 2021). http://dx.doi.org/10.1007/s41781-021-00071-1.

Texto completo da fonte
Resumo:
Abstract: CephFS is a network filesystem built upon the Reliable Autonomic Distributed Object Store (RADOS). At CERN we have demonstrated its reliability and elasticity while operating several 100-to-1000 TB clusters which provide NFS-like storage to infrastructure applications and services. At the same time, our lab developed EOS to offer high-performance 100 PB-scale storage for the LHC at extremely low cost while also supporting the complete set of security and functional APIs required by the particle-physics user community. This work seeks to evaluate the performance of CephFS on this cost-optimized hardware when it is combined with EOS to supply the missing functionality. To this end, we have set up a proof-of-concept Ceph Octopus cluster on high-density JBOD servers (840 TB each) with 100 GbE networking. The system uses EOS to provide an overlaid namespace and protocol gateways for HTTP(S) and XRootD, and uses CephFS as an erasure-coded object storage backend. The solution also enables operators to aggregate several CephFS instances and adds features such as third-party copy, SciTokens, and high-level user and quota management. Using simple benchmarks, we measure the cost/performance trade-offs of different erasure-coding layouts as well as the network overheads of these coding schemes. We demonstrate some relevant limitations of the CephFS metadata server and offer improved tunings that are generally applicable. To conclude, we reflect on the advantages and drawbacks of this architecture, such as RADOS-level free-space requirements and double-network penalties, and offer ideas for future improvements.
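The cost side of an erasure-coded layout follows directly from its k+m geometry: every logical byte occupies (k+m)/k raw bytes, and a gateway-mediated write fans the same factor out onto the network (part of the double-network penalty noted above). A small hedged calculation follows, with layouts picked for illustration rather than taken from the paper.

```python
def ec_costs(k: int, m: int, object_mb: float = 100.0):
    """Raw-space and network amplification for a k data + m parity layout."""
    overhead = (k + m) / k                 # raw bytes stored per logical byte
    write_traffic = object_mb * overhead   # MB sent to the OSDs for one write
    tolerated_failures = m                 # chunks that may be lost safely
    return overhead, write_traffic, tolerated_failures

for k, m in [(4, 2), (8, 3), (16, 4)]:     # illustrative layouts only
    ov, wr, f = ec_costs(k, m)
    print(f"EC {k}+{m}: overhead x{ov:.2f}, "
          f"100 MB write -> {wr:.0f} MB on the wire, tolerates {f} failures")
```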
Estilos ABNT, Harvard, Vancouver, APA, etc.
43

Dieter, Michael. "Amazon Noir". M/C Journal 10, n.º 5 (1 de outubro de 2007). http://dx.doi.org/10.5204/mcj.2709.

Texto completo da fonte
Resumo:
There is no diagram that does not also include, besides the points it connects up, certain relatively free or unbounded points, points of creativity, change and resistance, and it is perhaps with these that we ought to begin in order to understand the whole picture. (Deleuze, “Foucault” 37) Monty Cantsin: Why do we use a pervert software robot to exploit our collective consensual mind? Letitia: Because we want the thief to be a digital entity. Monty Cantsin: But isn’t this really blasphemic? Letitia: Yes, but god – in our case a meta-cocktail of authorship and copyright – can not be trusted anymore. (Amazon Noir, “Dialogue”) In 2006, some 3,000 digital copies of books were silently “stolen” from online retailer Amazon.com by targeting vulnerabilities in the “Search inside the Book” feature from the company’s website. Over several weeks, between July and October, a specially designed software program bombarded the Search Inside!™ interface with multiple requests, assembling full versions of texts and distributing them across peer-to-peer networks (P2P). Rather than a purely malicious and anonymous hack, however, the “heist” was publicised as a tactical media performance, Amazon Noir, produced by self-proclaimed super-villains Paolo Cirio, Alessandro Ludovico, and Ubermorgen.com. While controversially directed at highlighting the infrastructures that materially enforce property rights and access to knowledge online, the exploit additionally interrogated its own interventionist status as theoretically and politically ambiguous. That the “thief” was represented as a digital entity or machinic process (operating on the very terrain where exchange is differentiated) and the emergent act of “piracy” was fictionalised through the genre of noir conveys something of the indeterminacy or immensurability of the event. In this short article, I discuss some political aspects of intellectual property in relation to the complexities of Amazon Noir, particularly in the context of control, technological action, and discourses of freedom. Software, Piracy As a force of distribution, the Internet is continually subject to controversies concerning flows and permutations of agency. While often directed by discourses cast in terms of either radical autonomy or control, the technical constitution of these digital systems is more regularly a case of establishing structures of operation, codified rules, or conditions of possibility; that is, of guiding social processes and relations (McKenzie, “Cutting Code” 1-19). Software, as a medium through which such communication unfolds and becomes organised, is difficult to conceptualise as a result of being so event-orientated. There lies a complicated logic of contingency and calculation at its centre, a dimension exacerbated by the global scale of informational networks, where the inability to comprehend an environment that exceeds the limits of individual experience is frequently expressed through desires, anxieties, paranoia. Unsurprisingly, cautionary accounts and moral panics on identity theft, email fraud, pornography, surveillance, hackers, and computer viruses are as commonplace as those narratives advocating user interactivity. 
When analysing digital systems, cultural theory often struggles to describe forces that dictate movement and relations between disparate entities composed by code, an aspect heightened by the intensive movement of informational networks where differences are worked out through the constant exposure to unpredictability and chance (Terranova, “Communication beyond Meaning”). Such volatility partially explains the recent turn to distribution in media theory, as once durable networks for constructing economic difference – organising information in space and time (“at a distance”), accelerating or delaying its delivery – appear contingent, unstable, or consistently irregular (Cubitt 194). Attributing actions to users, programmers, or the software itself is a difficult task when faced with these states of co-emergence, especially in the context of sharing knowledge and distributing media content. Exchanges between corporate entities, mainstream media, popular cultural producers, and legal institutions over P2P networks represent an ongoing controversy in this respect, with numerous stakeholders competing between investments in property, innovation, piracy, and publics. Beginning to understand this problematic landscape is an urgent task, especially in relation to the technological dynamics that organised and propel such antagonisms. In the influential fragment, “Postscript on the Societies of Control,” Gilles Deleuze describes the historical passage from modern forms of organised enclosure (the prison, clinic, factory) to the contemporary arrangement of relational apparatuses and open systems as being materially provoked by – but not limited to – the mass deployment of networked digital technologies. In his analysis, the disciplinary mode most famously described by Foucault is spatially extended to informational systems based on code and flexibility. According to Deleuze, these cybernetic machines are connected into apparatuses that aim for intrusive monitoring: “in a control-based system nothing’s left alone for long” (“Control and Becoming” 175). Such a constant networking of behaviour is described as a shift from “molds” to “modulation,” where controls become “a self-transmuting molding changing from one moment to the next, or like a sieve whose mesh varies from one point to another” (“Postscript” 179). Accordingly, the crisis underpinning civil institutions is consistent with the generalisation of disciplinary logics across social space, forming an intensive modulation of everyday life, but one ambiguously associated with socio-technical ensembles. The precise dynamics of this epistemic shift are significant in terms of political agency: while control implies an arrangement capable of absorbing massive contingency, a series of complex instabilities actually mark its operation. Noise, viral contamination, and piracy are identified as key points of discontinuity; they appear as divisions or “errors” that force change by promoting indeterminacies in a system that would otherwise appear infinitely calculable, programmable, and predictable. The rendering of piracy as a tactic of resistance, a technique capable of levelling out the uneven economic field of global capitalism, has become a predictable catch-cry for political activists. 
In their analysis of multitude, for instance, Antonio Negri and Michael Hardt describe the contradictions of post-Fordist production as conjuring forth a tendency for labour to “become common.” That is, as productivity depends on flexibility, communication, and cognitive skills, directed by the cultivation of an ideal entrepreneurial or flexible subject, the greater the possibilities for self-organised forms of living that significantly challenge its operation. In this case, intellectual property exemplifies such a spiralling paradoxical logic, since “the infinite reproducibility central to these immaterial forms of property directly undermines any such construction of scarcity” (Hardt and Negri 180). The implications of the filesharing program Napster, accordingly, are read as not merely directed toward theft, but in relation to the private character of the property itself; a kind of social piracy is perpetuated that is viewed as radically recomposing social resources and relations. Ravi Sundaram, a co-founder of the Sarai new media initiative in Delhi, has meanwhile drawn attention to the existence of “pirate modernities” capable of being actualised when individuals or local groups gain illegitimate access to distributive media technologies; these are worlds of “innovation and non-legality,” of electronic survival strategies that partake in cultures of dispersal and escape simple classification (94). Meanwhile, pirate entrepreneurs Magnus Eriksson and Rasmus Fleische – associated with the notorious Piratbyrån – have promoted the bleeding away of Hollywood profits through fully deployed P2P networks, with the intention of pushing filesharing dynamics to an extreme in order to radicalise the potential for social change (“Copies and Context”). From an aesthetic perspective, such activist theories are complemented by the affective register of appropriation art, a movement broadly conceived in terms of antagonistically liberating knowledge from the confines of intellectual property: “those who pirate and hijack owned material, attempting to free information, art, film, and music – the rhetoric of our cultural life – from what they see as the prison of private ownership” (Harold 114). These “unruly” escape attempts are pursued through various modes of engagement, from experimental performances with legislative infrastructures (i.e. Kembrew McLeod’s patenting of the phrase “freedom of expression”) to musical remix projects, such as the work of Negativland, John Oswald, RTMark, Detritus, Illegal Art, and the Evolution Control Committee. Amazon Noir, while similarly engaging with questions of ownership, is distinguished by specifically targeting information communication systems and finding “niches” or gaps between overlapping networks of control and economic governance. Hans Bernhard and Lizvlx from Ubermorgen.com (meaning ‘Day after Tomorrow,’ or ‘Super-Tomorrow’) actually describe their work as “research-based”: “we not are opportunistic, money-driven or success-driven, our central motivation is to gain as much information as possible as fast as possible as chaotic as possible and to redistribute this information via digital channels” (“Interview with Ubermorgen”). This has led to experiments like Google Will Eat Itself (2005) and the construction of the automated software thief against Amazon.com, as process-based explorations of technological action.
Agency, Distribution Deleuze’s “postscript” on control has proven massively influential for new media art by introducing a series of key questions on power (or desire) and digital networks. As a social diagram, however, control should be understood as a partial rather than totalising map of relations, referring to the augmentation of disciplinary power in specific technological settings. While control is a conceptual regime that refers to open-ended terrains beyond the architectural locales of enclosure, implying a move toward informational networks, data solicitation, and cybernetic feedback, there remains a peculiar contingent dimension to its limits. For example, software code is typically designed to remain cycling until user input is provided. There is a specifically immanent and localised quality to its actions that might be taken as exemplary of control as a continuously modulating affective materialism. The outcome is a heightened sense of bounded emergencies that are either flattened out or absorbed through reconstitution; however, these are never linear gestures of containment. As Tiziana Terranova observes, control operates through multilayered mechanisms of order and organisation: “messy local assemblages and compositions, subjective and machinic, characterised by different types of psychic investments, that cannot be the subject of normative, pre-made political judgments, but which need to be thought anew again and again, each time, in specific dynamic compositions” (“Of Sense and Sensibility” 34). This event-orientated vitality accounts for the political ambitions of tactical media as opening out communication channels through selective “transversal” targeting. Amazon Noir, for that reason, is pitched specifically against the material processes of communication. The system used to harvest the content from “Search inside the Book” is described as “robot-perversion-technology,” based on a network of four servers around the globe, each with a specific function: one located in the United States that retrieved (or “sucked”) the books from the site, one in Russia that injected the assembled documents onto P2P networks and two in Europe that coordinated the action via intelligent automated programs (see “The Diagram”). According to the “villains,” the main goal was to steal all 150,000 books from Search Inside!™ then use the same technology to steal books from the “Google Print Service” (the exploit was limited only by the amount of technological resources financially available, but there are apparent plans to improve the technique by reinvesting the money received through the settlement with Amazon.com not to publicise the hack). In terms of informational culture, this system resembles a machinic process directed at redistributing copyright content; “The Diagram” visualises key processes that define digital piracy as an emergent phenomenon within an open-ended and responsive milieu. That is, the static image foregrounds something of the activity of copying being a technological action that complicates any analysis focusing purely on copyright as content. In this respect, intellectual property rights are revealed as being entangled within information architectures as communication management and cultural recombination – dissipated and enforced by a measured interplay between openness and obstruction, resonance and emergence (Terranova, “Communication beyond Meaning” 52). 
To understand data distribution requires an acknowledgement of these underlying nonhuman relations that allow for such informational exchanges. It requires an understanding of the permutations of agency carried along by digital entities. According to Lawrence Lessig’s influential argument, code is not merely an object of governance, but has an overt legislative function itself. Within the informational environments of software, “a law is defined, not through a statue, but through the code that governs the space” (20). These points of symmetry are understood as concretised social values: they are material standards that regulate flow. Similarly, Alexander Galloway describes computer protocols as non-institutional “etiquette for autonomous agents,” or “conventional rules that govern the set of possible behavior patterns within a heterogeneous system” (7). In his analysis, these agreed-upon standardised actions operate as a style of management fostered by contradiction: progressive though reactionary, encouraging diversity by striving for the universal, synonymous with possibility but completely predetermined, and so on (243-244). Needless to say, political uncertainties arise from a paradigm that generates internal material obscurities through a constant twinning of freedom and control. For Wendy Hui Kyong Chun, these Cold War systems subvert the possibilities for any actual experience of autonomy by generalising paranoia through constant intrusion and reducing social problems to questions of technological optimisation (1-30). In confrontation with these seemingly ubiquitous regulatory structures, cultural theory requires a critical vocabulary differentiated from computer engineering to account for the sociality that permeates through and concatenates technological realities. In his recent work on “mundane” devices, software and code, Adrian McKenzie introduces a relevant analytic approach in the concept of technological action as something that both abstracts and concretises relations in a diffusion of collective-individual forces. Drawing on the thought of French philosopher Gilbert Simondon, he uses the term “transduction” to identify a key characteristic of technology in the relational process of becoming, or ontogenesis. This is described as bringing together disparate things into composites of relations that evolve and propagate a structure throughout a domain, or “overflow existing modalities of perception and movement on many scales” (“Impersonal and Personal Forces in Technological Action” 201). Most importantly, these innovative diffusions or contagions occur by bridging states of difference or incompatibilities. Technological action, therefore, arises from a particular type of disjunctive relation between an entity and something external to itself: “in making this relation, technical action changes not only the ensemble, but also the form of life of its agent. Abstraction comes into being and begins to subsume or reconfigure existing relations between the inside and outside” (203). Here, reciprocal interactions between two states or dimensions actualise disparate potentials through metastability: an equilibrium that proliferates, unfolds, and drives individuation. While drawing on cybernetics and dealing with specific technological platforms, McKenzie’s work can be extended to describe the significance of informational devices throughout control societies as a whole, particularly as a predictive and future-orientated force that thrives on staged conflicts. 
Moreover, being a non-deterministic technical theory, it additionally speaks to new tendencies in regimes of production that harness cognition and cooperation through specially designed infrastructures to enact persistent innovation without any end-point, final goal or natural target (Thrift 283-295). Here, the interface between intellectual property and reproduction can be seen as a site of variation that weaves together disparate objects and entities by imbrication in social life itself. These are specific acts of interference that propel relations toward unforeseen conclusions by drawing on memories, attention spans, material-technical traits, and so on. The focus lies on performance, context, and design “as a continual process of tuning arrived at by distributed aspiration” (Thrift 295). This later point is demonstrated in recent scholarly treatments of filesharing networks as media ecologies. Kate Crawford, for instance, describes the movement of P2P as processual or adaptive, comparable to technological action, marked by key transitions from partially decentralised architectures such as Napster, to the fully distributed systems of Gnutella and seeded swarm-based networks like BitTorrent (30-39). Each of these technologies can be understood as a response to various legal incursions, producing radically dissimilar socio-technological dynamics and emergent trends for how agency is modulated by informational exchanges. Indeed, even these aberrant formations are characterised by modes of commodification that continually spillover and feedback on themselves, repositioning markets and commodities in doing so, from MP3s to iPods, P2P to broadband subscription rates. However, one key limitation of this ontological approach is apparent when dealing with the sheer scale of activity involved, where mass participation elicits certain degrees of obscurity and relative safety in numbers. This represents an obvious problem for analysis, as dynamics can easily be identified in the broadest conceptual sense, without any understanding of the specific contexts of usage, political impacts, and economic effects for participants in their everyday consumptive habits. Large-scale distributed ensembles are “problematic” in their technological constitution, as a result. They are sites of expansive overflow that provoke an equivalent individuation of thought, as the Recording Industry Association of America observes on their educational website: “because of the nature of the theft, the damage is not always easy to calculate but not hard to envision” (“Piracy”). The politics of the filesharing debate, in this sense, depends on the command of imaginaries; that is, being able to conceptualise an overarching structural consistency to a persistent and adaptive ecology. As a mode of tactical intervention, Amazon Noir dramatises these ambiguities by framing technological action through the fictional sensibilities of narrative genre. Ambiguity, Control The extensive use of imagery and iconography from “noir” can be understood as an explicit reference to the increasing criminalisation of copyright violation through digital technologies. However, the term also refers to the indistinct or uncertain effects produced by this tactical intervention: who are the “bad guys” or the “good guys”? Are positions like ‘good’ and ‘evil’ (something like freedom or tyranny) so easily identified and distinguished? 
As Paolo Cirio explains, this political disposition is deliberately kept obscure in the project: “it’s a representation of the actual ambiguity about copyright issues, where every case seems to lack a moral or ethical basis” (“Amazon Noir Interview”). While user communications made available on the site clearly identify culprits (describing the project as jeopardising arts funding, as both irresponsible and arrogant), the self-description of the artists as political “failures” highlights the uncertainty regarding the project’s qualities as a force of long-term social renewal: Lizvlx from Ubermorgen.com had daily shootouts with the global mass-media, Cirio continuously pushed the boundaries of copyright (books are just pixels on a screen or just ink on paper), Ludovico and Bernhard resisted kickback-bribes from powerful Amazon.com until they finally gave in and sold the technology for an undisclosed sum to Amazon. Betrayal, blasphemy and pessimism finally split the gang of bad guys. (“Press Release”) Here, the adaptive and flexible qualities of informatic commodities and computational systems of distribution are knowingly posited as critical limits; in a certain sense, the project fails technologically in order to succeed conceptually. From a cynical perspective, this might be interpreted as guaranteeing authenticity by insisting on the useless or non-instrumental quality of art. However, through this process, Amazon Noir illustrates how forces confined as exterior to control (virality, piracy, noncommunication) regularly operate as points of distinction to generate change and innovation. Just as hackers are legitimately employed to challenge the durability of network exchanges, malfunctions are relied upon as potential sources of future information. Indeed, the notion of demonstrating ‘autonomy’ by illustrating the shortcomings of software is entirely consistent with the logic of control as a modulating organisational diagram. These so-called “circuit breakers” are positioned as points of bifurcation that open up new systems and encompass a more general “abstract machine” or tendency governing contemporary capitalism (Parikka 300). As a consequence, the ambiguities of Amazon Noir emerge not just from the contrary articulation of intellectual property and digital technology, but additionally through the concept of thinking “resistance” simultaneously with regimes of control. This tension is apparent in Galloway’s analysis of the cybernetic machines that are synonymous with the operation of Deleuzian control societies – i.e. “computerised information management” – where tactical media are posited as potential modes of contestation against the tyranny of code, “able to exploit flaws in protocological and proprietary command and control, not to destroy technology, but to sculpt protocol and make it better suited to people’s real desires” (176). While pushing a system into a state of hypertrophy to reform digital architectures might represent a possible technique that produces a space through which to imagine something like “our” freedom, it still leaves unexamined the desire for reformation itself as nurtured by and produced through the coupling of cybernetics, information theory, and distributed networking. This draws into focus the significance of McKenzie’s Simondon-inspired cybernetic perspective on socio-technological ensembles as being always-already predetermined by and driven through asymmetries or difference. 
As Chun observes, consequently, there is no paradox between resistance and capture since “control and freedom are not opposites, but different sides of the same coin: just as discipline served as a grid on which liberty was established, control is the matrix that enables freedom as openness” (71). Why “openness” should be so readily equated with a state of being free represents a major unexamined presumption of digital culture, and leads to the associated predicament of attempting to think of how this freedom has become something one cannot not desire. If Amazon Noir has political currency in this context, however, it emerges from a capacity to recognise how informational networks channel desire, memories, and imaginative visions rather than just cultivated antagonisms and counterintuitive economics. As a final point, it is worth observing that the project was initiated without publicity until the settlement with Amazon.com. There is, as a consequence, nothing to suggest that this subversive “event” might have actually occurred, a feeling heightened by the abstractions of software entities. The extent to which we believe in “the big book heist,” that such an act is even possible, is a gauge through which the paranoia of control societies is illuminated as a longing or desire for autonomy. As Hakim Bey observes in his conceptualisation of “pirate utopias,” such fleeting encounters with the imaginaries of freedom flow back into the experience of the everyday as political instantiations of utopian hope. Amazon Noir, with all its underlying ethical ambiguities, presents us with a challenge to rethink these affective investments by considering our profound weakness in mastering the complexities and constant intrusions of control. It provides an opportunity to conceive of a future that begins with limits and limitations as immanently central, even foundational, to our deep interconnection with socio-technological ensembles. References “Amazon Noir – The Big Book Crime.” <http://www.amazon-noir.com/>. Bey, Hakim. T.A.Z.: The Temporary Autonomous Zone, Ontological Anarchy, Poetic Terrorism. New York: Autonomedia, 1991. Chun, Wendy Hui Kyong. Control and Freedom: Power and Paranoia in the Age of Fibre Optics. Cambridge, MA: MIT Press, 2006. Crawford, Kate. “Adaptation: Tracking the Ecologies of Music and Peer-to-Peer Networks.” Media International Australia 114 (2005): 30-39. Cubitt, Sean. “Distribution and Media Flows.” Cultural Politics 1.2 (2005): 193-214. Deleuze, Gilles. Foucault. Trans. Seán Hand. Minneapolis: U of Minnesota P, 1986. ———. “Control and Becoming.” Negotiations 1972-1990. Trans. Martin Joughin. New York: Columbia UP, 1995. 169-176. ———. “Postscript on the Societies of Control.” Negotiations 1972-1990. Trans. Martin Joughin. New York: Columbia UP, 1995. 177-182. Eriksson, Magnus, and Rasmus Fleische. “Copies and Context in the Age of Cultural Abundance.” Online posting. 5 June 2007. Nettime 25 Aug 2007. Galloway, Alexander. Protocol: How Control Exists after Decentralization. Cambridge, MA: MIT Press, 2004. Hardt, Michael, and Antonio Negri. Multitude: War and Democracy in the Age of Empire. New York: Penguin Press, 2004. Harold, Christine. OurSpace: Resisting the Corporate Control of Culture. Minneapolis: U of Minnesota P, 2007. Lessig, Lawrence. Code and Other Laws of Cyberspace. New York: Basic Books, 1999. McKenzie, Adrian. Cutting Code: Software and Sociality. New York: Peter Lang, 2006. ———.
“The Strange Meshing of Impersonal and Personal Forces in Technological Action.” Culture, Theory and Critique 47.2 (2006): 197-212. Parikka, Jussi. “Contagion and Repetition: On the Viral Logic of Network Culture.” Ephemera: Theory & Politics in Organization 7.2 (2007): 287-308. “Piracy Online.” Recording Industry Association of America. 28 Aug 2007. <http://www.riaa.com/physicalpiracy.php>. Sundaram, Ravi. “Recycling Modernity: Pirate Electronic Cultures in India.” Sarai Reader 2001: The Public Domain. Delhi: Sarai Media Lab, 2001. 93-99. <http://www.sarai.net>. Terranova, Tiziana. “Communication beyond Meaning: On the Cultural Politics of Information.” Social Text 22.3 (2004): 51-73. ———. “Of Sense and Sensibility: Immaterial Labour in Open Systems.” DATA Browser 03 – Curating Immateriality: The Work of the Curator in the Age of Network Systems. Ed. Joasia Krysa. New York: Autonomedia, 2006. 27-38. Thrift, Nigel. “Re-inventing Invention: New Tendencies in Capitalist Commodification.” Economy and Society 35.2 (2006): 279-306. Citation reference for this article MLA Style Dieter, Michael. "Amazon Noir: Piracy, Distribution, Control." M/C Journal 10.5 (2007). [your date of access] <http://journal.media-culture.org.au/0710/07-dieter.php>. APA Style Dieter, M. (Oct. 2007) "Amazon Noir: Piracy, Distribution, Control," M/C Journal, 10(5). Retrieved [your date of access] from <http://journal.media-culture.org.au/0710/07-dieter.php>.
ABNT, Harvard, Vancouver, APA styles, etc.
44

Mackenzie, Adrian. "Making Data Flow". M/C Journal 5, no. 4 (August 1, 2002). http://dx.doi.org/10.5204/mcj.1975.

Full text source
Abstract:
Why has software code become an object of intense interest in several different domains of cultural life? In art (.net art or software art), in Open source software (Linux, Perl, Apache, et cetera (Moody; Himanen)), in tactical media actions (hacking of WEF Melbourne and Nike websites), and more generally, in the significance attributed to coding as work at the pinnacle of contemporary production of information (Negri and Hardt 298), code itself has somehow recently become significant, at least for some subcultures. Why has that happened? At one level, we could say that this happened because informatic interaction (websites, email, chat, online gaming, ecommerce, etc) has become mainstream to media production, organisational practice and indeed, quotidian life in developed and developing countries. As information production moves into the mainstream, working against mainstream control of flows of information means going upstream. For artists, tactical media groups and hackers, code seems to provide a way to, so to speak, reach over the shoulder of mainstream media channels and contest their control of information flows.1 A basic question is: does it? What code does We all see content flowing through the networks. Yet the expressive traits of the flows themselves are harder to grapple with, partly because they are largely infrastructural. When media and cultural theory discuss information-network society, cyberculture or new media, questions of flow specificity are usually downplayed in favour of high-level engagement with information as content. Arguably, the heightened attention to code attests to an increasing awareness that power relations are embedded in the generation and control of flow rather than just the meanings or contents that might be transported by flow. In this context, loops provide a really elementary and concrete way to explore how code participates in information flows. Loops structure almost every code object at a basic level. The programmed loop, a very mundane construct, can be found in any new media artist's or software engineer's coding toolkit. All programming languages have them. In popular programming and scripting languages such as FORTRAN, C, Pascal, C++, Java, Visual Basic, Perl, Python, JavaScript, ActionScript, etc, an almost identical set of looping constructs is found.2 Working with loops as material and as instrument constitutes an indispensable part of producing code-based objects. On the one hand, the loop is the most basic technical element of code as written text. On the other hand, as process executed by CPUs, and in ways that are not immediately obvious even to programmers themselves, loops of various kinds underpin the generative potential of code.3 Crucially, code is concerned with operationality rather than meaning (Lash 203). Code does not directly create meaning. It circulates, transforms, and reproduces messages and patterns of widely varying semantic and contextual richness. By definition, flow is something continuous. In the case of information, what flows are not things but patterns which can be rendered perceptible in different ways—as image, text, sound—on screen, display, and speaker. While the patterns become perceptible in a range of different spatio-temporal modes, their circulation is serialised. They are, as we know, composed of sequences of modulations (bits). Loops control the flow of patterns.
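How nearly identical these looping constructs are across languages is easy to show concretely. A minimal sketch in Python (one of the scripting languages just listed); the loop forms, not the particular syntax, are the point:

# Definite loop: repeats once for each element of a finite collection.
for item in ["image", "text", "sound"]:
    print(item)

# Indefinite loop ("repeat/while"): repeats while a condition holds.
count = 0
while count < 3:
    count += 1

# Branching ("if/then") alters the linear flow inside a loop.
for bit in [1, 0, 1, 1]:
    if bit:
        print("signal")
    else:
        print("silence")

Near-equivalents of these three constructs can be written, with only surface changes, in every language named above.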
Lev Manovich writes: programming involves altering the linear flow of data through control structures, such as 'if/then' and 'repeat/while'; the loop is the most elementary of these control structures (Manovich 189). Drawing on these constructs, programming or coding work gains traction in flows. Interactive looping Loops also generate flows by multiplying events. The most obvious example of how code loops generate and control flows comes from the graphic user interfaces (GUIs) provided by typical operating systems such as Windows, MacOs or one of the Linux desktop environments. These operating systems configure the visual space of millions of desktop screens according to heavily branded designs. Basically they all divide the screen into different framing areas—panels, dividing lines, toolbars, frames, windows—and then populate those areas with controls and indicators—buttons, icons, checkboxes, dropdown lists, menus, popup menus. Framing areas hold content—text, tables, images, video. Controls, usually clustered around the edge of the frame, transform the content displayed in the framed areas in many different ways. Visual controls are themselves hooked up via code to physical input devices such as keyboard, mouse, joystick, buttons and trackpad. The highly habituated and embodied experience of interacting with contemporary GUIs consists of moving in and out, within and between different framing areas, using visual controls that respond either to pointing (with the mouse) or keyboard command to change what is displayed, how it is displayed or indeed to move that content elsewhere (onto disk, across a network). Beneath the highly organised visual space of the GUI, lie hundreds if not thousands of loops. The work of coding these interfaces involves making loops, splicing loops together, and nesting loops within loops. At base, the so-called event loop means that the GUI in principle stands ready at any time to accept input from the physical interface devices. Depending on what that input is, it may translate into direct changes within the framed areas (for instance, keystrokes appear in a text field as letters) or changes affecting the controls (for instance, Control-Enter might signal ‘send the text as an email’). What we usually understand by interactivity stems from the way that a loop constantly accepts signals from the physical inputs, queues the signals as events, and deals with them one by one as discrete changes in what appears on screen. Within the GUI's basic event loop, many other loops are constantly starting and finishing. They are nested and unnested. They often affect some or other of the dozens of processes running at any one time within the operating system. Sometimes a command coming from the keyboard or a signal arriving from some other peripheral interface (the network interface card, the printer, a scanner, etc) will trigger the execution of a new process, itself composed of manifold loops. Hence loops often transiently interact with each other during execution of code. At base, the GUI shows something important, something that extends well beyond the domain of the GUI per se: the event loop generates and controls information flows at the same time. People type on keyboards or manipulate game controllers. A single keypress or mouse click itself hardly constitutes a flow. Yet the event loop can amplify it into a cascade of thousands of events because it sets other loops in process. What we call information flow springs from the multiplicatory effect of loops.
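The event loop sketched in this passage can be written down in a few lines. The following is a schematic illustration only; the Event class and handler registry are invented for the example, not drawn from any particular GUI toolkit:

import queue
from dataclasses import dataclass

@dataclass
class Event:
    kind: str            # e.g. "keypress", "click", "quit"
    payload: object = None

def event_loop(events: queue.Queue, handlers: dict) -> None:
    """Stand ready for input, queue it as events, deal with them one by one."""
    while True:
        ev = events.get()                # blocks until an input signal arrives
        if ev.kind == "quit":
            break
        handler = handlers.get(ev.kind)
        if handler is not None:
            handler(ev)                  # one keypress can set further loops in process

# A single simulated session: two queued signals, then shutdown.
q = queue.Queue()
q.put(Event("keypress", "a"))
q.put(Event("quit"))
event_loop(q, {"keypress": lambda ev: print("typed:", ev.payload)})

Everything the passage calls interactivity happens at the narrow waist of events.get(): signals accumulate in the queue in arrival order, and the loop serialises them into discrete changes.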
A typology of looping Information flows don't come from nowhere. They always go somewhere. Perhaps we could generalise a little from the mundane example of the GUI and say that the generation and control of information flows through loops is itself regulated by bounding conditions. A bounding condition determines the number of times and the sequence of operations carried out by a loop. They often come from outside the machine (interfaces of many different kinds) and from within it (other processes running at the same time, dependent on the operating system architecture and the hardware platform). Their regulatory role suggests the possibility of classifying loops according to boundary conditions.4 The following table classifies loops based on bounding conditions:

Type of loop               | Bounding condition                                          | Typical location
Simple & indefinite        | No bounding conditions                                      | Event loops in GUIs, servers ...
Simple & definite          | Bounding conditions determined by a finite set of elements | Counting, sorting, input and output
Nested & definite          | Multiple bounding conditions                                | Transforming grid and table structures
Recursive                  | Depth of possible recursion (memory or time)                | Searching and sorting of tree or network structures
Result controlled          | Loop ends when some goal has been reached                   | Goal-seeking algorithms
Interactive and indefinite | Bounding conditions change during the course of the loop    | User interfaces or interaction

Although it risks simplifying something that is quite intricate in any actually executing process, this classification does stress that the distinguishing feature of loops may well be their bounding conditions. In practical terms, within program code, a bounding condition takes the form of some test carried out before, during or after each iteration of a loop. The bounding conditions for some loops relate to data that the code expects to come from other places—across networks, from the user interface, or some other devices. For other loops, the bounding conditions continually emerge in the course of the loop itself—the result of a calculation, finding some result in the course of searching a collection or receiving some new input in a flow of data from an interface or network connection. Based on the classification, we could suggest that loops not only generate flows, but they generate those flows within particular spatio-temporal manifolds. Put less abstractly, if we accept that flows don't come from nowhere, we then need to say what kind of places they do come from. The classification shows that they do not come from homogeneous spaces. In fact they relate to different topologies, to the hugely diverse orderings of signs and gestures within mediatic cultures. To take a mundane example, why has the table become such an important element in the HTML coding of webpages? Clearly tables provide an easy way to organise a page. Tables as classifying and visual ordering devices are nothing new. Along with lists, they have been used for centuries. However, the table as onscreen spatial entity also maps very directly onto a nested loop: the inner loop generates the horizontal row contents; the outer loop places the output of the inner loop in vertical order. As web-designers quickly discovered during the 1990s, HTML tables are rendered quickly by browsers and can easily position different contents—images, headings, text, lines, spaces—in proximity. In short, nested loops can quickly turn a table into a serial flow or quickly render a table out of a serial flow.
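The mapping from nested loop to HTML table described just above can be made explicit. A small sketch (the function name and sample contents are invented for illustration): the outer loop places rows in vertical order while the inner loop generates each row's horizontal contents:

def table_from_flow(items, columns=3):
    """Turn a serial flow of items into an HTML table with a nested loop."""
    html = ["<table>"]
    for start in range(0, len(items), columns):        # outer loop: vertical order
        row = "".join(                                 # inner loop: row contents
            f"<td>{cell}</td>" for cell in items[start:start + columns]
        )
        html.append(f"<tr>{row}</tr>")
    html.append("</table>")
    return "\n".join(html)

print(table_from_flow(["image", "heading", "text", "line", "space", "video"]))

Run the other way, walking an existing table row by row and cell by cell, the same nested structure renders a table back out as a serial flow.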
Implications We started with the observation that artists, writers, hackers and media activists are working with code in order to reposition themselves in relation to information flows. Through technical elements such as loops, they reappropriate certain facets of the production of information and communication. Working with these and other elements, they look for different points of entry into the flows, attempting to move upstream of the heavily capitalised sites of mainstream production such as the Windows GUI, eCommerce websites or blockbuster game titles. The proliferation of information objects in music, in visual culture, in database and net-centred forms of interactivity ranging from computer games to chat protocols, suggests that the coding work can trigger powerful shifts in the cultures of circulation. Analysis of loops also suggests that the notion of data or information flow, understood as the continuous gliding of bits through systems of communication, needs revision. Rather than code simply controlling flow, code generates flows as well. What might warrant further thought is just how different kinds of bounding conditions generate different spatio-temporal patterns and modes of inclusion within flows. The diversity of loops within information objects implies a variety of topologically complicated places. It would be possible to work through the classification describing how each kind of loop maps into different spatial and temporal orderings. In particular, we might want to focus on how more complicated loops—result controlled, recursive, or interactive and indefinite types—map out more topologically complicated spaces and times. For my purposes, the important point is that bounding conditions not only regulate loops, they bring different kinds of spatio-temporal manifold into the seriality of flow. They imprint spatial and temporal ordering. Here the operationality of code begins to display a generative dimension that goes well beyond merely transporting or communicating content. Notes 1. At a more theoretical level, for a decade or so fairly abstract notions of virtuality have dominated media and cultural studies approaches to new media. While that domination has been increasingly contested by more fine-grained studies of how the Internet is enmeshed with different places (Miller and Slater), attention to code is justified on the grounds that it constitutes an increasingly important form of expression within information flows. 2. Detailed discussion of these looping constructs can be found in any programming textbook or introductory computer science course, so I will not be going through them in any detail. 3. For instance, the cycles of the clock chip are absolutely irreducible. Virtually all programs implicitly rely on a clock chip to regulate execution of their instructions. 4. A classification can act as a symptomatology, that is, as something that sets out the various signs of the existence of a particular condition (Deleuze 368), in this case, the operationality of code. References Appadurai, Arjun. Modernity at Large: Cultural Dimensions of Globalization. Minneapolis: U of Minnesota P, 1996. Deleuze, Gilles. The Brain is the Screen. An Interview with Gilles Deleuze. The Brain is the Screen. Deleuze and the Philosophy of Cinema. Ed. Gregory Flaxman. Minneapolis: U of Minnesota P, 2000. 365-68. Hardt, Michael and Antonio Negri. Empire. Cambridge, MA: Harvard U P, 2000. Himanen, Pekka. The Hacker Ethic and the Spirit of the Information Age.
London: Secker and Warburg, 2001. Lash, Scott. Critique of Information. London: Sage, 2002. Manovich, Lev. What is Digital Cinema? Ed. Peter Lunenfeld. The Digital Dialectic: New Essays on New Media. Cambridge, MA: MIT, 1999. 172-92. Miller, Daniel, and Don Slater. The Internet: An Ethnographic Approach. Oxford: Berg, 2000. Moody, Glyn. Rebel Code: Linux and the Open Source Revolution. Middlesworth: Penguin, 2001. Citation reference for this article MLA Style Mackenzie, Adrian. "Making Data Flow" M/C: A Journal of Media and Culture 5.4 (2002). [your date of access] <http://www.media-culture.org.au/mc/0208/data.php>. Chicago Style Mackenzie, Adrian, "Making Data Flow" M/C: A Journal of Media and Culture 5, no. 4 (2002), <http://www.media-culture.org.au/mc/0208/data.php> ([your date of access]). APA Style Mackenzie, Adrian. (2002) Making Data Flow. M/C: A Journal of Media and Culture 5(4). <http://www.media-culture.org.au/mc/0208/data.php> ([your date of access]).
ABNT, Harvard, Vancouver, APA styles, etc.
45

Hinner, Kajetan. "Statistics of Major IRC Networks". M/C Journal 3, no. 4 (August 1, 2000). http://dx.doi.org/10.5204/mcj.1867.

Full text source
Abstract:
Internet Relay Chat (IRC) is a text-based computer-mediated communication (CMC) service in which people can meet and chat in real time. Most chat occurs in channels named for a specific topic, such as #usa or #linux. A user can take part in several channels when connected to an IRC network. For a long time the only major IRC network available was EFnet, founded in 1990. Over the 1990s three other major IRC networks developed, Undernet (1993), DALnet (1994) and IRCnet (which split from EFnet in June 1996). Several causes led to the separate development of IRC networks: fast growth of user numbers, poor scalability of the IRC protocol and content disagreements, like allowing or prohibiting 'bot programs. Today we are experiencing the development of regional IRC networks, such as BrasNet for Brazilian users, and increasing regionalisation of the global networks -- IRCnet users are generally European, EFnet users generally from the Americas and Australia. All persons connecting to an IRC network at one time create that IRC network's user space. People are constantly signing on and off each network. The total number of users who have ever been to a specific IRC network could be called its 'social space' and an IRC network's social space is by far larger than its user space at any one time. Although there has been research on IRC almost from its beginning (it was developed in 1988, and the first research was made available in late 1991 (Reid)), resources on quantitative development are rare. To rectify this situation, a quantitative data logging 'bot program -- Socip -- was created and set to run on various IRC networks. Socip has been running for almost two years on several IRC networks, giving Internet researchers empirical data of the quantitative development of IRC. Methodology Any approach to gathering quantitative data on IRC needs to fulfil the following tasks: Store the number of users that are on an IRC network at a given time, e.g. every five minutes; Store the number of channels; and, Store the number of servers. It is possible to get this information using the '/lusers' command on an IRC-II client, entered by hand. This approach yields results as in Table 1.

Table 1: Number of IRC users on January 31st, 1995

Date     | Time  | Users | Invisible | Servers | Channels
31.01.95 | 10:57 | 2737  | 2026      | 93      | 1637

During the first months of 1995, it was even possible to get all user information using the '/who **' command. However, on current major IRC networks with greater than 50000 users this method is denied by the IRC Server program, which terminates the connection because it is too slow to accept that amount of data. Added to this problem is the fact that collecting these data manually is an exhausting and repetitive task, better suited to automation. Three approaches to automation were attempted in the development process. The 'Eggdrop' approach The 'Eggdrop' 'bot is one of the best-known IRC 'bot programs. Once programmed, 'bots can act autonomously on an IRC network, and Eggdrop was considered particularly convenient because customised modules could be easily installed. However, testing showed that the Eggdrop 'bot was unsuitable for two reasons. The first was technical: for reasons undetermined, all Eggdrop modules created extensive CPU usage, making it impossible to run several Eggdrops simultaneously to research a number of IRC networks. The second reason had to do with the statistics to be obtained.
The objective was to get a snapshot of current IRC users and IRC channel use every five minutes, written into an ASCII file. It was impossible to extend Eggdrop's possibilities in a way that it would periodically submit the '/lusers' command and write the received data into a file. For these reasons, and some security concerns, the Eggdrop approach was abandoned. IrcII was a UNIX IRC client with its own scripting language, making it possible to write command files which periodically submit the '/lusers' command to any chosen IRC server and log the command's output. Four different scripts were used to monitor IRCnet, EFnet, DALnet and Undernet from January to October 1998. These scripts were named Socius_D, Socius_E, Socius_I and Socius_U (depending on the network). Every hour each script stored the number of users and channels in a logfile (examinable using another script written in the Perl language). There were some drawbacks to the ircII script approach. While the need for a terminal to run on could be avoided using the 'screen' package -- making it possible to start ircII, run the scripts, detach, and log off again -- it was impossible to restart ircII and the scripts using an automatic task-scheduler. Thus periodic manual checks were required to find out if the scripts were still running and restart them if needed (e.g. if the server connection was lost). These checks showed that at least one script would not be running after 10 hours. Additional disadvantages were the lengthy log files and the necessity of providing a second program to extract the log file data and write it into a second file from which meaningful graphs could be created. The failure of the Eggdrop and ircII scripting approaches led to the solution still in use today. Perl script-only approach Perl is a powerful script language for handling file-oriented data when speed is not extremely important. Its version 5 flavour allows a lot of modules to use it for expansion, including the Net::IRC package. The object-oriented Perl interface enables Perl scripts to connect to an IRC server, and use the basic IRC commands. The Socip.pl program includes all server definitions needed to create connections. Socip is currently monitoring ten major IRC networks, including DALnet, EFnet, IRCnet, the Microsoft Network, Talkcity, Undernet and Galaxynet. When run, "Social science IRC program" selects a nickname from its list corresponding to the network -- for EFnet, the first nickname used is Socip_E1. It then functions somewhat like a 'bot. Using that nickname, Socip tries to create an IRC connection to a server of the given network. If there is no failure, handlers are set up which take care of proper reactions to IRC server messages (such as Ping-pong, message output and reply). Socip then joins the channel #hose (the name has no special meaning), a maintenance channel with the additional effect of real persons meeting the 'bot and trying to interact with it every now and then. Those interactions are logged too. Sitting in that channel, the script sleeps periodically and checks if a certain time span has passed (the default is five minutes). After that, the '/lusers' command's output is stored in a data file for each IRC network and the IRC network's RRD (Round Robin database) file is updated. This database, which is organised chronologically, offers great detail for recent events and more condensed information for older events. User and channel information younger than 10 days is stored in five-minute detail.
If older than two years, the same information is automatically averaged and stored in a per-day resolution. In case of network problems, Socip acts as necessary. For example, it recognises a connection termination and tries to reconnect after pausing by using the next nickname on the list. This prevents nickname collision problems. If the IRC server does not respond to '/luser' commands three times in a row, the next server on the list is accessed. Special (crontab-invoked) scripts take care of restarting Socip when necessary, as in termination of the script because of network problems, IRC operator kill or power failure. After a reboot all scripts are automatically restarted. All monitoring is done on a Linux machine (Pentium 120, 32 MB, Debian Linux 2.1) which is up all the time. Processor load is not extensive, and this machine also acts as the Sociology Department's WWW-Server. Graphs creation Graphs can be created from the data in Socip's RRD files. This task is done using the MRTG (multi router traffic grapher) program by Tobias Oetiker. A script updates all IRC graphs four times a day. Usage of each IRC network is visualised through five graphs: Daily, Weekly and Monthly users and channels, accompanied by two graphs showing all known data: users/channels and servers. All this information is continuously published on the World Wide Web at http://www.hinner.com/ircstat. Figures The following samples demonstrate what information can be produced by Socip. As already mentioned, graphs of all monitored networks are updated four times a day, with five graphs for each IRC network. Figure 1 shows the rise of EFnet users from about 40000 in November 1998 to 65000 in July 2000. Sampled data oscillates around an average amount, which results from the different time zones of users.

Fig. 1: EFnet - Users and Channels since November 1998

Figure 2 illustrates the decrease of interconnected EFnet servers over the years. Each server is now handling more and more users. Reasons for taking IRC servers off the net are security concerns (attacks on the server by malicious persons), new payment schemes, maintenance effort and cost.

Fig. 2: EFnet - Servers since November 1998

A nice example of a heavily changing weekly graph is Figure 3, which shows peaks shortly before 6pm CEST and almost no users shortly after midnight.

Fig. 3: Galaxynet: Weekly Graph (July, 15th-22nd, 2000)

The daily graph portrays usage variations with even more detail. Figure 4 is taken from Undernet user and channel data. The vertical gap in the graph indicates missing data, caused either by a net split or other network problems.

Fig. 4: Undernet: Daily Graph: July, 22nd, 2000

The final example (Figure 5) shows a weekly graph of the Webchat (http://www.webchat.org) network. It can be seen that every day the user count varies from 5000 to nearly 20000, and that channel numbers fluctuate in concert, from 2500 to 5000.

Fig. 5: Webchat: Monthly graph, Week 24-29, 2000

Not every IRC user is connected all the time to an IRC network. This figure may have increased lately with more and more flatrates and cheap Internet access offers, but in general most users will sign off the network after some time. This is why IRC is a very dynamic society, with its membership constantly in flux.
Maximum user counts only give the highest number of members who were simultaneously online at some point, and one could only guess at the number of total users of the network -- that is, including those who are using that IRC service but are not signed on at that time. To answer these questions, more thorough investigation is necessary. Then inflows and outflows might be more readily estimated. Table 2 shows the all-time maximum user counts of seven IRC networks, compared to the average numbers of IRC users of the four major IRC networks during the third quarter 1998 (based on available data).

Table 2: Maximum user counts of selected IRC networks

Network    | Max. 2000 | 3rd Q. 1998
DALnet     | 64276     | 21000
EFnet      | 64309     | 37000
Galaxy Net | 15253     | n/a
IRCnet     | 65340     | 24500
MS Chat    | 17392     | n/a
Undernet   | 60210     | 24000
Webchat    | 19793     | n/a

Compared with the 200-300 users in 1991 and the 7000 IRC-chatters in 1994, the recent growth is certainly extraordinary: it adds up to a total of 306573 users across all monitored networks. It can be expected that the 500000 IRC user threshold will be passed some time during the year 2001. As a final remark, it should be said that obviously Web-based chat systems will be more and more common in the future. These chat services do not use standard IRC protocols, and will be very hard to monitor. Given that these systems are already quite popular, the actual number of chat users in the world could have already passed the half million landmark. References Reid, Elizabeth. "Electropolis: Communications and Community on Internet Relay Chat." Unpublished Honours Dissertation. U of Melbourne, 1991. The Socip program can be obtained at no cost from http://www.hinner.com. Most IRC networks can be accessed with the original Net::Irc Perl extension, but for some special cases (e.g. Talkcity) an extended version is needed, which can also be found there. Citation reference for this article MLA style: Kajetan Hinner. "Statistics of Major IRC Networks: Methods and Summary of User Count." M/C: A Journal of Media and Culture 3.4 (2000). [your date of access] <http://www.api-network.com/mc/0008/count.php>. Chicago style: Kajetan Hinner, "Statistics of Major IRC Networks: Methods and Summary of User Count," M/C: A Journal of Media and Culture 3, no. 4 (2000), <http://www.api-network.com/mc/0008/count.php> ([your date of access]). APA style: Kajetan Hinner. (2000) Statistics of major IRC networks: methods and summary of user count. M/C: A Journal of Media and Culture 3(4). <http://www.api-network.com/mc/0008/count.php> ([your date of access]).
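The sampling method the article describes can be sketched with the Python standard library alone. This is a simplified reconstruction, not Hinner's Socip code: the nickname, the reliance on numeric reply 251 (RPL_LUSERCLIENT), and the daily averaging function are assumptions standing in for the behaviour described above, and a production logger would also need the reconnection and nickname-rotation logic Socip implements:

import socket
import time

def poll_lusers(server: str, port: int = 6667, nick: str = "Socip_X1",
                interval: int = 300) -> None:
    """Connect to an IRC server and log LUSERS statistics every `interval` seconds."""
    sock = socket.create_connection((server, port))
    f = sock.makefile("rw", encoding="latin-1", newline="")
    f.write(f"NICK {nick}\r\nUSER {nick} 0 * :statistics bot\r\n")
    f.flush()
    last_poll = 0.0
    for line in f:
        line = line.rstrip("\r\n")
        if line.startswith("PING"):                  # keep the connection alive
            f.write("PONG" + line[4:] + "\r\n")
            f.flush()
        if time.time() - last_poll >= interval:      # piggybacks on server traffic;
            f.write("LUSERS\r\n")                    # a real client would use a timer
            f.flush()
            last_poll = time.time()
        if " 251 " in line:                          # 251: "There are N users and ..."
            print(time.strftime("%d.%m.%y %H:%M"), line)

def consolidate(samples, bucket_seconds: int = 86400) -> dict:
    """Round-robin-style consolidation: average (timestamp, users) samples per day."""
    buckets = {}
    for ts, users in samples:
        buckets.setdefault(int(ts) // bucket_seconds, []).append(users)
    return {day: sum(v) / len(v) for day, v in buckets.items()}

The two functions mirror the two halves of the design: fine-grained collection every five minutes, and automatic averaging into coarser resolutions as the data ages.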
ABNT, Harvard, Vancouver, APA styles, etc.
46

Deck, Andy. "Treadmill Culture". M/C Journal 6, no. 2 (April 1, 2003). http://dx.doi.org/10.5204/mcj.2157.

Full text source
Abstract:
Since the first days of the World Wide Web, artists like myself have been exploring the new possibilities of network interactivity. Some good tools and languages have been developed and made available free for the public to use. This has empowered individuals to participate in the media in ways that are quite remarkable. Nonetheless, the future of independent media is clouded by legal, regulatory, and organisational challenges that need to be addressed. It is not clear to what extent independent content producers will be able to build upon the successes of the 90s – it is yet to be seen whether their efforts will be largely nullified by the anticyclones of a hostile media market. Not so long ago, American news magazines were covering the Browser War. Several real wars later, the terms of surrender are becoming clearer. Now both of the major Internet browsers are owned by huge media corporations, and most of the states (and Reagan-appointed judges) that were demanding the break-up of Microsoft have given up. A curious about-face occurred in U.S. Justice Department policy when John Ashcroft decided to drop the federal case. Maybe Microsoft's value as a partner in covert activity appealed to Ashcroft more than free competition. Regardless, Microsoft is now turning its wrath on new competitors, people who are doing something very, very bad: sharing the products of their own labour. This practice of sharing source code and building free software infrastructure is epitomised by the continuing development of Linux. Everything in the Linux kernel is free, publicly accessible information. As a rule, the people building this "open source" operating system software believe that maintaining transparency is important. But U.S. courts are not doing much to help. In a case brought by the Motion Picture Association of America against Eric Corley, a federal district court blocked the distribution of source code that enables these systems to play DVDs. In addition to censoring Corley's journal, the court ruled that any programmer who writes a program that plays a DVD must comply with a host of license restrictions. In short, an established and popular media format (the DVD) cannot be used under open source operating systems without sacrificing the principle that software source code should remain in the public domain. Should the contents of operating systems be tightly guarded secrets, or subject to public review? If there are capable programmers willing to create good, free operating systems, should the law stand in their way? The question concerning what type of software infrastructure will dominate personal computers in the future is being answered as much by disappointing legal decisions as it is by consumer choice. Rather than ensuring the necessary conditions for innovation and cooperation, the courts permit a monopoly to continue. Rather than endorsing transparency, secrecy prevails. Rather than aiming to preserve a balance between the commercial economy and the gift-economy, sharing is being undermined by the law. Part of the mystery of the Internet for a lot of newcomers must be that it seems to disprove the old adage that you can't get something for nothing. Free games, free music, free pornography, free art. Media corporations are doing their best to change this situation. The FBI and trade groups have blitzed the American news media with alarmist reports about how children don't understand that sharing digital information is a crime. 
Teacher Gail Chmura, the star of one such media campaign, says of her students, "It's always been interesting that they don't see a connection between the two. They just don't get it" (Hopper). Perhaps the confusion arises because the kids do understand that digital duplication lets two people have the same thing. Theft is at best a metaphor for the copying of data, because the original is not stolen in the same sense as a material object. In the effort to liken all copying to theft, legal provisions for the fair use of intellectual property are neglected. Teachers could just as easily emphasise the importance of sharing and the development of an electronic commons that is free for all to use. The values advanced by the trade groups are not beyond question and are not historical constants. According to Donald Krueckeberg, Rutgers University Professor of Urban Planning, Native Americans tied the concept of property not to ownership but to use. "One used it, one moved on, and use was shared with others" (qtd. in Batt). Perhaps it is necessary for individuals to have dominion over some private data. But who owns the land, wind, sun, and sky of the Internet – the infrastructure? Given that publicly-funded research and free software have been as important to the development of the Internet as have business and commercial software, it is not surprising that some ambiguity remains about the property status of the dataverse. For many the Internet is as much a medium for expression and the interplay of languages as it is a framework for monetary transaction. In the case involving DVD software mentioned previously, there emerged a grass-roots campaign in opposition to censorship. Dozens of philosophical programmers and computer scientists asserted the expressive and linguistic bases of software by creating variations on the algorithm needed to play DVDs. The forbidden lines of symbols were printed on T-shirts, translated into different computer languages, translated into legal rhetoric, and even embedded into DNA and pictures of MPAA president Jack Valenti (see e.g. Touretzky). These efforts were inspired by a shared conviction that important liberties were at stake. Supporting the MPAA's position would do more than protect movies from piracy. The use of the algorithm was not clearly linked to an intent to pirate movies. Many felt that outlawing the DVD algorithm, which had been experimentally developed by a Norwegian teenager, represented a suppression of gumption and ingenuity. The court's decision rejected established principles of fair use, denied the established legality of reverse engineering software to achieve compatibility, and asserted that journalists and scientists had no right to publish a bit of code if it might be misused. In a similar case in April 2000, a U.S. court of appeals found that First Amendment protections did apply to software (Junger). Noting that source code has both an expressive feature and a functional feature, this court held that First Amendment protection is not reserved only for purely expressive communication. Yet in the DVD case, the court opposed this view and enforced the inflexible demands of the Digital Millennium Copyright Act. Notwithstanding Ted Nelson's characterisation of computers as literary machines, the decision meant that the linguistic and expressive aspects of software would be subordinated to other concerns. A simple series of symbols was thereby cast under a veil of legal secrecy.
Although they were easy to discover, and capable of being committed to memory or translated to other languages, fair use and other intuitive freedoms were deemed expendable. These sorts of legal obstacles are serious challenges to the continued viability of free software like Linux. The central value proposition of Linux-based operating systems – free, open source code – is threatening to commercial competitors. Some corporations are intent on stifling further development of free alternatives. Patents offer another vulnerability. The writing of free software has become a minefield of potential patent lawsuits. Corporations have repeatedly chosen to pursue patent litigation years after the alleged infringements have been incorporated into widely used free software. For example, although it was designed to avoid patent problems by an array of international experts, the image file format known as JPEG (Joint Photographic Experts Group) has recently been dogged by patent infringement charges. Despite good intentions, low-budget initiatives and ad hoc organisations are ill equipped to fight profiteering patent lawsuits. One wonders whether software innovation is directed more by lawyers or computer scientists. The present copyright and patent regimes may serve the needs of the larger corporations, but it is doubtful that they are the best means of fostering software innovation and quality. Orwell wrote in his Homage to Catalonia, There was a new rule that censored portions of the newspaper must not be left blank but filled up with other matter; as a result it was often impossible to tell when something had been cut out. The development of the Internet has a similar character: new diversions spring up to replace what might have been so that the lost potential is hardly felt. The process of retrofitting Internet software to suit ideological and commercial agendas is already well underway. For example, Microsoft has announced recently that it will discontinue support for the Java language in 2004. The problem with Java, from Microsoft's perspective, is that it provides portable programming tools that work under all operating systems, not just Windows. With Java, programmers can develop software for the large number of Windows users, while simultaneously offering software to users of other operating systems. Java is an important piece of the software infrastructure for Internet content developers. Yet, in the interest of coercing people to use only their operating systems, Microsoft is willing to undermine thousands of existing Java-language projects. Their marketing hype calls this progress. The software industry relies on sales to survive, so if it means laying waste to good products and millions of hours of work in order to sell something new, well, that's business. The consequent infrastructure instability keeps software developers, and other creative people, on a treadmill.

From Progressive Load by Andy Deck, artcontext.org/progload

As an Internet content producer, one does not appeal directly to the hearts and minds of the public; one appeals through the medium of software and hardware. Since most people are understandably reluctant to modify the software running on their computers, the software installed initially is a critical determinant of what is possible. Unconventional, independent, and artistic uses of the Internet are diminished when the media infrastructure is effectively established by decree.
Unaccountable corporate control over infrastructure software tilts the playing field against smaller content producers who have neither the advance warning of industrial machinations, nor the employees and resources necessary to keep up with a regime of strategic, cyclical obsolescence. It seems that independent content producers must conform to the distribution technologies and content formats favoured by the entertainment and marketing sectors, or else resign themselves to occupying the margins of media activity. It is no secret that highly diversified media corporations can leverage their assets to favour their own media offerings and confound their competitors. Yet when media giants AOL and Time-Warner announced their plans to merge in 2000, the claim of CEOs Steve Case and Gerald Levin that the merged companies would "operate in the public interest" was hardly challenged by American journalists. Time-Warner has since fought to end all ownership limits in the cable industry; and Case, who formerly championed third-party access to cable broadband markets, changed his tune abruptly after the merger. Now that Case has been ousted, it is unclear whether he still favours oligopoly. According to Levin, global media will be and is fast becoming the predominant business of the 21st century ... more important than government. It's more important than educational institutions and non-profits. We're going to need to have these corporations redefined as instruments of public service, and that may be a more efficient way to deal with society's problems than bureaucratic governments. Corporate dominance is going to be forced anyhow because when you have a system that is instantly available everywhere in the world immediately, then the old-fashioned regulatory system has to give way (Levin). It doesn't require a lot of insight to understand that this "redefinition," this sleight of hand, does not protect the public from abuses of power: the dissolution of the "old-fashioned regulatory system" does not serve the public interest.

From Lexicon by Andy Deck, artcontext.org/lexicon

As an artist who has adopted telecommunications networks and software as his medium, it disappoints me that a mercenary vision of electronic media's future seems to be the prevailing blueprint. The giantism of media corporations, and the ongoing deregulation of media consolidation (Ahrens), underscore the critical need for independent media sources. If it were just a matter of which cola to drink, it would not be of much concern, but media corporations control content. In this hyper-mediated age, content – whether produced by artists or journalists – crucially affects what people think about and how they understand the world. Content is not impervious to the software, protocols, and chicanery that surround its delivery. It is about time that people interested in independent voices stop believing that laissez faire capitalism is building a better media infrastructure. The German writer Hans Magnus Enzensberger reminds us that the media tyrannies that affect us are social products. The media industry relies on thousands of people to make the compromises necessary to maintain its course. The rapid development of the mind industry, its rise to a key position in modern society, has profoundly changed the role of the intellectual. He finds himself confronted with new threats and new opportunities.
Whether he knows it or not, whether he likes it or not, he has become the accomplice of a huge industrial complex which depends for its survival on him, as he depends on it for his own. He must try, at any cost, to use it for his own purposes, which are incompatible with the purposes of the mind machine. What it upholds he must subvert. He may play it crooked or straight, he may win or lose the game; but he would do well to remember that there is more at stake than his own fortune (Enzensberger 18). Some cultural leaders have recognised the important role that free software already plays in the infrastructure of the Internet. Among intellectuals there is undoubtedly a genuine concern about the emerging contours of corporate, global media. But more effective solidarity is needed. Interest in open source has tended to remain superficial, leading to trendy, cosmetic, and symbolic uses of terms like "open source" rather than to a deeper commitment to an open, public information infrastructure. Too much attention is focussed on what's "cool" and not enough on the road ahead. Various media specialists – designers, programmers, artists, and technical directors – make important decisions that affect the continuing development of electronic media. Many developers have failed to recognise (or care) that their decisions regarding media formats can have long reaching consequences. Web sites that use media formats which are unworkable for open source operating systems should be actively discouraged. Comparable technologies are usually available to solve compatibility problems. Going with the market flow is not really giving people what they want: it often opposes the work of thousands of activists who are trying to develop open source alternatives (see e.g. Greene). Average Internet users can contribute to a more innovative, free, open, and independent media – and being conscientious is not always difficult or unpleasant. One project worthy of support is the Internet browser Mozilla. Currently, many content developers create their Websites so that they will look good only in Microsoft's Internet Explorer. While somewhat understandable given the market dominance of Internet Explorer, this disregard for interoperability undercuts attempts to popularise standards-compliant alternatives. Mozilla, written by a loose-knit group of activists and programmers (some of whom are paid by AOL/Time-Warner), can be used as an alternative to Microsoft's browser. If more people use Mozilla, it will be harder for content providers to ignore the way their Web pages appear in standards-compliant browsers. The Mozilla browser, which is an open source initiative, can be downloaded from http://www.mozilla.org/. While there are many people working to create real and lasting alternatives to the monopolistic and technocratic dynamics that are emerging, it takes a great deal of cooperation to resist the media titans, the FCC, and the courts. Oddly enough, corporate interests sometimes overlap with those of the public. Some industrial players, such as IBM, now support open source software. For them it is mostly a business decision. Frustrated by the coercive control of Microsoft, they support efforts to develop another operating system platform. For others, including this writer, the open source movement is interesting for the potential it holds to foster a more heterogeneous and less authoritarian communications infrastructure. 
Many people can find common cause in this resistance to globalised uniformity and consolidated media ownership. The biggest challenge may be to get people to believe that their choices really matter, that by endorsing certain products and operating systems and not others, they can actually make a difference. But it's unlikely that this idea will flourish if artists and intellectuals don't view their own actions as consequential. There is a troubling tendency for people to see themselves as powerless in the face of the market. This paralysing habit of mind must be abandoned before the media will be free. Works Cited Ahrens, Frank. "Policy Watch." Washington Post (23 June 2002): H03. 30 March 2003 <http://www.washingtonpost.com/ac2/wp-dyn/A27015-2002Jun22?la... ...nguage=printer>. Batt, William. "How Our Towns Got That Way." 7 Oct. 1996. 31 March 2003 <http://www.esb.utexas.edu/drnrm/WhatIs/LandValue.htm>. Chester, Jeff. "Gerald Levin's Negative Legacy." Alternet.org 6 Dec. 2001. 5 March 2003 <http://www.democraticmedia.org/resources/editorials/levin.php>. Enzensberger, Hans Magnus. "The Industrialisation of the Mind." Raids and Reconstructions. London: Pluto Press, 1975. 18. Greene, Thomas C. "MS to Eradicate GPL, Hence Linux." 25 June 2002. 5 March 2003 <http://www.theregus.com/content/4/25378.php>. Hopper, D. Ian. "FBI Pushes for Cyber Ethics Education." Associated Press 10 Oct. 2000. 29 March 2003 <http://www.billingsgazette.com/computing/20001010_cethics.php>. Junger v. Daley. U.S. Court of Appeals for 6th Circuit. 00a0117p.06. 2000. 31 March 2003 <http://pacer.ca6.uscourts.gov/cgi-bin/getopn.pl?OPINION=00a0... ...117p.06>. Levin, Gerald. "Millennium 2000 Special." CNN 2 Jan. 2000. Touretzky, D. S. "Gallery of CSS Descramblers." 2000. 29 March 2003 <http://www.cs.cmu.edu/~dst/DeCSS/Gallery>. Links http://artcontext.org/lexicon/ http://artcontext.org/progload http://pacer.ca6.uscourts.gov/cgi-bin/getopn.pl?OPINION=00a0117p.06 http://www.billingsgazette.com/computing/20001010_cethics.html http://www.cs.cmu.edu/~dst/DeCSS/Gallery http://www.democraticmedia.org/resources/editorials/levin.html http://www.esb.utexas.edu/drnrm/WhatIs/LandValue.htm http://www.mozilla.org/ http://www.theregus.com/content/4/25378.html http://www.washingtonpost.com/ac2/wp-dyn/A27015-2002Jun22?language=printer Citation reference for this article Substitute your date of access for Day Month Year etc... MLA Style Deck, Andy. "Treadmill Culture." M/C: A Journal of Media and Culture <http://www.media-culture.org.au/0304/04-treadmillculture.php>. APA Style Deck, A. (2003, Apr 23). Treadmill Culture. M/C: A Journal of Media and Culture, 6, <http://www.media-culture.org.au/0304/04-treadmillculture.php>
ABNT, Harvard, Vancouver, APA styles, etc.
47

Goggin, Gerard. "‘mobile text’". M/C Journal 7, no. 1 (January 1, 2004). http://dx.doi.org/10.5204/mcj.2312.

Full text source
Abstract:
Mobile In many countries, more people have mobile phones than they do fixed-line phones. Mobile phones are one of the fastest growing technologies ever, outstripping even the internet in many respects. With the advent and widespread deployment of digital systems, mobile phones were used by an estimated 1,158,254,300 people worldwide in 2002 (up from approximately 91 million in 1995), 51.4% of total telephone subscribers (ITU). One of the reasons for this is mobility itself: the ability for people to talk on the phone wherever they are. The communicative possibilities opened up by mobile phones have produced new uses and new discourses (see Katz and Aakhus; Brown, Green, and Harper; and Plant). Contemporary soundscapes now feature not only voice calls in previously quiet public spaces such as buses or restaurants but also the aural irruptions of customised polyphonic ringtones identifying whose phone is ringing by the tune downloaded. The mobile phone plays an important role in contemporary visual and material culture as fashion item and status symbol. Most tragically one might point to the tableau of people in the twin towers of the World Trade Centre, or aboard a plane about to crash, calling their loved ones to say good-bye (Galvin). By contrast, one can look on at the bathos of Australian cricketer Shane Warne’s predilection for pressing his mobile phone into service to arrange wanted and unwanted assignations while on tour. In this article, I wish to consider another important and so far also under-theorised aspect of mobile phones: text. Of contemporary textual and semiotic systems, mobile text is only a recent addition. Yet it already produces millions of inscriptions each day, and promises to be of far-reaching significance. Txt Txt msg ws an acidnt. no 1 expcted it. Whn the 1st txt msg ws sent, in 1993 by Nokia eng stdnt Riku Pihkonen, the telcom cpnies thought it ws nt important. SMS – Short Message Service – ws nt considrd a majr pt of GSM. Like mny teks, the *pwr* of txt — indeed, the *pwr* of the fon — wz discvrd by users. In the case of txt mssng, the usrs were the yng or poor in the W and E. (Agar 105) As Jon Agar suggests in Constant Touch, textual communication through mobile phone was an after-thought. Mobile phones use radio waves, operating on a cellular system. The first such mobile service went live in Chicago in December 1978, in Sweden in 1981, in January 1985 in the United Kingdom (Agar), and in the mid-1980s in Australia. Mobile cellular systems allowed efficient sharing of scarce spectrum, improvements in handsets and quality, drawing on advances in science and engineering. In the first instance, technology designers, manufacturers, and mobile phone companies had been preoccupied with transferring telephone capabilities and culture to the mobile phone platform. With the growth in data communications from the 1960s onwards, consideration had been given to data capabilities of the mobile phone. One difficulty, however, had been the poor quality and slow transfer rates of data communications over mobile networks, especially with first-generation analogue and early second-generation digital mobile phones. As the internet was widely and wildly adopted in the early to mid-1990s, mobile phone proponents looked at mimicking internet and online data services possibilities on their hand-held devices.
What could work on a computer screen, it was thought, could be reinvented in miniature for the mobile phone, and hence much money was invested in the wireless application protocol (WAP), which spectacularly flopped. The future of mobiles as a material support for text culture was not to lie, at first at least, in aping the world-wide web for the phone. It came from an unexpected direction: cheap, simple letters, spelling out short messages with strange new ellipses. SMS was built into the European Global System for Mobile (GSM) standard as an insignificant, additional capability. A number of telecommunications manufacturers thought so little of SMS that they did not design or even offer the equipment needed (the servers, for instance) for the distribution of the messages. The character sets were limited, the keyboards small, the typeface displays rudimentary, and there was no acknowledgement that messages were actually received by the recipient. Yet SMS was cheap, and it offered one-to-one, or one-to-many, text communications that could be read at leisure, or, more often, immediately. SMS was avidly taken up by young people, forming a new culture of media use. Sending a text message offered a relatively cheap and affordable alternative to the still expensive timed calls of voice mobile.

In its early beginnings, mobile text can be seen as a subcultural activity. The text culture featured compressed, cryptic messages, with users devising their own abbreviations and grammar. One of the reasons young people took to texting was as a tactic for consolidating and shaping their own shared culture, in distinction from the general culture dominated by their parents and other adults. Mobile texting became involved in a wider reworking of youth culture, involving other new media forms and technologies and other cultural developments (Butcher and Thomas). Another subculture in the vanguard of SMS was the Deaf 'community'. Though Alexander Graham Bell, celebrated as the inventor of the telephone, very much had his hearing-impaired wife in mind in devising a new form of communication, Deaf people have been systematically left off the telecommunications network since that time. Deaf people pioneered an earlier form of text communication based on the Baudot standard, used for telex communications. Known as the teletypewriter (TTY), or telecommunications device for the Deaf (TDD) in the US, this technology allowed Deaf people to communicate with each other by connecting such devices to the phone network. The addition of a relay service (established in Australia in the mid-1990s after much government resistance) allows Deaf people to communicate with hearing people without TTYs (Goggin & Newell). Connecting TTYs to mobile phones has been a vexed issue, however, because the digital phone network in Australia does not allow compatibility. For this reason, and because of other features, Deaf people have become avid users of SMS (Harper). An especially favoured device in Europe has been the Nokia Communicator, with its hinged keyboard.

The move from a 'restricted', 'subcultural' economy to a 'general' economy sees mobile texting become incorporated in the semiotic texture and prosaic practices of everyday life. Many users were already familiar with the conventions developed around electronic mail, with shorter, crisper messages sent and received, more conversation-like than other correspondence. Unlike phone calls, email is asynchronous.
The reply to a message may arrive within seconds, yet the recipient can also choose to respond at their leisure. Similarly, for the adept user, SMS offers considerable advantages over voice communications, because it makes textual production mobile. Writing and reading can take place wherever a mobile phone can be turned on: in the street, on the train, in the club, in the lecture theatre, in bed. The body writes differently too. Writing with a pen takes a finger and thumb. Typing on a keyboard requires between two and ten fingers. The mobile phone uses the 'fifth finger': the thumb.

Always too early, and too late, to speculate on contemporary culture (Morris), it is worth analyzing the textuality of mobile text. Theorists of media, especially television, have insisted on understanding the specific textual modes of different cultural forms. We are familiar with this imperative, and with other methods of making visible and decentring the structures of text and the institutions which animate and frame them (whether author or producer; reader or audience; the cultural expectations encoded in genre; the inscriptions in technology). In formal terms, mobile text can be described as involving elision, great compression, and open-endedness. Its channels of communication physically constrain the composition of a very long single text message. Imagine sending James Joyce's Finnegans Wake in one text message (a short sketch of the segment arithmetic follows at the end of this section). How long would it take to key in this exemplar of the disintegration of the cultural form of the novel? How long would it take to read? How would one navigate the text? Imagine sending the Courier-Mail or Financial Review newspaper over a series of text messages. The concept of the 'news', with all its cultural baggage, is being reconfigured by mobile text, more along the lines of the older technology of the telegraph, perhaps: a few words suffice to signify what is important. Mobile textuality, then, involves a radical fragmentation and unpredictable seriality of text lexia (Barthes). Sometimes a mobile text looks singular: saying 'yes' or 'no', or sending your name and ID number to obtain your high school or university results. Yet, like a telephone conversation, or any text perhaps, its structure is always predicated upon, and haunted by, the other. Its imagined reader always has a mobile phone too, little time, no fixed address (except that hailed by the network's radio transmitter), and a finger poised to respond. Mobile text has structure and channels. Yet, like all text, our reading and writing of it reworks those fixities and destabilizes our 'clear' communication. After all, mobile textuality has a set of new pre-conditions and fragilities. It introduces new sorts of 'noise' to trouble those theorists cleaving to the Shannon and Weaver linear model of communication: signals often drop out; there is a network confirmation (and message displayed) that text messages have been sent, but no system guarantee that they have been received. Our friend or service provider might text us back, but how do we know that they got our text message?
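As a concrete gloss on the length constraints discussed above, here is a minimal sketch, not part of Goggin's article, of the arithmetic behind SMS segmentation. It assumes the GSM 03.38 seven-bit alphabet, under which a single message carries at most 160 characters and longer texts are split into concatenated segments of 153 characters each, the difference going to the reassembly header. The 1.5-million-character input is an illustrative stand-in for a book-length text, not a measured count of any novel.

    import math

    # SMS length arithmetic under the GSM 03.38 seven-bit alphabet: a 140-byte
    # payload holds 160 seven-bit characters; concatenated messages spend part
    # of the payload on a reassembly header, leaving 153 characters per segment.
    SINGLE_SMS_LIMIT = 160
    CONCAT_SEGMENT_LIMIT = 153

    def sms_segments(num_chars: int) -> int:
        """Return how many SMS segments a text of num_chars characters needs."""
        if num_chars <= SINGLE_SMS_LIMIT:
            return 1
        return math.ceil(num_chars / CONCAT_SEGMENT_LIMIT)

    # A book-length text of roughly 1.5 million characters (illustrative figure)
    # would need nearly ten thousand segments:
    print(sms_segments(1_500_000))  # -> 9804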
Commodity

We are familiar now with the pleasures of mobile text: the smile of alerting a friend to our arrival, celebrating good news, jilting a lover, making a threat, firing a worker, flirting and picking-up. Text culture has a new vector of mobility, invented by its users, but now coveted and commodified by businesses who did not see it coming in the first place. Nimble in its keystrokes, rich in expressivity and cultural invention, but relatively rudimentary in its technical characteristics, mobile text culture has finally registered in the boardrooms of communications companies. Not only is SMS the preferred medium of mobile phone users for keeping in touch with each other; SMS has also insinuated itself into previously separate communication industry arenas. In 2002-2003, SMS became firmly established in television broadcasting. Finally, interactive television had arrived, after many years of being prototyped and heralded. The keenly awaited back-channel for television arrives courtesy not of cable or satellite television, nor of an extra fixed-phone line. It's the mobile phone, stupid! Big Brother was a watershed not only in reality television but also in convergent media. Less obvious, perhaps, than the supplementary viewing, biographies, or chat on Big Brother websites around the world was the use of SMS for voting. SMS is now routinely used by mainstream television channels for viewer feedback, contest entry, and program information.

As well as its widespread deployment in broadcasting, mobile text culture has become the language of prosaic, everyday transactions. Slipping into a café at Bronte Beach in Sydney, why not pay your parking meter via SMS? You'll even receive a warning when your time is up. The mobile is becoming the 'electronic purse', with SMS providing its syntax and sentences. The belated ingenuity of those fascinated by the economics of mobile text has also coincided with a technological reworking of its possibilities, with new implications for its semiotics. Multimedia messaging (MMS) has now been deployed on capable digital phones (an instance of what has been called 2.5-generation [2.5G] digital phones) and third-generation networks. MMS allows images, video, and audio to be communicated. At one level, this sort of capability can be user-generated, as in the popularity of mobiles that take pictures and send these to other users. Television broadcasters are also interested in the capability to send video clips of favourite programs to viewers. Not content with the revenues raised from millions of standard-priced SMS, and now MMS, transactions, commercial participants along the value chain are keenly awaiting the deployment of what are called 'premium rate' SMS and MMS services. These services will involve the delivery of desirable content via SMS and MMS, priced at a premium. Products and services are likely to include: one-to-one text-chat; subscription services (content delivered on handset); multi-party text chat (such as chat rooms); adult entertainment services; multi-part messages (such as text communications plus downloads); and downloads of video or ringtones. In August 2003, one text-chat service charged $4.40 for a pair of SMS.

Pwr

At the end of 2003, we have scarcely registered the textual practices and systems of mobile text, a culture that sprang up in the interstices of telecommunications. It may be urgent that we do think about the stakes here, as SMS is being extended and commodified. There are obvious and serious policy issues in premium rate SMS and MMS services, and questions concerning the political economy in which these are embedded. Yet there are cultural questions too, with intricate ramifications. How do we understand the effects of mobile textuality, rewriting the telephone book for this new cultural form (Ronell)? What are the new genres emerging?
And what are the implications for cultural practice and policy? Does it matter, for instance, that new MMS and third-generation mobile platforms are not being designed or offered with any-to-any capabilities in mind, allowing any user to upload and send multimedia communications to any other? True, as the example of SMS shows, the inventiveness of users is difficult to foresee and predict, and so new forms of mobile text may have all sorts of relationships with content and communication. However, there are worrying signs that these developing mobile circuits are being programmed as narrow channels for the retail purchase of cultural products rather than as open-source, open-architecture, publicly usable nodes of connection.

Works Cited

Agar, Jon. Constant Touch: A Global History of the Mobile Phone. Cambridge: Icon, 2003.
Barthes, Roland. S/Z. Trans. Richard Miller. New York: Hill & Wang, 1974.
Brown, Barry, Green, Nicola, and Harper, Richard, eds. Wireless World: Social, Cultural, and Interactional Aspects of the Mobile Age. London: Springer Verlag, 2001.
Butcher, Melissa, and Thomas, Mandy, eds. Ingenious: Emerging Youth Cultures in Urban Australia. Melbourne: Pluto, 2003.
Galvin, Michael. 'September 11 and the Logistics of Communication.' Continuum: Journal of Media and Cultural Studies 17.3 (2003): 303-13.
Goggin, Gerard, and Newell, Christopher. Digital Disability: The Social Construction of Disability in New Media. Lanham, MD: Rowman & Littlefield, 2003.
Harper, Phil. 'Networking the Deaf Nation.' Australian Journal of Communication 30.3 (2003), in press.
International Telecommunications Union (ITU). 'Mobile Cellular, Subscribers per 100 People.' World Telecommunication Indicators <http://www.itu.int/ITU-D/ict/statistics/> accessed 13 October 2003.
Katz, James E., and Aakhus, Mark, eds. Perpetual Contact: Mobile Communication, Private Talk, Public Performance. Cambridge: Cambridge UP, 2002.
Morris, Meaghan. Too Soon, Too Late: History in Popular Culture. Bloomington and Indianapolis: U of Indiana P, 1998.
Plant, Sadie. On the Mobile: The Effects of Mobile Telephones on Social and Individual Life. <http://www.motorola.com/mot/documents/0,1028,296,00.pdf> accessed 5 October 2003.
Ronell, Avital. The Telephone Book: Technology—Schizophrenia—Electric Speech. Lincoln: U of Nebraska P, 1989.

Citation reference for this article

MLA Style
Goggin, Gerard. "'mobile text'" M/C: A Journal of Media and Culture <http://www.media-culture.org.au/0401/03-goggin.php>.

APA Style
Goggin, G. (2004, Jan 12). 'mobile text'. M/C: A Journal of Media and Culture, 7, <http://www.media-culture.org.au/0401/03-goggin.php>.