Journal articles on the topic "Telecommunication – Message processing"

Follow this link to see other types of publications on the topic: Telecommunication – Message processing.

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles

Choose a source type:

Consult the top 29 journal articles for your research on the topic "Telecommunication – Message processing".

Next to every source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organize your bibliography correctly.

1

Kruglikov, S. V., and A. Yu Zalizka. "Synthesis of wireless telecommunication network with adaptation to refusals of central elements of average and high intensity". Proceedings of the National Academy of Sciences of Belarus, Physical-Technical Series 65, no. 1 (April 6, 2020): 117–28. http://dx.doi.org/10.29235/1561-8358-2020-65-1-117-128.

Full text
Abstract
A technique is considered for synthesizing a wireless digital packet-switched communication network that provides real-time video message transfer between the elements of a multipurpose information-control system under a high failure rate of its central elements. The conceptual model of the telecommunication network is a mixed-structure network of multipurpose devices built on broadband radio access standards with packet switching and two interconnected levels of network interaction (local and backbone). The synthesis technique is based on multilevel, combined adaptation of the telecommunication network to failures of central elements; its primary goal is the rational change of parameters and functions of network elements in close interrelation with purposeful transformation of the structure of the telecommunication system's subnetworks. The main objective of the combined adaptation is to achieve the required throughput of the communication system depending on the failure rate of the central elements. The properties of multilevel adaptation were investigated in the course of combined (structural-parametric) synthesis using the aggregate approach to modelling complex technical systems. The efficiency of the technique is confirmed by the results of a simulation experiment with a previously obtained aggregate model of a wireless packet-switched data transmission network. Experimental data obtained in field studies of broadband radio networks based on the 802.11 b/g/n standards show that the processing time of message packets depends essentially on the adaptation methods used. In particular, applying effective adaptation algorithms (both parametric and structural) can reduce the residence time of message packets in broadband communication devices by several times and thereby provide the required throughput of a network operating under failures of its central elements.
2

Sinyavskiy, Ivan, Igor Sorokin, and Andrei Sukhov. "Prototype wireless network for internet of things based on DECT standard". Telfor Journal 14, no. 1 (2022): 8–11. http://dx.doi.org/10.5937/telfor2201008s.

Full text
Abstract
This paper presents a software prototype of a wireless network for the Internet of Things (IoT) based on the DECT (Digital Enhanced Cordless Telecommunication) standard. It proposes an architecture for encapsulating commands from the most common IoT protocol, MQTT (Message Queuing Telemetry Transport), into SIP (Session Initiation Protocol) packets. A module is created to embed MQTT-SN (MQTT for Sensor Networks) packets into SIP packets. The module is developed in Go language using the built-in "net" library. Delivery of MQTT-SN packets to IoT devices is carried out using the SIP protocol. Source codes and instructions for installing the gateway can be found at https://github.com/iSinyavsky/mqtt-sn-sip-gateway.
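The gateway described above is written in Go using the standard "net" library (see the linked repository). As a purely illustrative sketch of the encapsulation idea, and not the authors' implementation, the following Python fragment wraps an MQTT-SN-style payload in a minimal SIP MESSAGE request and sends it over UDP; all addresses, ports, and header values are assumptions.

```python
import socket
import uuid

def build_sip_message(payload: bytes, to_uri: str, from_uri: str, local_host: str) -> bytes:
    """Wrap an opaque MQTT-SN payload in a minimal SIP MESSAGE request (illustrative only)."""
    headers = (
        f"MESSAGE {to_uri} SIP/2.0\r\n"
        f"Via: SIP/2.0/UDP {local_host};branch=z9hG4bK{uuid.uuid4().hex[:8]}\r\n"
        f"Max-Forwards: 70\r\n"
        f"From: <{from_uri}>;tag={uuid.uuid4().hex[:8]}\r\n"
        f"To: <{to_uri}>\r\n"
        f"Call-ID: {uuid.uuid4()}\r\n"
        f"CSeq: 1 MESSAGE\r\n"
        f"Content-Type: application/octet-stream\r\n"   # MQTT-SN frame carried as opaque bytes
        f"Content-Length: {len(payload)}\r\n\r\n"
    )
    return headers.encode() + payload

if __name__ == "__main__":
    # Hypothetical, pre-serialized MQTT-SN PUBLISH frame (contents are arbitrary bytes here).
    mqtt_sn_frame = b"\x0d\x0c\x00\x00\x01sensor:23.5"
    request = build_sip_message(
        mqtt_sn_frame,
        to_uri="sip:iot-device@192.0.2.10:5060",    # assumed addresses for the example
        from_uri="sip:gateway@192.0.2.1:5060",
        local_host="192.0.2.1:5060",
    )
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(request, ("192.0.2.10", 5060))
```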
3

Nezamoddini, Nasim, and Amirhosein Gholami. "A Survey of Adaptive Multi-Agent Networks and Their Applications in Smart Cities". Smart Cities 5, no. 1 (March 9, 2022): 318–46. http://dx.doi.org/10.3390/smartcities5010019.

Full text
Abstract
The world is moving toward a new connected world in which millions of intelligent processing devices communicate with each other to provide services in transportation, telecommunication, and power grids in the smart cities of the future. Distributed computing is considered one of the efficient platforms for processing and managing the massive amounts of data collected by smart devices. This can be implemented by utilizing multi-agent systems (MASs): multiple autonomous computational entities with memory and computation capabilities and the possibility of message-passing between them. These systems provide a dynamic and self-adaptive platform for managing distributed large-scale systems, such as the Internet-of-Things (IoT). Despite the potential applicability of MASs in smart cities, very few practical systems have been deployed using agent-oriented systems. This research surveys the existing techniques presented in the literature that can be utilized for implementing adaptive multi-agent networks in smart cities. The related literature is categorized based on the steps of designing and controlling these adaptive systems. These steps cover the techniques required to define, monitor, plan, and evaluate the performance of an autonomous MAS. Finally, the challenges and barriers to the utilization of these systems in current smart cities, and insights and directions for future research in this domain, are presented.
4

Adeliyi, Timothy T., and Oludayo O. Olugbara. "Fast Channel Navigation of Internet Protocol Television Using Adaptive Hybrid Delivery Method". Journal of Computer Networks and Communications 2018 (July 8, 2018): 1–11. http://dx.doi.org/10.1155/2018/2721950.

Full text
Abstract
The Internet protocol television brought seamless potential that has revolutionized the media and telecommunication industries by providing a platform for transmitting digitized television services. However, zapping delay is a critical factor that affects the quality of experience in the Internet protocol television. This problem is intrinsically caused by command processing time, network delay, jitter, buffer delay, and video decoding delay. The overarching objective of this paper is to use a hybrid delivery method that agglutinates multicast- and unicast-enabled services over a converged network to minimize zapping delay to the bare minimum. The hybrid method will deliver Internet protocol television channels to subscribers using the unicast stream coupled with differentiated service quality of experience when zapping delay is greater than 0.43 s. This aids a faster transmission by sending a join message to the multicast stream at the service provider zone to acquire the requested channel. The hybrid method reported in this paper is benchmarked with the state-of-the-art multicast stream and unicast stream methods. Results show that the hybrid method has an excellent performance by lowering point-to-point queuing delay, end-to-end packet delay, and packet variation and increasing throughput rate.
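The decision logic implied by the abstract can be sketched as follows. This is a simplified illustration built around the quoted 0.43 s threshold, not the authors' algorithm; the delay estimator is a placeholder.

```python
import random

ZAPPING_DELAY_THRESHOLD_S = 0.43   # threshold quoted in the abstract

def estimate_multicast_delay(channel_id: str) -> float:
    """Placeholder: estimate the time until the multicast stream of this channel becomes decodable."""
    return random.uniform(0.1, 1.5)

def change_channel(channel_id: str) -> str:
    """Simplified hybrid zapping logic: bridge slow multicast joins with a short unicast burst."""
    if estimate_multicast_delay(channel_id) > ZAPPING_DELAY_THRESHOLD_S:
        # Serve the zap immediately from a dedicated unicast stream with differentiated QoS,
        # while a join message is sent to the multicast group at the service provider zone.
        return f"unicast burst for {channel_id}, then switch to multicast"
    return f"join multicast group for {channel_id} directly"

if __name__ == "__main__":
    print(change_channel("sports-1"))
```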
5

Nadeem, Asim, Agha Kashif, Sohail Zafar, and Zohaib Zahid. "On 2-partition dimension of the circulant graphs". Journal of Intelligent & Fuzzy Systems 40, no. 5 (April 22, 2021): 9493–503. http://dx.doi.org/10.3233/jifs-201982.

Full text
Abstract
The partition dimension is a variant of the metric dimension of graphs. It has emerging applications in the fields of network design, robot navigation, pattern recognition and image processing. Let G(V(G), E(G)) be a connected graph and Γ = {P1, P2, …, Pm} be an ordered m-partition of V(G). The partition representation of a vertex v with respect to Γ is the m-vector r(v|Γ) = (d(v, P1), d(v, P2), …, d(v, Pm)), where d(v, P) = min{d(v, x) | x ∈ P} is the distance between v and P. If the m-vectors r(v|Γ) differ in at least 2 positions for all v ∈ V(G), then the m-partition is called a 2-partition generator of G. A 2-partition generator of G with minimum cardinality is called a 2-partition basis of G, and its cardinality is known as the 2-partition dimension of G. Circulant graphs outperform other network topologies due to their low message delay, high connectivity and survivability, and are therefore widely used in telecommunication networks, computer networks, parallel processing systems and social networks. In this paper, we computed the partition dimension of the circulant graphs Cn(1, 2) for n ≡ 2 (mod 4), n ≥ 18, and hence corrected the result given by Salman et al. [Acta Math. Sin. Engl. Ser. 2012, 28, 1851-1864]. We further computed the 2-partition dimension of Cn(1, 2) for n ≥ 6.
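The partition representation r(v|Γ) defined above is straightforward to compute for small circulant graphs. A minimal sketch using networkx is shown below; the graph size and the chosen partition are arbitrary examples, not cases analyzed in the paper.

```python
import networkx as nx

def partition_representation(G, partition):
    """Return r(v|Γ) = (d(v, P1), ..., d(v, Pm)) for every vertex v of G."""
    reps = {}
    for v in G.nodes:
        dist = nx.shortest_path_length(G, source=v)      # distances from v to all vertices
        reps[v] = tuple(min(dist[x] for x in part) for part in partition)
    return reps

if __name__ == "__main__":
    # Circulant graph C_10(1, 2): vertices 0..9, each joined to neighbours at offsets 1 and 2.
    G = nx.circulant_graph(10, [1, 2])
    # An arbitrary 3-partition of the vertex set, purely for illustration.
    partition = [{0, 1, 2, 3}, {4, 5, 6}, {7, 8, 9}]
    for v, r in partition_representation(G, partition).items():
        print(v, r)
```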
6

Alzahrani, Ali, and Theyazn H. H. Aldhyani. "Artificial Intelligence Algorithms for Detecting and Classifying MQTT Protocol Internet of Things Attacks". Electronics 11, no. 22 (November 21, 2022): 3837. http://dx.doi.org/10.3390/electronics11223837.

Full text
Abstract
The Internet of Things (IoT) has grown in popularity in recent years, becoming a crucial component of industrial, residential, and telecommunication applications, among others. This innovative idea promotes communication between physical components, such as sensors and actuators, to improve process flexibility and efficiency. Smart gadgets in IoT contexts interact using various message protocols. Message queuing telemetry transport (MQTT) is a protocol that is used extensively in the IoT context to deliver sensor or event data. The aim of the proposed system is to create an intrusion detection system based on artificial intelligence algorithms, which are becoming essential in the defense of IoT networks against cybersecurity threats. This study proposes using a k-nearest neighbors (KNN) algorithm, linear discriminant analysis (LDA), a convolutional neural network (CNN), and a convolutional long short-term memory neural network (CNN-LSTM) to identify MQTT protocol IoT intrusions. The cybersecurity system based on artificial intelligence algorithms was examined and evaluated using a standard dataset retrieved from the Kaggle repository. The dataset contains five traffic classes, namely brute-force, flooding, malformed-packet, and SlowITe attacks, plus normal packets. The deep learning models achieved high performance compared with the security systems developed using classical machine learning algorithms: the accuracy of the KNN method was 80.82%, the accuracy of the LDA algorithm was 76.60%, and the CNN-LSTM model attained a high level of precision (98.94%), making it very effective at detecting intrusions in IoT settings.
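As a rough illustration of the simplest of the listed classifiers, the sketch below trains a KNN model with scikit-learn. The CSV path and column names are placeholders and do not reflect the schema of the Kaggle dataset used in the paper.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# Hypothetical flow-feature CSV; the file name and columns are placeholders.
df = pd.read_csv("mqtt_flows.csv")
X = df.drop(columns=["label"])          # numeric flow features
y = df["label"]                         # e.g. normal, bruteforce, flood, malformed, slowite

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
print("KNN accuracy:", accuracy_score(y_test, knn.predict(X_test)))
```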
7

HERRERO, JUAN CARLOS. "CDMA & TDMA BASED NEURAL NETS". International Journal of Neural Systems 11, no. 03 (June 2001): 237–46. http://dx.doi.org/10.1142/s0129065701000679.

Full text
Abstract
CDMA and TDMA telecommunication techniques were established a long time ago, but they have acquired a renewed presence due to the rapidly increasing demand for mobile phones. In this paper, we are going to see that they are suitable for neural nets if we abandon the concept of "connections" between processing units and adopt instead the concept of "messages" exchanged between them. This may open the door to neural nets with a higher number of processing units and flexible configuration.
8

Nesterenko, Mykola, Oleksandr Romanov, and Oleksandr Uspenskyi. "Analytical model of assessment of message processing time in the management of telecommunications networks". Collection "Information technology and security" 1, no. 1 (June 30, 2012): 69–75. http://dx.doi.org/10.20535/2411-1031.2012.1.1.53666.

Full text
9

Aryani, Diah, Ade Setiadi, and Fifit Alfiah. "APLIKASI WEB PENGIRIMAN DAN PENERIMAAN SMS DENGAN GAMMU SMS ENGINE BERBASIS PHP" [Web application for sending and receiving SMS with the PHP-based Gammu SMS engine]. CCIT Journal 8, no. 3 (May 19, 2015): 174–90. http://dx.doi.org/10.33050/ccit.v8i3.340.

Full text
Abstract
The rapid development of technology in this era, especially in telecommunications, media, and informatics, has received both positive and negative feedback in society. When used by the right target, technology is very useful in supporting the activities of an agency or institution, delivering and processing information so that the information presented is fast, precise, and largely error-free, thus making work more effective and efficient. The Short Message Service (SMS), a means of delivering messages or information, is already being displaced by chat applications such as BBM and WhatsApp, yet SMS has also grown in use and function, for example as SMS polling, SMS banking, and SMS gateways. The Gammu SMS engine is used for the SMS gateway application. Gammu can be used from a variety of programming languages, such as PHP, and its functions can be called as needed. Gammu is an SMS-processing engine that, unlike most SMS gateways, does not perform bulk SMS delivery. Configuring Gammu is not a matter of theory but of setting configuration values in its source files and checking, through the device manager, which port the modem is connected to. The database files available in the gammu folder are imported into the MySQL server, which is then used to connect the modem and MySQL. To run the Gammu service, the command "gammu-smsd smsconf -i -s -c" is typed at a command prompt from within the "bin" directory. Gammu is known to be running when the links to stop and restart the service appear; with the service active, the Gammu SMS engine can be used directly to send and receive SMS.
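A common way to drive Gammu SMSD from application code, as hinted at in the abstract, is to queue outgoing messages through its MySQL tables. The sketch below assumes the typical Gammu SMSD schema (table and column names may differ between versions) and placeholder credentials; it is not taken from the paper.

```python
import pymysql

# Queue an outgoing SMS by inserting a row into Gammu SMSD's outbox table.
# Table/column names follow the usual Gammu SMSD MySQL schema and may differ between versions;
# connection credentials and the phone number are placeholders.
conn = pymysql.connect(host="localhost", user="gammu", password="secret", database="gammu")
try:
    with conn.cursor() as cur:
        cur.execute(
            "INSERT INTO outbox (DestinationNumber, TextDecoded, CoDing, CreatorID) "
            "VALUES (%s, %s, %s, %s)",
            ("+628123456789", "Hello from the web application", "Default_No_Compression", "webapp"),
        )
    conn.commit()   # gammu-smsd polls the outbox table and sends the message via the modem
finally:
    conn.close()
```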
10

Buzhin, I. G., V. M. Antonova, E. A. Gaifutdinov, and Yu B. Mironov. "METHODOLOGY FOR A COMPREHENSIVE ASSESSMENT OF THE TELECOMMUNICATION SERVICES QUALITY OF TRANSPORT NETWORKS USING SDN/NFV TECHNOLOGIES". T-Comm 16, no. 12 (2022): 40–45. http://dx.doi.org/10.36724/2072-8735-2022-16-12-40-45.

Full text
Abstract
A methodology for a comprehensive assessment of the quality of telecommunication services of transport networks using SDN/NFV technology has been developed. The current state and development trends of communication networks show that the potential for growth in the productivity and bandwidth of networks based on traditional technologies is practically exhausted. These problems can be solved by the technology of software-defined networking and network function virtualization (hereinafter SDN/NFV). The methodology can serve as the basis for selecting the structure and number of SDN controllers and their optimal location in a communication network based on SDN/NFV, calculating reliability indicators, and obtaining the loss probabilities of streams and control messages as well as the time delays for processing streams in SDN telecommunication equipment. Proposals for balancing the traffic load on the SDN controllers of the communication network are also given.
11

Molchanov, Pavel, and Alexandr Totsky. "Application of Triple Correlation and Bispectrum for Interference Immunity Improvement in Telecommunications Systems". International Journal of Applied Mathematics and Computer Science 18, no. 3 (September 1, 2008): 361–67. http://dx.doi.org/10.2478/v10006-008-0032-9.

Full text
Abstract
This paper presents a new noise-immunity encoding/decoding technique using the features of the triple correlation and bispectrum widely employed in digital signal processing systems operating in noisy environments. The triple-correlation- and bispectrum-based encoding/decoding algorithm is tested for a digital radio telecommunications binary frequency shift keying system. The errorless decoding probability was analyzed by means of computer simulation for the transmission and reception of a test message in a radio channel disturbed by both additive white Gaussian noise (AWGN) and a mixture of AWGN and impulsive noise. Computer simulation results obtained for varying and less-than-unity signal-to-noise ratios at the demodulator input demonstrate a considerable improvement in the noise immunity of the suggested technique in comparison with the traditional redundant linear block encoding/decoding technique.
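For reference, the triple correlation and bispectrum on which the technique relies have the following standard (textbook) definitions for a real signal x(t); the paper's specific encoding/decoding construction is not reproduced here.

```latex
% Standard definitions of the triple correlation and bispectrum of a real signal x(t);
% X(f) is the Fourier transform of x(t) and * denotes complex conjugation.
\begin{align}
  R_{3x}(\tau_1, \tau_2) &= \int_{-\infty}^{\infty} x(t)\, x(t+\tau_1)\, x(t+\tau_2)\, dt, \\
  B_x(f_1, f_2) &= \iint R_{3x}(\tau_1, \tau_2)\,
      e^{-j 2\pi (f_1 \tau_1 + f_2 \tau_2)}\, d\tau_1\, d\tau_2
      = X(f_1)\, X(f_2)\, X^{*}(f_1 + f_2).
\end{align}
```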
12

Teju, V., N. NagaSai Krishna, and K. V. Deepesh Reddy. "Authentication Process in Smart Card Using Sha-256". Journal of Computational and Theoretical Nanoscience 17, no. 5 (May 1, 2020): 2379–82. http://dx.doi.org/10.1166/jctn.2020.8899.

Full text
Abstract
Smart cards raise security issues: their applications span hardware, software, and telecommunications, which makes security a major concern. This paper introduces a design for secure two-level authentication, in which the PIN, hashed with SHA-256, serves as the first authentication level and a one-time password (OTP) serves as the second. The project generates the OTP using the SHA-256 hashing algorithm. When the program starts, 'localhost:8080/view' is typed into the web browser to bring up the front end of the application. Client and server nodes are created; on the client node, columns for the username, credentials, messages, and trust values are created, while on the server node, the name and user credentials are created.
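One simple way to derive an OTP from a shared secret with SHA-256 is sketched below; the exact construction used in the paper is not specified in the abstract, so the counter-based scheme and its parameters here are assumptions.

```python
import hashlib
import secrets

def sha256_otp(shared_secret: bytes, counter: int, digits: int = 6) -> str:
    """Derive a short numeric one-time password from a shared secret and a counter."""
    digest = hashlib.sha256(shared_secret + counter.to_bytes(8, "big")).digest()
    number = int.from_bytes(digest[:4], "big") % (10 ** digits)
    return f"{number:0{digits}d}"

if __name__ == "__main__":
    secret = secrets.token_bytes(32)        # provisioned onto the smart card and the server
    print(sha256_otp(secret, counter=1))    # both sides compute the same OTP for counter 1
```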
13

Mochalov, Valery P., Gennady I. Linets, Natalya Yu Bratchenko, and Svetlana V. Govorova. "An Analytical Model of a Corporate Software-Controlled Network Switch". Scalable Computing: Practice and Experience 21, no. 2 (June 27, 2020): 337–46. http://dx.doi.org/10.12694/scpe.v21i2.1698.

Full text
Abstract
Implementing the almost limitless possibilities of a software-defined network requires additional study of its infrastructure level and assessment of the telecommunications aspect. The aim of this study is to develop an analytical model for analyzing the main quality indicators of modern network switches. Based on the general theory of queuing systems and networks, generating functions, and Laplace-Stieltjes transforms, a three-phase model of a network switch was developed. Given that, in this case, the relationship between processing steps is not significant, quality indicators were obtained by taking into account the parameters of single-phase networks. This research identified the dependence of the service latency and service time of incoming network packets on load, as well as equations for finding the volume of a switch's buffer memory that keeps the probability of message loss acceptable.
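The paper's three-phase model is not reproduced in the abstract; for orientation only, the textbook single-phase M/M/1 node that such queuing models build on gives the following closed-form indicators.

```latex
% Textbook M/M/1 node (Poisson arrivals at rate \lambda, exponential service at rate \mu,
% utilization \rho = \lambda/\mu < 1); not the paper's three-phase switch model.
\begin{align}
  W &= \frac{1}{\mu - \lambda} && \text{(mean time a packet spends in the node)}, \\
  L &= \frac{\rho}{1 - \rho}   && \text{(mean number of packets in the node)}.
\end{align}
```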
14

Gutiérrez-Muñoz, Michelle, and Marvin Coto-Jiménez. "An Experimental Study on Speech Enhancement Based on a Combination of Wavelets and Deep Learning". Computation 10, no. 6 (June 20, 2022): 102. http://dx.doi.org/10.3390/computation10060102.

Full text
Abstract
The purpose of speech enhancement is to improve the quality of speech signals degraded by noise, reverberation, or other artifacts that can affect the intelligibility, automatic recognition, or other attributes involved in speech technologies and telecommunications, among others. In such applications, it is essential to provide methods to enhance the signals to allow the understanding of the messages or adequate processing of the speech. For this purpose, during the past few decades, several techniques have been proposed and implemented for the abundance of possible conditions and applications. Recently, those methods based on deep learning seem to outperform previous proposals even on real-time processing. Among the new explorations found in the literature, the hybrid approaches have been presented as a possibility to extend the capacity of individual methods, and therefore increase their capacity for the applications. In this paper, we evaluate a hybrid approach that combines both deep learning and wavelet transformation. The extensive experimentation performed to select the proper wavelets and the training of neural networks allowed us to assess whether the hybrid approach is of benefit or not for the speech enhancement task under several types and levels of noise, providing relevant information for future implementations.
15

Akanda, Nazmul Islam, Md Alomgir Hossain, Md Mazharul Islam Fahad, Md Nur Rahman, and Khairunnaher Khairunnaher. "Cost-effective and user-friendly vehicle tracking system using GPS and GSM technology based on IoT". Indonesian Journal of Electrical Engineering and Computer Science 28, no. 3 (October 7, 2022): 1826. http://dx.doi.org/10.11591/ijeecs.v28.i3.pp1826-1833.

Full text
Abstract
Security is very important for vehicles, as vehicle theft is a very common phenomenon nowadays, and losses can be prevented by monitoring vehicles around the clock. There are many possible ways to track a vehicle, but few of them take the user's needs into account. This research is about tracking a vehicle according to the user's demands, focusing on low budget, accurate geographic coordinates, and easy user access. The system needs global positioning system (GPS) and global system for mobile telecommunications (GSM) technology. The user accesses the system by short message service (SMS) on a mobile phone: the GSM module communicates with the user, while the GPS module communicates with satellites to obtain latitude and longitude coordinates. The vehicle's location on Earth is then determined using Google Maps.
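The GPS/GSM interaction described above can be sketched with standard NMEA parsing and AT commands. The serial ports, baud rates, and phone number below are placeholders, and the parsing is deliberately simplified; this is not the authors' firmware.

```python
import serial
import time

def read_gpgga_fix(gps_port="/dev/ttyUSB0"):
    """Read one $GPGGA sentence from the GPS module and return the raw lat/lon fields."""
    with serial.Serial(gps_port, 9600, timeout=2) as gps:
        while True:
            line = gps.readline().decode("ascii", errors="ignore").strip()
            if line.startswith("$GPGGA"):
                fields = line.split(",")
                return fields[2], fields[3], fields[4], fields[5]   # lat, N/S, lon, E/W

def send_location_sms(lat, ns, lon, ew, gsm_port="/dev/ttyUSB1", phone="+8801XXXXXXXXX"):
    """Send the coordinates to the owner via the GSM module using standard AT commands."""
    with serial.Serial(gsm_port, 9600, timeout=5) as gsm:
        gsm.write(b"AT+CMGF=1\r")                       # SMS text mode
        time.sleep(0.5)
        gsm.write(f'AT+CMGS="{phone}"\r'.encode())
        time.sleep(0.5)
        body = f"Vehicle at {lat}{ns}, {lon}{ew} https://maps.google.com/"
        gsm.write(body.encode() + bytes([26]))          # Ctrl+Z terminates the message

if __name__ == "__main__":
    send_location_sms(*read_gpgga_fix())
```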
16

Horvath, Tomas, Petr Munster, and Ning-Hai Bao. "Lasers in Passive Optical Networks and the Activation Process of an End Unit: A Tutorial". Electronics 9, no. 7 (July 9, 2020): 1114. http://dx.doi.org/10.3390/electronics9071114.

Full text
Abstract
It is 21 years since the first passive optical network (PON) was standardized as the asynchronous transfer mode passive optical network (APON), with the same optical distribution network scheme as we know in current networks. Many PON variants were standardized in the following years and became an important part of telecommunications. The general principles of these PON networks are described in many papers and books, but only little information is available about the lasers used. The aim of this tutorial is to describe the lasers used in PON networks and the principles of their operation. The paper describes the principles of single longitudinal mode (SLM), multi longitudinal mode (MLM), distributed-feedback (DFB), and Fabry–Pérot (FP) lasers. Furthermore, the lasers are compared by their usage in optical line termination (OLT) for passive optical networks. The second part of this tutorial deals with the activation process of an optical network unit. The described principle is the same for the connection of a new customer or a blackout scenario. The end unit is not able to communicate until it reaches the operational state; each state is defined by a sequence of physical layer operation, administration and maintenance (PLOAM) messages and their processing.
17

Bao, Guanghai, and Sikai Ke. "Load Transfer Device for Solving a Three-Phase Unbalance Problem Under a Low-Voltage Distribution Network". Energies 12, no. 15 (July 24, 2019): 2842. http://dx.doi.org/10.3390/en12152842.

Full text
Abstract
In the low-voltage (LV) distribution network, a three-phase unbalance problem often exists. It not only increases line loss but also threatens the safety of the distribution network. Therefore, the authors design a residential load transfer device for the LV distribution network that can deal with the three-phase unbalance problem by changing the connecting phase of the load. It consists of three parts: a user controller for phase swapping, a central controller for signal processing, and a monitoring platform for strategy calculation. The design is based on the message queuing telemetry transport (MQTT) communication protocol, and Long Range plus 4th-generation mobile telecommunications (LoRa + 4G) communication is used to realize the wireless connection between the equipment and the monitoring platform; a control scheme is also proposed. The improved multi-population genetic algorithm (IMPGA) with multiple objectives is used to find the optimal swapping strategy, which is implemented on the monitoring platform. The phase swapping is then realized by remote control, and the function of reducing three-phase unbalance is achieved. The practical experimental results show that the method helps reduce the three-phase unbalance rate by changing the connection phase of the load, and the simulation results verify the effectiveness of the algorithm in the phase-swapping strategy.
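The MQTT link between a user controller and the monitoring platform might look like the following sketch using the paho-mqtt client. The broker address, topics, and payload fields are invented for illustration and are not taken from the paper.

```python
import json
import paho.mqtt.client as mqtt

BROKER = "monitoring.example.com"        # placeholder for the monitoring platform's broker

def on_message(client, userdata, msg):
    # Phase-swap command pushed by the monitoring platform, e.g. {"load_id": 17, "target_phase": "B"}
    command = json.loads(msg.payload)
    print("switch load", command["load_id"], "to phase", command["target_phase"])

client = mqtt.Client()
client.on_message = on_message
client.connect(BROKER, 1883)
client.subscribe("lv/feeder1/commands")

# Report per-phase currents so the platform can compute unbalance and an optimal swap strategy.
client.publish("lv/feeder1/measurements",
               json.dumps({"load_id": 17, "phase": "A", "current_a": 12.4}))
client.loop_forever()
```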
18

FARAG, EMAD N., MOHAMED I. ELMASRY, MOHAMED N. SALEH, and NABIL M. ELNADY. "A TWO-LEVEL HIERARCHICAL MOBILE NETWORK: STRUCTURE AND NETWORK CONTROL". International Journal of Reliability, Quality and Safety Engineering 03, no. 04 (December 1996): 325–51. http://dx.doi.org/10.1142/s0218539396000211.

Full text
Abstract
The increase in demand for mobile telecommunication systems, and the limited bandwidth allocated to these systems, have led to systems with smaller cell dimensions, which in turn has led to an increase in control messages. In order to prevent controller bottlenecks, it is desirable to distribute the network control functions throughout the network. To satisfy this requirement, a mobile network structure characterized by its hierarchical and decentralized network control is presented in this paper. The area served by the mobile system is divided into regions, and the regions are further divided into cells. Each cell is served by a base station, and each base station is connected to a regional network through a base station interface unit (BIU). Each region has its own regional network. Connected to each regional network are the cellular controller, the home database, the visitor database, the trunk interface unit (TIU) and the gateway interface unit (GIU). The TIU connects the regional network to the public switched telephone network (PSTN). The GIU connects the regional network to other regional networks through the gateway network. This architecture distributes the network control functions among a large number of processing elements, thus preventing controller bottlenecks, a problem faced by centrally controlled systems. The information and network control messages are transferred in the form of packets across this network. Processes inherent to the operation of this network structure are illustrated and discussed. These processes include the location update process, the setting up of a call, the handoff process (both the intra-region and inter-region handoff processes are considered), and the process of terminating a call.
19

Olaiwola Dalamu, Taofeek. "Social semiotic genre: exploring the interplay of words and images in advertising". Anuari de Filologia Llengües i Literatures Modernes - LLM, no. 11 (January 3, 2022): 29–51. http://dx.doi.org/10.1344/aflm2021.11.2.

Full text
Abstract
This study examined the interplay of pictorial and written modes that position advertising as a multimodal genre, explainable through a social semiotic perspective. Eight advertisements of the financial, telecommunications, and beverage products functioned as devices of analysis. Nevertheless, multimodal communicative acts served as the processing tool, elucidating the meaning potentials of the advertising configurations. Having deployed a system of multimodal interacts, tables and graphs assisted in accounting for the frequency of the semiotic resources of the written modes. The analysis indicated large and highlighted fonts (Celebrating the world’s no. 1 fixer), repetitions (Guinness, Maltina, real deal), and deviant constructs (EazyLoans, GTWorld) as elements of propagating intended messages. The deployment of codes (*966*11#, 737) and fragmented clauses (Over N100 million worth of airtime) played some roles in the meaning-making operations. Of significance is the Guinness’ conceptual “digits” of 17:59, contextualising the year, time, and channel of promotional benefits. Though questions (Have you called mum today?), offer (It can be), and minor clauses (Welcome to Guinness time) were parts of the communicative systems, statements (Terms and condition apply) and commands (Enjoy the complete richness of Maltina) dominated the entire dialogues. One might suggest that communicators should endeavour to deploy apt constructions and create eye-lines between participants as means of sensitising readers into consumption.
20

Nguyen, Hong T., Trung Q. Duong, Liem D. Nguyen, Tram Q. N. Vo, Nhat T. Tran, Phuong D. N. Dang, Long D. Nguyen, Cuong K. Dang, and Loi K. Nguyen. "Development of a Spatial Decision Support System for Real-Time Flood Early Warning in the Vu Gia-Thu Bon River Basin, Quang Nam Province, Vietnam". Sensors 20, no. 6 (March 17, 2020): 1667. http://dx.doi.org/10.3390/s20061667.

Full text
Abstract
Vu Gia-Thu Bon (VGTB) river basin is an area where flash flood and heavy flood events occur frequently, negatively impacting the local community and socio-economic development of Quang Nam Province. In recent years, structural and non–structural solutions have been implemented to mitigate damages due to floods. However, under the impact of climate change, natural disasters continue to happen unpredictably day by day. It is, therefore, necessary to develop a spatial decision support system for real-time flood warnings in the VGTB river basin, which will support in ensuring the area’s socio-economic development. The main purpose of this study is to develop an online flood warning system in real-time based on Internet-of-Things (IoT) technologies, GIS, telecommunications, and modeling (Soil and Water Assessment Tool (SWAT) and Hydrologic Engineering Center’s River Analysis System (HEC–RAS)) in order to support the local community in the vulnerable downstream areas in the event of heavy rainfall upstream. The structure of the designed system consists of these following components: (1) real-time hydro-meteorological monitoring network, (2) IoT communication infrastructure (Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), wireless networks), (3) database management system (bio-physical, socio-economic, hydro-meteorological, and inundation), (4) simulating and predicting model (SWAT, HEC–RAS), (5) automated simulating and predicting module, (6) flood warning module via short message service (SMS), (7) WebGIS, application for providing and managing hydro-meteorological and inundation data, and (8) users (citizens and government officers). The entire operating processes of the flood warning system (i.e., hydro-meteorological data collecting, transferring, updating, processing, running SWAT and HEC–RAS, visualizing) are automated. A complete flood warning system for the VGTB river basin has been developed as an outcome of this study, which enables the prediction of flood events 5 h in advance and with high accuracy of 80%.
21

Hikmaturokhman, Alfin, Hanin Nafi’ah, Solichah Larasati, Ade Wahyudin, Galih Ariprawira, and Subuh Pramono. "Deep Learning Algorithm Models for Spam Identification on Cellular Short Message Service". Journal of Communications, 2022, 769–76. http://dx.doi.org/10.12720/jcm.17.9.769-776.

Full text
Abstract
Nowadays, the types and products of cellular telecommunications services are very diverse, especially with the advent of 5G technology, which makes telecommunications service products such as voice, video, and text messages rely on data packages. Even though the digital era is growing rapidly, the Short Messaging Service (SMS) is still relevant and used as a telecommunication service despite the many sophisticated instant messaging services that rely on the internet. Smartphone users, especially in Indonesia, are often harassed by spam messages with deceptive content; such SMS messages come from unknown numbers and contain a message or link leading to a fraudulent site. This study develops a deep learning model to predict whether a short text message (SMS) is important or spam. The research domain belongs to Natural Language Processing (NLP) for text processing. The models used are a Dense Network, Long Short-Term Memory (LSTM), and Bi-directional Long Short-Term Memory (Bi-LSTM). In the evaluation, the Dense Network model produces a loss of 14.22% and an accuracy of 95.63%, the LSTM model a loss of 19.89% and an accuracy of 94.76%, and the Bi-LSTM model a loss of 19.88% and an accuracy of 94.75%.
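A minimal Keras version of the Bi-LSTM classifier described above is sketched below. The vocabulary size, sequence length, layer widths, and the two toy messages are placeholders, not the paper's configuration or dataset.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Toy SMS corpus: labels 1 = spam, 0 = ham (placeholder data pipeline).
texts = tf.constant(["Selamat! Anda memenangkan hadiah, klik tautan ini", "Jangan lupa rapat jam 9"])
labels = tf.constant([1, 0])

vectorizer = layers.TextVectorization(max_tokens=10000, output_sequence_length=50)
vectorizer.adapt(texts)

model = tf.keras.Sequential([
    vectorizer,
    layers.Embedding(input_dim=10000, output_dim=64),
    layers.Bidirectional(layers.LSTM(32)),      # Bi-LSTM variant; drop Bidirectional for plain LSTM
    layers.Dense(16, activation="relu"),
    layers.Dense(1, activation="sigmoid"),      # probability that the SMS is spam
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(texts, labels, epochs=5, verbose=0)
```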
22

Dhotre, Amol S., Abhishek S. Chandurkar, and Sumedh S. Jadhav. "Design of a GSM Cell – Phone based Vehicle Monitoring & Theft Security System". International Journal of Electronics and Electrical Engineering, April 2013, 272–76. http://dx.doi.org/10.47893/ijeee.2013.1051.

Full text
Abstract
This project focuses on developing an enhancement of a vehicle alarm security system via SMS. The system manipulates a mobile phone to send SMS (Short Message Service) messages. Even though an SMS can be sent using the features available on the phone, the objective of this project is to trigger the sending of the SMS from an external program physically connected to the mobile phone. The study of telecommunication is an interesting field because it involves digital signal processing, signals and systems, programming and more, which inspires people to adapt the technology for daily use. In this project, telecommunication technology, specifically SMS, is integrated into the present vehicle security system. Instead of human-to-human telecommunication, this system creates a new entity: machine-to-human telecommunication. The system upgrades and improves vehicle security by integrating SMS features to alert vehicle owners whenever an intrusion occurs. The project involves the construction of hardware and software parts and their integration to create the system. We succeeded in achieving the objective and, in fact, added another feature that initiates a call to the owner after sending the SMS. At the end of the project, we document all the hardware and software development and provide a simulation model of the system. An interfacing mobile phone is connected to the microcontroller, which is in turn connected to the engine. Once the vehicle is stolen, the information is used by the vehicle owner for further processing. The information is passed to the central processing insurance system in the form of an SMS; the microcontroller unit reads the SMS and sends it to the Global Positioning System (GPS) module, and using the triangulation method, the GPS module feeds the exact location, in the form of latitude and longitude, to the user's mobile. By reading the signals received by the mobile, one can control the ignition of the engine.
23

Ayoola, Kehinde A. "Manipulative Use of Short Messaging Service (SMS) Text Messages by Nigerian Telecommunications Companies". Linguistik Online 63, no. 1 (March 6, 2014). http://dx.doi.org/10.13092/lo.63.1323.

Full text
Abstract
This paper is an application of Relevance Theory for the interpretation of short messaging service (SMS) text messages emanating from Nigerian telecommunications companies to their subscribers. The aim of the research was to identify and describe the manipulative strategies employed by Nigerian telecommunications companies to induce subscribers to part with their money through sales promotion lotteries. 100 SMS texts were purposively extracted from the cell phones of randomly selected residents of Lagos Nigeria who had received promotional SMS text messages from three major Nigerian telecommunications companies. Using Sperber and Wilson's Relevance Theory (1995) as its theoretical framework, the paper described the manipulative use of SMS by Nigerian telecommunications companies. The analysis revealed that SMS text messages were encoded to achieve maximization of relevance through explicature and implicature; contextual implication and strengthening; and the reduction of processing effort through violating the maxim of truthfulness and the creative use of graphology. The paper concludes that SMS text-messages were used manipulatively by Nigerian telecommunications companies to earn indirect income from sales promotion lottery.
24

Mershad, Khaleel, Hayssam Dahrouj, Hadi Sarieddeen, Basem Shihada, Tareq Al-Naffouri, and Mohamed-Slim Alouini. "Cloud-Enabled High-Altitude Platform Systems: Challenges and Opportunities". Frontiers in Communications and Networks 2 (July 16, 2021). http://dx.doi.org/10.3389/frcmn.2021.716265.

Full text
Abstract
Augmenting ground-level communications with flying networks, such as the high-altitude platform system (HAPS), is among the major innovative initiatives of the next generation of wireless systems (6G). Given HAPS quasi-static positioning at the stratosphere, HAPS-to-ground and HAPS-to-air connectivity frameworks are expected to be prolific in terms of data acquisition and computing, especially given the mild weather and quasi-constant wind speed characteristics of the stratospheric layer. This paper explores the opportunities stemming from the realization of cloud-enabled HAPS in the context of telecommunications applications and services. The paper first advocates for the potential physical advantages of deploying HAPS as flying data-centers, also known as super-macro base stations. The paper then describes various cloud services that can be offered from the HAPS and the merits that can be achieved by this integration, such as enhancing the quality, speed, and range of the offered services. The proposed services span a wide range of fields, including satellites, Internet of Things (IoT), ad hoc networks (such as sensor; vehicular; and aerial networks), gaming, and social networks. For each service, the paper illustrates the methods that would be used by cloud providers to offload the service data to the HAPS and enable the cloud customers to consume the service. The paper further sheds light on the challenges that need to be addressed for realizing practical cloud-enabled HAPS, mainly, those related to high energy, processing power, quality of service (QoS), and security considerations. Finally, the paper discusses some open issues on the topic, namely, HAPS mobility and message routing, HAPS security via blockchain and machine learning, artificial intelligence-based resource allocation in cloud-enabled HAPS, and integration with vertical heterogeneous networks.
25

"Sensor node for wireless radiation monitoring network". Bulletin of V.N. Karazin Kharkiv National University, series «Mathematical modeling. Information technology. Automated control systems», n.º 44 (2019). http://dx.doi.org/10.26565/2304-6201-2019-44-09.

Texto completo
Resumen
The article describes the structure of a sensor node for a wireless environmental radiation monitoring network. The sensor node is built around a semiconductor detector, modern microprocessor technology, and a latest-generation telecommunications radio module. A new algorithm for measuring the exposure dose rate of ionizing radiation is investigated. The amount of ionizing radiation energy absorbed by the human body radically affects the degree of radiation damage to its functional organs. To address this problem, we are working on improving the parameters of the detectors and the characteristics of the electronic modules of the detection systems, as well as creating software for controlling the detection process, collecting and processing information digitally, and presenting it properly to users in online mode. A wireless sensor network (WSN) is a distributed, self-organizing network of multiple sensors and actuators containing "motes" (specks of dust, so named because of the tendency toward miniaturization), combined with each other through a radio channel. The coverage area of such a network can range from several meters to several kilometers thanks to the ability to relay messages from one element to another. The motes usually contain battery-powered autonomous microcomputers (controllers) and transceivers, which allows them to self-organize into specialized networks, communicate with each other and exchange data. The role of the human changes significantly in the sensor-network model, since its elements, the sensor microcomputers, become much more independent, often anticipating human requests long before they are received. The "homocentric" model of network computing with a human as the central link belongs to the past: the human moves from the center to the periphery and concentrates on managing the process, becoming a kind of intermediary between the real world and computers.
26

Goggin, Gerard. "SMS Riot: Transmitting Race on a Sydney Beach, December 2005". M/C Journal 9, no. 1 (March 1, 2006). http://dx.doi.org/10.5204/mcj.2582.

Full text
Abstract
My message is this in regard to SMS messages and swarming crowds; this is ludicrous behaviour; it is unAustralian. We all share this wonderful country. (NSW Police Assistant Commissioners Mark Goodwin, quoted in Kennedy) The cops hate and fear the swarming packs of Lebanese who respond when some of their numbers are confronted, mobilising quickly via mobile phones and showing open contempt for Australian law. All this is the real world, as distinct from the world preferred by ideological academics who talk about “moral panic” and the oppression of Muslims. They will see only Australian racism as the problem. (Sheehan) The Politics of Transmission On 11 December 2005, as Sydney was settling into early summer haze, there was a race riot on the popular Cronulla beach in the city’s southern suburbs. Hundreds of people, young men especially, gathered for a weekend protest. Their target and pretext were visitors from the culturally diverse suburbs to the west, and the need to defend their women and beaches in the face of such unwelcome incursions and behaviours. In the ensuing days, there were violent raids and assaults criss-crossing back and forth across Sydney’s beaches and suburbs, involving almost farcical yet deadly earnest efforts to identify, respectively, people of “anglo” or “Middle Eastern” appearance (often specifically “Lebanese”) and to threaten or bash them. At the very heart of this state of siege and the fear, outrage, and sadness that gripped those living in Sydney were the politics of transmission. The spark that set off this conflagration was widely believed to have been caused by the transmission of racist and violent “calls to arms” via mobile text messages. Predictably perhaps media outlets sought out experts on text messaging and cell phone culture for commentary, including myself and most mainstream media appeared interested in portraying a fascination for texting and reinforcing its pivotal role in the riots. In participating in media interviews, I found myself torn between wishing to attest to the significance and importance of cell phone culture and texting, on the one hand (or thumb perhaps), while being extremely sceptical about its alleged power in shaping these unfolding events, on the other — not to mention being disturbed about the ethical implications of what had unfolded. In this article, I wish to discuss the subject of transmission and the power of mobile texting culture, something that attracted much attention elsewhere — and to which the Sydney riots offer a fascinating and instructive lesson. My argument runs like this. Mobile phone culture, especially texting, has emerged over the past decade, and has played a central role in communicative and cultural practice in many countries and contexts as scholars have shown (Glotz and Bertschi; Harper, Palen and Taylor). Among other features, texting often plays a significant, if not decisive, role in co-ordinated as well as spontaneous social and political organization and networks, if not, on occasion, in revolution. However, it is important not to over-play the role, significance and force of such texting culture in the exercise of power, or the formation of collective action and identities (whether mobs, crowds, masses, movements, or multitudes). I think texting has been figured in such a hyperbolic and technological determinist way, especially, and ironically, through how it has been represented in other media (print, television, radio, and online). 
The difficulty then is to identify the precise contribution of mobile texting in organized and disorganized social networks, without the antimonies conferred alternatively by dystopian treatments (such as moral panic) or utopian ones (such as the technological sublime) — something which I shall try to elucidate in what follows. On the Beach Again Largely caught unawares and initially slow to respond, the New South Wales state government responded with a massive show of force and repression. 2005 had been marked by the state and Federal enactment of draconian terror laws. Now here was an opportunity for the government to demonstrate the worth of the instruments and rationales for suppression of liberties, to secure public order against threats of a more (un)civil than martial order. Outflanking the opposition party on law-and-order rhetoric once again, the government immediately formulated new laws to curtail accused and offender’s rights (Brown). The police “locked” down whole suburbs — first Cronulla, then others — and made a show of policing all beaches north and south (Sydney Morning Herald). The race riots were widely reported in the international press, and, not for the first time (especially since the recent Redfern and Macquarie Fields), the city’s self-image of a cosmopolitan, multicultural nation (or in Australian Prime Minister John Howard’s prim and loaded terms, a nation “relaxed and comfortable”) looked like a mirage. Debate raged on why the riots occurred, how harmony could be restored and what the events signified for questions of race and identity — the latter most narrowly construed in the Prime Minister’s insistence that the riots did not reflect underlying racism in Australia (Dodson, Timms and Creagh). There were suggestions that the unrest was rather at base about the contradictions and violence of masculinity, some two-odd decades after Puberty Blues — the famous account of teenage girls growing up on the (Cronulla) Shire beaches. Journalists agonized about whether the media amounted to reporter or amplifier of tensions. In the lead-up to the riots, at their height, and in their wake, there was much emphasis on the role mobile text messages played in creating the riots and sustaining the subsequent atmosphere of violence and racial tension (The Australian; Overington and Warne-Smith). Not only were text messages circulating in the Sydney area, but in other states as well (Daily Telegraph). The volume of such text messages and emails also increased in the wake of the riot (certainly I received one personally from a phone number I did not recognise). New messages were sent to exhort Lebanese-Australians and others to fight back. Those decrying racism, such as the organizers of a rally, pointedly circulated text messages, hoping to spread peace. Media commentators, police, government officials, and many others held such text messages directly and centrally responsible for organizing the riot and for the violent scuffles that followed: The text message hate mail that inspired 5000 people to attend the rally at Cronulla 10 days ago demonstrated to the police the power of the medium. The retaliation that followed, when gangs marauded through Maroubra and Cronulla, was also co-ordinated by text messaging (Davies). It is rioting for a tech-savvy generation. Mobile phones are providing the call to arms for the tribes in the race war dividing Sydney. 
More than 5000 people turn up to Cronulla on Sunday … many were drawn to the rally, which turned into a mob, by text messages on their mobiles (Hayes and Kearney). Such accounts were crucial to the international framing of the events as this report from The Times in London illustrates: In the days leading up to the riot racist text messages had apparently been circulating calling upon concerned “white” Australians to rally at Cronulla to defend their beach and women. Following the attacks on the volunteer lifeguards, a mobile telephone text campaign started, backed up by frenzied discussions on weblogs, calling on Cronulla locals to rally to protect their beach. In response, a text campaign urged youths from western Sydney to be at Cronulla on Sunday to protect their friends (Maynard). There were calls upon the mobile companies to intercept and ban such messages, with industry spokespeople pointing out text messages were usually only held for twenty-four hours and were in many ways more difficult to intercept than it was to tap phone calls (Burke and Cubby). Mobs and Messages I think there are many reasons to suggest that the transmission of text messages did constitute a moral panic (what I’ve called elsewhere a “mobile panic”; see Goggin), pace columnist Paul Sheehan. Notably the wayward texting drew a direct and immediate response from the state government, with legislative changes that included provisions allowing the confiscation of cell phones and outlawing sending, receipt or keeping of racist or inflammatory text messages. For some days police proceeded to stop cars and board buses and demand to inspect mobiles, checking and reading text messages, arresting at least one person for being responsible for transmitting banned text messages. However, there is another important set of ideas adduced by commentators to explain how people came together to riot in Sydney, taking their cue from Howard Rheingold’s 2002 book Smart Mobs, a widely discussed and prophetic text on social revolution and new technologies. Rheingold sees text messaging as the harbinger of such new, powerful forms of collectivity, studying emergent uses around the world. A prime example he uses to illustrate the “power of the mobile many” is the celebrated overthrow of President Joseph Estrada of the Philippines in January 2001: President Joseph Estrada of the Philippines became the first head of state in history to lose power to a smart mob. More than 1 million Manila residents, mobilized and coordinated by waves of text messages, assembled … Estrada fell. The legend of “Generation Txt” was born (Rheingold 157-58). Rheingold is careful to emphasize the social as much as technical nature of this revolution, yet still sees such developments leading to “smart mobs”. As with his earlier, prescient book Virtual Community (Rheingold 1993) did for the Internet, so has Smart Mobs compellingly fused and circulated a set of ideas about cell phones and the pervasive, wearable and mobile technologies that are their successors. The received view of the overthrow of the Estrada government is summed up in a remark attributed to Estrada himself: “I was ousted by a coup d’text” (Pertierra et al. ch. 6). The text-toppling of Estrada is typically attributed to “Generation Txt”, underlining the power of text messaging and the new social category which marks it, and has now passed into myth. What is less well-known is that the overriding role of the cell phone in the Estrada overthrow has been challenged. 
In the most detailed study of text messaging and subjectivity in the Philippines, which reviewed accounts of the events of the Estrada overthrow, as well as conducting interviews with participants, Pertierra et al. discern in EDSA2 a “utopian vision of the mobile phone that is characteristic of ‘discourses of sublime technology’”: It focuses squarely on the mobile phone, and ignores the people who used it … the technology is said to possess a mysterious force, called “Text Power” ... it is the technology that does things — makes things happen — not the people who use it. (Pertierra et al. ch. 6) Given the recrudescence of the technological sublime in digital media (on which see Mosco) the detailed examination of precise details and forms of agency and coordination using cell phones is most instructive. Pertierra et al. confirm that the cell phone did play an important role in EDSA2 (the term given to the events surrounding the downfall of Estrada). That role, however, was not the one for which it has usually been praised in the media since the event — namely, that of crowd-drawer par excellence … less than half of our survey respondents who took part in People Power 2 noted that text messaging influenced them to go. If people did attend, it was because they were persuaded to by an ensemble of other reasons … (2002: ch. 6) Instead, they argue, the significance of the cell phone lay firstly, in the way it helped join people who disapproved of Pres. Estrada in a network of complex connectivity … Secondly, the mobile phone was instrumental as an organizational device … In the hands of activists and powerbrokers from politics, the military, business groups and civil society, the mobile phone becomes a “potent communications tool” … (Pertierra et al. 2002: ch. 6) What this revisionist account of the Estrada coup underscores is that careful research and analysis is required to understand how SMS is used and what it signifies. Indeed it is worth going further to step back from either the celebratory or minatory discourses on the cell phone and its powerful effects, and reframe this set of events as very much to do with the mutual construction of society and technology, in which culture is intimately involved. This involves placing both the technology of text messaging and the social and political forces manifested in this uprising in a much wider setting. For instance, in his account of the Estrada crisis Vicente L. Rafael terms the tropes of text messaging and activism evident in the discourses surrounding it as: a set of telecommunicative fantasies among middle-class Filipinos … [that] reveal certain pervasive beliefs of the middle classes … in the power of communication technologies to transmit messages at a distance and in their own ability to possess that power (Rafael 399). For Rafael, rather than possessing instrinsic politics in its own right, text messaging here is about a “media politics (understood in both senses of the phrase: the politics of media systems, but also the inescapable mediation of the political) [that] reveal the unstable workings of Filipino middle-class sentiments” (400). “Little Square of Light” Doubtless there are emergent cultural and social forms created in conjunction with new technologies, which unfreeze and open up (for a time) social relations. As my discussion of the Estrada “coup d’text” shows, however, the dynamics of media, politics and technology in any revolution or riot need to be carefully traced. 
A full discussion of mobile media and the Sydney uprising will need to wait for another occasion. However, it is worth noting that the text messages in question to which the initial riot had been attributed, were actually read out on one of the country’s highest-rating and most influential talk-radio programs. The contents of such messages had also been detailed in print media, especially tabloids, and been widely discussed (McLellan, Marr). What remains unknown and unclear, however, is the actual use of text messages and cell phones in the conceiving, co-ordination, and improvisational dynamics of the riots, and affective, cultural processing of what occurred. Little retrospective interpretation at all has emerged in the months since the riots, but it certainly felt as if the police and state’s over-reaction, and the arrival of the traditionally hot and lethargic Christmas — combined with the underlying structures of power and feeling to achieve the reinstitution of calm, or rather perhaps the habitual, much less invisible, expression of whiteness as usual. The policing of the crisis had certainly been fuelled by the mobile panic, but setting law enforcement the task of bringing those text messages to book was much like asking them to catch the wind. For analysts, as well as police, the novel and salience appearance of texting also has a certain lure. Yet in concentrating on the deadly power of the cell phone to conjure up a howling or smart mob, or in the fascination with the new modes of transmission of mobile devices, it is important to give credit to the formidable, implacable role of media and cultural representations more generally, in all this, as they are transmitted, received, interpreted and circulated through old as well as new modes, channels and technologies. References The Australian. “SMS Message Goes Out: Let’s March for Racial Tolerance.” The Australian. 17 September, 2005. 6. Brown, M. “Powers Tested in the Text”. Sydney Morning Herald. 20 December, 2005. 7. Burke, K. and Cubby, B. “Police Track Text Message Senders”. Sydney Morning Herald, 23-25 December, 2005. 7. Daily Telegraph. “Police Intercept Interstate Riot SMS — Race Riot: Flames of Fear.” Daily Telegraph. 15 December, 2005. 5. Davis, A. “Flying Bats Rang Alarm”. Sydney Morning Herald. 21 December, 2005. 1, 5. Dodson, L., Timms, A. and Creagh, S. “Tourism Starts Counting the Cost of Race Riots”, Sydney Morning Herald. 21 December, 2005. 1. Goggin, G. Cell Phone Culture: Mobile Technology in Everyday Life. London: Routledge, 2006. In press. Glotz, P., and Bertschi, S. (ed.) Thumb Culture: Social Trends and Mobile Phone Use, Bielefeld: Transcript Verlag. Harper, R., Palen, L. and Taylor, A. (ed.)_ _The Inside Text: Social, Cultural and Design Perspectives on SMS. Dordrecht: Springer. Hayes, S. and Kearney, S. “Call to Arms Transmitted by Text”. Sydney Morning Herald. 13 December, 2005. 4. Kennedy, L. “Police Act Swiftly to Curb Attacks”. Sydney Morning Herald. 13 December, 2005. 6. Maynard, R. “Battle on Beach as Mob Vows to Defend ‘Aussie Way of Life.’ ” The Times. 12 December 2005. 29. Marr, D. “One-Way Radio Plays by Its Own Rules.” Sydney Morning Herald. 13 December, 2005. 6. McLellan, A. “Solid Reportage or Fanning the Flames?” The Australian. 15 December, 2005. 16. Mosco, V. The Digital Sublime: Myth, Power, and Cyberspace. Cambridge, MA: MIT Press, 2004. Overington, C. and Warne-Smith, D. “Countdown to Conflict”. The Australian. 17 December, 2005. 17, 20. Pertierra, R., E.F. Ugarte, A. Pingol, J. 
Hernandez, and N.L. Dacanay. Txt-ing Selves: Cellphones and Philippine Modernity. Manila: De La Salle University Press, 2002. 1 January 2006 <http://www.finlandembassy.ph/texting1.htm>. Rafael, V. L. “The Cell Phone and the Crowd: Messianic Politics in the Contemporary Philippines.” Public Culture 15 (2003): 399-425. Rheingold, H. Smart Mobs: The Next Social Revolution. Cambridge, MA: Perseus, 2002. Sheehan, P. “Nasty Reality Surfs In as Ugly Tribes Collide”. Sydney Morning Herald. 12 December, 2005. 13. Sydney Morning Herald. “Beach Wars 1: After Lockdown”. Editorial. Sydney Morning Herald. 20 December, 2005. 12.
Citation reference for this article
MLA Style: Goggin, Gerard. "SMS Riot: Transmitting Race on a Sydney Beach, December 2005." M/C Journal 9.1 (2006). <http://journal.media-culture.org.au/0603/02-goggin.php>.
APA Style: Goggin, G. (Mar. 2006). "SMS Riot: Transmitting Race on a Sydney Beach, December 2005," M/C Journal, 9(1). Retrieved from <http://journal.media-culture.org.au/0603/02-goggin.php>.
APA, Harvard, Vancouver, ISO, and other styles
27

Osman Goni and Abu Shameem. "Implementation of Local Area Network (LAN) & Build a Secure LAN System for BAEC Head Quarter". Research Journal of Computer Science and Engineering, 5 June 2021, 1–15. http://dx.doi.org/10.36811/rjcse.2021.110003.

Full text
Abstract
Network security is the process of taking physical and software preventative measures to protect the underlying networking infrastructure from unauthorized access, misuse, malfunction, modification, destruction, or improper disclosure, thereby creating a secure platform for computers, users, and programs to perform their permitted critical functions within a secure environment. A local area network (LAN) is a computer network within a small geographical area such as a home, school, computer laboratory, office building or group of buildings. A LAN is composed of inter-connected workstations and personal computers which are each capable of accessing and sharing data and devices, such as printers, scanners and data storage devices, anywhere on the LAN. LANs are characterized by higher communication and data transfer rates and the lack of any need for leased communication lines. A data network is an interconnected system of computers, peripherals and software over which data files and messages are sent and received. A LAN is only one type of computer network; it can be defined as a data communication system allowing a number of independent devices to communicate directly with each other, within a moderately sized geographic area, over a physical communications channel at moderate data rates. Fiber-optic communication is a method of transmitting information from one place to another by sending pulses of infrared light through an optical fiber. The light is a form of carrier wave that is modulated to carry information. Fiber is preferred over electrical cabling when high bandwidth, long distance, or immunity to electromagnetic interference is required. This type of communication can transmit voice, video, and telemetry through local area networks or across long distances. Optical fiber is used by many telecommunications companies to transmit telephone signals, Internet communication, and cable television signals. Researchers at Bell Labs have reached a record bandwidth-distance product of over 100 petabit × kilometers per second using fiber-optic communication. Communication between remote parties can be achieved through a process called networking, involving the connection of computers, media and networking devices. When we talk about networks, we need to keep in mind three concepts: distributed processing, network criteria and network structure. The purpose of this work is to design a Local Area Network (LAN) for the BAEC (Bangladesh Atomic Energy Commission) Head Quarter and implement security measures to protect network resources and system services. To do so, we will deal with the physical and logical design of a LAN. The goal is to examine the Local Area Network set up for the BAEC HQ and build a secure LAN system. Keywords: LAN, Secure LAN, BTCL, UTP, RJ-45, Bandwidth, Wavelength, ISP, Firewall, BAEC
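The logical design step mentioned in this abstract usually amounts to partitioning an address block into subnets for different user groups. As a minimal sketch of that step only, the following Python fragment divides a hypothetical private block into departmental subnets; the address range, prefix lengths, and department names are illustrative assumptions, not details taken from the BAEC design.

import ipaddress

# Assume the headquarters LAN is assigned one private /24 block (hypothetical).
campus_block = ipaddress.ip_network("192.168.10.0/24")

# Split it into four /26 subnets, e.g. for admin, research, servers and guests.
subnets = list(campus_block.subnets(new_prefix=26))
labels = ["admin", "research", "servers", "guests"]

for label, net in zip(labels, subnets):
    usable_hosts = net.num_addresses - 2  # network and broadcast addresses are reserved
    print(f"{label:8s} {net}  usable hosts: {usable_hosts}")

Run as an ordinary Python 3 script, this prints four subnets with 62 usable host addresses each; a real design would then map such subnets onto VLANs and firewall rules rather than a flat printout.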
APA, Harvard, Vancouver, ISO, and other styles
28

Maxwell, Richard and Toby Miller. "The Real Future of the Media". M/C Journal 15, no. 3 (27 June 2012). http://dx.doi.org/10.5204/mcj.537.

Full text
Abstract
When George Orwell encountered ideas of a technological utopia sixty-five years ago, he acted the grumpy middle-aged man:
Reading recently a batch of rather shallowly optimistic “progressive” books, I was struck by the automatic way in which people go on repeating certain phrases which were fashionable before 1914. Two great favourites are “the abolition of distance” and “the disappearance of frontiers”. I do not know how often I have met with the statements that “the aeroplane and the radio have abolished distance” and “all parts of the world are now interdependent” (1944).
It is worth revisiting the old boy’s grumpiness, because the rhetoric he so niftily skewers continues in our own time. Facebook features “Peace on Facebook” and even claims that it can “decrease world conflict” through inter-cultural communication. Twitter has announced itself as “a triumph of humanity” (“A Cyber-House” 61). Queue George. In between Orwell and latter-day hoody cybertarians, a whole host of excitable public intellectuals announced the impending end of materiality through emergent media forms. Marshall McLuhan, Neil Postman, Daniel Bell, Ithiel de Sola Pool, George Gilder, Alvin Toffler—the list of 1960s futurists goes on and on. And this wasn’t just a matter of punditry: the OECD decreed the coming of the “information society” in 1975 and the European Union (EU) followed suit in 1979, while IBM merrily declared an “information age” in 1977. Bell theorized this technological utopia as post-ideological, because class would cease to matter (Mattelart). Polluting industries seemingly no longer represented the dynamic core of industrial capitalism; instead, market dynamism radiated from a networked, intellectual core of creative and informational activities. The new information and knowledge-based economies would rescue First World hegemony from an “insurgent world” that lurked within as well as beyond itself (Schiller). Orwell’s others and the Cold-War futurists propagated one of the most destructive myths shaping both public debate and scholarly studies of the media, culture, and communication. They convinced generations of analysts, activists, and arrivistes that the promises and problems of the media could be understood via metaphors of the environment, and that the media were weightless and virtual. The famous medium they wished us to see as the message—a substance as vital to our wellbeing as air, water, and soil—turned out to be no such thing. Today’s cybertarians inherit their anti-Marxist, anti-materialist positions, as a casual glance at any new media journal, culture-industry magazine, or bourgeois press outlet discloses. The media are undoubtedly important instruments of social cohesion and fragmentation, political power and dissent, democracy and demagoguery, and other fraught extensions of human consciousness. But talk of media systems as equivalent to physical ecosystems—fashionable among marketers and media scholars alike—is predicated on the notion that they are environmentally benign technologies. This has never been true, from the beginnings of print to today’s cloud-covered computing. Our new book Greening the Media focuses on the environmental impact of the media—the myriad ways that media technology consumes, despoils, and wastes natural resources. We introduce ideas, stories, and facts that have been marginal or absent from popular, academic, and professional histories of media technology.
Throughout, ecological issues have been at the core of our work and we immodestly think the same should apply to media communications, and cultural studies more generally. We recognize that those fields have contributed valuable research and teaching that address environmental questions. For instance, there is an abundant literature on representations of the environment in cinema, how to communicate environmental messages successfully, and press coverage of climate change. That’s not enough. You may already know that media technologies contain toxic substances. You may have signed an on-line petition protesting the hazardous and oppressive conditions under which workers assemble cell phones and computers. But you may be startled, as we were, by the scale and pervasiveness of these environmental risks. They are present in and around every site where electronic and electric devices are manufactured, used, and thrown away, poisoning humans, animals, vegetation, soil, air and water. We are using the term “media” as a portmanteau word to cover a multitude of cultural and communications machines and processes—print, film, radio, television, information and communications technologies (ICT), and consumer electronics (CE). This is not only for analytical convenience, but because there is increasing overlap between the sectors. CE connect to ICT and vice versa; televisions resemble computers; books are read on telephones; newspapers are written through clouds; and so on. Cultural forms and gadgets that were once separate are now linked. The currently fashionable notion of convergence doesn’t quite capture the vastness of this integration, which includes any object with a circuit board, scores of accessories that plug into it, and a global nexus of labor and environmental inputs and effects that produce and flow from it. In 2007, a combination of ICT/CE and media production accounted for between 2 and 3 percent of all greenhouse gases emitted around the world (“Gartner Estimates,”; International Telecommunication Union; Malmodin et al.). Between twenty and fifty million tonnes of electronic waste (e-waste) are generated annually, much of it via discarded cell phones and computers, which affluent populations throw out regularly in order to buy replacements. (Presumably this fits the narcissism of small differences that distinguishes them from their own past.) E-waste is historically produced in the Global North—Australasia, Western Europe, Japan, and the US—and dumped in the Global South—Latin America, Africa, Eastern Europe, Southern and Southeast Asia, and China. It takes the form of a thousand different, often deadly, materials for each electrical and electronic gadget. This trend is changing as India and China generate their own media detritus (Robinson; Herat). Enclosed hard drives, backlit screens, cathode ray tubes, wiring, capacitors, and heavy metals pose few risks while these materials remain encased. But once discarded and dismantled, ICT/CE have the potential to expose workers and ecosystems to a morass of toxic components. Theoretically, “outmoded” parts could be reused or swapped for newer parts to refurbish devices. But items that are defined as waste undergo further destruction in order to collect remaining parts and valuable metals, such as gold, silver, copper, and rare-earth elements. This process causes serious health risks to bones, brains, stomachs, lungs, and other vital organs, in addition to birth defects and disrupted biological development in children. 
Medical catastrophes can result from lead, cadmium, mercury, other heavy metals, poisonous fumes emitted in search of precious metals, and such carcinogenic compounds as polychlorinated biphenyls, dioxin, polyvinyl chloride, and flame retardants (Maxwell and Miller 13). The United States’ Environmental Protection Agency estimates that by 2007 US residents owned approximately three billion electronic devices, with an annual turnover rate of 400 million units, and well over half such purchases made by women. Overall CE ownership varied with age—adults under 45 typically boasted four gadgets; those over 65 made do with one. The Consumer Electronics Association (CEA) says US$145 billion was expended in the sector in 2006 in the US alone, up 13% on the previous year. The CEA refers joyously to a “consumer love affair with technology continuing at a healthy clip.” In the midst of a recession, 2009 saw $165 billion in sales, and households owned between fifteen and twenty-four gadgets on average. By 2010, US$233 billion was spent on electronic products, three-quarters of the population owned a computer, nearly half of all US adults owned an MP3 player, and 85% had a cell phone. By all measures, the amount of ICT/CE on the planet is staggering. As investigative science journalist, Elizabeth Grossman put it: “no industry pushes products into the global market on the scale that high-tech electronics does” (Maxwell and Miller 2). In 2007, “of the 2.25 million tons of TVs, cell phones and computer products ready for end-of-life management, 18% (414,000 tons) was collected for recycling and 82% (1.84 million tons) was disposed of, primarily in landfill” (Environmental Protection Agency 1). Twenty million computers fell obsolete across the US in 1998, and the rate was 130,000 a day by 2005. It has been estimated that the five hundred million personal computers discarded in the US between 1997 and 2007 contained 6.32 billion pounds of plastics, 1.58 billion pounds of lead, three million pounds of cadmium, 1.9 million pounds of chromium, and 632000 pounds of mercury (Environmental Protection Agency; Basel Action Network and Silicon Valley Toxics Coalition 6). The European Union is expected to generate upwards of twelve million tons annually by 2020 (Commission of the European Communities 17). While refrigerators and dangerous refrigerants account for the bulk of EU e-waste, about 44% of the most toxic e-waste measured in 2005 came from medium-to-small ICT/CE: computer monitors, TVs, printers, ink cartridges, telecommunications equipment, toys, tools, and anything with a circuit board (Commission of the European Communities 31-34). Understanding the enormity of the environmental problems caused by making, using, and disposing of media technologies should arrest our enthusiasm for them. But intellectual correctives to the “love affair” with technology, or technophilia, have come and gone without establishing much of a foothold against the breathtaking flood of gadgets and the propaganda that proclaims their awe-inspiring capabilities.[i] There is a peculiar enchantment with the seeming magic of wireless communication, touch-screen phones and tablets, flat-screen high-definition televisions, 3-D IMAX cinema, mobile computing, and so on—a totemic, quasi-sacred power that the historian of technology David Nye has named the technological sublime (Nye Technological Sublime 297).[ii] We demonstrate in our book why there is no place for the technological sublime in projects to green the media. 
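To make those totals easier to grasp, a back-of-envelope calculation (ours, not the book's) divides the figures cited above for the roughly 500 million PCs discarded in the US between 1997 and 2007 by the number of machines:

DISCARDED_PCS = 500_000_000  # approximate figure cited above

# Totals in pounds, as cited from the EPA and Basel Action Network reports.
totals_in_pounds = {
    "plastics": 6.32e9,
    "lead": 1.58e9,
    "cadmium": 3.0e6,
    "chromium": 1.9e6,
    "mercury": 632_000,
}

for material, pounds in totals_in_pounds.items():
    print(f"{material:9s} ~{pounds / DISCARDED_PCS:.4f} lb per discarded PC")

On these figures, each discarded machine carried roughly 12.6 lb of plastics and 3.2 lb of lead, with the heavy metals present in much smaller but still toxic quantities.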
But first we should explain why such symbolic power does not accrue to more mundane technologies; after all, for the time-strapped cook, a pressure cooker does truly magical things. Three important qualities endow ICT/CE with unique symbolic potency—virtuality, volume, and novelty. The technological sublime of media technology is reinforced by the “virtual nature of much of the industry’s content,” which “tends to obscure their responsibility for a vast proliferation of hardware, all with high levels of built-in obsolescence and decreasing levels of efficiency” (Boyce and Lewis 5). Planned obsolescence entered the lexicon as a new “ethics” for electrical engineering in the 1920s and ’30s, when marketers, eager to “habituate people to buying new products,” called for designs to become quickly obsolete “in efficiency, economy, style, or taste” (Grossman 7-8).[iii] This defines the short lifespan deliberately constructed for computer systems (drives, interfaces, operating systems, batteries, etc.) by making tiny improvements incompatible with existing hardware (Science and Technology Council of the American Academy of Motion Picture Arts and Sciences 33-50; Boyce and Lewis). With planned obsolescence leading to “dizzying new heights” of product replacement (Rogers 202), there is an overstated sense of the novelty and preeminence of “new” media—a “cult of the present” is particularly dazzled by the spread of electronic gadgets through globalization (Mattelart and Constantinou 22). References to the symbolic power of media technology can be found in hymnals across the internet and the halls of academe: technologies change us, the media will solve social problems or create new ones, ICTs transform work, monopoly ownership no longer matters, journalism is dead, social networking enables social revolution, and the media deliver a cleaner, post-industrial, capitalism. Here is a typical example from the twilight zone of the technological sublime (actually, the OECD): A major feature of the knowledge-based economy is the impact that ICTs have had on industrial structure, with a rapid growth of services and a relative decline of manufacturing. Services are typically less energy intensive and less polluting, so among those countries with a high and increasing share of services, we often see a declining energy intensity of production … with the emergence of the Knowledge Economy ending the old linear relationship between output and energy use (i.e. partially de-coupling growth and energy use) (Houghton 1) This statement mixes half-truths and nonsense. In reality, old-time, toxic manufacturing has moved to the Global South, where it is ascendant; pollution levels are rising worldwide; and energy consumption is accelerating in residential and institutional sectors, due almost entirely to ICT/CE usage, despite advances in energy conservation technology (a neat instance of the age-old Jevons Paradox). In our book we show how these are all outcomes of growth in ICT/CE, the foundation of the so-called knowledge-based economy. ICT/CE are misleadingly presented as having little or no material ecological impact. In the realm of everyday life, the sublime experience of electronic machinery conceals the physical work and material resources that go into them, while the technological sublime makes the idea that more-is-better palatable, axiomatic; even sexy. 
In this sense, the technological sublime relates to what Marx called “the Fetishism which attaches itself to the products of labour” once they are in the hands of the consumer, who lusts after them as if they were “independent beings” (77). There is a direct but unseen relationship between technology’s symbolic power and the scale of its environmental impact, which the economist Juliet Schor refers to as a “materiality paradox” —the greater the frenzy to buy goods for their transcendent or nonmaterial cultural meaning, the greater the use of material resources (40-41). We wrote Greening the Media knowing that a study of the media’s effect on the environment must work especially hard to break the enchantment that inflames popular and elite passions for media technologies. We understand that the mere mention of the political-economic arrangements that make shiny gadgets possible, or the environmental consequences of their appearance and disappearance, is bad medicine. It’s an unwelcome buzz kill—not a cool way to converse about cool stuff. But we didn’t write the book expecting to win many allies among high-tech enthusiasts and ICT/CE industry leaders. We do not dispute the importance of information and communication media in our lives and modern social systems. We are media people by profession and personal choice, and deeply immersed in the study and use of emerging media technologies. But we think it’s time for a balanced assessment with less hype and more practical understanding of the relationship of media technologies to the biosphere they inhabit. Media consumers, designers, producers, activists, researchers, and policy makers must find new and effective ways to move ICT/CE production and consumption toward ecologically sound practices. In the course of this project, we found in casual conversation, lecture halls, classroom discussions, and correspondence, consistent and increasing concern with the environmental impact of media technology, especially the deleterious effects of e-waste toxins on workers, air, water, and soil. We have learned that the grip of the technological sublime is not ironclad. Its instability provides a point of departure for investigating and criticizing the relationship between the media and the environment. The media are, and have been for a long time, intimate environmental participants. Media technologies are yesterday’s, today’s, and tomorrow’s news, but rarely in the way they should be. The prevailing myth is that the printing press, telegraph, phonograph, photograph, cinema, telephone, wireless radio, television, and internet changed the world without changing the Earth. In reality, each technology has emerged by despoiling ecosystems and exposing workers to harmful environments, a truth obscured by symbolic power and the power of moguls to set the terms by which such technologies are designed and deployed. Those who benefit from ideas of growth, progress, and convergence, who profit from high-tech innovation, monopoly, and state collusion—the military-industrial-entertainment-academic complex and multinational commandants of labor—have for too long ripped off the Earth and workers. As the current celebration of media technology inevitably winds down, perhaps it will become easier to comprehend that digital wonders come at the expense of employees and ecosystems. This will return us to Max Weber’s insistence that we understand technology in a mundane way as a “mode of processing material goods” (27). 
Further to understanding that ordinariness, we can turn to the pioneering conversation analyst Harvey Sacks, who noted three decades ago “the failures of technocratic dreams [:] that if only we introduced some fantastic new communication machine the world will be transformed.” Such fantasies derived from the very banality of these introductions—that every time they took place, one more “technical apparatus” was simply “being made at home with the rest of our world’ (548). Media studies can join in this repetitive banality. Or it can withdraw the welcome mat for media technologies that despoil the Earth and wreck the lives of those who make them. In our view, it’s time to green the media by greening media studies. References “A Cyber-House Divided.” Economist 4 Sep. 2010: 61-62. “Gartner Estimates ICT Industry Accounts for 2 Percent of Global CO2 Emissions.” Gartner press release. 6 April 2007. ‹http://www.gartner.com/it/page.jsp?id=503867›. Basel Action Network and Silicon Valley Toxics Coalition. Exporting Harm: The High-Tech Trashing of Asia. Seattle: Basel Action Network, 25 Feb. 2002. Benjamin, Walter. “Central Park.” Trans. Lloyd Spencer with Mark Harrington. New German Critique 34 (1985): 32-58. Biagioli, Mario. “Postdisciplinary Liaisons: Science Studies and the Humanities.” Critical Inquiry 35.4 (2009): 816-33. Boyce, Tammy and Justin Lewis, eds. Climate Change and the Media. New York: Peter Lang, 2009. Commission of the European Communities. “Impact Assessment.” Commission Staff Working Paper accompanying the Proposal for a Directive of the European Parliament and of the Council on Waste Electrical and Electronic Equipment (WEEE) (recast). COM (2008) 810 Final. Brussels: Commission of the European Communities, 3 Dec. 2008. Environmental Protection Agency. Management of Electronic Waste in the United States. Washington, DC: EPA, 2007 Environmental Protection Agency. Statistics on the Management of Used and End-of-Life Electronics. Washington, DC: EPA, 2008 Grossman, Elizabeth. Tackling High-Tech Trash: The E-Waste Explosion & What We Can Do about It. New York: Demos, 2008. ‹http://www.demos.org/pubs/e-waste_FINAL.pdf› Herat, Sunil. “Review: Sustainable Management of Electronic Waste (e-Waste).” Clean 35.4 (2007): 305-10. Houghton, J. “ICT and the Environment in Developing Countries: Opportunities and Developments.” Paper prepared for the Organization for Economic Cooperation and Development, 2009. International Telecommunication Union. ICTs for Environment: Guidelines for Developing Countries, with a Focus on Climate Change. Geneva: ICT Applications and Cybersecurity Division Policies and Strategies Department ITU Telecommunication Development Sector, 2008. Malmodin, Jens, Åsa Moberg, Dag Lundén, Göran Finnveden, and Nina Lövehagen. “Greenhouse Gas Emissions and Operational Electricity Use in the ICT and Entertainment & Media Sectors.” Journal of Industrial Ecology 14.5 (2010): 770-90. Marx, Karl. Capital: Vol. 1: A Critical Analysis of Capitalist Production, 3rd ed. Trans. Samuel Moore and Edward Aveling, Ed. Frederick Engels. New York: International Publishers, 1987. Mattelart, Armand and Costas M. Constantinou. “Communications/Excommunications: An Interview with Armand Mattelart.” Trans. Amandine Bled, Jacques Guot, and Costas Constantinou. Review of International Studies 34.1 (2008): 21-42. Mattelart, Armand. “Cómo nació el mito de Internet.” Trans. Yanina Guthman. El mito internet. Ed. Victor Hugo de la Fuente. Santiago: Editorial aún creemos en los sueños, 2002. 25-32. 
Maxwell, Richard and Toby Miller. Greening the Media. New York: Oxford University Press, 2012. Nye, David E. American Technological Sublime. Cambridge, Mass.: MIT Press, 1994. Nye, David E. Technology Matters: Questions to Live With. Cambridge, Mass.: MIT Press. 2007. Orwell, George. “As I Please.” Tribune. 12 May 1944. Richtel, Matt. “Consumers Hold on to Products Longer.” New York Times: B1, 26 Feb. 2011. Robinson, Brett H. “E-Waste: An Assessment of Global Production and Environmental Impacts.” Science of the Total Environment 408.2 (2009): 183-91. Rogers, Heather. Gone Tomorrow: The Hidden Life of Garbage. New York: New Press, 2005. Sacks, Harvey. Lectures on Conversation. Vols. I and II. Ed. Gail Jefferson. Malden: Blackwell, 1995. Schiller, Herbert I. Information and the Crisis Economy. Norwood: Ablex Publishing, 1984. Schor, Juliet B. Plenitude: The New Economics of True Wealth. New York: Penguin, 2010. Science and Technology Council of the American Academy of Motion Picture Arts and Sciences. The Digital Dilemma: Strategic Issues in Archiving and Accessing Digital Motion Picture Materials. Los Angeles: Academy Imprints, 2007. Weber, Max. “Remarks on Technology and Culture.” Trans. Beatrix Zumsteg and Thomas M. Kemple. Ed. Thomas M. Kemple. Theory, Culture [i] The global recession that began in 2007 has been the main reason for some declines in Global North energy consumption, slower turnover in gadget upgrades, and longer periods of consumer maintenance of electronic goods (Richtel). [ii] The emergence of the technological sublime has been attributed to the Western triumphs in the post-Second World War period, when technological power supposedly supplanted the power of nature to inspire fear and astonishment (Nye Technology Matters 28). Historian Mario Biagioli explains how the sublime permeates everyday life through technoscience: "If around 1950 the popular imaginary placed science close to the military and away from the home, today’s technoscience frames our everyday life at all levels, down to our notion of the self" (818). [iii] This compulsory repetition is seemingly undertaken each time as a novelty, governed by what German cultural critic Walter Benjamin called, in his awkward but occasionally illuminating prose, "the ever-always-the-same" of "mass-production" cloaked in "a hitherto unheard-of significance" (48).
APA, Harvard, Vancouver, ISO, and other styles
29

Merchant, Melissa, Katie M. Ellis and Natalie Latter. "Captions and the Cooking Show". M/C Journal 20, no. 3 (21 June 2017). http://dx.doi.org/10.5204/mcj.1260.

Full text
Abstract
While the television cooking genre has evolved in numerous ways to withstand competition and become a constant feature in television programming (Collins and College), it has been argued that audience demand for televisual cooking has always been high because of the daily importance of cooking (Hamada, “Multimedia Integration”). Early cooking shows were characterised by an instructional discourse, before quickly embracing an entertainment focus; modern cooking shows take on a more competitive, out of the kitchen focus (Collins and College). The genre has continued to evolve, with celebrity chefs and ordinary people embracing transmedia affordances to return to the instructional focus of the early cooking shows. While the television cooking show is recognised for its broad cultural impacts related to gender (Ouellette and Hay), cultural capital (Ibrahim; Oren), television formatting (Oren), and even communication itself (Matwick and Matwick), its role in the widespread adoption of television captions is significantly underexplored. Even the fact that a cooking show was the first ever program captioned on American television is almost completely unremarked within cooking show histories and literature.
A Brief History of Captioning Worldwide
When captions were first introduced on US television in the early 1970s, programmers were guided by the general principle to make the captioned program “accessible to every deaf viewer regardless of reading ability” (Jensema, McCann and Ramsey 284). However, there were no exact rules regarding captioning quality and captions did not reflect verbatim what was said onscreen. According to Jensema, McCann and Ramsey (285), less than verbatim captioning continued for many years because “deaf people were so delighted to have captions that they accepted almost anything thrown on the screen” (see also Newell 266 for a discussion of the UK context). While the benefits of captions for people who are D/deaf or hard of hearing were immediate, its commercial applications also became apparent. When the moral argument that people who were D/deaf or hard of hearing had a right to access television via captions proved unsuccessful in the fight for legislation, advocates lobbied the US Congress about the mainstream commercial benefits such as in education and the benefits for people learning English as a second language (Downey). Activist efforts and hard-won legal battles meant D/deaf and hard of hearing viewers can now expect closed captions on almost all television content. With legislation in place to determine the provision of captions, attention began to focus on their quality. D/deaf viewers are no longer just delighted to accept anything thrown on the screen and have begun to demand verbatim captioning. At the same time, market-based incentives are capturing the attention of television executives seeking to make money, and the widespread availability of verbatim captions has been recognised for its multimedia—and therefore commercial—applications. These include its capacity for information retrieval (Miura et al.; Agnihotri et al.) and for creative repurposing of television content (Blankinship et al.).
Captions and transcripts have been identified as being of particular importance to augmenting the information provided in cooking shows (Miura et al.; Oh et al.).
Early Captions in the US: Julia Child’s The French Chef
Julia Child is indicative of the early period of the cooking genre (Collins and College)—she has been described as “the epitome of the TV chef” (ray 53) and is often credited for making cooking accessible to American audiences through her onscreen focus on normalising techniques that she promised could be mastered at home (ray). She is still recognised for her mastery of the genre, and for her capacity to entertain in a way that stood out from her contemporaries (Collins and College; ray). Julia Child’s The French Chef originally aired on the US publicly-funded Public Broadcasting System (PBS) affiliate WBGH from 1963–1973. The captioning of television also began in the 1960s, with educators creating the captions themselves, mainly for educational use in deaf schools (Downey 70). However, there soon came calls for public television to also be made accessible for the deaf and hard of hearing—the debate focused on equality and pushed for recognition that deaf people were culturally diverse (Downey 70). The PBS therefore began a trial of captioning programs (Downey 71). These would be “open captions”—characters which were positioned on the screen as part of the normal image for all viewers to see (Downey 71). The trial was designed to determine both the number of D/deaf and hard of hearing people viewing the program, as well as to test if non-D/deaf and hard of hearing viewers would watch a program which had captions (Downey 71). The French Chef was selected for captioning by WBGH because it was their most popular television show in the early 1970s and in 1972 eight episodes of The French Chef were aired using open—albeit inconsistent—captions (Downey 71; Jensema et al. 284). There were concerns from some broadcasters that openly captioned programs would drive away the “hearing majority” (Downey 71). However, there was no explicit study carried out in 1972 on the viewers of The French Chef to determine if this was the case because WBGH ran out of funds to research this further (Downey 71). Nevertheless, Jensema, McCann and Ramsey (284) note that WBGH did begin to re-broadcast ABC World News Tonight in the 1970s with open captions and that this was the only regularly captioned show at the time. Due to changes in technology and fears that not everyone wanted to see captions onscreen, television’s focus shifted from open captions to closed captioning in the 1980s. Captions became encoded, with viewers needing a decoder to be able to access them. However, the high cost of the decoders meant that many could not afford to buy them and adoption of the technology was slow (Youngblood and Lysaght 243; Downey 71). In 1979, the US government had set up the National Captioning Institute (NCI) with a mandate to develop and sell these decoders, and provide captioning services to the networks. This was initially government-funded but was designed to eventually be self-sufficient (Downey 73). PBS, ABC and NBC (but not CBS) had agreed to a trial (Downey 73). However, there was a reluctance on the part of broadcasters to pay to caption content when there was not enough evidence that the demand was high (Downey 73–74). The argument for the provision of captioned content therefore began to focus on the rights of all citizens to be able to access a public service.
A complaint was lodged claiming that the Los Angeles station KCET, which was a PBS affiliate, did not provide captioned content that was available elsewhere (Downey 74). When Los Angeles PBS station KCET refused to air captioned episodes of The French Chef, the Greater Los Angeles Council on Deafness (GLAD) picketed the station until the decision was reversed. GLAD then focused on legislation and used the Rehabilitation Act to argue that television was federally assisted and, by not providing captioned content, broadcasters were in violation of the Act (Downey 74). GLAD also used the 1934 Communications Act in their argument. This Act had firstly established the Federal Communications Commission (FCC) and then assigned them the right to grant and renew broadcast licenses as long as those broadcasters served the “public interest, convenience, and necessity” (Michalik, cited in Downey 74). The FCC could, argued GLAD, therefore refuse to renew the licenses of broadcasters who did not air captioned content. However, rather than this argument working in their favour, the FCC instead changed its own procedures to avoid such legal actions in the future (Downey 75). As a result, although some stations began to voluntarily caption more content, it was not until 1996 that it became a legally mandated requirement with the introduction of the Telecommunications Act (Youngblood and Lysaght 244)—too late for The French Chef.
My Kitchen Rules: Captioning Breach
Whereas The French Chef presented instructional cooking programming from a kitchen set, more recently the food genre has moved away from the staged domestic kitchen set as an instructional space to use real-life domestic kitchens and more competitive multi-bench spaces. The Australian program MKR straddles this shift in the cooking genre with the first half of each season occurring in domestic settings and the second half in Iron Chef style studio competition (see Oren for a discussion of the influence of Iron Chef on contemporary cooking shows). All broadcast channels in Australia are mandated to caption 100 per cent of programs aired between 6am and midnight. However, the 2013 MKR Grand Final broadcast by Channel Seven Brisbane Pty Ltd and Channel Seven Melbourne Pty Ltd (Seven) failed to transmit 10 minutes of captions some 30 minutes into the 2-hour program. The ACMA received two complaints relating to this. The first complaint, received on 27 April 2013, the same evening as the program was broadcast, noted ‘[the D/deaf community] … should not have to miss out’ (ACMA, Report No. 3046 3). The second complaint, received on 30 April 2013, identified the crucial nature of the missing segment and its effect on viewers’ overall enjoyment of the program (ACMA, Report No. 3046 3). Seven explained that the relevant segment (approximately 10 per cent of the program) was missing from the captioning file, but that it had not appeared to be missing when Seven completed its usual captioning checks prior to broadcast (ACMA, Report No. 3046 4). The ACMA found that Seven had breached the conditions of their commercial television broadcasting licence by “failing to provide a captioning service for the program” (ACMA, Report No. 3046 12).
The interruption of captioning was serious enough to constitute a breach due, in part, to the nature and characteristic of the program: the viewer is engaged in the momentum of the competitive process by being provided with an understanding of each of the competition stages; how the judges, guests and contestants interact; and their commentaries of the food and the cooking processes during those stages. (ACMA, Report No. 3046 6) These interactions have become a crucial part of the cooking genre, a genre often described as offering a way to acquire cultural capital via instructions in both cooking and ideological food preferences (Oren 31). Further, in relation to the uncaptioned MKR segment, ACMA acknowledged it would have been difficult to follow both the cooking process and the exchanges taking place between contestants (ACMA, Report No. 3046 8). ACMA considered these exchanges crucial to ‘a viewer’s understanding of, and secondly to their engagement with the different inter-related stages of the program’ (ACMA, Report No. 3046 7). An additional complaint was made with regards to the same program broadcast on Prime Television (Northern) Pty Ltd (Prime), a Seven Network affiliate. The complaint stated that the lack of captions was “Not good enough in prime time and for a show that is non-live in nature” (ACMA, Report No. 3124 3). Despite the fact that the ACMA found that “the fault arose from the affiliate, Seven, rather than from the licensee [Prime]”, Prime was also found to have breached their licence conditions by failing to provide a captioning service (ACMA, Report No. 3124 12). The following year, Seven launched captions for their online catch-up television platform. Although this was a result of discussions with a complainant over the broader lack of captioned online television content, it was also a step that re-established Seven’s credentials as a leader in commercial television access. The 2015 season of MKR also featured their first partially-deaf contestant, Emilie Biggar.
Mainstreaming Captions — Inter-Platform Cooperation
Over time, cooking shows on television have evolved from an informative style (The French Chef) to become more entertaining in their approach (MKR). As Oren identifies, this has seen a shift in the food genre “away from the traditional, instructional format and towards professionalism and competition” (Oren 25). The affordances of television itself as a visual medium have also been recognised as crucial in the popularity of this genre and its more recent transmedia turn. That is, following Joshua Meyrowitz’s medium theory regarding how different media can afford us different messages, televised cooking shows offer audiences stylised knowledge about food and cooking beyond the traditional cookbook (Oren; ray). In addition, cooking shows are taking their product beyond just television and increasing their inter-platform cooperation (Oren)—for example, MKR has a comprehensive companion website that viewers can visit to watch whole episodes, obtain full recipes, and view shopping lists. While this can be viewed as a modern take on Julia Child’s cookbook success, it must also be considered in the context of the increasing focus on multimedia approaches to cooking instructions (Hamada et al., “Multimedia Integration”, “Cooking Navi”; Oh et al.). Audiences today are more likely to attempt a recipe if they have seen it on television, and will use transmedia to download the recipe.
As Oren explains: foodism’s ascent to popular culture provides the backdrop and motivation for the current explosion of food-themed formats that encourages audiences’ investment in their own expertise as critics, diners, foodies and even wanna-be professional chefs. FoodTV, in turn, feeds back into a web-powered, gastro-culture and critique-economy where appraisal outranks delight. (Oren 33) This explosion in popularity of the web-powered gastro culture Oren refers to has led to an increase in appetite for step by step, easy to access instructions. These are being delivered using captions. As a result of the legislation and activism described throughout this paper, captions are more widely available and, in many cases, now describe what is said onscreen verbatim. In addition, the mainstream commercial benefits and uses of captions are being explored. Captions have therefore moved from a specialist assistive technology for people who are D/deaf or hard of hearing to become recognised as an important resource for creative television viewers regardless of their hearing (Blankinship et al.). With captions becoming more accessible, accurate, financially viable, and mainstreamed, their potential as an additional television resource is of interest. As outlined above, within the cooking show genre—especially with its current multimedia turn and the demand for captioned recipe instructions (Hamada et al., “Multimedia Integration”, “Cooking Navi”; Oh et al.)—this is particularly pertinent. Hamada et al. identify captions as a useful technology to use in the increasingly popular educational, yet entertaining, cooking show genre as the required information—ingredient lists, instructions, recipes—is in high demand (Hamada et al., “Multimedia Integration” 658). They note that cooking shows often present information out of order, making them difficult to follow, particularly if a recipe must be sourced later from a website (Hamada et al., “Multimedia Integration” 658-59; Oh et al.). Each step in a recipe must be navigated and coordinated, particularly if multiple recipes are being completed at the same time (Hamada et al., “Cooking Navi”) as is often the case on cooking shows such as MKR. Using captions as part of a software program to index cooking videos facilitates a number of search affordances for people wishing to replicate the recipe themselves. As Kyeong-Jin et al. explain: if food and recipe information are published as linked data with the scheme, it enables to search food recipe and annotate certain recipe by communities (sic). In addition, because of characteristics of linked data, information on food recipes can be connected to additional data source such as products for ingredients, and recipe websites can support users’ decision making in the cooking domain. (Oh et al. 2) The advantages of such a software program are many. For the audience there is easy access to desired information. For the number of commercial entities involved, this consumer desire facilitates endless marketing opportunities including product placement, increased ratings, and software development.
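As a rough illustration of the caption-based indexing these authors discuss, the short Python sketch below matches ingredient keywords against timestamped captions to recover where in a program each ingredient is handled. The caption data, ingredient list and simple substring matching are hypothetical assumptions for illustration, not the method of Hamada et al. or Oh et al.

from dataclasses import dataclass

@dataclass
class Caption:
    start_seconds: float
    text: str

# Hypothetical captions from a cooking segment.
captions = [
    Caption(12.0, "First, dice the onion finely."),
    Caption(45.5, "Add the onion and garlic to the hot pan."),
    Caption(92.0, "Season the sauce with basil and pepper."),
]

def index_by_ingredient(captions, ingredients):
    """Map each ingredient to the caption start times where it is mentioned."""
    return {
        ingredient: [c.start_seconds for c in captions if ingredient in c.text.lower()]
        for ingredient in ingredients
    }

print(index_by_ingredient(captions, ["onion", "garlic", "basil"]))
# {'onion': [12.0, 45.5], 'garlic': [45.5], 'basil': [92.0]}

A production system would align such an index with a structured recipe, which is where the linked-data approach quoted above becomes useful.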
Interestingly, all of this falls outside the “usual” parameters of captions as purely an assistive device for a few, and facilitates the mainstreaming—and perhaps beginnings of acceptance—of captions.
Conclusion
Captions are a vital accessibility feature for television viewers who are D/deaf or hard of hearing, not just from an informative or entertainment perspective but also to facilitate social inclusion for this culturally diverse group. The availability and quality of television captions has moved through three stages. These can be broadly summarised as early yet inconsistent captions, captions becoming more widely available and accurate—often as a direct result of activism and legislation—but not yet fully verbatim, and verbatim captions as adopted within mainstream software applications. This paper has situated these stages within the television cooking genre, a genre often remarked for its appeal towards inclusion and cultural capital. If television facilitates social inclusion, then food television offers vital cultural capital. While Julia Child’s The French Chef offered the first example of television captions via open captions in 1972, a lack of funding means we do not know how viewers (both hearing and not) actually received the program. However, at the time, captions that would be considered unacceptable today were received favourably (Jensema, McCann and Ramsey; Newell)—anything was deemed better than nothing. Increasingly, as the focus shifted to closed captioning and the cooking genre embraced a more competitive approach, viewers who required captions were no longer happy with missing or inconsistent captioning quality. This was particularly significant in Australia in 2013 when several viewers complained to ACMA that captions were missing from the finale of MKR. These captions provided more than vital cooking instructions—their lack prevented viewers from understanding conflict within the program. Following this breach, Seven became the only Australian commercial television station to offer captions on their web-based catch-up platform. While this may have gone a long way to rehabilitate Seven amongst D/deaf and hard of hearing audiences, there is the potential too for commercial benefits. Caption technology is now being mainstreamed for use in cooking software applications developed from televised cooking shows. These allow viewers—both D/deaf and hearing—to access information in a completely new, and inclusive, way.
References
Agnihotri, Lalitha, et al. “Summarization of Video Programs Based on Closed Captions.” 4315 (2001): 599–607. Australian Communications and Media Authority (ACMA). Investigation Report No. 3046. 2013. 26 Apr. 2017 <http://www.acma.gov.au/~/media/Diversity%20Localism%20and%20Accessibility/Investigation%20reports/Word%20document/3046%20My%20Kitchen%20Rules%20Grand%20Final%20docx.docx>. ———. Investigation Report No. 3124. 2014. 26 Apr. 2017 <http://www.acma.gov.au/~/media/Diversity%20Localism%20and%20Accessibility/Investigation%20reports/Word%20document/3124%20NEN%20My%20Kitchen%20Rules%20docx.docx>. Blankinship, E., et al. “Closed Caption, Open Source.” BT Technology Journal 22.4 (2004): 151–59. Collins, Kathleen, and John Jay College. “TV Cooking Shows: The Evolution of a Genre”. Flow: A Critical Forum on Television and Media Culture (7 May 2008). 14 May 2017 <http://www.flowjournal.org/2008/05/tv-cooking-shows-the-evolution-of-a-genre/>. Downey, Greg.
“Constructing Closed-Captioning in the Public Interest: From Minority Media Accessibility to Mainstream Educational Technology.” The Journal of Policy, Regulation and Strategy for Telecommunications, Information and Media 9.2/3 (2007): 69–82. DOI: 10.1108/14636690710734670. Hamada, Reiko, et al. “Multimedia Integration for Cooking Video Indexing.” Advances in Multimedia Information Processing-PCM 2004 (2005): 657–64. Hamada, Reiko, et al. “Cooking Navi: Assistant for Daily Cooking in Kitchen.” Proceedings of the 13th Annual ACM International Conference on Multimedia. ACM. Ibrahim, Yasmin. “Food Porn and the Invitation to Gaze: Ephemeral Consumption and the Digital Spectacle.” International Journal of E-Politics (IJEP) 6.3 (2015): 1–12. Jensema, Carl J., Ralph McCann, and Scott Ramsey. “Closed-Captioned Television Presentation Speed and Vocabulary.” American Annals of the Deaf 141.4 (1996): 284–292. Matwick, Kelsi, and Keri Matwick. “Inquiry in Television Cooking Shows.” Discourse & Communication 9.3 (2015): 313–30. Meyrowitz, Joshua. No Sense of Place: The Impact of Electronic Media on Social Behavior. New York: Oxford University Press, 1985. Miura, K., et al. “Automatic Generation of a Multimedia Encyclopedia from TV Programs by Using Closed Captions and Detecting Principal Video Objects.” Eighth IEEE International Symposium on Multimedia (2006): 873–80. Newell, A.F. “Teletext for the Deaf.” Electronics and Power 28.3 (1982): 263–66. Oh, K.J. et al. “Automatic Indexing of Cooking Video by Using Caption-Recipe Alignment.” 2014 International Conference on Behavioral, Economic, and Socio-Cultural Computing (BESC2014) (2014): 1–6. Oren, Tasha. “On the Line: Format, Cooking and Competition as Television Values.” Critical Studies in Television: The International Journal of Television Studies 8.2 (2013): 20–35. Ouellette, Laurie, and James Hay. “Makeover Television, Governmentality and the Good Citizen.” Continuum: Journal of Media & Cultural Studies 22.4 (2008): 471–84. ray, krishnendu. “Domesticating Cuisine: Food and Aesthetics on American Television.” Gastronomica 7.1 (2007): 50–63. Youngblood, Norman E., and Ryan Lysaght. “Accessibility and Use of Online Video Captions by Local Television News Websites.” Electronic News 9.4 (2015): 242–256.
APA, Harvard, Vancouver, ISO, and other styles
