
Journal articles on the topic 'Wi-Fi frames'


Consult the top 43 journal articles for your research on the topic 'Wi-Fi frames.'


1

Reyes-Moncayo, Hector Ivan, Luis Daniel Malaver-Mendoza, and Andrea Lorena Ochoa-Murillo. "Survey of the security risks of Wi-Fi networks based on the information elements of beacon and probe response frames." Scientia et Technica 25, no. 3 (September 30, 2020): 351–57. http://dx.doi.org/10.22517/23447214.23781.

Abstract:
Wi-Fi networks have become prevalent in homes, businesses, and public places. Wi-Fi is one of the most common means that people use to access digital services like Facebook, WhatsApp, Instagram, email, and even payment platforms. Equipment for deploying Wi-Fi networks is affordable and its basic features are easy to manipulate. In many cases Wi-Fi users do not even have to buy any communication equipment, since Wi-Fi routers are installed by internet service providers (ISP) in the premises of their customers. Wi-Fi equipment, owned either by end users or ISP companies, should be configured as securely as possible to avoid potential attacks. The security capabilities and features of Wi-Fi routers and access points are inserted into beacon and probe response frames. Potential attackers can use sniffing tools like Wireshark to capture these frames and extract information about security features to discover vulnerabilities. In order to assess the security risks of Wi-Fi networks we conducted a survey in which we used Wireshark to capture the traffic from several Wi-Fi networks, and then through a filter we selected the beacon and probe response frames to analyze the security information elements carried by those frames. We came to the conclusion that despite technical recommendations, some security parameters and options are still set in a way that makes networks more prone to attacks. With this paper we want the readers to be aware of the security risks of their Wi-Fi networks, even the ones set up by their internet service providers.
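A capture-and-filter step of the kind described above can be reproduced with standard tooling. The sketch below is a minimal illustration rather than the authors' survey procedure: it uses Scapy instead of Wireshark, assumes a wireless interface already in monitor mode (the name wlan0mon and the capture count are placeholders), and only reports whether each beacon advertises an RSN information element (ID 48), the element that carries the cipher and AKM suites.

```python
# Minimal sketch: flag beacon frames that do (or do not) carry an RSN information element.
# Assumes a monitor-mode interface; "wlan0mon" and count=50 are illustrative values.
from scapy.all import sniff, Dot11Beacon, Dot11Elt

def show_security_elements(pkt):
    if not pkt.haslayer(Dot11Beacon):
        return
    ssid, rsn_present = None, False
    elt = pkt.getlayer(Dot11Elt)
    while elt is not None:
        if elt.ID == 0:        # SSID information element
            ssid = elt.info.decode(errors="replace")
        elif elt.ID == 48:     # RSN information element (cipher suites, AKM suites)
            rsn_present = True
        elt = elt.payload.getlayer(Dot11Elt)
    status = "RSN present" if rsn_present else "no RSN (open, WEP, or WPA1-only)"
    print(f"{pkt.addr2}  SSID={ssid!r}  {status}")

sniff(iface="wlan0mon", prn=show_security_elements, count=50)
```

In Wireshark itself, display filters such as wlan.fc.type_subtype == 0x08 (beacon frames) or == 0x05 (probe responses) isolate the same frames for manual inspection of their information elements.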
2

Vega-Barbas, Mario, Manuel Álvarez-Campana, Diego Rivera, Mario Sanz, and Julio Berrocal. "AFOROS: A Low-Cost Wi-Fi-Based Monitoring System for Estimating Occupancy of Public Spaces." Sensors 21, no. 11 (June 3, 2021): 3863. http://dx.doi.org/10.3390/s21113863.

Abstract:
Estimating the number of people present in a given venue in real-time is extremely useful from a security, management, and resource optimization perspective. This article presents the architecture of a system based on the use of Wi-Fi sensor devices that allows estimating, almost in real-time, the number of people attending an event that is taking place in a venue. The estimate is based on the analysis of the “probe request” messages periodically transmitted by smartphones to determine the existence of Wi-Fi access points in the vicinity. The method considers the MAC address randomization mechanisms introduced in recent years in smartphones, which prevents the estimation of the number of devices by simply counting different MAC addresses. To solve this difficulty, our Wi-Fi sensors analyze other fields present in the header of the IEEE 802.11 frames, the information elements, to extract a unique fingerprint from each smartphone. The designed system was tested in a set of real scenarios, obtaining an estimate of attendance at different public events with an accuracy close to 95%.
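The estimate described above rests on deriving a per-device fingerprint from the information elements of probe requests rather than from the randomized MAC address. The sketch below is a simplified stand-in for the paper's fingerprinting, not its actual feature set: it hashes the raw list of information elements in each probe request and counts distinct hashes; the monitor-mode interface name and the capture window are placeholders.

```python
# Sketch: rough device-count estimate from probe requests under MAC randomization.
# The SHA-256 hash of the raw information-element list is a simplified fingerprint.
import hashlib
from scapy.all import sniff, Dot11ProbeReq, Dot11Elt

fingerprints = set()

def fingerprint_probe(pkt):
    if not pkt.haslayer(Dot11ProbeReq):
        return
    blob = b""
    elt = pkt.getlayer(Dot11Elt)
    while elt is not None:
        blob += bytes([elt.ID]) + bytes(elt.info)   # IE id followed by IE body
        elt = elt.payload.getlayer(Dot11Elt)
    fingerprints.add(hashlib.sha256(blob).hexdigest())

sniff(iface="wlan0mon", prn=fingerprint_probe, timeout=60)   # placeholder capture window
print("estimated distinct devices:", len(fingerprints))
```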
3

Gu, Xiaolin, Wenjia Wu, Xiaodan Gu, Zhen Ling, Ming Yang, and Aibo Song. "Probe Request Based Device Identification Attack and Defense." Sensors 20, no. 16 (August 17, 2020): 4620. http://dx.doi.org/10.3390/s20164620.

Abstract:
Because of its open nature, a Wi-Fi network faces greater security risks than a wired network. The MAC address is the unique identifier of a device and is easily obtained by an attacker, so MAC address randomization has been proposed to protect device privacy in Wi-Fi networks. However, attackers can still use implicit identifiers to recognize a user's device, leaking the user's privacy. We propose device identification based on 802.11ac probe request frames. We give a detailed analysis of the effectiveness of 802.11ac fields and present a novel deep-learning-based device identification method whose average F1-score exceeds 99%. To prevent attackers from obtaining information through such device identification, we design a novel defense mechanism based on a stream cipher: the original content of the probe request frame is hidden by encryption while the frame structure is preserved, so that attackers do not notice the defense. This mechanism effectively degrades the proposed identification method, reducing its average F1-score below 30%. Overall, our research on the attack and defense mechanisms can better preserve device privacy.
4

Kim, Dong Hyun, and Jong Deok Kim. "Unequal loss protection scheme using a quality prediction model in a Wi-Fi broadcasting system." International Journal of Distributed Sensor Networks 15, no. 6 (June 2019): 155014771985424. http://dx.doi.org/10.1177/1550147719854247.

Abstract:
Wireless local area network–based broadcasting techniques are a type of mobile Internet Protocol television technology that simultaneously transmits multimedia content to local users. Contrary to the existing wireless local area network–based multimedia transmission systems, which transmit multimedia data to users using unicast packets, a wireless local area network–based broadcasting system is able to transmit multimedia data to many users in a single broadcast packet. Consequently, network resources do not increase with the increase in the number of users. However, IEEE 802.11 does not provide a packet loss recovery algorithm for broadcast packet loss, which is unavoidable. Therefore, the forward error correction technique is required to address the issue of broadcast packet loss. The broadcast packet loss rate of a wireless local area network–based broadcasting system that transmits compressed multimedia data is not proportional to the quality deterioration of the received video signals; therefore, it is difficult to predict the quality of the received video while also considering the effect of broadcast packet loss. In this scenario, allocating equal forward error correction packets to compressed frames is not an effective method for recovering broadcast packet loss. Thus, several studies on unequal loss protection have been conducted. This study proposes an effective, prediction-based unequal loss protection algorithm that can be applied to wireless local area network–based broadcasting systems. The proposed unequal loss protection algorithm adopts a novel approach by adding forward error correction packets to every transmission frame while considering frame loss. This algorithm was used as a new metric to predict video quality deterioration, and an unequal loss protection structure was designed, implemented, and verified. The effectiveness of the quality deterioration model and the validity of the unequal loss protection algorithm were demonstrated through experiments.
5

Lu, Qian, Haipeng Qu, Yuzhan Ouyang, and Jiahui Zhang. "SLFAT: Client-Side Evil Twin Detection Approach Based on Arrival Time of Special Length Frames." Security and Communication Networks 2019 (June 2, 2019): 1–10. http://dx.doi.org/10.1155/2019/2718741.

Abstract:
In general, the IEEE 802.11 network identifiers used by wireless access points (APs) can easily be spoofed. A malicious adversary can therefore clone the identity information of a legitimate AP (LAP) to launch evil twin attacks (ETAs). The evil twin is a class of rogue access point (RAP) that masquerades as a LAP and lures Wi-Fi victims' traffic, enabling an attacker to eavesdrop on or manipulate wireless communications with little effort or expense. Because it is highly concealed, confusing, harmful, and easy to implement, the ETA has become one of the most severe security threats in Wireless Local Area Networks (WLANs). Here, we propose a novel client-side approach, Special Length Frames Arrival Time (SLFAT), to detect an ETA that uses the same gateway as the LAP. By monitoring the traffic emitted by target APs at a detection node, SLFAT extracts the arrival time of special frames with the same length to determine the evil twin's forwarding behavior. SLFAT is passive, lightweight, efficient, and hard to evade, and it allows users to detect ETAs independently on ordinary wireless devices. In our implementation and evaluation, SLFAT achieves a very high detection rate in distinguishing evil twins from LAPs.
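The timing signal SLFAT builds on can be illustrated with a short measurement script: collect the arrival times of frames of one particular length and look at their inter-arrival gaps, which grow when an evil twin relays traffic through the legitimate AP. The sketch below covers only that measurement step; the capture file name, the target frame length, and the interpretation are placeholders rather than values from the paper.

```python
# Sketch: inter-arrival times of frames of one "special" length in a capture file.
# File name and TARGET_LEN are placeholders; this is the raw signal, not full SLFAT.
from scapy.all import rdpcap

TARGET_LEN = 190                      # bytes; the frame length being tracked
packets = rdpcap("capture.pcap")

times = [float(p.time) for p in packets if len(p) == TARGET_LEN]
gaps = [t2 - t1 for t1, t2 in zip(times, times[1:])]

if gaps:
    mean_gap_ms = 1000 * sum(gaps) / len(gaps)
    print(f"{len(times)} frames of {TARGET_LEN} B, mean inter-arrival {mean_gap_ms:.2f} ms")
    # A relayed (evil-twin) path shows systematically larger gaps than a direct path.
```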
6

Ishikawa, Naoki, Yasuhiro Ohishi, and Kaori Maeda. "Nulls in the Air: Passive and Low-Complexity QoS Estimation Method for a Large-Scale Wi-Fi Network Based on Null Function Data Frames." IEEE Access 7 (2019): 28581–91. http://dx.doi.org/10.1109/access.2019.2902182.

7

Jeong, Baek, and Son. "Hierarchical Network Architecture for Non-Safety Applications in Urban Vehicular Ad-Hoc Networks." Sensors 19, no. 19 (October 4, 2019): 4306. http://dx.doi.org/10.3390/s19194306.

Abstract:
In the vehicular ad-hoc networks (VANETs), wireless access in vehicular environments (WAVE) as the core networking technology is suitable for supporting safety-critical applications, but it is difficult to guarantee its performance when transmitting non-safety data, especially high volumes of data, in a multi-hop manner. Therefore, to provide non-safety applications effectively and reliably for users, we propose a hybrid V2V communication system (HVCS) using hierarchical networking architecture: a centralized control model for the establishment of a fast connection and a local data propagation model for efficient and reliable transmissions. The centralized control model had the functionality of node discovery, local ad-hoc group (LAG) formation, a LAG owner (LAGO) determination, and LAG management. The local data propagation indicates that data are transmitted only within the LAG under the management of the LAGO. To support the end-to-end multi-hop transmission over V2V communication, vehicles outside the LAG employ the store and forward model. We designed three phases consisting of concise device discovery (CDD), concise provisioning (CP), and data transmission, so that the HVCS is highly efficient and robust on the hierarchical networking architecture. Under the centralized control, the phase of the CDD operates to improve connection establishment time, and the CP is to simplify operations required for security establishment. Our HVCS is implemented as a two-tier system using a traffic controller for centralized control using cellular networks and a smartphone for local data propagation over Wi-Fi Direct. The HVCS’ performance was evaluated using Veins, and compared with WAVE in terms of throughput, connectivity, and quality of service (QoS). The effectiveness of the centralized control was demonstrated in comparative experiments with Wi-Fi Direct. The connection establishment time measured was only 0.95 s for the HVCS. In the case of video streaming services through the HVCS, about 98% of the events could be played over 16 frames per second. The throughput for the streaming data was between 74% to 81% when the vehicle density was over 50%. We demonstrated that the proposed system has high throughput and satisfies the QoS of streaming services even though the end-to-end delay is a bit longer when compared to that of WAVE.
8

Ahmed, Adel A., and Omar Barukab. "Autonomous framework simulation tools for real-time multimedia streaming over wireless ad-hoc networks." SIMULATION 96, no. 2 (July 2, 2019): 185–97. http://dx.doi.org/10.1177/0037549719859065.

Abstract:
Real-time video communication has become one of the most significant applications carried over homogeneous and heterogeneous wireless network technologies such as Wi-Fi, the Internet of Things, wireless sensor networks (WSNs), and 5G, which has led to wider deployment of multimedia streaming applications over these networks. To achieve optimal performance for real-time multimedia streaming over homogeneous/heterogeneous wireless networks, it is necessary to develop a simulation tool-set that effectively measures the quality of service (QoS) of different multimedia streaming applications over transport layer protocols. This paper proposes an autonomous simulation tool (AST) that is entirely independent of the source code of the transport layer protocols. The AST is integrated into NS-2 to evaluate the QoS of real-time video streaming over numerous transport layer protocols, and it uses new QoS measurement tools that assess video delivery quality based on I-frames, which speeds up the assessment of multimedia streaming quality and ensures high accuracy of the performance metrics. The simulation results show that using the AST to simulate real-time multimedia streams yields a 13% to 36% higher delivery ratio and 150–250% less cumulative jitter delay than baseline simulation tools. The AST also guarantees optimal QoS performance measurements in terms of the peak signal-to-noise ratio and the visual quality of the received video.
9

Liu, Kan, and Song Lin Liu. "Remote Stepper Motor Controlled Based on Wi-Fi Module." Applied Mechanics and Materials 727-728 (January 2015): 612–15. http://dx.doi.org/10.4028/www.scientific.net/amm.727-728.612.

Abstract:
This paper describes an approach based on Wi-Fi technology to remotely control a stepper motor. By using the MCU's Nested Vectored Interrupt Controller (NVIC) and timer, the stepper motor runs smoothly without the gaps or frame misses that an unstable Wi-Fi connection could otherwise cause. The drive board and Wi-Fi module were also redesigned to smooth the running of the stepper motor.
10

Hong, Sung-Tae, Harim Lee, Hyoil Kim, and Hyun Jong Yang. "Lightweight Wi-Fi Frame Detection for Licensed Assisted Access LTE." IEEE Access 7 (2019): 77618–28. http://dx.doi.org/10.1109/access.2019.2921724.

11

Arisandi, Diki, Nazrul Muhaimin Ahmad, and Subarmaniam Kannan. "The rogue access point identification: a model and classification review." Indonesian Journal of Electrical Engineering and Computer Science 23, no. 3 (September 1, 2021): 1527. http://dx.doi.org/10.11591/ijeecs.v23.i3.pp1527-1537.

Abstract:
Most people around the world use public Wi-Fi hotspots as a routine companion in their daily communication. The access points (APs) of public Wi-Fi can be deployed easily by anyone, anywhere, to provide hassle-free Internet connectivity. This availability also increases the danger posed by adversaries who take advantage of it to sniff sensitive data. One of the most serious security issues encountered by Wi-Fi users is the presence of rogue access points (RAPs), and several studies have been published on how to identify them. Using a systematic literature review, this research explores the various methods for distinguishing a rogue AP from a legitimate one, organised around hardware- and software-based approaches. All the classifications are summarised, and an alternative solution using a beacon frame manipulation technique is proposed; further research is needed to identify the RAP.
12

Lee, Hoonyong, Changbum Ahn, Nakjung Choi, Toseung Kim, and Hyunsoo Lee. "The Effects of Housing Environments on the Performance of Activity-Recognition Systems Using Wi-Fi Channel State Information: An Exploratory Study." Sensors 19, no. 5 (February 26, 2019): 983. http://dx.doi.org/10.3390/s19050983.

Abstract:
Recently, device-free human activity–monitoring systems using commercial Wi-Fi devices have demonstrated a great potential to support smart home environments. These systems exploit Channel State Information (CSI), which represents how human activities–based environmental changes affect the Wi-Fi signals propagating through physical space. However, given that Wi-Fi signals either penetrate through an obstacle or are reflected by the obstacle, there is a high chance that the housing environment would have a great impact on the performance of a CSI-based activity-recognition system. In this context, this paper examines whether and to what extent housing environment affects the performance of the CSI-based activity recognition systems. Activities in daily living (ADL)–recognition systems were implemented in two typical housing environments representative of the United States and South Korea: a wood-frame apartment (Unit A) and a reinforced concrete-frame apartment (Unit B), respectively. The experimental results show that housing environments, combined with various environmental factors (i.e., structural building materials, surrounding Wi-Fi interference, housing layout, and population density), generate a significant difference in the accuracy of the applied CSI-based ADL-recognition systems. This outcome provides insights into how such ADL systems should be configured for various home environments.
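CSI-based recognition of this kind usually starts from how the per-subcarrier amplitude fluctuates over time. The sketch below is a crude stand-in for the recognition systems compared in the paper: it assumes CSI samples have already been exported by some extraction tool to a NumPy file (csi.npy, shape packets × subcarriers, complex), and it merely flags windows whose amplitude variance exceeds a made-up threshold.

```python
# Sketch: crude activity/idle labelling from exported CSI amplitudes.
# csi.npy, the window size and the threshold are assumptions for illustration only.
import numpy as np

csi = np.load("csi.npy")                 # complex CSI matrix: packets x subcarriers
amplitude = np.abs(csi)
WINDOW, THRESHOLD = 100, 2.0             # packets per window; variance threshold

for start in range(0, amplitude.shape[0] - WINDOW, WINDOW):
    window = amplitude[start:start + WINDOW]
    score = window.var(axis=0).mean()    # average temporal variance across subcarriers
    label = "activity" if score > THRESHOLD else "idle"
    print(f"packets {start}-{start + WINDOW}: variance {score:.2f} -> {label}")
```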
13

Barsocchi, Paolo, Gabriele Oligeri, and Francesco Potorti. "Measurement-based frame error model for simulating outdoor Wi-Fi networks." IEEE Transactions on Wireless Communications 8, no. 3 (March 2009): 1154–58. http://dx.doi.org/10.1109/twc.2009.060475.

14

Kosek-Szott, Katarzyna, and Norbert Rapacz. "Tuning Wi-Fi Traffic Differentiation by Combining Frame Aggregation with TXOP Limits." IEEE Communications Letters 24, no. 3 (March 2020): 700–703. http://dx.doi.org/10.1109/lcomm.2019.2958625.

15

Grazia, Carlo Augusto. "A Performance Model for Wi-Fi Frame Aggregation Considering Throughput and Latency." IEEE Communications Letters 24, no. 7 (July 2020): 1577–80. http://dx.doi.org/10.1109/lcomm.2020.2995590.

16

Rizzi, Antonello, Giuseppe Granato, and Andrea Baiocchi. "Frame-by-frame Wi-Fi attack detection algorithm with scalable and modular machine-learning design." Applied Soft Computing 91 (June 2020): 106188. http://dx.doi.org/10.1016/j.asoc.2020.106188.

17

Camps-Mur, Daniel, Manil Dev Gomony, Xavier Pérez-Costa, and Sebastià Sallent-Ribes. "Leveraging 802.11n frame aggregation to enhance QoS and power consumption in Wi-Fi networks." Computer Networks 56, no. 12 (August 2012): 2896–911. http://dx.doi.org/10.1016/j.comnet.2012.05.004.

18

He, Wen, Xiaofeng Tao, and Defeng Ren. "The analysis of IEEE 802.11 PCF protocol based on LEO satellite Wi-Fi." MATEC Web of Conferences 189 (2018): 04014. http://dx.doi.org/10.1051/matecconf/201818904014.

Abstract:
In view of the long propagation delay, hidden terminals, and serious user conflicts in the LEO satellite Wi-Fi scenario, this paper briefly analyses the incompatibility of the 802.11 DCF protocol. A priority-based PCF polling scheme is proposed and the frame format is adaptively modified. Through MATLAB simulation, the system throughput of the original scheme and the improved scheme is analysed for different transmission distances and numbers of nodes. The simulation results show that the improved PCF protocol can improve throughput for multiple users over long distances.
19

Sutton, Gordon J., Ren Ping Liu, and Y. Jay Guo. "Harmonising Coexistence of Machine Type Communications With Wi-Fi Data Traffic Under Frame-Based LBT." IEEE Transactions on Communications 65, no. 9 (September 2017): 4000–4011. http://dx.doi.org/10.1109/tcomm.2017.2710131.

20

Shim, Dongha, and Jason Yi. "Ultra-wide-angle Wireless Endoscope with a Backend-camera-controller Architecture." International Journal of Electronics and Telecommunications 63, no. 1 (March 1, 2017): 19–24. http://dx.doi.org/10.1515/eletel-2017-0003.

Abstract:
This paper presents a wireless endoscope with an ultra-wide FOV (Field of View) of 130° and HD resolution (1280×720 pixels). The proposed endoscope consists of a camera head, cable, camera controller, and wireless handle. The lens module with a 150° AOV (Angle of View) is produced using a plastic injection-molding process to reduce manufacturing costs. A serial CMOS image sensor using the MIPI (Mobile Industry Processor Interface) CSI-2 (Camera Serial Interface-2) interface physically separates the camera processor from the camera head. The camera head and the cable have a compact structure due to the BCC (Backend-Camera-Controller) architecture; the camera head measures 8×8×26 mm and the camera controller 7×55 mm. The wireless handle supports UWB (Ultra-Wide-Band) or Wi-Fi communication to transmit video data. The UWB link supports a maximum data transfer rate of ~37 Mbps, enough to transmit video with a resolution of 1280×720 pixels at a frame rate of 30 fps in the MJPEG (Motion JPEG) format. Although the Wi-Fi link provides a lower data transfer rate (~8 Mbps max.), it has the advantage of flexible interoperability with various mobile devices. The latency of the UWB link is measured to be ~0.1 s, while the Wi-Fi link has a larger latency (~0.5 s) due to its lower data transfer rate. The proposed endoscope demonstrates the feasibility of a high-performance yet low-cost wireless endoscope using the BCC architecture. To the best of the authors' knowledge, the proposed endoscope has the largest FOV among all presently existing wireless endoscopes.
21

Kaburcuk, Fatih, and Atef Elsherbeni. "Efficient Electromagnetic Analysis of a Dispersive Head Model Due to Smart Glasses Embedded Antennas at Wi-Fi and 5G Frequencies." Applied Computational Electromagnetics Society 36, no. 2 (March 16, 2021): 159–67. http://dx.doi.org/10.47037/2020.aces.j.360207.

Abstract:
Numerical study of the electromagnetic interaction between an adjacent antenna and a human head model requires long computation times and large computer memory. In this paper, two speed-up techniques for a dispersive algorithm based on the finite-difference time-domain method are used to reduce the required computation time and computer memory. To evaluate the validity of these two speed-up techniques, the specific absorption rate (SAR) and temperature-rise distributions in a dispersive human head model due to radiation from an antenna integrated into a pair of smart glasses are investigated. The antenna integrated into the smart glasses has wireless connectivity at 2.4 GHz and 5th generation (5G) cellular connectivity at 4.9 GHz. Two different positions of the antenna within the frame are considered in this investigation. The techniques provide a remarkable reduction in computation time and computer memory.
22

Geyikoğlu, M. Dilruba, Hilal Koç Polat, Fatih Kaburcuk, and Bülent Çavuşoğlu. "SAR analysis of tri-band antennas for a 5G eyewear device." International Journal of Microwave and Wireless Technologies 12, no. 8 (March 16, 2020): 754–61. http://dx.doi.org/10.1017/s1759078720000173.

Abstract:
The goal of this study is to analyze the specific absorption rate (SAR) distribution on a human head for the projected 5G frequencies below 6 GHz and at the Wi-Fi frequency (2.45 GHz), for eyewear device applications. Two separate tri-band printed dipole antennas are designed and fabricated for this purpose, operating at 2.45/3.8/6 GHz (prototype-1) and 2.45/3.6/4.56 GHz (prototype-2). To obtain the desired frequencies, the prototypes of the proposed antennas are first fine-tuned in Computer Simulation Technology Microwave Studio (CST) and then fabricated on an FR4 layer. The reflection coefficient (S11) is measured and confirms the simulation results. To analyze the effect of wearing a glasses frame containing a tri-band 5G antenna, a frame is designed and produced with a 3D printer from polylactic acid, a material with a high dielectric constant (ɛr = 8.1). The SAR results of the proposed antennas are examined for the cases where the antenna is embedded in the frame and where it is used alone. Both cases are analyzed using the homogeneous specific anthropomorphic mannequin and the heterogeneous visible human head phantoms, and the results are evaluated in terms of SAR10g values.
23

Cui, Xue-Rong, Juan Li, Hao Zhang, T. Aaron Gulliver, and Chunlei Wu. "Improving Ultra-Wideband Positioning Security Using a Pseudo-Random Turnaround Delay Protocol." Journal of Circuits, Systems and Computers 24, no. 10 (October 25, 2015): 1550149. http://dx.doi.org/10.1142/s0218126615501492.

Abstract:
Ultra-wideband (UWB) technology is very suitable for indoor wireless localization and ranging, and IEEE 802.15.4a is the first physical layer standard specifically developed for wireless ranging and positioning. While malicious devices are not typically present, snoopers, impostors, and jammers can exist. The data link and network layers in standards such as Wi-Fi, IEEE 802.15.4, and 802.11 mainly provide authentication and encryption support, but the security of ranging or location is rarely considered. Ranging can be achieved using just the preamble and start-of-frame delimiter (SFD), so malicious devices can easily obtain position information. The security of ranging and positioning protocols is therefore very important, and it differs from that of data exchange protocols. To provide secure location services, a protocol based on a pseudo-random turnaround delay is presented. In this protocol, devices use different turnaround times, so it is difficult for a snooper to figure out the location of sensor devices in protected areas. At the same time, during Hello frame transmission and together with the authentication mechanism of IEEE 802.15.4, an impostor cannot easily mount its deception attack.
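The protection offered by a pseudo-random turnaround delay follows directly from how two-way ranging converts timestamps into distance: the responder's turnaround time must be subtracted before the round trip is halved, so an eavesdropper who does not know the pseudo-random value computes the wrong range. The numbers in the sketch below are made up purely for illustration.

```python
# Sketch: two-way ranging with a (pseudo-random) turnaround delay; values are illustrative.
C = 299_792_458.0   # speed of light, m/s

def ranged_distance(t_round_s, t_turnaround_s):
    """Distance from the round-trip time minus the responder's turnaround delay."""
    return C * (t_round_s - t_turnaround_s) / 2.0

t_round = 120e-9 + 2 * (15.0 / C)                # 120 ns turnaround plus two 15 m flights
print(ranged_distance(t_round, 120e-9))          # node that knows the delay: ~15.0 m
print(ranged_distance(t_round, 100e-9))          # snooper guessing 100 ns: ~18.0 m, off by ~3 m
```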
24

Robyns, Pieter, Bram Bonné, Peter Quax, and Wim Lamotte. "Noncooperative 802.11 MAC Layer Fingerprinting and Tracking of Mobile Devices." Security and Communication Networks 2017 (2017): 1–21. http://dx.doi.org/10.1155/2017/6235484.

Abstract:
We present two novel noncooperative MAC layer fingerprinting and tracking techniques for Wi-Fi (802.11) enabled mobile devices. Our first technique demonstrates how a per-bit entropy analysis of a single captured frame allows an adversary to construct a fingerprint of the transmitter that is 80.0 to 67.6 percent unique for 50 to 100 observed devices and 33.0 to 15.1 percent unique for 1,000 to 10,000 observed devices. We show how existing mitigation strategies such as MAC address randomization can be circumvented using only this fingerprint and temporal information. Our second technique leverages peer-to-peer 802.11u Generic Advertisement Service (GAS) requests and 802.11e Block Acknowledgement (BA) requests to instigate transmissions on demand from devices that support these protocols. We validate these techniques using two datasets, one of which was recorded at a music festival containing 28,048 unique devices and the other at our research lab containing 138 unique devices. Finally, we discuss a number of countermeasures that can be put in place by mobile device vendors in order to prevent noncooperative tracking through the discussed techniques.
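The per-bit entropy analysis mentioned here is easy to reproduce on equal-length frames: for each bit position, compute the Shannon entropy of that bit across the observed frames; high-entropy positions are the ones that discriminate between transmitters. The sketch below shows only this entropy step on made-up example frames, not the authors' full fingerprinting or tracking pipeline.

```python
# Sketch: per-bit Shannon entropy over a set of equal-length frames (illustrative data).
import math

def bit_at(frame: bytes, i: int) -> int:
    return (frame[i // 8] >> (7 - i % 8)) & 1

def per_bit_entropy(frames):
    n_bits = len(frames[0]) * 8
    entropies = []
    for i in range(n_bits):
        p = sum(bit_at(f, i) for f in frames) / len(frames)   # probability the bit is 1
        h = 0.0 if p in (0.0, 1.0) else -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
        entropies.append(h)
    return entropies

frames = [bytes([0x80, 0x00, 0x3A, 0x01]),
          bytes([0x80, 0x00, 0x2C, 0x01]),
          bytes([0x80, 0x00, 0x3A, 0x05])]
print(per_bit_entropy(frames))   # high-entropy positions are candidate fingerprint bits
```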
25

Amewuda, Andy Bubune, Ferdinand Apietu Katsriku, and Jamal-Deen Abdulai. "Implementation and Evaluation of WLAN 802.11ac for Residential Networks in NS-3." Journal of Computer Networks and Communications 2018 (2018): 1–10. http://dx.doi.org/10.1155/2018/3518352.

Abstract:
Wi-Fi has been an amazingly successful technology. Its success may be attributed to the fact that, despite the significant advances made in technology over the last decade, it has remained backward compatible. 802.11ac is the latest version of the wireless LAN (WLAN) standard that is currently being adopted, and it promises to deliver very high throughput (VHT), operating at the 5 GHz band. In this paper, we report on an implementation of 802.11ac wireless LAN for residential scenario based on the 802.11ax task group scenario document. We evaluate the 802.11ac protocol performance under different operating conditions. Key features such as modulation coding set (MCS), frame aggregation, and multiple-input multiple-output (MIMO) were investigated. We also evaluate the average throughput, delay, jitter, optimum range for goodput, and effect of station (STA) density per access point (AP) in a network. ns-3, an open source network simulator with features supporting 802.11ac, was used to perform the simulation. Results obtained indicate that very high data rates are achievable. The highest data rate, the best mean delay, and mean jitter are possible under combined features of 802.11ac (MIMO and A-MPDU).
26

Muhammad Zubir, Nur Zafirah Bt, et al. "Wideband MIMO antenna for SCADA Wireless Communication Backhaul Application." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 3 (April 10, 2021): 4538–47. http://dx.doi.org/10.17762/turcomat.v12i3.1844.

Abstract:
A wideband multiple-input multiple-output (MIMO) antenna system with common elements is proposed for SCADA wireless communication backhaul applications. It operates over 0.85–2.6 GHz and can therefore cover the Global System for Mobile Communications (GSM) at 900 MHz and 1.8 GHz, the Universal Mobile Telecommunication System (UMTS) at 2 GHz, Wi-Fi at 2.4 GHz, and Long Term Evolution (LTE) at 2.6 GHz. The proposed MIMO antenna system consists of four microstrip feedlines with a common radiating element and a frame-shaped ground plane. A single-port antenna was also designed and is presented to show the process of designing the wideband MIMO antenna structure. The radiator of the MIMO antenna system is shaped as a modified rectangle with a straight line at each corner to enhance the bandwidth. To improve isolation between ports, the ground plane is modified by inserting four L-slots, one in each corner, to reduce mutual coupling. For an antenna efficiency of more than 60%, the simulated reflection coefficients are below -10 dB for all ports at the expected frequencies. Simulated isolation greater than -10 dB is achieved using the modified ground plane. A low envelope correlation coefficient (ECC) of less than 0.1 and a polarization diversity gain of about 10 dB, with orthogonal linear polarization modes and an omnidirectional pattern, are also achieved in the analysis of the radiation characteristics. Therefore, the proposed design can be used for SCADA wireless communication backhaul applications.
27

Gopal, Banala Krishna. "Light Monitoring System using Z-Score Analysis." International Journal for Research in Applied Science and Engineering Technology 9, no. VI (June 10, 2021): 196–204. http://dx.doi.org/10.22214/ijraset.2021.34963.

Abstract:
In today's world, where everything is being automated and security is a growing concern, we built an automated module that live-monitors anomalies in a given space at all times to help secure personal spaces. With this system we can monitor anything important that is out of our reach at the moment, with a live alert system through which anomalies can be identified. In the proposed system, machine learning is integrated with an IoT system built on the Bolt Wi-Fi module, which uses an LDR sensor to measure light intensity; the LDR is chosen specifically to make the Z-score analysis easy to follow. The Z-score analysis applies a simple mathematical formula to detect anomalies by predicting upper and lower boundaries for the light intensity. When the LDR sensor value, i.e., the light intensity in a room, goes out of range, the system generates a real-time alert in the form of an SMS directed to the user's mobile phone through Twilio. This alert system is an advanced way to increase the efficiency of any live monitoring system, as the machine-learning component is always working to increase accuracy. In this project the system specifically uses a light-dependent resistor to detect changes in light intensity, but the approach can be implemented with any sensor.
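Because the detection rule is a plain z-score test, it can be written in a few lines: z = (x − μ)/σ over a window of recent light readings, with an alert when |z| exceeds a bound. The sketch below shows that rule in isolation; the sample readings and the threshold of 3 are illustrative, and the Bolt hardware and Twilio SMS side are omitted.

```python
# Sketch: z-score anomaly check over a window of recent LDR readings.
# Readings and threshold are illustrative; the hardware/SMS integration is omitted.
import statistics

def is_anomalous(history, new_value, threshold=3.0):
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return False
    return abs((new_value - mean) / stdev) > threshold

window = [512, 498, 530, 505, 520, 515, 508, 525]   # recent light-intensity samples
print(is_anomalous(window, 950))   # True: sudden bright light, raise an alert
print(is_anomalous(window, 510))   # False: within the learned bounds
```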
28

Bucciol, P., E. Masala, E. Filippi, and J. C. De Martin. "Cross-Layer Perceptual ARQ for Video Communications over 802.11e Wireless Networks." Advances in Multimedia 2007 (2007): 1–12. http://dx.doi.org/10.1155/2007/13969.

Abstract:
This work presents an application-level perceptual ARQ algorithm for video streaming over 802.11e wireless networks. A simple and effective formula is proposed to combine the perceptual and temporal importance of each packet into a single priority value, which is then used to drive the packet-selection process at each retransmission opportunity. Compared to the standard 802.11 MAC-layer ARQ scheme, the proposed technique delivers higher perceptual quality because it retransmits only the most perceptually important packets, reducing retransmission bandwidth waste. Video streaming of H.264 test sequences has been simulated with ns in a realistic 802.11e home scenario, in which the various kinds of traffic flows have been assigned to different 802.11e access categories according to the Wi-Fi Alliance WMM specification. Extensive simulations show that the proposed method consistently outperforms the standard link-layer 802.11 retransmission scheme, delivering PSNR gains up to 12 dB while achieving low transmission delay and limited impact on concurrent traffic. Moreover, comparisons with a MAC-level ARQ scheme that adapts the retry limit to the type of frame contained in packets, and with an application-level deadline-based priority retransmission scheme, show that the PSNR gain offered by the proposed algorithm is significant, up to 5 dB. Additional results obtained in a scenario in which the transmission relies on an intermediate node (i.e., the access point) further confirm the consistency of the perceptual ARQ performance. Finally, results obtained by varying network conditions such as congestion and channel noise levels show the consistency of the improvements achieved by the proposed algorithm.
29

Rao, Velamala Ranga. "Citizen Relationship and Grievance Management System (CiR&GMS) through Multi-Channel Access for e-Government Services." International Journal of E-Services and Mobile Applications 7, no. 2 (April 2015): 43–67. http://dx.doi.org/10.4018/ijesma.2015040103.

Abstract:
Citizens are demanding greater access to interaction with government through their preferred channels or devices. The private sector uses multiple channels for its services, and citizens expect the same level of service from the public sector. The public sector therefore needs to focus on creating multiple delivery channels (traditional channels such as face-to-face and telephone, and modern channels such as website, e-mail, and SMS), so that citizens have 'channels of choice' depending on their specific needs, demands, and preferences, in order to increase citizens' participation and satisfaction. For this reason, the paper's purpose is 1) to understand multi-channel architecture, integration, and management, and their strengths and weaknesses; 2) to develop a framework for a Citizen Relationship and Grievance Management System (CiR&GMS) that provides a single view; 3) by applying the proposed framework, to identify what types of channels are provided for accessing public services at the national, state, and local levels of government in India as a case study; and 4) to find the challenges and issues in implementing multi-channel service delivery. The key findings of the case study are: a) provision of traditional channels has not declined after the introduction of modern channels; b) many departments offer mixed channels; c) the use of mobile/SMS, social media, and Wi-Fi hotspot-based channels is at an initial stage; d) a t-Government channel has not yet been initiated in any department; and e) many departments have not yet undertaken multi-channel integration and management and instead manage channels as separate silos. The proposed framework may provide some guidance to decision and policy makers in the public sector. However, such initiatives face many challenges in developing countries like India.
30

"Automatic Fish Feeder System for Aquaponics using Wi-Fi Based WSN." International Journal of Recent Technology and Engineering 8, no. 6 (March 30, 2020): 835–40. http://dx.doi.org/10.35940/ijrte.b3685.038620.

Abstract:
Aquaponics is a farming method, which is the combination of aquaculture and hydroponics, which grows fish and plants together in one integrated system. The fish waste provides an organic food source for the plants, and the plants naturally filter the water for the fish. The purpose of this project is to build an automatic fish feeder system for aquaponics using image processing technique with the help of Wireless Sensor Network (WSN). This helps the farmers to reduce manual effort and safeguard a balanced food delivery. The number of fish in the pond may vary over time, so the amount of fish feed provided need to be changed. As there will be a large number of fish moving randomly in a pond, the manual tracking and counting of fish is very difficult. It is a time consuming and erroneous process. This work focuses on developing a system that tracks and counts the fish in the pond for aquaponics. This automatic fish identification system processes the video of the entire pond and makes it easier to estimate the count of fish. The frames from the video are processed using Raspberry-Pi board and the count of fish is transmitted through Wi-Fi. Such a system would assist to feed the fish accordingly. Based on the count transferred, a fish feeder mechanism is controlled using NodeMCU at the other end of the Wi-Fi. The amount of fish feed remaining in the feeding box is informed to the user through mobile application.
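The counting step (segment fish-like blobs in a frame, then count them) is standard image processing, and a hedged OpenCV sketch is shown below. The file name, the minimum blob area, and the use of a single Otsu-thresholded frame are simplifications for illustration; a real pond would call for background subtraction and tracking as in the paper's video pipeline.

```python
# Sketch: count fish-like blobs in a single video frame with OpenCV.
# "pond_frame.jpg" and the minimum contour area are placeholders.
import cv2

frame = cv2.imread("pond_frame.jpg")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (5, 5), 0)
_, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
fish = [c for c in contours if cv2.contourArea(c) > 150]    # drop small noise blobs
print("estimated fish count:", len(fish))                   # value the feeder logic would use
```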
31

"Design and Development of Linux Based Quadcopter Cameraman System using Raspberry Pi Microcontroller." International Journal of Innovative Technology and Exploring Engineering 8, no. 12 (October 10, 2019): 5347–52. http://dx.doi.org/10.35940/ijitee.l3965.1081219.

Abstract:
The goal of this paper is to develop an unmanned aerial vehicle equipped with modern technologies for autopilot video-recording applications. We propose a method to design a quadcopter that records video in any location using the Linux operating system, with a Raspberry Pi microcontroller used for flight control. We developed a cloud-based solution by creating a Wi-Fi network connection between the laptop and the flight controller board, with the help of the Raspberry Pi, to produce hard real-time output. The quadcopter can additionally report altitude and longitude, autopilot status, speed, detected ground level and height, and geographic coordinates. The Raspberry Pi is interfaced with two cameras of 1080-pixel resolution capturing 30 frames per second; the cameras are very small, lightweight, and of good quality, and are mounted on the quadcopter to record the videos. The proposed system can reduce the manpower involved in live outdoor video recording.
32

"Pilot Based Channel-Estimation 4G LTE OFDM Utilizing Time Space Procedure in Video Transmission." International Journal of Recent Technology and Engineering 8, no. 3 (September 30, 2019): 5342–51. http://dx.doi.org/10.35940/ijrte.c6881.098319.

Abstract:
Current communication systems use wireless links for data transmission to exchange information between connected mobile devices. Researchers are exploring novel methods to use these devices more efficiently, faster, and more accurately, and the ever-increasing demand for new features from users is making industry standards grow at a faster pace. Parameters such as Bit Error Rate (BER) and Signal-to-Noise Ratio (SNR) are considered to understand network performance. In this research paper, the transmission of different video formats over 4G LTE is carried out between two systems using Wi-Fi. Frames of various colours and sizes, black-and-white frames, and videos of different formats such as .avi, .mov, and MPEG-4 are transmitted, both with and without a channel, and the transmission time and end-to-end delay are observed. The performance of the LS algorithm for channel estimation in the OFDM channel is evaluated, and the BER is estimated for low SNR using M-PSK modulation and the LS algorithm. M-PSK and BPSK are compared with each other, and M-PSK is found to give better results for low SNR with low BER. Rayleigh, Rician, and AWGN noise are compared with each other, and Rician is observed to give the better result. Channel lengths of 4, 16, and 64 are used to compare SNR against BER; for low SNR, the BER is low with a smaller number of channels.
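The least-squares (LS) channel estimator evaluated here reduces to Ĥ(k) = Y(k)/X(k) at the pilot subcarriers, followed by interpolation across the remaining subcarriers. The NumPy sketch below demonstrates that step on synthetic data; the channel, pilot spacing, and noise level are invented for illustration and do not reflect the paper's LTE configuration.

```python
# Sketch: LS channel estimation at pilot subcarriers plus linear interpolation (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
N, PILOT_STEP = 64, 4                        # subcarriers and pilot spacing (illustrative)
h_true = np.fft.fft(rng.standard_normal(4) + 1j * rng.standard_normal(4), N) / 2

x = np.sign(rng.standard_normal(N))          # BPSK symbols on every subcarrier
noise = 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
y = h_true * x + noise                       # received frequency-domain symbols

pilots = np.arange(0, N, PILOT_STEP)
h_ls = y[pilots] / x[pilots]                 # LS estimate: H = Y / X at the pilots
h_hat = (np.interp(np.arange(N), pilots, h_ls.real)
         + 1j * np.interp(np.arange(N), pilots, h_ls.imag))

print("mean estimation error:", np.mean(np.abs(h_hat - h_true)))
```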
33

"Antenna beam Forming and beam Controlling for Improving the Wi-Fi signal." International Journal of Engineering and Advanced Technology 8, no. 6S3 (November 22, 2019): 1859–60. http://dx.doi.org/10.35940/ijeat.f1355.0986s319.

Abstract:
Antenna design plays a prominent role when far-field patterns are considered rather than near-field patterns. Using a PIN diode (which has a wider intrinsic region than a normal diode) as a switch and a capacitor as a filter, we propose an antenna design that is able to radiate to far-field regions. A capacitor is chosen as the filter because the antenna is designed to carry high-frequency signals. Several software packages are available for antenna design simulation (such as FEKO and Zeland IE3D), but the preferred software for this design is Computer Simulation Technology Microwave Studio (CST MWS), as we operate in a frequency band of 2 to 2.54 GHz with an antenna size of 20×20 cm. Based on the chosen dimensions, the antenna is able to switch the signalling direction by using a switch with four different angles. The material used for the antenna is FR-4, a composite of fibreglass cloth with an epoxy resin binder (i.e., flame resistant), together with SMB connectors of 75 Ω impedance.
34

"Implementation of Energy Efficient Models using MATLAB for Wireless Sensors Network Protocols." International Journal of Innovative Technology and Exploring Engineering 9, no. 2S3 (December 30, 2019): 282–86. http://dx.doi.org/10.35940/ijitee.b1072.1292s319.

Abstract:
Wireless technology is one of the fastest-developing technologies of this era and is used in many real-time applications; for instance, our everyday smartphones rely on it. These days, many people die from a variety of illnesses such as heart attack, blood pressure, allergies, and cardiovascular disease. It is therefore interesting to apply wireless technology, a combination of wireless networking and wireless communication, to improve human lives through a technique named wireless body area networks (WBAN). Different kinds of sensor nodes are attached in and around the human body to sense changes, fetch the data, and pass the fetched data to a server in a remote location for remote health monitoring. While the network and its nodes consume energy in different ways during operation, the actual data transmission takes place. Another factor we consider is the duty cycle, the time during which the sensor nodes operate intermittently rather than continuously. When the transmission distance is very small compared to the threshold distance, the duty cycle is very high, and the battery has ideal time to recover its charge; consequently, the battery recovery effect is not taken into consideration. This paper focuses on designing a network area of length and width 10×10 with 20 sensor nodes, for transmission distances varying from 10 to 1500 and for different bit rates. All simulations are carried out in the MATLAB environment, and performance parameters such as CDF, delay spread, path loss, duty cycle, and energy consumption are measured for the baseline, offline, and recovery algorithms, along with an optimization algorithm named the Hybrid Genetic Algorithm.
35

"Teledetection System and Management of Intrusions Events, Based on Cloud Infrastructure." International Journal of Innovative Technology and Exploring Engineering 9, no. 2 (December 10, 2019): 4754–60. http://dx.doi.org/10.35940/ijitee.b7401.129219.

Abstract:
This document describes the design and development of a low-cost security system, based on Arduino microcontroller technology, to detect unauthorized access by people to an office. It uses a passive infrared (PIR) sensor to detect people's movements, a Wi-Fi interface to send security alerts to the cloud platform, and a GSM modem that likewise sends the detected security events to the cloud framework. The latter is used when a link to the cloud platform cannot be established via the Wi-Fi interface. In addition, the security system sends an SMS (Short Message Service) directly to the security agent's mobile every time a security event is detected. A dual communication interface is used to guarantee the delivery of alerts to the cloud platform and, on the other hand, to ensure the delivery of notifications of the detected alerts to the security agent through the channels Notify My Device (NMD), e-mail, Twitter, and SMS. The results show that the fastest interface for sending the detected security alerts to the cloud platform is Wi-Fi, and the channel with the shortest time to notify the security agent is NMD. Therefore, the proposed security system represents a suitable solution for security problems at both the domestic and commercial level, since it is pervasive, that is, it can be used anywhere, and it is agnostic with respect to the wireless communication interface.
36

Sun, Weiping, Munhwan Choi, and Sunghyun Choi. "IEEE 802.11ah: A Long Range 802.11 WLAN at Sub 1 GHz." Journal of ICT Standardization, July 26, 2013, 83–108. http://dx.doi.org/10.13052/jicts2245-800x.125.

Abstract:
IEEE 802.11ah is an emerging Wireless LAN (WLAN) standard that defines a WLAN system operating at sub 1 GHz license-exempt bands. Thanks to the favorable propagation characteristics of the low frequency spectra, 802.11ah can provide much improved transmission range compared with the conventional 802.11 WLANs operating at 2.4 GHz and 5 GHz bands. 802.11ah can be used for various purposes including large scale sensor networks, extended range hotspot, and outdoor Wi-Fi for cellular traffic offloading, whereas the available bandwidth is relatively narrow. In this paper, we give a technical overview of 802.11ah Physical (PHY) layer and Medium Access Control (MAC) layer. For the 802.11ah PHY, which is designed based on the down-clocked operation of IEEE 802.11ac’s PHY layer, we describe its channelization and transmission modes. Besides, 802.11ah MAC layer has adopted some enhancements to fulfill the expected system requirements. These enhancements include the improvement of power saving features, support of large number of stations, efficient medium access mechanisms and throughput enhancements by greater compactness of various frame formats. Through the numerical analysis, we evaluate the transmission range for indoor and outdoor environments and the theoretical throughput with newly defined channel access mechanisms.
37

"Security of Drone Hacking with Raspberry-Pi using Internet-of-Things." International Journal of Innovative Technology and Exploring Engineering 9, no. 4 (April 10, 2020): 1629–34. http://dx.doi.org/10.35940/ijitee.f4124.049620.

Abstract:
Internet-of-Things (IoT) technology is being adopted at increasing speed because of growing demand from customers and firms attracted by the advantages of smart and elegant hardware. Drones are finding applications in many areas, which raises the threat of data hacking and also poses safety risks to the public, especially given the online data collection performed by widely used commercial drones. Deploying IoT-enabled drones in large quantities increases the number of drones exposed to attack or misuse by fraudsters. This research work examines issues of drone security, safety, threats, and attacks, and illustrates a set of Wi-Fi risk possibilities. We observed different types of threats and attacks on commercial drones; the propagation-channel risks analysed are denial of access, authorization techniques, unauthorized third-party presence, illegal access to the source, and misuse of device characteristic features in frames. Our work prevents illegal connection entry, with access control based on a Raspberry Pi and Wi-Fi between the access point and the drone, using telnet together with the IP address and a sign-up process. The investigation explains the execution of the methodology, which includes the attack steps and enabling data security using the RSA algorithm. The main work focuses on the data collected on Parrot Security OS and on securing Wi-Fi-mode communication between the drone and other devices.
38

Whelan, Andrew, Alexandra James, Justine Humphry, Tanja Dreher, Danielle Hynes, and Scarlet Wilcock. "SMART TECHNOLOGIES, ALGORITHMIC GOVERNANCE AND DATA JUSTICE." AoIR Selected Papers of Internet Research 2019 (October 31, 2019). http://dx.doi.org/10.5210/spir.v2019i0.10977.

Abstract:
This panel engages critically with the development, application and emerging effects of ‘smart’ technologies of governance. Attending specifically to the ramifications of new forms of (‘big’) data capture and integration implemented by or for state agencies, the panel describes how the rollout of these technologies impacts on and is shaped by contexts prefigured by social and economic inequalities. Two specific arenas are addressed and juxtaposed, with two papers on each of these. The first arena is the introduction of ‘smart city’ technologies and their implications for low income and marginalised communities. Often presented as novel augmentations of urban space, enhancing and customising the urban experience at the same time that they increase the city’s efficiency and ‘awareness’, smart city technologies also reconfigure urban spaces and how they are understood and governed by rendering the city a site of data generation and capture. This presents new opportunities and risks for residents and powerful commercial and state actors alike. The emergence of public wi-fi kiosks as a means of providing internet access to underserved communities, as one panellist describes, can be shown to expose low-income residents to new forms of surveillance and to new kinds of inequity in terms of the asymmetry of information made available to the parties in the exchange at the kiosk. Surveillance and data capture is organised to particular ends and powerful interests shape and leverage the design and affordances of such initiatives in particular ways. Insofar as concerns are raised about these developments, they are commonly framed in terms of individual rights to privacy, missing the scale of the issues involved. It is not merely that ‘opting out’ becomes untenable. As other panellists show, the issues involved are fundamentally social rather than individual in that they foreground questions around the appropriate relations between state and commercial actors, the use and nature of public space, and the uneven distribution of rights of access to space, information, and other resources within the city. Economically disenfranchised groups are not only denied meaningful access and participation, but colonised by data processes designed to extract various forms of value from their use of ‘public’ infrastructure which may not best serve their own interests. The second arena addressed by the panel is the role of algorithmic governance and artificial intelligence in the provision of social welfare. This context is described in terms of both the effects for the frontline service encounter, and the design, justification, and implementation of the technologies reformatting this encounter from key locations within state agencies. Emerging technological infrastructures for social welfare do not simply reconfigure how existing services are offered and accessed. They facilitate the identification of new target populations for intervention, at the same time that they introduce additional burdens, hurdles and forms of intervention and surveillance for these populations. As such, it is evident in the design and application of these technologies that they accord with and expedite punitive logics in welfare provision, providing new opportunities for the application of dominant neoliberal governance strategies. In both arenas, one can conceptualize ‘pipelines’ for the implementation of these developments. These pipelines are interstitial and heterogeneous, and combine different timelines, technologies and actors. 
They are often technically or administratively opaque or otherwise obscured from view. This gives rise to a methodological and intellectual problem, around the extent to which researchers can say they know enough to point to determining instances, political agendas, commercial agreements, incidental alignments and so on in such a way as to advocate effectively for democratic input and oversight. In this sense the papers assembled highlight how these developments call for new politics of method, new modalities of analysis and critique, and more effective activist and academic engagements with the question of how ideals of justice and equity can best be instantiated in these contexts.
39

Costello, Eamon. "Editorial." Irish Journal of Technology Enhanced Learning 2, no. 1 (June 12, 2017). http://dx.doi.org/10.22554/ijtel.v2i1.21.

Full text
Abstract:
Dear Reader, Welcome to this special issue of the Irish Journal of Technology Enhanced Learning, the journal of the Irish Learning Technology Association. This issue comprises a selection of papers based on submissions to the Next Generation: Digital Learning Research Symposium in November 2016. This symposium was held in partnership between the Irish Learning Technology Association, the Educational Studies Association of Ireland, and both the Institute of Education and the National Institute for Digital Learning at Dublin City University. The Symposium was framed around the notion of building capacity in research in digital learning. The Symposium's title not only alluded to generations of teaching and learning but also to learning futures and how we might ford the chasm of the great promise of the digital with evidence of its actual effects. The symposium sought to foster discussion, debate and above all a community of scholars by discussing and debating the big issues we face in digital learning research. The event gave voice to a wide range of Irish educators and researchers across all levels and sectors. The articles in this issue represent a selection of the highlights of presentations at the symposium in extended written form. Professor Gráinne Conole's keynote served in many ways to set the scene for a broad gathering of educators and researchers from across all levels and sectors. Her article in this issue - Research through the Generations: Reflecting on the Past, Present and Future - traces a broad arc of research in educational technology framed around key technologies and methodological developments. She identifies five transformative technologies: the web/Wi-Fi; Learning Management Systems (LMSs); mobile devices; Open Educational Resources (OER) and Massive Open Online Courses (MOOCs); and social media. Her piece considers the characteristics that made these developments transformative, along with the challenges to their usage. This examination is followed by an overview of the field of digital learning research, which it divides into three main types: research around the pedagogies of digital learning, research on underpinning technologies, and research at an organizational level. Using this framework, the reader is afforded an insight into how digital learning has emerged as a new interdisciplinary field. It is always at the tectonic plates of previously separate disciplines that new terrain emerges, and Professor Conole's piece will provide an invaluable contextual overview both to readers new to the field of digital learning research and to more experienced researchers who may be too close to see it. As such it is a richly rewarding read for Ed Tech visitors and residents alike. A second position paper in the issue by Tony Murphy also invokes Educational Technology futures. It examines a key area, and one of the three broad themes identified by Conole: that of organizational forms in digital learning. Behind the provocative title of "The future of Technology Enhanced Learning (TEL) is in the hands of the anonymous, grey nondescript mid-level professional manager" is an informative and insightful research-informed commentary on how technology confronts existing practices and boundaries in higher education.
The key insights that this paper affords arise from how it draws on well-developed concepts from the literature on organizational forms outside of education and uses them to interrogate the emerging practices of work and professional roles for 21st century educators. The theme of contemporary professional practice is central to Exploring higher education professionals' use of Twitter for learning by Muireann O'Keeffe. Using a Visitor and Resident typology, this paper reports on research into how higher education professionals were involved in a range of types of participation (and nonparticipation) on Twitter. It shows how participants both use and sometimes fail to use social networking for professional learning and attempts to unpick the complexities of participation in online spaces. This paper makes an important contribution to the topical area of how we participate (or resist participation) in online spaces as a professional community. Professional practice is also to the fore in a fascinating research piece by Michael Hallissy that looks at practices in Synchronous Computer Mediated Conferencing (SCMC), which is, relative to asynchronous environments, an under-researched area. Sharing Professional Practice – Tutors have their say reports on research into teaching in synchronous environments. Using the Technological Pedagogical Content Knowledge (TPACK) framework in parallel with the Flanders Interaction Analysis Categories, the practices of teachers are explored with the aim of challenging and changing them. The findings of this research enjoin us as educators to share our practice critically and reflectively. Its core message is perhaps that both teaching and teaching development are forms of dialogue. Appropriately then, this article is written in a style that immediately engages the reader, draws them into a story and is a richly rewarding read. A number of current trends are captured in Barry James Ryan's research article Near Peers: Harnessing the power of the populous to enhance the learning environment. This research investigated the impact of a tool called NearPod used in third level educational settings. Using a case study methodology, it shows a practical implementation of some key trends in higher education and reports on its aims to enhance the student learning experience through the integration of BYOD (Bring Your Own Device) and flipped classroom learning. Methodologically this study is interesting for its use of student and teacher reflective forms of data and provides a valuable vignette of contemporary research-informed teaching. To conclude, it is hoped that the diverse array of articles in this issue offers something for every interested reader. Indeed this is reflective of the diversity of the Irish community that is engaged in practices informed, mediated or enabled by some kind of digital learning technology. Conole's article sets this out from a research perspective and the picture has been painted elsewhere of what "Ed Tech" as an emergent discipline might look like. To this end it is hoped that this issue goes some way towards helping us build our community through the critical lens of research. On behalf of the Irish Learning Technology Association and the journal editorial team we wish you happy reading (and hope to see your work in a future issue). Best wishes, Eamon, Tom and Fiona [1] Irish Journal of Technology Enhanced Learning Ireland, 2017. © 2017 E. Costello.
The Irish Journal of Technology Enhanced Learning Ireland is the journal of the Irish Learning Technology Association, an Irish-based professional and scholarly society and membership organisation. (CRO# 520231) http://www.ilta.ie/. This is an Open Access article distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), allowing third parties to copy and redistribute the material in any medium or format and to remix, transform, and build upon the material for any purpose, even commercially, provided the original work is properly cited and states its license.
APA, Harvard, Vancouver, ISO, and other styles
40

Pedersen, Isabel, and Kirsten Ellison. "Startling Starts: Smart Contact Lenses and Technogenesis." M/C Journal 18, no. 5 (October 14, 2015). http://dx.doi.org/10.5204/mcj.1018.

Full text
Abstract:
On 17 January 2013, Wired chose the smart contact lens as one of "7 Massive Ideas That Could Change the World," describing a Google-led research project. Wired explains that the inventor, Dr. Babak Parviz, wants to build a microsystem on a contact lens: "Using radios no wider than a few human hairs, he thinks these lenses can augment reality and incidentally eliminate the need for displays on phones, PCs, and widescreen TVs". Explained further in other sources, the technology entails an antenna, circuits embedded into a contact lens, GPS, and an LED to project images on the eye, creating a virtual display (Solve for X). Wi-Fi would stream content through a transparent screen over the eye. One patent describes a camera embedded in the lens (Etherington). Another mentions medical sensing, such as glucose monitoring of tears (Goldman). In other words, Google proposes an imagined future when we use contact lenses to search the Internet (and be searched by it), shop online, communicate with friends, work, navigate maps, swipe through Tinder, monitor our health, watch television, and, by that time, probably engage in a host of activities not yet invented. Often referred to as a bionic contact, the smart contact lens would signal a weighty shift in the way we work, socialize, and frame our online identities. However, speculative discussion over this radical shift in personal computing rarely, if ever, includes consideration of how the body, acting as a host to digital information, will manage to assimilate not only significant affordances, but also significant constraints and vulnerabilities. At this point, for most people, the smart contact lens is just an idea. Is a new medium of communication started when it is launched in an advertising campaign? When we Like it on Facebook? If we chat about it during a party amongst friends? Or, do a critical mass of people actually have to be using it to say it has started? One might say that Apple's Macintosh computer started as a media platform when the world heard about the famous 1984 television advertisement aired during the American NFL Super Bowl of that year. Directed by Ridley Scott, the ad depicts an athlete running down a passageway and hurling a hammer at a massive screen depicting cold war style rulers expounding state propaganda. The screen explodes, freeing those imprisoned from their concentration camp existence. The direct reference to Orwell's 1984 serves as a metaphor for IBM in 1984. PC users were made analogous to political prisoners and IBM served to represent the totalitarian government. The Mac became something that, at the time, challenged IBM and suggested an alternative use for the desktop computer, which had previously been relegated to work rather than life. Not everyone bought a Mac, but the polemical ad fostered the idea that Mac was certainly the start of new expectations, civic identities, value-systems, and personal uses for computers. The smart contact lens is another startling start. News of it shocks us, initiates social media clicks and forwards, and instigates dialogue. But it also indicates the start of a new media paradigm that is already undergoing popular adoption as it is announced in mainstream news and circulated algorithmically across media channels. Since 2008, news outlets like CNN, The New York Times, The Globe and Mail, Asian International News, United News of India, The Times of London and The Washington Post have carried it, feeding the buzz in circulation that Google intends.
Attached to the wave of current popular interest generated around any technology claiming to be "wearable," a smart contact lens also seems surreptitious. We would no longer hold smartphones, but hide all of that digital functionality beneath our eyelids. Its emergence reveals the way commercial models have dramatically changed. The smart contact lens is a futuristic invention imagined for us and about us, but also a sensationalized idea socializing us to a future that includes it. It is also a real device that Parviz (with Google) has been inventing, promoting, and patenting for commercial applications. All of these workings speak to a broader digital culture phenomenon. We argue that the smart contact lens discloses a process of nascent posthuman adaptation, launched in an era that celebrates wearable media as simultaneously astonishing and banal. More specifically, we adopt technology based on our adaptation to it within our personal, political, medial, social, and biological contexts, which also function in a state of flux. N. Katherine Hayles writes that "Contemporary technogenesis, like evolution in general, is not about progress ... rather, contemporary technogenesis is about adaptation, the fit between organisms and their environments, recognizing that both sides of the engagement (human and technologies) are undergoing coordinated transformations" (81). This article attends to the idea that in these early stages, symbolic acts of adaptation signal an emergent medium through rhetorical processes that society both draws from and contributes to. In terms of project scope, this article contributes a focused analysis to a much larger ongoing digital rhetoric project. For the larger project, we conducted a discourse analysis on a collection of international publications concerning Babak Parviz and the invention. We searched for and collected newspaper stories, news broadcasts, YouTube videos from various sources, academic journal publications, inventors' conference presentations, and advertising, all published between January 2008 and May 2014, generating a corpus of more than 600 relevant artifacts. Shortly after this time, Dr. Parviz, a Professor at the University of Washington, left the secretive GoogleX lab and joined Amazon.com (Mac). For this article we focus specifically on the idea of beginnings or genesis and how digital spaces increasingly serve as the grounds for emergent digital cultural phenomena that are rarely recognized as starting points. We searched through the corpus to identify a few exemplary international mainstream news stories to foreground predominant tropes in support of the claim we make that smart contact lenses are a startling idea. Content producers deliberately use astonishment as a persuasive device. We characterize the idea of a smart contact lens cast in rhetorical terms in order to reveal how its allure works as a process of adaptation. Rhetorician and philosopher Kenneth Burke writes that "rhetorical language is inducement to action (or to attitude)" (42). A rhetorical approach is instrumental because it offers a model to explain how we deploy, oftentimes, manipulative meaning as senders and receivers while negotiating highly complex constellations of resources and contexts. Burke's rhetorical theory can show how messages influence and become influenced by powerful hierarchies in discourse that seem transparent or neutral, ones that seem to fade into the background of our consciousness.
For this article, we also concentrate on rhetorical devices such as ethos and the inventor's own appeals through different modes of communication. Ethos was originally proposed by Aristotle to identify speaker credibility as a persuasive tactic. Addressed by scholars of rhetoric for centuries, ethos has been reconfigured by many critical theorists (Burke; Baumlin Ethos; Hyde). Baumlin and Baumlin suggest that "ethos describes an audience's projection of authority and trustworthiness onto the speaker ... ethos suggests that the ethical appeal to be a radically psychological event situated in the mental processes of the audience – as belonging as much to the audience as to the actual character of a speaker" (Psychology 99). Discussed in the next section, our impression of Parviz and his position as inventor plays a dramatic role in the surfacing of the smart contact lens. Digital Rhetoric is an "emerging scholarly discipline concerned with the interpretation of computer-generated media as objects of study" (Losh 48). In an era when machine-learning algorithms become the messengers for our messages, which have become commodity items operating across globalized, capitalist networks, digital rhetoric provides a stable model for our approach. It leads us to demonstrate how this emergent medium and invention, the smart contact lens, is born amid new digital genres of speculative communication circulated in the everyday forums we engage on a daily basis. Smart Contact Lenses, Sensationalism, and Identity One relevant site for exploration into how an invention gains ethos is through writing or video penned or produced by the inventor. An article authored by Parviz in 2009 discusses his invention and the technical advancements that need to be made before the smart contact lens could work. He opens the article using a fictional and sensationalized analogy to encourage the adoption of his invention: The human eye is a perceptual powerhouse. It can see millions of colors, adjust easily to shifting light conditions, and transmit information to the brain at a rate exceeding that of a high-speed Internet connection. But why stop there? In the Terminator movies, Arnold Schwarzenegger's character sees the world with data superimposed on his visual field—virtual captions that enhance the cyborg's scan of a scene. In stories by the science fiction author Vernor Vinge, characters rely on electronic contact lenses, rather than smartphones or brain implants, for seamless access to information that appears right before their eyes. Identity building is made to correlate with smart contact lenses in a manner that frames them as exciting. Coming to terms with them often involves casting us as superhumans, wielding abilities that we do not currently possess. One reason for embellishment is because we do not need digital displays on the eyes, so the motive to use them must always be geared to transcending our assumed present condition as humans and society members. Consequently, imagination is used to justify a shift in human identity along a future trajectory. This passage above also instantiates a transformation from humanist to posthumanist posturing (i.e. "the cyborg") in order to incent the adoption of smart contact lenses. It begins with the bold declarative statement, "The human eye is a perceptual powerhouse," which is a comforting claim about our seemingly human superiority.
Indexing abstract humanist values, Parviz emphasizes skills we already possess, including seeing a plethora of colours, adjusting to light on the fly, and thinking fast, indeed faster than "a high-speed Internet connection". However, the text goes on to summon the Terminator character and his optic feats from the franchise of films. Filmic cyborg characters fulfill the excitement that posthuman rhetoric often seems to demand, but there is more here than sensationalism. Parviz raises the issue of augmenting human vision using science fiction as his contextualizing vehicle because he lacks another way to imbricate the idea. Most interesting in this passage is the inventor's query "But why stop there?" to yoke the two claims, one biological (i.e., "The human eye is a perceptual powerhouse") and one fictional (i.e. Terminator, Vernor Vinge characters). The query suggests, Why stop with human superiority, we may as well progress to the next level and embrace a smart contact lens just as fictional cyborgs do. The non-threatening use of fiction makes the concept seem simultaneously exciting and banal, especially because the inventor follows with a clear description of the necessary scientific engineering in the rest of the article. This rhetorical act signifies the voice of a technoelite, a heavily-funded cohort responding to global capitalist imperatives, armed with a team of technologists who can access technological advancements and imbue comments with an authority that may extend beyond their fields of expertise, such as communication studies, sociology, psychology, or medicine. The result is a powerful ethos. The idea behind the smart contact lens maintains a degree of respectability long before a public is invited to use it. Parviz exhumes much cultural baggage when he brings to life the Terminator character to pitch smart contact lenses. The Terminator series of films has established the "Arnold Schwarzenegger" character as a cultural mainstay. Each new film reinvented him, but ultimately promoted him within a convincing dystopian future across the whole series: The Terminator (Cameron), Terminator 2: Judgment Day (Cameron), Terminator 3: Rise of the Machines (Mostow), Terminator Salvation (McG) and Terminator Genisys (Taylor) (which appeared in 2015 after Parviz's article). Recently, several writers have addressed how cyborg characters figure significantly in our cultural psyche (Haraway; Bukatman; Leaver). Tama Leaver's Artificial Culture explores the way popular, contemporary, cinematic, science fiction depictions of embodied Artificial Intelligence, such as the Terminator cyborgs, "can act as a matrix which, rather than separating or demarcating minds and bodies or humanity and the digital, reinforce the symbiotic connection between people, bodies, and technologies" (31). Pointing out the violent and ultimately technophobic motive of The Terminator films, Leaver reads across them to conclude nevertheless that science fiction "proves an extremely fertile context in which to address the significance of representations of Artificial Intelligence" (63). Posthumanism and Technogenesis One reason this invention enters the public's consciousness is its announcement alongside a host of other technologies, which seem like parts of a whole. We argue that this constant grouping of technologies in the news is one process indicative of technogenesis.
For example, City A.M., London's largest free commuter daily newspaper, reports on the future of business technology as a hodgepodge of what ifs: As Facebook turns ten, and with Bill Gates stepping down as Microsoft chairman, it feels like something is drawing to an end. But if so, it is only the end of the technological revolution's beginning ... Try to look ahead ten years from now and the future is dark. Not because it is bleak, but because the sheer profusion of potential is blinding. Smartphones are set to outnumber PCs within months. After just a few more years, there are likely to be 3bn in use across the planet. In ten years, who knows – wearables? smart contact lenses? implants? And that's just the start. The Internet of Things is projected to be a $300bn (£183bn) industry by 2020. (Sidwell) This reporting is a common means to frame the commodification of technology in globalized business news that seeks circulation as much as it does readership. But as a text, it also posits how individuals frame the future and their participation with it (Pedersen). Smart contacts appear to move along this exciting, unstoppable trajectory where the "potential is blinding". The motive is to excite and scare. However, simultaneously, the effect is predictable. We are quite accustomed to this march of innovations that appears every day in the morning paper. We are asked to adapt rather than question; consequently, we never separate the parts from the whole (e.g., "wearables? smart contact lenses? implants") in order to look at them critically. In coming to terms with Cary Wolfe's definition of posthumanism, Greg Pollock writes that posthumanism is the questioning that goes on "when we can no longer rely on 'the human' as an autonomous, rational being who provides an Archimedean point for knowing about the world (in contrast to "humanism," which uses such a figure to ground further claims)" (208). With similar intent, N. Katherine Hayles, formulating the term technogenesis, suggests that we are not really progressing to another level of autonomous human existence when we adopt media; we are, in effect, adapting to media, and media are also in a process of adapting to us. She writes: As digital media, including networked and programmable desktop stations, mobile devices, and other computational media embedded in the environment, become more pervasive, they push us in the direction of faster communication, more intense and varied information streams, more integration of humans and intelligent machines, and more interactions of language with code. These environmental changes have significant neurological consequences, many of which are now becoming evident in young people and to a lesser degree in almost everyone who interacts with digital media on a regular basis. (11) Following Hayles, three actions or traits characterize adaptation in a manner germane to the technogenesis of media like smart contact lenses. The first is "media embedded in the environment". The trait of embedding technology in the form of sensors and chips into external spaces evokes the foundations of the Internet of Things (IoT). Extensive data-gathering sensors, wireless technologies, mobile and wearable components integrated with the Internet, all contribute to the IoT. Emerging from cloud computing infrastructures and data models, the IoT, in its most extreme, involves a scenario whereby people, places, animals, and objects are given unique "embedded" identifiers so that they can embark on constant data transfer over a network.
In a sense, the lenses are adapted artifacts responding to a world that expects ubiquitous networked access for both humans and machines. Smart contact lenses will essentially be attached to the user, who must adapt to these dynamic and heavily mediated contexts. Following closely on the first, the second point Hayles makes is "integration of humans and intelligent machines". The camera embedded in the smart contact lens, really an adapted smartphone camera, turns the eye itself into an image capture device. By incorporating them under the eyelids, smart contact lenses signify integration in complex ways. Human-machine amalgamation follows biological, cognitive, and social contexts. Third, Hayles points to "more interactions of language with code." We assert that with smart contact lenses, code will eventually govern interaction between countless agents in accordance with other smart devices, such as: (1) exchanges of code between people and external nonhuman networks of actors through machine algorithms and massive amalgamations of big data distributed on the Internet; (2) exchanges of code amongst people, human social actors in direct communication with each other over social media; and (3) exchanges of coding and decoding between people and their own biological processes (e.g. monitoring breathing, consuming nutrients, translating brainwaves) and phenomenological (but no less material) practices (e.g., remembering, grieving, or celebrating). The allure of the smart contact lens is the quietly pressing proposition that communication models such as these will be radically transformed because they will have to be adapted for use with the human eye, as the method of input and output of information. Focusing on genetic engineering, Eugene Thacker fittingly defines biomedia as "entail[ing] the informatic recontextualization of biological components and processes, for ends that may be medical or nonmedical (economic, technical) and with effects that are as much cultural, social, and political as they are scientific" (123). He specifies, "biomedia are not computers that simply work on or manipulate biological compounds. Rather, the aim is to provide the right conditions, such that biological life is able to demonstrate or express itself in a particular way" (123). Smart contact lenses sit on the cusp of emergence as a biomedia device that will enable us to decode bodily processes in significant new ways. The bold, technical discourse that announces it, however, has not yet begun to attend to the seemingly dramatic "cultural, social, and political" effects percolating under the surface. Through technogenesis, media acclimatizes rapidly to change without establishing a logic of the consequences or a design plan for emergence. Following from this, we should mention the risks this invention entails, such as the intrusion of surveillance algorithms deployed by corporations, governments, and other hegemonic entities. If smart contact lenses are biomedia devices inspiring us to decode bodily processes and communicate that data for analysis, for ourselves, and others in our trust (e.g., doctors, family, friends), we also need to be wary of them. David Lyon warns: Surveillance has spilled out of its old nation-state containers to become a feature of everyday life, at work, at home, at play, on the move. So far from the single all-seeing eye of Big Brother, myriad agencies now trace and track mundane activities for a plethora of purposes.
Abstract data, now including video, biometric, and genetic as well as computerized administrative files, are manipulated to produce profiles and risk categories in a liquid, networked system. The point is to plan, predict, and prevent by classifying and assessing those profiles and risks. (13) In simple terms, the smart contact lens might disclose the most intimate information we possess and leave us vulnerable to profiling, tracking, and theft. Irma van der Ploeg presupposed this predicament when she wrote: "The capacity of certain technologies to change the boundary, not just between what is public and private information but, on top of that, between what is inside and outside the human body, appears to leave our normative concepts wanting" (71). The smart contact lens, with its implied motive to encode and disclose internal bodily information, needs consideration on many levels. Conclusion The smart contact lens has made a digital beginning. We accept it through the mass consumption of the idea, which acts as a rhetorical motivator for media adoption, taking place long before the device materializes in the marketplace. This occurrence may also be a sign of our "posthuman predicament" (Braidotti). We have argued that the smart contact lens concept reveals our posthuman adaptation to media rather than our reasoned acceptance or agreement with it as a logical proposition. By the time we actually squabble over the price, express fears for our privacy, and buy them, smart contact lenses will long be part of our everyday culture. References Baumlin, James S., and Tita F. Baumlin. "On the Psychology of the Pisteis: Mapping the Terrains of Mind and Rhetoric." Ethos: New Essays in Rhetorical and Critical Theory. Eds. James S. Baumlin and Tita F. Baumlin. Dallas: Southern Methodist University Press, 1994. 91-112. Baumlin, James S., and Tita F. Baumlin, eds. Ethos: New Essays in Rhetorical and Critical Theory. Dallas: Southern Methodist University Press, 1994. Bilton, Nick. "A Rose-Colored View May Come Standard." The New York Times, 4 Apr. 2012. Braidotti, Rosi. The Posthuman. Cambridge: Polity, 2013. Bukatman, Scott. Terminal Identity: The Virtual Subject in Postmodern Science Fiction. Durham: Duke University Press, 1993. Burke, Kenneth. A Rhetoric of Motives. Berkeley: University of California Press, 1950. Cameron, James, dir. The Terminator. Orion Pictures, 1984. DVD. Cameron, James, dir. Terminator 2: Judgment Day. Artisan Home Entertainment, 2003. DVD. Etherington, Darrell. "Google Patents Tiny Cameras Embedded in Contact Lenses." TechCrunch, 14 Apr. 2014. Goldman, David. "Google to Make Smart Contact Lenses." CNN Money, 17 Jan. 2014. Haraway, Donna. Simians, Cyborgs and Women: The Reinvention of Nature. London: Free Association Books, 1991. Hayles, N. Katherine. How We Think: Digital Media and Contemporary Technogenesis. Chicago: University of Chicago, 2012. Hyde, Michael. The Ethos of Rhetoric. Columbia: University of South Carolina Press, 2004. Leaver, Tama. Artificial Culture: Identity, Technology, and Bodies. New York: Routledge, 2012. Losh, Elizabeth. Virtualpolitik: An Electronic History of Government Media-Making in a Time of War, Scandal, Disaster, Miscommunication, and Mistakes. Boston: MIT Press, 2009. Lyon, David, ed. Surveillance as Social Sorting: Privacy, Risk and Digital Discrimination. New York: Routledge, 2003. Mac, Ryan. "Amazon Lures Google Glass Creator Following Phone Launch." Forbes.com, 14 July 2014. McG, dir. Terminator Salvation. Warner Brothers, 2009. DVD.
Mostow, Jonathan, dir. Terminator 3: Rise of the Machines. Warner Brothers, 2003. DVD. Parviz, Babak A. "Augmented Reality in a Contact Lens." IEEE Spectrum, 1 Sep. 2009. Pedersen, Isabel. Ready to Wear: A Rhetoric of Wearable Computers and Reality-Shifting Media. Anderson, South Carolina: Parlor Press, 2013. Pollock, Greg. "What Is Posthumanism by Cary Wolfe (2009)." Rev. of What is Posthumanism?, by Cary Wolfe. Journal for Critical Animal Studies 9.1/2 (2011): 235-241. Sidwell, Marc. "The Long View: Bill Gates Is Gone and the Dot-com Era Is Over: It's Only the End of the Beginning." City A.M., 7 Feb. 2014. "Solve for X: Babak Parviz on Building Microsystems on the Eye." YouTube, 7 Feb. 2012. Taylor, Alan, dir. Terminator: Genisys. Paramount Pictures, 2015. DVD. Thacker, Eugene. "Biomedia." Critical Terms for Media Studies. Eds. W.J.T. Mitchell and Mark Hansen. Chicago: University of Chicago Press, 2010. 117-130. Van der Ploeg, Irma. "Biometrics and the Body as Information." Surveillance as Social Sorting: Privacy, Risk and Digital Discrimination. Ed. David Lyon. New York: Routledge, 2003. 57-73. Wired Staff. "7 Massive Ideas That Could Change the World." Wired.com, 17 Jan. 2013.
APA, Harvard, Vancouver, ISO, and other styles
41

Avram, Horea. "The Convergence Effect: Real and Virtual Encounters in Augmented Reality Art." M/C Journal 16, no. 6 (November 7, 2013). http://dx.doi.org/10.5204/mcj.735.

Full text
Abstract:
Augmented Reality—The Liminal Zone Within the larger context of the post-desktop technological philosophy and practice, an increasing number of efforts are directed towards finding solutions for integrating as close as possible virtual information into specific real environments; a short list of such endeavors includes Wi-Fi connectivity, GPS-driven navigation, mobile phones, GIS (Geographic Information System), and various technological systems associated with what is loosely called locative, ubiquitous and pervasive computing. Augmented Reality (AR) is directly related to these technologies, although its visualization capabilities and the experience it provides assure it a particular place within this general trend. Indeed, AR stands out for its unique capacity (or ambition) to offer a seamless combination—or what I call here an effect of convergence—of the real scene perceived by the user with virtual information overlaid on that scene interactively and in real time. The augmented scene is perceived by the viewer through the use of different displays, the most common being the AR glasses (head-mounted display), video projections or monitors, and hand-held mobile devices such as smartphones or tablets, increasingly popular nowadays. One typical example of an AR application is Layar, a browser that layers information of public interest—delivered through an open-source content management system—over the actual image of a real space, streamed live on the mobile phone display. An increasing number of artists employ this type of mobile AR app to create artworks that consist in perceptually combining material reality and virtual data: as the user points the smartphone or tablet at a specific place, virtual 3D-modelled graphics or videos appear in real time, seamlessly inserted in the image of that location, according to the user's position and orientation. In the engineering and IT design fields, one of the first researchers to articulate a coherent conceptualization of AR and to underline its specific capabilities is Ronald Azuma. He writes that, unlike Virtual Reality (VR), which completely immerses the user inside a synthetic environment, AR supplements reality, therefore enhancing "a user's perception of and interaction with the real world" (355-385). Another important contributor to the foundation of AR as a concept and as a research field is industrial engineer Paul Milgram. He proposes a comprehensive and frequently cited definition of "Mixed Reality" (MR) via a schema that includes the entire spectrum of situations that span the "continuum" between actual reality and virtual reality, with "augmented reality" and "augmented virtuality" between the two poles (283). Important to remark with regard to terminology (MR or AR) is that, especially in the non-scientific literature, authors do not always explain a preference for either MR or AR. This suggests that the two terms are understood as synonymous, but it also provides evidence for my argument that, outside of the technical literature, AR is considered a concept rather than a technology. Here, I use the term AR instead of MR, considering that the phrase AR (and the integrated idea of augmentation) is better suited to capturing the convergence effect. As I will demonstrate in the following lines, the process of augmentation (i.e. the convergence effect) is the result of an enhancement of the possibilities to perceive and understand the world—through adding data that augment the perception of reality—and not simply the product of a mix.
Nevertheless, there is surely something "mixed" about this experience, at least for the fact that it combines reality and virtuality. The experiential result of combining reality and virtuality in the AR process is what media theorist Lev Manovich calls an "augmented space," a perceptual liminal zone which he defines as "the physical space overlaid with dynamically changing information, multimedia in form and localized for each user" (219). The author derives the term "augmented space" from the term AR (already established in the scientific literature), but he sees AR, and implicitly augmented space, not as a strictly defined technology, but as a model of visuality concerned with the intertwining of the real and virtual: "it is crucial to see this as a conceptual rather than just a technological issue – and therefore as something that in part has already been an element of other architectural and artistic paradigms" (225-6). Surely, it is hard to believe that AR has appeared in a void or that its emergence is strictly related to certain advances in technological research. AR—as an artistic manifestation—is informed by other attempts (not necessarily digital) to merge real and fictional in a unitary perceptual entity, particularly by installation art and Virtual Reality (VR) environments. With installation art, AR shares the same spatial strategy and scenographic approach—they both construct "fictional" areas within material reality, that is, a sort of mise-en-scène that is aesthetically and socially produced and centered on the active viewer. From the media installationist practice of the previous decades, AR inherited the way of establishing a closer spatio-temporal interaction between the setting, the body and the electronic image (see for example Bruce Nauman's Live-Taped Video Corridor [1970], Peter Campus's Interface [1972], Dan Graham's Present Continuous Past(s) [1974], Jeffrey Shaw's Viewpoint [1975], or Jim Campbell's Hallucination [1988]). On the other hand, VR plays an important role in the genealogy of AR for sharing the same preoccupation with illusionist imagery and—at least in some AR projects—for providing immersive interactions in "expanded image spaces experienced polysensorily and interactively" (Grau 9). VR artworks such as Paul Sermon's Telematic Dreaming (1992), Char Davies' Osmose (1995), Michael Naimark's Be Now Here (1995-97), Maurice Benayoun's World Skin: A Photo Safari in the Land of War (1997), and Luc Courchesne's Where Are You? (2007-10) are significant examples of the way in which the viewer can be immersed in "expanded image-spaces." Offering no view of the exterior world, the works try instead to reduce as much as possible the critical distance the viewer might have to the image he/she experiences. Indeed, AR emerged in great part from the artistic and scientific research efforts dedicated to VR, but also from the technological and artistic investigations of the possibilities of blending reality and virtuality, conducted in the previous decades. For example, in the 1960s, computer scientist Ivan Sutherland played a crucial role in the history of AR, contributing to the development of display solutions and tracking systems that permit a better immersion within the digital image. Another important figure in the history of AR is computer artist Myron Krueger, whose experiments with "responsive environments" are fundamental as they proposed a closer interaction between the participant's body and the digital object.
More recently, architect and theorist Marcos Novak contributed to the development of the idea of AR by introducing the concept of "eversion", "the counter-vector of the virtual leaking out into the actual". Today, AR technological research and the applications made available by various developers and artists are focused more and more on mobility and ubiquitous access to information instead of immersivity and illusionist effects. A few examples of mobile AR include applications such as Layar and Wikitude—"world browsers" that overlay site-specific information in real time on a real view (video stream) of a place; Streetmuseum (launched in 2010) and Historypin (launched in 2011)—applications that insert archive images into the street-view of a specific location where the old images were taken; or Google Glass (launched in 2012)—a device that provides the wearer access to Google's key Cloud features, in situ and in real time. Recognizing the importance of various technological developments and of artistic manifestations such as installation art and VR as predecessors of AR, we should emphasize that AR moves forward from these artistic and technological models. AR extends the installationist precedent by proposing a consistent and seamless integration of informational elements with the very physical space of the spectator, and at the same time rejects the idea of segregating the viewer into a complete artificial environment like in VR systems by opening the perceptual field to the surrounding environment. Instead of leaving the viewer in a sort of epistemological "lust" within the closed limits of the immersive virtual systems, AR sees virtuality rather as a "component of experiencing the real" (Farman 22). Thus, the questions that arise—and which this essay aims to answer—are: Do we have a specific spatial dimension in AR? If yes, can we distinguish it as a different—if not new—spatial and aesthetic paradigm? Is AR's intricate topology able to be the place not only of convergence, but also of possible tensions between its real and virtual components, between the ideal of obtaining a perceptual continuity and the inherent (technical) limitations that undermine that ideal? Converging Spaces in the Artistic Mode: Between Continuum and Discontinuum As key examples of the way in which AR creates a specific spatial experience—in which convergence appears as a fluctuation between continuity and discontinuity—I mention three of the most accomplished works in the field that, significantly, also expose the essential role played by the interface in providing this experience: Living-Room 2 (2007) by Jan Torpus, Under Scan (2005-2008) by Rafael Lozano-Hemmer and Hans RichtAR (2013) by John Craig Freeman and Will Pappenheimer. The works illustrate the three main categories of interfaces used for AR experience: head-attached, spatial displays, and hand-held (Bimber 2005). These types of interface—together with the whole array of adjacent devices, software and tracking systems—play a central role in determining the forms and outcomes of the user's experience and consequently inform in a certain measure the aesthetic and socio-cultural interpretative discourse surrounding AR. Indeed, it is not the same to have an immersive but solitary experience, or a mobile and public experience of an AR artwork or application. The first example is Living-Room 2, an immersive AR installation realized by a collective coordinated by Jan Torpus in 2007 at the University of Applied Sciences and Arts FHNW, Basel, Switzerland.
The work consists of a built "living-room" with pieces of furniture and domestic objects that are perceptually augmented by means of a "see-through" Head Mounted Display. The viewer perceives at the same time the real room and a series of virtual graphics superimposed on it, such as illusionist natural vistas that "erase" the walls, or strange creatures that "invade" the living-room. The user can select different augmenting "scenarios" by interacting with both the physical interfaces (the real furniture and objects) and the graphical interfaces (provided as virtual images in the visual field of the viewer, and activated via a handheld device). For example, in one of the scenarios proposed, the user is prompted to design his/her own extended living room, by augmenting the content and the context of the given real space with different "spatial dramaturgies" or "AR décors." Another scenario offers the possibility of creating an "Ecosystem"—a real-digital world perceived through the HMD in which strange creatures virtually occupy the living-room intertwining with the physical configuration of the set design and with the user's viewing direction, body movement, and gestures. Particular attention is paid to the participant's position in the room: a tracking device measures the coordinates of the participant's location and direction of view and effectuates occlusions of real space and then congruent superimpositions of 3D images upon it. Figure 1: Jan Torpus, Living-Room 2 (Ecosystems), Augmented Reality installation (2007). Courtesy of the artist. Figure 2: Jan Torpus, Living-Room 2 (AR decors), Augmented Reality installation (2007). Courtesy of the artist. In this sense, the title of the work acquires a double meaning: "living" is both descriptive and metaphoric. As Torpus explains, Living-Room is an ambiguous phrase: it can be both a living-room and a room that actually lives, an observation that suggests the idea of a continuum and of immersion in an environment where there are no apparent ruptures between reality and virtuality. Of course, immersion is in these circumstances not about the creation of a purely artificial secluded space of experience like that of the VR environments, but rather about a dialogical exercise that unifies two different phenomenal levels, real and virtual, within a (dis)continuous environment (with the prefix "dis" as a necessary provision). Media theorist Ron Burnett's observations about the instability of the dividing line between different levels of experience—more exactly, of the real-virtual continuum—in what he calls immersive "image-worlds" have a particular relevance in this context: Viewing or being immersed in images extend the control humans have over mediated spaces and is part of a perceptual and psychological continuum of struggle for meaning within image-worlds. Thinking in terms of continuums lessens the distinctions between subjects and objects and makes it possible to examine modes of influence among a variety of connected experiences. (113) It is precisely this preoccupation to lessen any (or most) distinctions between subjects and objects, and between real and virtual spaces, that lies at the core of every artistic experiment under the AR rubric. The fact that this distinction is never entirely erased—as Living-Room 2 proves—is part of the very condition of AR.
The ambition to create a continuum is, after all, not about producing perfectly homogeneous spaces, but, as Ron Burnett points out (113), "about modalities of interaction and dialogue" between real worlds and virtual images. Another way to frame the same problematic of creating a provisional spatial continuum between reality and virtuality, but this time in a non-immersive fashion (i.e. with projective interface means), occurs in Rafael Lozano-Hemmer's Under Scan (2005-2008). The work, part of the larger series Relational Architecture, is an interactive video installation conceived for outdoor and indoor environments and presented in various public spaces. It is a complex system comprised of a powerful light source, video projectors, computers, and a tracking device. The powerful light casts shadows of passers-by within the dark environment of the work's setting. A tracking device indicates where viewers are positioned and permits the system to project different video sequences onto their shadows. Shot in advance by local videographers and producers, the filmed sequences show full images of ordinary people moving freely, but also watching the camera. As they appear within pedestrians' shadows, the figurants interact with the viewers, moving and establishing eye contact. Figure 3: Rafael Lozano-Hemmer, Under Scan (Relational Architecture 11), 2005. Shown here: Trafalgar Square, London, United Kingdom, 2008. Photo by: Antimodular Research. Courtesy of the artist. Figure 4: Rafael Lozano-Hemmer, Under Scan (Relational Architecture 11), 2005. Shown here: Trafalgar Square, London, United Kingdom, 2008. Photo by: Antimodular Research. Courtesy of the artist. One of the most interesting attributes of this work with respect to the question of AR's (im)possible perceptual spatial continuity is its ability to create an experientially stimulating and conceptually sophisticated play between illusion and subversion of illusion. In Under Scan, the integration of video projections into the real environment via the active body of the viewer is aimed at tempering as much as possible any disparities or dialectical tensions—that is, any successive or alternative reading—between real and virtual. Although non-immersive, the work fuses the two levels by provoking an intimate but mute dialogue between the real, present body of the viewer and the virtual, absent body of the figurant via the ambiguous entity of the shadow. The latter is an illusion (it marks the presence of a body) that is transcended by another illusion (video projection). Moreover, being "under scan," the viewer inhabits both the "here" of the immediate space and the "there" of virtual information: "the body" is equally a presence in flesh and bones and an occurrence in bits and bytes. But, however convincing this reality-virtuality pseudo-continuum may be, the spatial and temporal fragmentations inevitably persist: there is always a certain break at the phenomenological level between the experience of real space, the bodily absence/presence in the shadow, and the displacements and delays of the video image projection. Figure 5: John Craig Freeman and Will Pappenheimer, Hans RichtAR, augmented reality installation included in the exhibition "Hans Richter: Encounters", Los Angeles County Museum of Art, 2013. Courtesy of the artists. Figure 6: John Craig Freeman and Will Pappenheimer, Hans RichtAR, augmented reality installation included in the exhibition "Hans Richter: Encounters", Los Angeles County Museum of Art, 2013. Courtesy of the artists.
The third example of an AR artwork that engages the problem of real-virtual spatial convergence as a play between perceptual continuity and discontinuity, this time with the use of a hand-held mobile interface, is Hans RichtAR by John Craig Freeman and Will Pappenheimer. The work is an AR installation included in the exhibition "Hans Richter: Encounters" at Los Angeles County Museum of Art, in 2013. The project recreates the spirit of the 1929 exhibition held in Stuttgart entitled Film und Foto ("FiFo") for which avant-garde artist Hans Richter served as film curator. Featured in the augmented reality is a re-imaging of the FiFo Russian Room designed by El Lissitzky where a selection of Russian photographs, film stills and actual film footage was presented. The users access the work through tablets made available at the exhibition entrance. Pointing the tablet at the exhibition and moving around the room, the viewer discovers that a new, complex installation is superimposed on the screen over the existing installation and gallery space at LACMA. The work effectively recreates and interprets the original design of the Russian Room, with its scaffoldings and surfaces at various heights, while virtually juxtaposing photography and moving images, to which the authors have added some creative elements of their own. Manipulating and converging real space and the virtual forms in an illusionist way, AR is able—as one of the artists maintains—to destabilize the way we construct representation. Indeed, the work makes a statement about visuality that complicates the relationship between the visible object and its representation and interpretation in the virtual realm, one that actually shows the fragility of establishing an illusionist continuum, of a perfect convergence between reality and represented virtuality, whatever the means employed. AR: A Different Spatial Practice Regardless of the degree of "perfection" the convergence process would entail, what we can safely assume—following the examples above—is that the complex nature of AR operations permits a closer integration of virtual images within real space, one that, I argue, constitutes a new spatial paradigm. This is the perceptual outcome of the convergence effect, that is, the process and the product of consolidating different—and differently situated—elements in real and virtual worlds into a single space-image. Of course, illusion plays a crucial role as it makes permeable the perceptual limit between the represented objects and the material spaces we inhabit. Making the interface transparent—in both proper and figurative senses—and integrating it into the surrounding space, AR "erases" the medium with the effect of suspending—at least for a limited time—the perceptual (but not ontological!) differences between what is real and what is represented. These aspects are what distinguish AR from other technological and artistic endeavors that aim at creating more inclusive spaces of interaction. However, unlike the CAVE experience (a display solution frequently used in VR applications) that isolates the viewer within the image-space, in AR virtual information is coextensive with reality. As the example of Living-Room 2 shows, regardless of the degree of immersivity, in AR there is no such thing as dismissing the real in favor of an ideal view of a perfect and completely controllable artificial environment like in VR.
The "redemptive" vision of a total virtual environment is replaced in AR with the open solution of sharing physical and digital realities in the same sensorial and spatial configuration. In AR the real is not denounced but reflected; it is not excluded, but integrated. Yet, AR distinguishes itself also from other projects that presuppose a real-world environment overlaid with data, such as urban surfaces covered with screens, Wi-Fi enabled areas, or video installations that are not site-specific and viewer inclusive. Although closely related to these types of projects, AR remains different: its spatiality is not simply a "space of interaction" that connects; instead, it integrates real and virtual elements. Unlike other non-AR media installations, AR does not only place the real and virtual spaces in an adjacent position (or replace one with another), but makes them perceptually convergent in an—ideally—seamless way (and here Hans RichtAR is a relevant example). Moreover, as Lev Manovich notes, "electronically augmented space is unique – since the information is personalized for every user, it can change dynamically over time, and it is delivered through an interactive multimedia interface" (225-6). Nevertheless, as our examples show, any AR experience is negotiated in the user-machine encounter with various degrees of success and sustainability. Indeed, the realization of the convergence effect is sometimes problematic since AR is never perfectly continuous, spatially or temporally. The convergence effect is the momentary appearance of continuity that will never take full effect for the viewer, given the internal (perhaps inherent?) tensions between the ideal of seamlessness and the mostly technical inconsistencies in the visual construction of the pieces (such as real-time inadequacy or real-virtual registration errors). We should note that many criticisms of the AR visualization systems (be they practical applications or artworks) are directed at this particular aspect related to the imperfect alignment between reality and digital information in the augmented space-image. However, not only can AR applications function with an estimated (and acceptable) registration error, but, I would state, such visual imperfections testify to a distinctive aesthetic aspect of AR. The alleged flaws can be assumed—especially in the artistic AR projects—as the "trace," as the "tool's stroke" that can reflect the unique play between illusion and its subversion, between transparency of the medium and its reflexive strategy. In fact this is what defines AR as a different perceptual paradigm: the creation of a convergent space—which will remain inevitably imperfect—between material reality and virtual information. References Azuma, Ronald T. "A Survey on Augmented Reality." Presence: Teleoperators and Virtual Environments 6.4 (Aug. 1997): 355-385. < http://www.hitl.washington.edu/projects/knowledge_base/ARfinal.pdf >. Benayoun, Maurice. World Skin: A Photo Safari in the Land of War. 1997. Immersive installation: CAVE, Computer, video projectors, 1 to 5 real photo cameras, 2 to 6 magnetic or infrared trackers, shutter glasses, audio-system, Internet connection, color printer. Maurice Benayoun, Works. < http://www.benayoun.com/projet.php?id=16 >. Bimber, Oliver, and Ramesh Raskar. Spatial Augmented Reality: Merging Real and Virtual Worlds. Wellesley, Massachusetts: AK Peters, 2005. 71-92. Burnett, Ron. How Images Think. Cambridge, Mass.: MIT Press, 2004. Campbell, Jim. Hallucination. 1988-1990.
Black and white video camera, 50 inch rear projection video monitor, laser disc players, custom electronics. Collection of Don Fisher, San Francisco. Campus, Peter. Interface. 1972. Closed-circuit video installation, black and white camera, video projector, light projector, glass sheet, empty, dark room. Centre Georges Pompidou Collection, Paris, France. Courchesne, Luc. Where Are You? 2005. Immersive installation: Panoscope 360°. a single channel immersive display, a large inverted dome, a hemispheric lens and projector, a computer and a surround sound system. Collection of the artist. < http://courchel.net/# >. Davies, Char. Osmose. 1995. Computer, sound synthesizers and processors, stereoscopic head-mounted display with 3D localized sound, breathing/balance interface vest, motion capture devices, video projectors, and silhouette screen. Char Davies, Immersence, Osmose. < http://www.immersence.com >. Farman, Jason. Mobile Interface Theory: Embodied Space and Locative Media. New York: Routledge, 2012. Graham, Dan. Present Continuous Past(s). 1974. Closed-circuit video installation, black and white camera, one black and white monitor, two mirrors, microprocessor. Centre Georges Pompidou Collection, Paris, France. Grau, Oliver. Virtual Art: From Illusion to Immersion. Translated by Gloria Custance. Cambridge, Massachusetts, London: MIT Press, 2003. Hansen, Mark B.N. New Philosophy for New Media. Cambridge, Mass.: MIT Press, 2004. Harper, Douglas. Online Etymology Dictionary, 2001-2012. < http://www.etymonline.com >. Manovich, Lev. “The Poetics of Augmented Space.” Visual Communication 5.2 (2006): 219-240. Milgram, Paul, Haruo Takemura, Akira Utsumi, Fumio Kishino. “Augmented Reality: A Class of Displays on the Reality-Virtuality Continuum.” SPIE [The International Society for Optical Engineering] Proceedings 2351: Telemanipulator and Telepresence Technologies (1994): 282-292. Naimark, Michael, Be Now Here. 1995-97. Stereoscopic interactive panorama: 3-D glasses, two 35mm motion-picture cameras, rotating tripod, input pedestal, stereoscopic projection screen, four-channel audio, 16-foot (4.87 m) rotating floor. Originally produced at Interval Research Corporation with additional support from the UNESCO World Heritage Centre, Paris, France. < http://www.naimark.net/projects/benowhere.html >. Nauman, Bruce. Live-Taped Video Corridor. 1970. Wallboard, video camera, two video monitors, videotape player, and videotape, dimensions variable. Solomon R. Guggenheim Museum, New York. Novak, Marcos. Interview with Leo Gullbring, Calimero journalistic och fotografi, 2001. < http://www.calimero.se/novak2.htm >. Sermon, Paul. Telematic Dreaming. 1992. ISDN telematic installation, two video projectors, two video cameras, two beds set. The National Museum of Photography, Film & Television in Bradford England. Shaw, Jeffrey, and Theo Botschuijver. Viewpoint. 1975. Photo installation. Shown at 9th Biennale de Paris, Musée d'Art Moderne, Paris, France.
APA, Harvard, Vancouver, ISO, and other styles
42

Peaty, Gwyneth. "Power in Silence: Captions, Deafness, and the Final Girl." M/C Journal 20, no. 3 (June 21, 2017). http://dx.doi.org/10.5204/mcj.1268.

Full text
Abstract:
Introduction
The horror film Hush (2016) has attracted attention since its release due to the uniqueness of its central character—a deaf–mute author who lives in a world of silence. Maddie Young (Kate Siegel) moves into a remote cabin in the woods to recover from a breakup and finish her new novel. Aside from a cat, she is alone in the house, only engaging with loved ones via online messaging or video chats during which she uses American Sign Language (ASL). Maddie can neither hear nor speak, so writing is her primary mode of creative expression, and a key source of information for the audience. This article explores both the presence and absence of text in Hush, examining how textual “captions” of various kinds are both provided and withheld at key moments. As an author, Maddie battles the limits of written language as she struggles with writer’s block. As a person, she fights the limits of silence and isolation as a brutal killer invades her retreat. Accordingly, this article examines how the interplay between silence, text, and sound invites viewers to identify with the heroine’s experience and ultimate triumph.
Hush is best described as a slasher—a horror film in which a single (usually male) killer stalks and kills a series of victims with relentless determination (Clover, Men, Women). Slashers are about close, visceral killing—blood and the hard stab of the knife. With her big brown eyes and gentle presence, quiet, deaf Maddie is clearly framed as a lamb to slaughter in the opening scenes. Indeed, throughout Hush, Maddie’s lack of hearing is leveraged to increase suspense and horror. The classic pantomime cry of “He’s behind you!” is taken to dark extremes as the audience watches a nameless man (John Gallagher Jr.) stalk the writer in her isolated house. She is unable to hear him enter the building, unable to sense him looming behind her. Neither does she hear him killing her friend outside on the porch, banging her body loudly against the French doors.
And yet, despite her vulnerability, she rises to the challenge. Fighting back against her attacker using a variety of multisensory strategies, Maddie assumes the role of the “Final Girl” in this narrative. As Carol Clover has explained, the Final Girl is a key trope of slasher films, forming part of their essential structure. While others in the film are killed, “she alone looks death in the face; but she alone also finds the strength either to stay the killer long enough to be rescued (ending A) or to kill him herself (ending B)” (Clover, Her Body, Himself). However, reviews and discussions of Hush typically frame Maddie as a Final Girl with a difference. Adding disability into the equation is seen as “revolutionising” the trope (Sheppard) and “updating the Final Girl theory” for a new age (Laird). Indeed, the film presents its Final Girl as simultaneously deaf and powerful—a twist that potentially challenges the dynamics of the slasher and representations of disability more generally.
My Weakness, My Strength
The opening sequence of Hush introduces Maddie’s deafness through the use of sound, silence, and text. Following an establishing shot sweeping over the dark forest and down to her solitary cottage, the film opens to warm domesticity. Close-ups of onion, eggs, and garlic being prepared are accompanied by clear, crisp sounds of crackling, bubbling, slicing, and frying. The camera zooms out to focus on Maddie, busy at her culinary tasks. All noises begin to fade.
The camera focuses on Maddie’s ear as audio is eliminated, replaced by silence. As she continues to cook, the audience experiences her world—a world devoid of sound. These initial moments also highlight the importance of digital communication technologies. Maddie moves smoothly between devices, switching from laptop computer to iPhone while sharing instant messages with a friend. Close-ups of these on-screen conversations provide viewers with additional narrative information, operating as an alternate form of captioning from within the diegesis. Snippets of text from other sources are likewise shown in passing, such as the author’s blurb on the jacket of her previous novel. The camera lingers on this book, allowing viewers to read that Maddie suffered hearing loss and vocal paralysis after contracting bacterial meningitis at 13 years old. Traditional closed captioning or subtitles are thus avoided in favour of less intrusive forms of expositional text that are integrated within the plot.
While hearing characters, such as her neighbour and sister, use SimCom (simultaneous communication or sign supported speech) to communicate with her, Maddie signs in silence. Because the filmmakers have elected not to provide captions for her signs in these moments, a—typically non-ASL speaking—hearing audience will inevitably experience disruptions in comprehension and Maddie’s conversations can therefore only be partially understood. This allows for an interesting role reversal for viewers. As Katherine A. Jankowski (32) points out, deaf and hard of hearing audiences have long expressed dissatisfaction with accessing the spoken word on television and film due to a lack of closed captioning. Despite the increasing technological ease of captioning digital media in the 21st century, this barrier to accessibility continues to be an ongoing issue (Ellis and Kent). The hearing community do not share this frustrating background—television programs that include ASL are captioned to ensure hearing viewers can follow the story (see for example Beth Haller’s article on Switched at Birth in this special issue). Hush therefore inverts this dynamic by presenting ASL without captions. Whereas silence is used to draw hearing viewers into Maddie’s experience, her periodic use of ASL pushes them out again. This creates a push–pull dynamic, whereby the hearing audience identify with Maddie and empathise with the losses associated with being deaf and mute, but also realise that, as a result, she has developed additional skills that are beyond their ken.
It is worth noting at this point that Maddie is not the first Final Girl with a disability. In the 1967 thriller Wait until Dark, for instance, Audrey Hepburn plays Susy Hendrix, a blind woman trapped in her home by three crooks. Martin F. Norden suggests that this film represented a “step forward” in cinematic representations of disability because its heroine is not simply an innocent victim, but “tough, resilient, and resourceful in her fight against the criminals who have misrepresented themselves to her and have broken into her apartment” (228). Susy’s blindness, at first presented as a source of vulnerability and frustration, becomes her strength in the film’s climax. Bashing out all the lights in the apartment, she forces the men to fight on her terms, in darkness, where she holds the upper hand. In a classic example of Final Girl tenacity, Susy stabs the last of them to death before help arrives. Maddie likewise uses her disability as a tactical advantage.
An enhanced sense of touch allows her to detect the killer when he sneaks up behind her as she feels the lightest flutter upon the hairs of her neck. She also wields a blaring fire alarm as a weapon, deafening and disorienting her attacker, causing him to drop his knife.
The similarities between these films are not coincidental. During an interview, director Mike Flanagan (who co-wrote Hush with wife Siegel) stated that they were directly informed by Wait until Dark. When asked about the choice to make Maddie’s character deaf, he explained that “it kind of happened because Kate and I were out to dinner and we were talking about movies we liked. One of the ones that we stumbled on that we both really liked was Wait Until Dark” (cited in Thurman). In the earlier film, director Terence Young used darkness to blind the audience—at times the screen is completely black and viewers must listen carefully to work out what is happening. Likewise, Flanagan and Siegel use silence to effectively deafen the audience at crucial moments. The viewers are therefore forced to experience the action as the heroines do.
You’re Gonna Die Screaming But You Won’t Be Heard
Horror films often depend upon sound design for impact—the most mundane visuals can be made frightening by the addition of a particular noise, effect, or tune. Therefore, in the context of the slasher genre, one of the most distinctive aspects of Hush is the absence of the Final Girl’s vocalisation. A mute heroine is deprived of the most basic expressive tool in the horror handbook—a good scream. “What really won me over,” comments one reviewer, “was the fact that this particular ‘final girl’ isn’t physically able to whinge or scream when in pain–something that really isn’t the norm in slasher/home invasion movies” (Gorman). Yet silence also plays an important part in this genre: “when the wind stops or the footfalls cease, death is near” (Whittington 183). Indeed, Hush’s tagline is “silence can be killer.”
The arrival of the killer triggers a deep kind of silence in this particular film, because alternative captions, text, and other communicative techniques (including ASL) cease to be used or useful when the man begins terrorising Maddie. This is not entirely surprising, as the abject failure of technology is a familiar trope in slasher films. As Clover explains, “the emotional terrain of the slasher film is pretechnological” (Her Body, Himself, 198). In Hush, however, the focus on text in this context is notable. There is a sense that written modes of communication are unreliable when it counts. The killer steals her phone, and cuts electricity and Internet access to the house. She attempts to use the neighbours’ Wi-Fi via her laptop, but does not know the password. Quick-thinking Maddie even scrawls backwards messages on her windows, “WON’T TELL. DIDN’T SEE FACE,” she writes in lipstick, “BOYFRIEND COMING HOME.” In response, the killer simply removes his mask. “You’ve seen it now,” he says. They both know there is no boyfriend. The written word has shifted from being central to Maddie’s life, to largely irrelevant. Text cannot save her. It is only by using other strategies (and senses) that Maddie empowers herself to survive.
Maddie’s struggles to communicate and take control are integral to the film’s unfolding narrative, and co-writer Siegel notes this was a conscious theme: “A lot of this movie is … a metaphor for feeling unheard.
It’s a movie about asserting yourself and of course as a female writer I brought a lot to that.” In their reflection on the limits of both verbal and written communication, the writers of Hush owe a debt to another source of inspiration—Joss Whedon’s Buffy the Vampire Slayer television series. Season four, episode ten, also called Hush, was first aired on 14 December 1999 and features a critically acclaimed storyline in which the characters all lose their ability to speak. Voices from all over Sunnydale are stolen by monstrous fairytale figures called The Gentlemen, who use the silence to cut fresh hearts from living victims. Their appearance is heralded by a morbid rhyme:
Can’t even shout, can’t even cry
The Gentlemen are coming by.
Looking in windows, knocking on doors,
They need to take seven and they might take yours.
Can’t call to mom, can’t say a word,
You’re gonna die screaming but you won’t be heard.
The theme of being “unheard” is clearly felt in this episode. Buffy and co attempt a variety of methods to compensate for their lost voices, such as hanging message boards around their necks, using basic text-to-voice computer software, and drawing on overhead projector slides. These tools essentially provide the captions for a story unfolding in silence, as no subtitles are provided. As it turns out, in many ways the friends’ non-verbal communication is more effective than their spoken words. Patrick Shade argues that the episode:
celebrates the limits and virtues of both the nonverbal and the verbal. … We tend to be most readily aware of verbal means … but “Hush” stresses that we are embodied creatures whose communication consists in more than the spoken word. It reminds us that we have multiple resources we regularly employ in communicating.
In a similar way, the film Hush emphasises alternative modes of expression through the device of the mute Final Girl, who must use all of her sensory and intellectual resources to survive. The evening begins with Maddie at leisure, unable to decide how to end her fictional novel. By the finale she is clarity incarnate. She assesses each real-life scene proactively and “writes” the end of the film on her own terms, showing that there is only one way to survive the night—she must fight.
Deaf Gain
In his discussion of disability and cinema, Norden explains that the majority of films position disabled people as outsiders and “others” because “filmmakers photograph and edit their work to reflect an able-bodied point of view” (1). The very apparatus of mainstream film, he argues, is designed to embody able-bodied experiences and encourage audience identification with able-bodied characters. He argues this bias results in disabled characters positioned as “objects of spectacle” to be pitied, feared or scorned by viewers. In Hush, however, the audience is consistently encouraged to identify with Maddie. As she fights for her life in the final scenes, sound fades away and the camera assumes a first-person perspective. The man is above, choking her on the floor, and we look up at him through her eyes. As Maddie’s groping hand finds a corkscrew and jabs the spike into his neck, we watch his death through her eyes too. The film thus assists viewers to apprehend Maddie’s strength intimately, rather than framing her as a spectacle or distanced “other” to be pitied.
Importantly, it is this very core of perceived vulnerability, yet ultimate strength, that gives Maddie the edge over her attacker in the end.
In this way, Maddie’s disabilities are not solely represented as a space of limitation or difference, but as a potential wellspring of power. Hence the film supports, to some degree, the move to seeing deafness as gain, rather than loss:
Deafness has long been viewed as a hearing loss—an absence, a void, a lack. It is virtually impossible to think of deafness without thinking of loss. And yet Deaf people do not often consider their lives to be defined by loss. Rather, there is something present in the lives of Deaf people, something full and complete. (Bauman and Murray, 3)
As Bauman and Murray explain, the shift from “hearing loss” to “deaf gain” involves focusing on what is advantageous and unique about the deaf experience. They use the example of the Swiss national snowboarding team, who hired a deaf coach to boost their performance. The coach noticed they were depending too much on sound and used earplugs to teach a multi-sensory approach, “the earplugs forced them to learn to depend on the feel of the snow beneath their boards [and] the snowboarder’s performance improved markedly” (6). This idea that removing sound strengthens other senses is a thread that runs throughout Hush. For example, it is the loss of hearing and speech that are credited with inspiring Maddie’s successful writing career and innovative literary “voice”.
Lennard J. Davis warns that framing people as heroic or empowered as a result of their disabilities can feed counterproductive stereotypes and perpetuate oppressive systems. “Privileging the inherent powers of the deaf or the blind is a form of patronizing,” he argues, because it traps such individuals within the concept of innate difference (106). Disparities between able and disabled people are easier to justify when disabled characters are presented as intrinsically “special” or “noble,” as this suggests inevitable divergence, rather than structural inequality. While this is something to keep in mind, Hush skirts the issue by presenting Maddie as a flawed, realistic character. She does not possess superpowers; she makes mistakes and gets injured. In short, she is a fallible human using what resources she has to the best of her abilities. As such, she represents a holistic vision of a disabled heroine rather than an overly glorified stereotype.
Conclusion
Hush is a film about the limits of text, the gaps where language is impossible or insufficient, and the struggle to be heard as a woman with disabilities. It is a film about the difficulties surrounding both verbal and written communication, and our dependence upon them. The absence of closed captions or subtitles, combined with the use of alternative “captioning”—in the form of instant messaging, for instance—grounds the narrative in lived space, rather than providing easy extra-textual solutions. It also poses a challenge to a hearing audience, to cross the border of “otherness” and identify with a deaf heroine.
Returning to the discussion of the Final Girl characterisation, Clover argues that this is a gendered device combining both traditionally feminine and masculine characteristics. The fluidity of the Final Girl is constant, “even during that final struggle she is now weak and now strong, now flees the killer and now charges him, now stabs and is stabbed, now cries out in fear and now shouts in anger” (Her Body, Himself, 221). Men viewing slasher films identify with the Final Girl’s “masculine” traits, and in the process find themselves looking through the eyes of a woman.
In using a deaf character, Hush suggests that an evolution of this dynamic might also occur along the dis/abled boundary line. Maddie is a powerful survivor who shifts between weak and strong, frightened and fierce, but also between disabled and able. This portrayal encourages the audience to identify with her empowered traits and in the process look through the eyes of a disabled woman. Therefore, while slashers—and horror films in general—are not traditionally associated with progressive representations of disabilities, this evolution of the Final Girl may provide a fruitful topic of both research and filmmaking in the future.
References
Bauman, Dirksen, and Joseph J. Murray. “Reframing: From Hearing Loss to Deaf Gain.” Trans. Fallon Brizendine and Emily Schenker. Deaf Studies Digital Journal 1 (2009): 1–10. <http://dsdj.gallaudet.edu/assets/section/section2/entry19/DSDJ_entry19.pdf>.
Clover, Carol J. Men, Women, and Chain Saws: Gender in the Modern Horror Film. New Jersey: Princeton UP, 1992.
———. “Her Body, Himself: Gender in the Slasher Film.” Representations 20 (1987): 187–228.
Davis, Lennard J. Enforcing Normalcy: Disability, Deafness, and the Body. London: Verso, 1995.
Ellis, Katie, and Mike Kent. Disability and New Media. New York: Routledge, 2011.
Gorman, H. “Hush: Film Review.” Scream Horror Magazine (2016). <http://www.screamhorrormag.com/hush-film-review/>.
Jankowski, Katherine A. Deaf Empowerment: Emergence, Struggle, and Rhetoric. Washington: Gallaudet UP, 1997.
Laird, E.E. “Updating the Final Girl Theory.” Medium (2016). <https://medium.com/@TheFilmJournal/updating-the-final-girl-theory-b37ec0b1acf4>.
Norden, M.F. Cinema of Isolation: A History of Physical Disability in the Movies. New Jersey: Rutgers UP, 1994.
Shade, Patrick. “Screaming to Be Heard: Community and Communication in ‘Hush’.” Slayage 6.1 (2006). <http://www.whedonstudies.tv/uploads/2/6/2/8/26288593/shade_slayage_6.1.pdf>.
Sheppard, D. “Hush: Revolutionising the Final Girl.” Eyes on Screen (2016). <https://eyesonscreen.wordpress.com/2016/06/08/hush-revolutionising-the-final-girl/>.
Thurman, T. “‘Hush’ Director Mike Flanagan and Actress Kate Siegel on Their New Thriller!” Interview. Bloody Disgusting (2016). <http://bloody-disgusting.com/interviews/3384092/interview-hush-mike-flanagan-kate-siegel/>.
Whittington, W. “Horror Sound Design.” A Companion to the Horror Film. Ed. Harry M. Benshoff. Oxford: John Wiley & Sons, 2014: 168–185.
APA, Harvard, Vancouver, ISO, and other styles
43

Caines, Rebecca, Rachelle Viader Knowles, and Judy Anderson. "QR Codes and Traditional Beadwork: Augmented Communities Improvising Together." M/C Journal 16, no. 6 (November 7, 2013). http://dx.doi.org/10.5204/mcj.734.

Full text
Abstract:
Images 1-6: Photographs by Rachelle Viader Knowles (2012)
This article discusses the cross-cultural, augmented artwork Parallel Worlds, Intersecting Moments (2012) by Rachelle Viader Knowles and Judy Anderson, which premiered at the First Nations University of Canada Gallery in Regina on 2 March 2012, as part of a group exhibition entitled Critical Faculties. The work consists of two elements: wall pieces with black and white Quick Response (QR) codes created using traditional beading and framed within red Stroud cloth; and a series of videos, accessible via scanning the beaded QR codes. The videos feature Aboriginal and non-Aboriginal people from Saskatchewan, Canada telling stories about their own personal experiences with new technologies. A QR code is a matrix barcode made up of black square modules on a white square in a grid pattern that is optically machine-readable. Performance artist and scholar Rebecca Caines was invited by the artists to participate in the work as a subject in one of the videos. She attended the opening and observed how audiences improvised and interacted with the work. Caines then went on to initiate this collaborative writing project. Like the artwork it analyzes, this writing documents a series of curated experiences and conversations. This article includes excerpts of artist statements, descriptions of the artists’ process and audience observation, and new sections of collaborative critical writing, woven together to explore the different augmented elements of the artwork and the results of this augmentation. These conversations and responses explore the cross-cultural processes that led to the work’s creation, and describe the results of the technological and social disruptions and slippages that occurred in the development phase and in the gallery as observers and artists improvised with the augmentation technology, and with each other. The article includes detail on the augmented art practices of storytelling, augmented reality (AR), and traditional beading that collided and mutated during this project, exploring the tension and opportunity inherent in the human impulse to augment.
Storytelling through Augmented Art Practices: The Creation of the Work
JUDY ANDERSON: I am a Plains Cree artist from the Gordon’s First Nation, which is located in Saskatchewan, Canada. As a Professor of Indian Fine Arts at the First Nations University of Canada, I research and continue to learn about traditional art making using traditional materials, creating primarily beaded pieces such as medicine bags and drum sticks. Of particular interest to me, however, is how such traditional practices manifest in contemporary Aboriginal art. In this regard I have been greatly influenced by my colleague and friend, artist Ruth Cuthand, and specifically her Trading series, which reframed my thinking about beadwork (Art Placement), and later by the work of artists like Nadia Myre and KC Adams (Myre; KC Adams). Cuthand’s incredibly successful series taught me that beadwork does not only beautify and “augment” our world, but also has the power to bring to the forefront important issues regarding Aboriginal people.
As a result, I began to work on my own ideas on how to create beadworks that spoke to both traditional and contemporary thoughts.
RACHELLE VIADER KNOWLES: At the time we started developing this project, we were both working in leadership roles in our respective departments: Judy as Coordinator of Indian Fine Arts at First Nations University, and myself as Head of Visual Arts at the University of Regina. We began discussing ways that we could create more interconnection between our faculty members and students. At the centre of both our practices was a dialogic method of back and forth negotiation and compromise.
JA: Rachelle had the idea that we should bead QR codes and make videos for the upcoming First Nations and University of Regina joint faculty exhibition. Over the 2011 Christmas holiday we visited each other’s homes, beaded together, and found out about each other’s lives by telling stories of the things we’ve experienced. I felt it was very important that our QR codes were not beaded in the exact same manner; Rachelle built up hers through a series of straight lines, whereas mine was beaded with a circle around the square QR code, which reflected the importance of the circle in my Cree belief system. It was important for me to show that even though we, Aboriginal and non-Aboriginal people, have similar experiences, we often have a different approach or way of thinking about similar things. I also suggested we frame the black and white beaded QR codes with bright red Stroud cloth, a heavy wool cloth originating in the UK that has been used in North America as trade cloth since the 1680s, and has become a significant part of First Nations fabric traditions.
Since we were approaching this piece as a cross-cultural one, I chose the number seven for the number of stories we would create because it is a sacred number in my own Plains Cree spiritual teachings. As such, we brought together seven pairs of people, including ourselves. The participants were drawn from family and friends from reserves and communities around Saskatchewan, including the city of Regina, as well as colleagues and students from the two university campuses. There were a number of different age ranges and socioeconomic backgrounds represented. We came together to tell stories about our experiences with technology, a common cross-cultural experience that seemed appropriate to the work.
RVK: As the process of making the beadworks unfolded, however, what became apparent to me was the sheer number of hours it takes to create a piece of “augmentation” through beading, and the deeply social nature of the activity. We also worked together on the videos for the AR part of the artwork. Each participant in the videos was asked to write a short text about some aspect of their relationship to technology and communications. We took the short stories, arranged them into pairs, and used them to write short scripts. We then invited each pair to perform the scripts together on camera in my studio. The stories were really broad ranging. My own was a reflection of the profound discomfort of finding a blog where a man I was dating was publishing the story of our relationship as it unfolded. Other stories covered the loss of no longer being able to play the computer games from teenage years, first encounters with new technologies and social networks, secret admirers, and crank calls to emergency services.
The storytelling and dialogue between us as we shared our practices became an important but unseen layer of this “dialogical” work (Kester).
REBECCA CAINES: I came along to Rachelle’s studio at the university to be a participant in a video for the piece. My co-performer was a young woman called Nova Lee. We laughed and chatted and talked and sat knee-to-knee together to film our stories about technology, both of us focusing on different types of Internet relationships. We were asked to read one line of our story at a time, interweaving together our poem of experience. Afterwards I asked her where her name was from. She told me it was from a song. She found the song on YouTube on Rachelle’s computer in the studio and played it for us. Here is a sample of the lyrics:
I told my daddy I’d found a girl
Who meant the world to me
And tomorrow I’d ask the Indian chief
For the hand of Nova Lee
Dad’s trembling lips spoke softly
As he told me of my life twangs then he said I could never take
This maiden for my wife
Son, the white man and Indians were fighting when you were born
And a brave called Yellow Sun scalped my little boy
So I stole you to get even for what he’d done
Though you’re a full-blooded Indian, son I love you as much as my own little fellow that’s dead
And, son, Nova Lee is your sister
And that’s why I’ve always said
Son, don’t go near the Indians
Please stay away
Son, don’t go near the Indians
Please do what I say
— Rex Allen. “Don’t Go Near the Indians.” 1962.
Judy explained to Rachelle and me that this was a common history of displacement in Canada, people taken away, falling in love with their relatives without knowing, perhaps sensing a connection, always longing for a home (Campbell). I thought, “What a weight for this young woman to bear, this name, this history.” Other participants also learnt about each other this way through the sharing of stories. Many had come to Canada from other places, each with different cultural and colonial resonances. Through these moments of working together, new understandings formed that deeply affected the participants. In this way, layers of storytelling form the heart of this work.
JA: Storytelling holds an incredibly special place in Aboriginal people’s lives; through stories we learned the laws, rules, and regulations that governed our behaviour as individuals, within our family, our communities, and our nations. These stories included histories (personal and communal), sacred teachings, the way the world used to be, creation stories, medicine stories, stories regarding the seasons and animals, and stories that defined our relationship with the environment, etc. The stories we asked for not only showed that we as Aboriginal and non-Aboriginal people have the same experiences, but also work in the way that a traditional story would. For example, Rachelle’s story taught a good lesson about how it is important to learn about the individual you are dating—had she not, her whole life could have been laid out to any who may have come across that man’s blog. My story spoke to the need to look up and observe what is around you instead of being engrossed in your own little world, because you don’t know who could be lifting your information. They all showed a common interest in sharing information, and laughing at mistakes and life lessons.
Augmented Storytelling and Augmented Reality
RC: This work relies on the augmented reality (AR) qualities of the QR code.
Pavlik and Bridges suggest AR, even through relatively limited tools like a QR code, can have a significant impact on storytelling practices: “AR enriches an individual’s experience with the real world … Stories are put in a local context and act as a supplement to a citizen’s direct experience with the world” (Pavlik and Bridges 21). Their research shows that AR technologies like QR codes bring the story to life in a three-dimensional and interactive form that allows the user a level of participation impossible in traditional, analogue media. They emphasize the different viewing possible in AR storytelling as:
The new media storytelling model is nonlinear. The storyteller conceptualizes the audience member not as a consumer of the story engaged in a third-person narrative, but rather as a participant engaged in a first-person narrative. The storyteller invites the participant to explore the story in a variety of ways, perhaps beginning in the middle, moving across time, or space, or by topic. (Pavlik and Bridges 22)
In their case studies, Pavlik and Bridges show AR has the “potential to become a viable storytelling format with a diverse range of options that engage citizens through sight, sound, or haptic experiences… to produce participatory, immersive, and community-based stories” (Pavlik and Bridges 39). The personal stories in this artwork were remediated in a number of different ways. They were written down, then separated into one-line fragments, interwoven with our partners, and re-read again and again for the camera, before being edited and processed. Marked by the artists clearly as “Aboriginal” and “non-Aboriginal” and placed alongside works featuring traditional beading, these stories were marked and re-inscribed by complex and fragmented histories of indigenous and non-indigenous relations in Canada. This history was emphasized as the QR codes were also physically located in the First Nations University of Canada, a unique indigenous space.
To view this artwork in its entirety, therefore, two camera-enabled and internet-capable mobile devices were required to be used simultaneously. Due to the way they were accessed and played back through augmented reality technologies, stories in the gallery were experienced in nonlinear fashions, started part way through, left before completion, or not in sync with the partner they were designed to work with. The audience experimented with the video content, stopping and starting it to produce new combinations of words and images. This experience was also affected by chance, as the video files online were on a cycle: after a set period of time, the scan would suddenly produce a new story. These augmented stories were recreated and reshaped by participants in dialogue with the space, and with each other.
Augmented Stories and Improvised Communities
RC: In her 1997 study of the reception of new media art in galleries, Beryl Graham surveys the types of audience interaction common to new media art practices like AR art. She “reveals patterns of use of interactive artworks including the relation of use-time to gender, aspects of intimidation, and social interaction.” In particular, she observes “a high frequency of collective use of artworks, even when the artworks are designed to be used by one person” (Graham 2).
What Graham describes as “collective” and “social,” I see as a type of improvisation engaging with difference, differences between audience members, and differences between human participants and the alien nature of sophisticated, interactive technologies. Improvisation “embodies real-time creative decision-making, risk-taking, and collaboration” (Heble). In the improvisatory act, participants engage in active listening in order to work with different voices, experiences, and practices, but share a common focus in the creative endeavour. Notions such as “the unexpected” or “the mistake” are constantly reconfigured into productive material. However, as leading improvisation studies scholar Ajay Heble suggests, “improvisation must be considered not simply as a musical or creative form, but as a complex social phenomenon that mediates transcultural inter-artistic exchanges that produce new conceptions of identity, community, history, and the body” (Heble). I watched at the opening as audience members in Parallel Worlds, Intersecting Moments paired up, successfully or unsuccessfully attempted to scan the code and download the video, and physically wrapped themselves around their partner (often a stranger) in order to hear the quiet audio in the loud gallery. The audience began to help each other through the process, to improvise together. The QR code was not always a familiar or comfortable object. The audience often had to install a QR code reader application onto their own device first, and then proceed to try to get the reader to work. Underfunded university Wi-Fi connections dropped, Apple ID logins failed, devices stalled. There were sudden loud cries when somebody successfully scanned their half of the work, and then rushes and scrambles as small groups of people attempted to sync their videos to start at the same time. The louder the gallery got, the closer the pairs had to stand to each other to hear the video through the device’s tiny speakers. Many people looked over someone else’s shoulder without their knowledge. Sometimes people were too close for comfort and behavior was negotiated and adapted. Sometimes, the pairs gave up trying; sometimes they borrowed each other’s devices, sometimes their phone or tablet was incompatible. Difference created new improvisations, or introduced sudden stops or diversions in the activities taking place. The theme of the work was strengthened every time an improvised negotiation took place, every time the technology faltered or succeeded, every time a digital or physical interaction was attempted. Through the combination of augmented bead practices used in an innovative way, and augmented technology with new audiences, new types of improvisatory responses could take place.
Initially I found it difficult not to simplify and stereotype the processes taking place, to read it as a metaphor of the differing access to resources and training in Aboriginal and non-Aboriginal communities, a clear example of the ways technology-use marks wealth and status. As I moved through the space, caught up in dialogic, improvisatory encounters, cross-cultural experiences broke down, but did not completely erase, these initial markers of difference. Instead, layers of interaction and information began to be placed over the Aboriginal and non-Aboriginal identities in the gallery. My own assumptions were placed under pressure as I interacted with the artists and the other participants in the space.
My identity as a relative newcomer to Saskatchewan was slowly augmented by the stories and experiences I shared and heard, and the audience members shifted back and forth between being experts in the aspects of the stories and technologies that were familiar, and asking for help to translate and activate the stories and processes that were alien.
Augmented Art Practices
JA: There is an old saying, “if it doesn’t move, bead it.” I think that this desire to augment with the decorative is handed down through traditional thoughts and beliefs regarding clothing. Once nomadic, we did not accumulate many goods; as a result, the goods we did keep were beautified through artistic practices including quilling and eventually beadwork (painting too). And our clothing was thought of as spiritual because it did the important act of protecting us from the elements; therefore, it was thought of as sacred. To beautify the clothing was to honour your spirit while at the same time it honoured the animal that had given its life to protect you (Berlo and Phillips). I think that this belief naturally grew to include any item; after all, there is nothing like an object or piece of clothing that is beaded well—no one can resist it. There is, however, a belief that humans should not try to mimic perfection, which is reserved for the Creator, and in many cases a beader will deliberately put a bead out of place.
RC: When new media produces unexpected results, or as Rachelle says, when pixels “go out of place”, it can be seen as a sign that humans are (deliberately or accidentally) failing to use the digital technology in the way it was intended. In Parallel Worlds, Intersecting Moments the theme of cross-cultural encounters and technological communication was only enhanced by these moments of displacement and slippage and the improvisatory responses that took place. The artists could not predict the degree of slippage that would occur, but from their catalogue texts and the conversations above, it is clear that collective negotiation was a desired outcome. By creating a QR code-based artwork that utilized augmented art practices to create new types of storytelling, the artists allowed augmented identities to develop, slip, falter, and be reconfigured. Through the dialogic art practices of traditional beading and participatory video work, Anderson and Knowles began to build new modes of communication and knowledge sharing. I believe there could be productive relationships to be further explored between what Judy calls the First Nations “desire to bead” whilst acknowledging human fallibility, and the ways Rachelle aims to technologically augment conversation and storytelling through contemporary AR and video practices despite, or perhaps because of, the possibility of risk and disruptions when bodies and code interact. What kind of trust and reciprocity becomes possible across cultural divides when this can be acknowledged as a common human quality? How could beads and/or pixels being “out of place” expose fault lines and opportunities in these kinds of cross-cultural knowledge transfer? As Judy suggested in our conversations, such work requires active engagement from the audience in the process that does not always occur. “In those instances, does the piece fail or people fail the piece?
I'm not sure.” In crossing back and forth between these different types of augmentation impulses, and by creating improvisatory, dialogic encounters in the gallery, these artists began the tentative, complex, and vital process of cultural exchange, and invited participants and audience to take this step with them and to work “across traditional and contemporary modes of production” to “use the language and process of art to speak, listen, teach and learn” (Knowles and Anderson).
References
Adams, K.C. “Cyborg Hybrid \'cy·borg 'hi·brid\ n.” KC Adams, n.d. 16 Nov. 2013 ‹http://www.kcadams.net/art/arttotal.html›.
Allen, Rex. “Don’t Go Near the Indians.” Rex Allen Sings and Tells Tales of the Golden West. Mercury, 1962. LP and CD.
Anderson, Judy, and Rachelle Viader Knowles. Parallel Worlds, Intersecting Moments. First Nations University of Canada Gallery; Slate Gallery, Regina, Saskatchewan, 2012.
Art Placement. “Ruth Cuthand.” Artists. Art Placement, n.d. 16 Nov. 2013 ‹http://www.artplacement.com/gallery/artists.php›.
Berlo, Janet Catherine, and Ruth B. Phillips. Native North American Art. Oxford: Oxford University Press, 1998.
Campbell, Maria. Stories of the Road Allowance People. Penticton, B.C.: Theytus Books, 1995.
Critical Faculties. Regina: University of Regina and First Nations University of Canada, 2012.
Graham, Beryl C.E. “A Study of Audience Relationships with Interactive Computer-Based Visual Artworks in Gallery Settings, through Observation, Art Practice, and Curation.” Dissertation. University of Sunderland, 1997.
Heble, Ajay. “About ICASP.” Improvisation, Community, and Social Practice. University of Guelph; Social Sciences and Humanities Research Council of Canada, n.d. 16 Nov. 2011 ‹http://www.improvcommunity.ca/›.
Kester, Grant. Conversation Pieces: Community and Communication in Modern Art. Berkeley: University of California Press, 2004.
Knowles, Rachelle Viader. Rachelle Viader Knowles, n.d. 16 Nov. 2013 ‹http://uregina.ca/rvk›.
Myre, Nadia. Nadia Myre. 16 Nov. 2013 ‹http://nadiamyre.com/NadiaMyre/home.html›.
Pavlik, John G., and Frank Bridges. “The Emergence of Augmented Reality (AR) as a Storytelling Medium in Journalism.” Journalism & Communication Monographs 15.4 (2013): 4-59.
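Because the access mechanism described in this abstract rests on a QR code resolving to an online video, a minimal sketch of that round trip may help readers unfamiliar with the format. The snippet below is only an illustration under stated assumptions, not the artists' own production pipeline, which the article does not document: it assumes the third-party Python packages qrcode, Pillow, and pyzbar, and the video URL is a hypothetical placeholder. It encodes a URL as the kind of black-and-white module matrix the artists rendered in beads, then decodes it again as a visitor's scanner app would.

# Illustrative sketch only: encode a (hypothetical) video URL as a QR code and
# decode it again, mirroring the scan-to-video access described in the abstract.
# Assumes: pip install qrcode[pil] pyzbar  (pyzbar also needs the system zbar library).
import qrcode
from PIL import Image
from pyzbar.pyzbar import decode

VIDEO_URL = "https://example.org/parallel-worlds/story-01.mp4"  # hypothetical placeholder

# Encode: each black module in the resulting matrix corresponds to one square
# that a beader could render in black beads on a white ground.
qr_image = qrcode.make(VIDEO_URL)
qr_image.save("story-01-qr.png")

# Decode: recover the URL the way a phone's QR scanner application would.
results = decode(Image.open("story-01-qr.png"))
print(results[0].data.decode("utf-8"))  # prints the original video URL

In the gallery, the decoding step was performed by whatever scanner application each visitor had installed, which is why the work depended so heavily on the visitors' own devices and network connections.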
APA, Harvard, Vancouver, ISO, and other styles
