Journal articles on the topic 'Automatic; Video; Interpretation'

Consult the top 50 journal articles for your research on the topic 'Automatic; Video; Interpretation.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Lavee, G., E. Rivlin, and M. Rudzsky. "Understanding Video Events: A Survey of Methods for Automatic Interpretation of Semantic Occurrences in Video." IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews) 39, no. 5 (September 2009): 489–504. http://dx.doi.org/10.1109/tsmcc.2009.2023380.

2

Withers, C. S. "Orthogonal Functions and Zernike Polynomials—A Random Variable Interpretation." ANZIAM Journal 50, no. 3 (January 2009): 435–444. http://dx.doi.org/10.1017/s1446181109000169.

Abstract:
There are advantages in viewing orthogonal functions as functions generated by a random variable from a basis set of functions. Let Y be a random variable distributed uniformly on [0,1]. We give two ways of generating the Zernike radial polynomials with parameter l, {Z^l_{l+2n}(x), n ≥ 0}. The first uses the standard basis {x^n, n ≥ 0} and the random variable Y^{1/(l+1)}. The second uses the nonstandard basis {x^{l+2n}, n ≥ 0} and the random variable Y^{1/2}. Zernike polynomials are important in the removal of lens aberrations, in characterizing video images with a small number of numbers, and in automatic aircraft identification.
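To make the second construction concrete (our illustration, not from the paper): if Y is uniform on [0,1], then X = Y^{1/2} has density 2x on [0,1], so orthogonalizing the basis {x^{l+2n}} in the sense of E[f(X)g(X)] recovers, up to the constant factor 2, the classical orthogonality of the Zernike radial polynomials:

```latex
% X = Y^{1/2}: P(X <= x) = P(Y <= x^2) = x^2, so X has density f_X(x) = 2x on [0,1].
% Hence E[f(X) g(X)] = \int_0^1 f(x) g(x)\, 2x \, dx, matching the Zernike weight w(x) = x:
\int_0^1 R^m_n(x)\, R^m_{n'}(x)\, x \,\mathrm{d}x = \frac{\delta_{n n'}}{2(n+1)}
% In the abstract's notation, Z^l_{l+2n} corresponds to R^m_k with m = l and radial degree k = l + 2n.
```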
3

Guo, Pengyu, Shaowen Ding, Hongliang Zhang, and Xiaohu Zhang. "A Real-Time Optical Tracking and Measurement Processing System for Flying Targets." Scientific World Journal 2014 (2014): 1–11. http://dx.doi.org/10.1155/2014/976590.

Abstract:
Optical tracking and measurement of flying targets is unlike close-range photography under a controllable observation environment: the targets' high maneuverability and long cruising range impose extreme conditions such as large changes in target appearance. This paper first designs and realizes a distributed image interpretation and measurement processing system that achieves centralized resource management, simultaneous multisite interpretation, and adaptive selection of estimation algorithms; it then proposes a real-time interpretation method that combines automatic foreground detection, online target tracking, multiple-feature localization, and human guidance. An experiment on semisynthetic video evaluates the performance and efficiency of the method. The system can be used in aerospace tests for target analysis, including dynamic parameters, transient states, and optical physics characteristics, with security control.
4

Morozov, A. A., O. S. Sushkova, I. A. Kershner, and A. F. Polupanov. "Development of a Method of Terahertz Intelligent Video Surveillance Based on the Semantic Fusion of Terahertz and 3D Video Images." Information Technology and Nanotechnology, no. 2391 (2019): 134–43. http://dx.doi.org/10.18287/1613-0073-2019-2391-134-143.

Abstract:
Terahertz video surveillance opens up unique new opportunities in the field of security in public places, as it allows hidden weapons and other dangerous items to be detected and their use prevented. Although the first generation of terahertz video surveillance systems has already been created and is available on the security systems market, it has not yet found wide application. The main reason is that existing methods for analyzing terahertz images are not capable of providing covert, fully automatic recognition of weapons and other dangerous objects and can only be used under the control of a specially trained operator. As a result, terahertz video surveillance appears more expensive and less efficient than the standard approach based on organizing security perimeters and manually inspecting visitors. This paper considers the problem of developing a method for the automatic analysis of terahertz video images. As a basis for this method, it proposes the semantic fusion of video images obtained using different physical principles: the semantic content of one video image is used to control the processing and analysis of another. For example, information about the 3D coordinates of a person's body, arms, and legs can be used to analyze and properly interpret the color areas observed in a terahertz video image. Special means of object-oriented logic programming are developed to implement the semantic fusion of the video data, including special built-in classes of the Actor Prolog logic language for the acquisition, processing, and analysis of video data in the visible, infrared, and terahertz ranges as well as 3D video data.
5

Gericke, Lutz, Raja Gumienny, and Christoph Meinel. "CollaborateCom Special Issue: Analyzing Distributed Whiteboard Interactions." International Journal of Cooperative Information Systems 21, no. 03 (September 2012): 199–220. http://dx.doi.org/10.1142/s021884301241002x.

Abstract:
We present the digital whiteboard system Tele-Board, which automatically captures all interactions made on the all-digital whiteboard and thus offers possibilities for fast interpretation of usage characteristics. Analyzing teamwork at whiteboards is a time-consuming and error-prone process if manual interpretation techniques are applied. In a case study, we demonstrate how to conduct and analyze whiteboard experiments with the help of our system. The study investigates the role of video compared to an audio-only connection for distributed work settings. With the simplified analysis of communication data, we can show that the video teams were more active than the audio teams and that the distribution of whiteboard interaction between team members was more balanced. In this way, an automatic analysis can not only support manual observations and codings but also give insights that cannot be achieved with other systems. Beyond the overall view of one session focusing on key figures, it is also possible to find out more about the internal structure of a session.
6

Tneb, Rainer, Andreas Seidl, Guido Hansen, and C. Pruett. "3-D Body Scanning - Systems, Methods and Applications for Automatic Interpretation of 3D Surface Anthropometrical Data." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 44, no. 38 (July 2000): 844–47. http://dx.doi.org/10.1177/154193120004403844.

Abstract:
Since the beginning of the development of the 3-D digital human and ergonomics tool RAMSIS in 1988, appropriate measurement systems have been developed in parallel. New integrated approaches and methods for human body measurement have been investigated and developed. TECMATH has developed the VITUS Pro and VITUS Smart 3-D Full Body Laser Scanner family for high precision, and adapted a 2-D video camera-based system that is simple to use and inexpensive. In the past three years, novel applications for mass customization have been developed specifically for the clothing industry. More than 120 systems (3-D and 2-D) have been installed in research environments, clothing shops, army facilities, and automobile manufacturers in the past two years. These organizations require measurement systems, methods, and analysis techniques that ensure reliable and precise information about human body dimensions.
7

Brás, Luís M. R., Elsa F. Gomes, Margarida M. M. Ribeiro, and M. M. L. Guimarães. "Drop Distribution Determination in a Liquid-Liquid Dispersion by Image Processing." International Journal of Chemical Engineering 2009 (2009): 1–6. http://dx.doi.org/10.1155/2009/746439.

Abstract:
This paper presents the implementation of an algorithm for the automatic identification of drops of different sizes in monochromatic digitized frames of a liquid-liquid chemical process. These image frames were obtained in our laboratory, using a nonintrusive process, with a digital video camera, a microscope, and an illumination setup, from a dispersion of toluene in water within a transparent mixing vessel. In this implementation, we propose a two-phase approach, using a Hough transform that automatically identifies drops in images of the chemical process. This work is a promising starting point toward automatic drop classification with good results. Our algorithm for the analysis and interpretation of digitized images will be used to calculate particle size and shape distributions for modelling liquid-liquid systems.
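For orientation only, a minimal sketch of circular-Hough drop detection with OpenCV; the paper's two-phase algorithm is not public here, so the file name and all parameter values below are illustrative assumptions.

```python
# Hedged sketch: circular-Hough drop detection with OpenCV. The file name and
# all parameter values are illustrative, not the authors' settings.
import cv2
import numpy as np

frame = cv2.imread("dispersion_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical frame
blurred = cv2.medianBlur(frame, 5)  # suppress sensor noise before gradient voting

# HOUGH_GRADIENT accumulates votes along edge gradients; param2 is the
# accumulator threshold (lower -> more, possibly spurious, circles).
circles = cv2.HoughCircles(
    blurred, cv2.HOUGH_GRADIENT, dp=1.2, minDist=10,
    param1=100, param2=30, minRadius=3, maxRadius=60,
)

if circles is not None:
    radii_px = circles[0, :, 2]  # each detection is (x, y, radius)
    print("drops detected:", len(radii_px))
    print("mean radius (px):", float(np.mean(radii_px)))
```

Binning the recovered radii then gives the drop size distribution the abstract refers to.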
8

Smolyakov, P. N. "Liability of Legal Entities for Offenses Recorded by Special Technical Devices." Actual Problems of Russian Law 1, no. 12 (January 20, 2020): 11–16. http://dx.doi.org/10.17803/1994-1471.2019.109.12.011-016.

Abstract:
The article is devoted to the exemption of legal entities from liability for administrative offenses recorded by special technical devices operating in automatic mode with photo, film, and video recording functions, or by means of photo, film, and video recording. It is noted that the existing regulation in the Administrative Code of the Russian Federation, as interpreted by the highest court and other courts, makes such liability ephemeral, allowing it to be arbitrarily shifted, for example, onto natural persons, i.e., the drivers of vehicles belonging to legal entities. This situation allows legal entities with a large number of commercial vehicles throughout the country to easily avoid paying large administrative fines, which negatively affects treasury revenues and encourages further illegal behavior by their drivers on the roads. The author proposes to discuss the state of legislation and law enforcement on this issue.
9

Kang, Hyung W., and Sung Yong Shin. "Creating Walk-Through Images from a Video Sequence of a Dynamic Scene." Presence: Teleoperators and Virtual Environments 13, no. 6 (December 2004): 638–55. http://dx.doi.org/10.1162/1054746043280556.

Abstract:
Tour into the picture (TIP), proposed by Horry et al. (Horry, Anjyo, & Arai, 1997, ACM SIGGRAPH '97 Conference Proceedings, 225–232), is a method for generating a sequence of walk-through images from a single reference image. By navigating a 3D scene model constructed from the image, TIP provides convincing 3D effects. This paper presents a comprehensive scheme for creating walk-through images from a video sequence by generalizing the idea of TIP. To address various problems in dealing with a video sequence rather than a single image, the proposed scheme is designed to have the following features: First, it incorporates a new modeling scheme based on a vanishing circle identified in the video, assuming that the input video contains a negligible amount of motion parallax effects and that dynamic objects move on a flat terrain. Second, we propose a novel scheme for automatic background detection from the video, based on a 4-parameter motion model and statistical background color estimation. Third, to assist the extraction of static or dynamic foreground objects from the video, we devised a semiautomatic boundary-segmentation scheme based on enhanced lane (Kang & Shin, 2002, Graphical Models, 64 (5), 282–303). The purpose of this work is to let users experience the feel of navigating into a video sequence with their own interpretation and imagination about a given scene. The proposed scheme covers various types of video films of dynamic scenes, such as sports coverage, cartoon animation, and movie films, in which objects continuously change their shapes and locations. It can also be used to produce a variety of synthetic video sequences by importing and merging dynamic foreign objects with the original video.
10

Xiu, Hongling, and Fengyun Yang. "Batch Processing of Remote Sensing Image Mosaic based on Python." International Journal of Online Engineering (iJOE) 14, no. 09 (September 30, 2018): 208. http://dx.doi.org/10.3991/ijoe.v14i09.9226.

Abstract:
In the process of remote sensing image processing, analysis, and interpretation, it is usually necessary to combine several local images into a complete image. The conventional semi-automatic stitching process is long and complicated. In this paper, a pixel-level splicing method based on the Python interface of the ArcGIS 10.1 platform introduces a programming-language approach and realizes batch mosaicking of remote sensing images. Comparison with interactive image processing software shows that this method shortens the time needed for image mosaicking and improves splicing efficiency, which facilitates later image analysis and other work while ensuring accuracy.
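As a hedged illustration of the approach described (batch mosaicking through the ArcGIS Python interface), a minimal arcpy sketch follows; the paths, raster format, and parameter choices are assumptions, not the authors' script.

```python
# Hedged sketch of batch mosaicking via the ArcGIS 10.1 Python interface (arcpy);
# paths, raster format, and parameter choices are illustrative.
import arcpy

arcpy.env.workspace = r"C:\data\scenes"      # hypothetical folder of local images
rasters = arcpy.ListRasters("*", "TIF")      # gather every tile in the workspace

# Mosaic To New Raster is the standard geoprocessing tool for pixel-level merging.
arcpy.MosaicToNewRaster_management(
    input_rasters=rasters,
    output_location=r"C:\data\output",
    raster_dataset_name_with_extension="mosaic.tif",
    pixel_type="8_BIT_UNSIGNED",
    number_of_bands=3,
    mosaic_method="LAST",                    # later tiles win where images overlap
)
```

Looping such a script over folders is what turns the interactive, one-mosaic-at-a-time workflow into a batch process.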
11

Mu, Yaxin, Xiaojuan Zhang, Wupeng Xie, and Yaoxin Zheng. "Automatic Detection of Near-Surface Targets for Unmanned Aerial Vehicle (UAV) Magnetic Survey." Remote Sensing 12, no. 3 (February 1, 2020): 452. http://dx.doi.org/10.3390/rs12030452.

Abstract:
Great progress has been made in the integration of Unmanned Aerial Vehicle (UAV) magnetic measurement systems, but the interpretation of UAV magnetic data faces serious challenges. This paper presents a complete workflow for the detection of subsurface objects, such as Unexploded Ordnance (UXO), by UAV-borne magnetic survey. The elimination of the interference field generated by the drone and an improved Euler deconvolution are emphasized. The quality of UAV magnetic data is limited by the UAV interference field. A compensation method based on signal correlation is proposed to remove the UAV interference field, which lays the foundation for the subsequent interpretation of UAV magnetic data. An improved Euler deconvolution, combining YOLOv3 (You Only Look Once version 3) with Euler deconvolution, is developed to estimate the location of underground targets automatically. YOLOv3 is a deep convolutional neural network (DCNN)-based image and video detector, applied here in the context of magnetic survey for the first time to replace the traditional sliding window. The improved algorithm is better suited to large-scale UAV-borne magnetic surveys because of its simpler and faster workflow compared with the traditional sliding window (SW)-based Euler method. A field test was conducted, and the experimental results show that all procedures in the designed routine are reasonable and effective. The UAV interference field is suppressed significantly, with a root mean square error of 0.5391 nT, and the improved Euler deconvolution outperforms the SW Euler deconvolution in terms of positioning accuracy and reduction of false targets.
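For context, Euler deconvolution estimates source positions from the standard homogeneity relation (a textbook equation, not specific to this paper), where (x_0, y_0, z_0) is the source location, T the measured field, B the regional background, and N the structural index:

```latex
(x - x_0)\,\frac{\partial T}{\partial x}
+ (y - y_0)\,\frac{\partial T}{\partial y}
+ (z - z_0)\,\frac{\partial T}{\partial z}
= N\,(B - T)
```

In the workflow above, YOLOv3 replaces the sliding window that selects which regions of the magnetic map this relation is solved over.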
12

Petre, Raluca-Diana, and Titus Zaharia. "3D Model-Based Semantic Categorization of Still Image 2D Objects." International Journal of Multimedia Data Engineering and Management 2, no. 4 (October 2011): 19–37. http://dx.doi.org/10.4018/jmdem.2011100102.

Abstract:
Automatic classification and interpretation of objects present in 2D images is a key issue for various computer vision applications. In particular, for image/video indexing and retrieval applications, automatically labeling huge multimedia databases in a semantically pertinent manner still remains a challenge. This paper examines the issue of still image object categorization. The objective is to associate semantic labels with the 2D objects present in natural images. The principle of the proposed approach consists of exploiting categorized 3D model repositories to identify unknown 2D objects, based on 2D/3D matching techniques. The authors use 2D/3D shape indexing methods, where 3D models are described through a set of 2D views. Experimental results, carried out on both the MPEG-7 and Princeton 3D model databases, show recognition rates of up to 89.2%.
13

Tang, Ziyang, Xiang Liu, Hanlin Chen, Joseph Hupy, and Baijian Yang. "Deep Learning Based Wildfire Event Object Detection from 4K Aerial Images Acquired by UAS." AI 1, no. 2 (April 27, 2020): 166–79. http://dx.doi.org/10.3390/ai1020010.

Abstract:
Unmanned Aerial Systems, hereafter referred to as UAS, are of great use in hazard events such as wildfire due to their ability to provide high-resolution video imagery over areas deemed too dangerous for manned aircraft and ground crews. This aerial perspective allows for identification of ground-based hazards such as spot fires and fire lines, and for communicating this information to fire fighting crews. Current technology relies on visual interpretation of UAS imagery, with little to no computer-assisted automatic detection. With the help of big labeled data and the significant increase of computing power, deep learning has seen great successes on object detection with fixed patterns, such as people and vehicles. However, little has been done for objects, such as spot fires, with amorphous and irregular shapes. Additional challenges arise when data are collected via UAS as high-resolution aerial images or videos; an ample solution must provide reasonable accuracy with low delays. In this paper, we examined 4K (3840 × 2160) videos collected by UAS from a controlled burn and created a set of labeled video sets to be shared for public use. We introduce a coarse-to-fine framework to auto-detect wildfires that are sparse, small, and irregularly-shaped. The coarse detector adaptively selects the sub-regions that are likely to contain the objects of interest, while the fine detector passes only the details of those sub-regions, rather than the entire 4K region, for further scrutiny. The proposed two-phase learning therefore greatly reduces time overhead while maintaining high accuracy. Compared against the real-time one-stage object backbone of YOLOv3, the proposed methods improved the mean average precision (mAP) from 0.29 to 0.67, with an average inference speed of 7.44 frames per second. Limitations and future work are discussed with regard to the design and the experiment results.
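A minimal sketch of the coarse-to-fine idea (our illustration; both detector callables and the scale factor stand in for the paper's actual models and settings):

```python
# Hedged sketch of a generic coarse-to-fine detector for 4K frames; every name
# and parameter here is illustrative, not the authors' implementation.
from typing import Callable, List, Tuple
import numpy as np

Box = Tuple[int, int, int, int]  # x, y, width, height in full-resolution pixels

def coarse_to_fine(frame: np.ndarray,
                   coarse: Callable[[np.ndarray], List[Box]],
                   fine: Callable[[np.ndarray], List[Box]],
                   scale: int = 4) -> List[Box]:
    """Run the coarse detector on a downsampled frame, then run the fine
    detector only on the promoted sub-regions of the full-resolution frame."""
    small = frame[::scale, ::scale]              # cheap downsample for the coarse pass
    detections: List[Box] = []
    for (x, y, w, h) in coarse(small):
        # Map the coarse box back to full resolution and crop the original pixels.
        crop = frame[y * scale:(y + h) * scale, x * scale:(x + w) * scale]
        for (fx, fy, fw, fh) in fine(crop):      # scrutinize only this sub-region
            detections.append((x * scale + fx, y * scale + fy, fw, fh))
    return detections
```

The speedup comes from the fine detector never seeing the full 3840 × 2160 frame, only the few crops the coarse pass promotes.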
14

López, Juan Pedro, Marta Bosch-Baliarda, Carlos Alberto Martín, José Manuel Menéndez, Pilar Orero, Olga Soler, and Federico Álvarez. "Design and development of sign language questionnaires based on video and web interfaces." Artificial Intelligence for Engineering Design, Analysis and Manufacturing 33, no. 4 (November 2019): 429–41. http://dx.doi.org/10.1017/s0890060419000374.

Abstract:
Conventional tests with written information used for the evaluation of sign language (SL) comprehension introduce distortions due to the translation process. This affects the results and the conclusions drawn and, for that reason, it is necessary to design and implement same-language, interpreter-independent evaluation tools. Novel web technologies facilitate the design of web interfaces that support online, multiple-choice questionnaires, while exploiting the storage of tracking data as a source of information about user interaction. This paper proposes an online, multiple-choice sign language questionnaire based on an intuitive methodology. It helps users to complete tests and automatically generates accurate statistical results using the information and data obtained in the process. The proposed system presents SL videos and enables user interaction, fulfilling requirements that SL interpretation is not able to cover. The questionnaire feeds a remote database with the user answers and powers the automatic creation of data for analytics. Several metrics, including time elapsed, are used to assess the usability of the SL questionnaire, defining the goals of the predictive models. These predictions are based on machine learning models, with the demographic data of the user as features for estimating the usability of the system. This questionnaire reduces costs and time in terms of interpreter dedication, as well as widening the amount of data collected while employing the user's native language. The validity of this tool was demonstrated in two different use cases.
15

Han, Dongyeob, Suk Bae Lee, Mihwa Song, and Jun Sang Cho. "Change Detection in Unmanned Aerial Vehicle Images for Progress Monitoring of Road Construction." Buildings 11, no. 4 (April 2, 2021): 150. http://dx.doi.org/10.3390/buildings11040150.

Abstract:
Currently, unmanned aerial vehicles are increasingly being used in various construction projects such as housing developments, road construction, and bridge maintenance. If a drone is used at a road construction site, elevation information and orthoimages can be generated to acquire the construction status quantitatively. However, the detection of detailed changes in the site owing to construction depends on visual video interpretation. This study develops a method for automatic detection of the construction area using multitemporal images and deep learning. First, a deep learning model was trained using images of the changing area as reference. Second, we obtained an effective application method by applying various parameters to the deep learning process. Applying the time-series images of a construction site to the selected deep learning model enabled more effective identification of the changed areas than existing pixel-based change detection. The proposed method is expected to be very helpful in construction management by aiding the development of smart construction technology.
16

Gerdes, Martin, Frode Gallefoss, and Rune Werner Fensli. "The EU project “United4Health”: Results and experiences from automatic health status assessment in a Norwegian telemedicine trial system." Journal of Telemedicine and Telecare 25, no. 1 (October 10, 2017): 46–53. http://dx.doi.org/10.1177/1357633x17735558.

Abstract:
Introduction: Patients with chronic obstructive pulmonary disease require help in daily life situations to increase their individual perception of security, especially under worsened medical conditions. Unnecessary hospital (re-)admissions and home visits by doctors or nurses should be avoided. This study evaluates the results from a two-year telemedicine field trial of automatic health status assessment based on remote monitoring and analysis of long time series of vital signs data from patients at home over periods of weeks or months. Methods: After discharge from hospital treatment for acute exacerbations, 94 patients were recruited for follow-up by the trial system. The system supported daily measurements of pulse and transdermal peripheral capillary oxygen saturation at patients' homes and a symptom-specific questionnaire, and provided nurses trained to use telemedicine ("telenurses") with an automatically generated health status overview of all monitored patients. A colour code (green/yellow/red) indicated whether the patient was stable or had a notable deterioration, while red alerts highlighted those in most urgent need of follow-up. The telenurses could manually overwrite the status level based on the patient's condition observed through video consultation. Results: Health status evaluations in 4970 telemonitoring datasets were assessed retrospectively. The automatic health status determination (subgroup of 33 patients) showed green status on 46% of the days during a one-month monitoring period, yellow status on 28%, and red status on 19% (no data were reported on 7% of the days). The telenurses manually downrated approximately 10% of the red or yellow alerts. Discussion: The evaluation of the defined real-time health status assessment algorithms, which involve static rules with personally adapted elements, shows their limitations in adapting long-term home monitoring to an adequate interpretation of day-to-day changes in the patient's condition. Thus, given the sensitivity and specificity of such algorithms, avoiding false high alerts appears challenging.
17

Vishaka Gayathri, D., Shrutee Shree, Taru Jain, and K. Sornalakshmi. "Real Time System for Human Identification and Tracking from Surveillance Videos." International Journal of Engineering & Technology 7, no. 3.12 (July 20, 2018): 244. http://dx.doi.org/10.14419/ijet.v7i3.12.16034.

Abstract:
The need for intelligent surveillance systems has been raised by security concerns. A viable system with automated methods for person identification to detect, track, and recognize persons in real time is required. Traditional detection techniques have not been able to analyze the huge amount of live video generated in real time. So, there is a necessity for live streaming video analytics, which includes processing and analyzing large-scale visual data such as images or videos to find content useful for interpretation. In this work, an automated surveillance system for real-time detection, recognition, and tracking of persons in video streams from multiple video inputs is presented. In addition, the current location of an individual can be searched with the tool bar provided. A model is proposed which uses a messaging queue to receive/transfer video feeds, and the frames in the video are analyzed using image processing modules to identify and recognize the person with respect to the training data sets. The main aim of this project is to overcome the challenges faced in integrating the open source tools that build up the system for tagging and searching people.
18

Rezaei, Behnaz, Yiorgos Christakis, Bryan Ho, Kevin Thomas, Kelley Erb, Sarah Ostadabbas, and Shyamal Patel. "Target-Specific Action Classification for Automated Assessment of Human Motor Behavior from Video." Sensors 19, no. 19 (October 1, 2019): 4266. http://dx.doi.org/10.3390/s19194266.

Abstract:
Objective monitoring and assessment of human motor behavior can improve the diagnosis and management of several medical conditions. Over the past decade, significant advances have been made in the use of wearable technology for continuously monitoring human motor behavior in free-living conditions. However, wearable technology remains ill-suited for applications which require monitoring and interpretation of complex motor behaviors (e.g., involving interactions with the environment). Recent advances in computer vision and deep learning have opened up new possibilities for extracting information from video recordings. In this paper, we present a hierarchical vision-based behavior phenotyping method for classification of basic human actions in video recordings performed using a single RGB camera. Our method addresses challenges associated with tracking multiple human actors and classification of actions in videos recorded in changing environments with different fields of view. We implement a cascaded pose tracker that uses temporal relationships between detections for short-term tracking and appearance-based tracklet fusion for long-term tracking. Furthermore, for action classification, we use pose evolution maps derived from the cascaded pose tracker as low-dimensional and interpretable representations of the movement sequences for training a convolutional neural network. The cascaded pose tracker achieves an average accuracy of 88% in tracking the target human actor in our video recordings, and the overall system achieves an average test accuracy of 84% for target-specific action classification in untrimmed video recordings.
19

Hennessy, S. J., L. B. Wong, D. B. Yeates, and I. F. Miller. "Automated measurement of ciliary beat frequency." Journal of Applied Physiology 60, no. 6 (June 1, 1986): 2109–13. http://dx.doi.org/10.1152/jappl.1986.60.6.2109.

Abstract:
Measurements of ciliary beat frequency using video images are dependent on observer interpretation. To obtain objective estimates of ciliary beat frequency from video-image sequences, a computer-based method was developed. Regions of interest of video-image sequences were selected and digitized. Variations in numerical values representing light intensity resulting from cilia beating were extracted and analyzed using autocorrelation techniques. The ciliary beat frequencies obtained for 14 in vitro experiments on ciliated cells or epithelium from the frog palate (Rana catesbeiana) over the range of frequencies 2–25 Hz correlated well with independent observer measurements (r = 0.979). The addition of such computer-based methods to video observer-based systems allows more objective and efficient determinations of ciliary beat frequency.
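A minimal sketch of the autocorrelation idea on a synthetic intensity trace (our illustration; the frame rate and the 12 Hz test signal are assumptions, not the paper's data):

```python
# Hedged sketch: estimate a beat frequency from an ROI intensity trace via
# autocorrelation. The trace here is synthetic; real traces come from the video.
import numpy as np

fps = 100.0                                    # assumed video frame rate (Hz)
t = np.arange(0, 4, 1 / fps)
trace = np.sin(2 * np.pi * 12.0 * t) + 0.3 * np.random.randn(t.size)  # "12 Hz cilia"

x = trace - trace.mean()
acf = np.correlate(x, x, mode="full")[x.size - 1:]  # one-sided autocorrelation

# Skip lags up to the first zero crossing, then take the dominant peak: its lag
# is the beat period in samples.
neg = np.where(acf < 0)[0]
start = neg[0] if neg.size else 1
lag = start + np.argmax(acf[start:])
print("estimated beat frequency: %.1f Hz" % (fps / lag))
```

The autocorrelation peak is what makes the estimate objective: no observer has to count beats by eye.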
20

Jani, Kuntesh Ketan, and Rajeev Srivastava. "A Survey on Medical Image Analysis in Capsule Endoscopy." Current Medical Imaging Formerly Current Medical Imaging Reviews 15, no. 7 (August 26, 2019): 622–36. http://dx.doi.org/10.2174/1573405614666181102152434.

Abstract:
Background and Objective: Capsule Endoscopy (CE) is a non-invasive, patient-friendly alternative to the conventional endoscopy procedure. However, CE produces a 6- to 8-hour-long video, posing a tedious challenge to a gastroenterologist for abnormality detection. The major challenges to an expert are the lengthy videos, the need for constant concentration, and the subjectivity of the abnormality. To address these challenges with high diagnostic accuracy, the design and development of automated abnormality detection systems is a must. Machine learning and computer vision techniques are devised to develop such automated systems. Methods: This study presents a review of quality research papers published in the IEEE, Scopus, and Science Direct databases, with the search criteria capsule endoscopy, engineering, and journal papers. The initial search retrieved 144 publications. After evaluating all articles, 62 publications pertaining to image analysis were selected. Results: This paper presents a rigorous review comprising all aspects of medical image analysis concerning capsule endoscopy, namely video summarization and redundant image elimination, image enhancement and interpretation, segmentation and region identification, computer-aided abnormality detection, and image and video compression. The study provides a comparative analysis of various approaches, experimental setups, performance, strengths, and limitations of the aspects stated above. Conclusion: The analyzed image analysis techniques for capsule endoscopy have not yet overcome all current challenges, mainly due to the lack of datasets and the complex nature of the gastrointestinal tract.
21

Gong, Jie, and Carlos H. Caldas. "Computer Vision-Based Video Interpretation Model for Automated Productivity Analysis of Construction Operations." Journal of Computing in Civil Engineering 24, no. 3 (May 2010): 252–63. http://dx.doi.org/10.1061/(asce)cp.1943-5487.0000027.

22

Díaz-Vilariño, L., J. Martínez-Sánchez, S. Lagüela, J. Armesto, and K. Khoshelham. "Door recognition in cluttered building interiors using imagery and lidar data." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-5 (June 6, 2014): 203–9. http://dx.doi.org/10.5194/isprsarchives-xl-5-203-2014.

Abstract:
Indoor building reconstruction is an active research topic due to the wide range of applications to which it can be applied, from architecture and furniture design to movie and video game editing, or even crime scene investigation. Among the constructive elements defining the inside of a building, doors are important entities in applications like routing and navigation, and their automated recognition is advantageous, e.g., in the case of large multi-storey buildings with many office rooms. The inherent complexity of automating the recognition process is increased by the presence of clutter and occlusions, which are difficult to avoid in indoor scenes. In this work, we present a pipeline of techniques used for the reconstruction and interpretation of building interiors using information acquired in the form of point clouds and images. The methodology goes in depth with door detection and labelling as either opened, closed, or furniture (false positive).
23

Huang, Mu-Shiang, Chi-Shiang Wang, Jung-Hsien Chiang, Ping-Yen Liu, and Wei-Chuan Tsai. "Automated Recognition of Regional Wall Motion Abnormalities Through Deep Neural Network Interpretation of Transthoracic Echocardiography." Circulation 142, no. 16 (October 20, 2020): 1510–20. http://dx.doi.org/10.1161/circulationaha.120.047530.

Abstract:
Background: Automated interpretation of echocardiography by deep neural networks could support clinical reporting and improve efficiency. Whereas previous studies have evaluated spatial relationships using still frame images, we aimed to train and test a deep neural network for video analysis by combining spatial and temporal information, to automate the recognition of left ventricular regional wall motion abnormalities. Methods: We collected a series of transthoracic echocardiography examinations performed between July 2017 and April 2018 in 2 tertiary care hospitals. Regional wall abnormalities were defined by experienced physiologists and confirmed by trained cardiologists. First, we developed a 3-dimensional convolutional neural network model for view selection ensuring stringent image quality control. Second, a U-net model segmented images to annotate the location of each left ventricular wall. Third, a final 3-dimensional convolutional neural network model evaluated echocardiographic videos from 4 standard views, before and after segmentation, and calculated a wall motion abnormality confidence level (0–1) for each segment. To evaluate model stability, we performed 5-fold cross-validation and external validation. Results: In a series of 10 638 echocardiograms, our view selection model identified 6454 (61%) examinations with sufficient image quality in all standard views. In this training set, 2740 frames were annotated to develop the segmentation model, which achieved a Dice similarity coefficient of 0.756. External validation was performed in 1756 examinations from an independent hospital. A regional wall motion abnormality was observed in 8.9% and 4.9% in the training and external validation datasets, respectively. The final model recognized regional wall motion abnormalities in the cross-validation and external validation datasets with an area under the receiver operating characteristic curve of 0.912 (95% CI, 0.896–0.928) and 0.891 (95% CI, 0.834–0.948), respectively. In the external validation dataset, the sensitivity was 81.8% (95% CI, 73.8%–88.2%), and specificity was 81.6% (95% CI, 80.4%–82.8%). Conclusions: In echocardiographic examinations of sufficient image quality, it is feasible for deep neural networks to automate the recognition of regional wall motion abnormalities using temporal and spatial information from moving images. Further investigation is required to optimize model performance and evaluate clinical applications.
24

Kolls, Brad J., and Brian E. Mace. "A practical method for determining automated EEG interpretation software performance on continuous Video-EEG monitoring data." Informatics in Medicine Unlocked 23 (2021): 100548. http://dx.doi.org/10.1016/j.imu.2021.100548.

25

Naumov, A., D. Yudin, and A. Dolzhenko. "Improving the Technology of Construction and Technical Expertise Using a Hardware and Software Complex of Automated Inspection." Bulletin of Belgorod State Technological University named after V. G. Shukhov 4, no. 4 (April 25, 2019): 61–69. http://dx.doi.org/10.34031/article_5cb824d26344e7.45899508.

Abstract:
In the construction and technical expertise of buildings, inspection is an important and crucial part of the conclusions about the technical condition of the object. It is carried out visually and instrumentally, with the results interpreted by subjective expert methods. The current performance standard for construction and technical inspection has several disadvantages: schematic data collection algorithms, simplified diagnostics, and a coarsened analysis of field research results for geometrically extended or structurally complex real estate objects. These are caused by the negative influence of human factors on the inspection process, weather conditions, inaccessibility of the object, and a low information culture of documenting and archiving survey results. This complicates dynamic monitoring of the object and significantly increases the cost and duration of repair work due to delayed diagnosis of defects and the selectivity of the scope of expert research. The proposed system is designed to improve the reliability, objectivity, and unification of the approaches and information flows associated with the construction and technical expertise of buildings and structures. It is based on photo and video recording of long or hard-to-reach surfaces of buildings and structures using unmanned aerial vehicles, with subsequent automated processing of the obtained material, its intellectual analysis, and automated recognition, classification, and quantification of detected defects, as well as innovative technology for documenting and automating rational organizational and technological decisions in the practice of technical operation and monitoring of the functional reliability of the surveyed real estate.
26

Cree, Christopher, Emily Carter, Heng Wang, Changki Mo, and John Miller. "Tracking Robot Location for Non-Destructive Evaluation of Double-Shell Tanks." Applied Sciences 10, no. 20 (October 19, 2020): 7318. http://dx.doi.org/10.3390/app10207318.

Abstract:
(1) Background: Non-destructive evaluation of double-shell nuclear-waste storage tanks at the U.S. Department of Energy’s Hanford site requires a robot to navigate a network of air slots in the confined space between primary and secondary tanks. Situational awareness, data collection, and data interpretation require continuous tracking of the robot’s location. (2) Methods: Robot location is continuously monitored using video image analysis for short distances and laser ranging for absolute location. (3) Results: The technique was demonstrated in our laboratory using a mockup of air slot and robot. (4) Conclusions: Location tracking and display provide decision support to inspectors and lay the groundwork for automated data collection.
27

Khan, Muhammad Arsalan, Wim Ectors, Tom Bellemans, Davy Janssens, and Geert Wets. "Unmanned Aerial Vehicle–Based Traffic Analysis: Methodological Framework for Automated Multivehicle Trajectory Extraction." Transportation Research Record: Journal of the Transportation Research Board 2626, no. 1 (January 2017): 25–33. http://dx.doi.org/10.3141/2626-04.

Abstract:
Unmanned aerial vehicles (UAVs), commonly referred to as drones, are one of the most dynamic and multidimensional emerging technologies of the modern era. This technology has recently found multiple potential applications within the transportation field, ranging from traffic surveillance applications to traffic network analysis. To conduct a UAV-based traffic study, extremely diligent planning and execution are required, followed by an optimal data analysis and interpretation procedure. In this study, however, the main focus was on the processing and analysis of UAV-acquired traffic footage. A detailed methodological framework for automated UAV video processing is proposed to extract the trajectories of multiple vehicles at a particular road segment. Such trajectories can be used either to extract various traffic parameters or to analyze traffic safety situations. The proposed framework, which provides comprehensive guidelines for the efficient processing and analysis of a UAV-based traffic study, comprises five components: preprocessing, stabilization, georegistration, vehicle detection and tracking, and trajectory management. Until recently, most traffic-focused UAV studies have employed either manual or semiautomatic processing techniques. In contrast, this paper presents an in-depth description of the proposed automated framework, followed by a description of a field experiment conducted in the city of Sint-Truiden, Belgium. Future research will mainly focus on extending the applications of the proposed framework in the context of UAV-based traffic monitoring and analysis.
28

Borwarnginn, Punyanuch, Worapan Kusakunniran, Parintorn Pooyoi, and Jason H. Haga. "Segmenting Snow Scene from CCTV using Deep Learning Approach." ECTI Transactions on Computer and Information Technology (ECTI-CIT) 13, no. 2 (March 13, 2020): 151–59. http://dx.doi.org/10.37936/ecti-cit.2019132.216323.

Abstract:
Recently, data from many sensors have been used in disaster monitoring, covering things such as river water levels, rainfall levels, and snowfall levels. These types of numeric data can be straightforwardly used in further analysis. In contrast, data from CCTV cameras (i.e. images and/or videos) cannot be easily interpreted for users in an automatic way. Traditionally, such data are only provided to users for visualization without any meaningful interpretation. Users must rely on their own expertise and experience to interpret such visual information. Thus, this paper proposes a CNN-based method to automatically interpret images captured from CCTV cameras, using snow scene segmentation as a case example. The CNN models are trained to work on 3 classes: snow, non-snow and non-ground. The non-ground class is explicitly learned in order to avoid confusing the models when differentiating snow pixels from non-ground pixels, e.g. sky regions. The VGG-19 with pre-trained weights is retrained using manually labeled snow, non-snow and non-ground samples. The learned models achieve up to 85% sensitivity and 97% specificity in the snow area segmentation.
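A minimal transfer-learning sketch in the spirit of the retrained VGG-19 described above (PyTorch; the three-class head comes from the abstract, everything else is illustrative):

```python
# Hedged sketch: retrain pretrained VGG-19 for the abstract's three classes.
# This is a generic transfer-learning recipe, not the authors' exact pipeline.
import torch
import torch.nn as nn
from torchvision import models

model = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
for p in model.features.parameters():
    p.requires_grad = False                   # freeze the pretrained conv features

model.classifier[6] = nn.Linear(4096, 3)      # snow / non-snow / non-ground head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.classifier[6].parameters(), lr=1e-4)

# One illustrative training step on a dummy batch of 224x224 image patches.
x = torch.randn(8, 3, 224, 224)
y = torch.randint(0, 3, (8,))
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```

Classifying patches this way and stitching the per-patch labels back together yields the scene segmentation the abstract evaluates.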
29

Mohamed, Mohamed Gomaa, and Nicolas Saunier. "Behavior Analysis Using a Multilevel Motion Pattern Learning Framework." Transportation Research Record: Journal of the Transportation Research Board 2528, no. 1 (January 2015): 116–27. http://dx.doi.org/10.3141/2528-13.

Abstract:
The increasing availability of video data, through existing traffic cameras or dedicated field data collection, and the development of computer vision techniques pave the way for the collection of massive data sets about the microscopic behavior of road users. Analysis of such data sets helps in understanding normal road user behavior and can be used for realistic prediction of motion and computation of surrogate safety indicators. A multilevel motion pattern learning framework was developed to enable automated scene interpretation, anomalous behavior detection, and surrogate safety analysis. First, points of interest (POIs) were learned on the basis of the Gaussian mixture model and the expectation maximization algorithm and then used to form activity paths (APs). Second, motion patterns, represented by trajectory prototypes, were learned from road users' trajectories in each AP by using a two-stage trajectory clustering method based on spatial and then temporal (speed) information. Finally, motion prediction relied on matching, at each instant, partial trajectories to the learned prototypes to evaluate the potential for collision by computing indicators. An intersection case study demonstrates the framework's ability in many ways: it helps reduce the computation cost by up to 90%; it cleans the trajectory data set of tracking outliers; it uses actual trajectories as prototypes without any pre- or postprocessing; and it predicts future motion realistically to compute surrogate safety indicators.
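A minimal sketch of the first stage (POI learning with a Gaussian mixture fitted by EM, here via scikit-learn; the synthetic endpoints and the component count are illustrative assumptions):

```python
# Hedged sketch: learn points of interest (POIs) from trajectory endpoints with
# a Gaussian mixture fitted by EM. Data and component count are synthetic.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Fake trajectory endpoints clustered around two entry/exit zones of a scene.
endpoints = rng.normal([[0.0, 0.0], [50.0, 40.0]], 3.0, size=(200, 2, 2)).reshape(-1, 2)

gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
gmm.fit(endpoints)                    # the EM algorithm runs inside fit()

print("POI centres:\n", gmm.means_)   # each mixture mean is a candidate POI
labels = gmm.predict(endpoints)       # assign endpoints to POIs to form activity paths
```

Pairing each trajectory's start and end POI labels is what groups trajectories into the activity paths used by the later clustering stages.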
30

McGuire, Patrick Charles, Jens Ormö, Enrique Díaz Martínez, José Antonio Rodríguez Manfredi, Javier Gómez Elvira, Helge Ritter, Markus Oesker, and Jörg Ontrup. "The Cyborg Astrobiologist: first field experience." International Journal of Astrobiology 3, no. 3 (July 2004): 189–207. http://dx.doi.org/10.1017/s147355040500220x.

Abstract:
We present results from the first geological field tests of the ‘Cyborg Astrobiologist’, which is a wearable computer and video camcorder system that we are using to test and train a computer-vision system towards having some of the autonomous decision-making capabilities of a field-geologist and field-astrobiologist. The Cyborg Astrobiologist platform has thus far been used for testing and development of the following algorithms and systems: robotic acquisition of quasi-mosaics of images; real-time image segmentation; and real-time determination of interesting points in the image mosaics. The hardware and software systems function reliably, and the computer-vision algorithms are adequate for the first field tests. In addition to the proof-of-concept aspect of these field tests, the main result of these field tests is the enumeration of those issues that we can improve in the future, including: detection and accounting for shadows caused by three-dimensional jagged edges in the outcrop; reincorporation of more sophisticated texture-analysis algorithms into the system; creation of hardware and software capabilities to control the camera's zoom lens in an intelligent manner; and, finally, development of algorithms for interpretation of complex geological scenery. Nonetheless, despite these technical inadequacies, this Cyborg Astrobiologist system, consisting of a camera-equipped wearable-computer and its computer-vision algorithms, has demonstrated its ability in finding genuinely interesting points in real-time in the geological scenery, and then gathering more information about these interest points in an automated manner.
31

Marres, Noortje. "For a situational analytics: An interpretative methodology for the study of situations in computational settings." Big Data & Society 7, no. 2 (July 2020): 205395172094957. http://dx.doi.org/10.1177/2053951720949571.

Abstract:
This article introduces an interpretative approach to the analysis of situations in computational settings called situational analytics. I outline the theoretical and methodological underpinnings of this approach, which is still under development, and show how it can be used to surface situations from large data sets derived from online platforms such as YouTube. Situational analytics extends to computationally-mediated settings a qualitative methodology developed by Adele Clarke, Situational Analysis (2005), which uses data mapping to detect heterogeneous entities in fieldwork data to determine ‘what makes a difference’ in a situation. Situational analytics scales up this methodology to analyse situations latent in computational data sets with semi-automated methods of textual and visual analysis. I discuss how this approach deviates from recent analyses of situations in computational social science, and argue that Clarke’s framework renders tractable a fundamental methodological problem that arises in this area of research: while social researchers turn to computational settings in order to analyse social life, the social processes unfolding in these environments are fundamentally affected by the computational architectures in which they occur. Situational analytics offers a way to address this problematic by making a heterogeneously composed situation – involving social, technical and media elements – the unit of computational analysis. To conclude, I show how situational analytics can be applied in a case study of YouTube videos featuring intelligent vehicles and discuss how situational analysis itself needs to be elaborated if we are to come to terms with computational transformations of the situational fabric of social life.
32

Tataris, G., N. Soulakellis, and K. Chaidas. "Multitemporal 3D Mapping of Post-Earthquake Recovery Phase with UAS: Case Study Vrisa, Lesvos, Greece." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences VI-3/W1-2020 (November 17, 2020): 123–130. http://dx.doi.org/10.5194/isprs-annals-vi-3-w1-2020-123-2020.

Abstract:
The recovery phase of an earthquake-affected settlement is a time-consuming and complex process that requires monitoring, which is now possible using UAS. The purpose of this paper is to present the methodology followed and the results obtained by exploiting UAS for rapid multitemporal 3D mapping during the recovery phase of the Vrisa traditional settlement, Lesvos island, Greece, which was heavily damaged by the earthquake (Mw = 6.3) of 12th June 2017. More analytically, three flight campaigns covering the period July 2017 – May 2020 were carried out by means of a UAS to collect high-resolution images on: i) 19th May 2019, ii) 29th September 2019, and iii) 17th May 2020. Structure from Motion (SfM) and Multi-View Stereo (MVS) methods were applied to produce: i) Digital Surface Models (DSMs), ii) 3D Point Clouds (3DPC), and iii) orthophoto-maps of Vrisa. In parallel, GIS capabilities were exploited to calculate building volumes based on: a) the DSM produced by UAS image processing, b) a DEM produced from 233 RTK measurements, and c) building footprints derived by digitizing the orthophoto-map of 25th July 2017. The methodology developed and implemented achieves extremely reliable results in a relatively easy, fast, and economically feasible way, which is confirmed with great precision by field work. By applying the above-described methodology, it was possible to monitor the recovery phase between July 2017 and May 2020, during which 302 of the 340 buildings that had been severely damaged by the earthquake were demolished. A small number of new buildings have also been rebuilt, and for a few buildings excavations for construction have just begun. An important parameter for obtaining reliable data and comparable results is the correct selection of flight parameters and maintaining them whenever data are acquired, without compromising the accuracy of the resulting photos or videos. Automating the proposed methodology can, in the future, significantly accelerate the achievement of reliable results without intermediate interpretation of orthophoto-maps.
33

Brea Castro, Millán. "Didactic methodology in professional e-sport training. An international experience in Brawl Stars (Metodología didáctica en entrenamiento profesional de e-sport. Una experiencia internacional en Brawl Stars)." Retos, no. 41 (December 31, 2020): 247–55. http://dx.doi.org/10.47197/retos.v0i41.83225.

Abstract:
The impact of eSports today is undeniable: millions of connected players, international competitions with substantial prize money, and a diversity of platforms on which to participate in or follow the competitions. It is a new system of youth socialization, channelled through video games, in which competition requires complete sports training. This article describes the methodological strategies for teaching and learning eSports in a semi-professional team. The categories analysed are: training type and schedule, tools used, and the typology of feedback carried out to automate playing behaviours. The research follows a qualitative approach. Data were collected through semi-structured interviews with members of the semi-professional QLASH eSports team and coded to simplify interpretation and the subsequent description. In conclusion, we can point out that the training methodology in eSports has similarities with classic sport, but it is not structured and does not follow specific guidelines when executed by the coaches. There are three types of training, usually lasting approximately an hour and a half per day. We cannot confirm that the didactic strategies used during training improve results during competition.
34

van der Spek, Alex, and Alix Thomas. "Neural-Net Identification of Flow Regime With Band Spectra of Flow-Generated Sound." SPE Reservoir Evaluation & Engineering 2, no. 06 (December 1, 1999): 489–98. http://dx.doi.org/10.2118/59067-pa.

Abstract:
Summary
Multiphase production log interpretation requires that the flow regime along hole in the wellbore is known. Flow regime is the cased-hole analog of lithology. Knowledge of the flow regime will help to interpret tool signals, will help to evaluate the flow rate on a per-phase basis, and will reduce post-processing load. The flow regime can be classified correctly by a neural net in up to 87% of all cases using 1/3 octave band spectra of flow-generated sound plus the pipe inclination angle. Without the inclination, an 88% correct classification can be achieved. A neural net trained on commercially available tool data (noise cuts) appears to be too sensitive to the wellbore inclination. Hence, application of automated neural net interpretation of noise logs requires a new generation of noise logging tools.
Introduction
Flow regime is for the fluid dynamicist what lithology is for the petrophysicist. Without a lithology classification, it is difficult if not impossible to quantify hydrocarbon volumes in a reservoir. Likewise, without a flow regime classification, it is hard to quantify fluid flow rates in two-phase flow in a conduit. The conventional way to classify the flow regime is by visual observation of flow in a conduit by a human observer. Although downhole video surveys are commercially available, visual observation of downhole flow is not standard practice in (horizontal well) production logging, since it requires a special wireline (optical fiber cable). Moreover, downhole video surveys can only be successful in transparent fluids, either gas wells or wells killed with clear kill fluid. In oil wells, an alternative to visual observation for classifying the flow regime is needed.
All flow regimes produce their own characteristic sounds. A trained human observer can classify the flow regime in a pipe by auditory rather than visual observations. Contrary to video surveys, sound logging services are readily available at low cost from various cased-hole wireline service providers. The traditional use of such sound logs is to pinpoint leaks in either casing or tubing strings. In addition to the sound logs recorded, the surface control panel is equipped with amplifiers and speakers that allow audible monitoring of downhole produced sounds. The sound log typically is a plot versus along-hole depth of the (uncalibrated) sound pressure level in five different frequency bands with high-pass cut-off frequencies equal to 200, 600, 1000, 2000 and 4000 Hz (noise cuts). In principle, the logging engineer, based on auditory observation of the downhole sounds, could carry out flow regime classification. This procedure, however, is impractical: it is prone to errors, it cannot be reproduced from recorded logs (the sound is not normally recorded on audio tape), and it relies on the experience of the specific engineer.
The objective of this investigation was to establish the feasibility of classifying the flow regime by a neural net. A second objective was to identify the minimum required resolution of sound band spectra in order to allow a neural net to classify the flow regime correctly in excess of, say, 85% of all cases. The figure 85% was chosen because, from the authors' experience, human beings using visual observations cannot classify the flow regime correctly in 10 to 20% of all cases. To meet these two objectives, an extensive experimental program was carried out whereby two-phase flow-generated sound was recorded as 1/3 octave spectra. Subsequently, a neural net was trained on various kinds of band spectra that could be derived from the recorded 1/3 octave spectra. Both objectives were met and it appears that a neural net can classify the flow regime correctly in up to 88% of all cases using 1/3 octave spectra of two-phase flow-generated sound.
Successful application of neural net classification of the flow regime from sound logs in the field brings several benefits to the business. First of all, it will allow the application of the correct, flow-regime-specific hydraulic model to the task of evaluating horizontal well, two-phase flow production logs. Second, it will allow a more constrained consistency check on recorded production logging data. Last but not least, it alleviates the need to predict the flow regime using hydraulic stability criteria from first principles, thereby reducing computational loads (from the authors' experience, by at least a factor of 10) and resulting in faster turnaround times.
Theory: Flow Regimes
Two-phase flow is the interacting flow of two phases, liquid, solid or gas, where the interface between the phases is influenced by their motion.1 Many different flow patterns can result from the changing form of the interface between the two phases. These patterns depend on a variety of factors, for instance the phase flow rates, the pressure, and the diameter and inclination of the pipe containing the flow in question. Flow regimes in vertical upward flow are illustrated in Fig. 1 and are described below.1
Bubble flow: A dispersion of bubbles in a continuum of liquid.
Intermittent or slug flow: The bubble diameter approaches that of the tube. The bubbles are bullet shaped. Small bubbles are suspended in the intermediate liquid cylinders.
Churn or froth flow: A highly unstable flow of an oscillatory nature, whereby the liquid near the pipe wall continuously pulses up and down.
Annular flow: A film of liquid flows on the wall of the pipe and the gas phase flows in the center.
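To make the classification stage concrete, the following is a minimal sketch of a band-spectrum flow-regime classifier in the spirit of the paper. The band count, feature layout, class labels, and the randomly generated data are illustrative assumptions, not the paper's tool data.

```python
# Hypothetical sketch: classify flow regime from 1/3-octave band spectra
# of flow-generated sound plus pipe inclination. Data are stand-ins.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

REGIMES = ["bubble", "slug", "churn", "annular"]  # vertical upward flow

rng = np.random.default_rng(0)
n_samples, n_bands = 500, 30                        # 30 bands is an assumption
X_spectra = rng.normal(size=(n_samples, n_bands))   # band levels in dB (stand-in)
inclination = rng.uniform(0, 90, size=(n_samples, 1))  # degrees
X = np.hstack([X_spectra, inclination])
y = rng.integers(0, len(REGIMES), size=n_samples)   # stand-in regime labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                  random_state=0))
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```

Dropping the inclination column from X reproduces the paper's second configuration (spectra only).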
APA, Harvard, Vancouver, ISO, and other styles
35

Cantin-Garside, Kristine, Rupa S. Valdez, Maury A. Nussbaum, Susan White, Sunwook Kim, Chung Do Kim, and Diogo M. G. Fortes. "Exploring Challenges of Monitoring Technology and Self-Injurious Behavior in Autism Spectrum Disorder." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 62, no. 1 (September 2018): 620–21. http://dx.doi.org/10.1177/1541931218621141.

Full text
Abstract:
Self-injurious behavior (SIB), such as head banging or self-hitting, is considered one of the most dangerous characteristics of autism spectrum disorder (ASD) (Mahatmya, Zobel, & Valdovinos, 2008). Clinicians traditionally rely on structured observation, which can be time-consuming and invasive. Recent technological developments in motion tracking may decrease these burdens. For example, accelerometers in smart watches can gather movement information, which could be automatically classified to detect and predict events associated with SIB using machine learning algorithms. While such systems have clear potential to objectively, accurately, and efficiently monitor and predict SIB, this potential will not be fully realized unless devices are adopted and integrated into clinics and homes. The lack of user input when designing home-based technological interventions for ASD likely contributes to the fact that technology has been rarely, if at all, implemented. In ongoing work, we included stakeholders before the design was complete, and embraced a user-centered perspective by evaluating user needs and translating them into system requirements (Karsh, Weinger, Abbott, & Wears, 2010). To this end, we evaluated stakeholder perspectives regarding monitoring technology for SIB in children with ASD. Sixteen parents (age 31-62, M = 45.1 ± 8.1 years) with children (age 6-26, M = 14.1 ± 6.7 years) with ASD and SIB were engaged in individual or group interviews to assess needs and challenges associated with SIB. Interviews with broad and open-ended questions were conducted to allow for response variability that may decrease in larger groups. Questions spanned several aspects of SIB and its management, as well as current and projected technology use. Parents discussed perceived benefits and challenges of different technologies, such as smart watches and video cameras, as related to tracking movement associated with SIB. Data from the first six interviews influenced a second version of interview questions to reflect participant responses. Qualitative content analysis was used to organize the responses into seven main themes surrounding experiences of SIB and technology: (1) triggers, (2) emotional responses, (3) SIB characteristics, (4) management strategies, (5) caregiver impact, (6) child impact, and (7) preferred sensory stimuli (Graneheim & Lundman, 2004). Data were cross-coded with two underlying themes of (8) uncertainty and (9) state of experience. Critical to preserving the original interview content, categories and themes were derived directly from the data rather than from predetermined topics (Hsieh & Shannon, 2005). The derived themes were related to the needs and challenges of SIB, and they were then interpreted to determine design considerations for monitoring methods. Parents described changes in SIB, and they often associated these changes with either child-specific variables (e.g., maturity, medical concerns) or environment-specific variables (e.g., time, new triggers). The variety of triggers and behaviors and the high likelihood of these parameters changing require adaptive monitoring technology capable of learning new behavioral patterns. Tracking systems should be customizable to accommodate the strong presence of variability (Cabibihan, Javed, Aldosari, Frazier, & Elbashir, 2017) and to support patient and contextual variability, which is an opportunity for human factors research through the patient work lens (Valdez, Holden, Novak, & Veinot, 2014).
Participants also expressed a shared deficit in resources, referring to both a lack of available technology and information. Monitoring system design should therefore employ affordable, accessible technology while empowering caregivers to access interpretable data. Whether devices are embedded in the environment or attached to a child, parents prefer minimizing the required input because of their already high levels of stress, discussed within the caregiver impact theme. Parents mentioned that their typical schedules afforded limited time for data collection, which indicates the designed system should require a limited number of quick interactions. Automated and manual options (Valdez et al., 2014) may address both the need to reduce workload, a factor affecting patient work (Holden, Valdez, Schubert, Thompson, & Hundt, 2017), and the need to increase control when monitoring SIB. The findings from this study and the resulting design implications provide a foundation for future technology development. It is expected that early-stage user involvement will encourage acceptance of this monitoring technology (Panchanathan & McDaniel, 2015; Veryzer & Borja de Mozota, 2005). Users will continue to participate throughout the design process. Careful consideration of the user may lead to accepted and adopted health technology with both efficiency and accuracy in detecting SIB. Results from this study highlight the importance of parent consideration in the health technology space for children with disabilities, particularly when parents participate in management methods. Further, this research contributes to an underexplored domain of qualitative human factors applied to disability and design. Future work could employ human factors approaches, such as contextual inquiries (Marcu et al., 2013) reflecting the patient work framework, to evaluate child and parent needs within the home setting. This research was supported by a National Science Foundation Graduate Research Fellowship (to the first author) and a 4-VA Collaborative Research Grant (to RSV). However, neither agency had any involvement in data analysis or interpretation.
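The abstract only gestures at the detection pipeline (smartwatch accelerometry classified by machine learning), so the following is a minimal sketch of that idea under stated assumptions: the sampling rate, window length, hand-rolled features, and all data and labels are hypothetical, not from this study.

```python
# Minimal sketch: classify smartwatch accelerometer windows as SIB vs.
# non-SIB movement. Sampling rate, window size, and data are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

FS, WIN = 50, 100   # 50 Hz sampling, 2-second windows (assumed)

def window_features(acc):
    """acc: (WIN, 3) array of x/y/z acceleration -> simple feature vector."""
    mag = np.linalg.norm(acc, axis=1)          # acceleration magnitude
    return np.array([mag.mean(), mag.std(), mag.max(),
                     np.abs(np.diff(mag)).mean()])  # crude jerk proxy

rng = np.random.default_rng(1)
windows = rng.normal(size=(400, WIN, 3))       # stand-in recordings
labels = rng.integers(0, 2, size=400)          # 1 = SIB episode (stand-in)
X = np.array([window_features(w) for w in windows])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print(clf.predict(X[:5]))
```

An adaptive system of the kind the parents' feedback calls for would additionally retrain or recalibrate this classifier as new behavioral patterns appear.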
APA, Harvard, Vancouver, ISO, and other styles
36

Veluchamy, S., L. R. Karlmarx, and K. Michael Mahesh. "Detection and Localization of Abnormalities in Surveillance Video Using Timerider-Based Neural Network." Computer Journal, March 19, 2021. http://dx.doi.org/10.1093/comjnl/bxab002.

Full text
Abstract:
Automatic anomaly detection in surveillance videos is a trending research domain: it assures effective detection of anomalies and reduces the time consumed by manual interpretation methods, without requiring domain knowledge about the anomalous object. Accordingly, this research work proposes an effective anomaly detection approach, named TimeRide Neural Network (TimeRideNN), built by modifying the standard RideNN using the Taylor series such that an extra group of riders, named timeriders, is included in the standard rider optimization algorithm. Initially, faces in the videos are detected using the Viola-Jones algorithm. Then, object tracking is performed using the knocker- and holoentropy-based Bhattacharyya distance, which is a modification of the Bhattacharyya distance using the knocker and holoentropy. After that, features of the objects, such as object-level features and speed-level features, are extracted and fed to the proposed TimeRideNN classifier, which declares the anomalous objects in the video. The experimentation of the proposed anomaly detection method is done using the UCSD dataset (Ped1), the subway dataset and the QMUL Junction dataset, and the analysis is performed based on accuracy, sensitivity and specificity. The proposed TimeRideNN classifier obtains an accuracy, sensitivity and specificity of 0.9724, 0.9894 and 0.9691, respectively.
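Two of the named building blocks are standard and easy to sketch with stock OpenCV: Viola-Jones face detection and the (unmodified) Bhattacharyya distance between appearance histograms as a tracking cue. The paper's knocker/holoentropy modification is not public, so it is not reproduced here; this is a plain-vanilla sketch, not the authors' method.

```python
# Sketch of two components named in the abstract, using stock OpenCV:
# Viola-Jones detection and the standard Bhattacharyya histogram distance.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_boxes(frame):
    """Return (x, y, w, h) boxes for faces in a BGR frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

def bhattacharyya(patch_a, patch_b):
    """Smaller distance = more similar appearance (same-track candidate)."""
    hists = []
    for patch in (patch_a, patch_b):
        hsv = cv2.cvtColor(patch, cv2.COLOR_BGR2HSV)
        h = cv2.calcHist([hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
        cv2.normalize(h, h, 0, 1, cv2.NORM_MINMAX)
        hists.append(h)
    return cv2.compareHist(hists[0], hists[1], cv2.HISTCMP_BHATTACHARYYA)
```

In a tracker, a detected box in frame t is matched to the frame t+1 candidate whose patch minimizes this distance.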
APA, Harvard, Vancouver, ISO, and other styles
37

Ahouandjinou, Arnaud, Eugène C. Ezin, and Cina Motamed. "Temporal and Hierarchical HMM for Activity Recognition Applied in Visual Medical Monitoring using a Multi-Camera System." Revue Africaine de la Recherche en Informatique et Mathématiques Appliquées Volume 21 - 2015 - Special... (August 30, 2015). http://dx.doi.org/10.46298/arima.1999.

Full text
Abstract:
We address in this paper an improved medical monitoring system based on automatic recognition of human activity in Intensive Care Units (ICUs). A multi-camera vision system is proposed to collect video sequences for automatic analysis and interpretation of the scene. The latter is performed using a Hidden Markov Model (HMM) with explicit state duration, combined with management of the hierarchical structure of the scenarios. Significant experiments are carried out with the proposed monitoring system in a hospital's cardiology section in order to demonstrate the need for computer-aided patient supervision to help clinicians in the decision-making process. The temporal and hierarchical HMM handles the state duration explicitly and thus provides a suitable solution for the automatic recognition of temporal events. Finally, the Temporal HMM (THMM)-based approach improves scenario recognition performance compared to standard HMM models.
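The key modelling idea, an HMM whose state dwell times are modelled explicitly rather than through self-transitions, can be illustrated with a tiny generative sketch. The states, duration means, and transition matrix below are invented for illustration; they are not the paper's ICU scenarios.

```python
# Minimal generative sketch of an HMM with explicit state durations
# (a hidden semi-Markov model), the core idea behind the THMM.
import numpy as np

STATES = ["rest", "nurse_visit", "patient_agitation"]   # illustrative
TRANS = np.array([[0.0, 0.7, 0.3],      # no self-transitions: dwell time
                  [0.8, 0.0, 0.2],      # is drawn explicitly instead
                  [0.6, 0.4, 0.0]])
DUR_MEAN = [20.0, 8.0, 4.0]             # mean dwell time per state (frames)

def sample(n_steps, rng=np.random.default_rng(0)):
    seq, s = [], 0
    while len(seq) < n_steps:
        d = 1 + rng.poisson(DUR_MEAN[s])          # explicit duration draw
        seq.extend([s] * d)
        s = rng.choice(len(STATES), p=TRANS[s])   # jump to a new state
    return seq[:n_steps]

print([STATES[s] for s in sample(30)])
```

A standard HMM implicitly imposes geometric dwell times via self-transition probabilities; drawing the duration explicitly, as above, is what lets the model capture realistic event lengths.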
APA, Harvard, Vancouver, ISO, and other styles
38

Ortiz de Gortari, Angelica B., and Jayne Gackenbach. "Game Transfer Phenomena and Problematic Interactive Media Use: Dispositional and Media Habit Factors." Frontiers in Psychology 12 (April 22, 2021). http://dx.doi.org/10.3389/fpsyg.2021.585547.

Full text
Abstract:
The study of the effects of interactive media has mainly focused on dysregulated behaviors, the conceptualization of which is supported by the paradigms of addiction. Research into Game Transfer Phenomena (GTP) examines the interplay between video game features, events while playing, and the manipulation of hardware, which can lead to sensory-perceptual and cognitive intrusions (e.g., hallucinations and recurrent thoughts) and self-agency transient changes (e.g., automatic behaviors) related to video games. GTP can influence the interpretation of stimuli and everyday interactions and, in contrast to gaming disorder, are relatively common and not necessarily negative. However, some players have reported feeling distress due to their GTP. This study focuses on how dispositional and interactive media habit factors are related to GTP and two forms of problematic interactive media [problematic video game playing (PVG) and problematic social media use (PSMU)]. A sample of 343 university students who played video games completed an online survey (58.7% male, 19–25 years old). Not all who had experienced GTP were identified as exhibiting PVG or PSMU, but all of those in the PVG group had experienced GTP. Overall, the profiles of the groups, including GTP (91.4%), PVG (28.5%), and PSMU (24.8%), were in accordance with previous findings. Those in the GTP and the PVG groups were characterized by being male, being highly engaged in the game (either while playing or via game-related activities), and showed preferences for game-related activities. However, while those in the GTP group were significantly more likely to be fantasy-prone, those with PVG were the ones who played most per day. Those in the PSMU group were characterized by being female and/or extroverted, frequently using social/sharing platforms, and seldom playing video games. A hierarchical binary logistic regression revealed that males were more likely to experience GTP. Increases in PVG, fantasy proneness, and neuroticism increased the odds of GTP. Future work can benefit from considering the role of GTP in gaming disorder, since intrusive thoughts, cognitive biases, and poor impulse control are pivotal in the initiation and maintenance of dysfunctional playing behaviors.
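The analysis reported is a hierarchical (blockwise) binary logistic regression. As a minimal sketch of that procedure, the block below enters predictors in two blocks and tests the improvement with a likelihood-ratio test; the variable names and data are stand-ins, not the study's dataset.

```python
# Sketch of a hierarchical binary logistic regression: predictors entered
# in blocks, improvement judged by a likelihood-ratio test. Data invented.
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(2)
n = 343
male = rng.integers(0, 2, n)        # stand-in predictors
pvg = rng.normal(size=n)            # problematic gaming score
fantasy = rng.normal(size=n)        # fantasy proneness
neuro = rng.normal(size=n)          # neuroticism
gtp = rng.integers(0, 2, n)         # outcome: 1 = experienced GTP

block1 = sm.add_constant(np.column_stack([male]))
block2 = sm.add_constant(np.column_stack([male, pvg, fantasy, neuro]))

m1 = sm.Logit(gtp, block1).fit(disp=0)
m2 = sm.Logit(gtp, block2).fit(disp=0)
lr = 2 * (m2.llf - m1.llf)          # likelihood-ratio statistic
print(f"block 2 improvement: LR={lr:.2f}, p={stats.chi2.sf(lr, df=3):.3f}")
print(np.exp(m2.params))            # odds ratios per predictor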
APA, Harvard, Vancouver, ISO, and other styles
39

Burwell, Catherine. "New(s) Readers: Multimodal Meaning-Making in AJ+ Captioned Video." M/C Journal 20, no. 3 (June 21, 2017). http://dx.doi.org/10.5204/mcj.1241.

Full text
Abstract:
Introduction

In 2013, Facebook introduced autoplay video into its newsfeed. In order not to produce sound disruptive to hearing users, videos were muted until a user clicked on them to enable audio. This move, recognised as a competitive response to the popularity of video-sharing sites like YouTube, has generated significant changes to the aesthetics, form, and modalities of online video. Many video producers have incorporated captions into their videos as a means of attracting and maintaining user attention. Of course, captions are not simply a replacement or translation of sound, but have instead added new layers of meaning and changed the way stories are told through video.

In this paper, I ask how the use of captions has altered the communication of messages conveyed through online video. In particular, I consider the role captions have played in news reporting, as online platforms like Facebook become increasingly significant sites for the consumption of news. One of the most successful producers of online news video has been Al Jazeera Plus (AJ+). I examine two recent AJ+ news videos to consider how meaning is generated when captions are integrated into the already multimodal form of the video—their online reporting of Australian versus US healthcare systems, and the history of the Black Panther movement. I analyse interactions amongst image, sound, language, and typography and consider the role of captions in audience engagement, branding, and profit-making. Sean Zdenek notes that captions have yet to be recognised “as a significant variable in multimodal analysis, on par with image, sound and video” (xiii). Here, I attempt to pay close attention to the representational, cultural and economic shifts that occur when captions become a central component of online news reporting. I end by briefly enquiring into the implications of captions for our understanding of literacy in an age of constantly shifting media.

Multimodality in Digital Media

Jeff Bezemer and Gunther Kress define a mode as a “socially and culturally shaped resource for meaning making” (171). Modes include meaning communicated through writing, sound, image, gesture, oral language, and the use of space. Of course, all meanings are conveyed through multiple modes. A page of written text, for example, requires us to make sense through the simultaneous interpretation of words, space, colour, and font. Media such as television and film have long been understood as multimodal; however, with the appearance of digital technologies, media’s multimodality has become increasingly complex. Video games, for example, demonstrate an extraordinary interplay between image, sound, oral language, written text, and interactive gestures, while technologies such as the mobile phone combine the capacity to produce meaning through speaking, writing, and image creation.

These multiple modes are not simply layered one on top of the other, but are instead “enmeshed through the complexity of interaction, representation and communication” (Jewitt 1). The rise of multimodal media—as well as the increasing interest in understanding multimodality—occurs against the backdrop of rapid technological, cultural, political, and economic change. These shifts include media convergence, political polarisation, and increased youth activism across the globe (Herrera), developments that are deeply intertwined with uses of digital media and technology. Indeed, theorists of multimodality like Jay Lemke challenge us to go beyond formalist readings of how multiple modes work together to create meaning, and to consider multimodality “within a political economy and a cultural ecology of identities, markets and values” (140).

Video’s long history as an inexpensive and portable way to produce media has made it an especially dynamic form of multimodal media. In 1974, avant-garde video artist Nam June Paik predicted that “new forms of video … will stimulate the whole society to find more imaginative ways of telecommunication” (45). Fast forward more than 40 years, and we find that video has indeed become an imaginative and accessible form of communication. The cultural influence of video is evident in the proliferation of video genres, including remix videos, fan videos, Let’s Play videos, video blogs, live stream video, short form video, and video documentary, many of which combine semiotic resources in novel ways. The economic power of video is evident in the profitability of video sharing sites—YouTube in particular—as well as the recent appearance of video on other social media platforms such as Instagram and Facebook.

These platforms constitute significant “sites of display.” As Rodney Jones notes, sites of display are not merely the material media through which information is displayed. Rather, they are complex spaces that organise social interactions—for example, between producers and users—and shape how meaning is made. Certainly we can see the influence of sites of display by considering Facebook’s 2013 introduction of autoplay into its newsfeed, a move that forced video producers to respond with new formats. As Edson Tandoc and Julian Maitra write, news organisations have been forced to “play by Facebook’s frequently modified rules and change accordingly when the algorithms governing the social platform change” (2). AJ+ has been considered one of the media companies that have most successfully adapted to these changes, an adaptation I examine below. I begin by taking up Lemke’s challenge to consider multimodality contextually, reading AJ+ videos through the conceptual lens of the “attention economy,” a lens that highlights the profitability of attention within digital cultures. I then follow with analyses of two short AJ+ videos to show captions’ central role, not only in conveying meaning, but also in creating markets, and communicating branded identities and ideologies.

AJ+, Facebook and the New Economies of Attention

The Al Jazeera news network was founded in 1996 to cover news of the Arab world, with a declared commitment to give “voice to the voiceless.” Since that time, the network has gained global influence, yet many of its attempts to break into the American market have been unsuccessful (Youmans). In 2013, the network acquired Current TV in an effort to move into cable television. While that effort ultimately failed, Al Jazeera’s purchase of the youth-oriented Current TV nonetheless led to another, surprisingly fruitful enterprise, the development of the digital media channel Al Jazeera Plus (AJ+). AJ+ content, which is made up almost entirely of video, is directed at 18 to 35-year-olds. As William Youmans notes, AJ+ videos are informal and opinionated, and, while staying consistent with Al Jazeera’s mission to “give voice to the voiceless,” they also take an openly activist stance (114). Another distinctive feature of AJ+ videos is the way they are tailored for specific platforms. From the beginning, AJ+ has had particular success on Facebook, a success that has been recognised in popular and trade publications. A 2015 profile on AJ+ videos in Variety (Roettgers) noted that AJ+ was the ninth biggest video publisher on the social network, while a story on Journalism.co (Reid, “How AJ+ Reaches”) that same year commented on the remarkable extent to which Facebook audiences shared and interacted with AJ+ videos. These stories also note the distinctive video style that has become associated with the AJ+ brand—short, bold captions; striking images that include photos, maps, infographics, and animations; an effective opening hook; and a closing call to share the video.

AJ+ video producers were developing this unique style just as Facebook’s autoplay was being introduced into newsfeeds. Autoplay—a mechanism through which videos are played automatically, without action from a user—predates Facebook’s introduction of the feature. However, autoplay on Internet sites had already begun to raise the ire of many users before its appearance on Facebook (Oremus, “In Defense of Autoplay”). By playing video automatically, autoplay wrests control away from users, and causes particular problems for users using assistive technologies. Reporting on Facebook’s decision to introduce autoplay, Josh Constine notes that the company was looking for a way to increase advertising revenues without increasing the number of actual ads. Encouraging users to upload and share video normalises the presence of video on Facebook, and opens up the door to the eventual addition of profitable video ads. Ensuring that video plays automatically gives video producers an opportunity to capture the attention of users without the need for them to actively click to start a video. Further, ensuring that the videos can be understood when played silently means that both deaf users and users who are situationally unable to hear the audio can also consume its content in any kind of setting.

While Facebook has promoted its introduction of autoplay as a benefit to users (Oremus, “Facebook”), it is perhaps more clearly an illustration of the carefully-crafted production strategies used by digital platforms to capture, maintain, and control attention. Within digital capitalism, attention is a highly prized and scarce resource. Michael Goldhaber argues that once attention is given, it builds the potential for further attention in the future. He writes that “obtaining attention is obtaining a kind of enduring wealth, a form of wealth that puts you in a preferred position to get anything this new economy offers” (n.p.). In the case of Facebook, this offers video producers the opportunity to capture users’ attention quickly—in the time it takes them to scroll through their newsfeed. While this may equate to only a few seconds, those few seconds hold, as Goldhaber predicted, the potential to create further value and profit when videos are viewed, liked, shared, and commented on.

Interviews with AJ+ producers reveal that an understanding of the value of this attention drives the organisation’s production decisions, and shapes content, aesthetics, and modalities. They also make it clear that it is captions that are central in their efforts to engage audiences. Jigar Mehta, former head of engagement at AJ+, explains that “those first three to five seconds have become vital in grabbing the audience’s attention” (quoted in Reid, “How AJ+ Reaches”). While early videos began with the AJ+ logo, that was soon dropped in favour of a bold image and text, a decision that dramatically increased views (Reid, “How AJ+ Reaches”). Captions and titles are not only central to grabbing attention, but also to maintaining it, particularly as many audience members consume video on mobile devices without sound. Mehta tells an editor at the Nieman Journalism Lab:

we think a lot about whether a video works with the sound off. Do we have to subtitle it in order to keep the audience retention high? Do we need to use big fonts? Do we need to use color blocking in order to make words pop and make things stand out? (Mehta, qtd. in Ellis)

An AJ+ designer similarly suggests that the most important aspects of AJ+ videos are brand, aesthetic style, consistency, clarity, and legibility (Zou). While questions of brand, style, and clarity are not surprising elements to associate with online video, the matter of legibility is. And yet, in contexts where video is viewed on small, hand-held screens and sound is not an option, legibility—as it relates to the arrangement, size and colour of type—does indeed take on new importance to storytelling and sense-making.

While AJ+ producers frame the use of captions as an innovative response to Facebook’s modern algorithmic changes, it makes sense to also remember the significant histories of captioning that their videos ultimately draw upon. This lineage includes silent films of the early twentieth century, as well as the development of closed captions for deaf audiences later in that century. Just as he argues for the complexity, creativity, and transformative potential of captions themselves, Sean Zdenek also urges us to view the history of closed captioning not as a linear narrative moving inevitably towards progress, but as something far more complicated and marked by struggle, an important reminder of the fraught and human histories that are often overlooked in accounts of “new media.” Another important historical strand to consider is the centrality of the written word to digital media, and to the Internet in particular. As Carmen Lee writes, despite public anxieties and discussions over a perceived drop in time spent reading, digital media in fact “involve extensive use of the written word” (2). While this use takes myriad forms, many of these forms might be seen as connected to the production, consumption, and popularity of captions, including practices such as texting, tweeting, and adding titles and catchphrases to photos.

Captions, Capture, and Contrast in Australian vs. US Healthcare

On May 4, 2017, US President Donald Trump was scheduled to meet with Australian Prime Minister Malcolm Turnbull in New York City. Trump delayed the meeting, however, in order to await the results of a vote in the US House of Representatives to repeal the Affordable Care Act—commonly known as Obama Care. When he finally sat down with the Prime Minister later that day, Trump told him that Australia has “better health care” than the US, a statement that, in the words of a Guardian report, “triggered astonishment and glee” amongst Trump’s critics (Smith). In response to Trump’s surprising pronouncement, AJ+ produced a 1-minute video extending Trump’s initial comparison with a series of contrasts between Australian government-funded health care and American privatised health care (Facebook, “President Trump Says…”).
The video provides an excellent example of the role captions play in both generating attention and creating the unique aesthetic that is crucial to the AJ+ brand.

The opening frame of the video begins with a shot of the two leaders seated in front of the US and Australian flags, a diplomatic scene familiar to anyone who follows politics. The colours of the picture are predominantly red, white and blue. Superimposed on top of the image is a textbox containing the words “How does Australia’s healthcare compare to the US?” The question appears in white capital letters on a black background, and the box itself is heavily outlined in yellow. The white and yellow AJ+ logo appears in the upper right corner of the frame. This opening frame poses a question to the viewer, encouraging a kind of rhetorical interactivity. Through the use of colour in and around the caption, it also quickly establishes the AJ+ brand. This opening scene also draws on the Internet’s history of humorous “image macros”—exemplified by the early LOL cat memes—that create comedy through the superimposition of captions on photographic images (Shifman).

Captions continue to play a central role in meaning-making once the video plays. In the next frame, Trump is shown speaking to Turnbull. As he speaks, his words—“We have a failing healthcare”—drop onto the screen (Image 1). The captions are an exact transcription of Trump’s awkward phrase and appear centred in caps, with the words “failing healthcare” emphasised in larger, yellow font. With or without sound, these bold captions are concise, easily read on a small screen, and visually dominate the frame. The next few seconds of the video complete the sequence, as Trump tells Turnbull, “I shouldn’t say this to our great gentleman, my friend from Australia, ‘cause you have better healthcare than we do.” These words continue to appear over the image of the two men, still filling the screen. In essence, Trump’s verbal gaffe, transcribed word for word and appearing in AJ+’s characteristic white and yellow lettering, becomes the video’s hook, designed to visually call out to the Facebook user scrolling silently through their newsfeed.

Image 1: “We have a failing healthcare.”

The middle portion of the video answers the opening question, “How does Australia’s healthcare compare to the US?”. There is no verbal language in this segment—the only sound is a simple synthesised soundtrack. Instead, captions, images, and spatial design, working in close cooperation, are used to draw five comparisons. Each of these comparisons uses the same format. A title appears at the top of the screen, with the remainder of the screen divided in two. The left side is labelled Australia, the right U.S. Underneath these headings, a representative image appears, followed by two statistics, one for each country. For example, the third comparison contrasts Australian and American infant mortality rates (Image 2). The left side of the screen shows a close-up of a mother kissing a baby, with the superimposed caption “3 per 1,000 births.” On the other side of the yellow border, the American infant mortality rate is illustrated with an image of a sleeping baby superimposed with a corresponding caption, “6 per 1,000 births.” Without voiceover, captions do much of the work of communicating the national differences. They are, however, complemented and made more quickly comprehensible through the video’s spatial design and its subtly contrasting images, which help to visually organise the written content.

Image 2: “Infant mortality rate”

The final 10 seconds of the video bring sound back into the picture. We once again see and hear Trump tell Turnbull, “You have better healthcare than we do.” This image transforms into another pair of male faces—liberal American commentator Chris Hayes and US Senator Bernie Sanders—taken from an MSNBC cable television broadcast. On one side, Hayes says “They do have, they have universal healthcare.” On the other, Sanders laughs uproariously in response. The only added caption for this segment is “Hahahaha!”, the simplicity of which suggests that the video’s target audience is assumed to have a context for understanding Sanders’s laughter. Here and throughout the video, autoplay leads to a far more visual style of relating information, one in which captions—working alongside images and layout—become, in Zdenek’s words, a sort of “textual performance” (6).

The Black Panther Party and the Textual Performance of Progressive Politics

Reports on police brutality and Black Lives Matter protests have been amongst AJ+’s most widely viewed and shared videos (Reid, “Beyond Websites”). Their 2-minute video (Facebook, Black Panther) commemorating the 50th anniversary of the Black Panther Party, viewed 9.5 million times, provides background to these contemporary events. Like the comparison of American and Australian healthcare, captions shape the video’s structure. But here, rather than using contrast as a means of quick visual communication, the video is structured as a list of five significant points about the Black Panther Party. Captions are used not only to itemise and simplify—and ultimately to reduce—the party’s complex history, but also, somewhat paradoxically, to promote the news organisation’s own progressive values.

After announcing the intent and structure of the video—“5 things you should know about the Black Panther Party”—in its first 3 seconds, the video quickly sets in to describe each item in turn. The themes themselves correspond with AJ+’s own interests in policing, community, and protest, while the language used to announce each theme is characteristically concise and colloquial:

They wanted to end police brutality.
They were all about the community.
They made enemies in high places.
Women were vocal and active Panthers.
The Black Panthers’ legacy is still alive today.

Each of these themes is represented using a combination of archival black and white news footage and photographs depicting Black Panther members, marches, and events. These still and moving images are accompanied by audio recordings from party members, explaining its origins, purposes, and influences. Captions are used throughout the video both to indicate the five themes and to transcribe the recordings. As the video moves from one theme to another, the corresponding number appears in the centre of the screen to indicate the transition, and then shrinks and moves to the upper left corner of the screen as a reminder for viewers. A musical soundtrack of strings and percussion, communicating a sense of urgency, underscores the full video.

While typographic features like font size, colour, and placement were significant in communicating meaning in AJ+’s healthcare video, there is an even broader range of experimentation here. The numbers 1 to 5 that appear in the centre of the screen to announce each new theme blink and flicker like the countdown at the beginning of bygone film reels, gesturing towards the historical topic and complementing the black and white footage. For those many viewers watching the video without sound, an audio waveform above the transcribed interviews provides a visual clue that the captions are transcriptions of recorded voices. Finally, the colour green, used infrequently in AJ+ videos, is chosen to emphasise a select number of key words and phrases within the short video. Significantly, all of these words are spoken by Black Panther members. For example, captions transcribing former Panther leader Ericka Huggins speaking about the party’s slogan—“All power to the people”—highlight the words “power” and “people” with large, lime green letters that stand out against the grainy black and white photos (Image 3). The captions quite literally highlight ideas about oppression, justice, and social change that are central to an understanding of the history of the Black Panther Party, but also to the communication of the AJ+ brand.

Image 3: “All power to the people”

Conclusion

Employing distinctive combinations of word and image, AJ+ videos are produced to call out to users through the crowded semiotic spaces of social media. But they also call out to scholars to think carefully about the new kinds of literacies associated with rapidly changing digital media formats. Captioned video makes clear the need to recognise how meaning is constructed through sophisticated interpretive strategies that draw together multiple modes. While captions are certainly not new, an analysis of AJ+ videos suggests the use of novel typographical experiments that sit “midway between language and image” (Stöckl 289). Discussions of literacy need to expand to recognise this experimentation and to account for the complex interactions between the verbal and visual that get lost when written text is understood to function similarly across multiple platforms. In his interpretation of closed captioning, Zdenek provides an insightful list of the ways that captions transform meaning, including their capacity to contextualise, clarify, formalise, linearise and distill (8–9). His list signals not only the need for a deeper understanding of the role of captions, but also for a broader and more vivid vocabulary to describe multimodal meaning-making. Indeed, as Allan Luke suggests, within the complex multimodal and multilingual contexts of contemporary global societies, literacy requires that we develop and nurture “languages to talk about language” (459).

Just as importantly, an analysis of captioned video that takes into account the economic reasons for captioning also reminds us of the need for critical media literacies. AJ+ videos reveal how the commercial goals of branding, promotion, and profit-making influence the shape and presentation of news. As meaning-makers and as citizens, we require the capacity to assess how we are being addressed by news organisations that are themselves responding to the interests of economic and cultural juggernauts such as Facebook.
In schools, universities, and informal learning spaces, as well as through discourses circulated by research, media, and public policy, we might begin to generate more explicit and critical discussions of the ways that digital media—including texts that inform us and even those that exhort us towards more active forms of citizenship—simultaneously seek to manage, direct, and profit from our attention.

References

Bezemer, Jeff, and Gunther Kress. “Writing in Multimodal Texts: A Social Semiotic Account of Designs for Learning.” Written Communication 25.2 (2008): 166–195.
Constine, Josh. “Facebook Adds Automatic Subtitling for Page Videos.” TechCrunch 4 Jan. 2017. 1 May 2017 <https://techcrunch.com/2017/01/04/facebook-video-captions/>.
Ellis, Justin. “How AJ+ Embraces Facebook, Autoplay, and Comments to Make Its Videos Stand Out.” Nieman Lab 3 Aug. 2015. 28 Apr. 2017 <http://www.niemanlab.org/2015/08/how-aj-embraces-facebook-autoplay-and-comments-to-make-its-videos-stand-out/>.
Facebook. “President Trump Says…” Facebook, 2017. <https://www.facebook.com/ajplusenglish/videos/954884227986418/>.
Facebook. “Black Panther.” Facebook, 2017. <https://www.facebook.com/ajplusenglish/videos/820822028059306/>.
Goldhaber, Michael. “The Attention Economy and the Net.” First Monday 2.4 (1997). 9 June 2013 <http://firstmonday.org/article/view/519/440>.
Herrera, Linda. “Youth and Citizenship in the Digital Age: A View from Egypt.” Harvard Educational Review 82.3 (2012): 333–352.
Jewitt, Carey. “Introduction.” Routledge Handbook of Multimodal Analysis. Ed. Carey Jewitt. New York: Routledge, 2009. 1–8.
Jones, Rodney. “Technology and Sites of Display.” Routledge Handbook of Multimodal Analysis. Ed. Carey Jewitt. New York: Routledge, 2009. 114–126.
Lee, Carmen. “Micro-Blogging and Status Updates on Facebook: Texts and Practices.” Digital Discourse: Language in the New Media. Eds. Crispin Thurlow and Kristine Mroczek. Oxford Scholarship Online, 2011. DOI: 10.1093/acprof:oso/9780199795437.001.0001.
Lemke, Jay. “Multimodality, Identity, and Time.” Routledge Handbook of Multimodal Analysis. Ed. Carey Jewitt. New York: Routledge, 2009. 140–150.
Luke, Allan. “Critical Literacy in Australia: A Matter of Context and Standpoint.” Journal of Adolescent and Adult Literacy 43.5 (2000): 448–461.
Oremus, Will. “Facebook Is Eating the Media.” National Post 14 Jan. 2015. 15 June 2017 <http://news.nationalpost.com/news/facebook-is-eating-the-media-how-auto-play-videos-could-put-news-websites-out-of-business>.
———. “In Defense of Autoplay.” Slate 16 June 2015. 14 June 2017 <http://www.slate.com/articles/technology/future_tense/2015/06/autoplay_videos_facebook_twitter_are_making_them_less_annoying.html>.
Paik, Nam June. “The Video Synthesizer and Beyond.” The New Television: A Public/Private Art. Eds. Douglas Davis and Allison Simmons. Cambridge, MA: MIT Press, 1977. 45.
Reid, Alistair. “Beyond Websites: How AJ+ Is Innovating in Digital Storytelling.” Journalism.co 17 Apr. 2015. 13 Feb. 2017 <https://www.journalism.co.uk/news/beyond-websites-how-aj-is-innovating-in-digital-storytelling/s2/a564811/>.
———. “How AJ+ Reaches 600% of Its Audience on Facebook.” Journalism.co 5 Aug. 2015. 13 Feb. 2017 <https://www.journalism.co.uk/news/how-aj-reaches-600-of-its-audience-on-facebook/s2/a566014/>.
Roettgers, Janko. “How Al Jazeera’s AJ+ Became One of the Biggest Video Publishers on Facebook.” Variety 30 July 2015. 1 May 2017 <http://variety.com/2015/digital/news/how-al-jazeeras-aj-became-one-of-the-biggest-video-publishers-on-facebook-1201553333/>.
Shifman, Limor. Memes in Digital Culture. Cambridge, MA: MIT Press, 2014.
Smith, David. “Trump Says ‘Everybody’, Not Just Australia, Has Better Healthcare than US.” The Guardian 5 May 2017. 5 May 2017 <https://www.theguardian.com/us-news/2017/may/05/trump-healthcare-australia-better-malcolm-turnbull>.
Stöckl, Hartmut. “Typography: Visual Language and Multimodality.” Interactions, Images and Texts. Eds. Sigrid Norris and Carmen Daniela Maier. Amsterdam: De Gruyter, 2014. 283–293.
Tandoc, Edson, and Julian Maitra. “News Organizations’ Use of Native Videos on Facebook: Tweaking the Journalistic Field One Algorithm Change at a Time.” New Media & Society (2017). DOI: 10.1177/1461444817702398.
Youmans, William. An Unlikely Audience: Al Jazeera’s Struggle in America. New York: Oxford University Press, 2017.
Zdenek, Sean. Reading Sounds: Closed-Captioned Media and Popular Culture. Chicago: University of Chicago Press, 2015.
Zou, Yanni. “How AJ+ Applies User-Centered Design to Win Millennials.” Medium 16 Apr. 2016. 7 May 2017 <https://medium.com/aj-platforms/how-aj-applies-user-centered-design-to-win-millennials-3be803a4192c>.
APA, Harvard, Vancouver, ISO, and other styles
40

Huang, M. S., and M. R. Tsai. "4942 Automated recognition of regional wall motion abnormalities by deep neural network interpretation of echocardiography." European Heart Journal 40, Supplement_1 (October 1, 2019). http://dx.doi.org/10.1093/eurheartj/ehz746.0012.

Full text
Abstract:
Background: Deep neural network assistance in automated echocardiography interpretation, joined with final confirmation by a cardiologist, has been gradually emerging. Applications in echocardiography view classification, chamber size and myocardial mass evaluation, and detection of certain diseases have already been published. Our aim, instead of the frame-by-frame "image-level" interpretation of previous studies, is to apply deep neural networks to the analysis of temporal relationships in echocardiography ("video-level" interpretation), applied to automated recognition of regional wall motion abnormalities of the left ventricular myocardium. Methods: We collected all echocardiography performed in 2017 and preprocessed the studies into numeric arrays for matrix computation. Regional wall motion abnormalities were approved by authorized cardiologists and processed into labels indicating whether abnormalities were present in the anterior, inferior, septal, or lateral walls of the left ventricle, as the ground truth. We first developed a convolutional neural network (CNN) model for view selection, gathering parasternal long/short-axis views and apical four/two-chamber views from each exam, and used the view prediction confidence for strict image quality control. Within these images, we annotated a subset to develop a second CNN model, a U-Net, for image segmentation marking each regional wall. Finally, we developed the main three-dimensional CNN model, whose input is composed of the four echocardiography video views and whose output is the final label for motion abnormality in each wall. Results: In total, we collected 13,984 series of echocardiography and gathered the four main views with a quality confidence level above 90%, which resulted in 9,323 series for training. Within these images, we annotated 2,736 frames for the U-Net model, which achieved a segmentation Dice score of 73%. With the segmentation model included, the final three-dimensional CNN model predicts regional wall motion with an accuracy of 83%. Conclusions: Deep neural network application to regional wall motion recognition is feasible and warrants further investigation to improve performance. Acknowledgement/Funding: None
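The final stage described is a video-level ("three-dimensional") CNN with four views as input and a multi-label output over four LV walls. Below is a minimal sketch of such a model; the layer sizes, clip shape, and the convention of stacking views as input channels are assumptions for illustration, not the authors' architecture.

```python
# Minimal 3D-CNN sketch: echo views stacked as channels over time,
# multi-label logits for four LV walls. Shapes are assumptions.
import torch
import torch.nn as nn

class WallMotion3DCNN(nn.Module):
    def __init__(self, n_views=4, n_walls=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(n_views, 16, kernel_size=3, padding=1),  # views as channels
            nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(), nn.AdaptiveAvgPool3d(1))
        self.head = nn.Linear(32, n_walls)  # anterior/inferior/septal/lateral

    def forward(self, x):                   # x: (batch, views, frames, H, W)
        return self.head(self.features(x).flatten(1))  # one logit per wall

clip = torch.randn(2, 4, 16, 64, 64)            # 2 exams, 4 views, 16 frames
probs = torch.sigmoid(WallMotion3DCNN()(clip))  # multi-label probabilities
print(probs.shape)                              # torch.Size([2, 4])
```

A sigmoid (rather than softmax) head matches the task: each wall can be abnormal independently of the others.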
APA, Harvard, Vancouver, ISO, and other styles
41

Lai, Derek. "Public Video Surveillance by the State: Policy, Privacy Legislation, and the Charter." Alberta Law Review, December 30, 2015. http://dx.doi.org/10.29173/alr379.

Full text
Abstract:
This article explores the growing phenomenon of public video surveillance and how the law should protect an individual's right to privacy while providing for effective law enforcement. The author considers the positive and negative effects of surveillance and recent technological advancements that currently challenge courts, legislatures, and police forces. Canadian case studies from Kelowna and Edmonton are utilized to examine the role of federal and provincial privacy legislation, while the Supreme Court of Canada's evolving interpretation of s. 8 of the Charter is canvassed through an examination of jurisprudence involving public surveillance technology. Ultimately, the author concludes that public video surveillance is necessary but the law must control its use. Video surveillance via automated collection would resolve the "effectiveness versus privacy" policy debate by minimizing the potential for abuse.
APA, Harvard, Vancouver, ISO, and other styles
42

Di Vece, D., F. Laumer, M. Schwyzer, R. Burkholz, L. Corinzia, V. L. Cammann, R. Citro, et al. "Artificial intelligence in echocardiography diagnostics – detection of takotsubo syndrome." European Heart Journal 41, Supplement_2 (November 1, 2020). http://dx.doi.org/10.1093/ehjci/ehaa946.1221.

Full text
Abstract:
Background: Machine learning allows classifying diseases based only on raw echocardiographic imaging data and is therefore a landmark in the development of computer-assisted decision support systems in echocardiography. Purpose: The present study sought to determine the value of deep (machine) learning systems for automatic discrimination of takotsubo syndrome and acute myocardial infarction. Methods: Apical 2- and 4-chamber echocardiographic views of 110 patients with takotsubo syndrome and 110 patients with acute myocardial infarction were used in the development, training and validation of a deep learning approach, i.e. a convolutional autoencoder (CAE) for feature extraction followed by classical machine learning models for classification of the diseases. Results: The deep learning model achieved an area under the receiver operating characteristic curve (AUC) of 0.801 with an overall accuracy of 74.5% under 5-fold cross-validation evaluated on a clinically relevant dataset. In comparison, experienced cardiologists achieved AUCs in the range 0.678-0.740 and an average accuracy of 64.5% on the same dataset. Conclusions: A real-time system for fully automated interpretation of echocardiographic videos was established and trained to differentiate takotsubo syndrome from acute myocardial infarction. The framework provides insight into the algorithms' decision process for physicians and yields new and valuable information on the manifestation of disease patterns in echocardiographic data. While our system was superior to cardiologists in echocardiography-based disease classification, further studies should be conducted in a larger patient population to prove its clinical applicability. Funding Acknowledgement: Type of funding source: None
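The two-stage design (CAE features, then a classical classifier) can be sketched compactly. The architecture sizes, input resolution, and latent dimension below are assumptions, not the study's model.

```python
# Sketch of the described pipeline: a convolutional autoencoder learns
# compact features from echo frames; a classical model would then separate
# takotsubo from infarction on the latent codes. Sizes are assumptions.
import torch
import torch.nn as nn

class CAE(nn.Module):
    def __init__(self, latent=32):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(), nn.Linear(16 * 16 * 16, latent))
        self.dec = nn.Sequential(
            nn.Linear(latent, 16 * 16 * 16), nn.Unflatten(1, (16, 16, 16)),
            nn.ConvTranspose2d(16, 8, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(8, 1, 2, stride=2), nn.Sigmoid())

    def forward(self, x):
        z = self.enc(x)        # latent features for a downstream classifier
        return self.dec(z), z

frames = torch.rand(4, 1, 64, 64)   # stand-in apical-view frames
recon, z = CAE()(frames)
print(recon.shape, z.shape)         # reconstruction + 32-d feature vectors
# After training on reconstruction loss, z.detach().numpy() could feed,
# e.g., sklearn's LogisticRegression for the two-class decision.
```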
APA, Harvard, Vancouver, ISO, and other styles
43

George, Jonathan K., Cesare Soci, Mario Miscuglio, and Volker J. Sorger. "Symmetry perception with spiking neural networks." Scientific Reports 11, no. 1 (March 11, 2021). http://dx.doi.org/10.1038/s41598-021-85232-3.

Full text
Abstract:
Mirror symmetry is an abundant feature in both nature and technology. Its successful detection is critical for perception procedures based on visual stimuli and requires organizational processes. Neuromorphic computing, utilizing brain-mimicked networks, could be a technology solution providing such perceptual organization functionality, and has furthermore made tremendous advances in computing efficiency by applying a spiking model of information. Spiking models inherently maximize efficiency in noisy environments by placing the energy of the signal in a minimal time. However, many neuromorphic computing models ignore the time delay between nodes, choosing instead to approximate connections between neurons as instantaneous weighting. With this assumption, many complex time interactions of spiking neurons are lost. Here, we show that the coincidence detection property of a spiking-based feed-forward neural network enables mirror symmetry detection. Testing this algorithm, as an example, on geospatial satellite image data sets reveals how symmetry density enables automated recognition of man-made structures over vegetation. We further demonstrate that the addition of noise improves the feature detectability of an image through coincidence point generation. The ability to obtain mirror symmetry from spiking neural networks can be a powerful tool for applications in image-based rendering, computer graphics, robotics, photo interpretation, image retrieval, video analysis and annotation, and multimedia, and may help accelerate brain-machine interconnection. More importantly, it enables a technology pathway for bridging the gap between low-level incoming sensor stimuli and high-level interpretation of these inputs as recognized objects and scenes in the world.
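The core mechanism, coincidence detection over latency-coded spikes, can be illustrated with a toy sketch: brighter pixels fire earlier, and a mirror-symmetric image makes each pixel and its reflection spike at the same time. The encoding, window, and score below are illustrative assumptions, not the paper's network.

```python
# Toy sketch of spike-based mirror-symmetry detection: intensities are
# latency-coded (brighter pixel -> earlier spike) and a coincidence
# detector "fires" when a pixel and its mirror spike within a window.
import numpy as np

def latency_code(img, t_max=20.0):
    """Map intensities in [0, 1] to spike times: bright fires early."""
    return t_max * (1.0 - img)

def symmetry_density(img, window=1.0):
    t = latency_code(img)
    mirrored = t[:, ::-1]                 # spike times of mirror pixels
    coincident = np.abs(t - mirrored) < window
    return coincident.mean()              # fraction of coincident pairs

rng = np.random.default_rng(3)
half = rng.random((32, 16))
symmetric = np.hstack([half, half[:, ::-1]])   # perfectly mirrored image
asymmetric = rng.random((32, 32))
print(symmetry_density(symmetric), symmetry_density(asymmetric))
```

The symmetric image scores 1.0 while the random one scores far lower, which is the "symmetry density" signal the abstract describes for separating man-made structures from vegetation.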
APA, Harvard, Vancouver, ISO, and other styles
44

Lavanchy, Joël L., Joel Zindel, Kadir Kirtac, Isabell Twick, Enes Hosgor, Daniel Candinas, and Guido Beldi. "Automation of surgical skill assessment using a three-stage machine learning algorithm." Scientific Reports 11, no. 1 (March 4, 2021). http://dx.doi.org/10.1038/s41598-021-84295-6.

Full text
Abstract:
Surgical skills are associated with clinical outcomes. To improve surgical skills and thereby reduce adverse outcomes, continuous surgical training and feedback are required. Currently, assessment of surgical skills is a manual and time-consuming process that is prone to subjective interpretation. This study aims to automate surgical skill assessment in laparoscopic cholecystectomy videos using machine learning algorithms. To address this, a three-stage machine learning method is proposed: first, a convolutional neural network was trained to identify and localize surgical instruments. Second, motion features were extracted from the detected instrument localizations over time. Third, a linear regression model was trained on the extracted motion features to predict surgical skill. This three-stage modeling approach achieved an accuracy of 87 ± 0.2% in distinguishing good versus poor surgical skill. While the technique cannot yet reliably quantify the degree of surgical skill, it represents an important advance towards the automation of surgical skill assessment.
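Stages two and three are straightforward to sketch: per-frame instrument positions (from the stage-one detector) are summarized as motion features, which then feed a linear regression. The specific features, the frame rate, and the synthetic tracks and skill scores below are assumptions for illustration.

```python
# Sketch of stages 2-3: instrument-centroid tracks -> motion features ->
# linear regression to a skill score. Features and data are stand-ins.
import numpy as np
from sklearn.linear_model import LinearRegression

def motion_features(track, fps=25.0):
    """track: (n_frames, 2) instrument centroid positions in pixels."""
    step = np.diff(track, axis=0)
    speed = np.linalg.norm(step, axis=1) * fps
    accel = np.diff(speed) * fps
    return np.array([np.linalg.norm(step, axis=1).sum(),  # total path length
                     speed.mean(), speed.std(),
                     np.abs(accel).mean()])               # smoothness proxy

rng = np.random.default_rng(4)
tracks = rng.normal(size=(60, 500, 2)).cumsum(axis=1)  # 60 videos (stand-in)
skill = rng.uniform(1, 5, size=60)                     # e.g. rating-scale score
X = np.array([motion_features(t) for t in tracks])

reg = LinearRegression().fit(X, skill)
print(reg.predict(X[:3]))
```

Shorter, smoother instrument paths are classic correlates of good technique, which is why path length and jerk-like terms are natural features here.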
APA, Harvard, Vancouver, ISO, and other styles
45

Kainz, Bernhard, Mattias P. Heinrich, Antonios Makropoulos, Jonas Oppenheimer, Ramin Mandegaran, Shrinivasan Sankar, Christopher Deane, et al. "Non-invasive diagnosis of deep vein thrombosis from ultrasound imaging with machine learning." npj Digital Medicine 4, no. 1 (September 15, 2021). http://dx.doi.org/10.1038/s41746-021-00503-7.

Full text
Abstract:
Deep vein thrombosis (DVT) is a blood clot most commonly found in the leg, which can lead to fatal pulmonary embolism (PE). Compression ultrasound of the legs is the diagnostic gold standard, leading to a definitive diagnosis. However, many patients with possible symptoms are not found to have a DVT, resulting in long referral waiting times for patients and a large clinical burden for specialists. Thus, diagnosis at the point of care by non-specialists is desired. We collect images in a pre-clinical study and investigate a deep learning approach for the automatic interpretation of compression ultrasound images. Our method provides guidance for free-hand ultrasound and aids non-specialists in detecting DVT. We train a deep learning algorithm on ultrasound videos from 255 volunteers and evaluate on a sample size of 53 prospectively enrolled patients from an NHS DVT diagnostic clinic and 30 prospectively enrolled patients from a German DVT clinic. Algorithmic DVT diagnosis performance results in a sensitivity within a 95% CI range of (0.82, 0.94), specificity of (0.70, 0.82), a positive predictive value of (0.65, 0.89), and a negative predictive value of (0.99, 1.00) when compared to the clinical gold standard. To assess the potential benefits of this technology in healthcare we evaluate the entire clinical DVT decision algorithm and provide cost analysis when integrating our approach into diagnostic pathways for DVT. Our approach is estimated to generate a positive net monetary benefit at costs up to £72 to £175 per software-supported examination, assuming a willingness to pay of £20,000/QALY.
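The evaluation arithmetic reported (proportion estimates with 95% confidence intervals, plus a net-monetary-benefit threshold) can be reproduced in a few lines. The confusion-table counts and the QALY gain below are invented for illustration; only the £20,000/QALY willingness-to-pay figure comes from the abstract.

```python
# Sketch of the reported evaluation arithmetic: sensitivity/specificity
# with 95% Wilson intervals, plus a toy net monetary benefit (NMB).
from statsmodels.stats.proportion import proportion_confint

tp, fn, tn, fp = 45, 5, 60, 15                 # hypothetical counts
sens, spec = tp / (tp + fn), tn / (tn + fp)
sens_ci = proportion_confint(tp, tp + fn, alpha=0.05, method="wilson")
spec_ci = proportion_confint(tn, tn + fp, alpha=0.05, method="wilson")
print(f"sensitivity {sens:.2f} CI {sens_ci}")
print(f"specificity {spec:.2f} CI {spec_ci}")

# NMB = (QALYs gained x willingness to pay) - incremental cost per exam
wtp, qaly_gain, cost = 20_000.0, 0.005, 72.0   # qaly_gain is illustrative
print("net monetary benefit per exam:", qaly_gain * wtp - cost)
```

A positive NMB at a given cost per examination is the decision criterion behind the £72 to £175 range quoted above.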
APA, Harvard, Vancouver, ISO, and other styles
46

Jones, Steve. "Seeing Sound, Hearing Image." M/C Journal 2, no. 4 (June 1, 1999). http://dx.doi.org/10.5204/mcj.1763.

Full text
Abstract:
“As the old technologies become automatic and invisible, we find ourselves more concerned with fighting or embracing what’s new” —Dennis Baron, From Pencils to Pixels: The Stage of Literacy Technologies

Popular music is firmly rooted within realist practice, or what has been called the "culture of authenticity" associated with modernism. As Lawrence Grossberg notes, the acceleration of the rate of change in modern life caused, in post-war youth culture, an identity crisis or "lived contradiction" that gave rock (particularly) and popular music (generally) a peculiar position in regard to notions of authenticity. Grossberg places rock's authenticity within the "difference" it maintains from other cultural forms, and notes that its difference "can be justified aesthetically or ideologically, or in terms of the social position of the audiences, or by the economics of its production, or through the measure of its popularity or the statement of its politics" (205-6). Popular music scholars have not adequately addressed issues of authenticity and individuality. Two of the most important questions to be asked are: How is authenticity communicated in popular music? What is the site of the interpretation of authenticity? It is important to ask about sound, technology, about the attempt to understand the ideal and the image, the natural and artificial. It is these that make clear the strongest connections between popular music and contemporary culture. Popular music is a particularly appropriate site for the study of authenticity as a cultural category, for several reasons. For one thing, other media do not follow us, as aural media do, into malls, elevators, cars, planes. Nor do they wait for us, as a tape player paused and ready to play. What is important is not that music is "everywhere" but, to borrow from Vivian Sobchack, that it creates a "here" that can be transported anywhere. In fact, we are able to walk around enveloped by a personal aural environment, thanks to a Sony Walkman.1 Also, it is more difficult to shut out the aural than the visual. Closing one's ears does not entirely shut out sound. There is, additionally, the sense that sound and music are interpreted from within, that is, that they resonate through and within the body, and as such engage with one's self in a fashion that coincides with Charles Taylor's claim that the "ideal of authenticity" is an inner-directed one. It must be noted that authenticity is not, however, communicated only via music, but via text and image. Grossberg noted the "primacy of sound" in rock music, and the important link between music, visual image, and authenticity:

Visual style as conceived in rock culture is usually the stage for an outrageous and self-conscious inauthenticity... . It was here -- in its visual presentation -- that rock often most explicitly manifested both an ironic resistance to the dominant culture and its sympathies with the business of entertainment ... . The demand for live performance has always expressed the desire for the visual mark (and proof) of authenticity. (208)

But that relationship can also be reversed: Music and sound serve in some instances to provide the aural mark and proof of authenticity. Consider, for instance, the "tear" in the voice that Jensen identifies in Hank Williams's singing, and in that of Patsy Cline.
For the latter, voicing, in this sense, was particularly important, as it meant more than a singing style; it also involved matters of self-identity, as Jensen appropriately associates with the move of country music from "hometown" to "uptown" (101). Cline's move toward a more "uptown" style involved her visual image, too. At a significant turning point in her career, Faron Young noted, Cline "left that country girl look in those western outfits behind and opted for a slicker appearance in dresses and high fashion gowns" (Jensen 101).

Popular music has forged a link with visual media, and in some sense music itself has become more visual (though not necessarily less aural) the more it has engaged with industrial processes in the entertainment industry. For example, engagement with music videos and film soundtracks has made music a part of the larger convergence of mass media forms. Alongside that convergence, the use of music in visual media has come to serve as adjunct to visual symbolisation. One only need observe the increasingly commercial uses to which music is put (as in advertising, film soundtracks and music videos) to note ways in which music serves image. In the literature from a variety of disciplines, including communication, art and music, it has been argued that music videos are the visualisation of music. But in many respects the opposite is true. Music videos are the auralisation of the visual. Music serves many of the same purposes as sound does generally in visual media. One can find a strong argument for the use of sound as supplement to visual media in Silverman's and Altman's work. For Silverman, sound in cinema has largely been overlooked (pun intended) in favor of the visual image, but sound is a more effective (and perhaps necessary) element for willful suspension of disbelief. One may see this as well in the development of Dolby Surround Sound, and in increased emphasis on sound engineering among video and computer game makers, as well as the development of sub-woofers and high-fidelity speakers as computer peripherals.

Another way that sound has become more closely associated with the visual is through the ongoing evolution of marketing demands within the popular music industry that increasingly rely on visual media and force image to the front. Internet technologies, particularly the WorldWideWeb (WWW), are also evidence of a merging of the visual and aural (see Hayward). The development of low-cost desktop video equipment and WWW publishing, CD-i, CD-ROM, DVD, and other technologies, has meant that visual images continue to form part of the industrial routine of the music business. The decrease in cost of many of these technologies has also led to the adoption of such routines among individual musicians, small/independent labels, and producers seeking to mimic the resources of major labels (a practice that has become considerably easier via the Internet, as it is difficult to determine capital resources solely from a WWW site).

Yet there is another facet to the evolution of the link between the aural and visual. Sound has become more visual by way of its representation during its production (a representation, and process, that has largely been ignored in popular music studies). That representation has to do with the digitisation of sound, and the subsequent transformation sound and music can undergo after being digitised and portrayed on a computer screen.
Once digitised, sound can be made visual in any number of ways, through traditional methods like music notation, through representation as audio waveform, by way of MIDI notation, bit streams, or through representation as shapes and colors (as in recent software applications particularly for children, like Making Music by Morton Subotnick). The impetus for these representations comes from the desire for increased control over sound (see Jones, Rock Formation) and such control seems most easily accomplished by way of computers and their concomitant visual technologies (monitors, printers). To make computers useful tools for sound recording it is necessary to employ some form of visual representation for the aural, and the flexibility of modern computers allows for new modes of predominately visual representation.

Each of these connections between the aural and visual is in turn related to technology, for as audio technology develops within the entertainment industry it makes sense for synergistic development to occur with visual media technologies. Yet popular music scholars routinely analyse aural and visual media in isolation from one another. The challenge for popular music studies and music philosophy posed by visual media technologies, that they must attend to spatiality and context (both visual and aural), has not been taken up. Until such time as it is, it will be difficult, if not impossible, to engage issues of authenticity, because they will remain rootless instead of situated within the experience of music as fully sensual (in some cases even synaesthetic).

Most of the traditional judgments of authenticity among music critics and many popular music scholars involve space and time, the former in terms of the movement of music across cultures and the latter in terms of history. None rely on notions of the "situatedness" of the listener or musicmaker in a particular aural, visual and historical space. Part of the reason for the lack of such an understanding arises from the very means by which popular music is created. We have become accustomed to understanding music as manipulation of sound, and so far as most modern music production is concerned such manipulation occurs as much visually as aurally, by cutting, pasting and otherwise altering audio waveforms on a computer screen. Musicians no more record music than they record fingering; they engage in sound recording. And recording engineers and producers rely less and less on sound and more on sight to determine whether a recording conforms to the demands of digital reproduction [2]. Sound, particularly when joined with the visual, becomes a means to build and manipulate the environment, virtual and non-virtual (see Jones, "Sound").

Sound & Music

As we construct space through sound, both in terms of audio production (e.g., the use of reverberation devices in recording studios) and in terms of everyday life (e.g., perception of aural stimuli, whether by ear or vibration in the body, from points surrounding us), we centre it within experience. Sound combines the psychological and physiological. Audio engineer George Massenburg noted that in film theaters:

You couldn't utilise the full 360-degree sound space for music because there was an "exit sign" phenomena [sic]. If you had a lot of audio going on in the back, people would have a natural inclination to turn around and stare at the back of the room.
(Massenburg 79-80)

However, he went on to say, beyond observations of such reactions to multichannel sound technology, "we don't know very much". Research in psychoacoustics being used to develop virtual audio systems relies on such reactions and on a notion of human hardwiring for stimulus response (see Jones, "Sense"). But a major stumbling block toward the development of those systems is that none are able to account for individual listeners' perceptions. It is therefore important to consider the individual along with the social dimension in discussions of sound and music. For instance, the term "sound" is deployed in popular music to signify several things, all of which have to do with music or musical performance, but none of which is music. So, for instance, musical groups or performers can have a "sound", but it is distinguishable from what notes they play. Entire music scenes can have "sounds", but the music within such scenes is clearly distinct and differentiated. For the study of popular music this is a significant but often overlooked dimension. As Grossberg argues, "the authenticity of rock was measured by its sound" (207). Visually, he says, popular music is suspect and often inauthentic (sometimes purposefully so), and it is grounded in the aural. Similarly, in country music, Jensen notes that the "Nashville Sound" continually evoked conflicting definitions among fans and musicians, but that:

The music itself was the arena in and through which claims about the Nashville Sound's authenticity were played out. A certain sound (steel guitar, with fiddle) was deemed "hard" or "pure" country, in spite of its own commercial history. (84)

One should, therefore, attend to the interpretive acts associated with sound and its meaning. But why has popular music studies not engaged in systematic analysis of sound at the level of the individual as well as the social? As John Shepherd put it, "little cultural theoretical work in music is concerned with music's sounds" ("Value" 174). Why should this be a cause for concern? First, because Shepherd claims that sound is not "meaningful" in the traditional sense. Second, because it leads us to re-examine the question long set to the side in popular music studies: What is music? The structural homology, the connection between meaning and social formation, is a foundation upon which the concept of authenticity in popular music stands. Yet the ability to label a particular piece of music "good" shifts from moment to moment, and place to place. Frith understates the problem when he writes that "it is difficult ... to say how musical texts mean or represent something, and it is difficult to isolate structures of musical creation or control" (56). Shepherd attempts to overcome this difficulty by emphasising that:

Music is a social medium in sound. What [this] means ... is that the sounds of music provide constantly moving and complex matrices of sounds in which individuals may invest their own meanings ... [however] while the matrices of sounds which seemingly constitute an individual "piece" of music can accommodate a range of meanings, and thereby allow for negotiability of meaning, they cannot accommodate all possible meanings. (Shepherd, "Art")

It must be acknowledged that authenticity is constructed, and that in itself is an argument against the most common way to think of authenticity.
If authenticity implies something about the "pure" state of an object or symbol then surely such a state is connected to some "objective" rendering, one not possible according to Shepherd's claims. In some sense, then, authenticity is autonomous, its materialisation springs not from any necessary connection to sound, image, text, but from individual acts of interpretation, typically within what in literary criticism has come to be known as "interpretive communities". It is not hard to illustrate the point by generalising and observing that rock's notion of authenticity is captured in terms of songwriting, but that songwriters are typically identified with places (e.g. Tin Pan Alley, the Brill Building, Liverpool, etc.). In this way there is an obvious connection between authenticity and authorship (see Jones, "Popular Music Studies") and geography (as well in terms of musical "scenes", e.g. the "Philly Sound", the "Sun Sound", etc.). The important thing to note is the resultant connection between the symbolic and the physical worlds rooted (pun intended) in geography. As Redhead & Street put it:

The idea of "roots" refers to a number of aspects of the musical process. There is the audience in which the musician's career is rooted ... . Another notion of roots refers to music. Here the idea is that the sounds and the style of the music should continue to resemble the source from which it sprang ... . The issue ... can be detected in the argument of those who raise doubts about the use of musical high-technology by African artists. A final version of roots applies to the artist's sociological origins. (180)

It is important, consequently, to note that new technologies, particularly ones associated with the distribution of music, are of increasing importance in regulating the tension between alienation and progress mentioned earlier, as they are technologies not simply of musical production and consumption, but of geography. That the tension they mediate is most readily apparent in legal skirmishes during an unsettled era for copyright law (see Brown) should not distract scholars from understanding their cultural significance. These technologies are, on the one hand, "liberating" (see Hayward, Young, and Marsh) insofar as they permit greater geographical "reach" and thus greater marketing opportunities (see Fromartz), but on the other hand they permit less commercial control, insofar as they permit digitised music to freely circulate without restriction or compensation, to the chagrin of copyright enthusiasts. They also create opportunities for musical collaboration (see Hayward) between performers in different zones of time and space, on a scale unmatched since the development of multitracking enabled the layering of sound. Most importantly, these technologies open spaces for the construction of authenticity that have hitherto been unavailable, particularly across distances that have largely separated cultures and fan communities (see Paul). The technologies of Internetworking provide yet another way to make connections between authenticity, music and sound. Community and locality (as Redhead & Street, as well as others like Sara Cohen and Ruth Finnegan, note) are the elements used by audience and artist alike to understand the authenticity of a performer or performance. The lived experience of an artist, in a particular nexus of time and space, is to be somehow communicated via music and interpreted "properly" by an audience.
But technologies of Internetworking permit the construction of alternative spaces, times and identities. In no small way that has also been the situation with the mediation of music via most recordings. They are constructed with a sense of space, consumed within particular spaces, at particular times, in individual, most often private, settings. What the network technologies have wrought is a networked audience for music that is linked globally but rooted in the local. To put it another way, the range of possibilities when it comes to interpretive communities has widened, but the experience of music has not significantly shifted, that is, the listener experiences music individually, and locally.

Musical activity, whether it is defined as cultural or commercial practice, is neither flat nor autonomous. It is marked by ever-changing tastes (hence not flat) but within an interpretive structure (via "interpretive communities"). Musical activity must be understood within the nexus of the complex relations between technical, commercial and cultural processes. As Jensen put it in her analysis of Patsy Cline's career:

Those who write about culture production can treat it as a mechanical process, a strategic construction of material within technical or institutional systems, logical, rational, and calculated. But Patsy Cline's recording career shows, among other things, how this commodity production view must be linked to an understanding of culture as meaning something -- as defining, connecting, expressing, mattering to those who participate with it. (101)

To achieve that type of understanding will require that popular music scholars understand authenticity and music in a symbolic realm. Rather than conceiving of authenticity as a limited resource (that is, there is only so much that is "pure" that can go around), it is important to foreground its symbolic and ever-changing character. Put another way, authenticity is not used by musician or audience simply to label something as such, but rather to mean something about music that matters at that moment. Authenticity therefore does not somehow "slip away", nor does a "pure" authentic exist. Authenticity in this regard is, as Baudrillard explains concerning mechanical reproduction, "conceived according to (its) very reproducibility ... there are models from which all forms proceed according to modulated differences" (56). Popular music scholars must carefully assess the affective dimensions of fans, musicians, and also record company executives, recording producers, and so on, to be sensitive to the deeply rooted construction of authenticity and authentic experience throughout musical processes. Only then will there emerge an understanding of the structures of feeling that are central to the experience of music.

Footnotes

1. For analyses of the Walkman's role in social settings and popular music consumption see du Gay; Hosokawa; and Chen.
2. It has been thus since the advent of disc recording, when engineers would watch a record's grooves through a microscope lens as it was being cut to ensure grooves would not cross over one into another.

References

Altman, Rick. "Television/Sound." Studies in Entertainment. Ed. Tania Modleski. Bloomington: Indiana UP, 1986. 39-54.
Baudrillard, Jean. Symbolic Exchange and Death. London: Sage, 1993.
Brown, Ronald. Intellectual Property and the National Information Infrastructure: The Report of the Working Group on Intellectual Property Rights. Washington, DC: U.S. Department of Commerce, 1995.
Chen, Shing-Ling. "Electronic Narcissism: College Students' Experiences of Walkman Listening." Annual meeting of the International Communication Association. Washington, D.C. 1993.
Du Gay, Paul, et al. Doing Cultural Studies. London: Sage, 1997.
Frith, Simon. Sound Effects. New York: Pantheon, 1981.
Fromartz, Steven. "Start-ups Sell Garage Bands, Bowie on Web." Reuters newswire, 4 Dec. 1996.
Grossberg, Lawrence. We Gotta Get Out of This Place. London: Routledge, 1992.
Hayward, Philip. "Enterprise on the New Frontier." Convergence 1.2 (Winter 1995): 29-44.
Hosokawa, Shuhei. "The Walkman Effect." Popular Music 4 (1984).
Jensen, Joli. The Nashville Sound: Authenticity, Commercialisation and Country Music. Nashville: Vanderbilt UP, 1998.
Jones, Steve. Rock Formation: Music, Technology and Mass Communication. Newbury Park, CA: Sage, 1992.
---. "Popular Music Studies and Critical Legal Studies." Stanford Humanities Review 3.2 (Fall 1993): 77-90.
---. "A Sense of Space: Virtual Reality, Authenticity and the Aural." Critical Studies in Mass Communication 10.3 (Sep. 1993): 238-52.
---. "Sound, Space & Digitisation." Media Information Australia 67 (Feb. 1993): 83-91.
Marsh, Brian. "Musicians Adopt Technology to Market Their Skills." Wall Street Journal 14 Oct. 1994: C2.
Massenburg, George. "Recording the Future." EQ (Apr. 1997): 79-80.
Paul, Frank. "R&B: Soul Music Fans Make Cyberspace Their Meeting Place." Reuters newswire, 11 July 1996.
Redhead, Steve, and John Street. "Have I the Right? Legitimacy, Authenticity and Community in Folk's Politics." Popular Music 8.2 (1989).
Shepherd, John. "Art, Culture and Interdisciplinarity." Davidson Dunston Research Lecture. Carleton University, Canada. 3 May 1992.
---. "Value and Power in Music." The Sound of Music: Meaning and Power in Culture. Eds. John Shepherd and Peter Wicke. Cambridge: Polity, 1993.
Silverman, Kaja. The Acoustic Mirror. Bloomington: Indiana UP, 1988.
Sobchack, Vivian. Screening Space. New York: Ungar, 1982.
Young, Charles. "Aussie Artists Use Internet and Bootleg CDs to Protect Rights." Pro Sound News July 1995.
APA, Harvard, Vancouver, ISO, and other styles
47

Campbell, Sian Petronella. "On the Record: Time and The Self as Data in Contemporary Autofiction." M/C Journal 22, no. 6 (December 4, 2019). http://dx.doi.org/10.5204/mcj.1604.

Full text
Abstract:
In January of this year, artist Christian Marclay’s 24-hour video installation The Clock came to Melbourne. As Ben Lerner explains in 10:04, the autofictional novel Lerner published in 2014, The Clock by Christian Marclay “is a clock: it is a twenty-four hour montage of thousands of scenes from movies and a few from TV edited together so as to be shown in real time; each scene indicates the time with a shot of a timepiece or its mention in dialogue, time in and outside of the film is synchronized” (52). I went to see The Clock at ACMI several times, with friends and alone, in the early morning and late at night. Each time I sank back into the comfortable chairs and settled into the communal experience of watching time pass on a screen in a dark room. I found myself sucked into the enforced narrative of time, the way in which the viewer – in this case myself, and those sharing the experience with me – sought to impose a sort of meaning on the arguably meaningless passing of the hours.

In this essay, I will explore how we can expand our idea of autofiction, as a genre, to include contemporary forms of digital media such as social media or activity trackers, as the authors of these new forms of digital media act as author-characters by playing with the divide between fact and fiction, and requiring their readers to ascertain meaning by interpreting the clues layered within. I will analyse the ways in which the meaning of autofictional texts—such as Lerner’s 10:04, but also including social media feeds, blogs and activity trackers—shifts depending on their audience. I consider that as technology develops, we increasingly use data to contextualise ourselves within a broader narrative – health data, media, journalistic data. As the sociologist John B. Thompson writes, “The development of the media not only enriches and transforms the process of self-formation, it also produces a new kind of intimacy which did not exist before … individuals can create and establish a form of intimacy which is essentially non-reciprocal” (208). New media and technologies have emerged to assist in this process of self-formation through the collection and publication of data. This essay is interested in analysing this process of self-formation, and its relationship to the genre of autofiction.

Contemporary Digital Media as Autofiction

While humans have always recorded themselves throughout history, with the rise of new technologies the instinct to record the self is increasingly becoming an automatic one; an instinct we can tie to what media theorist Nick Couldry terms “presencing”: an “emerging requirement in everyday life to have a public presence beyond one’s bodily presence, to construct an objectification of oneself” (50). We are required to participate in ‘presencing’ by opting in to new media; it is now uncommon – even unfavourable – for someone not to engage in any forms of social media or self-monitoring. We are now encouraged to participate in ‘presencing’ through the recording and online publication of data that would have once been considered private, such as employment histories and activity histories. Every Instagram photo, Snapchat or TikTok video contributes to an accumulating digital presence, an emerging narrative of the self. Couldry notes that presencing “is not the same as calling up a few friends to tell them some news; nor, although the audience is unspecific, is it like putting up something on a noticeboard.
That is because presencing is oriented to a permanent site in public space that is distinctively marked by the producer for displaying that producer’s self” (50).

In this way, we can see that in effect we are all becoming increasingly positioned to become autofiction authors. As an experimental form of literature, autofiction has been around for a long time, the term having first been introduced in the 1970s, with Serge Doubrovsky widely credited with having introduced the genre with the publication of his 1977 novel Fils (Browning 49). In the most basic terms, autofiction is simply a work of fiction featuring a protagonist who can be interpreted as a stand-in for its author. And while autofiction is also confused with or used interchangeably with other genres such as metafiction or memoir, the difference between autofiction and other genres, writes Arnaud Schmitt, is that autofiction “relies on fiction—runs on fiction, to be exact” (141). Usually the reader can pick up on the fact that a novel is an autofictional one by noting that the protagonist and the author share a name, or key autobiographical details, but it is debatable as to whether the reader in fact needs to know that the work is autofictional in the first place in order to properly engage with it as a literary text.

The same ideas apply to digital media today. Kylie Cardell notes that “personal autobiographical but specifically diaristic (confessional, serial, quotidian) disclosure is increasingly positioned as a symptomatic feature of online life” (507). This ties in with Couldry’s idea of ‘presencing’; confession is increasingly a requirement when it comes to participation in digital media. As technology advances, the ways in which we can present and record the self evolve, and the narrative we can produce of the self expands alongside our understanding of the relationship between fact and fiction. Though of course we have always fabricated different narratives of the self, whether it be through diary entries or letter-writing, ‘presencing’ occurs when we literally present these edited versions of ourselves to an online audience. Lines become blurred between fiction and non-fiction, and the ability to distinguish between ‘fake’ and ‘real’ becomes almost impossible.

Increasingly, such a distinction fails to seem important, and in some cases, this blurred line becomes the point, or a punchline; we can see this most clearly in TikTok videos, wherein people (specifically, or at least most typically, young people—Generation Z) play with ideas of truth and unreality ironically. When a teenager posts a video of themselves on TikTok dancing in their school cafeteria with the caption, “I got suspended for this, don’t let this flop”, the savvy viewer understands without it needing to be said that the student was not actually suspended – and also understands that even less outlandish or unbelievable digital content is unreliable by nature, and simply the narrative the author or producer wishes to convey; just like the savvy reader of an autofiction novel understands, without it actually being said, that the novel is in part autobiographical, even when the author and protagonist do not share a name or other easily identifiable markers.

This is the nature of autofiction; it signals to the reader its status as a work of autofiction by littering intertextual clues throughout.
Readers familiar with the author’s biography or body of work will pick up on these clues, creating a sense of uneasiness in the reader as they work to discern what is fact and what is not.

Indeed, in 10:04, Lerner flags the text as a work of autofiction by sketching a fictional-not-fictional image of himself as an author of a story, ‘The Golden Vanity’, published in The New Yorker, that earned him a book deal—a story the ‘real’ Ben Lerner did in fact publish, two years before the publication of 10:04: “a few months before, the agent had e-mailed me that she believed I could get a “strong six-figure” advance based on a story of mine that had appeared in The New Yorker” (Lerner 4).

In a review of 10:04 for the Sydney Review of Books, Stephanie Bishop writes:

we learn that he did indeed write a proposal, that there was a competitive auction … What had just happened? Where are we in time? Was the celebratory meal fictional or real? Can we (and should we) seek to distinguish these categories?

Here Lerner is ‘presencing’, crafting a multilayered version of himself across media by assuming that the reader of his work is also a reader of The New Yorker (an easy assumption to make given that his work often appears in, and is reviewed in, The New Yorker). Of course, this leads to the question: what becomes of autofiction when it is consumed by someone who is unable to pick up on the many metareferences layered within its narrative? In this case, the work itself becomes a joke that doesn’t land – much like a social media feed being consumed by someone who is not its intended audience.

The savvy media consumer also understands that even the most meaningless or obtuse of media is all part of the overarching narrative. Lerner highlights the way we try and impose meaning onto (arguably) meaningless media when he describes his experience of watching time pass in Marclay’s The Clock:

Big Ben, which I would come to learn appears frequently in the video, exploded, and people in the audience applauded… But then, a minute later, a young girl awakes from a nightmare and, as she’s comforted by her father (Clark Gable as Rhett Butler), you see Big Ben ticking away again outside their window, no sign of damage. The entire preceding twenty-four hours might have been the child’s dream, a storm that never happened, just one of many ways The Clock can be integrated into an overarching narrative. Indeed it was a greater challenge for me to resist the will to integration. (Lerner 52-53)

This desire to impose an overarching narrative that Lerner speaks of – and which I also experienced when watching The Clock, as detailed in the introduction to this essay – is what the recording of the self both aims to achieve and achieves by default; it is the point and also the by-product.

The Self as Data

The week my grandmother died, in 2017, my father bought me an Apple Watch. I had recently started running and—perhaps as an outlet for my grief—was looking to take my running further. I wanted a smart watch to help me record my runs; to turn the act of running into data that I could quantify and thus understand. This, in turn, would help me understand something about myself. Deborah Lupton explains my impulse here when she writes, “the body/self is portrayed as a conglomerate of quantifiable data that can be revealed using digital devices” (65). I wanted to reveal my ‘self’ by recording it, similar to the way the data accumulated in a diary, when reflected upon, helps a diarist understand their life more broadly.
"Is a Fitbit a diary?”, asks Kylie Cardell. “The diary in the twenty-first century is already vastly different from many of its formal historical counterparts, yet there are discursive resonances. The Fitbit is a diary if we think of diary as a chronological record of data, which it can be” (348). The diary, as with the Apple Watch or Fitbit, is simply just a record of the self moving through time.Thus I submitted myself to the task of turning as much of myself into digital data as was possible to do so. Every walk, swim, meditation, burst of productivity, lapse in productivity, and beat of my heart became quantified, as Cardell might say, diarised. There is a very simple sort of pleasure in watching the red, green and blue rings spin round as you stand more, move more, run more. There is something soothing in knowing that at any given moment in time, you can press a button and see exactly what your heart is doing; even more soothing is knowing that at any given time, you can open up an app and see what your heart has been doing today, yesterday, this month, this year. It made sense to me that this data was being collected via my timepiece; it was simply the accumulation of my ‘self,’ as viewed through the lens of time.The Apple Watch was just the latest in a series of ways I have tasked technology with the act of quantifying myself; with my iPhone I track my periods with the Clue app. I measure my mental health with apps such as Shine, and my daily habits with Habitica. I have tried journaling apps such as Reflectly and Day One. While I have never actively tracked my food intake, or weight, or sex life, I know if I wanted to I could do this, too. And long before the Apple Watch, and long before my iPhone, too, I measured myself. In the late 2000s, I kept an online blog. Rebecca Blood notes that the development of blogging technology allowed blogging to become about “whatever came to mind. Walking to work. Last night’s party. Lunch” (54). Browning expands on this, noting that bloggingemerged as a mode of publication in the late ’90s, expressly smudging the boundaries of public and private. A diaristic mode, the blog nonetheless addresses (a) potential reader(s), often with great intimacy — and in its transition to print, as a boundary-shifting form with ill-defined goals regarding its readership. (49)(It is worth noting here that while of course many different forms of blogging exist and have always existed, this essay is only concerned with the diaristic blog that Blood and Browning speak of – arguably the most popular, and at least the most well known, form of blog.)My blog was also ostensibly about my own life, but really it was a work of autofiction, in the same way that my Apple Watch data, when shared, became a work of autofiction – which is to say that I became the central character, the author-character, whose narrative I was shaping with each post, using time as the setting. Jenny Davis writes:if self-quantifiers are seeking self-knowledge through numbers, then narratives and subjective interpretations are the mechanisms by which data morphs into selves. Self-quantifiers don’t just use data to learn about themselves, but rather, use data to construct the stories that they tell themselves about themselves.Over time, I became addicted to the blogging platform’s inbuilt metrics. 
I would watch with interest as certain posts performed better than others, and eventually the inevitable happened: I began – mostly unconsciously – to try and mould the content of my blogs to achieve certain outcomes – similar to the way that now, in 2019, it is hard to say whether I use an app to assist myself to meditate/journal/learn/etc, or whether I meditate/journal/learn/etc in order to record myself having done so.

David Sedaris notes how the collection of data subconsciously, automatically leads to its manipulation in his essay collection, Calypso:

for reasons I cannot determine my Fitbit died. I was devastated when I tapped the broadest part of it and the little dots failed to appear. Then I felt a great sense of freedom. It seemed that my life was now my own again. But was it? Walking twenty-five miles, or even running up the stairs and back, suddenly seemed pointless, since, without the steps being counted and registered, what use were they? (Sedaris 49)

In this way, the data we collect on and produce about ourselves, be it fitness metrics, blog posts, Instagram stories or works of literature or art, allows us to control and shape our own narrative, and so we do, creating what Kylie Cardell describes as “an autobiographical representation of self that is coherent and linear, ‘excavated’ from a mass of personal data” (502).

Of course, as foregrounded earlier, it is important to highlight the way ideas of privacy and audience shift in accordance with the type of media being consumed or created. Within different media, different author-characters emerge, and the author is required to participate in ‘presencing’ in different ways. For instance, data that exists only for the user does not require the user, or author, to participate in the act of ‘presencing’ at all – an example of this might be the Clue app, which records menstruation history. This information is only of interest to myself, and is not published or shared anywhere, with anyone. However, even data intended for a limited audience still requires participation in ‘presencing’. While I only ‘share’ my Apple Watch’s activity with a few people, even just the act of sharing this activity influences the activity itself, creating an effect in which the fact of the content’s consumption shapes the creation of the content itself. Through consumption of Apple Watch data alone, a narrative can be built in which I am lazy, or dedicated, an early riser or a late sleeper, the kind of person who prefers setting their own goals, or the kind of person who enjoys group activities – and knowing that this narrative is being built requires me to act, consciously, in the experience of building it, which leads to the creation of something unreal or fictional interspersed with factual data. (All of which is to admit that sometimes I go on a run not because I want to go on a run, but because I want to be the sort of person who has gone on a run, and be seen as such: in this way I am ‘presencing’.)

Similarly, the ephemeral versus permanent nature of data shared through media like Snapchat or Instagram dictates its status as a work of autofiction. When a piece of data – for instance, a photograph on Instagram – is published permanently, it contributes to an evolving autofictional narrative. The ‘Instagrammed’ self is both real and unreal, both fictional and non-fictional. The consumer of this data can explore an author’s social media feed dating back years and consume this data in exactly the way the author intends.
However, the ‘stories’ function on Instagram, for instance, allows the consumption of this data to change again. Content is published for a limited amount of time—usually 24 hours—then disappears, and is able to be shared with either the author’s entire group of followers, or a select audience, allowing an author more creative freedom to choose how their data is consumed.

Anxiety and Autofiction

Why do I feel the need to record all this data about myself? Obviously, this information is, to an extent, useful. If you are a person who menstruates, knowing exactly when your last period was, how long it lasted and how heavy it was is useful information to have, medically and logistically. If you run regularly, tracking your runs can be helpful in improving your time or routine. Similarly, recording the self in this way can be useful in keeping track of your moods, your habits, and your relationships.

Of course, as previously noted, humans have always recorded themselves. Cardell notes that “although the forms, conditions, and technology for diary keeping have changed, a motivation for recording, documenting, and accounting for the experience of the self over time has endured” (349). Still, it is hard to ignore the fact that ultimately, we seem to be entering some sort of age of digital information hoarding, and harder still to ignore the sneaking suspicion that this all seems to speak to a growing anxiety – and specifically, an anxiety of the self.

Gayle Greene writes that “all writers are concerned with memory, since all writing is a remembrance of things past; all writers draw on the past, mine it as a quarry. Memory is especially important to anyone who cares about change, for forgetting dooms us to repetition” (291). If all writers are concerned with memory, as Greene posits, then perhaps we can draw the conclusion that autofiction writers are concerned with an anxiety of forgetting, or of being forgotten. We are self-conscious as authors of autofictional media, concerned with how our work is and will continue to be perceived – and whether it is perceived at all. Marjorie Worthington believes that the rise in self-conscious fiction has resulted in an anxiety of obsolescence; that this anxiety in autofiction occurs “when a cultural trope (such as ‘the author’) is deemed to be in danger of becoming obsolete (or ‘dying’)” (27). However, it is worth considering the opposite – that an anxiety of obsolescence has resulted in a rise of self-conscious fiction, or autofiction.

This fear of obsolescence is pervasive in new digital media – Instagram stories and Snapchats, which once disappeared forever into a digital void, are now able to be saved and stored. The fifteen minutes of fame has morphed into fifteen seconds: in this way, time works both for and against the anxious author of digital autofiction. Technologies evolve quicker than we can keep up, with popular platforms becoming obsolete at a rapid pace. This results in what Kylie Cardell sees as an “anxiety around the traces of lives accumulating online and the consequences of ‘accidental autobiography,’ as well as the desire to have a ‘tidy,’ representable, and ‘storied’ life” (503).

This same desire can be seen at the root of autofiction.
The media theorist José van Dijck notes that:

with the advent of photography, and later film and television, writing tacitly transformed into an interior means of consciousness and remembrance, whereupon electronic forms of media received the artificiality label…writing gained status as a more authentic container of past recollection. (15)

Autofiction, however, disrupts this tacit transformation. It is a co-mingling of a desire to record the self and a desire to control one’s own narrative. The drive to represent oneself in a specific way, with consideration to one’s audience and self-brand, has become the root of social media, but is so pervasive now that it is often an unexamined, subconscious one. In autofiction, this drive is not subconscious; it is self-conscious.

Conclusion

As technology has developed, new ways to record, present and evaluate the self have emerged. While an impulse to self-monitor has always existed within society, with the rise of ‘presencing’ through social media this impulse has been made public. In this way, we can see presencing, or the public practice of self-performing through media, as an inherently autofictional practice. We can understand that the act of presencing stems from a place of anxiety and self-consciousness, and understand that it is in fact impossible to create autofiction without self-consciousness. As we begin to understand that all digital media is becoming inherently autofictional in nature, we’re increasingly forced to draw our own conclusions about the media we consume—just like the author-character of 10:04 is forced to draw his own conclusions about the passing of time, as represented by Big Ben, when interacting with Marclay’s The Clock. By analysing and comparing the ways in which the emerging digital landscape and autofiction both share a common goal of recording and preserving an interpretation of the ‘self’, we can reach a deeper understanding of the purpose that autofiction serves.

References

Bishop, Stephanie. “The Same but Different: 10:04 by Ben Lerner.” Sydney Review of Books 6 Feb. 2015. <https://sydneyreviewofbooks.com/10-04-ben-lerner/>.
Blood, Rebecca. “How Blogging Software Reshapes the Online Community.” Communications of the ACM 47.12 (2004): 53-55.
Browning, Barbara. “The Performative Novel.” TDR: The Drama Review 62.2 (2018): 43-58.
Cardell, Kylie. “The Future of Autobiography Studies: The Diary.” a/b: Auto/Biography Studies 32.2 (2017): 347-350.
Cardell, Kylie. “Modern Memory-Making: Marie Kondo, Online Journaling, and the Excavation, Curation, and Control of Personal Digital Data.” a/b: Auto/Biography Studies 32.3 (2017): 499-517.
Couldry, Nick. Media, Society, World: Social Theory and Digital Media Practice. Great Britain: Polity Press, 2012.
Davis, Jenny. “The Qualified Self.” Cyborgology 13 Mar. 2013. <http://thesocietypages.org/cyborgology/2013/03/13/the-qualified-self/>.
Greene, Gayle. “Feminist Fiction and the Uses of Memory.” Signs 16.2 (1991): 290-321.
Lerner, Ben. 10:04. London: Faber and Faber, 2014.
Lerner, Ben. “The Golden Vanity.” The New Yorker 11 June 2012. <https://www.newyorker.com/magazine/2012/06/18/the-golden-vanity>.
Lupton, Deborah. “You Are Your Data: Self-Tracking Practices and Concepts of Data.” Lifelogging. Ed. Stefan Selke. Wiesbaden: Springer, 2016. 61-79.
Schmitt, Arnaud. “David Shields's Lyrical Essay: The Dream of a Genre-Free Memoir, or beyond the Paradox.” a/b: Auto/Biography Studies 31.1 (2016): 133-146.
Sedaris, David. Calypso. United States: Little, Brown, 2018.
Thompson, John B. The Media and Modernity: A Social Theory of the Media. California: Stanford University Press, 1995.
Van Dijck, José. Mediated Memories in the Digital Age. Stanford: Stanford UP, 2007.
Worthington, Marjorie. The Story of “Me”: Contemporary American Autofiction. Nebraska: University of Nebraska Press, 2018.
APA, Harvard, Vancouver, ISO, and other styles
48

Ellis, Katie, Mike Kent, and Gwyneth Peaty. "Captioned Recorded Lectures as a Mainstream Learning Tool." M/C Journal 20, no. 3 (June 21, 2017). http://dx.doi.org/10.5204/mcj.1262.

Full text
Abstract:
In Australian universities, many courses provide lecture notes as a standard learning resource; however, captions and transcripts of these lectures are not usually provided unless requested by a student through dedicated disability support officers (Worthington). As a result, to date their use has been limited. However, while the requirement for—and benefits of—captioned online lectures for students with disabilities is widely recognised, these captions or transcripts might also represent further opportunity for a personalised approach to learning for the mainstream student population (Podszebka et al.; Griffin). This article reports findings of research assessing captioned recorded lectures as a mainstream learning tool, and their usefulness in enhancing inclusivity and learning outcomes for disabled, international, and broader student populations.

Literature Review

Captions have been found to be of benefit for a number of different groups considered at risk. These include people who are D/deaf or hard of hearing, those with other learning difficulties, and those from a non-English speaking background (NESB).

For students who are D/deaf or hard of hearing, captions play a vital role in providing access to otherwise inaccessible audio content. Captions have been found to be superior to sign language interpreters, note takers, and lip reading (Stinson et al.; Maiorana-Basas and Pagliaro; Marschark et al.).

The use of captions for students with a range of cognitive disabilities has also been shown to help with student comprehension of video-based instruction in a higher education context (Evmenova; Evmenova and Behrmann). This includes students with autism spectrum disorder (ASD) (Knight et al.; Reagon et al.) and students with dyslexia (Alty et al.; Beacham and Alty). While, anecdotally, captions are also seen as of benefit for students with attention deficit hyperactivity disorder (ADHD) (Kent et al.), studies have proved inconclusive (Lewis and Brown).

The third group of at-risk students identified as benefiting from captioning recorded lecture content are those from a NESB. The use of captions has been shown to increase vocabulary learning (Montero Perez, Peters, Clarebout, and Desmet; Montero Perez, Van Den Noortgate, and Desmet) and to assist with comprehension of presenters with accents or rapid speech (Borgaonkar, 2013).

In addition to these three main groups of at-risk students, captions have also been demonstrated to increase the learning outcomes for older students (Pachman and Ke, 2012; Schmidt and Haydu, 1992). Captions also have demonstrable benefits for the broader student cohort beyond these at-risk groups (Podszebka et al.; Griffin). For example, a recent study found that the broader student population utilised lecture captions and transcripts in order to focus, retain information, and overcome poor audio quality (Linder). However, the same study revealed that students were largely unaware of the availability of captions and transcripts, and of how to access them.

Methodology

In 2016 students in the Curtin University unit Web Communications (an introductory unit for the Internet Communications major) and its complementary first year unit, Internet and Everyday Life, along with a second year unit, Web Media, were provided with access to closed captions for their online recorded lectures.
The latter unit was added to the study serendipitously when its lectures were required to be captioned through a request from the Curtin Disability Office during the study period. Recordings and captions were created using the existing captioning system available through Curtin’s lecture recording platform—Echo360. As well as providing a written caption of what is being said during the lectures, this system also offers a sophisticated search functionality, as well as access to a total transcript of the lecture. The students were provided access to an online training module, developed specifically for this study, to explain the use of this system.

Enrolled Curtin students, both on-campus and online, Open Universities Australia (OUA) students studying through Curtin online, teaching staff, and disability officers were then invited to participate in a survey and interviews. The study sought to gain insights into students’ use of both recorded lectures and captioned video at the time of the survey, and their anticipated future usage of these services (see Kent et al.).

A total of 50 students—of 539 enrolled across the different instances of the three units—completed the survey. In addition, five follow-up interviews with students, teaching staff, and disability support staff were conducted once the surveys had been completed. Staff interviewed included tutors and unit coordinators who taught and supervised units in which the lecture captions were provided. The interviews assessed the awareness, use, and perceived validity of the captions system in the context of both learning and teaching.

Results

A number of different questions were asked regarding students’ demographics, their engagement with online unit materials, including recorded lectures, their awareness of Echo360’s lecture captions, as well as its additional features, their perceived value of online captions for their studies, and the future significance of captions in a university context.

Of the 50 participants in the survey, only six identified themselves as a person with a disability—almost 90 per cent did not identify as disabled. Additionally, 45 of the 50 participants identified English as their primary language. Only one student identified as a person with both a disability and coming from a NESB.

Engagement with Online Unit Materials and Recorded Lectures

The survey results provide insight into the ways in which participants interact with the Echo360 lecture system. Over 90 per cent of students had accessed the recorded lectures via the Echo360 system. While this might not seem notable at first, given such materials are essential elements of the units surveyed, the level of repeated engagement seen in these results is important because it indicates the extent to which students are revising the same material multiple times—a practice that captions are designed to facilitate and assist. For instance, one lecture was recorded per week for each unit surveyed, and most respondents (70 per cent) were viewing these lectures at least once or twice a week, while 10 per cent were viewing the lectures multiple times a week. Over half of the students surveyed reported viewing the same lecture more than once. Of these participants, 19 (or 73 per cent) had viewed a lecture twice and 23 per cent had viewed it three times or more. This illustrates that frequent revision is taking place, as students watch the same lecture repeatedly to absorb and clarify its contents.
This frequency of repeated engagement with recorded unit materials—lectures in particular—indicates that students were making online engagement and revision a key element of their learning process.

Awareness of the Echo360 Lecture Captions and Additional Features

However, while students were highly engaged with both the online learning material and the recorded lectures, there was less awareness of the availability of the captioning system—only 34 per cent of students indicated they were aware of having access to captions. The survey also asked students whether or not they had used additional features of the Echo360 captioning system such as the search function and downloadable lecture transcripts. Survey results confirm that these features were being used; however, responses indicated that only a minority of students using the captions system used these features, with 28 per cent using the search function and 33 per cent making use of the transcripts. These results can be seen as an indication that additional features were useful for revision, albeit for the minority of students who used them. A Curtin disability advisor noted in their interview that:

transcripts are particularly useful in addition to captions as they allow the user to quickly skim the material rather than sit through a whole lecture. Transcripts also allow translation into other languages, highlighting text and other features that make the content more accessible.

Teaching staff were positive about these features and suggested that providing transcripts saved time for tutors who are often approached to provide these to individual students:

I typically receive requests for lecture transcripts at the commencement of each study period. In SP3 [during this study] I did not receive any requests.

I feel that lecture transcripts would be particularly useful as this is the most common request I receive from students, especially those with disabilities.

I think transcripts and keyword searching would likely be useful to many students who access lectures through recordings (or who access recordings even after attending the lecture in person).

However, the one student who was interviewed preferred the keyword search feature, although they expressed interest in transcripts as well:

I used the captions keyword search. I think I would like to use the lecture transcript as well but I did not use that in this unit.

In summary, while not all students made use of Echo360’s additional features for captions, those who did access them did so frequently, indicating that these are potentially useful learning tools.

Value of Captions

Of the students who were aware of the captions, 63 per cent found them useful for engaging with the lecture material. According to one of the students:

[captions] made a big difference to me in terms of understanding and retaining what was said in the lectures. I am not sure that many students would realise this unless they actually used the captions… I found it much easier to follow what was being said in the recorded lectures and I also found that they helped stay focussed and not become distracted from the lecture.

It is notable that the improvements described above do not involve assistance with hearing or language issues, but the extent to which captions improve a more general learning experience.
This participant identified themselves as a native English speaker with no disabilities, yet the captions still made a “big difference” in their ability to follow, understand, focus on, and retain information drawn from the lectures.

However, while over 60 per cent of students who used the captions reported they found them useful, it was difficult to get more detailed feedback on precisely how and why. Only 52.6 per cent reported actually using them when accessing the lectures, and a relatively small number reported taking advantage of the search and transcripts features available through the Echo360 system. Exactly how they were being used and what role they play in student learning is therefore an area to pursue in future research, as it will assist in breaking down the benefits of captions for all learners.

Teaching staff also reported the difficulty in assessing the full value of captions—one teacher interviewed explained that the impact of captions was hard to monitor quantitatively during regular teaching:

it is difficult enough to track who listens to lectures at all, let alone who might be using the captions, or have found these helpful. I would like to think that not only those with hearing impairments, but also ESL students and even people who find listening to and taking in the recording difficult for other reasons, might have benefitted.

Some teaching staff, however, did note positive feedback from students:

one student has given me positive feedback via comments on the [discussion board].

one has reported that it helps with retention and with times when speech is soft or garbled. I suspect it helps mediate my accent and pitch!

While 60 per cent claiming captions were useful is a solid majority, it is notable that some participants skipped this question. As discussed above, survey answers indicate that this was because these 37 students did not think they had access to captions in their units.

Future Significance

Overall, these results indicate that while captions can provide a benefit to students’ engagement with online lecture learning material, there is a need for more direct and ongoing information sharing to ensure both students and teaching staff are fully aware of captions and how to use them. Technical issues—such as the time delay in captions being uploaded—potentially dissuade students from using this facility, so improving the speed and reliability of this tool could increase the number of learners keen to use it. All staff interviewed agreed that implementing captions for all lectures would be beneficial for everyone:

any technology that can assist in making lectures more accessible is useful, particularly in OUA [online] courses.

it would be a good example of Universal Design as it would make the lecture content more accessible for students with disabilities as well as students with other equity needs.

YES—it benefits all students. I personally find that I understand and my attention is held more by captioned content.

it certainly makes my role easier as it allows effective access to recorded lectures. Captioning allows full access as every word is accessible as opposed to note taking which is not verbatim.

Discussion

The results of this research indicate that captions—and their additional features—available through the Echo360 captions system are an aid to student learning.
However, there are significant challenges to be addressed in making students aware of these features and their potential benefits.

This study has shown that in a cohort of primarily English speaking students without disabilities, over 60 per cent found captions a useful addition to recorded lectures. This suggests that the implementation of captions for all recorded lectures would have widespread benefits for all learners, not only those with hearing or language difficulties. However, at present, only “eligible” students who approach the disability office would be considered for this service, usually students who are D/deaf or hard of hearing. Yet it can be argued that these benefits—and challenges—could also extend to other groups that might traditionally have been seen to benefit from the use of captions, such as students with other disabilities or those from a NESB.

However, again, a lack of awareness of the training module meant that this potential cohort did not benefit from this trial. In this study, none of the students who identified as having a disability or coming from a NESB indicated that they had access to the training module. Further, five of the six students with disabilities reported that they did not have access to the captions system and, similarly, only two of the five NESB students reported that they did. Despite these low numbers, all the students who were part of these two groups and who did access the captions system found it useful.

It can therefore be seen that the main challenge for teaching staff is to ensure all students are aware of captions and can access them easily. One option for reducing the need for training or further instructions might be having captions always ON by default. This means students could incorporate them into their study experience without having to take direct action or, equally, could simply choose to switch them off.

There are also a few potential teething issues with implementing captions universally that need to be noted, as staff expressed some concerns regarding how this might alter the teaching and learning experience. For example:

because the captioning is once-off, it means I can’t re-record the lectures where there was a failure in technology as the new versions would not be captioned.

a bit cautious about the transcript as there may be problems with students copying that content and also with not viewing the lectures thinking the transcripts are sufficient.

Despite these concerns, the survey results and interviews support the previous findings showing that lecture captions have the potential to benefit all learners, enhancing each student’s existing capabilities. As one staff member put it:

in the main I just feel [captions are] important for accessibility and equity in general. Why should people have to request captions? Recorded lecture content should be available to all students, in whatever way they find it most easy (or possible) to engage.

Follow-up from students at the end of the study further supported this. As one student noted in an email at the start of 2017:

hi all, in one of my units last semester we were lucky enough to have captions on the recorded lectures. They were immensely helpful for a number of reasons.
I really hope they might become available to us in this unit.

Conclusions

When this project set out to investigate the ways diverse groups of students could utilise captioned lectures if they were offered as a mainstream learning tool rather than a feature only disabled students could request, existing research suggested that many accommodations designed to assist students with disabilities actually benefit the entire cohort. The results of the survey confirmed this was also the case for captioning.

However, currently, lecture captions are typically utilised in Australian higher education settings—including Curtin—only as an assistive technology for students with disabilities, particularly students who are D/deaf or hard of hearing. In these circumstances, the student must undertake a lengthy process months in advance to ensure timely access to essential captioned material. Mainstreaming the provision of captions and transcripts for online lectures would greatly increase the accessibility of online learning—removing these barriers allows education providers to harness the broad potential of captioning technology. Indeed, ensuring that captions were available “by default” would benefit the educational outcomes and self-determination of the wide range of students who could benefit from this technology.

Lecture captioning and transcription is increasingly cost-effective, given technological developments in speech-to-text or automatic speech recognition software, and the increasing re-use of content across different iterations of a unit in online higher education courses. At the same time, international trends in online education—not least the rapidly evolving interpretations of international legislation—provide new incentives for educational providers to begin addressing accessibility shortcomings by incorporating captions and transcripts into the basic materials of a course.

Finally, an understanding of the diverse benefits of lecture captions and transcripts needs to be shared widely amongst higher education providers, researchers, teaching staff, and students to ensure the potential of this technology is accessed and used effectively. Understanding who can benefit from captions, and how they benefit, is a necessary step in encouraging greater use of such technology, and thereby enhancing students’ learning opportunities.

Acknowledgements

This research was funded by the Curtin University Teaching Excellence Development Fund. Natalie Latter and Kai-ti Kao provided vital research assistance. We also thank the students and staff who participated in the surveys and interviews.
APA, Harvard, Vancouver, ISO, and other styles
49

Chesher, Chris. "Mining Robotics and Media Change." M/C Journal 16, no. 2 (March 8, 2013). http://dx.doi.org/10.5204/mcj.626.

Full text
Abstract:
Introduction

Almost all industries in Australia today have adopted digital media in some way. However, uses in large scale activities such as mining may seem to be different from others. This article looks at mining practices with a media studies approach, and concludes that, just as in many other industries, mining and media have converged. Many Australian mine sites are adopting new media for communication and control: to manage communication, explore for ore bodies, simulate forces, automate drilling, keep records, and make transport and command robotic. Beyond sharing similar digital devices for communication and computation, new media in mining employ characteristic digital media operations, such as numerical operation, automation and managed variability. This article examines the implications of finding that some of the most material practices have become mediated by new media. Mining has become increasingly mediated through new media technologies such as GPS, visualisation and game-style remote operation, similar to those adopted in consumer home and mobile digital media. The growing and diversified adoption of digital media championed by companies like Rio Tinto aims not only to ‘improve’ mining, but to change it. Through remediating practices of digital mining, new media have become integral, powerful tools in prospective, real-time and analytical environments. This paper draws on two well-known case studies of mines in the Pilbara and Western NSW. These have been documented in press releases and media reports as representing changes in media and mining. First, the West Angelas mine in the Pilbara is an open-cut iron ore mine introducing automation and remote operation. This mine is located in the remote Pilbara, and is notable for being operated remotely from a control centre 2000km away, near Perth Airport, WA. A growing fleet of Komatsu 930E haul trucks, which can drive autonomously, traverses the site. Fitted with radars, lasers and GPS, these enormous vehicles navigate through the open pit mine with no direct human control. Introducing these innovations to mine sites became more viable after iron ore mining became increasingly profitable in the mid-2000s. A boom in steel building in China drove unprecedented demand. This growing income coincided with a change in public rhetoric from companies like Rio Tinto. They pointed towards substantial investments in research, infrastructure, and accelerated introduction of new media technologies into mining practices. Rio Tinto trademarked the term ‘Mine of the future’ (US Federal News Service 1), and publicised their ambitious project for renewal of mining practice, including digital media. More recently, prices have been more volatile. The second case study site is a copper and gold underground mine at Northparkes in Western NSW. Northparkes uses substantial sensing and control, as well as hybrid autonomous and remote operated vehicles. The use of digital media begins with prospecting and runs through to the logistics of transportation. Engineers place explosives in optimal positions using computer modelling of the underground rock formations. They make heavy use of software to coordinate layer-by-layer use of explosives in this advanced ‘box cut’ mine. After explosives disrupt the rock layer a kilometre underground, another specialised vehicle collects and carries the ore to the surface. The Sandvik loader-hauler-dumper (LHD) can be driven conventionally by a driver, but it can also travel autonomously in and out of the mine without a direct operator.
Once it reaches a collection point, where the broken-up ore has accumulated, a user on the surface can change the media mode to telepresence. The human operator then takes control using something like a games controller and multiple screens. The remote operator controls the LHD to fill the scoop with ore. The fully-loaded LHD backs up, and returns autonomously using laser sensors to follow a trail to the next drop-off point. The LHD has become a powerful mediator, reconfiguring technical, material and social practices throughout the mine.

The Meanings of Mining and Media Are Converging

Until recently, mining and media typically operated ontologically separately. The media, such as newspapers and television, often tell stories about mining, following regular narrative scripts. There are controversies and conflicts, narratives of ecological crises, and the economics of national benefit. There are heroic and tragic stories such as the Beaconsfield mine collapse (Clark). There are new industry policies (Middelbeek), which are politically fraught because of the lobbying power of miners. Almost completely separately, workers in mines were consumers of media, from news to entertainment. These media practices, while important in their own right, tell nothing of the approaching changes in many other sectors of work and everyday life. It is somewhat unusual for a media studies scholar to study mine sites. Mine sites are most commonly studied by engineering (Bellamy & Pravica), business, and labour and cultural histories (McDonald, Mayes & Pini). Until recently, media scholarship on mining has related to media institutions, such as newspapers, broadcasters and websites, and their audiences. As digital media have proliferated, the phenomena that can be considered media phenomena have changed. This article, pointing to the growing roles of media technologies, observes the increasing importance that media, in these terms, have in the rapidly changing domain of mining. Another meaning for ‘media’, from cybernetics, is that a medium is any technology that translates perception, makes interpretations, and performs expressions. This meaning is more abstract, operating with a broader definition of media — not only those institutionalised as newspapers or radio stations. It is well known that computer-based media have become ubiquitous in culture. This is true in particular within the mining company’s higher ranks. Rio Tinto’s ambitious 2010 ‘Mine of the Future’ (Fisher & Schnittger, 2) program was premised on an awareness that engineers, middle managers and senior staff were already highly computer literate. It is worth remembering that such competency was relatively uncommon until the late 1980s. The meanings of digital media have been shifting for many years, as computers become experienced more as everyday personal artefacts, and less as remote information systems. Their value has always been held with some ambivalence. Zuboff’s (387-414) picture of loss, intimidation and resistance to new information technologies in the 1980s seems to have dissipated by 2011. More than simply being accepted begrudgingly, the PC platform (and variants) has become ubiquitous, a lingua franca for information workers. It became an intimate companion for many professions, and in many homes. It was an inexpensive, versatile and generalised convergent medium for communication and control. And yet, as writers such as Gregg observe, the flexibility of networked digital work imposes upon many workers ‘unlimited work’.
The boundaries of the office wall break down, for better or worse. Emails, utility and other work-related behaviours increasingly encroach onto domestic and public space and time. The very attractiveness of these artefacts to users has tied them to them. The trail that leads the media studies discipline down the digital mine shaft has been cleared by recent work in media archaeology (Parikka), platform studies (Middelbeek; Montfort & Bogost; Maher) and new media (Manovich). Each of these redefined media studies practices addresses the need to diversify the field’s attention and methods. It must look at more specific, less conventional and more complex media formations. Mobile media and games (both computer-based) have turned out to be quite different from traditional media (Hjorth; Goggin). Kirschenbaum’s literary study of hard drives and digital fiction moves from materiality to aesthetics. In my study of digital mining, I present a reconfigured media studies, after these authors, that reveals heterogeneous media configurations, deserving new attention to materiality. This article also draws on the actor network theory approach and terminology (Latour). The uses of media / control / communications in the mining industry are very complex, and remain under constant development. Media such as robotics, computer modelling, remote operation and so on are bound together into complex practices. Each mine site is different — geologically, politically, and economically. Mines are subject to local and remote disasters. Mine tunnels and global prices can collapse, rendering active sites uneconomical overnight. Many technologies are still under development — including Northparkes and West Angelas. Both these sites are notable for their significant use of autonomous vehicles and remote operated vehicles. There is no doubt that the digital technologies modulate all manner of mining processes: from rocks and mechanical devices to human actors. Each of these actors presents different forms of collusion and opposition. Within a mining operation, the budgets for computerised and even robotic systems are relatively modest for their expected return. Deep in a mine, we can still see media convergence at work. Convergence refers to processes whereby previously diverse practices in media have taken on similar devices and techniques. While high-end PCs in mining (running simulators, data control systems, visualisation, telepresence and so on) may be high-performance, ruggedised devices, they still share a common platform with the desktop PC. Conceptual resources developed in Media Ecology, New Media Studies, and the Digital Humanities can now inform readings of mining practices, even if their applications differ dramatically in size, reliability and cost. It is not entirely surprising that some observations by new media theorists about entertainment and media applications can also relate to features of mining technologies. Manovich argues that numerical representation is a distinctive feature of new media. Numbers have always already been key to mining engineering. However, computers visualise numerical fields in simulations that extend out of the minds of the calculators, and into visual and even haptic spaces. Specialists in geology, explosives, mechanical apparatuses, and so on, can use platforms that are common to everyday media.
As the significance of numbers is extended by computers in the field, more and more diverse sources of data provide apparently consistent and seamless images of multiple fields of knowledge. Another feature that Manovich identifies in new media is the capacity for automation of media operations. Automation of many processes in mechanical domains clearly occurred long before industrial technologies were ported into new media. The difference with new media in mine sites is that robotic systems must vary their performance according to feedback from their extra-system environments. For our purposes, the haul trucks in WA are software-controlled devices that already qualify as robots. They sense, interpret and act in the world based on their surroundings. They evaluate multiple factors, including sensor readings, GPS signals, operator instructions and so on. They can repeat the path, by sensing the differences, day after day, even if the weather changes, the track wears away or the instructions from base change. Automation compensates for differences within complex and changing environments.

Automation of an open-pit mine haulage system… provides more consistent and efficient operation of mining equipment, it removes workers from potential danger, it reduces fuel consumption significantly reducing greenhouse gas (GHG) emissions, and it can help optimize vehicle repairs and equipment replacement because of more-predictable and better-controlled maintenance. (Parreira and Meech 1-13)

Material components in physical mines tend to become modular and variable, as their physical shape lines up with the logic of another of Manovich’s new media themes, variability. Automatic systems also make obsolete human drivers, who previously handled those environmental variations, for better or for worse, through the dangerous, dull and dirty spaces of the mine. Drivers’ capacity to control repeat trips is no longer needed. The Komatsu driverless truck, introduced to the WA iron ore mines from 2008, proved itself to be almost as quick as human drivers at many tasks. But the driverless trucks have deeper advantages: they can run 23 hours each day with no shift breaks; they drive more cautiously and wear the equipment less than human drivers. There is no need to put workers and their families up in town. The benefit most often mentioned is safety: even the worst accident won’t produce injuries to drivers. The other advantage, less often mentioned, is that autonomous trucks don’t strike. Meanwhile, managers of human labour also need to adopt certain strategies of modulation to support the needs and expectations of their workers. Mobile phones, televisions and radio are popular modes of connecting workers to their loved ones, particularly in the remote and harsh West Angelas site. One solution — regular fly-in-fly-out shifts — tends also to be alienating for workers and locals (Cheshire; Storey; Tonts). As with any operations, the cost of maintaining a safe and comfortable environment for workers requires trade-offs. Companies face risks from mobile phones, leaking computer networks, and espionage that expose the site to security breaches. Because of such risks, miners tend to be subject to disciplinary regimes. It is common to test alcohol and drug levels. There was some resistance from workers, who refused to change from urine testing to saliva testing (Latimer). Contesting these machines places the medium, in a different sense, at the centre of regulation of the workers’ bodies.
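Returning for a moment to the haul trucks: the sense-interpret-act cycle described above can be made concrete with a minimal, purely illustrative Python sketch of the kind of feedback loop such a vehicle might run. All sensor names, thresholds and actions here are hypothetical; this is not drawn from Komatsu's actual control software, which is far more complex.

    # Illustrative sketch only: a simplified sense-interpret-act loop for an
    # autonomous haul truck. All names and thresholds are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Perception:
        obstacle_distance_m: float   # from radar/laser scanners
        path_offset_m: float         # GPS deviation from the planned route
        base_instruction: str        # e.g. "proceed", "hold"

    def decide(p: Perception) -> str:
        """Interpret sensed conditions and base instructions into an action."""
        if p.base_instruction == "hold" or p.obstacle_distance_m < 20.0:
            return "stop"                # safety overrides everything else
        if abs(p.path_offset_m) > 1.5:
            return "steer_back_to_path"  # compensate for drift or track wear
        return "follow_planned_path"

    # The same route is repeated day after day by sensing the differences:
    readings = [Perception(80.0, 0.2, "proceed"), Perception(15.0, 0.2, "proceed")]
    print([decide(r) for r in readings])  # ['follow_planned_path', 'stop']

The point of the sketch is the structure, not the detail: the truck's behaviour is not a fixed recording but a continuous compensation for feedback, which is what distinguishes these robots from earlier mechanical automation.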
At Northparkes, hybrid autonomous and remote operation is also a solution for modulating labour. It is safer and more comfortable, while also being more efficient, as one experienced driver can control three trucks at a time. This more complex mode of mediation is necessary because underground mines are too complex, in geology and working environment, to suit full autonomy. These variations provide different relationships between operators and machines. The operator uses a games controller, and watches four video views from the cabin to make the vehicle fill the bucket with ore (Northparkes Mines, 9). Again, media have become a pivotal element in the mining assemblage. This combines the safety and comfort of autonomous operation (helping to retain staff) with the required use of human sensorimotor dexterity. Mine systems deserve attention from media studies because sites are combining large scale physical complexity with increasingly sophisticated computing. The conventional pictures of mining and media rarely address the specificity of subjective and artefactual encounters in and around mine sites. Research on mining communication is typically conducted within the instrumental frames of engineering (Duff et al.). Some of the developments in mechanical systems have contributed to the efficiency and safety of many mines: larger trucks, more rock crushers, and so on. However, the single most powerful influence on mining has been the adoption of digital media to control and integrate mining systems. Rio Tinto’s transformative agenda is outlined in its high profile ‘Mine of the Future’ document (US Federal News Service). The media to which I refer are not only those in popular culture, but also the digital control and communications systems used internally within mines and supply chains. The global mining industry began adopting digital communication and automation (somewhat) systematically only in the 1980s. Mining companies hesitated to adopt digital media because the fundamentals of mining are so risky and bound to standard procedures. Large scale material operations, extracting and processing minerals from under the ground, hardly seem an appropriate space for delicate digital electronics. Mining is also exposed to volatile economic conditions, so investing in anything major can be unattractive. High technology perhaps contradicts an industry ethos of risk-taking and masculinity. Digital media became domesticated, and familiar to a new generation of formally educated engineers for whom databases and algorithms (Manovich) were second nature. Digital systems become simultaneously controllers of objects, and mediators of meanings and relationships. They control movements, and express communications. Computers slide from using meanings to invoking direct actions over objects in the world. Even on an everyday scale, computer operations often control physical processes. Anti-lock braking systems regulate a vehicle’s braking pressure to avoid the danger of wheels locking up. Another example is the ATM, which involves both symbolic interactions and the exchange of physical objects. These operations are examples of the ‘asignifying semiotic’ (Guattari), in which meanings and non-meanings interact. There is no essential distinction between media and non-media digital operations. Which operations are symbolic, attached or non-consequential is not clear. This trend towards using computation for both meanings and actions has accelerated since 2000.
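The ATM example lends itself to a tiny illustration of this dual character, where a single computational decision both expresses a meaning and performs a physical action. The Python sketch below is hypothetical and schematic, not modelled on any real ATM software.

    # Hypothetical sketch of an 'asignifying' digital operation: one decision
    # yields both a sign (a message for the human) and an actuation (a command
    # for the cash-dispensing hardware).
    def withdraw(balance_cents: int, request_cents: int):
        """Return (new_balance, message, hardware_command)."""
        if request_cents > balance_cents:
            # Purely symbolic outcome: a meaning, but no physical action.
            return balance_cents, "Insufficient funds", None
        new_balance = balance_cents - request_cents
        message = f"Dispensed ${request_cents / 100:.2f}"
        command = {"motor": "dispense", "amount_cents": request_cents}
        return new_balance, message, command

    print(withdraw(10_000, 2_500))
    # (7500, 'Dispensed $25.00', {'motor': 'dispense', 'amount_cents': 2500})

In Guattari's terms, the message operates in the register of signification while the motor command simply acts without signifying; both issue from the same operation.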
Mines of the Future

Beyond a relatively standard set of office and communications software, many fields, including mining, have adopted specialised packages for their domains. In 3D design, it is AutoCAD. In hard sciences, it is custom modelling. In audiovisual production, it may be Apple and Adobe products. Some practitioners define their subjectivity, professional identity and practices around these platforms. This platform orientation is apparent in areas of mining, with applications such as Gemcom and Rockware; Geological Database and Resource Estimation Modelling from Micromine; geology and mine design software from Runge and Minemap; and mine production data management software from Corvus. However, software is only a small proportion of overall costs in the industry. Agents in mining demand solutions to peculiar problems and requirements. They are bound by their enormous scale; the physical risks of environments with explosive and moving elements; the need to negotiate constant change, as mining literally takes the ground from under itself; the need to incorporate geological patterns; and the importance of logistics. When digital media are the solution, there can be what are perceived as rapid gains, including greater capacities for surveillance and control. Digital media do not provide more force. Instead, they modulate the direction, speed and timing of activities. They are not a complete solution, because too many uncontrolled elements are at play. Rather, there are moments and situations when the degree of control refigures the work that can be done.

Conclusions

In this article I have proposed a new conception of media change, by reading digital innovations in mining practices themselves as media changes. This involved developing an initial reading of the operations of mining as digital media. With this approach, the array of media components extends far beyond the conventional ‘mass media’ of newspapers and television. It offers a more molecular media environment which is increasingly heterogeneous. It sometimes involves materiality on a huge scale, and is sometimes apparently virtual. The mining media event can be a semiotic, a signal, a material entity and so on. It can be a command to a human. It can be a measurement of location, a rock formation, a pressure or an explosion. The mining media event, as discussed above, is subject to Manovich’s principles of media, being numerical, variable and automated. In the mining media event, these principles move from the aesthetic to the instrumental and physical domains of the mine site. The role of new media — operating at many levels, from the bottom of the mine site to the cruising altitude of the fly-in-fly-out aeroplanes — has motivated significant changes in the Australian industry. When digital media and robotics come into play, they do not so much introduce change, but reintroduce similarity. This inversion of media is less about meaning, and more about local mastery. Media modulation extends the kinds of influence that can be exerted by the actors in control. In these situations, the degrees of control, and of resistance, are yet to be seen.

Acknowledgments

Thanks to Mining IQ for a researcher's pass at the Mining Automation and Communication Conference, Perth, in August 2012.

References

Bellamy, D., and L. Pravica. “Assessing the Impact of Driverless Haul Trucks in Australian Surface Mining.” Resources Policy 2011.

Cheshire, L. “A Corporate Responsibility?
The Constitution of Fly-In, Fly-Out Mining Companies as Governance Partners in Remote, Mine-Affected Localities.” Journal of Rural Studies 26.1 (2010): 12–20.

Clark, N. “Todd and Brant Show PM Beaconsfield's Cage of Hell.” The Mercury, 6 Nov. 2008.

Duff, E., C. Caris, A. Bonchis, K. Taylor, C. Gunn, and M. Adcock. “The Development of a Telerobotic Rock Breaker.” CSIRO 2009: 1–10.

Fisher, B.S., and S. Schnittger. Autonomous and Remote Operation Technologies in the Mining Industry: Benefits and Costs. BAE Report 12.1 (2012).

Goggin, G. Global Mobile Media. London: Routledge, 2010.

Gregg, M. Work’s Intimacy. Cambridge: Polity, 2011.

Guattari, F. Chaosmosis: An Ethico-Aesthetic Paradigm. Trans. Paul Bains and Julian Pefanis. Bloomington: Indiana UP, 1992.

Hjorth, L. Mobile Media in the Asia-Pacific: Gender and the Art of Being Mobile. Taylor & Francis, 2008.

Kirschenbaum, M.G. Mechanisms: New Media and the Forensic Imagination. Cambridge, Mass.: MIT Press, 2008.

Latimer, Cole. “Fair Work Appeal May Change Drug Testing on Site.” Mining Australia 2012. 3 May 2013 ‹http://www.miningaustralia.com.au/news/fair-work-appeal-may-change-drug-testing-on-site›.

Latour, B. Reassembling the Social: An Introduction to Actor-Network-Theory. Oxford: Oxford University Press, 2007.

Maher, J. The Future Was Here: The Commodore Amiga. Cambridge, Mass.: MIT Press, 2012.

Manovich, Lev. The Language of New Media. Cambridge, Mass.: MIT Press, 2001.

McDonald, P., R. Mayes, and B. Pini. “Mining Work, Family and Community: A Spatially-Oriented Approach to the Impact of the Ravensthorpe Nickel Mine Closure in Remote Australia.” Journal of Industrial Relations 2012.

Middelbeek, E. “Australia Mining Tax Set to Slam Iron Ore Profits.” Metal Bulletin Weekly 2012.

Montfort, N., and I. Bogost. Racing the Beam: The Atari Video Computer System. Cambridge, Mass.: MIT Press, 2009.

Parikka, J. What Is Media Archaeology? London: Polity Press, 2012.

Parreira, J., and J. Meech. “Autonomous vs Manual Haulage Trucks — How Mine Simulation Contributes to Future Haulage System Developments.” Paper presented at the CIM Meeting, Vancouver, 2010. 3 May 2013 ‹http://www.infomine.com/library/publications/docs/parreira2010.pdf›.

Storey, K. “Fly-In/Fly-Out and Fly-Over: Mining and Regional Development in Western Australia.” Australian Geographer 32.2 (2010): 133–148.

Storey, K. “Fly-In/Fly-Out: Implications for Community Sustainability.” Sustainability 2.5 (2010): 1161–1181. 3 May 2013 ‹http://www.mdpi.com/2071-1050/2/5/1161›.

Takayama, L., W. Ju, and C. Nass. “Beyond Dirty, Dangerous and Dull: What Everyday People Think Robots Should Do.” Paper presented at HRI '08, Amsterdam, 2008. 3 May 2013 ‹http://www-cdr.stanford.edu/~wendyju/publications/hri114-takayama.pdf›.

Tonts, M. “Labour Market Dynamics in Resource Dependent Regions: An Examination of the Western Australian Goldfields.” Geographical Research 48.2 (2010): 148–165. 3 May 2013 ‹http://onlinelibrary.wiley.com/doi/10.1111/j.1745-5871.2009.00624.x/abstract›.

US Federal News Service, Including US State News. “USPTO Issues Trademark: Mine of the Future.” 31 Aug. 2011.

Wu, S., H. Han, X. Liu, H. Wang, and F. Xue. “Highly Effective Use of Australian Pilbara Blend Lump Ore in a Blast Furnace.” Revue de Métallurgie 107.5 (2010): 187–193. doi:10.1051/metal/2010021.

Zuboff, S. In the Age of the Smart Machine: The Future of Work and Power. Heinemann Professional, 1988.
APA, Harvard, Vancouver, ISO, and other styles
50

Melchior, Angelika. "Tag & Trace Marketing." M/C Journal 8, no. 4 (August 1, 2005). http://dx.doi.org/10.5204/mcj.2385.

Full text
Abstract:
The use of RFID (radio frequency identification), also called “smart tags”, is on the rise in the retail industry. In short, RFID tags are tiny microchips that use short-range radio signals to emit information; they can be used to tag goods, buildings, cars, pets, people etc. Unlike bar-code scanners, which must be held directly in front of the item being scanned, one of the benefits of RFID tags is that they can be scanned from a distance. It is expected that RFID will eventually replace the bar code, and its use is likely to save companies like Wal-Mart, Procter & Gamble and Gillette millions of dollars as they can track every bottle of shampoo or packet of razor blades from the factory floor to the store shelf (Baard, “Lawmakers”). Most agree that using RFID to track goods from the point of manufacture to the location of sale, in order to prevent goods being lost, stolen or handled inappropriately, is acceptable and not cause for privacy concerns. But as marketers often take every opportunity to learn more about consumers and their purchasing behaviours, some fear that tags embedded in clothing, membership cards, mobile phones etc. may be scanned inappropriately and used to target individuals with cleverly tailored marketing messages. In the effort to provide a more customised experience, business is at risk of becoming increasingly intrusive – something that will not be universally acceptable. But is it all bad? Privacy concerns aside, smart tags can add new functions as well as enable a whole range of innovative products and services when joined with other technologies.

RFID beyond Traditional Value Chain Management

Prada is often mentioned as an example of how RFID can be taken beyond traditional value chain management. Prada has implemented some ground-breaking technology in their Manhattan (New York) store, all based around RFID. RF-receivers automatically detect and scan garments brought into the dressing room. Via a touch screen the customers can view tips on how to mix and match, access information about available sizes, colours, fabrics and styles, and watch video clips of models wearing the very outfit they are trying on (Grassley; ”Prada’s”). Eventually customers will be able to create virtual closets and store information about what they have tried on or bought on their Prada Web account (”Prada’s”). Customers’ details, including notes made by sales assistants (e.g. preferences), can be stored automatically in customer cards, readable by sales assistants’ handhelds or at the cash register (”Prada’s”) – information that could be used by the assistant to spur further sales by suggesting, for example: “Last time you were here, you bought a black skirt. We have a sweater that matches that skirt” (Batista). Another example is Precision Dynamics Corp (PDC), which has developed an automatic identification wristband incorporating RFID technology. One application is the AgeBand, which is used to verify the bearer’s age when purchasing alcohol. ID is checked when entering the venue and the customer receives a plastic wristband printed with personal details that cannot be removed without being damaged or destroyed (Swedberg). The embedded chip can be linked to a customer’s credit card number or a cash deposit to pay for purchases while on the premises. “It is also an easy way to collect statistics for marketing”, says PDC’s senior marketing communications specialist, Paula Maggio (qtd. in Swedberg).
Although RFID clearly provides benefits and new opportunities for business operations, there is an argument over whether consumers will ultimately gain or suffer when smart tags become more commonplace. Certainly it may be convenient to have smart hangers that project virtual clothes onto a customer’s reflection in the mirror so they can try on a range of outfits without having to remove their clothes, but the collection of personal information necessary to provide this convenience also raises complex privacy issues.

Fear of Intrusive Marketing

Hesseldahl believes that our homes, workplaces, shops, malls, cars, trains, planes and bicycles will all be environments that constantly notice who we are and what we are doing, and which – according to a detailed profile of our habits – will try to service us in ways we can hardly even imagine today (25). This may be helpful to us in many ways, but there are concerns that organisations will use RF-technology to connect product information to individuals in order to create personal profiles, which can then be used for pin-pointed marketing purposes, or even for tracking individuals, without their knowledge or consent. A possible scenario is one where consumers are bombarded with intrusive advertising based on what they are wearing, what they are purchasing, their history of past purchases, demographics and more.

“Kill Machines”

Fearing that the technology will be abused, many privacy advocates suggest that RFID must only be used to keep track of goods in the supply chain, and that tags should thus be deactivated as soon as they leave the store. For example, consumer organisations such as CASPIAN (Consumers Against Supermarket Privacy Invasion and Numbering), the American Civil Liberties Union, and Electronic Frontiers Australia lobby for the obligatory deactivation of the tags at the point of purchase. But companies such as Procter & Gamble and Wal-Mart would prefer to keep the tags active after checkout, rather than disabling them with so-called “kill machines”, so they can match the unique codes emitted by RF-tags to shoppers’ personal information (Baard, “Watchdog”). They will want to use RF technology to support the sales process and to provide the consumer with new and better services than are otherwise possible. And without doubt, if the tags are deactivated some genuinely helpful applications would be lost to the consumer, e.g. being able to call your refrigerator from the supermarket to check if you need milk, or your washing machine alerting you if you have accidentally put a delicate garment in your white wash.

Looking at the Bright Side

Privacy concerns aside, RFID technology in fact has the potential to empower consumers, as it will put more information about products at consumers’ fingertips. Consumers will, for example, be able to go into competing supermarkets and scan items with an RF-receiver embedded in their mobile phone, record prices, and store and process the information to evaluate which store offers better value. The information can then be shared with other shoppers via the Internet, and suddenly we have a powerful “shopping bot” which transcends the online world. Consequently, RFID has the potential to make competition between retailers tougher than ever before and to benefit consumers through lower and more transparent pricing. In addition, RFID tags may also make possible faster and more accurate services, particularly in supermarkets.
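The “shopping bot” scenario above reduces to simple comparison logic once item prices can be read from tags. The following Python sketch is a hypothetical illustration; the store names, prices and the assumed (store, item, price) tuples from tag reads are invented for the example.

    # Hypothetical sketch of the RFID "shopping bot" comparison described above.
    # Assumes each scan yields a (store, item, price_cents) tuple; prices are
    # kept in integer cents to avoid floating-point artefacts.
    from collections import defaultdict

    def cheapest_store(scans):
        """Return the (store, total_cents) pair with the lowest basket total."""
        totals = defaultdict(int)
        for store, _item, price_cents in scans:
            totals[store] += price_cents
        return min(totals.items(), key=lambda kv: kv[1])

    scans = [
        ("Store A", "milk 2L", 240), ("Store A", "razor blades", 1250),
        ("Store B", "milk 2L", 210), ("Store B", "razor blades", 1199),
    ]
    print(cheapest_store(scans))  # ('Store B', 1409)

Shared over the Internet, aggregated records of this kind are what would turn individual scanning into the store-by-store price transparency the article anticipates.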
Shopping carts are mounted with computers which automatically register all items put into the cart and enable the customer to keep track of items, their prices and the total amount (Blau). RFID can also be used to find the location of items in the store and show more detailed information on a product (origin, use-by date, content etc.), and as the customer passes through the checkout, all purchases are registered automatically in a matter of seconds.

Privacy Protection

In Australia, the Privacy Amendment (Private Sector) Act 2000, with its ten National Privacy Principles (NPP), has been highly criticised over the last few years as being much too open to interpretation and thus difficult to enforce. In short, the NPP allow marketers to use non-sensitive personal information for direct marketing purposes without seeking the individual’s consent if it is impracticable to do so (“Guidelines”). That is, as long as they make available a privacy policy explaining why the data is collected and who will have access, they ensure that the data is correct and up to date, protected from unauthorised access, and that individuals are given access to their data upon request (“Guidelines”). In 2003 the Spam Act was introduced in order to take a tougher stand on the escalating problem of massive amounts of unsolicited emails filling up inboxes, threatening the whole concept of the Internet and its many benefits. In essence, the Spam Act will not allow commercial electronic messages to be sent without the recipient’s prior consent or without a possibility to unsubscribe (“Spam”). In the same manner that the Spam Act was passed to regulate the collection of Internet users’ contact information, it may become necessary to regulate the collection and use of data obtained via RFID if the NPP are deemed inadequate. The difficulty will be to do so and at the same time safeguard many of the positive side-effects the technology may have for businesses and consumers. As argued by Roger Clarke, privacy has to be balanced against many other, often competing interests: “The privacy interest of one person may conflict with that of another person, or of a group of people, or of an organisation, or of society as a whole. Hence: Privacy Protection is a process of finding appropriate balances between privacy and multiple competing interests.” It is therefore recommended that legislators and policy makers keep up with the development and undertake significant research into both sides before any legislation is passed, so that the best interests of consumers and business are catered for.

Can There Be a Win-Win Situation?

Although business can expect some significant gains from the use of RFID, particularly through more effective value chain management but also from more substantial and better quality business intelligence, consumers may in fact be the real winners as new and better business concepts, products and services are made available. Further, with the increased transparency in business, consumers can use the vast amounts of information available to find the best products and services, at the best price, and from the best provider. With the aid of smart software, such as search agents, this will be a rather effortless task and will provide consumers with a real advantage. But this assumes consumers are aware of the benefits and how they can be exploited, and have the means to do so – something that will require some skill, interest, money and time.
Consumers will also have to give up some privacy in order to take full advantage of the new technology. For the industry, the main challenge will be communicating what these advantages are, as acceptance, adoption and thus also return on investment will depend upon it. For legislators and policy makers, the major dilemma will be to provide a regulatory framework that is flexible but distinct, and that will prevent abuse and at the same time enable positive outcomes for both business and consumers. It is a fine line that should be trodden wisely in order to create a future where everyone can gain from the benefits of using this technology.

References

Baard, Mark. “Lawmakers Alarmed by RFID Spying.” Wired News 26 Feb. 2004. 9 Mar. 2004 <http://www.wired.com/news/privacy/0,1848,62433,00.html>.

Baard, Mark. “Watchdog Push for RFID Laws.” Wired News 5 Apr. 2004. 6 Apr. 2004 <http://www.wired.com/news/privacy/0,1848,62922,00.html>.

Batista, Elisa. “What Your Clothes Say about You.” Wired News 12 Mar. 2003. 8 Mar. 2004 <http://www.wired.com/news/wireless/0,1382,58006,00.html>.

Blau, John. “Så fungerar det digitala snabbköpet.” PC för Alla 1 (2004). 8 Mar. 2004 <http://www.idg.se/ArticlePages/200402/27/20040227165630_IDG.se760/20040227165630_IDG.se760.dbp.asp>.

Clarke, Roger. Beyond the OECD Guidelines: Privacy Protection for the 21st Century. 4 Jan. 2000. 15 Mar. 2004 <http://www.anu.edu.au/people/Roger.Clarke/DV/PP21C.html>.

Grassley, Tanya. “Retailers Outfit Stores with Tech.” Wired News 18 Dec. 2002. 8 Mar. 2004 <http://www.wired.com/news/holidays/0,1882,56885,00.html>.

“Guidelines to the National Privacy Principles.” The Office of the Federal Privacy Commissioner 2001. 4 Apr. 2004 <http://www.privacy.gov.au/publications/nppgl_01.html#sum>.

Hesseldahl, Peter. Den globale organisme. Copenhagen: Aschehoug, 2002. 24 Apr. 2004 <http://www.global-organisme.dk/e-bog/den_globale_organisme.pdf>.

“Prada’s Smart Tags Too Clever?” Wired News 27 Oct. 2002. 9 Mar. 2004 <http://www.wired.com/news/technology/0,1282,56042,00.html>.

“Spam.” DCITA 2004. 4 Apr. 2004 <http://www2.dcita.gov.au/ie/trust/improving/spam>.

Swedberg, Claire. “Putting Drinks on the Cuff.” RFID Journal 15 Jun. 2004. 15 Jun. 2004 <http://www.rfidjournal.com/article/articleview/987/1/1/>.
APA, Harvard, Vancouver, ISO, and other styles
