Journal articles on the topic 'Immersive video coding'




Consult the top 44 journal articles for your research on the topic 'Immersive video coding.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Boyce, Jill M., Renaud Dore, Adrian Dziembowski, Julien Fleureau, Joel Jung, Bart Kroon, Basel Salahieh, Vinod Kumar Malamal Vadakital, and Lu Yu. "MPEG Immersive Video Coding Standard." Proceedings of the IEEE 109, no. 9 (September 2021): 1521–36. http://dx.doi.org/10.1109/jproc.2021.3062590.

2

Mieloch, Dawid, Adrian Dziembowski, Marek Domański, Gwangsoon Lee, and Jun Young Jeong. "Color-dependent pruning in immersive video coding." Journal of WSCG 30, no. 1-2 (2022): 91–98. http://dx.doi.org/10.24132/jwscg.2022.11.

Abstract:
This paper presents a color-dependent method of removing inter-view redundancy from multiview video. The pruning of input views decides which fragments of views are redundant, i.e., do not provide new information about the three-dimensional scene, as these fragments were already visible from different views. The proposed modification of the pruning uses both color and depth and utilizes an adaptive pruning threshold, which increases robustness against noisy input. As the performed experiments have shown, the proposal provides a significant improvement in the quality of encoded multiview videos and decreases erroneous areas in the decoded video caused by different camera characteristics, specular surfaces, and mirror-like reflections. The pruning method proposed by the authors of this paper was evaluated by experts of ISO/IEC JTC1/SC29/WG11 MPEG and included by them in the Test Model of MPEG Immersive Video.
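Purely as an illustration, a minimal sketch of the kind of color-and-depth pruning test with an adaptive threshold described above might look as follows; the threshold-adaptation rule, parameter names, and noise model are assumptions for the example, not the authors' implementation.

```python
import numpy as np

def prune_redundant_pixels(color_diff, depth_diff, noise_level,
                           base_color_thr=10.0, base_depth_thr=0.02):
    """Mark pixels of an additional view as redundant (prunable) when both
    their color and depth differences to the reprojected base view are small.

    color_diff, depth_diff: per-pixel absolute differences (2D arrays).
    noise_level: rough estimate of input noise used to relax the thresholds,
                 mimicking an adaptive pruning threshold (assumed rule).
    Returns a boolean mask where True means the pixel can be pruned.
    """
    # Adaptive thresholds: noisier input -> more tolerant comparison.
    color_thr = base_color_thr * (1.0 + noise_level)
    depth_thr = base_depth_thr * (1.0 + noise_level)

    # A pixel is pruned only if it is similar in BOTH color and depth,
    # i.e., the color-dependent extension of a depth-only redundancy test.
    return (color_diff < color_thr) & (depth_diff < depth_thr)

# Example with random data standing in for reprojection differences.
rng = np.random.default_rng(0)
mask = prune_redundant_pixels(rng.uniform(0, 30, (4, 4)),
                              rng.uniform(0, 0.05, (4, 4)),
                              noise_level=0.5)
print(mask)
```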
3

Wien, Mathias, Jill M. Boyce, Thomas Stockhammer, and Wen-Hsiao Peng. "Standardization Status of Immersive Video Coding." IEEE Journal on Emerging and Selected Topics in Circuits and Systems 9, no. 1 (March 2019): 5–17. http://dx.doi.org/10.1109/jetcas.2019.2898948.

4

Dziembowski, Adrian, Dawid Mieloch, Marek Domański, Gwangsoon Lee, and Jun Young Jeong. "Spatiotemporal redundancy removal in immersive video coding." Journal of WSCG 30, no. 1-2 (2022): 54–62. http://dx.doi.org/10.24132/jwscg.2022.7.

Abstract:
In this paper, the authors describe two methods designed for reducing the spatiotemporal redundancy of the video within the MPEG Immersive video (MIV) encoder: patch occupation modification and cluster splitting. These methods allow optimizing two important parameters of the immersive video: bitrate and pixelrate. The patch occupation modification method significantly decreases the number of active pixels within texture and depth video produced by the MIV encoder. Cluster splitting decreases the total area needed for storing the texture and depth information from multiple input views, decreasing the pixelrate. Both methods proposed by the authors of this paper were appreciated by the experts of the ISO/IEC JTC1/SC29/WG11 MPEG and are included in the Test Model for MPEG Immersive video (TMIV), which is the reference software implementation of the MIV standard.
5

Wien, Mathias, Jill M. Boyce, Thomas Stockhammer, and Wen-Hsiao Peng. "Guest Editorial Immersive Video Coding and Transmission." IEEE Journal on Emerging and Selected Topics in Circuits and Systems 9, no. 1 (March 2019): 1–4. http://dx.doi.org/10.1109/jetcas.2019.2899531.

6

Salahieh, Basel, Wayne Cochran, and Jill Boyce. "Delivering Object-Based Immersive Video Experiences." Electronic Imaging 2021, no. 18 (January 18, 2021): 103–1. http://dx.doi.org/10.2352/issn.2470-1173.2021.18.3dia-103.

Abstract:
Immersive video enables interactive, natural consumption of visual content by empowering a user to navigate through six degrees of freedom, with motion parallax and wide-angle rotation. Supporting immersive experiences requires content captured by multiple cameras and efficient video coding to meet bandwidth and decoder complexity constraints, while delivering high-quality video to end users. The Moving Picture Experts Group (MPEG) is developing an immersive video (MIV) standard for data access and delivery of such content. One of the MIV operating modes is object-based immersive video coding, which enables innovative use cases where the streaming bandwidth can be better allocated to objects of interest and users can personalize the rendered streamed content. In this paper, we describe a software implementation of the object-based solution on top of the MPEG Test Model for Immersive Video (TMIV). We demonstrate how encoding foreground objects can lead to a significant saving in pixel rate and bitrate while still delivering better subjective and objective results compared to the generic MIV operating mode without the object-based solution.
7

Jeong, JongBeom, Dongmin Jang, Jangwoo Son, and Eun-Seok Ryu. "3DoF+ 360 Video Location-Based Asymmetric Down-Sampling for View Synthesis to Immersive VR Video Streaming." Sensors 18, no. 9 (September 18, 2018): 3148. http://dx.doi.org/10.3390/s18093148.

Abstract:
Recently, with the increasing demand for virtual reality (VR), experiencing immersive content with VR has become easier. However, a tremendous amount of computation and bandwidth is required when processing 360 videos. Moreover, additional information such as the depth of the video is required to enjoy stereoscopic 360 content. Therefore, this paper proposes an efficient method of streaming high-quality 360 videos. To reduce the bandwidth when streaming and synthesizing 3DoF+ 360 videos, which support limited movements of the user, a suitable down-sampling ratio and quantization parameter are derived from an analysis of the relationship between bitrate and peak signal-to-noise ratio. High-efficiency video coding (HEVC) is used to encode and decode the 360 videos, and the view synthesizer produces the video of the intermediate view, providing the user with an immersive experience.
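A minimal sketch of the kind of operating-point selection implied above: it picks a (down-sampling ratio, quantization parameter) pair from measured bitrate-PSNR points under a bandwidth budget. The data layout and numbers are invented for illustration and are not taken from the paper.

```python
def pick_operating_point(measurements, bitrate_budget_kbps):
    """Pick the (down-sampling ratio, QP) pair with the highest PSNR that
    still fits within the bitrate budget.

    measurements: list of dicts such as
        {"ratio": 0.5, "qp": 32, "bitrate_kbps": 6000.0, "psnr_db": 36.9}
    """
    feasible = [m for m in measurements if m["bitrate_kbps"] <= bitrate_budget_kbps]
    if not feasible:
        # Nothing fits: fall back to the cheapest configuration.
        return min(measurements, key=lambda m: m["bitrate_kbps"])
    return max(feasible, key=lambda m: m["psnr_db"])

points = [
    {"ratio": 1.0, "qp": 27, "bitrate_kbps": 15000.0, "psnr_db": 40.1},
    {"ratio": 0.5, "qp": 27, "bitrate_kbps": 9000.0,  "psnr_db": 38.7},
    {"ratio": 0.5, "qp": 32, "bitrate_kbps": 6000.0,  "psnr_db": 36.9},
]
print(pick_operating_point(points, bitrate_budget_kbps=10000.0))
```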
8

Samelak, Jarosław, Adrian Dziembowski, and Dawid Mieloch. "Advanced HEVC Screen Content Coding for MPEG Immersive Video." Electronics 11, no. 23 (December 5, 2022): 4040. http://dx.doi.org/10.3390/electronics11234040.

Abstract:
This paper presents a modified HEVC Screen Content Coding (SCC) that was adapted to be more efficient as the internal video coding of the MPEG Immersive Video (MIV) codec. The basic, unmodified SCC is already known to be useful in such an application. However, in this paper, we propose three additional improvements to SCC to increase the efficiency of immersive video coding. First, we analyze the use of quarter-pel accuracy in the intra block copy technique to provide a more effective search for the best candidate block to be copied in the encoding process. The second proposal is the use of tiles to allow inter-view prediction inside MIV atlases. The last proposed improvement is the addition of an MIV bitstream parser in the HEVC encoder that enables selecting the most efficient coding configuration depending on the type of currently encoded data. The experimental results show that the proposal increases the compression efficiency for natural content sequences by almost 7% and simultaneously decreases the computational time of encoding by more than 15%, making the proposal very valuable for further research on immersive video coding.
9

Storch, Iago, Luis A. da Silva Cruz, Luciano Agostini, Bruno Zatt, and Daniel Palomino. "The Impacts of Equirectangular 360-degrees Videos in the Intra-Frame Prediction of HEVC." Journal of Integrated Circuits and Systems 14, no. 1 (April 29, 2019): 1–10. http://dx.doi.org/10.29292/jics.v14i1.46.

Abstract:
Recent technological advancements have allowed videos to evolve from simple sequences of 2D images displayed on a flat screen into spherical representations of one's surroundings, capable of creating a realistic immersive experience when allied to head-mounted displays. In order to exploit the existing infrastructure for video coding, 360-degree videos are pre-processed and then encoded by conventional video coding standards. However, the flattened version of 360-degree videos presents some peculiarities which are not present in conventional videos and, therefore, may not be properly exploited by conventional video coders. Aiming to find evidence that conventional video encoders can be adapted to perform better over 360-degree videos, this work evaluates the intra-frame prediction performed by High Efficiency Video Coding over 360-degree videos in the equirectangular projection. Experimental results show that 360-degree videos present spatial properties that make some regions of the frame likely to be encoded using a reduced set of prediction modes and block sizes. This behavior could be used in the development of fast-decision and energy-saving algorithms that evaluate a reduced set of prediction modes and block sizes depending on the region of the frame being encoded.
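As a hypothetical illustration of the fast-decision idea these findings suggest, the sketch below restricts the candidate HEVC intra modes for blocks lying in the stretched polar bands of an equirectangular frame. The band size and the retained mode subset are assumptions, not results reported in the paper.

```python
def candidate_intra_modes(block_row, frame_height, all_modes=range(35)):
    """Return a reduced candidate set of HEVC intra modes depending on where
    the block lies in an equirectangular (ERP) frame.

    Near the top and bottom of the frame (the horizontally stretched polar
    regions) a small, horizontal-biased subset is evaluated; elsewhere the
    full set is kept. Mode indices follow HEVC (0 = planar, 1 = DC,
    10 = horizontal).
    """
    polar_band = frame_height // 6  # assumed height of each "polar" band
    if block_row < polar_band or block_row >= frame_height - polar_band:
        return [0, 1, 8, 9, 10, 11, 12]  # reduced, horizontal-biased subset
    return list(all_modes)

print(len(candidate_intra_modes(block_row=10,  frame_height=1920)))  # reduced set
print(len(candidate_intra_modes(block_row=960, frame_height=1920)))  # full set
```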
10

Park, Dohyeon, Sung-Gyun Lim, Kwan-Jung Oh, Gwangsoon Lee, and Jae-Gon Kim. "Nonlinear Depth Quantization Using Piecewise Linear Scaling for Immersive Video Coding." IEEE Access 10 (2022): 4483–94. http://dx.doi.org/10.1109/access.2022.3140537.

11

Zhang, Peng, Tony Thomas, Tao Zhuo, Wei Huang, and Hanqiao Huang. "Object coding based video authentication for privacy protection in immersive communication." Journal of Ambient Intelligence and Humanized Computing 8, no. 6 (September 1, 2016): 871–84. http://dx.doi.org/10.1007/s12652-016-0401-4.

12

LIM, Sung-Gyun, Dong-Ha KIM, Kwan-Jung OH, Gwangsoon LEE, Jun Young JEONG, and Jae-Gon KIM. "Wider Depth Dynamic Range Using Occupancy Map Correction for Immersive Video Coding." IEICE Transactions on Information and Systems E106.D, no. 5 (May 1, 2023): 1102–5. http://dx.doi.org/10.1587/transinf.2022edl8077.

13

Timmerer, Christian. "MPEG column: 128th MPEG meeting in Geneva, Switzerland." ACM SIGMultimedia Records 11, no. 4 (December 2019): 1. http://dx.doi.org/10.1145/3530839.3530846.

Abstract:
The 128th MPEG meeting concluded on October 11, 2019 in Geneva, Switzerland with the following topics:
• Low Complexity Enhancement Video Coding (LCEVC) promoted to Committee Draft
• 2nd Edition of Omnidirectional Media Format (OMAF) has reached the first milestone
• Genomic Information Representation - Part 4 Reference Software and Part 5 Conformance promoted to Draft International Standard
The corresponding press release of the 128th MPEG meeting can be found here: https://mpeg.chiariglione.org/meetings/128. In this report we will focus on video coding aspects (i.e., LCEVC) and immersive media applications (i.e., OMAF). At the end, we will provide an update related to adaptive streaming (i.e., DASH and CMAF).
14

Timmerer, Christian. "MPEG column: 125th MPEG meeting in Marrakesh, Morocco." ACM SIGMultimedia Records 11, no. 1 (March 2019): 1. http://dx.doi.org/10.1145/3458462.3458467.

Abstract:
The original blog post can be found at the Bitmovin Techblog and has been modified/updated here to focus on and highlight research aspects. The 125th MPEG meeting concluded on January 18, 2019 in Marrakesh, Morocco with the following topics:
• Network-Based Media Processing (NBMP) - MPEG promotes NBMP to Committee Draft stage
• 3DoF+ Visual - MPEG issues Call for Proposals on Immersive 3DoF+ Video Coding Technology
• MPEG-5 Essential Video Coding (EVC) - MPEG starts work on MPEG-5 Essential Video Coding
• ISOBMFF - MPEG issues Final Draft International Standard of Conformance and Reference Software for formats based on the ISO Base Media File Format (ISOBMFF)
• MPEG-21 User Description - MPEG finalizes 2nd edition of the MPEG-21 User Description
The corresponding press release of the 125th MPEG meeting can be found here. In this blog post I'd like to focus on those topics potentially relevant for over-the-top (OTT), namely NBMP, EVC, and ISOBMFF.
15

Hsu, Chih-Fan, Tse-Hou Hung, and Cheng-Hsin Hsu. "Optimizing Immersive Video Coding Configurations Using Deep Learning: A Case Study on TMIV." ACM Transactions on Multimedia Computing, Communications, and Applications 18, no. 1 (January 31, 2022): 1–25. http://dx.doi.org/10.1145/3471191.

Abstract:
Immersive video streaming technologies improve Virtual Reality (VR) user experience by providing users more intuitive ways to move in simulated worlds, e.g., with the 6 Degree-of-Freedom (6DoF) interaction mode. A naive method to achieve 6DoF is deploying cameras at the numerous positions and orientations that may be required based on users' movement, which unfortunately is expensive, tedious, and inefficient. A better solution for realizing 6DoF interactions is to synthesize target views on-the-fly from a limited number of source views. While such view synthesis is enabled by the recent Test Model for Immersive Video (TMIV) codec, TMIV dictates manually composed configurations, which cannot exercise the tradeoff among video quality, decoding time, and bandwidth consumption. In this article, we study the limitation of TMIV and solve its configuration optimization problem by searching for the optimal configuration in a huge configuration space. We first identify the critical parameters in the TMIV configurations. Then, we introduce two Neural Network (NN)-based algorithms from two heterogeneous aspects: (i) a Convolutional Neural Network (CNN) algorithm solving a regression problem and (ii) a Deep Reinforcement Learning (DRL) algorithm solving a decision-making problem. We conduct both objective and subjective experiments to evaluate the CNN and DRL algorithms on two diverse datasets: an equirectangular and a perspective projection dataset. The objective evaluations reveal that both algorithms significantly outperform the default configurations. In particular, with the equirectangular (perspective) projection dataset, the proposed algorithms only require 95% (23%) decoding time, stream 79% (23%) views, and improve the utility by 6% (73%) on average. The subjective evaluations confirm that the proposed algorithms consume fewer resources while achieving Quality of Experience (QoE) comparable to the default and the optimal TMIV configurations.
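Purely as a sketch of the kind of scalar objective such a configuration search could optimize (the paper's actual utility definition is not reproduced here), the example below scores a TMIV configuration by rewarding quality and penalizing decoding time and bandwidth; all weights and reference values are invented.

```python
def config_utility(psnr_db, decode_time_s, bandwidth_mbps,
                   w_q=1.0, w_t=0.3, w_b=0.3,
                   ref_time_s=10.0, ref_bw_mbps=50.0):
    """Scalar utility rewarding quality and penalizing decoding time and
    bandwidth, so heterogeneous TMIV configurations can be ranked."""
    return (w_q * psnr_db
            - w_t * (decode_time_s / ref_time_s) * 10.0
            - w_b * (bandwidth_mbps / ref_bw_mbps) * 10.0)

configs = {
    "default":   config_utility(psnr_db=36.0, decode_time_s=12.0, bandwidth_mbps=60.0),
    "candidate": config_utility(psnr_db=35.5, decode_time_s=9.0,  bandwidth_mbps=45.0),
}
print(max(configs, key=configs.get))  # configuration with the best trade-off
```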
16

Takamura, Seishi. "58-1: Invited Paper: New Developments in Video Coding towards Immersive Visual Experience." SID Symposium Digest of Technical Papers 48, no. 1 (May 2017): 857–59. http://dx.doi.org/10.1002/sdtp.11757.

17

Lim, Sung-Gyun, Hyun-Ho Kim, and Yong-Hwan Kim. "Adaptive Patch-Wise Depth Range Linear Scaling Method for MPEG Immersive Video Coding." IEEE Access 11 (2023): 133440–50. http://dx.doi.org/10.1109/access.2023.3336892.

18

Timmerer, Christian. "MPEG Column: 129th MPEG Meeting in Brussels, Belgium." ACM SIGMultimedia Records 12, no. 1 (March 2020): 1. http://dx.doi.org/10.1145/3548555.3548559.

Abstract:
The 129th MPEG meeting concluded on January 17, 2020 in Brussels, Belgium with the following topics:
• Coded representation of immersive media - WG11 promotes Network-Based Media Processing (NBMP) to the final stage
• Coded representation of immersive media - Publication of the Technical Report on Architectures for Immersive Media
• Genomic information representation - WG11 receives answers to the joint call for proposals on genomic annotations in conjunction with ISO TC 276/WG 5
• Open font format - WG11 promotes Amendment of Open Font Format to the final stage
• High efficiency coding and media delivery in heterogeneous environments - WG11 progresses Baseline Profile for MPEG-H 3D Audio
• Multimedia content description interface - Conformance and Reference Software for Compact Descriptors for Video Analysis promoted to the final stage
19

Alvarez-Mesa, Mauricio, Sergio Sanz-Rodríguez, Chi Ching Chi, Maciej Glowiak, and Roland Haring. "8K/16K Video and 3D Audio Coding and Playback for Large-Screen Immersive Spaces." SMPTE Motion Imaging Journal 130, no. 1 (January 2021): 50–58. http://dx.doi.org/10.5594/jmi.2020.3036711.

20

Heirich, Marissa S., Lanja S. Sinjary, Maisa S. Ziadni, Sandra Sacks, Alexandra S. Buchanan, Sean C. Mackey, and Jordan L. Newmark. "Use of Immersive Learning and Simulation Techniques to Teach and Research Opioid Prescribing Practices." Pain Medicine 20, no. 3 (September 12, 2018): 456–63. http://dx.doi.org/10.1093/pm/pny171.

Abstract:
Introduction: Unsafe opioid prescribing practices to treat acute and chronic pain continue to contribute to the opioid overdose crisis in the United States, a growing public health emergency that harms patients and their communities. Poor opioid prescribing practices stem in part from a lack of education and skills training surrounding pain and opioid management.
Methods: As part of the Clinical Pain Medicine Fellowship at Stanford University, physicians were given the opportunity to participate in a pilot program to practice opioid management in a live, simulated interaction. Twenty-seven physician trainees participated in the simulation with a live, standardized patient actor. Before beginning the simulation, participants were given a detailed patient history that included the patient's risk for opioid abuse. They were also provided with relevant risk evaluation and mitigation (REM) tools. All simulation interactions were video-recorded and coded by two independent reviewers. A detailed coding scheme was developed before video analysis, and an inter-rater reliability score showed substantial agreement between reviewers.
Results: Contrary to expectations, many of the observed performances by trainees contained aspects of unsafe opioid prescribing, given the patient history. Many trainees did not discuss their patient's aberrant behaviors related to opioids or the patient's risk for opioid abuse. Marked disparities were also observed between the trainees' active patient interactions and their written progress notes.
Discussion: This simulation addresses a pressing need to further educate, train, and provide point-of-care tools for providers prescribing opioids. We present our experience and preliminary findings.
21

Wien, Mathias. "MPEG Visual Quality Assessment Advisory Group." ACM SIGMultimedia Records 13, no. 3 (September 2021): 1. http://dx.doi.org/10.1145/3578495.3578498.

Abstract:
The perceived visual quality is of utmost importance in the context of visual media compression, such as 2D, 3D, immersive video, and point clouds. The trade-off between compression efficiency and computational/implementation complexity has a crucial impact on the success of a compression scheme. This specifically holds for the development of visual media compression standards, which typically aims at maximum compression efficiency using state-of-the-art coding technology. In MPEG, the subjective and objective assessment of visual quality has always been an integral part of the standards development process. Due to the significant effort of formal subjective evaluations, the standardization process typically relies on such formal tests in the starting phase and for verification, while in the development phase objective metrics are used. In the new MPEG structure, established in 2020, a dedicated advisory group has been installed for the purpose of providing, maintaining, and developing visual quality assessment methods suitable for use in the standardization process. This column lays out the scope and tasks of this advisory group and reports on its first achievements and developments. After a brief overview of the organizational structure, current projects are presented and initial results are reported.
22

Borderie, Joceran, and Nicolas Michinov. "Identifying Flow in Video Games." International Journal of Gaming and Computer-Mediated Simulations 8, no. 3 (July 2016): 19–38. http://dx.doi.org/10.4018/ijgcms.2016070102.

Abstract:
The flow, or optimal experience, is a highly focused mental state leading to immersion and high performance. Although flow theory has been widely applied to research on videogames, methods based on behavior observation to identify flow states are limited in this domain. The aim of the present study was to develop a new method to detect flow episodes occurring during a gaming session from observation of players' behaviors and analysis of game replays. The authors developed an optimal experience behavior pattern and a related coding scheme. In-depth interviews were then conducted to determine whether episodes coded as flow by researchers were also described as such by the players themselves. Findings showed that intense concentration followed by an expression of satisfaction could be a useful pattern to detect flow. Unexpectedly, the interviews suggested that frustration, as well as joy, may also be an emotional signature of flow. This study shed new light on the relationship between gameplay and flow.
23

Soysal, Yılmaz. "Managing A Discursive Journey for Classroom Inquiry: Examination of a Teacher’s Discursive Moves." Journal of Science Learning 4, no. 4 (September 19, 2021): 394–411. http://dx.doi.org/10.17509/jsl.v4i4.32029.

Abstract:
This study presents an analysis of teacher discursive moves (TDMs) that aid students in altering their thinking and talking systems. The participant was a science teacher who handled the immersion inquiry activities. The primary data source was the video recorded in the classroom. This video-based data was analyzed through systematic observation in two phases comprising coding and counting to reveal the mechanics of the discursive journey. Three assertions were made for the dynamics of the discursive journey. First, the teacher enacted a wide range of TDMs incorporating dialogically/monologically oriented, simplified (observe-compare-predict), and rather sophisticated moves (challenging). The challenging moves were the most featured among all analytical TDMs. Second, once higher-order categories were composed by collapsing subcategories of the displayed TDMs, the communicating-framing moves were the most prominently performed. Lastly, the teacher created an argumentative atmosphere in which the students had the right to evaluate and judge their classmates' and teacher's utterances, which modified the epistemic and social authority of the discursive journey. Finally, educational recommendations are offered in the context of teachers noticing the mechanics and dynamics of the discourse journey.
24

Messias, Adria Maria, and Fernanda Moreto Impolcetto. "Atividades circenses na educação física: possibilidades e limites para a educação infantil." MOTRICIDADES: Revista da Sociedade de Pesquisa Qualitativa em Motricidade Humana 5, no. 1 (April 29, 2021): 96–105. http://dx.doi.org/10.29181/2594-6463-2021-v5-n1-secesp-p96-105.

Abstract:
Circus activities in physical education: possibilities and limits for early childhood education. The objective of the research was to analyze the possibilities and limits that emerged from the implementation of a Didactic Unit of Circus Activities. The qualitative research, based on the Participant Research method, was carried out with a class of the second stage of Early Childhood Education. As an instrument for data construction, the Field Diary was chosen, based on a script for the records observed during the classes. The constructed data were treated through the analysis of coding categories (simple coding). The main results showed that: 1) several resources (videos, music, costumes) and materials (low-cost or recyclable) allowed the immersion of students in the circus universe; 2) some limitations (lack of an adequate physical structure, extra events, and the need for more time/a greater number of classes) influenced the unfolding of the classes but did not impair the teaching and learning process of Circus Activities. Thus, the feasibility and potential of teaching Circus Activities in physical education classes in Early Childhood Education were confirmed. Keywords: Physical Education School. Early Childhood Education. Circus Activities. Body Culture of Movement.
25

Dziembowski, Adrian, Dawid Mieloch, Jun Young Jeong, and Gwangsoon Lee. "Immersive Video Postprocessing for Efficient Video Coding." IEEE Transactions on Circuits and Systems for Video Technology, 2023, 1. http://dx.doi.org/10.1109/tcsvt.2023.3243381.

26

Yang, Xu, Minfeng Huang, Lei Luo, Hongwei Guo, and Ce Zhu. "Efficient Panoramic Video Coding for Immersive Metaverse Experience." IEEE Network, 2023, 1. http://dx.doi.org/10.1109/mnet.2023.3319958.

27

Schwarz, Sebastian, Nahid Sheikhipour, Vida Fakour Sevom, and Miska M. Hannuksela. "Video coding of dynamic 3D point cloud data." APSIPA Transactions on Signal and Information Processing 8 (2019). http://dx.doi.org/10.1017/atsip.2019.24.

Abstract:
Due to the increased popularity of augmented (AR) and virtual (VR) reality experiences, the interest in representing the real world in an immersive fashion has never been higher. Distributing such representations enables users all over the world to freely navigate in never before seen media experiences. Unfortunately, such representations require a large amount of data, not feasible for transmission on today's networks. Thus, efficient compression technologies are in high demand. This paper proposes an approach to compress 3D video data utilizing 2D video coding technology. The proposed solution was developed to address the needs of “tele-immersive” applications, such as VR, AR, or mixed reality with “Six Degrees of Freedom” capabilities. Volumetric video data is projected on 2D image planes and compressed using standard 2D video coding solutions. A key benefit of this approach is its compatibility with readily available 2D video coding infrastructure. Furthermore, objective and subjective evaluation shows significant improvement in coding efficiency over reference technology. The proposed solution was contributed to and evaluated in international standardization. Although it was not selected as the winning proposal, a very similar solution has since been developed.
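The toy sketch below illustrates only the general idea of projecting volumetric data onto a 2D image plane so that an ordinary 2D video encoder can be reused; it is not the proposed codec and omits patch segmentation, occupancy maps, and padding. All names and values are illustrative.

```python
import numpy as np

def project_points_to_maps(points, colors, width, height, scale=1.0):
    """Orthographically project 3D points onto the XY plane, keeping the
    nearest point per pixel, and return a depth map plus a texture (color)
    map that could then be fed to a 2D video encoder.

    points: (N, 3) array of x, y, z; colors: (N, 3) array of RGB in [0, 255].
    """
    depth = np.full((height, width), np.inf)
    texture = np.zeros((height, width, 3), dtype=np.uint8)
    for (x, y, z), c in zip(points, colors):
        u, v = int(x * scale), int(y * scale)
        if 0 <= u < width and 0 <= v < height and z < depth[v, u]:
            depth[v, u] = z          # keep the nearest surface
            texture[v, u] = c
    return depth, texture

pts = np.array([[2.0, 3.0, 1.0], [2.0, 3.0, 0.5]])
cols = np.array([[255, 0, 0], [0, 255, 0]])
d, t = project_points_to_maps(pts, cols, width=8, height=8)
print(d[3, 2], t[3, 2])  # the nearer (z = 0.5, green) point wins
```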
28

Jeong, Jong-Beom, Soonbin Lee, and Eun-Seok Ryu. "DATRA-MIV: Decoder-Adaptive Tiling and Rate Allocation for MPEG Immersive Video." ACM Transactions on Multimedia Computing, Communications, and Applications, February 19, 2024. http://dx.doi.org/10.1145/3648371.

Abstract:
The emerging immersive video coding standard Moving Picture Experts Group (MPEG) immersive video (MIV), which is being standardized by the MPEG-Immersive (MPEG-I) group, enables six degrees of freedom (6DoF) in a virtual reality (VR) environment that represents both natural and computer-generated scenes using multi-view video compression. MIV eliminates the redundancy between multi-view videos and merges the residuals into multiple pictures, called atlases. Thus, bitstreams with encoded atlases are generated and a corresponding number of decoders is needed, which is challenging for lightweight devices with a single decoder. This paper proposes a decoder-adaptive tiling and rate allocation (DATRA) method for MIV to overcome this challenge. First, the proposed method divides atlases into subpictures considering two aspects: (i) subpicture bitstream extraction and merging into one bitstream to use a single decoder, and (ii) separation of each source view from the atlases for rate allocation. Second, the atlases are encoded by versatile video coding (VVC), using extractable subpictures (ES) to divide the atlases into subpictures. Third, each subpicture bitstream is extracted, and asymmetric quality allocation for each subpicture is conducted by considering the residuals in the subpicture. Fourth, the mixed-quality subpictures are merged by using the proposed bitstream merger. Fifth, the merged bitstream is decoded by using a single decoder. Finally, the viewing area of the user is synthesized by using the reconstructed atlases. Experimental results with the VVC test model (VTM) show that the proposed method achieves a 21.37% Bjøntegaard delta rate (BD-rate) saving for immersive video peak signal-to-noise ratio (IV-PSNR) and a 26.76% decoding runtime saving compared to the VTM anchor configuration. Moreover, it supports bitstreams for both multiple decoders and a single decoder without re-encoding, transcoding, or a substantial increase in server-side storage.
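Since the gains above are reported as Bjøntegaard delta rate (BD-rate), a compact, generic sketch of that metric is shown below using the standard cubic-fit formulation over four rate-PSNR points; it is provided for context only and is not code from the paper.

```python
import numpy as np

def bd_rate(rates_ref, psnr_ref, rates_test, psnr_test):
    """Bjøntegaard delta rate: average bitrate difference (%) of the test
    codec versus the reference at equal quality. Negative means savings.

    Each argument is a list of four rate (kbps) / PSNR (dB) points.
    """
    log_r_ref, log_r_test = np.log(rates_ref), np.log(rates_test)
    # Fit log-rate as a cubic function of PSNR for each codec.
    p_ref = np.polyfit(psnr_ref, log_r_ref, 3)
    p_test = np.polyfit(psnr_test, log_r_test, 3)
    # Integrate both fits over the overlapping PSNR interval.
    lo = max(min(psnr_ref), min(psnr_test))
    hi = min(max(psnr_ref), max(psnr_test))
    int_ref = np.polyval(np.polyint(p_ref), hi) - np.polyval(np.polyint(p_ref), lo)
    int_test = np.polyval(np.polyint(p_test), hi) - np.polyval(np.polyint(p_test), lo)
    # Average log-rate difference -> percentage rate difference.
    avg_diff = (int_test - int_ref) / (hi - lo)
    return (np.exp(avg_diff) - 1.0) * 100.0

ref_rates,  ref_psnr  = [1000, 2000, 4000, 8000], [33.0, 36.0, 39.0, 42.0]
test_rates, test_psnr = [900, 1800, 3600, 7200], [33.2, 36.1, 39.1, 42.2]
print(round(bd_rate(ref_rates, ref_psnr, test_rates, test_psnr), 2))  # negative = savings
```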
29

Lee, Gwangsoon, Sangwoon Kwak, Hong-chang Shin, Bong Ho Lee, and Won-sik Cheong. "Partial Access using Group-based MPEG Immersive Video Coding Technology." Proceedings of the International Display Workshops, December 7, 2023, 1351. http://dx.doi.org/10.36463/idw.2023.1351.

30

Lee, Jinho, Gun Bang, Jungwon Kang, Mehrdad Teratani, Gauthier Lafruit, and Haechul Choi. "Performance analysis of multiview video compression based on MIV and VVC multilayer." ETRI Journal, February 2024. http://dx.doi.org/10.4218/etrij.2023-0309.

Abstract:
To represent immersive media providing a six degree-of-freedom experience, moving picture experts group (MPEG) immersive video (MIV) was developed to compress multiview videos. Meanwhile, the state-of-the-art versatile video coding (VVC) also supports multilayer (ML) functionality, enabling the coding of multiview videos. In this study, we designed experimental conditions to assess the performance of these two state-of-the-art standards in terms of objective and subjective quality. We observe that their performances are highly dependent on the conditions of the input source, such as the camera arrangement and the ratio of input views to all views. VVC-ML is efficient when the input source is captured by a planar camera arrangement and many input views are used. Conversely, MIV outperforms VVC-ML when the camera arrangement is non-planar and the ratio of input views to all views is low. In terms of the subjective quality of the synthesized view, VVC-ML causes severe rendering artifacts such as holes when occluded regions exist among the input views, whereas MIV reconstructs the occluded regions correctly but induces rendering artifacts with rectangular shapes at low bitrates.
31

Sugito, Yasuko, Shinya Iwasaki, Kazuhiro Chida, Kazuhisa Iguchi, Kikufumi Kanda, Xuying Lei, Hidenobu Miyoshi, and Kimihiko Kazui. "Video bit-rate requirements for 8K 120-Hz HEVC/H.265 temporal scalable coding: experimental study based on 8K subjective evaluations." APSIPA Transactions on Signal and Information Processing 9 (2020). http://dx.doi.org/10.1017/atsip.2020.4.

Abstract:
8K video parameters were designed to provide an immersive experience; meanwhile, special considerations are necessary to assess the entire screen subjectively. This paper studies the video bit-rate required for 8K 119.88-Hz (120-Hz) and 59.94-Hz (60-Hz) High Efficiency Video Coding (HEVC)/H.265 temporal scalable coding based on subjective evaluation experiments. To investigate the appropriate bit-rate for both 8K 120- and 60-Hz videos for broadcasting purposes, we compress 8K 120-Hz test sequences using software that emulates our real-time HEVC encoder and conduct two types of experiments. The experimental results demonstrate that the required video bit-rate for 8K 120-Hz temporal scalable coding is estimated to be 85–110 Mbps, which is equivalent to the practical bit-rate for 8K 60-Hz videos, and the appropriate bit-rate for 8K 60-Hz video within 8K 120-Hz video at 85 Mbps is assumed to be ~80 Mbps. From the analyses of the encoded videos, it is confirmed that the experimental results are primarily influenced by the image quality on the slice boundary positioned at the middle of the screen height. When conducting the experiments, we determined settings referring to an initial 8K subjective assessment; we further mention requirements for future 8K subjective evaluations derived from the experimental results.
32

Ilola, Lauri, Lukasz Kondrad, Sebastian Schwarz, and Ahmed Hamza. "An Overview of the MPEG Standard for Storage and Transport of Visual Volumetric Video-Based Coding." Frontiers in Signal Processing 2 (April 29, 2022). http://dx.doi.org/10.3389/frsip.2022.883943.

Abstract:
The increasing popularity of virtual, augmented, and mixed reality (VR/AR/MR) applications is driving the media industry to explore the creation and delivery of new immersive experiences. One of the trends is volumetric video, which allows users to explore content unconstrained by the traditional two-dimensional window of the director's view. The ISO/IEC joint technical committee 1 subcommittee 29, better known as the Moving Pictures Experts Group (MPEG), has recently finalized a group of standards under the umbrella of Visual Volumetric Video-based Coding (V3C). These standards aim to efficiently code, store, and transport immersive content with 6 degrees of freedom. The V3C family of standards currently consists of three documents: 1) ISO/IEC 23090-5 defines the generic concepts of volumetric video-based coding and its application to dynamic point cloud data; 2) ISO/IEC 23090-12 specifies another application that enables compression of volumetric video content captured by multiple cameras; and 3) ISO/IEC 23090-10 describes how to store and deliver V3C compressed volumetric video content. Each standard leverages the capabilities of traditional 2D video coding and delivery solutions, allowing for re-use of existing infrastructures, which facilitates fast deployment of volumetric video. This article provides an overview of the generic concepts of V3C, as defined in ISO/IEC 23090-5. Furthermore, it describes V3C carriage-related functionalities specified in ISO/IEC 23090-10 and offers best practices for the community with respect to storage and delivery of volumetric video.
33

Garus, Patrick, Felix Henry, Joel Jung, Thomas Maugey, and Christine Guillemot. "Immersive Video Coding: Should Geometry Information be Transmitted as Depth Maps?" IEEE Transactions on Circuits and Systems for Video Technology, 2021, 1. http://dx.doi.org/10.1109/tcsvt.2021.3100006.

34

Choi, Yongho, The Van Le, Gun Bang, Jinho Lee, and Jin Young Lee. "Efficient immersive video coding using specular detection for high rendering quality." Multimedia Tools and Applications, March 9, 2024. http://dx.doi.org/10.1007/s11042-024-18815-7.

35

Graziosi, D., O. Nakagami, S. Kuma, A. Zaghetto, T. Suzuki, and A. Tabatabai. "An overview of ongoing point cloud compression standardization activities: video-based (V-PCC) and geometry-based (G-PCC)." APSIPA Transactions on Signal and Information Processing 9 (2020). http://dx.doi.org/10.1017/atsip.2020.12.

Abstract:
This article presents an overview of the recent standardization activities for point cloud compression (PCC). A point cloud is a 3D data representation used in diverse applications associated with immersive media including virtual/augmented reality, immersive telepresence, autonomous driving and cultural heritage archival. The international standard body for media compression, also known as the Motion Picture Experts Group (MPEG), is planning to release in 2020 two PCC standard specifications: video-based PCC (V-PCC) and geometry-based PCC (G-PCC). V-PCC and G-PCC will be part of the ISO/IEC 23090 series on the coded representation of immersive media content. In this paper, we provide a detailed description of both codec algorithms and their coding performances. Moreover, we will also discuss certain unique aspects of point cloud compression.
36

Cai, Yangang, Xufeng Li, Yueming Wang, and Ronggang Wang. "An Overview of Panoramic Video Projection Schemes in the IEEE 1857.9 Standard for Immersive Visual Content Coding." IEEE Transactions on Circuits and Systems for Video Technology, 2022, 1. http://dx.doi.org/10.1109/tcsvt.2022.3165878.

37

Sobocinski, Marta, Daryn Dever, Megan Wiedbusch, Foysal Mubarak, Roger Azevedo, and Sanna Järvelä. "Capturing self‐regulated learning processes in virtual reality: Causal sequencing of multimodal data." British Journal of Educational Technology, September 30, 2023. http://dx.doi.org/10.1111/bjet.13393.

Abstract:
This study examines the embodied ways in which learners monitor their cognition while learning about exponential functions in an immersive virtual reality (VR) based game, Pandemic by Prisms of Reality. Traditionally, metacognitive monitoring has been assessed through behavioural traces and verbalised instances. When learning in VR, learners are fully immersed in the learning environment, actively manipulating it based on affordances designed to support learning, offering insights into the relationship between physical interaction and metacognition. The study collected multimodal data from 15 participants, including think-aloud audio, bird's-eye view video recordings and physiological data. Metacognitive monitoring was analysed through qualitative coding of the think-aloud protocol, while movement was measured via optical flow analysis and cognitive load was assessed through heart rate variability analysis. The results revealed embodied metacognition by aligning the data to identify learners' physical states alongside their verbalised metacognition. The findings demonstrated a temporal interplay among cognitive load, metacognitive monitoring, and motion during VR-based learning. Specifically, cognitive load, indicated by the low- and high-frequency heart rate variability index, predicted instances of metacognitive monitoring, and monitoring predicted learners' motion while interacting with the VR environment. This study further provides future directions in understanding self-regulated learning processes during VR learning by utilizing multimodal data to inform real-time adaptive personalised support within these environments.
Practitioner notes
What is already known about this topic: Immersive virtual reality (VR) environments have the potential to offer personalised support based on users' individual needs and characteristics. Self-regulated learning (SRL) involves learners monitoring their progress and strategically regulating their learning when needed. Multimodal data captured during VR learning, such as birds-eye-view video, screen recordings, physiological changes and verbalisations, can provide insights into learners' SRL processes and support needs.
What this paper adds: Provides insights into the embodied aspects of learners' metacognitive monitoring during learning in an immersive VR environment. Demonstrates how SRL processes can be captured via the collection and analysis of multimodal data, including think-aloud audio, bird's-eye view video recordings and physiological data, to capture metacognitive monitoring and movement during VR-based learning. Contributes to the understanding of the interplay between cognitive load, metacognitive monitoring, and motion in immersive VR learning.
Implications for practice and/or policy: Researchers and practitioners can use the causal relationships identified in this study to identify instances of SRL in an immersive VR setting. Educational technology developers can consider the integration of online measures, such as cognitive load and physiological arousal, into adaptive VR environments to enable real-time personalised support for learners based on their self-regulatory needs.
38

Mills, Kathy A., Alinta Brown, and Patricia Funnell. "Virtual reality games for 3D multimodal designing and knowledge across the curriculum." Australian Educational Researcher, March 13, 2024. http://dx.doi.org/10.1007/s13384-024-00695-3.

Abstract:
Immersive virtual reality (VR) is anticipated to peak in development this decade, bringing new opportunities for 3D multimodal designing across all levels of education. The need for students to gain capabilities with multimodal texts (texts that combine two or more modes, such as spoken, written, and visual) is emphasised at all levels of education from P-12 in the Australian Curriculum. Likewise, the use of technology-supported pedagogies is increasing worldwide, rendering multimodal texts ubiquitous across all knowledge domains. This original, qualitative classroom research investigated students' 3D designing of multimodal texts using an immersive VR head-mounted display. Upper primary students (ages 10–12 years, n = 48) transferred their knowledge of ancient Rome through 2D drawing, writing, speaking, and 3D multimodal designing with VR. The application of multimodal analysis to video data, screen recordings, and think-aloud protocols, and the thematic coding of student and teacher interviews, yielded four key findings: (i) VR gaming supported 3D multimodal designing through haptic and embodied experience, (ii) VR improved performance through creative redesigning, (iii) VR supported knowledge application, consolidation, and transfer, and (iv) the pedagogical strengths of VR were situated and transformed practice. This research is timely and significant given the increasing accessibility and affordability of VR and the need to connect research and pedagogical practice to support students' advanced knowledge and capabilities with multimodal learning across the curriculum.
39

Redder, Benjamin Dorrington, and Gareth Schott. "Scholarly Discourse on the Visuality of Ethics in Gaming: An Opening Conversation." Video Journal of Education and Pedagogy, April 5, 2023, 1–10. http://dx.doi.org/10.1163/23644583-bja10037.

Abstract:
This article offers perspectives into some of the key intersections between ethics in representation and game design by showcasing an extended ten-minute version of the presenters' original video about the provocation of visual ethics in gaming, first shown at the summit 'Re/Sponse-able Visual Ethics.' This version particularly divulges further detail into the ludic and congruent real-world dynamics that come into play in coding and facilitating both the game's system of rules and mechanics as well as the player's immersion in its game world. Topics principally addressed included the relationship and function of violence in procedural-based game activity, and game developer intentions or responses to their chosen subjects and themes embedded into their game. While the video and its topics primarily aligned to the presenters' respective disciplines, it is still a valuable platform to instigate new or expand existing avenues of research into the relationship of ethics and gaming.
40

Hollerweger, Elisabeth. "Natur literarisch programmieren?" Jahrbuch der Gesellschaft für Kinder- und Jugendliteraturforschung, December 1, 2022, 141–53. http://dx.doi.org/10.21248/gkjf-jb.95.

Abstract:
Programming Nature with Literature? Virtual Environments in Ursula Poznanski's Cryptos. Since the end of the first decade of the twenty-first century, young adult literature has reacted to the increased threats posed by climate change and environmental crises with a kind of 'future nature writing' that generates dystopian future worlds in which encounters with (more) intact nature are either nostalgically recalled, fantastically recreated or medially simulated. The latter is a relatively new phenomenon that this article examines, taking Ursula Poznanski's novel Cryptos (2020) as its main example. Based on Foucault's theory of space, it first identifies the normative attributions and semantic codings associated with the opposing spaces of the world system and the power constellations in which the characters are located within those spaces. In a second step, the focus shifts to the representation of virtual nature, with particular interest in the literary techniques that initiate immersion on the one hand and reflection on the other. The aim of this two-step approach is to identify the specifics of literary nature programming and to develop categories for analysis. On the basis of this approach, a comparison between Cryptos, Hikikomori (2012) by Kevin Kuhn and Ready Player One (2017) by Ernest Cline exposes similarities and differences in the literary programming of nature. Finally, the relationship between the literary and the medial reference systems, as well as the mechanisms of programmed nature in literature and video games, is examined with reference to Rajewsky's theory of intermediality, so that different facets of writing virtual nature are taken into account.
41

Loess, Nicholas. "Augmentation and Improvisation." M/C Journal 16, no. 6 (November 7, 2013). http://dx.doi.org/10.5204/mcj.739.

Abstract:
Preamble: Medium/Format/Marker Medium/Format/Marker (M/F/M) was a visual-aural improvisational performance involving myself, and musicians Joe Sorbara, and Ben Grossman. It was formed through my work as a PhD candidate at the Improvisation, Community, and Social Practice research initiative at the University of Guelph. This performance was conceived as an attempted intervention against the propensity to reify the “new.” It also sought to address the proliferation of the screen and question how the increased presence of screens in everyday life has augmented the way in which an audience is conceived and positioned. This conception is in direct conversation with my thesis, which is a practice-based research project exploring what the experimental combination of intermediality, improvisation, and the cinema might offer towards developing a reflexive approach to "new" media, screen culture, and expanded cinemas. One of the ways I chose to explore this area involved developing an interface that allowed an audio-visual ensemble to improvise with a film's audio-visual projection. I experimented with different VJ programs. These programs often utilize digital filters and effects to alter images through real-time mixing and layering, much like a DJ does with sound. I found a program developed by Chicago-based artist Ontologist called Ontoplayer, which he developed out of his practice as an improvisational video artist. The program works through a dual-channel interface where two separate digital files could be augmented, with their projected tempo capable of being determined by musicians through a MIDI interface. I conceptualized the performance around the possibility of networking myself with two other musicians via this interface. I approached percussionist Joe Sorbara and multi-instrumentalist Ben Grossman with the idea to use Ontoplayer as a means to improvise with Chris Marker's La Jetée (1962, 28 mins). The film itself would be projected simultaneously in four different formats: 16mm celluloid, VHS, Blu-ray, and Standard Definition video (the format the ensemble improvised with) projected onto four separate screens. From left to right, the first screen contained the projected version of La Jetée that we improvised with, next to it was its Blu-ray format, next to that, a degraded VHS copy of the film, and next to that, the 16mm print. The performance materialized through performing a number of improvisatory experiments. A last minute experiment conceived a few hours before the performance involved placing contact microphones overtop of the motor on a Bell & Howell 16mm projector. The projector was tested in the days leading up to the performance and it ran as smoothly as could be expected. It had a nice cacophonous hum that Ben Grossman intended to improvise with using some contact mics attached directly over the projector’s motor, a $5 iPad app, and his hurdy-gurdy. Fifteen minutes before the performance began, the three of us huddled to discuss how long we'd like to go. We had met briefly the day before to discuss the technical setup of the performance but not its execution and length. I hadn't considered duration. Joe broke the silence by asking if we'd be "finding beginnings and endings." I didn't know what that entailed, but nodded. We started. I turned on the projector and it immediately started to cough and chew on the 40 year old 16mm print I found online. My first impulse was to intervene, to try to save it. The film continued and I sat frozen for a moment. 
Joe started playing and Ben, expecting me to send him the audio track from La Jetée, prompted me to do so. I let the projector go and began. Joe had a digital kick-drum and two contact mics on his drum kit hooked into a MIDI hub, while Ben's hurdy-gurdy had a contact mic inside it, wired into the hub. The hub hooked into my laptop and allowed for an intermedial conversation to emerge between the three of us. While the 16mm, VHS, and Blu-Ray formats proceeded relatively unimpeded alongside each other on their respective screens, the fourth screen was where this conversation took place. I digitally reordered different image sequences from La Jetée. The fact that it’s a film (almost) comprised entirely of still images made this reordering intriguing in that I was able control the speed of progressing from each image to the next. The movement from image to image was structured between Ben and Joe’s improvisations and the kind of effects and filters I had initialized. Ontoplayer has a number of effects and filters that push the base image into more abstract territories (e.g.: geometric shapes, over pixelation) I was uninterested in exploring. I utilized effects that to some degree still kept the representational content of the image intact. The degree to which these effects took hold of the image were determined by whether or not Ben and Joe decided to use the part of their instrument that would trigger them. The decision to linger on an image, colour it differently, or skip ahead in the film’s real-time projection destabilized my sense of where I was in the film. It became an event in the sense that each movement, both visual and aural was happening with an indeterminate duration. La Jetée opens with the narrator proclaiming: “this is the story of a man marked by an image from his childhood.” The story itself is situated around a man in a post-apocalyptic world, haunted by the persistent memory of a woman he saw as a child while standing on the jetty at Orly Airport in Paris. The man was a soldier, now captured, and imprisoned in an underground camp. The prison guards have been conducting experiments on the prisoners, attempting to use the prisoner’s memories as a mechanism to send them backwards and forwards in time. The narrator explains, “with the surface of the planet irradiated … The human race was doomed. Space was off limits. The only link with survival passed through time … The purpose of the experiments was to throw emissaries into time to call the past and future to the aid of the present.” La Jetée is visually structured as a photomontage, with voice-over narration, diegetic and non-diegetic sound existing as component parts to the whole film. I decided to separate these components for the sake of isolating them before the performance as instruments of the film to be improvisationally deployed through the intermedial connection between Ben, Joe, and myself. The resulting projections that emerged from our interface became a kind of improvised "grooving" to La Jetée that restricted the impulse to discriminately place sound beneath and behind the image. I selected images from different points in the film that felt "timely" given the changing dynamic between the three of us. I remember lingering on an image of the woman's face, her hand against her mouth, her hair being blown back by the wind. I looked and listened for the moment when the film would catch and then catch fire. It never came. We let the reel run to the end and continued on improvising until we found an ending. 
But the sound of that film catching but never breaking, the intention and tension of the film being near death the entire time, made everything we did more precious, teetering on the brink of failure. We could never have predicted that, and it gave us something I continue to ponder and be thankful for. Celluloid junkies in the room commented on how precipitous the whole thing was, given how rare it is to encounter the sound of celluloid film travelling through a projector inside a cinematic space. An audiophile mused over how there wasn't any document, his mind adequately blown by how "funky" the projector sounded. With there being no document of the performance, I'm left with my own memories. In mining the aftermath of this performance, I hope to find an addendum that considers how improvisation might negotiate with augmentation in ways that speak to Walter Benjamin's assertion that the "camera, the film, on the one hand, extends our comprehension of the necessities which rule our lives; on the other hand, it manages to assure us of an immense and unexpected field of action" (Benjamin 236-7). Images to be Determined I got a job working in a photo lab eight years ago, right around the time digital cameras started becoming not only affordable, but technologically comparable alternatives to film cameras. The photo printer in the lab was set up to scan and digitize celluloid filmstrips to allow for digital "touchups" by the technician. It was also hooked into touchscreen media stations that accepted a variety of memory card formats so that customers could "touchup" their own images. Celluloid film meant that as long as their format was chemical, touching up their images remained the task of the technician. Against the urging of the lab's manager, I resisted altering other people's images. It felt like a violation, despite the fact that almost every customer was unaware of this process. They assumed a degree of responsibility for a chemically-exposed image. I still got blamed for a lot of bad photography, but an image chemically under- or overexposed was irreparable. Digital cameras changed all of that. I still preferred an evenly exposed celluloid print to a digital one, but the allure was the ability for these images to be augmented. Augmentation is synonymous with "enhancement," "prosthesis," "addition," "amplification," "enrichment," "expansion," and "extension" (to name a few). For the purpose of this essay, I am situating augmentation as an agential act engaging with a static form to purposefully alter its aesthetic and political relation to a reality. To what extent can we say that the digital image is itself an augmentation? If Instagram is any indication, the digital image's existence is bound by its perpetual augmentation. A digital image is only as good as its capacity to be worked on. The ubiquity of digitally applying lomographic filters to digital images, as a defining step in their distributive chain, is indicative of the discursive impact remediating the old into the new has on digital forms. These digitally-coded filters used to augment "clear" digital images are comprised of exaggerated imperfections that existed, to varying degrees, as unforeseen side effects of working with comparatively more unstable celluloid textures. The filtered images themselves are digital distortions of a digital original. The filters augment this original through obscuring one or a number of components.
Some filters might exaggerate the green values or sharpen a particular quadrant within the frame that might coincide with the look of a particular film stock from the past. The discourse of "film" and "vintage" photography has become a synonymous component of the digital aesthetic, discursively warming up what is often considered to be a cold and disembodied medium. Augmentation works to re-establish a congruous relationship between the filmic and the digital, attempting to reconcile the aesthetic distance between granularity and pixelation. This is ironic because this process is encapsulated through digitally encoding and applying these filters for the sake of obscuring clarity. Thus, the object is both hailed as clear and clearly manipulable. Another example a bit closer to the cinema is the development of digital video cameras offering RAW or minimally compressed file formats for the sole purpose of augmenting the initial recording in post-production workflows, in an attempt to minimize degradation in the image. The colour values and dynamic range of these images are muted, or flattened, so that the human can control their elevation after the fact. To some degree the initial image, in itself, is an augmentation of its filmic relative. From early experiments with video synthesizers to the present digital coding of film effects, digital images have tantalized video artists and filmmakers with possibility shrouded in instantaneity and malleability. A key problem with this structure remains the unbridled proliferation and expansion of the digital image, set free for the sake of newness. How might improvisation work towards establishing an ethics of augmentation? An ethics of this kind must disrupt the popular notion of the digital image existing beyond analogical constraints. The belief that "if you can imagine it, you can do it" obfuscates the reality that to work with images, whatever their texture, is a negotiation with constraint. Part of M/F/M's fruition emerged from a conversation I'd had with Canadian animator Pierre Hébert last summer. It seems obvious now, but for Hébert, the first obstacle he needed to overcome as an improviser was developing an instrument that he could gig with. Through the act of designing an instrument I immediately became aware of what wasn't possible, and so the work leading up to the performance involved attempting to expand the possibilities of that instrument. How might I conceive of my own treatment of images simultaneously treated by Joe and Ben as a kind of cinematic extended technique we collaboratively bring into being? Constraint necessitates extension: finding new ways to sound and appear. Constraint is also consistently conceived as shackling progress. In scientific methodologies it is often arbitrarily imposed to steer an experiment in a desired direction. This sort of experimental methodology is in the business of presupposing outcomes, which I feel is often the case with what ultimately becomes the essay as end result in Humanities research. Constraint is an important imposition in improvisation only if the parties involved are willing to find new ways to move in consort with it. The act of improvisation is thus an engagement with the spatio-temporal constraints of performance, politics, memory, texture, and difference. My conception of the cinema is that of an instrument, whose past is what I work with to better understand its future.
Critic Gene Youngblood, in his landmark book, Expanded Cinema, theorized a new conception of the cinema as a global planetary phenomenon suffused inside a space of intermedia, where immersive, interactive, and interconnected realms necessitated the need to critically conceptualise the cinema in cosmic terms. At around the time of Youngblood's writing, another practitioner of the cosmic way, improviser and composer Sun Ra was staking a similar claim for music's ability to uplift the species cosmically. Ra's popular line “If we came from nowhere here, why can’t we go somewhere there?” (Heble 125), articulated the problematic racial politics in post-WWII America, that fixed African-American identity into a static domain with little room to move upward. The "somewhere there" to Ra was a non-space, created from "a desire to opt out of the very codes of representation and intelligibility, the very frameworks of interpretation and assumption which have legitimated the workings of dominant culture" (Heble 125). Though Youngblood's and Ra's intellectual and creative impulses formed from differing political circumstances, the work and thinking of these two figures remain significant articulations of the need to work from and towards the cosmic. In 2003, Youngblood published a follow-up essay in a reprint of Expanded Cinema entitled Cinema and the Code. In it, he defines cinema as a “phenomenology of the moving image.” Rather than conceiving of it through any of its particular media, Youngblood advocates for a segregated conception of the cinema: Just as we separate music from its instruments. Cinema is the art of organizing a stream of audiovisual events in time. It is an event-stream, like music. There are at least four media through which we can practice cinema – film, video, holography, and structured digital code—just as there are many instruments through which we can practice music. (Youngblood cited in Marchessault and Lord 7) Music and cinema are thus conceived as the exterior consequences of creative and co-creative instrumental experimentation. For Ra and Youngblood, the planetary stakes of this project are infused with the need to manufacture and occupy an imaginative space (if only for a moment) outside of the known. This is not to say that the action itself is transcendental. But rather this outside is the planetary. For the past year I've been making a documentary with Joe Sorbara on the free improv scene in Toronto. Listening to musicians talk about improvisation in expansive terms, as this ethereal and ephemeral experience, that exists on the brink of failure, that is as much an act of memory as renewal, reverberated with my own feelings surrounding the cinema. Improvisation, to philosopher Gary Peters, is the "entwinement of preservation and destruction", that "invites us to make a transition from a closed conception of the past to one that re-thinks it as an endlessly ongoing event or occurrence whereby tradition is re-originated (Benjamin) or re-opened (Heidegger)” (Peters 2). This “entwinement of preservation and destruction” takes me back to my earlier discussion of the ways in which digital photography, in particular lomographically filtered snapshots, is structured through preserving the discursive past of film while destroying its standard. The performance of M/F/M attempted to connect the augmentation of the digital image and the impact this augmentation had on conceptualizing the past through an improvisational approach to intermediality. 
The issue I have with the determination of images concerns their technological standardization. As long as manufacturers and technicians control this process, the practice of gathering, projecting, and experiencing digital images is predetermined by their commercial obligations. It assures that augmenting the "immense and unexpected field of action" comprising the domain of images is itself a predetermination. References Benjamin, Walter. Illuminations. New York: Schocken Books, 1985. Heble, Ajay. Landing on the Wrong Note. London: Routledge, 2000. Marchessault, Janine, and Susan Lord. Fluid Screens, Expanded Cinema. Toronto: University of Toronto Press, 2007. Marker, Chris, dir. La Jetée. Argos Films, 1962. Peters, Gary. The Philosophy of Improvisation. Chicago: University of Chicago Press, 2009.
42

Cruikshank, Lauren. "Synaestheory: Fleshing Out a Coalition of Senses." M/C Journal 13, no. 6 (November 25, 2010). http://dx.doi.org/10.5204/mcj.310.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Everyone thinks I named my cat Mango because of his orange eyes but that’s not the case. I named him Mango because the sounds of his purrs and his wheezes and his meows are all various shades of yellow-orange. (Mass 3) Synaesthesia, a condition where stimulus in one sense is perceived in that sense as well as in another, is thought to be a neurological fluke, marked by cross-sensory reactions. Mia, a character in the children’s book A Mango-Shaped Space, has audition colorée or coloured hearing, the most common form of synaesthesia where sounds create dynamic coloured photisms in the visual field. Others with the condition may taste shapes (Cytowic 5), feel colours (Duffy 52), taste sounds (Cytowic 118) or experience a myriad of other sensory combinations. Most non-synaesthetes have never heard of synaesthesia and many treat the condition with disbelief upon learning of it, while synaesthetes are often surprised to hear that others don’t have it. Although there has been a resurgence of interest in synaesthesia recently in psychology, neuroscience and philosophy (Ward and Mattingley 129), there is no widely accepted explanation for how or why synaesthetic perception occurs. However, if we investigate what meaning this particular condition may offer for rethinking not only what constitutes sensory normalcy, but also the ocular-centric bias in cultural studies, especially media studies, synaesthesia may present us with very productive coalitions indeed.Some theorists posit the ultimate role of media of all forms “to transfer sense experiences from one person to another” (Bolter and Grusin 3). Alongside this claim, many “have also maintained that the ultimate function of literature and the arts is to manifest this fusion of the senses” found in synaesthesia (Dann ix). If the most primary of media aims are to fuse and transfer sensory experiences, manifesting these goals would be akin to transferring synaesthetic experience to non-synaesthetes. In some cases, this synaesthetic transfer has been the explicit goal of media forms, from the invention of kaleidoscopes as colour symphonies in 1818 (Dann 66) to the 2002 launch of the video game Rez, the packaging for which reads “Discover a new world. A world of sound, visuals and vibrations. Release your instincts, open your senses and experience synaesthesia” (Rez). Recent innovations such as touch screen devices, advances in 3D film and television technologies and a range of motion-sensing video gaming consoles extend media experience far beyond the audio-visual and as such, present both serious challenges and important opportunities for media and culture scholars to reinvigorate ways of thinking about media experience, sensory embodiment and what might be learned from engaging with synaesthesia. Fleshing out the Field While acknowledging synaesthesia as a specific condition that enhances and complicates the lives of many individuals, I also suggest that synaesthesia is a useful mode of interference into our current ocular-centric notions of culture. Vision and visual phenomena hold a particularly powerful role in producing and negotiating meanings, values and relationships in the contemporary cultural arena and as a result, the eye has become privileged as the “master sense of the modern era” (Jay Scopic 3). Proponents of visual culture claim that the majority of modern life takes place through sight and that “human experience is now more visual and visualized than ever before ... in this swirl of imagery, seeing is much more than believing. 
It is not just a part of everyday life, it is everyday life" (Mirzoeff 1). In order to enjoy this privilege as the master sense, vision has been disentangled from the muscles and nerves of the eyeball and relocated to the "mind's eye", a metaphor that equates a kind of disembodied vision with knowledge. Vision becomes the most non-sensual of the senses, and is made to appear "as a negative reference point for the other senses...on the side of detachment, separation" (Connor) or even "as the absence of sensuality" (Haraway). This creates a paradoxical "visual culture" in which the embodied eye is, along with the ear, skin, tongue and nose, strangely absent. If visual culture has been based on the separation of the senses, and in fact, a refutation of embodied senses altogether, what about that which we might encounter and know in the world that is not encompassed by the mind's eye? By silencing the larger sensory context, what are we missing? What ocular-centric assumptions have we been making? What responsibilities have we ignored? This critique does not wish to do away with the eye, but instead to re-embrace and extend the field of vision to include an understanding of the eye and what it sees within the context of its embodied abilities and limitations. Although the mechanics of the eye make it an important and powerful sensory organ, able to perceive at a distance and provide a wealth of information about our surroundings, it is also prone to failures. Equipped as it is with eyelids and blind spots, reliant upon light and gullible to optical illusions (Jay, Downcast 4), the eye has its weaknesses and these must be addressed along with its abilities. Moreover, by focusing only on what is visual in culture, we are missing plenty of import. The study of visual culture is not unlike studying an electrical storm from afar. The visually impressive jagged flash seems the principal aspect of the storm and quite separate from the rumbling sound that rolls after it. We perceive them and name them as two distinct phenomena: thunder and lightning. However, this separation is a feature only of the distance between where we stand and the storm. Those who have found themselves in the eye of an electrical storm know that the sight of the bolt, the sound of the crash, the static tingling and vibration of the crack and the smell of ozone are mingled. At a remove, the bolt appears separate from the noise only artificially because of the safe distance. The closer we are to the phenomenon, the more completely it envelops us. Although getting up close and personal with an electrical storm may not be as comfortable as viewing it from afar, it does offer the opportunity to better understand the total experience and the thrill of intensities it can engage across the sensory palette. Similarly, the false separation of the visual from the rest of embodied experience may be convenient, but in order to flesh out this field, other embodied senses and sensory coalitions must be reclaimed for theorising practices. The senses as they are traditionally separated are, simply put, false categories. Towards Synaestheory Any inquiry inspired by synaesthesia must hold at its core the idea that the senses cannot be responsibly separated. This notion applies firstly to the separation of senses from one another. Synaesthetic experience and experiment both insist that there is rich cross-fertility between senses in synaesthetes and non-synaesthetes alike.
The French verb sentir is instructive here, as it can mean "to smell", "to taste" or "to feel", depending on the context it is used in. It can also mean simply "to sense" or "to be aware of". In fact, the origin of the phrase "common sense" meant exactly that, the point at which the senses meet. There also must be recognition that the senses cannot be separated from cognition or, in the Cartesian sense, that body and mind cannot be divided. An extensive and well-respected study of synaesthesia conducted in the 1920s by Raymond Wheeler and Thomas Cutsforth, non-synaesthetic and synaesthetic researchers respectively, revealed that the condition was not only a quirk of perception, but of conception. Synaesthetic activity, the team deduced, "is an essential mechanism in the construction of meaning that functions in the same way as certain unattended processes in non-synaesthetes" (Dann 82). With their synaesthetic imagery impaired, synaesthetes are unable to do even a basic level of thinking or recalling (Dann, Cytowic). In fact, synaesthesia may be a universal process, but in synaesthetes, "a brain process that is normally unconscious becomes bared to consciousness so that synaesthetes know they are synaesthetic while the rest of us do not" (166). Maurice Merleau-Ponty agrees, claiming: Synaesthetic perception is the rule, and we are unaware of it only because scientific knowledge shifts the centre of gravity of experience, so that we unlearn how to see, hear, and generally speaking, feel in order to deduce, from our bodily organisation and the world as the physicist conceives it, what we are to see, hear and feel. (229) With this in mind, neither the mind's role nor the body's role in synaesthesia can be isolated, since the condition itself maintains unequivocally that the two are one. The rich and rewarding correlations between senses in synaesthesia prompt us to consider sensory coalitions in other experiences and contexts as well. We are urged to consider flows of sensation seriously as experiences in and of themselves, with or without interpretation and explanation. As well, the debates around synaesthetic experience remind us that in order to speak to phenomena perceived and conceived, it is necessary to recognise the specificities, ironies and responsibilities of any embodied experience. Ultimately, synaesthesia helps to highlight the importance of relationships and the complexity of concepts necessary in order to practice a more embodied and articulate theorising. We might call this more inclusive approach "synaestheory". Synaestheorising Media Dystopia, a series of photographs by artists Anthony Aziz and Sammy Cucher, suggests a contemporary take on Descartes's declaration that "I will now close my eyes, I will stop my ears, I will turn away my senses from their objects" (86). These photographs consist of digitally altered faces where the subject's skin has been stretched over the openings of eyes, nose, mouth and ears, creating an interesting image both in process and in product. The product of a media mix that incorporates photography and computer modification, this image suggests the effects of the separation from our senses that these media may imply. The popular notion that media allow us to surpass our bodies and meet without our "meat" tagging along is a trope that Aziz and Cucher expose here with their computer-generated cover-up. By sealing off the senses, they show us how little we now seem to value them in a seemingly virtual, post-embodied world.
If "hybrid media require hybrid analyses" (Lunenfeld in Graham 158), in our multimedia, mixed media, "mongrel media" (Dovey 114) environment, we need mongrel theory, synaestheory, to begin to discuss the complexities at hand. The goal here is producing an understanding of both media and sensory intelligences as hybrid. Symptomatic of our simple sense of media is our tendency to refer to media experiences as "audio-visual": stimuli for the ear, eye or both. However, even if media are engineered to be predominately audio and/or visual, we are not. Synaestheory examines embodied media use, including the sensory information that the media does not claim to concentrate on, but that is still engaged and present in every mediated experience. It also examines embodied media use by paying attention to the pops and clicks of the material human-media interface. It does not assume simple sensory engagement or smooth engagement with media. These bumps, blisters, misfirings and errors are just as crucial a part of embodied media practice as smooth and successful interactions. Most significantly, synaesthesia insists simply that sensation matters. Sensory experiences are material, rich, emotional, memorable and important to the one sensing them, synaesthete or not. This declaration contradicts a legacy of distrust of the sensory in academic discourse that privileges the more intellectual and abstract, usually in the form of the detached text. However, academic texts are sensory too, of course. Sound, feeling, movement and sight are all inseparable from reading and writing, speaking and listening. We might do well to remember these as root sensory situations and by extension, recognise the importance of other sensual forms. Indeed, we have witnessed a rise of media genres that appeal to our senses first with brilliant and detailed visual and audio information, and story or narrative second, if at all. These media are "direct and one-dimensional, about little, other than their ability to commandeer the sight and the senses" (Darley 76). Whereas any attention to the construction of the media product is a disastrous distraction in narrative-centred forms, spectacular media reveals and revels in artifice and encourages the spectator to enjoy the simulation as part of the work's allure. It is "a pleasure of control, but also of being controlled" (MacTavish 46). Like viewing abstract art, the impact of the piece will be missed if we are obsessed with what the artwork "is about". Instead, we can reflect on spectacular media's ability, like that of an abstract artwork, to impact our senses and as such, "renew the present" (Cubitt 32). In this framework, participation in any medium can be enjoyed not only as an interpretative opportunity, but also as an experience of sensory dexterity and relevance with its own pleasures and intelligences: a "being-present". By focusing our attention on sensory flows, we may be able to perceive aspects of the world or ourselves that we had previously missed. Every one of us–synaesthete or nonsynaesthete–has a unique blueprint of reality, a unique way of coding knowledge that is different from any other on earth [...] By quieting down the habitually louder parts of our mind and turning the dial of our attention to its darker, quieter places, we may hear our personal code's unique and usually unheard "song", needing the touch of our attention to turn up its volume.
(Duffy 123) This type of presence to oneself has been termed a kind of "perfect immediacy" and is believed to be cultivated through meditation or other sensory-focused experiences such as sex (Bolter and Grusin 260), art (Cubitt 32), drugs (Dann 184) or even physical pain (Gromala 233). Immersive media could also be added to this list, if, as Bolter and Grusin suggest, we now "define immediacy as being in the presence of media" (236). In this case, immediacy has become effectively "media-cy." A related point is the recognition of sensation's transitory nature. Synaesthetic experiences and sensory experiences are vivid and dynamic. They do not persist. Instead, they flow through us and disappear, despite any attempts to capture them. You cannot stop or relive pure sound, for example (Gross). If you stop it, you silence it. If you relive it, you are experiencing another rendition, different even if almost imperceptibly from the last time you heard it. Media themselves are increasingly transitory and shifting phenomena. As media forms emerge and fall into obsolescence, spawning hybrid forms and spinoffs, the stories and memories safely fixed into any given media become outmoded and ultimately inaccessible very quickly. This trend towards flow over fixation is also informed by an embodied understanding of our own existence. Our sensations flow through us as we flow through the world. Synaesthesia reminds us that all sensation and indeed all sensory beings are dynamic. Despite our rampant lust for stasis (Haraway), it is important to theorise with the recognition that bodies, media and sensations all flow through time and space, emerging and disintegrating. Finally, synaesthesia also encourages an always-embodied understanding of ourselves and our interactions with our environment. In media experiences that traditionally rely on vision, the body is generally not only denied, but repressed (Balsamo 126). Claims to disembodiment flood the rhetoric around new media as an emancipatory element of mediated experience and somehow, seeing is superimposed on embodied being to negate it. However, phenomena such as migraines, sensory release hallucinations, photo-memory, after-images, optical illusions and most importantly here, the "crosstalk" of synaesthesia (Cohen Kadosh et al. 489) all attest to the co-involvement of the body and brain in visual experience. Perhaps useful here for understanding media involvement in light of synaestheory is a philosophy of "mingled bodies" (Connor), where the world and its embodied agents intermingle. There are no discrete divisions, but plenty of translation and transfer. As Sean Cubitt puts it, "the world, after all, touches us at the same moment that we touch it" (37). We need to employ non-particulate metaphors that do away with the dichotomies of mind/body, interior/exterior and real/virtual. A complex embodied entity is not an object or even a series of objects, but embodiment work. "Each sense is in fact a nodal cluster, a clump, confection or bouquet of all the other senses, a mingling of the modalities of mingling [...] the skin encompasses, implies, pockets up all the other sense organs: but in doing so, it stands as a model for the way in which all the senses in their turn also invaginate all the others" (Connor). The danger here is of delving into a nostalgic discussion of a sort of "sensory unity before the fall" (Dann 94).
The theory that we are all synaesthetes in some ways can lead to wistfulness for a perfect fusion of our senses, a kind of synaesthetic sublime that we had at one point, but lost. This loss occurs in childhood in some theories (Maurer and Mondloch) and in our aboriginal histories in others (Dann 101). This longing for "original syn" is often expressed within a narrative that equates perfect sensory union with a kind of transcendence from the physical world. Dann explains that "during the modern upsurge in interest that has spanned the decades from McLuhan to McKenna, synaesthesia has continued to fulfil a popular longing for metaphors of transcendence" (180). This is problematic, since elevating the sensory to the sublime does no more service to understanding our engagements with the world than ignoring or degrading the sensory. Synaestheory does not tolerate a simplification of synaesthesia or any condition as a ticket to transcendence beyond the body and world that it is necessarily grounded in and responsible to. At the same time, it operates with a scheme of senses that are not a collection of separate parts, but blended; a field of intensities, a corporeal coalition of senses. It likewise refuses to participate in the false separation of body and mind, perception and cognition. More useful and interesting is to begin with metaphors that assume complexity without breaking phenomena into discrete pieces. This is the essence of a new anti-separatist synaestheory, a way of thinking through embodied humans in relationships with media and culture that promises to yield more creative, relevant and ethical theorising than the false isolation of one sense or the irresponsible disregard of the sensorium altogether. References Aziz, Anthony, and Sammy Cucher. Dystopia. 1994. 15 Sep. 2010 <http://www.azizcucher.net/1994.php>. Balsamo, Anne. "The Virtual Body in Cyberspace." Technologies of the Gendered Body: Reading Cyborg Women. Durham: Duke UP, 1997. 116-32. Bolter, Jay David, and Richard Grusin. Remediation: Understanding New Media. Cambridge: MIT Press, 1999. Cohen Kadosh, Roi, Avishai Henik, and Vincent Walsh. "Synaesthesia: Learned or Lost?" Developmental Science 12.3 (2009): 484-491. Connor, Steven. "Michel Serres' Five Senses." Michel Serres Conference. Birkbeck College, London. May 1999. 5 Oct. 2010 <http://www.bbk.ac.uk/eh/skc/5senses.htm>. Cubitt, Sean. "It's Life, Jim, But Not as We Know It: Rolling Backwards into the Future." Fractal Dreams: New Media in Social Context. Ed. Jon Dovey. London: Lawrence and Wishart, 1996. 31-58. Cytowic, Richard E. The Man Who Tasted Shapes: A Bizarre Medical Mystery Offers Revolutionary Insights into Emotions, Reasoning and Consciousness. New York: Putnam Books, 1993. Dann, Kevin T. Bright Colors Falsely Seen: Synaesthesia and the Search for Transcendental Knowledge. New Haven: Yale UP, 1998. Darley, Andrew. Visual Digital Culture: Surface Play and Spectacle in New Media Genres. London: Routledge, 2000. Descartes, Rene. Discourse on Method and the Meditations. Trans. John Veitch. New York: Prometheus Books, 1989. Dovey, Jon. "The Revelation of Unguessed Worlds." Fractal Dreams: New Media in Social Context. Ed. Jon Dovey. London: Lawrence and Wishart, 1996. 109-35. Duffy, Patricia Lynne. Blue Cats and Chartreuse Kittens: How Synesthetes Color Their Worlds. New York: Times Books, 2001. Graham, Beryl. "Playing with Yourself: Pleasure and Interactive Art." Fractal Dreams: New Media in Social Context. Ed. Jon Dovey. London: Lawrence and Wishart, 1996.
154-81. Gromala, Diana. "Pain and Subjectivity in Virtual Reality." Clicking In: Hot Links to a Digital Culture. Ed. Lynn Hershman Leeson. Seattle: Bay Press, 1996. 222-37. Haraway, Donna. "At the Interface of Nature and Culture." Seminar. European Graduate School. Saas-Fee, Switzerland, 17-19 Jun. 2003. Jay, Martin. Downcast Eyes: The Denigration of Vision in Twentieth Century French Thought. Berkeley: University of California P, 1993. Jay, Martin. "Scopic Regimes of Modernity." Vision and Visuality. Ed. Hal Foster. New York: Dia Art Foundation, 1988. 2-23. MacTavish, Andrew. "Technological Pleasure: The Performance and Narrative of Technology in Half-Life and other High-Tech Computer Games." ScreenPlay: Cinema/Videogames/Interfaces. Eds. Geoff King and Tanya Krzywinska. London: Wallflower P, 2002. Mass, Wendy. A Mango-Shaped Space. Little, Brown and Co., 2003. Maurer, Daphne, and Catherine J. Mondloch. "Neonatal Synaesthesia: A Re-Evaluation." Synaesthesia: Perspectives from Cognitive Neuroscience. Eds. Lynn C. Robertson and Noam Sagiv. Oxford: Oxford UP, 2005. Merleau-Ponty, Maurice. Phenomenology of Perception. Trans. Colin Smith. London: Routledge, 1989. Mirzoeff, Nicholas. "What Is Visual Culture?" The Visual Culture Reader. Ed. Nicholas Mirzoeff. London: Routledge, 1998. 3-13. Rez. United Game Artists. PlayStation 2. 2002. Stafford, Barbara Maria. Good Looking: Essays on the Virtue of Images. Cambridge: MIT Press, 1996. Ward, Jamie, and Jason B. Mattingley. "Synaesthesia: An Overview of Contemporary Findings and Controversies." Cortex 42.2 (2006): 129-136.
43

Flynn, Bernadette. "Towards an Aesthetics of Navigation." M/C Journal 3, no. 5 (October 1, 2000). http://dx.doi.org/10.5204/mcj.1875.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Introduction Explorations of the multimedia game format within cultural studies have been broadly approached from two perspectives: one -- the impact of technologies on user interaction particularly with regard to social implications, and the other -- human computer interactions within the framework of cybercultures. Another approach to understanding or speaking about games within cultural studies is to focus on the game experience as cultural practice -- as an activity or an event. In this article I wish to initiate an exploration of the aesthetics of player space as a distinctive element of the gameplay experience. In doing so I propose that an understanding of aesthetic spatial issues as an element of player interactivity and engagement is important for understanding the cultural practice of adventure gameplay. In approaching these questions, I am focussing on the single-player exploration adventure game in particular Myst and The Crystal Key. In describing these games as adventures I am drawing on Chris Crawford's The Art of Computer Game Design, which although a little dated, focusses on game design as a distinct activity. He brings together a theoretical approach with extensive experience as a game designer himself (Excalibur, Legionnaire, Gossip). Whilst at Atari he also worked with Brenda Laurel, a key theorist in the area of computer design and dramatic structure. Adventure games such as Myst and The Crystal Key might form a sub-genre in Chris Crawford's taxonomy of computer game design. Although they use the main conventions of the adventure game -- essentially a puzzle to be solved with characters within a story context -- the main focus and source of pleasure for the player is exploration, particularly the exploration of worlds or cosmologies. The main gameplay of both games is to travel through worlds solving clues, picking up objects, and interacting with other characters. In Myst the player has to solve the riddle of the world they have entered -- as the CD-ROM insert states "Now you're here, wherever here is, with no option but to explore." The goal, as the player must work out, is to release the father Atrus from prison by bringing magic pages of a book to different locations in the worlds. Hints are offered by broken-up, disrupted video clips shown throughout the game. In The Crystal Key, the player as test pilot has to save a civilisation by finding clues, picking up objects, mending ships and defeating an opponent. The questions foregrounded by a focus on the aesthetics of navigation are: What types of representational context are being set up? What choices have designers made about representational context? How are the players positioned within these spaces? What are the implications for the player's sense of orientation and navigation? Architectural Fabrication For the ancient Greeks, painting was divided into two categories: magalography (the painting of great things) and rhyparography (the painting of small things). Magalography covered mythological and historical scenes, which emphasised architectural settings, the human figure and grand landscapes. Rhyparography referred to still lifes and objects. In adventure games, particularly those that attempt to construct a cosmology such as Myst and The Crystal Key, magalography and rhyparography collide in a mix of architectural monumentality and obsessive detailing of objects. 
The creation of a digital architecture in adventure games mimics the Pompeii wall paintings with their interplay of extruded and painted features. In visualising the space of a cosmology, the environment starts to be coded like the urban or built environment with underlying geometry and textured surface or dressing. In The Making of Myst (packaged with the CD-ROM) Chuck Carter, the artist on Myst, outlines the process of creating Myst Island through painting the terrain in grey scale then extruding the features and adding textural render -- a methodology that lends itself to a hybrid of architectural and painted geometry. Examples of external architecture and of internal room design can be viewed online. In the spatial organisation of the murals of Pompeii and later Rome, orthogonals converged towards several vertical axes showing multiple points of view simultaneously. During the high Renaissance, notions of perspective developed into a more formal system known as the construzione legittima or legitimate construction. This assumed a singular position of the on-looker standing in the same place as that occupied by the artist when the painting was constructed. In Myst there is an exaggeration of the underlying structuring technique of the construzione legittima with its emphasis on geometry and mathematics. The player looks down at a slight angle onto the screen from a fixed vantage point and is signified as being within the cosmological expanse, either in off-screen space or as the cursor. Within the cosmology, the island as built environment appears as though viewed through an enlarging lens, creating the precision and coldness of a Piero della Francesca painting. Myst mixes flat and three-dimensional forms of imagery on the same screen -- the flat, sketchy portrayal of the trees of Myst Island exists side-by-side with the monumental architectural buildings and landscape design structures created in Macromodel. This image shows the flat, almost expressionistic trees of Myst Island juxtaposed with a fountain rendered in high detail. This recalls the work of Giotto in the Arena chapel. In Joachim's Dream, objects and buildings have depth, but trees, plants and sky -- the space in-between objects -- is flat. Myst Island conjures up the realm of a magic, realist space with obsolete artefacts, classic architectural styles (the Albert Hall as the domed launch pad, the British Museum as the library, the vernacular cottage in the wood), mechanical wonders, miniature ships, fountains, wells, macabre torture instruments, ziggurat-like towers, symbols and odd numerological codes. Adam Mates describes it as "that beautiful piece of brain-deadening sticky-sweet eye-candy" but more than mere eye-candy or graphic verisimilitude, it is the mix of cultural ingredients and signs that makes Myst an intriguing place to play.
The buildings in The Crystal Key, an exploratory adventure game in a similar genre to Myst, celebrate the machine aesthetic and modernism with Buckminster Fuller-style geodesic structures, the bombe shape, exposed ducting, glass and steel, interiors with movable room partitions and abstract expressionist decorations. An image of one of these modernist structures is available online. The Crystal Key uses QuickTime VR panoramas to construct the exterior and interior spaces. Different from the sharp detail of Myst's structures, the focus changes from sharp in wide shot to soft focus in close up, with hot-spot objects rendered in trompe l'oeil detail. The Tactility of Objects "The aim of trompe l'oeil -- using the term in its widest sense and applying it to both painting and objects -- is primarily to puzzle and to mystify" (Battersby 19). In the 15th century, Brunelleschi invented a screen with central apparatus in order to obtain exact perspective -- the monocular vision of the camera obscura. During the 17th century, there was a renewed interest in optics by the Dutch artists of the Rembrandt school (inspired by instruments developed for Dutch seafaring ventures), in particular Vermeer, Hoogstraten, de Hooch and Dou. Gerard Dou's painting of a woman chopping onions shows this. These artists were experimenting with interior perspective and trompe l'oeil in order to depict the minutiae of the middle-class, domestic interior. Within these luminous interiors, with their receding tiles and domestic furniture, is an elevation of the significance of rhyparography. In the Girl Chopping Onions of 1646 by Gerard Dou the small things are emphasised -- the group of onions, candlestick holder, dead fowl, metal pitcher, and bird cage. Trompe l'oeil as an illusionist strategy is taken up in the worlds of Myst, The Crystal Key and others in the adventure game genre. Traditionally, the fascination of trompe l'oeil rests upon the tension between the actual painting and the scam; the physical structures and the faux painted structures call for the viewer to step closer to wave at a fly or test if the glass had actually broken in the frame. Miriam Milman describes trompe l'oeil painting in the following manner: "the repertory of trompe-l'oeil painting is made up of obsessive elements, it represents a reality immobilised by nails, held in the grip of death, corroded by time, glimpsed through half-open doors or curtains, containing messages that are sometimes unreadable, allusions that are often misunderstood, and a disorder of seemingly familiar and yet remote objects" (105). Her description could be a scene from Myst with its suggestion of theatricality, rich texture and illusionistic play of riddle or puzzle. In the trompe l'oeil painterly device known as cartellino, niches and recesses in the wall are represented with projecting elements and mock bas-relief. This architectural trickery is simulated in the digital imaging of extruded and painted elements to give depth to an interior or an object. Other techniques common to trompe l'oeil -- doors, shadowy depths and staircases, half-opened cupboards, and paintings often with drapes and curtains to suggest a layering of planes -- are used throughout Myst as transition points. In the trompe l'oeil paintings, these transition points were often framed with curtains or drapes that appeared to be from the spectator space -- creating a painting of a painting effect.
Myst is rich in this suggestion of worlds within worlds through the framing gesture afforded by windows, doors, picture frames, bookcases and fireplaces. Views from a window -- a distant landscape or a domestic view, a common device for trompe l'oeil -- are used in Myst to represent passageways and transitions onto different levels. Vertical space is critical for extending navigation beyond the horizontal through the terraced landscape -- the tower, antechamber, dungeon, cellars and lifts of the fictional world. Screen shots show the use of the curve, light diffusion and terracing to invite the player. In The Crystal Key vertical space is limited to the extent of the QTVR tilt making navigation more of a horizontal experience. Out-Stilling the Still Dutch and Flemish miniatures of the 17th century give the impression of being viewed from above and through a focussing lens. As Mastai notes: "trompe l'oeil, therefore is not merely a certain kind of still life painting, it should in fact 'out-still' the stillest of still lifes" (156). The intricate detailing of objects rendered in higher resolution than the background elements creates a type of hyper-reality that is used in Myst to emphasise the physicality and actuality of objects. This ultimately enlarges the sense of space between objects and codes them as elements of significance within the gameplay. The obsessive, almost fetishistic, detailed displays of material artefacts recall the curiosity cabinets of Fabritius and Hoogstraten. The mechanical world of Myst replicates the Dutch 17th century fascination with the optical devices of the telescope, the convex mirror and the prism, by coding them as key signifiers/icons in the frame. In his peepshow of 1660, Hoogstraten plays with an enigma and optical illusion of a Dutch domestic interior seen as though through the wrong end of a telescope. Using the anamorphic effect, the image only makes sense from one vantage point -- an effect which has a contemporary counterpart in the digital morphing widely used in adventure games. The use of crumbled or folded paper standing out from the plane surface of the canvas was a recurring motif of the Vanitas trompe l'oeil paintings. The highly detailed representation and organisation of objects in the Vanitas pictures contained the narrative or symbology of a religious or moral tale. (As in this example by Hoogstraten.) In the cosmology of Myst and The Crystal Key, paper contains the narrative of the back-story lovingly represented in scrolls, books and curled paper messages. The entry into Myst is through the pages of an open book, and throughout the game, books occupy a privileged position as holders of stories and secrets that are used to unlock the puzzles of the game. Myst can be read as a Dantesque, labyrinthine journey with its rich tapestry of images, its multi-level historical associations and battle of good and evil. Indeed the developers, brothers Robyn and Rand Miller, had a fertile background to draw on, from a childhood spent travelling to Bible churches with their nondenominational preacher father. The Diorama as System Event The diorama (story in the round) or mechanical exhibit invented by Daguerre in the 19th century created a mini-cosmology with player anticipation, action and narrative. It functioned as a mini-theatre (with the spectator forming the fourth wall), offering a peek into mini-episodes from foreign worlds of experience. 
The Musée Mechanique in San Francisco has dioramas of the Chinese opium den, party on the captain's boat, French execution scenes and ghostly graveyard episodes amongst its many offerings, including a still showing an upper class dancing party called A Message from the Sea. These function in tandem with other forbidden pleasures of the late 19th century -- public displays of the dead, waxwork museums and kinetoscope flip cards with their voyeuristic "What the Butler Saw", and "What the Maid Did on Her Day Off" tropes. Myst, along with The 7th Guest, Doom and Tomb Raider, shows a similar taste for verisimilitude and the macabre. However, the pre-rendered scenes of Myst and The Crystal Key allow for more diorama-like elaborate and embellished details compared to the emphasis on speed in the real-time-rendered graphics of the shoot-'em-ups. In the gameplay of adventure games, animated moments function as rewards or responsive system events: allowing the player to navigate through the seemingly solid wall; enabling curtains to be swung back, passageways to appear, doors to open, bookcases to disappear. These short sequences resemble the techniques used in mechanical dioramas where a coin placed in the slot enables a curtain or doorway to open revealing a miniature narrative or tableau -- the closure of the narrative resulting in the doorway shutting or the curtain being pulled over again. These repeating cycles of contemplation-action-closure offer the player one of the rewards of the puzzle solution. The sense of verisimilitude and immersion in these scenes is underscored by the addition of sound effects (doors slamming, lifts creaking, room atmosphere) and music. Geographic Locomotion Static imagery is the standard backdrop of the navigable space of the cosmology game landscape. Myst used a virtual camera around a virtual set to create a sequence of still camera shots for each point of view. The use of the still image lends itself to a sense of the tableauesque -- the moment frozen in time. These tableauesque moments tend towards the clean and anaesthetic, lacking any evidence of the player's visceral presence or of other human habitation. The player's navigation from one tableau screen to the next takes the form of a 'cyber-leap' or visual jump cut. These jumps -- forward, backwards, up, down, west, east -- follow on from the geographic orientation of the early text-based adventure games. In their graphic form, they reveal a new framing angle or point of view on the scene whilst ignoring the rules of classical continuity editing. Games such as The Crystal Key show the player's movement through space (from one QTVR node to another) by employing a disorientating fast zoom, as though from the perspective of a supercharged wheelchair. Rather than reconciling the player to the state of movement, this technique tends to draw attention to the technologies of the programming apparatus. The Crystal Key sets up a meticulous screen language similar to filmic dramatic conventions, then breaks its own conventions by allowing the player to jump out of the crashed spaceship through the still intact window. The landscape in adventure games is always partial, cropped and fragmented. The player has to try and map the geographical relationship of the environment in order to understand where they are and how to proceed (or go back). Examples include selecting the number of marker switches on the island to receive Atrus's message and the orientation of Myst's tower in the library map to obtain key clues.
A screenshot shows the arrival point in Myst from the dock. In comprehending the landscape, which has no centre, the player has to create a mental map of the environment by sorting significant connecting elements into chunks of spatial elements similar to a Guy Debord Situationist map. Playing the Flaneur The player in Myst can afford to saunter through the landscape, meandering at a more leisurely pace than would be possible in a competitive shoot-'em-up, behaving as a type of flaneur. The image of the flaneur as described by Baudelaire motions towards fin de siècle decadence, the image of the socially marginal, the dispossessed aristocrat wandering the urban landscape ready for adventure and unusual exploits. This develops into the idea of the artist as observer meandering through city spaces and using the power of memory in evoking what is observed for translation into paintings, writing or poetry. In Myst, the player as flaneur, rather than creating paintings or writing, is scanning the landscape for clues, witnessing objects, possible hints and pick-ups. The numbers in the keypad in the antechamber, the notes from Atrus, the handles on the island marker, the tower in the forest and the miniature ship in the fountain all form part of a mnemonic trompe l'oeil. A screenshot shows the path to the library with one of the island markers and the note from Atrus. In the world of Myst, the player has no avatar presence and wanders around a seemingly unpeopled landscape -- strolling as a tourist venturing into the unknown -- creating and storing a mental map of objects and places. In places these become items for collection -- cultural icons with an emphasised materiality. In The Crystal Key iconography they appear at the bottom of the screen pulsing with relevance when active. A screenshot shows a view to a distant forest with the "pick-ups" at the bottom of the screen. This process of accumulation and synthesis suggests a Surrealist version of Joseph Cornell's strolls around Manhattan -- collecting, shifting and organising objects into significance. In his 1982 taxonomy of game design, Chris Crawford argues that without competition these worlds are not really games at all. That was before the existence of the Myst adventure sub-genre where the pleasures of the flaneur are a particular aspect of the gameplay pleasures outside of the rules of win/lose, combat and dominance. By turning the landscape itself into a pathway of significant signs and symbols, Myst, The Crystal Key and other games in the sub-genre offer different types of pleasures from combat or sport -- the pleasures of the stroll -- the player as observer and cultural explorer. References Battersby, M. Trompe L'Oeil: The Eye Deceived. New York: St. Martin's, 1974. Crawford, C. The Art of Computer Game Design. Original publication 1982, book out of print. 15 Oct. 2000 <http://members.nbci.com/kalid/art/art.php>. Darley, Andrew. Visual Digital Culture: Surface Play and Spectacle in New Media Genres. London: Routledge, 2000. Lunenfeld, P. Digital Dialectic: New Essays on New Media. Cambridge, Mass.: MIT P, 1999. Mastai, M. L. d'Orange. Illusion in Art, Trompe L'Oeil: A History of Pictorial Illusion. New York: Abaris, 1975. Mates, A. Effective Illusory Worlds: A Comparative Analysis of Interfaces in Contemporary Interactive Fiction. 1998. 15 Oct. 2000 <http://www.wwa.com/~mathes/stuff/writings>. Miller, Robyn and Rand. "The Making of Myst." Myst. Cyan and Broderbund, 1993. Milman, M. Trompe-L'Oeil: The Illusion of Reality.
New York: Skira Rizzoli, 1982. Murray, J. Hamlet on the Holodeck: The Future of Narrative in Cyberspace. New York: Simon and Schuster, 1997. Wertheim, M. The Pearly Gates of Cyberspace: A History of Cyberspace from Dante to the Internet. Sydney: Doubleday, 1999. Game References 7th Guest. Trilobyte, Inc., distributed by Virgin Games, 1993. Doom. Id Software, 1992. Excalibur. Chris Crawford, 1982. Myst. Cyan and Broderbund, 1993. Tomb Raider. Core Design and Eidos Interactive, 1996. The Crystal Key. Dreamcatcher Interactive, 1999.
44

Monty, Randall W. "Driving in Cars with Noise." M/C Journal 27, no. 2 (April 16, 2024). http://dx.doi.org/10.5204/mcj.3039.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Revving I’m convinced that no one actually listens to podcasts. Or maybe it’s just that no one admits it. This is partially because a podcast falls between fetish and precious. Listening to a podcast is at once intimate, someone speaking directly to you through your AirPods, and distant, since you’re likely listening by yourself. Listening to a podcast is weird enough; talking about listening to a podcast makes other people feel uncomfortable. This is why no one listens to podcasts while doing nothing else. Podcasts encourage passive listening; they compel active participation in something other than the podcast. There’s a suggested utility to listening to a podcast while doing something else—walking your cockapoo around the block, rearranging your bookshelf, prepping your meals—like you’re performing your practicality for the world. Listening to a podcast is not sufficient. When listening to a podcast, you simultaneously do something else to justify the listening. Podcasts are relatively new, as academic texts go. Yet they have been quickly taken up as technologies and artifacts of analysis (Vásquez), tools for teaching writing (Bowie), and modes of distributing scholarship (McGregor and Copeland). Podcasts are also, importantly, not simply audio versions of written essays (Detweiller), or non-visual equivalents of videos (Vásquez). Podcasts represent genres and opportunities for rhetorical choice that instructors cannot assume students already possess expected literacies for (Bourelle, Bourelle, and Jones). Paralleling much service work at institutions of higher education, women scholars and scholars of colour take on inequitable labour with podcast scholarship (Faison et al.; Shamburg). A promising new direction challenges the raced and gendered stereotypes of the genre and mode, highlighting podcasts as an anti-racist and anti-disinformation tool (Vrikki) and a way to engage reluctant students in critical race discourse (Harris). And, with so many podcasts accessible on virtually any topic imaginable, podcasts have more recently emerged as reliable secondary sources for academic research, a usage accelerated by the availability of audio versions of scholarly publications and professional academics composing podcasts to distribute and conduct their research. When we incorporate podcasts into our academic work, new connections become recognisable: connections between ourselves and other humans, ourselves and other things, and things and other things—including the connections between audio and work. Podcasts maintain their histories as a passive medium. A student can listen to a podcast for class while making dinner and keeping an eye on their family. A professional academic might more dutifully pay attention to the content of the podcast, but they’ll also attune to how the physical experience of doing research that way affects their work, their findings, and themself. When considered as academic work, as in this piece, podcasts persuade us to pay attention to methods, materiality, networks, and embodiment. Methods I listen to podcasts in the car, most often while driving to and from work. Listening to podcasts while commuting is common. Yet listening beyond content immersion or distraction, listening as part of an intentional methodology—formulating a plan, rhetorically listening, taking audio notes, annotating and building on those notes later—maybe less so. 
This intentional, rhetorical approach to listening while driving attunes the researcher to the embodied, physical aspects of each of these activities: research, driving, and listening. As a result, the research experience provides different kinds of opportunities for invention and reflection.

My process is as follows: first, I curate a playlist based around a specific research question or agenda. This playlist will include selected episodes from podcasts that I have evaluated as reliable on a given topic. This evaluation is usually based on a combination of factors, mainly my familiarity with the podcast, the professional credentials (academic or otherwise) of the podcast hosts and guests, and recommendations from other researchers or podcasters. I also consider the structure of the podcast and the quality of the audio recording, because if I can't hear the content, or if I must spend more time skipping ads than actively listening, then the podcast isn't very usable for this stage of my research process. I will sometimes include single episodes of podcasts I'm less familiar with, usually because I noticed them pitched on one of my social media feeds, as a trial to see if I want to subscribe to the podcast. The playlist is arranged in what I hope will be a coherent order based on the episode descriptions. For example, sequencing episodes of Have You Heard (Berkshire and Schneider), Talking Race, Africa and People (Tiluk and Hope), and Is This Democracy (Mason and Zimmer) with the titles "Digging Deep into the Education Wars", "They Stole WOKE", and "'Cancel Culture': How a Moral Panic Is Capturing America and the World" places these sources in conversation with each other, juxtaposes their arguments, and allows me to synthesise my own comprehensive response.

Second, I listen. Ratcliffe positioned rhetorical listening as a performative "trope for interpretive invention" and a method for "facilitating cross-cultural dialogues" within composition studies (196). Listening is a thing we do in order to do something else. Under this framework, the listener/researcher approaches their task with goals of understanding and responsibility to themselves and others, which then affords opportunities to identify commonalities and differences within claims and cultural logics (204). In other words, by paying closer attention to who we are and who we're listening to, and by listening in good faith, we can better understand what people are saying and doing and why, and when we understand those better, we are better equipped for future action. Listening rhetorically can be an anchor when researching with podcasts, a modality notoriously coded and memed as white, male, and upper middle class (Locke; Morgan; "A Group of White Men Is Called a Podcast").

The technologies I use during this research afford and constrain, which leads to the third aspect: notetaking. I can't write while driving. I tend to forget important bits. But the act of listening opens me up to things I might otherwise have missed. Sound, Detweiler shows, "affords different modes of composing, listening, thinking, and responding". To facilitate my listening as invention, I added myself to my contacts list so that I can talk-to-text myself with questions about what I'm listening to, names and key terms that I need to look up later, and starter drafts of my own writing. While driving, I can also "favourite" an episode on the go, a marker to myself to re-listen and inspect the episode transcript.
Later, at my work desk, I decipher whatever it is my phone's text messaging app thought I said. "Anna Genesis Evolution from one species to another." "Ben sick something at the bottom of the sea." "Dinosaurs and dragons make each other plausible." (Pretty sure my phone got that last one right.) There, my workflow is mediated by expected reading research technologies (word processing application, PDF viewer, boutique file organisation and annotation software), agents (desk, chair, and lighting selected by my employer to improve my productivity), and processes (coding transcripts, annotating secondary sources, writing, and revising).

Materiality

My methodology is an auditory variation of McNely's visual fieldwork, which "attempts to render visible the environs, objects, sensations, and affects of inquiry" ("Lures" 216). Podcasts are expressions of physicality that bring together a confluence of networked actors, technologies, and spaces. Moreover, a podcast is itself a material artifact in the most literal sense: sound is a physical phenomenon, emitting and reverberating waves that stimulate effects in our body and trigger physio-emotional responses.

Inside my car, there is little impeding the sound waves emitting from the speakers and into my ears. Diffraction is minimal; the sound fills the interior of my vehicle so quickly that I can't perceive that it is moving. I'm surrounded by the sound of the podcast, but not in the sense that is usually meant by "surround sound". I'm also inundated by other sounds, the noises of driving that the twenty-first-century commuter has been conditioned to render ambient: the buzz of other vehicles passing me, the hum of my tyres on asphalt, the squeak of brakes and crunch of slowly turning tyres.

Listening to a podcast in the car is like sitting in on a conversation that you can't participate in. Slate magazine's sports podcast "Hang Up and Listen" plays with this expectation, taking its name from the clichéd valediction that callers to local sports radio shows would say to indicate that they are done asking their question, signifying to the host that it's their turn again. It's a shibboleth through which the caller acknowledges and performs the participatory role of the listener as an actor within the network of the show.

McNely writes that when he walks, "there are sounds in me, around me, passing through me. When I walk, I feel wind, mist, sleet. When I walk, I feel bass, treble, empathy. When I walk, I feel arguments, metaphors, dialogues—in my gut, in my chest" (Engaging 184). His attunement to all of these elicits physical sensations and emotional responses, and the sounds of the podcast cause similar responses for me. I jostle in my seat. I tense up, grip the steering wheel, and grind my teeth. I sigh, guffaw, roll my eyes, and yell. I pause—both my movement and the podcast app—to let a potential response roll about in my head.

I'm in the car, but podcasts attempt to place me somewhere else through ambient worldbuilding: the clinking of cups and spoons to let me know the conversation is taking place in a coffee shop, the chirps of frogs and bugs to make me feel like I'm with the guest interviewee at the Amazonian research site, the clamour of a teacher calling their third-grade class to attention as a lead-in for a discussion of public school funding. The arrangement and design of the podcast takes the listener to the world within the podcast, and it reminds me how the podcast, and myself, my car, and the listening are connected to everything else.
Networks

I am employed at an institution with a "distributed campus", with multiple sites spread across the local region and online, without an officially designated central campus. Faculty and students attend these different places based on appointment, proximity, and preference. I teach classes in person on two of the campuses, sometimes at both simultaneously, connected via videoconference. So where is the location of my class? It's the physical campuses, certainly. It's also the online space where the class meets, the locations where users join from (home, a dorm room, their workplace, etc.), and the Internet connecting those people and spaces. The class is transnational, as many of our students live in the neighbouring country. The class is also in between and in transit, with students using the shuttle bus Wi-Fi to complete work or join meetings. As with the research methodology detailed above, the class is moving between the static places, too, as the instructor and students alike travel to teach or attend class, or book it home to join via videoconference in time.

The institution's networks illustrate Detweiler's characterisation of podcasts as enacting both rhetorical distribution and circulation. Taken together, "distribution is not a strictly one-to-many phenomenon". Yes, it's "a conception of rhetoric that challenges but does not erase the role of human agency in rhetorical causes and effects", but it's also the physical networks and "supply chains" that move things. In both cases, the decentralisation draws attention away from individual nodes and towards the network and the interconnections between various actors.

Consider the routes the podcast takes. I start the episode as I leave my driveway. By the time I reach the highway, the podcast has made it through its preamble and first ad read. The episode travels with me in the car along my route; the sound of a single word literally takes up physical space on the highway. Ideas stretch for miles. I make the entire trip in a single episode. I then assign that episode to my students, who take the podcast with them. It moves at different speeds but also at the same speed (unless a particular listener sets their playback at a faster pace). In some ways, it's the same sound, yet in other ways—time, space, distribution, audience—the same episode makes a different sound. Meanwhile, the podcast hosts remain in their recording booth, simultaneously locked into and moving through spacetime.

Further, by analysing the various texts surrounding my listening to podcasts, we can see a multimodal genre ecology of signs, roadways, mapped and unmapped routes, turn-by-turn navigation apps, as well as other markers of location and direction, like billboards, water towers in the distance, the setting sun, and that one tree in a field that doesn't belong there but lets me know I've passed the midpoint of my commute. Visual cues are perhaps more easily felt, but Rickert reminds us that "we consciously and unconsciously depend on sound to orient, situate, and wed ourselves to the places we inhabit" (152). The three-note dinging of a railroad crossing halts drivers even without visual confirmation of an oncoming train. The brutal springtime crosswind announces its presence on my passenger window, giving me a split second to steady the wheel. The pitch of the pavement lowers as I take the exit towards my house.
The network of audio extends beyond the situation of the researcher and draws attention to what Barad referred to as "entangled material agencies" resulting in "networks or assemblages of humans and nonhumans" (1118, 1131). The network of my podcast listening accounts for the mobile device that I use to access content, the digital networks that I download episodes over, as well as the physical infrastructures that enable those networks, the hosting services and recording technologies and funding mechanisms used by the podcasters, the distribution of campuses, the roads I travel on, and the tonnage of steel and plastic that I manipulate while researching, and that's even before we get to everything else that impacts on my listening, like weather, traffic, the pathways all these material items took to get where they are, the head cold impacting on my hearing, my personal history of hearing different sounds, and on and on.

Embodiment

I listen to podcasts in the car while commuting to work. A more accurate way of putting that would be to say that commuting is work, which I mean twice over. First, a commute is likely a requisite component of your job. This is not to assign full culpability to one actor or another; the length of your commute likely owes to various factors—availability of affordable housing, proximity of the worksite relative to your home, competing duties of family care, etc.—but a commute is, and should be considered, part of the work. Even if you're not getting paid for it, even if the neoliberal economic system that overarches your life has convinced you that you are actively choosing to commute as part of the mutually and equally entered-into contract with your employer, you're on the clock when commuting because you're doing that action because of the work. If your response to this is, "Then what about people who work from home? Should their personal devices and monthly Internet costs be considered work expenses? Or what about the time it takes to get up early to put makeup on or prepare lunches for their kids? Does all that count as work?" Yes. Yes, it does. The farmer's day doesn't start when they milk the cow; it starts as soon as they wake up. It starts before then, even. We are entangled with our work selves. Lately, I've begun logging these listening commutes on my weekly timesheet. It's not an official record: salaried employees at my institution are not required to keep track of their work hours. Instead, it's a routine and technical document I developed to help me get things done, an artifact of procedural rhetoric and the broader genre ecology of my work.

Second, commuting is a physical act. It is work. We walk to bus and train stops and stand around waiting. We power our bicycles. We drive our vehicles, manoeuvring through streets and turns and other drivers. The deleterious effects of sitting down for prolonged periods for work, including while commuting, are well documented (Ding et al.). Driving itself is an act that places the human—the driver, passengers, and pedestrians—in greater physical danger than flying, riding a train, or swimming with sharks. Researching in this way presents a different kind of epistemic risk.

Arriving

So, the question I'm left to codify is: what does this commuting audio research methodology offer researchers that other, more traditional approaches might not?
Rickert analysed an electric car as "inherently suasive", as it "participates in the conflicted discourses about that built environment and showcases some fundamental preconceptions rooted into our everyday ways of being together" (263). I'm alone in the car, but every sound reminds me of how I am connected to someone or something else. Of course, neither commuting nor listening to podcasts is an exclusively solo endeavour: people carpool to work, and fans attend live recordings of their favourite shows. Perhaps listening while driving causes me to pay closer attention to what's being said, the way you seem to learn the words of a song better when listening and singing along in the car. There are different kinds of distractions when driving versus sitting at one's desk to read or listen (although it's fair to say that the podcast itself is the distraction from what I should be paying the most attention to when driving). Anyone who has taken a long road trip alone can tell you about the opportunities it provides to sit with one's thoughts, to spend uninterrupted time and miles turning over an idea in your mind, to reflect at length on a single topic, to rant to the noise of the road. Maybe that's what a commuting podcast methodology affords: isolated moments surrounded by sound, away from the overtly audio, and connected to the rest of the world.

References

"A Group of White Men Is Called a Podcast." Know Your Meme, 20 Feb. 2019. 6 Mar. 2024 <https://knowyourmeme.com/memes/a-group-of-white-men-is-called-a-podcast>.
Barad, Karen. Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning. Durham: Duke UP, 2007.
Berkshire, Jennifer, and Jack Schneider, hosts. "Digging Deep into the Education Wars." Have You Heard 156 (4 May 2023). <https://www.haveyouheardpodcast.com/episodes/156-digging-deep-into-the-education-wars?rq=woke>.
Bourelle, Andrew, Tiffany Bourelle, and Natasha Jones. "Multimodality in the Technical Communication Classroom: Viewing Classical Rhetoric through a 21st Century Lens." Technical Communication Quarterly 24.4 (2015): 306-327.
Bowie, Jennifer L. "Podcasting in a Writing Class? Considering the Possibilities." Kairos: A Journal of Rhetoric, Technology, and Pedagogy 16.2 (2012). 29 Nov. 2023 <https://kairos.technorhetoric.net/16.2/topoi/bowie/index.html>.
Detweiler, Eric. "The Bandwidth of Podcasting." Tuning in to Soundwriting, special issue of enculturation/Intermezzo. 9 Feb. 2024 <http://intermezzo.enculturation.net/14-stedman-et-al/detweiler.html>.
Ding, Ding, et al. "Driving: A Road to Unhealthy Lifestyles and Poor Health Outcomes." PLoS ONE 9.6. 15 Feb. 2024 <https://doi.org/10.1371/journal.pone.0094602>.
Faison, Wonderful, et al. "White Benevolence: Why Supa-Save-a-Savage Rhetoric Ain't Getting It." Counterstories from the Writing Center, eds. Wonderful Faison and Frankie Condon. Logan: Utah State UP. 81-94.
Harris, Jasmine. "Podcast Talk and Public Sociology: Teaching Critical Race Discourse Participation through Podcast Production." About Campus 24.3 (2019): 16-20.
Locke, Charley. "Podcasts' Biggest Problem Isn't Discovery, It's Diversity." Wired, 31 Aug. 2015. 6 Mar. 2024 <https://www.wired.com/2015/08/podcast-discovery-vs-diversity/>.
Mason, Lily, and Thomas Zimmer, hosts. "'Cancel Culture': How a Moral Panic Is Capturing America and the World – with Adrian Daub." Is This Democracy 24 (16 May 2023). <https://podcasts.apple.com/us/podcast/24-cancel-culture-how-a-moral-panic-is-capturing/id1652741954?i=1000612321369>.
McGregor, Hannah, and Stacey Copeland. "Why Podcast? Podcasting as Publishing, Sound-Based Scholarship, and Making Podcasts Count." Kairos: A Journal of Rhetoric, Technology, and Pedagogy 27.1 (2022). 15 Feb. 2024 <https://kairos.technorhetoric.net/27.1/topoi/mcgregor-copeland/index.html>.
McNely, Brian. "Lures, Slimes, Time: Viscosity and the Nearness of Distance." Philosophy & Rhetoric 52.3 (2019): 203-226.
———. Engaging Ambience: Visual and Multisensory Methodologies and Rhetorical Theory. Logan: Utah State UP, 2024.
Morgan, Josh. "Data Confirm That Podcasting in the US Is a White Male Thing." Quartz, 12 Jan. 2016. 6 Mar. 2024 <https://qz.com/591440/data-confirm-that-podcasting-in-the-us-is-a-white-male-thing>.
Ratcliffe, Krista. "Rhetorical Listening: A Trope for Interpretive Invention and a 'Code of Cross-Cultural Conduct'." College Composition and Communication 51.2 (1999): 195-224.
Rickert, Thomas. Ambient Rhetoric: The Attunements of Rhetorical Being. Pittsburgh: U of Pittsburgh P, 2013.
Shamburg, Christopher. "Rising Waves in Informal Education: Women of Color with Educationally Oriented Podcasts." Education and Information Technologies 26 (2021): 699-713.
Tiluk, Daniel, and Have Hope, hosts. "They Stole WOKE." Talking Race, Africa and People 1 (14 Apr. 2023). <https://podcasts.apple.com/ca/podcast/01-they-stole-woke/id1682830005?i=1000609221830>.
Vásquez, Camilla. Research Methods for Digital Discourse Analysis. London: Bloomsbury, 2022.
Vrikki, Photini, and Sarita Malik. "Voicing Lived-Experience and Anti-Racism: Podcasting as a Space at the Margins for Subaltern Counterpublics." Popular Communication 17.4 (2018): 273-287.
