Journal articles on the topic 'Semantic video coding'


Consult the top 35 journal articles for your research on the topic 'Semantic video coding.'


1

Essel, Daniel Danso, Ben-Bright Benuwa, and Benjamin Ghansah. "Video Semantic Analysis." International Journal of Computer Vision and Image Processing 11, no. 2 (April 2021): 1–21. http://dx.doi.org/10.4018/ijcvip.2021040101.

Abstract:
Sparse Representation (SR) and Dictionary Learning (DL) based classifiers have shown promising results in classification tasks, with impressive recognition rates on image data. In Video Semantic Analysis (VSA), however, the local structure of video data contains significant discriminative information required for classification. To the best of our knowledge, this has not been fully explored by recent DL-based approaches. Further, similar coding results are not obtained for video features from the same video category. Based on the foregoing, a novel learning algorithm, Sparsity-based Locality-Sensitive Discriminative Dictionary Learning (SLSDDL) for VSA, is proposed in this paper. In the proposed algorithm, a category-level discriminant loss function based on the sparse coding of the sparse coefficients is introduced into the structure of the Locality-Sensitive Dictionary Learning (LSDL) algorithm. Finally, the sparse coefficients for a test video feature sample are solved by the optimized SLSDDL method, and the video semantic classification result is obtained by minimizing the error between the original and reconstructed samples. The experimental results show that the proposed SLSDDL significantly improves the performance of video semantic detection compared with state-of-the-art approaches. The proposed approach is also robust across diverse video environments, demonstrating its universality.
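As a rough illustration of the classification step this abstract describes (solving sparse coefficients over a dictionary and assigning the class with the smallest reconstruction error), here is a minimal sparse-representation classifier. The per-class dictionaries, their sizes, and the OMP solver are illustrative assumptions, not the SLSDDL method itself.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)

# Hypothetical per-class dictionaries (atoms as unit-norm columns),
# standing in for dictionaries learned offline per video category.
n_features, n_atoms, n_classes = 20, 15, 3
dictionaries = [rng.normal(size=(n_features, n_atoms)) for _ in range(n_classes)]
dictionaries = [D / np.linalg.norm(D, axis=0) for D in dictionaries]

def classify(x, dictionaries, n_nonzero=5):
    """Assign x to the class whose dictionary reconstructs it best."""
    errors = []
    for D in dictionaries:
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero,
                                        fit_intercept=False)
        omp.fit(D, x)                 # solve sparse coefficients for x
        x_hat = D @ omp.coef_         # reconstruct from selected atoms
        errors.append(np.linalg.norm(x - x_hat))
    return int(np.argmin(errors))     # minimal reconstruction error wins

# A sample built from class 1's atoms should be assigned to class 1.
x = dictionaries[1][:, :5] @ rng.normal(size=5)
print(classify(x, dictionaries))
```

The sketch omits the locality-sensitive and discriminant loss terms that distinguish SLSDDL from plain sparse-representation classification; it only shows the shared final step of minimizing reconstruction error.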
2

Chen, Sovann, Supavadee Aramvith, and Yoshikazu Miyanaga. "Learning-Based Rate Control for High Efficiency Video Coding." Sensors 23, no. 7 (March 30, 2023): 3607. http://dx.doi.org/10.3390/s23073607.

Abstract:
High efficiency video coding (HEVC) has dramatically enhanced coding efficiency compared to the previous video coding standard, H.264/AVC. However, the existing rate control updates its parameters from a fixed initialization, which can cause errors in predicting the bit allocation for each coding tree unit (CTU) in a frame. This paper proposes a learning-based mapping between rate control parameters and video content to achieve an accurate target bit rate and good video quality. The proposed framework contains two main coding structures, spatial and temporal. We introduce an effective learning-based particle swarm optimization for spatial and temporal coding to determine the optimal parameters at the CTU level. For temporal coding at the picture level, we introduce semantic residual information into the parameter-updating process to allocate bits correctly to each picture. Experimental results indicate that the proposed algorithm is effective for HEVC and outperforms the state-of-the-art rate control in the HEVC reference software (HM-16.10) by 0.19 dB on average, and by up to 0.41 dB, for the low-delay P coding structure.
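The particle swarm optimization step this abstract mentions (searching for good rate-control parameters) can be sketched generically. The objective function below is a made-up convex stand-in for the paper's rate-distortion cost, and the swarm hyperparameters are illustrative assumptions, not values from HM-16.10.

```python
import numpy as np

rng = np.random.default_rng(3)

def pso(cost, dim, n_particles=20, iters=100, bounds=(-5.0, 5.0)):
    """Minimal particle swarm optimizer: each particle moves under
    inertia plus attraction toward its personal and the global best."""
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))    # positions
    v = np.zeros_like(x)                           # velocities
    pbest = x.copy()
    pbest_cost = np.array([cost(p) for p in x])
    g = pbest[np.argmin(pbest_cost)]               # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        better = c < pbest_cost
        pbest[better], pbest_cost[better] = x[better], c[better]
        g = pbest[np.argmin(pbest_cost)]
    return g, cost(g)

# Made-up rate-distortion-style cost with its optimum at (2, -1).
cost = lambda p: (p[0] - 2.0) ** 2 + (p[1] + 1.0) ** 2
best, best_cost = pso(cost, dim=2)
print(best, best_cost)
```

In the paper's setting the search space would be the rate-control parameters per CTU and the cost would come from encoding statistics; here the swarm merely demonstrates the optimization mechanic.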
3

Antoszczyszyn, P. M., J. M. Hannah, and P. M. Grant. "Reliable tracking of facial features in semantic-based video coding." IEE Proceedings - Vision, Image, and Signal Processing 145, no. 4 (1998): 257. http://dx.doi.org/10.1049/ip-vis:19982153.

4

Nomura, Yoshihiko, Ryutaro Matsuda, Ryota Sakamoto, Tokuhiro Sugiura, Hirokazu Matsui, and Norihiko Kato. "2301 Low Bit-Rate Semantic Coding Technology for Lecture Video." Proceedings of the JSME Annual Meeting 2005.7 (2005): 89–90. http://dx.doi.org/10.1299/jsmemecjo.2005.7.0_89.

5

Benuwa, Ben-Bright, Yongzhao Zhan, Benjamin Ghansah, Ernest K. Ansah, and Andriana Sarkodie. "Sparsity Based Locality-Sensitive Discriminative Dictionary Learning for Video Semantic Analysis." Mathematical Problems in Engineering 2018 (August 5, 2018): 1–11. http://dx.doi.org/10.1155/2018/9312563.

Abstract:
Dictionary learning (DL) and sparse representation (SR) based classifiers have greatly improved classification performance and achieved good recognition rates on image data. In video semantic analysis (VSA), the local structure of video data contains vital discriminative information needed for classification, but this has not been fully exploited by current DL-based approaches. Besides, similar coding results are not obtained for video features from the same video category. To address these issues, a novel learning algorithm called sparsity-based locality-sensitive discriminative dictionary learning (SLSDDL) for VSA is proposed in this paper. In the proposed algorithm, a category-level discriminant loss function based on the sparse coding of the sparse coefficients is introduced into the structure of the locality-sensitive dictionary learning (LSDL) algorithm. Finally, the sparse coefficients for a test video feature sample are solved by the optimized SLSDDL method, and the video semantic classification result is obtained by minimizing the error between the original and reconstructed samples. The experimental results show that the proposed SLSDDL significantly improves the performance of video semantic detection compared with state-of-the-art approaches. Moreover, robustness to diverse video environments is also demonstrated, which proves the universality of the novel approach.
6

Pimentel-Niño, M. A., Paresh Saxena, and M. A. Vazquez-Castro. "Reliable Adaptive Video Streaming Driven by Perceptual Semantics for Situational Awareness." Scientific World Journal 2015 (2015): 1–16. http://dx.doi.org/10.1155/2015/394956.

Abstract:
A novel cross-layer optimized video adaptation driven by perceptual semantics is presented. The design target is live streamed video that enhances situational awareness under challenging communications conditions. Conventional solutions designed for recreational applications are inadequate, so a novel quality-of-experience (QoE) framework is proposed that allows fully controlled adaptation and enables perceptual semantic feedback. The framework relies on temporal/spatial abstraction for video applications serving beyond recreational purposes. An underlying cross-layer optimization technique takes into account feedback on network congestion (time) and erasures (space) to best distribute the available (scarce) bandwidth. Systematic random linear network coding (SRNC) adds reliability while preserving perceptual semantics. Objective metrics of the perceptual features in QoE show consistently high performance when using the proposed scheme. Finally, the proposed scheme is in line with content-aware trends, complying with the information-centric networking philosophy and architecture.
7

Guo, Jia, Xiangyang Gong, Wendong Wang, Xirong Que, and Jingyu Liu. "SASRT: Semantic-Aware Super-Resolution Transmission for Adaptive Video Streaming over Wireless Multimedia Sensor Networks." Sensors 19, no. 14 (July 15, 2019): 3121. http://dx.doi.org/10.3390/s19143121.

Abstract:
Network resources are scarce in wireless multimedia sensor networks (WMSNs). Compressing media data can reduce the dependence of the user's Quality of Experience (QoE) on network resources. Existing video coding standards, such as H.264 and H.265, exploit only spatial and short-term information redundancy, yet video usually also contains redundancy over long periods of time. Compressing this long-term redundancy without compromising the user experience, while delivering video adaptively, is therefore a challenge in WMSNs. In this paper, a semantic-aware super-resolution transmission system for adaptive video streaming (SASRT) in WMSNs is presented. In SASRT, deep learning algorithms are used to extract video semantic information and enrich video quality. On the multimedia sensor, semantic information and video data are encoded at different bit rates and uploaded to the user. Semantic information can also be identified on the user side, further reducing the amount of data that needs to be transferred, although this may increase the user side's computational cost. On the user side, video quality is enriched with super-resolution technologies. The major challenges faced by SASRT include where the semantic information is identified, how to choose the bit rates of the semantic and video information, and how network resources should be allocated between them. The optimization problem is formulated as a complexity-constrained nonlinear NP-hard problem, and three adaptive strategies and a heuristic algorithm are proposed to solve it. Simulation results demonstrate that SASRT can effectively compress long-term video redundancy and enrich the user experience with limited network resources while simultaneously improving the utilization of those resources.
8

Stivaktakis, Radamanthys, Grigorios Tsagkatakis, and Panagiotis Tsakalides. "Semantic Predictive Coding with Arbitrated Generative Adversarial Networks." Machine Learning and Knowledge Extraction 2, no. 3 (August 25, 2020): 307–26. http://dx.doi.org/10.3390/make2030017.

Abstract:
In spatio-temporal predictive coding problems, like next-frame prediction in video, determining the content of plausible future frames is primarily based on the image dynamics of previous frames. We establish an alternative approach based on their underlying semantic information when considering data that do not necessarily incorporate a temporal aspect, but instead they comply with some form of associative ordering. In this work, we introduce the notion of semantic predictive coding by proposing a novel generative adversarial modeling framework which incorporates the arbiter classifier as a new component. While the generator is primarily tasked with the anticipation of possible next frames, the arbiter’s principal role is the assessment of their credibility. Taking into account that the denotative meaning of each forthcoming element can be encapsulated in a generic label descriptive of its content, a classification loss is introduced along with the adversarial loss. As supported by our experimental findings in a next-digit and a next-letter scenario, the utilization of the arbiter not only results in an enhanced GAN performance, but it also broadens the network’s creative capabilities in terms of the diversity of the generated symbols.
9

Herranz, Luis. "Integrating semantic analysis and scalable video coding for efficient content-based adaptation." Multimedia Systems 13, no. 2 (June 30, 2007): 103–18. http://dx.doi.org/10.1007/s00530-007-0090-0.

10

Motlicek, Petr, Stefan Duffner, Danil Korchagin, Hervé Bourlard, Carl Scheffler, Jean-Marc Odobez, Giovanni Del Galdo, Markus Kallinger, and Oliver Thiergart. "Real-Time Audio-Visual Analysis for Multiperson Videoconferencing." Advances in Multimedia 2013 (2013): 1–21. http://dx.doi.org/10.1155/2013/175745.

Abstract:
We describe the design of a system consisting of several state-of-the-art real-time audio and video processing components enabling multimodal stream manipulation (e.g., automatic online editing for multiparty videoconferencing applications) in open, unconstrained environments. The underlying algorithms are designed to allow multiple people to enter, interact, and leave the observable scene with no constraints. They comprise continuous localisation of audio objects and its application for spatial audio object coding, detection, and tracking of faces, estimation of head poses and visual focus of attention, detection and localisation of verbal and paralinguistic events, and the association and fusion of these different events. Combined all together, they represent multimodal streams with audio objects and semantic video objects and provide semantic information for stream manipulation systems (like a virtual director). Various experiments have been performed to evaluate the performance of the system. The obtained results demonstrate the effectiveness of the proposed design, the various algorithms, and the benefit of fusing different modalities in this scenario.
11

Zhou, Xiang, Yue Cui, Gang Xu, Hongliang Chen, Jing Zeng, Yutong Li, and Jiangjian Xiao. "Sleep Action Recognition Based on Segmentation Strategy." Journal of Imaging 9, no. 3 (March 7, 2023): 60. http://dx.doi.org/10.3390/jimaging9030060.

Abstract:
To address the problems of long-video dependence and the difficulty of fine-grained feature extraction in recognizing the sleeping behavior of personnel in security-monitored scenes, this paper proposes a sleeping-behavior recognition algorithm for surveillance data based on a time-series convolutional network. ResNet50 is selected as the backbone network, and a self-attention coding layer is used to extract rich contextual semantic information; a segment-level feature fusion module is then constructed to enhance the transmission of important information in the segment feature sequence through the network, and a long-term memory network models the entire video in the time dimension to improve behavior detection. This paper constructs a dataset of sleeping behavior under security monitoring comprising about 2800 single-person target videos covering the two behaviors. The experimental results show that the detection accuracy of the proposed network model is significantly improved on the sleeping-post dataset, up to 6.69% higher than the benchmark network. Compared with other network models, the proposed algorithm improves performance to varying degrees and has good application value.
12

Riaz, Waqar, Gao Chenqiang, Abdullah Azeem, Saifullah, Jamshaid Allah Bux, and Asif Ullah. "Traffic Anomaly Prediction System Using Predictive Network." Remote Sensing 14, no. 3 (January 18, 2022): 447. http://dx.doi.org/10.3390/rs14030447.

Abstract:
Anomaly anticipation in traffic scenarios is one of the primary challenges in action recognition. Greater accuracy can be obtained by using semantic details and motion information along with the input frames. Most state-of-the-art models extract semantic details and pre-computed optical flow from RGB frames and combine them using deep neural networks, but many previous models fail to extract motion information from pre-processed optical flow. Our study shows that optical flow provides better detection of objects in video streams, which is an essential feature for subsequent accident prediction. In addition, we propose a model that utilizes a recurrent neural network which instantaneously propagates predictive coding errors across layers and time steps. By assessing the representations of a pre-trained action recognition model over time for a given video, the use of pre-processed optical flow as input becomes redundant. Based on the final predictive score, we show the effectiveness of our proposed model on three anomaly classes (Speeding Vehicle, Vehicle Accident, and Close Merging Vehicle) from the state-of-the-art KITTI, D2City and HTA datasets.
13

Akbari, Mohammad, Jie Liang, Jingning Han, and Chengjie Tu. "Learned Bi-Resolution Image Coding using Generalized Octave Convolutions." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (May 18, 2021): 6592–99. http://dx.doi.org/10.1609/aaai.v35i8.16816.

Abstract:
Learned image compression has recently shown the potential to outperform the standard codecs. State-of-the-art rate-distortion (R-D) performance has been achieved by context-adaptive entropy coding approaches in which hyperprior and autoregressive models are jointly utilized to effectively capture the spatial dependencies in the latent representations. However, the latents are feature maps of the same spatial resolution in previous works, which contain some redundancies that affect the R-D performance. In this paper, we propose a learned bi-resolution image coding approach that is based on the recently developed octave convolutions to factorize the latents into high and low resolution components. Therefore, the spatial redundancy is reduced, which improves the R-D performance. Novel generalized octave convolution and octave transposed-convolution architectures with internal activation layers are also proposed to preserve more spatial structure of the information. Experimental results show that the proposed scheme outperforms all existing learned methods as well as standard codecs such as the next-generation video coding standard VVC (4:2:0) in both PSNR and MS-SSIM. We also show that the proposed generalized octave convolution can improve the performance of other auto-encoder-based schemes such as semantic segmentation and image denoising.
14

Тимочко, О. І., В. В. Ларін, Ю. І. Шевяков, and А. Абдалла. "Investigation of the mechanism for processing predicted frames in the technology of compression of transformed images in computer systems and special purpose networks." Системи обробки інформації, no. 4 (163) (October 28, 2020): 87–93. http://dx.doi.org/10.30748/soi.2020.163.09.

Abstract:
Analysis of image processing technologies shows that the main practical way to improve the quality of image processing is preliminary analysis followed by subsequent processing, where the latter depends on the result of the former (filtering, sharpening, noise reduction, etc.). However, selecting the method of preliminary analysis, intermediately evaluating results, and choosing the subsequent processing method all involve human decision makers, which is unacceptable for practical implementation in automatic video processing and transmission systems. The main difficulties in working with video are the large volumes of transmitted information and sensitivity to delays in video transmission. Therefore, to eliminate the maximum amount of redundancy when forming the video sequence, three frame types are used, I, P and B, which together form a group of frames. On this basis, the possibility of upgrading coding methods for P-frames is considered, using preliminary identification of block types with subsequent formation of block code structures. As the correlation coefficient between adjacent frames increases, the compression ratio of the differentially represented frame's binary mask increases. A mechanism for processing predicted frames in the technology of compression of transformed images in computer systems and special-purpose networks has been created. It is based on the use of filter masks and on structural complexity indicators of video fragments. It increases the efficiency of contour detection, namely the accuracy of the allocation and localization of the semantic component, by up to 30% with an insignificant increase in total processing time (no more than 5%).
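The abstract's claim that the compression ratio of a P-frame's differentially represented binary mask grows with inter-frame correlation can be illustrated with a toy run-length coder. The mask size, the 12-bit run-length field, and the correlation model below are illustrative assumptions, not the authors' actual scheme.

```python
import numpy as np

rng = np.random.default_rng(2)

def run_length_encode(bits):
    """Run-length code a flat binary mask; returns a list of (value, run)."""
    runs, prev, count = [], bits[0], 1
    for b in bits[1:]:
        if b == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = b, 1
    runs.append((prev, count))
    return runs

def p_frame_ratio(correlation, size=64):
    """Toy compression ratio for a P-frame's differential binary mask.

    `correlation` stands in for the inter-frame correlation coefficient:
    it is the probability that a pixel is unchanged from the reference."""
    changed = (rng.random(size * size) >= correlation).astype(int)
    runs = run_length_encode(changed.tolist())
    coded_bits = len(runs) * (1 + 12)  # 1 value bit + 12-bit run length
    return (size * size) / coded_bits

# Higher inter-frame correlation yields fewer runs, hence better compression.
print(p_frame_ratio(0.99), p_frame_ratio(0.80))
```

Running the sketch shows the ratio collapsing as frames decorrelate, which is the qualitative relationship the abstract describes.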
15

Stanek, Kelly M., Yong D. Park, Anthony M. Murro, Debra Moore-Hill, and Fernando Vale. "A-282 Verbal Fluencies are Differentially Associated with Processing Speed in Temporal Lobe Epilepsy." Archives of Clinical Neuropsychology 37, no. 6 (August 17, 2022): 1433. http://dx.doi.org/10.1093/arclin/acac060.282.

Abstract:
Abstract Objective: The current study sought to better understand the impact of processing speed on pre-surgical assessment of verbal fluencies in temporal lobe epilepsy (TLE) by examining whether processing speed is differentially related to category and letter fluencies across patients with left and right unilateral TLE. Method: The retrospective data sample included 36 adults aged 17-60 (56% female) with cryptogenic TLE and both video EEG evidence for unilateral seizure focus (right TLE n =16; left TLE n=20) and confirmed left hemisphere language dominance, who had undergone pre-surgical neuropsychological evaluation including assessment of category fluency (Animal Naming), letter fluency (FAS), and processing speed (Coding). Primary partial correlation analyses controlled for age and years of education. Results: After controlling for demographic variables in each of the following analyses, there was a statistically significant relationship between Coding and Animal Naming (r=.47, p<.01) but not FAS in the full sample of TLE patients. Similarly in the left TLE group, there was a statistically significant relationship between Coding and Animal Naming (r=.47, p<.05) but not FAS. In the right TLE group, neither Animal Naming nor FAS were statistically significantly related to Coding. Conclusion: Results suggest that processing speed may have a greater influence on measurement of category as opposed to letter fluency in individuals with left TLE but not right TLE. While further research in larger samples is indicated, a better understanding of these relationships is important in assessing the lateralizing/localizing value of semantic and phonemic fluencies in pre-surgical neurocognitive profiles of patients with unilateral TLE with impaired processing speed.
16

Barannik, Volodymyr, S. Shulgin, O. Ignatyev, R. Onyshchenko, Yu Babenko, and Valeriy Barannik. "CONCEPT FUNCTIONAL TRANSFORMATIONS FOR FORMATION OF SYNTACTIC DESCRIPTION DIAGONALS TRANSFORMANT." Information and communication technologies, electronic engineering 3, no. 1 (June 2023): 23–31. http://dx.doi.org/10.23939/ictee2023.01.023.

Abstract:
The article establishes the existence of an imbalance in the provision of video information services over infocommunication networks. It is shown that this imbalance is due to the destructive actions of an opposing side; many of these attacks target energy and telecommunications infrastructure, leading to a significant drop in the bandwidth of the infocommunication network. Accordingly, it is necessary to localize the imbalance between the information load on the infocommunication network and its bandwidth in the face of crisis factors, which requires an integrated approach. The article discusses in detail the direction of creating technologies for additional reduction of the bit load without losing the semantic integrity of video information resources. For such technologies, however, reducing the information load of the network involves a contradiction: on the one hand, the network's information load is reduced, but on the other, the integrity of the video information suffers. A new class of encoding methods is therefore needed, and to build such compression coding technologies a conceptual approach must be developed. A theoretical basis has been created for constructing a technology for encoding transformants in an uneven-diagonal format, taking into account their combinatorial configuration. It is based on a system of transformations outlined as a two-layer compressive encoding transform in an uneven-diagonal spectral space.
17

Liu, Hongxia. "Design of Neural Network Model for Cross-Media Audio and Video Score Recognition Based on Convolutional Neural Network Model." Computational Intelligence and Neuroscience 2022 (June 13, 2022): 1–12. http://dx.doi.org/10.1155/2022/4626867.

Abstract:
In this paper, a residual convolutional neural network is used to extract note features from music score images, addressing the problem of model degradation; multiscale feature fusion then combines feature information from different levels of the same feature map to enhance the model's feature representation ability. A network composed of a bidirectional simple recurrent unit and a chained time-series classification function is used to identify notes, parallelizing a large number of calculations and thereby speeding up training convergence; this also removes the need for strict label alignment in the dataset, reducing the requirements on the data. To address the insufficiency of existing common-subspace cross-modal retrieval methods in mining local consistency within modalities, a cross-modal retrieval method incorporating graph convolution is proposed. The K-nearest-neighbor algorithm is used to construct modal graphs for samples of different modalities; the original features of samples from different modalities are encoded through a symmetric graph convolutional coding network and a symmetric multilayer fully connected coding network, and the encoded features are fused. Intramodal semantic constraints and intermodal modality-invariant constraints are jointly optimized in the common subspace to learn highly locally consistent and semantically consistent common representations for samples from different modalities. The error values of the experimental results illustrate the effect of parameters such as the number of iterations and the number of neurons on the network. To show more precisely that the generated music sequence closely resembles the original, the generated sequence is also framed and spectrograms are produced for both; the accuracy of the experiment is illustrated by comparing these spectrograms, and genre classification predictions are performed on the generated music to show that the network can generate music of different genres.
18

Tong, Chau, Drew Margolin, Rumi Chunara, Jeff Niederdeppe, Teairah Taylor, Natalie Dunbar, and Andy J. King. "Search Term Identification Methods for Computational Health Communication: Word Embedding and Network Approach for Health Content on YouTube." JMIR Medical Informatics 10, no. 8 (August 30, 2022): e37862. http://dx.doi.org/10.2196/37862.

Abstract:
Background Common methods for extracting content in health communication research typically involve using a set of well-established queries, often names of medical procedures or diseases, that are often technical or rarely used in the public discussion of health topics. Although these methods produce high recall (ie, retrieve highly relevant content), they tend to overlook health messages that feature colloquial language and layperson vocabularies on social media. Given how such messages could contain misinformation or obscure content that circumvents official medical concepts, correctly identifying (and analyzing) them is crucial to the study of user-generated health content on social media platforms. Objective Health communication scholars would benefit from a retrieval process that goes beyond the use of standard terminologies as search queries. Motivated by this, this study aims to put forward a search term identification method to improve the retrieval of user-generated health content on social media. We focused on cancer screening tests as a subject and YouTube as a platform case study. Methods We retrieved YouTube videos using cancer screening procedures (colonoscopy, fecal occult blood test, mammogram, and pap test) as seed queries. We then trained word embedding models using text features from these videos to identify the nearest neighbor terms that are semantically similar to cancer screening tests in colloquial language. Retrieving more YouTube videos from the top neighbor terms, we coded a sample of 150 random videos from each term for relevance. We then used text mining to examine the new content retrieved from these videos and network analysis to inspect the relations between the newly retrieved videos and videos from the seed queries. Results The top terms with semantic similarities to cancer screening tests were identified via word embedding models. Text mining analysis showed that the 5 nearest neighbor terms retrieved content that was novel and contextually diverse, beyond the content retrieved from cancer screening concepts alone. Results from network analysis showed that the newly retrieved videos had at least one total degree of connection (sum of indegree and outdegree) with seed videos according to YouTube relatedness measures. Conclusions We demonstrated a retrieval technique to improve recall and minimize precision loss, which can be extended to various health topics on YouTube, a popular video-sharing social media platform. We discussed how health communication scholars can apply the technique to inspect the performance of the retrieval strategy before investing human coding resources and outlined suggestions on how such a technique can be extended to other health contexts.
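The nearest-neighbor step in the retrieval method above (ranking vocabulary terms by embedding similarity to a seed query) can be sketched with cosine similarity over a toy embedding table. The vocabulary and vectors below are fabricated stand-ins for a trained word2vec model, not the study's data.

```python
import numpy as np

# Toy embedding table standing in for a trained word embedding model.
vocab = ["colonoscopy", "endoscopy", "screening", "mammogram", "guitar"]
rng = np.random.default_rng(1)
emb = rng.normal(size=(len(vocab), 8))
emb[1] = emb[0] + 0.1 * rng.normal(size=8)  # "endoscopy" placed near the seed
emb[2] = emb[0] + 0.2 * rng.normal(size=8)  # "screening" a bit farther away

def nearest_neighbors(term, vocab, emb, k=2):
    """Rank the other vocabulary terms by cosine similarity to `term`."""
    v = emb[vocab.index(term)]
    unit = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sims = unit @ (v / np.linalg.norm(v))
    order = [i for i in np.argsort(-sims) if vocab[i] != term]
    return [(vocab[i], float(sims[i])) for i in order[:k]]

print(nearest_neighbors("colonoscopy", vocab, emb))
```

In the study the neighbor terms would then seed a second retrieval round; the sketch only shows the similarity ranking that identifies them.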
19

Alforova, Z. "Trans‑Image in Viktor Sidorenko’ Creative Work." Vìsnik Harkìvsʹkoi deržavnoi akademìi dizajnu ì mistectv 2021, no. 1 (February 2021): 51–56. http://dx.doi.org/10.33625/visnik2021.01.051.

Abstract:
The aim of the article is to investigate the transformation of the image as a morphological unit of modern visual art on the example of the creative work of a Ukrainian artist and researcher of modern visual art Viktor Sidorenko. The research methodology is based on the methods developed in the works of J. Landsdown and S. Schofield, S. Yerokhin and others. The paper presents an attempt to study a new type of image – post-post-classical image, which has an ontologically transgressive character. The research has important implications for understanding the modern morphological changes in the visual sphere, the instrumental system of these changes that ensures the presence of its two mutually antithetical trends: transgressive and synergetic. The author analyzes the publications in which the link between the morphology of the visual sphere and the latest developments in the creation of visual space are explored. The choice of Victor Sidorenko's creative work as a research material is not accidental. It is in the work of this artist that the genesis of the modern post-post-classical image and the transformation of its space as polymorphic can be clearly traced. The attraction to photographicity on the one hand, and author coding, on the other, becomes an ontological feature of V. Sidorenko's authorial construction of pictorial space and in further creative searches. A certain culmination of this stage in the artist's visual work is the already famous full‑scale visual project "The Mill of Time" (2003). It is in this project that V. Sidorenko creates a trans‑image as a space of a new type, in which photography, installation, painting, living objects and video projection create a post-classical type of image. The paper considers Victor Sidorenko's creative work as a vivid example of the genesis of the image, whose ontological feature is its transgressive character. This is an image of the post-post-classical type, whose pictorial space can consist of classical, non‑classical and post-classical images. At the present stage, there have been changes in the strategies of representation of the trans‑image as such. The main strategy of its representation is a polymorphic visual project, which synergistically combines different types of images, bringing their semantic content beyond a stable visual form.
20

Chung, Siyoung, Mark Chong, Jie Sheng Chua, and Jin Cheon Na. "Evolution of corporate reputation during an evolving controversy." Journal of Communication Management 23, no. 1 (February 13, 2019): 52–71. http://dx.doi.org/10.1108/jcom-08-2018-0072.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Purpose: The purpose of this paper is to investigate the evolution of online sentiments toward a company (i.e. Chipotle) during a crisis, and the effects of corporate apology on those sentiments. Design/methodology/approach: The study draws on a very large data set of tweets (over 2.6m) about Chipotle’s food poisoning case (2015–2016). This case was selected because it is widely known, drew attention from various stakeholders and had many dynamics (e.g. multiple outbreaks across different locations). This study employed a supervised machine learning approach. Its sentiment polarity classification and relevance classification consisted of five steps: sampling, labeling, tokenization, augmentation of semantic representation, and the training of supervised classifiers for relevance and sentiment prediction. Findings: The findings show that: the overall sentiment of tweets specific to the crisis was neutral; promotions and marketing communication may not be effective in converting negative sentiments to positive sentiments; a corporate crisis drew public attention and sparked public discussion on social media; while corporate apologies had a positive effect on sentiments, the effect did not last long, as the apologies did not remove public concerns about food safety; and some Twitter users exerted a significant influence on online sentiments through their popular tweets, which were heavily retweeted among Twitter users. Research limitations/implications: Even with multiple training sessions and the use of a voting procedure (i.e. when there was a discrepancy in the coding of a tweet), there were some tweets that could not be accurately coded for sentiment. Aspect-based sentiment analysis and deep learning algorithms can be used to address this limitation in future research. The analysis of the impact of Chipotle’s apologies on sentiment did not test for a direct relationship. Future research could use manual coding to include only specific responses to the corporate apology. 
There was a delay between the time social media users received the news and the time they responded to it. Time delay poses a challenge to the sentiment analysis of Twitter data, as it is difficult to interpret which peak corresponds with which incident(s). This study focused solely on Twitter, which is just one of several social media sites that had content about the crisis. Practical implications: First, companies should use social media as official corporate news channels, frequently update them with any developments about the crisis, and use them proactively. Second, companies in crisis should refrain from marketing efforts. Instead, they should focus on resolving the issue at hand and not attempt to regain a favorable relationship with stakeholders right away. Third, companies can leverage video, images and humor, as well as individuals with large online social networks, to increase the reach and diffusion of their messages. Originality/value: This study is among the first to empirically investigate the dynamics of corporate reputation as it evolves during a crisis, as well as the effects of corporate apology on online sentiments. It is also one of the few studies that employs sentiment analysis using a supervised machine learning method in the area of corporate reputation and communication management. In addition, it offers valuable insights to both researchers and practitioners who wish to utilize big data to understand the online perceptions and behaviors of stakeholders during a corporate crisis.
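The five-step pipeline described in the abstract (sampling, labeling, tokenization, building a semantic representation, training a supervised polarity classifier) can be sketched in a few lines. The Naive Bayes model and the toy tweets below are illustrative stand-ins; the paper does not specify which classifier the authors used.

```python
import math
from collections import Counter, defaultdict

def train_nb(docs, labels):
    """Multinomial Naive Bayes with add-one smoothing over whitespace tokens."""
    word_counts = defaultdict(Counter)   # per-class token counts
    class_counts = Counter(labels)
    for doc, label in zip(docs, labels):
        word_counts[label].update(doc.lower().split())
    vocab = {w for counts in word_counts.values() for w in counts}
    return word_counts, class_counts, vocab

def predict_nb(model, doc):
    """Return the most probable class for a new document."""
    word_counts, class_counts, vocab = model
    total_docs = sum(class_counts.values())
    best_label, best_logprob = None, float("-inf")
    for label, n_docs in class_counts.items():
        logprob = math.log(n_docs / total_docs)            # class prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in doc.lower().split():                   # tokenization
            logprob += math.log((word_counts[label][word] + 1) / denom)
        if logprob > best_logprob:
            best_label, best_logprob = label, logprob
    return best_label

# Steps 1-2: a (tiny, invented) labeled sample of tweets.
tweets = [
    "chipotle made me sick, never again",
    "food poisoning outbreak is scary",
    "free burrito promo, love chipotle",
    "their apology felt sincere, going back",
]
labels = ["negative", "negative", "positive", "positive"]

# Steps 3-5: tokenize, build the bag-of-words representation, train, predict.
model = train_nb(tweets, labels)
print(predict_nb(model, "food poisoning again"))  # → negative
```

The voting procedure mentioned for ambiguous tweets would sit on top of this: several coders (or classifiers) label the same tweet and the majority label wins.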
21

Chebet, Mercy, and Petronilla Mbatha Mathooko. "Communication for Social and Behaviour Change: A Case Study of Puppetry Animation in Kenya." International Journal of Social Science and Humanities Research (IJSSHR) ISSN 2959-7056 (o); 2959-7048 (p) 1, no. 1 (November 3, 2023): 566–89. http://dx.doi.org/10.61108/ijsshr.v1i1.48.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The COVID-19 pandemic has claimed over six million lives worldwide (Ritchie, 2022). In Kenya, more than three hundred thousand lives have been lost (Ministry of Health, 2022). Prevention has been the main focus. This has necessitated behaviour change. The government, non-governmental organisations, and private agencies set out to find Social and Behaviour Change Communication (SBCC) strategies at the start of the pandemic. Different strategies received different types of feedback. One strategy that has been proven time and time again to be effective in mass communication and education while entertaining is puppetry. It was, however, only partially utilised. This research sought to explore communication for social and behaviour change and how it was applied to curb the spread of COVID-19, using a case study of puppetry in Kenya. It attempted to answer the following questions: What types of puppets were used in Kenya for social and behaviour change communication? Which advocacy messages were communicated for social and behaviour change using puppetry? How was puppetry used for social mobilisation for social and behaviour change communication? What factors hindered puppetry's use for social and behaviour change communication? The theoretical base of this research was informed by the health belief model, which provides a framework for designing messaging that targets perceived barriers, benefits, self-efficacy, and threats; the social learning theory, which explains how behaviour is learned through observation; and the social marketing theory, which provides a framework for how behaviour change messages are designed using marketing principles. This study used a qualitative research design. It analysed past studies to provide a framework and then collected current data through observation and key informant interviews for thematic analysis. 
The population chosen for this study was all the puppet shows created, digitally recorded, and aired for SBCC to communicate behaviour change as a prevention measure against COVID-19. For this research, the Dr Pamoja show was purposely chosen as the sample as it met all the criteria for this study. Coding was used for thematic analysis of the collected puppetry video samples from the Dr Pamoja show produced by Project Hand Up to communicate SBCC against COVID-19. The codes were derived using the deductive approach, and the analysis covered both latent and semantic content. The gaps in the data were informed by key informants, including the Dr Pamoja show’s director, an AMREF representative, sixteen (16) puppeteers, a community leader, a health representative, and ten (10) parents. After collecting and analysing the data, findings showed that the main types of puppets used are glove puppets, and puppetry remains an effective tool for social and behavioural change communication in Kenya. It was found to be efficient in communicating advocacy messages through influence, persuasion, and social marketing, and also in social mobilisation at both the community level and national level by promoting health messaging that is personalised, normative, and identity-relevant and relies on people’s connections and sense of accountability. The data also showed that episodes translated into Kiswahili and Kikuyu were more popular than those in other languages. Puppetry is, however, faced with social challenges, such as perception, motivation, and cultural and psychological barriers, as well as production challenges such as financing. The study recommends better utilisation of puppetry for SBCC and incorporation of puppetry in communication by the government, non-governmental organisations, and mainstream media. 
It further recommends that further research be conducted on memory retention of new behaviours learned from puppetry, gender issues in puppetry, and the use of puppetry in other areas such as therapy, play, and education.
22

Karimipour, Amir, and Shahla Sharifi. "An experimental study on deictic verbs and the coding patterns of deixis in Ilami Kurdish: A comparative study." Studia Linguistica Universitatis Iagellonicae Cracoviensis 138, no. 4 (2021): 159–86. http://dx.doi.org/10.4467/20834624sl.21.014.14742.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Conducting a video-based experiment in English, Japanese and Thai, Matsumoto et al. (2017) report that deictic verbs are more frequently used when the motion is not just toward the speaker but also into his/her functional space (i.e. functional HERE of the speaker) defined by limits of interaction and visibility as well as when the motion is accompanied by an interactional behaviour of the Figure such as greeting the speaker. They claim that directional venitive prepositional phrases (henceforth PPs) like toward me do not exhibit this feature, though. This paper aims to reevaluate these proposals (Matsumoto et al. 2017) in Ilami Kurdish (henceforth IK), thereby figuring out whether the functional nature of deictic verbs observed in the three studied languages is also attested in this dialect. In line with the findings reported by Matsumoto et al. (2017), results of this research reveal that the semantics of venitive verbs of motion in IK is spatial and functional at the same time. In other words, these verbs are more often used in the verbal descriptions of the IK participants, when the Figure shares a functional space with the speaker induced by limits of interaction and visibility, and also when he/she smiles at or greets the speaker. Importantly, results show that venitive PPs in IK can be functional in nature or add some functional meaning (in addition to their spatial meaning) to the verb, so that participants utilize venitive adpositions along with the venitive verb to add emphasis on the kind of motion (to be a venitive one) and express that the Figure would be “very close” to the speaker at the end of motion.
23

Ma, Siwei, Junlong Gao, Ruofan Wang, Jianhui Chang, Qi Mao, Zhimeng Huang, and Chuanmin Jia. "Overview of intelligent video coding: from model-based to learning-based approaches." Visual Intelligence 1, no. 1 (August 2, 2023). http://dx.doi.org/10.1007/s44267-023-00018-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Intelligent video coding (IVC), which dates back to the late 1980s with the concept of encoding videos with knowledge and semantics, includes visual content compact representation models and methods enabling structural, detailed descriptions of visual information at different granularity levels (i.e., block, mesh, region, and object) and in different areas. It aims to support and facilitate a wide range of applications, such as visual media coding, content broadcasting, and ubiquitous multimedia computing. We present a high-level overview of the IVC technology from model-based coding (MBC) to learning-based coding (LBC). MBC mainly adopts a manually designed coding scheme to explicitly decompose videos to be coded into blocks or semantic components. Thanks to emerging deep learning technologies such as neural networks and generative models, LBC has become a rising topic in the coding area. In this paper, we first review the classical MBC approaches, followed by the LBC approaches for image and video data. We also discuss and overview our recent attempts at neural coding approaches, which are inspiring for both academic research and industrial implementation. Some critical yet less studied issues are discussed at the end of this paper.
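Both MBC and LBC share a transform-quantize-entropy-code skeleton; only the transform is hand-designed in one case and learned in the other. A toy sketch of that skeleton, where the 1-D Haar pair transform, the quantizer step size, and the sample pixels are all illustrative choices rather than any real codec:

```python
import math
from collections import Counter

def haar_pairs(signal):
    """Average/difference transform over non-overlapping pairs (a toy 'analysis transform')."""
    return [((a + b) / 2, (a - b) / 2) for a, b in zip(signal[::2], signal[1::2])]

def quantize(coeffs, step):
    """Uniform scalar quantization of the transform coefficients."""
    return [round(c / step) for pair in coeffs for c in pair]

def entropy_bits(symbols):
    """Empirical zeroth-order entropy in bits/symbol: a lower bound for the entropy coder."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

pixels = [52, 54, 60, 58, 61, 61, 20, 22]          # one toy scanline
symbols = quantize(haar_pairs(pixels), step=4)
print(symbols, round(entropy_bits(symbols), 2))    # → [13, 0, 15, 0, 15, 0, 5, 0] 1.75
```

In a learning-based codec the `haar_pairs` step would be replaced by a trained neural analysis transform and the entropy estimate by a learned probability model, but the pipeline shape is the same.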
24

Décombas, Marc, Younous Fellah, Fréderic Dufaux, Beatrice Pesquet-popescu, Francois Capman, and Erwann Renan. "Seam carving modeling for semantic video coding in security applications." APSIPA Transactions on Signal and Information Processing 4 (2015). http://dx.doi.org/10.1017/atsip.2015.4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In some security applications, it is important to transmit just enough information to make the right decisions. Traditional video codecs try to maximize the global quality, irrespective of the video content's pertinence for certain tasks. To better maintain the semantics of the scene, some approaches allocate more bitrate to the salient information. In this paper, a semantic video compression scheme based on seam carving is proposed. The idea is to suppress non-salient parts of the video by seam carving. The reduced sequence is encoded with H.264/AVC while the seams are encoded with our approach. The main contributions of this paper are (1) an algorithm that segments the sequence into groups of pictures, depending on the content, (2) a spatio-temporal seam clustering method, (3) an isolated seam discarding technique, improving the seam encoding, (4) a new seam modeling, avoiding geometric distortion and resulting in a better control of the seam shapes, and (5) a new encoder which reduces the overall bit-rate. A full reference object-oriented quality metric is used to assess the performance of the approach. Our approach outperforms traditional H.264/AVC intra encoding with a Bjøntegaard rate improvement between 7.02% and 21.77% while maintaining the quality of the salient objects.
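Seam carving builds on a standard dynamic-programming search for the connected vertical path of minimum cumulative energy. A minimal sketch of that core step, where the toy energy map stands in for the gradient/saliency energy a real system would compute from video frames (this is the textbook algorithm, not the authors' code):

```python
def min_vertical_seam(energy):
    """Return one column index per row for the 8-connected seam of minimum total energy."""
    h, w = len(energy), len(energy[0])
    cost = [row[:] for row in energy]          # cumulative minimum energy table
    for y in range(1, h):
        for x in range(w):
            lo, hi = max(0, x - 1), min(w - 1, x + 1)
            cost[y][x] += min(cost[y - 1][lo:hi + 1])
    # Backtrack from the cheapest bottom cell.
    x = min(range(w), key=lambda i: cost[h - 1][i])
    seam = [x]
    for y in range(h - 1, 0, -1):
        lo, hi = max(0, x - 1), min(w - 1, x + 1)
        x = min(range(lo, hi + 1), key=lambda i: cost[y - 1][i])
        seam.append(x)
    return list(reversed(seam))

energy = [
    [9, 1, 9, 9],
    [9, 9, 1, 9],
    [9, 1, 9, 9],
]
print(min_vertical_seam(energy))  # → [1, 2, 1], threading through the low-energy cells
```

Removing the pixels on that path shrinks the frame by one column while leaving high-energy (salient) regions untouched; the paper's contributions concern how the removed seams are clustered, modeled, and encoded so the decoder can reinsert them.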
25

Sun, Bo, Yong Wu, Jun He, and Lejun Yu. "Structured Coding Based on Semantic Disambiguation for Video Captioning." SSRN Electronic Journal, 2022. http://dx.doi.org/10.2139/ssrn.4174916.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Samarathunga, Prabhath, Yasith Ganearachchi, Thanuj Fernando, Adhuran Jayasingam, Indika Alahapperuma, and Anil Fernando. "A Semantic Communication and VVC Based Hybrid Video Coding System." IEEE Access, 2024, 1. http://dx.doi.org/10.1109/access.2024.3399174.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Geetha, P., and Vasumathi Narayanan. "Facial Expression Analysis for Content-Based Video Retrieval." Journal of Computing and Information Science in Engineering 14, no. 4 (September 1, 2014). http://dx.doi.org/10.1115/1.4027885.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In this work, we propose a technique for facial expression recognition to bridge the semantic gap among the features that can be extracted in a content-based video retrieval system. The paper aims to provide accurate and reliable facial expression recognition of a dominant person in video frames using deterministic binary cellular automata (DBCA). Both geometric and appearance-based features are used. Efficient dimension reduction techniques for face detection and recognition are applied. Using the facial action coding system (FACS), one can code automatically nearly any anatomically possible facial expression, deconstructing it into what are called as action units (AUs). By employing two-dimensional deterministic binary cellular automaton systems (2D-DBCA), a scheme is developed to classify the facial expressions representing various emotions to retrieve video scenes/shots. Extensive experiments on Cohn–Kanade database, Yale database, and large movie videos show the superiority of the proposed method, in comparison with support vector machines (SVMs), hidden Markov models (HMMs), and neural network (NN) classifiers.
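The classifier above is built on 2-D deterministic binary cellular automata. As a generic illustration of that substrate only (the update rule here is an assumption for demonstration, not the paper's classification rule), one synchronous step where each cell becomes the XOR of its von Neumann neighbourhood, with fixed zero boundaries:

```python
def step(grid):
    """One synchronous update of a 2-D deterministic binary CA (XOR of self + 4 neighbours)."""
    h, w = len(grid), len(grid[0])

    def cell(y, x):
        # Fixed zero boundary condition outside the grid.
        return grid[y][x] if 0 <= y < h and 0 <= x < w else 0

    return [
        [cell(y, x) ^ cell(y - 1, x) ^ cell(y + 1, x) ^ cell(y, x - 1) ^ cell(y, x + 1)
         for x in range(w)]
        for y in range(h)
    ]

grid = [[0, 1, 0],
        [0, 0, 0],
        [0, 0, 0]]
print(step(grid))  # → [[1, 1, 1], [0, 1, 0], [0, 0, 0]]
```

Determinism is the property the paper relies on: the same input pattern (here, a feature vector mapped onto the grid) always evolves to the same state, so trajectories can be associated with expression classes.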
28

Oksiuk, Alexander, Natalia Korolyova, Volodymyr Barannik, Dmytro Zhuikov, Yuriy Babenko, and Roman Puhachov. "CREATION OF METHODOLOGICAL BASIS FOR IDENTIFICATION OF SIGNIFICANT SEGMENTS FROM THE POINT OF VIEW OF PRESERVATION OF SEMANTIC INTEGRITY OF A VIDEO RESOURCE." Visnyk Universytetu “Ukraina”, 2019. http://dx.doi.org/10.36994/2707-4110-2019-2-23-28.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The necessity of further improving technologies for effective syntactic coding of video information resources, to ensure their information security in critical-infrastructure systems, is substantiated. It is shown that one required stage is the identification of video-frame segments by the degree of their saturation with structural elements. This creates conditions for a further decrease in bit volume and, consequently, an increase in the availability and integrity of the information resources. The paper develops the identification of significant segments, from the standpoint of preserving the semantic integrity of the video resource, based on a system of decision rules that draw on the structural and statistical properties of microsegments in the brightness component of the video frame's colour representation. A system of rules is built to identify segments of the video according to their degree of importance for maintaining the necessary level of integrity of objects of interest, given the information value of local areas (microsegments) of the video frame, characterized by greater homogeneity of their structural and statistical properties. The basic components of such a rule are substantiated. At the first level, the indicator characterizing the level of structural and statistical saturation of each microsegment is compared with upper and lower threshold values, assigning it to one of three types, characterized respectively by a high, medium, or low level of structural-statistical saturation in the syntactic description. At the second level, decision rules determine the importance of a segment for the semantic integrity of the video resource based on the number of microsegments with each level of structural-statistical saturation. 
It is shown that processing time delays are reduced because identification is carried out only on the brightness component of the colour representation of the video frame, which carries the main information load among the colour components.
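The two-level rule described above can be sketched directly: first compare each microsegment's saturation indicator with a lower and an upper threshold, then decide segment importance from the counts per class. The threshold values and the majority-style decision criterion below are illustrative assumptions, not the paper's calibrated rule.

```python
from collections import Counter

def classify_microsegment(saturation, lower=0.3, upper=0.7):
    """Level 1: map a structural-statistical saturation indicator to low/medium/high."""
    if saturation < lower:
        return "low"
    if saturation > upper:
        return "high"
    return "medium"

def segment_is_significant(saturations, lower=0.3, upper=0.7):
    """Level 2: decide segment importance from the counts of microsegment types."""
    counts = Counter(classify_microsegment(s, lower, upper) for s in saturations)
    # Illustrative criterion: the segment matters for semantic integrity when
    # highly saturated microsegments outnumber low-saturation ones.
    return counts["high"] > counts["low"]

# Saturation indicators for one segment's microsegments (brightness component only).
print(segment_is_significant([0.9, 0.8, 0.5, 0.2]))  # → True
```

Restricting the indicators to the brightness component, as the abstract notes, keeps this per-segment pass cheap.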
29

Wulf, Tim, Daniel Possler, and Johannes Breuer. "Video game genre ((Online)Games)." DOCA - Database of Variables for Content Analysis, March 26, 2021. http://dx.doi.org/10.34778/3f.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The variable 'genre' aims to identify and compare different types of games, mainly in terms of gameplay differences (i.e., rules and players’ possibilities to interact with a game). Genre is usually coded by using external video game databases, such as those published on journalistic websites. Field of application/theoretical foundation: The variable ‘genre’ is often used in content analyses of video games to identify and compare different types of games. Lynch et al. (2016), for example, investigate whether the number of sexualized characters differs between various video game genres (Action, Adventure, Fighting, Platformer, Role-Playing-Game, Shooter). However, the definition and validity of different genre lists are controversially discussed in the literature (e.g., Arsenault, 2009). Most content-analytic studies adopt the value of the genre variable for a given game from an external source. Most commonly, scholars use one or more databases published on journalistic video game websites (www.ign.com; www.gamespot.com; www.giantbomb.com), on Wikipedia or the database of the Entertainment Software Rating Board (www.esrb.org). Most of the genre classifications in these databases are based on gameplay characteristics rather than narrative themes. For example, both Starcraft and Anno 1602 are classified as ‘real-time strategy’ on Wikipedia, regardless of the fact that they have rather different settings (science fiction vs. historic). To ensure that games are classified into a few clear genre categories (some journalistic genre lists are extremely detailed, see Arsenault, 2009), many content analyses define potential values of the genre variable in a first step (see below). For example, while IGN (www.ign.com) currently categorizes games into 27 different genre categories, studies mostly only differentiate between 9–15 genres (see below). In a second step, the appropriate value of the variable for a given game is coded based on the external sources. 
Additionally, rules need to be developed that determine how to deal with potential conflicts. First, if coding is based on multiple sources, it needs to be decided how to deal with potential conflicts between these sources. For example, Haninger and Thompson (2004) report that “the genre most frequently used” (p. 867) was coded in such cases. In contrast, Lynch and colleagues (2016) prioritized entries in the IGN database and only used additional sources (GiantBomb and Wikipedia) if information was lacking. Moreover, scholars need to decide how to deal with multiple categorizations of a given game in the same database (e.g. Anno 1602 is classified as ‘real-time strategy’ and ‘city-building game’ on Wikipedia). Lynch and colleagues (2016), for instance, coded the first genre from their list that was mentioned in the database. Finally, scholars must also ensure that their shortened list of genres (step 1) is consistent with the potentially more detailed classification approach of external databases or develop a scheme that defines the correspondence between these lists. References/combination with other methods of data collection: Scholars may also use survey methods to classify games into homogeneous groups. For example, experts or players could be asked to evaluate several games on multiple dimensions, such as setting and gameplay mechanics. Subsequent statistical cluster analysis (e.g., hierarchical clustering) could be applied to identify homogeneous groups of games. Moreover, games could be clustered on the basis of their textual descriptions, for example, in Wikipedia articles. Automated methods, such as latent semantic analysis, can be used for this purpose (e.g. Ryan et al., 2015). 
Example studies:

Lynch et al., 2016 (reliability not stated)
- Coding material: Entry of a game in the video game database published on the journalistic website IGN; if information was unavailable, the websites GiantBomb and Wikipedia were used
- Measure: Genre
- Operationalization: Predefined list of genres: “action, adventure, casual, children’s entertainment, family entertainment, fighting, flight simulation, horror, platformer, racing, role-playing game (RPG), shooter, sports, strategy, or other/indeterminable” (p. 562)
- Unit(s) of analysis: Game

Haninger & Thompson, 2004 (reliability not stated)
- Coding material: Entry of a game in video game databases published on journalistic websites (IGN, GameSpot, GameFAQs) and the database of the Entertainment Software Rating Board
- Measure: Genre
- Operationalization: Predefined list of genres: “action, adventure, fighting, racing, role-playing, shooting, simulation, sports, strategy, or trivia” (p. 857)
- Unit(s) of analysis: Game

Smith, Lachlan, & Tamborini, 2003 (reliability not stated)
- Coding material: Entry of a game in the video game database of the Entertainment Software Rating Board
- Measure: Genre
- Operationalization: Predefined list of genres: “adventure, flight simulator, fighting, music, role-playing, racing, shooter, sports, or strategy/puzzle” (p. 65)
- Unit(s) of analysis: Game

References
Arsenault, D. (2009). Video game genre, evolution and innovation. Eludamos. Journal for Computer Game Culture, 3(2), 29.
Haninger, K., & Thompson, K. M. (2004). Content and ratings of teen-rated video games. JAMA: The Journal of the American Medical Association, 160(4), 402–410. https://doi.org/10.1001/archpedi.160.4.402
Lynch, T., Tompkins, J. E., van Driel, I. I., & Fritz, N. (2016). Sexy, strong, and secondary: A content analysis of female characters in video games across 31 years. Journal of Communication, 66(4), 564–584. https://doi.org/10.1111/jcom.12237
Ryan, J. O., Kaltman, E., Mateas, M., & Wardrip-Fruin, N. (2015). What we talk about when we talk about games: Bottom-up game studies using natural language processing. Proceedings of the 10th International Conference on the Foundations of Digital Games.
Smith, S. L., Lachlan, K. A., & Tamborini, R. (2003). Popular video games: Quantifying the presentation of violence and its context. Journal of Broadcasting & Electronic Media, 47(1), 58–76. https://doi.org/10.1207/s15506878jobem4701_4
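The coding decisions discussed in this entry, consulting databases in a fixed priority order (Lynch et al., 2016: IGN, then GiantBomb, then Wikipedia) and, within a database, taking the first listed genre that appears in the study's predefined list, amount to a small deterministic procedure. The toy database entries below are invented for illustration:

```python
# Step 1: the study's predefined (shortened) genre list.
GENRES = ["action", "adventure", "shooter", "strategy", "role-playing game"]

def code_genre(game, sources):
    """Step 2: code a game's genre from external databases.

    sources: list of {game: [genre, ...]} dicts in priority order; the first
    genre found that matches the predefined list is coded.
    """
    for database in sources:
        for genre in database.get(game, []):
            if genre in GENRES:
                return genre
    return "other/indeterminable"

# Invented example entries standing in for the IGN and Wikipedia databases.
ign = {"Starcraft": ["real-time strategy", "strategy"]}
wikipedia = {"Anno 1602": ["city-building game", "strategy"]}

print(code_genre("Anno 1602", [ign, wikipedia]))  # → strategy (via the fallback source)
```

Haninger and Thompson's alternative rule (code the genre used most frequently across sources) would replace the priority loop with a frequency count over all matching entries.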
30

Kojima, Kaori, Yoshihisa Hirakawa, Takashi Yamanaka, Satoshi Hirahara, Jiro Okochi, Masafumi Kuzuya, and Hisayuki Miura. "Challenges faced by older people with dementia during the COVID‐19 pandemic as perceived by professionals: a qualitative study with interviews." Psychogeriatrics, May 20, 2024. http://dx.doi.org/10.1111/psyg.13131.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Background: Previous studies have highlighted a decline in the mental health of older adults over the course of the coronavirus disease 2019 (COVID‐19) pandemic. Few studies have determined the possible causes of behavioural and psychological symptoms of dementia during COVID‐19 in a comprehensive manner. We aimed to identify the challenges faced by older adults with dementia during the COVID‐19 pandemic. Methods: This study adopted a qualitative approach to understanding the perceptions of healthcare professionals regarding the negative effects of COVID‐19 on the mental health of people with dementia. Between January and March 2022, the authors conducted individual in‐depth interviews on how COVID‐19 affected the stress levels, care, and self‐determination of people with dementia. Qualitative data from the individual interviews were cleaned to ensure the clarity and readability of the transcripts. The qualitative data were then analyzed by inductive manual coding using a qualitative content analysis approach. The grouping process involved reading and comparing individual labels to cluster similar labels into categories and inductively formulate themes. Results: Qualitative analysis extracted 61 different semantic units that were duplicated. Seven categories were inductively extracted using a grouping process. These were further integrated to extract the following four themes: fear of personal protective equipment (PPE), loneliness, dissatisfaction with behavioural restrictions and limitations of video calls, and family interference with service use. Discussion: People with dementia often faced mental distress during the pandemic owing to preventive measures against COVID‐19, and a lack of awareness and understanding of such preventive measures worsened their distress. They experienced a severe sense of social isolation and loneliness. 
Findings also indicated that families tended to ignore the needs of people with dementia and their decisions and opinions regarding healthcare service use.
31

Boudjadja, Rima, Mohamed Azni, Abdelnasser Dahmani, and Mohamed Nadjib Zennir. "EFFICIENT MOBILE VIDEO TRANSMISSION BASED ON A JOINT CODING SCHEME." Journal of Information and Communication Technology, November 6, 2017. http://dx.doi.org/10.32890/jict2017.16.2.8235.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In this paper, we propose a joint coding design which uses Symbol Forward Error Correction (S-FEC) at the application layer. The purpose of this work is, on the one hand, to minimize the Packet Loss Rate (PLR) and, on the other, to maximize the visual quality of video transmitted over a wireless network (WN). The proposed scheme is founded on an FEC adapted to the semantics of the H.264/AVC video encoding. This mechanism relies on a rate-distortion algorithm controlling the channel code rates under the global rate constraint given by the WN. Based on the data partitioning (DP) tool, both packet type and packet length are taken into account by the proposed optimization mechanism, which leads to unequal error protection (UEP). The performance of the proposed JSCC unequal error control over a wireless network is illustrated by simulations under different channel conditions. The simulation results are then compared with an equal error protection (EEP) scheme.
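The unequal-error-protection idea can be sketched as a budgeted allocation: under a global rate constraint, the more important H.264/AVC data partitions (A: headers and motion vectors, then B, then C) receive lower-rate, i.e. stronger, channel codes. The candidate code rates, bit counts, and the greedy allocation below are illustrative assumptions, not the paper's rate-distortion optimization:

```python
def allocate_uep(partitions, budget, code_rates=(1.0, 2/3, 1/2)):
    """partitions: [(name, source_bits)] ordered most- to least-important.

    Returns {name: channel code rate}; coded bits = source_bits / code_rate,
    and the total over all partitions is kept within the global budget.
    """
    allocation = {name: 1.0 for name, _ in partitions}   # start unprotected

    def total_bits(alloc):
        return sum(bits / alloc[name] for name, bits in partitions)

    for name, _ in partitions:            # strengthen important partitions first
        for rate in code_rates[1:]:       # try progressively stronger codes
            trial = {**allocation, name: rate}
            if total_bits(trial) <= budget:
                allocation = trial
    return allocation

# Toy partition sizes in bits; the global budget forces unequal protection.
parts = [("A", 100), ("B", 300), ("C", 600)]
print(allocate_uep(parts, budget=1300))  # A gets rate 1/2, B gets 2/3, C stays unprotected
```

A real JSCC optimizer would minimize expected distortion given channel statistics rather than greedily filling the budget, but the outcome has the same shape: stronger codes for the semantically critical partitions.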
32

Jin, Xin, Ruoyu Feng, Simeng Sun, Runsen Feng, Tianyu He, and Zhibo Chen. "Semantical video coding: Instill static-dynamic clues into structured bitstream for AI tasks." Journal of Visual Communication and Image Representation, March 2023, 103816. http://dx.doi.org/10.1016/j.jvcir.2023.103816.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Tuuri, Emilia. "Concerning variation in encoding spatial motion: Evidence from Finnish." Nordic Journal of Linguistics, October 11, 2021, 1–22. http://dx.doi.org/10.1017/s0332586521000202.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This article describes variation in the use of frames of reference (FoRs; object-centred, viewpoint-centred, and geocentric, as in Holistic Spatial Semantics) in Finnish descriptions of motion and connects questions of variation to a typological framework. Recent research has described the choice of FoRs as a process with multiple factors. This complexity and controlling for the main variables posited in the literature create the starting point for the current study that explores factors affecting the choice of FoRs in motion situations and within speakers of the same language. The data were elicited from 50 native speakers of Finnish by using video stimuli. The informants were (mostly) formally educated young adults living in urban surroundings. The analysis reveals considerable variation in individual coding strategies, especially in the inclusion of the speaker’s viewpoint. It also considers variation with respect to different types of trajectories and cross-linguistic differences in the resources of spatial reference.
34

Mackenzie, Adrian. "Making Data Flow." M/C Journal 5, no. 4 (August 1, 2002). http://dx.doi.org/10.5204/mcj.1975.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Why has software code become an object of intense interest in several different domains of cultural life? In art (.net art or software art), in Open source software (Linux, Perl, Apache, et cetera (Moody; Himanen)), in tactical media actions (hacking of WEF Melbourne and Nike websites), and more generally, in the significance attributed to coding as work at the pinnacle of contemporary production of information (Negri and Hardt 298), code itself has somehow recently become significant, at least for some subcultures. Why has that happened? At one level, we could say that this happened because informatic interaction (websites, email, chat, online gaming, ecommerce, etc) has become mainstream to media production, organisational practice and indeed, quotidian life in developed and developing countries. As information production moves into the mainstream, working against mainstream control of flows of information means going upstream. For artists, tactical media groups and hackers, code seems to provide a way to, so to speak, reach over the shoulder of mainstream media channels and contest their control of information flows.1 A basic question is: does it?

What code does

We all see content flowing through the networks. Yet the expressive traits of the flows themselves are harder to grapple with, partly because they are largely infrastructural. When media and cultural theory discuss information-network society, cyberculture or new media, questions of flow specificity are usually downplayed in favour of high-level engagement with information as content. Arguably, the heightened attention to code attests to an increasing awareness that power relations are embedded in the generation and control of flow rather than just the meanings or contents that might be transported by flow. In this context, loops provide a really elementary and concrete way to explore how code participates in information flows. Loops structure almost every code object at a basic level. 
The programmed loop, a very mundane construct, can be found in any new media artist's or software engineer's coding toolkit. All programming languages have them. In popular programming and scripting languages such as FORTRAN, C, Pascal, C++, Java, Visual Basic, Perl, Python, JavaScript, ActionScript, etc, an almost identical set of looping constructs is found.2 Working with loops as material and as instrument constitutes an indispensable part of producing code-based objects. On the one hand, the loop is the most basic technical element of code as written text. On the other hand, as a process executed by CPUs, and in ways that are not immediately obvious even to programmers themselves, loops of various kinds underpin the generative potential of code.3 Crucially, code is concerned with operationality rather than meaning (Lash 203). Code does not directly create meaning. It circulates, transforms, and reproduces messages and patterns of widely varying semantic and contextual richness. By definition, flow is something continuous. In the case of information, what flows are not things but patterns which can be rendered perceptible in different ways—as image, text, sound—on screen, display, and speaker. While the patterns become perceptible in a range of different spatio-temporal modes, their circulation is serialised. They are, as we know, composed of sequences of modulations (bits). Loops control the flow of patterns. Lev Manovich writes: programming involves altering the linear flow of data through control structures, such as 'if/then' and 'repeat/while'; the loop is the most elementary of these control structures (Manovich 189). Drawing on these constructs, programming or coding work gains traction in flows.

Interactive looping

Loops also generate flows by multiplying events.
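As a concrete illustration of the near-identical looping constructs the essay mentions, here is a minimal sketch in Python (chosen purely for brevity; any of the languages listed would do), showing a definite loop over a finite set of elements and a 'repeat/while' loop bounded by a test:

```python
# Definite loop: iterates over a finite set of elements, so its
# bounding condition is the length of the sequence itself.
total = 0
for n in [3, 1, 4]:
    total += n          # total ends up as 8

# 'Repeat/while' loop: the test before each pass is the bounding condition.
count = 0
while count < 3:
    count += 1          # count ends up as 3
```

The variable names here are illustrative only; the point is that both forms alter the linear flow of execution through a repeated test, exactly the 'repeat/while' control structure Manovich names.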
The most obvious example of how code loops generate and control flows comes from the graphical user interfaces (GUIs) provided by typical operating systems such as Windows, Mac OS or one of the Linux desktop environments. These operating systems configure the visual space of millions of desktop screens according to heavily branded designs. Basically they all divide the screen into different framing areas—panels, dividing lines, toolbars, frames, windows—and then populate those areas with controls and indicators—buttons, icons, checkboxes, dropdown lists, menus, popup menus. Framing areas hold content—text, tables, images, video. Controls, usually clustered around the edge of the frame, transform the content displayed in the framed areas in many different ways. Visual controls are themselves hooked up via code to physical input devices such as keyboard, mouse, joystick, buttons and trackpad. The highly habituated and embodied experience of interacting with contemporary GUIs consists of moving in and out, within and between different framing areas, using visual controls that respond either to pointing (with the mouse) or keyboard commands to change what is displayed, how it is displayed or indeed to move that content elsewhere (onto disk, across a network). Beneath the highly organised visual space of the GUI lie hundreds if not thousands of loops. The work of coding these interfaces involves making loops, splicing loops together, and nesting loops within loops. At base, the so-called event loop means that the GUI in principle stands ready at any time to accept input from the physical interface devices. Depending on what that input is, it may translate into direct changes within the framed areas (for instance, keystrokes appear in a text field as letters) or changes affecting the controls (for instance, Control-Enter might signal 'send the text as an email').
What we usually understand by interactivity stems from the way that a loop constantly accepts signals from the physical inputs, queues the signals as events, and deals with them one by one as discrete changes in what appears on screen. Within the GUI's basic event loop, many other loops are constantly starting and finishing. They are nested and unnested. They often affect some or other of the dozens of processes running at any one time within the operating system. Sometimes a command coming from the keyboard or a signal arriving from some other peripheral interface (the network interface card, the printer, a scanner, etc) will trigger the execution of a new process, itself composed of manifold loops. Hence loops often transiently interact with each other during execution of code. At base, the GUI shows something important, something that extends well beyond the domain of the GUI per se: the event loop generates and controls information flows at the same time. People type on keyboards or manipulate game controllers. A single keypress or mouse click itself hardly constitutes a flow. Yet the event loop can amplify it into a cascade of thousands of events because it sets other loops in process. What we call information flow springs from the multiplicatory effect of loops.

A typology of looping

Information flows don't come from nowhere. They always go somewhere. Perhaps we could generalise a little from the mundane example of the GUI and say that the generation and control of information flows through loops is itself regulated by bounding conditions. A bounding condition determines the number of times and the sequence of operations carried out by a loop. Bounding conditions often come from outside the machine (interfaces of many different kinds) and from within it (other processes running at the same time, dependent on the operating system architecture and the hardware platform).
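The queue-and-dispatch behaviour described here, and the multiplicatory effect of a single input event, can be sketched in a few lines of Python. All names (`events`, `dispatch`, the event strings) are illustrative inventions, not taken from any real GUI toolkit:

```python
from collections import deque

events = deque(["keypress:a", "keypress:b", "click:send"])  # queued input signals
log = []

def dispatch(event):
    # Deal with one event as a discrete change; some events set
    # further loops in process by enqueueing new events (the cascade).
    log.append(event)
    kind = event.split(":")[0]
    if kind == "click":
        events.extend(["validate", "render"])

# The event loop: accept queued signals and handle them one by one
# until the queue drains.
while events:
    dispatch(events.popleft())

# log is now ['keypress:a', 'keypress:b', 'click:send', 'validate', 'render']
```

Note how one click produces more events than were ever typed: the flow is generated by the loop itself, not merely transported by it.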
Their regulatory role suggests the possibility of classifying loops according to bounding conditions.4 The following classification groups loops by bounding condition and typical location:

- Simple & indefinite: no bounding conditions (event loops in GUIs, servers)
- Simple & definite: bounding conditions determined by a finite set of elements (counting, sorting, input and output)
- Nested & definite: multiple bounding conditions (transforming grid and table structures)
- Recursive: bounded by the depth of possible recursion, in memory or time (searching and sorting of tree or network structures)
- Result controlled: the loop ends when some goal has been reached (goal-seeking algorithms)
- Interactive and indefinite: bounding conditions change during the course of the loop (user interfaces or interaction)

Although it risks simplifying something that is quite intricate in any actually executing process, this classification does stress that the distinguishing feature of loops may well be their bounding conditions. In practical terms, within program code, a bounding condition takes the form of some test carried out before, during or after each iteration of a loop. The bounding conditions for some loops relate to data that the code expects to come from other places—across networks, from the user interface, or some other device. For other loops, the bounding conditions continually emerge in the course of the loop itself—the result of a calculation, finding some result in the course of searching a collection, or receiving some new input in a flow of data from an interface or network connection. Based on the classification, we could suggest that loops not only generate flows, but generate those flows within particular spatio-temporal manifolds. Put less abstractly, if we accept that flows don't come from nowhere, we then need to say what kind of places they do come from. The classification shows that they do not come from homogeneous spaces.
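In code, a bounding condition is literally the test that ends a loop. A brief Python sketch (variable names invented for illustration) contrasts a simple definite bound with a result-controlled one:

```python
# Simple & definite: the test compares a counter against a fixed limit,
# known before the loop starts.
i = 0
while i < 5:
    i += 1              # i ends at 5

# Result controlled: the bound emerges from the computation itself;
# the loop ends when repeated halving reaches the goal value.
x, steps = 100.0, 0
while x > 1.0:
    x /= 2
    steps += 1          # takes 7 halvings: 100 -> 50 -> ... -> 0.78125
```

The first loop's iteration count can be read off its source text; the second's can only be discovered by running it, which is one practical sense in which bounding conditions shape the temporality of a flow.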
In fact they relate to different topologies, to the hugely diverse orderings of signs and gestures within mediatic cultures. To take a mundane example, why has the table become such an important element in the HTML coding of webpages? Clearly tables provide an easy way to organise a page. Tables as classifying and visual ordering devices are nothing new. Along with lists, they have been used for centuries. However, the table as onscreen spatial entity also maps very directly onto a nested loop: the inner loop generates the horizontal row contents; the outer loop places the output of the inner loop in vertical order. As web-designers quickly discovered during the 1990s, HTML tables are rendered quickly by browsers and can easily position different contents—images, headings, text, lines, spaces—in proximity. In short, nested loops can quickly turn a table into a serial flow or quickly render a table out of a serial flow.

Implications

We started with the observation that artists, writers, hackers and media activists are working with code in order to reposition themselves in relation to information flows. Through technical elements such as loops, they reappropriate certain facets of the production of information and communication. Working with these and other elements, they look for different points of entry into the flows, attempting to move upstream of the heavily capitalised sites of mainstream production such as the Windows GUI, eCommerce websites or blockbuster game titles. The proliferation of information objects in music, in visual culture, and in database and net-centred forms of interactivity ranging from computer games to chat protocols, suggests that coding work can trigger powerful shifts in the cultures of circulation. Analysis of loops also suggests that the notion of data or information flow, understood as the continuous gliding of bits through systems of communication, needs revision.
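The mapping from nested loop to HTML table described above can be made concrete. This sketch (data and names invented for illustration) renders a serial flow of cell contents into table markup, with the inner loop generating one row's contents and the outer loop stacking rows in vertical order:

```python
cells = ["a", "b", "c", "d", "e", "f"]  # a serial flow of contents
columns = 3

rows = []
for start in range(0, len(cells), columns):     # outer loop: vertical order
    row_cells = ""
    for cell in cells[start:start + columns]:   # inner loop: one row's contents
        row_cells += f"<td>{cell}</td>"
    rows.append(f"<tr>{row_cells}</tr>")

table = "<table>" + "".join(rows) + "</table>"
# Produces two rows of three cells; running equivalent loops over the
# markup (parsing) would turn the table back into a serial flow.
```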
Rather than code simply controlling flow, code generates flows as well. What might warrant further thought is just how different kinds of bounding conditions generate different spatio-temporal patterns and modes of inclusion within flows. The diversity of loops within information objects implies a variety of topologically complicated places. It would be possible to work through the classification describing how each kind of loop maps into different spatial and temporal orderings. In particular, we might want to focus on how more complicated loops—result controlled, recursive, or interactive and indefinite types—map out more topologically complicated spaces and times. For my purposes, the important point is that bounding conditions not only regulate loops, they bring different kinds of spatio-temporal manifold into the seriality of flow. They imprint spatial and temporal ordering. Here the operationality of code begins to display a generative dimension that goes well beyond merely transporting or communicating content.

Notes

1. At a more theoretical level, for a decade or so fairly abstract notions of virtuality have dominated media and cultural studies approaches to new media. While that domination has been increasingly contested by more fine-grained studies of how the Internet is enmeshed with different places (Miller and Slater), attention to code is justified on the grounds that it constitutes an increasingly important form of expression within information flows.
2. Detailed discussion of these looping constructs can be found in any programming textbook or introductory computer science course, so I will not be going through them in any detail.
3. For instance, the cycles of the clock chip are absolutely irreducible. Virtually all programs implicitly rely on a clock chip to regulate execution of their instructions.
4. A classification can act as a symptomatology, that is, as something that sets out the various signs of the existence of a particular condition (Deleuze 368), in this case, the operationality of code.

References

Appadurai, Arjun. Modernity at Large: Cultural Dimensions of Globalization. Minneapolis: U of Minnesota P, 1996.
Deleuze, Gilles. "The Brain is the Screen: An Interview with Gilles Deleuze." The Brain is the Screen: Deleuze and the Philosophy of Cinema. Ed. Gregory Flaxman. Minneapolis: U of Minnesota P, 2000. 365-68.
Hardt, Michael, and Antonio Negri. Empire. Cambridge, MA: Harvard UP, 2000.
Himanen, Pekka. The Hacker Ethic and the Spirit of the Information Age. London: Secker and Warburg, 2001.
Lash, Scott. Critique of Information. London: Sage, 2002.
Manovich, Lev. "What is Digital Cinema?" The Digital Dialectic: New Essays on New Media. Ed. Peter Lunenfeld. Cambridge, MA: MIT, 1999. 172-92.
Miller, Daniel, and Don Slater. The Internet: An Ethnographic Approach. Oxford: Berg, 2000.
Moody, Glyn. Rebel Code: Linux and the Open Source Revolution. Middlesworth: Penguin, 2001.

Citation reference for this article

MLA Style
Mackenzie, Adrian. "Making Data Flow." M/C: A Journal of Media and Culture 5.4 (2002). [your date of access] <http://www.media-culture.org.au/mc/0208/data.php>.
Chicago Style
Mackenzie, Adrian, "Making Data Flow," M/C: A Journal of Media and Culture 5, no. 4 (2002), <http://www.media-culture.org.au/mc/0208/data.php> ([your date of access]).
APA Style
Mackenzie, Adrian. (2002). Making Data Flow. M/C: A Journal of Media and Culture 5(4). <http://www.media-culture.org.au/mc/0208/data.php> ([your date of access]).
35

Hollerweger, Elisabeth. "Natur literarisch programmieren?" Jahrbuch der Gesellschaft für Kinder- und Jugendliteraturforschung, December 1, 2022, 141–53. http://dx.doi.org/10.21248/gkjf-jb.95.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Programming Nature with Literature? Virtual Environments in Ursula Poznanski's Cryptos

Since the end of the first decade of the twenty-first century, young adult literature has reacted to the increased threats posed by climate change and environmental crises with a kind of 'future nature writing' that generates dystopian future worlds in which encounters with (more) intact nature are either nostalgically recalled, fantastically recreated or medially simulated. The latter is a relatively new phenomenon within the field of young adult literature that this article examines, taking Ursula Poznanski's novel Cryptos (2020) as its main example. Based on Foucault's theory of space, it first identifies the normative attributions and semantic codings associated with the opposing spaces of the world system and the power constellations in which the characters are located within those spaces. In a second step, the focus shifts to the representation of virtual nature, with particular interest in the literary techniques that initiate immersion on the one hand and reflection on the other. The aim of this two-step approach is to identify the specifics of literary nature programming and to develop categories for analysis. On this basis, a comparison between Cryptos, Hikikomori (2012) by Kevin Kuhn and Ready Player One (2017) by Ernest Cline exposes similarities and differences in the literary programming of nature. Finally, the relationship between the literary and the medial reference systems, as well as the mechanisms of programmed nature in literature and video games, is examined with reference to Rajewsky's theory of intermediality, so that different facets of writing virtual nature are taken into account.
