Academic literature on the topic 'Video transformation'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Video transformation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Video transformation"

1

Wang, Ning, Wengang Zhou, and Houqiang Li. "Contrastive Transformation for Self-supervised Correspondence Learning." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 11 (2021): 10174–82. http://dx.doi.org/10.1609/aaai.v35i11.17220.

Full text
Abstract:
In this paper, we focus on the self-supervised learning of visual correspondence using unlabeled videos in the wild. Our method simultaneously considers intra- and inter-video representation associations for reliable correspondence estimation. The intra-video learning transforms the image contents across frames within a single video via the frame pair-wise affinity. To obtain the discriminative representation for instance-level separation, we go beyond the intra-video analysis and construct the inter-video affinity to facilitate the contrastive transformation across different videos. By forcing the transformation consistency between intra- and inter-video levels, the fine-grained correspondence associations are well preserved and the instance-level feature discrimination is effectively reinforced. Our simple framework outperforms the recent self-supervised correspondence methods on a range of visual tasks including video object tracking (VOT), video object segmentation (VOS), pose keypoint tracking, etc. It is worth mentioning that our method also surpasses the fully-supervised affinity representation (e.g., ResNet) and performs competitively against the recent fully-supervised algorithms designed for the specific tasks (e.g., VOT and VOS).
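As an illustrative aside, the intra-video transformation this abstract describes can be sketched as an affinity-weighted copy of one frame's content from another. All shapes, the temperature value, and the toy data below are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

# Features are (channels x pixels); each row of the softmax affinity says
# which pixels of frame 1 best explain one pixel of frame 2.
def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def pairwise_affinity(f1, f2, temperature=0.07):
    # f1, f2: (C, N) L2-normalized feature maps of two frames
    return softmax((f2.T @ f1) / temperature, axis=1)   # (N, N), rows sum to 1

def transform(content1, affinity):
    # rebuild frame 2's content as an affinity-weighted mix of frame 1's
    return affinity @ content1                          # (N, D)

rng = np.random.default_rng(2)
f1 = rng.standard_normal((16, 10))
f1 /= np.linalg.norm(f1, axis=0, keepdims=True)
f2 = f1[:, ::-1]                    # toy frame 2: frame 1 with pixels reversed
A = pairwise_affinity(f1, f2)
rebuilt = transform(f1.T, A)        # approximately recovers f2.T
```

Because each pixel of the toy second frame is an exact copy of a pixel in the first, each affinity row peaks at the matching position, which is the "frame pair-wise affinity" idea in miniature.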
APA, Harvard, Vancouver, ISO, and other styles
2

Mohammed, Dhrgham Hani, and Laith Ali Abdul-Rahaim. "A Proposed of Multimedia Compression System Using Three - Dimensional Transformation." Webology 18, SI05 (2021): 816–31. http://dx.doi.org/10.14704/web/v18si05/web18264.

Full text
Abstract:
Video compression has become especially important with the increase in data transmitted over transmission channels; the size of videos must be reduced without affecting their quality. This is done by cutting the video stream into frames of specific lengths and converting them into a three-dimensional matrix. The proposed compression scheme uses the traditional red-green-blue color space representation and applies a three-dimensional discrete Fourier transform (3D-DFT) or three-dimensional discrete wavelet transform (3D-DWT) to the signal matrix after the video stream has been converted to three-dimensional matrices. The resulting transform coefficients are encoded using the EZW encoder algorithm. The performance of the proposed video compression system is tested against three main criteria: compression ratio (CR), peak signal-to-noise ratio (PSNR), and processing time (PT). Experiments showed high compression efficiency for videos using the proposed technique at the required bit rate. The 3D discrete wavelet transform offers a high frame rate with natural spatial resolution and scalability in visual and spatial resolution, besides advantages in quality, complexity, low power, high throughput, low latency, and minimal storage requirements compared with current conventional systems. All proposed systems were implemented using MATLAB R2020b.
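As a rough illustration of the transform step only (the EZW entropy-coding stage described in the abstract is not reproduced, and the clip shape, keep ratio, and thresholding scheme below are assumptions), a short clip can be treated as a 3D array and compressed by keeping only the largest 3D-DFT coefficients:

```python
import numpy as np

def compress_3d_dft(frames, keep_ratio=0.05):
    coeffs = np.fft.fftn(frames)              # 3D discrete Fourier transform
    flat = np.abs(coeffs).ravel()
    k = max(1, int(keep_ratio * flat.size))   # number of coefficients to keep
    threshold = np.partition(flat, -k)[-k]    # magnitude of the k-th largest
    mask = np.abs(coeffs) >= threshold
    return coeffs * mask                      # sparse coefficient volume

def reconstruct(coeffs):
    return np.real(np.fft.ifftn(coeffs))      # inverse 3D DFT

rng = np.random.default_rng(0)
clip = rng.random((8, 16, 16))                # 8 frames of 16x16 "video"
sparse = compress_3d_dft(clip, keep_ratio=0.1)
approx = reconstruct(sparse)
mse = np.mean((clip - approx) ** 2)           # reconstruction error
```

Swapping `np.fft.fftn` for a 3D wavelet decomposition would give the 3D-DWT variant; the coefficient-thresholding idea stays the same.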
3

Anggraini, Sazkia Noor. "Aesthetic Transformation of Video4Change Project Through Postmodernism Studies." International Journal of Creative and Arts Studies 1, no. 1 (2017): 44. http://dx.doi.org/10.24821/ijcas.v1i1.1571.

Full text
Abstract:
Related research on community videos is commonly limited to the social domain. This may be because making a community video is not classified as a work of art but rather as a tool for conveying messages in community-organizing methods. The Video4Change (v4c) project consists of different organizations in four countries: Indonesia, India, America, and Israel. The review of the videos was conducted through textual and visual ethnography. This method was used to specify everything captured by the senses: the visual, the voice (audio), and the symbols in each video. Video as a medium in the postmodern era, once considered an illusion and simulation, now has more authority. Video builds new structures and functions, transformed from mere aesthetic imagery into a practical medium with particular meanings. Video made by ordinary people has been taking control of how society understands images by interpreting them. This research attempts to trace and shift the study of community video toward the perspective of art, the reverse of what has been done before. The video as a tool, however, has particular rules and approaches to effectively deliver 'text' or a message in visual language. This study is expected to be a reference in a cultural context that comes from the artistic perspective. The analysis will shift the aesthetic perspective toward a practical, solution-based one. Beyond that, this study is able to see how the perspective transforms as the commodification of art in society changes.
4

Salim, Fahim A., Fasih Haider, Saturnino Luz, and Owen Conlan. "Automatic Transformation of a Video Using Multimodal Information for an Engaging Exploration Experience." Applied Sciences 10, no. 9 (2020): 3056. http://dx.doi.org/10.3390/app10093056.

Full text
Abstract:
Exploring the content of a video is typically inefficient due to the linear streamed nature of its media and the lack of interactivity. While different approaches have been proposed for enhancing the exploration experience of video content, the general view of video content has remained basically the same, that is, a continuous stream of images. It is our contention that such a conservative view on video limits its potential value as a content source. This paper presents An Alternative Representation of Video via feature Extraction (RAAVE), a novel approach to transform videos from a linear stream of content into an adaptive interactive multimedia document and thereby enhance the exploration potential of video content by providing a more engaging user experience. We explore the idea of viewing video as a diverse multimedia content source, opening new opportunities and applications to explore and consume video content. A modular framework and algorithm for the representation engine and template collection is described. The representation engine based approach is evaluated through development of a prototype system grounded on the design of the proposed approach, allowing users to perform multiple content exploration tasks within a video. The evaluation demonstrated RAAVE’s ability to provide users with a more engaging, efficient and effective experience than a typical multimedia player while performing video exploration tasks.
5

Zhang, Lei, Xiao-Quan Chen, Xin-Yi Kong, and Hua Huang. "Geodesic Video Stabilization in Transformation Space." IEEE Transactions on Image Processing 26, no. 5 (2017): 2219–29. http://dx.doi.org/10.1109/tip.2017.2676354.

Full text
6

Vinod, Malavika, M. Pallavi, Sreelakshmi Ajith, and Padmamala Sriram. "Reversible Data Hiding in Encrypted Video Using Reversible Image Transformation." Journal of Computational and Theoretical Nanoscience 17, no. 1 (2020): 136–40. http://dx.doi.org/10.1166/jctn.2020.8640.

Full text
Abstract:
This work focuses on a method for hiding images in videos such that the secret image can be losslessly recovered from the target image with minimal distortion. This lossless recovery is achieved using Reversible Data Hiding (RDH). To ensure the privacy of the video owner, the target media is encrypted before RDH is applied; Reversible Image Transformation (RIT) is the framework used to ensure this. Audio steganography techniques are used for further encryption. This can be used in cloud technology so that the cloud may add information to the target video without compromising its integrity. RDH also has other possible applications in military, defence, and medical fields.
7

C, Chanjal. "Feature Re-Learning for Video Recommendation." International Journal for Research in Applied Science and Engineering Technology 9, no. VI (2021): 3143–49. http://dx.doi.org/10.22214/ijraset.2021.35350.

Full text
Abstract:
Predicting the relevance between two given videos with respect to their visual content is a key component of content-based video recommendation and retrieval. Applications include video recommendation, video annotation, category or near-duplicate video retrieval, video copy detection, and so on. To estimate video relevance, previous works utilize the textual content of videos and achieve poor performance. The proposed method is feature re-learning for video relevance prediction, focusing on visual content to predict the relevance between two videos. A given feature is projected into a new space by an affine transformation. Different from previous works, which use a standard triplet ranking loss, the projection process is optimized by a novel negative-enhanced triplet ranking loss. To generate more training data, a data augmentation strategy is proposed that works directly on video features. This multi-level augmentation strategy benefits the feature re-learning and can be flexibly applied to frame-level or video-level features. The loss function considers the absolute similarity of positive pairs and supervises the feature re-learning process, and a new formula is given for video relevance computation.
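For illustration, the affine projection and triplet-style training signal the abstract mentions can be sketched as follows. The feature dimensions, the margin, and the plain hinge loss are assumptions; the paper's "negative-enhanced" weighting is not reproduced here:

```python
import numpy as np

# Project a video feature into a new space by an affine transformation
# (W x + b), then score relevance as cosine similarity in that space.
def affine_project(x, W, b):
    return W @ x + b

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def triplet_loss(anchor, pos, neg, margin=0.2):
    # penalize when the positive pair is not ahead of the negative by margin
    return max(0.0, margin - cosine(anchor, pos) + cosine(anchor, neg))

rng = np.random.default_rng(1)
W = 0.1 * rng.standard_normal((64, 128))                 # affine projection
b = np.zeros(64)
anchor_raw = rng.standard_normal(128)
pos_raw = anchor_raw + 0.05 * rng.standard_normal(128)   # near-duplicate video
neg_raw = rng.standard_normal(128)                       # unrelated video
proj_a = affine_project(anchor_raw, W, b)
proj_p = affine_project(pos_raw, W, b)
proj_n = affine_project(neg_raw, W, b)
loss = triplet_loss(proj_a, proj_p, proj_n)
```

In actual training, `W` and `b` would be learned by minimizing the loss over many such triplets; here they are fixed random values just to show the scoring pipeline.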
8

Shi, Henan, Tanfeng Sun, Xinghao Jiang, Yi Dong, and Ke Xu. "A HEVC Video Steganalysis Against DCT/DST-Based Steganography." International Journal of Digital Crime and Forensics 13, no. 3 (2021): 19–33. http://dx.doi.org/10.4018/ijdcf.20210501.oa2.

Full text
Abstract:
The development of video steganography has created a demand for stronger video steganalysis. This paper presents a novel steganalysis against discrete cosine/sine transform (DCT/DST)-based steganography for high-efficiency video coding (HEVC) videos. The new steganalysis employs special frames extraction (SFE) and an accordion unfolding (AU) transformation to target the latest DCT/DST-domain HEVC video steganography algorithms by merging temporal and spatial correlation. The distortion process of DCT/DST-based HEVC steganography is first analyzed; the analysis shows that such steganography mainly causes two kinds of distortion, intra-frame and inter-frame. Finally, to effectively detect these distortions, an innovative method of HEVC steganalysis is proposed that combines SFE features with AU, a temporal-to-spatial transformation. Experimental results show that the proposed steganalysis performs better than other methods.
9

Sowmyayani, S., and P. Arockia Jansi Rani. "An Efficient Temporal Redundancy Transformation for Wavelet Based Video Compression." International Journal of Image and Graphics 16, no. 03 (2016): 1650015. http://dx.doi.org/10.1142/s0219467816500157.

Full text
Abstract:
The objective of this work is to propose a novel idea for transforming the temporal redundancies present in videos. Initially, the frames are divided into sub-blocks. The temporally redundant blocks are then grouped together, generating new frames in which temporal data become spatially redundant. The transformed frames are compressed in the wavelet domain. This new approach greatly reduces computational time, because existing video codecs use block-matching methods for motion estimation, which is a time-consuming process; the proposed method avoids block matching. The existing H.264/AVC takes approximately one hour to compress a video file, whereas the proposed method takes only one minute for the same task. The experimental results show that the proposed method performs better than the existing H.264/AVC standard in terms of time, compression ratio, and PSNR.
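A rough sketch of the grouping idea follows. The block size, the similarity test, and the toy frames are assumptions, not the paper's algorithm; the point is only that blocks unchanged across time need be kept once, turning temporal redundancy into spatial redundancy a wavelet coder can exploit:

```python
import numpy as np

def split_blocks(frame, bs):
    # cut a frame into non-overlapping bs x bs sub-blocks, row-major order
    h, w = frame.shape
    return [frame[i:i + bs, j:j + bs] for i in range(0, h, bs)
                                      for j in range(0, w, bs)]

def group_redundant(frames, bs=4, tol=1e-3):
    per_frame = [split_blocks(f, bs) for f in frames]
    groups = []
    for blocks in zip(*per_frame):             # same position across time
        ref = blocks[0]
        if all(np.abs(b - ref).mean() < tol for b in blocks[1:]):
            groups.append(ref)                 # temporally redundant: keep once
        else:
            groups.extend(blocks)              # changing: keep every frame
    return groups

frames = [np.zeros((8, 8)) for _ in range(3)]  # 3 identical frames...
frames[2][0:4, 0:4] = 1.0                      # ...except one block changes
groups = group_redundant(frames, bs=4)         # 3 static blocks + 3 copies
```

With three 8x8 frames of four blocks each, only the changing block position keeps all three copies, so twelve input blocks collapse to six before wavelet coding.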
10

Andriolo, Umberto. "Nearshore Wave Transformation Domains from Video Imagery." Journal of Marine Science and Engineering 7, no. 6 (2019): 186. http://dx.doi.org/10.3390/jmse7060186.

Full text
Abstract:
Within the nearshore area, three wave transformation domains can be distinguished based on the wave properties: shoaling, surf, and swash zones. The identification of these distinct areas is relevant for understanding nearshore wave propagation properties and physical processes, as these zones can be related, for instance, to different types of sediment transport. This work presents a technique to automatically retrieve the nearshore wave transformation domains from images taken by coastal video monitoring stations. The technique exploits the pixel intensity variation of image acquisitions, and relates the pixel properties to the distinct wave characteristics. This allows the automated description of spatial and temporal extent of shoaling, surf, and swash zones. The methodology was proven to be robust, and capable of spotting the three distinct zones within the nearshore, both cross-shore and along-shore dimensions. The method can support a wide range of coastal studies, such as nearshore hydrodynamics and sediment transport. It can also allow a faster and improved application of existing video-based techniques for wave breaking height and depth-inversion, among others.
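The pixel-intensity idea can be illustrated with a toy sketch. The thresholds, the synthetic stack, and the simple mean/standard-deviation classification below are assumptions standing in for the paper's calibrated method: persistently bright foam marks the surf zone, while intermittent wetting makes swash pixels highly variable over time:

```python
import numpy as np

def temporal_statistics(stack):
    mean_img = stack.mean(axis=0)   # time-averaged brightness per pixel
    std_img = stack.std(axis=0)     # brightness variability per pixel
    return mean_img, std_img

def classify_zones(mean_img, std_img, mean_thr=0.6, std_thr=0.25):
    zones = np.zeros(mean_img.shape, dtype=int)   # 0 = shoaling
    zones[mean_img > mean_thr] = 1                # 1 = surf (bright foam)
    zones[std_img > std_thr] = 2                  # 2 = swash (intermittent)
    return zones

# synthetic time stack: left band dark and steady, middle band persistently
# bright, right band alternating wet/dry
t, h, w = 50, 4, 9
stack = np.full((t, h, w), 0.3)
stack[:, :, 3:6] = 0.9              # persistently bright band ("surf")
stack[::2, :, 6:] = 1.0             # flickering band ("swash")
mean_img, std_img = temporal_statistics(stack)
zones = classify_zones(mean_img, std_img)
```

On this synthetic stack the three cross-shore bands come out as shoaling, surf, and swash respectively, mirroring the automated zone description the abstract reports for real video stations.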
More sources