A selection of scholarly literature on the topic "Features of video information"

Create a citation in APA, MLA, Chicago, Harvard, and other styles

Browse the lists of current articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "Features of video information".

Next to each work in the list of references you will find an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a publication as a .pdf file and read its abstract online, if these are available in the record's metadata.

Journal articles on the topic "Features of video information"

1. Zhang, Chunhu, Yun Tan, Jiaohua Qin, and Xuyu Xiang. "Coverless Video Steganography Based on Audio and Frame Features." Security and Communication Networks 2022 (April 4, 2022): 1–14. http://dx.doi.org/10.1155/2022/1154098.

Abstract:
Coverless steganography based on video has recently become a research hot spot. However, existing schemes usually hide secret information based on a single-frame feature of the video and do not take advantage of its other rich features. In this work, we propose a novel coverless steganography which makes full use of the audio and frame-image features of the video. First, three features are extracted to obtain hash bit sequences: DWT (discrete wavelet transform) coefficients and short-term energy of the audio, and the SIFT (scale-invariant feature transform) feature of frame images. Then, we build a retrieval database according to the relationship between the generated bit sequences and the three features of the corresponding videos. The sender divides the secret information into segments and sends the corresponding retrieval information and carrier videos to the receiver. The receiver can use the retrieval information to recover the secret information from the carrier videos. The experimental results show that the proposed method achieves larger capacity, lower time cost, a higher hiding success rate, and stronger robustness than existing video-based coverless steganography schemes.
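
To make the feature-to-bits step concrete, here is a minimal sketch of deriving a hash bit sequence from one of the three features the abstract names, the short-term energy of the audio track; the window length and the median-threshold bit rule are illustrative assumptions, not the authors' exact construction.

```python
# Illustrative sketch only: hash bits from short-term audio energy.
# Window length and the median-threshold rule are assumptions.
import numpy as np

def short_term_energy_bits(samples: np.ndarray, win: int = 1024) -> np.ndarray:
    """Emit 1 for windows whose energy exceeds the median energy, else 0."""
    n_win = len(samples) // win
    frames = samples[: n_win * win].reshape(n_win, win)
    energy = (frames.astype(np.float64) ** 2).sum(axis=1)
    return (energy > np.median(energy)).astype(np.uint8)

# Example: a synthetic signal with a loud half and a quiet half.
rng = np.random.default_rng(0)
audio = np.concatenate([rng.normal(0, 1.0, 8192), rng.normal(0, 0.1, 8192)])
print(short_term_energy_bits(audio))  # bit sequence indexed in the retrieval database
```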

2. Ramezani, Mohsen, and Farzin Yaghmaee. "Retrieving Human Action by Fusing the Motion Information of Interest Points." International Journal on Artificial Intelligence Tools 27, no. 03 (May 2018): 1850008. http://dx.doi.org/10.1142/s0218213018500082.

Abstract:
In response to the fast propagation of videos on the Internet, Content-Based Video Retrieval (CBVR) was introduced to help users find their desired items. Since most videos concern humans, human action retrieval was introduced as a new topic in CBVR. Most human action retrieval methods represent an action by extracting and describing its local features as more reliable than global ones; however, these methods are complex and not very accurate. In this paper, a low-complexity representation method that more accurately describes extracted local features is proposed. In this method, each video is represented independently from other videos. To this end, the motion information of each extracted feature is described by the directions and sizes of its movements. In this system, the correspondence between the directions and sizes of the movements is used to compare videos. Finally, videos that correspond best with the query video are delivered to the user. Experimental results illustrate that this method can outperform state-of-the-art methods.
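
As a rough illustration of describing interest-point motion by direction and size, the following sketch tracks corners between two frames with OpenCV and builds a size-weighted direction histogram; the corner detector, the tracker, and the 8-bin histogram are assumptions, since the paper's exact descriptor is not reproduced in the abstract.

```python
# Sketch: describe interest-point motion by directions and sizes of movement.
import cv2
import numpy as np

def motion_direction_histogram(prev_gray, next_gray, n_bins=8):
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=7)
    if pts is None:
        return np.zeros(n_bins)
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None)
    good_old = pts[status.flatten() == 1].reshape(-1, 2)
    good_new = new_pts[status.flatten() == 1].reshape(-1, 2)
    dx, dy = (good_new - good_old).T
    angles = np.arctan2(dy, dx)                   # movement directions
    sizes = np.hypot(dx, dy)                      # movement sizes
    hist, _ = np.histogram(angles, bins=n_bins,
                           range=(-np.pi, np.pi), weights=sizes)
    return hist / (hist.sum() + 1e-9)             # size-weighted direction histogram
```

Comparing such histograms between a query video and database videos would then give the correspondence measure the abstract describes.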

3. Waykar, Sanjay B., and C. R. Bharathi. "Multimodal Features and Probability Extended Nearest Neighbor Classification for Content-Based Lecture Video Retrieval." Journal of Intelligent Systems 26, no. 3 (July 26, 2017): 585–99. http://dx.doi.org/10.1515/jisys-2016-0041.

Abstract:
Due to the ever-increasing number of digital lecture libraries and lecture video portals, the challenge of retrieving lecture videos has become a very significant and demanding task in recent years. Accordingly, the literature presents different techniques for video retrieval by considering video contents as well as signal data. Here, we propose a lecture video retrieval system using multimodal features and probability extended nearest neighbor (PENN) classification. There are two modalities utilized for feature extraction. One is textual information, which is determined from the lecture video using optical character recognition. The second modality utilized to preserve video content is local vector pattern. These two modal features are extracted, and the retrieval of videos is performed using the proposed PENN classifier, which is the extension of the extended nearest neighbor classifier, by considering the different weightages for the first-level and second-level neighbors. The performance of the proposed video retrieval is evaluated using precision, recall, and F-measure, which are computed by matching the retrieved videos and the manually classified videos. From the experimentation, we proved that the average precision of the proposed PENN+VQ is 78.3%, which is higher than that of the existing methods.

4. Gong, Xiaohui. "A Personalized Recommendation Method for Short Drama Videos Based on External Index Features." Advances in Meteorology 2022 (April 18, 2022): 1–10. http://dx.doi.org/10.1155/2022/3601956.

Abstract:
Dramatic short videos have quickly gained a huge number of user views in the current short video boom. The information presentation dimension of short videos is higher, and it is easier to be accepted and spread by people. At present, there are a large number of drama short video messages on the Internet. These short video messages have brought serious information overload to users and also brought great challenges to short video operators and video editors. Therefore, how to process short videos quickly has become a research hotspot. The traditional episode recommendation process often adopts collaborative filtering recommendation or content-based recommendation to users, but these methods have certain limitations. Short videos have fast dissemination speed, strong timeliness, and fast hot search speed. These have become the characteristics of short video dissemination. Traditional recommendation methods cannot recommend short videos with high attention and high popularity. To this end, this paper adds external index features to extract short video features and proposes a short video recommendation method based on index features. Using external features to classify and recommend TV series videos, this method can quickly and accurately make recommendations to target customers. Through the experimental analysis, it can be seen that the method in this paper has a good effect.

5. Ye, Qing, Haoxin Zhong, Chang Qu, and Yongmei Zhang. "Human Interaction Recognition Based on Whole-Individual Detection." Sensors 20, no. 8 (April 20, 2020): 2346. http://dx.doi.org/10.3390/s20082346.

Abstract:
Human interaction recognition technology is a hot topic in the field of computer vision, and its application prospects are very extensive. At present, there are many difficulties in human interaction recognition such as the spatial complexity of human interaction, the differences in action characteristics at different time periods, and the complexity of interactive action features. The existence of these problems restricts the improvement of recognition accuracy. To investigate the differences in the action characteristics at different time periods, we propose an improved fusion time-phase feature of the Gaussian model to obtain video keyframes and remove the influence of a large amount of redundant information. Regarding the complexity of interactive action features, we propose a multi-feature fusion network algorithm based on parallel Inception and ResNet. This multi-feature fusion network not only reduces the network parameter quantity, but also improves the network performance; it alleviates the network degradation caused by the increase in network depth and obtains higher classification accuracy. For the spatial complexity of human interaction, we combined the whole video features with the individual video features, making full use of the feature information of the interactive video. A human interaction recognition algorithm based on whole–individual detection is proposed, where the whole video contains the global features of both sides of action, and the individual video contains the individual detail features of a single person. Making full use of the feature information of the whole video and individual videos is the main contribution of this paper to the field of human interaction recognition and the experimental results in the UT dataset (UT–interaction dataset) showed that the accuracy of this method was 91.7%.

6. K, Jayasree, and Sumam Mary Idicula. "Enhanced Video Classification System Using a Block-Based Motion Vector." Information 11, no. 11 (October 24, 2020): 499. http://dx.doi.org/10.3390/info11110499.

Abstract:
The main objective of this work was to design and implement a support vector machine-based classification system to classify video data into predefined classes. Video data has to be structured and indexed for any video classification methodology. Video structure analysis involves shot boundary detection and keyframe extraction. Shot boundary detection is performed using a two-pass block-based adaptive threshold method. The seek spread strategy is used for keyframe extraction. In most video classification methods, the selection of features is important, since the selected features contribute to the efficiency of the classification system, and it is very hard to find out which combination of features is most effective. Feature selection is therefore central to the proposed system. Herein, a support vector machine-based classifier was considered for the classification of video clips. The proposed system was evaluated on six categories of video clips: cartoons, commercials, cricket, football, tennis, and news. When shot-level features and keyframe features, along with motion vectors, were used, 86% correct classification was achieved, which was comparable with existing methods. The research concentrated on feature extraction, where a combination of selected features was given to a classifier to get the best classification performance.
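
The shot-boundary step lends itself to a short sketch. Below is a simplified, single-pass version of block-based frame differencing with an adaptive (mean plus k·std) threshold; the block size and the threshold rule are assumptions, and the paper itself uses a two-pass variant.

```python
# Simplified sketch of block-based shot boundary detection with an
# adaptive threshold; not the paper's exact two-pass method.
import numpy as np

def block_differences(frames, block=16):
    """Mean absolute block-wise difference between consecutive gray frames."""
    diffs = []
    for a, b in zip(frames[:-1], frames[1:]):
        h, w = a.shape[0] // block * block, a.shape[1] // block * block
        d = np.abs(a[:h, :w].astype(np.float64) - b[:h, :w].astype(np.float64))
        blocks = d.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
        diffs.append(blocks.mean())
    return np.array(diffs)

def detect_boundaries(frames, k=3.0):
    d = block_differences(frames)
    threshold = d.mean() + k * d.std()            # adaptive, data-driven threshold
    return np.flatnonzero(d > threshold) + 1      # indices of first frames of new shots
```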

7. Huang, Dong Mei, and Kai Feng. "A Near-Duplicate Video Detection with Temporal Consistency Feature." Advanced Materials Research 798-799 (September 2013): 510–14. http://dx.doi.org/10.4028/www.scientific.net/amr.798-799.510.

Abstract:
There is a wide variety of video data in the information-oriented society, and how to detect the video clips that users want in massive video data quickly and accurately is attracting more researchers. Since existing near-duplicate video detection algorithms extract global or local features directly at the keyframe level, which is very time-consuming, this paper introduces a new cascaded near-duplicate video detection approach that uses a temporal consistency feature at the shot level to preliminarily filter out dissimilar videos before extracting features, and then combines global and local features step by step to obtain the videos that duplicate the query video. We verified the approach on the CC_WEB_VIDEO dataset and compared its performance with a method based on a global color-histogram signature. The results show that the proposed method achieves better detection accuracy, especially for videos with complex motion scenes and great frame changes.
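
The cascade idea, cheap temporal filtering before the costlier histogram comparison, can be sketched as follows; the duration tolerance, the HSV histogram settings, and the similarity threshold are illustrative assumptions.

```python
# Sketch of a two-stage near-duplicate check: temporal filter, then histograms.
import cv2
import numpy as np

def color_signature(frame_bgr):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [16, 8], [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def is_near_duplicate(query, candidate, dur_tol=0.1, sim_thresh=0.9):
    # Stage 1: temporal consistency -- reject videos of very different length.
    if abs(query["duration"] - candidate["duration"]) > dur_tol * query["duration"]:
        return False
    # Stage 2: compare keyframe color signatures only for survivors.
    sims = [cv2.compareHist(q.astype(np.float32), c.astype(np.float32),
                            cv2.HISTCMP_CORREL)
            for q, c in zip(query["signatures"], candidate["signatures"])]
    return np.mean(sims) > sim_thresh
```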

8. Chen, Hanqing, Chunyan Hu, Feifei Lee, Chaowei Lin, Wei Yao, Lu Chen, and Qiu Chen. "A Supervised Video Hashing Method Based on a Deep 3D Convolutional Neural Network for Large-Scale Video Retrieval." Sensors 21, no. 9 (April 29, 2021): 3094. http://dx.doi.org/10.3390/s21093094.

Abstract:
Recently, with the popularization of camera tools such as mobile phones and the rise of various short video platforms, a lot of videos are being uploaded to the Internet at all times, so a video retrieval system with fast retrieval speed and high precision is very necessary. Therefore, content-based video retrieval (CBVR) has aroused the interest of many researchers. A typical CBVR system mainly contains two essential parts: video feature extraction and similarity comparison. Feature extraction from video is very challenging: previous video retrieval methods are mostly based on extracting features from single video frames, resulting in the loss of temporal information in the videos. Hashing methods are extensively used in multimedia information retrieval due to their retrieval efficiency, but most of them are currently only applied to image retrieval. In order to solve these problems in video retrieval, we build an end-to-end framework called deep supervised video hashing (DSVH), which employs a 3D convolutional neural network (CNN) to obtain spatial-temporal features of videos, then trains a set of hash functions by supervised hashing to transfer the video features into binary space and get compact binary codes of videos. Finally, we use triplet loss for network training. We conduct extensive experiments on three public video datasets, UCF-101, JHMDB, and HMDB-51, and the results show that the proposed method has advantages over many state-of-the-art video retrieval methods. Compared with the DVH method, the mAP value on the UCF-101 dataset is improved by 9.3%, and the minimum improvement on the JHMDB dataset is 0.3%. At the same time, we also demonstrate the stability of the algorithm on the HMDB-51 dataset.
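
A compact PyTorch sketch of the overall recipe (3D CNN features, a tanh-relaxed hash layer, triplet-loss training) may help; the layer sizes and the sign-based binarization are placeholders, not the DSVH architecture itself.

```python
# Toy 3D-CNN video hasher trained with a triplet loss; shapes are illustrative.
import torch
import torch.nn as nn

class TinyVideoHasher(nn.Module):
    def __init__(self, n_bits=48):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.hash_layer = nn.Linear(32, n_bits)

    def forward(self, clip):                      # clip: (B, 3, T, H, W)
        h = self.features(clip).flatten(1)
        return torch.tanh(self.hash_layer(h))     # relaxed codes in (-1, 1)

model = TinyVideoHasher()
loss_fn = nn.TripletMarginLoss(margin=1.0)
anchor, positive, negative = (torch.randn(2, 3, 8, 32, 32) for _ in range(3))
loss = loss_fn(model(anchor), model(positive), model(negative))
binary_codes = torch.sign(model(anchor))          # binarize at retrieval time
```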

9. Oliveira, Eva, Teresa Chambel, and Nuno Magalhães Ribeiro. "Sharing Video Emotional Information in the Web." International Journal of Web Portals 5, no. 3 (July 2013): 19–39. http://dx.doi.org/10.4018/ijwp.2013070102.

Abstract:
Video growth over the Internet has changed the way users search, browse, and view video content. Watching movies over the Internet is increasing and becoming a pastime. The possibility of streaming Internet content to TV, along with advances in video compression and video streaming, has made this recent modality of watching movies easy and practical. Web portals, as a worldwide means of multimedia data access, need to have their contents properly classified in order to meet users' needs and expectations. The authors propose a set of semantic descriptors based both on user physiological signals, captured while watching videos, and on low-level video feature extraction. These XML-based descriptors contribute to the creation of automatic affective meta-information that will not only enhance a web-based video recommendation system based on emotional information, but also enhance search and retrieval of videos' affective content from both users' personal classifications and content classifications in the context of a web portal.
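
Since the abstract does not reproduce the descriptor schema, the following is a purely hypothetical example of what such an XML affective descriptor could look like; the element names and value scales are invented for illustration.

```python
# Hypothetical XML affective descriptor combining physiological signals
# and low-level video features; the schema here is invented, not the paper's.
import xml.etree.ElementTree as ET

desc = ET.Element("affectiveDescriptor", movie="example.mp4")
phys = ET.SubElement(desc, "physiological")
ET.SubElement(phys, "arousal").text = "0.72"      # e.g., from skin conductance
ET.SubElement(phys, "valence").text = "-0.31"     # e.g., from heart-rate features
lowlevel = ET.SubElement(desc, "lowLevelFeatures")
ET.SubElement(lowlevel, "brightness").text = "0.45"
ET.SubElement(lowlevel, "motionIntensity").text = "0.81"
print(ET.tostring(desc, encoding="unicode"))
```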

10. Ma, Biao, and Minghui Ji. "Motion Feature Retrieval in Basketball Match Video Based on Multisource Motion Feature Fusion." Advances in Mathematical Physics 2022 (January 11, 2022): 1–10. http://dx.doi.org/10.1155/2022/9965764.

Abstract:
Both the human body and its motion are three-dimensional information, while the traditional feature description method of two-person interaction based on RGB video has a low degree of discrimination due to the lack of depth information. According to the respective advantages and complementary characteristics of RGB video and depth video, a retrieval algorithm based on multisource motion feature fusion is proposed. Firstly, the algorithm uses the combination of spatiotemporal interest points and word bag model to represent the features of RGB video. Then, the directional gradient histogram is used to represent the feature of the depth video frame. The statistical features of key frames are introduced to represent the histogram features of depth video. Finally, the multifeature image fusion algorithm is used to fuse the two video features. The experimental results show that multisource feature fusion can greatly improve the retrieval accuracy of motion features.
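
The depth-side feature and the fusion step can be sketched as follows; the resize target, the default OpenCV HOG window, and the normalized weighted concatenation used for fusion are our assumptions rather than the paper's exact procedure.

```python
# Sketch: oriented-gradient histogram per depth frame, fused with an RGB vector.
import cv2
import numpy as np

hog = cv2.HOGDescriptor()                         # default 64x128 detection window

def depth_frame_feature(depth_frame: np.ndarray) -> np.ndarray:
    img = cv2.resize(depth_frame.astype(np.uint8), (64, 128))
    return hog.compute(img).flatten()

def fuse(rgb_feature: np.ndarray, depth_feature: np.ndarray, w=0.5):
    rgb = rgb_feature / (np.linalg.norm(rgb_feature) + 1e-9)
    dep = depth_feature / (np.linalg.norm(depth_feature) + 1e-9)
    return np.concatenate([w * rgb, (1 - w) * dep])   # fused multisource feature
```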

Dissertations on the topic "Features of video information"

1. Asghar, Muhammad Nabeel. "Feature based dynamic intra-video indexing." Thesis, University of Bedfordshire, 2014. http://hdl.handle.net/10547/338913.

Abstract:
With the advent of digital imagery and its widespread application in all vistas of life, it has become an important component in the world of communication. Video content ranging from broadcast news, sports, personal videos, surveillance, movies and entertainment and similar domains is increasing exponentially in quantity, and it is becoming a challenge to retrieve content of interest from the corpora. This has led to an increased interest amongst researchers to investigate concepts of video structure analysis, feature extraction, content annotation, tagging, video indexing, querying and retrieval to fulfil the requirements. However, most of the previous work is confined within specific domains and constrained by quality, processing and storage capabilities. This thesis presents a novel framework agglomerating the established approaches from feature extraction to browsing in one system of content based video retrieval. The proposed framework significantly fills the gap identified while satisfying the imposed constraints of processing, storage, quality and retrieval times. The output entails a framework, methodology and prototype application to allow the user to efficiently and effectively retrieve content of interest such as age, gender and activity by specifying the relevant query. Experiments have shown plausible results with an average precision and recall of 0.91 and 0.92 respectively for face detection using a Haar wavelets based approach. Precision of age ranges from 0.82 to 0.91 and recall from 0.78 to 0.84. The recognition of gender gives better precision with males (0.89) compared to females, while recall gives a higher value with females (0.92). Activity of the subject has been detected using the Hough transform and classified using a Hidden Markov Model. A comprehensive dataset to support similar studies has also been developed as part of the research process. A Graphical User Interface (GUI) providing a friendly and intuitive interface has been integrated into the developed system to facilitate the retrieval process. The comparison results of the intraclass correlation coefficient (ICC) show that the performance of the system closely resembles that of a human annotator. The performance has been optimised for time and error rate.
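
The face-detection stage of such a pipeline is commonly built on OpenCV's stock Haar cascade, as in the minimal sketch below; the cascade file and detection parameters are OpenCV defaults and are not necessarily those used in the thesis.

```python
# Minimal Haar-cascade face detection; parameters are OpenCV defaults.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Each returned (x, y, w, h) box could then feed age/gender classifiers.
```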

2. Sjöblom, Mattias. "Investigating Gaze Attraction to Bottom-Up Visual Features for Visual Aids in Games." Thesis, Blekinge Tekniska Högskola, Institutionen för kreativa teknologier, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-12862.

Abstract:
Context. Video games usually have visual aids guiding the players in 3D-environments. The designers need to know which visual feature is the most effective in attracting a player's gaze and what features are preferred by players as visual aid. Objectives. This study investigates which feature of the bottom-up visual attention process attracts the gaze faster. Methods. With the use of the Tobii T60 eye tracking system, a user study with 32 participants was conducted in a controlled environment. An experiment was created where each participant looked at a slideshow consisting of 18 pictures with 8 objects on each picture. One object per picture had a bottom-up visual feature applied that made it stand out as different. Video games often have a goal or a task, and to connect the experiment to video games a goal was set. This goal was to find the object with the visual feature applied. The eye tracker measured the gaze while the participant was trying to find the object. A survey to determine which visual feature was preferred by the players was also made. Results. The results showed that colour was the visual feature with the shortest time to attract attention. It was closely followed by intensity, motion and a pulsating highlight. Small size had the longest attraction time. Results also showed that the preferred visual feature for visual aid by the players was intensity and the least preferred was orientation. Conclusions. The results show that visual features with contrast changes in the texture seem to draw attention faster, with colour the fastest, than changes on the object itself. These features were also the most preferred as visual aid by the players, with intensity the most preferred. If this study were done on a larger scale within a 3D-environment, this experiment could show promise to help designers in decisions regarding visual aid in video games.

3. Gurrapu, Chaitanya. "Human Action Recognition In Video Data For Surveillance Applications." Queensland University of Technology, 2004. http://eprints.qut.edu.au/15878/.

Abstract:
Detecting human actions using a camera has many possible applications in the security industry. When a human performs an action, his/her body goes through a signature sequence of poses. To detect these pose changes and hence the activities performed, a pattern recogniser needs to be built into the video system. Due to the temporal nature of the patterns, Hidden Markov Models (HMM), used extensively in speech recognition, were investigated. Initially a gesture recognition system was built using novel features. These features were obtained by approximating the contour of the foreground object with a polygon and extracting the polygon's vertices. A Gaussian Mixture Model (GMM) was fit to the vertices obtained from a few frames and the parameters of the GMM itself were used as features for the HMM. A more practical activity detection system using a more sophisticated foreground segmentation algorithm immune to varying lighting conditions and permanent changes to the foreground was then built. The foreground segmentation algorithm models each of the pixel values using clusters and continually uses incoming pixels to update the cluster parameters. Cast shadows were identified and removed by assuming that shadow regions were less likely to produce strong edges in the image than real objects and that this likelihood further decreases after colour segmentation. Colour segmentation itself was performed by clustering together pixel values in the feature space using a gradient ascent algorithm called mean shift. More robust features in the form of mesh features were also obtained by dividing the bounding box of the binarised object into grid elements and calculating the ratio of foreground to background pixels in each of the grid elements. These features were vector quantized to reduce their dimensionality and the resulting symbols presented as features to the HMM to achieve a recognition rate of 62% for an event involving a person writing on a white board. The recognition rate increased to 80% for the "seen" person sequences, i.e. the sequences of the person used to train the models. With a fixed lighting position, the lack of a shadow removal subsystem improved the detection rate. This is because of the consistent profile of the shadows in both the training and testing sequences due to the fixed lighting positions. Even with a lower recognition rate, the shadow removal subsystem was considered an indispensable part of a practical, generic surveillance system.
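
The mesh feature described above translates almost directly into code; a minimal sketch follows, with the 4x4 grid size as an assumption (the thesis does not state the grid dimensions in this abstract).

```python
# Mesh features: foreground-to-cell ratio over a grid laid on the
# binarised object's bounding box.
import numpy as np

def mesh_features(binary_object: np.ndarray, grid=(4, 4)) -> np.ndarray:
    """binary_object: 2-D array of 0/1 cropped to the object's bounding box."""
    gh, gw = grid
    h = binary_object.shape[0] // gh * gh
    w = binary_object.shape[1] // gw * gw
    cells = binary_object[:h, :w].reshape(gh, h // gh, gw, w // gw)
    return cells.mean(axis=(1, 3)).flatten()      # foreground ratio per grid cell
```

These vectors would then be vector-quantized into symbols for the HMM, as the abstract describes.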

4. Johansson, Henrik. "Video Flow Classification: Feature Based Classification Using the Tree-based Approach." Thesis, Karlstads universitet, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-43012.

Abstract:
This dissertation describes a study which aims to classify video flows from Internet network traffic. In this study, classification is done based on the characteristics of the flow, which include features such as payload sizes and inter-arrival time. The purpose of this is to give an alternative to classifying flows based on the contents of their payload packets, which has become a necessity because of the increase of encrypted flows within Internet network traffic. Data with known class is fed to a machine learning classifier so that a model can be created. This model can then be used for classification of new unknown data. For this study, two different classifiers are used, namely decision trees and random forest. Several tests are completed to attain the best possible models. The results of this dissertation show that classification based on characteristics is possible, and the random forest classifier in particular achieves good accuracies. However, the accuracy of classification of encrypted flows could not be tested within this project.
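
The classification step maps naturally onto scikit-learn; the sketch below trains a random forest on placeholder per-flow features (the feature columns and data here are invented for illustration, and the dissertation's actual feature set is richer).

```python
# Random forest over per-flow features such as payload sizes and
# inter-arrival times; the data below is synthetic placeholder data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Columns: mean payload size, payload-size std, mean inter-arrival time (ms)
X = rng.normal([900, 250, 15], [200, 80, 5], size=(500, 3))
y = rng.integers(0, 2, size=500)                  # 1 = video flow, 0 = other

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("accuracy on held-out flows:", clf.score(X_te, y_te))
```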

5. Копинець, Валеріян Валеріянович. "Особливості оброблення відеоінформації для мультимедійних видань мистецького спрямування". Master's thesis, КПІ ім. Ігоря Сікорського, 2020. https://ela.kpi.ua/handle/123456789/39436.

Abstract:
Explanatory note to the master's dissertation "Features of video information processing for multimedia publications in the field of art"; it contains 72 pages, 22 figures, 24 tables, and 16 literature sources. A video tour in 360-degree format was integrated on the basis of augmented reality technology, and the main features of video information processing were investigated. An electronic and a physical copy (page) of the publication were produced for demonstration. A feasibility study of the project is given, and the payback period and a startup plan are calculated.

6. Šabatka, Pavel. "Vyhledávání informací." Master's thesis, Vysoké učení technické v Brně, Fakulta informačních technologií, 2010. http://www.nusl.cz/ntk/nusl-237213.

Abstract:
The purpose of this thesis is to summarize theoretical knowledge in the field of information retrieval. The document covers mathematical models that can be used in information retrieval algorithms, including how to rank results, and examines the specifics of image and text data. The practical part is an implementation of an algorithm for searching video shots of the TRECVid 2009 dataset based on high-level features. The uniqueness of this algorithm lies in using internet search engines to obtain term similarity. The work contains a detailed description of the implemented algorithm, including the tuning process and the conclusions of its testing.

7. Grinberg, Michael [Verfasser]. "Feature-Based Probabilistic Data Association for Video-Based Multi-Object Tracking / Michael Grinberg." Karlsruhe: KIT Scientific Publishing, 2018. http://www.ksp.kit.edu.

8. Mohanna, Farahnaz. "Content based video database retrieval using shape features." Thesis, University of Surrey, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.250764.

9. Nallabolu, Adithya Reddy. "Unsupervised Learning of Spatiotemporal Features by Video Completion." Thesis, Virginia Tech, 2017. http://hdl.handle.net/10919/79702.

Abstract:
In this work, we present an unsupervised representation learning approach for learning rich spatiotemporal features from videos without the supervision from semantic labels. We propose to learn the spatiotemporal features by training a 3D convolutional neural network (CNN) using video completion as a surrogate task. Using a large collection of unlabeled videos, we train the CNN to predict the missing pixels of a spatiotemporal hole given the remaining parts of the video through minimizing per-pixel reconstruction loss. To achieve good reconstruction results using color videos, the CNN needs to have a certain level of understanding of the scene dynamics and predict plausible, temporally coherent contents. We further explore to jointly reconstruct both color frames and flow fields. By exploiting the statistical temporal structure of images, we show that the learned representations capture meaningful spatiotemporal structures from raw videos. We validate the effectiveness of our approach for CNN pre-training on action recognition and action similarity labeling problems. Our quantitative results demonstrate that our method compares favorably against learning without external data and existing unsupervised learning approaches.
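
The surrogate objective reduces to a per-pixel reconstruction loss computed only inside the spatiotemporal hole; a minimal PyTorch sketch follows, with illustrative tensor shapes that are our assumptions.

```python
# Masked per-pixel reconstruction loss for video completion.
import torch

def hole_reconstruction_loss(pred, target, hole_mask):
    """pred/target: (B, C, T, H, W); hole_mask: 1 inside the hole, 0 elsewhere."""
    diff = (pred - target) ** 2 * hole_mask
    return diff.sum() / hole_mask.sum().clamp(min=1)

pred = torch.rand(1, 3, 8, 64, 64)
target = torch.rand(1, 3, 8, 64, 64)
mask = torch.zeros(1, 1, 8, 64, 64)
mask[:, :, 2:6, 16:48, 16:48] = 1.0               # a spatiotemporal hole
print(hole_reconstruction_loss(pred, target, mask))
```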

10. Černý, Petr. "Vyhledávání ve videu." Master's thesis, Vysoké učení technické v Brně, Fakulta informačních technologií, 2012. http://www.nusl.cz/ntk/nusl-236590.

Abstract:
This thesis summarizes information retrieval theory and the basics of the relational model, and focuses on data indexing in relational database systems, with an emphasis on multimedia data searching. It includes a description of automatic extraction of multimedia content and of multimedia data indexing. The practical part discusses the design and implementation of a solution for improving the efficiency of similarity queries over the multidimensional vectors that describe multimedia data. The final part discusses experiments with this solution.

Books on the topic "Features of video information"

1. Simpkins, Rebecca. Disability information federations: Features and issues. London: Policy Studies Institute, 1993.

2. Li, Ying, and C. C. Jay Kuo. Video Content Analysis Using Multimodal Information. Boston, MA: Springer US, 2003. http://dx.doi.org/10.1007/978-1-4757-3712-7.

3. Schonfeld, Dan. Video search and mining. Berlin: Springer, 2010.

4. Semiconductors, ITT. TZS9EK0021: Single-chip video processor: advance information. Freiburg: ITT Semiconductors, 1994.

5. Choi, Sunah. In media res: Information, contre-information. Rennes: Presses universitaires de Rennes, 2003.

6. Kreis, Münchner. Video Digital -- Quo vadis Fernsehen? Berlin, Heidelberg: Springer Berlin Heidelberg, 2003.

7. R, Wullert J., ed. Electronic information display technologies. Singapore: World Scientific, 1997.

8. Teaching on TV and video. New York: Institute of Electrical and Electronics Engineers, 1995.

9. United States. Federal Highway Administration. Demonstration project No. 85: GIS / Video imagery applications. Cambridge, MA: GIS/Trans, Ltd., 1993.

10. Visual information systems: The power of graphics and video. Boston, Mass.: G.K. Hall, 1988.


Book chapters on the topic "Features of video information"

1. Du, Ruo-Fei, Ren-Jie Liu, Tian-Xiang Wu, and Bao-Liang Lu. "Online Vigilance Analysis Combining Video and Electrooculography Features." In Neural Information Processing, 447–54. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-34500-5_53.

2. Jing, Chenchen, Zhen Dong, Mingtao Pei, and Yunde Jia. "Fusing Appearance Features and Correlation Features for Face Video Retrieval." In Advances in Multimedia Information Processing – PCM 2017, 150–60. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-77383-4_15.

3. Dong, Pei, Zhiyong Wang, Li Zhuo, and Dagan Feng. "Video Summarization with Visual and Semantic Features." In Advances in Multimedia Information Processing - PCM 2010, 203–14. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15702-8_19.

4. Cai, Zhaoquan, Yihui Liang, Hui Hu, and Wei Luo. "Offline Video Object Retrieval Method Based on Color Features." In Communications in Computer and Information Science, 495–505. Singapore: Springer Singapore, 2016. http://dx.doi.org/10.1007/978-981-10-0356-1_53.

5. Zhao, Yiru, Wanfeng Ge, Wenxin Li, Run Wang, Lei Zhao, and Jiang Ming. "Capturing the Persistence of Facial Expression Features for Deepfake Video Detection." In Information and Communications Security, 630–45. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-41579-2_37.

6. Fan, Miao, Jared Vicory, Sarah McGill, Stephen Pizer, and Julian Rosenman. "Features for the Detection of Flat Polyps in Colonoscopy Video." In Communications in Computer and Information Science, 106–17. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-95921-4_12.

7. Ranjan, Rajnish K., Yachana Bhawsar, and Amrita Aman. "Video Summary Based on Visual and Mid-level Semantic Features." In Communications in Computer and Information Science, 182–97. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-8896-6_15.

8. Lin, Chenhan, Fei Long, Junfeng Yao, Ming-Ting Sun, and Jinsong Su. "Learning Spatiotemporal and Geometric Features with ISA for Video-Based Facial Expression Recognition." In Neural Information Processing, 435–44. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-70090-8_45.

9. Guo, Xiaona, Wei Zhong, Long Ye, Li Fang, and Qin Zhang. "Affective Video Content Analysis Based on Two Compact Audio-Visual Features." In Communications in Computer and Information Science, 355–64. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-3341-9_29.

10. Zhou, Minqi, and Vijayan K. Asari. "Speeded-Up Robust Features Based Moving Object Detection on Shaky Video." In Communications in Computer and Information Science, 677–82. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-22786-8_86.


Conference papers on the topic "Features of video information"

1. Hu, Rong, Rongjie Shi, I.-fan Shen, and Wenbin Chen. "Video Stabilization Using Scale-Invariant Features." In 2007 11th International Conference Information Visualization (IV '07). IEEE, 2007. http://dx.doi.org/10.1109/iv.2007.119.

2. Lin, Zhihui, Chun Yuan, and Maomao Li. "HAF-SVG: Hierarchical Stochastic Video Generation with Aligned Features." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/138.

Abstract:
Stochastic video generation methods predict diverse videos based on observed frames, where the main challenge lies in modeling the complex future uncertainty and generating realistic frames. Numerous Recurrent-VAE-based methods have achieved state-of-the-art results. However, on the one hand, the independence assumption among the variables of the approximate posterior limits inference performance. On the other hand, although these methods adopt skip connections between encoder and decoder to utilize multi-level features, they still produce blurry generations due to the spatial misalignment between encoder and decoder features at different time steps. In this paper, we propose a hierarchical recurrent VAE with a feature aligner, which can not only relax the independence assumption in the typical VAE but also use the feature aligner to enable the decoder to obtain aligned spatial information from the last observed frames. The proposed model is named Hierarchical Stochastic Video Generation network with Aligned Features, referred to as HAF-SVG. Experiments on Moving-MNIST, BAIR, and KTH datasets demonstrate that the hierarchical structure is helpful for modeling more accurate future uncertainty, and the feature aligner is beneficial for generating realistic frames. Besides, HAF-SVG exceeds SVG on both prediction accuracy and the quality of generated frames.

3. Tseng, Vincent S., Ja-Hwung Su, and Chih-Jen Chen. "Effective Video Annotation by Mining Visual Features and Speech Features." In Third International Conference on Intelligent Information Hiding and Multimedia Signal Processing. IEEE, 2007. http://dx.doi.org/10.1109/iihmsp.2007.4457526.

4. Kudubayeva, Saule, Dmitriy Ryumin, Ainur Sndetbayeva, and Yuri Krak. "Computing of hands gestures' informative video features." In 2016 XIth International Scientific and Technical Conference "Computer Sciences and Information Technologies" (CSIT). IEEE, 2016. http://dx.doi.org/10.1109/stc-csit.2016.7589867.

5. Xu, Xin, Chunping Liu, Haibin Liu, Yi Ji, and Zhaohui Wang. "Video Description Using Learning Multiple Features." In 2017 International Conference on Information Technology and Intelligent Manufacturing (ITIM 2017). Paris, France: Atlantis Press, 2017. http://dx.doi.org/10.2991/itim-17.2017.34.

6. Widyanto, Adhi, Hertog Nugroho, Prasti Eko Yunanto, and Adi Suheryadi. "Development of Video Features to Detect Spatially Modified Video." In 2017 5th International Conference on Instrumentation, Communications, Information Technology, and Biomedical Engineering (ICICI-BME). IEEE, 2017. http://dx.doi.org/10.1109/icici-bme.2017.8537750.

7. Zhao, Bin, Xuelong Li, and Xiaoqiang Lu. "Video Captioning with Tube Features." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/164.

Abstract:
Visual features play an important role in the video captioning task. Since video content is mainly composed of the activities of salient objects, the caption quality of current approaches is limited because they focus on global frame features while paying less attention to the salient objects. To tackle this problem, in this paper we design an object-aware feature for video captioning, denoted as the tube feature. Firstly, Faster-RCNN is employed to extract object regions in frames, and a tube generation method is developed to connect regions from different frames that belong to the same object. After that, an encoder-decoder architecture is constructed for video caption generation. Specifically, the encoder is a bi-directional LSTM, which is utilized to capture the dynamic information of each tube. The decoder is a single LSTM extended with an attention model, which enables our approach to adaptively attend to the most correlated tubes when generating the caption. We evaluate our approach on two benchmark datasets: MSVD and Charades. The experimental results demonstrate the effectiveness of the tube feature in the video captioning task.
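
Tube generation can be sketched as greedy IoU linking of per-frame detections; the 0.5 threshold and the greedy rule are assumptions, since the paper's exact linking method is not given in the abstract.

```python
# Greedy IoU linking of per-frame boxes into object tubes (illustrative).
import numpy as np

def iou(a, b):
    """Boxes as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    def area(r): return (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def link_tubes(frame_boxes, thresh=0.5):
    """frame_boxes: list over frames, each a list of boxes (e.g., detector output)."""
    tubes = [[b] for b in frame_boxes[0]]
    for boxes in frame_boxes[1:]:
        for tube in tubes:
            scores = [iou(tube[-1], b) for b in boxes]
            if scores and max(scores) >= thresh:
                tube.append(boxes[int(np.argmax(scores))])
    return tubes   # each tube would then feed the bi-directional LSTM encoder
```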

8. Abdullah, Lili Nurliyana, Shahrul Azman Mohd Noah, and Tengku Mohd Tengku Sembok. "Semantic Audiovisual Features in Video Scene Detection." In 2009 International Conference on Information Management and Engineering. IEEE, 2009. http://dx.doi.org/10.1109/icime.2009.157.

9. Milani, S., L. Cuccovillo, M. Tagliasacchi, S. Tubaro, and P. Aichroth. "Video camera identification using audio-visual features." In 2014 5th European Workshop on Visual Information Processing (EUVIP). IEEE, 2014. http://dx.doi.org/10.1109/euvip.2014.7018382.

10. Darabi, Kaveh, Gheorghita Ghinea, Rajkumar Kannan, and Suresh Kannaiyan. "User-based video abstraction using visual features." In 2014 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT). IEEE, 2014. http://dx.doi.org/10.1109/isspit.2014.7300631.


Organization reports on the topic "Features of video information"

1. Rudyk, Myroslava. COMMUNICATIVE FEATURES OF UKRAINIAN VIDEO BLOGS ON THE EXAMPLE OF YOUTUBE-CHANNELS OF «TORONTO TV», YANINA SOKOLOVA, AND OSTAP DROZDOV. Ivan Franko National University of Lviv, March 2021. http://dx.doi.org/10.30970/vjo.2021.50.11111.

Abstract:
The article is devoted to the study of the Ukrainian segment of video blogging as one of the most popular forms of the modern blogosphere. The content and statistics of popular video blogs were studied on the example of YouTube channels of Ukrainian bloggers and well-known journalists. Today we are witnessing the rapid development of technologies that help journalists improve and allow creators of media content to work more quickly and ensure the completeness of information. With the help of Internet communication, new ways of disseminating information have appeared in journalism, and journalists increasingly create their own blogs on various platforms. Video content from the blogosphere has become very popular among the Ukrainian audience on YouTube because the video format is currently the most effective in terms of communication. The YouTube social network partially replaces television, and its variety of thematic content is ably adapted to a wide audience. The paper analyzes Ukrainian blogs run by journalists, who publish content in different formats; presenting various examples of video blogs helps to understand the specifics of Ukrainian blogging at its current stage of development. The videos of popular figures such as Michael Shchur, Yanina Sokolova, and Ostap Drozdov demonstrate the peculiarities of popular Ukrainian video content. For the research, we chose blogs that are currently relevant to Ukrainian YouTube and have their own specifics and uniqueness. The main objective of a blogger is to react quickly to the flow of information, because the rating of a monetized channel depends on it. With the help of statistical data, we can conclude that the Ukrainian audience is interested in a wide range of information. Viewers now value the independent opinion of bloggers and increasingly listen to it. Every important event is covered by bloggers promptly, and the format in which it is presented depends on the individual style of the author and the concept of his or her channel. We can conclude that the video content of the modern blogosphere is developing rapidly and provides the audience with information for different tastes.

2. Rigotti, Christophe, and Mohand-Saïd Hacid. Representing and Reasoning on Conceptual Queries Over Image Databases. Aachen University of Technology, 1999. http://dx.doi.org/10.25368/2022.89.

Abstract:
The problem of content management of multimedia data types (e.g., image, video, graphics) is becoming increasingly important with the development of advanced multimedia applications. Traditional database management systems are inadequate for the handling of such data types. They require new techniques for query formulation, retrieval, evaluation, and navigation. In this paper we develop a knowledge-based framework for modeling and retrieving image data by content. To represent the various aspects of an image object's characteristics, we propose a model which consists of three layers: (1) Feature and Content Layer, intended to contain image visual features such as contours, shapes, etc.; (2) Object Layer, which provides the (conceptual) content dimension of images; and (3) Schema Layer, which contains the structured abstractions of images, i.e., a general schema about the classes of objects represented in the object layer. We propose two abstract languages on the basis of description logics: one for describing knowledge of the object and schema layers, and the other, more expressive, for making queries. Queries can refer to the form dimension (i.e., information of the Feature and Content Layer) or to the content dimension (i.e., information of the Object Layer). These languages employ a variable-free notation, and they are well suited for the design, verification and complexity analysis of algorithms. As the amount of information contained in the previous layers may be huge and operations performed at the Feature and Content Layer are time-consuming, resorting to the use of materialized views to process and optimize queries may be extremely useful. For that, we propose a formal framework for testing containment of a query in a view expressed in our query language. The algorithm we propose is sound and complete and relatively efficient.

3. Lederer, S., D. Posch, C. Timmerer, A. Azgin, W. Liu, C. Mueller, A. Detti, et al. Adaptive Video Streaming over Information-Centric Networking (ICN). Edited by C. Westphal. RFC Editor, August 2016. http://dx.doi.org/10.17487/rfc7933.

4. Jarron, Matthew, Amy R. Cameron, and James Gemmill. Dundee Discoveries Past and Present. University of Dundee, November 2020. http://dx.doi.org/10.20933/100001182.

Abstract:
A series of self-guided walking tours through pioneering scientific research in medicine, biology, forensics, nursing and dentistry from the past to the present. Dundee is now celebrated internationally for its pioneering work in medical sciences, in particular the University of Dundee's ground-breaking research into cancer, diabetes, drug development and surgical techniques. But the city has many more amazing stories of innovation and discovery in medicine and biology, past and present, and the three walking tours presented here will introduce you to some of the most extraordinary. Basic information about each topic is presented on this map, but you will find more in-depth information, images and videos on the accompanying website at uod.ac.uk/DundeeDiscoveriesMap. For younger explorers, we have also included a Scavenger Hunt – look out for the cancer cell symbols on the map and see if you can find the various features listed along the way!

5. Muguira, Maritza Rosa, and Trina Denise Russ. Extracting meaningful information from video sequences for intelligent searches. Office of Scientific and Technical Information (OSTI), February 2005. http://dx.doi.org/10.2172/922070.

6. Medioni, Gerard. 3-D Semantic Information Inference from Airborne Video. Final Report. Office of Scientific and Technical Information (OSTI), June 2018. http://dx.doi.org/10.2172/1457372.

7. Medioni, Gerard. Final Report: 3-D Semantic Information Inference from Airborne Video. Office of Scientific and Technical Information (OSTI), June 2018. http://dx.doi.org/10.2172/1457173.

8. Treisman, Anne. Visual Information Processing in the Perception of Features and Objects. Fort Belvoir, VA: Defense Technical Information Center, January 1988. http://dx.doi.org/10.21236/ada192026.

9. Toutin, Th., and M. Beaudoin. Real-time Extraction of Planimetric and Altimetric Features From Digital Stereo SPOT Data Using a Digital Video Plotter. Natural Resources Canada/ESS/Scientific and Technical Publishing Services, 1995. http://dx.doi.org/10.4095/218531.
