Academic literature on the topic 'Video analysis'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Video analysis.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Video analysis"

1

Luo, Yong, Guochang Zhou, Jianping Li, and Xiao Xiao. "A MOOC Video Viewing Behavior Analysis Algorithm." Mathematical Problems in Engineering 2018 (October 16, 2018): 1–7. http://dx.doi.org/10.1155/2018/7560805.

Full text
Abstract:
MOOCs (massive open online courses) are developing rapidly, but they also face many problems. As a MOOC's most important resource, course videos have a very important influence on learning. This article defines the ratio R (R = average viewing duration / video length), which reflects the popularity of a video. By analyzing the relationships between video length, release time, and R, we found a significant negative linear correlation between video length and R, and between video release time and R. However, when the number of videos is less than a threshold, the release time has less influence on R. This paper presents a video viewing behavior analysis algorithm based on multiple linear regression. The residual independence test proved that the algorithm approximates the data well. It can predict the popularity of similar course videos to help producers optimize video design.
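For illustration only (this is not the authors' code), the ratio R and a one-predictor least-squares fit of the kind the abstract describes can be sketched in plain Python; the viewing figures are invented:

```python
def popularity_ratio(avg_viewing_duration, video_length):
    # R = average viewing duration / video length, as defined in the abstract.
    return avg_viewing_duration / video_length

def fit_line(xs, ys):
    # Ordinary least squares for y = a + b*x.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Invented sample: longer videos tend to be watched for a smaller fraction.
lengths = [5, 10, 15, 20, 30]              # video length, minutes
avg_viewed = [4.5, 7.0, 9.0, 10.0, 12.0]   # average viewing duration, minutes
rs = [popularity_ratio(v, l) for v, l in zip(avg_viewed, lengths)]
a, b = fit_line(lengths, rs)
print(b < 0)  # True: the fitted slope is negative, as the paper reports
```

The paper itself fits a multiple linear regression over several predictors (length and release time); the single-predictor fit above only illustrates the reported negative length-R correlation.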
APA, Harvard, Vancouver, ISO, and other styles
2

Gallis, Michael R. "Artificial Video for Video Analysis." Physics Teacher 48, no. 1 (January 2010): 32–34. http://dx.doi.org/10.1119/1.3274357.

Full text
3

Kumar, Anil, and Umesh Chandra Jaiswal. "Comparative Analysis of Sentiments in Children with Neurodevelopmental Disorders." ADCAIJ: Advances in Distributed Computing and Artificial Intelligence Journal 12 (December 29, 2023): e31469. http://dx.doi.org/10.14201/adcaij.31469.

Full text
Abstract:
In-group favoritism is the tendency of individuals to punish transgressors with varying intensity depending on whether or not they belong to their own group. In this cross-sectional analytical study, we examine matched samples of children with developmental disorders, observing their perspectives on punishment after watching two videos in which rules are broken. Video 1 shows a football player from the viewer's country scoring a handball goal, while in video 2, a foreign player replicates the same action against the host nation. Every contestant viewed both videos, and their responses were then compared. Our proposed methods compare and analyze the data to determine players' opinions using artificial intelligence-based machine learning, such as text analysis and opinion extraction, identifying favorable, unfavorable, or neutral feelings or emotions. In both sets of data, the autism spectrum disorder (ASD) group displayed negative emotions for both video 1 (M = -.1; 90% CI -.41 to .21) and video 2 (t(7) = 1.54, p = .12; M = -.42; 90% CI -.76 to -.08). On the contrary, the groups with attention deficit hyperactivity disorder (ADHD), learning disabilities (LD), and intellectual disability (ID) had a favorable reaction to video 1 but an unfavorable reaction to video 2. Children diagnosed with ASD typically display a consistent adherence to rules, even when those breaking the rules are not part of their group. This behavior may be linked to lower levels of empathy.
4

Tait, D. Margaret. "Video Analysis." Ear and Hearing 14, no. 6 (December 1993): 378–89. http://dx.doi.org/10.1097/00003446-199312000-00002.

Full text
5

Chang, Yuchou, and Hong Lin. "Irrelevant frame removal for scene analysis using video hyperclique pattern and spectrum analysis." Journal of Advanced Computer Science & Technology 5, no. 1 (February 6, 2016): 1. http://dx.doi.org/10.14419/jacst.v5i1.4035.

Full text
Abstract:
Videos often include frames that are irrelevant to the recorded scenes, mainly due to imperfect shooting, abrupt camera movements, or unintended switching of scenes. These irrelevant frames should be removed before semantic analysis of the video scene is performed for video retrieval. An unsupervised approach for automatic removal of irrelevant frames is proposed in this paper. A novel log-spectral representation of color video frames based on Fibonacci lattice quantization has been developed to better describe the global structures of video contents and to measure the similarity of video frames. Hyperclique pattern analysis, used to detect redundant data in textual analysis, is extended to extract relevant frame clusters in color videos. A new strategy using the k-nearest neighbor algorithm is developed for generating a video frame support measure and an h-confidence measure in this hyperclique-pattern-based analysis method. Evaluation of the proposed irrelevant video frame removal algorithm reveals promising results for datasets with irrelevant frames.
6

Jacob, Jaimon, M. Sudheep Elayidom, and V. P. Devassia. "Video content analysis and retrieval system using video storytelling and indexing techniques." International Journal of Electrical and Computer Engineering (IJECE) 10, no. 6 (December 1, 2020): 6019. http://dx.doi.org/10.11591/ijece.v10i6.pp6019-6025.

Full text
Abstract:
Videos are often used for communicating ideas, concepts, experiences, and situations because of the significant advances made in video communication technology, and social media platforms have expedited video usage. At present, a video is recognized using metadata such as the video title, description, and thumbnail. There are situations in which a searcher requires only a video clip on a specific topic from a long video. This paper proposes a novel methodology for the analysis of video content, using video storytelling and indexing techniques to retrieve the intended clip from a long-duration video. The video storytelling technique is used for video content analysis and to produce a description of the video. The description thus created is used to prepare an index using the wormhole algorithm, guaranteeing the search of a keyword of definite length L within the minimum worst-case time. This video index can be used by a video searching algorithm to retrieve the relevant part of the video by virtue of the frequency of the word in the keyword search of the video index. Instead of downloading and transferring a whole video, the user can download or transfer only the specifically needed clip. The network constraints associated with the transfer of videos are thereby considerably addressed.
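As a minimal sketch of the indexing idea (the paper's wormhole algorithm is not reproduced here; the clip descriptions are invented), an inverted index from keywords to clip start times could look like this:

```python
from collections import defaultdict

def build_index(descriptions):
    # descriptions: (clip_start_seconds, generated_text) pairs, e.g. the
    # output of a storytelling step; returns word -> sorted clip start times.
    index = defaultdict(set)
    for start, text in descriptions:
        for word in text.lower().split():
            index[word].add(start)
    return {word: sorted(starts) for word, starts in index.items()}

def search(index, keyword):
    # Return the start times of clips whose description mentions the keyword.
    return index.get(keyword.lower(), [])

clips = [(0, "goal scored by the home team"),
         (120, "corner kick and a near miss"),
         (300, "second goal after a counter attack")]
index = build_index(clips)
print(search(index, "goal"))  # [0, 300]
```

A retrieval step would then fetch only the clips starting at the matching timestamps instead of transferring the whole video.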
7

Al-Tamimi, Abdel-Karim, Raj Jain, and Chakchai So-In. "High-Definition Video Streams Analysis, Modeling, and Prediction." Advances in Multimedia 2012 (2012): 1–13. http://dx.doi.org/10.1155/2012/539396.

Full text
Abstract:
High-definition video streams' unique statistical characteristics and their high bandwidth requirements are considered to be a challenge in both network scheduling and resource allocation fields. In this paper, we introduce an innovative way to model and predict high-definition (HD) video traces encoded with H.264/AVC encoding standard. Our results are based on our compilation of over 50 HD video traces. We show that our model, simplified seasonal ARIMA (SAM), provides an accurate representation for HD videos, and it provides significant improvements in prediction accuracy. Such accuracy is vital to provide better dynamic resource allocation for video traffic. In addition, we provide a statistical analysis of HD videos, including both factor and cluster analysis to support a better understanding of video stream workload characteristics and their impact on network traffic. We discuss our methodology to collect and encode our collection of HD video traces. Our video collection, results, and tools are available for the research community.
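A far simpler stand-in for the paper's simplified seasonal ARIMA (SAM) model, for flavor only: a one-step AR(1) forecaster for frame sizes, fit by least squares on an invented trace:

```python
def fit_ar1(series):
    # Least-squares estimate of phi in (x[t] - mean) ≈ phi * (x[t-1] - mean).
    mean = sum(series) / len(series)
    xs = [x - mean for x in series]
    num = sum(xs[t] * xs[t - 1] for t in range(1, len(xs)))
    den = sum(x * x for x in xs[:-1])
    return mean, num / den

def predict_next(series):
    # One-step-ahead forecast of the next frame size.
    mean, phi = fit_ar1(series)
    return mean + phi * (series[-1] - mean)

# Invented frame sizes in kilobits; a real trace would come from encoder logs.
sizes = [120, 118, 122, 119, 121, 120, 123]
print(predict_next(sizes))
```

The paper's model additionally captures seasonality across groups of pictures and is fit on real H.264/AVC traces; this sketch only shows the shape of one-step traffic prediction.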
8

Cui, Limeng, and Lijuan Chu. "YouTube Videos Related to the Fukushima Nuclear Disaster: Content Analysis." JMIR Public Health and Surveillance 7, no. 6 (June 7, 2021): e26481. http://dx.doi.org/10.2196/26481.

Full text
Abstract:
Background YouTube (Alphabet Incorporated) has become the most popular video-sharing platform in the world. The Fukushima Daiichi Nuclear Power Plant (FDNPP) disaster resulted in public anxiety toward nuclear power and radiation worldwide. YouTube is an important source of information about the FDNPP disaster for the world. Objective This study's objectives were to examine the characteristics of YouTube videos related to the FDNPP disaster, analyze the content and comments of videos with a quantitative method, and determine which features contribute to making a video popular with audiences. This study is the first to examine FDNPP disaster–related videos on YouTube. Methods We searched for the term “Fukushima nuclear disaster” on YouTube on November 2, 2019. The first 60 eligible videos in the relevance, upload date, view count, and rating categories were recorded. Videos that were irrelevant, were non-English, had inappropriate words, were machine synthesized, and were <3 minutes long were excluded. In total, 111 videos met the inclusion criteria. Parameters of the videos, including the number of subscribers, length, the number of days since the video was uploaded, region, video popularity (views, views/day, likes, likes/day, dislikes, dislikes/day, comments, comments/day), the tone of the videos, the top ten comments, affiliation, whether Japanese people participated in the video, whether the video recorder visited Fukushima, whether the video contained theoretical knowledge, and whether the video contained information about the recent situation in Fukushima, were recorded. By using criteria for content and technical design, two evaluators scored videos and grouped them into the useful (score: 11-14), slightly useful (score: 6-10), and useless (score: 0-5) video categories. Results Of the 111 videos, 43 (38.7%) videos were useful, 43 (38.7%) were slightly useful, and 25 (22.5%) were useless. 
Useful videos had good visual and aural effects, provided vivid information on the Fukushima disaster, and had a mean score of 12 (SD 0.9). Useful videos had more views per day (P<.001), likes per day (P<.001), and comments per day (P=.02) than useless and slightly useful videos. The popularity of videos had a significant correlation with clear sound (likes/day: P=.001; comments/day: P=.02), vivid information (likes/day: P<.001; comments/day: P=.007), and understandable content (likes/day: P=.001; comments/day: P=.04). There was no significant difference in likes per day (P=.72) and comments per day (P=.11) between negative and neutral- and mixed-tone videos. Videos about the recent situation in Fukushima had more likes and comments per day. Video recorders who personally visited Fukushima Prefecture had more subscribers and received more views and likes. Conclusions The possible features that made videos popular to the public included video quality, videos made in Fukushima, and information on the recent situation in Fukushima. During risk communication on new forms of media, health institutes should increase publicity and be more approachable to resonate with international audiences.
9

Riudin, Hartini, Kasman Arifin, and Murni Sabilu. "Analysis of project-based learning videos on biology subjects." BIO-INOVED : Jurnal Biologi-Inovasi Pendidikan 4, no. 2 (June 26, 2022): 201. http://dx.doi.org/10.20527/bino.v4i2.12753.

Full text
Abstract:
This study aims to analyze project-based learning videos on biology subjects on YouTube. This is descriptive research with a qualitative approach. The object of the research was real teaching videos about project-based learning in biology subjects on YouTube. The instrument was an observation sheet containing the aspects of the activities carried out in project-based learning, validated by three learning experts. The data analysis technique used is descriptive analysis: the learning videos are observed and evaluated with the instrument, and the data are then processed and classified into categories. The four real teaching videos of project-based learning in biology subjects on YouTube fall into the good and sufficient categories: video 1 received a score of 85 (good category), videos 2 and 3 each scored 71 (good category), and video 4 scored 61 (sufficient category).
10

Kamble, Shailesh D., Dilip Kumar Jang Bahadur Saini, Sachin Jain, Kapil Kumar, Sunil Kumar, and Dharmesh Dhabliya. "A novel approach of surveillance video indexing and retrieval using object detection and tracking." Journal of Interdisciplinary Mathematics 26, no. 3 (2023): 341–50. http://dx.doi.org/10.47974/jim-1665.

Full text
Abstract:
Searching for videos in large databases, i.e. in multimedia applications, is a major challenge. Video indexing is therefore used to quickly locate a particular video in a large database; how quickly a video can be located is a measure of the quality of the index. Still, there is scope for improvement in quickly searching a video in a large database in terms of assigning labels to videos. In computer vision, real-time object detection and tracking is a vast, vibrant, yet intricate and unsettled area. The You Only Look Once (YOLO) algorithm is used to detect objects, and background subtraction is used to track them. In this paper, video indexing using object detection and tracking is performed on a single object in a video. In future work, video indexing can be performed on multiple objects in a video.
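As a rough, self-contained illustration of the tracking side only (plain frame differencing on synthetic grayscale frames, not YOLO and not the paper's pipeline):

```python
def difference_mask(background, frame, threshold=30):
    # Per-pixel background subtraction on grayscale frames (2D lists):
    # 1 where the frame differs from the background by more than threshold.
    return [[1 if abs(f - b) > threshold else 0 for f, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]

background = [[10] * 5 for _ in range(5)]
frame = [row[:] for row in background]
frame[2][3] = 200  # a bright "object" appears at row 2, column 3
mask = difference_mask(background, frame)
print(sum(map(sum, mask)))  # 1: exactly one foreground pixel detected
```

A tracker would then associate the foreground blob across frames; real systems use adaptive background models rather than a single fixed reference frame.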

Dissertations / Theses on the topic "Video analysis"

1

Lidén, Jonas. "Distributed Video Content Analysis." Thesis, Umeå universitet, Institutionen för datavetenskap, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-99062.

Full text
Abstract:
Video Content Analysis (VCA) is usually computationally intense and time consuming. In this thesis the efficiency of VCA is increased by implementing a distributed VCA architecture. Automatic speech recognition is used as a case study to evaluate how the efficiency of VCA can be increased by distributing the workload across several machines. The system is to be run on standard desktop computers and needs to support a variety of operating systems. The developed distributed system is compared to a serial system in use today. The results show increased performance, at the cost of a small increase in error rate. Two types of load balancing algorithms, static and dynamic, are evaluated in order to increase system throughput. It is concluded that the dynamic algorithm outperforms the static algorithm when running on a heterogeneous set of machines, and that the differences are negligible when running on a homogeneous set of machines.
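The static-versus-dynamic comparison in the abstract can be illustrated with a tiny scheduling simulation (a sketch with invented task sizes and machine speeds, not the thesis's implementation):

```python
import heapq

def static_makespan(tasks, speeds):
    # Static: split tasks into equal contiguous chunks, one chunk per machine.
    n = len(speeds)
    chunk = (len(tasks) + n - 1) // n
    return max(sum(tasks[i * chunk:(i + 1) * chunk]) / speed
               for i, speed in enumerate(speeds))

def dynamic_makespan(tasks, speeds):
    # Dynamic: the machine that becomes free first pulls the next task.
    heap = [(0.0, speed) for speed in speeds]  # (time machine is free, speed)
    heapq.heapify(heap)
    for task in tasks:
        free_at, speed = heapq.heappop(heap)
        heapq.heappush(heap, (free_at + task / speed, speed))
    return max(time for time, _ in heap)

tasks = [4.0] * 8        # eight equally sized work units
speeds = [1.0, 4.0]      # a heterogeneous pair: one slow, one fast machine
print(static_makespan(tasks, speeds))   # 16.0
print(dynamic_makespan(tasks, speeds))  # 8.0
```

On this heterogeneous pair the dynamic work queue halves the makespan; with equal machine speeds both policies give the same result, mirroring the thesis's conclusion.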
2

Ren, Reede. "Audio-visual football video analysis, from structure detection to attention analysis." Thesis, Connect to e-thesis. Move to record for print version, 2008. http://theses.gla.ac.uk/77/.

Full text
Abstract:
Thesis (Ph.D.) - University of Glasgow, 2008.
Ph.D. thesis submitted to the Faculty of Information and Mathematical Sciences, Department of Computing Science, University of Glasgow, 2008. Includes bibliographical references. Print version also available.
3

Pérez, Rúa Juan Manuel. "Hierarchical motion-based video analysis with applications to video post-production." Thesis, Rennes 1, 2017. http://www.theses.fr/2017REN1S125/document.

Full text
Abstract:
The manuscript presented here contains the findings and conclusions of the research carried out on dynamic visual scene analysis. To be precise, we consider the ubiquitous monocular-camera computer vision set-up and the natural, unconstrained videos it can produce. In particular, we focus on important problems that are of general interest for the computer vision literature, and of special interest for the film industry, in the context of the video post-production pipeline. The tackled problems can be grouped in two main categories, according to whether they are driven by user interaction or not: user-assisted video processing tools and unsupervised tools for video analysis. This division is rather schematic, but it is in fact related to the ways the proposed methods are used inside the video post-production pipeline. These groups correspond to the main parts of this manuscript, which are subsequently formed by chapters that explain our proposed methods. However, a single thread ties together all of our findings: a hierarchical analysis of motion composition in dynamic scenes. We explain our exact contributions, together with our main motivations and results, in the following sections. We depart from a hypothesis that links the ability to consider a hierarchical structure of scene motion with a deeper level of dynamic scene understanding. This hypothesis is inspired by a plethora of scientific research in biological and psychological vision. More specifically, we refer to the biological vision research that established the presence of motion-related sensory units in the visual cortex. The discovery of these specialized brain units motivated psychological vision researchers to investigate how animal locomotion (obstacle avoidance, path planning, self-localization) and other higher-level tasks are directly influenced by motion-related percepts. Interestingly, the perceptual responses that take place in the visual cortex are activated not only by motion itself, but also by occlusions, dis-occlusions, motion composition, and moving edges. Furthermore, psychological vision research has linked the brain's ability to understand motion composition from visual information to high-level scene understanding such as object segmentation and recognition.
4

Touliatou, Georgia. "Diegetic stories in a video mediation : a narrative analysis of four videos." Thesis, University of Surrey, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.397132.

Full text
5

Park, Dong-Jun. "Video event detection framework on large-scale video data." Diss., University of Iowa, 2011. https://ir.uiowa.edu/etd/2754.

Full text
Abstract:
Detection of events and actions in video entails substantial processing of very large, even open-ended, video streams. Video data presents a unique challenge for the information retrieval community because properly representing video events is challenging. We propose a novel approach to analyze temporal aspects of video data. We consider video data as a sequence of images that form a 3-dimensional spatiotemporal structure, and perform multiview orthographic projection to transform the video data into 2-dimensional representations. The projected views allow a unique way to represent video events and capture the temporal aspect of video data. We extract local salient points from the 2D projection views and perform a detection-via-similarity approach on a wide range of events against real-world surveillance data. We demonstrate that our example-based detection framework is competitive and robust. We also investigate synthetic-example-driven retrieval as a basis for query-by-example.
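A toy rendition of the multiview orthographic projection idea on a synthetic clip (one possible view only; the dissertation's feature extraction is not reproduced): projecting the spatiotemporal volume along one spatial axis turns motion into a 2D pattern, so the moving pixel below leaves a diagonal streak in the time-by-width view.

```python
def xt_projection(volume):
    # Orthographic projection of a T x H x W volume along the vertical axis:
    # the resulting T x W "side view" renders horizontal motion as a streak.
    t = len(volume)
    h = len(volume[0])
    w = len(volume[0][0])
    return [[max(volume[f][y][x] for y in range(h)) for x in range(w)]
            for f in range(t)]

# Synthetic 3-frame, 2x3 clip: a bright pixel moves left to right over time.
volume = [
    [[9, 0, 0], [0, 0, 0]],
    [[0, 9, 0], [0, 0, 0]],
    [[0, 0, 9], [0, 0, 0]],
]
print(xt_projection(volume))  # [[9, 0, 0], [0, 9, 0], [0, 0, 9]]
```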
6

Bales, Michael Ryan. "Illumination compensation in video surveillance analysis." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/39535.

Full text
Abstract:
Problems in automated video surveillance analysis caused by illumination changes are explored, and solutions are presented. Controlled experiments are first conducted to measure the responses of color targets to changes in lighting intensity and spectrum. Surfaces of dissimilar color are found to respond significantly differently. Illumination compensation model error is reduced by 70% to 80% by individually optimizing model parameters for each distinct color region, and applying a model tuned for one region to a chromatically different region increases error by a factor of 15. A background model--called BigBackground--is presented to extract large, stable, chromatically self-similar background features by identifying the dominant colors in a scene. The stability and chromatic diversity of these features make them useful reference points for quantifying illumination changes. The model is observed to cover as much as 90% of a scene, and pixels belonging to the model are 20% more stable on average than non-member pixels. Several illumination compensation techniques are developed to exploit BigBackground, and are compared with several compensation techniques from the literature. Techniques are compared in terms of foreground / background classification, and are applied to an object tracking pipeline with kinematic and appearance-based correspondence mechanisms. Compared with other techniques, BigBackground-based techniques improve foreground classification by 25% to 43%, improve tracking accuracy by an average of 20%, and better preserve object appearance for appearance-based trackers. All algorithms are implemented in C or C++ to support the consideration of runtime performance. In terms of execution speed, the BigBackground-based illumination compensation technique is measured to run on par with the simplest compensation technique used for comparison, and consistently achieves twice the frame rate of the two next-fastest techniques.
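A crude stand-in for the dominant-color idea behind BigBackground (the function name, parameters, and pixel data are ours, not the dissertation's):

```python
from collections import Counter

def dominant_color_model(frames, levels=4, coverage=0.5):
    # Keep the most frequent quantized colors until they cover `coverage`
    # of all pixels; stable, chromatically self-similar regions dominate.
    counts = Counter()
    for frame in frames:
        for r, g, b in frame:
            counts[(r * levels // 256, g * levels // 256, b * levels // 256)] += 1
    total = sum(counts.values())
    model, covered = [], 0
    for color, n in counts.most_common():
        model.append(color)
        covered += n
        if covered / total >= coverage:
            break
    return model

# Invented scene: 80% gray "road" pixels plus two small colored objects.
frame = [(128, 128, 128)] * 80 + [(200, 30, 30)] * 10 + [(30, 30, 200)] * 10
print(dominant_color_model([frame]))  # [(2, 2, 2)] - the gray bin dominates
```

The actual model additionally tracks the stability of these regions over time and uses them as reference points for quantifying illumination change.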
7

Almquist, Mathias, and Viktor Almquist. "Analysis of 360° Video Viewing Behaviour." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-144405.

Full text
Abstract:
In this thesis we study users' viewing motions when watching 360° videos in order to provide information that can be used to optimize future view-dependent streaming protocols. More specifically, we develop an application that plays a sequence of 360° videos on an Oculus Rift Head Mounted Display and records the orientation and rotation velocity of the headset during playback. The application is used during an extensive user study in order to collect more than 21 hours of viewing data which is then analysed to expose viewing patterns, useful for optimizing 360° streaming protocols.
8

Gu, Lifang. "Video analysis in MPEG compressed domain." University of Western Australia. School of Computer Science and Software Engineering, 2003. http://theses.library.uwa.edu.au/adt-WU2003.0016.

Full text
Abstract:
The amount of digital video has been increasing dramatically due to the technology advances in video capturing, storage, and compression. The usefulness of vast repositories of digital information is limited by the effectiveness of the access methods, as shown by the Web explosion. The key issues in addressing the access methods are those of content description and of information space navigation. While textual documents in digital form are somewhat self-describing (i.e., they provide explicit indices, such as words and sentences that can be directly used to categorise and access them), digital video does not provide such an explicit content description. In order to access video material in an effective way, without looking at the material in its entirety, it is therefore necessary to analyse and annotate video sequences, and provide an explicit content description targeted to the user needs. Digital video is a very rich medium, and the characteristics in which users may be interested are quite diverse, ranging from the structure of the video to the identity of the people who appear in it, their movements and dialogues and the accompanying music and audio effects. Indexing digital video, based on its content, can be carried out at several levels of abstraction, beginning with indices like the video program name and name of subject, to much lower level aspects of video like the location of edits and motion properties of video. Manual video indexing requires the sequential examination of the entire video clip. This is a time-consuming, subjective, and expensive process. As a result, there is an urgent need for tools to automate the indexing process. In response to such needs, various video analysis techniques from the research fields of image processing and computer vision have been proposed to parse, index and annotate the massive amount of digital video data. However, most of these video analysis techniques have been developed for uncompressed video. 
Since most video data are stored in compressed formats for efficiency of storage and transmission, it is necessary to perform decompression on compressed video before such analysis techniques can be applied. Two consequences of having to first decompress before processing are incurring computation time for decompression and requiring extra auxiliary storage. To save on the computational cost of decompression and lower the overall size of the data which must be processed, this study attempts to make use of features available in compressed video data and proposes several video processing techniques operating directly on compressed video data. Specifically, techniques of processing MPEG-1 and MPEG-2 compressed data have been developed to help automate the video indexing process. This includes the tasks of video segmentation (shot boundary detection), camera motion characterisation, and highlights extraction (detection of skin-colour regions, text regions, moving objects and replays) in MPEG compressed video sequences. The approach of performing analysis on the compressed data has the advantages of dealing with a much reduced data size and is therefore suitable for computationally-intensive low-level operations. Experimental results show that most analysis tasks for video indexing can be carried out efficiently in the compressed domain. Once intermediate results, which are dramatically reduced in size, are obtained from the compressed domain analysis, partial decompression can be applied to enable high resolution processing to extract high level semantic information.
9

Gu, Lifang. "Video analysis in MPEG compressed domain /." Connect to this title, 2002. http://theses.library.uwa.edu.au/adt-2003.0016.

Full text
10

Li, Hao. "Advanced video analysis for surveillance applications." Thesis, University of Bristol, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.555815.

Full text
Abstract:
This thesis addresses the issues of applying advanced video analytics for surveillance applications. A video surveillance system can be defined as a technological tool that assists humans by providing an extended perception and capability of capturing interesting activities in the monitored scene. The prime components of video surveillance systems include moving object detection, object tracking, and anomaly detection. Moving object detection extracts the foreground silhouettes of moving objects. The object tracking component then applies the foreground information to create correspondences between tracks in the previous frame and objects in the current frame. The most challenging part of the system concerns the use of extracted scene information from the moving objects and object tracking for anomaly detection. The thesis proposes novel approaches for each of the main components above. They include: 1) an efficient foreground detection algorithm based on block-based detection and improved pixel-based Gaussian Mixture Model (GMM) refinement that can selectively update pixel information in each image region; 2) an adaptive object tracker that combines the merits of Kalman, mean-shift and particle filtering; 3) a feature clustering algorithm, which can automatically choose the optimal number of clusters in the training data for scene pattern classification; 4) a statistical scene modeller based on Bayesian theory and GMM, which combines object-based and local region-based information for enhanced anomaly detection. In addition, a layered feedback system architecture is proposed for using high-level detection results to improve low-level detection performance. Compared with common open-loop approaches, this increases the system reliability at the expense of a little extra computation.
Moreover, considering the capability of real-time operation, robustness, and detection accuracy, which are key factors of video surveillance systems, appropriate trade-offs between complexity and detection performance are introduced in the relevant phases of the system, such as in moving object detection and in object tracking. The performance of the proposed system is evaluated with various video datasets. Both qualitative and quantitative measures are applied, for example visual comparison and precision-recall curves. The proposed moving object detection achieves an average of 52% and 38% improvement in terms of false positive detected pixels compared with a Gaussian Model (GM) and a GMM respectively. The object tracking component reduces the computation by 10% compared to a mean-shift filter while maintaining better tracking results. The proposed anomaly detection algorithm also outperforms previously proposed approaches. These results demonstrate the effectiveness of the proposed video surveillance system framework.
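The pixel-based GMM refinement mentioned in the abstract builds on the standard online mixture-of-Gaussians background model. A minimal single-pixel sketch of that recursion is given below; it is a generic illustration of the technique, not the thesis's block-based algorithm, and the parameter values (learning rate, match threshold, initial variance) are illustrative assumptions:

```python
import math

def update_gmm_pixel(comps, x, lr=0.05, match_thresh=2.5):
    """One online update of a per-pixel Gaussian mixture background model.

    comps: list of [mean, variance, weight], one entry per Gaussian component.
    Returns True if x matched a component (pixel treated as background),
    False otherwise (pixel treated as foreground).
    """
    # Find the closest component within match_thresh standard deviations.
    best, best_d = None, match_thresh
    for c in comps:
        d = abs(x - c[0]) / math.sqrt(c[1])
        if d < best_d:
            best, best_d = c, d
    if best is not None:
        for c in comps:
            c[2] *= (1.0 - lr)                           # decay all weights
        best[2] += lr                                    # boost the matched component
        best[0] += lr * (x - best[0])                    # move mean toward the observation
        best[1] += lr * ((x - best[0]) ** 2 - best[1])   # adapt variance
    else:
        weakest = min(comps, key=lambda c: c[2])         # replace the weakest component
        weakest[0], weakest[1], weakest[2] = x, 225.0, lr
    total = sum(c[2] for c in comps)
    for c in comps:
        c[2] /= total                                    # renormalise weights
    return best is not None
```

A full detector runs this update at every pixel and flags unmatched pixels as foreground; OpenCV's `BackgroundSubtractorMOG2` provides a production implementation of the same idea.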

Books on the topic "Video analysis"

1

United States. General Accounting Office. National Security and International Affairs Division. Postol's video analysis. Washington, D.C.: The Office, 1992.

2

Verma, Brijesh, Ligang Zhang, and David Stockwell. Roadside Video Data Analysis. Singapore: Springer Singapore, 2017. http://dx.doi.org/10.1007/978-981-10-4539-4.

3

Video interaction analysis: Methods and methodology. Frankfurt am Main: Peter Lang, 2009.

4

Zhang, Jianguo, Ling Shao, Lei Zhang, and Graeme A. Jones, eds. Intelligent Video Event Analysis and Understanding. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-17554-1.

5

Li, Ying, and C. C. Jay Kuo. Video Content Analysis Using Multimodal Information. Boston, MA: Springer US, 2003. http://dx.doi.org/10.1007/978-1-4757-3712-7.

6

Heng, Wei Jyh. Digital video transition analysis and detection. Singapore: World Scientific Publishing, 2004.

7

Rawlings, Barbara. Video analysis and small group behaviour. Manchester: Manchester Business School and Centre for Business Research, 1985.

8

Content-based analysis of digital video. Boston, MA: Kluwer Academic Publishers, 2004.

9

Ngan, King N., ed. Digital video transition analysis and detection. River Edge, N.J.: World Scientific, 2002.

10

Ram, A. Ranjith, and Subhasis Chaudhuri. Video Analysis and Repackaging for Distance Education. New York, NY: Springer New York, 2012. http://dx.doi.org/10.1007/978-1-4614-3837-3.


Book chapters on the topic "Video analysis"

1

Dawson, Catherine. "Video analysis." In A–Z of Digital Research Methods, 368–75. Abingdon; New York: Routledge, 2019. http://dx.doi.org/10.4324/9781351044677-56.

2

Wei, Ge. "Video analysis." In Reimaging Pre-Service Teachers' Practical Knowledge, 70–91. London: Routledge, 2022. http://dx.doi.org/10.4324/9781003304111-5.

3

Otsuka, Isao, Sam Shipman, and Ajay Divakaran. "A Video Browsing enabled Personal Video Recorder." In Multimedia Content Analysis, 1–12. Boston, MA: Springer US, 2008. http://dx.doi.org/10.1007/978-0-387-76569-3_14.

4

Osborn, Brad. "Sound Analysis." In Interpreting Music Video, 31–48. New York: Routledge, 2021. http://dx.doi.org/10.4324/9781003037576-4.

5

Babaguchi, Noboru, and Naoko Nitta. "Sports Video Analysis." In Encyclopedia of Multimedia, 820–27. Boston, MA: Springer US, 2008. http://dx.doi.org/10.1007/978-0-387-78414-4_64.

6

Hauptmann, Alexander. "Video Content Analysis." In Encyclopedia of Database Systems, 3271–76. Boston, MA: Springer US, 2009. http://dx.doi.org/10.1007/978-0-387-39940-9_1018.

7

Chen, Hsinchun. "Jihadi Video Analysis." In Dark Web, 273–93. New York, NY: Springer New York, 2011. http://dx.doi.org/10.1007/978-1-4614-1557-2_14.

8

Lincoln, Andrew E., and Shane V. Caswell. "Video Data Analysis." In Injury Research, 397–408. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-1-4614-1599-2_21.

9

Hauptmann, Alexander. "Video Content Analysis." In Encyclopedia of Database Systems, 1–8. New York, NY: Springer New York, 2016. http://dx.doi.org/10.1007/978-1-4899-7993-3_1018-2.

10

Hauptmann, Alexander. "Video Content Analysis." In Encyclopedia of Database Systems, 4381–88. New York, NY: Springer New York, 2018. http://dx.doi.org/10.1007/978-1-4614-8265-9_1018.


Conference papers on the topic "Video analysis"

1

Moreira, Daniel, Siome Goldenstein, and Anderson Rocha. "Sensitive-Video Analysis." In XXX Concurso de Teses e Dissertações da SBC. Sociedade Brasileira de Computação - SBC, 2017. http://dx.doi.org/10.5753/ctd.2017.3466.

Abstract:
Sensitive videos that may be inappropriate for some audiences (e.g., pornography and violence, with respect to underage viewers) are constantly being shared over the Internet. Employing humans to filter them is daunting: the huge amount of data and the tediousness of the task call for computer-aided sensitive-video analysis, which we tackle in two ways. In the first (sensitive-video classification), we explore efficient methods to decide whether or not a video contains sensitive material. In the second (sensitive-content localization), we explore ways to find the moments at which a video starts and ceases to display sensitive content. Hypotheses are stated and validated, leading to contributions (papers, a dataset, and patents) in the fields of Digital Forensics and Computer Vision.
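The localization task described above, finding where sensitive content starts and ceases, is commonly reduced to post-processing per-frame scores into time segments. The sketch below illustrates that generic reduction; it is not the authors' method, and the names `frame_scores`, `fps`, and `threshold` are illustrative assumptions:

```python
def localize_segments(frame_scores, fps=30.0, threshold=0.5):
    """Turn per-frame sensitivity scores into (start_sec, end_sec) segments.

    Any maximal run of consecutive frames whose score exceeds `threshold`
    becomes one flagged segment, converted to seconds via the frame rate.
    """
    segments, start = [], None
    for i, score in enumerate(frame_scores):
        if score > threshold and start is None:
            start = i                       # segment opens at first frame over threshold
        elif score <= threshold and start is not None:
            segments.append((start / fps, i / fps))
            start = None
    if start is not None:                   # segment still open at the end of the video
        segments.append((start / fps, len(frame_scores) / fps))
    return segments
```

For example, at 1 frame per second, scores `[0.1, 0.9, 0.9, 0.2, 0.8]` yield the segments `[(1.0, 3.0), (4.0, 5.0)]`.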
2

Bauermann, Ingo, and Eckehard Steinbach. "A theoretical analysis of the RDTC space." In Packet Video 2007. IEEE, 2007. http://dx.doi.org/10.1109/packet.2007.4397050.

3

Alfonso, Daniele, Matteo Gherardi, Andrea Vitali, and Fabrizio Rovati. "Performance analysis of the scalable video coding standard." In Packet Video 2007. IEEE, 2007. http://dx.doi.org/10.1109/packet.2007.4397047.

4

Lucas, S. "Designing video-joysticks." In IEE Colloquium on Motion Analysis and Tracking. IEE, 1999. http://dx.doi.org/10.1049/ic:19990583.

5

Chellappa, Rama, and Gaurav Aggarwal. "Video Biometrics." In 14th International Conference on Image Analysis and Processing (ICIAP 2007). IEEE, 2007. http://dx.doi.org/10.1109/iciap.2007.4362805.

6

Divakaran, Ajay, and Isao Otsuka. "A Video-Browsing-Enhanced Personal Video Recorder." In 14th International Conference of Image Analysis and Processing - Workshops (ICIAPW 2007). IEEE, 2007. http://dx.doi.org/10.1109/iciapw.2007.10.

7

Shen, Qiu, Ye-Kui Wang, Miska M. Hannuksela, Houqiang Li, and Yi Wang. "Buffer requirement analysis and reference picture marking for temporal scalable video coding." In Packet Video 2007. IEEE, 2007. http://dx.doi.org/10.1109/packet.2007.4397030.

8

Park, Dong-Jun, and David A. Eichmann. "Temporal video analysis." In Proceeding of the 1st ACM workshop. New York, New York, USA: ACM Press, 2008. http://dx.doi.org/10.1145/1463542.1463556.

9

Aghajan, Hamid, Marco Cristani, Vittorio Murino, and Nicu Sebe. "Pervasive video analysis." In the international conference. New York, New York, USA: ACM Press, 2010. http://dx.doi.org/10.1145/1873951.1874354.

10

Rahn, Rahn C., Youn-kyung Lim, and Dennis P. Groth. "Redesigning video analysis." In Proceeding of the twenty-sixth annual CHI conference extended abstracts. New York, New York, USA: ACM Press, 2008. http://dx.doi.org/10.1145/1358628.1358854.


Reports on the topic "Video analysis"

1

Bandat, N. E. Video image analysis using the Selective Video Processor development platform. Office of Scientific and Technical Information (OSTI), August 1989. http://dx.doi.org/10.2172/6161006.

2

Key, Everett Kiusan, Kendra Lu Van Buren, Will Warren, and Francois M. Hemez. Video Analysis in Multi-Intelligence. Office of Scientific and Technical Information (OSTI), July 2016. http://dx.doi.org/10.2172/1291267.

3

Orchard, Michael, and Robert Joyce. Content Analysis of Video Sequences. Fort Belvoir, VA: Defense Technical Information Center, February 2002. http://dx.doi.org/10.21236/ada414069.

4

Matzner, Shari, Colleen K. Trostle, Garrett J. Staines, Ryan E. Hull, Andrew Avila, and Genevra EL Harker-Klimes. Triton: Igiugig Video Analysis - Project Report. Office of Scientific and Technical Information (OSTI), February 2016. http://dx.doi.org/10.2172/1485061.

5

Menlove, H. O., J. A. Howell, C. A. Rodriguez, G. W. Eccleston, D. Beddingfield, J. E. Smith, and C. W. Baumgart. Integration of video and radiation analysis data. Office of Scientific and Technical Information (OSTI), December 1995. http://dx.doi.org/10.2172/10105924.

6

Bovik, Alan C. AM-FM Analysis of Images and Video. Fort Belvoir, VA: Defense Technical Information Center, January 2000. http://dx.doi.org/10.21236/ada387139.

7

Matzner, Shari, Colleen Trostle, Garrett Staines, Ryan Hull, Phoenix Avila, and Genevra EL Harker-Klimes. Triton: Igiugig Fish Video Analysis - Project Report. Office of Scientific and Technical Information (OSTI), August 2017. http://dx.doi.org/10.2172/2348943.

8

Liang, Yiqing. Video Retrieval Based on Language and Image Analysis. Fort Belvoir, VA: Defense Technical Information Center, May 1999. http://dx.doi.org/10.21236/ada364129.

9

Dunn, Marcus, Adam Kennerley, Kate Webster, Kane Middleton, and Jon Wheat. Application of Video Interpolation to Markerless Movement Analysis. Purdue University, 2022. http://dx.doi.org/10.5703/1288284317501.

10

Brumby, Steven P. Video Analysis & Search Technology (VAST): Automated content-based labeling and searching for video and images. Office of Scientific and Technical Information (OSTI), May 2014. http://dx.doi.org/10.2172/1133765.
