Academic literature on the topic 'Automatic; Video; Interpretation'


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Automatic; Video; Interpretation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Automatic; Video; Interpretation"

1

Lavee, G., E. Rivlin, and M. Rudzsky. "Understanding Video Events: A Survey of Methods for Automatic Interpretation of Semantic Occurrences in Video." IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews) 39, no. 5 (September 2009): 489–504. http://dx.doi.org/10.1109/tsmcc.2009.2023380.

2

WITHERS, C. S. "ORTHOGONAL FUNCTIONS AND ZERNIKE POLYNOMIALS—A RANDOM VARIABLE INTERPRETATION." ANZIAM Journal 50, no. 3 (January 2009): 435–44. http://dx.doi.org/10.1017/s1446181109000169.

Abstract:
There are advantages in viewing orthogonal functions as functions generated by a random variable from a basis set of functions. Let Y be a random variable distributed uniformly on [0,1]. We give two ways of generating the Zernike radial polynomials with parameter l, {Z^l_{l+2n}(x), n ≥ 0}. The first uses the standard basis {x^n, n ≥ 0} and the random variable Y^{1/(l+1)}. The second uses the nonstandard basis {x^{l+2n}, n ≥ 0} and the random variable Y^{1/2}. Zernike polynomials are important in the removal of lens aberrations, in characterizing video images with a small number of numbers, and in automatic aircraft identification.
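For orientation only (standard background, not taken from the cited paper): the Zernike radial polynomials the abstract refers to have the well-known closed form

```latex
% Zernike radial polynomial of degree n and azimuthal order m,
% defined for n - m even and n >= m >= 0:
R_n^m(\rho) \;=\; \sum_{k=0}^{(n-m)/2}
  \frac{(-1)^k \,(n-k)!}{k!\left(\frac{n+m}{2}-k\right)!\left(\frac{n-m}{2}-k\right)!}
  \,\rho^{\,n-2k}
```

With degree n = l + 2n' and order m = l, this corresponds to the family written Z^l_{l+2n} in the abstract.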
3

Guo, Pengyu, Shaowen Ding, Hongliang Zhang, and Xiaohu Zhang. "A Real-Time Optical Tracking and Measurement Processing System for Flying Targets." Scientific World Journal 2014 (2014): 1–11. http://dx.doi.org/10.1155/2014/976590.

Abstract:
Optical tracking and measurement of flying targets differs from close-range photography under a controllable observation environment: it involves extreme conditions, such as diverse target changes resulting from high maneuverability and long cruising range. This paper first presents the design and implementation of a distributed image interpretation and measurement processing system that achieves centralized resource management, simultaneous multisite interpretation, and adaptive selection of estimation algorithms; it then proposes a real-time interpretation method comprising automatic foreground detection, online target tracking, multiple-feature location, and human guidance. The performance and efficiency of the method are evaluated in an experiment on semisynthetic video. The system can be used in aerospace tests for target analysis, including dynamic parameters, transient states, and optical-physics characteristics, with security control.
4

Morozov, A. A., O. S. Sushkova, I. A. Kershner, and A. F. Polupanov. "Development of a Method of Terahertz Intelligent Video Surveillance Based on the Semantic Fusion of Terahertz and 3D Video Images." Information Technology and Nanotechnology, no. 2391 (2019): 134–43. http://dx.doi.org/10.18287/1613-0073-2019-2391-134-143.

Abstract:
Terahertz video surveillance opens up unique new opportunities in the field of security in public places, as it makes it possible to detect, and thus prevent the use of, hidden weapons and other dangerous items. Although the first generation of terahertz video surveillance systems has already been created and is available on the security systems market, it has not yet found wide application. The main reason is that existing methods for analyzing terahertz images are not capable of covert, fully automatic recognition of weapons and other dangerous objects and can only be used under the control of a specially trained operator. As a result, terahertz video surveillance turns out to be more expensive and less efficient than the standard approach based on organizing security perimeters and manually inspecting visitors. This paper considers the problem of developing a method for automatic analysis of terahertz video images. As a basis for this method, it is proposed to use the semantic fusion of video images obtained using different physical principles, the idea being that the semantic content of one video image is used to control the processing and analysis of another. For example, information about the 3D coordinates of a person's body, arms, and legs can be used for the analysis and proper interpretation of color areas observed in a terahertz video image. Special means of object-oriented logic programming are developed to implement this semantic fusion of video data, including special built-in classes of the Actor Prolog logic language for the acquisition, processing, and analysis of video data in the visible, infrared, and terahertz ranges, as well as of 3D video data.
5

GERICKE, LUTZ, RAJA GUMIENNY, and CHRISTOPH MEINEL. "COLLABORATECOM SPECIAL ISSUE ANALYZING DISTRIBUTED WHITEBOARD INTERACTIONS." International Journal of Cooperative Information Systems 21, no. 03 (September 2012): 199–220. http://dx.doi.org/10.1142/s021884301241002x.

Abstract:
We present the digital whiteboard system Tele-Board, which automatically captures all interactions made on the all-digital whiteboard and thus offers possibilities for fast interpretation of usage characteristics. Analyzing team work at whiteboards is a time-consuming and error-prone process when manual interpretation techniques are applied. In a case study, we demonstrate how to conduct and analyze whiteboard experiments with the help of our system. The study investigates the role of video, compared to an audio-only connection, in distributed work settings. With the simplified analysis of communication data, we can show that the video teams were more active than the audio teams and that whiteboard interaction was more evenly distributed among team members. In this way, an automatic analysis can not only support manual observations and codings but also give insights that cannot be achieved with other systems. Beyond the overall view of a single session based on key figures, it is also possible to learn more about a session's internal structure.
6

Tneb, Rainer, Andreas Seidl, Guido Hansen, and C. Pruett. "3-D Body Scanning - Systems, Methods and Applications for Automatic Interpretation of 3D Surface Anthropometrical Data." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 44, no. 38 (July 2000): 844–47. http://dx.doi.org/10.1177/154193120004403844.

Abstract:
Since development of the 3-D digital human and ergonomics tool RAMSIS began in 1988, appropriate measurement systems have been developed in parallel. New integrated approaches and methods for human body measurement have been investigated and developed. TECMATH has developed the VITUS Pro and VITUS Smart 3-D full-body laser scanner family for high precision, and has adapted a 2-D video-camera-based system that is simple to use and inexpensive. In the past three years, novel applications for mass customization have been developed specifically for the clothing industry. More than 120 systems (3-D and 2-D) have been installed in research environments, clothing shops, army facilities, and automobile manufacturers in the past two years. These organizations require measurement systems, methods, and analysis techniques that ensure reliable and precise information about human body dimensions.
7

Brás, Luís M. R., Elsa F. Gomes, Margarida M. M. Ribeiro, and M. M. L. Guimarães. "Drop Distribution Determination in a Liquid-Liquid Dispersion by Image Processing." International Journal of Chemical Engineering 2009 (2009): 1–6. http://dx.doi.org/10.1155/2009/746439.

Abstract:
This paper presents the implementation of an algorithm for the automatic identification of drops of different sizes in monochromatic digitized frames of a liquid-liquid chemical process. The image frames were obtained in our laboratory by a nonintrusive process, using a digital video camera, a microscope, and an illumination setup, from a dispersion of toluene in water within a transparent mixing vessel. In this implementation, we propose a two-phase approach using a Hough transform that automatically identifies drops in images of the chemical process. This work is a promising starting point toward automatic drop classification with good results. Our algorithm for the analysis and interpretation of digitized images will be used to calculate particle size and shape distributions for modelling liquid-liquid systems.
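The abstract above mentions identifying drops via a Hough transform. As a toy illustration of the idea (exhaustive Hough-style voting over circle centers and radii, not the authors' implementation), the following sketch scores every candidate circle by the number of edge pixels lying on it:

```python
import numpy as np

def hough_circles(edges, radii, tol=1.0):
    """Toy Hough-style circle detector: score every (center, radius) cell
    by the number of edge pixels lying on that circle (within tol)."""
    ys, xs = np.nonzero(edges)          # coordinates of edge pixels
    h, w = edges.shape
    best, best_votes = (0, 0, radii[0]), -1
    for r in radii:
        for cy in range(h):
            for cx in range(w):
                d = np.hypot(ys - cy, xs - cx)
                votes = int(np.count_nonzero(np.abs(d - r) <= tol))
                if votes > best_votes:
                    best, best_votes = (cy, cx, r), votes
    return best  # (row, col, radius) of the strongest circle

# Synthetic "edge image": one circle of radius 8 centred at (20, 25),
# standing in for the outline of a drop.
edges = np.zeros((40, 50), dtype=np.uint8)
t = np.linspace(0.0, 2.0 * np.pi, 200)
edges[np.round(20 + 8 * np.sin(t)).astype(int),
      np.round(25 + 8 * np.cos(t)).astype(int)] = 1
print(hough_circles(edges, radii=[6, 8, 10]))
```

Production implementations (e.g. gradient-based Hough variants) vote from each edge pixel into an accumulator instead of scanning all cells, which is far faster but identical in principle.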
8

Smolyakov, P. N. "Liability of Legal Entities for Offenses Recorded by Special Technical Devices." Actual Problems of Russian Law 1, no. 12 (January 20, 2020): 11–16. http://dx.doi.org/10.17803/1994-1471.2019.109.12.011-016.

Abstract:
The article is devoted to the exemption of legal entities from liability for administrative offenses recorded by special technical devices operating in automatic mode and having photo, film, and video recording functions, or by means of photo, film, and video recording. It is noted that the existing regulation in the Administrative Code of the Russian Federation, as interpreted by the highest court and other courts, makes such liability ephemeral, allowing it to be arbitrarily shifted, for example, onto natural persons, i.e., the drivers of vehicles belonging to legal entities. This situation allows legal entities with a large number of commercial vehicles throughout the country to easily avoid paying large administrative fines, which has a negative effect on treasury revenue and encourages further illegal behavior by their drivers on the roads. The author proposes a discussion of the state of legislation and law enforcement on this issue.
9

Kang, Hyung W., and Sung Yong Shin. "Creating Walk-Through Images from a Video Sequence of a Dynamic Scene." Presence: Teleoperators and Virtual Environments 13, no. 6 (December 2004): 638–55. http://dx.doi.org/10.1162/1054746043280556.

Abstract:
Tour into the picture (TIP), proposed by Horry et al. (Horry, Anjyo, & Arai, 1997, ACM SIGGRAPH '97 Conference Proceedings, 225–232), is a method for generating a sequence of walk-through images from a single reference image. By navigating a 3D scene model constructed from the image, TIP provides convincing 3D effects. This paper presents a comprehensive scheme for creating walk-through images from a video sequence by generalizing the idea of TIP. To address the various problems that arise in dealing with a video sequence rather than a single image, the proposed scheme has the following features: First, it incorporates a new modeling scheme based on a vanishing circle identified in the video, assuming that the input video contains a negligible amount of motion-parallax effects and that dynamic objects move on a flat terrain. Second, we propose a novel scheme for automatic background detection from the video, based on a 4-parameter motion model and statistical background color estimation. Third, to assist the extraction of static or dynamic foreground objects from video, we devised a semiautomatic boundary-segmentation scheme based on enhanced lane (Kang & Shin, 2002, Graphical Models, 64(5), 282–303). The purpose of this work is to let users experience the feeling of navigating into a video sequence with their own interpretation and imagination of a given scene. The proposed scheme covers various types of video footage of dynamic scenes, such as sports coverage, cartoon animation, and movie films, in which objects continuously change their shapes and locations. It can also be used to produce a variety of synthetic video sequences by importing and merging dynamic foreign objects with the original video.
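The "statistical background color estimation" mentioned above is, in its simplest common form (a generic technique, not necessarily the paper's exact method), a per-pixel temporal median over frames:

```python
import numpy as np

def estimate_background(frames):
    """Per-pixel temporal median: with a mostly static camera, each pixel's
    median over time converges to the background color even while moving
    foreground objects pass through."""
    return np.median(np.stack(frames, axis=0), axis=0)

# Toy sequence: constant gray background (value 100) with a small bright
# "object" (value 255) occupying a different position in each frame.
frames = []
for k in range(9):
    f = np.full((16, 16), 100.0)
    f[k, k:k + 3] = 255.0            # moving foreground blob
    frames.append(f)

bg = estimate_background(frames)
print(np.allclose(bg, 100.0))  # → True: the mover never dominates any pixel
```

The median is robust here because each pixel shows the foreground in at most one of the nine frames, so the background value always wins the vote.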
10

Xiu, Hongling, and Fengyun Yang. "Batch Processing of Remote Sensing Image Mosaic based on Python." International Journal of Online Engineering (iJOE) 14, no. 09 (September 30, 2018): 208. http://dx.doi.org/10.3991/ijoe.v14i09.9226.

Abstract:
In the process of remote sensing image processing, analysis, and interpretation, it is usually necessary to combine several local images into a complete image. To address the long and complicated workflow of conventional semi-automatic image stitching, this paper introduces a programming approach: using pixel-based splicing and the Python interface of the ArcGIS 10.1 platform, batch mosaicking of remote sensing images is realized. Comparison with image processing software shows that this method shortens mosaicking time and improves splicing efficiency while preserving accuracy, which facilitates later image analysis and other work.
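The paper relies on the proprietary ArcGIS Python (arcpy) interface, but the core idea of batch-mosaicking tiles into one image can be sketched library-free. In this minimal sketch the integer tile offsets are hypothetical stand-ins for real georeferencing information:

```python
import numpy as np

def mosaic(tiles):
    """Batch-mosaic a list of (row_offset, col_offset, array) tiles into one
    image; later tiles overwrite earlier ones where they overlap."""
    h = max(r + a.shape[0] for r, c, a in tiles)
    w = max(c + a.shape[1] for r, c, a in tiles)
    out = np.zeros((h, w), dtype=tiles[0][2].dtype)
    for r, c, a in tiles:
        out[r:r + a.shape[0], c:c + a.shape[1]] = a
    return out

# Four 2x2 tiles assembled into a 4x4 mosaic.
tiles = [(0, 0, np.full((2, 2), 1)), (0, 2, np.full((2, 2), 2)),
         (2, 0, np.full((2, 2), 3)), (2, 2, np.full((2, 2), 4))]
m = mosaic(tiles)
print(m.shape)  # → (4, 4)
```

Real remote-sensing mosaicking additionally handles projections, nodata masks, and blending in overlap zones, which is what the ArcGIS tooling automates.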

Dissertations / Theses on the topic "Automatic; Video; Interpretation"

1

Vu, Van-Thinh. "Temporal scenario for automatic video interpretation." Nice, 2004. http://www.theses.fr/2004NICE4058.

2

Rittscher, Jens. "Classifying human motion." Thesis, University of Oxford, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.365353.


Books on the topic "Automatic; Video; Interpretation"

1

Nuwer, Marc R., Ronald G. Emerson, and Cecil D. Hahn. Principles and Techniques for Long-Term EEG Recording (Epilepsy Monitoring Unit, Intensive Care Unit, Ambulatory). Edited by Donald L. Schomer and Fernando H. Lopes da Silva. Oxford University Press, 2017. http://dx.doi.org/10.1093/med/9780190228484.003.0031.

Abstract:
Long-term monitoring is a set of methods for recording electroencephalographic (EEG) signals over a period of 24 hours or longer. Patient video recording is often synchronized to the EEG. Interpretation aids, which include automated spike and seizure detection and various trending displays of EEG frequency content, help physicians identify events. These techniques are used in epilepsy monitoring units for presurgical evaluations and for the differential diagnosis of seizures versus nonepileptic events. They are used in intensive care units to identify nonconvulsive seizures, to measure the effectiveness of therapy, to assess depth of and prognosis in coma, and for other applications. The patient can also be monitored at home with ambulatory monitoring equipment. Specialized training is needed for competent interpretation of long-term monitoring EEGs. Problems include false-positive events flagged by automated spike and seizure detection software, and contamination by muscle and movement artifacts during seizures.

Book chapters on the topic "Automatic; Video; Interpretation"

1

Vu, Van-Thinh, François Brémond, and Monique Thonnat. "Automatic Video Interpretation: A Recognition Algorithm for Temporal Scenarios Based on Pre-compiled Scenario Models." In Lecture Notes in Computer Science, 523–33. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/3-540-36592-3_50.

2

Lee, Jeongkyu. "Video Ontology." In Encyclopedia of Multimedia Technology and Networking, Second Edition, 1506–11. IGI Global, 2009. http://dx.doi.org/10.4018/978-1-60566-014-1.ch203.

Abstract:
There has been a great deal of interest in the development of ontology to facilitate knowledge sharing and database integration. In general, an ontology is a set of terms or vocabularies of interest in a particular information domain that shows the relationships among them (Doerr, Hunter, & Lagoze, 2003). It includes machine-interpretable definitions of basic concepts in the domain. Ontology is very popular in the fields of natural language processing (NLP) and Web user interfaces (Web ontology). To bring this advantage into multimedia content analysis, several studies have proposed ontology-based schemes (Hollink & Worring, 2005; Spyropoulos, Paliouras, Karkaletsis, Kosmopoulos, Pratikakis, Perantonis, & Gatos, 2005). The modular structure of the ontology methodology is used in a generic analysis scheme to semantically interpret and annotate multimedia content. This methodology consists of domain ontology, core ontology, and multimedia ontology. Domain ontology captures concepts in a particular type of domain, while core ontology provides the key building blocks necessary to enable the scalable assimilation of information from diverse sources. Multimedia ontology is used to model multimedia data, such as audio, image, and video. In multimedia data analysis, meaningful patterns and hidden knowledge are discovered from the database. There are existing tools for managing and searching the discovered patterns and knowledge. However, almost all approaches use low-level feature values instead of high-level perceptions, which leaves a huge gap between machine interpretation and human understanding. For example, if we have to retrieve anomalies from video surveillance systems, low-level feature values cannot represent such semantic meanings. To address this problem, the main focus of research has been on the construction and utilization of ontologies for specific data domains in various applications.
In this chapter, we first survey the state-of-the-art in multimedia ontology, specifically video ontology, and then investigate the methods of automatic generation of video ontology.
3

Petre, Raluca-Diana, and Titus Zaharia. "3D Model-Based Semantic Categorization of Still Image 2D Objects." In Multimedia Data Engineering Applications and Processing, 151–69. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-2940-0.ch008.

Abstract:
Automatic classification and interpretation of objects present in 2D images is a key issue for various computer vision applications. In particular, for image/video indexing and retrieval applications, automatically labeling huge multimedia databases in a semantically pertinent manner still remains a challenge. This paper examines the issue of still-image object categorization. The objective is to associate semantic labels with the 2D objects present in natural images. The principle of the proposed approach consists of exploiting categorized 3D model repositories to identify unknown 2D objects, based on 2D/3D matching techniques. The authors use 2D/3D shape indexing methods, in which 3D models are described through a set of 2D views. Experimental results, obtained on both the MPEG-7 and Princeton 3D model databases, show recognition rates of up to 89.2%.
4

Tajbakhsh, Nima, Jae Y. Shin, R. Todd Hurst, Christopher B. Kendall, and Jianming Liang. "Automatic Interpretation of Carotid Intima–Media Thickness Videos Using Convolutional Neural Networks." In Deep Learning for Medical Image Analysis, 105–31. Elsevier, 2017. http://dx.doi.org/10.1016/b978-0-12-810408-8.00007-9.


Conference papers on the topic "Automatic; Video; Interpretation"

1

Hwang, Jenq-Neng, and Ying Luo. "Automatic object-based video analysis and interpretation: A step toward systematic video understanding." In Proceedings of ICASSP '02. IEEE, 2002. http://dx.doi.org/10.1109/icassp.2002.5745555.

2

Jenq-Neng Hwang and Ying Luo. "Automatic object-based video analysis and interpretation: a step toward systematic video understanding." In IEEE International Conference on Acoustics Speech and Signal Processing ICASSP-02. IEEE, 2002. http://dx.doi.org/10.1109/icassp.2002.1004816.

3

Assfalg, J., M. Bertini, C. Colombo, A. Del Bimbo, and W. Nunziati. "Automatic interpretation of soccer video for highlights extraction and annotation." In the 2003 ACM symposium. New York, New York, USA: ACM Press, 2003. http://dx.doi.org/10.1145/952532.952684.

4

Geerinck, Thomas, Valentin Enescu, Ilse Ravyse, and Hichem Sahli. "Rule-Based Video Interpretation Framework: Application to Automated Surveillance." In 2009 International Conference on Image and Graphics (ICIG). IEEE, 2009. http://dx.doi.org/10.1109/icig.2009.140.

5

Shin, Jae Y., Nima Tajbakhsh, R. Todd Hurst, Christopher B. Kendall, and Jianming Liang. "Automating Carotid Intima-Media Thickness Video Interpretation with Convolutional Neural Networks." In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2016. http://dx.doi.org/10.1109/cvpr.2016.277.

6

Umai, Chan, Ashraf Kassim, and Chew Lock Yue. "Detection and Interpretation of Text Information in Noisy Video Sequences." In 2006 9th International Conference on Control, Automation, Robotics and Vision. IEEE, 2006. http://dx.doi.org/10.1109/icarcv.2006.345066.

7

Sikos, Leslie F. "Utilizing Multimedia Ontologies in Video Scene Interpretation via Information Fusion and Automated Reasoning." In 2017 Federated Conference on Computer Science and Information Systems. IEEE, 2017. http://dx.doi.org/10.15439/2017f66.

8

Lang, Christian, Sven Wachsmuth, Marc Hanheide, and Heiko Wersing. "Facial communicative signal interpretation in human-robot interaction by discriminative video subsequence selection." In 2013 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2013. http://dx.doi.org/10.1109/icra.2013.6630572.
