Academic literature on the topic 'Video annotation'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Video annotation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Video annotation"

1. Chen, Jia, Cui-xia Ma, Hong-an Wang, Hai-yan Yang, and Dong-xing Teng. "Sketch Based Video Annotation and Organization System in Distributed Teaching Environment." International Journal of Distributed Systems and Technologies 1, no. 4 (October 2010): 27–41. http://dx.doi.org/10.4018/jdst.2010100103.

Abstract: As the use of instructional video becomes a key component of e-learning, there is an increasing need for a distributed system that supports collaborative video annotation and organization. In this paper, the authors construct a distributed environment on top of NaradaBrokering to support collaborative operations on video material when users are located in different places. The concept of video annotation is enriched, making it a powerful medium for improving the organization and viewing of instructional video. With panorama-based and interpolation-based methods, all related users can annotate or organize videos simultaneously. From these annotations, a video organization structure is then built by linking them with other video clips or annotations. Finally, an informal user study showed that the system improves the efficiency of video organizing and viewing and enhances users' participation in the design process with a good user experience.

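The linked-annotation structure this abstract describes can be pictured with a small data model. The sketch below is purely illustrative (the names Annotation, clip_id, and links are ours, not the paper's): each annotation covers a time span on a clip and can link to other clips or annotations, which is what yields the organization graph.

```python
# Hypothetical data model, not the paper's: an annotation anchored to a
# time span of a clip, with links that knit annotations and clips into
# an organization structure.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Annotation:
    clip_id: str                 # video clip being annotated
    start_s: float               # start of the annotated span, in seconds
    end_s: float                 # end of the annotated span, in seconds
    note: str                    # sketch/text content of the annotation
    links: List[str] = field(default_factory=list)  # ids of linked clips/annotations

# Linking an annotation to another clip is what builds the organization graph.
a = Annotation("lecture_01", 12.0, 30.5, "key derivation step")
a.links.append("lecture_02")
```
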
2. Groh, Florian, Dominik Schörkhuber, and Margrit Gelautz. "A tool for semi-automatic ground truth annotation of traffic videos." Electronic Imaging 2020, no. 16 (January 26, 2020): 200–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.16.avm-150.

Abstract: We have developed a semi-automatic annotation tool, "CVL Annotator," for bounding box ground truth generation in videos. Our research is particularly motivated by the need for reference annotations of challenging nighttime traffic scenes with highly dynamic lighting conditions due to reflections, headlights, and halos from oncoming traffic. Our tool incorporates a suite of different state-of-the-art tracking algorithms in order to minimize the amount of human input necessary to generate high-quality ground truth data. We focus our user interface on the premise of minimizing user interaction and visualizing all information relevant to the user at a glance. We performed a preliminary user study to measure the amount of time and number of clicks necessary to produce ground truth annotations of video traffic scenes, and evaluated the accuracy of the final annotation results.

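The workflow in this abstract, where trackers minimize human input, can be sketched in a few lines. This is an illustrative sketch only, not CVL Annotator code: a human seeds one box, a tracker propagates it, and the human corrects only the frames where tracking drifts. It assumes opencv-contrib-python for the CSRT tracker.

```python
import cv2

def propagate_box(video_path, first_box, max_frames=300):
    """Propagate a manually drawn (x, y, w, h) box through a video."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    if not ok:
        raise IOError(f"Cannot read {video_path}")

    # CSRT is one of several trackers such a tool could plug in.
    tracker = cv2.TrackerCSRT_create()
    tracker.init(frame, first_box)

    boxes = [first_box]
    while len(boxes) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        ok, box = tracker.update(frame)
        # A real tool would flag failed frames for manual correction.
        boxes.append(tuple(int(v) for v in box) if ok else None)
    cap.release()
    return boxes
```
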
3. Rich, Peter J., and Michael Hannafin. "Video Annotation Tools." Journal of Teacher Education 60, no. 1 (November 26, 2008): 52–67. http://dx.doi.org/10.1177/0022487108328486.

4. Von Wachter, Jana-Kristin, and Doris Lewalter. "Video Annotation as a Supporting Tool for Video-based Learning in Teacher Training – A Systematic Literature Review." International Journal of Higher Education 12, no. 2 (February 27, 2023): 1. http://dx.doi.org/10.5430/ijhe.v12n2p1.

Abstract: Digital video annotation tools, which allow users to add synchronized comments to video content, have gained significant attention in teacher education in recent years. However, there is no overview of the research on the use of annotations, their implementation in teacher training, and their effect on the development of professional competencies. To fill this gap, this paper reports the results of a systematic literature review carried out to determine 1) how video annotations were implemented in studies in educational settings, 2) which professional competencies these studies sought to develop with the aid of video annotations, and 3) which learning outcomes the selected studies reported. A total of 18 eligible studies, published between 2014 and 2022, were identified via database search and cross-referencing. A qualitative content analysis of these studies showed that video annotations were generally used to perform one or more of three functions (feedback, communication, and documentation), while they also enabled deeper content knowledge of teaching, reflective skills, and professional vision, and facilitated social integration and recognition. The convincing evidence of the positive effect of video annotation as a supporting tool in video-based teacher training proves it to be a powerful aid to developing professional vision and other teaching skills, and points toward avenues for further research.

5. Balamurugan, R. "Semi-Automatic Context-Aware Video Annotation for Searching Educational Video Resources." Indian Journal of Applied Research 3, no. 6 (June 2013): 108–10. http://dx.doi.org/10.15373/2249555x/june2013/35.

6. Sánchez-Carballido, Sergio, Orti Senderos, Marcos Nieto, and Oihana Otaegui. "Semi-Automatic Cloud-Native Video Annotation for Autonomous Driving." Applied Sciences 10, no. 12 (June 23, 2020): 4301. http://dx.doi.org/10.3390/app10124301.

Abstract: An innovative solution named Annotation as a Service (AaaS) has been specifically designed to integrate heterogeneous video annotation workflows into containers and take advantage of a cloud-native, highly scalable, and reliable design based on Kubernetes workloads. Using the AaaS as a foundation, the execution of automatic video annotation workflows is addressed in the broader context of a semi-automatic video annotation business logic for ground truth generation for Autonomous Driving (AD) and Advanced Driver Assistance Systems (ADAS). The document presents design decisions, innovative developments, and tests conducted to provide scalability to this cloud-native ecosystem for semi-automatic annotation. The solution has proven to be efficient and resilient at AD/ADAS scale, specifically in an experiment with 25 TB of input data to annotate, 4,000 concurrent annotation jobs, and 32 worker nodes forming a high-performance computing cluster with a total of 512 cores and 2,048 GB of RAM. Automatic pre-annotations with the proposed strategy reduce human annotation time by up to 80% and by 60% on average.

7. Garcia, Manuel B., and Ahmed Mohamed Fahmy Yousef. "Cognitive and affective effects of teachers' annotations and talking heads on asynchronous video lectures in a web development course." Research and Practice in Technology Enhanced Learning 18 (December 5, 2022): 020. http://dx.doi.org/10.58459/rptel.2023.18020.

Abstract: When it comes to asynchronous online learning, the literature recommends multimedia content such as videos of lectures and demonstrations. However, the lack of emotional connection and the absence of teacher support in these video materials can be detrimental to student success. We proposed incorporating talking heads and annotations to alleviate these weaknesses. In this study, we investigated the cognitive and affective effects of integrating these solutions in asynchronous video lectures. Guided by the theoretical lens of the Cognitive Theory of Multimedia Learning and the Cognitive-Affective Theory of Learning with Media, we produced a total of 72 videos (an average of four videos per subtopic) with a mean duration of 258 seconds (range = 193 to 318 seconds). To comparatively assess our video treatments (i.e., regular videos, videos with face, videos with annotation, or videos with face and annotation), we conducted an education-based cluster randomized controlled trial within a 14-week academic period with four cohorts of students enrolled in an introductory web design and development course. We recorded a total of 42,425 page views (212.13 page views per student) for all web browsing activities within the online learning platform. Moreover, 39.92% (16,935 views) of these page views were attributed to the video pages, accumulating a total of 47,665 minutes of watch time. Our findings suggest that combining talking heads and annotations in asynchronous video lectures yielded the highest learning performance, longest watch time, and highest satisfaction, engagement, and attitude scores. These discoveries have significant implications for designing video lectures for online education to support students' activities and engagement. Therefore, we conclude that academic institutions, curriculum developers, instructional designers, and educators should consider these findings before relocating face-to-face courses to online learning systems, to maximize the benefits of video-based learning.

8. Islam, Md Anwarul, Md Azher Uddin, and Young-Koo Lee. "A Distributed Automatic Video Annotation Platform." Applied Sciences 10, no. 15 (July 31, 2020): 5319. http://dx.doi.org/10.3390/app10155319.

Abstract: In the era of digital devices and the Internet, thousands of videos are taken and shared through the Internet. Similarly, CCTV cameras in the digital city produce a large amount of video data that carry essential information. To handle the increased video data and generate knowledge, there is an increasing demand for distributed video annotation. Therefore, in this paper, we propose a novel distributed video annotation platform that explores both spatial and temporal information and then provides higher-level semantic information. The proposed framework is divided into two parts: spatial annotation and spatiotemporal annotation. Accordingly, we propose a spatiotemporal descriptor, namely volume local directional ternary pattern-three orthogonal planes (VLDTP–TOP), implemented in a distributed manner using Spark. Moreover, we implemented several state-of-the-art appearance-based and spatiotemporal feature descriptors on top of Spark. We also provide distributed video annotation services for end users, along with APIs for developing new video annotation algorithms. Due to the lack of a spatiotemporal video annotation dataset that provides ground truth for both spatial and temporal information, we introduce a video annotation dataset, namely STAD, which provides ground truth for spatial and temporal information. An extensive experimental analysis was performed to validate the performance and scalability of the proposed feature descriptors.

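The role Spark plays in such a platform, parallelizing per-clip descriptor extraction across a cluster, can be illustrated with a short PySpark skeleton. This is a hedged sketch of the general pattern, not the paper's VLDTP–TOP implementation; extract_features is a stand-in.

```python
from pyspark.sql import SparkSession

def extract_features(clip_path):
    # Stand-in for a real spatiotemporal descriptor; a real system would
    # decode frames and compute texture patterns over three orthogonal planes.
    return clip_path, [0.0] * 128

spark = SparkSession.builder.appName("video-annotation").getOrCreate()
clips = spark.sparkContext.parallelize(["clip_000.mp4", "clip_001.mp4"])
features = clips.map(extract_features).collectAsMap()  # {clip: descriptor}
spark.stop()
```
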
9. Sigurdsson, Gunnar, Olga Russakovsky, Ali Farhadi, Ivan Laptev, and Abhinav Gupta. "Much Ado About Time: Exhaustive Annotation of Temporal Data." Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 4 (September 21, 2016): 219–28. http://dx.doi.org/10.1609/hcomp.v4i1.13290.

Abstract: Large-scale annotated datasets allow AI systems to learn from and build upon the knowledge of the crowd. Many crowdsourcing techniques have been developed for collecting image annotations. These techniques often implicitly rely on the fact that a new input image takes a negligible amount of time to perceive. In contrast, we investigate and determine the most cost-effective way of obtaining high-quality multi-label annotations for temporal data such as videos. Watching even a short 30-second video clip requires a significant time investment from a crowd worker; thus, requesting multiple annotations following a single viewing is an important cost-saving strategy. But how many questions should we ask per video? We conclude that the optimal strategy is to ask as many questions as possible in a HIT (up to 52 binary questions after watching a 30-second video clip in our experiments). We demonstrate that while workers may not correctly answer all questions, the cost-benefit analysis nevertheless favors consensus from multiple such cheap-yet-imperfect iterations over more complex alternatives. When compared with a one-question-per-video baseline, our method achieves a 10% improvement in recall (76.7% ours versus 66.7% baseline) at comparable precision (83.8% ours versus 83.0% baseline) in about half the annotation time (3.8 minutes ours compared to 7.1 minutes baseline). We demonstrate the effectiveness of our method by collecting multi-label annotations of 157 human activities on 1,815 videos.

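The consensus idea the abstract relies on, combining several cheap-yet-imperfect answers per question, boils down to majority voting. The following is our own illustrative sketch of that aggregation step, not the paper's code.

```python
from collections import defaultdict

def consensus(answers):
    """answers: iterable of (video_id, question_id, bool) worker votes."""
    votes = defaultdict(list)
    for video_id, question_id, answer in answers:
        votes[(video_id, question_id)].append(answer)
    # Accept a label when a strict majority of votes is positive.
    return {key: sum(v) > len(v) / 2 for key, v in votes.items()}

labels = consensus([
    ("vid1", "is_cooking", True),
    ("vid1", "is_cooking", True),
    ("vid1", "is_cooking", False),
])
assert labels[("vid1", "is_cooking")] is True
```
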
10. Gligorov, Riste, Michiel Hildebrand, Jacco Van Ossenbruggen, Lora Aroyo, and Guus Schreiber. "Topical Video Search: Analysing Video Concept Annotation through Crowdsourcing Games." Human Computation 4, no. 1 (April 26, 2017): 47–70. http://dx.doi.org/10.15346/hc.v4i1.77.

Abstract: Games with a purpose (GWAPs) are increasingly used in audio-visual collections as a mechanism for annotating videos through tagging. One such GWAP is Waisda?, a video labeling game in which players tag streaming video and win points by reaching consensus on tags with other players. The open-ended and unconstrained manner of tagging in the fast-paced setting of the game has a fundamental impact on the resulting tags. We find that Waisda? tags predominantly describe visual objects and rarely refer to the topics of the videos. In this study we evaluate to what extent the tags entered by players can be regarded as topical descriptors of the video material. Moreover, we characterize the quality of the user tags as topical descriptors with the aim of detecting and filtering out the bad ones. Our results show that, after filtering, game tags perform equally well compared to manually crafted metadata when it comes to accessing the videos by topic. An important consequence of this finding is that tagging games can provide a cost-effective alternative in situations where manual annotation by professionals is too costly.

Dissertations / Theses on the topic "Video annotation"

1. Chaudhary, Ahmed. "Video annotation tools." Thesis, Texas A&M University, 2008. http://hdl.handle.net/1969.1/85911.

Abstract: This research deals with annotations in scholarly work. Annotations have been studied extensively, and a significant body of research has shown that, instead of implementing domain-specific annotation applications, a better approach is to develop general-purpose annotation toolkits that can be used to create domain-specific applications. A video annotation toolkit, along with toolkits for searching, retrieving, analyzing, and presenting videos, can help achieve the broader goal of creating integrated workspaces for scholarly work in humanities research, similar to existing environments in fields such as mathematics, engineering, statistics, software development, and bioinformatics. This research implements a video annotation toolkit and evaluates it by examining its usefulness in creating applications for different areas. It was found that many areas of study in the arts and sciences can benefit from a video annotation application tailored to their specific needs, and that an annotation toolkit can significantly reduce the time needed to develop such applications. The toolkit was engineered through successive refinements of prototype applications developed for different application areas. Its design was also guided by a set of features identified by the research community for an ideal general-purpose annotation toolkit. This research contributes by combining these two approaches to toolkit design and construction into a hybrid approach that could be useful for similar or related efforts.

2. Hartley, Edward. "Automating video annotation." Thesis, Lancaster University, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.435884.

3. Clement, Michael David. "Obstacle Annotation by Demonstration." Diss., Brigham Young University, 2007. http://contentdm.lib.byu.edu/ETD/image/etd1722.pdf.

4. Mahmood, Muhammad Habib. "Motion annotation in complex video datasets." Doctoral thesis, Universitat de Girona, 2018. http://hdl.handle.net/10803/667583.

Abstract: Motion segmentation refers to the process of separating regions and trajectories from a video sequence into coherent subsets of space and time. In this thesis, we created a new multifaceted motion segmentation dataset comprising real-life long and short sequences, with different numbers of motions and frames per sequence, and real distortions with missing data. Trajectory- and region-based ground truth is provided on all frames of all sequences. We also proposed a new semi-automatic tool for delineating trajectories in complex videos, including videos captured from moving cameras. From a minimal manual annotation of an object mask, the algorithm propagates the label mask across all frames. Object label correction for static and moving occluders is performed by tracking the occluder mask for a given depth ordering. The results show that our cascaded-naive approach is successful on a variety of video sequences.

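Propagating a label mask across frames, the step this abstract describes, is commonly built on dense optical flow. The sketch below illustrates that general idea with OpenCV's Farnebäck flow; it is not the thesis's cascaded approach and omits the occlusion handling the abstract mentions.

```python
import cv2
import numpy as np

def propagate_mask(prev_gray, next_gray, mask):
    """Warp a previous-frame binary mask (uint8) into the next frame."""
    # Flow computed from next to prev tells each next-frame pixel where
    # it came from, which is exactly what backward warping needs.
    flow = cv2.calcOpticalFlowFarneback(
        next_gray, prev_gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = mask.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    return cv2.remap(mask, map_x, map_y, cv2.INTER_NEAREST)
```
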
5. Aydinlilar, Merve. "Semi-Automatic Semantic Video Annotation Tool." Master's thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613966/index.pdf.

Abstract: Semantic annotation of video content is necessary for the indexing and retrieval tasks of video management systems. Currently, it is not possible to extract all high-level semantic information from video data automatically. Video annotation tools assist users in generating annotations to represent video data. The generated annotations can also be used for testing and evaluating content-based retrieval systems. In this study, a semi-automatic semantic video annotation tool is presented. Generated annotations are in the MPEG-7 metadata format to ensure interoperability. With the help of image processing and pattern recognition solutions, the annotation process is partly automated and annotation time is reduced. Annotations can be made for spatio-temporal decompositions of video data, and extraction of low-level visual descriptions is included to obtain complete descriptions.

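Since the tool emits MPEG-7 metadata, it may help to see the rough shape of such an annotation. The snippet below sketches a simplified MPEG-7-style video segment using Python's ElementTree; namespaces, schema types, and most required elements are omitted, so treat it as schematic rather than valid MPEG-7.

```python
import xml.etree.ElementTree as ET

# Schematic MPEG-7-style temporal annotation (illustrative only).
segment = ET.Element("VideoSegment")
text = ET.SubElement(segment, "TextAnnotation")
ET.SubElement(text, "FreeTextAnnotation").text = "goal scored by the home team"
media_time = ET.SubElement(segment, "MediaTime")
ET.SubElement(media_time, "MediaTimePoint").text = "T00:01:30"  # segment start
ET.SubElement(media_time, "MediaDuration").text = "PT12S"       # 12-second span
print(ET.tostring(segment, encoding="unicode"))
```
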
6. Foley-Fisher, Zoltan. "A pursuit method for video annotation." Thesis, University of British Columbia, 2012. http://hdl.handle.net/2429/43613.

Abstract: Video annotation is the process of describing or elaborating on objects or events represented in video. Part of this process involves time-consuming manual interaction to define spatio-temporal entities, such as a region of interest within the video. This dissertation proposes a pursuit method for video annotation to quickly define a particular type of spatio-temporal entity known as a point-based path. A pursuit method is particularly suited to annotation contexts where a precise bounding region is not needed, such as when annotators draw attention to objects in consumer video. We demonstrate the validity of the pursuit method with measurements of both accuracy and annotation time when annotators create point-based paths. Annotation tool designers can now choose a pursuit method for suitable annotation contexts.

7. Salway, Andrew. "Video annotation: the role of specialist text." Thesis, University of Surrey, 1999. http://epubs.surrey.ac.uk/843350/.

Abstract: Digital video is among the most information-intensive modes of communication. The retrieval of video from digital libraries, along with sound and text, is a major challenge for the computing community in general and for the artificial intelligence community in particular. The advent of digital video has set some old questions in a new light. Questions relating to aesthetics and to the role of surrogates (image for reality, text for image) invariably touch upon the link between vision and language, and dealing with this link computationally is important for the artificial intelligence enterprise. Interesting images to consider, both aesthetically and for research in video retrieval, include those which are constrained and patterned and which convey rich meanings, for example dance. These are specialist images for us and require a special language for description and interpretation. Furthermore, they require specialist knowledge to be understood, since there is usually more than meets the untrained eye: this knowledge may also be articulated in the language of the specialism. To be retrieved effectively and efficiently, video has to be annotated, particularly so for specialist moving images. Annotation involves attaching keywords from the specialism along with, in our case, commentaries produced by experts, including those written and spoken specifically for annotation and those obtained from a corpus of extant texts. A system that processes such collateral text for video annotation should perhaps be grounded in an understanding of the link between vision and language. This thesis attempts to synthesise ideas from artificial intelligence, multimedia systems, linguistics, cognitive psychology and aesthetics. The link between vision and language is explored by focusing on moving images of dance and the special language used to describe and interpret them. We have developed an object-oriented system, KAB, which helps to annotate a digital video library with a collateral corpus of texts and terminology. User evaluation has been encouraging, and the system is now available on the WWW.

8. Silva, João Miguel Ferreira da. "People and object tracking for video annotation." Master's thesis, Faculdade de Ciências e Tecnologia, 2012. http://hdl.handle.net/10362/8953.

Abstract: [Dissertation submitted to obtain the degree of Master in Informatics Engineering.] Object tracking is a thoroughly researched problem, with a body of associated literature dating at least as far back as the late 1970s. However, and despite the development of some satisfactory real-time trackers, it has not yet seen widespread use. This is not due to a lack of applications for the technology, since several interesting ones exist. In this document, it is postulated that this status quo is due, at least in part, to a lack of easy-to-use software libraries supporting object tracking. An overview of the problems associated with object tracking is presented, and the process of developing one such library is documented. This discussion includes how to overcome problems like heterogeneities in object representations and requirements for training or initial object position hints. Video annotation is the process of associating data with a video's content. Associating data with a video has numerous applications, ranging from making large video archives or long videos searchable, to enabling discussion about and augmentation of the video's content. Object tracking is presented as a valid approach to both automatic and manual video annotation, and the integration of the developed object tracking library into an existing video annotator, running on a tablet computer, is described. The challenges involved in designing an interface to support the association of video annotations with tracked objects in real time are also discussed. In particular, we discuss our interaction approaches to handle moving object selection on live video, which we have called "Hold and Overlay" and "Hold and Speed Up". In addition, the results of a set of preliminary tests are reported.

Funding: project "TKB – A Transmedia Knowledge Base for contemporary dance" (PTDC/EA/AVP/098220/2008, funded by FCT/MCTES), the UTAustin – Portugal Digital Media Program (SFRH/BD/42662/2007, FCT/MCTES), and CITI/DI/FCT/UNL (Pest-OE/EEI/UI0527/2011).

9. Dye, Brigham R. "Reliability of Pre-Service Teachers Coding of Teaching Videos Using Video-Annotation Tools." BYU ScholarsArchive, 2007. https://scholarsarchive.byu.edu/etd/990.

Abstract: Teacher education programs that aspire to help pre-service teachers develop expertise must help students engage in deliberate practice along dimensions of teaching expertise. However, field teaching experiences often lack the quantity and quality of feedback needed for meaningful teaching practice. The limited availability of supervising teachers makes it difficult to personally observe and evaluate each student teacher's field teaching performances. Furthermore, when a supervising teacher debriefs such an observation, the supervising teacher and student may struggle to communicate meaningfully about the teaching performance, because they often have very different perceptions of the same lesson. Video analysis tools show promise for improving the quality of feedback student teachers receive by providing a common reference for evaluative debriefing and by allowing students to generate their own feedback by coding videos of their own teaching. This study investigates the reliability of pre-service teacher coding using a video analysis tool. It found that students were moderately reliable coders when coding video of an expert teacher (49%-68%). However, when the reliability of student coding of their own teaching videos was audited, students showed a high degree of accuracy (91%). These contrasting findings suggest that coding reliability scores may not be simple indicators of student understanding of the teaching competencies represented by a coding scheme; instead, reliability scores may also be influenced by extraneous factors. For example, reliability scores in this study were affected by differences in the technical aspects of how students implemented the coding system, and by how coding proficiency was measured. Because this study also suggests that students can be taught to improve their coding reliability, further research may improve reliability scores (and make them a more valid reflection of student understanding of teaching competency) by training students in the technical aspects of implementing a coding system.

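The reliability figures quoted in this abstract are agreement rates between a student's codes and a reference coding. As a simple illustration (our own sketch, not the study's instrument), percent agreement over aligned video segments can be computed like this:

```python
def percent_agreement(coder_a, coder_b):
    """Fraction of video segments both coders labeled identically."""
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

# Hypothetical codes for four segments of the same teaching video.
student = ["praise", "question", "lecture", "question"]
expert  = ["praise", "lecture",  "lecture", "question"]
print(f"{percent_agreement(student, expert):.0%}")  # 75%
```
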
10. Wang, Yang. "Digital video segmentation and annotation in news programs." Hong Kong: University of Hong Kong, 2001. http://sunzi.lib.hku.hk/hkuto/record.jsp?B23273082.

Books on the topic "Video annotation"

1. Harrison, Beverly L. The annotation and analysis of video documents. Ottawa: National Library of Canada = Bibliothèque nationale du Canada, 1991.

2. Mello, Heliana, Massimo Pettorino, and Tommaso Raso, eds. Proceedings of the VIIth GSCP International Conference. Florence: Firenze University Press, 2013. http://dx.doi.org/10.36253/978-88-6655-351-9.

Abstract: The 7th International Conference of the Gruppo di Studi sulla Comunicazione Parlata, dedicated to the memory of Claire Blanche-Benveniste, chose as its main theme Speech and Corpora. The wide international origin of the 235 authors, from 21 countries and 95 institutions, led to papers on many different languages. The 89 papers of this volume reflect the themes of the conference: spoken corpora compilation and annotation, with the connected technological fields; the relation between prosody and pragmatics; speech pathologies; and papers on phonetics, speech and linguistic analysis, pragmatics, and sociolinguistics. Many papers are also dedicated to speech and second language studies. The online publication with FUP allows direct access to the sound and video linked to the papers (when downloaded).

3. Lee, Angela (Angela S. W.), Yuhwa Eva Lu, and Council on Social Work Education, eds. Asian and Pacific Americans: A selected bibliography (1995-2004) with annotations and teaching resources for social work educators. Alexandria, VA: CSWE Press, Council on Social Work Education, 2005.

4. Nagao, Katashi. Digital Content Annotation and Transcoding (Artech House Digital Audio and Video Library). Artech House Publishers, 2003.

5. Wankel, Laura A., and Patrick Blessinger, eds. Increasing Student Engagement and Retention Using Multimedia Technologies: Video Annotation, Multimedia Applications, Videoconferencing and Transmedia Storytelling. Emerald Group Publishing Limited, 2013. http://dx.doi.org/10.1108/s2044-9968(2013)6_part_f.

6. Increasing Student Engagement and Retention Using Multimedia Technologies: Video Annotation, Multimedia Applications, Videoconferencing and Transmedia Storytelling. Emerald Publishing Limited, 2013.

7. Wankel, Laura A., and Patrick Blessinger. Increasing Student Engagement and Retention Using Multimedia Technologies: Video Annotation, Multimedia Applications, Videoconferencing and Transmedia Storytelling. Emerald Publishing Limited, 2013.

8. Lösel, Gunter, and Martin Zimper. Filming, Researching, Annotating: Research Video Handbook. Walter de Gruyter GmbH, 2021.

9. Stoker, Bram, and Angelo Nessi. Dracula: Il Non Morto - l'Uomo Della Notte - Annotato. Independently Published, 2021.

Book chapters on the topic "Video annotation"

1. Del Bimbo, Alberto, and Marco Bertini. "Video Automatic Annotation." In Encyclopedia of Multimedia, 891–97. Boston, MA: Springer US, 2008. http://dx.doi.org/10.1007/978-0-387-78414-4_238.

2. Bobick, Aaron F. "Video annotation: Computers watching video." In Recent Developments in Computer Vision, 23–31. Berlin, Heidelberg: Springer Berlin Heidelberg, 1996. http://dx.doi.org/10.1007/3-540-60793-5_59.

3. Serrano, Miguel A., Jesús García, Miguel A. Patricio, and José M. Molina. "Interactive Video Annotation Tool." In Advances in Intelligent and Soft Computing, 325–32. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-14883-5_42.

4. Grassi, Marco, Christian Morbidoni, and Francesco Piazza. "Towards Semantic Multimodal Video Annotation." In Toward Autonomous, Adaptive, and Context-Aware Multimodal Interfaces: Theoretical and Practical Issues, 305–16. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-18184-9_25.

5. Zhu, Xingquan, Jianping Fan, Xiangyang Xue, Lide Wu, and Ahmed K. Elmagarmid. "Semi-automatic Video Content Annotation." In Advances in Multimedia Information Processing — PCM 2002, 245–52. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-36228-2_31.

6. Ren, Wei, and Sameer Singh. "An Automated Video Annotation System." In Pattern Recognition and Image Analysis, 693–700. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11552499_76.

7. Morsillo, Nicholas, Gideon Mann, and Christopher Pal. "YouTube Scale, Large Vocabulary Video Annotation." In Video Search and Mining, 357–86. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-12900-1_14.

8. Lombardo, Vincenzo, and Rossana Damiano. "Narrative Annotation and Editing of Video." In Interactive Storytelling, 62–73. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-16638-9_10.

9. Safadi, Bahjat, Stéphane Ayache, and Georges Quénot. "Active Cleaning for Video Corpus Annotation." In Lecture Notes in Computer Science, 518–28. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-27355-1_48.

10. Aydınlılar, Merve, and Adnan Yazıcı. "Semi-Automatic Semantic Video Annotation Tool." In Computer and Information Sciences III, 303–10. London: Springer London, 2012. http://dx.doi.org/10.1007/978-1-4471-4594-3_31.

Conference papers on the topic "Video annotation"

1. Cai, Jia-Jia, Jun Tang, Qing-Guo Chen, Yao Hu, Xiaobo Wang, and Sheng-Jun Huang. "Multi-View Active Learning for Video Recommendation." In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19). California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/284.

Abstract: On many video websites, recommendation is implemented as a prediction problem over video-user pairs, where the videos are represented by text features extracted from the metadata. However, the metadata is manually annotated by users and is often missing for online videos. To train an effective recommender system at lower annotation cost, we propose an active learning approach that fully exploits the visual view of videos while querying as few annotations as possible from the text view. On one hand, a joint model is proposed to learn the mapping from the visual view to the text view by simultaneously aligning the two views and minimizing the classification loss. On the other hand, a novel strategy based on prediction inconsistency and watching frequency is proposed to actively select the most important videos for metadata querying. Experiments on both classification datasets and real video recommendation tasks validate that the proposed approach can significantly reduce annotation cost.

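The selection strategy summarized here, combining prediction inconsistency between views with watching frequency, can be sketched as a scoring rule. This is our illustrative reading of the abstract, not the paper's exact criterion; p_visual and p_text are assumed to be per-video positive-class probabilities from the two views.

```python
import numpy as np

def select_queries(p_visual, p_text, watch_freq, budget=10):
    """Pick the videos whose text-view annotation should be queried next."""
    inconsistency = np.abs(p_visual - p_text)  # disagreement between the two views
    score = inconsistency * watch_freq         # weight by how often each video is watched
    return np.argsort(score)[::-1][:budget]    # indices of the top-scoring videos
```
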
2. Amorim, Marcello N., Celso A. S. Santos, and Orivaldo L. Tavares. "Integrating Crowdsourcing and Human Computation for Complex Video Annotation Tasks." In Simpósio Brasileiro de Sistemas Multimídia e Web. Sociedade Brasileira de Computação, 2020. http://dx.doi.org/10.5753/webmedia_estendido.2020.13053.

Abstract: Video annotation is an activity that aims to supplement this type of multimedia object with additional content or information about its context, nature, content, quality, and other aspects. These annotations are the basis for a variety of multimedia applications, with purposes ranging from entertainment to security. Manual annotation is a strategy that uses the intelligence and workforce of people in the annotation process and is an alternative for cases where automatic methods cannot be applied. However, manual video annotation can be costly, because as the content to be annotated grows, so does the annotation workload. Crowdsourcing is a viable strategy in this context because it relies on outsourcing the tasks to a multitude of workers, who perform specific parts of the work in a distributed way. However, as the complexity of the required media annotations increases, it becomes necessary to employ skilled workers, or workers willing to perform larger, more complicated, and more time-consuming tasks. This makes crowdsourcing challenging, as experts demand higher pay and recruiting tends to be difficult. To overcome this problem, strategies have emerged based on decomposing the main problem into a set of simpler subtasks suitable for crowdsourcing processes. These smaller tasks are organized in a workflow so that the execution process can be formalized and controlled. In this sense, this thesis presents a new framework that allows the use of crowdsourcing to create applications requiring complex video annotation tasks. The developed framework covers the whole process, from the definition of the problem and the decomposition of tasks to the construction, execution, and management of the workflow. This framework, called CrowdWaterfall, incorporates the strengths of current proposals while adding new concepts, techniques, and resources to overcome some of their limitations.

3. Butler, Matthew, Tim Zapart, and Raymond Li. "Video Annotation - Improving Assessment of Transient Educational Events." In InSITE 2006: Informing Science + IT Education Conference. Informing Science Institute, 2006. http://dx.doi.org/10.28945/3019.

Abstract: Annotation of video content has been commonplace in the entertainment industry for many years and is now becoming a valuable tool within the business world. Unfortunately, its use in education has to date been limited. Although research and development are being undertaken to apply video annotation techniques to assessment, and both software and hardware exist to facilitate this process, these solutions are generally cost-prohibitive for educational use. This paper investigates simple video annotation methods for the assessment of transient events, such as presentations, in the education context. The authors introduce a number of existing uses of video annotation, discuss the educational context in which they can be placed, and highlight fundamental concerns with some assessment practices. A framework for a solution involving video annotation techniques is discussed, along with a practical demonstration of a prototyped solution and discussion of further applications.

4. Zheng, Aihua, Jixin Ma, Bin Luo, Miltos Petridis, and Jin Tang. "Character-angle based video annotation." In 2009 International Multiconference on Computer Science and Information Technology (IMCSIT). IEEE, 2009. http://dx.doi.org/10.1109/imcsit.2009.5352786.

5. Howlett, Todd, Mark A. Robertson, Dan Manthey, and John Krol. "Annotation of UAV surveillance video." In Defense and Security, edited by Arthur A. Andraitis and Gerard J. Leygraaf. SPIE, 2004. http://dx.doi.org/10.1117/12.542427.

6. Zhang, Shuqun. "Object tracking for video annotation." In Optical Science and Technology, the SPIE 49th Annual Meeting, edited by Andrew G. Tescher. SPIE, 2004. http://dx.doi.org/10.1117/12.560324.

7. Gaur, Eshan, Vikas Saxena, and Sandeep K. Singh. "Video annotation tools: A review." In 2018 International Conference on Advances in Computing, Communication Control and Networking (ICACCCN). IEEE, 2018. http://dx.doi.org/10.1109/icacccn.2018.8748669.

8. Qi, Guo-Jun, Xian-Sheng Hua, Yong Rui, Jinhui Tang, Tao Mei, and Hong-Jiang Zhang. "Correlative multi-label video annotation." In Proceedings of the 15th ACM International Conference on Multimedia. New York, New York, USA: ACM Press, 2007. http://dx.doi.org/10.1145/1291233.1291245.

9. Soliman, Mohamed, Hamed R. Tavakoli, and Jorma Laaksonen. "Towards gaze-based video annotation." In 2016 Sixth International Conference on Image Processing Theory, Tools and Applications (IPTA). IEEE, 2016. http://dx.doi.org/10.1109/ipta.2016.7821028.

10. Correia, Nuno, and Teresa Chambel. "Active video watching using annotation." In Proceedings of the Seventh ACM International Conference on Multimedia. New York, New York, USA: ACM Press, 1999. http://dx.doi.org/10.1145/319878.319919.

Reports on the topic "Video annotation"

1. Wilkins, Justin, Jarod Norton, and G. Roegner. Monitoring a nearshore beneficial use site: application of a benthic sled and video annotation. Engineer Research and Development Center (U.S.), January 2019. http://dx.doi.org/10.21079/11681/31593.

2. Becker, Ralf. Recording lectures with annotations (Video case study). Bristol, UK: The Economics Network, July 2020. http://dx.doi.org/10.53593/n3318a.