Dissertations / Theses on the topic 'Video analysis'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Video analysis.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Lidén, Jonas. "Distributed Video Content Analysis." Thesis, Umeå universitet, Institutionen för datavetenskap, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-99062.
Ren, Reede. "Audio-visual football video analysis, from structure detection to attention analysis." Thesis, University of Glasgow, 2008. http://theses.gla.ac.uk/77/.
Ph.D. thesis submitted to the Faculty of Information and Mathematical Sciences, Department of Computing Science, University of Glasgow, 2008. Includes bibliographical references. Print version also available.
Pérez, Rúa Juan Manuel. "Hierarchical motion-based video analysis with applications to video post-production." Thesis, Rennes 1, 2017. http://www.theses.fr/2017REN1S125/document.
The manuscript presented here contains the findings and conclusions of the research carried out in dynamic visual scene analysis. To be precise, we consider the ubiquitous monocular-camera computer vision set-up and the natural, unconstrained videos it can produce. In particular, we focus on problems that are of general interest to the computer vision literature, and of special interest to the film industry, in the context of the video post-production pipeline. The tackled problems can be grouped into two main categories, according to whether or not they are driven by user interaction: user-assisted video processing tools and unsupervised tools for video analysis. This division is rather synthetic, but it is in fact related to the ways the proposed methods are used inside the video post-production pipeline. These groups correspond to the main parts of this manuscript, which are in turn formed by chapters that explain our proposed methods. However, a single thread ties together all of our findings: a hierarchical analysis of motion composition in dynamic scenes. We explain our exact contributions, together with our main motivations and results, in the following sections. We depart from a hypothesis that links the ability to consider a hierarchical structure of scene motion with a deeper level of dynamic scene understanding. This hypothesis is inspired by a plethora of scientific research in biological and psychological vision. More specifically, we refer to the biological vision research that established the presence of motion-related sensory units in the visual cortex. The discovery of these specialized brain units motivated psychological vision researchers to investigate how animal locomotion (obstacle avoidance, path planning, self-localization) and other higher-level tasks are directly influenced by motion-related percepts.
Interestingly, the perceptual responses that take place in the visual cortex are activated not only by motion itself, but also by occlusions, dis-occlusions, motion composition, and moving edges. Furthermore, psychological vision research has linked the brain's ability to understand motion composition from visual information to high-level scene understanding tasks such as object segmentation and recognition.
Touliatou, Georgia. "Diegetic stories in a video mediation : a narrative analysis of four videos." Thesis, University of Surrey, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.397132.
Park, Dong-Jun. "Video event detection framework on large-scale video data." Diss., University of Iowa, 2011. https://ir.uiowa.edu/etd/2754.
Bales, Michael Ryan. "Illumination compensation in video surveillance analysis." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/39535.
Almquist, Mathias, and Viktor Almquist. "Analysis of 360° Video Viewing Behaviour." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-144405.
Gu, Lifang. "Video analysis in MPEG compressed domain." University of Western Australia. School of Computer Science and Software Engineering, 2003. http://theses.library.uwa.edu.au/adt-WU2003.0016.
Gu, Lifang. "Video analysis in MPEG compressed domain." Thesis, University of Western Australia, 2002. http://theses.library.uwa.edu.au/adt-2003.0016.
Li, Hao. "Advanced video analysis for surveillance applications." Thesis, University of Bristol, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.555815.
Plakas, Konstantinos. "Video sequence analysis for subsea robotics." Thesis, Heriot-Watt University, 2001. http://hdl.handle.net/10399/1186.
Chan, Stephen Chi Yee. "Video analysis for content-based applications." Thesis, University of Southampton, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.395362.
Almquist, Mathias, and Viktor Almquist. "Analysis of 360° Video Viewing Behaviours." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-144907.
Whitmore, Jean. "Video Magnification for Structural Analysis Testing." DigitalCommons@CalPoly, 2018. https://digitalcommons.calpoly.edu/theses/1863.
Baradel, Fabien. "Structured deep learning for video analysis." Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEI045.
With the massive increase of video content on the Internet and beyond, the automatic understanding of visual content could impact many different application fields, such as robotics, health care, content search, and filtering. The goal of this thesis is to provide methodological contributions in Computer Vision and Machine Learning for automatic content understanding from videos. We focus on two problems, namely fine-grained human action recognition and visual reasoning from object-level interactions. In the first part of this manuscript, we tackle the problem of fine-grained human action recognition. We introduce two attention mechanisms over the visual content, both trained with guidance from the articulated human pose. The first method is able to automatically draw attention to important pre-selected points of the video, conditioned on learned features extracted from the articulated human pose. We show that such a mechanism improves performance on the final task and provides a good way to visualize the most discriminative parts of the visual content. The second method goes beyond pose-based human action recognition. We develop a method able to automatically identify unstructured feature clouds of interest in the video using contextual information. Furthermore, we introduce a learned distributed system for aggregating the features in a recurrent manner and taking decisions in a distributed way. We demonstrate that we can achieve better performance than obtained previously, without using articulated pose information at test time. In the second part of this thesis, we investigate video representations from an object-level perspective. Given a set of detected persons and objects in the scene, we develop a method which learns to infer the important object interactions through space and time using video-level annotation only. This allows us to identify important objects and object interactions for a given action, as well as potential dataset bias.
Finally, in a third part, we go beyond classification and supervised learning from visual content by tackling causality in interactions, in particular the problem of counterfactual learning. We introduce a new benchmark, namely CoPhy, where, after watching a video, the task is to predict the outcome after modifying the initial state of the video. We develop a method based on object-level interactions able to infer object properties without supervision, as well as future object locations after the intervention.
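The attention-based aggregation described in the abstract above can be illustrated with a toy soft-attention pooling step. Everything here is invented for illustration (the fixed query vector stands in for a learned, pose-conditioned query, and the 2-D frame features stand in for real CNN features); this is a sketch of the general technique, not the thesis's architecture.

```python
# Toy soft-attention pooling over per-frame feature vectors. A query scores
# each frame; softmax turns scores into weights; the pooled feature is the
# weighted sum. All vectors here are illustrative stand-ins.
import math

def attend(features, query):
    # Dot-product score between the query and each frame feature.
    scores = [sum(q * f for q, f in zip(query, feat)) for feat in features]
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]  # softmax over frames
    dim = len(features[0])
    pooled = [sum(w * feat[i] for w, feat in zip(weights, features))
              for i in range(dim)]
    return weights, pooled

# A query aligned with the first frame's feature pulls attention toward it.
weights, pooled = attend([[1.0, 0.0], [0.0, 1.0]], [2.0, 0.0])
print(weights)  # first frame receives the larger weight
```

The point of the sketch is only the mechanism: which frames dominate the pooled representation is decided by the query, which in the thesis is learned rather than fixed.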
Fraz, Muhammad. "Video content analysis for intelligent forensics." Thesis, Loughborough University, 2014. https://dspace.lboro.ac.uk/2134/18065.
Al Hajj, Hassan. "Video analysis for augmented cataract surgery." Thesis, Brest, 2018. http://www.theses.fr/2018BRES0041/document.
The digital era is increasingly changing the world due to the sheer volume of data produced every day. The medical domain is highly affected by this revolution, because analysing this data can be a source of education and support for clinicians. In this thesis, we propose to reuse the surgery videos recorded in operating rooms for a computer-assisted surgery system. We are chiefly interested in recognizing the surgical gesture being performed at each instant in order to provide relevant information. To achieve this goal, this thesis addresses the surgical tool recognition problem in cataract surgery videos. In the surgical field, these tools are partially visible in videos and highly similar to one another. To address the visual challenges in the cataract surgical field, we propose to add an additional camera filming the surgical tray. Our goal is to detect tool presence in the two complementary types of videos: tool-tissue interaction videos, which record the patient's eye, and surgical tray videos, which record the surgical tray activities. Two tasks are defined on the surgical tray videos: tool change detection and tool presence detection. First, we establish a similar pipeline for both tasks, based on standard classification methods on top of visual features. It yields satisfactory results for the tool change task; however, it performs poorly on the surgical tool presence task on the tray.
Second, we design deep learning architectures for surgical tool detection on both video types in order to avoid the difficulties of manually designing visual features. To alleviate the inherent challenges of the surgical tray videos, we propose to generate simulated surgical tray scenes and use them with a patch-based convolutional neural network (CNN). Ultimately, we study the temporal information using an RNN to process the CNN results. Contrary to our primary hypothesis, the experimental results show poor performance for surgical tool presence on the tray but very good results on the tool-tissue interaction videos. We achieve even better results in the surgical field after fusing the tool change information coming from the tray with the tool presence signals from the tool-tissue interaction videos.
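The role the temporal model plays on top of per-frame CNN scores in the abstract above (turning noisy frame-level detections into stable presence decisions) can be sketched with a much simpler filter. This exponential moving average is an illustrative stand-in for the RNN, not the thesis's model; the score values, `alpha`, and `threshold` are assumed.

```python
# Illustrative sketch: temporally smooth noisy per-frame tool-presence
# scores with an exponential moving average, then binarize. A single
# spurious high score is suppressed rather than reported as a detection.

def smooth_presence(scores, alpha=0.3, threshold=0.5):
    """scores: per-frame presence probabilities in [0, 1].
    alpha: smoothing factor (lower = smoother); threshold: decision cut."""
    smoothed, state = [], scores[0]
    for s in scores:
        state = alpha * s + (1 - alpha) * state  # EMA update
        smoothed.append(state)
    return [v >= threshold for v in smoothed]

# One isolated false positive in frame 2 is filtered out:
print(smooth_presence([0.1, 0.1, 0.9, 0.1, 0.1]))
# Sustained evidence survives the smoothing:
print(smooth_presence([0.9, 0.9, 0.9]))
```

An RNN learns this kind of temporal integration instead of hard-coding it, but the input/output contract (noisy per-frame scores in, stabilized decisions out) is the same.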
Stobaugh, John David. "Novel use of video and image analysis in a video compression system." Thesis, University of Iowa, 2015. https://ir.uiowa.edu/etd/1766.
Dye, Brigham R. "Reliability of Pre-Service Teachers Coding of Teaching Videos Using Video-Annotation Tools." BYU ScholarsArchive, 2007. https://scholarsarchive.byu.edu/etd/990.
Monger, Eloise. "'Video-View-Point' : video analysis to reveal tacit indicators of student nurse competence." Thesis, University of Southampton, 2014. https://eprints.soton.ac.uk/366452/.
Tripp, Tonya R. "The Influence of Video Analysis on Teaching." BYU ScholarsArchive, 2010. https://scholarsarchive.byu.edu/etd/2562.
Yoon, Kyongil. "Key-frame appearance analysis for video surveillance." College Park, Md. : University of Maryland, 2005. http://hdl.handle.net/1903/2818.
Thesis research directed by: Computer Science. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
Wang, Ying. "Analysis Application for H.264 Video Encoding." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-133633.
Weir, Lindsay Brian. "Digital video for time based analysis systems." Thesis, University of Canterbury. Computer Science, 1994. http://hdl.handle.net/10092/9406.
Steinmetz, Nadine. "Context-aware semantic analysis of video metadata." Phd thesis, Universität Potsdam, 2013. http://opus.kobv.de/ubp/volltexte/2014/7055/.
The Semantic Web provides information contained in the World Wide Web as machine-readable facts. In comparison to a keyword-based inquiry, semantic search enables a more sophisticated exploration of web documents. By clarifying the meaning behind entities, search results are more precise, and the semantics simultaneously enable an exploration of semantic relationships. However, unlike keyword searches, a semantic entity-focused search requires that web documents be annotated with semantic representations of common words and named entities. Manual semantic annotation of (web) documents is time-consuming; in response, automatic annotation services have emerged in recent years. These annotation services take continuous text as input, detect important key terms and named entities, and annotate them with semantic entities contained in widely used semantic knowledge bases, such as Freebase or DBpedia. Metadata of video documents require special attention. Semantic analysis approaches for continuous text cannot be applied, because contextual information in video documents originates from multiple sources possessing different reliabilities and characteristics. This thesis presents a semantic analysis approach consisting of a context model and a disambiguation algorithm for video metadata. The context model takes into account the characteristics of video metadata and derives a confidence value for each metadata item. The confidence value represents the level of correctness and ambiguity of the textual information of the metadata item: the lower the ambiguity and the higher the prospective correctness, the higher the confidence value. The metadata items derived from the video metadata are analyzed in a specific order, from high to low confidence. Previously analyzed metadata are used as reference points in the context for subsequent disambiguation.
The contextually most relevant entity is identified by means of descriptive texts and semantic relationships to the context. The context is created dynamically for each metadata item, taking into account the confidence value and other characteristics. The proposed semantic analysis follows two hypotheses: metadata items of a context should be processed in descending order of their confidence value, and the metadata that pertains to a context should be limited by content-based segmentation boundaries. The evaluation results support the proposed hypotheses and show increased recall and precision for annotated entities, especially for metadata that originates from sources with low reliability. The algorithms have been evaluated against several state-of-the-art annotation approaches. The presented semantic analysis process is integrated into a video analysis framework and has been successfully applied in several projects for the purpose of semantic exploration of videos.
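The confidence-ordered disambiguation idea in the abstract above can be sketched as follows. All entity names, confidence scores, and the tag-overlap heuristic here are invented for illustration; the thesis scores candidates with descriptive texts and semantic relationships in a knowledge base, not with this toy overlap count.

```python
# Hedged sketch: process metadata items from high to low confidence, and
# let already-resolved entities form a context that biases later choices.

def disambiguate(items, candidates):
    """items: list of (surface_form, confidence).
    candidates: surface form -> list of {"id": ..., "tags": set(...)}."""
    context, resolved = set(), {}
    # Highest-confidence items first, so reliable metadata anchors the context.
    for surface, _conf in sorted(items, key=lambda it: -it[1]):
        best = max(candidates[surface],
                   key=lambda ent: len(ent["tags"] & context))
        resolved[surface] = best["id"]
        context |= best["tags"]  # grow the context for later items
    return resolved

items = [("Jaguar", 0.4), ("Le Mans", 0.9)]
candidates = {
    "Le Mans": [{"id": "LeMans_race", "tags": {"motorsport", "france"}}],
    "Jaguar": [{"id": "Jaguar_animal", "tags": {"zoology"}},
               {"id": "Jaguar_cars", "tags": {"motorsport", "uk"}}],
}
print(disambiguate(items, candidates))
# The high-confidence "Le Mans" resolves first and steers the ambiguous
# "Jaguar" to the car maker via the shared "motorsport" tag.
```

Reversing the processing order would resolve "Jaguar" with an empty context, which is exactly the failure mode the confidence ordering is meant to avoid.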
Mackiewicz, Michał. "Computer-assisted wireless capsule endoscopy video analysis." Thesis, University of East Anglia, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.445207.
Xu, Xun. "Semantic spaces for video analysis of behaviour." Thesis, Queen Mary, University of London, 2016. http://qmro.qmul.ac.uk/xmlui/handle/123456789/23885.
Ilisescu, Corneliu. "Analysis and synthesis of interactive video sprites." Thesis, University College London (University of London), 2018. http://discovery.ucl.ac.uk/10045947/.
Li, Dong. "Thermal image analysis using calibrated video imaging." Diss., Columbia, Mo. : University of Missouri-Columbia, 2006. http://hdl.handle.net/10355/4455.
The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file (viewed on April 23, 2009). Includes bibliographical references.
Savadatti-Kamath, Sanmati S. "Video analysis and compression for surveillance applications." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/26602.
Committee Chair: Dr. J. R. Jackson; Committee Member: Dr. D. Scott; Committee Member: Dr. D. V. Anderson; Committee Member: Dr. P. Vela; Committee Member: Dr. R. Mersereau. Part of the SMARTech Electronic Thesis and Dissertation Collection.
Kim, Changick. "A framework for object-based video analysis." Thesis, University of Washington, 2000. http://hdl.handle.net/1773/5823.
Eastwood, Brian S. Taylor Russell M. "Multiple layer image analysis for video microscopy." Chapel Hill, N.C. : University of North Carolina at Chapel Hill, 2009. http://dc.lib.unc.edu/u?/etd,2813.
Title from electronic title page (viewed Mar. 10, 2010). "... in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Department of Computer Science." Discipline: Computer Science; Department/School: Computer Science.
Dye, Brigham R. "Reliability of pre-service teachers' coding of teaching videos using a video-analysis tool." Diss., Brigham Young University, 2007. http://contentdm.lib.byu.edu/ETD/image/etd2020.pdf.
Zheng, Hao. "Analysis of H.264-based Vclan implementation." Free to MU campus, to others for purchase, 2004. http://wwwlib.umi.com/cr/mo/fullcit?p1422980.
Deshpande, Milind Umesh. "Optimal video sensing strategy and performance analysis for wireless video sensors under delay constraints." Diss., Columbia, Mo. : University of Missouri-Columbia, 2005. http://hdl.handle.net/10355/5836.
The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file (viewed on July 17, 2006). Includes bibliographical references.
Chengegowda, Venkatesh. "Analysis of Queues for Interactive Voice and Video Response Systems : Two Party Video Calls." Thesis, KTH, Kommunikationssystem, CoS, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-102451.
Video calls on mobile devices are gaining popularity with the advent of 3G. The improved network capacity now available enables the transmission of video data over the Internet. Several VoIP service organizations forecast that today's IVR systems will evolve into Interactive Voice and Video Response (IVVR) systems. This evolution, however, faces many technical challenges along the way. Architectures for implementing queuing systems for video data, and standards for converting video data between the formats supported by callers, are two of these challenges. This thesis is an analysis of queues and media transcoding for IVVRs. A major effort in this work involves building a prototype IVVR queuing system. The system is built using an open-source server named Asterisk and a MySQL database. Asterisk is a SIP-based Private Branch Exchange (PBX) server and also a development environment for VoIP-based IVRs. Functional scenarios for SIP session establishment, and the corresponding session setup times for the proposed queue model, are measured. The results indicate that the prototype serves as an adequate model for a queue, although a significant delay is introduced for session establishment requests. The work also covers the analysis of integrating DiaStar™, a SIP-based media transcoding engine, into this queue. However, the system is not fully functional with DiaStar for media translation. The study concludes with a mention of areas for future work on this system and the general state of IVVR queuing systems in the industry.
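The queuing behaviour studied in this thesis can be illustrated with a minimal single-agent FIFO model: callers who arrive while the agent is busy wait until the previous call finishes. This toy model and its fixed service time are assumptions for illustration only; the thesis's prototype is built on Asterisk and MySQL, not on this sketch.

```python
# Minimal FIFO call-queue sketch: one agent, fixed service time per call.
# Returns each caller's wait before being answered, in arrival order.

def waiting_times(arrivals, service_time):
    agent_free_at, waits = 0, []
    for t in sorted(arrivals):
        start = max(t, agent_free_at)   # wait only if the agent is busy
        waits.append(start - t)
        agent_free_at = start + service_time
    return waits

# Three calls arriving close together, 30 s of service each:
print(waiting_times([0, 10, 15], 30))  # → [0, 20, 45]
```

Even this toy model shows the effect the thesis measures: queuing delay compounds for later callers once arrivals outpace the service rate.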
Lee, Sangkeun. "Video analysis and abstraction in the compressed domain." Diss., Georgia Institute of Technology, 2003. http://etd.gatech.edu/theses/available/etd-04072004-180041/unrestricted/lee%5fsangkeun%5f200312%5fphd.pdf.
Emmot, Sebastian. "Characterizing Video Compression Using Convolutional Neural Networks." Thesis, Luleå tekniska universitet, Datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-79430.
Florez, Omar Ulises. "Knowledge Extraction in Video Through the Interaction Analysis of Activities." DigitalCommons@USU, 2013. https://digitalcommons.usu.edu/etd/1720.
Wright, Geoffrey Albert. "How Does Video Analysis Impact Teacher Reflection-for-Action?" BYU ScholarsArchive, 2008. https://scholarsarchive.byu.edu/etd/1362.
Nordeng, Eirik Tørud. "Video metric measurements in an FPGA for use in objective no-reference video quality analysis." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for elektronikk og telekommunikasjon, 2013. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-22706.
Kong, Lingchao. "Modeling of Video Quality for Automatic Video Analysis and Its Applications in Wireless Camera Networks." University of Cincinnati / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1563295836742645.
Guler, Puren. "Automated Crowd Behavior Analysis For Video Surveillance Applications." Master's thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614659/index.pdf.
people counting, people tracking, and crowd behavior analysis. In this thesis, behavior understanding is used for crowd behavior analysis. In the literature, there are two types of approaches to the behavior understanding problem: analyzing the behaviors of individuals in a crowd and using this knowledge to make deductions regarding the crowd behavior (object-based), or analyzing the crowd as a whole (holistic). In this work, a holistic approach is used to develop real-time abnormality detection in crowds using scale-invariant feature transform (SIFT) based features and unsupervised machine learning techniques.
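The holistic idea in the abstract above can be sketched as follows: summarize each frame with one global feature vector and flag frames that deviate from the statistics of normal footage. The 2-D "motion statistics" and the distance-to-mean threshold here are invented stand-ins; the thesis uses SIFT-based features and unsupervised learning, not this toy detector.

```python
# Toy holistic abnormality detector: fit mean and scale of normal frame
# features, then flag frames whose distance to the mean exceeds k * scale.
import math

def fit_normal_model(frames):
    n = len(frames)
    mean = [sum(f[i] for f in frames) / n for i in range(len(frames[0]))]
    # Average distance to the mean serves as a crude scale estimate.
    scale = sum(math.dist(f, mean) for f in frames) / n
    return mean, scale

def is_abnormal(frame, model, k=3.0):
    mean, scale = model
    return math.dist(frame, mean) > k * scale

# Invented global motion statistics for "normal" crowd frames:
normal = [(1.0, 0.9), (1.1, 1.0), (0.9, 1.1), (1.0, 1.0)]
model = fit_normal_model(normal)
print(is_abnormal((1.0, 1.05), model))  # ordinary motion
print(is_abnormal((8.0, 7.5), model))   # sudden dispersal
```

The appeal of the holistic route, as the abstract notes, is that no individual needs to be tracked: only the aggregate frame statistics are modeled.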
Eriksson, Martin. "Video based analysis and visualization of human action." Doctoral thesis, KTH, Numerisk Analys och Datalogi, NADA, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-106.
QC 20100601
Eriksson, Martin. "Video based analysis and visualization of human action /." Stockholm, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-106.
Nikfetrat, Nima. "Video-based Fire Analysis and Animation Using Eigenfires." Thèse, Université d'Ottawa / University of Ottawa, 2012. http://hdl.handle.net/10393/23471.
Forsthoefel, Dana. "Leap segmentation in mobile image and video analysis." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/50285.
Faircloth, Ryan. "AUDIO AND VIDEO TEMPO ANALYSIS FOR DANCE DETECTION." Master's thesis, University of Central Florida, 2008. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2633.
M.S.E.E., School of Electrical Engineering and Computer Science, College of Engineering and Computer Science.
Isgro, Francesco. "Geometric methods for video sequence analysis and applications." Thesis, Heriot-Watt University, 2001. http://hdl.handle.net/10399/495.
Fletcher, M. J. "A modular system for video based motion analysis." Thesis, University of Reading, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.293144.