Dissertations on the topic "Video image analysis"
Browse the top 50 dissertations for research on the topic "Video image analysis".
GIACHELLO, SILVIA. "Identità e memoria visuale: comunità, eventi, documentazione." Doctoral thesis, Politecnico di Torino, 2012. http://hdl.handle.net/11583/2540089.
Dye, Brigham R. "Reliability of pre-service teachers' coding of teaching videos using a video-analysis tool /." Diss., CLICK HERE for online access, 2007. http://contentdm.lib.byu.edu/ETD/image/etd2020.pdf.
Kim, Tae-Kyun. "Discriminant analysis of patterns in images, image ensembles, and videos." Thesis, University of Cambridge, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.612084.
Sdiri, Bilel. "2D/3D Endoscopic image enhancement and analysis for video guided surgery." Thesis, Sorbonne Paris Cité, 2018. http://www.theses.fr/2018USPCD030.
Minimally invasive surgery has made remarkable progress in the last decades and has become a very popular diagnosis and treatment tool, especially with the rapid medical and technological advances leading to innovative new tools such as robotic surgical systems and wireless capsule endoscopy. Due to the intrinsic characteristics of the endoscopic environment, including dynamic illumination conditions and moist tissues with high reflectance, endoscopic images often suffer from several degradations, such as large dark regions with low contrast and sharpness, and many artifacts such as specular reflections and blur. These challenges, together with the introduction of three-dimensional (3D) imaging surgical systems, have raised the question of endoscopic image quality, which needs to be enhanced. The enhancement process aims either to provide surgeons/doctors with better visual feedback or to improve the outcomes of subsequent tasks such as feature extraction for 3D organ reconstruction and registration. This thesis addresses the problem of endoscopic image quality enhancement by proposing novel enhancement techniques for both two-dimensional (2D) and stereo (i.e. 3D) endoscopic images. In the context of automatic tissue abnormality detection and classification for gastrointestinal tract disease diagnosis, we propose a pre-processing enhancement method for 2D endoscopic images and wireless capsule endoscopy that improves both local and global contrast. The proposed method exposes subtle inner structures and tissue details, which improves the feature detection process and the automatic classification rate of neoplastic, non-neoplastic and inflammatory tissues. Inspired by the binocular vision attention features of the human visual system, we propose in another work an adaptive enhancement technique for stereo endoscopic images combining depth and edginess information.
The adaptability of the proposed method consists in adjusting the enhancement to both the local image activity and the depth level within the scene, while controlling the inter-view difference using a binocular perception model. A subjective experiment was conducted to evaluate the performance of the proposed algorithm in terms of visual quality by both expert and non-expert observers, whose scores demonstrated the efficiency of our 3D contrast enhancement technique. In the same scope, in another recent stereo endoscopic image enhancement work we resort to the wavelet domain to target the enhancement towards specific image components, using the multiscale representation and its efficient space-frequency localization property. The proposed joint enhancement methods rely on cross-view processing and depth information, for both the wavelet decomposition and the enhancement steps, to exploit the inter-view redundancies together with perceptual properties of the human visual system related to contrast sensitivity and binocular combination and rivalry. The visual quality of the processed images and objective assessment metrics demonstrate the efficiency of our joint stereo enhancement in adjusting the image illumination in both dark and saturated regions and in emphasizing local image details such as fine veins and micro vessels, compared to other endoscopic enhancement techniques for 2D and 3D images.
Li, Dong. "Thermal image analysis using calibrated video imaging." Diss., Columbia, Mo. : University of Missouri-Columbia, 2006. http://hdl.handle.net/10355/4455.
The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file (viewed on April 23, 2009). Includes bibliographical references.
Eastwood, Brian S. Taylor Russell M. "Multiple layer image analysis for video microscopy." Chapel Hill, N.C. : University of North Carolina at Chapel Hill, 2009. http://dc.lib.unc.edu/u?/etd,2813.
Title from electronic title page (viewed Mar. 10, 2010). "... in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Department of Computer Science." Discipline: Computer Science; Department/School: Computer Science.
Sheikh, Faridul Hasan. "Analysis of 3D color matches for the creation and consumption of video content." Thesis, Saint-Etienne, 2014. http://www.theses.fr/2014STET4001.
The objective of this thesis is to propose a solution to the problem of color consistency between images originating from the same scene, irrespective of acquisition conditions. We therefore present a new color mapping framework that is able to compensate for color differences and achieve color consistency between views of the same scene. Our proposed framework works in two phases. In the first phase, we propose a new method that can robustly collect color correspondences from the neighborhood of sparse feature correspondences, despite the low accuracy of the feature correspondences. In the second phase, from these color correspondences, we introduce a new two-step robust estimation of the color mapping model: first, a nonlinear channel-wise estimation; second, a linear cross-channel estimation. For experimental assessment, we propose two new image datasets: one with ground truth for quantitative assessment, and another without ground truth for qualitative assessment. We conducted a series of experiments to investigate the robustness of our proposed framework as well as to compare it with the state of the art. We also provide a brief overview, sample results, and future perspectives of various applications of color mapping. In our experimental results, we demonstrate that, unlike many state-of-the-art methods, our proposed color mapping is robust to changes of: illumination spectrum, illumination intensity, imaging devices (sensor, optics), imaging device settings (exposure, white balance), and viewing conditions (viewing angle, viewing distance).
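The two-phase estimation described in this abstract (nonlinear channel-wise, then linear cross-channel) can be illustrated with a minimal least-squares sketch. The polynomial curve model, the intercept term, and the synthetic correspondences below are assumptions made for illustration; they are not the author's implementation.

```python
import numpy as np

def fit_color_mapping(src, dst, degree=3):
    """Estimate a two-step color mapping from color correspondences:
    (1) a nonlinear channel-wise curve per channel (a polynomial here),
    (2) a linear cross-channel correction with an intercept."""
    # Step 1: fit one polynomial per channel (nonlinear channel-wise).
    polys = [np.polyfit(src[:, c], dst[:, c], degree) for c in range(3)]
    stage1 = np.stack([np.polyval(polys[c], src[:, c]) for c in range(3)], axis=1)
    # Step 2: linear cross-channel correction by least squares.
    A = np.hstack([stage1, np.ones((len(stage1), 1))])
    M = np.linalg.lstsq(A, dst, rcond=None)[0]
    return polys, M

def apply_color_mapping(colors, polys, M):
    stage1 = np.stack([np.polyval(polys[c], colors[:, c]) for c in range(3)], axis=1)
    return np.hstack([stage1, np.ones((len(stage1), 1))]) @ M

# Synthetic correspondences: a gamma curve followed by channel mixing.
rng = np.random.default_rng(0)
src = rng.uniform(0.0, 1.0, (500, 3))
mix = np.array([[0.9, 0.05, 0.0], [0.1, 0.9, 0.1], [0.0, 0.05, 0.9]])
dst = (src ** 1.8) @ mix
polys, M = fit_color_mapping(src, dst)
err = np.abs(apply_color_mapping(src, polys, M) - dst).max()
```

On this synthetic data the two-step model recovers the mapping closely; a robust estimator would be needed for the outlier-contaminated correspondences the thesis deals with.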
Lee, Sangkeun. "Video analysis and abstraction in the compressed domain." Diss., Available online, Georgia Institute of Technology, 2004:, 2003. http://etd.gatech.edu/theses/available/etd-04072004-180041/unrestricted/lee%5fsangkeun%5f200312%5fphd.pdf.
Guo, Y. (Yimo). "Image and video analysis by local descriptors and deformable image registration." Doctoral thesis, Oulun yliopisto, 2013. http://urn.fi/urn:isbn:9789526201412.
Image description plays an important role in describing the inherent entities and scenes present in static images. In recent decades it has become a fundamental problem in many practical computer vision tasks, such as texture classification, face recognition, material categorization and medical image analysis. The research field of static image analysis can also be extended to video analysis, such as dynamic texture recognition, classification and synthesis. This doctoral research contributes to the study and development of image and video analysis from two perspectives. In the first part of the work, two image description methods are presented for creating discriminative representations for image classification. They are designed to be unsupervised (i.e., class labels of texture images are not available) or supervised (i.e., class labels are available). First, a supervised model is developed to learn discriminative local patterns, which formulates the image description method as an integrated three-layer model, with the goal of estimating an optimal subset of patterns of interest by simultaneously taking into account feature robustness, discriminative power and representation capacity. Next, for cases where class labels are not available, a linear configuration model is presented to describe the microscopic structures of an image in an unsupervised manner. This is then used together with a local descriptor, the local binary pattern (LBP) operator. A theoretical analysis shows that the developed descriptor is rotation invariant and capable of producing discriminative information complementary to the conventional LBP method. In the second part of the work, video analysis is studied based on static image description and deformable image registration, with dynamic texture description, synthesis and recognition as application areas.
First, a model for dynamic texture synthesis is proposed that creates a continuous and infinite stream of images from a given video of finite length. The method stitches together video segments in space-time by selecting mutually compatible frames from the video and arranging them in a logical order. Next, a new method for facial expression recognition is presented that formulates the dynamic facial expression recognition problem as one of constructing longitudinal atlases and performing group-wise image registration.
Stobaugh, John David. "Novel use of video and image analysis in a video compression system." Thesis, University of Iowa, 2015. https://ir.uiowa.edu/etd/1766.
Forsthoefel, Dana. "Leap segmentation in mobile image and video analysis." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/50285.
Acosta, Jesus-Adolfo. "Pavement surface distress evaluation using video image analysis." Case Western Reserve University School of Graduate Studies / OhioLINK, 1994. http://rave.ohiolink.edu/etdc/view?acc_num=case1057760579.
McEuen, Matt. "Expert object recognition in video /." Link to online version, 2005. https://ritdml.rit.edu/dspace/handle/1850/1168.
Todd, Douglas Wallace. "Zebrafish Video Analysis System for High-Throughput Drug Assay." Thesis, The University of Arizona, 2016. http://hdl.handle.net/10150/623150.
Wright, Geoffrey A. "How does video analysis impact teacher reflection-for-action? /." Diss., CLICK HERE for online access, 2008. http://contentdm.lib.byu.edu/ETD/image/etd2347.pdf.
Thomson, Malcolm S. "Real-time image processing for traffic analysis." Thesis, Edinburgh Napier University, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.260986.
Dickinson, Keith William. "Traffic data capture and analysis using video image processing." Thesis, University of Sheffield, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.306374.
Liu, Gaowen. "Learning with Shared Information for Image and Video Analysis." Doctoral thesis, Università degli studi di Trento, 2017. https://hdl.handle.net/11572/368806.
Liu, Gaowen. "Learning with Shared Information for Image and Video Analysis." Doctoral thesis, University of Trento, 2017. http://eprints-phd.biblio.unitn.it/2011/1/PhD-Thesis.pdf.
Hocking, Laird Robert. "Shell-based geometric image and video inpainting." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/281805.
Hampson, Robert W. "Video-based nearshore depth inversion using WDM method." Access to citation, abstract and download form provided by ProQuest Information and Learning Company; downloadable PDF file, 129 p, 2009. http://proquest.umi.com/pqdweb?did=1650507521&sid=2&Fmt=2&clientId=8331&RQT=309&VName=PQD.
Salehi Doolabi, Saeed. "Cubic-Panorama Image Dataset Analysis for Storage and Transmission." Thèse, Université d'Ottawa / University of Ottawa, 2013. http://hdl.handle.net/10393/24053.
Fletcher, M. J. "A modular system for video based motion analysis." Thesis, University of Reading, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.293144.
Massaro, James. "A PCA based method for image and video pose sequencing /." Online version of thesis, 2010. http://hdl.handle.net/1850/11991.
Jain, Raja P. "Extraction and interaction analysis of foreground objects in panning video /." Link to online version, 2006. https://ritdml.rit.edu/dspace/handle/1850/1879.
Bothmann, Ludwig [Verfasser]. "Efficient statistical analysis of video and image data / Ludwig Bothmann." München : Verlag Dr. Hut, 2017. http://d-nb.info/1135594317/34.
Howard, Elizabeth Helen Civil & Environmental Engineering Faculty of Engineering UNSW. "A laboratory study of the 'shoreline' detected in video imagery." Publisher: University of New South Wales. Civil & Environmental Engineering, 2008. http://handle.unsw.edu.au/1959.4/41497.
Baradel, Fabien. "Structured deep learning for video analysis." Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEI045.
With the massive increase of video content on the Internet and beyond, the automatic understanding of visual content could impact many different application fields such as robotics, health care, content search or filtering. The goal of this thesis is to provide methodological contributions in Computer Vision and Machine Learning for automatic content understanding from videos. We focus on two problems, namely fine-grained human action recognition and visual reasoning from object-level interactions. In the first part of this manuscript, we tackle the problem of fine-grained human action recognition. We introduce two different attention mechanisms trained on the visual content from the articulated human pose. The first method is able to automatically draw attention to important pre-selected points of the video, conditioned on learned features extracted from the articulated human pose. We show that such a mechanism improves performance on the final task and provides a good way to visualize the most discriminative parts of the visual content. The second method goes beyond pose-based human action recognition. We develop a method able to automatically identify unstructured feature clouds of interest in the video using contextual information. Furthermore, we introduce a learned distributed system for aggregating the features in a recurrent manner and taking decisions in a distributed way. We demonstrate that we can achieve better performance than obtained previously, without using articulated pose information at test time. In the second part of this thesis, we investigate video representations from an object-level perspective. Given a set of detected persons and objects in the scene, we develop a method which learns to infer the important object interactions through space and time using only video-level annotation. This allows us to identify important objects and object interactions for a given action, as well as potential dataset bias.
Finally, in a third part, we go beyond the tasks of classification and supervised learning from visual content by tackling causality in interactions, in particular the problem of counterfactual learning. We introduce a new benchmark, namely CoPhy, where, after watching a video, the task is to predict the outcome after modifying the initial stage of the video. We develop a method based on object-level interactions able to infer object properties without supervision, as well as future object locations after the intervention.
Cutolo, Alfredo. "Image partition and video segmentation using the Mumford-Shah functional." Doctoral thesis, Università degli studi di Salerno, 2012. http://hdl.handle.net/10556/280.
The aim of this thesis is to present an image partition and video segmentation procedure based on the minimization of a modified version of the Mumford-Shah functional. The Mumford-Shah functional used for image partition has then been extended to develop a video segmentation procedure. Unlike image processing, in video analysis, besides the usual spatial connectivity of pixels (or regions) on each single frame, we have a natural notion of "temporal" connectivity between pixels (or regions) on consecutive frames, given by the optical flow. In this case, it makes sense to extend the tree data structure used to model a single image with a graph data structure that allows us to handle a video sequence. The video segmentation procedure is based on the minimization of a modified version of a Mumford-Shah functional. In particular, the functional used for image partition allows merging neighboring regions with similar color without considering their movement. Our idea has been to merge neighboring regions with similar color and similar optical flow vectors. Also in this case, the minimization of the Mumford-Shah functional can be very complex if we consider each possible combination of the graph nodes. This computation becomes easy if we take into account a hierarchy of partitions constructed starting from the nodes of the graph. [edited by author]
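The region-merging idea in this abstract can be illustrated for the piecewise-constant Mumford-Shah energy, where merging two regions raises the data term by the pooled squared color difference and lowers the boundary term by the length of the shared boundary. The greedy merge rule, the toy region-adjacency graph, and the parameter `lam` below are illustrative assumptions, not the thesis's hierarchy-based algorithm.

```python
import numpy as np

def merge_regions(means, sizes, adjacency, boundary_len, lam):
    """Greedy region merging for the piecewise-constant Mumford-Shah energy.
    Merging regions i and j changes the energy by
    |Ri||Rj|/(|Ri|+|Rj|) * ||mu_i - mu_j||^2 - lam * boundary_len(i, j);
    merge the best pair while this change is negative."""
    means = {i: np.asarray(m, float) for i, m in means.items()}
    sizes = dict(sizes)
    adj = {i: set(n) for i, n in adjacency.items()}
    blen = dict(boundary_len)

    def delta(i, j):
        d = means[i] - means[j]
        data = sizes[i] * sizes[j] / (sizes[i] + sizes[j]) * float(d @ d)
        return data - lam * blen[frozenset((i, j))]

    while True:
        pairs = [(delta(i, j), i, j) for i in adj for j in adj[i] if i < j]
        if not pairs:
            break
        best, i, j = min(pairs)
        if best >= 0:
            break
        # Merge j into i: pooled mean, pooled size, union of neighbours.
        means[i] = (sizes[i] * means[i] + sizes[j] * means[j]) / (sizes[i] + sizes[j])
        sizes[i] += sizes[j]
        for k in adj[j] - {i}:
            key = frozenset((i, k))
            blen[key] = blen.get(key, 0) + blen.pop(frozenset((j, k)))
            adj[k].discard(j)
            adj[k].add(i)
            adj[i].add(k)
        adj[i].discard(j)
        blen.pop(frozenset((i, j)))
        del adj[j], sizes[j], means[j]
    return means, sizes

# Toy region-adjacency graph: a chain of four regions, two dark and two bright.
means, sizes = merge_regions(
    means={0: [0.10], 1: [0.12], 2: [0.90], 3: [0.88]},
    sizes={0: 10, 1: 10, 2: 10, 3: 10},
    adjacency={0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}},
    boundary_len={frozenset((0, 1)): 1, frozenset((1, 2)): 1, frozenset((2, 3)): 1},
    lam=0.5)
```

With this setting the two dark regions merge, the two bright ones merge, and the dark/bright boundary survives because the color penalty outweighs the boundary saving. Extending the mean vector with an optical flow vector per region gives the video variant sketched in the abstract.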
Emmot, Sebastian. "Characterizing Video Compression Using Convolutional Neural Networks." Thesis, Luleå tekniska universitet, Datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-79430.
Vercillo, Richard 1953. "Very high resolution video display memory and base image memory for a radiologic image analysis console." Thesis, The University of Arizona, 1988. http://hdl.handle.net/10150/276707.
Gatica, Perez Daniel. "Extensive operators in lattices of partitions for digital video analysis /." Thesis, Connect to this title online; UW restricted, 2001. http://hdl.handle.net/1773/5874.
Wang, Feng. "Video content analysis and its applications for multimedia authoring of presentations /." View abstract or full-text, 2006. http://library.ust.hk/cgi/db/thesis.pl?CSED%202007%20WANG.
Zhao, Bin. "Towards Scalable Analysis of Images and Videos." Research Showcase @ CMU, 2014. http://repository.cmu.edu/dissertations/583.
Xu, K. "An investigation of sewer pipe deformation by image analysis of video surveys." Thesis, Swansea University, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.636703.
Srinivasan, Sabeshan. "Object Tracking in Distributed Video Networks Using Multi-Dimentional Signatures." Fogler Library, University of Maine, 2006. http://www.library.umaine.edu/theses/pdf/SrinivasanSX2006.pdf.
Cheng, Guangchun. "Video Analytics with Spatio-Temporal Characteristics of Activities." Thesis, University of North Texas, 2015. https://digital.library.unt.edu/ark:/67531/metadc799541/.
Maczyta, Léo. "Dynamic visual saliency in image sequences." Thesis, Rennes 1, 2020. http://www.theses.fr/2020REN1S046.
Our thesis research is concerned with the estimation of motion saliency in image sequences. First, we defined an original method to detect frames in which a salient motion is present. For this, we propose a framework relying on a deep neural network and on the compensation of the dominant camera motion. Second, we designed a method for estimating motion saliency maps. This method requires no learning. The motion saliency cue is obtained by an optical flow inpainting step, followed by a comparison with the initial flow. Third, we consider the problem of trajectory saliency estimation, to handle saliency that appears progressively over time. We built a weakly supervised framework based on a recurrent auto-encoder that represents trajectories with latent codes. The performance of the three methods was experimentally assessed on real video datasets.
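The dominant-camera-motion compensation mentioned in this abstract can be illustrated with a simple parametric-motion sketch: fit a global affine model to the dense flow field and treat the residual magnitude as a motion saliency cue. This is only the underlying classical idea on synthetic flow; the thesis itself combines it with a deep network and optical flow inpainting.

```python
import numpy as np

def dominant_motion_residual(flow):
    """Fit a global affine (parametric) motion model to a dense flow field
    and return the magnitude of the residual flow. Large residuals indicate
    motion that deviates from the dominant (camera) motion."""
    h, w, _ = flow.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Affine model per component: u = a0 + a1*x + a2*y (and likewise for v).
    A = np.stack([np.ones(h * w), xs.ravel(), ys.ravel()], axis=1)
    params = np.linalg.lstsq(A, flow.reshape(-1, 2), rcond=None)[0]
    residual = flow.reshape(-1, 2) - A @ params
    return np.linalg.norm(residual, axis=1).reshape(h, w)

# Synthetic flow: global translation plus one independently moving block.
flow = np.tile(np.array([1.0, 0.5]), (64, 64, 1))
flow[20:30, 20:30] = [4.0, -2.0]   # hypothetical moving object
saliency = dominant_motion_residual(flow)
```

The residual map is high inside the independently moving block and near zero elsewhere; a robust (e.g. iteratively reweighted) fit would be preferable when moving objects cover a large image fraction.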
Luo, Ying. "Statistical semantic analysis of spatio-temporal image sequences /." Thesis, Connect to this title online; UW restricted, 2004. http://hdl.handle.net/1773/5884.
Kong, Lingchao. "Modeling of Video Quality for Automatic Video Analysis and Its Applications in Wireless Camera Networks." University of Cincinnati / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1563295836742645.
Schneider, Bradley A. "Gait Analysis from Wearable Devices using Image and Signal Processing." Wright State University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=wright1514820042511803.
Kumara, Muthukudage Jayantha. "Automated Real-time Objects Detection in Colonoscopy Videos for Quality Measurements." Thesis, University of North Texas, 2013. https://digital.library.unt.edu/ark:/67531/metadc283843/.
Robinault, Lionel. "Mosaïque d’images multi résolution et applications." Thesis, Lyon 2, 2009. http://www.theses.fr/2009LYO20039.
The thesis considers the use of motorized cameras with 3 degrees of freedom, commonly called PTZ cameras. The orientation of such cameras is controlled according to two angles: the panorama angle (θ) describes the rotation around a vertical axis, and the tilt angle (ϕ) refers to rotation along a meridian line. Theoretically, these cameras can cover an omnidirectional field of vision of 4π steradians. Generally, the panorama angle and especially the tilt angle are limited for such cameras. In addition to controlling the orientation of the camera, it is also possible to control the focal distance, allowing an additional degree of freedom. Compared to other hardware, PTZ cameras thus allow one to build a panorama of very high resolution. A panorama is a wide representation of a scene built from a collection of images. The first stage in the construction of a panorama is the acquisition of the various images. To this end, we made a theoretical study to determine the optimal paving of the sphere with rectangular surfaces that minimizes the number of overlap zones. This study enables us to calculate an optimal trajectory of the camera and to limit the number of images necessary for the representation of the scene. We also propose various processing techniques which appreciably improve the rendering of the mosaic image and correct the majority of the defects related to the assembly of a collection of images acquired with differing capture parameters. A significant part of our work was devoted to automatic image registration in real time, i.e. in under 40 ms. The technology that we developed makes it possible to obtain particularly precise image registration with a computation time of about 4 ms (AMD 1.8 GHz). Our research leads directly to two proposed applications for the tracking of moving objects. The first involves the use of a PTZ camera and a spherical mirror.
The combination of these two elements makes it possible to detect any moving object in the scene and then to focus on one of them. Within the framework of this application, we propose an automatic calibration algorithm for the system. The second application uses only the PTZ camera and allows the segmentation and tracking of objects in the scene during the movement of the camera. Compared to traditional applications of motion detection with a PTZ camera, our approach differs in that it computes a precise segmentation of the objects, allowing their classification.
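As a rough illustration of why the acquisition trajectory needs planning, the following naive grid computation tiles a pan/tilt range with a fixed overlap ratio between neighbouring shots. It is a planar approximation, not the optimal spherical paving studied in the thesis, and the camera parameters are hypothetical.

```python
import math

def ptz_grid(pan_range_deg, tilt_range_deg, hfov_deg, vfov_deg, overlap=0.2):
    """Number of shots needed to tile a pan/tilt range with a camera of the
    given field of view, keeping a fixed overlap ratio between neighbours."""
    step_pan = hfov_deg * (1.0 - overlap)    # angular step between columns
    step_tilt = vfov_deg * (1.0 - overlap)   # angular step between rows
    n_pan = math.ceil(pan_range_deg / step_pan)
    n_tilt = math.ceil(tilt_range_deg / step_tilt)
    return n_pan, n_tilt, n_pan * n_tilt

# Hypothetical camera: 360° pan, 90° tilt, 48°x36° field of view, 20% overlap.
shots = ptz_grid(360, 90, 48, 36, overlap=0.2)
```

This uniform grid overestimates the shot count near the poles, where fields of view overlap heavily; that waste is precisely what an optimal paving of the sphere reduces.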
Bothmann, Ludwig [Verfasser], and Göran [Akademischer Betreuer] Kauermann. "Efficient statistical analysis of video and image data / Ludwig Bothmann ; Betreuer: Göran Kauermann." München : Universitätsbibliothek der Ludwig-Maximilians-Universität, 2016. http://d-nb.info/1115654764/34.
Al-Jawad, Naseer. "Exploiting statistical properties of wavelet coefficients for image/video processing and analysis tasks." Thesis, University of Buckingham, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.601354.
Al-Jawad, Neseer. "Exploiting Statical Properties of Wavelet Coefficients for image/Video Processing and Analysis Tasks." Thesis, University of Exeter, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.515492.
Queiroz, Isabela Nascimento Fernandes De. "Study on methodology for analysis of traffic flow based on video image data." 京都大学 (Kyoto University), 2005. http://hdl.handle.net/2433/145385.
Tun, Min Han. "Virtual image sensors to track human activity in a smart house." Thesis, Curtin University, 2007. http://hdl.handle.net/20.500.11937/904.
Qadir, Ghulam. "Digital forensic analysis for compressed images and videos." Thesis, University of Surrey, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.604341.
Olgemar, Markus. "Camera Based Navigation : Matching between Sensor reference and Video image." Thesis, Linköping University, Department of Electrical Engineering, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-15952.
an Internal Navigational System and a Global Navigational Satellite System (GNSS). In navigational warfare the GNSS can be jammed; therefore a third navigational system is needed. The system that has been tried in this thesis is camera-based navigation. Through a video camera and a sensor reference, the position is determined. This thesis addresses the matching between the sensor reference and the video image.
Two methods have been implemented: normalized cross correlation and position determination through a homography. Normalized cross correlation creates a correlation matrix. The other method uses point correspondences between the images to determine a homography between them, and a position is then obtained through the homography. The more point correspondences there are, the better the position determination will be.
The results have been quite good. The methods obtained the correct position when the Euler angles of the UAV were known. Normalized cross correlation was the best of the tested methods.
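The normalized cross correlation matching described in this abstract can be sketched directly: slide a template over the image, score every offset, and take the peak as the match. The brute-force search and the synthetic frame below are illustrative assumptions; a real system would search only around a position predicted from the navigation state.

```python
import numpy as np

def normalized_cross_correlation(image, template):
    """Slide `template` over `image` and return the NCC score at every
    valid offset; the peak of this correlation matrix gives the most
    likely match position."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    scores = np.zeros((ih - th + 1, iw - tw + 1))
    for y in range(scores.shape[0]):
        for x in range(scores.shape[1]):
            win = image[y:y + th, x:x + tw]
            w = win - win.mean()
            denom = t_norm * np.sqrt((w ** 2).sum())
            scores[y, x] = (w * t).sum() / denom if denom > 0 else 0.0
    return scores

# Synthetic example: locate a known reference patch inside a larger frame.
rng = np.random.default_rng(1)
frame = rng.uniform(0, 1, (60, 80))
template = frame[22:32, 45:55].copy()
scores = normalized_cross_correlation(frame, template)
peak = np.unravel_index(np.argmax(scores), scores.shape)
```

Because the score is normalized by the local window energy, the match is invariant to affine brightness changes between the sensor reference and the video image, which is the main reason NCC is attractive for this task.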