Dissertations on the topic "Computer vision-based framework"
Format your source in APA, MLA, Chicago, Harvard, and other styles
Consult the top 21 dissertations for your research on the topic "Computer vision-based framework".
Next to every source in the reference list there is an "Add to bibliography" button. Press it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the publication as a .pdf file and read its abstract online, if these are available in the metadata.
Browse dissertations from a wide variety of disciplines and compile your bibliography correctly.
Çelik, Turgay. "A multiresolution framework for computer vision-based autonomous navigation." Thesis, University of Warwick, 2011. http://wrap.warwick.ac.uk/36782/.
Berry, David T. "A knowledge-based framework for machine vision." Thesis, Heriot-Watt University, 1987. http://hdl.handle.net/10399/1022.
Caudle, Eric Weaver. "An evaluation framework for designing a night vision, computer-based trainer." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1993. http://handle.dtic.mil/100.2/ADA278005.
Thesis advisor(s): Kishore Sengupta; Carl R. Jones. "December 1993." Includes bibliographical references. Also available online.
Abusaleh, Sumaya. "A Novel Computer Vision-Based Framework for Supervised Classification of Energy Outbreak Phenomena." Thesis, University of Bridgeport, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10746723.
Today, there is a need for a well-designed surveillance system that detects and categorizes explosion phenomena in order to identify the explosion risk and reduce its impact through mitigation and preparedness. This dissertation introduces state-of-the-art classification of explosion phenomena through pattern recognition techniques on color images. Consequently, we present a novel taxonomy for explosion phenomena. In particular, we demonstrate different aspects of volcanic eruptions and nuclear explosions in the proposed taxonomy, including scientific formation, real examples, existing monitoring methodologies, and their limitations. In addition, we propose a novel framework designed to categorize explosion phenomena against non-explosion phenomena. Moreover, a new dataset, Volcanic and Nuclear Explosions (VNEX), was collected. VNEX comprises 10,654 samples and includes the following patterns: pyroclastic density currents, lava fountains, lava and tephra fallout, nuclear explosions, wildfires, fireworks, and sky clouds.
In order to achieve high reliability in the proposed explosion classification framework, we employ various feature extraction approaches. We calculate intensity levels to extract texture features, use the YCbCr color model to calculate amplitude features, employ the Radix-2 Fast Fourier Transform to compute frequency features, and apply the uniform local binary patterns technique to compute histogram features. These discriminative features are combined into a single input vector that provides valuable insight into the images, and then fed into the following classification techniques: Euclidean distance, correlation, k-nearest neighbors, one-against-one multiclass support vector machines with different kernels, and the multilayer perceptron model. Evaluation results show the design of the proposed framework is effective and robust, and a trade-off between computation time and classification rate was achieved.
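The pipeline this abstract describes can be illustrated with a minimal sketch: per-channel YCbCr amplitude features and FFT-based frequency features are concatenated into one vector and classified by nearest neighbour. This is not the dissertation's implementation; the band count, feature choices, and image sizes here are illustrative assumptions.

```python
import numpy as np

def ycbcr_features(rgb):
    """Mean amplitude per YCbCr channel (ITU-R BT.601 conversion)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.array([y.mean(), cb.mean(), cr.mean()])

def frequency_features(gray, k=4):
    """Average FFT magnitude in k concentric frequency bands."""
    mag = np.abs(np.fft.fftshift(np.fft.fft2(gray)))
    h, w = mag.shape
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot(yy - h / 2, xx - w / 2)
    edges = np.linspace(0, radius.max() + 1e-9, k + 1)
    return np.array([mag[(radius >= edges[i]) & (radius < edges[i + 1])].mean()
                     for i in range(k)])

def combined_vector(rgb):
    """Concatenate colour and frequency features into one input vector."""
    gray = rgb.mean(axis=-1)
    return np.concatenate([ycbcr_features(rgb), frequency_features(gray)])

def nn_classify(train_X, train_y, x):
    """Nearest-neighbour decision by Euclidean distance."""
    d = np.linalg.norm(train_X - x, axis=1)
    return train_y[int(np.argmin(d))]
```

In the same spirit, the combined vector could instead be fed to an SVM or a multilayer perceptron, as the abstract lists.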
Fang, Bing. "A Framework for Human Body Tracking Using an Agent-based Architecture." Diss., Virginia Tech, 2011. http://hdl.handle.net/10919/77135.
Ph. D.
Basso, Maik. "A framework for autonomous mission and guidance control of unmanned aerial vehicles based on computer vision techniques." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2018. http://hdl.handle.net/10183/179536.
Computer vision is an area of knowledge that studies the development of artificial systems capable of detecting and developing the perception of the environment through image information or multidimensional data. Nowadays, vision systems are widely integrated into robotic systems. Visual perception and manipulation are combined in two steps, "look" and then "move", generating a visual feedback control loop. In this context, there is growing interest in using computer vision techniques in unmanned aerial vehicles (UAVs), also known as drones. These techniques are applied to position the drone in autonomous flight mode, or to detect regions for aerial surveillance or points of interest. Computer vision systems generally operate in three steps: data acquisition in numerical form, data processing, and data analysis. The data acquisition step is usually performed by cameras or proximity sensors. After acquisition, the embedded computer processes the data, executing algorithms for measurement (variables, indices and coefficients), detection (patterns, objects or areas) or monitoring (people, vehicles or animals). The resulting processed data is analyzed and then converted into decision commands that serve as control inputs for the autonomous robotic system. In order to integrate visual computing systems with different UAV platforms, this work proposes the development of a framework for mission control and guidance of UAVs based on computer vision. The framework is responsible for managing, encoding, decoding, and interpreting commands exchanged between flight controllers and visual computing algorithms. As a case study, two algorithms were developed to provide autonomy to UAVs intended for application in precision agriculture.
The first algorithm calculates a reflectance coefficient used to perform the punctual, self-regulated and efficient application of agrochemicals. The second algorithm identifies crop lines to guide the UAVs over the plantation. The performance of the proposed framework and algorithms was evaluated and compared with the state of the art, obtaining satisfactory results when implemented on embedded hardware.
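A common way to realize the crop-line guidance idea is to segment vegetation and fit a line through it; the sketch below is an illustration, not the thesis's algorithm. The excess-green index, its threshold, and the row-centroid line fit are illustrative assumptions.

```python
import numpy as np

def excess_green_mask(rgb, thresh=20):
    """Segment vegetation with the excess-green index ExG = 2G - R - B."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    return (2 * g - r - b) > thresh

def crop_row_heading(mask):
    """Fit a least-squares line through the vegetation centroid of each
    image row.  Returns (slope, offset) of col ~= slope * row + offset,
    which a guidance loop could use to steer the UAV along the crop row."""
    rows, cols = [], []
    for y in range(mask.shape[0]):
        xs = np.flatnonzero(mask[y])
        if xs.size:
            rows.append(y)
            cols.append(xs.mean())
    slope, offset = np.polyfit(rows, cols, 1)
    return slope, offset
```

A slope near zero means the row runs straight ahead in image coordinates; the offset gives the lateral position the controller should track.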
Sanders, Nathaniel. "A CAMERA-BASED ENERGY RELAXATION FRAMEWORK TO MINIMIZE COLOR ARTIFACTS IN A PROJECTED DISPLAY." UKnowledge, 2007. http://uknowledge.uky.edu/gradschool_theses/431.
Gongbo, Liang. "Pedestrian Detection Using Basic Polyline: A Geometric Framework for Pedestrian Detection." TopSCHOLAR®, 2016. http://digitalcommons.wku.edu/theses/1582.
Hoke, Jaclyn Ann. "A wavelet-based framework for efficient processing of digital imagery with an application to helmet-mounted vision systems." Diss., University of Iowa, 2017. https://ir.uiowa.edu/etd/6435.
Strand, Mattias. "A Software Framework for Facial Modelling and Tracking." Thesis, Linköping University, Department of Electrical Engineering, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-54563.
The WinCandide application, a platform for face tracking and model-based coding, had become out of date and needed to be upgraded. This report describes an investigation of open-source GUIs and computer vision toolkits that could replace the old, unsupported ones. Multi-platform GUIs are of special interest.
Hofmann, Jaco [Verfasser], Andreas [Akademischer Betreuer] Koch, and Mladen [Akademischer Betreuer] Berekovic. "An Improved Framework for and Case Studies in FPGA-Based Application Acceleration - Computer Vision, In-Network Processing and Spiking Neural Networks / Jaco Hofmann ; Andreas Koch, Mladen Berekovic." Darmstadt : Universitäts- und Landesbibliothek Darmstadt, 2020. http://d-nb.info/1202923097/34.
Petit, Damien. "Analysis of sensory requirement and control framework for whole body embodiment of a humanoid robot for interaction with the environment and self." Thesis, Montpellier, 2015. http://www.theses.fr/2015MONTS285.
Humanoid robot surrogates promise new applications in the field of human-robot interaction and assistive robotics. However, whole-body embodiment for teleoperation or telepresence with mobile robot avatars is yet to be fully explored and understood. In this thesis, we focus on exploring the feeling of embodiment when one navigates and interacts with the environment, or with one's self, through a humanoid robot. First, we present a framework devised to realize scenarios of navigation and self-interaction. The framework uses a brain-computer interface to control a humanoid robot and relies on several computer vision components to help the user navigate and interact with the environment and one's self. Two scenarios are then realized with this framework in which users control a humanoid robot to perform self-interaction tasks. We then explore in detail key issues encountered during those scenarios. First, we investigate how the users' reduced controllability and feedback affect their feeling of embodiment towards the walking surrogate. We then present the results of a study focused on the feeling experienced by the user when controlling the humanoid arm to "touch" the environment and then one's self. The results show that despite the lack of feedback in the control, and despite recognizing themselves, users remain embodied in the surrogate and experience the touch in their own hands through it.
Kim, Changick. "A framework for object-based video analysis /." Thesis, Connect to this title online; UW restricted, 2000. http://hdl.handle.net/1773/5823.
Tous, Terrades Francesc. "Computational framework for the white point interpretation based on nameability." Doctoral thesis, Universitat Autònoma de Barcelona, 2006. http://hdl.handle.net/10803/5765.
In this work we present a framework for white point estimation of images under uncalibrated conditions, where multiple interpretable solutions can be considered. We propose to use the colour matching visual cue, which has been shown to be related to colour constancy. The colour matching process is guided by the introduction of semantic information regarding the image content; thus, we introduce high-level information about the colours we expect to find in the images. Combining these two ideas, colour matching and semantic information, with existing computational colour constancy approaches, we propose a white point estimation method for uncalibrated conditions which delivers multiple solutions according to different interpretations of the colours in a scene. The selection of multiple solutions makes it possible to obtain more information about the scene than existing colour constancy methods, which normally select a unique solution. The multiple solutions are weighted by the degree of colour matching between the colours in the image and the semantic information introduced. Finally, we show that the feasible set of solutions can be reduced to a smaller and more significant set with a semantic interpretation.
Our study is framed in a global image annotation project which aims to obtain descriptors that depict the image; in this work we focus on illuminant descriptors. We define two different sets of conditions for this project: (a) calibrated conditions, when we have some information about the acquisition process, and (b) uncalibrated conditions, when we do not know the acquisition process. Although we have focused on the uncalibrated case, for calibrated conditions we also propose a colour constancy method which introduces the relaxed grey-world assumption to produce a reduced feasible set of solutions. This method delivers good performance, similar to existing methods, and reduces the size of the feasible set obtained.
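The grey-world assumption this abstract builds on can be sketched in its textbook form: the average scene colour is taken as achromatic, so the per-channel mean estimates the illuminant, which a diagonal (von Kries) transform then neutralises. This is the classic version, not the thesis's relaxed variant with multiple weighted solutions.

```python
import numpy as np

def grey_world_illuminant(rgb):
    """Grey-world estimate: the per-channel mean of the image is taken
    as the illuminant colour, returned as normalised chromaticity."""
    est = rgb.reshape(-1, 3).mean(axis=0)
    return est / est.sum()

def von_kries_correct(rgb, illuminant):
    """Diagonal (von Kries) correction: scale each channel so the
    estimated illuminant maps to neutral grey."""
    gains = illuminant.mean() / illuminant
    return rgb * gains
```

A relaxed variant would keep a set of candidate illuminants around this estimate rather than committing to a single white point.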
Li, Qi. "An integration framework of feature selection and extraction for appearance-based recognition." Access to citation, abstract and download form provided by ProQuest Information and Learning Company; downloadable PDF file 8.38 Mb., 141 p, 2006. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&res_dat=xri:pqdiss&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&rft_dat=xri:pqdiss:3220745.
Fortun, Denis. "Aggregation framework and patch-based representation for optical flow." Thesis, Rennes 1, 2014. http://www.theses.fr/2014REN1S093/document.
This thesis is concerned with dense motion estimation in image sequences, also known as optical flow. Usual approaches exploit either local parametrization or global regularization of the motion field. We explore several ways to combine these two strategies to overcome their respective limitations. We first address the problem in a global variational framework and consider local filtering of the data term. We design a spatially adaptive filtering, optimized jointly with motion, to prevent the over-smoothing induced by the spatially constant approach. In a second part, we propose a generic two-step aggregation framework for optical flow estimation. In its most general form, motion candidates are computed locally and then combined in the aggregation step through a global model. Large displacements and motion discontinuities are efficiently recovered with this scheme. We also develop a generic exemplar-based occlusion handling to deal with large displacements. Our method is validated with extensive experiments on computer vision benchmarks, and we demonstrate its superiority over the state of the art on sequences with large displacements. Finally, we adapt the previous methods to biological imaging issues. Large local intensity changes, which frequently occur in fluorescence imaging, are efficiently estimated and compensated with an adaptation of our aggregation framework. We also propose a variational method with local filtering dedicated to the case of diffusive motion of particles.
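The "local candidates, global aggregation" idea can be illustrated by its simplest possible local step: integer block matching by sum-of-squared-differences, which produces one motion candidate per pixel. This is an illustrative sketch only; the thesis's candidate estimators and its global aggregation model are far more elaborate.

```python
import numpy as np

def block_match(prev, curr, y, x, patch=3, radius=2):
    """Local step of candidate generation: find the integer displacement
    (dy, dx) whose patch in `curr` best matches the patch of `prev`
    anchored at (y, x), by exhaustive SSD search within `radius`."""
    p = prev[y:y + patch, x:x + patch]
    best_ssd, best_d = np.inf, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            yy, xx = y + dy, x + dx
            if 0 <= yy <= curr.shape[0] - patch and 0 <= xx <= curr.shape[1] - patch:
                ssd = float(((curr[yy:yy + patch, xx:xx + patch] - p) ** 2).sum())
                if ssd < best_ssd:
                    best_ssd, best_d = ssd, (dy, dx)
    return best_d
```

In an aggregation scheme, candidates like these from overlapping patches would then be reconciled by a global model that enforces spatial coherence and handles occlusions.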
Sethi, Ricky Jaineet. "A physics-based, neurobiologically-inspired stochastic framework for activity recognition." Diss., [Riverside, Calif.] : University of California, Riverside, 2009. http://proquest.umi.com/pqdweb?index=0&did=1957340981&SrchMode=2&sid=1&Fmt=2&VInst=PROD&VType=PQD&RQT=309&VName=PQD&TS=1268416562&clientId=48051.
Includes abstract. Title from first page of PDF file (viewed March 11, 2010). Available via ProQuest Digital Dissertations. Includes bibliographical references (p. ). Also issued in print.
Braeger, Steven W. "A framework for blind signal correction using optimized polyspectra-based cost functions." Honors in the Major Thesis, University of Central Florida, 2009. http://digital.library.ucf.edu/cdm/ref/collection/ETH/id/1244.
Bachelors
Engineering and Computer Science
Computer Science
Wei, Lijun. "Multi-sources fusion based vehicle localization in urban environments under a loosely coupled probabilistic framework." Phd thesis, Université de Technologie de Belfort-Montbeliard, 2013. http://tel.archives-ouvertes.fr/tel-01004660.
Hofmann, Jaco. "An Improved Framework for and Case Studies in FPGA-Based Application Acceleration - Computer Vision, In-Network Processing and Spiking Neural Networks." Phd thesis, 2020. https://tuprints.ulb.tu-darmstadt.de/10355/1/Thesis_JAH_2019.pdf.
Zhao, Zhipeng. "Towards a local-global visual feature-based framework for recognition." 2009. http://hdl.rutgers.edu/1782.2/rucore10001600001.ETD.000051935.