Dissertations / Theses on the topic 'Computer vision-based framework'

Consult the top 21 dissertations / theses for your research on the topic 'Computer vision-based framework.'


1

Çelik, Turgay. "A multiresolution framework for computer vision-based autonomous navigation." Thesis, University of Warwick, 2011. http://wrap.warwick.ac.uk/36782/.

Full text
Abstract:
Autonomous navigation, e.g. for mobile robots and vehicle driver assistance, relies on intelligent processing of data acquired from resources such as sensor networks, laser scanners, and video cameras. Owing to their low cost and easy installation, video cameras are the most feasible option; there is therefore a need for robust computer vision algorithms for autonomous navigation. This dissertation investigates the use of multiresolution image analysis and proposes a framework for autonomous navigation. Multiresolution image representation is achieved via the complex wavelet transform, to benefit from its limited data redundancy, approximate shift invariance, and improved directionality. An image enhancement method is developed to enhance image features for navigation and other applications. A colour constancy method is developed to correct colour aberrations so that colour can be used as a robust feature for identifying drivable regions. A novel algorithm combining multiscale edge information with contextual information through colour similarity is developed for unsupervised image segmentation. Texture analysis is accomplished with a novel multiresolution texture classifier. Each component of the framework is first evaluated independently of the others, on various, more general applications. The framework as a whole is then applied to drivable-region identification and obstacle detection. Drivable regions are identified using colour information. Obstacles are defined as vehicles on the road and other objects that cannot be part of a road. The multiresolution texture classifier and machine learning algorithms are applied to learn the appearance of vehicles for vehicle detection.
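The colour-based identification of drivable regions described above can be illustrated with a minimal sketch (a generic illustration, not the dissertation's algorithm; the reference road colour and distance threshold below are invented for the example):

```python
import math

def drivable_mask(pixels, road_rgb, max_dist=60.0):
    """Label an RGB pixel as drivable when its Euclidean distance in
    colour space to a reference road colour is below a threshold."""
    return [math.dist(p, road_rgb) <= max_dist for p in pixels]

pixels = [(100, 100, 105), (98, 102, 100), (20, 180, 30), (250, 10, 10)]
print(drivable_mask(pixels, road_rgb=(100, 100, 100)))
# → [True, True, False, False]
```

A real system would estimate the road colour adaptively and combine this cue with the texture classifier and segmentation described in the abstract.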
APA, Harvard, Vancouver, ISO, and other styles
2

Berry, David T. "A knowledge-based framework for machine vision." Thesis, Heriot-Watt University, 1987. http://hdl.handle.net/10399/1022.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Caudle, Eric Weaver. "An evaluation framework for designing a night vision, computer-based trainer." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1993. http://handle.dtic.mil/100.2/ADA278005.

Full text
Abstract:
Thesis (M.S. in Information Technology Management)--Naval Postgraduate School, December 1993.
Thesis advisor(s): Kishore Sengupta; Carl R. Jones. "December 1993." Includes bibliographical references. Also available online.
APA, Harvard, Vancouver, ISO, and other styles
4

Abusaleh, Sumaya. "A Novel Computer Vision-Based Framework for Supervised Classification of Energy Outbreak Phenomena." Thesis, University of Bridgeport, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10746723.

Full text
Abstract:

Today, there is a need for a well-designed surveillance system that detects and categorizes explosion phenomena in order to identify explosion risk and reduce its impact through mitigation and preparedness. This dissertation introduces state-of-the-art classification of explosion phenomena through pattern recognition techniques on color images. Consequently, we present a novel taxonomy for explosion phenomena. In particular, we demonstrate different aspects of the volcanic eruptions and nuclear explosions in the proposed taxonomy, including scientific formation, real examples, existing monitoring methodologies, and their limitations. In addition, we propose a novel framework designed to categorize explosion phenomena against non-explosion phenomena. Moreover, a new dataset, Volcanic and Nuclear Explosions (VNEX), was collected. VNEX comprises 10,654 samples in total and includes the following patterns: pyroclastic density currents, lava fountains, lava and tephra fallout, nuclear explosions, wildfires, fireworks, and sky clouds.

In order to achieve high reliability in the proposed explosion classification framework, we employ various feature extraction approaches. We calculate intensity levels to extract texture features, utilize the YCbCr color model to compute amplitude features, employ the Radix-2 Fast Fourier Transform to compute frequency features, and use the uniform local binary patterns technique to compute histogram features. These discriminative features are combined into a single input vector that provides valuable insight into the images, and are then fed into the following classification techniques: Euclidean distance, correlation, k-nearest neighbors, one-against-one multiclass support vector machines with different kernels, and the multilayer perceptron model. Evaluation results show that the design of the proposed framework is effective and robust. Furthermore, a trade-off between computation time and classification rate was achieved.
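Of the feature extractors listed above, the uniform local binary patterns step is easy to sketch. The following is a generic textbook version for a single 3x3 patch, not the dissertation's implementation:

```python
def lbp_code(patch):
    """8-bit local binary pattern of a 3x3 patch: each neighbour
    (clockwise from top-left) contributes a 1 bit if it is >= centre."""
    c = patch[1][1]
    neigh = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
             patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    return sum((1 << i) for i, v in enumerate(neigh) if v >= c)

def is_uniform(code):
    """A pattern is 'uniform' when its circular bit string has at most
    two 0/1 transitions; histogramming keeps one bin per uniform code."""
    bits = [(code >> i) & 1 for i in range(8)]
    return sum(bits[i] != bits[(i + 1) % 8] for i in range(8)) <= 2

patch = [[9, 9, 1],
         [9, 5, 1],
         [9, 9, 1]]
code = lbp_code(patch)
print(code, is_uniform(code))   # → 227 True
```

In the framework described above, the histogram of such codes over an image would form one part of the combined feature vector.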

APA, Harvard, Vancouver, ISO, and other styles
5

Fang, Bing. "A Framework for Human Body Tracking Using an Agent-based Architecture." Diss., Virginia Tech, 2011. http://hdl.handle.net/10919/77135.

Full text
Abstract:
The purpose of this dissertation is to present our agent-based human tracking framework and to evaluate the results of our work in light of previous research in the same field. Our agent-based approach departs from a process-centric model, in which agents are bound to specific processes, and introduces a novel model in which agents are bound to the objects or sub-objects being recognized or tracked. The hierarchical agent-based model allows the system to handle a variety of cases, such as single people or multiple people in front of single or stereo cameras. We employ the job-market model for agents' communication. In this dissertation, we present several experiments in detail that demonstrate the effectiveness of the agent-based tracking system. In our design, the agents are autonomous, self-aware entities capable of communicating with other agents to perform tracking within agent coalitions. Each agent, carrying high-level abstracted knowledge, seeks evidence for its existence from low-level features (e.g. motion vector fields, color blobs) and from its peers (other agents representing body parts with which it is compatible). The power of the agent-based approach lies in the flexibility with which domain information may be encoded within each agent to produce an overall tracking solution.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
6

Basso, Maik. "A framework for autonomous mission and guidance control of unmanned aerial vehicles based on computer vision techniques." Biblioteca Digital de Teses e Dissertações da UFRGS, 2018. http://hdl.handle.net/10183/179536.

Full text
Abstract:
Computer vision is an area of knowledge that studies the development of artificial systems capable of detecting and developing the perception of the environment through image information or multidimensional data. Nowadays, vision systems are widely integrated into robotic systems. Visual perception and manipulation are combined in two steps, "look" and then "move", generating a visual feedback control loop. In this context, there is growing interest in using computer vision techniques in unmanned aerial vehicles (UAVs), also known as drones. These techniques are applied to position the drone in autonomous flight mode, or to detect regions for aerial surveillance or points of interest. Computer vision systems generally take three steps in their operation: data acquisition in numerical form, data processing, and data analysis. The data acquisition step is usually performed by cameras or proximity sensors. After data acquisition, the embedded computer processes the data by executing algorithms for measurement (variables, indices and coefficients), detection (patterns, objects or areas) or monitoring (people, vehicles or animals). The resulting processed data is analyzed and then converted into decision commands that serve as control inputs for the autonomous robotic system. In order to integrate visual computing systems with different UAV platforms, this work proposes a framework for mission control and guidance of UAVs based on computer vision. The framework is responsible for managing, encoding, decoding, and interpreting commands exchanged between flight controllers and computer vision algorithms. As a case study, two algorithms were developed to provide autonomy to UAVs intended for precision agriculture. The first algorithm computes a reflectance coefficient used for targeted, self-regulated and efficient application of agrochemicals.
The second algorithm identifies crop lines in order to guide the UAVs over the plantation. The performance of the proposed framework and algorithms was evaluated and compared with the state of the art, obtaining satisfactory results in the implementation on embedded hardware.
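The crop-line guidance step lends itself to a small illustration. The sketch below is an assumption-laden toy, not the published algorithm: it fits detected plant pixels with a line x = a*y + b in image coordinates and derives a heading angle and a lateral offset from the image centre, the two quantities a guidance controller would consume. The image width and sample points are invented.

```python
import math

def crop_row_guidance(points, img_width=640):
    """Least-squares fit of a crop row as x = a*y + b over detected
    plant pixels (x, y); returns (row angle in radians, lateral offset
    of the row from the image centre line at y = 0)."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    a = (sum((y - my) * (x - mx) for x, y in points)
         / sum((y - my) ** 2 for _, y in points))
    b = mx - a * my
    return math.atan(a), b - img_width / 2

angle, offset = crop_row_guidance([(310, 0), (320, 100), (330, 200)])
print(round(angle, 4), round(offset, 2))   # → 0.0997 -10.0
```

A controller would steer to drive both the angle and the offset toward zero, keeping the UAV aligned with the row.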
APA, Harvard, Vancouver, ISO, and other styles
7

Sanders, Nathaniel. "A CAMERA-BASED ENERGY RELAXATION FRAMEWORK TO MINIMIZE COLOR ARTIFACTS IN A PROJECTED DISPLAY." UKnowledge, 2007. http://uknowledge.uky.edu/gradschool_theses/431.

Full text
Abstract:
We introduce a technique to automatically correct color inconsistencies in a display composed of one or more digital light projectors (DLPs). The method is agnostic to the source of error and can detect and address color problems from a number of sources. Examples include inter- and intra-projector color differences, display surface markings, and environmental lighting differences on the display. In contrast to methods that discover and map all colors into the greatest common color space, we minimize local color discontinuities to create color seamlessness while remaining tolerant to significant color error. The technique makes use of a commodity camera and high-dynamic-range sensing to measure color gamuts at many different spatial locations. A differentiable energy function is defined that combines both a smoothness and a data term. This energy function is globally minimized through the successive application of projective warps defined using gradient descent. After convergence, the warps can be applied at runtime to minimize color defects in the display. The framework is demonstrated on displays that suffer from several sources of color error.
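The energy relaxation at the heart of the method, a data term plus a smoothness term minimized by gradient descent, can be imitated on a toy 1-D signal (a generic sketch with invented numbers; the actual energy operates on projective warps of measured gamuts):

```python
def relax(obs, lam=1.0, step=0.1, iters=500):
    """Gradient descent on E(x) = sum_i (x_i - obs_i)^2
    + lam * sum_i (x_{i+1} - x_i)^2, i.e. a data term keeping the
    solution near the observations plus a smoothness term penalizing
    local discontinuities."""
    x = list(obs)
    n = len(x)
    for _ in range(iters):
        g = [2.0 * (x[i] - obs[i]) for i in range(n)]   # data-term gradient
        for i in range(n - 1):                           # smoothness gradient
            d = 2.0 * lam * (x[i + 1] - x[i])
            g[i] -= d
            g[i + 1] += d
        x = [x[i] - step * g[i] for i in range(n)]
    return x

print([round(v, 2) for v in relax([0.0, 0.0, 10.0, 0.0, 0.0])])
# → [0.91, 1.82, 4.55, 1.82, 0.91]
```

The outlier sample is pulled toward its neighbours, a 1-D analogue of smoothing a local colour discontinuity while staying close to the measured data.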
APA, Harvard, Vancouver, ISO, and other styles
8

Gongbo, Liang. "Pedestrian Detection Using Basic Polyline: A Geometric Framework for Pedestrian Detection." TopSCHOLAR®, 2016. http://digitalcommons.wku.edu/theses/1582.

Full text
Abstract:
Pedestrian detection has been an active research area in computer vision in recent years. It has many applications that could improve our lives, such as video surveillance security and auto-driving assistance systems. Approaches to pedestrian detection can be roughly divided into two categories: shape-based and appearance-based. In the literature, most approaches are appearance-based; shape-based approaches are usually integrated with an appearance-based approach to speed up the detection process. In this thesis, I propose a shape-based pedestrian detection framework that uses the geometric features of the human body to detect pedestrians. The framework has three main steps. Given a static image: i) generate the edge image of the given image; ii) from the edge image, extract the basic polylines; and iii) use the geometric relationships among the polylines to detect pedestrians. The detection results obtained by the proposed framework are promising. Compared with the algorithm introduced by Dalal and Triggs [7], the proposed algorithm increased true-positive detections by 47.67% and reduced false-positive detections by 41.42%.
APA, Harvard, Vancouver, ISO, and other styles
9

Hoke, Jaclyn Ann. "A wavelet-based framework for efficient processing of digital imagery with an application to helmet-mounted vision systems." Diss., University of Iowa, 2017. https://ir.uiowa.edu/etd/6435.

Full text
Abstract:
Image acquisition devices, as well as image processing theory, algorithms, and hardware have advanced to the point that low Size-Weight-and-Power, real-time embedded imaging systems have become a reality. To be practical in a fielded application, an image processing sub-system must be able to conduct multiple, often highly complex tasks, in real-time. The design and construction of such systems have to address technical challenges, including real-time, low-latency processing and fixed-point algorithms in order to leverage lowest-power computing platforms. Further design complications stem from the reality that state-of-the-art image processing algorithms take very different forms, greatly complicating low-latency implementations. This dissertation presents the design and preliminary implementation of an image processing sub-system that minimizes computational complexity and power consumption by eliminating repeated transformations between processing domains. Specifically, this processing chain utilizes the LeGall 5/3 wavelet as the basis for applying multiple algorithms within a single domain. The wavelet processing chain is compared, in terms of image quality, computational cost, and power consumption, to a benchmark processing chain comprised of algorithms intended to produce high quality image results. Image quality is assessed through a subject matter expert evaluation. Computational cost is analyzed theoretically and empirically, and the power consumption is derived from the execution times and characteristics of the processing devices. The results demonstrate significant promise, but several areas for additional work have been identified.
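The LeGall 5/3 wavelet chosen as the common processing domain has a well-known reversible integer lifting form (the JPEG2000 lossless filter). A generic one-level sketch, not the dissertation's fixed-point implementation:

```python
def legall53(x):
    """One level of the reversible integer LeGall 5/3 wavelet via
    lifting; x must have even length, with symmetric boundary handling.
    Returns (approximation, detail) coefficient lists."""
    n = len(x)
    m = n // 2
    def xe(i):                       # symmetric extension of the signal
        if i < 0:
            i = -i
        if i >= n:
            i = 2 * n - 2 - i
        return x[i]
    # predict: detail = odd sample minus average of even neighbours
    d = [x[2 * i + 1] - (xe(2 * i) + xe(2 * i + 2)) // 2 for i in range(m)]
    def de(i):                       # matching extension of the details
        return d[max(0, min(i, m - 1))]
    # update: approximation = even sample plus a correction from details
    s = [x[2 * i] + (de(i - 1) + d[i] + 2) // 4 for i in range(m)]
    return s, d

def legall53_inv(s, d):
    """Inverse lifting: undo the update, then the predict step."""
    m = len(s)
    def de(i):
        return d[max(0, min(i, m - 1))]
    even = [s[i] - (de(i - 1) + d[i] + 2) // 4 for i in range(m)]
    def ee(i):
        return even[min(i, m - 1)]
    out = []
    for i in range(m):
        out.append(even[i])
        out.append(d[i] + (even[i] + ee(i + 1)) // 2)
    return out

x = [10, 12, 14, 200, 14, 12, 10, 8]
s, d = legall53(x)
print(legall53_inv(s, d) == x)   # → True  (perfect reconstruction)
```

Integer lifting keeps the whole chain in fixed-point arithmetic, which is what makes this wavelet attractive for the low-power platforms discussed above.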
APA, Harvard, Vancouver, ISO, and other styles
10

Strand, Mattias. "A Software Framework for Facial Modelling and Tracking." Thesis, Linköping University, Department of Electrical Engineering, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-54563.

Full text
Abstract:

The WinCandide application, a platform for face tracking and model-based coding, had become out of date and needed to be upgraded. This report is based on the work of investigating open-source GUIs and computer vision toolkits that could replace the old, unsupported ones. Multi-platform GUIs are of special interest.

APA, Harvard, Vancouver, ISO, and other styles
11

Hofmann, Jaco [Verfasser], Andreas [Akademischer Betreuer] Koch, and Mladen [Akademischer Betreuer] Berekovic. "An Improved Framework for and Case Studies in FPGA-Based Application Acceleration - Computer Vision, In-Network Processing and Spiking Neural Networks / Jaco Hofmann ; Andreas Koch, Mladen Berekovic." Darmstadt : Universitäts- und Landesbibliothek Darmstadt, 2020. http://d-nb.info/1202923097/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Petit, Damien. "Analysis of sensory requirement and control framework for whole body embodiment of a humanoid robot for interaction with the environment and self." Thesis, Montpellier, 2015. http://www.theses.fr/2015MONTS285.

Full text
Abstract:
Humanoid robot surrogates promise new applications in the field of human-robot interaction and assistive robotics. However, whole body embodiment for teleoperation or telepresence with mobile robot avatars is yet to be fully explored and understood. In this thesis, we focus on exploring the feeling of embodiment when one navigates and interacts with the environment or with one's self through a humanoid robot. First, we present a framework devised to realize scenarios of navigation and self interaction. The framework uses a brain-computer interface to control a humanoid robot and relies on several computer vision components to assist the user in navigating and interacting with the environment and one's self. Two scenarios are then realized with this framework in which users control a humanoid robot to perform self interaction tasks. We then explore in detail key issues encountered during those scenarios. We investigate how the reduced controllability and feedback of the users affect their feeling of embodiment towards the walking surrogate. We then present the results of a study focused on the feeling experienced by the user when controlling the humanoid arm to "touch" the environment and then one's self. The results show that despite the lack of feedback in the control, and despite recognizing themselves, users remain embodied in the surrogate and experience the touch in their hands through it.
APA, Harvard, Vancouver, ISO, and other styles
13

Kim, Changick. "A framework for object-based video analysis /." Thesis, Connect to this title online; UW restricted, 2000. http://hdl.handle.net/1773/5823.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Tous, Terrades Francesc. "Computational framework for the white point interpretation based on nameability." Doctoral thesis, Universitat Autònoma de Barcelona, 2006. http://hdl.handle.net/10803/5765.

Full text
Abstract:
In this work we present a framework for white point estimation of images under uncalibrated conditions, where multiple interpretable solutions can be considered. We propose to use the colour matching visual cue, which has been shown to be related to colour constancy. The colour matching process is guided by the introduction of semantic information regarding the image content; thus, we introduce high-level information about the colours we expect to find in the images. Building on these two ideas, colour matching and semantic information, and on existing computational colour constancy approaches, we propose a white point estimation method for uncalibrated conditions that delivers multiple solutions according to different interpretations of the colours in a scene. Selecting multiple solutions makes it possible to obtain more information about the scene than existing colour constancy methods, which normally select a unique solution. In this case, the multiple solutions are weighted by the degree of colour matching between the colours in the image and the semantic information introduced. Finally, we show that the feasible set of solutions can be reduced to a smaller and more significant set with a semantic interpretation.
Our study is framed in a global image annotation project which aims to obtain descriptors that depict the image; in this work we focus on illuminant descriptors. We define two different sets of conditions for this project: (a) calibrated conditions, when we have some information about the acquisition process, and (b) uncalibrated conditions, when we do not. Although we have focused on the uncalibrated case, for calibrated conditions we also propose a colour constancy method which introduces the relaxed grey-world assumption to produce a reduced feasible set of solutions. This method delivers good performance, similar to that of existing methods, and reduces the size of the feasible set obtained.
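For reference, the classic (unrelaxed) grey-world estimate underlying the calibrated-conditions method can be sketched as follows; the relaxation proposed in the thesis, which yields a set of feasible illuminants rather than a single one, is not reproduced here, and the sample pixels are invented:

```python
def grey_world_correct(pixels):
    """Grey-world white point correction: assume the average scene
    colour is achromatic, so rescale each channel by
    (overall mean / channel mean)."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    grey = sum(means) / 3
    gains = [grey / m for m in means]
    return [tuple(v * g for v, g in zip(p, gains)) for p in pixels]

out = grey_world_correct([(200, 100, 100), (100, 50, 50)])
print([tuple(round(v, 1) for v in p) for p in out])
# → [(133.3, 133.3, 133.3), (66.7, 66.7, 66.7)]
```

After correction the three channel means coincide, i.e. the average colour is grey, which is exactly the assumption the method enforces.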
APA, Harvard, Vancouver, ISO, and other styles
15

Li, Qi. "An integration framework of feature selection and extraction for appearance-based recognition." Access to citation, abstract and download form provided by ProQuest Information and Learning Company; downloadable PDF file 8.38 Mb., 141 p, 2006. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&res_dat=xri:pqdiss&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&rft_dat=xri:pqdiss:3220745.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Fortun, Denis. "Aggregation framework and patch-based representation for optical flow." Thesis, Rennes 1, 2014. http://www.theses.fr/2014REN1S093/document.

Full text
Abstract:
This thesis is concerned with dense motion estimation in image sequences, also known as optical flow. Usual approaches exploit either local parametrization or global regularization of the motion field. We explore several ways to combine these two strategies to overcome their respective limitations. We first address the problem in a global variational framework and consider local filtering of the data term. We design a spatially adaptive filtering optimized jointly with motion, to prevent the over-smoothing induced by the spatially constant approach. In a second part, we propose a generic two-step aggregation framework for optical flow estimation. In its most general form, it consists of a local computation of motion candidates, combined in the aggregation step through a global model. Large displacements and motion discontinuities are efficiently recovered with this scheme. We also develop a generic exemplar-based occlusion handling method to deal with large displacements. Our method is validated with extensive experiments on computer vision benchmarks. We demonstrate the superiority of our method over the state of the art on sequences with large displacements. Finally, we adapt the previous methods to biological imaging. Large local intensity changes frequently occurring in fluorescence imaging are efficiently estimated and compensated with an adaptation of our aggregation framework. We also propose a variational method with local filtering dedicated to the diffusive motion of particles.
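The two-step aggregation idea, local candidates first and a global choice second, can be caricatured in 1-D (a toy with invented costs, not the thesis's model): each pixel has a few locally estimated motion candidates with data costs, and a dynamic-programming pass selects one candidate per pixel while penalizing differences between neighbours.

```python
def aggregate(costs, candidates, lam=1.0):
    """Pick one motion candidate per pixel, minimizing the total of
    local data costs plus lam * |u_i - u_{i-1}| smoothness penalties,
    by dynamic programming over the candidate indices."""
    n, k = len(candidates), len(candidates[0])
    best = [costs[0][j] for j in range(k)]
    back = []
    for i in range(1, n):
        prev, (best, ptr) = best, ([], [])
        for j in range(k):
            scores = [prev[l] + lam * abs(candidates[i][j] - candidates[i - 1][l])
                      for l in range(k)]
            l = min(range(k), key=scores.__getitem__)
            best.append(costs[i][j] + scores[l])
            ptr.append(l)
        back.append(ptr)
    j = min(range(k), key=best.__getitem__)
    sel = [j]
    for ptr in reversed(back):        # backtrack the optimal path
        j = ptr[j]
        sel.append(j)
    sel.reverse()
    return [candidates[i][sel[i]] for i in range(n)]

candidates = [[0, 5]] * 4                      # per-pixel motion candidates
costs = [[3, 0], [3, 0], [0, 3], [3, 0]]       # local data costs
print(aggregate(costs, candidates, lam=2.0))   # → [5, 5, 5, 5]
print(aggregate(costs, candidates, lam=0.0))   # → [5, 5, 0, 5]
```

With the smoothness weight on, the locally wrong candidate at pixel 2 is overridden by the global pass, which is how aggregation recovers from unreliable local estimates.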
APA, Harvard, Vancouver, ISO, and other styles
17

Sethi, Ricky Jaineet. "A physics-based, neurobiologically-inspired stochastic framework for activity recognition." Diss., [Riverside, Calif.] : University of California, Riverside, 2009. http://proquest.umi.com/pqdweb?index=0&did=1957340981&SrchMode=2&sid=1&Fmt=2&VInst=PROD&VType=PQD&RQT=309&VName=PQD&TS=1268416562&clientId=48051.

Full text
Abstract:
Thesis (Ph. D.)--University of California, Riverside, 2009.
Includes abstract. Title from first page of PDF file (viewed March 11, 2010). Available via ProQuest Digital Dissertations. Includes bibliographical references (p. ). Also issued in print.
APA, Harvard, Vancouver, ISO, and other styles
18

Braeger, Steven W. "A framework for blind signal correction using optimized polyspectra-based cost functions." Honors in the Major Thesis, University of Central Florida, 2009. http://digital.library.ucf.edu/cdm/ref/collection/ETH/id/1244.

Full text
Abstract:
This item is only available in print in the UCF Libraries. If this is your Honors Thesis, you can help us make it available online for use by researchers around the world by following the instructions on the distribution consent form at http://library.ucf.edu/Systems/DigitalInitiatives/DigitalCollections/InternetDistributionConsentAgreementForm.pdf You may also contact the project coordinator, Kerri Bottorff, at kerri.bottorff@ucf.edu for more information.
Bachelors
Engineering and Computer Science
Computer Science
APA, Harvard, Vancouver, ISO, and other styles
19

Wei, Lijun. "Multi-sources fusion based vehicle localization in urban environments under a loosely coupled probabilistic framework." Phd thesis, Université de Technologie de Belfort-Montbeliard, 2013. http://tel.archives-ouvertes.fr/tel-01004660.

Full text
Abstract:
In some dense urban environments (e.g., a street surrounded by tall buildings), the vehicle localization provided by a Global Positioning System (GPS) receiver might be inaccurate or even unavailable due to signal reflection (multi-path) or poor satellite visibility. In order to improve the accuracy and robustness of assisted navigation systems, and thereby guarantee driving security and service continuity on the road, this thesis presents a vehicle localization approach that exploits the redundancy and complementarity of multiple sensors. At first, the GPS localization method is complemented by an onboard dead-reckoning (DR) method (inertial measurement unit, odometer, gyroscope), a stereovision-based visual odometry method, a horizontal laser range finder (LRF) based scan alignment method, and a 2D GIS road network map based map-matching method to provide a coarse vehicle pose estimation. A sensor selection step is applied to validate the coherence of the observations from the multiple sensors; only information provided by the validated sensors is combined under a loosely coupled probabilistic framework with an information filter. Then, if the GPS receiver encounters long-term outages, the accumulated localization error of the DR-only method is bounded by adding a GIS building map layer. Two onboard LRF systems (a horizontal LRF and a vertical LRF) are mounted on the roof of the vehicle and used to detect building facades in the urban environment. The detected building facades are projected onto the 2D ground plane and associated with the GIS building map layer to correct the vehicle pose error, especially the lateral error. The facade landmarks extracted from the vertical LRF scan are stored in a new GIS map layer. The proposed approach is tested and evaluated with real data sequences.
Experimental results with real data show that fusion of the stereoscopic system and the LRF can keep localizing the vehicle during short GPS outages and correct GPS positioning errors such as GPS jumps; that the road map can provide an approximate estimate of the vehicle position by projecting it onto the corresponding road segment; and that integrating the building information can refine the initial pose estimation when GPS signals are lost for a long time.
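The loosely coupled fusion with an information filter mentioned in this abstract relies on the additivity of sensor contributions in information form. A minimal sketch of that update step (generic, not the thesis's implementation; the toy sensor values and names are illustrative):

```python
import numpy as np

def information_fusion(Y, y, observations):
    """One measurement-update step of an information filter.

    State is kept in information form: Y = P^-1 (information matrix)
    and y = Y @ x (information vector). Each validated sensor adds its
    own term, which is what makes loosely coupled multi-sensor fusion
    straightforward: contributions simply accumulate.
    """
    for H, R, z in observations:      # H: observation model, R: noise cov, z: measurement
        Rinv = np.linalg.inv(R)
        Y = Y + H.T @ Rinv @ H        # information matrix update
        y = y + H.T @ Rinv @ z        # information vector update
    return Y, y

# Toy 2D position fused from two "sensors" observing the state directly.
Y0 = np.eye(2) * 1e-2                 # weak prior
y0 = Y0 @ np.array([0.0, 0.0])
H = np.eye(2)
gps  = (H, np.eye(2) * 4.0, np.array([10.0, 5.0]))   # noisy GPS fix
odom = (H, np.eye(2) * 1.0, np.array([9.0, 4.5]))    # more precise odometry
Y, y = information_fusion(Y0, y0, [gps, odom])
x = np.linalg.solve(Y, y)             # recover the state estimate x = Y^-1 y
```

The fused estimate lands between the two measurements, weighted by each sensor's precision; dropping a sensor that fails the coherence check simply means omitting its tuple from the list.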
APA, Harvard, Vancouver, ISO, and other styles
20

Hofmann, Jaco. "An Improved Framework for and Case Studies in FPGA-Based Application Acceleration - Computer Vision, In-Network Processing and Spiking Neural Networks." Phd thesis, 2020. https://tuprints.ulb.tu-darmstadt.de/10355/1/Thesis_JAH_2019.pdf.

Full text
Abstract:
Field Programmable Gate Arrays (FPGAs) are a new addition to the world of data center acceleration. While the underlying technology has been around for decades, their application in data centers is slowly starting to gain traction. However, a myriad of problems hinder the widespread application of FPGAs in the data center. The closed-source tool chains result in vendor lock-in and unstable tool flows. The languages used to program FPGAs require different design processes which are not easily learned by software developers. Compared to commodity solutions using CPUs and GPUs, FPGAs are more expensive and more time-consuming to develop for. All of this and more make FPGAs a tough sell to people in need of task acceleration. Nonetheless, FPGAs also offer an opportunity to develop faster accelerators with a smaller energy envelope for rapidly changing applications. This work presents a solution to FPGA abstraction using the TaPaSCo framework. TaPaSCo simplifies moving between different FPGA architectures and automates the scaling of accelerators across a multitude of devices. In addition, the framework provides a homogenized way of interacting with the accelerators. This thesis presents applications where FPGAs offer many benefits in the data center. Applications such as Semi-Global Block Matching, which are difficult to compute on CPUs and GPUs due to their specific data transfer patterns, can be implemented highly efficiently on FPGAs. The presented work achieves over 35x speedup on FPGAs compared to implementations on GPUs. FPGAs can also be used to improve network efficiency in the data center by replacing central network components with smart switches. The work presented here achieves up to 7x speedup over a classical distributed software implementation in a hash join scenario. Furthermore, FPGAs can be used to bring new storage technologies into the data center by providing highly efficient consensus services right inside the network.
The presented work shows that fetching pages remotely using an FPGA-accelerated consensus system can be done in as little as 10us over the network, which is only 55% of the latency of a conventional solution. These results make non-volatile network storage solutions viable as a replacement for main memory. Lastly, this thesis presents a way of simulating parts of a brain with very high accuracy using FPGAs. The spiking neural networks employed in the accelerator can benefit research on brain functionality. The accelerator is capable of handling tens of thousands of neurons under a strict real-time requirement of 50us per simulation step.
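The Semi-Global (Block) Matching mentioned in this abstract aggregates stereo matching costs along scanlines with a well-known recurrence; its sequential dependence along each path is part of what makes the data transfer pattern awkward for GPUs. A generic CPU sketch of one aggregation path (an illustration of the classic recurrence, not code from the thesis):

```python
import numpy as np

def sgm_scanline(cost, P1=1.0, P2=8.0):
    """Aggregate a matching-cost slice along one scanline direction.

    cost: array of shape (W, D), per-pixel matching cost for each of D
    disparities. Implements the classic SGM recurrence
      L(p,d) = C(p,d) + min(L(p-1,d),
                            L(p-1,d-1)+P1, L(p-1,d+1)+P1,
                            min_k L(p-1,k)+P2) - min_k L(p-1,k)
    where P1 penalizes small disparity changes and P2 large jumps.
    """
    W, D = cost.shape
    L = np.empty_like(cost)
    L[0] = cost[0]
    for x in range(1, W):
        prev = L[x - 1]
        m = prev.min()
        minus = np.r_[np.inf, prev[:-1]] + P1   # neighbour at d-1
        plus = np.r_[prev[1:], np.inf] + P1     # neighbour at d+1
        L[x] = cost[x] + np.minimum.reduce(
            [prev, minus, plus, np.full(D, m + P2)]) - m
    return L

# Toy 4-pixel scanline with 3 disparities; pick the cheapest disparity per pixel.
costs = np.array([[0., 5., 5.],
                  [5., 0., 5.],
                  [5., 5., 0.],
                  [5., 5., 0.]])
L = sgm_scanline(costs)
disparity = L.argmin(axis=1)   # smoothly increasing disparity profile
```

In the full algorithm, such path costs are computed along several directions and summed before the per-pixel minimum is taken; on an FPGA the paths can be pipelined, which is where the efficiency gain comes from.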
APA, Harvard, Vancouver, ISO, and other styles
21

Zhao, Zhipeng. "Towards a local-global visual feature-based framework for recognition." 2009. http://hdl.rutgers.edu/1782.2/rucore10001600001.ETD.000051935.

Full text
APA, Harvard, Vancouver, ISO, and other styles