
Dissertations / Theses on the topic 'Visual modelling'



Consult the top 50 dissertations / theses for your research on the topic 'Visual modelling.'


You can also download the full text of each academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Narbutas, Vilius. "Computational modelling of visual search." Thesis, University of Birmingham, 2018. http://etheses.bham.ac.uk//id/eprint/8772/.

Full text
Abstract:
Visual search has traditionally been explained by two main competing theories, parallel and serial search, and this architectural question remains unresolved to this day. Recent developments in the field suggest that response time distributions may help to differentiate the two competing theories. For this purpose we used the best available serial model, Competitive Guided Search, and two biologically plausible parallel models inspired by the theory of biased competition. The parallel models adopted a winner-take-all mechanism from the Selective Attention for Identification Model as a base model, which was extended into a novel model for explaining response time distributions. Because these models are analytically intractable, we adopted a more accurate kernel density estimator to represent the unknown probability density function, introduced robustness properties into the fitting method, and developed a more efficient algorithm for finding parameter solutions. These methods were then applied to compare the respective models, and we concluded that the winner-take-all model generalises poorly to response time distributions. Finally, we introduce a novel Asymmetrical Dynamic Neural Network model that explains distributional changes better than the Competitive Guided Search model.
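Since the models above have no closed-form likelihood, fitting proceeds by turning simulated response times into a density estimate and scoring the observed data against it. A minimal sketch of that kernel-density step is given below; the stand-in simulator, its gamma-shaped RTs, and all parameter values are illustrative assumptions, not the thesis's models.

```python
# Minimal sketch: approximate the likelihood of observed response times (RTs)
# under a simulation-only search model via kernel density estimation (KDE).
# The model function here is a stand-in, not the thesis's actual simulator.
import numpy as np
from scipy.stats import gaussian_kde

def simulate_model_rts(params, n_trials=20000, rng=None):
    """Hypothetical stand-in for an analytically intractable search model:
    returns simulated RTs (seconds) for a given parameter vector."""
    rng = np.random.default_rng() if rng is None else rng
    drift, non_decision = params
    return non_decision + rng.gamma(shape=2.0, scale=1.0 / drift, size=n_trials)

def kde_log_likelihood(observed_rts, simulated_rts):
    """Score observed RTs against a KDE of model-simulated RTs."""
    density = gaussian_kde(simulated_rts)                    # smooth, nonparametric PDF estimate
    pdf_vals = np.clip(density(observed_rts), 1e-12, None)   # guard against log(0)
    return np.sum(np.log(pdf_vals))

rng = np.random.default_rng(0)
observed = 0.3 + rng.gamma(2.0, 0.25, size=500)              # fake "human" RT data
print(kde_log_likelihood(observed, simulate_model_rts((4.0, 0.3), rng=rng)))
```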
APA, Harvard, Vancouver, ISO, and other styles
2

Bühler, Frank Stefan. "Combining visual modelling with visual programming for CORBA component development." Thesis, De Montfort University, 2002. http://hdl.handle.net/2086/4068.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Clarke, Alasdair Daniel Francis. "Modelling visual search for surface defects." Thesis, Heriot-Watt University, 2010. http://hdl.handle.net/10399/2351.

Full text
Abstract:
Much work has been done on developing algorithms for automated surface defect detection. However, comparisons between these models and human perception are rarely carried out. This thesis aims to investigate how well human observers can find defects in textured surfaces, over a wide range of task difficulties. Stimuli for experiments will be generated using texture synthesis methods and human search strategies will be captured by use of an eye tracker. Two different modelling approaches will be explored. A computational LNL-based model will be developed and compared to human performance in terms of the number of fixations required to find the target. Secondly, a stochastic simulation, based on empirical distributions of saccades, will be compared to human search strategies.
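As a rough illustration of the second approach, a stochastic search can be simulated by drawing saccades from an empirical amplitude distribution and counting fixations until the target is reached. Everything in the sketch below (target position, amplitude distribution, isotropic directions, stopping rule) is an illustrative assumption rather than a value fitted in the thesis.

```python
# Minimal sketch of a stochastic search simulation: saccades are drawn from an
# empirical amplitude distribution and fixations are counted until gaze lands
# near the target. All parameters and distributions are illustrative placeholders.
import numpy as np

def simulate_search(target_xy, amplitudes, rng, image_size=512, fixation_radius=32,
                    max_fixations=100):
    """Return the number of fixations needed to land within fixation_radius of the target."""
    pos = np.array([image_size / 2, image_size / 2], dtype=float)   # start at centre
    for fixation in range(1, max_fixations + 1):
        if np.linalg.norm(pos - target_xy) < fixation_radius:
            return fixation
        amp = rng.choice(amplitudes)                 # empirical saccade amplitude
        direction = rng.uniform(0.0, 2 * np.pi)      # isotropic direction (assumption)
        pos = np.clip(pos + amp * np.array([np.cos(direction), np.sin(direction)]),
                      0, image_size)
    return max_fixations

rng = np.random.default_rng(1)
empirical_amplitudes = rng.gamma(2.0, 40.0, size=1000)   # stand-in for eye-tracking data
counts = [simulate_search(np.array([400.0, 100.0]), empirical_amplitudes, rng)
          for _ in range(200)]
print("median fixations to target:", np.median(counts))
```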
APA, Harvard, Vancouver, ISO, and other styles
4

Woodbury, Greg. "Modelling Emergent Properties of the Visual Cortex." University of Sydney. School of Mathematics and Statistics, 2003. http://hdl.handle.net/2123/695.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Wu, Qi. "Modelling visual objects regardless of depictive style." Thesis, University of Bath, 2015. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.646137.

Full text
Abstract:
Visual object classification and detection are major problems in contemporary computer vision. State-of-the-art algorithms allow thousands of visual objects to be learned and recognized under a wide range of variations, including lighting changes, occlusion, and point of view. However, only a small fraction of the literature addresses the problem of variation in depictive style (photographs, drawings, paintings, etc.). This is a challenging gap, but the ability to process images of all depictive styles, and not just photographs, has potential value across many applications. This thesis aims to narrow this gap. Our studies begin with primitive shapes. We provide experimental evidence that primitive shapes such as 'triangle', 'square', or 'circle' can be found and used to fit regions in segmentations. These shapes correspond to those used by artists as they draw. We then assume that an object class can be characterised by the qualitative shape of object parts and their structural arrangement. Hence, a novel hierarchical graph representation labelled with primitive shapes is proposed. The model is learnable and is able to classify over a broad range of depictive styles. However, as more depictive styles are added, how to capture the wide variation in visual appearance exhibited by visual objects across them remains an open question. We believe that the use of a graph with multiple labels to represent visual words that exist in possibly discontinuous regions of a feature space can be helpful.
APA, Harvard, Vancouver, ISO, and other styles
6

Dziemianko, Michal. "Modelling eye movements and visual attention in synchronous visual and linguistic processing." Thesis, University of Edinburgh, 2013. http://hdl.handle.net/1842/9377.

Full text
Abstract:
This thesis focuses on modelling visual attention in tasks in which vision interacts with language and other sources of contextual information. The work is based on insights provided by experimental studies in visual cognition and psycholinguistics, particularly cross-modal processing. We present a series of models of eye-movements in situated language comprehension capable of generating human-like scan-paths. Moreover, we investigate the existence of high-level structure in the scan-paths and the applicability of tools used in Natural Language Processing to the analysis of this structure. We show that scan paths carry interesting information that is currently neglected in both experimental and modelling studies. This information, studied at a level beyond simple statistical measures such as proportion of looks, can be used to extract knowledge of more complicated patterns of behaviour, and to build models capable of simulating human behaviour in the presence of linguistic material. We also revisit classical saliency models and their extensions, in particular the Contextual Guidance Model of Torralba et al. (2006), and extend it with memory of target positions in visual search. We show that models of contextual guidance should contain components responsible for short-term learning and memorisation. We also investigate the applicability of this type of model to prediction of human behaviour in tasks with incremental stimuli as in situated language comprehension. Finally, we investigate the issue of objectness and object saliency, including their effects on eye-movements and human responses to experimental tasks. In a simple experiment we show that when using an object-based notion of saliency it is possible to predict fixation locations better than using pixel-based saliency as formulated by Itti et al. (1998). In addition we show that object-based saliency fits into current theories such as cognitive relevance and can be used to build unified models of cross-referential visual and linguistic processing. This thesis forms a foundation towards a more detailed study of scan-paths within an object-based framework such as the Cognitive Relevance Framework (Henderson et al., 2007, 2009) by providing models capable of explaining human behaviour, and by delivering tools and methodologies to predict which objects will be attended to during synchronous visual and linguistic processing.
APA, Harvard, Vancouver, ISO, and other styles
7

Stewart, Finlay J. "Modelling visual-olfactory integration in free-flying Drosophila." Thesis, University of Edinburgh, 2010. http://hdl.handle.net/1842/3192.

Full text
Abstract:
Flying fruit flies (Drosophila melanogaster) locate a concealed appetitive odour source most accurately in environments containing vertical visual contrasts (Frye et al., 2003). To investigate how visuomotor and olfactory responses interact to cause this phenomenon, I implement a tracking system capable of recording flies' flight trajectories in three dimensions. I examine free-flight behaviour in three different visual environments, with and without food odour present. While odour localisation is facilitated by a random chequerboard pattern compared to a horizontally striped one, a single vertical landmark also facilitates odour localisation, but only if the odour source is situated close to the landmark. I implement a closed-loop systems-level model of visuomotor control consisting of three parallel subsystems which use wide-field optic flow cues to control flight behaviour. These are: an optomotor response to stabilise the model fly's yaw orientation; a collision avoidance system to initiate rapid turns (saccades) away from looming obstacles; and a speed regulation system. This model reproduces in simulation many of the behaviours I observe in flies, including distinctive visually mediated 'rebound' turns following saccades. Using recordings of real odour plumes, I simulate the presence of an odorant in the arena, and investigate ways in which the olfactory input could modulate visuomotor control. In accordance with the principle of Occam's razor, I identify the simplest mechanism of crossmodal integration that reproduces the observed pattern of visual effects on the odour localisation behaviour of flies. The resulting model uses the change in odour intensity to regulate the sensitivity of collision avoidance, resulting in visually mediated chemokinesis. Additionally, it is necessary to amplify the optomotor response whenever odour is present, increasing the model fly's tendency to steer towards features of the visual environment. This could be viewed as a change in behavioural context brought about by the possibility of feeding. A novel heterogeneous visual environment is used to validate the model. While its predictions are largely borne out by experimental data, it fails to account for a pronounced odour-dependent attraction to regions of exclusively vertical contrast. I conclude that visual and olfactory responses of Drosophila are not independent, but that relatively simple interaction between these modalities can account for the observed visual dependence of odour source localisation.
APA, Harvard, Vancouver, ISO, and other styles
8

Lazarevic, N. "Background modelling and performance metrics for visual surveillance." Thesis, Kingston University, 2011. http://eprints.kingston.ac.uk/21831/.

Full text
Abstract:
This work deals with the problems of performance evaluation and background modelling for the detection of moving objects in outdoor video surveillance datasets. Such datasets are typically affected by considerable background variations caused by global and partial illumination variations, gradual and sudden lighting condition changes, and non-stationary backgrounds. The large variation of backgrounds in typical outdoor video sequences requires highly adaptable and robust models able to represent the background at any time instance with sufficient accuracy. Furthermore, in real-life applications it is often required to detect possible contaminations of the scene in real time or as new observations become available. A novel adaptive multi-modal algorithm for on-line background modelling is proposed. The proposed algorithm applies the principles of the Gaussian Mixture Model, previously used to model the grey-level (or colour) variations of individual pixels, to the modelling of illumination variations in image regions. The image observations are represented in the eigen-space, where the dimensionality of the data is significantly reduced using the method of principal components analysis. The projections of image regions in the reduced eigen-space are clustered using K-means into clusters (or modes) of similar backgrounds and are modelled as multivariate Gaussian distributions. Such an approach allows the model to adapt to changes in the dataset in a timely manner. This work proposes modifications to a previously published method for the incremental update of uni-modal eigen-models. The modifications are twofold. First, the incremental update is performed on the individual modes of the multi-modal model, and second, the mechanism for adding new dimensions is adapted to handle problems typical for outdoor video surveillance scenes with a wide range of illumination changes. Finally, a novel, objective, comparative, object-based methodology for performance evaluation of object detection is also developed. The proposed methodology is concerned with the evaluation of object detection in the context of the end-user-defined quality of performance in complex video surveillance applications.
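The region-level pipeline described above (PCA eigen-space projection, K-means clustering of background appearances, one multivariate Gaussian per cluster) can be sketched as follows on synthetic data; the dimensionality, cluster count, and acceptance threshold are assumptions, and the thesis's incremental update mechanism is not reproduced.

```python
# Minimal sketch of region-level background modelling: PCA projection, K-means
# clustering of background appearances, and a Gaussian per cluster. Synthetic
# data; all parameter choices are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
frames = rng.normal(128, 20, size=(200, 16 * 16))    # 200 observations of one 16x16 region

pca = PCA(n_components=8).fit(frames)                # reduced eigen-space
coeffs = pca.transform(frames)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(coeffs)
modes = [multivariate_normal(mean=coeffs[kmeans.labels_ == k].mean(axis=0),
                             cov=np.cov(coeffs[kmeans.labels_ == k], rowvar=False)
                                 + 1e-6 * np.eye(8))
         for k in range(3)]                          # one Gaussian mode per background cluster

train_ll = np.array([max(m.logpdf(c) for m in modes) for c in coeffs])
threshold = np.percentile(train_ll, 1)               # accept anything as likely as training data

def is_background(region):
    """Flag a new observation of the region as background if some mode explains it well."""
    z = pca.transform(region.reshape(1, -1))[0]
    return max(m.logpdf(z) for m in modes) >= threshold

print(is_background(frames[0]), is_background(rng.normal(40, 5, size=16 * 16)))
```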
APA, Harvard, Vancouver, ISO, and other styles
9

Mineault, Patrick. "Parametric modelling of visual cortex at multiple scales." Thesis, McGill University, 2014. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=123020.

Full text
Abstract:
The visual system is confronted with the daunting task of extracting behaviourally relevant visual information from noisy and ambiguous patterns of luminance falling on the retina. It solves this problem through a hierarchical architecture, in which the visual stimulus is iteratively re-encoded into ever more abstract representations which can drive behaviour. This thesis explores the question of how the computations performed by neurons in the visual hierarchy create behaviourally relevant representations. This question requires probing the visual system at multiple scales: computation is the role of single neurons and ensembles of neurons; representation is the function of multiple neurons within an area; hierarchical processing is an emergent process which involves multiple areas; and behaviour is defined at the full scale of the system, the psychophysical observer. To study visual processing at multiple scales, I propose to develop and apply parametric modelling methods in the context of systems identification. Systems identification seeks to establish the deterministic relationship between the input and the output of a system. Systems identification has proven particularly useful in the study of visual processing, where the input to the system can be easily controlled via sensory stimulation. Parametric modeling, built on the theory of Generalized Linear Models (GLMs), furnishes a common framework to analyze signals with different statistical properties which occur in the analysis of neural systems: spike trains, multi-unit activity, local field potentials and psychophysical decisions. In Chapter 2, I develop the parametric modeling framework which is used throughout this thesis in the context of psychophysical classification images. Results show that parametric modeling can infer a psychophysical observer's decision process with fewer trials than previously proposed methods. This allows the exploration of more complex, and potentially more informative, models of decision processes while retaining statistical tractability. In Chapter 3, I extend and apply this framework to the analysis of visual representations at the level of neuronal ensembles in area V4. The results show that it is possible to infer, from multi-unit activity and local field potential (LFP) signals, the representation of visual space at a fine-grained scale over several millimeters of cortex. Analysis of the estimated visual representations reveals that LFPs reflect both local sources of input and global biases in visual representation. These results resolve a persistent puzzle in the literature regarding the spatial reach of the local field potential. In Chapter 4, I extend and apply the same framework to the analysis of single-neuron responses in area MST of the dorsal visual stream. Results reveal that MST responses can be explained by the integration of their afferent input from area MT, provided that this integration is nonlinear. Estimated models reveal long suspected, but previously unconfirmed receptive field organization in MST neurons that allow them to respond to complex optic flow patterns. This receptive field organization and nonlinear integration allows more accurate estimation of the velocity of approaching objects from the population of MST neurons, thus revealing their possible functional role in vergence control and object motion estimation. Put together, these results demonstrate that with powerful statistical methods, it is possible to infer the nature of visual representations at multiple scales. In the discussion, I show how these results may be expanded to gain a better understanding of hierarchical visual processing at large.
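A minimal sketch of the GLM machinery underlying the parametric framework: a Poisson GLM with a log link relating lagged stimulus values to spike counts, fitted with statsmodels on synthetic data. The filter shape and design-matrix choices are illustrative assumptions, not the models fitted in the thesis.

```python
# Minimal sketch of a Poisson GLM relating a stimulus to spike counts, in the
# spirit of the parametric (GLM-based) framework described above. Synthetic data;
# the true filter and design choices are assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_bins, n_lags = 5000, 10
stimulus = rng.normal(size=n_bins)

# Design matrix of lagged stimulus values (a simple temporal receptive field model).
X = np.column_stack([np.roll(stimulus, lag) for lag in range(n_lags)])
X[:n_lags] = 0.0                                   # zero out wrapped-around samples

true_filter = np.exp(-np.arange(n_lags) / 3.0)     # assumed ground-truth kernel
rate = np.exp(X @ true_filter - 2.0)               # log link: rate = exp(linear predictor)
spikes = rng.poisson(rate)

glm = sm.GLM(spikes, sm.add_constant(X), family=sm.families.Poisson())
fit = glm.fit()
print(np.round(fit.params[1:], 2))                 # recovered temporal filter weights
```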
APA, Harvard, Vancouver, ISO, and other styles
10

Jilani, Mohd Zairul Mazwan Bin. "Simultaneous modelling and clustering of visual field data." Thesis, Brunel University, 2017. http://bura.brunel.ac.uk/handle/2438/17227.

Full text
Abstract:
In the health-informatics and bio-medical domains, clinicians produce an enormous amount of data which can be complex and high in dimensionality. This scenario includes visual field data, which are used for managing the second leading cause of blindness in the world: glaucoma. Visual field data are the most common type of data collected to diagnose glaucoma in patients, and usually the data consist of 54 or 76 variables (which are referred to as visual field locations). Due to the large number of variables, the six nerve fiber bundles (6NFB), which is a collection of visual field locations in groups, are the standard clusters used in visual field data to represent the physiological traits of the retina. However, with regard to classification accuracy of the data, this research proposes a technique to find other significant spatial clusters of the visual field with higher classification accuracy than the 6NFB. This thesis presents a novel clustering technique, namely, Simultaneous Modelling and Clustering (SMC). SMC performs clustering on data based on classification accuracy using heuristic search techniques. The method searches for a collection of significant clusters of visual field locations that indicate visual field loss progression. The aim of this research is two-fold. Firstly, SMC algorithms are developed and tested on data to investigate the effectiveness and efficiency of the method using optimisation and classification methods. Secondly, a significant clustering arrangement of the visual field, which groups highly interrelated visual field locations to represent progression of visual field loss with high classification accuracy, is sought to complement the 6NFB in the diagnosis of glaucoma. A new clustering arrangement of visual field locations can be used by medical practitioners together with the 6NFB to complement each other in the diagnosis of glaucoma in patients. This research conducts extensive experimental work on both visual field and simulated data to evaluate the proposed method. The results obtained suggest that the proposed method is effective and efficient in clustering visual field data and improving classification accuracy. The key contributions of this work are the novel model-based clustering of visual field data, effective and efficient algorithms for SMC, practical knowledge of visual field data in the diagnosis of glaucoma, and the presentation of a generic framework for modelling and clustering which is highly applicable to many other dataset/model combinations.
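The core SMC idea (searching for a grouping of visual field locations that maximises classification accuracy) can be illustrated with a toy hill-climbing sketch over cluster assignments scored by cross-validated accuracy. The data, cluster count, and search moves are invented for illustration and do not reproduce the thesis's algorithms.

```python
# Toy sketch of "simultaneous modelling and clustering": hill-climb over cluster
# assignments of visual-field locations, scoring each arrangement by the
# cross-validated accuracy of a classifier built on per-cluster mean sensitivities.
# Synthetic data; 12 locations and 3 clusters are arbitrary illustrative choices.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_patients, n_locations, n_clusters = 300, 12, 3
X = rng.normal(30, 5, size=(n_patients, n_locations))       # fake sensitivities (dB)
y = (X[:, :4].mean(axis=1) < 29).astype(int)                 # fake progression labels

def score(assignment):
    """Cross-validated accuracy of a classifier built on per-cluster mean features."""
    feats = np.column_stack([X[:, assignment == k].mean(axis=1) for k in range(n_clusters)])
    return cross_val_score(LogisticRegression(max_iter=1000), feats, y, cv=5).mean()

assignment = np.arange(n_locations) % n_clusters             # initial arrangement
best = score(assignment)
for _ in range(200):                                         # simple stochastic hill climbing
    trial = assignment.copy()
    trial[rng.integers(n_locations)] = rng.integers(n_clusters)
    if len(np.unique(trial)) == n_clusters:                  # keep every cluster non-empty
        trial_score = score(trial)
        if trial_score > best:
            assignment, best = trial, trial_score
print(best, assignment)
```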
APA, Harvard, Vancouver, ISO, and other styles
11

Huth, Jacob. "Modelling Aging in the Visual System & The Convis Python Toolbox." Electronic Thesis or Diss., Sorbonne université, 2018. http://www.theses.fr/2018SORUS140.

Full text
Abstract:
In this thesis we investigate aging processes in the visual system from a computational modelling perspective. We give a review about neural aging phenomena, basic aging changes and possible mechanisms that can connect causes and effects. The hypotheses we formulate from this review are: the input noise hypothesis, the plasticity hypothesis, the white matter hypothesis and the inhibition hypothesis. Since the input noise hypothesis has the possibility to explain a number of aging phenomena from a very simple premise, we focus mainly on this theory. Since the size and organization of receptive fields is important for perception and is changing in high age, we developed a theory about the interaction of noise and receptive field structure. We then propose spike-time dependent plasticity (STDP) as a possible mechanism that could change receptive field size in response to input noise. In two separate chapters we investigate the approaches to model neural data and psychophysical data respectively. In this process we examine a contrast gain control mechanism and a simplified cortical model respectively. Finally, we present convis, a Python toolbox for creating convolutional vision models, which was developed during the studies for this thesis. convis can implement the most important models used currently to model responses of retinal ganglion cells and cells in the lower visual cortices (V1 and V2).
APA, Harvard, Vancouver, ISO, and other styles
12

Apaydin, Mehmetcan. "Biologically Inspired Multichannel Modelling Of Human Visual Perceptual System." Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/12606772/index.pdf.

Full text
Abstract:
Making a robot autonomous has been a common challenge to be overcome since the very beginning. To be an autonomous system, the robot should collect environmental data, interpret them, and act accordingly. In order to accomplish this, some resource management should be conducted. That is, the resources, which are time and computation power in our case, should be allocated to more important areas. Existing research and approaches, however, are not always human-like; indeed, they do not give enough importance to this. Starting from this point of view, the system proposed in this thesis performs this resource management in a way that tries to be more 'human-like'. It directs the focus of attention to where higher-resolution algorithms are really needed. This 'real need' is determined by the visual features of the scene and the current importance levels (or weight values) of each of these features. As a further step, the proposed system is compared with human subjects' characteristics. With unbiased subjects, a set of parameters which resembles a normal human is obtained. Then, in order to see the effect of guidance, the subjects are asked to concentrate on a single predetermined feature. Finally, an artificial neural network based learning mechanism is added to learn to mimic a single human or a group of humans. The system can be used as a preattentive stage module, or more feature channels can be introduced for better performance in the future.
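A minimal sketch of the multichannel weighting idea described above: normalised feature maps are combined with importance weights and the focus of attention is sent to the peak of the combined map. The channels, weights, and map sizes are illustrative assumptions, not the thesis's feature set or learned parameters.

```python
# Minimal sketch of weighted multichannel attention: combine normalised feature
# maps with per-feature importance weights and attend to the peak location.
import numpy as np

def normalise(feature_map):
    """Scale a feature map to [0, 1] so channels are comparable before weighting."""
    fmin, fmax = feature_map.min(), feature_map.max()
    return (feature_map - fmin) / (fmax - fmin + 1e-9)

def focus_of_attention(channels, weights):
    """Return the (row, col) of the peak of the weighted combination of feature maps."""
    combined = sum(w * normalise(c) for c, w in zip(channels, weights))
    return np.unravel_index(np.argmax(combined), combined.shape)

rng = np.random.default_rng(0)
intensity = rng.random((64, 64))                 # stand-in intensity conspicuity map
edges = rng.random((64, 64))                     # stand-in edge conspicuity map
motion = np.zeros((64, 64))
motion[40, 10] = 5.0                             # one strongly moving spot
print(focus_of_attention([intensity, edges, motion], weights=[0.2, 0.3, 0.5]))
```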
APA, Harvard, Vancouver, ISO, and other styles
13

Ahmad, Naeem. "Modelling and optimization of sky surveillance visual sensor network." Licentiate thesis, Mittuniversitetet, Institutionen för informationsteknologi och medier, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-17123.

Full text
Abstract:
A Visual Sensor Network (VSN) is a distributed system of a large number of camera sensor nodes. The main components of a camera sensor node are an image sensor, an embedded processor, a wireless transceiver and an energy supply. The major difference between a VSN and an ordinary sensor network is that a VSN generates two-dimensional data in the form of an image, which can be exploited in many useful applications. Some of the potential application examples of VSNs include environment monitoring, surveillance, structural monitoring, traffic monitoring, and industrial automation. However, VSNs also raise new challenges. They generate large amounts of data which require higher processing power, large bandwidth and more energy resources, but the main constraint is that the VSN nodes are limited in these resources. This research focuses on the development of a VSN model to track large birds, such as the Golden Eagle, in the sky. The model explores a number of camera sensors along with optics, such as a lens of suitable focal length, which ensure a minimum required resolution of a bird flying at the highest altitude. The combination of a camera sensor and a lens constitutes a monitoring node. The camera node model is used to optimize the placement of the nodes for full coverage of a given area above a required lower altitude. The model also presents the solution to minimize the cost (number of sensor nodes) to fully cover a given area between the two required extremes, higher and lower altitudes, in terms of camera sensor, lens focal length, camera node placement and actual number of nodes for sky surveillance. The area covered by a VSN can be increased by increasing the higher monitoring altitude and/or decreasing the lower monitoring altitude. However, this also increases the cost of the VSN. The desirable objective is to increase the covered area but decrease the cost. This objective is achieved by using optimization techniques to design a heterogeneous VSN. The core idea is to divide a given monitoring range of altitudes into a number of sub-ranges of altitudes. The sub-ranges of monitoring altitudes are covered by individual sub-VSNs: VSN1 covers the lower sub-range of altitudes, VSN2 covers the next higher sub-range of altitudes, and so on, such that a minimum cost is used to monitor a given area. To verify the concepts developed to design the VSN model, and the optimization techniques to decrease the VSN cost, measurements are performed with actual cameras and optics. Laptop machines are used with the camera nodes as data storage and analysis platforms. The area coverage is measured at the desired lower altitude limits of homogeneous as well as heterogeneous VSNs and verified for 100% coverage. Similarly, the minimum resolution is measured at the desired higher altitude limits of homogeneous as well as heterogeneous VSNs to ensure that the models are able to track the bird at these highest altitudes.
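The sizing argument above reduces to pinhole-camera geometry: pixel pitch, focal length, and altitude fix the ground sampling distance, which in turn fixes both the resolution on a bird at the upper altitude and the footprint that determines node spacing at the lower altitude. The numbers in the sketch below are illustrative, not the thesis's hardware.

```python
# Minimal sketch of the camera-node sizing argument: at altitude h, a lens of
# focal length f and a sensor with pixel pitch p give a ground sampling distance
# GSD = p * h / f. All numeric values below are illustrative, not the thesis's design.
def gsd_m(altitude_m, focal_length_mm, pixel_pitch_um):
    """Metres of the sky plane covered by one pixel at the given altitude."""
    return (pixel_pitch_um * 1e-6) * altitude_m / (focal_length_mm * 1e-3)

def pixels_on_target(wingspan_m, altitude_m, focal_length_mm, pixel_pitch_um):
    """How many pixels a bird of the given wingspan spans at that altitude."""
    return wingspan_m / gsd_m(altitude_m, focal_length_mm, pixel_pitch_um)

def footprint_width_m(altitude_m, focal_length_mm, pixel_pitch_um, sensor_pixels):
    """Width of the area imaged at a given altitude; sets node spacing for full coverage."""
    return gsd_m(altitude_m, focal_length_mm, pixel_pitch_um) * sensor_pixels

# Example: 2 m wingspan bird, 16 mm lens, 3.45 um pixels, 2048-pixel-wide sensor.
print(pixels_on_target(2.0, altitude_m=1000, focal_length_mm=16, pixel_pitch_um=3.45))
print(footprint_width_m(300, 16, 3.45, 2048))   # coverage width at the lower altitude limit
```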
APA, Harvard, Vancouver, ISO, and other styles
14

Hallum, Luke Edward. "Prosthetic vision: Visual modelling, information theory and neural correlates." University of New South Wales, Graduate School of Biomedical Engineering, 2008. http://handle.unsw.edu.au/1959.4/41450.

Full text
Abstract:
Electrical stimulation of the retina affected by photoreceptor loss (e.g., cases of retinitis pigmentosa) elicits the perception of luminous spots (so-called phosphenes) in the visual field. This phenomenon, attributed to the relatively high survival rates of neurons comprising the retina's inner layer, serves as the cornerstone of efforts to provide a microelectronic retinal prosthesis -- a device analogous to the cochlear implant. This thesis concerns phosphenes -- their elicitation and modulation, and, in turn, image analysis for use in a prosthesis. This thesis begins with a comparative review of visual modelling of electrical epiretinal stimulation and analogous acoustic modelling of electrical cochlear stimulation. The latter models involve coloured noise played to normal listeners so as to investigate speech processing and electrode design for use in cochlear implants. Subsequently, four experiments (three psychophysical and one numerical), and two statistical analyses, are presented. Intrinsic signal optical imaging in cerebral cortex is covered in an appendix. The first experiment describes a visual tracking task administered to 20 normal observers afforded simulated prosthetic vision. Fixation, saccade, and smooth pursuit, and the effect of practice, were assessed. Further, an image analysis scheme is demonstrated that, compared to existing approaches, assisted fixation and pursuit (but not saccade) accuracy (35.8% and 6.8%, respectively), and required less phosphene array scanning. Subsequently, (numerical) information-theoretic reasoning is provided for the scheme's superiority. This reasoning was then employed to further optimise the scheme (resulting in a filter comprising overlapping Gaussian kernels), and may be readily extended to arbitrary arrangements of many phosphenes. A face recognition study, wherein stimuli comprised either size- or intensity-modulated phosphenes, is then presented. The study involved unpracticed observers (n=85), and showed no 'size'-versus-'intensity' effect. Overall, a 400-phosphene (100-phosphene) image afforded subjects 89.0% (64.0%) correct recognition (two-interval forced-choice paradigm) when five seconds' scanning was allowed. Performance fell (64.5%) when the 400-phosphene image was stabilised on the retina and presented briefly. Scanning was similar in 400- and 100-phosphene tasks. The final chapter presents the statistical effects of sampling and rendering jitter on the phosphene image. These results may generalise to low-resolution imaging systems involving loosely packed pixels.
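The image-analysis scheme mentioned above amounts to sampling the input with overlapping Gaussian kernels centred on the phosphene array and rendering each phosphene as an intensity-modulated blob. The sketch below does exactly that on a random stand-in image; grid size and kernel widths are assumptions.

```python
# Minimal sketch of simulated prosthetic vision: sample an image with overlapping
# Gaussian kernels on a regular phosphene grid and render each phosphene as an
# intensity-modulated Gaussian blob. Grid size and sigma values are assumptions.
import numpy as np

def gaussian_kernel(shape, centre, sigma):
    rows, cols = np.indices(shape)
    return np.exp(-((rows - centre[0]) ** 2 + (cols - centre[1]) ** 2) / (2 * sigma ** 2))

def phosphene_image(image, grid=(16, 16), sample_sigma=5.0, render_sigma=3.0):
    h, w = image.shape
    centres = [(int((i + 0.5) * h / grid[0]), int((j + 0.5) * w / grid[1]))
               for i in range(grid[0]) for j in range(grid[1])]
    out = np.zeros_like(image, dtype=float)
    for c in centres:
        k = gaussian_kernel(image.shape, c, sample_sigma)
        level = (k * image).sum() / k.sum()          # Gaussian-weighted local average
        out += level * gaussian_kernel(image.shape, c, render_sigma)
    return out / out.max()

rng = np.random.default_rng(0)
stand_in_image = rng.random((128, 128))              # placeholder for a face stimulus
print(phosphene_image(stand_in_image).shape)
```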
APA, Harvard, Vancouver, ISO, and other styles
15

Singh, Avinash. "Alfalfa response to grazing, cultivar evaluation and visual modelling." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape2/PQDD_0017/NQ53078.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Shin, Dongjoe. "Robust surface modelling of visual hull from multiple silhouettes." Thesis, University of Warwick, 2008. http://wrap.warwick.ac.uk/3788/.

Full text
Abstract:
Reconstructing depth information from images is one of the actively researched themes in computer vision and its application involves most vision research areas from object recognition to realistic visualisation. Amongst other useful vision-based reconstruction techniques, this thesis extensively investigates the visual hull (VH) concept for volume approximation and its robust surface modelling when various views of an object are available. Assuming that multiple images are captured from a circular motion, projection matrices are generally parameterised in terms of a rotation angle from a reference position in order to facilitate the multi-camera calibration. However, this assumption is often violated in practice, i.e., a pure rotation in a planar motion with accurate rotation angle is hardly realisable. To address this problem, at first, this thesis proposes a calibration method associated with the approximate circular motion. With these modified projection matrices, a resulting VH is represented by a hierarchical tree structure of voxels from which surfaces are extracted by the Marching cubes (MC) algorithm. However, the surfaces may have unexpected artefacts caused by a coarser volume reconstruction, the topological ambiguity of the MC algorithm, and imperfect image processing or calibration result. To avoid this sensitivity, this thesis proposes a robust surface construction algorithm which initially classifies local convex regions from imperfect MC vertices and then aggregates local surfaces constructed by the 3D convex hull algorithm. Furthermore, this thesis also explores the use of wide baseline images to refine a coarse VH using an affine invariant region descriptor. This improves the quality of VH when a small number of initial views is given. In conclusion, the proposed methods achieve a 3D model with enhanced accuracy. Also, robust surface modelling is retained when silhouette images are degraded by practical noise.
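As a minimal illustration of the volume-to-surface step, the sketch below carves a toy visual hull from two orthographic silhouettes and extracts a mesh with the Marching Cubes implementation in scikit-image. Orthographic projection is a simplifying assumption; the thesis's calibrated projection matrices and robust surface construction are not reproduced.

```python
# Toy sketch: voxel carving from two orthographic silhouettes followed by
# Marching Cubes surface extraction (scikit-image).
import numpy as np
from skimage.measure import marching_cubes

n = 64
centre, radius = n / 2, n / 4
idx = np.arange(n)
a, b = np.meshgrid(idx, idx, indexing="ij")
disc = (a - centre) ** 2 + (b - centre) ** 2 <= radius ** 2   # circular silhouette mask

# Use the disc as the silhouette seen along the x axis (z,y plane) and along the
# y axis (z,x plane); a voxel is in the hull only if it projects inside both.
hull = disc[:, :, None] & disc[:, None, :]                    # volume indexed (z, y, x)

verts, faces, normals, values = marching_cubes(hull.astype(float), level=0.5)
print(verts.shape, faces.shape)                               # extracted surface mesh
```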
APA, Harvard, Vancouver, ISO, and other styles
17

Jenkins, Mark David. "Robust appearance based modelling for effective visual object tracking." Thesis, Glasgow Caledonian University, 2016. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.726754.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Eguchi, Akihiro. "Neural network modelling of the primate ventral visual pathway." Thesis, University of Oxford, 2017. https://ora.ox.ac.uk/objects/uuid:99277b9c-00ee-45e3-8adb-47190d716912.

Full text
Abstract:
The aim of this doctoral research is to advance understanding of how the primate brain learns to process the detailed spatial form of natural visual scenes. Neurons in successive stages of the primate ventral visual pathway encode the spatial structure of visual objects and faces. However, it remains a difficult challenge to understand exactly how these neurons develop their response properties through visually guided learning. This thesis approaches this problem through the use of computational modelling. In particular, I first show how the brain may learn to represent the spatial structure of objects and faces through a series of processing stages along the ventral visual pathway. Then I propose how understanding the two complementary unsupervised learning mechanisms of translation invariance may have useful applications in clinical psychology. Next, the potential functional role of top-down (feedback) propagation of visual information in the brain in driving the development of border ownership cells, which are thought to play a role in binding visual features such as boundary edges to their respective objects, is investigated. In particular, the limitations of traditional rate-coded neural networks in modelling these cells are identified. Finally, a general solution to such binding problems with the use of a more biologically realistic spiking neural network is presented. This work is set to make an important contribution towards understanding how the visual system learns to encode the detailed spatial structure of objects and faces within scenes, including representing the binding relations between the visual features that comprise those objects and faces.
APA, Harvard, Vancouver, ISO, and other styles
19

Sadaghiani, Mohammad Hossein. "New method for mathematical modelling of human visual speech." Thesis, University of Nottingham, 2015. http://eprints.nottingham.ac.uk/29159/.

Full text
Abstract:
Audio-visual speech recognition and visual speech synthesisers are used as interfaces between humans and machines. Such interactions specifically rely on the analysis and synthesis of both audio and visual information, which humans use for face-to-face communication. Currently, there is no global standard to describe these interactions nor is there a standard mathematical tool to describe lip movements. Furthermore, the visual lip movement for each phoneme is considered in isolation rather than a continuation from one to another. Consequently, there is no globally accepted standard method for representing lip movement during articulation. This thesis addresses these issues by designing a transcribed group of words, by mathematical formulas, and so introducing the concept of a visual word, allocating signatures to visual words and finally building a visual speech vocabulary database. In addition, visual speech information has been analysed in a novel way by considering both lip movements and the phonemic structure of the English language. In order to extract the visual data, three visual features on the lip have been chosen; these are on the outer upper, lower and corner of the lip. The extracted visual data during articulation is called the visual speech sample set. The final visual data is obtained after processing the visual speech sample sets to correct experimental artefacts, such as head tilting, which occurred during articulation and visual data extraction. The 'Barycentric Lagrange Interpolation' (BLI) formulates the visual speech sample sets into visual speech signals. The visual word is defined in this work and consists of the variation of three visual features. Further processing on relating the visual speech signals to the uttered word leads to the allocation of signatures that represent the visual word. This work suggests the visual word signature can be used either as a 'visual word barcode', a 'digital visual word' or a '2D/3D representation'. The 2D version of the visual word provides a unique signature that allows the identification of the words being uttered. In addition, identification of visual words has also been performed using a technique called 'volumetric representations of the visual words'. Furthermore, the effect of altering the amplitudes and sampling rate for BLI has been evaluated. In addition, the performance of BLI in reconstructing the visual speech sample sets has been considered. Finally, BLI has been compared to a signal reconstruction approach using RMSE and correlation coefficients. The results show that BLI is the more reliable method for the purpose of this work, according to Section 7.7.
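The BLI step (turning discrete lip-feature samples into continuous visual speech signals) can be illustrated with SciPy's BarycentricInterpolator; the lip trajectory below is invented, and the Chebyshev spacing of sample times is an assumption made only to keep high-order interpolation well conditioned.

```python
# Minimal sketch of Barycentric Lagrange Interpolation (BLI) of a lip-feature
# trajectory sampled during articulation. The sample values are invented and the
# Chebyshev spacing of sample times is an assumption for numerical stability.
import numpy as np
from scipy.interpolate import BarycentricInterpolator

n_samples = 15
# Chebyshev nodes mapped onto a 0..1 s utterance (equispaced nodes can oscillate badly).
k = np.arange(n_samples)
t_samples = 0.5 * (1 - np.cos((2 * k + 1) * np.pi / (2 * n_samples)))

upper_lip_mm = 2.0 + 1.5 * np.sin(2 * np.pi * 3 * t_samples) * np.exp(-2 * t_samples)

bli = BarycentricInterpolator(t_samples, upper_lip_mm)   # one "visual speech signal"
t_dense = np.linspace(0, 1, 200)
signal = bli(t_dense)                                    # continuous reconstruction
print(float(signal.max()), float(signal.min()))
```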
APA, Harvard, Vancouver, ISO, and other styles
20

Setyanto, Arief. "Hierarchical visual content modelling and query based on trees." Thesis, University of Essex, 2016. http://repository.essex.ac.uk/16903/.

Full text
Abstract:
In recent years, such vast archives of video information have become available that human annotation of content is no longer feasible; automation of video content analysis is therefore highly desirable. The recognition of semantic content in images is a problem that relies on prior knowledge and learnt information and that, to date, has only been partially solved. Salient analysis, on the other hand, is statistically based and highlights regions that are distinct from their surroundings, while also being scalable and repeatable. The arrangement of salient information into hierarchical tree structures in the spatial and temporal domains forms an important step to bridge the semantic salient gap. Salient regions are identified using region analysis, rank ordered and documented in a tree for further analysis. A structure of this kind contains all the information in the original video and forms an intermediary between video processing and video understanding, transforming video analysis into a syntactic database analysis problem. This contribution demonstrates the formulation of spatio-temporal salient trees, the syntax to index them, and provides an interface for higher level cognition in machine vision.
APA, Harvard, Vancouver, ISO, and other styles
21

Cowell, Rosemary Alice. "Modelling the effects of damage to perirhinal cortex and ventral visual stream on visual cognition." Thesis, University of Oxford, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.433315.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Buchanan, Philip Hamish. "Artistic Content Representation and Modelling based on Visual Style Features." Thesis, University of Canterbury. Computer Science and Software Engineering, 2013. http://hdl.handle.net/10092/9225.

Full text
Abstract:
This thesis aims to understand visual style in the context of computer science, using traditionally intangible artistic properties to enhance existing content manipulation algorithms and develop new content creation methods. The developed algorithms can be used to apply extracted properties to other drawings automatically; transfer a selected style; categorise images based upon perceived style; build 3D models using style features from concept artwork; and other style-based actions that change our perception of an object without changing our ability to recognise it. The research in this thesis aims to provide the style manipulation abilities that are missing from modern digital art creation pipelines.
APA, Harvard, Vancouver, ISO, and other styles
23

MacCormick, John. "Probabilistic modelling and stochastic algorithms for visual localisation and tracking." Thesis, University of Oxford, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.342857.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Crabb, David Paul. "Image processing and statistical modelling of visual function in glaucoma." Thesis, City University London, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.336427.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Russell, David Mark. "Spatial and temporal background modelling of non-stationary visual scenes." Thesis, Queen Mary, University of London, 2009. http://qmro.qmul.ac.uk/xmlui/handle/123456789/598.

Full text
Abstract:
The prevalence of electronic imaging systems in everyday life has become increasingly apparent in recent years. Applications are to be found in medical scanning, automated manufacture, and perhaps most significantly, surveillance. Metropolitan areas, shopping malls, and road traffic management all employ and benefit from an unprecedented quantity of video cameras for monitoring purposes. But the high cost and limited effectiveness of employing humans as the final link in the monitoring chain has driven scientists to seek solutions based on machine vision techniques. Whilst the field of machine vision has enjoyed consistent rapid development in the last 20 years, some of the most fundamental issues still remain to be solved in a satisfactory manner. Central to a great many vision applications is the concept of segmentation, and in particular, most practical systems perform background subtraction as one of the first stages of video processing. This involves separation of ‘interesting foreground’ from the less informative but persistent background. But the definition of what is ‘interesting’ is somewhat subjective, and liable to be application specific. Furthermore, the background may be interpreted as including the visual appearance of normal activity of any agents present in the scene, human or otherwise. Thus a background model might be called upon to absorb lighting changes, moving trees and foliage, or normal traffic flow and pedestrian activity, in order to effect what might be termed in ‘biologically-inspired’ vision as pre-attentive selection. This challenge is one of the Holy Grails of the computer vision field, and consequently the subject has received considerable attention. This thesis sets out to address some of the limitations of contemporary methods of background segmentation by investigating methods of inducing local mutual support amongst pixels in three starkly contrasting paradigms: (1) locality in the spatial domain, (2) locality in the short-term time domain, and (3) locality in the domain of cyclic repetition frequency. Conventional per-pixel models, such as those based on Gaussian Mixture Models, offer no spatial support between adjacent pixels at all. At the other extreme, eigenspace models impose a structure in which every image pixel bears the same relation to every other pixel. But Markov Random Fields permit definition of arbitrary local cliques by construction of a suitable graph, and are used here to facilitate a novel structure capable of exploiting probabilistic local co-occurrence of adjacent Local Binary Patterns. The result is a method exhibiting strong sensitivity to multiple learned local pattern hypotheses, whilst relying solely on monochrome image data. Many background models enforce temporal consistency constraints on a pixel in an attempt to confirm background membership before being accepted as part of the model, and typically some control over this process is exercised by a learning rate parameter. But in busy scenes, a true background pixel may be visible for a relatively small fraction of the time and in a temporally fragmented fashion, thus hindering such background acquisition. However, support in terms of temporal locality may still be achieved by using Combinatorial Optimization to derive short-term background estimates which induce a similar consistency, but are considerably more robust to disturbance. A novel technique is presented here in which the short-term estimates act as ‘pre-filtered’ data from which a far more compact eigen-background may be constructed. Many scenes entail elements exhibiting repetitive periodic behaviour. Some road junctions employing traffic signals are among these, yet little is to be found amongst the literature regarding the explicit modelling of such periodic processes in a scene. Previous work focussing on gait recognition has demonstrated approaches based on recurrence of self-similarity by which local periodicity may be identified. The present work harnesses and extends this method in order to characterize scenes displaying multiple distinct periodicities by building a spatio-temporal model. The model may then be used to highlight abnormality in scene activity. Furthermore, a Phase Locked Loop technique with a novel phase detector is detailed, enabling such a model to maintain correct synchronization with scene activity in spite of noise and drift of periodicity. This thesis contends that these three approaches are all manifestations of the same broad underlying concept: local support in each of the space, time and frequency domains, and furthermore, that the support can be harnessed practically, as will be demonstrated experimentally.
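For context, the sketch below shows the conventional per-pixel Gaussian-mixture background subtraction that the thesis takes as its starting point (OpenCV's MOG2) together with the Local Binary Pattern descriptor its spatial model builds on (scikit-image); the MRF model of co-occurring LBPs itself is not reproduced, and the frames are synthetic.

```python
# Baseline sketch only: per-pixel Gaussian-mixture background subtraction (OpenCV
# MOG2) and the Local Binary Pattern texture descriptor referred to above.
import numpy as np
import cv2
from skimage.feature import local_binary_pattern

subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16,
                                                detectShadows=False)
rng = np.random.default_rng(0)
for t in range(100):                                     # "learn" a noisy static background
    frame = (120 + 5 * rng.standard_normal((120, 160))).astype(np.uint8)
    subtractor.apply(frame)

test = (120 + 5 * rng.standard_normal((120, 160))).astype(np.uint8)
test[40:70, 60:90] = 220                                 # a bright foreground blob
fg_mask = subtractor.apply(test, learningRate=0)         # 255 where foreground is detected
print("foreground pixels:", int((fg_mask > 0).sum()))

# Monochrome local texture description used by the spatial model: uniform LBP codes.
lbp = local_binary_pattern(test, P=8, R=1, method="uniform")
print("distinct LBP codes:", len(np.unique(lbp)))
```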
APA, Harvard, Vancouver, ISO, and other styles
26

Scheffler, Carl. "Applied Bayesian inference : natural language modelling and visual feature tracking." Thesis, University of Cambridge, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.610748.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Nguyen, Phi Bang. "On The Use of Human Visual System Modelling in Watermarking." Paris 13, 2011. http://www.theses.fr/2011PA132035.

Full text
Abstract:
Nowadays, the easy distribution of digital content offers great opportunities for creativity, but also for counterfeiting. This has a negative impact on society and the global economy. To address this problem, various protection solutions have been developed. Cryptography and watermarking are considered as two complementary techniques for such solutions. In the framework of this thesis, we concentrate on digital watermarking. One of the main issues in watermarking is the trade-off between robustness and imperceptibility. However, these two criteria cannot be achieved simultaneously. Indeed, the stronger the watermark, the greater the degradation. To overcome this problem, we rely on an approach that exploits the perceptual characteristics of the HVS (Human Visual System) to find an optimal trade-off between these two conflicting criteria. The basic idea is to incorporate some limitations of the HVS to embed the watermark according to perceptual criteria, in order to guarantee its robustness and transparency. A statistical study of the performance of image quality metrics used in watermarking is also presented. The obtained results are very promising and demonstrate the efficiency of the proposed approach.
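A minimal sketch of the perceptual weighting idea: the watermark strength at each pixel is scaled by a crude visual-masking map, here local standard deviation as a stand-in for a real HVS model, so the distortion concentrates where it is least visible. The masking measure, strength constants, and correlation detector are illustrative assumptions, not the thesis's method.

```python
# Minimal sketch of perceptually weighted watermark embedding: scale a spread-
# spectrum watermark by a local-activity map so distortion concentrates where it
# is least visible. Local standard deviation is a crude stand-in for an HVS model.
import numpy as np
from scipy.ndimage import uniform_filter

def local_std(image, size=7):
    """Local standard deviation as a simple texture-masking proxy."""
    mean = uniform_filter(image, size)
    mean_sq = uniform_filter(image ** 2, size)
    return np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))

def embed(image, key=42, base_strength=1.0):
    rng = np.random.default_rng(key)                # keyed pseudo-random watermark
    watermark = rng.choice([-1.0, 1.0], size=image.shape)
    mask = local_std(image)
    mask = mask / (mask.max() + 1e-9)               # 0 in flat regions, 1 in busy regions
    marked = np.clip(image + base_strength * (0.2 + mask) * watermark, 0, 255)
    return marked, watermark

rng = np.random.default_rng(0)
host = rng.integers(0, 256, size=(128, 128)).astype(float)
marked, wm = embed(host)
# Blind detection by correlating the (re-generated) watermark with the marked image.
print("detector response:", float(np.mean((marked - marked.mean()) * wm)))
```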
APA, Harvard, Vancouver, ISO, and other styles
28

Sang, Neil S. "Visual topology in SDI : a data structure for modelling landscape perception." Thesis, University of South Wales, 2011. https://pure.southwales.ac.uk/en/studentthesis/visual-topology-in-sdi(475699dd-3d19-4548-98a6-93f5e5c0d396).html.

Full text
Abstract:
Visual Topology is used here to describe the spatial relations between objects as they appear in the 2D viewing plane. This thesis sets out the concept, explains why it is needed in Geographic Information Science and suggests how it may be computed through the development of prototype software. Section 1 considers the functionality that any Spatial Data Infrastructure would need to encompass in order to support the inclusion of visual analysis into landscape planning and monitoring systems. Section 2 introduces various aspects of visual topology. In particular it sets out how visual intersections of occluding edges may be modelled topologically and formally defines a novel higher-level topological structure to the viewing space - the 'Euler Zone' - based on the Euler complexity of a graph formed by the occluding horizons in a view. Whether such a graph has meaning to an observer is considered in Section 5, which presents the results of a web-based forced-choice experiment with significant implications for the role of topology in modelling landscape preference via quantitative metrics derived from 20 maps. Sections 3 and 4 discuss how existing methods for handling perspective models and visualisations need to be improved in order to model visual topology. Section 3 focuses on the limitations of current techniques and design criteria for a new methodology. Section 4 looks at the lessons learnt from developing a prototype implementation (VM-LITE), based on Quad-Edge Delaunay Triangulation, in the VoronoiMagic software package. Some potential applications are highlighted, both within landscape modelling and beyond, before drawing conclusions as to the potential for the concepts and methods respectively. Although important research questions remain, particularly as regards viewpoint dynamics, Visual Topology has the potential to fundamentally change how visual modelling is undertaken in GIS. It allows the analysis of scenes based upon a richer representation of individual experience. It provides the basis for data structures that can support the extraction of generalisable metrics from this rich scene information, taking into account the qualitatively different nature of scene topology as distinct from metrics of shape and colour. In addition, new metrics based on attributes only apparent in perspective, such as landform, can be analysed. Finally, it also provides a rationale for reporting units for landscapes with some measure of homogeneity and scale-independence in their scenic properties.
APA, Harvard, Vancouver, ISO, and other styles
29

Adams, Nathan Grant. "A 2D visual language for rapid 3D scene design." Thesis, University of Canterbury. Computer Science and Software Engineering, 2009. http://hdl.handle.net/10092/3021.

Full text
Abstract:
Automatic recognition and digitization of the features found in raster images of 2D topographic maps has a long research history. Very little such work has focused on creating and working with alternatives to the classic isoline-based topographic map. This thesis presents a system that generates 3D scenes from a 2D diagram format designed for user friendliness, with more geometric expressiveness and lower ink usage than classic topographic maps. This thesis explains the rationale for and the structure of the system, and the difficulties encountered in constructing it. It then describes a user study to evaluate the language and the usability of its various features, and draws future research directions from it.
APA, Harvard, Vancouver, ISO, and other styles
30

Engelke, Ulrich. "Modelling Perceptual Quality and Visual Saliency for Image and Video Communications." Doctoral thesis, Karlskrona : Blekinge Institute of Technology, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-00470.

Full text
Abstract:
The evolution of advanced radio transmission technologies for third and future generation mobile radio systems has paved the way for the delivery of mobile multimedia services. This is further enabled through contemporary video coding standards, such as H.264/AVC, allowing wireless image and video applications to become a reality on modern mobile devices. The extensive amount of data needed to represent the visual content and the scarce channel bandwidth constitute great challenges for network operators to deliver an intended quality of service. Appropriate metrics are thus instrumental for service providers to monitor the quality as experienced by the end user. This thesis focuses on subjective and objective assessment methods of perceived visual quality in image and video communication. The content of the thesis can be broadly divided into four parts. Firstly, the focus is on the development of image quality metrics that predict perceived quality degradations due to transmission errors. The metrics follow the reduced-reference approach, thus allowing quality loss during image communication to be measured with only a small overhead of side information. The metrics are designed and validated using subjective quality ratings from two experiments. The distortion assessment performance is further demonstrated through an application for filter design. The second part of the thesis then investigates various methodologies to further improve the quality prediction performance of the metrics. In this respect, several properties of the human visual system are investigated and incorporated into the metric design. It is shown that the quality prediction performance can be considerably improved using these methodologies. The third part is devoted to analysing the impact of the complex distortion patterns on the overall perceived quality, following two goals. Firstly, the confidence of human observers is analysed to identify the difficulties during assessment of the distorted images, showing that the level of confidence is indeed highly dependent on the level of visual quality. Secondly, the impact of content saliency on the perceived quality is identified using region-of-interest selections and eye tracking data from two independent subjective experiments. It is revealed that the saliency of the distortion region indeed has an impact on the overall quality perception and also on the viewing behaviour of human observers when rating image quality. Finally, the quality perception of H.264/AVC coded video containing packet loss is analysed based on the results of a combined subjective video quality and eye tracking experiment. It is shown that the distortion location in relation to the content saliency has a tremendous impact on the overall perceived quality. Based on these findings, a framework for saliency-aware video quality assessment is proposed that strongly improves the quality prediction performance of existing video quality metrics.
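As an illustration of the saliency-aware pooling idea mentioned above, a minimal sketch is given below. The squared-error distortion map, the normalisation and the function name are assumptions chosen for clarity; they are not the reduced-reference metrics developed in the thesis.

```python
import numpy as np

def saliency_weighted_distortion(reference, distorted, saliency, eps=1e-12):
    """Pool a per-pixel distortion map using a saliency map as weights.

    Higher scores indicate stronger distortion in salient regions.
    """
    d = (reference.astype(float) - distorted.astype(float)) ** 2   # placeholder distortion map
    s = saliency.astype(float)
    s = s / (s.sum() + eps)                                        # normalise the weights
    return float((s * d).sum())
```

The design choice is simply that errors landing in regions viewers attend to count for more than errors in the background, which is the intuition behind saliency-aware quality assessment.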
APA, Harvard, Vancouver, ISO, and other styles
31

Cruickshanks, Katie Lawson. "The study of intertidal mollusc polymorphism using spectroradiometry and visual modelling." Thesis, University of Southampton, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.427448.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Galeazzi, González Juan Manuel. "Computational modelling of hand-centred visual representations in the primate brain." Thesis, University of Oxford, 2015. https://ora.ox.ac.uk/objects/uuid:89157e30-b776-4297-9391-0e2b257b6d82.

Full text
Abstract:
Neurons have been found in various areas of the primate brain that respond to the location of objects with respect to the hand. In this thesis we present a number of self-organising theories of how these hand-centred visual receptive fields could develop during visually guided learning in unsupervised neural network models. Experimental chapters 3 and 4 show the development of hand-centred representations under two different learning hypotheses, Continuous Transformation learning and Trace learning, using a neural network model of the primate visual system, VisNet. In Chapter 5 the trace learning hypothesis is advanced by gradually increasing the realism of the visual training conditions. Results are shown using realistic visual scenes consisting of multiple targets presented at the same time around the hand. In Chapter 6 this learning hypothesis was further explored using natural eye and head movements recorded from human participants. This required a new time-accurate differential formulation of the VisNet model to faithfully represent the temporal dynamics of the recorded gaze changes. These results provide an important step in showing how localised, hand-centred receptive fields could emerge under more ecologically realistic training conditions. In Chapter 7 we present a new self-organising model of how hand-centred representations may develop using a proprioceptive signal representing the position of the hand. In these simulations we first develop a body-centred representation of the location of the visual targets by trace learning. We then show how these body-centred representations of the visual targets can be combined with a proprioceptive representation of the location of the hand in order to develop hand-centred representations. This thesis concludes with a discussion of the main findings and some suggestions for future work.
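For readers unfamiliar with trace learning, a minimal sketch of a VisNet-style trace rule is given below: the post-synaptic activity trace decays exponentially over time, so inputs that occur close together in time are bound onto the same output unit. The parameter values, the normalisation step and the function name are illustrative, not those used in the thesis.

```python
import numpy as np

def trace_learning_step(w, x, y, y_trace, eta=0.8, alpha=0.01):
    """One synaptic update of a trace learning rule (VisNet-style sketch).

    w: weight vector, x: pre-synaptic input vector, y: current post-synaptic
    activation (scalar), y_trace: running trace of recent post-synaptic activity.
    """
    y_trace = (1.0 - eta) * y + eta * y_trace     # update the exponentially decaying trace
    w = w + alpha * y_trace * x                   # Hebbian update gated by the trace
    w = w / (np.linalg.norm(w) + 1e-12)           # normalisation keeps the weights bounded
    return w, y_trace
```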
APA, Harvard, Vancouver, ISO, and other styles
33

Ugail, Hassan, and A. Sourin. "Partial differential equations for function based geometry modelling within visual cyberworlds." IEEE Computer Society, 2008. http://hdl.handle.net/10454/2612.

Full text
Abstract:
We propose the use of Partial Differential Equations (PDEs) for shape modelling within visual cyberworlds. PDEs, especially those that are elliptic in nature, enable surface modelling to be defined as a boundary-value problem. Here we show how the PDE based on the Biharmonic equation, subject to suitable boundary conditions, can be used for shape modelling within visual cyberworlds. We discuss an analytic solution formulation for the Biharmonic equation which allows us to define a function-based geometry, whereby the resulting geometry can be visualised efficiently at arbitrary levels of shape resolution. In particular, we discuss how function-based PDE surfaces can be readily integrated within VRML and X3D environments.
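For reference, a sketch of the generic boundary-value formulation referred to above is given below for a parametric surface patch X(u,v); the particular boundary curves and derivative conditions shown are placeholders, since the actual boundary data depend on the shape being modelled.

```latex
% Biharmonic surface patch X(u,v) defined over (u,v) in [0,1] x [0,1]
\[
  \left( \frac{\partial^{2}}{\partial u^{2}} + \frac{\partial^{2}}{\partial v^{2}} \right)^{2}
  \mathbf{X}(u,v) = \mathbf{0},
\]
% subject to positional and derivative conditions on the patch boundary, e.g.
\[
  \mathbf{X}(0,v) = \mathbf{P}_{0}(v), \quad
  \mathbf{X}(1,v) = \mathbf{P}_{1}(v), \quad
  \left.\frac{\partial \mathbf{X}}{\partial u}\right|_{u=0} = \mathbf{d}_{0}(v), \quad
  \left.\frac{\partial \mathbf{X}}{\partial u}\right|_{u=1} = \mathbf{d}_{1}(v).
\]
```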
APA, Harvard, Vancouver, ISO, and other styles
34

Obregón, Mateo. "Hemispheric effects in binocular visual word recognition : experiments and cognitive modelling." Thesis, University of Edinburgh, 2013. http://hdl.handle.net/1842/9732.

Full text
Abstract:
Functionally, a vertically split fovea should confer an advantage to the processor. Visual stimuli arriving at each eye would be vertically split and the two parts sent to different hemispheres, obeying the crossed nature of the visual pathways. I test the prediction of a functional advantage for the separate lateralisation of text processing from the two eyes. I explore this hypothesis by means of psycholinguistic experimentation and cognitive modelling. I employed a haploscope to show foveated text to the two eyes separately, controlling for location and presentation duration, and guaranteeing that each eye could not see the other eye's stimuli. I carried out a series of experiments, based on this novel paradigm, to explore the effects of a vertically split fovea on correctness of word perception. The experiments showed: (i) words presented exclusively to the contralateral hemifoveas are more correctly reported than words presented exclusively to the ipsilateral hemifoveas; (ii) the same full word shown to both eyes and available for fusion led to better perception; (iii) word endings with fewer type-count neighbours were more accurately reported, as were beginnings with more type-count neighbours; (iv) uncrossed-eye stimuli were better perceived than crossed-eye stimuli; (v) principled roles, in a model of isolated word recognition, for lexical and sublexical neighbourhood statistics, syllabicity, hemispheric fine- and coarse-coding differences, sex of the reader, handedness, left and right eye, and visual pathways. Finally, I propose a connectionist model of visual word recognition that incorporates these findings and is a basis for further exploration.
APA, Harvard, Vancouver, ISO, and other styles
35

Tremblay, Pierre-Jules. "The skeptical explorer : a multiple-hypothesis approach to visual modelling and exploration." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape10/PQDD_0007/MQ44046.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Bennett, Alan. "Robofish : visual tracking for modelling the interaction of real and robotic fish /." Leeds : University of Leeds, School of Computer Studies, 2008. http://www.comp.leeds.ac.uk/fyproj/reports/0708/BennettA.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Luna, Ortiz Carlos Ricardo. "Modelling and analysis of Drosophila early visual system : a systems engineering approach." Thesis, University of Sheffield, 2018. http://etheses.whiterose.ac.uk/19574/.

Full text
Abstract:
Over the past century or so Drosophila has been established as an ideal model organism to study, among other things, neural computation and in particular sensory processing. In this respect there are many features that make Drosophila an ideal model organism, especially the fact that it offers a vast amount of genetic and experimental tools for manipulating and interrogating neural circuits. Whilst comprehensive models of sensory processing in Drosophila are not yet available, considerable progress has been made in recent years in modelling the early stages of sensory processing. When it comes to visual processing, accurate empirical and biophysical models of the R1-R6 photoreceptors were developed and used to characterize nonlinear processing at photoreceptor level and to demonstrate that R1-R6 photoreceptors encode phase congruency. A limitation of the latest photoreceptor models is that they do not account explicitly for the modulation of photoreceptor responses by the network of interneurones hosted in the lamina. As a consequence, these models cannot describe in a unifying way the photoreceptor response in the absence of the feedback from the downstream neurons and thus cannot be used to elucidate the role of interneurones in photoreceptor adaptation. In this thesis, electrophysiological photoreceptor recordings acquired in vivo from wild-type and histamine-deficient mutant fruit flies are used to develop and validate new comprehensive models of R1-R6 photoreceptors, which not only predict the response of these photoreceptors in wild-type and mutant fruit flies over the entire environmental range of light intensities, but also characterize explicitly the contribution of lamina neurons to photoreceptor adaptation. As a consequence, the new models provide suitable building blocks for assembling a complete model of the retina which takes into account the true connectivity between photoreceptors and downstream interneurones. A recent study has demonstrated that R1-R6 photoreceptors employ nonlinear processing to selectively encode and enhance temporal phase congruency. It has been suggested that this processing strategy achieves an optimal trade-off between the two competing goals of minimizing distortion in decoding behaviourally relevant stimulus features and minimizing the information rate, which ultimately enables more efficient downstream processing of spatio-temporal visual stimuli for edge and motion detection. Using rigorous information-theoretic tools, this thesis derives and analyzes the rate-distortion characteristics associated with the linear and nonlinear transformations performed by photoreceptors on a stimulus generated by a signal source with a well-defined distribution.
APA, Harvard, Vancouver, ISO, and other styles
38

Dumbuya, Abdulai Don. "Visual perception modelling for intelligent virtual driver agents in synthetic driving simulation." Thesis, Loughborough University, 2003. https://dspace.lboro.ac.uk/2134/34591.

Full text
Abstract:
This thesis documents new research into the modelling of driver vision and the integration of a new vision model into a microscopic traffic simulation tool. It is proposed and demonstrated that modelling of driver vision enhances the realism of simulated driver decision-making and behaviour, in turn leading to improved simulation of driver interactions and traffic flow. Driving and traffic-related research has traditionally fallen into three distinct areas: driver psychology, traffic and highway engineering, and vehicle dynamics, with modelling or experimentation in any one of these areas supported by significant approximation in the others. In contrast to this, the vision research discussed here has been carried out in a context that aims to integrate all of these areas equally. This has been realised through the implementation of a new modelling environment, Synthetic Driving SIMulation, SD-SIM.
APA, Harvard, Vancouver, ISO, and other styles
39

Figl, Kathrin, Michael Derntl, and Sonja Kabicher. "Visual modelling and designing for cooperative learning and development of team competences." Inderscience Enterprises Ltd, 2009. http://epub.wu.ac.at/5649/1/b807.pdf.

Full text
Abstract:
This paper proposes a holistic approach to designing for the promotion of team and social competences in blended learning courses. Planning and modelling cooperative learning scenarios based on a domain-specific modelling notation in the style of UML activity diagrams, and comparing evaluation results with planned outcomes, allow for iterative optimization of a course's design. In a case study - a course on project management for computer science students - the instructional design, including individual and cooperative learning situations, was modelled. Specific emphasis was put on visualising the hypothesised development of team competences in the course design models. These models were subsequently compared to evaluation results obtained during the course. The results show that visual modelling of planned competence promotion enables more focused design, implementation and evaluation of collaborative learning scenarios.
APA, Harvard, Vancouver, ISO, and other styles
40

Afsar, Asfa Jubeen. "An investigation of the optical, visual and economic performance of the pseudophakic eye." Thesis, Glasgow Caledonian University, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.313173.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Salt, John D. "The specification of interactive behaviour patterns in object-oriented discrete-event simulation modelling." Thesis, Brunel University, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.286822.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Dangel, Ronald Josef Zvonimir [Verfasser]. "Modelling eye-hand movements in different visual feedback conditions / Ronald Josef Zvonimir Dangel." Aachen : Shaker, 2012. http://d-nb.info/1066197245/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Song, Guanghan. "Effect of sound in videos on gaze : contribution to audio-visual saliency modelling." Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENT013/document.

Full text
Abstract:
Humans receive a large quantity of information from the environment through sight and hearing. To help us react rapidly and appropriately, there exist mechanisms in the brain that bias attention towards particular regions, namely the salient regions. This attentional bias is not only influenced by vision, but also by audio-visual interaction. According to the existing literature, visual attention can be studied through eye movements; however, the effect of sound on eye movements in videos is little known. The aim of this thesis is to investigate the influence of sound in videos on eye movements and to propose an audio-visual saliency model that predicts salient regions in videos more accurately. For this purpose, we designed a first audio-visual eye-tracking experiment. We created a database of short video excerpts selected from various films. These excerpts were viewed by participants either with their original soundtrack (AV condition) or without soundtrack (V condition). We analyzed the difference in eye positions between participants in the AV and V conditions. The results show that there does exist an effect of sound on eye movement and that the effect is greater for the on-screen speech class. Then, we designed a second audio-visual experiment with thirteen classes of sound. By comparing the difference in eye positions between participants in the AV and V conditions, we conclude that the effect of sound differs depending on the type of sound, and that the classes containing the human voice (i.e. the speech, singer, human noise and singers classes) have the greatest effect. More precisely, the sound source significantly attracted eye positions only when the sound was a human voice. Moreover, participants in the AV condition had a shorter average fixation duration than in the V condition. Finally, we proposed a preliminary audio-visual saliency model based on the findings of the above experiments. In this model, two strategies for fusing audio and visual information were described: one for the speech sound class, and one for the musical instrument sound class. The audio-visual fusion strategies defined in the model improve its prediction performance in the AV condition.
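To illustrate the general shape of such a fusion step, a minimal sketch follows: a visual saliency map is blended with a spatial "audio" map centred on the sound source, gated by whether the sound is a human voice. The Gaussian audio map, the voice-gated weighting and all parameter values are assumptions for illustration, not the fusion rules defined in the thesis.

```python
import numpy as np

def fuse_saliency(visual_sal, sound_source_xy, sound_is_voice, sigma=40.0, beta=0.35):
    """Fuse a visual saliency map with a simple spatial audio map.

    The audio map is a Gaussian centred on the (x, y) image position of the
    sound source and only contributes when the sound is a human voice.
    """
    h, w = visual_sal.shape
    yy, xx = np.mgrid[0:h, 0:w]
    x0, y0 = sound_source_xy
    audio_sal = np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2.0 * sigma ** 2))
    weight = beta if sound_is_voice else 0.0       # voice classes pull gaze, other sounds do not
    fused = (1.0 - weight) * visual_sal + weight * audio_sal
    return fused / (fused.max() + 1e-12)
```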
APA, Harvard, Vancouver, ISO, and other styles
44

Ficapal, Vila Joan. "Anemone: a Visual Semantic Graph." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-252810.

Full text
Abstract:
Semantic graphs have been used for optimizing various natural language processing tasks as well as augmenting search and information retrieval tasks. In most cases these semantic graphs have been constructed through supervised machine learning methodologies that depend on manually curated ontologies such as Wikipedia or similar. In this thesis, which consists of two parts, we first explore the possibility of automatically populating a semantic graph from an ad hoc data set of 50 000 newspaper articles in a completely unsupervised manner. The utility of the visual representation of the resulting graph is tested on 14 human subjects performing basic information retrieval tasks on a subset of the articles. Our study shows that, for entity finding and document similarity, our feature engineering is viable and the visual map produced by our artifact is useful. In the second part, we explore the possibility of identifying entity relationships in an unsupervised fashion by employing abstractive deep learning methods for sentence reformulation. The reformulated sentence structures are qualitatively assessed with respect to grammatical correctness and meaningfulness as perceived by 14 test subjects. The outcomes of this second part are evaluated negatively, as they were not good enough to support any definitive conclusion, but they have opened new directions to explore.
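One simple, fully unsupervised way to populate such a graph is to count entity co-occurrences across articles; the sketch below shows that generic idea. The function, its parameters and the toy data are illustrative assumptions, not the feature engineering actually used in the thesis.

```python
import itertools
from collections import Counter

def cooccurrence_graph(docs_entities, min_count=2):
    """Build a weighted entity co-occurrence graph from per-document entity lists.

    docs_entities: iterable of lists of entity strings, one list per article.
    Returns (node frequencies, edge weights); an edge weight counts how many
    articles mention both entities.
    """
    nodes, edges = Counter(), Counter()
    for entities in docs_entities:
        uniq = sorted(set(entities))
        nodes.update(uniq)
        edges.update(itertools.combinations(uniq, 2))
    edges = {pair: c for pair, c in edges.items() if c >= min_count}
    return nodes, edges

# Toy usage with three made-up "articles"
docs = [["KTH", "Stockholm"], ["KTH", "Stockholm", "EECS"], ["EECS", "KTH"]]
print(cooccurrence_graph(docs, min_count=2))
```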
APA, Harvard, Vancouver, ISO, and other styles
45

Backhaus, Andreas. "Computational modelling of visual search experiments with the selective attention for identification model (SAIM)." Thesis, University of Birmingham, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.487486.

Full text
Abstract:
Visual search is a commonly used experimental procedure to explore human processing of visual scenes containing multiple objects (Wolfe, 1998b). This thesis introduces a new computational model of visual search, termed the Selective Attention for Identification Model (SAIM). SAIM aims to solve translation-invariant object identification in a connectionist modelling framework. The thesis demonstrates that SAIM can successfully simulate the following experimental evidence: symmetric searches (L amongst Ts and/or upside-down Ts (Duncan & Humphreys, 1989; Egeth & Dagenbach, 1991)), asymmetric searches of oriented lines (Treisman & Gormican, 1988), line size (Treisman & Gormican, 1988), item complexity (Treisman & Souther, 1985; Rauschenberger & Yantis, 2006) and familiarity (Wang, Cavanagh, & Green, 1994; Shen & Reingold, 2001), the influence of distractor (non-target) orientation (Foster & Ward, 1991; Foster & Westland, 1995), effects of priming (Hodsoll & Humphreys, 2001; Mueller, Reimann, & Krummenacher, 2003; Wolfe, Butcher, Lee, & Hyle, 2003; Anderson, Heinke, & Humphreys, 2006) and 'contextual cueing' (Chun & Jiang, 1998). Crucially, SAIM's success emerges from the competition of objects for object identification. This competition is chiefly influenced by three factors: the similarity between search targets and non-targets (distractors), the visual features of the distractors, e.g., line orientation, and the influence of the object identification stage on the selection process (top-down modulation). On the other hand, a detailed review in this thesis highlights that none of the existing computational models and theories can satisfactorily account for these experimental results. Instead, each theoretical account contains only a subset of the factors suggested by SAIM and, therefore, can explain only a subset of the experimental data. In SAIM these factors are pulled together into a unifying approach of parallel competitive interaction to visual search.
APA, Harvard, Vancouver, ISO, and other styles
46

Mangan, Michael. "Visual homing in field crickets and desert ants : a comparative behavioural and modelling study." Thesis, University of Edinburgh, 2011. http://hdl.handle.net/1842/5678.

Full text
Abstract:
Visually guided navigation represents a long-standing goal in robotics. Insights may be drawn from various insect species for which visual information has been shown sufficient for navigation in complex environments, however the generality of visual homing abilities across insect species remains unclear. Furthermore, various models have been proposed as strategies employed by navigating insects, yet comparative studies across models and species are lacking. This work addresses these questions in two insect species not previously studied: the field cricket Gryllus bimaculatus, for which almost no navigational data is available; and the European desert ant Cataglyphis velox, a relation of the African desert ant Cataglyphis bicolor which has become a model species for insect navigation studies. The ability of crickets to return to a hidden target using surrounding visual cues was tested using an analogue of the Morris water-maze, a standard paradigm for spatial memory testing in rodents. Crickets learned to re-locate the hidden target using the provided visual cues, with the best performance recorded when a natural image was provided as stimulus rather than clearly identifiable landmarks. The role of vision in navigation was also observed for desert ants within their natural habitat. Foraging ants formed individual, idiosyncratic, visually guided routes through their cluttered surroundings, as has been reported in other ant species inhabiting similar environments. In the absence of other cues, ants recalled their route even when displaced along their path, indicating that ants recall previously visited places rather than a sequence of manoeuvres. Image databases were collected within the environments experienced by the insects using custom panoramic cameras that approximated the insect eye view of the world. Six biologically plausible visual homing models were implemented and their performance assessed across experimental conditions. The models were first assessed on their ability to replicate the relative performance across the various visual surrounds in which crickets were tested. That is, best performance was sought with the natural scene, followed by blank walls and then the distinct landmarks. Only two models were able to reproduce the pattern of results observed in crickets: pixel-wise image difference with RunDown and the centre of mass average landmark vector. The efficacy of models was then assessed across locations in the ant habitat. A 3D world was generated from the captured images, providing noise-free, high-spatial-resolution images as model input. Best performance was found for optic flow and image-difference-based models. However, in many locations the centre of mass average landmark vector failed to provide reliable guidance. This work shows that two previously unstudied insect species can navigate using surrounding visual cues alone. Moreover, six biologically plausible models of visual navigation were assessed in the same environments as the insects and only an image-difference-based model succeeded in all experimental conditions.
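To make the pixel-wise image difference idea concrete, a minimal sketch of the generic rotational image difference function is given below: the current panoramic view is compared against a stored snapshot at every azimuthal shift and the best-matching shift gives a candidate heading. This is only a sketch of the general technique; the RunDown descent strategy and the exact difference measure used in the thesis may differ.

```python
import numpy as np

def rotational_image_difference(current, snapshot):
    """RMS pixel difference between a panoramic view and a stored snapshot,
    computed for every horizontal (azimuthal) shift of the current view."""
    width = current.shape[1]
    cur, snap = current.astype(float), snapshot.astype(float)
    diffs = np.empty(width)
    for shift in range(width):
        diffs[shift] = np.sqrt(np.mean((np.roll(cur, shift, axis=1) - snap) ** 2))
    return diffs

def best_heading_deg(current, snapshot):
    """Azimuth (degrees) at which the current view best matches the snapshot."""
    diffs = rotational_image_difference(current, snapshot)
    return 360.0 * np.argmin(diffs) / current.shape[1]
```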
APA, Harvard, Vancouver, ISO, and other styles
47

Guedes, A. P. "An integrated approach to logistics strategy planning using visual interactive modelling and decision support." Thesis, Cranfield University, 1994. http://dspace.lib.cranfield.ac.uk/handle/1826/10740.

Full text
Abstract:
The research in this thesis relates to the use of mathematical models and computer-based modelling tools for supporting the Logistics Strategy Planning (LSP) process. A conceptual modelling framework and a computer-based modelling and decision support system are developed to address practical LSP problems and improve the level of decision support currently reported. The LSP process is described, its complexity recognised, and the problem domain defined. The evolution of the logistics strategy concept is addressed, and the concept is defined as integrating procurement, production and distribution aspects. The need for decision support is also identified. A comprehensive review of models and modelling techniques from Management Science / Operations Research (MS/OR), and of computer-based tools in the LSP context, is carried out. The appropriateness of the various models and types of computer tools is assessed. Gaps and drawbacks in current approaches to LSP are also identified. This revealed that past efforts have been directed towards producing more efficient solving techniques and tools for limited aspects of LSP, rather than developing models and tools that could address more realistic problems, recognising an integrated view of LSP. Hence, current approaches to LSP are fragmented in their handling of the procurement, production and distribution aspects. A conceptual modelling framework is proposed to support the LSP process. It includes a planning process, the logistics elements and key drivers required to define a model/representation of the LSP problem, and a selection of models/techniques to address various classes of LSP (sub-)problems. The framework provides an integrated view of all elements involved and contributes to formalising the knowledge necessary to address LSP problems. A modelling and decision support system is developed in order to demonstrate the framework and assess the approach proposed here for addressing practical LSP studies. STRATOVISION combines Visual Interactive Modelling (VIM) and Knowledge-Based (KB) techniques with "traditional" MS/OR models and modelling techniques. Additionally, the system implements a problem-centred approach, combining various MS/OR solving techniques (e.g. simulation, heuristics and optimisation) into a single modelling environment. A comparative analysis and discussion of functionality supports the view that STRATOVISION overcomes most limitations found with other modelling systems and provides better functionality to address LSP problems. The discussion covers the modelling phase, the generation of options and the detailed evaluation of scenarios. Special emphasis is given both to the use of the Visual Interactive (VI) functionality for modelling and problem solving, and to the use of the models/techniques included in STRATOVISION's model base. Several case studies are used to illustrate STRATOVISION's integrated approach to LSP and validate the model design. A comparison with fragmented approaches to LSP is carried out and the use of STRATOVISION in practical LSP studies including procurement, production and distribution decisions is reported. The analysis provides supporting evidence of the benefits achieved by using STRATOVISION's integrated approach to LSP. Finally, contributions of the approach are discussed and areas for further work pointed out.
APA, Harvard, Vancouver, ISO, and other styles
48

New, Stephen John. "The application of visual interactive modelling to managerial decision support in a manufacturing environment." Thesis, University of Manchester, 1993. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.668318.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Mould, Matthew Simon. "Visual search in natural scenes with and without guidance of fixations." Thesis, University of Manchester, 2012. https://www.research.manchester.ac.uk/portal/en/theses/visual-search-in-natural-scenes-with-and-without-guidance-of-fixations(00c10ea2-34fa-40b9-a304-7c1829545475).html.

Full text
Abstract:
From the airport security guard monitoring luggage to the rushed commuter looking for their car keys, visual search is one of the most common requirements of our visual system. Despite its ubiquity, many aspects of visual search remain unaccounted for by computational models. Difficulty arises when trying to account for any internal biases of an observer undertaking a search task, or when trying to decompose an image of a natural scene into relevant fundamental properties. Previous studies have attempted to understand visual search by using highly simplified stimuli, such as discrete search arrays. Although these studies have been useful, the extent to which the search of discrete search arrays can represent the search of more naturalistic stimuli is subject to debate. The experiments described in this thesis used images of natural scenes as stimuli and attempted to address two key objectives. The first was to determine which image properties influenced the detectability of a target. Features investigated included chroma, entropy, contrast, edge contrast and luminance. The proportion of variance in detection ability accounted for by each feature was estimated and the features were ranked in order of importance to detection. The second objective was to develop a method for guiding human fixations by modifying image features while observers were engaged in a search task. To this end, images were modified using the image-processing method unsharp masking. To assess the effect of the image modification on fixations, eye movements were monitored using an eye-tracker. Another subject addressed in the thesis was the classification of fixations from eye movement data. There exists no standard method for achieving this classification. Existing methods have employed thresholds for speed, acceleration, duration and stability of point-of-gaze to classify fixations, but these thresholds have no commonly accepted values. Presented in this thesis is an automatic nonparametric method for classifying fixations, which extracts fixations without requiring any input parameters from the experimenter. The method was tested against independent classifications by three experts. The accurate estimation of Kullback-Leibler divergence, an information-theoretic quantity which can be used to compare probability distributions, was also addressed in this thesis since the quantity was used to compare fixation distributions. Different methods for the estimation of Kullback-Leibler divergence were tested using artificial data and it was shown that a method for estimating the quantity directly from input data outperformed methods which required binning of data or kernel density estimation to estimate underlying distributions.
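As an illustration of a direct (sample-based) estimator of the kind compared above, a minimal sketch of a k-nearest-neighbour Kullback-Leibler estimator is given below, assuming two sets of fixation coordinates as input. The specific estimator form, the function name and the default k are assumptions for illustration; the thesis's chosen direct estimator may differ.

```python
import numpy as np
from scipy.spatial import cKDTree

def kl_divergence_knn(x, y, k=1):
    """k-nearest-neighbour estimate of D(P || Q) from samples x ~ P and y ~ Q.

    x: (n, d) array of samples from P; y: (m, d) array of samples from Q.
    Assumes continuous distributions and no duplicate points in x.
    """
    x, y = np.atleast_2d(x), np.atleast_2d(y)
    n, d = x.shape
    m = y.shape[0]
    # distance from each x_i to its k-th neighbour within x (skip the point itself)
    rho = cKDTree(x).query(x, k=k + 1)[0][:, -1]
    # distance from each x_i to its k-th neighbour within y
    nu = cKDTree(y).query(x, k=k)[0]
    if k > 1:
        nu = nu[:, -1]
    return d * np.mean(np.log(nu / rho)) + np.log(m / (n - 1.0))
```

Estimators of this type work directly on the samples, avoiding the bin-width or kernel-bandwidth choices that binned and kernel-density approaches require.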
APA, Harvard, Vancouver, ISO, and other styles
50

Tian, Jingduo. "Quantitative performance evaluation of autonomous visual navigation." Thesis, University of Manchester, 2017. https://www.research.manchester.ac.uk/portal/en/theses/quantitative-performance-evaluation-of-autonomous-visual-navigation(be6349b5-3b38-4ac5-aba2-fc64597fd98a).html.

Full text
Abstract:
Autonomous visual navigation algorithms for ground mobile robotic systems working in unstructured environments have been extensively studied for decades. Among these works, algorithm performance evaluations between different design configurations mainly involve the use of benchmark datasets with a limited number of real-world trials. Such evaluations, however, have difficulty providing sufficient statistical power for performance quantification. In addition, they are unable to independently assess the algorithm's robustness to individual realistic uncertainty sources, including environment variations and processing errors. This research presents a quantitative approach to the performance and robustness evaluation and optimisation of autonomous visual navigation algorithms, using large-scale Monte-Carlo analyses. The Monte-Carlo analyses are supported by a simulation environment designed to represent a real-world level of visual information, using perturbations from realistic visual uncertainties and processing errors. With the proposed evaluation method, a stereo-vision-based autonomous visual navigation algorithm is designed and iteratively optimised. This algorithm encodes edge-based 3D patterns into a topological map and uses them for the subsequent global localisation and navigation. An evaluation of the performance perturbations from individual uncertainty sources indicates that stereo-matching error is a significant limitation of the current system design. Therefore, an optimisation approach is proposed to mitigate this error. This maximises the Fisher information available in stereo image pairs by manipulating the stereo geometry. Moreover, the simulation environment is further updated in association with the algorithm design, which includes quantitative modelling and simulation of how localisation error propagates to the subsequent navigation behaviour. Through long-term Monte-Carlo evaluation and optimisation, the algorithm's performance is significantly improved. Simulation experiments demonstrate that the navigation of a 3-DoF robotic system is achieved in an unstructured environment, while possessing sufficient robustness to realistic visual uncertainty sources and systematic processing errors.
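To illustrate the kind of Monte-Carlo propagation of stereo-matching error described above, a minimal sketch follows, based on the standard pinhole stereo relation Z = f * B / d. The focal length, baseline, noise level and trial count are hypothetical values chosen for illustration, not the configuration evaluated in the thesis.

```python
import numpy as np

def monte_carlo_depth_error(true_depth_m, focal_px=700.0, baseline_m=0.12,
                            match_noise_px=0.5, n_trials=100_000, seed=0):
    """Propagate stereo-matching error into depth error by Monte-Carlo sampling.

    Disparity estimates are perturbed with zero-mean Gaussian matching noise and
    the resulting depth error distribution is summarised by its bias and spread.
    """
    rng = np.random.default_rng(seed)
    true_disp = focal_px * baseline_m / true_depth_m
    noisy_disp = true_disp + rng.normal(0.0, match_noise_px, n_trials)
    noisy_disp = noisy_disp[noisy_disp > 0]                # drop non-physical samples
    depth_err = focal_px * baseline_m / noisy_disp - true_depth_m
    return depth_err.mean(), depth_err.std()

for z in (2.0, 5.0, 10.0):
    bias, spread = monte_carlo_depth_error(z)
    print(f"depth {z:4.1f} m: bias {bias:+.3f} m, std {spread:.3f} m")
```

Running the loop shows how both the bias and the spread of the depth error grow with distance, which is the sort of statistic a large-scale Monte-Carlo evaluation can quantify with high statistical power.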
APA, Harvard, Vancouver, ISO, and other styles