Dissertations / Theses on the topic 'Visualisation'




Consult the top 50 dissertations / theses for your research on the topic 'Visualisation.'




1

Long, Elena. "Election data visualisation." Thesis, University of Plymouth, 2013. http://hdl.handle.net/10026.1/1589.

Abstract:
Visualisations of election data produced by the mass media, other organisations and even individuals are becoming increasingly available across a wide variety of platforms and in many different forms. As more data become available digitally and as improvements to computer hardware and software are made, these visualisations have become more ambitious in scope and more user-friendly. Research has shown that visualising data is an extremely powerful method of communicating information to specialists and non-specialists alike. This amounts to a democratisation of access to political and electoral data. To some extent political science lags behind the progress that has been made in the field of data visualisation. Much of the academic output remains committed to the paper format and much of the data presentation is in the form of simple text and tables. In the digital and information age there is a danger that political science will fall behind. This thesis reports on a number of case studies where efforts were made to visualise election data in order to clarify its structure and to present its meaning. The first case study demonstrates the value of data visualisation to the research process itself, facilitating the understanding of effects produced by different ways of estimating missing data. A second study sought to use visualisation to explain complex aspects of voting systems to the wider public. Three further case studies demonstrate the value of collaboration between political scientists and others possessing a range of skills embracing data management, software engineering, broadcasting and graphic design. These studies also demonstrate some of the problems that are encountered when trying to distil complex data into a form that can be easily viewed and interpreted by non-expert users. More importantly, these studies suggest that when the skills balance is correct then visualisation is both viable and necessary for communicating information on elections.
2

Daniel, G. W. "Video visualisation." Thesis, Swansea University, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.636344.

Abstract:
The main contributions of this work can be summarised as:
• Presenting a collection of hypotheses that form the backbone of, and underpin the motivation for, work conducted within the field of video visualisation.
• Presenting a prototype system to demonstrate the technical feasibility of video visualisation within a surveillance context, along with detailing its generic pipeline.
• Providing an investigation into video visualisation, offering a general solution by utilising volume visualisation techniques, such as spatial and opacity transfer functions.
• Providing the first set of evidence to support some of the presented hypotheses.
• Demonstrating both stream and hardware-based rendering in the context of video visualisation.
• Incorporating and evaluating a collection of change detection (CD) metrics, concerning their ability to produce effective video visualisations.
• Presenting a novel investigation into interaction control protocols within multi-user and multi-camera environments.
Video datasets are a type of volume dataset and are treated as such, allowing ray-traced rendering and advanced volume modelling techniques to be applied to the video. It is shown how the interweaving of image processing and volume visualisation techniques can be used to create effective visualisations that aid the human vision system in the interpretation of video-based content and features. Through the application of CD methodologies, it is shown how feature volumes are created and rendered to show temporal variations within a period.
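As an illustration of the idea of treating video as a volume, the simplest change-detection metric the abstract alludes to can be sketched as per-pixel frame differencing, with the resulting per-frame difference maps stacked into a "feature volume". This is a minimal, generic sketch under our own assumptions, not the thesis's implementation; the function name and threshold are illustrative.

```python
import numpy as np

def change_volume(frames, threshold=0.1):
    """Stack per-pixel absolute frame differences into a feature volume.

    A minimal sketch of one change-detection (CD) metric: simple frame
    differencing. frames has shape (t, h, w) with values in [0, 1];
    the result has shape (t-1, h, w), with voxels marking change.
    """
    diffs = np.abs(np.diff(frames, axis=0))        # temporal gradient
    return (diffs > threshold).astype(np.float32)  # binary feature volume

# Tiny synthetic video: a bright pixel moves one step per frame.
video = np.zeros((3, 4, 4))
video[0, 1, 1] = video[1, 1, 2] = video[2, 1, 3] = 1.0
vol = change_volume(video)
print(vol.shape)       # (2, 4, 4)
print(int(vol.sum()))  # 4 -- one disappearance and one appearance per step
```

Such a volume can then be rendered with standard volume-visualisation machinery (opacity transfer functions, ray casting) to reveal temporal variation.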
3

Paverd, Wayne. "Information visualisation." Master's thesis, University of Cape Town, 1996. http://hdl.handle.net/11427/13528.

Abstract:
Bibliography: leaves 100-102.
Information visualisation uses interactive three-dimensional (3D) graphics to create an immersive environment for the exploration of large amounts of data. Unlike scientific visualisation, where the underlying physical process usually takes place in 3D space, information visualisation deals with purely abstract data. Because abstract data often lacks an intuitive visual representation, selecting an appropriate representation of the data becomes a challenge. As a result, the creation of an information visualisation involves as much exploration and investigation as the eventual exploration of the data itself. Unless the user of the data is also the creator of the visualisations, the turnaround time can therefore become prohibitive. In our experience, existing visualisation applications often lack the flexibility required to easily create information visualisations. These solutions do not provide sufficiently flexible and powerful means of both visually representing the data and specifying user-interface interactions with the underlying database. This thesis describes a library of classes that allows the user to easily implement visualisation primitives, with their accompanying interactions. These classes are not individual visualisations but can be combined to form more complex visualisations. Classes for creating various primitive visual representations have been created. In addition, a number of auxiliary classes have been created that provide the user with the ability to swap between visualisations, scale whole scenes, and use automatic level-of-detail control. The classes all have built-in interaction methods which allow the user to easily incorporate the forms of interaction that we found most useful, for example the ability to select a data item and thereby obtain more information about it, or the ability to allow the user to change the position of certain data items. To demonstrate the effectiveness of the classes we implemented and evaluated a number of example systems. We found that using the classes decreased development time and enabled people with little or no visualisation experience to create information visualisations.
4

Chisnall, David. "Autonomic visualisation." Thesis, Swansea University, 2007. https://cronfa.swan.ac.uk/Record/cronfa42623.

Abstract:
This thesis introduces the concept of autonomic visualisation, where principles of autonomic systems are brought to the field of visualisation infrastructure. Problems in visualisation have a specific set of requirements which are not always met by existing systems. The first half of this thesis explores a specific problem for large-scale visualisation: that of data management. Visualisation algorithms have somewhat different requirements to other external memory problems, due to the fact that they often require access to all, or a large subset, of the data in a way that is highly dependent on the view. This thesis proposes a knowledge-based approach to pre-fetching in this context, and presents evidence that such an approach yields good performance. The knowledge-based approach is incorporated into a five-layer model, which provides a systematic way of categorising and designing out-of-core, or external memory, systems. This model is demonstrated with two example implementations, one in the local and one in the remote context. The second half explores autonomic visualisation in the more general case. A simulation tool, created for the purpose of designing autonomic visualisation infrastructure, is presented. This tool, SimEAC, provides a way of facilitating the development of techniques for managing large-scale visualisation systems. The abstract design of the simulation system, as well as details of the implementation, are presented. The architecture of the simulator is explored, and then the system is evaluated in a number of case studies indicating some of the ways in which it can be used. The simulator provides a framework for experimentation and rapid prototyping of large-scale autonomic systems.
5

Dillenseger, Jean-Louis. "Visualisation Scientifique en médecine. Application à la visualisation de l'anatomie et à la visualisation en épileptologie clinique." Habilitation à diriger des recherches, Université Rennes 1, 2003. http://tel.archives-ouvertes.fr/tel-00130932.

Abstract:
In medicine, the role of the image is paramount. Since the Renaissance, images have been one of the principal vectors for transmitting knowledge. More recently, the rise of three-dimensional imaging techniques has extended the importance of images to most medical disciplines and procedures. Quite naturally, then, medicine has been one of the privileged application domains of scientific visualisation. My research falls squarely within this discipline of scientific visualisation and takes the form of original representation solutions developed for, and associated with, particular medical problems.
To this end, a reflection on the visualisation tool was carried out in order to propose a well-defined framework to guide the design of a representation tool suited to a particular discipline and problem. The most original point of this reflection concerns an attempt to formalise the evaluation of the performance of visualisation tools.
Two major application domains served to demonstrate the relevance of this general visualisation framework:
- The general visualisation of anatomy, with, first, the design of a generic tool for visualising medical data, multifunction ray casting. This tool was then extended along two research axes: on the one hand, the integration of knowledge models into the image synthesis procedure, and on the other, interventional imaging, particularly applications in urology.
- The contributions of visualisation to the interpretation of data acquired from epileptic patients, in particular the development of complementary tools enabling a progressive analysis of the mechanisms and structures involved in seizures.
6

Hatch, Andrew. "Software architecture visualisation." Thesis, Durham University, 2004. http://etheses.dur.ac.uk/3040/.

Abstract:
Tracing the history of software engineering reveals a series of abstractions. In early days, software engineers would construct software using machine code. As time progressed, software engineers and computer scientists developed higher levels of abstraction in order to provide tools to assist in building larger software systems. This has resulted in high-level languages, modelling languages, design patterns, and software architecture. Software architecture has been recognised as an important tool for designing and building software. Some research takes the view that the success or failure of a software development project depends heavily on the quality of the software architecture. For any software system, there are a number of individuals who have some interest in the architecture. These stakeholders have differing requirements of the software architecture depending on the role that they take. Stakeholders include the architects, designers, developers and also the sales, services and support teams and even the customer for the software. Communication and understanding of the architecture is essential in ensuring that each stakeholder can play their role during the design, development and deployment of that software system. Software visualisation has traditionally been focused on aiding the understanding of software systems by those who perform development and maintenance tasks on that software. In supporting developers and maintainers, software visualisation has been largely concerned with representing static and dynamic aspects of software at the code level. Typically, a software visualisation will represent control flow, classes, objects, import relations and other such low level abstractions of the software. This research identifies the fundamental issues concerning software architecture visualisation. 
It does this by identifying the practical use of software architecture in the real world, and considers the application of software visualisation techniques to the visualisation of software architecture. The aim of this research is to explore the ways in which software architecture visualisation can assist in the tasks undertaken by the differing stakeholders in a software system and its architecture. A prototype tool, named ArchVis, has been developed to enable the exploration of some of the fundamental issues in software architecture visualisation. ArchVis is a new approach to software architecture visualisation that is capable of utilising multiple sources and representations of architecture in order to generate multiple views of software architecture. The mechanism by which views are generated means that they can be more relevant to a wider collection of stakeholders in that architecture. During evaluation ArchVis demonstrated the capability of utilising a number of data sources in order to produce architecture visualisations. ArchVis's view model is capable of generating the necessary views for architecture stakeholders, and those stakeholders can navigate through the views and data in order to obtain relevant information. The results of evaluating ArchVis using a framework and scenarios demonstrate that the majority of the objectives of this research have been achieved.
7

Köse, Cemal. "Parallel volume visualisation." Thesis, University of Bristol, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.361100.

8

Charters, Stuart Muir. "Virtualising visualisation : a distributed service based approach to visualisation on the Grid." Thesis, Durham University, 2006. http://etheses.dur.ac.uk/2659/.

Abstract:
Context: Current visualisation systems are not designed to work with the large quantities of data produced by scientists today; they rely on the abilities of a single resource to perform all of the processing and visualisation of data, which limits the problem size that they can investigate. Objectives: The objectives of this research are to address the issues encountered by scientists with current visualisation systems and the deficiencies highlighted in those systems. The research then addresses the question: "How do you design the ideal service-oriented architecture for visualisation that meets the needs of scientists?" Method: A new design for a visualisation system based upon a Service Oriented Architecture is proposed to address the issues identified; the architecture is implemented using Java and web service technology. The implementation of the architecture also realised several case study scenarios as demonstrators. Evaluation: Evaluation was performed using case study scenarios of scientific problems, and performance data was gathered through experimentation. The scenarios were assessed against the requirements for the architecture, and the performance data against a base case simulating a single-resource implementation. Conclusion: The virtualised visualisation architecture shows promise for applications where visualisation can be performed in a highly parallel manner and where the problem can be easily sub-divided into chunks for distributed processing.
9

Andersson, H. Magnus. "Visualisation of composites manufacturing." Luleå, 2003. http://epubl.luth.se/1402-1544/2003/21/index.html.

10

Knight, David A. J. "Three-dimensional flow visualisation." Thesis, University of Oxford, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.239336.

11

Eyre-Todd, Richard A. "Safe data structure visualisation." Thesis, University of Edinburgh, 1993. http://hdl.handle.net/1842/14819.

Abstract:
A simple three-layer scheme is presented which broadly categorises the types of support that a computing system might provide for program monitoring and debugging, namely hardware, language and external software support. Considered as a whole, the scheme forms a model for an integrated debugging-oriented system architecture. This thesis describes work which spans the upper levels of this architecture. A programming language may support debugging by preventing or detecting the use of objects that have no value. Techniques to help with this task, such as formal verification, static analysis, required initialisation and default initialisation, are considered. Strategies for tracking variable status at run-time are discussed. Novel methods are presented for adding run-time pointer variable checking to a language that does not normally support this facility. Language constructs that allow the selective control of run-time unassigned-variable checking for scalar and composite objects are also described. Debugging at a higher level often involves the extensive examination of a program's data structures. The problem of visualising a particular kind of data structure, the hierarchic graph, is discussed, using the previously described language-level techniques to ensure data validity. The elementary theory of a class of two-level graphs is presented, together with several algorithms to perform a clustering technique that can improve graph layout and aid understanding.
12

Spanlang, Bernhard. "Garment modelling and visualisation." Thesis, University College London (University of London), 2005. http://discovery.ucl.ac.uk/1446482/.

Abstract:
This thesis describes methods for modelling and visualising fabrics and garments on human body shapes by means of computer systems. The focus is on automatic real-time simulation but, at the same time, realistic appearance of garments is important; very high physical accuracy in the simulation of the mechanical properties of cloth is relaxed, however. We first introduce an image-based collision detection (IBCD) mechanism which harnesses the rasterisation, depth buffer and interpolation units of existing graphics hardware for robust and efficient collision detection and response between body-scan and garment surfaces. We introduce directional velocity modification (DVM), a numerical method to overcome a phenomenon called super-elasticity in conventional mass-spring particle systems (MSPS). A reverse-engineering technique is presented that enables us to extract assembly information and data for realistic visualisation from photographs of real garments. The images used in this technique are acquired in a set-up that includes a high-resolution digital imaging device. Body landmark data is used automatically to pre-position garment panels around a body-scan before virtual sewing. We show that DVM generates visually pleasing garments on static and animated body-scans. The efficiency of the IBCD is demonstrated by a comparison with a traditional geometry-based method. A system that exploits the developed techniques was created for Bodymetrics Ltd., at their request. This system is integrated with a 3D whole-body scanner for fully automatic virtual try-on of clothes in a department store. The system represents a world-first installation and was reported in several newspapers and fashion magazines. While it demonstrates the efficiency and robustness of the developed methods, clothes shoppers also benefit: the system gives quicker fit feedback than a traditional mirror in a changing room and can display their dressed body from any angle and distance.
In collaboration with a garment designer we also tested the system in a design scenario. In an evaluation of the commercial potential of the developed methods, the major players in markets closely related to virtual try-on technology are identified and market-entry strategies are discussed.
13

Garda-Osorio, Cesar. "Data mining and visualisation." Thesis, University of the West of Scotland, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.742763.

14

Chabane, Abderrahim. "Visualisation d'œuvres d'art masquées." Thesis, Paris 10, 2014. http://www.theses.fr/2014PA100191/document.

Abstract:
This thesis addresses the diagnosis of wall paintings hidden under a layer of lime or under later overpainting. Three original methods are presented and their principles experimentally validated. First, a new method for exciting the deeper layers with far-infrared radiation (λ > 20 μm), which is more efficient than the conventional excitation method based on thermal conduction. Filtering out the short wavelengths eliminates heating at the surface of the material, and since the lime layer is semi-transparent in the far infrared, the paint layers can be illuminated directly, yielding thermograms that reveal the hidden motifs. We also studied the far-infrared transmission of paint layers by Fourier-transform spectrometry. Around 30 μm in the far infrared, paints exhibit absorption bands characteristic of their functional groups, so direct measurement of the total far-infrared emission of paints at room temperature allows their identification; we designed a system that could be taken into the field. Finally, we introduced a new approach to diagnosing wall paintings hidden under a lime layer, based on measuring the time of flight of backscattered ballistic photons collected by a streak camera with a resolution of 2 ps.
15

Sun, Yi. "Non-linear hierarchical visualisation." Thesis, Aston University, 2002. http://publications.aston.ac.uk/13263/.

Abstract:
This thesis applies a hierarchical latent trait model system to a large quantity of data. The motivation for it was the lack of viable approaches to analyse High Throughput Screening datasets, which may include thousands of data points with high dimensions. We believe that a latent variable model with a non-linear mapping from the latent space to the data space is a preferred choice for visualising a complex high-dimensional data set. As a type of latent variable model, the latent trait model (LTM) can deal with either continuous data or discrete data, which makes it particularly useful in this domain. In addition, with the aid of differential geometry, we can infer the distribution of the data from magnification factor and curvature plots. Rather than obtaining the useful information from just a single plot, a hierarchical LTM arranges a set of LTMs and their corresponding plots in a tree structure. We model the whole data set with an LTM at the top level, which is broken down into clusters at deeper levels of the hierarchy. In this manner, refined visualisation plots can be displayed at deeper levels and sub-clusters may be found. The hierarchy of LTMs is trained using the expectation-maximisation (EM) algorithm to maximise its likelihood with respect to the data sample. Training proceeds interactively in a recursive fashion (top-down): the user subjectively identifies interesting regions on the visualisation plot that they would like to model in greater detail. At each stage of hierarchical LTM construction, the EM algorithm alternates between the E-step and the M-step. Another problem that can occur when visualising a large data set is that there may be significant overlaps of data clusters, making it very difficult for the user to judge where the centres of regions of interest should be put. We address this problem by employing the minimum message length technique, which can help the user to decide the optimal structure of the model.
16

Mumtaz, Shahzad. "Visualisation of bioinformatics datasets." Thesis, Aston University, 2015. http://publications.aston.ac.uk/25261/.

Abstract:
Analysing the molecular polymorphism and interactions of DNA, RNA and proteins is of fundamental importance in biology. Predicting the functions of polymorphic molecules is important in order to design more effective medicines. Analysing major histocompatibility complex (MHC) polymorphism is important for mate choice, epitope-based vaccine design, transplantation rejection and so on. Most of the existing exploratory approaches cannot analyse these datasets because of the large number of molecules and the high number of descriptors per molecule. This thesis develops novel methods for data projection in order to explore high-dimensional biological datasets by visualising them in a low-dimensional space. With increasing dimensionality, some existing data visualisation methods such as generative topographic mapping (GTM) become computationally intractable. We propose variants of these methods, where we use log-transformations at certain steps of the expectation-maximisation (EM) based parameter learning process, to make them tractable for high-dimensional datasets. We demonstrate these proposed variants both on synthetic data and on an electrostatic potential dataset of MHC class-I. We also propose to extend a latent trait model (LTM), suitable for visualising high-dimensional discrete data, to simultaneously estimate feature saliency as an integrated part of the parameter learning process of a visualisation model. This LTM variant not only gives better visualisation by modifying the projection map based on feature relevance, but also helps users to assess the significance of each feature. Another problem which is not addressed much in the literature is the visualisation of mixed-type data. We propose to combine GTM and LTM in a principled way, where appropriate noise models are used for each type of data in order to visualise mixed-type data in a single plot. We call this model a generalised GTM (GGTM).
We also propose to extend the GGTM model to estimate feature saliencies while training a visualisation model; this is called GGTM with feature saliency (GGTM-FS). We demonstrate the effectiveness of these proposed models both for synthetic and real datasets. We evaluate visualisation quality using quality metrics such as a distance distortion measure and rank-based measures: trustworthiness, continuity, and mean relative rank errors with respect to data space and latent space. In cases where the labels are known we also use the quality metrics of KL divergence and nearest-neighbour classification error in order to determine the separation between classes. We demonstrate the efficacy of these proposed models both for synthetic and real biological datasets, with a main focus on the MHC class-I dataset.
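One of the quality metrics the abstract names, the distance distortion measure, can be illustrated with a generic normalised-stress formulation. This is a common textbook definition offered as a sketch, not the specific metric used in the thesis; the function names are our own.

```python
import numpy as np

def pairwise_distances(X):
    """Euclidean distance matrix for the rows of X."""
    diff = X[:, None, :] - X[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

def distance_distortion(X_high, X_low):
    """Normalised stress between data-space and latent-space distances.

    0 means pairwise distances are perfectly preserved; larger values
    mean the low-dimensional visualisation distorts the data more.
    """
    D = pairwise_distances(X_high)   # distances in the data space
    d = pairwise_distances(X_low)    # distances in the latent space
    return np.sqrt(((D - d) ** 2).sum() / (D ** 2).sum())

# A naive "visualisation" that simply drops the third coordinate:
X = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 2.]])
Y = X[:, :2]
print(distance_distortion(X, X))  # 0.0 -- identical distances
print(distance_distortion(X, Y))  # > 0 -- distortion from the dropped axis
```

Rank-based measures such as trustworthiness and continuity follow the same pattern but compare neighbour rankings rather than raw distances.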
17

Rice, Iain. "Probabilistic topographic information visualisation." Thesis, Aston University, 2015. http://publications.aston.ac.uk/27348/.

Abstract:
The focus of this thesis is the extension of topographic visualisation mappings to allow for the incorporation of uncertainty. Few visualisation algorithms in the literature are capable of mapping uncertain data, with fewer able to represent observation uncertainties in visualisations. As such, modifications are made to NeuroScale, Locally Linear Embedding, Isomap and Laplacian Eigenmaps to incorporate uncertainty in the observation and visualisation spaces. The proposed mappings are then called Normally-distributed NeuroScale (N-NS), T-distributed NeuroScale (T-NS), Probabilistic LLE (PLLE), Probabilistic Isomap (PIso) and Probabilistic Weighted Neighbourhood Mapping (PWNM). These algorithms generate a probabilistic visualisation space with each latent visualised point transformed to a multivariate Gaussian or T-distribution, using a feed-forward RBF network. Two types of uncertainty are then characterised, dependent on the data and mapping procedure. Data-dependent uncertainty is the inherent observation uncertainty, whereas mapping uncertainty is defined by the Fisher information of a visualised distribution; this indicates how well the data has been interpolated, offering a level of 'surprise' for each observation. These new probabilistic mappings are tested on three datasets of vectorial observations and three datasets of real-world time series observations for anomaly detection. In order to visualise the time series data, a method for analysing observed signals and noise distributions, Residual Modelling, is introduced. The performance of the new algorithms on the tested datasets is compared qualitatively with the latent space generated by the Gaussian Process Latent Variable Model (GPLVM). A quantitative comparison using existing evaluation measures from the literature allows the performance of each mapping function to be compared. Finally, the mapping uncertainty measure is combined with NeuroScale to build a deep learning classifier, the Cascading RBF. This new structure is tested on the MNIST dataset, achieving world-record performance whilst avoiding the flaws seen in other deep learning machines.
18

Lopes, Adriano Martins. "Accuracy in scientific visualisation." Thesis, University of Leeds, 1999. http://etheses.whiterose.ac.uk/1282/.

Abstract:
Quite often, accuracy is a neglected issue in scientific visualisation. Indeed, most visualisations rest on two wrong assumptions: first, that the data visualised is accurate; second, that the visualisation process is exempt from errors. On this basis, the objectives of this thesis are three-fold. First, to understand the implications of accuracy in scientific visualisation: it is important to analyse the sources of errors during visualisation, and to establish mechanisms that enable the characterisation of the accuracy; this learning stage is crucial for a successful scientific investigation. Second, to focus on visualisation features that, besides enabling the visualisation of the data, give users an idea of its accuracy; the challenging aspect in this case is the use of appropriate visual paradigms, and in this respect, awareness of how human beings create and process a mental image of the information visualised is important. Third and most important, the development of more accurate versions of visualisation techniques: by understanding the issue of accuracy concerning a particular technique, there is a high probability of arriving at new improvements. Three techniques are studied in this thesis: contouring, isosurfacing and particle tracing, all of which are widely used in scientific visualisation, which is why they have been chosen. For all of them, the issue of showing accuracy to users is discussed. In addition, two new accurate versions of the contouring and isosurfacing techniques are presented. The new contouring method is for data defined over rectangular grids and assumes that the data vary linearly along the edges of each cell. The new isosurfacing method is an improvement of the Marching Cubes method; some aspects of this classic approach are clarified, and even corrected.
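The linear-variation-along-edges assumption that both contouring and Marching Cubes rely on reduces, at each cell edge, to a one-dimensional linear interpolation. The following is a generic textbook sketch of that step, not code from the thesis; the function name is our own.

```python
def edge_crossing(p0, p1, f0, f1, iso):
    """Point where an isoline/isosurface crosses the edge (p0, p1).

    Assumes the scalar field varies linearly along the edge. f0 and f1
    are the field values at the endpoints; iso must lie between them.
    """
    t = (iso - f0) / (f1 - f0)  # linear-interpolation parameter in [0, 1]
    return tuple(a + t * (b - a) for a, b in zip(p0, p1))

# Crossing of the iso = 0.5 contour on an edge whose values run 0.0 -> 1.0:
print(edge_crossing((0.0, 0.0), (1.0, 0.0), 0.0, 1.0, 0.5))  # (0.5, 0.0)
```

Accuracy questions arise precisely because real data need not vary linearly along an edge, so the interpolated crossing point carries an error that such methods can attempt to characterise or display.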
19

Loizides, Andreas M. "Intuitive visualisation of multi-variate data sets using the empathic visualisation algorithm (EVA)." Thesis, University College London (University of London), 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.407941.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Lahtinen, Linn. "Mobile Information Visualisation : Recommendations for creating better information visualisation interfaces on mobile devices." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-210563.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
An increasing use of smartphones and other mobile devices puts pressure on user interfaces to work as well on small touch-screens as on desktop computers, and information visualisation interfaces are no exception. Even though there has been a demand for research on mobile information visualisation for many years, relatively little has been accomplished in this area, and the research that has been conducted is often narrow and oriented toward a certain design. Therefore, this paper aims to give more general recommendations regarding the design of information visualisation interfaces for mobile devices. A qualitative user study was conducted to find weaknesses and strengths in existing information visualisation interfaces when interacted with on a smartphone. For this study, five prototypes were made through which different visualisations and interaction methods were tested by the participants of the study. The participants were given tasks based on the Visual Information Seeking Mantra, which focuses on four types of interaction with information visualisations (overview, zoom, filter and details-on-demand). The results indicate that the interaction with a visualisation is more important than the visualisation itself for achieving a useful and efficient information visualisation interface. Other aspects to consider are providing an adequate zoom function, avoiding interactive objects that are too small, and avoiding over-cluttering. The latter can be addressed either by taking advantage of gestures or by using more layers in the interface. However, which visualisations and interaction methods work best is heavily dependent on the data and the purpose of the visualisation.
21

Anderson, Jonathan. "Visualisation of data from IoT systems : A case study of a prototyping tool for data visualisations." Thesis, Linköpings universitet, Programvara och system, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-138723.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The client in this study, Attentec, has seen an increase in the demand for services connected to Internet of Things systems. This study therefore examines whether there is a tool that can be used to build fast prototype visualisations of data from IoT systems for use in their daily work. The study started with an initial phase with two parts. The first part was to get better knowledge of Attentec and derive requirements for the tool, and the second part was a comparison of prototyping tools for aiding in the development of data visualisations. Apache Zeppelin was chosen as the most versatile and suitable tool matching the criteria defined together with Attentec. Following the initial phase, a pre-study was performed containing interviews to collect empirical data on how visualisations and IoT projects had previously been implemented at Attentec. This led to the conclusion that geospatial data and NoSQL databases were common in IoT projects. A technical investigation was conducted on Apache Zeppelin to determine whether there were any limits in using the tool for characteristics common in IoT systems. This investigation led to the conclusion that there was no support for plotting data on a map. The first implementation phase added support for geospatial data through a visualisation plug-in that plotted data on a map. The implementation phase was followed by an evaluation phase in which five participants performed tasks with Apache Zeppelin to evaluate the perceived usability of the tool. The evaluation was performed using a System Usability Scale and a Summed Usability Metric, as well as interviews with the participants to find where improvements could be made. From the evaluation, three main problems were discovered: the import and mapping of data, the need for more features in the map visualisation plug-in, and the creation of database queries.
The first two were chosen for the second iteration, in which a script for generating the code to import data was developed, along with improvements to the geospatial visualisation plug-in. A second evaluation was performed after the changes were made, using tasks similar to those in the first, to see if the usability had improved between the two evaluations. The results of the Summed Usability Metric improved on all tasks, while the System Usability Scale showed no significant change. In the interviews, the participants all responded that the perceived usability had improved between the two evaluations, suggesting some improvement.
22

Buratti, Luca. "Visualisation of Convolutional Neural Networks." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Neural Networks, and in particular Convolutional Neural Networks, have recently demonstrated extraordinary results in various fields. Unfortunately, however, there is still no clear understanding of why these architectures work so well, and above all it is difficult to explain their behaviour in cases of failure. This lack of clarity is what separates these models from being applied in concrete and critical real-life scenarios, such as healthcare or self-driving cars. For this reason, several studies have been carried out in recent years to create methods capable of better explaining what is happening inside a neural network, or where the network is looking in order to make a certain prediction. These techniques are the centre of this thesis and the bridge between the two case studies presented below. The aim of this work is therefore twofold: first, to use these methods to analyse, and thus understand how to improve, applications based on convolutional neural networks; and second, to investigate the generalisation capacity of these architectures, again by means of these methods.
23

Löbbert, Sebastian. "Visualisation of two-dimensional volumes." [S.l.] : [s.n.], 2004. http://deposit.ddb.de/cgi-bin/dokserv?idn=972777636.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Hammarstedt, Emil. "Waveform Visualisation And Plot Optimization." Thesis, Linköping University, Department of Computer and Information Science, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-50635.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:

This thesis focuses on the improvement of an existing implementation of a waveform visualizer. The problem area addressed in this work is how to reduce the number of points to be plotted. The given waveform visualizer was extended with two additional algorithms. First, a Level Of Detail (LOD) algorithm that gives the subset of points necessary to plot the waveform at the current zoom level. Second, a straight-line identification algorithm to find a series of points aligned in a straight line, keeping only the end points and drawing a line between them. These two optimizations are the main focus of this work. Additionally, an exporting functionality was implemented to export the plot data into several different data formats. Improvements to zooming and panning, some GUI design work, and a new drag-and-drop functionality were also carried out.

25

Badawood, Donia. "Narrative construction in information visualisation." Thesis, City, University of London, 2015. http://openaccess.city.ac.uk/15994/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Storytelling has been used throughout the ages as a means of communication, conveying and transmitting knowledge from one person to another and from one generation to the next. In various domains, formulating messages, ideas, or findings into a story has proven its efficiency in making them comprehensible, memorable, and engaging. Information Visualization as an academic field also utilises the power of storytelling to make visualizations more understandable and interesting for a variety of audiences. Although storytelling has been a topic of interest in information visualization for some time, few or no empirical evaluations exist that compare different approaches to storytelling through information visualization. There is also a need for work that addresses in depth certain criteria and techniques of storytelling, such as transition types in visual stories in general and data-driven stories in particular. Two sets of experiments were conducted to explore how two different models of information visualization delivery influence the narratives constructed by audiences. The first model involves direct narrative by a speaker using visualization software to tell a data-story, while the second involves constructing a story by interactively exploring the visualization software. The first set of experiments is a within-subject experiment with 13 participants, and the second set is a between-subject experiment with 32 participants. In both rounds, an open-ended questionnaire was used in controlled laboratory settings in which the primary goal was to collect a number of written data-stories derived from the two models. The data-stories and answers written by the participants were all analysed and coded using data-driven and pre-set themes. The themes include reported impressions about the story, insight types reported, narrative structures, curiosity about the data, and ease of telling a story after experimenting with each model.
The findings show that while the delivery model has no effect on how easy or difficult the participants found telling a data story to be, it does have an effect on the tendency to identify and use outliers' insights in the data story if they are not distracted by direct narration. It also affects the narrative structure and depth of the data story. Examining some more mature domains of visual storytelling, such as films and comics, can be highly beneficial to this new sub-field of data visualization. In the research in hand, a taxonomy of panel-to-panel transitions in comics has been used. The definitions of the components of this taxonomy have been refined to reflect the nature of data-stories in information visualization, and the taxonomy has then been used in coding a number of VAST Challenge videos. The transitions used in each video have been represented graphically with a diagram that shows how the information was added incrementally in order to tell a story that answers a particular question. A number of issues have been taken into account when coding transitions in each video and when designing and creating the visual diagram, such as nested transitions, the use of sub-topics, and delayed transitions. The major contribution of this part of the research is the provision of a taxonomy and description of transition types in the context of narrative visualization, an explanation of how this taxonomy can be used to code transitions in narrative visualization, and a visual summary as a means of summarising that coding. The approaches to data analysis and different storytelling axes, both in the experimental work and in proposing and applying the framework of transition types used, can be usefully applied to other studies and comparisons of storytelling approaches.
26

Fei, Bennie Kar Leung. "Data visualisation in digital forensics." Pretoria : [s.n.], 2007. http://upetd.up.ac.za/thesis/available/etd-03072007-153241.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Wilkinson, Debbie Isabelle. "Visualisation of osteoclast membrane domains." Thesis, University of Aberdeen, 2010. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=158808.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Osteoclasts polarise upon activation and form four distinct membrane domains: the basolateral domain, the sealing zone, the functional secretory domain and the ruffled border. The ruffled border is the resorptive organelle of the cell and provides a large surface area for the release of protons and enzymes into the space beneath the osteoclast. Defects in osteoclast formation or function can lead to diseases such as osteopetrosis. Ruffled border formation is a critical event in osteoclast function, but the process by which it and other membrane domains form is only partially understood. Vesicular trafficking is essential for the tight regulation of the osteoclast membrane domains, and it has been shown previously that treatment with pharmacological inhibitors causes disruption of trafficking. The aims of this PhD were to increase our understanding of vesicular trafficking in osteoclasts and to optimise ways of visualising osteoclast membrane domains. My studies of patients with osteoclast-poor osteopetrosis identified defects in RANKL as a cause of the disease. This in turn has identified recombinant RANKL as a potential therapy for patients with this form of the disease. Although purification of wild-type or mutant RANKL was not completely successful, it did suggest that the mutant forms of RANKL were not functional. I have used pharmacological inhibitors to study osteoclast membrane domains, and found that transmission electron microscopy is an essential tool for studying membrane changes following pharmacological inhibition at the ultrastructural level. I also established that the study of vesicular trafficking to analyse the formation of membrane domains can make excellent use of immuno-electron methods.
Furthermore, genetic diseases associated with defective ruffled border formation such as XLA and osteopetrosis provide useful tools to further analyse the dynamics involved in the formation and maintenance of the ruffled border, as well as revealing more about the diseases themselves.
28

Basalaj, Wojciech. "Proximity visualisation of abstract data." Thesis, University of Cambridge, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.620911.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Emerson, Jessica Merrill Thurston. "Tag clouds in software visualisation." Thesis, University of Canterbury. Computer Science and Software Engineering, 2014. http://hdl.handle.net/10092/10120.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Developing and maintaining software is a difficult task, and finding effective methods of understanding software is more necessary now than ever, with the last few decades seeing a dramatic climb in the scale of software. Appropriate visualisations may enable greater understanding of the datasets we deal with in software engineering. As an aid for sense-making, visualisation is widely used in daily life (through graphics such as weather maps and road signs), as well as in other research domains, and is thought to be exceedingly beneficial. Unfortunately, there has not been widespread use of the multitude of techniques which have been proposed for the software engineering domain. Tag clouds are a simple, text-based visualisation commonly found on the internet. Typically, implementations of tag clouds have not included the rich interactive features which are necessary for data exploration. In this thesis, I introduce design considerations and a task set for enabling interaction in a tag cloud visualisation system. These considerations are based on an analysis of challenges in visualising software engineering data, and of the perceptive influences of visual properties available in tag clouds. The design and implementation of the interactive system Taggle based on these considerations is also presented, along with its broad-based evaluation. Evaluation approaches were informed by a systematic mapping study of previous tag cloud evaluation, providing an overview of existing research in the domain. The design of Taggle was improved following a heuristic evaluation by domain experts. Subsequent evaluations were divided into two parts: experiments focused on the tag cloud visualisation technique itself, and a task-based approach focused on the whole interactive system.
As evidenced in the series of evaluative studies, the enhanced tag cloud features incorporated into Taggle enabled faster visual search response time, and the system could be used with minimal training to discover relevant information about an unknown software engineering dataset.
30

Ganah, Abdulkadir A. M. "Computer visualisation support for buildability." Thesis, Loughborough University, 2003. https://dspace.lboro.ac.uk/2134/17389.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The construction industry has a reputation for low productivity, waste, low use of new technologies, and poor quality (Egan, 1998; Wakefield & Damrienant, 1999). It is estimated that up to 30% of construction is rework, and it is recognised that site teams spend too much time and effort making designs work in practice (Egan, 1998). The aim of the research project was to develop a visualisation and communication environment that would assist design teams in communicating design details that may be problematic for construction teams. The investigation was based on the need for a tool that facilitates the communication of detail design information. The VISCON (computer visualisation support for buildability) environment provides support for general information sharing in the context of a collaborative building project. This prototype is Web-based and can be accessed from any location. This allows construction information to be readily communicated and shared between head offices, construction sites and any other locations to provide better visualisation of design details. Three scenarios were developed as case studies for demonstration purposes, based on real projects. These case studies used a paper factory, a bay barrage building and a swimming pool recently constructed at Loughborough University. In the development of the case studies, 3D models were produced using components from the selected prototype buildings that may inherently be difficult to assemble. The VISCON prototype demonstrates the various functionalities of the system in creating intricate design details that can be animated or interacted with in real time. The main achievements of the research are: the review of buildability problems and their causes during the construction stage of a facility; the development of an architecture for a computer visualisation tool for buildability (VISCON); and the implementation and validation of the proposed system through the use of a number of case studies.
The system was found to be useful and demonstrated that computer visualisation tools provide considerable potential in improving clarity of information and also a new way of visualising and solving design problems that arise during the construction stage of a project. It also demonstrated the ease of use of the proposed system, and its efficiency and application to the construction industry. The research concludes that the use of computer visualisation can improve the construction project delivery process by providing guidance on how components are assembled together and how buildability problems can be solved during the construction stage. Furthermore, the use of effective communication tools will improve collaboration between construction and design practitioners.
31

Bonneau, Georges-Pierre. "Multiresolution pour la Visualisation Scientifique." Habilitation à diriger des recherches, Université de Grenoble, 2000. http://tel.archives-ouvertes.fr/tel-01064669.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The research described in this report is devoted to the visualisation of scientific data at different levels of detail. Two types of scientific data are addressed. The first are defined on uniform three-dimensional grids; the corresponding application domain is the visualisation of medical data from tomographic slices or MRI scanners. The second are defined on irregular triangular meshes, planar or spherical; the corresponding application domains include, among others, the visualisation of topographic data (terrain visualization) and of data from finite element computations.
32

Ingram, Robert J. "Legibility enhancement for information visualisation." Thesis, University of Nottingham, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.307802.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Green, Damian Alan. "Stratigraphic visualisation for archaeological investigation." Thesis, Brunel University, 2003. http://bura.brunel.ac.uk/handle/2438/2168.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The principal objective of archaeology is to reconstruct in all possible ways the life of a community at a specific physical location throughout a specific time period. Distinctly separate layers of soil provide evidence for a specific time period. Discovered artefacts are most frequently used to date the layer. An artefact taken out of context is virtually worthless; hence the correct registration of the layer in which they were uncovered is of great importance. The most popular way to record temporal relationships between stratigraphic layers is through the use of the 2D Harris Matrix method. Without accurate 3D spatial recording of the layers, it is difficult if not impossible, to form new stratigraphic correspondences or correlations. New techniques for archaeological recording, reconstruction, visualisation and interpretation in 3D space are described in these works and as a result software has been developed. Within the developed software system, legacy stratigraphy data, reconstructed from archaeological notebooks can be integrated with contemporary photogrammetric models and theodolite point data representations to provide as comprehensive a reconstruction as possible. The new methods developed from this research have the capability to illustrate the progression of the excavation over time. This is made possible after the entry of only two or more strata. Sophisticated, yet easy-to-use tools allow the navigation of the entire site in 3D. Through the use of an animation-bar it is possible to replay through time both the excavation period and the occupation period, that is to say the various time periods in antiquity when human beings occupied these locations. The lack of complete and consistent recording of the soil layers was an issue that proved to be an obstacle for complete reconstruction during the development of these methods. 
A lack of worldwide archaeological consensus on the methods of stratigraphic recording inhibited development of a universal scientific tool. As a result, new recording methods are suggested to allow more scientific stratigraphic reconstruction.
34

Hall, Peter. "Four new algorithms for visualisation." Thesis, University of Sheffield, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.389252.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Stuart, Elizabeth Jayne. "The visualisation of parallel computations." Thesis, University of Ulster, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.241682.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Novak, Jan. "Visualisation of chemistry in flow." Thesis, University of Birmingham, 2010. http://etheses.bham.ac.uk//id/eprint/1089/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This research project investigated the pattern-producing Belousov-Zhabotinsky (BZ) reaction under flow using a combination of optical and magnetic resonance imaging (MRI) techniques. The coupling between reaction-diffusion and advection was studied in three distinct systems. Stationary flow-distributed oscillation (FDO) patterns, previously only observed in packed bed reactors, were produced in alternative flow fields for the first time. The two systems studied have allowed greater insight into the structure, dynamics and stability of the patterns. Plug-like flow was produced with an agar gelling agent and in a vortex flow reactor (VFR). The agar system allowed the production of FDO patterns in the absence of a packing material and could be forced by a periodic change in reaction temperature. Synchronisation was found in space-time but not in time. This was in contrast to previous investigations, where periodic illumination resulted in the observation of synchronous behaviour. The VFR provided a flow field which was able to produce both stationary and travelling patterns. The complex structure of the stationary bands was explained by the flow, in which Taylor vortices translate axially up the tube in an approximation of plug flow. The rotation rate of the inner cylinder was identified as a new parameter for controlling both the stability and wavelength of FDO patterns. Optical imaging was able to probe the bulk behaviour of propagating waves through three-dimensional Taylor vortices but was unable to probe the wave propagation mechanisms. It was found, however, that magnetic resonance imaging was able to visualise the behaviour of the chemical waves within the Couette cell and so elucidate the wave behaviour. In conjunction with this, MR velocity and diffusion imaging were able to characterise the flow within the Taylor vortices.
The combination of these two methods uniquely allowed the behaviour of these travelling waves to be explained in terms of the flow within the system.
37

Fintzel, Katia. "Vidéo spatialisation : algorithmique et visualisation." Nice, 2001. http://www.theses.fr/2001NICE5617.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In this PhD report, we present a complete framework, referred to as Video Spatialization, to render "virtualized" scenes. We assume that we only have a few 2D uncalibrated images of the scene and do not have any 3D CAD models. From this limited set of real and discrete images of the scene, we describe a complete framework which enables us to reconstruct a virtual continuous vision field of the scene. In order to avoid an explicit calibration stage, our pure image-based approach to virtualizing a 3D scene is based on the theory of trifocal tensors. In fact, new points of view are created from existing ones by efficiently combining geometric transformations on images, corresponding to 3D displacements of a virtual camera, with texture mapping and mosaicking techniques. These last techniques are used to increase the covered area and the realism of the synthesized points of view, and to ensure fluidity when we consider large 3D displacements of the virtual video camera, simulating virtualized navigation.
38

Lutmann, Patrice. "Transfert et visualisation d'images numériques." Bordeaux 1, 1996. http://www.theses.fr/1996BOR10715.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Images are today a privileged medium of communication. More and more computer applications make use of images or are dedicated to image manipulation. For these applications, the main needs are storage, transfer and visualisation. Starting from an analysis of these needs, specifically transfer and visualisation, I define the main characteristics of image transfer and visualisation. The essential difficulties and constraints encountered are the heterogeneity of the hardware and software used, throughput and delays. I then propose various implementation models, and in particular an integration of the transfer and visualisation services into the X graphical environment, under the Unix operating system with TCP/IP. The image transfer and visualisation services, designated STVI for the services and SIRIX for the implemented prototype, are integrated into the X graphical system for the visualisation part, and into TCP/IP for transport. These two services belong to the application and presentation layers of the OSI reference model. They are closely related to the image service of the BDDRI and the visualisation service of the X system. Depending on the needs of the application, these services make it possible to determine the optimum profile parameters to minimise the volume of data to be transferred. They present a generic visualisation interface and exploit the resources of each actor according to their relative capacities. Finally, they manage the temporal constraints necessary for acceptable visualisation of animated sequences. I present the various associations allowing control of the transfer, command of the transfer, and visualisation. I analyse the data flows and the location of processing, as well as their influence on the realisation of these associations and the implementation constraints.
I also describe the X system and its impact on SIRIX. I study the various possibilities for implementing these services and integrating them into the X system, and I compare the various models realised from the point of view of both functionality and performance.
39

Humphries, Christopher. "User-centred security event visualisation." Thesis, Rennes 1, 2015. http://www.theses.fr/2015REN1S086/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Managing the vast quantities of data generated in the context of information system security becomes more difficult every day. Visualisation tools are one way to face this challenge. They represent large quantities of data and security events in a synthetic and often aesthetic way to help understand and manipulate them. In this document, we first present a classification of security visualisation tools according to their respective objectives, which can be one of three: monitoring (following events in real time to identify attacks as early as possible, while they unfold), exploration (browsing and manipulating a large quantity of data a posteriori to discover the important events), or reporting (representing already known information a posteriori in a clear and synthetic fashion to ease its communication and transmission). We then present ELVis, a tool capable of coherently representing security events from various sources. ELVis automatically proposes appropriate representations according to the type of the data (time, IP address, port, data volume, etc.). In addition, ELVis can be extended to accept new sources of data. Lastly, we present CORGI, a successor to ELVis which allows the simultaneous manipulation of multiple data sources in order to correlate them. With the help of CORGI, it is possible to filter security events from one data source according to criteria resulting from the analysis of security events from another data source, which facilitates following events on the information system under analysis.
40

Fill, Hans-Georg. "Visualisation for semantic information systems." Wiesbaden Gabler, 2006. http://d-nb.info/992136148/04.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Galagain, Didier. "Visualisation d'une famille de graphes." Grenoble 2 : ANRT, 1986. http://catalogue.bnf.fr/ark:/12148/cb37597735b.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Stratton, David. "A program visualisation meta language." Thesis, University of Ballarat, 2003. http://researchonline.federation.edu.au/vital/access/HandleResolver/1959.17/63588.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The principal motivation of this work is to define an open program visualisation (PV) architecture that will enable a variety of visualisation schemes to interoperate, and that will encourage the generation of PV systems and research into their efficacy. Ultimately this may lead to more effective pedagogy in the field of computer programming and hence remove a barrier to students entering the profession.
Doctorate of Philosophy
43

Roard, Nicolas. "An agent-based visualisation system." Thesis, Swansea University, 2007. https://cronfa.swan.ac.uk/Record/cronfa42583.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis explores the concepts of visual supercomputing, where complex distributed systems are used for the interactive visualisation of large datasets. Such complex systems inherently raise management and optimisation problems; in recent years the concepts of autonomic computing have arisen to address those issues. Distributed visualisation systems are a very challenging area in which to apply autonomic computing ideas, as such systems are both latency and compute sensitive, while most autonomic computing implementations usually concentrate on one or the other but not both concurrently. A major contribution of this thesis is to provide a case study demonstrating the application of autonomic computing concepts to a computation-intensive, real-time distributed visualisation system. The first part of the thesis proposes the realisation of a layered multi-agent system to enable autonomic visualisation. The implementation of a generic multi-agent system providing reflective features is described. This architecture is then used to create a flexible distributed graphics pipeline, oriented toward real-time visualisation of volume datasets. A performance evaluation of the pipeline is presented. The second part of the thesis explores the reflective nature of the system and presents high-level architectures based on software agents, or visualisation strategies, that take advantage of the flexibility of the system to provide generic features. Autonomic capabilities are presented, including fault recovery and automatic resource configuration. Performance evaluation, simulation and prediction of the system are presented, exploring different use cases and optimisation scenarios. A performance exploration tool, Delphe, is described, which uses real-time data from the system to let users explore its performance.
44

Wambecke, Jérémy. "Visualisation de données temporelles personnelles." Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAM051/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The production of energy, and in particular the production of electricity, is the main source of greenhouse gas emissions at world scale. The residential sector being the most energy consuming, it is essential to act at a personal scale to reduce these emissions. Thanks to the development of ubiquitous computing, it is now easy to collect data about the electricity consumption of the electrical appliances of a housing. This possibility has allowed the development of eco-feedback technologies, whose objective is to provide consumers with feedback about their consumption, with the aim of reducing it. In this thesis we propose a personal visualisation method for time-dependent data based on a what if interaction, which means that users can apply modifications to their behaviour in a virtual way. In particular, our method allows users to simulate a modification of the usage of the electrical appliances of a housing, and then to evaluate visually the impact of the modifications on the data. This approach has been implemented in the Activelec system, which we have evaluated with users on real data. We synthesise the essential design elements of eco-feedback systems in a state of the art. We also outline the limitations of these technologies, the main one being the difficulty faced by users in finding relevant modifications of their behaviour to decrease their energy consumption. We then present three contributions. The first contribution is the development of a what if approach applied to eco-feedback, as well as its implementation in the Activelec system. The second contribution is the evaluation of our approach in two laboratory studies. In these studies we assess whether participants using our method manage to find modifications that save energy and require a sufficiently low effort to be applied in reality. Finally, the third contribution is the in-situ evaluation of the Activelec system.
Activelec was deployed in three private housings and used for approximately one month. This in-situ experiment allows the evaluation of our approach in a real domestic context. In these three studies, participants managed to find modifications of appliance usage that would save a significant amount of energy, while being judged easy to apply in reality. We also discuss the application of our what if approach, beyond electricity consumption data, to the domain of personal visualisation, which is defined as the visual analysis of personal data. We present several potential applications to other types of time-dependent personal data, for example data related to physical activity or transportation. This thesis opens new perspectives for using a what if interaction paradigm for personal visualisation.
45

Widforss, Aron. "Avalanche Visualisation Using Satellite Radar." Thesis, Luleå tekniska universitet, Institutionen för system- och rymdteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-74556.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Avalanche forecasters need precise knowledge about avalanche activity in large remote areas. Manual methods for gathering this data have scalability issues. Synthetic aperture radar satellites may provide much needed complementary data. This report describes Avanor, a system presenting change detection images of such satellite data in a web map client. Field validation suggests that the data in Avanor show at least 75 percent of the largest avalanches in Scandinavia with some small avalanches visible as well.
46

Rowles, Christopher. "Visualisation of Articular Cartilage Microstructure." Thesis, Curtin University, 2016. http://hdl.handle.net/20.500.11937/52984.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis developed image processing techniques enabling the detection and segregation of three-dimensional biological images into their component features, based upon the shape and relative size of the features detected. The work used articular cartilage images and separated the fibrous components from the cells and background noise. Measurement of individual components and their recombination into a composite image are possible. The developed software was used to analyse the development of hyaline cartilage in sheep embryos.
47

Schemali, Leila. "Interaction, édition et visualisation stéréoscopiques." Electronic Thesis or Diss., Paris, ENST, 2015. http://www.theses.fr/2015ENST0007.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Stereoscopy has developed since 1838 through photography, movies, virtual reality and scientific visualisation. Stereoscopic displays have increased in quality since the first stereoscope, and stereoscopic content is now widely available. Furthermore, the understanding of 3D models is greatly eased by stereoscopy, in particular for educational purposes or medical data visualisation. The message conveyed by an image is also reinforced by stereoscopy. Used in an artistic way, stereoscopy can be an important emotional vector. However, the depth interval of a stereoscopic image is limited by display technologies as well as by the human visual system. It is thus still difficult to create stereoscopic content that ensures a comfortable experience for the viewer while using stereoscopy appropriately. To overcome this limitation, several methods have been developed, in particular to ease stereo image acquisition, but a great amount of manual work still remains. Moreover, stereoscopic content manipulation requires specific tools, which are still to be defined. Finally, many stereoscopic display methods are available, some more appropriate to specific viewing conditions than others. This thesis was conducted in this context, with the following contributions: the definition of a stereoscopic user interaction model designed for a desktop environment, a new depth editing method ensuring the viewer's comfort, and a new way to create single-image stereo using chromo-stereoscopy.
48

Gel, Moreno Bernat. "Dissemination and visualisation of biological data." Doctoral thesis, Universitat Politècnica de Catalunya, 2014. http://hdl.handle.net/10803/283143.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
With the recent advent of successive waves of technological advances, the amount of biological data being generated has exploded. As a consequence of this data deluge, new challenges have emerged in the field of biological data management. In order to maximise the knowledge extracted from the huge amount of biological data produced, it is of great importance for the research community that data dissemination and visualisation challenges are tackled. Opening and sharing our data and working collaboratively will benefit the scientific community as a whole, and to move towards that end, new developments, tools and techniques are needed. Nowadays, many small research groups are capable of producing important and interesting datasets, and the release of those datasets can greatly increase their scientific value. In addition, the development of new data analysis algorithms greatly benefits from the availability of a big corpus of annotated datasets for training and testing purposes, giving new and better algorithms to the biomedical sciences in return. None of this would be feasible without large amounts of biological data made freely and publicly available.
Dissemination: The Distributed Annotation System (DAS) is a protocol designed to publish and integrate annotations on biological entities in a distributed way. DAS is structured as a client-server system where the client retrieves data from one or more servers to further process and visualise. Nowadays, setting up a DAS server imposes requirements not met by many research groups. With the aim of removing the hassle of setting up a DAS server, a new software platform has been developed: easyDAS, a hosted platform for automatically creating DAS servers. Using a simple web interface, the user can upload a data file and describe its contents; a new DAS server is then created automatically and the data made publicly available to DAS clients.
Visualisation: One of the most broadly used visualisation paradigms for genomic data is the genome browser, which displays different sets of features positioned relative to a sequence; the sequence and the features can be explored by moving around and zooming in and out. When this project was started, in 2007, all major genome browsers offered quite a static experience. It was possible to browse and explore data, but only through a set of buttons that moved along the genome a certain number of bases to the left or right, or zoomed in and out. From an architectural point of view, all web-based genome browsers were very similar: a relatively thin client-side part in charge of showing images, and big backend servers taking care of everything else. Every change in the display parameters made by the user triggered a request to the server, hurting the perceived responsiveness. We created a new prototype genome browser called GenExp, an interactive web-based browser with canvas-based client-side data rendering. It offers fluid direct interaction with the genome representation: it is possible to drag it with the mouse and to change the zoom level with the mouse wheel. GenExp also offers some quite unique features, such as multi-window capabilities that allow a user to create an arbitrary number of independent or linked genome windows, and the ability to save and share browsing sessions. GenExp is a DAS client and all data is retrieved from DAS sources; it is possible to add any available DAS data source, including all the data in Ensembl and UCSC, and even custom sources created with easyDAS. In addition, we developed jsDAS, a complete javascript DAS client library that takes care of everything DAS-related in a javascript application. jsDAS is javascript-library agnostic and can be used to add DAS capabilities to any web application.
All software developed in this thesis is freely available under an open source license.
49

Wu, Xiaochuan. "Clustering and visualisation with Bregman divergences." Thesis, University of the West of Scotland, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.730022.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Nguyen, Quang Vinh. "Space-efficient visualisation of large hierarchies /." Electronic version, 2005. http://adt.lib.uts.edu.au/public/adt-NTSM20051123.174122/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles

To the bibliography