Dissertations / Theses on the topic 'Data Visualisation'


Consult the top 50 dissertations / theses for your research on the topic 'Data Visualisation.'


1

Long, Elena. "Election data visualisation." Thesis, University of Plymouth, 2013. http://hdl.handle.net/10026.1/1589.

Abstract:
Visualisations of election data produced by the mass media, other organisations and even individuals are becoming increasingly available across a wide variety of platforms and in many different forms. As more data become available digitally and as improvements to computer hardware and software are made, these visualisations have become more ambitious in scope and more user-friendly. Research has shown that visualising data is an extremely powerful method of communicating information to specialists and non-specialists alike. This amounts to a democratisation of access to political and electoral data. To some extent political science lags behind the progress that has been made in the field of data visualisation. Much of the academic output remains committed to the paper format and much of the data presentation is in the form of simple text and tables. In the digital and information age there is a danger that political science will fall behind. This thesis reports on a number of case studies where efforts were made to visualise election data in order to clarify its structure and to present its meaning. The first case study demonstrates the value of data visualisation to the research process itself, facilitating the understanding of effects produced by different ways of estimating missing data. A second study sought to use visualisation to explain complex aspects of voting systems to the wider public. Three further case studies demonstrate the value of collaboration between political scientists and others possessing a range of skills embracing data management, software engineering, broadcasting and graphic design. These studies also demonstrate some of the problems that are encountered when trying to distil complex data into a form that can be easily viewed and interpreted by non-expert users. More importantly, these studies suggest that when the skills balance is correct then visualisation is both viable and necessary for communicating information on elections.
2

Eyre-Todd, Richard A. "Safe data structure visualisation." Thesis, University of Edinburgh, 1993. http://hdl.handle.net/1842/14819.

Abstract:
A simple three-layer scheme is presented which broadly categorises the types of support that a computing system might provide for program monitoring and debugging, namely hardware, language and external software support. Considered as a whole, the scheme forms a model for an integrated debugging-oriented system architecture. This thesis describes work which spans the upper levels of this architecture. A programming language may support debugging by preventing or detecting the use of objects that have no value. Techniques to help with this task, such as formal verification, static analysis, required initialisation and default initialisation, are considered. Strategies for tracking variable status at run-time are discussed. Novel methods are presented for adding run-time pointer variable checking to a language that does not normally support this facility. Language constructs that allow the selective control of run-time unassigned-variable checking for scalar and composite objects are also described. Debugging at a higher level often involves the extensive examination of a program's data structures. The problem of visualising a particular kind of data structure, the hierarchic graph, is discussed, using the previously described language-level techniques to ensure data validity. The elementary theory of a class of two-level graphs is presented, together with several algorithms to perform a clustering technique that can improve graph layout and aid understanding.
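The run-time unassigned-variable checking described above can be illustrated with a small sketch. This is a library-level Python analogue of the idea, not the thesis's language-level mechanism: a storage cell refuses to yield a value until one has been assigned.

```python
class CheckedCell:
    """A cell that raises if read before being assigned.

    Mimics run-time unassigned-variable checking at the library level;
    real language support would track status per scalar or composite field.
    """
    __slots__ = ("_name", "_value", "_assigned")

    def __init__(self, name):
        self._name = name
        self._value = None
        self._assigned = False

    def set(self, value):
        self._assigned = True
        self._value = value

    def get(self):
        if not self._assigned:
            raise RuntimeError(f"read of unassigned variable '{self._name}'")
        return self._value

# Reading before writing is detected instead of yielding garbage.
x = CheckedCell("x")
x.set(42)
print(x.get())   # 42
y = CheckedCell("y")
# y.get()        # would raise: read of unassigned variable 'y'
```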
3

García-Osorio, César. "Data mining and visualisation." Thesis, University of the West of Scotland, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.742763.

4

Fei, Bennie Kar Leung. "Data visualisation in digital forensics." Pretoria : [s.n.], 2007. http://upetd.up.ac.za/thesis/available/etd-03072007-153241.

5

Basalaj, Wojciech. "Proximity visualisation of abstract data." Thesis, University of Cambridge, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.620911.

6

Anderson, Jonathan. "Visualisation of data from IoT systems : A case study of a prototyping tool for data visualisations." Thesis, Linköpings universitet, Programvara och system, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-138723.

Abstract:
The client in this study, Attentec, has seen an increase in the demand for services connected to Internet of Things systems. This study therefore examines whether there is a tool that can be used to build fast prototype visualisations of data from IoT systems for use in their daily work. The study started with an initial phase in two parts: the first was to gain a better knowledge of Attentec and derive requirements for the tool, and the second was a comparison of prototyping tools for aiding the development of data visualisations. Apache Zeppelin was chosen as the most versatile and suitable tool matching the criteria defined together with Attentec. Following the initial phase, a pre-study containing interviews was performed to collect empirical data on how visualisations and IoT projects had previously been implemented at Attentec. This led to the conclusion that geospatial data and NoSQL databases were common in IoT projects. A technical investigation was conducted on Apache Zeppelin to determine whether there were any limits in using the tool for characteristics common in IoT systems; it concluded that there was no support for plotting data on a map. The first implementation phase therefore added support for geospatial data through a visualisation plug-in that plotted data on a map. The implementation phase was followed by an evaluation phase in which five participants performed tasks with Apache Zeppelin to evaluate the perceived usability of the tool. The evaluation used a System Usability Scale and a Summed Usability Metric, as well as interviews with the participants to find where improvements could be made. From the evaluation three main problems were discovered: the import and mapping of data, missing features on the map visualisation plug-in, and the creation of database queries. The first two were chosen for the second iteration, in which a script for generating the code to import data was developed along with improvements to the geospatial visualisation plug-in. A second evaluation was performed after the changes were made, using tasks similar to those in the first, to see whether the usability had improved. The results of the Summed Usability Metric improved on all tasks and the System Usability Scale showed no significant change. In the interviews, all participants responded that the perceived usability had improved between the two evaluations, suggesting some improvement.
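For reference, the System Usability Scale mentioned above reduces ten 1-5 Likert responses to a 0-100 score with a fixed weighting. A minimal sketch of the standard SUS computation (the study's actual questionnaire data are not reproduced here):

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert answers.

    Standard SUS scoring: odd-numbered items contribute (answer - 1),
    even-numbered items contribute (5 - answer); the sum is scaled by 2.5
    to yield a 0-100 score. `responses` is ordered items 1..10.
    """
    if len(responses) != 10:
        raise ValueError("SUS uses exactly ten items")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Illustrative answers only:
print(sus_score([4, 2, 4, 1, 5, 2, 4, 2, 4, 3]))  # 77.5
```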
7

Gel Moreno, Bernat. "Dissemination and visualisation of biological data." Doctoral thesis, Universitat Politècnica de Catalunya, 2014. http://hdl.handle.net/10803/283143.

Abstract:
With the recent advent of various waves of technological advances, the amount of biological data being generated has exploded. As a consequence of this data deluge, new challenges have emerged in the field of biological data management. In order to maximise the knowledge extracted from the huge amount of biological data produced, it is of great importance for the research community that data dissemination and visualisation challenges are tackled. Opening and sharing our data and working collaboratively will benefit the scientific community as a whole, and to move towards that end, new developments, tools and techniques are needed. Nowadays, many small research groups are capable of producing important and interesting datasets, and the release of those datasets can greatly increase their scientific value. In addition, the development of new data analysis algorithms greatly benefits from the availability of a big corpus of annotated datasets for training and testing purposes, giving new and better algorithms to biomedical sciences in return. None of this would be feasible without large amounts of biological data made freely and publicly available. Dissemination: The Distributed Annotation System (DAS) is a protocol designed to publish and integrate annotations on biological entities in a distributed way. DAS is structured as a client-server system where the client retrieves data from one or more servers to further process and visualise. Nowadays, setting up a DAS server imposes requirements not met by many research groups. With the aim of removing the hassle of setting up a DAS server, a new software platform has been developed: easyDAS. easyDAS is a hosted platform to automatically create DAS servers: using a simple web interface the user can upload a data file and describe its contents, and a new DAS server will be automatically created, making the data publicly available to DAS clients. Visualisation: One of the most broadly used visualisation paradigms for genomic data is the genome browser, which displays sets of features positioned relative to a sequence and lets the user explore the sequence and features by moving around and zooming in and out. When this project was started, in 2007, all major genome browsers offered quite a static experience: it was possible to browse and explore data, but only through buttons that shifted the genome a certain number of bases to the left or right or zoomed in and out. From an architectural point of view, all web-based genome browsers were very similar: they all had a relatively thin client-side part in charge of showing images, and big backend servers taking care of everything else. Every change in the display parameters made by the user triggered a request to the server, impacting the perceived responsiveness. We created a new prototype genome browser called GenExp, an interactive web-based browser with canvas-based client-side data rendering. It offers fluid direct interaction with the genome representation: it is possible to drag it with the mouse and change the zoom level with the mouse wheel. GenExp also offers some unique features, such as its multi-window capabilities, which allow a user to create an arbitrary number of independent or linked genome windows, and its ability to save and share browsing sessions. GenExp is a DAS client and all data is retrieved from DAS sources; it is possible to add any available DAS data source, including all data in Ensembl, UCSC and even custom sources created with easyDAS. In addition, we developed jsDAS, a complete JavaScript DAS client library that takes care of everything DAS-related in a web application. jsDAS is JavaScript-library agnostic and can be used to add DAS capabilities to any web application. All software developed in this thesis is freely available under an open source license.
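The DAS client-server exchange described above is a plain HTTP request returning XML. A minimal sketch of fetching features from a DAS source in Python; the server URL and source name below are hypothetical stand-ins, and only a few core attributes of the response are read:

```python
import urllib.request
import xml.etree.ElementTree as ET

def das_features(server, source, segment, start, stop):
    """Fetch features for a sequence segment from a DAS annotation server."""
    url = f"{server}/das/{source}/features?segment={segment}:{start},{stop}"
    with urllib.request.urlopen(url) as resp:
        tree = ET.parse(resp)
    # DASGFF responses nest features as GFF/SEGMENT/FEATURE elements.
    for feat in tree.iter("FEATURE"):
        yield {
            "id": feat.get("id"),
            "label": feat.get("label"),
            "start": feat.findtext("START"),
            "end": feat.findtext("END"),
            "type": feat.findtext("TYPE"),
        }

# Hypothetical usage against an easyDAS-hosted source:
# for f in das_features("http://example.org", "mysource", "P12345", 1, 200):
#     print(f)
```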
8

Simpson, T. "Visualisation of irregular, finite element data." Thesis, Swansea University, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.639040.

Abstract:
This thesis outlines work into the development of efficient techniques for the visualisation of datasets generated using finite element analyses. In particular, the work concentrates on datasets that require quadratic interpolation methods to accurately visualise simulation results. Typically, for example, this includes simulations of non-Newtonian fluids, where the use of quadratic shape functions is necessary to obtain acceptable results. Our first concern is fundamental algorithms for analysing irregular domains, typically constructed from triangular or tetrahedral cells. This includes methods for reducing the search space through domain space decomposition (octrees) and point location tests for interpolation. To this end, a new octree method is introduced, termed the “extended-nodes” octree, which is shown to be a space-efficient data structure for irregular grids. The work continues by describing how commonly used visualisation techniques such as surface tiling and volume rendering can be adapted to work with quadratic interpolation functions over irregular grids. Particular attention is given to image quality and algorithm efficiency. In the context of volume rendering, a staged interpolation function is described, based on a standard method found in the literature; this is shown to be substantially quicker whilst giving visually identical results. For surface tiling, a new recursive, adaptive algorithm is described which solves many of the problems encountered when tiling higher-order surfaces. The work on surface visualisation culminates in the introduction of a new algorithm termed Irregular, Quadratic, Direct Surface Rendering (IQDSR). This ray-casting method is shown to produce high-quality images of quadratic iso-surfaces within finite element data in a highly efficient manner. Finally, consideration is given to the visualisation of fluid flow (vector) data, common within finite element analysis. In particular, a review of volume-rendering-based methods is given, along with a more in-depth discussion of particle-based methodologies. Altogether, this work both reviews current linear finite element scalar and vector visualisation algorithms and outlines new techniques which extend these methods to utilise quadratic interpolation functions over irregular meshes.
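As background to the quadratic interpolation discussed above, here is a sketch of the standard six-node (P2) triangle basis in barycentric coordinates; the thesis's staged and adaptive algorithms build on functions of this kind, so this is illustrative rather than the thesis's own code:

```python
import numpy as np

def p2_triangle_shape(l1, l2):
    """Quadratic (six-node) triangle shape functions in barycentric coords.

    Corner nodes use N = l*(2l - 1); mid-edge nodes use N = 4*li*lj.
    l3 is implied by l1 + l2 + l3 = 1.
    """
    l3 = 1.0 - l1 - l2
    return np.array([
        l1 * (2 * l1 - 1),   # corner 1
        l2 * (2 * l2 - 1),   # corner 2
        l3 * (2 * l3 - 1),   # corner 3
        4 * l1 * l2,         # mid-edge between corners 1 and 2
        4 * l2 * l3,         # mid-edge between corners 2 and 3
        4 * l3 * l1,         # mid-edge between corners 3 and 1
    ])

def interpolate(nodal_values, l1, l2):
    """Interpolate a scalar field quadratically inside the element."""
    return float(p2_triangle_shape(l1, l2) @ np.asarray(nodal_values))

# The six shape functions form a partition of unity inside the element:
print(p2_triangle_shape(0.2, 0.3).sum())  # ~1.0
```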
9

Turner, D. "The visualisation of polarimetric radar data." Thesis, University of Edinburgh, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.663102.

Abstract:
This research forms part of a larger body of work which studies the application of scientific visualisation to the analysis of large multi-valued datasets. Visualisation techniques have historically assumed a fundamental role in the analysis of patterns in geographic datasets. This is particularly apparent in the analysis of remotely sensed data, which, since the advent of aerial photography, has utilised the intensity of visible (and invisible) electromagnetic energy as a means of producing synoptic map-like images. Progress in remote sensing technology, however, has led to the development of systems which measure very large numbers of intensity ‘channels’, or require the analysis of variables other than intensity values. Current visualisation strategies are insufficient to adequately represent such datasets whilst retaining the synoptic perspective. In response to this, two new visualisation techniques are presented for the analysis of polarimetric radar data. Both techniques demonstrate how it is possible to produce synoptic images suitable for the analysis of spatial patterns without relying on pixel-based intensity images. This allows a large number of variables to be ascribed to a single geographic location, and thus encourages the rapid identification of patterns and anomalies within datasets. The value of applying the principles of scientific visualisation to exploratory data analysis is subsequently demonstrated with reference to a number of case studies that highlight the potential of the newly developed techniques.
10

Loizides, Andreas M. "Intuitive visualisation of multi-variate data sets using the empathic visualisation algorithm (EVA)." Thesis, University College London (University of London), 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.407941.

11

Lees, Karen. "Data projections for the analysis and visualisation of bioinformatics data." Thesis, University of Oxford, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.496994.

12

Selmer, Øyvind, and Mikael Brevik. "Classification and Visualisation of Twitter Sentiment Data." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for datateknikk og informasjonsvitenskap, 2013. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-22967.

Abstract:
The social micro-blog site Twitter grows its user base each day and has become an attractive platform for companies, politicians, marketeers, and others wishing to share information and/or opinions. With a growing user market for Twitter, more and more systems and research are released for taking advantage of its informal nature and doing opinion mining and sentiment analysis. This master thesis describes a system for doing sentiment analysis on Twitter data, and experiments with grid searches over various combinations of machine learning algorithms, features and preprocessing methods. The classification system is fairly domain independent and performs better than baseline. It is designed to be fast enough to classify large amounts of data and tweets in a stream, and provides an application program interface (API) to easily transfer data to applications or end users. Three visualisation applications are implemented, showing how to use the API and providing examples of how sentiment data can be used. The main contributions are:
C1: A literature study of the state of the art for Twitter sentiment analysis.
C2: The implementation of a general system architecture for doing Twitter sentiment analysis.
C3: A comparison of different machine learning algorithms for the task of identifying sentiments in short messages in a fairly semi-independent domain.
C4: Implementations of a set of visualisation applications, showing how to use data from the generic system and providing examples of how to present sentiment analysis data.
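A hedged sketch of the kind of grid search the abstract describes, expressed here in scikit-learn terms; the pipeline stages, parameter grid and toy tweets are illustrative assumptions, not the authors' actual feature set or corpus:

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import GridSearchCV
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import MultinomialNB

# Preprocessing/feature extraction and classification as one pipeline.
pipeline = Pipeline([
    ("tfidf", TfidfVectorizer()),
    ("clf", LinearSVC()),
])

# Vary preprocessing options and the classifier jointly.
param_grid = {
    "tfidf__lowercase": [True, False],
    "tfidf__ngram_range": [(1, 1), (1, 2)],
    "clf": [LinearSVC(), MultinomialNB()],
}

tweets = ["great phone, love it", "worst service ever", "meh, it is ok"]
labels = ["positive", "negative", "neutral"]

search = GridSearchCV(pipeline, param_grid, cv=3)
# search.fit(tweets, labels)  # needs a real labelled corpus of tweets
# print(search.best_params_, search.best_score_)
```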
13

Macdonald, Donald. "Unsupervised neural networks for visualisation of data." Thesis, University of the West of Scotland, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.395687.

14

Jones, M. W. "The visualisation of regular three dimensional data." Thesis, Swansea University, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.637736.

Abstract:
This work is a thorough investigation of the visualisation of regular three dimensional data. The main contributions are new methods for: (1) reconstructing surfaces from contour data; (2) constructing voxel data from triangular meshes; (3) real-time manipulation through the use of cut planes; and (4) ultra high quality and accurate rendering. Various other work is presented which reduces the number of calculations required during volume rendering, reduces the number of cubes that need to be considered during surface tiling, and combines particle systems and blobby models with high quality, computationally efficient rendering. All these methods offer new solutions and improve existing methods for the construction, manipulation and visualisation of volume data. In addition to these new methods, this work acts as a review and guide of current state-of-the-art research, and gives in-depth details of implementations and results of well known methods. Using these results, comparisons are made of both computational expense and image quality, and these serve as a basis for deciding which visualisation technique to use given the resources available and the presentation of the data required. Reviews of each main visualisation topic are presented; in particular, the review of volume rendering methods covers much of the recent research. Complementing this is a comparison of many alternative viewing models and efficiency tricks, in the most thorough investigation to this researcher's knowledge. During the course of this research many existing methods have been implemented efficiently, in particular the surface tiling technique and a method for measuring the distance between a point and a 3D triangle.
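One of the manipulation techniques mentioned above, the cut plane, can be sketched for regular volume data as resampling the scalar field on a plane by trilinear interpolation. A minimal sketch (not the thesis's implementation), using scipy for the per-sample interpolation:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def cut_plane(volume, origin, u, v, shape=(256, 256), step=1.0):
    """Sample `volume` on the plane origin + s*u + t*v.

    u and v are orthonormal in-plane direction vectors (in voxel units);
    the result is a 2D image of interpolated scalar values.
    """
    s = (np.arange(shape[0]) - shape[0] / 2) * step
    t = (np.arange(shape[1]) - shape[1] / 2) * step
    S, T = np.meshgrid(s, t, indexing="ij")
    pts = (np.asarray(origin, dtype=float)[:, None, None]
           + np.asarray(u, dtype=float)[:, None, None] * S
           + np.asarray(v, dtype=float)[:, None, None] * T)
    # order=1 gives trilinear interpolation; points outside map to 0.
    return map_coordinates(volume, pts, order=1, cval=0.0)

vol = np.random.rand(64, 64, 64)
img = cut_plane(vol, origin=(32, 32, 32), u=(1, 0, 0), v=(0, 1, 0),
                shape=(64, 64))
print(img.shape)  # (64, 64)
```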
15

Salazar, Gustavo A. "Integration and visualisation of data in bioinformatics." Doctoral thesis, University of Cape Town, 2015. http://hdl.handle.net/11427/16861.

Abstract:
The most recent advances in laboratory techniques aimed at observing and measuring biological processes are characterised by their ability to generate large amounts of data. The more data we gather, the greater the chance of finding clues to understand the systems of life. This, however, is only true if the methods that analyse the generated data are efficient, effective, and robust enough to overcome the challenges intrinsic to the management of big data. The computational tools designed to overcome these challenges should also take into account the requirements of current research. Science demands specialised knowledge for understanding the particularities of each study; in addition, it is seldom possible to describe a single observation without considering its relationship with other processes, entities or systems. This thesis explores two closely related fields: the integration and visualisation of biological data. We believe that these two branches of study are fundamental in the creation of scientific software tools that respond to the ever increasing needs of researchers. The distributed annotation system (DAS) is a community project that supports the integration of data from federated sources and its visualisation on web and stand-alone clients. We have extended the DAS protocol to improve its search capabilities and also to support feature annotation by the community. We have also collaborated on the implementation of MyDAS, a server to facilitate the publication of biological data following the DAS protocol, and contributed to the design of the protein DAS client called DASty. Furthermore, we have developed a tool called probeSearcher, which uses the DAS technology to facilitate the identification of microarray chips that include probes for regions on proteins of interest. Another community project in which we participated is BioJS, an open source library of visualisation components for biological data. This thesis includes a description of the project, our contributions to it and some developed components that are part of it. Finally, and most importantly, we combined several BioJS components over a modular architecture to create PINV, a web-based visualiser of protein-protein interaction (PPI) networks, which takes advantage of the features of modern web technologies to explore PPI datasets on an almost ubiquitous platform (the web) and facilitates collaboration between scientific peers. This thesis includes a description of the design and development processes of PINV, as well as current use cases that have benefited from the tool and whose feedback has been the source of several improvements. Collectively, this thesis describes novel software tools that, by using modern web technologies, facilitate the integration, exploration and visualisation of biological data, and that have the potential to contribute to our understanding of the systems of life.
16

Schmidt, Armin R. "Visualisation of multi-source archaeological geophysics data." Rome: Fondazione Ing. Carlo M. Lerici, 2002. http://hdl.handle.net/10454/3281.

17

Stuart, Elizabeth Jayne. "The visualisation of parallel computations." Thesis, University of Ulster, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.241682.

18

Wambecke, Jérémy. "Visualisation de données temporelles personnelles." Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAM051/document.

Abstract:
The production of energy, and in particular of electricity, is the main source of greenhouse gas emissions worldwide. Since the residential sector is the largest energy consumer, it is essential to act at a personal scale to reduce these emissions. Thanks to the development of ubiquitous computing, it is now easy to collect data about the electricity consumption of the electrical appliances of a home. This possibility has allowed the development of eco-feedback technologies, whose objective is to give consumers feedback about their consumption with the aim of reducing it. In this thesis we propose a personal visualisation method for time-dependent data based on a what if interaction, meaning that users can apply modifications to their behaviour in a virtual way. In particular, our method allows users to simulate a modification of the usage of the electrical appliances of a home, and then to evaluate visually the impact of the modifications on the data. This approach has been implemented in the Activelec system, which we have evaluated with users on real data. We synthesise the essential design elements for eco-feedback systems in a state of the art. We also outline the limitations of these technologies, the main one being the difficulty users face in finding relevant modifications of their behaviour to decrease their energy consumption. We then present three contributions. The first is the development of a what if approach applied to eco-feedback, together with its implementation in the Activelec system. The second is the evaluation of our approach in two laboratory studies, in which we assess whether participants using our method manage to find modifications that save energy and require a sufficiently low effort to be applied in reality. The third is the in-situ evaluation of the Activelec system, which was deployed in three private housings and used for approximately one month, allowing us to evaluate our approach in a real domestic context. In these three studies, participants managed to find modifications to the usage of appliances that would save a significant amount of energy, while being judged easy to apply in reality. We also discuss the application of our what if approach beyond electricity consumption data, to the domain of personal visualisation, which is defined as the visual analysis of personal data. We present several potential applications to other types of time-dependent personal data, for example related to physical activity or transportation. This thesis opens new perspectives for using a what if interaction paradigm for personal visualisation.
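At its core, the what if interaction reduces to simulating a virtual change in appliance usage and comparing consumption before and after. A toy sketch with illustrative appliance names and power ratings (Activelec's own model is richer):

```python
# Illustrative daily usage (hours) and power ratings (kW); not real data.
usage_hours_per_day = {"washing machine": 1.0, "heater": 5.0, "tv": 4.0}
power_kw = {"washing machine": 0.9, "heater": 1.5, "tv": 0.1}

def daily_kwh(hours):
    """Daily consumption implied by a usage scenario."""
    return sum(hours[a] * power_kw[a] for a in hours)

baseline = daily_kwh(usage_hours_per_day)

# What if the heater ran two hours less each day?
scenario = dict(usage_hours_per_day, heater=3.0)
modified = daily_kwh(scenario)

print(f"baseline {baseline:.1f} kWh/day, scenario {modified:.1f} kWh/day, "
      f"saving {baseline - modified:.1f} kWh/day")
# baseline 8.8 kWh/day, scenario 5.8 kWh/day, saving 3.0 kWh/day
```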
19

Ahmad, Yasmeen. "Management, visualisation & mining of quantitative proteomics data." Thesis, University of Dundee, 2012. https://discovery.dundee.ac.uk/en/studentTheses/6ed071fc-e43b-410c-898d-50529dc298ce.

Abstract:
Exponential data growth in the life sciences demands cross-discipline work that brings together computing and the life sciences in a usable manner that can enhance knowledge and understanding in both fields. High-throughput approaches, advances in instrumentation and the overall complexity of mass spectrometry data have made it impossible for researchers to analyse data manually using existing market tools. By applying a user-centred approach to effectively capture the domain knowledge and experience of biologists, this thesis has bridged the gap between computation and biology through the PepTracker software (http://www.peptracker.com). This software provides a framework for the systematic detection and analysis of proteins that can be correlated with biological properties to expand the functional annotation of the genome. The tools created in this study aim to place analysis capabilities back in the hands of biologists, who are experts in evaluating their data. Another major advantage of the PepTracker suite is the implementation of a data warehouse, which manages and collates highly annotated experimental data from numerous experiments carried out by many researchers. This repository captures the collective experience of a laboratory, which can be accessed via user-friendly interfaces. Rather than viewing datasets as isolated components, this thesis explores the potential that can be gained from collating datasets in a “super-experiment” ideology, leading to the formation of broad-ranging questions and promoting biology-driven lines of questioning. This has been uniquely implemented by integrating tools and techniques from the field of Business Intelligence with the life sciences, and has been successfully shown to aid the analysis of proteomic interaction experiments. Having established a means of documenting a static proteomics snapshot of cells, the proteomics field is progressing towards understanding the extremely complex nature of cell dynamics. PepTracker facilitates this by providing the means to gather and analyse many protein properties to generate new biological insight, as demonstrated by the identification of novel protein isoforms.
20

Gisbrecht, Andrej [Verfasser]. "Advances in dissimilarity-based data visualisation / Andrej Gisbrecht." Bielefeld : Universitätsbibliothek Bielefeld, 2015. http://d-nb.info/1068621729/34.

21

Knoetze, Ronald Morgan. "The mining and visualisation of application services data." Thesis, Nelson Mandela Metropolitan University, 2005. http://hdl.handle.net/10948/451.

Abstract:
Many network monitoring tools do not provide sufficiently in-depth and useful reports on network usage, particularly in the domain of application services data. The optimisation of network performance is only possible if the networks are monitored effectively. Techniques that identify patterns of network usage can assist in the successful monitoring of network performance. The main goal of this research was to propose a model to mine and visualise application services data in order to support effective network management. To demonstrate the effectiveness of the model, a prototype, called NetPatterns, was developed using data for the Integrated Tertiary Software (ITS) application service collected by a network monitoring tool on the NMMU South Campus network. Three data mining algorithms for application services data were identified for the proposed model. The data mining algorithms used are classification (decision tree), clustering (K-Means) and association (correlation). Classifying application services data serves to categorise combinations of network attributes to highlight areas of poor network performance. The clustering of network attributes serves to indicate sparse and dense regions within the application services data. Association indicates the existence of any interesting relationships between different network attributes. Three visualisation techniques were selected to visualise the results of the data mining algorithms. The visualisation techniques selected were the organisation chart, bubble chart and scatterplots. Colour and a variety of other visual cues are used to complement the selected visualisation techniques. The effectiveness and usefulness of NetPatterns was determined by means of user testing. The results of the evaluation clearly show that the participants were highly satisfied with the visualisation of network usage presented by NetPatterns. All participants successfully completed the prescribed tasks and indicated that NetPatterns is a useful tool for the analysis of network usage patterns.
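The clustering step described above can be sketched as follows: scale the network attributes, then run K-Means to expose dense and sparse regions. Attribute names and values here are illustrative assumptions, not the ITS dataset used in the study:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Rows: observations of (response time ms, bytes transferred, error rate).
X = np.array([
    [120, 4_000, 0.01],
    [115, 3_800, 0.02],
    [900, 52_000, 0.20],
    [870, 49_000, 0.18],
    [130, 4_200, 0.01],
])

# Scale first so no single attribute dominates the Euclidean distance.
Xs = StandardScaler().fit_transform(X)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(Xs)
print(km.labels_)  # dense vs. sparse usage regions, e.g. [0 0 1 1 0]
```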
22

Al-Megren, Shiroq. "A tangible user interface for interactive data visualisation." Thesis, University of Leeds, 2016. http://etheses.whiterose.ac.uk/13819/.

Abstract:
Information visualisation (infovis) tools are integral to the analysis of large abstract datasets, where interactive processes are adopted to explore data, investigate hypotheses and detect patterns. New post-WIMP (windows, icons, menus and pointing) technologies exist, such as tangible user interfaces (TUIs). TUIs exploit the affordances of physical objects and surfaces to better engage motor and perceptual abilities and allow the direct manipulation of data, yet they have rarely been studied in the field of infovis. The overall aim of this thesis is to design, develop and evaluate a TUI for infovis, using expression quantitative trait loci (eQTL) as a case study. The research began by eliciting eQTL analysis requirements, which identified high-level tasks and themes for quantitative genetics and eQTL that were explored in a graphical prototype. The main contributions of this thesis are as follows. First, a rich set of interface design options for touch and for an interactive surface with exclusively tangible objects were explored for the infovis case study. This work includes characterising touch and tangible interactions to understand how best to use them at various levels of metaphoric representation and embodiment. These designs were then compared to identify a set of options for a TUI that exploits the advantages of touch and tangible interaction. Existing research shows computer vision is commonly utilised as the TUI technology of choice; this thesis contributes a rigorous technical evaluation of another promising technology, micro-controllers and sensors, as well as of computer vision. The findings showed that some sensors used with micro-controllers lack the required capability, so computer vision was adopted for the development of the TUI. The majority of TUIs for infovis are presented as technical developments or design case studies, but lack formal evaluation. The last contribution of this thesis is therefore a quantitative and qualitative comparison of the TUI and a touch UI for the infovis case study. Participants adopted more effective strategies to explore patterns and performed fewer unnecessary analyses with the TUI, which led to significantly faster performance. Contrary to common belief, bimanual interactions were infrequently used with both interfaces, while epistemic actions were strongly promoted by the TUI and contributed to participants’ efficient exploration strategies.
23

Yang, Ting (Surveying & Spatial Information Systems, Faculty of Engineering, UNSW). "Visualisation of spatial data quality for distributed GIS." Awarded by: University of New South Wales. School of Surveying and Spatial Information Systems, 2007. http://handle.unsw.edu.au/1959.4/27434.

Abstract:
Vast amounts of geospatial data are now supplied, managed, and processed over distributed GIS. It is important to provide users with the capability of visualising spatial data quality information in a meaningful way for distributed GIS, since this significantly enhances user understanding of data quality and aids users in assessing the fitness of data for their application requirements. This thesis investigates the visualisation of spatial data quality for distributed GIS. Based on a review of core concepts associated with spatial data quality, metadata standards, and major research areas related to data quality, the limitations of current data quality presentation are highlighted. To overcome some of these limitations, the research topic of this thesis is proposed, namely, adding visualisation functionality to the presentation of spatial data quality to convey uncertainty information to users in an interactive and graphical manner. Based on a review of the theories on visualisation and the frameworks developed for the visualisation of spatial data quality in the literature, an extended framework is developed incorporating several aspects of visualisation such as contexts, contents, and techniques, where the hierarchical nature of data quality and error models are the two main parts of the visualisation contents. A brief framework for the visualisation of spatial data quality for distributed GIS is proposed, with data storage carrying quality information and web services for visualising data quality as its two key components. To satisfy a series of requirements for representing spatial data quality, a new object-oriented data model is proposed based on a review of developments in data models. This data model can specifically deal with the hierarchical nature of data quality and error propagation, recognising data quality as a dynamic process. The implementation of the data model using GML and SVG is discussed, and the details of a web service for visualising spatial data quality are addressed. After setting out the requirements for building a spatial data quality visualisation system for distributed GIS, the design of a prototype visualisation system is addressed in detail. The prototype system is developed and implemented with an example data set, where SVG and JavaScript are used to illustrate how various graphic methods such as animation, data quality filters, and colour gradients can be used for distributed GIS. In addition to the visualisation of positional accuracy at the feature level, this pilot system also presents the hierarchical structure of data quality information. Limitations of the research are also addressed. In general, however, this research makes significant contributions to a relatively new research area in terms of theories, procedures, and software developments.
24

Ottoson, Patrik. "Geographic Indexing and Data Management for 3D-Visualisation." Doctoral thesis, Stockholm : Tekniska högsk, 2001. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-3235.

25

Gunnarsson, Ann-Sofie, and Malinda Rauhala. "Visualisation of Sensor Data using Handheld Augmented Reality." Thesis, Linköpings universitet, Institutionen för teknik och naturvetenskap, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-97112.

Abstract:
In this project we have developed a prototype application for a Symbian Series 60 smartphone. The application is aimed at viewing and analysing moisture content within building elements. The sensor data are retrieved from a ZigBee network, and using the mobile phone the readings are displayed as an augmented reality visualisation: the background is captured by the integrated camera, and the application augments the view of the real world with a visualisation of the moisture levels and their distribution. Our thesis work involves areas such as wireless communication, sensors, analysis and visualisation of data, mobile computer graphics and interaction techniques. The mobile development is built upon Symbian Series 60, and the communication is accomplished using ZigBee and Bluetooth.
26

Franklin, Keith Michael. "Non-visual data visualisation : towards a better design." Thesis, University of Kent, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.500532.

27

Hobbs, Kenneth Frank. "The visualisation and rendering of digital elevation data." Thesis, University of East London, 2000. http://roar.uel.ac.uk/3592/.

Abstract:
The thesis addresses a longstanding cartographic problem: how to visualise the Earth's surface relief in an effective and meaningful way. The problem is narrowed to relief defined by digital elevation data and visualised as a static, orthographic representation. It is approached in three steps: firstly, research focuses on determining the most useful form of graphical representation to be pursued; secondly, the theoretical basis of computer visualisation is investigated through a three-model framework, prompting a number of directions where solutions might be developed; and thirdly, the development and engineering of a system is reported which models and renders widely available elevation data and which provides flexibility in its input variables. The developed system is then applied to specific cases of relief visualisation, and new graphical forms are developed. The investigation of past and current approaches to relief representation, and a review of computer-graphic rendering of simpler geometrically defined objects, have revealed limitations in commonly used relief visualisation systems, but have established the simulation of light and shade as still the most promising line of development. Analysis of the component variables of surface visualisation and rendering has led to a visualisation paradigm of three parametric models: elevation, illumination and reflectance. Some attractive qualities of the contour elevation model, including widespread availability, have been identified, and a system has been developed which reconstructs surfaces from this data structure more effectively than typical current approaches. The system is also designed to support more complex illumination and surface reflectance models than the somewhat simplistic scenarios commonly available. The thesis reports the application of the system to generate surfaces from contour data, and experimentation with multiple coloured light sources and varying degrees of surface specularity. Evaluation of the system implementation, and of the qualities of a representative set of graphical products, is addressed through six design criteria within a context defined by a typical mapping application. This has led to the conclusion that the system and the new graphical forms have a number of virtues, including close fidelity to the source data and significant improvements in visualisation.
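The simulation of light and shade that the thesis pursues can be sketched with the standard Lambertian hillshading computation over a digital elevation model. This single-distant-light sketch is only the baseline; the thesis's system supports richer illumination and reflectance models (multiple coloured lights, specularity):

```python
import numpy as np

def hillshade(dem, azimuth_deg=315.0, altitude_deg=45.0, cellsize=1.0):
    """Shade a DEM with one distant light (0 = shadow, 1 = fully lit)."""
    az = np.radians(azimuth_deg)
    alt = np.radians(altitude_deg)
    # Direction vector pointing towards the light source.
    light = np.array([np.sin(az) * np.cos(alt),
                      np.cos(az) * np.cos(alt),
                      np.sin(alt)])
    # Surface normals from the elevation gradients.
    dz_dy, dz_dx = np.gradient(dem, cellsize)
    normals = np.dstack([-dz_dx, -dz_dy, np.ones_like(dem)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    # Lambertian shading: cosine of the angle between normal and light.
    return np.clip(normals @ light, 0.0, 1.0)

x = np.linspace(-3, 3, 200)
dem = 100.0 * np.exp(-(x[None, :] ** 2 + x[:, None] ** 2))  # a single hill
print(hillshade(dem).shape)  # (200, 200)
```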
28

Wenn, Darren Edward Noel. "Visualisation and active data models in fieldbus networks." Thesis, University of Reading, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.297702.

29

Hyde, Richard William. "Advanced analysis and visualisation techniques for atmospheric data." Thesis, Lancaster University, 2017. http://eprints.lancs.ac.uk/88136/.

Abstract:
Atmospheric science is the study of a large, complex system which is becoming increasingly important to understand. Many climate models aim to contribute to that understanding by computational simulation of the atmosphere. Generating these models, and confirming the accuracy of their outputs, requires the collection of large amounts of data. These data are typically gathered during campaigns lasting a few weeks, during which various sources of measurements are used: some ground based, others airborne sondes, but one of the primary sources is measurement instruments on board aircraft. Flight planning for the numerous sorties is based on pre-determined goals with unpredictable influences, such as weather patterns, and on the results of limited analyses of data from previous sorties. There is little scope for adjusting the flight parameters during the sortie based on the data received, owing to the large volumes of data and the difficulty of processing the data online. The introduction of unmanned aircraft with extended flight durations also requires a team of mission scientists, with the added complication of disseminating observations between shifts. The Earth's atmosphere is a non-linear system, whereas the data gathered are sampled at discrete temporal and spatial intervals, introducing a source of variance. Clustering data provides a convenient way of grouping similar data while also acknowledging that, for each discrete sample, a minor shift in time and/or space could produce a range of values which lie within its cluster region. This thesis puts forward a set of requirements to enable the presentation of cluster analyses to the mission scientist in a convenient and functional manner, enabling in-flight decision making as well as rapid feedback for future flight planning. Current state-of-the-art clustering algorithms are analysed, and no existing algorithm is found to satisfy all of the proposed requirements, so new clustering algorithms are developed to achieve these goals. These novel clustering algorithms are brought together, along with other visualisation techniques, into a software package which is used to demonstrate how the analyses can provide information to mission scientists in flight. The ability to carry out offline analyses of historical data, whether to reproduce the online analyses of the current sortie or to provide comparative analyses from previous missions, is also demonstrated, as are methods for offline analyses of historical data prior to continuing the analyses in an online manner. The original contributions of this thesis are five new clustering algorithms which address key challenges: speed and accuracy for typical hyper-elliptical offline clustering; speed and accuracy for offline arbitrarily shaped clusters; online dynamic and evolving clustering for arbitrarily shaped clusters; transitions between offline and online techniques; and the application of these techniques to atmospheric science data analysis.
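A toy sketch of the online, evolving flavour of clustering discussed above: each incoming sample joins the nearest cluster if it lies within a fixed radius, otherwise it seeds a new cluster. The thesis's algorithms handle arbitrary shapes and evolving structure; this only illustrates the single-pass update pattern that makes in-flight analysis feasible:

```python
import numpy as np

class OnlineClusterer:
    def __init__(self, radius):
        self.radius = radius
        self.centres = []   # running mean per cluster
        self.counts = []

    def update(self, x):
        """Assign x to a cluster (creating one if needed); return its index."""
        x = np.asarray(x, dtype=float)
        if self.centres:
            d = [np.linalg.norm(x - c) for c in self.centres]
            i = int(np.argmin(d))
            if d[i] <= self.radius:
                # Incremental mean update keeps memory and cost constant.
                self.counts[i] += 1
                self.centres[i] += (x - self.centres[i]) / self.counts[i]
                return i
        self.centres.append(x.copy())
        self.counts.append(1)
        return len(self.centres) - 1

clu = OnlineClusterer(radius=1.0)
for sample in [(0, 0), (0.2, 0.1), (5, 5), (5.1, 4.9)]:
    print(clu.update(sample))  # 0 0 1 1
```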
30

Henkin, Rafael. "A framework for hierarchical time-oriented data visualisation." Thesis, City, University of London, 2018. http://openaccess.city.ac.uk/20611/.

Abstract:
The paradigm of exploratory data analysis advocates the use of multiple perspectives to formulate hypotheses about data. This thesis presents a framework to support it through interactive hierarchical visualisations for the exploration of temporal data. The research leading to the framework involves investigating the conventional interactive techniques for temporal data, how they can be combined with hierarchical methods, and the conceptual transformations that enable navigating between multiple perspectives. The aim of the research is to facilitate the design of interactive visualisations based on the use of granularities, or units of time, which hide or reveal processes at various scales and are a key aspect of temporal data. Characteristics of granularities are suitable for hierarchical visualisations, as evidenced in the literature; however, current conceptual models and frameworks lack the means to incorporate characteristics of granularities as an integral part of visualisation design. The research addresses this by combining features of hierarchical and time-oriented visualisations and enabling the systematic re-configuration of visualisations. Current techniques for visualising temporal data are analysed and specified at previously unsupported levels by breaking down visual encodings into decomposed layers, which can be arranged and recombined through hierarchical composition methods. The transformations of the properties of temporal data are then defined by drawing from the interactions found in the literature and formalising them as a set of conceptual operators. The complete framework is introduced by combining the components that enable specifying visual encodings, hierarchical compositions and the temporal transformations. A case study then demonstrates how the framework can be used and its benefits for evaluating analysis strategies in visual exploration.
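The granularity transformations at the heart of the framework correspond, in code, to regrouping a time-indexed series by different units of time. A small pandas sketch with synthetic data (not the framework's own operators):

```python
import numpy as np
import pandas as pd

idx = pd.date_range("2018-01-01", periods=24 * 90, freq="h")
series = pd.Series(np.random.rand(len(idx)), index=idx)

by_day = series.resample("D").mean()      # drill up: hours -> days
by_month = series.resample("MS").mean()   # drill up further: -> months
by_weekday = series.groupby(idx.dayofweek).mean()  # periodic granularity

print(len(by_day), len(by_month), len(by_weekday))  # 90 3 7
```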
31

Gianniotis, Nikolaos. "Visualisation of structured data through generative probabilistic modeling." Thesis, University of Birmingham, 2008. http://etheses.bham.ac.uk//id/eprint/4803/.

Abstract:
This thesis is concerned with the construction of topographic maps of structured data. A probabilistic generative model-based approach is taken, inspired by the GTM algorithm. Depending on the data at hand, the form of a probabilistic generative model is specified that is appropriate for modelling the probability density of the data. A mixture of such models is formulated which is topographically constrained on a low-dimensional latent space. By constrained, we mean that each point in the latent space determines the parameters of one model via a smooth non-linear mapping; by topographic, we mean that neighbouring latent points generate similar parameters which address statistically similar models. The constrained mixture is trained to model the density of the structured data. A map is constructed by projecting each data item to a location on the latent space where the local latent points are associated with models that express a high probability of having generated the particular data item. We present three formulations for constructing topographic maps of structured data. Two of them are concerned with tree-structured data and employ hidden Markov trees and Markov trees as probabilistic generative models. The third approach is concerned with astronomical light curves from eclipsing binary stars and employs a physical-based model. The formulation of all three models is accompanied by experiments and analysis of the resulting topographic maps.
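A compact numerical sketch of the GTM-style construction the thesis builds on: latent grid points map through an RBF basis to the parameters of local generative models (isotropic Gaussians here, whereas the thesis substitutes hidden Markov trees or a physical model), and each data item is projected via the models' posterior responsibilities. All sizes and the toy data are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
latent = np.linspace(-1, 1, 20)[:, None]          # 1-D latent grid
centres = np.linspace(-1, 1, 5)[:, None]          # RBF centres
Phi = np.exp(-(latent - centres.T) ** 2 / 0.1)    # (20, 5) basis matrix
W = rng.normal(size=(5, 2))                       # smooth map to data space
mu = Phi @ W                                      # one Gaussian mean per latent point
beta = 10.0                                       # shared inverse variance

X = rng.normal(size=(100, 2))                     # toy data items
# Responsibilities: posterior over latent points for each data item.
sq = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1)   # (100, 20)
R = np.exp(-0.5 * beta * sq)
R /= R.sum(axis=1, keepdims=True)

projection = R @ latent                           # posterior-mean map position
print(projection.shape)                           # (100, 1)
```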
APA, Harvard, Vancouver, ISO, and other styles
32

Moore, Jeanne. "Visualisation of data to optimise strategic decision making." Master's thesis, University of Cape Town, 2017. http://hdl.handle.net/11427/25478.

Full text
Abstract:
1.1 Purpose of the study: The purpose of this research was to explain the principles that should be adopted when developing data visualisations for effective strategic decision making. 1.1.1 Main problem statement: Big data is produced at exponential rates and organisational executives may not possess the appropriate skill or knowledge to consume it for rigorous and timely strategic decision-making (Li, Tiwari, Alcock, & Bermell-Garcia, 2016; Marshall & De la Harpe, 2009; McNeely & Hahm, 2014). 1.1.2 Sub-problems: Organisational executives, including Chief Executive Officers (CEOs), Chief Financial Officers (CFOs) and Chief Operating Officers (COOs), possess unique and differing characteristics, including education, IT skill, goals and experiences, impacting on their strategic decision-making ability (Campbell, Chang, & Hosseinian-Far, 2015; Clayton, 2013; Krotov, 2015; Montibeller & Winterfeldt, 2015; Toker, Conati, Steichen, & Carenini, 2013; Xu, 2014). Furthermore, data visualisations are often not "fit-for-purpose", meaning they do not consistently or adequately guide executive strategic decision-making for organisational success (Nevo, Nevo, Kumar, Braasch, & Mathews, 2015). Finally, data visualisation development currently faces challenges, including resolving the interaction between data and human intuition, as well as the incorporation of big data to derive competitive advantage (Goes, 2014; Moorthy et al., 2015; Teras & Raghunathan, 2015). 1.1.3 Research Questions: Based on the challenges identified in sections 1.1.1 and 1.1.2, the researcher identified three research questions. RQ1: What do individual organisational executives value and use in data and data visualisation for strategic decision-making purposes? RQ2: How does data visualisation impact on an executive's ability to use and digest relevant information, including on his/her decision-making speed and confidence? RQ3: What elements should data analysts consider when developing data visualisations? 1.2 Rationale: The study will provide guidance to data analysts on how to develop and rethink their data visualisation methods, based on responses from organisational executives tasked with strategic decision-making. By performing this study, data analysts and executives will both benefit, as data analysts will gain knowledge and understanding of what executives value and use in data visualisations, while executives will have a platform to raise their requirements, improving the effectiveness of data visualisations for strategic decision-making. 1.3 Research Method: Qualitative research was the method used in this study. Qualitative research can be described as using words rather than precise measurements or calculations when performing data collection and analysis, and uses methods of observation, human experience and inquiry to explain the results of a study (Bryman, 2015; Myers, 2013). Its importance in social science research has increased, as there is a need to further understand the connection of the research study to people's emotions, culture and experiences (Creswell, 2013; Lub, 2015). This supports the ontological position of the researcher, which is an interpretivist one (Eriksson & Kovalainen, 2015; Ormston, Spencer, Barnard, & Snape, 2014). The epistemology was interpretivism, as the researcher interviewed executives and data analysts (Eriksson & Kovalainen, 2015; Ritchie, Lewis, Nicholls, & Ormston, 2013).
Furthermore, literature relating to decision-making supported the researcher's interpretivist view, as people generally make decisions based on what they know at the time (Betsch & Haberstroh, 2014). Therefore, the researcher cannot separate the participant from his/her views (Dhochak & Sharma, 2016). The population for this research comprised 13 executives tasked with strategic decision-making, as well as four data analysts who are either internal (permanent employees) or external (consultants) to organisations within the private sector. 1.4 Conclusion: RQ1: What do individual organisational executives value and use in data and data visualisation for strategic decision-making purposes? Based upon the findings, to answer RQ1, organisational executives must first be clear on the value of the decision. No benefit will be derived from data visualisation if the decision lacks value. The executives also stressed the importance of understanding how data relevancy was identified, based on the premise used by the data visualisation developers. Executives also value source data accuracy and preventing the one-dimensional view that arises from incorporating data from only one source. Hence the value of dynamism, or differing data angles, is important. In terms of the value in data visualisation, it must provide simplicity, clarity, intuitiveness, insightfulness, and gap, pattern and trending capability in a collaboration-enabling manner, supporting the requirements and decision objectives of the executive. However, an additional finding also identified the importance of the executive's knowledge of, and some familiarity with, the topic at hand. Finally, the presenter of the visualisation must also provide a guiding force to assist the executive in reaching a final decision, but not actually formulate the decision for the executive. RQ2: How does data visualisation impact on an executive's ability to use and digest relevant information, including on his/her decision-making speed and confidence? Based on the findings, to answer RQ2, themes of consumption, speed and confidence can be used; however, the final themes of use and trust overlap the initial three themes. Consumption is impacted by the data visualisation's ability to talk to the objective of the decision and the ability of the technology used to map the mental model and thinking processes of the decision-maker. Furthermore, data visualisations must not only identify the best decision, but also help the executive to define actionable steps to meet the goal of the decision. Executives appreciate the knowledge and skill of peers and prefer an open approach to decision-making, provided that each inclusion is to the benefit of the organisation as a whole. Benchmark statistics from similar industries also add to the consumption factor. Speed was only defined in terms of the data visualisation design, including the use of contrasting elements, such as colour, to highlight anomalies and areas of interest with greater speed. Furthermore, tolerance limits can also assist the executive in identifying where thresholds have been surpassed, or where areas of underperformance have occurred, focussing attention on problem areas within the organisation.
Finally, confidence is not only impacted by the data visualisation itself but is also affected by the executive's knowledge of the decision and the factors affecting the decision, the ability of the data visualisation presenter to understand, guide and add value to the decision process, the accuracy and integrity of the data presented, the familiarity of the technology used to present the data visualisation and the ability of the data visualisation to enable explorative and collaborative methods for decision-making. RQ3: What elements should data analysts consider when developing data visualisations? Based on the findings, to answer RQ3, the trust theme identifies qualitative factors, relating to the presenter. The value, consumption and confidence themes all point to the relevance of having an open and collaborative organisational culture that enables the effective use of data visualisation. Collaboration brings individuals together and the power of knowledgeable individuals can enhance the final decision. In terms of the presenter, his/her organisational ranking, handling of complexity and multiple audience requirements, use of data in the data visualisation, ability to answer questions, his/her confidence and maturity, professionalism, delivery of the message when presenting, knowledge of the subject presented, understanding of the executive's objectives and data visualisation methodology, creation of a "WOW" factor and understanding the data journey are all important considerations.
APA, Harvard, Vancouver, ISO, and other styles
33

Shovman, Mark. "Measuring comprehension of abstract data visualisations." Thesis, Abertay University, 2011. https://rke.abertay.ac.uk/en/studentTheses/4cfbdab1-0f91-4886-8b02-a4a8da48aa72.

Full text
Abstract:
Common visualisation techniques such as bar-charts and scatter-plots are not sufficient for visual analysis of large sets of complex multidimensional data. Technological advancements have led to a proliferation of novel visualisation tools and techniques that attempt to meet this need. A crucial requirement for efficient visualisation tool design is the development of objective criteria for visualisation quality, informed by research in human perception and cognition. This thesis presents a multidisciplinary approach to address this requirement, underpinning the design and implementation of visualisation software with the theory and methodology of cognitive science. An opening survey of visualisation practices in the research environment identifies three primary uses of visualisations: the detection of outliers, the detection of clusters and the detection of trends. This finding, in turn, leads to a formulation of a cognitive account of the visualisation comprehension processes, founded upon established theories of visual perception and reading comprehension. Finally, a psychophysical methodology for objectively assessing visualisation efficiency is developed and used to test the efficiency of a specific visualisation technique, namely an interactive three-dimensional scatterplot, in a series of four experiments. The outcomes of the empirical study are three-fold. On a concrete applicable level, three-dimensional scatterplots are found to be efficient in trend detection but not in outlier detection. On a methodological level, ‘pop-out’ methodology is shown to be suitable for assessing visualisation efficiency. On a theoretical level, the cognitive account of visualisation comprehension processes is enhanced by empirical findings, e.g. the significance of the learning curve parameters. All these provide a contribution to a ‘science of visualisation’ as a coherent scientific paradigm, both benefiting fundamental science and meeting an applied need.
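The 'pop-out' criterion used to assess visualisation efficiency can be illustrated with a hypothetical sketch (not the thesis's code): if a target pops out pre-attentively, response time stays roughly flat as the number of distractors grows, whereas a steeply positive slope indicates serial search:

    import numpy as np

    def popout_slope(set_sizes, response_times_ms):
        """Slope of response time vs set size (ms per item) via least squares.
        A near-zero slope suggests pre-attentive 'pop-out' processing."""
        slope, _intercept = np.polyfit(set_sizes, response_times_ms, 1)
        return slope

    # Hypothetical data from two conditions.
    sizes = [4, 8, 16, 32]
    print(popout_slope(sizes, [420, 425, 430, 428]))   # ~flat: pop-out
    print(popout_slope(sizes, [450, 560, 780, 1200]))  # steep: serial search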
APA, Harvard, Vancouver, ISO, and other styles
34

Henley, Lisa. "The quantification and visualisation of human flourishing." Thesis, University of Canterbury. School of Mathematics and Statistics, 2015. http://hdl.handle.net/10092/10441.

Full text
Abstract:
Economic indicators such as GDP have been the main measures of human progress since the first half of the last century. There is concern that continuing to measure our progress and/or wellbeing using measures that encourage consumption on a planet with limited resources may not be ideal. Alternative measures of human progress have a top-down approach, where the creators decide what the measure will contain. This work defines a 'bottom-up' methodology for measuring human progress that does not require manual data reduction. The technique allows visual overlay of other 'factors' that users may feel are particularly important. I designed and wrote a genetic algorithm which, in conjunction with regression analysis, was used to select the 'most important' variables from a large range of variables loosely associated with the topic. This approach could be applied in many areas where there are a lot of data from which an analyst must choose. Next, I designed and wrote a genetic algorithm to explore the evolution of a spectral clustering solution over time. Additionally, I designed and wrote a genetic algorithm with a multi-faceted fitness function, which I used to select the most appropriate clustering procedure from a range of hierarchical agglomerative methods. Evolving the algorithm over time was not successful in this instance, but the approach holds a lot of promise as an alternative to 'scoring' new data based on an original solution, and as a method for using alternative procedural options to those an analyst might normally select. The final solution allowed an evolution of the number of clusters with a fixed clustering method and variable selection over time. Profiling with various external data sources gave consistent and interesting interpretations to the clusters.
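A minimal sketch of the variable-selection component (hypothetical, not the thesis's implementation): a genetic algorithm evolves bit-masks over candidate variables, scoring each mask by the fit of a least-squares regression restricted to the selected columns, with a small penalty on subset size:

    import numpy as np

    rng = np.random.default_rng(0)

    def fitness(mask, X, y):
        if not mask.any():
            return -np.inf
        Xs = X[:, mask]
        beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        r2 = 1 - (y - Xs @ beta).var() / y.var()
        return r2 - 0.01 * mask.sum()            # penalise large subsets

    def ga_select(X, y, pop=30, gens=40, p_mut=0.05):
        n = X.shape[1]
        population = rng.random((pop, n)) < 0.5  # random initial bit-masks
        for _ in range(gens):
            scores = np.array([fitness(m, X, y) for m in population])
            parents = population[np.argsort(scores)[::-1][: pop // 2]]
            children = []
            for _ in range(pop - len(parents)):
                a, b = parents[rng.integers(len(parents), size=2)]
                cut = rng.integers(1, n)                    # one-point crossover
                child = np.concatenate([a[:cut], b[cut:]])
                child ^= rng.random(n) < p_mut              # bit-flip mutation
                children.append(child)
            population = np.vstack([parents, children])
        return max(population, key=lambda m: fitness(m, X, y))  # boolean column mask

    # Usage on hypothetical data: X = rng.normal(size=(100, 12)); y = X[:, 2] - X[:, 7]
    # mask = ga_select(X, y) should tend to select columns 2 and 7.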
APA, Harvard, Vancouver, ISO, and other styles
35

Barton, Gabor J. "Visualisation of clinical gait analysis data using neural networks." Thesis, Liverpool John Moores University, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.436553.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Dearden, J. "Using interactive data visualisation to explore dynamic urban models." Thesis, University College London (University of London), 2012. http://discovery.ucl.ac.uk/1356102/.

Full text
Abstract:
Dynamic urban models embody current theories about how urban systems evolve. Exploring the consequences of such theory for a particular urban system requires an urban simulation, because the theories are necessarily complicated in order to deal with the nonlinearity and complexity found in urban systems. Making sense of the simulation output is challenging, and in this thesis we explore the use of interactive visualisation and participation in a simulation to help a user interpret it. Seven different simulation models are developed and explored using this methodology and applied to present-day Greater London and South Yorkshire and the historical United States.
APA, Harvard, Vancouver, ISO, and other styles
37

Welch, SJ. "Interactive visualisation techniques for data mining of satellite imagery." Thesis, Honours thesis, University of Tasmania, 2006. https://eprints.utas.edu.au/933/1/front_matter_welch.pdf.

Full text
Abstract:
Supervised classification of satellite imagery largely removes the user from the information extraction process. Visualisation is an often ignored means by which users may interactively explore the complex patterns and relationships in satellite imagery. Classification can be considered a 'hypothesis testing' form of analysis; Visual Data Mining allows for dynamic hypothesis generation, testing and revision based on a human user's perception. In this study, Visual Data Mining was applied to the classification of satellite imagery. After reviewing appropriate techniques and literature, a tool was developed for the visual exploration and mining of satellite image data. This tool augments existing semi-automatic data mining techniques with visualisation capabilities. The tool was developed in IDL as an extension to ENVI, a popular remote sensing package, and was used to conduct a visual data mining analysis of high-resolution imagery of Heard Island. This process demonstrated the positive impacts of visualisation and visual data mining when used in the analysis of satellite imagery: increased opportunity for understanding, and hence confidence in, classification results; increased opportunity for the discovery of subtle patterns in satellite imagery; and the ability to create, test and revise hypotheses based on visual assessment.
APA, Harvard, Vancouver, ISO, and other styles
38

Walker, Arron R. "Automated spatial information retrieval and visualisation of spatial data." Thesis, Queensland University of Technology, 2007. https://eprints.qut.edu.au/17258/1/Arron_Robert_Walker_Thesis.pdf.

Full text
Abstract:
An increasing amount of freely available Geographic Information System (GIS) data on the Internet has stimulated recent research into Spatial Information Retrieval (SIR). Typically, SIR looks at the problem of retrieving spatial data on a dataset-by-dataset basis. In practice, however, GIS datasets are generally not analysed in isolation. More often than not, multiple datasets are required to create a map for a particular analysis task. To do this using current SIR techniques, each dataset is retrieved one by one using traditional retrieval methods and manually added to the map. To automate map creation, the traditional SIR paradigm of matching a query to a single dataset type must be extended to include discovering relationships between different dataset types. This thesis presents a Bayesian inference retrieval framework that incorporates expert knowledge in order to retrieve all relevant datasets and automatically create a map given an initial user query. The framework consists of a Bayesian network that utilises causal relationships between GIS datasets. A series of Bayesian learning algorithms are presented that automatically discover these causal linkages from historic expert knowledge about GIS datasets. This new retrieval model improves support for complex and vague queries through the discovered dataset relationships. In addition, the framework learns which datasets are best suited to particular query input through feedback supplied by the user. This thesis evaluates the new Bayesian framework for SIR. This was achieved by utilising a test set of queries and responses and measuring the performance of the new algorithms against conventional algorithms. This contribution will increase the performance and efficiency of knowledge extraction from GIS by allowing users to focus on interpreting data instead of finding which data are relevant to their analysis. In addition, it will allow GIS to reach non-technical users.
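A toy reading of the retrieval idea (layer names and counts are invented for illustration): companion datasets are suggested when their conditional probability of co-occurring with the queried dataset type on past expert-made maps exceeds a threshold:

    # Hypothetical co-occurrence counts mined from past expert-made maps:
    # counts[a][b] = number of maps containing layer a that also contain b.
    counts = {
        "flood_zones": {"rivers": 18, "elevation": 15, "land_use": 6},
    }
    maps_with = {"flood_zones": 20}

    def suggest_layers(query, threshold=0.5):
        """Return layers with P(layer | query) at or above the threshold."""
        co = counts.get(query, {})
        n = maps_with.get(query, 0)
        return [layer for layer, c in co.items() if n and c / n >= threshold]

    print(suggest_layers("flood_zones"))   # ['rivers', 'elevation']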
APA, Harvard, Vancouver, ISO, and other styles
39

Walker, Arron R. "Automated spatial information retrieval and visualisation of spatial data." Queensland University of Technology, 2007. http://eprints.qut.edu.au/17258/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Schurmann, Paul R. "Automatic flowchart displays for software visualisation." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 1998. https://ro.ecu.edu.au/theses/985.

Full text
Abstract:
Understanding large software projects and maintaining them can be a time-consuming process. For instance, when changes are made to source code, corresponding changes have to be made to any related documentation. One large part of the documentation process is the creation and management of diagrams. Currently there are very few automated diagramming systems that can produce diagrams from source code, and the majority of them require a significant amount of time to generate diagrams. This research investigates the process of creating flowchart diagrams from source code and how this process can be fully automated. Automating the diagram creation process can save the developer both time and money; by saving time, it allows the developer to concentrate on more critical areas of the project. This thesis involves the design and implementation of a prototype software tool that allows the user to quickly and easily construct meaningful diagrams from source code. The project focuses on translating the Pascal language into flowcharts. The emphasis of the project is on the arrangement of the flowchart, with the goal of creating clear and understandable diagrams.
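A heavily simplified illustration of the core mapping (hypothetical, not the prototype itself): Pascal control keywords determine the flowchart node shape emitted for each statement, after which the arrangement problem begins:

    # One flowchart node per statement line; block delimiters produce no node.
    NODE_SHAPES = {
        "if": "decision",       # diamond
        "while": "decision",
        "for": "decision",
        "begin": None,
        "end": None,
    }

    def classify_line(line):
        """Map a line of Pascal source to a flowchart node shape."""
        word = line.strip().split(" ")[0].rstrip(";").lower()
        if word in NODE_SHAPES:
            return NODE_SHAPES[word]
        return "process" if word else None   # plain rectangle

    src = ["x := 0;", "while x < 10 do", "begin", "x := x + 1;", "end;"]
    print([(l, classify_line(l)) for l in src])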
APA, Harvard, Vancouver, ISO, and other styles
41

Cena, Bernard Maria. "Reconstruction for visualisation of discrete data fields using wavelet signal processing." University of Western Australia. Dept. of Computer Science, 2000. http://theses.library.uwa.edu.au/adt-WU2003.0014.

Full text
Abstract:
The reconstruction of a function and its derivative from a set of measured samples is a fundamental operation in visualisation. Multiresolution techniques, such as wavelet signal processing, are instrumental in improving the performance and algorithm design for data analysis, filtering and processing. This dissertation explores the possibilities of combining traditional multiresolution analysis and processing features of wavelets with the design of appropriate filters for reconstruction of sampled data. On the one hand, a multiresolution system allows data feature detection, analysis and filtering. Wavelets have already been proven successful in these tasks. On the other hand, a choice of discrete filter which converges to a continuous basis function under iteration permits efficient and accurate function representation by providing a “bridge” from the discrete to the continuous. A function representation method capable of both multiresolution analysis and accurate reconstruction of the underlying measured function would make a valuable tool for scientific visualisation. The aim of this dissertation is not to try to outperform existing filters designed specifically for reconstruction of sampled functions. The goal is to design a wavelet filter family which, while retaining properties necessary to perform multiresolution analysis, possesses features to enable the wavelets to be used as efficient and accurate “building blocks” for function representation. The application to visualisation is used as a means of practical demonstration of the results. Wavelet and visualisation filter design is analysed in the first part of this dissertation and a list of wavelet filter design criteria for visualisation is collated. Candidate wavelet filters are constructed based on a parameter space search of the BC-spline family and direct solution of equations describing filter properties. Further, a biorthogonal wavelet filter family is constructed based on point and average interpolating subdivision and using the lifting scheme. The main feature of these filters is their ability to reconstruct arbitrary degree piecewise polynomial functions and their derivatives using measured samples as direct input into a wavelet transform. The lifting scheme provides an intuitive, interval-adapted, time-domain filter and transform construction method. A generalised factorisation for arbitrary primal and dual order point and average interpolating filters is a result of the lifting construction. The proposed visualisation filter family is analysed quantitatively and qualitatively in the final part of the dissertation. Results from wavelet theory are used in the analysis which allow comparisons among wavelet filter families and between wavelets and filters designed specifically for reconstruction for visualisation. Lastly, the performance of the constructed wavelet filters is demonstrated in the visualisation context. One-dimensional signals are used to illustrate reconstruction performance of the wavelet filter family from noiseless and noisy samples in comparison to other wavelet filters and dedicated visualisation filters. The proposed wavelet filters converge to basis functions capable of reproducing functions that can be represented locally by arbitrary order piecewise polynomials. They are interpolating, smooth and provide asymptotically optimal reconstruction in the case when samples are used directly as wavelet coefficients.
The reconstruction performance of the proposed wavelet filter family approaches that of continuous spatial domain filters designed specifically for reconstruction for visualisation. This is achieved in addition to retaining multiresolution analysis and processing properties of wavelets.
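The lifting scheme at the heart of the construction can be sketched with the simplest (Haar-like) predict and update steps; the dissertation's point- and average-interpolating filters generalise these two steps to higher orders:

    import numpy as np

    def lifting_forward(signal):
        """One level of a Haar-style lifting transform (length must be even)."""
        even, odd = signal[0::2].astype(float), signal[1::2].astype(float)
        detail = odd - even            # predict: each even sample predicts its neighbour
        approx = even + detail / 2     # update: preserve the running average
        return approx, detail

    def lifting_inverse(approx, detail):
        even = approx - detail / 2
        odd = even + detail
        out = np.empty(2 * len(even))
        out[0::2], out[1::2] = even, odd
        return out

    x = np.array([2.0, 4.0, 6.0, 8.0])
    a, d = lifting_forward(x)
    print(a, d)                        # pairwise averages and differences
    print(lifting_inverse(a, d))       # reconstructs x exactly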
APA, Harvard, Vancouver, ISO, and other styles
42

Rezayan, Leo A. "Making collaborative data physicalisations." Thesis, Queensland University of Technology, 2019. https://eprints.qut.edu.au/129458/2/Leo_Rezayan_Thesis.pdf.

Full text
Abstract:
This project investigated physical data presentations, or 'physicalisations', to explore ways of presenting data in physical, three-dimensional form and to understand how these would be received by users. The project first reviewed the fields of tangible interaction and collaboration to identify a series of concepts to support the design of collaborative data physicalisations. Next, the research took a research-through-design and reflective approach to create a new collaborative data physicalisation system. It then used observations and focus groups to evaluate the design's utility and explored how people employed physicalisation as part of their collaborative sense-making and meaning-making processes.
APA, Harvard, Vancouver, ISO, and other styles
43

Coetzee, Dirk. "Visualisation of PF firewall logs using open source." Thesis, Rhodes University, 2015. http://hdl.handle.net/10962/d1018552.

Full text
Abstract:
If you cannot measure, you cannot manage. This is an age-old saying, but still very true, especially within the current South African cybercrime scene and the ever-growing Internet footprint. Due to the significant increase in cybercrime across the globe, information security specialists are starting to see the intrinsic value of logs that can 'tell a story'. Logs do not only tell a story, but also provide a tool to measure a normally dark force within an organisation. The collection of current logs from installed systems, operating systems and devices is imperative in the event of a hacking attempt, data leak or even data theft, whether the attempt is successful or unsuccessful. No logs mean no evidence, and in many cases not even the opportunity to find the mistake or fault in the organisation's defence systems. It remains difficult to choose which logs are required by an organisation. A number of questions should be considered: should a centralised or decentralised approach for collecting these logs be followed, or a combination of both? How many events will be collected, how much additional bandwidth will be required and will the log collection be near real time? How long must the logs be saved, and what hashing and encryption (for data integrity), if any, should be used? Lastly, what system must be used to correlate, analyse, and make alerts and reports available? This thesis addresses these myriad questions, examining the current lack of log analysis, practical implementations in modern organisations, and also how a need for the latter can be fulfilled by means of a basic approach. South African organisations must use the technology at hand in order to know what electronic data are sent in and out of their networks. Concentrating only on FreeBSD PF firewall logs, this thesis demonstrates that excellent results are possible when logs are collected to obtain a visual display of what data is traversing the corporate network and which parts of that data pose a threat to it. This threat is easily determined via a visual interpretation of statistical outliers. This thesis aims to show that in the field of corporate data protection, if you can measure, you can manage.
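A small sketch of the 'measure to manage' idea (hypothetical; real pflog records are binary and are typically decoded with tcpdump before any such analysis): tally blocked events per source address and flag sources whose counts are statistical outliers:

    from collections import Counter
    import statistics

    def blocked_counts(records):
        """Tally blocked events per source IP from pre-parsed (action, src) pairs."""
        return Counter(src for action, src in records if action == "block")

    def outliers(counts, z=3.0):
        values = list(counts.values())
        if len(values) < 2:
            return []
        mu, sigma = statistics.mean(values), statistics.stdev(values)
        return [ip for ip, c in counts.items()
                if sigma and (c - mu) / sigma > z]

    records = ([("block", "10.0.0.5")] * 120
               + [("block", "10.0.0.%d" % i) for i in range(20)]
               + [("pass", "10.0.0.9")] * 50)
    print(outliers(blocked_counts(records)))   # ['10.0.0.5'] -- worth a closer look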
APA, Harvard, Vancouver, ISO, and other styles
44

Ruz, Heredia Gonzalo Andres. "Bayesian networks for classification, clustering, and high-dimensional data visualisation." Thesis, Cardiff University, 2008. http://orca.cf.ac.uk/54722/.

Full text
Abstract:
This thesis presents new developments for a particular class of Bayesian networks which are limited in the number of parent nodes that each node in the network can have. This restriction yields structures which have low complexity (number of edges), thus enabling the formulation of optimal learning algorithms for Bayesian networks from data. The new developments are focused on three topics: classification, clustering, and high-dimensional data visualisation (topographic map formation). For classification purposes, a new learning algorithm for Bayesian networks is introduced which generates simple Bayesian network classifiers. This approach creates a completely new class of networks which previously was limited mostly to two well-known models, the naive Bayesian (NB) classifier and the Tree Augmented Naive Bayes (TAN) classifier. The proposed learning algorithm enhances the NB model by adding a Bayesian monitoring system. Therefore, the complexity of the resulting network is determined according to the input data, yielding structures which model the data distribution in a more realistic way and improve classification performance. Research on Bayesian networks for clustering has not been as popular as for classification tasks. A new unsupervised learning algorithm for three types of Bayesian network classifiers, which enables them to carry out clustering tasks, is introduced. The resulting models can perform cluster assignments in a probabilistic way using the posterior probability of a data point belonging to one of the clusters. A key characteristic of the proposed clustering models, which traditional clustering techniques do not have, is the ability to show the probabilistic dependencies amongst the variables for each cluster. This feature enables a better understanding of each cluster. The final part of this thesis introduces one of the first developments for Bayesian networks to perform topographic mapping. A new unsupervised learning algorithm for the NB model is presented which enables the projection of high-dimensional data into a two-dimensional space for visualisation purposes. The Bayesian network formalism of the model allows the learning algorithm to generate a density model of the input data and provides a cost function to monitor convergence during the training process. These important features are missing from other mapping techniques, a limitation that has been overcome in this research.
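For reference, a minimal naive Bayes classifier over discrete features, the base model the thesis extends; the structural enhancements (the Bayesian monitoring system and TAN-style edges) are beyond this sketch:

    from collections import Counter, defaultdict

    def train_nb(rows, labels):
        """rows: list of tuples of discrete feature values."""
        prior = Counter(labels)
        cond = defaultdict(Counter)            # (feature_idx, class) -> value counts
        for row, y in zip(rows, labels):
            for i, v in enumerate(row):
                cond[(i, y)][v] += 1
        return prior, cond

    def predict(row, prior, cond, alpha=1.0):
        best, best_score = None, float("-inf")
        for y, n in prior.items():
            score = n                          # proportional to P(y)
            for i, v in enumerate(row):
                c = cond[(i, y)]
                # Laplace smoothing; len(c) + 1 is a crude vocabulary estimate.
                score *= (c[v] + alpha) / (sum(c.values()) + alpha * (len(c) + 1))
            if score > best_score:
                best, best_score = y, score
        return best

    X = [("sunny", "hot"), ("rainy", "cool"), ("sunny", "cool"), ("rainy", "hot")]
    y = ["out", "in", "out", "in"]
    prior, cond = train_nb(X, y)
    print(predict(("sunny", "hot"), prior, cond))   # 'out'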
APA, Harvard, Vancouver, ISO, and other styles
45

Bährecke, Niklas. "Automatic Classification and Visualisation of Gas from Infrared Video Data." Thesis, KTH, Skolan för teknik och hälsa (STH), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-183546.

Full text
Abstract:
Optical gas imaging denotes the visualisation of gases by means of an infrared camera, which allows operators to quickly, easily, and safely scan a large area. It therefore plays a major role in the early detection and repair of gas leaks in various environments within the petrochemical industry, such as processing plants and pipelines, but also in production facilities and hospitals, thereby helping to avert damage to the environment as well as to the health and safety of workers and inhabitants of nearby residential areas. The current generation of thermal gas cameras employs a so-called high-sensitivity mode, based on frame differencing, to increase the visibility of gas plumes. However, this method often results in image degradation through loss of orientation, distortion, and additional noise. Considering the increasing prevalence and falling costs of IR gas cameras – entailing a growing number of inexperienced users – a more intuitive and user-friendly system to visualise gas would be a useful feature for the next generation of IR gas cameras. One example would be a system that retains the original infrared video images and highlights the gas cloud, providing the user with a clear and distinct visualisation of gas on the camera's display. This thesis discusses the design of such an automatic gas detection and visualisation framework based on machine learning and computer vision methods, in which moving objects in video images are detected and classified as gas or non-gas based on appearance and spatiotemporal features. The main goal was to conduct a proof-of-concept study of this method, which included gathering examples for training a classifier as well as implementing the framework and evaluating several feature descriptors – both static and dynamic – with regard to their classification performance in gas detection in video images. Depending on the application scenario, the methods evaluated in this study are capable of reliably detecting gas.
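A minimal sketch of such a detection front end (hypothetical, OpenCV-based; frames are assumed to be single-channel 8-bit IR images): difference consecutive frames, extract moving regions, and compute simple per-region features for a pre-trained gas/non-gas classifier:

    import cv2

    def moving_regions(prev_frame, frame, thresh=15, min_area=50):
        """Regions that changed between two consecutive greyscale IR frames."""
        diff = cv2.absdiff(frame, prev_frame)
        _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
        # OpenCV 4.x: findContours returns (contours, hierarchy).
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        return [c for c in contours if cv2.contourArea(c) >= min_area]

    def region_features(frame, contour):
        """A few simple appearance features per region; the thesis evaluates
        richer static and spatiotemporal descriptors than these."""
        x, y, w, h = cv2.boundingRect(contour)
        patch = frame[y:y + h, x:x + w]
        return [float(patch.mean()), float(patch.std()),
                w / max(h, 1), cv2.contourArea(contour)]

    # A pre-trained classifier would then label each region 'gas' / 'non-gas';
    # training that classifier is the subject of the thesis.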
APA, Harvard, Vancouver, ISO, and other styles
46

Synnott, Jonathan. "Analysis, visualisation and simulation of sensor data within intelligent environments." Thesis, Ulster University, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.601204.

Full text
Abstract:
Reductions in fertility combined with increases in life expectancy are resulting in a globally ageing population. The result is an increasing strain on healthcare resources as members of the older population are more prone to suffer from chronic illness, hence requiring long-term care and management. One current focus of research is to investigate methods of alleviating the increased strain on healthcare resources through the use of intelligent environments (IEs). IEs incorporate the use of sensor technology to facilitate long-term, home-based assessment of health conditions with the goal of minimising the amount of direct clinical supervision required whilst maximising the frequency and objectivity of patient data collection. This thesis presents the design, development, testing and evaluation of novel methods for the assessment, visualisation and simulation of sensor data generated within IEs. Details of two novel methods for the objective assessment of the severity of the motor symptoms associated with Parkinson's disease are presented. The first method utilises the Nintendo Wii Remote for interaction with motor tasks and the second method uses a computer vision-based approach to monitor activity of daily living performance. Both methods were capable of quantifying the presence of tremor during activity performance. A novel method for IE data visualisation is also presented in the thesis. This method was capable of visualising spatiotemporal data trends using a novel density ring format within 2-dimensional (2D) virtual environments (VEs). Testing on data collected from an active smart lab illustrated the ability of the approach to highlight typical and atypical activity trends. Additionally, a novel method for the simulation of IE data is presented. This method was capable of generating simulated IE datasets by navigating an avatar through user-created 2D VEs, enabling rapid prototyping without access to a physical IE implementation.
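One plausible tremor measure can be sketched as follows (hypothetical; the thesis's exact metric is not reproduced here): the share of accelerometer signal power falling in the 4-6 Hz band commonly associated with Parkinsonian tremor:

    import numpy as np

    def tremor_band_power(accel, fs, band=(4.0, 6.0)):
        """Fraction of spectral power inside the tremor band.
        accel: 1-D acceleration samples; fs: sampling rate in Hz."""
        accel = accel - accel.mean()                 # remove gravity / DC offset
        spectrum = np.abs(np.fft.rfft(accel)) ** 2
        freqs = np.fft.rfftfreq(len(accel), d=1.0 / fs)
        in_band = (freqs >= band[0]) & (freqs <= band[1])
        return spectrum[in_band].sum() / spectrum.sum()

    fs = 100
    t = np.arange(0, 10, 1 / fs)
    steady = np.sin(2 * np.pi * 0.5 * t)             # slow voluntary movement
    tremor = steady + 0.4 * np.sin(2 * np.pi * 5 * t)
    print(tremor_band_power(steady, fs))             # near 0
    print(tremor_band_power(tremor, fs))             # substantially higher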
APA, Harvard, Vancouver, ISO, and other styles
47

Al-Yami, Mesfer. "Analysis and visualisation of digital elevation data for catchment management." Thesis, University of East Anglia, 2014. https://ueaeprints.uea.ac.uk/53441/.

Full text
Abstract:
River catchments are an obvious scale for soil and water resources management, since their shape and characteristics control the pathways and fluxes of water and sediment. Digital Elevation Models (DEMs) are widely used to simulate overland water paths in hydrological models. However, all DEMs are approximations to some degree and it is widely recognised that their characteristics can vary according to attributes such as spatial resolution and data sources (e.g. contours, optical or radar imagery). As a consequence, it is important to assess the 'fitness for purpose' of different DEMs and evaluate how uncertainty in the terrain representation may propagate into hydrological derivatives. The overall aim of this research was to assess accuracies and uncertainties associated with seven different DEMs (ASTER GDEM1, SRTM, Landform Panorama (OS 50), Landform Profile (OS 10), LandMap, NEXTMap and Bluesky DTMs) and to explore the implications of their use in hydrological analysis and catchment management applications. The research focused on the Wensum catchment in Norfolk, UK. The research initially examined the accuracy of the seven DEMs and, subsequently, a subset of these (SRTM, OS 50, OS 10, NEXTMap and Bluesky) was used to evaluate different techniques for determining an appropriate flow accumulation threshold to delineate channel networks in the study catchment. These results were then used to quantitatively compare the positional accuracy of drainage networks derived from different DEMs. The final part of the thesis conducted an assessment of soil erosion and diffuse pollution risk in the study catchment using NEXTMap and OS 50 data with SCIMAP and RUSLE modelling techniques. Findings from the research demonstrate that a number of nationally available DEMs in the UK are simply not 'fit for purpose' as far as local catchment management is concerned. Results indicate that DEM source and resolution have considerable influence on modelling of hydrological processes, suggesting that for a lowland catchment the availability of a high resolution DEM (5m or better) is a prerequisite for any reliable assessment of the consequences of implementing particular land management measures. Several conclusions can be drawn from the research. (1) From the collection of DEMs used in this study, the NEXTMap 5m DTM was found to be the best for representing catchment topography and is likely to prove a superior product for similar applications in other lowland catchments across the UK. (2) It is important that error modelling techniques are more routinely employed by GIS users, particularly where the fitness for purpose of a data source is not well-established. (3) GIS modelling tools that can be used to test and trial alternative management options (e.g. for reducing soil erosion) are particularly helpful in simulating the effect of possible environmental improvement measures.
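The channel-delineation step can be sketched in miniature (hypothetical values): cells whose flow accumulation exceeds a threshold are classed as channel cells, so the chosen threshold directly controls the density of the derived drainage network:

    import numpy as np

    def delineate_channels(flow_acc, threshold):
        """Boolean channel mask from a flow-accumulation grid (cell counts)."""
        return flow_acc >= threshold

    def drainage_density(channel_mask, cell_size_m, area_km2):
        """Total channel length per unit area (km / km^2), approximating
        each channel cell as one cell-length of channel."""
        length_km = channel_mask.sum() * cell_size_m / 1000.0
        return length_km / area_km2

    flow_acc = np.array([[1, 1, 2], [1, 3, 6], [1, 1, 9]])
    mask = delineate_channels(flow_acc, threshold=3)
    print(mask.sum(), "channel cells")            # 3 channel cells
    # 3 x 3 grid of 5 m cells covers 225 m^2 = 0.000225 km^2.
    print(drainage_density(mask, cell_size_m=5.0, area_km2=0.000225))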
APA, Harvard, Vancouver, ISO, and other styles
48

Khan, Wajid. "Information visualisation and data analysis using web mash-up systems." Thesis, University of Bedfordshire, 2014. http://hdl.handle.net/10547/584232.

Full text
Abstract:
The arrival of E-commerce systems has contributed greatly to the economy and has played a vital role in collecting a huge amount of transactional data. It is becoming increasingly difficult to analyse business and consumer behaviour given the production of such a colossal volume of data. Enterprise 2.0 has the ability to store and create an enormous amount of transactional data; the purpose for which the data were collected can easily become disassociated as the essential information goes unnoticed in large and complex data sets. Information overflow is a major contributor to this dilemma. In the current environment, where hardware systems have the ability to store such large volumes of data and software systems have the capability of substantial data production, data exploration problems are on the rise. The problem is not with the production or storage of data but with the effectiveness of the systems and techniques by which essential information can be retrieved from complex data sets in a comprehensive and logical way as questions are asked of the data. Using existing information retrieval systems and visualisation tools, the more specific the questions asked, the more definitive and unambiguous the visualised results that can be attained; but when it comes to complex and large data sets there are no elementary or simple questions. Therefore a profound information visualisation model and system is required to analyse complex data sets through data analysis and information visualisation, making it possible for decision makers to identify the expected and discover the unexpected. In order to address complex data problems, a comprehensive and robust visualisation model and system is introduced. The visualisation model consists of four major layers: (i) acquisition and data analysis, (ii) data representation, (iii) user and computer interaction and (iv) results repositories. There are major contributions in all four layers, but particularly in data acquisition and data representation. Multiple-attribute and dimensional data visualisation techniques are identified in Enterprise 2.0 and Web 2.0 environments. Transactional tagging and linked data are unearthed, which is a novel contribution to information visualisation. The visualisation model and system is first realised as a tangible software system, which is then validated on several large data sets of different types in three experiments. The first experiment is based on the large Royal Mail postcode data set. The second experiment is based on a large transactional data set in an enterprise environment, while the same data set is processed in a non-enterprise environment. The system interaction, facilitated through new mashup techniques, enables users to interact more fluently with data and the representation layer. The results are exported into various reusable formats and retrieved for further comparison and analysis purposes. The information visualisation model introduced in this research is a compact process for any size and type of data set, which is a major contribution to information visualisation and data analysis. Advanced data representation techniques are employed using various web mashup technologies. New visualisation techniques have emerged from the research, such as transactional tagging visualisation and linked data visualisation. The information visualisation model and system is extremely useful in addressing complex data problems, with strategies that are easy to interact with and integrate.
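A skeletal reading of the four-layer model (layer names follow the abstract; all function bodies are placeholders, not the thesis's system) might look like this:

    # Each layer is a stage in a pipeline; real implementations would be far richer.
    def acquire_and_analyse(source):         # layer (i): acquisition and data analysis
        return [row for row in source if row is not None]

    def represent(data):                     # layer (ii): map data to a visual form
        return {"type": "tag_cloud", "items": data}

    def interact(view, query):               # layer (iii): user refines the view
        view["items"] = [i for i in view["items"] if query in str(i)]
        return view

    def store_result(view, repository):      # layer (iv): reusable results repository
        repository.append(view)
        return repository

    repo = []
    view = represent(acquire_and_analyse(["tag:books", None, "tag:music"]))
    store_result(interact(view, "books"), repo)
    print(repo)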
APA, Harvard, Vancouver, ISO, and other styles
49

Sereno, Mickaël. "Collaborative Data Exploration and Discussion Supported by Augmented Reality." Electronic Thesis or Diss., université Paris-Saclay, 2021. http://www.theses.fr/2021UPASG106.

Full text
Abstract:
I studied the benefits and limitations of Augmented Reality (AR) Head-Mounted Displays (AR-HMDs) for collaborative 3D data exploration. Prior to conducting any projects, I saw in AR-HMDs benefits arising from their immersive features: they merge the interactive, visualisation, collaborative, and physical spaces of the users. Multiple collaborators can then see and interact directly with 3D visuals anchored within the users' physical space. AR-HMDs usually rely on stereoscopic 3D displays, which provide additional depth cues compared to 2D screens, helping users understand 3D datasets better. As AR-HMDs allow users to see each other within the workspace, seamless switches between discussion and exploration phases are possible. Interaction within those visualisations is fast, direct and intuitive in 3D, and yields cues about one's intentions to others; for example, moving an object by grabbing it is a strong cue about what a person intends to do with it. Those cues are important for everyone to understand what is currently going on. Finally, by not occluding the users' physical space, usual but important tools such as billboards and workstations performing simulations remain easily accessible within this environment without taking off the headsets. That being said, and while AR-HMDs have been studied for decades, their computing power before the release of the HoloLens in 2016 was not sufficient for efficient exploration of 3D data such as ocean datasets. Moreover, previous researchers were more interested in how to make AR possible than in how to use AR. Despite all the qualities one might expect of AR-HMDs, there was therefore almost no work discussing the exploration of such 3D datasets. Finally, AR-HMDs do not provide the 2D input commonly used with established exploration tools such as ParaView or CAD software, with which scientists and engineers are already proficient. In this thesis I therefore theorise in which situations AR-HMDs are preferable. They seem preferable when the purpose is to share insights with multiple collaborators and to explore patterns together, where the exploration tools can be minimal compared to what workstations provide, and where most of the prior work and simulations can be done beforehand. I thus combine AR-HMDs with multi-touch tablets: the AR-HMDs merge the visualisation, some 3D interactions, and the collaborative spaces within the users' physical space, while the tablets provide 2D input and the usual graphical user interfaces that most software offers (e.g., buttons and menus). I then studied the low-level interactions necessary for data exploration, concerning the selection of points and regions inside 3D data using this hybrid system. The techniques my co-authors and I chose possess different levels of directness, which we investigated. As this PhD aims at studying AR-HMDs within collaborative environments, I also studied their capacity to adapt the visual to each collaborator for a given anchored 3D object, similar to the relaxed "What-You-See-Is-What-I-See" that allows, for example, multiple users to see and edit different parts of a shared document simultaneously. Finally, I am currently (i.e., at the time of writing) studying the use of this new system for the collaborative 3D exploration of the ocean datasets that my collaborators at Helmholtz-Zentrum Geesthacht, Germany, are working on.
This PhD provides a state of the art of AR used within collaborative environments. It gives insights into the impact of 3D interaction directness on 3D data exploration, and it offers designers insights into the use of AR for collaborative scientific data exploration, with a focus on oceanography.
APA, Harvard, Vancouver, ISO, and other styles
50

Griffith, Bridget Catherine Hamilton. "Development and usability testing of a data visualisation platform for an African trauma data registry." Master's thesis, University of Cape Town, 2018. http://hdl.handle.net/11427/29873.

Full text
Abstract:
Introduction Trauma is a significant contributor to the global burden of mortality and disease, especially in sub-Saharan Africa. The methods for tracking, recording, and analysing the incidence and causes of trauma are underdeveloped. To address this, the African Federation for Emergency Medicine (AFEM) developed a trauma form and Trauma Data Registry to collect trauma data in multiple sites in sub-Saharan Africa. We undertook a study to create, and assess the usability and functionality of, a trauma data visualisation platform for use in conjunction with the Trauma Data Registry. Methods We created a web-based trauma data visualisation platform for use with the AFEM Trauma Data Registry. This study involves a usability assessment of the AFEM Trauma Data Visualisation Platform to determine the specific website features and analytical needs of African trauma research facilities. This was done by surveying individuals from healthcare facilities that are currently using the AFEM Trauma Form. Two types of questionnaires were administered: Questionnaire I gathered information on the study population and their expectations for the platform, and Questionnaire II assessed the usability of the platform after it was introduced. Surveys took place in person and online, with the last group of questionnaires being administered on-site at the healthcare facility. Data were captured via SurveyMonkey online and via paper survey. The results were entered into Excel and analysed using descriptive statistics in Stata Version 14. Results A total of 45 healthcare practitioners from eight countries participated in the background survey. The greatest proportions were trained in Tanzania (14, 31.1%) and Ethiopia (14, 31.1%). The mean age of participants was 32.6 (SD=6.6). The mean number of years reported working at their current facility was 3.7 (SD=3.5). The greatest numbers of participants in the survey were physicians (22, 48.9%) and specialists (11, 24.4%). Over half (53.3%, n=24) reported that they had moderate experience with data analysis, and the majority reported that they had fewer than three publications. A total of 34 healthcare practitioners participated in the usability study. The mean scores for the usability questionnaire portion were high, with all of the scores being above 6. Major positive themes in the participant comments included ease of use and time saving; major negative themes included feasibility concerns; and comments suggesting specific variables to add were common. Discussion There is considerable heterogeneity in the data analysis and technology experience of participants. The participants were overall satisfied with the Trauma Data Platform. Participants' comments and suggestions on elements to add indicate that there is still work to be done to design a Trauma Data Platform that is suitable for this setting. Conclusions Overall satisfaction with the Trauma Data Platform was high, and the user comments and suggestions will be incorporated into future versions of the platform. This research highlights the importance of considering the feasibility of health technology in its introduction.
APA, Harvard, Vancouver, ISO, and other styles