Dissertations / Theses on the topic 'Visualization – Data processing'

Consult the top 50 dissertations / theses for your research on the topic 'Visualization – Data processing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc. You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1. Huang, Shiping. "Exploratory visualization of data with variable quality." Link to electronic thesis, 2005. http://www.wpi.edu/Pubs/ETD/Available/etd-01115-225546/.

2. Gomes, Ricardo Rafael Baptista. "Long-term biosignals visualization and processing." Master's thesis, Faculdade de Ciências e Tecnologia, 2011. http://hdl.handle.net/10362/7979.

Abstract:
Thesis submitted in fulfilment of the requirements for the degree of Master in Biomedical Engineering.
Long-term biosignal acquisitions are an important source of information about a patient's state and its evolution. However, long-term biosignal monitoring involves managing extremely large datasets, which makes signal visualization and processing a complex task. To overcome these problems, a new data structure to manage long-term biosignals was developed. Based on this new data structure, dedicated tools for long-term biosignal visualization and processing were implemented. A multilevel visualization tool for any type of biosignal, based on subsampling, is presented, focused on four representative signal parameters (mean, maximum, minimum and standard deviation). The visualization tool provides an overview of the entire signal and a more detailed view of specific parts we want to highlight, allowing a user-friendly interaction that makes signal exploration easier. The "map" and "reduce" concept is also applied to long-term biosignal processing, and a processing tool (ECG peak detection) was adapted for long-term biosignals. In order to test the developed algorithm, long-term biosignal acquisitions (approximately 8 hours each) were carried out. The visualization tool proved to be faster than the standard methods, allowing fast navigation over the different visualization levels of the biosignals. The developed processing algorithm detected the peaks of long-term ECG signals in less time than the non-parallel processing algorithm. The non-specific character of the new data structure and visualization tool, together with the speed improvement in signal processing introduced by these algorithms, makes them powerful tools for long-term biosignal visualization and processing.
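The four-parameter subsampling behind such a multilevel view is easy to picture in code. Below is a minimal sketch, not the thesis implementation; the function name and the synthetic signal are illustrative assumptions:

```python
import numpy as np

def summarize(signal, window):
    """Reduce a long 1-D signal to per-window statistics for overview plotting.

    Returns the four representative parameters the thesis mentions:
    mean, minimum, maximum and standard deviation per window.
    """
    n = len(signal) // window                     # number of complete windows
    chunks = signal[:n * window].reshape(n, window)
    return {
        "mean": chunks.mean(axis=1),
        "min":  chunks.min(axis=1),
        "max":  chunks.max(axis=1),
        "std":  chunks.std(axis=1),
    }

# One simulated hour of a 100 Hz biosignal (360,000 samples) reduced to a
# 360-point overview; zooming in would simply re-run with a smaller window.
signal = np.sin(np.linspace(0, 2 * np.pi * 4000, 360_000))
overview = summarize(signal, window=1000)
```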
3. Cai, Bo. "Scattered Data Visualization Using GPU." University of Akron / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=akron1428077896.

4. Park, Joonam. "A visualization system for nonlinear frame analysis." Thesis, Georgia Institute of Technology, 1999. http://hdl.handle.net/1853/19172.

5. Mattasantharam, R. (Rubini). "3D web visualization of continuous integration big data." Master's thesis, University of Oulu, 2018. http://urn.fi/URN:NBN:fi:oulu-201812063239.

Abstract:
Continuous Integration (CI) is a practice used to automate the software build and its tests for every code integration to a shared repository. CI runs thousands of test scripts every day in a software organization. Every test produces data, such as test result logs (errors, warnings, performance measurements) and build metrics. This data volume tends to grow at unprecedented rates for the builds produced in the CI system, and the amount of integrated test result data grows over time. Visualizing and manipulating real-time, dynamic data is a challenge for organizations. 2D visualization of big data has long been in active use in the software industry. Though 2D visualization has numerous advantages, this study focuses on the 3D representation of CI big data and its advantages over 2D visualization. Interactivity with the data and the system, and accessibility of the data anytime, anywhere, are two important requirements for the system to be usable. Thus, the study focused on creating a 3D user interface to visualize CI system data in a 3D web environment. Three-dimensional user interfaces have been studied by many researchers, who have identified various advantages of 3D visualization along with various interaction techniques, and who have described how such systems are useful in real-world 3D applications. But the usability of 3D user interfaces in visualization has not yet reached a desirable level, especially in the software industry, due to its complex data. The purpose of this thesis is to explore the use of 3D data visualization to help the CI system users of a beneficiary organization interpret and explore CI system data. The study focuses on designing and creating a 3D user interface to provide a more effective and usable system for CI data exploration. Design science research is chosen as a suitable research method for the study. This study identifies the advantages of applying 3D visualization to software system data and then explores how 3D visualization could help users explore the software data through visualization and its features. The results of the study reveal that 3D visualization helps the beneficiary organization view and compare multiple datasets in a single screen space, and see both a holistic view of large datasets and focused details of multiple datasets of various categories in a single screen space. The results also indicate that 3D visualization helps the beneficiary organization's CI team represent big data better in 3D than in 2D.
6. Chung, David H. S. "High-dimensional glyph-based visualization and interactive techniques." Thesis, Swansea University, 2014. https://cronfa.swan.ac.uk/Record/cronfa42276.

Abstract:
The advancement of modern technology and scientific measurements has led to datasets growing in both size and complexity, exposing the need for more efficient and effective ways of visualizing and analysing data. Despite the amount of progress in visualization methods, high-dimensional data still poses a number of significant challenges, both in the technical ability of realising such a mapping and in how accurately the results are actually interpreted. The different data sources and characteristics that arise from a wide range of scientific domains, as well as specific design requirements, constantly create new challenges for visualization research. This thesis presents several contributions to the field of glyph-based visualization. Glyphs are parametrised objects which encode one or more data values in their appearance (also referred to as visual channels), such as their size, colour, shape, and position. They have been widely used to convey information visually, and are especially well suited for displaying complex, multi-faceted datasets. Their major strength is the ability to depict patterns of data in the context of a spatial relationship, where multi-dimensional trends can often be perceived more easily. Our research is set in the broad scope of multi-dimensional visualization, addressing several aspects of glyph-based techniques, including visual design, perception, placement, interaction, and applications. In particular, this thesis presents a comprehensive study of one interaction technique, namely sorting, for supporting various analytical tasks. We have outlined the concepts of glyph-based sorting, identified a set of design criteria for sorting interactions, designed and prototyped a user interface for sorting multivariate glyphs, developed a visual analytics technique to support sorting, conducted an empirical study on the perceptual orderability of visual channels used in glyph design, and applied glyph-based sorting to event visualization in sports applications. The content of this thesis is organised into two parts. Part I provides an overview of the basic concepts of glyph-based visualization before describing the state of the art in this field. We then present a collection of novel glyph-based approaches to address challenges created by real-world applications; these are detailed in Part II. Our first approach involves designing glyphs to depict the composition of multiple error-sensitivity fields. This work addresses the problem of single-camera positioning, using both 2D and 3D methods to support camera configuration based on various constraints in the context of a real-world environment. Our second approach presents glyphs to visualize actions and events "at a glance". We discuss the relative merits of metaphoric glyphs in comparison to other types of glyph designs for the particular problem of real-time sports analysis. As a result of this research, we delivered a visualization software, MatchPad, on a tablet computer. It successfully helped coaching staff and team analysts to examine actions and events in detail whilst maintaining a clear overview of the match, and assisted their decision making during matches. Abstract shortened by ProQuest.
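Since a glyph is just a parametrised object whose visual channels encode data values, the core idea fits in a few lines. A hedged sketch, illustrative only and unrelated to the thesis's software, using matplotlib markers whose position, size and colour each carry one variable:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical multivariate records: four values per record mapped onto
# four visual channels per glyph (x position, y position, size, colour).
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 1, (2, 50))
size_var = rng.uniform(10, 300, 50)     # third variable -> glyph size
colour_var = rng.uniform(0, 1, 50)      # fourth variable -> glyph colour

fig, ax = plt.subplots()
glyphs = ax.scatter(x, y, s=size_var, c=colour_var, cmap="viridis", alpha=0.7)
fig.colorbar(glyphs, ax=ax, label="fourth variable")
ax.set_title("Four data values per glyph")
plt.show()
```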
7. Peng, Wei. "Clutter-based dimension reordering in multi-dimensional data visualization." Link to electronic thesis, 2005. http://www.wpi.edu/Pubs/ETD/Available/etd-01115-222940.

8. Narayanan, Shruthi (Shruthi P.). "Real-time processing and visualization of intensive care unit data." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/119537.

Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (page 83).
Intensive care unit (ICU) patients undergo detailed monitoring so that copious information regarding their condition is available to support clinical decision-making. Full utilization of the data depends heavily on its quantity, quality and manner of presentation to the physician at the bedside of a patient. In this thesis, we implemented a visualization system to aid ICU clinicians in collecting, processing, and displaying available ICU data. Our goals for the system are: to be able to receive large quantities of patient data from various sources, to compute complex functions over the data that are able to quantify an ICU patient's condition, to plot the data using a clean and interactive interface, and to be capable of live plot updates upon receiving new data. We made significant headway toward our goals, and we succeeded in creating a highly adaptable visualization system that future developers and users will be able to customize.
by Shruthi Narayanan.
M. Eng.
9. Wad, Charudatta V. "QoS: quality driven data abstraction for large databases." Worcester, Mass.: Worcester Polytechnic Institute, 2008. http://www.wpi.edu/Pubs/ETD/Available/etd-020508-151213/.

10. Antle, Alissa N. "Interactive visualization tools for spatial data & metadata." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape4/PQDD_0010/NQ56495.pdf.

11. Cena, Bernard Maria. "Reconstruction for visualisation of discrete data fields using wavelet signal processing." University of Western Australia, Dept. of Computer Science, 2000. http://theses.library.uwa.edu.au/adt-WU2003.0014.

Abstract:
The reconstruction of a function and its derivative from a set of measured samples is a fundamental operation in visualisation. Multiresolution techniques, such as wavelet signal processing, are instrumental in improving the performance and algorithm design for data analysis, filtering and processing. This dissertation explores the possibilities of combining traditional multiresolution analysis and processing features of wavelets with the design of appropriate filters for reconstruction of sampled data. On the one hand, a multiresolution system allows data feature detection, analysis and filtering; wavelets have already been proven successful in these tasks. On the other hand, a choice of discrete filter which converges to a continuous basis function under iteration permits efficient and accurate function representation by providing a "bridge" from the discrete to the continuous. A function representation method capable of both multiresolution analysis and accurate reconstruction of the underlying measured function would make a valuable tool for scientific visualisation. The aim of this dissertation is not to try to outperform existing filters designed specifically for reconstruction of sampled functions. The goal is to design a wavelet filter family which, while retaining properties necessary to perform multiresolution analysis, possesses features to enable the wavelets to be used as efficient and accurate "building blocks" for function representation. The application to visualisation is used as a means of practical demonstration of the results. Wavelet and visualisation filter design is analysed in the first part of this dissertation and a list of wavelet filter design criteria for visualisation is collated. Candidate wavelet filters are constructed based on a parameter space search of the BC-spline family and direct solution of equations describing filter properties. Further, a biorthogonal wavelet filter family is constructed based on point and average interpolating subdivision and using the lifting scheme. The main feature of these filters is their ability to reconstruct arbitrary degree piecewise polynomial functions and their derivatives using measured samples as direct input into a wavelet transform. The lifting scheme provides an intuitive, interval-adapted, time-domain filter and transform construction method. A generalised factorisation for arbitrary primal and dual order point and average interpolating filters is a result of the lifting construction. The proposed visualisation filter family is analysed quantitatively and qualitatively in the final part of the dissertation. Results from wavelet theory are used in the analysis, which allows comparisons among wavelet filter families and between wavelets and filters designed specifically for reconstruction for visualisation. Lastly, the performance of the constructed wavelet filters is demonstrated in the visualisation context. One-dimensional signals are used to illustrate the reconstruction performance of the wavelet filter family from noiseless and noisy samples in comparison to other wavelet filters and dedicated visualisation filters. The proposed wavelet filters converge to basis functions capable of reproducing functions that can be represented locally by arbitrary order piecewise polynomials. They are interpolating, smooth and provide asymptotically optimal reconstruction in the case when samples are used directly as wavelet coefficients. The reconstruction performance of the proposed wavelet filter family approaches that of continuous spatial domain filters designed specifically for reconstruction for visualisation. This is achieved in addition to retaining the multiresolution analysis and processing properties of wavelets.
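The lifting scheme the abstract refers to builds a wavelet transform from split, predict and update steps. A minimal sketch using the Haar filter, the simplest average-interpolating case (the thesis derives higher-order point and average interpolating variants by the same construction):

```python
import numpy as np

def haar_lifting_forward(x):
    """One level of the Haar wavelet transform in lifting form:
    split -> predict -> update, the three canonical lifting steps."""
    assert len(x) % 2 == 0
    s, d = x[0::2].astype(float), x[1::2].astype(float)  # split even/odd
    d = d - s                # predict: odd samples from their even neighbour
    s = s + d / 2            # update: coarse signal keeps the running average
    return s, d              # approximation, detail coefficients

def haar_lifting_inverse(s, d):
    """Invert by running the lifting steps backwards with signs flipped."""
    s = s - d / 2
    d = d + s
    x = np.empty(2 * len(s))
    x[0::2], x[1::2] = s, d
    return x

x = np.array([2.0, 4.0, 6.0, 8.0, 5.0, 3.0, 1.0, 7.0])
s, d = haar_lifting_forward(x)
assert np.allclose(haar_lifting_inverse(s, d), x)  # perfect reconstruction
```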
12. Cui, Qingguang. "Measuring data abstraction quality in multiresolution visualizations." Worcester, Mass.: Worcester Polytechnic Institute, 2007. http://www.wpi.edu/Pubs/ETD/Available/etd-041107-224152/.

13. Carter, Caleb. "High Resolution Visualization of Large Scientific Data Sets Using Tiled Display." Fogler Library, University of Maine, 2007. http://www.library.umaine.edu/theses/pdf/CarterC2007.pdf.

14. Patro, Anilkumar G. "Pixel oriented visualization in XmdvTool." Link to electronic thesis, 2004. http://www.wpi.edu/Pubs/ETD/Available/etd-0907104-084847/.

15. Sturari, Mirco. "Processing and visualization of multi-source data in next-generation geospatial applications." Doctoral thesis, Università Politecnica delle Marche, 2018. http://hdl.handle.net/11566/252596.

Abstract:
Next-generation geospatial applications do not use simple points, lines, and polygons as data, but complex objects or evolving phenomena that need advanced analysis and visualization techniques to be understood. The features of these applications are the use of multi-source data with different spatial, temporal and spectral dimensions, and dynamic, interactive visualization with any device and almost anywhere, even in the field. The analysis of complex phenomena draws on data sources that are heterogeneous in format/typology and in spatial/temporal/spectral resolution, which makes fusing them into meaningful and immediately comprehensible information challenging. Multi-source data acquisition can take place through various sensors, IoT devices, mobile devices, social media, volunteered geographic information and geospatial data from public sources. Since next-generation geospatial applications have new requirements for viewing raw data, integrated data, derived data, and information, we analysed the usability of innovative technologies that enable visualization with any device: interactive dashboards, views and maps with spatial and temporal dimensions, and Augmented and Virtual Reality applications. For semi-automatic information extraction we used various techniques in a synergistic process: segmentation and identification, classification, change detection, tracking and path clustering, and simulation and prediction. Within a processing workflow, various scenarios were analysed and innovative solutions were implemented, characterized by the fusion of multi-source data, dynamism and interactivity. The problems differ by application field, and for each of them the solutions most consistent with the aforementioned characteristics were implemented. Innovative solutions that yielded good results were found in each scenario presented, some in new application fields: (i) integration of elevation data and high-resolution multispectral images for Land Use/Land Cover mapping, (ii) crowd-mapping for civil protection and emergency management, (iii) sensor fusion for indoor localization and tracking in a retail environment, (iv) integration of real-time data for traffic simulation in mobility systems, and (v) combining visual and point-cloud information for change detection in a railway safety and security application. Through these examples, the given suggestions can be applied to create geospatial applications in other areas as well. In the future, integration can be enhanced to build data-driven platforms as the basis for intelligent systems: a user-friendly interface that provides advanced analysis capabilities built on reliable and efficient algorithms.
16. Thakar, Aniruddha. "Visualization feedback from informal specifications." Thesis, 1993. http://scholar.lib.vt.edu/theses/available/etd-03242009-040810/.

17. Zhang, Hongqin. "Color in scientific visualization: perception and image-based data display." Online version of thesis, 2008. http://hdl.handle.net/1850/5805.

18. Huang, Xiaodi. "Filtering, clustering and dynamic layout for graph visualization." Swinburne University of Technology, 2004. http://adt.lib.swin.edu.au./public/adt-VSWT20050428.111554.

Abstract:
Graph visualization plays an increasingly important role in software engineering and information systems. Examples include UML, E-R diagrams, database structures, visual programming, web visualization, network protocols, molecular structures, genome diagrams, and social structures. Many classical algorithms for graph visualization have been developed over the past decades. However, these algorithms face difficulties in practice, such as overlapping nodes, large graph layout, and dynamic graph layout. In order to solve these problems, this research systematically addresses both algorithmic and approach issues related to a novel framework that describes the process of graph visualization applications; at the same time, all the proposed algorithms and approaches can be applied in other situations as well. First of all, a framework for graph visualization is described, along with a generic approach to the graphical representation of a relational information source. As important parts of this framework, two main approaches, filtering and clustering, are investigated to deal with large graph layouts effectively. In order to filter 'noise' or less important nodes in a given graph, two new methods are proposed that compute importance scores of nodes, called NodeRank, and then control the appearance of nodes in a layout by ranking them. Two novel algorithms for clustering graphs, KNN and SKM, are developed to reduce visual complexity. Identifying seed nodes as initial members of clusters, both algorithms make use of either the k-nearest-neighbour search or a novel node similarity matrix to seek groups of nodes with the most affinities or similarities among them. Such groups of relatively highly connected nodes are then replaced with abstract nodes to form a coarse graph with reduced dimensions. An approach called MMD to the layout of clustered graphs is provided using a multiple-window, multiple-level display. As for dynamic graph layout, a new approach to removing overlapping nodes, the Force-Transfer algorithm, is developed, greatly improving on the classical Force-Scan algorithm. Demonstrating the performance of the proposed algorithms and approaches, the framework has been implemented in a prototype called PGD, and a number of experiments as well as a case study have been carried out.
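The abstract does not spell out NodeRank's formula, but the rank-then-filter idea can be conveyed with a PageRank-style power iteration; a sketch under that assumption (the thesis's actual measure differs in detail):

```python
import numpy as np

def rank_nodes(adj, damping=0.85, iters=100):
    """PageRank-style importance scores by power iteration.

    `adj` is a dense 0/1 adjacency matrix; the result is a score per node
    that can be thresholded to hide less important nodes in a layout.
    """
    n = adj.shape[0]
    out = adj.sum(axis=1, keepdims=True)
    out[out == 0] = 1                    # avoid division by zero for sinks
    transition = adj / out               # row-stochastic transition matrix
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - damping) / n + damping * transition.T @ r
    return r

adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
scores = rank_nodes(adj)
keep = scores >= np.median(scores)       # filter out low-importance nodes
```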
19. Kraemer, Eileen T. "A framework, tools, and methodology for the visualization of parallel and distributed systems." Diss., Georgia Institute of Technology, 1995. http://hdl.handle.net/1853/9214.

20. Choi, Yi-king (蔡綺瓊). "Computer visualization techniques in surgical planning for pedicle screw insertion." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2001. http://hub.hku.hk/bib/B31224234.

21. Poon, Chun-ho (潘仲豪). "Efficient occlusion culling and non-refractive transparency rendering for interactive computer visualization." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B2974328X.

22. Lizarraga, Gabriel M. "A Neuroimaging Web Interface for Data Acquisition, Processing and Visualization of Multimodal Brain Images." FIU Digital Commons, 2018. https://digitalcommons.fiu.edu/etd/3855.

Abstract:
Structural and functional brain images are generated as essential modalities for medical experts to learn about the different functions of the brain. These images are typically visually inspected by experts. Many software packages are available to process medical images, but they are complex and difficult to use, and also hardware intensive. As a consequence, this dissertation proposes a novel Neuroimaging Web Services Interface (NWSI) as a series of processing pipelines for a common platform to store, process, visualize and share data. The NWSI system is made up of password-protected, interconnected servers accessible through a web interface. The web interface driving the NWSI is based on Drupal, a popular open-source content management system. Drupal provides a user-based platform in which the core code for the security and design tools is updated and patched frequently; new features can be added via modules while keeping the core software secure and intact. The webserver architecture allows for the visualization of results and the downloading of tabulated data, and several forms are available to capture clinical data. The processing pipeline starts with a FreeSurfer (FS) reconstruction of T1-weighted MRI images. Subsequently, PET, DTI, and fMRI images can be uploaded. The webserver captures uploaded images and performs essential functionalities, while processing occurs in supporting servers. The computational platform is responsive and scalable. The current pipeline for PET processing calculates all regional Standardized Uptake Value ratios (SUVRs). The FS and SUVR calculations have been validated using Alzheimer's Disease Neuroimaging Initiative (ADNI) results posted at the Laboratory of Neuro Imaging (LONI). The NWSI system provides access to a calibration process through the centiloid scale, consolidating Florbetapir and Florbetaben tracers in amyloid PET images. The interface also offers onsite access to machine learning algorithms, and introduces new heat maps that augment expert visual rating of PET images. NWSI has been piloted using data and expertise from Mount Sinai Medical Center, the 1Florida Alzheimer's Disease Research Center (ADRC), Baptist Health South Florida, Nicklaus Children's Hospital, and the University of Miami. All results were obtained using our processing servers in order to maintain data validity, consistency, and minimal processing bias.
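The SUVR step of the pipeline reduces to a ratio of regional means. A toy sketch of that calculation; the array inputs and region identifiers are hypothetical, and NWSI itself derives its regions from the FreeSurfer reconstruction:

```python
import numpy as np

def regional_suvr(pet, labels, region_ids, reference_id):
    """Standardized Uptake Value ratios: mean uptake in each region of
    interest divided by mean uptake in a reference region (e.g. cerebellum).

    `pet` is a 3-D uptake volume, `labels` an integer atlas of the same shape.
    """
    ref_mean = pet[labels == reference_id].mean()
    return {rid: pet[labels == rid].mean() / ref_mean for rid in region_ids}

# Toy volume: two 'regions' plus a reference region (label 99).
pet = np.random.default_rng(1).uniform(0.5, 2.0, (16, 16, 16))
labels = np.zeros_like(pet, dtype=int)
labels[:8], labels[8:12], labels[12:] = 1, 2, 99
print(regional_suvr(pet, labels, region_ids=[1, 2], reference_id=99))
```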
23. Beard, Daniel. "Firefly: web-based interactive tool for the visualization and validation of image processing algorithms." Diss., Columbia, Mo.: University of Missouri--Columbia, 2009. http://hdl.handle.net/10355/5346.

Abstract:
The entire thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file; a non-technical public abstract appears in the public.pdf file. Title from PDF of title page (University of Missouri--Columbia, viewed on December 21, 2009). Thesis advisor: Dr. Kannappan Palaniappan. Includes bibliographical references.
24. Wang, Chaoli. "A multiresolutional approach for large data visualization." Columbus, Ohio: Ohio State University, 2006. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1164730737.

25. Jiang, Chunyan. "Multi-visualization and hybrid segmentation approaches within telemedicine framework." PhD thesis, Universität Potsdam, 2007. http://opus.kobv.de/ubp/volltexte/2007/1282/.

26. Wang, Ko-Chih. "Distribution-based Summarization for Large Scale Simulation Data Visualization and Analysis." The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1555452764885977.

27. Henry, Sam. "Indirect Relatedness, Evaluation, and Visualization for Literature Based Discovery." VCU Scholars Compass, 2019. https://scholarscompass.vcu.edu/etd/5855.

Abstract:
The exponential growth of scientific literature is creating an increased need for systems to process and assimilate knowledge contained within text. Literature Based Discovery (LBD) is a well established field that seeks to synthesize new knowledge from existing literature, but it has remained primarily in the theoretical realm rather than in real-world application. This lack of real-world adoption is due in part to the difficulty of LBD, but also due to several solvable problems present in LBD today. Of these problems, the ones in most critical need of improvement are: (1) the over-generation of knowledge by LBD systems, (2) a lack of meaningful evaluation standards, and (3) the difficulty interpreting LBD output. We address each of these problems by: (1) developing indirect relatedness measures for ranking and filtering LBD hypotheses; (2) developing a representative evaluation dataset and applying meaningful evaluation methods to individual components of LBD; (3) developing an interactive visualization system that allows a user to explore LBD output in its entirety. In addressing these problems, we make several contributions, most importantly: (1) state of the art results for estimating direct semantic relatedness, (2) development of set association measures, (3) development of indirect association measures, (4) development of a standard LBD evaluation dataset, (5) division of LBD into discrete components with well defined evaluation methods, (6) development of automatic functional group discovery, and (7) integration of indirect relatedness measures and automatic functional group discovery into a comprehensive LBD visualization system. Our results inform future development of LBD systems, and contribute to creating more effective LBD systems.
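One way to read "indirect relatedness" is through second-order co-occurrence: two terms that never co-occur directly may still share neighbours. A small illustrative sketch with made-up counts (the dissertation's actual measures are more elaborate):

```python
import numpy as np

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# Hypothetical term-by-term co-occurrence counts (rows = vocabulary terms).
# Terms 0 and 2 never co-occur directly, yet share intermediate terms --
# the classic literature-based-discovery situation.
cooc = np.array([
    [0, 8, 0, 5],   # term A
    [8, 0, 7, 2],   # intermediate term
    [0, 7, 0, 6],   # term C
    [5, 2, 6, 0],   # intermediate term
], dtype=float)

direct = cooc[0, 2]                      # 0: no direct evidence
indirect = cosine(cooc[0], cooc[2])      # high: related via shared neighbours
print(direct, round(indirect, 3))
```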
28. Techaplahetvanich, Kesaraporn. "A visualization framework for exploring correlations among attributes of a large dataset and its applications in data mining." University of Western Australia, School of Computer Science and Software Engineering, 2007. http://theses.library.uwa.edu.au/adt-WU2007.0216.

Abstract:
[Truncated abstract] Many databases in scientific and business applications have grown exponentially in size in recent years. Accessing and using databases is no longer a specialized activity as more and more ordinary users without any specialized knowledge are trying to gain information from databases. Both expert and ordinary users face significant challenges in understanding the information stored in databases. The databases are so large in most cases that it is impossible to gain useful information by inspecting data tables, which are the most common form of storing data in relational databases. Visualization has emerged as one of the most important techniques for exploring data stored in large databases. Appropriate visualization techniques can reveal trends, correlations and associations in data that are very difficult to understand from a textual representation of the data. This thesis presents several new frameworks for data visualization and visual data mining. The first technique, VisEx, is useful for visual exploration of large multi-attribute datasets and especially for exploring the correlations among the attributes in such datasets. Most previous visualization techniques can display correlations among two or three attributes at a time without excessive screen clutter. ... Although many algorithms for mining association rules have been researched extensively, they do not incorporate users in the process and most of them generate a large number of association rules. It is quite often difficult for the user to analyze a large number of rules to identify a small subset of rules that is of importance to the user. In this thesis I present a framework for the user to interactively mine association rules visually. Another challenging task in data mining is to understand the correlations among the mined association rules. It is often difficult to identify a relevant subset of association rules from a large number of mined rules. A further contribution of this thesis is a simple framework in the VisAR system that allows the user to explore a large number of association rules visually. A variety of businesses have adopted new technologies for storing large amounts of data. Analysis of historical data quite often offers new insights into business processes that may increase productivity and profit. On-line analytical processing (OLAP) has become a powerful tool for business analysts to explore historical data. Effective visualization techniques are very important for supporting OLAP technology. A new technique for the visual exploration of OLAP data cubes is also presented in this thesis.
29. Boardman, Anelda Philine. "Assessment of genome visualization tools relevant to HIV genome research: development of a genome browser prototype." Thesis, University of the Western Cape, 2004. http://etd.uwc.ac.za/index.php?module=etd&action=viewtitle&id=gen8Srv25Nme4_3632_1185446929.

Abstract:
Over the past two decades of HIV research, effective vaccine candidates have been elusive. Traditionally, viral research has been characterized by a gene-by-gene approach, but in the light of the availability of complete genome sequences and the tractable size of the HIV genome, a genomic approach may improve insight into the biology and epidemiology of this virus. A genomic approach to finding HIV vaccine candidates can be facilitated by the use of genome sequence visualization. Genome browsers have been used extensively by various groups to shed light on the biology and evolution of several organisms, including human, mouse, rat, Drosophila and C. elegans. Application of a genome browser to HIV genomes and related annotations can yield insight into the forces that drive evolution, identify highly conserved regions as well as regions that yield a strong immune response in patients, and track mutations that appear over the course of infection. Access to graphical representations of such information is bound to support the search for effective HIV vaccine candidates. This study aimed to answer the question of whether a tool or application exists that can be modified to serve as a platform for the development of an HIV visualization application, and to assess the viability of such an implementation. Existing applications can only be assessed for their suitability as a basis for an HIV genome browser once a well-defined set of assessment criteria has been compiled.

30. Glendenning, Kurtis M. "Browser Based Visualization for Parameter Spaces of Big Data Using Client-Server Model." Wright State University / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=wright1441203223.

31. Ho, Si Meng. "Web visualization for performance evaluation of e-Government." Thesis, University of Macau, 2011. http://umaclib3.umac.mo/record=b2492851.

32. Buchholz, Henrik. "Real-time visualization of 3D city models." PhD thesis, Universität Potsdam, 2006. http://opus.kobv.de/ubp/volltexte/2007/1333/.

33. Nell, Raymond D. "Three dimensional depth visualization using image sensing to detect artefact in space." Thesis, Cape Peninsula University of Technology, 2014. http://hdl.handle.net/20.500.11838/1199.

Abstract:
Thesis submitted in fulfilment of the requirements for the degree Doctor of Technology: Electrical Engineering in the Faculty of Engineering at the Cape Peninsula University of Technology, 2014.
Three-dimensional (3D) artefact detection can provide electronic products and devices with vision and real-time interaction, so that the orientation and interaction of electrical systems with objects can be obtained. Electronic vision detection can be used in multiple applications, from industry and robotics to orienting humans in their immediate surroundings. An article covering holograms states that these images can provide information about an object that can be examined from different angles; the limitation of a hologram is that the object and the imaging system must be absolutely immobilized. Humans are capable of stereoscopic vision, where two images are fused to provide a 3D view of an object. In this research, two digital images are used to determine an artefact's position in space. A camera is used and the 3D coordinates of the artefact are determined. To obtain the 3D position, the principles of the pinhole camera, a single lens, as well as two-image visualization are applied. This study explains the method used to determine the artefact position in space; a method to obtain the 3D position of an artefact from a single image was also derived. The mathematical formulae for determining the 3D position of an artefact in space are derived and applied in the pinhole camera setup. The approach is also applied in the X-ray spectrum, where the length of structures can be obtained using the derived mathematical principles. The XYZ coordinates are determined, and a computer simulation as well as the experimental results are explained. With this 3D detection method, devices can be connected to a computer to have real-time image updates and interaction with objects in an XYZ coordinate system. Keywords: 3D point, xyz-coordinates, lens, hologram
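For the two-image case, the pinhole model gives the textbook triangulation Z = fB/d. A sketch of that special case with hypothetical numbers; the thesis derives its full XYZ formulae from the same pinhole model:

```python
def stereo_depth(f_px, baseline_m, x_left_px, x_right_px):
    """Depth from two pinhole images with parallel optical axes.

    Z = f * B / d, where d is the horizontal disparity between the
    artefact's image coordinates in the two views.
    """
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("artefact must project with positive disparity")
    return f_px * baseline_m / disparity

# Example: 800 px focal length, 10 cm baseline, 20 px disparity -> 4.0 m.
print(stereo_depth(f_px=800, baseline_m=0.10, x_left_px=420, x_right_px=400))
```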
34. Yu, Zhiguo. "Cooperative Semantic Information Processing for Literature-Based Biomedical Knowledge Discovery." UKnowledge, 2013. http://uknowledge.uky.edu/ece_etds/33.

Abstract:
Given that data is increasing exponentially every day, extracting and understanding the information, themes and relationships from large collections of documents is more and more important to researchers in many areas. In this work, we present a cooperative semantic information processing system to help biomedical researchers understand and discover knowledge in large numbers of titles and abstracts from PubMed query results. Our system is based on a prevalent technique, topic modeling, which is an unsupervised machine learning approach for discovering the set of semantic themes in a large set of documents. In addition, we apply a natural language processing technique to transform the "bag-of-words" assumption of topic models into the "bag-of-important-phrases" assumption, and build an interactive visualization tool using a modified, open-source Topic Browser. In the end, we conduct two experiments to evaluate the approach. The first evaluates whether the "bag-of-important-phrases" approach is better at identifying semantic themes than the standard "bag-of-words" approach. This is an empirical study in which human subjects evaluate the quality of the resulting topics using a standard "word intrusion test" to determine whether subjects can identify a word (or phrase) that does not belong in the topic. The second is a qualitative empirical study to evaluate how well the system helps biomedical researchers explore a set of documents to discover previously hidden semantic themes and connections. The methodology for this study has been successfully used to evaluate other knowledge-discovery tools in biomedicine.
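The word intrusion test mentioned here is simple to set up. A sketch of one trial and its scoring, with invented words and answers (the study's real trials use topics learned from PubMed abstracts):

```python
import random

def intrusion_trial(topic_words, intruder, rng=random.Random(0)):
    """Build one word-intrusion trial: a topic's top words plus one
    'intruder' from another topic, shuffled for presentation."""
    options = topic_words + [intruder]
    rng.shuffle(options)
    return options

topic = ["ecg", "signal", "heartbeat", "arrhythmia", "electrode"]
trial = intrusion_trial(topic, intruder="basketball")
print(trial)  # shown to a subject, who tries to pick the odd word out

# A topic is judged coherent when subjects reliably spot the intruder:
# model precision = (# subjects who chose the intruder) / (# subjects).
answers = ["basketball", "basketball", "electrode", "basketball"]
precision = sum(a == "basketball" for a in answers) / len(answers)
print(precision)  # 0.75 -> reasonably coherent topic
```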
35. Ràfols Soler, Pere. "Development of a complete advanced computational workflow for high-resolution LDI-MS metabolomics imaging data processing and visualization." Doctoral thesis, Universitat Rovira i Virgili, 2018. http://hdl.handle.net/10803/461608.

Abstract:
Mass spectrometry imaging (MSI) maps the spatial distributions of molecules in a sample, which allows extracting spatially correlated metabolomics information from tissue sections. MSI is not widely used in spatial metabolomics due to several limitations related to MALDI matrices, including the generation of interfering ions in the low mass range and lateral compound delocalization. We developed a workflow to improve the acquisition of metabolites using a MALDI instrument: we sputter an Au nano-layer directly onto the tissue section, enabling the acquisition of metabolites with minimal interference from background signals and ultra-high spatial resolution. We developed an R package for image visualization and MSI data processing, optimized to manage datasets larger than the computer's memory using a multi-threaded implementation. Moreover, our software includes a label-free mass alignment algorithm for mass accuracy enhancement.
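A mass alignment step can be illustrated with a linear recalibration against known reference peaks. This sketch conveys only the general idea, not the package's API, and the peak values are invented; the thesis's label-free algorithm is more elaborate:

```python
import numpy as np

def recalibrate_mz(mz_axis, observed_peaks, reference_peaks):
    """Fit a linear correction mapping observed peak positions to known
    reference positions, then apply it to the whole m/z axis."""
    slope, intercept = np.polyfit(observed_peaks, reference_peaks, deg=1)
    return slope * mz_axis + intercept

mz = np.linspace(100, 1000, 5000)
observed = np.array([150.05, 400.12, 760.21])    # peaks found in a spectrum
reference = np.array([150.00, 400.00, 760.00])   # known calibrant masses
mz_aligned = recalibrate_mz(mz, observed, reference)
```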
36. Valdivia, Paola Tatiana Llerena. "Graph signal processing for visual analysis and data exploration." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-15102018-165426/.

Abstract:
Signal processing is used in a wide variety of applications, ranging from digital image processing to biomedicine. Recently, some tools from signal processing have been extended to the context of graphs, allowing their use on irregular domains; among others, the Fourier Transform and the Wavelet Transform have been adapted to this context. Graph signal processing (GSP) is a new field with many potential applications in data exploration. In this dissertation we show how tools from graph signal processing can be used for visual analysis. Specifically, we propose a data filtering method, based on spectral graph filtering, that leads to high-quality visualizations, attested both qualitatively and quantitatively. We also rely on the graph wavelet transform to enable the visual analysis of massive time-varying data, revealing interesting phenomena and events. The proposed applications of GSP to visually analyze data are a first step towards incorporating this theory into information visualization methods; many more possibilities from GSP can be explored by improving the understanding of static and time-varying phenomena that are yet to be uncovered.
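The spectral graph filtering this abstract builds on has a compact core: transform a graph signal into the Laplacian eigenbasis, attenuate the high graph frequencies, and transform back. A minimal sketch on a toy path graph (illustrative, not the dissertation's code):

```python
import numpy as np

def low_pass_graph_filter(adj, signal, keep):
    """Spectral low-pass filtering of a graph signal: project onto the
    `keep` Laplacian eigenvectors with the smallest eigenvalues (the
    'smooth' graph frequencies) and reconstruct."""
    degree = np.diag(adj.sum(axis=1))
    laplacian = degree - adj
    eigvals, eigvecs = np.linalg.eigh(laplacian)   # ascending eigenvalues
    coeffs = eigvecs.T @ signal                    # graph Fourier transform
    coeffs[keep:] = 0.0                            # drop high frequencies
    return eigvecs @ coeffs                        # inverse transform

# Path graph on 6 nodes carrying a noisy ramp signal.
adj = np.diag(np.ones(5), 1) + np.diag(np.ones(5), -1)
signal = np.arange(6) + np.random.default_rng(2).normal(0, 0.3, 6)
smoothed = low_pass_graph_filter(adj, signal, keep=3)
```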
37. Prohaska, Steffen. "Skeleton-based visualization of massive voxel objects with network-like architecture." PhD thesis, Universität Potsdam, 2007. http://opus.kobv.de/ubp/volltexte/2007/1488/.

Abstract:
This work introduces novel internal and external memory algorithms for computing voxel skeletons of massive voxel objects with complex network-like architecture and for converting these voxel skeletons to piecewise linear geometry, that is, triangle meshes and piecewise straight lines. The presented techniques help to tackle the challenge of visualizing and analyzing 3D images of increasing size and complexity, which are becoming more and more important in, for example, biological and medical research.

Section 2.3.1 contributes to the theoretical foundations of thinning algorithms with a discussion of homotopic thinning in the grid cell model. The grid cell model explicitly represents a cell complex built of faces, edges, and vertices shared between voxels. A characterization of pairs of cells to be deleted is much simpler than earlier characterizations of simple voxels. The grid cell model resolves topologically unclear voxel configurations at junctions and locked voxel configurations causing, for example, interior voxels in sets of non-simple voxels. A general conclusion is that the grid cell model is superior to indecomposable voxels for algorithms that need detailed control of topology.

Section 2.3.2 introduces a noise-insensitive measure based on the geodesic distance along the boundary to compute two-dimensional skeletons. The measure is able to retain thin object structures if they are geometrically important while ignoring noise on the object's boundary; this combination of properties is not known of other measures. The measure is also used to guide erosion in a thinning process from the boundary towards lines centered within plate-like structures. Geodesic-distance-based quantities seem to be well suited to robustly identify one- and two-dimensional skeletons. Chapter 6 applies the method to the visualization of bone micro-architecture.

Chapter 3 describes a novel geometry generation scheme for representing voxel skeletons, which retracts voxel skeletons to piecewise linear geometry per dual cube. The generated triangle meshes and graphs provide a link to geometry processing and efficient rendering of voxel skeletons. The scheme creates non-closed surfaces with boundaries, which contain fewer triangles than a representation of voxel skeletons using closed surfaces like small cubes or iso-surfaces. A conclusion is that thinking specifically about voxel skeleton configurations instead of generic voxel configurations helps to deal with the topological implications. The geometry generation is one foundation of the applications presented in Chapter 6.

Chapter 5 presents a novel external memory algorithm for distance-ordered homotopic thinning. The presented method extends known algorithms for computing chamfer distance transformations and thinning to execute I/O-efficiently when the input is larger than the available main memory. The applied block-wise decomposition schemes are quite simple, yet it was necessary to carefully analyze the effects of block boundaries to devise globally correct external memory variants of known algorithms; in general, doing so is superior to naive block-wise processing that ignores boundary effects. Chapter 6 applies the algorithms in a novel method based on confocal microscopy for the quantitative study of micro-vascular networks in the field of microcirculation.
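The distance-ordered thinning of Chapter 5 processes voxels in order of a chamfer distance map, computed with two sweeps over the data; the external-memory variant runs such sweeps block-wise while correcting for block boundaries. A pure in-memory 2-D sketch of the classic 3-4 chamfer transform (illustrative only, not the thesis's code):

```python
import numpy as np

def chamfer_distance(mask):
    """Two-pass 3-4 chamfer distance transform on a 2-D boolean mask:
    distance ~3 per orthogonal step, ~4 per diagonal step to the background."""
    INF = 10**9
    h, w = mask.shape
    d = np.where(mask, INF, 0).astype(np.int64)  # 0 on the background
    # Forward sweep: propagate from top-left neighbours.
    for y in range(h):
        for x in range(w):
            for dy, dx, cost in ((-1, 0, 3), (0, -1, 3), (-1, -1, 4), (-1, 1, 4)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    d[y, x] = min(d[y, x], d[ny, nx] + cost)
    # Backward sweep: propagate from bottom-right neighbours.
    for y in range(h - 1, -1, -1):
        for x in range(w - 1, -1, -1):
            for dy, dx, cost in ((1, 0, 3), (0, 1, 3), (1, 1, 4), (1, -1, 4)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    d[y, x] = min(d[y, x], d[ny, nx] + cost)
    return d

mask = np.zeros((7, 7), dtype=bool)
mask[1:6, 1:6] = True                  # a 5x5 object on background
print(chamfer_distance(mask))          # grows by ~3 per pixel from the rim
```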
38. Xie, Kai (謝凱). "Volume quantification and visualization for spinal bone cement injection." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2003. http://hub.hku.hk/bib/B29807578.

39. Del Rio, Nicholas. "Provenance support for quality assessment of scientific results: a user study." To access this resource online via ProQuest Dissertations and Theses @ UTEP, 2007. http://0-proquest.umi.com.lib.utep.edu/login?COPT=REJTPTU0YmImSU5UPTAmVkVSPTI=&clientId=2515.

40. Doshi, Punit Rameshchandra. "Adaptive prefetching for visual data exploration." Link to electronic thesis, 2003. http://www.wpi.edu/Pubs/ETD/Available/etd-0131103-203307.

Full text
Abstract:
Thesis (M.S.)--Worcester Polytechnic Institute.
Keywords: Adaptive prefetching; Large-scale multivariate data visualization; Semantic caching; Hierarchical data exploration; Exploratory data analysis. Includes bibliographical references (p.66-70).
APA, Harvard, Vancouver, ISO, and other styles
41

Trümper, Jonas. "Visualization techniques for the analysis of software behavior and related structures." PhD thesis, Universität Potsdam, 2014. http://opus.kobv.de/ubp/volltexte/2014/7214/.

Full text
Abstract:
Software maintenance encompasses any changes made to a software system after its initial deployment and is thereby one of the key phases in the typical software-engineering lifecycle. In software maintenance, we primarily need to understand structural and behavioral aspects, which are difficult to obtain, e.g., by code reading. Software analysis is therefore a vital tool for maintaining these systems: It provides - the preferably automated - means to extract and evaluate information from their artifacts such as software structure, runtime behavior, and related processes. However, such analysis typically results in massive raw data, so that even experienced engineers face difficulties directly examining, assessing, and understanding these data. Among other things, they require tools with which to explore the data if no clear question can be formulated beforehand. For this, software analysis and visualization provide their users with powerful interactive means. These enable the automation of tasks and, particularly, the acquisition of valuable and actionable insights into the raw data. For instance, one means for exploring runtime behavior is trace visualization. This thesis aims at extending and improving the tool set for visual software analysis by concentrating on several open challenges in the fields of dynamic and static analysis of software systems. This work develops a series of concepts and tools for the exploratory visualization of the respective data to support users in finding and retrieving information on the system artifacts concerned. This is a difficult task, due to the lack of appropriate visualization metaphors; in particular, the visualization of complex runtime behavior poses various questions and challenges of both a technical and conceptual nature. This work focuses on a set of visualization techniques for visually representing control-flow-related aspects of software traces from shared-memory software systems: A trace-visualization concept based on icicle plots aids in understanding both single-threaded and multi-threaded runtime behavior on the function level. The concept's extensibility further allows the visualization and analysis of specific aspects of multi-threading such as synchronization, the correlation of such traces with data from static software analysis, and a comparison between traces. Moreover, complementary techniques for simultaneously analyzing system structures and the evolution of related attributes are proposed. These aim at facilitating long-term planning of software architecture and supporting management decisions in software projects by extensions to the circular-bundle-view technique: An extension to 3-dimensional space allows for the use of additional variables simultaneously; interaction techniques allow for the modification of structures in a visual manner. The concepts and techniques presented here are generic and, as such, can be applied beyond software analysis for the visualization of similarly structured data. The techniques' practicability is demonstrated by several qualitative studies using subject data from industry-scale software systems. The studies provide initial evidence that the techniques' application yields useful insights into the subject data and its interrelationships in several scenarios.
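To illustrate the icicle-plot metaphor used here for traces, the sketch below stacks function calls by call depth along a time axis. The trace format, data, and styling are invented for the example and do not reflect the thesis's actual tooling.

```python
# A minimal icicle plot for a single-threaded function trace: each call
# becomes a rectangle spanning its time interval, stacked by call depth.
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches

# (function name, start time, end time, call depth) -- invented sample data
trace = [
    ("main",   0.0, 10.0, 0),
    ("load",   0.5,  4.0, 1),
    ("parse",  1.0,  3.5, 2),
    ("render", 4.5,  9.5, 1),
    ("shade",  5.0,  8.0, 2),
]

fig, ax = plt.subplots(figsize=(8, 3))
for name, start, end, depth in trace:
    ax.add_patch(mpatches.Rectangle((start, -depth), end - start, 0.9,
                                    edgecolor="black", facecolor="#9ecae1"))
    ax.text((start + end) / 2, -depth + 0.45, name,
            ha="center", va="center", fontsize=8)
ax.set_xlim(0, 10)
ax.set_ylim(-3, 1)
ax.set_yticks([])
ax.set_xlabel("time")
plt.show()
```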
APA, Harvard, Vancouver, ISO, and other styles
42

Choo, Jae gul. "Integration of computational methods and visual analytics for large-scale high-dimensional data." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/49121.

Full text
Abstract:
With the increasing amount of collected data, large-scale high-dimensional data analysis is becoming essential in many areas. These data can be analyzed either by using fully computational methods or by leveraging human capabilities via interactive visualization. However, each method has its drawbacks. While a fully computational method can deal with large amounts of data, it lacks depth in its understanding of the data, which is critical to the analysis. With the interactive visualization method, the user can gain deeper insight into the data but struggles when large amounts of data need to be analyzed. Even with an apparent need for these two approaches to be integrated, little progress has been made. To tackle this problem, computational methods have to be re-designed both theoretically and algorithmically, and the visual analytics system has to expose these computational methods to users so that they can choose the proper algorithms and settings. To achieve an appropriate integration between computational methods and visual analytics, the thesis focuses on essential computational methods for visualization, such as dimension reduction and clustering, and it presents fundamental development of computational methods as well as visual analytics systems involving the newly developed methods. The contributions of the thesis include (1) the two-stage dimension reduction framework that better handles significant information loss in visualization of high-dimensional data, (2) efficient parametric updating of computational methods for fast and smooth user interactions, and (3) an iteration-wise integration framework of computational methods in real-time visual analytics. The latter parts of the thesis focus on the development of visual analytics systems involving the presented computational methods, such as (1) Testbed: an interactive visual testbed system for various dimension reduction and clustering methods, (2) iVisClassifier: an interactive visual classification system using supervised dimension reduction, and (3) VisIRR: an interactive visual information retrieval and recommender system for large-scale document data.
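A generic instance of the two-stage idea, a fast linear pre-reduction followed by a nonlinear 2D embedding for display, can be sketched with off-the-shelf scikit-learn components. The concrete choice of PCA followed by t-SNE below is an illustrative assumption, not the specific framework contributed by the thesis.

```python
# Two-stage dimension reduction sketch: 1000 -> 50 dims with PCA,
# then 50 -> 2 dims with t-SNE for visualization.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 1000))               # 500 samples, 1000 dimensions

X_mid = PCA(n_components=50).fit_transform(X)  # stage 1: linear pre-reduction
X_2d = TSNE(n_components=2, perplexity=30,
            init="pca", random_state=0).fit_transform(X_mid)  # stage 2
print(X_2d.shape)  # (500, 2)
```

The pre-reduction step keeps the expensive nonlinear stage tractable, which is one practical motivation for staging the computation.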
APA, Harvard, Vancouver, ISO, and other styles
43

Vanderhyde, James. "Topology Control of Volumetric Data." Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/16215.

Full text
Abstract:
Three-dimensional scans and other volumetric data sources often result in representations that are more complex topologically than the original model. The extraneous critical points, handles, and components are called topological noise. Many algorithms in computer graphics require simple topology in order to work optimally, including texture mapping, surface parameterization, flows on surfaces, and conformal mappings. The topological noise disrupts these procedures by requiring each small handle to be dealt with individually. Furthermore, topological descriptions of volumetric data are useful for visualization and data queries. One such description is the contour tree (or Reeb graph), which depicts when the isosurfaces split and merge as the isovalue changes. In the presence of topological noise, the contour tree can be too large to be useful. For these reasons, an important goal in computer graphics is simplification of the topology of volumetric data. The key to this thesis is that the global topology of volumetric data sets is determined by local changes at individual points. Therefore, we march through the data one grid cell at a time, and for each cell, we use a local check to determine if the topology of an isosurface is changing. If so, we change the value of the cell so that the topology change is prevented. In this thesis we describe variations on the local topology check for use in different settings. We use the topology simplification procedure to extract a single component with controlled topology from an isosurface in volume data sets and partially-defined volume data sets. We also use it to remove critical points from three-dimensional volumes, as well as time-varying volumes. We have applied the technique to two-dimensional (plus time) data sets and three-dimensional (plus time) data sets.
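The core mechanism, a local check deciding whether flipping one cell changes topology, has a compact 2D analog in the classic simple-point (crossing-number) test. The sketch below illustrates that idea only; the thesis works with the harder 3D and time-varying cases, and the function names here are invented.

```python
# 2D simple-point test: a foreground pixel may be flipped to background
# without changing local topology iff the number of 0->1 transitions
# around its 8-neighborhood (traversed cyclically) equals 1.
import numpy as np

def crossing_number(img, y, x):
    """Count 0->1 transitions around (y, x) in cyclic neighbor order."""
    offs = [(-1, 0), (-1, 1), (0, 1), (1, 1),
            (1, 0), (1, -1), (0, -1), (-1, -1)]
    n = [img[y + dy, x + dx] for dy, dx in offs]
    return sum(n[i] == 0 and n[(i + 1) % 8] == 1 for i in range(8))

def is_simple(img, y, x):
    """True if removing pixel (y, x) neither splits nor merges components."""
    return img[y, x] == 1 and crossing_number(img, y, x) == 1

img = np.zeros((5, 5), dtype=int)
img[2, 1:4] = 1              # a short horizontal line of 3 pixels
print(is_simple(img, 2, 1))  # True: an end pixel can be removed safely
print(is_simple(img, 2, 2))  # False: removing the middle splits the line
```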
APA, Harvard, Vancouver, ISO, and other styles
44

Stokes, Todd Hamilton. "Development of a visualization and information management platform in translational biomedical informatics." Diss., Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/33967.

Full text
Abstract:
Translational Biomedical Informatics (TBMI) is an emerging discipline expanding beyond traditional bioinformatics, with a focus on developing computational technologies for real-world biomedical practice. The goal of my Ph.D. research is to address a few key challenges in TBMI, including: (1) the high quality and reproducibility required by medical applications when processing high-throughput data, (2) the need for knowledge management solutions that allow molecular data to be handled and evaluated by researchers, regulators, and doctors collectively, (3) the need for near real-time, efficient access to decision-oriented visualizations of integrated data and data processing results, and (4) the need for an integrated solution that can evolve as medical consensus evolves, without requiring retraining, overhaul or replacement. This dissertation resulted in the development and adoption of concrete web-based application deliverables in regular use by bioinformaticians, clinicians, biologists and nanotechnologists. These include: the Chip Artifact Correction (caCORRECT) web site and grid services, the ArrayWiki community microarray repository, and the SimpleVisGrid visualization grid services (including eGOMiner, nanoDRIVE, PathwayVis and SphingoVisGrid).
APA, Harvard, Vancouver, ISO, and other styles
45

Trapp, Matthias. "Interactive rendering techniques for focus+context visualization of 3D geovirtual environments." PhD thesis, Universität Potsdam, 2013. http://opus.kobv.de/ubp/volltexte/2013/6682/.

Full text
Abstract:
This thesis introduces a collection of new real-time rendering techniques and applications for focus+context visualization of interactive 3D geovirtual environments such as virtual 3D city and landscape models. These environments are generally characterized by a large number of objects and are of high complexity with respect to geometry and textures. For these reasons, their interactive 3D rendering represents a major challenge. Their 3D depiction implies a number of weaknesses such as occlusions, cluttered image contents, and partial screen-space usage. To overcome these limitations and, thus, to facilitate the effective communication of geo-information, principles of focus+context visualization can be used for the design of real-time 3D rendering techniques for 3D geovirtual environments. In general, detailed views of a 3D geovirtual environment are combined seamlessly with abstracted views of the context within a single image. To perform the real-time image synthesis required for interactive visualization, dedicated parallel processors (GPUs) for rasterization of computer graphics primitives are used. For this purpose, the design and implementation of appropriate data structures and rendering pipelines are necessary. The contribution of this work comprises the following five real-time rendering methods: • The rendering technique for 3D generalization lenses enables the combination of different 3D city geometries (e.g., generalized versions of a 3D city model) in a single image in real time. The method is based on a generalized and fragment-precise clipping approach, which uses a compressible, raster-based data structure. It enables the combination of detailed views in the focus area with the representation of abstracted variants in the context area. • The rendering technique for the interactive visualization of dynamic raster data in 3D geovirtual environments facilitates the rendering of 2D surface lenses. It enables a flexible combination of different raster layers (e.g., aerial images or videos) using projective texturing for decoupling image and geometry data. Thus, various overlapping and nested 2D surface lenses of different contents can be visualized interactively. • The interactive rendering technique for image-based deformation of 3D geovirtual environments enables the real-time image synthesis of non-planar projections, such as cylindrical and spherical projections, as well as multi-focal 3D fisheye-lenses and the combination of planar and non-planar projections. • The rendering technique for view-dependent multi-perspective views of 3D geovirtual environments, based on the application of global deformations to the 3D scene geometry, can be used for synthesizing interactive panorama maps to combine detailed views close to the camera (focus) with abstract views in the background (context). This approach reduces occlusions, increases the usage of the available screen space, and reduces the overload of image contents. • The object-based and image-based rendering techniques for highlighting objects and focus areas inside and outside the view frustum facilitate preattentive perception. The concepts and implementations of interactive image synthesis for focus+context visualization and their selected applications enable a more effective communication of spatial information, and provide building blocks for the design and development of new applications and systems in the field of 3D geovirtual environments.
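The image-based deformation idea behind the fisheye lenses can be illustrated with a small CPU-side inverse warp: pixels inside a circular lens are sampled closer to the focus point, magnifying it. The thesis implements such deformations in real time on the GPU for full 3D scenes, so the following is only a conceptual sketch with invented names and parameters.

```python
# Radial fisheye magnification of a 2D image by inverse warping.
import numpy as np

def fisheye(img, cy, cx, radius, strength=0.5):
    """Magnify a circular region around (cy, cx)."""
    h, w = img.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    dy, dx = yy - cy, xx - cx
    r = np.hypot(dy, dx)
    # Inside the lens, sample closer to the center -> magnification.
    scale = np.where(r < radius, (r / radius) ** (1 + strength), 1.0)
    sy = np.clip(cy + dy * scale, 0, h - 1).astype(int)
    sx = np.clip(cx + dx * scale, 0, w - 1).astype(int)
    return img[sy, sx]

# Checkerboard test image; the warp magnifies the central 40-pixel lens.
img = (np.indices((100, 100)).sum(0) // 10 % 2).astype(float)
warped = fisheye(img, 50, 50, 40)
print(warped.shape)  # (100, 100)
```

The scale factor equals 1 at the lens boundary, so the warped region blends continuously into the undistorted context, the same continuity requirement a GPU implementation must satisfy.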
APA, Harvard, Vancouver, ISO, and other styles
46

Hordemann, Glen J. "Exploring High Performance SQL Databases with Graphics Processing Units." Bowling Green State University / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1380125703.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Mehta, Nishant K. "A Hierarchy Navigation Framework: Supporting Scalable Interactive Exploration over Large Databases." Link to electronic thesis, 2004. http://www.wpi.edu/Pubs/ETD/Available/etd-0827104-114148/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Bank, Jason Noah. "Propagation of Electromechanical Disturbances across Large Interconnected Power Systems and Extraction of Associated Modal Content from Measurement Data." Diss., Virginia Tech, 2009. http://hdl.handle.net/10919/73008.

Full text
Abstract:
Changes in power system operating conditions cause dynamic changes in angle and frequency. These disturbances propagate throughout the system area with finite speed. This propagation takes the form of a traveling wave whose arrival time at a particular point in the system can be observed using a wide-area measurement system (WAMS). Observations of these waves, both through simulation and measurement data, have demonstrated several factors that influence the speed at which a disturbance propagates through a system. Results of this testing are presented which demonstrate dependence on generator inertia, damping, and line impedance. Considering a power system as an area with an uneven distribution of these parameters, it is observed that a disturbance will propagate throughout a system at different rates in differing directions. This knowledge has applications in locating the originating point of a system disturbance, understanding the overall dynamic response of a power system, and determining the dependencies between various parts of that system. A simplified power system simulator is developed using the swing equation and system power flow equations. This simplified modeling technique captures the phenomenon of traveling electromechanical waves and demonstrates the same dependencies as data derived from measurements and commercial power system simulation packages. The ultimate goal of this research is to develop a methodology to approximate a real system with this simplified wave propagation model. In this architecture each measurement point would represent a pseudo-bus in the model. This procedure effectively lumps areas of the system into one equivalent bus with appropriately sized generators and loads. With the architecture of this reduced network determined, its parameters may be estimated so as to provide a best fit to the measurement data. Doing this effectively derives a data-driven equivalent system model. With an appropriate equivalent model for a given system determined, incoming measurement data can be processed in real time to provide an indication of the system operating point. Additionally, as the system state is read in through measurement data, future measurement values along the same trajectory can be estimated. These estimates of future system values can provide information for advanced control and protection schemes. Finally, a procedure for the identification and extraction of inter-area oscillations is developed. The dominant oscillatory frequency is identified from an event region and then fit across the surrounding dataset. For each segment of this data set, values of amplitude, phase, and damping are derived for each measurement vector. Doing this builds up a picture of how the oscillation evolves over time and responds to system conditions. These results are presented in a graphical format as a movie tracking the modal phasors over time. Examples derived from real world measurement data are presented.
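The modal-extraction step, fitting amplitude, phase, damping, and frequency of a dominant oscillation to a measurement segment, can be sketched as a nonlinear least-squares fit of a damped sinusoid. The model, synthetic data, and initial guess below are illustrative assumptions, not the dissertation's exact procedure.

```python
# Fit a single damped sinusoid A * exp(sigma*t) * cos(2*pi*f*t + phi)
# to a noisy synthetic frequency-deviation signal.
import numpy as np
from scipy.optimize import curve_fit

def damped_mode(t, A, sigma, f, phi):
    return A * np.exp(sigma * t) * np.cos(2 * np.pi * f * t + phi)

t = np.linspace(0, 10, 1000)
rng = np.random.default_rng(1)
# Synthetic 0.4 Hz inter-area mode with decay rate sigma = -0.12 1/s.
y = damped_mode(t, 0.05, -0.12, 0.4, 0.3) + 0.002 * rng.normal(size=t.size)

p0 = [0.04, -0.1, 0.45, 0.0]   # rough initial guess for the optimizer
(A, sigma, f, phi), _ = curve_fit(damped_mode, t, y, p0=p0)
print(f"amplitude={A:.3f}, damping sigma={sigma:.3f} 1/s, "
      f"frequency={f:.3f} Hz, phase={phi:.3f} rad")
```

Repeating such a fit per measurement vector and per time segment yields the sequence of modal phasors (amplitude and phase per location) that the dissertation animates over time.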
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
49

Schintler, Laurie A., and Manfred M. Fischer. "The Analysis of Big Data on Cities and Regions - Some Computational and Statistical Challenges." WU Vienna University of Economics and Business, 2018. http://epub.wu.ac.at/6637/1/2018%2D10%2D28_Big_Data_on_cities_and_regions_untrack_changes.pdf.

Full text
Abstract:
Big Data on cities and regions bring new opportunities and challenges to data analysts and city planners. On the one hand, they hold great promise to combine increasingly detailed data for each citizen with critical infrastructures to plan, govern and manage cities and regions, improve their sustainability, optimize processes and maximize the provision of public and private services. On the other hand, the massive sample size and high dimensionality of Big Data and their geo-temporal character introduce unique computational and statistical challenges. This chapter provides an overview of the salient characteristics of Big Data and of how these features drive a paradigm change in data management and analysis, as well as in the computing environment.
Series: Working Papers in Regional Science
APA, Harvard, Vancouver, ISO, and other styles
50

Kim, Kihwan. "Spatio-temporal data interpolation for dynamic scene analysis." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/47729.

Full text
Abstract:
Analysis and visualization of dynamic scenes is often constrained by the amount of spatio-temporal information available from the environment. In most scenarios, we have to account for incomplete information and sparse motion data, requiring us to employ interpolation and approximation methods to fill in the missing information. Scattered data interpolation and approximation techniques have been widely used for solving the problem of completing surfaces and images with incomplete input data. We introduce approaches for such data interpolation and approximation from limited sensors into the domain of analyzing and visualizing dynamic scenes. Data from dynamic scenes is subject to constraints due to the spatial layout of the scene and/or the configurations of video cameras in use. Such constraints include: (1) sparsely available cameras observing the scene, (2) limited field of view provided by the cameras in use, (3) incomplete motion at a specific moment, and (4) varying frame rates due to different exposures and resolutions. In this thesis, we establish these forms of incompleteness in the scene as spatio-temporal uncertainties, and propose solutions for resolving the uncertainties by applying scattered data approximation in a spatio-temporal domain. The main contributions of this research are as follows: First, we provide an efficient framework to visualize large-scale dynamic scenes from distributed static videos. Second, we adapt Radial Basis Function (RBF) interpolation to the spatio-temporal domain to generate global motion tendency. The tendency, represented by a dense flow field, is used to optimally pan and tilt a video camera. Third, we propose a method to represent motion trajectories using stochastic vector fields. Gaussian Process Regression (GPR) is used to generate a dense vector field and the certainty of each vector in the field. The generated stochastic fields are used for recognizing motion patterns under varying frame rates and incompleteness of the input videos. Fourth, we show that the stochastic representation of vector fields can also be used for modeling global tendency to detect the regions of interest in dynamic scenes with camera motion. We evaluate and demonstrate our approaches in several applications for visualizing virtual cities, automating sports broadcasting, and recognizing traffic patterns in surveillance videos.
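The first interpolation step described, turning sparse motion samples into a dense flow field with radial basis functions, can be sketched with scipy's RBFInterpolator. The sample data and kernel choice here are illustrative assumptions rather than the thesis's exact formulation.

```python
# Interpolate sparse 2D motion vectors to a dense 50x50 flow field.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(2)
pts = rng.uniform(0, 1, size=(30, 2))           # sparse sample locations
vec = np.stack([np.sin(2 * np.pi * pts[:, 1]),  # known motion vectors
                np.cos(2 * np.pi * pts[:, 0])], axis=1)

rbf = RBFInterpolator(pts, vec, kernel="thin_plate_spline")

# Evaluate on a dense grid to obtain the global motion tendency.
gy, gx = np.mgrid[0:1:50j, 0:1:50j]
grid = np.stack([gx.ravel(), gy.ravel()], axis=1)
flow = rbf(grid).reshape(50, 50, 2)
print(flow.shape)  # (50, 50, 2)
```

A GPR-based variant would additionally return a per-vector uncertainty, which is what the stochastic vector fields in the third contribution exploit.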
APA, Harvard, Vancouver, ISO, and other styles