Dissertations on the topic "Data Analysis and Visualization"

To view other types of publications on this topic, follow the link: Data Analysis and Visualization.

Format your source according to APA, MLA, Chicago, Harvard, and other citation styles


Consult the top 50 dissertations for your research on the topic "Data Analysis and Visualization".

Next to each source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, when this information is available in the record's metadata.

Browse dissertations from a wide range of disciplines and compile your bibliography correctly.

1

Furuhashi, Takeshi. "Data Visualization for Kansei Analysis." 日本知能情報ファジィ学会, 2010. http://hdl.handle.net/2237/20694.

Abstract:
SCIS & ISIS 2010, Joint 5th International Conference on Soft Computing and Intelligent Systems and 11th International Symposium on Advanced Intelligent Systems. December 8-12, 2010, Okayama Convention Center, Okayama, Japan
2

Cheong, Tat Man. "Money laundering data analysis and visualization." Thesis, University of Macau, 2011. http://umaclib3.umac.mo/record=b2492978.

3

Yang, Di. "Analysis guided visual exploration of multivariate data." Worcester, Mass. : Worcester Polytechnic Institute, 2007. http://www.wpi.edu/Pubs/ETD/Available/etd-050407-005925/.

4

Huang, Yunshui Charles. "A prototype of data analysis visualization tool." Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/12125.

5

Wu, Yingyu. "Using Text based Visualization in Data Analysis." Kent State University / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=kent1398079502.

6

Alam, Sayeed Safayet. "Analysis of Eye-Tracking Data in Visualization and Data Space." FIU Digital Commons, 2017. http://digitalcommons.fiu.edu/etd/3473.

Abstract:
Eye-tracking devices can tell us where on the screen a person is looking. Researchers frequently analyze eye-tracking data manually, by examining every frame of a visual stimulus used in an eye-tracking experiment so as to match the 2D screen coordinates provided by the eye-tracker to related objects and content within the stimulus. Such a task requires significant manual effort and is not feasible for analyzing data collected from many users, long experimental sessions, and heavily interactive and dynamic visual stimuli. In this dissertation, we present a novel analysis method: we instrument visualizations that have open source code, and leverage real-time information about the layout of the rendered visual content, to automatically relate gaze samples to the visual objects drawn on the screen. Since the visual objects shown in a visualization stand for data, the method allows us to detect the data that users focus on, or Data of Interest (DOI). This dissertation has two contributions. First, we demonstrate the feasibility of collecting DOI data for real-life visualizations in a reliable way, which is not self-evident. Second, we formalize the process of collecting and interpreting DOI data and test whether automated DOI detection can lead to research workflows and insights not possible with traditional, manual approaches.
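The core of such a method can be sketched in a few lines. The sketch below assumes the instrumented visualization can report, for any moment, the bounding boxes of the glyphs it has drawn and the data items they stand for; all names and structures are illustrative, not taken from the dissertation:

```python
from dataclasses import dataclass

@dataclass
class VisualObject:
    datum_id: str          # the data item this glyph stands for
    x: float; y: float     # top-left corner in screen coordinates
    w: float; h: float     # width and height

def data_of_interest(gaze_samples, layout):
    """Map 2D gaze samples to the data behind the glyphs they hit."""
    hits = []
    for t, gx, gy in gaze_samples:            # (timestamp, x, y)
        for obj in layout:                    # layout as rendered at time t
            if obj.x <= gx <= obj.x + obj.w and obj.y <= gy <= obj.y + obj.h:
                hits.append((t, obj.datum_id))
    return hits

# Example: two bars in a bar chart, three gaze samples
layout = [VisualObject("sales_2020", 10, 50, 40, 200),
          VisualObject("sales_2021", 60, 30, 40, 220)]
samples = [(0.00, 25, 120), (0.02, 80, 100), (0.04, 300, 300)]
print(data_of_interest(samples, layout))
# [(0.0, 'sales_2020'), (0.02, 'sales_2021')]
```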
7

Song, Huaguang. "Multi-scale data sketching for large data analysis and visualization." Scholarly Commons, 2012. https://scholarlycommons.pacific.edu/uop_etds/832.

Abstract:
Analysis and visualization of large data sets is time consuming and can sometimes be a very difficult process, especially for 3D data sets. Therefore, data processing and visualization techniques have often been applied to massive data analysis for efficiency and accuracy purposes. This thesis presents a multi-scale data sketching solution, specifically for large 3D scientific data, with the goal of supporting collaborative data management, analysis and visualization. The idea is to allow users to quickly identify interesting regions and observe significant patterns without directly accessing the raw data, since most of the information in raw form is not useful. This solution provides a fast way for users to choose the regions they are interested in and save time. By preprocessing the data, our solution can sketch out the general regions of the 3D data, and users can decide whether they are interested in going further to analyze the current data. The key issue is to find efficient and accurate algorithms to detect boundary or region information for large 3D scientific data. Specific techniques and performance analysis are also discussed.
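As a rough illustration of the idea (not the thesis's actual algorithms), one can block-average a volume at several scales and flag blocks with strong local gradients as candidate boundary regions:

```python
import numpy as np

def downsample(vol, factor):
    """Block-mean downsample a 3D volume by an integer factor per axis."""
    z, y, x = (s // factor for s in vol.shape)
    v = vol[:z * factor, :y * factor, :x * factor]
    return v.reshape(z, factor, y, factor, x, factor).mean(axis=(1, 3, 5))

def sketch(vol, factors=(8, 4, 2), thresh=0.1):
    """Multi-scale sketch: per scale, mark blocks whose local gradient
    magnitude exceeds `thresh` as candidate boundary regions."""
    out = {}
    for f in factors:
        coarse = downsample(vol, f)
        gz, gy, gx = np.gradient(coarse)
        out[f] = np.sqrt(gz**2 + gy**2 + gx**2) > thresh
    return out

rng = np.random.default_rng(0)
vol = rng.random((64, 64, 64))
vol[20:40, 20:40, 20:40] += 2.0        # an embedded "feature" region
for f, mask in sketch(vol).items():
    print(f"scale 1/{f}: {int(mask.sum())} candidate boundary blocks")
```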
8

Park, Joonam. "A visualization system for nonlinear frame analysis." Thesis, Georgia Institute of Technology, 1999. http://hdl.handle.net/1853/19172.

9

Schroeder, Michael Philipp 1986. "Analysis and visualization of multidimensional cancer genomics data." Doctoral thesis, Universitat Pompeu Fabra, 2014. http://hdl.handle.net/10803/301436.

Abstract:
Cancer is a complex disease caused by somatic alterations of the genome and epigenome in tumor cells. Increased investments and cheaper access to various technologies have built momentum for the generation of cancer genomics data. The availability of such large datasets offers many new possibilities to gain insight into cancer molecular properties. Within this scope I present two methods that exploit the broad availability of cancer genomic data: OncodriveROLE, an approach to classify mutational cancer driver genes into activating and loss-of-function modes of action, and MutEx, a statistical measure to assess the tendency of somatic alterations in a set of genes to be mutually exclusive across tumor samples. Nevertheless, the unprecedented dimension of the available data raises new complications for its accessibility and exploration, which we try to solve with new visualization solutions: i) Gitools interactive heatmaps with prepared large-scale cancer genomics datasets ready to be explored, ii) jHeatmap, an interactive heatmap browser for the web capable of displaying multidimensional cancer genomics data and designed for inclusion into web portals, and iii) SVGMap, a web server to project data onto customized SVG figures, useful for mapping experimental measurements onto the model.
10

Walter, Martin Alan. "Visualization techniques for the analysis of neurophysiological data." Thesis, University of Plymouth, 2004. http://hdl.handle.net/10026.1/2551.

Abstract:
In order to understand the diverse and complex functions of the human brain, the temporal relationships of vast quantities of multi-dimensional spike train data must be analysed. A number of statistical methods already exist to analyse these relationships. However, as a result of expansions in recording capability, hundreds of spike trains must now be analysed simultaneously. In addition to the requirements for new statistical analysis methods, the need for more efficient data representation is paramount. The computer science field of Information Visualization is specifically aimed at producing effective representations of large and complex datasets. This thesis is based on the assumption that data analysis can be significantly improved by the application of Information Visualization principles and techniques. This thesis discusses the discipline of Information Visualization within the wider context of visualization. It also presents some introductory neurophysiology, focusing on the analysis of multidimensional spike train data and the software currently available to support this problem. Following this, the Toolbox developed to support the analysis of these datasets is presented. Subsequently, three case studies using the Toolbox are described. The first case study was conducted on a known dataset in order to gain experience with these methods. The second and third case studies were conducted on blind datasets, and both of these yielded compelling results.
11

Tenev, Tichomir Gospodinov. "SpreadCube--a visualization tool for exploratory data analysis." Thesis, Massachusetts Institute of Technology, 1997. http://hdl.handle.net/1721.1/43924.

Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1997.
Includes bibliographical references (p. 153-154).
by Tichomir Gospodinov Tenev.
M.Eng.
12

Purcaro, Michael J. "Analysis, Visualization, and Machine Learning of Epigenomic Data." eScholarship@UMMS, 2017. https://escholarship.umassmed.edu/gsbs_diss/938.

Abstract:
The goal of the Encyclopedia of DNA Elements (ENCODE) project has been to characterize all the functional elements of the human genome. These elements include expressed transcripts and genomic regions bound by transcription factors (TFs), occupied by nucleosomes, occupied by nucleosomes with modified histones, hypersensitive to DNase I cleavage, etc. Chromatin immunoprecipitation followed by sequencing (ChIP-seq) is an experimental technique for detecting TF binding in living cells, and the genomic regions bound by TFs are called ChIP-seq peaks. ENCODE has performed and compiled results from tens of thousands of experiments, including ChIP-seq, DNase, RNA-seq and Hi-C. These efforts have culminated in two web-based resources from our lab, Factorbook and SCREEN, for the exploration of epigenomic data for both human and mouse. Factorbook is a peak-centric resource presenting data such as motif enrichment and histone modification profiles for transcription factor binding sites computed from ENCODE ChIP-seq data. SCREEN provides an encyclopedia of ~2 million regulatory elements, including promoters and enhancers, identified using ENCODE ChIP-seq and DNase data, with an extensive UI for searching and visualization. While we have successfully utilized the thousands of available ENCODE ChIP-seq experiments to build the Encyclopedia and visualizers, we have also struggled with the practical and theoretical inability to assay every possible experiment on every possible biosample under every conceivable biological scenario. We have used machine learning techniques to predict TF binding sites and enhancer locations, and demonstrate that machine learning is critical to help decipher functional regions of the genome.
13

Li, Zhongli. "Towards a Cloud-based Data Analysis and Visualization System." Thesis, Université d'Ottawa / University of Ottawa, 2016. http://hdl.handle.net/10393/35030.

Abstract:
In recent years, increasing attention has been paid to developing technologies for efficiently processing the massive collections of heterogeneous data generated by different kinds of sensors. While we have observed great successes in utilizing big data in many innovative applications, the need to integrate information poses new challenges caused by the heterogeneity of the data. In this thesis, we target geo-tagged data and propose a cloud-based platform named City Digital Pulse (CDP), in which a unified mechanism and extensible architecture facilitate the various aspects of big data analysis, ranging from data acquisition to data visualization. We instantiate the proposed system using multimodal data collected from two social platforms, Twitter and Instagram, both of which include plenty of geo-tagged messages. Data analysis is performed to detect human affect from the user-uploaded content. The emotional information in big social data can be uncovered through a multi-dimensional visualization interface, with which users can easily grasp the evolution of human affective status within a given geographical area and interact with the system. This offers low-cost opportunities to improve decision making in many critical areas. Both the proposed architecture and algorithm are empirically demonstrated to achieve real-time big data analysis.
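A toy illustration of the spatial aggregation step, assuming posts already carry a precomputed affect score in [-1, 1] (the CDP pipeline's actual affect models and architecture are not reproduced here):

```python
import pandas as pd

# Hypothetical geo-tagged posts with a precomputed affect score
posts = pd.DataFrame({
    "lat":    [45.42, 45.43, 45.41, 43.65, 43.66],
    "lon":    [-75.69, -75.70, -75.68, -79.38, -79.40],
    "affect": [0.8, 0.5, -0.2, -0.6, -0.4],
})

# Bin coordinates into ~0.1-degree cells and summarize affect per cell,
# the kind of aggregate a map-based dashboard would color by
posts["cell"] = list(zip(posts.lat.round(1), posts.lon.round(1)))
summary = posts.groupby("cell")["affect"].agg(["mean", "count"])
print(summary)
```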
14

Henning, Gustav. "Visualization of neural data : Dynamic representation and analysis of accumulated experimental data." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-166770.

Abstract:
The scientific method is an integral part of the investigation and exploration of hypotheses. Although procedures may vary from one field to the next, most have common identifiable stages. Today, there is no lack of tools that illustrate data in different graphical mediums. This thesis focuses instead on the type of tools that researchers use to investigate their hypotheses' validity. When a sufficient amount of data is gathered, it can be presented for analysis in meaningful ways to illustrate patterns or abnormalities that would otherwise go unnoticed when viewed only as raw numbers. However useful static visualization of data can be when presented in a scientific paper, researchers are often overwhelmed by the number of plots and graphs that can be made using only a sliver of data. Therefore, this thesis introduces software whose purpose is to demonstrate the needs of researchers in analyzing data from repeated experiments, in order to speed up the process of recognizing variations between them.
15

Töpel, Johanna. "Initial Analysis and Visualization of Waveform Laser Scanner Data." Thesis, Linköping University, Department of Electrical Engineering, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2864.

Abstract:

Conventional airborne laser scanner systems output the three-dimensional coordinates of the surface location hit by the laser pulse. The data storage capacity and processing speeds available today have made it possible to digitally sample and store the entire reflected waveform, instead of extracting only the coordinates. Research has shown that return waveforms can give even more detailed insights into the vertical structure of surface objects, surface slope, roughness and reflectivity than conventional systems provide. One of the most important advantages of registering the waveforms is that it gives users the possibility to define, in post-processing, the way range is calculated.

In this thesis, different techniques have been tested to visualize a waveform data set in order to get a better understanding of the waveforms and how they can be used to improve methods for classification of ground objects.

A pulse detection algorithm, based on the EM algorithm, has been implemented and tested. The algorithm outputs the position and width of the echo pulses. One of the results of this thesis is that echo pulses reflected by vegetation tend to be wider than those reflected by, for example, a road. Another result is that up to five echo pulses can be detected, compared to the two echo pulses that conventional systems detect.
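A minimal sketch of EM-based pulse detection, treating the digitized waveform amplitudes as weights on their time samples and fitting Gaussian echo pulses; the initialization and exact model may differ from the thesis:

```python
import numpy as np

def em_pulses(t, amplitude, k=2, iters=200):
    """Fit k Gaussian echo pulses to a waveform by weighted EM and
    return (positions, widths, relative energies). A sketch only."""
    w = amplitude / amplitude.sum()
    mu = np.quantile(t, np.linspace(0.2, 0.8, k))     # crude init
    sig = np.full(k, (t[-1] - t[0]) / (4 * k))
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of pulse j for each time sample
        dens = pi * np.exp(-0.5 * ((t[:, None] - mu) / sig) ** 2) / sig
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: amplitude-weighted parameter updates
        nk = (w[:, None] * r).sum(axis=0)
        mu = (w[:, None] * r * t[:, None]).sum(axis=0) / nk
        var = (w[:, None] * r * (t[:, None] - mu) ** 2).sum(axis=0) / nk
        sig = np.sqrt(var)
        pi = nk
    return mu, sig, pi

# Synthetic return: a narrow road-like echo and a wide vegetation-like echo
t = np.linspace(0, 100, 1000)
wave = np.exp(-0.5 * ((t - 30) / 1.5) ** 2) + 0.6 * np.exp(-0.5 * ((t - 60) / 6) ** 2)
mu, sig, pi = em_pulses(t, wave)
print("positions:", mu.round(1), "widths:", sig.round(1))
```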

16

Labitzke, Björn [Verfasser]. "Visualization and analysis of multispectral image data / Björn Labitzke." Siegen : Universitätsbibliothek der Universität Siegen, 2014. http://d-nb.info/1057805076/34.

17

Grünfeld, Katrin. "Visualization, integration and analysis of multi-element geochemical data." Doctoral thesis, KTH, Mark- och vattenteknik, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-169.

Abstract:
Geochemical surveys have generated large databases containing information on the concentrations of chemical elements in rocks, surface sediments and biogeochemical materials. Regional geochemical data, being imprecise, multivariate, spatially auto-correlated and non-normally distributed, pose specific problems for the choice of data analysis methods. Commonly several methods are combined, and the choice of techniques depends on the characteristics of the data as well as the purpose of the study. One critical issue is dealing with extreme data values (or outliers) in the initial stages of analysis. Another common problem is that integrated analysis of several geochemical datasets is not possible without interpolating the point data into surfaces. Finally, separation of anthropogenic influences from the natural geochemical background in surface materials is an issue of great importance for environmental studies. This study describes an approach to address the above-mentioned problems through a flexible combination of GIS and multivariate statistical techniques with high-dimensional visualization. Dynamically linked parallel coordinate and scatterplot matrix displays allow simultaneous presentation of the spatial, multi-element and qualitative information components of geochemical data. The plots not only display data in multi-dimensional space, but also allow detailed inspection of the data with interactive multi-dimensional brushing tools. The results of the study indicate that these simple high-dimensional visualization techniques can successfully complement traditional statistical and GIS analysis in all steps of data processing, from data description and outlier identification through data integration, analysis, validation, and presentation of results. The outcomes of the study include: a visual procedure for intelligent data cleaning in which potentially significant information in very high element concentrations is preserved; methods for integration and visual analysis of geochemical datasets collected in different grids; estimation of geochemical baseline concentrations of trace metals in the till geochemistry of southeastern Sweden; use of multi-element spatial fingerprints to trace natural geochemical patterns in biogeochemistry; and a new graphical approach to present multi-element geochemical data summaries and results from numerical analysis.
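A static sketch of the two display techniques using pandas' built-in plotting, on hypothetical element concentrations; the dynamic linking and interactive brushing described above are beyond a few lines:

```python
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates, scatter_matrix

# Hypothetical multi-element data: concentrations (ppm) per till sample
df = pd.DataFrame({
    "Cu": [12, 15, 90, 14, 11], "Zn": [40, 45, 160, 42, 39],
    "Pb": [8, 9, 55, 10, 7],    "Ni": [20, 22, 80, 21, 19],
    "class": ["background"] * 2 + ["anomalous"] + ["background"] * 2,
})

fig, ax = plt.subplots()
parallel_coordinates(df, "class", ax=ax)       # one polyline per sample
scatter_matrix(df.drop(columns="class"))       # pairwise element views
plt.show()
```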
18

Grünfeld, Katrin. "Visualization, integration and analysis of multi-element geochemical data /." Stockholm, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-169.

19

Pheng, Sokhom. "Dynamic data structure analysis and visualization of Java programs." Thesis, McGill University, 2006. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=98768.

Abstract:
For many years, programmers have faced the problem of reading and trying to understand other programmers' code, either to maintain it or to learn from it. Analysis of dynamic data structure usage is useful both for program understanding and for improving the accuracy of other program analyses.

Data structure usage has been the target of various static techniques. Static approaches, however, may suffer from reduced accuracy in complex situations and have the potential to be overly conservative in their approximation. An accurate, clean picture of runtime heap activity is difficult to achieve.

We have designed and implemented a dynamic heap analysis system that allows one to examine and analyze how Java programs build and modify data structures. Using a complete execution trace from a profiled run of the program, we build an internal representation that mirrors the evolving runtime data structures. The resulting series of representations can then be analyzed and visualized. This gives us an accurate representation of the data structures created and insight into the program's behaviour. Furthermore, we show how to use our approach to help understand how programs use data structures, to determine the precise effect of garbage collection, and to establish limits on static data structure analysis.

A deep understanding of dynamic data structures is particularly important for modern, object-oriented languages that make extensive use of heap-based data structures. These analysis results can be useful for an important group of applications such as parallelization, garbage collection optimization, program understanding, or improvements to other optimizations.
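A toy version of the idea, replaying a simplified, hypothetical event trace (not the actual profiler format) into a series of heap-graph snapshots:

```python
from collections import defaultdict

def replay(trace):
    """Replay a simplified execution trace into evolving heap graphs.
    Hypothetical events: ("new", obj_id, cls) | ("write", src, field, dst)
    | ("gc", obj_id). Yields the adjacency map after every event."""
    heap = defaultdict(dict)       # obj_id -> {field: target obj_id}
    for ev in trace:
        if ev[0] == "new":
            heap[ev[1]]            # materialize the node
        elif ev[0] == "write":
            _, src, field, dst = ev
            heap[src][field] = dst
        elif ev[0] == "gc":
            heap.pop(ev[1], None)
            for fields in heap.values():      # drop dangling edges
                for f in [f for f, d in fields.items() if d == ev[1]]:
                    del fields[f]
        yield {k: dict(v) for k, v in heap.items()}

trace = [("new", 1, "Node"), ("new", 2, "Node"),
         ("write", 1, "next", 2), ("gc", 2)]
for step, snapshot in enumerate(replay(trace)):
    print(step, snapshot)
```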
20

Ding, Hao. "Visualization and Integrative analysis of cancer multi-omics data." The Ohio State University, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=osu1467843712.

21

Lozano, Prieto David. "Data analysis and visualization of the 360degrees interactional datasets." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-88985.

Abstract:
Nowadays, there is increasing interest in using 360-degree video in medical education. Recent efforts are starting to explore how nurse students experience and interact with 360-degree videos. However, once these interactions have been registered in a database, there is a lack of ways to analyze the data, which creates the need for a reliable method that can manage all the collected data and visualize its valuable insights. Hence, the main goal of this thesis is to address this challenge by designing an approach to analyze and visualize this kind of data. This will allow teachers in health care education and medical specialists to understand the collected data in a meaningful way. To arrive at the most suitable solution, several meetings with nursing teachers took place to draw up the first draft structure of an application that acts as the needed approach. The application was then used to analyze data collected in a study made in December. Finally, the application was evaluated through a questionnaire answered by a group of medical specialists involved in education. The initial outcomes of this testing and evaluation indicate that the application successfully achieves the main goals of the project, and it has allowed discussing ideas that will help improve the 360-degree video experience and its evaluation in nursing education, providing an additional tool to analyze, compare and assess students.
22

El-Shehaly, Mai Hassan. "A Visualization Framework for SiLK Data exploration and Scan Detection." Thesis, Virginia Tech, 2009. http://hdl.handle.net/10919/34606.

Abstract:
Network packet traces, despite having a lot of noise, contain priceless information, especially for investigating security incidents or troubleshooting performance problems. However, given the gigabytes of flows crossing a typical medium-sized enterprise network every day, spotting malicious activity and analyzing trends in network behavior becomes a tedious task. Further, computational mechanisms for analyzing such data usually take substantial time to reach interesting patterns and often mislead the analyst into false positives, where benign traffic is identified as malicious, or false negatives, where malicious activity goes undetected. Therefore, the appropriate representation of network traffic data to the human user has recently been an issue of concern. Much of the focus, however, has been on visualizing TCP traffic alone, adapting visualization techniques to the data fields relevant to this protocol's traffic, rather than on the multivariate nature of network security data in general, and on the fact that forensic analysis, in order to be fast and effective, has to take into consideration different parameters for each protocol. In this thesis, we bring together two powerful tools from different areas of application: SiLK (System for Internet-Level Knowledge), for command-based network trace analysis; and ComVis, a generic information visualization tool. We integrate the power of both tools through a simple GUI that supports simplified interaction between them, for the purpose of visualizing network traces, characterizing interesting patterns, and fingerprinting related activity. To obtain realistic results, we applied the visualizations to anonymized packet traces from Lawrence Berkeley National Laboratory, captured in selected hours across three months. We used a sliding-window approach in visually examining traces for two transport-layer protocols: ICMP and UDP. The main contribution of this research is a protocol-specific visualization framework for ICMP and UDP data. We explored relevant header fields and the visualizations that worked best for each of the two protocols separately. The resulting views led us to a number of guidelines that can be vital in the creation of "smart books" describing best practices in using visualization and interaction techniques to maintain network security, while creating visual fingerprints which were found to be unique for individual types of scanning activity. Our visualizations use a multiple-views approach that incorporates the power of two-dimensional scatter plots, histograms, parallel coordinates, and dynamic queries.
Master of Science
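A sketch of the sliding-window, per-protocol summarization step with pandas, over hypothetical flow records; SiLK's real record schema and the ComVis integration are not reproduced:

```python
import pandas as pd

# Hypothetical flow records in the spirit of SiLK output (not its real schema)
flows = pd.DataFrame({
    "time":  pd.to_datetime(["2004-01-01 10:00", "2004-01-01 10:02",
                             "2004-01-01 10:31", "2004-01-01 10:33"]),
    "proto": ["UDP", "ICMP", "UDP", "UDP"],
    "sip":   ["10.0.0.1", "10.0.0.2", "10.0.0.1", "10.0.0.3"],
    "bytes": [120, 64, 96, 80],
})

# One view per protocol, aggregated over 30-minute windows: the kind of
# summary that would be handed to a visualization tool for plotting
for proto, grp in flows.groupby("proto"):
    window = (grp.set_index("time")
                 .resample("30min")
                 .agg({"bytes": ["size", "sum"], "sip": "nunique"}))
    print(proto, window, sep="\n")
```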
23

Woodring, Jonathan Lee. "Visualization of Time-varying Scientific Data through Comparative Fusion and Temporal Behavior Analysis." The Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=osu1243549189.

24

Ashkiani, Shahin. "Four essays on data visualization and anomaly detection of data envelopment analysis problems." Doctoral thesis, Universitat Autònoma de Barcelona, 2019. http://hdl.handle.net/10803/669593.

Abstract:
Data visualization is a relatively neglected topic in the field of data envelopment analysis (DEA). In the comprehensive handbooks of DEA there is hardly any chapter or section dedicated to data visualization methods, and in applications of DEA a very limited and peripheral role is usually assigned to data visualization. However, graphical representation of data can have definite benefits for practitioners and researchers of the field, to such an extent that the insight into DEA problems gained through visualization may not be attainable using analytical methods. Data visualization, when applied correctly, is able to reveal regularities and irregularities in the data. Regularities can be trends or clusters, and irregularities are anything discordant, such as outliers. In some cases, data visualization helps to grasp the data much more quickly, as the human brain is wired to absorb visual information more efficiently than digits, and data visualization can summarize loads of digits into one chart. Moreover, some patterns become visible only when all the variables and their relations are retained by the investigation method, something that analytical methods do not intend to do. High-dimensional data visualization is composed of methods which tend to retain all information, and such methods are therefore at the center of this thesis, used to find regularities and irregularities in various DEA datasets. Despite the relative neglect, the DEA data visualization toolbox is not empty, and in fact it holds several useful tools. The first essay of this thesis is a visual survey of these available tools. Since there is no such survey in the DEA literature, it is important to gather all the visualization tools in one toolbox, and to identify and illustrate the important ones in order to help practitioners pick the proper tools and help researchers craft novel tools. The second essay of this thesis suggests a new tool for this toolbox: a visualization method for the DEA cross-evaluation methodology, which can be used for various purposes, including the detection of outliers or uncommon decision-making units (DMUs). One type of these uncommon DMUs is called "maverick units", and the third essay of this thesis focuses on this sort of DMU: a new visual method, based on the preceding essay, is suggested to detect such DMUs, and a new index is devised to identify them numerically. It is shown that the new maverick index is theoretically and practically more justified and robust than the well-known maverick indexes of the DEA literature. The fourth and last essay is an introduction to DEA-Viz, a new visualization software package developed by the author of this thesis. DEA-Viz includes an implementation of the cross-evaluation visualization method suggested in the second essay, as well as a selection of previously proposed DEA visualization methods. Moreover, DEA-Viz has novel visualization features for investigating maverick units in further detail, following the third essay. The importance of DEA-Viz lies in the fact that no other DEA software offers the same functionality or similar features. Thus, DEA-Viz can play an unparalleled role in the analysis of DEA problems and in the promotion of DEA visualization. Following the completion of this thesis, an R package including all the DEA-Viz tools, as well as some new methods, was developed by the author. The package, available in the author's online code repository, makes the code available to every interested user and expands the current DEA visualization tools from static to panel data.
25

Wictorin, Sebastian. "Streamlining Data Journalism: Interactive Analysis in a Graph Visualization Environment." Thesis, Malmö universitet, Fakulteten för kultur och samhälle (KS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-22498.

Abstract:
This thesis explores how one can streamline a data journalist's analytical workflow in a graph visualization environment. Interactive graph visualizations have recently been used by data journalists to investigate the biggest data leaks in history. Graph visualizations empower users to find patterns in their connected data, and as the world continuously produces more data, making sense of it becomes ever more important. The exploration was done by conducting semi-structured interviews with users, which illuminated three categories of insights called Graph Readability, Charts in Graphs and Temporality. Graph Readability was the category that was conceptualized and designed by integrating user research and data visualization best practices. The design process was concluded with a usability test with graph visualization developers, followed by a final iteration of the concept. The outcome is a module that lets users simplify their graph and preserve information by aggregating nodes with similar attributes.
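A minimal sketch of such an aggregation module, assuming nodes are grouped by one categorical attribute; names and data are illustrative, not from the thesis:

```python
import networkx as nx

def aggregate_by_attribute(g: nx.Graph, attr: str) -> nx.Graph:
    """Collapse all nodes sharing the same value of `attr` into one
    super-node; edge weights count the merged inter-group edges."""
    agg = nx.Graph()
    group = {n: g.nodes[n][attr] for n in g}
    agg.add_nodes_from(set(group.values()))
    for u, v in g.edges:
        a, b = group[u], group[v]
        if a != b:
            w = agg.get_edge_data(a, b, {"weight": 0})["weight"]
            agg.add_edge(a, b, weight=w + 1)
    return agg

g = nx.Graph()
g.add_nodes_from([(1, {"kind": "person"}), (2, {"kind": "person"}),
                  (3, {"kind": "company"}), (4, {"kind": "company"})])
g.add_edges_from([(1, 3), (2, 3), (2, 4)])
print(aggregate_by_attribute(g, "kind").edges(data=True))
# one person-company super-edge with weight 3
```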
26

Chaudhuri, Abon. "Geometric and Statistical Summaries for Big Data Visualization." The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1382235351.

27

Nyumbeka, Dumisani Joshua. "Using data analysis and Information visualization techniques to support the effective analysis of large financial data sets." Thesis, Nelson Mandela Metropolitan University, 2016. http://hdl.handle.net/10948/12983.

Abstract:
There have been a number of technological advances in the last ten years, which have resulted in the amount of data generated in organisations increasing by more than 200% during this period. This rapid increase in data means that if financial institutions are to derive significant value from this data, they need to identify new ways to analyse it effectively. Due to the considerable size of the data, financial institutions also need to consider how to visualise it effectively. Traditional tools such as relational database management systems have problems processing large amounts of data due to memory constraints, latency issues and the presence of both structured and unstructured data. The aim of this research was to use data analysis and information visualisation (IV) techniques to support the effective analysis of large financial data sets. In order to visually analyse the data effectively, the underlying data model must produce results that are reliable. A large financial data set was identified and used to demonstrate that IV techniques can support the effective analysis of large financial data sets. A review of the literature on large financial data sets, visual analytics, and existing data management and data visualisation tools identified the shortcomings of existing tools. This resulted in the determination of the requirements for the data management tool and the IV tool. The data management tool identified was a data warehouse, and the IV toolkit identified was Tableau. The IV techniques identified included the Overview, Dashboards and Colour Blending. The IV tool was implemented and published online and can be accessed through a web browser interface. The data warehouse and the IV tool were evaluated to determine their accuracy and effectiveness in supporting the effective analysis of the large financial data set. The experiment used to evaluate the data warehouse yielded positive results, showing that only about 4% of the records had incorrect data. The results of the user study were positive and no major usability issues were identified. The participants found the IV techniques effective for analysing the large financial data set.
28

Kriegel, Francesco. "Visualization of Conceptual Data with Methods of Formal Concept Analysis." Master's thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-125309.

Abstract:
Draft and proof of an algorithm computing incremental changes within a labeled, laid-out concept lattice upon insertion or removal of an attribute column in the underlying formal context. Furthermore, some implementation details and mathematical background are presented.
29

Chen, Chun-Ming. "Data Summarization for Large Time-varying Flow Visualization and Analysis." The Ohio State University, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=osu1469141137.

30

Tata, Maitreyi. "Data analytics on Yelp data set." Kansas State University, 2017. http://hdl.handle.net/2097/38237.

Abstract:
Master of Science
Department of Computing and Information Sciences
William H. Hsu
In this report, I describe a query-driven system that helps in deciding which restaurant to invest in, or which area of a specific city is a good place to open a new restaurant. Analysis is performed on existing businesses in every state, based on factors such as the average star rating, the total number of reviews associated with a specific restaurant, and the restaurant's price range. The results give an idea of the successful restaurants in a city, which helps in deciding where to invest and what to keep in mind while starting a new business. The main scope of the project is Analytics and Data Visualization.
31

Phillips, Brandon. "The Relationship Between Data Visualization and Task Performance." Thesis, University of North Texas, 2014. https://digital.library.unt.edu/ark:/67531/metadc699897/.

Abstract:
We are entering an era of business intelligence and big data where simple tables and other traditional means of data display cannot deal with the vast amounts of data required to meet the decision-making needs of businesses and their clients. Graphical figures constructed with modern visualization software can convey more information than a table because there is a limit to the table size that is visually usable. Contemporary decision performance is influenced by the task domain, the user experience, and the visualizations themselves. Utilizing data visualization in task performance to aid decision making is a complex process. We develop a decision-making framework to examine task performance in visual and non-visual aided decision making, using three experiments to test this framework. Studies 1 and 2 investigate DV formats and how complexity and design affect the proposed visual decision-making framework. The studies also examine how DV formats affect task performance, as measured by accuracy and timeliness, and format preference. Additionally, these studies examine how DV formats influence the constructs in the proposed decision-making framework, which include information usefulness, decision confidence, cognitive load, visual aesthetics, information-seeking intention, and emotion. Preliminary findings indicate that graphical DV allows individuals to respond faster and more accurately, resulting in improved task fit and performance. Anticipated implications of this research are as follows: visualizations are independent of the size of the data set but can become increasingly complex as data complexity increases; furthermore, well-designed visualizations let you see through the complexity and simultaneously mine it with drill-down technologies such as OLAP.
32

Diner, Casri. "Visualizing Data With Formal Concept Analysis." Master's thesis, METU, 2003. http://etd.lib.metu.edu.tr/upload/1046325/index.pdf.

Abstract:
In this thesis, we wanted to stress the tendency toward the geometry of data. This should be applicable in almost every branch of science where data are of great importance, and also in every kind of industry, economy, medicine, etc. Since machines' hard-disk capacities used for storing data and the amount of data reachable through the internet are increasing day by day, there is a need to turn this information into knowledge. This is one of the reasons for studying formal concept analysis. We wanted to point out how this application is related to algebra and logic. The beginning of the first chapter emphasizes the relation between closure systems, Galois connections, and lattice theory as a mathematical structure, on the one hand, and concept analysis on the other. Then it describes the basic step in the formalization: an elementary form of the representation of data is defined mathematically. The second chapter explains the logic of formal concept analysis. It also shows how implications between attributes, which can be regarded as special formulas on a set, can be represented by fewer implications, a so-called generating set of implications. These mathematical tools are then used in the last chapter to describe complex 'concept' lattices by means of decomposition methods in examples.
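A small worked example of the Galois connection described above, enumerating all formal concepts of a toy context by closing every subset of objects (illustrative only, not from the thesis):

```python
from itertools import combinations

# A small formal context: objects x attributes (pairs = "object has attribute")
objects = ["duck", "owl", "carp"]
attributes = ["flies", "swims", "bird"]
incidence = {("duck", "flies"), ("duck", "swims"), ("duck", "bird"),
             ("owl", "flies"), ("owl", "bird"), ("carp", "swims")}

def common_attrs(objs):
    return {a for a in attributes if all((o, a) in incidence for o in objs)}

def common_objs(attrs):
    return {o for o in objects if all((o, a) in incidence for a in attrs)}

# A formal concept is a pair (A, B) of objects and attributes
# with A' = B and B' = A under the two derivation operators above
concepts = set()
for r in range(len(objects) + 1):
    for objs in combinations(objects, r):
        b = common_attrs(set(objs))
        a = common_objs(b)
        concepts.add((frozenset(a), frozenset(b)))

for extent, intent in sorted(concepts, key=lambda c: len(c[0])):
    print(set(extent) or "{}", "<->", set(intent) or "{}")
```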
33

Scelfo, Tony (Tony W. ). "Data visualization of biological microscopy image analyses." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/37073.

Abstract:
Thesis (M. Eng. and S.B.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006.
Includes bibliographical references.
The Open Microscopy Environment (OME) provides biologists with a framework to store, analyze and manipulate large sets of image data. Current microscopes are capable of generating large numbers of images, and when coupled with automated analysis routines, researchers are able to generate otherwise intractable sets of data. I have developed an extension to the OME toolkit, named the LoViewer, which allows researchers to quickly identify clusters of images based on relationships between analytically measured parameters. By identifying unique subsets of data, researchers are able to make use of the rest of the OME client software to view interesting images in high resolution, classify them into category groups and apply further analysis routines. The design of the LoViewer itself and its integration with the rest of the OME toolkit are discussed in detail in the body of this thesis.
by Tony Scelfo.
M.Eng. and S.B.
34

Wang, Ko-Chih. "Distribution-based Summarization for Large Scale Simulation Data Visualization and Analysis." The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1555452764885977.

35

Wan, Yong. "Fluorender, an interactive tool for confocal microscopy data visualization and analysis." Thesis, The University of Utah, 2013. http://pqdtopen.proquest.com/#viewpdf?dispub=3592436.

Abstract:

Confocal microscopy has become a popular imaging technique in biology research in recent years. It is often used to study three-dimensional (3D) structures of biological samples. Confocal data are commonly multichannel, with each channel resulting from a different fluorescent staining. This technique also results in finely detailed structures in 3D, such as neuron fibers. Despite the plethora of volume rendering techniques that have been available for many years, there is a demand from biologists for a flexible tool that allows interactive visualization and analysis of multichannel confocal data. Together with biologists, we have designed and developed FluoRender. It incorporates volume rendering techniques such as a two-dimensional (2D) transfer function and multichannel intermixing. Rendering results can be enhanced through tone-mappings and overlays. To facilitate analyses of confocal data, FluoRender provides interactive operations for extracting complex structures. Furthermore, we developed the Synthetic Brainbow technique, which takes advantage of the asynchronous behavior in Graphics Processing Unit (GPU) framebuffer loops and generates random colorizations for different structures in single-channel confocal data. The results from our Synthetic Brainbows, when applied to a sequence of developing cells, can then be used for tracking the movements of these cells. Finally, we present an application of FluoRender in the workflow of constructing anatomical atlases.

36

Cutler, Darren W., and Tyler J. Rasmussen. "Usability Testing and Workflow Analysis of the TRADOC Data Visualization Tool." Thesis, Monterey, California. Naval Postgraduate School, 2012. http://hdl.handle.net/10945/17350.

Abstract:
Approved for public release; distribution is unlimited
The volume of data available to military decision makers is vast. Leaders need tools to sort, analyze, and present information in an effective manner. Software complexity is also increasing, with user interfaces becoming more intricate and interactive. The Data Visualization Tool (DaViTo) is an effort by TRAC Monterey to produce a tool for use by personnel with little statistical background to process and display this data. To meet the program goals and make analytical capabilities more widely available, the user interface and data representation techniques need refinement. This usability test is a task-oriented study using eye-tracking, data representation techniques, and surveys to generate recommendations for software improvement. Twenty-four subjects participated in three sessions using DaViTo over a three-week period. The first two sessions consisted of training followed by basic reinforcement tasks, evaluation of graphical methods, and a brief survey. The final session was a task-oriented session followed by graphical representations evaluation and an extensive survey. Results from the three sessions were analyzed and 37 recommendations generated for the improvement of DaViTo. Improving software latency, providing more graphing options and tools, and inclusion of an effective training product are examples of important recommendations that would greatly improve usability.
37

Ke, Xian 1981. "A multi-tier framework for dynamic data collection, analysis, and visualization." Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/28416.

Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004.
Includes bibliographical references (leaves 52-53).
This thesis describes a framework for collecting, analyzing, and visualizing dynamic data, particularly data gathered through Web questionnaires. The framework addresses challenges such as promoting user participation, handling missing or invalid data, and streamlining the data interpretation process. Tools in the framework provide an intuitive way to build robust questionnaires on the Web and perform on-the-fly analysis and visualization of results. A novel 2.5-dimensional dynamic response-distribution visualization allows subjects to compare their results against others immediately after they have submitted their response, thereby encouraging active participation in ongoing research studies. Other modules offer the capability to quickly gain insight and discover patterns in user data. The framework has been implemented in a multi-tier architecture within an open-source, Java-based platform. It is incorporated into Risk Psychology Network, a research and educational project at MIT's Laboratory for Financial Engineering.
by Xian Ke.
M.Eng.
38

Reshef, David N. "VisuaLyzer : an approach for rapid visualization and analysis of epidemiological data." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/53135.

Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009.
Includes bibliographical references (leaves 112-113).
The ability to capture, store, and manage massive amounts of data is changing virtually every aspect of science, technology, and medicine. This new 'data age' calls for innovative methods to mine and interact with information. VisuaLyzer is a platform designed to identify and investigate meaningful relationships between variables within large datasets through rapid, dynamic, and intelligent data exploration. VisuaLyzer uses four key steps in its approach: 1. Data management: enabling rapid and robust loading, managing, combining, and altering of multiple databases using a customized database management system. 2. Exploratory data analysis: applying existing and novel statistics and machine learning algorithms to identify and quantify all potential associations among variables across datasets, in a model-independent manner. 3. Rapid, dynamic visualization: using novel methods for visualizing and understanding trends through intuitive, dynamic, real-time visualizations that allow for the simultaneous analysis of up to ten variables. 4. Intelligent hypothesis generation: using computer-identified correlations, together with human intuition gathered through interaction with visualizations, to intelligently and automatically generate hypotheses about data. VisuaLyzer's power to simultaneously analyze and visualize massive amounts of data has important applications in the realm of epidemiology, where there are many large complex datasets collected from around the world, and an important need to elicit potential disease-defining factors from within these datasets. Researchers can use VisuaLyzer to identify variables that may directly, or indirectly, influence disease emergence, characteristics, and interactions, representing a fundamental first step toward a new approach to data exploration. As a result, the CDC, the Clinton Foundation, and the Harvard School of Public Health have employed VisuaLyzer as a means of investigating the dynamics of disease transmission.
by David N. Reshef.
M.Eng.
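The exploratory-data-analysis step can be sketched as a scan over all variable pairs; here Spearman correlation stands in for the "existing and novel statistics" mentioned above, applied to a hypothetical epidemiological table:

```python
import pandas as pd
from itertools import combinations
from scipy.stats import spearmanr

def rank_associations(df: pd.DataFrame) -> pd.DataFrame:
    """Score every pair of numeric variables and rank by association
    strength; a stand-in for the platform's richer statistics."""
    rows = []
    for a, b in combinations(df.select_dtypes("number").columns, 2):
        pair = df[[a, b]].dropna()              # tolerate missing data
        rho, p = spearmanr(pair[a], pair[b])
        rows.append((a, b, rho, p))
    out = pd.DataFrame(rows, columns=["var1", "var2", "rho", "p"])
    return out.reindex(out.rho.abs().sort_values(ascending=False).index)

# Hypothetical table of epidemiological variables
df = pd.DataFrame({"age":   [20, 30, 40, 50, 60],
                   "bmi":   [21, 24, 26, 29, 31],
                   "cases": [5, 4, 6, 5, 4]})
print(rank_associations(df))
```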
39

Jagdish, Deepak. "IMMERSION : a platform for visualization and temporal analysis of email data." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/95606.

Abstract:
Thesis: S.M., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2014.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 75-76).
Visual narratives of our lives enable us to reflect upon our past relationships, collaborations and significant life events. Additionally, they can serve as digital archives, making it possible for others to access, learn from and reflect upon our life's trajectory long after we are gone. In this thesis, I propose and develop a web-based platform called Immersion, which reveals the network of relationships woven by a person over time as well as the significant events in their life. Using only metadata from a person's email history, Immersion creates a visual account of their life that they can interactively explore for self-reflection or share with others as a digital archive. In the first part of this thesis, I discuss the design, technical and privacy aspects of Immersion, lessons learnt from its large-scale deployment and the reactions it elicited from people. In the second part, I focus on the technical anatomy of a new feature of Immersion called Storyline: an interactive timeline of significant life events detected from a person's email metadata. This feature is inspired by feedback obtained from people after the initial launch of the platform.
by Deepak Jagdish.
S.M.
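The relationship-network construction from metadata alone can be sketched as follows, on hypothetical parsed headers; Immersion's actual weighting and event detection are more involved:

```python
from collections import Counter

# Hypothetical parsed headers: metadata only, no subjects or bodies
emails = [
    {"from": "alice@mit.edu", "to": ["bob@mit.edu"]},
    {"from": "bob@mit.edu",   "to": ["alice@mit.edu", "carol@mit.edu"]},
    {"from": "alice@mit.edu", "to": ["carol@mit.edu"]},
]

edges = Counter()
for msg in emails:
    for dst in msg["to"]:
        edges[frozenset((msg["from"], dst))] += 1   # undirected tie strength

for pair, weight in edges.most_common():
    print(" <-> ".join(sorted(pair)), weight)
```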
40

Singh, Shailendra. "Smart Meters Big Data : Behavioral Analytics via Incremental Data Mining and Visualization." Thesis, Université d'Ottawa / University of Ottawa, 2016. http://hdl.handle.net/10393/35244.

Full text of the source
Abstract:
The big data framework applied to smart meters offers an exceptional platform for data-driven forecasting and decision making to achieve sustainable energy efficiency. Buying in consumer confidence by respecting occupants' energy consumption behavior and preferences, so as to improve participation in various energy programs, is imperative but difficult to obtain. The key elements for understanding and predicting household energy consumption are the activities occupants perform, the appliances and the times at which they are used, and inter-appliance dependencies. This information can be extracted from the context-rich big data from smart meters, although this is challenging because: (1) it is not trivial to mine complex interdependencies between appliances from multiple concurrent data streams; (2) it is difficult to derive accurate relationships between interval-based events, where usage of multiple appliances persists; (3) continuous generation of energy consumption data can trigger changes in appliance-time and appliance-appliance associations. To overcome these challenges, we propose an unsupervised, progressive, incremental data mining technique using frequent pattern mining (appliance-appliance associations) and cluster analysis (appliance-time associations), coupled with a Bayesian-network-based prediction model. The proposed technique addresses the need to analyze temporal energy consumption patterns at the appliance level, which directly reflect consumers' behaviors and provide a basis for generalizing household energy models. Extensive experiments were performed on the model with real-world datasets and strong associations were discovered. The accuracy of the proposed model for predicting usage of multiple appliances outperformed a support vector machine at every stage, attaining accuracies of 81.65%, 85.90%, and 89.58% for 25%, 50%, and 75% of the training dataset size, respectively. Moreover, accuracies of 81.89%, 75.88%, 79.23%, 74.74%, and 72.81% were obtained for short-term (hours) and long-term (day, week, month, and season) energy consumption forecasts, respectively.
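The appliance-appliance association mining described above can be illustrated with a deliberately simplified sketch: pairwise co-occurrence counting over time windows stands in for full frequent pattern mining, and all names and thresholds below are hypothetical.

from itertools import combinations
from collections import Counter

def appliance_associations(windows, min_support=0.05):
    """windows: list of sets, each holding the appliances observed ON
    within one time window of disaggregated smart-meter data. Returns
    pair -> (support, confidence) for frequent co-occurrences. A
    simplified stand-in for full frequent-pattern mining."""
    n = len(windows)
    single = Counter()
    pair = Counter()
    for w in windows:
        single.update(w)
        pair.update(combinations(sorted(w), 2))
    out = {}
    for (a, b), c in pair.items():
        support = c / n
        if support >= min_support:
            out[(a, b)] = (support, c / single[a])  # conf(a -> b)
    return out

# Hypothetical windows: kettle and toaster co-occur at breakfast.
windows = [{"kettle", "toaster"}, {"kettle", "toaster"}, {"tv"}, {"kettle"}]
print(appliance_associations(windows, min_support=0.25))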
Styles: APA, Harvard, Vancouver, ISO, etc.
41

Shah, Dhaval Kashyap. "Impact of Visualization on Engineers – A Survey." Scholar Commons, 2016. http://scholarcommons.usf.edu/etd/6385.

Full text of the source
Abstract:
In recent years, there has been tremendous growth in data. A great deal of research and numerous technologies have been proposed and developed in the field of Visualization to cope with the associated data analytics. Despite these new technologies, people's capacity to perform data analysis has not kept pace with the demand. Past literature has hinted at various reasons behind this disparity. The purpose of this research is to examine the use of Visualization specifically in the field of engineering. We conducted the research with the help of a survey, identifying where shortcomings in Visualization education may exist. We conclude by asserting that there is a need to create awareness and formal education about Visualization for Engineers.
Styles: APA, Harvard, Vancouver, ISO, etc.
42

Johansson, Jimmy. "Efficient Information Visualization of Multivariate and Time-Varying Data." Doctoral thesis, Linköping : Department of Science and Technology, Linköping University, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-11643.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
43

Jansson, Mattias, and Jimmy Johansson. "Interactive Visualization of Statistical Data using Multidimensional Scaling Techniques." Thesis, Linköping University, Department of Science and Technology, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-1716.

Full text of the source
Abstract:

This study has been carried out in cooperation with Unilever and partly within the EC-funded project Smartdoc (IST-2000-28137).

In areas of statistics and image processing, both the amount of data and the dimensions are increasing rapidly, and an interactive visualization tool that lets the user perform real-time analysis can save valuable time. Real-time cropping and drill-down considerably facilitate the analysis process and yield more accurate decisions.

In the Smartdoc project, there has been a request for a component for smart filtering in multidimensional data sets. As the Smartdoc project aims to develop smart, interactive components for use on low-end systems, an implementation of the self-organizing map algorithm is used to propose which dimensions to visualize.

Together with Dr. Robert Treloar at Unilever, the SOM Visualizer - an application for interactive visualization and analysis of multidimensional data - has been developed. The analytical part of the application is based on Kohonen’s self-organizing map algorithm. In cooperation with the Smartdoc project, a component has been developed that is used for smart filtering in multidimensional data sets. Microsoft Visual Basic and components from the graphics library AVS OpenViz are used as development tools.
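For reference, the core of Kohonen's self-organizing map algorithm that the application builds on can be sketched in a few lines of Python with numpy (the thesis's implementation uses Microsoft Visual Basic and AVS OpenViz; the grid size, learning rate, and neighborhood schedule below are arbitrary illustrative choices):

import numpy as np

def train_som(data, grid=(8, 8), epochs=20, lr0=0.5, sigma0=3.0):
    """Minimal Kohonen self-organizing map: a grid of weight vectors
    is pulled toward the data, with a neighborhood that shrinks over
    time. Returns the trained (rows, cols, dim) weight array."""
    rng = np.random.default_rng(0)
    rows, cols = grid
    w = rng.random((rows, cols, data.shape[1]))
    ii, jj = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)
        sigma = sigma0 * (1 - t / epochs) + 1e-3
        for x in rng.permutation(data):
            # find the best-matching unit (BMU)
            d = ((w - x) ** 2).sum(axis=2)
            bi, bj = np.unravel_index(d.argmin(), d.shape)
            # pull the BMU's Gaussian neighborhood toward the sample
            h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * sigma**2))
            w += lr * h[..., None] * (x - w)
    return w

som = train_som(np.random.rand(500, 4))  # 500 samples, 4 dimensions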

Styles: APA, Harvard, Vancouver, ISO, etc.
44

Ribler, Randy L. "Visualizing Categorical Time Series Data with Applications to Computer and Communications Network Traces." Diss., Virginia Tech, 1997. http://hdl.handle.net/10919/30314.

Full text of the source
Abstract:
Visualization tools allow scientists to comprehend very large data sets and to discover relationships which are otherwise difficult to detect. Unfortunately, not all types of data can be visualized easily using existing tools. In particular, long sequences of nonnumeric data cannot be visualized adequately. Examples of this type of data include trace files of computer performance information, the nucleotides in a genetic sequence, a record of stocks traded over a period of years, and the sequence of words in this document. The term categorical time series is defined and used to describe this family of data. When visualizations designed for numerical time series are applied to categorical time series, the distortions which result from the arbitrary conversion of unordered categorical values to totally ordered numerical values can be profound. Examples of this phenomenon are presented and explained. Several new, general-purpose techniques for visualizing categorical time series data have been developed as part of this work and have been incorporated into the Chitra performance analysis and visualization system. All of these new visualizations can be produced in O(n) time. The new visualizations for categorical time series provide general-purpose techniques for visualizing aspects of categorical data which are commonly of interest. These include periodicity, stationarity, cross-correlation, autocorrelation, and the detection of recurring patterns. The effective use of these visualizations is demonstrated in a number of application domains, including performance analysis, World Wide Web traffic analysis, network routing simulations, document comparison, pattern detection, and the analysis of the performance of genetic algorithms.
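One simple way to sidestep the ordering distortion described above is to map categories to colors rather than to positions on a numeric axis. The sketch below is an illustration of that idea, not one of Chitra's actual displays; it renders a categorical time series as a strip of colored cells in O(n) time.

import matplotlib.pyplot as plt
import matplotlib.patches as mpatches

PALETTE = ["tab:blue", "tab:orange", "tab:green", "tab:red",
           "tab:purple", "tab:brown", "tab:pink", "tab:gray",
           "tab:olive", "tab:cyan"]

def plot_categorical_series(series, ax=None):
    """Render a categorical time series as a strip of colored cells,
    so no artificial total order is imposed on the categories."""
    ax = ax or plt.gca()
    cats = sorted(set(series))
    color = {c: PALETTE[i % len(PALETTE)] for i, c in enumerate(cats)}
    ax.bar(range(len(series)), [1] * len(series), width=1.0,
           color=[color[v] for v in series])
    ax.set_yticks([])
    ax.set_xlabel("time")
    ax.legend(handles=[mpatches.Patch(color=color[c], label=c)
                       for c in cats])

plot_categorical_series(list("AABBCABBBACCCA"))  # toy event trace
plt.show()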
Ph. D.
Styles: APA, Harvard, Vancouver, ISO, etc.
45

Chakraborty, Soham. "DATA ASSIMILATION AND VISUALIZATION FOR ENSEMBLE WILDLAND FIRE MODELS." UKnowledge, 2008. http://uknowledge.uky.edu/gradschool_theses/529.

Full text of the source
Abstract:
This thesis describes an observation function for a dynamic, data-driven application system designed to produce short-range forecasts of the behavior of a wildland fire. The thesis presents an overview of the atmosphere-fire model, which models the complex interactions between the fire and the surrounding weather, and of the data assimilation module, which is responsible for assimilating sensor information into the model. The observation function plays an important role in data assimilation, as it is used to estimate the model variables at the sensor locations. Also described is the implementation of a portable and user-friendly visualization tool that displays the locations of wildfires in the Google Earth virtual globe.
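The role of an observation function can be sketched as follows: it maps the gridded model state to the quantities a sensor would report at its location. This is a minimal illustration assuming bilinear interpolation on a single 2-D field; the thesis's operator works on the coupled atmosphere-fire state, and all names below are hypothetical.

import numpy as np

def observe(field, xs, ys):
    """Observation function h(x): estimate a gridded model variable
    (e.g., fire temperature) at off-grid sensor locations using
    bilinear interpolation. `field` is a 2-D array indexed [y, x];
    `xs`, `ys` are sensor coordinates in grid units, assumed to lie
    strictly inside the grid."""
    estimates = []
    for x, y in zip(xs, ys):
        i, j = int(np.floor(y)), int(np.floor(x))
        fy, fx = y - i, x - j
        estimates.append(
            field[i, j] * (1 - fx) * (1 - fy)
            + field[i, j + 1] * fx * (1 - fy)
            + field[i + 1, j] * (1 - fx) * fy
            + field[i + 1, j + 1] * fx * fy
        )
    return np.array(estimates)

# The assimilation step compares observe(model_state, xs, ys) with
# the actual sensor readings to correct the model state.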
Styles: APA, Harvard, Vancouver, ISO, etc.
46

Supiratana, Panon. "Graphical visualization and analysis tool of data entities in embedded systems engineering." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-10428.

Full text of the source
Abstract:
Several decades ago, computer control systems known as Electronic Control Units (ECUs) were introduced to the automotive industry. Mechanical hardware units have since been increasingly replaced by computer-controlled systems that manage complex tasks such as airbags, ABS, cruise control, and so forth. This has led to a massive increase in software functions and data, all of which need to be managed. There are several tools and techniques for this; however, current tools and techniques for developing real-time embedded systems mostly focus on software functions, not data. These tools do not fully support developers in managing run-time data at design time. Furthermore, current tools do not focus on visualizing the relationships among data items in the system. This thesis is part of previous work, the Data Entity approach, which puts data management at the top level of the development life cycle. Our main contribution is a tool that introduces a new way to intuitively explore the run-time data items, produced and consumed by software components, that are used throughout the system. As a consequence, developers achieve a better understanding of how data items are used in the software system. This approach enables developers and system architects to avoid redundant data as well as to find and remove stale data from the system. The tool also allows us to analyze conflicts over run-time data items that might occur between software components at design time.
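The producer/consumer relationships such a tool visualizes can be modeled as a directed graph between components and data items; stale data then shows up as items nobody consumes. The sketch below illustrates that idea only, not the thesis's tool, and the component names are hypothetical.

import networkx as nx

def build_data_graph(components):
    """components: mapping name -> dict with 'produces' and 'consumes'
    lists of data-item names. Returns a directed bipartite graph:
    component -> item (produces), item -> component (consumes)."""
    g = nx.DiGraph()
    for comp, io in components.items():
        for item in io.get("produces", []):
            g.add_edge(comp, item)
        for item in io.get("consumes", []):
            g.add_edge(item, comp)
    return g

components = {  # hypothetical ECU software components
    "WheelSpeedSensor": {"produces": ["wheel_speed"]},
    "ABSController": {"consumes": ["wheel_speed"], "produces": ["brake_cmd"]},
    "BrakeActuator": {"consumes": ["brake_cmd"]},
    "Logger": {"produces": ["debug_trace"]},  # produced, never consumed
}
g = build_data_graph(components)
produced = {i for io in components.values() for i in io.get("produces", [])}
stale = [i for i in produced if g.out_degree(i) == 0]
print("stale data items:", stale)  # -> ['debug_trace']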
A Data-Entity Approach for Component-Based Real-Time Embedded Systems Development
Styles: APA, Harvard, Vancouver, ISO, etc.
47

Akhavian, Reza. "A Framework for Process Data Collection, Analysis, and Visualization in Construction Projects." Master's thesis, University of Central Florida, 2012. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/5092.

Full text of the source
Abstract:
Automated data collection, simulation, and visualization can substantially enhance the design, analysis, planning, and control of many engineering processes. In particular, managing processes that are dynamic in nature can significantly benefit from such techniques. Construction projects are good examples of such processes, where a variety of equipment and resources constantly interact inside an evolving environment. Managing such settings requires a platform capable of providing decision-makers with updated information about the status of project entities and of assisting site personnel in making critical decisions under uncertainty. To this end, the current practice of using historical data or expert judgments as static inputs to create empirical formulations, bar chart schedules, and simulation networks to study project activities, resource operations, and the environment in which a project takes place does not offer reliable results. The presented research investigates the requirements and applicability of a data-driven modeling framework capable of collecting and analyzing real-time field data from construction equipment. In the developed data collection scheme, a stream of real-time data is continuously transferred to a data analysis module, which calculates the input parameters required to create dynamic 3D visualizations of ongoing engineering activities and to update the contents of a discrete event simulation (DES) model representing the real engineering process. The generated data-driven simulation model is an effective tool for projecting future progress based on existing performance. Ultimately, the developed framework can be used by project decision-makers for short-term project planning and control, since the resulting simulation and visualization are completely based on the latest status of project entities.
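The data-driven DES idea can be sketched with the simpy library: activity durations are sampled from the latest field observations instead of static expert estimates. This is a minimal illustration, not the framework described above; the activity names and durations are hypothetical.

import random
import simpy

def truck_cycle(env, name, durations, log):
    """One earthmoving truck: load, haul, dump, return. Each phase
    duration is sampled from recent field observations (data-driven)
    rather than from a fixed expert estimate."""
    while True:
        for phase in ("load", "haul", "dump", "return"):
            yield env.timeout(random.choice(durations[phase]))
        log.append((name, env.now))  # one completed cycle

# Hypothetical observed durations (minutes) streamed from the field.
durations = {"load": [3.1, 2.8, 3.4], "haul": [11.0, 12.5],
             "dump": [1.0, 1.2], "return": [9.0, 10.1]}
log = []
env = simpy.Environment()
for i in range(3):
    env.process(truck_cycle(env, f"truck-{i}", durations, log))
env.run(until=8 * 60)  # simulate one 8-hour shift
print(f"{len(log)} cycles completed")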
ID: 031001404; System requirements: World Wide Web browser and PDF reader; Mode of access: World Wide Web; Title from PDF title page (viewed June 10, 2013); Thesis (M.S.C.E.)--University of Central Florida, 2012; Includes bibliographical references (p. 98-105).
M.S.C.E.
Masters
Civil, Environmental, and Construction Engineering
Engineering and Computer Science
Civil Engineering
Styles: APA, Harvard, Vancouver, ISO, etc.
48

Sugaya, Andrew (Andrew Kiminari). "iDiary : compression, analysis, and visualization of GPS data to predict user activities." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/77009.

Full text of the source
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 91-93).
"What did you do today?" When we hear this question, we try to think back to our day's activities and locations. When we end up drawing a blank on the details of our day, we reply with a simple, "not much." Remembering our daily activities is a difficult task. For some, a manual diary works. For the rest of us, however, we don't have the time to (or simply don't want to) manually enter diary entries. The goal of this thesis is to create a system that automatically generates answers to questions about a user's history of activities and locations. This system uses a user's GPS data to identify locations that have been visited. Activities and terms associated with these locations are found using latent semantic analysis and then presented as a searchable diary. One of the big challenges of working with GPS data is the large amount of data that comes with it, which becomes difficult to store and analyze. This thesis solves this challenge by using compression algorithms to first reduce the amount of data. It is important that this compression does not reduce the fidelity of the information in the data or significantly alter the results of any analyses that may be performed on this data. After this compression, the system analyzes the reduced dataset to answer queries about the user's history. This thesis describes in detail the different components that come together to form this system. These components include the server architecture, the algorithms, the phone application for tracking GPS locations, the flow of data in the system, and the user interfaces for visualizing the results of the system. This thesis also implements this system and performs several experiments. The results show that it is possible to develop a system that automatically generates answers to queries about a user's history.
by Andrew Sugaya.
M.Eng.
Styles: APA, Harvard, Vancouver, ISO, etc.
49

Nguyen, Neal Huynh. "Logging, Visualization, and Analysis of Network and Power Data of IoT Devices." DigitalCommons@CalPoly, 2018. https://digitalcommons.calpoly.edu/theses/1990.

Full text of the source
Abstract:
There are approximately 23.14 billion IoT (Internet of Things) devices currently in use worldwide. This number is projected to grow to over 75 billion by 2025. Despite their ubiquity, little is known about the security and privacy implications of IoT devices. Several large-scale attacks against IoT devices have already been recorded. To help address this knowledge gap, we have collected a year's worth of network traffic and power data from 16 common IoT devices. From this data, we show that we can identify different smart speakers, such as the Echo Dot, by analyzing one minute of power data on a shared power line.
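The device-identification result can be illustrated with a sketch: summarize each one-minute power trace into a few statistics and train a standard classifier on them. The feature set is hypothetical (not the thesis's), the variables `traces`, `labels`, and `new_trace` are assumed to be provided, and scikit-learn supplies the classifier.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def power_features(trace):
    """Summarize a one-minute power trace (one sample per second)
    into simple statistics -- a hypothetical feature set."""
    t = np.asarray(trace, dtype=float)
    return [t.mean(), t.std(), t.max(), t.min(),
            np.abs(np.diff(t)).mean()]  # mean absolute change

# Hypothetical labeled traces: 60 power readings per example.
X = [power_features(tr) for tr in traces]        # `traces` assumed
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, labels)                               # `labels` assumed
print(clf.predict([power_features(new_trace)]))  # e.g., "Echo Dot"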
Styles: APA, Harvard, Vancouver, ISO, etc.
50

Gou, Zaiyong. "Scientific visualization and exploratory data analysis of a large spatial flow dataset." The Ohio State University, 1993. http://rave.ohiolink.edu/etdc/view?acc_num=osu1284991801.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.