Dissertations / Theses on the topic 'Mapping of data usage'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 dissertations / theses for your research on the topic 'Mapping of data usage.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses across a wide variety of disciplines and organise your bibliography correctly.

1

Ramos Milis, Guilherme. "Apport des mesures des compteurs Linky pour la connaissance des charges du réseau de distribution." Electronic Thesis or Diss., Université Grenoble Alpes, 2024. http://www.theses.fr/2024GRALT021.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The distribution grid occupies a central position in the energy transition. This results in two key changes for the network: an (r)evolution of uses and a digital (r)evolution. In this context, this thesis begins by creating a new mapping of the uses of data from smart meters. Building upon this mapping, the thesis delves into two central themes of great importance in the context of the energy transition. The first is an analysis of the diversity factor of Low Voltage (LV) loads and its estimation. The second involves estimating the load curves of customers on the LV grid using an innovative method.
2

Suljevic, Benjamin. "Mapping HW resource usage towards SW performance." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-44176.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
With software applications increasing in complexity, the description of hardware is becoming increasingly relevant. To ensure the quality of service for specific applications, it is imperative to have insight into hardware resources. Cache memory stores data closer to the processor for quick access and improves the quality of service of applications. The description of cache memory usually consists of the size of the different cache levels, set associativity, or line size, but software applications would benefit from a more detailed model of cache memory. In this thesis, we offer a way of describing the behavior of cache memory which benefits software performance. Several performance events are tested, including L1 cache misses, L2 cache misses, and L3 cache misses. With the collected information, we develop performance models of cache memory behavior. Goodness of fit is tested for these models, and they are used to predict the behavior of the cache memory during future runs of the same application. Our experiments show that L1 cache misses can be modeled to predict future runs. The L2 cache miss model is less accurate but still usable for predictions, and the L3 cache miss model is the least accurate and is not feasible for predicting the behavior of future runs.
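The modelling step described in this abstract lends itself to a small illustration. The sketch below fits a least-squares model to per-interval L1-miss counts from one run and scores its goodness of fit on a later run; the counter values and the linear model are hypothetical stand-ins, not the thesis's actual data or models:

```python
# Illustrative sketch: fit a simple linear model to per-interval L1-miss
# counts sampled from one run, then check how well it predicts a later run.
import numpy as np

def fit_miss_model(intervals, misses):
    """Least-squares fit of miss counts against execution intervals."""
    coeffs = np.polyfit(intervals, misses, deg=1)  # slope, intercept
    return np.poly1d(coeffs)

def r_squared(model, intervals, misses):
    """Goodness of fit: coefficient of determination."""
    residuals = misses - model(intervals)
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((misses - np.mean(misses)) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical performance-counter samples from two runs of one application.
run1_t = np.arange(10)
run1_miss = np.array([120, 130, 128, 140, 150, 155, 160, 170, 168, 180])
run2_t = np.arange(10)
run2_miss = np.array([118, 127, 131, 138, 149, 158, 162, 166, 171, 182])

model = fit_miss_model(run1_t, run1_miss)
print("predictive fit on second run: R^2 =", r_squared(model, run2_t, run2_miss))
```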
3

Kamuhanda, Dany. "Visualising M-learning system usage data." Thesis, Nelson Mandela Metropolitan University, 2015. http://hdl.handle.net/10948/11015.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Data storage is an important practice for organisations that want to track their progress. The evolution of data storage technologies from manual methods of storing data on paper or in spreadsheets, to the automated methods of using computers to automatically log data into databases or text files has brought an amount of data that is beyond the level of human interpretation and comprehension. One way of addressing this issue of interpreting large amounts of data is data visualisation, which aims to convert abstract data into images that are easy to interpret. However, people often have difficulty in selecting an appropriate visualisation tool and visualisation techniques that can effectively visualise their data. This research proposes the processes that can be followed to effectively visualise data. Data logged from a mobile learning system is visualised as a proof of concept to show how the proposed processes can be followed during data visualisation. These processes are summarised into a model that consists of three main components: the data, the visualisation techniques and the visualisation tool. There are two main contributions in this research: the model to visualise mobile learning usage data and the visualisation of the usage data logged from a mobile learning system. The mobile learning system usage data was visualised to demonstrate how students used the mobile learning system. Visualisation of the usage data helped to convert the data into images (charts and graphs) that were easy to interpret. The evaluation results indicated that the proposed process and resulting visualisation techniques and tool assisted users in effectively and efficiently interpreting large volumes of mobile learning system usage data.
4

Romaniuk, Helena. "Analysis of product usage panel data." Thesis, University of Southampton, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.326798.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Densham, Martin. "Bathymetric mapping with QuickBird data /." Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2005. http://library.nps.navy.mil/uhtbin/hyperion/05Sep%5FDensham.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (M.S. in Physical Oceanography)--Naval Postgraduate School, September 2005.
Thesis Advisor(s): Philip A. Durkee, Edward B. Thornton. Includes bibliographical references (p. 43-44). Also available online.
6

Currie, Sheila. "Data classification for choropleth mapping." Thesis, University of Ottawa (Canada), 1989. http://hdl.handle.net/10393/5725.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Densham, Martin P. J. "Bathymetric mapping with QuickBird data." Thesis, Monterey, California. Naval Postgraduate School, 2005. http://hdl.handle.net/10945/2121.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Two algorithms are used to determine bathymetry in the littoral region using QuickBird multispectral satellite observations. The algorithms determine water-leaving radiance and convert this to water depth values. The first algorithm uses a ratio of two wavebands and the second uses the sum of several wavebands. Relative bathymetric errors are determined for the clear water of Looe Key (USA) and the turbid water of Plymouth Sound (UK). Bathymetric measurements from LIDAR and chart data are compared to derived depths to assess their accuracies. An amended version of the ratio method is proposed for use in turbid water to improve accuracy. The results show that the standard ratio and turbidity algorithms have a relative error of 11.7% and 16.5% respectively in clear water. In turbid water the average error of the turbidity algorithm is 11.6% and the amended ratio algorithm average error is 13%.
Royal Navy
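The band-ratio algorithm summarised above has a well-known general log-ratio form (Stumpf et al.); the sketch below illustrates that form, with hypothetical radiances and hypothetical calibration constants n, m1 and m0 rather than the values used in the thesis:

```python
# Minimal sketch of the band-ratio idea: depth is estimated from the log
# ratio of two water-leaving radiance bands, calibrated against known depths.
import numpy as np

def ratio_depth(blue, green, n=1000.0, m1=50.0, m0=40.0):
    """Estimate depth from the log ratio of two water-leaving radiance bands."""
    ratio = np.log(n * blue) / np.log(n * green)
    return m1 * ratio - m0

# Hypothetical water-leaving radiances for three pixels; green attenuates
# faster with depth, so the ratio (and the estimated depth) grows.
blue = np.array([0.012, 0.010, 0.008])
green = np.array([0.020, 0.014, 0.009])
print(ratio_depth(blue, green))  # relative depth estimates, metres
```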
8

Fletcher, George H. L. "On the data mapping problem." [Bloomington, Ind.] : Indiana University, 2007. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3276692.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Olsson, Marcus. "Data Warehouse : An Outlook of Current Usage of External Data." Thesis, University of Skövde, Department of Computer Science, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-682.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:

A data warehouse is a data collection that integrates large amounts of data from several sources, with the aim to support the decision-making process in a company. Data could be acquired from internal sources within the own organization, as well as from external sources outside the organization.

The comprehensive aim of this dissertation is to examine the current usage of external data and its sources for integration into DWs, in order to give users of a DW the best possible foundation for decision-making. In order to investigate this problem, we have conducted an interview study with DW developers.

Based on the interview study, the result shows that it is relative common to integrate external data into DWs. The study also identifies different types of external data that are integrated, and what external sources it is common to acquire data from. In addition, opportunities and pitfalls of integrating external data have also been highlighted.

10

Onojeghuo, Alex Okiemute. "Reedbed mapping using remotely sensed data." Thesis, Lancaster University, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.577547.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In the UK, reedbeds dominated by Phragmites australis have been identified as a priority habitat by most regional Biodiversity Partnerships. Information on the current distribution and quality of reedbed sites across the UK is lacking, yet such information is vital in developing suitable management plans for the conservation and expansion of this threatened habitat. The focus of this thesis is to develop a suitable methodology for accurately mapping the distribution and assessing the biophysical properties of reedbed habitats using remotely sensed data. Three study sites situated in the North West region of the UK were used: Leighton Moss nature reserve in Lancashire, and the River Leven and Esthwaite Water situated in Cumbria. The remotely sensed data used in this study included high-resolution satellite and airborne imagery and ground-based spectral data. Results of the first analytical chapter (chapter 3) demonstrated the potential of using high-resolution QuickBird multispectral satellite imagery to derive accurate maps of reedbeds through appropriate analysis of image texture, careful selection of input bands, spatial degradation of input bands, selection of a suitable classification algorithm and post-classification refinement using terrain data. Results of the second analytical chapter (chapter 4) demonstrated the benefits of using multi-seasonal images over single-date images and the effectiveness of incorporating spectral bands with textural measures. Through careful selection of an appropriate classification technique, the input image datasets could be used to generate optimal reedbed maps. The results of the multi-seasonal reedbed mapping experiment conducted using QuickBird imagery were the basis for the field spectrometry experiment, which aimed at monitoring and understanding variations in the spectral reflectance and biophysical properties of reedbed canopies throughout the seasonal phenological cycle and at identifying the optimal spectral indices for quantifying biophysical properties (chapter 5). The results of the experiment indicated that the narrow-band derived Difference Vegetation Index (DVI) and Renormalised Difference Vegetation Index (RDVI) provided the most accurate estimates of the leaf area index (LAI) for reedbed canopies (r = 0.77 and 0.72 respectively). Having observed the limitations of accurately deriving canopy heights from the experiments conducted in chapter 5, the potential for quantifying canopy biophysical properties from light detection and ranging (LiDAR) data (elevation and intensity) was investigated in chapter 6. The study demonstrated some of the potential and limitations of using LiDAR data for characterising reedbed canopies. A canopy height model (CHM) was generated by subtracting the Ordnance Survey (OS) derived digital terrain model (DTM) from the LiDAR-derived digital surface model (DSM). The density of first-return points was high for reedbeds, and these were able to generate an accurate CHM when validated against field measurements. LiDAR intensity data displayed specular reflection along the centre of the flight line over reedbeds and water bodies, but not for other land cover/vegetation types. The LiDAR intensity data showed potential for containing considerable information on reedbed canopy structure and pattern that is valuable from an ecological perspective.
Results of the final analytical chapter (chapter 7) demonstrated the value of combining appropriately compressed hyperspectral imagery with LiDAR data for the effective mapping of reedbed habitats. The most effective image compression technique was spectrally segmented principal component analysis (SSPCA), which had the optimal combination of reedbed accuracy and processing efficiency. A substantial improvement in the accuracy of reedbed delineation was achieved when a mask, generated by applying a 3 m threshold to the LiDAR-derived CHM, was used to filter the reedbed map derived from the optimal SSPCA image dataset. Based on the findings of chapters 5 and 6, the hyperspectral and LiDAR data were used to derive LAI and canopy height (CH) maps of reedbeds respectively, two vital biophysical measures needed in estimating the quality of reedbed canopies. Hence, this study is a step forward in utilizing the spectral, spatial and structural information contained in remotely sensed data for the mapping of reedbed quantity and quality. This research has demonstrated the potential of using remotely sensed data, complemented with adequate ground-based information, for mapping the spatial extent and quality of reedbed canopies at three specific sites across the North West region of the UK. Based on the success with a specific habitat type, suggestions are made to further expand these techniques to explore fine-scale mapping of more habitats using remotely sensed data of high spatial resolution. Hence, two major studies are recommended for future work, namely (1) updating the Phase 1 habitat survey map using remote sensing techniques, and (2) the integration of high spatial resolution satellite imagery (hyperspectral or QuickBird) and LiDAR data for vegetation mapping and deriving biophysical measures.
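As a concrete illustration of the quantities named in this abstract, the sketch below computes the DVI and RDVI indices and a CHM as DSM minus DTM; all input arrays are hypothetical reflectance bands and elevation rasters, not the thesis's data:

```python
# Sketch of the two vegetation indices reported as best LAI predictors and
# of the CHM construction (surface model minus terrain model).
import numpy as np

def dvi(nir, red):
    """Difference Vegetation Index."""
    return nir - red

def rdvi(nir, red):
    """Renormalised Difference Vegetation Index."""
    return (nir - red) / np.sqrt(nir + red)

def canopy_height_model(dsm, dtm):
    """CHM = LiDAR-derived surface model minus the terrain model."""
    return dsm - dtm

nir = np.array([0.45, 0.50, 0.38])
red = np.array([0.08, 0.07, 0.10])
print(dvi(nir, red), rdvi(nir, red))

dsm = np.array([[12.3, 12.8], [11.9, 12.1]])
dtm = np.array([[10.0, 10.1], [10.0, 10.0]])
print(canopy_height_model(dsm, dtm))  # canopy heights in metres
```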
11

Najmuddin, Ilyas Juzer. "Austin Fracture mapping using frequency data derived from seismic data." Texas A&M University, 2003. http://hdl.handle.net/1969/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Winblad, Emanuel. "Visualization of web site visit and usage data." Thesis, Linköpings universitet, Medie- och Informationsteknik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-110576.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This report documents the work and results of a master's thesis in Media Technology that has been carried out at the Department of Science and Technology at Linköping University with the support of Sports Editing Sweden AB (SES). Its aim is to create a solution which aids the users of SES' web CMS products in gaining insight into web site visit and usage statistics. The resulting solution is the concept and initial version of a web based service. This service has been developed through an agile process with user centered design in mind and provides a graphical user interface which makes high use of visualizations to achieve the project goal.
13

Wang, Guilian. "Schema mapping for data transformation and integration." Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2006. http://wwwlib.umi.com/cr/ucsd/fullcit?p3211371.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (Ph. D.)--University of California, San Diego, 2006.
Title from first page of PDF file (viewed June 7, 2006). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (p. 135-142).
14

Steutel, Donovan. "Efficient Materials Mapping Using Hyperspectral Imaging Data." Thesis, University of Hawaii at Manoa, 2002. http://hdl.handle.net/10125/6962.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Hyperspectral images contain large amounts of spectral data. An efficient material identification (EMI) process can incorporate methods which reduce the amount of spectra analyzed in a hyperspectral image and interpret the image quickly while still maximizing the quality of interpretation of the image. The purposes of this study are to implement and evaluate an EMI process, determine ways to improve the process, and to implement and test those improvements. An EMI process using spectral endmember detection, linear unmixing, and automated spectral endmember material identification by spectral feature matching is used to analyze a visible near-infrared hyperspectral image of Kaneohe Bay, Hawaiʻi, a region containing a complex mixture of natural and manmade elements. The EMI technique is successfully applied. Evaluation of the resultant interpretation of the hyperspectral image reveals shortcomings in the EMI process in endmember detection and material identification. Particularly, some detected endmembers are spectral targets useful only for mapping a small portion of the image, and the library material database of the feature matching algorithm is insufficiently matched to materials in the Kaneohe Bay scene. Two improvements to the spectral endmember detection technique used in the EMI process are proposed: target detection and masking and more than one evaluation of image pixels as potential spectral endmembers. These proposed improvements are incorporated into a subsequent analysis of the Kaneohe Bay scene, resulting in an improved material analysis of the scene. The improvement is primarily due to the incorporation of target detection only. The EMI process is also applied to a multispectral image of the Aristarchus Plateau on the Moon. Target masking is incorporated into endmember detection, and a different material identification algorithm, one based on radiative transfer theory, is used. Highly detailed maps of lunar mineralogy and rock types are produced which are consistent with previous spectral analyses of the Aristarchus region. These maps add to previous findings in detail and specificity of location and quantity of mineral and rock distributions on the Aristarchus Plateau.
xvi, 117 leaves
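The linear-unmixing step named in the abstract can be illustrated with a small non-negative least-squares sketch; the endmember spectra and the mixed pixel below are hypothetical, and real pipelines would unmix every pixel of the image:

```python
# Sketch of linear spectral unmixing: each pixel spectrum is modelled as a
# non-negative combination of endmember spectra.
import numpy as np
from scipy.optimize import nnls

# Columns are three endmember spectra sampled at four wavelengths.
endmembers = np.array([
    [0.10, 0.60, 0.20],
    [0.15, 0.55, 0.25],
    [0.30, 0.40, 0.30],
    [0.45, 0.20, 0.40],
])

pixel = np.array([0.30, 0.33, 0.34, 0.33])  # observed mixed spectrum

abundances, residual = nnls(endmembers, pixel)
abundances /= abundances.sum()              # normalise to fractional cover
print("endmember fractions:", abundances, "residual:", residual)
```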
15

Mackin, Stephen. "Mineralogical mapping using airborne imaging spectrometry data." Thesis, Durham University, 1989. http://etheses.dur.ac.uk/6306/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
With the development of airborne, high spectral resolution imaging spectrometers, we now have a tool that allows us to examine surface materials with enough spectral detail to identify them. Identification is based on the analysis of the position and shape of absorption features in the material spectra in the visible and infrared (0.4 µm to 2.5 µm). These absorption features are caused by the interaction of Electro-Magnetic Radiation (EMR) with the atoms and molecules of the surface material. Airborne data were collected to evaluate these new high spectral resolution systems. The data quality was assessed prior to processing and analysis, and several problems were noted for each data set (striping, geometric distortion, etc.). These problems required some preparation of the data. After data preparation, data processing methods were evaluated, concentrating primarily on the log residuals and hull quotients methods. The processing steps convert the data to a form suitable for analysis. The data was analysed using the Spectral Analysis Manager (SPAM) package, developed by JPL. Two imaging spectrometers were evaluated. The AIS-1 instrument was flown over an area in Queensland, Australia. Ground data and laboratory work confirmed the presence of anomalous areas detected by the instrument. The data quality was poor and only basic classification of the data was possible. Anomalies were classed as "GREEN VEGETATION", "DRY VEGETATION", "CLAY" or "CARBONATE" based on the position of the major absorptions observed. The second instrument, the GER-II, was flown over an area of Nevada, USA. Ground data and laboratory work confirmed the presence of the anomalies detected by the instrument. The data quality was somewhat better. Identification of sericite, dolomite and illite was possible. However, most of the area could still only be classed in the broad groupings listed above. To conclude, the effectiveness of identification is limited to a large degree by the poor data quality. If the data quality can be improved, techniques can be applied to automatically locate and identify material spectra from the airborne data alone.
16

Kaufman, Alan P. (Alan Philip). "Data and algorithms for genomic physical mapping." Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/36508.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1994.
Includes bibliographical references (leaves 116-122).
by Alan P. Kaufman.
M.S.
17

Cai, Xuemei. "A Lexical Comparison Using Word Embedding Mapping from an Academic Word Usage Perspective." Thesis, Uppsala universitet, Institutionen för lingvistik och filologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-425266.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis applies the word embedding mapping approach to make a lexical comparison from an academic word usage perspective. We aim to demonstrate the differences in academic word usage between a corpus of student writings and a corpus of academic English, as well as between a corpus of student writings and social media texts. The Vecmap mapping algorithm, commonly used in solving cross-language mapping problems, was used to map the academic English vector space and the social media text vector space into the common student writing vector space, to facilitate the comparison of word representations from different corpora and to visualize the comparison results. The average distance was defined as a measure of the word usage differences of 420 typical academic words between each pair of corpora, and principal component analysis was applied to visualize the differences. A rank-biased overlap approach was adopted to evaluate the results of the proposed approach. The experimental results show that the usage of academic words in the student writings corpus is more similar to the academic English corpus than to the social media text corpus.
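The core of a Vecmap-style supervised alignment is an orthogonal (Procrustes) mapping between embedding spaces, after which per-word distances can be averaged. The sketch below illustrates that pipeline on random stand-in vectors; the 50-dimensional vectors and function names are assumptions for illustration, not the thesis's actual setup:

```python
# Sketch: align one embedding space to another with an orthogonal map, then
# measure the average per-word cosine distance between the aligned spaces.
import numpy as np

def orthogonal_map(src, trg):
    """Orthogonal W minimising ||src @ W - trg||_F (Procrustes solution)."""
    u, _, vt = np.linalg.svd(src.T @ trg)
    return u @ vt

def avg_cosine_distance(a, b):
    """Mean cosine distance between row-aligned word vectors."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return float(np.mean(1.0 - np.sum(a * b, axis=1)))

rng = np.random.default_rng(0)
student = rng.normal(size=(420, 50))    # stand-in student-writing vectors
academic = rng.normal(size=(420, 50))   # stand-in academic-English vectors

W = orthogonal_map(academic, student)   # map academic space into student space
print("average distance:", avg_cosine_distance(academic @ W, student))
```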
18

Bayir, Murat Ali. "A New Reactive Method For Processing Web Usage Data." Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/12607323/index.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In this thesis, a new reactive session reconstruction method, 'Smart-SRA', is introduced. Web usage mining is a type of web mining which exploits data mining techniques to discover valuable information from the navigations of Web users. As in classical data mining, data processing and pattern discovery are the main issues in web usage mining. The first phase of web usage mining is the data processing phase, including session reconstruction. Session reconstruction is the most important task of web usage mining since it directly and significantly affects the quality of the frequent patterns extracted at the final step. Session reconstruction methods can be classified into two categories, namely 'reactive' and 'proactive', with respect to the data source and the data processing time. If the user requests are processed after the server handles them, the technique is called 'reactive', while in 'proactive' strategies this processing occurs during the interactive browsing of the web site. Smart-SRA is a reactive session reconstruction technique which uses web log data and the site topology. In order to compare Smart-SRA with previous reactive methods, a web agent simulator has been developed. Our agent simulator models the behavior of web users and generates web user navigations as well as the log data kept by the web server. In this way, the actual user sessions are known and the success of different techniques can be compared. In this thesis, it is shown that the sessions generated by Smart-SRA are more accurate than the sessions constructed by previous heuristics.
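For context, the sketch below implements one of the simple time-oriented reactive heuristics that session reconstruction methods like Smart-SRA are compared against (Smart-SRA itself additionally exploits the site topology, which is not shown); the log entries and the 30-minute gap are illustrative assumptions:

```python
# Minimal reactive session reconstruction: split each user's requests into
# sessions whenever the gap between consecutive requests exceeds a threshold.
from collections import defaultdict

SESSION_GAP = 30 * 60  # seconds; a commonly used session threshold

def reconstruct_sessions(log):
    """log: iterable of (user, timestamp, page) tuples from a server log."""
    by_user = defaultdict(list)
    for user, timestamp, page in sorted(log, key=lambda e: (e[0], e[1])):
        sessions = by_user[user]
        if sessions and timestamp - sessions[-1][-1][0] <= SESSION_GAP:
            sessions[-1].append((timestamp, page))   # continue session
        else:
            sessions.append([(timestamp, page)])     # start a new session
    return by_user

log = [("u1", 0, "/"), ("u1", 120, "/a"), ("u1", 4000, "/b"), ("u2", 50, "/")]
for user, sessions in reconstruct_sessions(log).items():
    print(user, [[p for _, p in s] for s in sessions])
```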
19

Brohman, Kathryn. "Explaining variation in data warehouse usage, an interpretation perspective." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape2/PQDD_0015/NQ58115.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Jung, Changhee. "Effective techniques for understanding and improving data structure usage." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/49101.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Turing Award winner Niklaus Wirth famously noted, 'Algorithms + Data Structures = Programs', and it follows that data structures should be carefully considered for effective application development. In fact, data structures are the main focus of program understanding, performance engineering, bug detection, and security enhancement. Our research is aimed at providing effective techniques for analyzing and improving data structure usage through fundamentally new approaches: First, detecting data structures; identifying which data structures are used within an application is a critical step toward application understanding and performance engineering. Second, selecting efficient data structures; analyzing data structures' behavior can reveal improper use of data structures and suggest alternative data structures better suited to the situation in which the application runs. Third, detecting memory leaks for data structures; tracking data accesses with little overhead, and analyzing them carefully, can enable practical and accurate memory leak detection. Finally, offloading time-consuming data structure operations; by leveraging a dedicated helper thread that executes the operations on behalf of the application thread, we can improve the overall performance of the application.
21

Damineni, Sarath Chandra, and Sai Manikanta Munukoti. "Product Usage Data collection and Analysis in Lawn-mowers." Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-20658.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Background: As the requirements for modern-day comforts rise from day to day, great evolution in the field of lawn-mowers has been recorded. This evolution has led companies to produce fleets of lawn-mowers (commercial and household) for different kinds of usage. Despite the great evolution and market in this field, to the best of our knowledge, no effort had been made to understand customer usage through analysis of the real-time usage of lawn-mowers. This research attempts to analyse the real-time usage of lawn-mowers using techniques like machine learning. Objectives: The main objective of the thesis work is to understand customer usage of lawn-mowers by analysing the real-time usage data using machine learning algorithms. To achieve this, we first review several studies to identify the different ways (scenarios) of understanding customer usage and how to do so from those scenarios. After discussing these scenarios with the stakeholders at the company, we evaluated a suitable scenario for the case of lawn-mowers. Finally, we achieved the primary objective by clustering the usage of lawn-mowers, analysing real-world time-series data from the Controller Area Network (CAN) bus based on driving patterns. Methods: A systematic literature review (SLR) is performed to identify the different ways to understand customer usage by analysing usage data with machine learning algorithms, and also to gain detailed knowledge about the different machine learning algorithms to apply to the real-world data. Finally, an experiment is performed to apply the machine learning algorithms to the CAN bus time-series data to group the usage of lawn-mowers into various clusters; the experiment also involves the comparison and selection of the different machine learning algorithms applied to the data. Results: As a result of the SLR, we identified different scenarios for understanding customer behaviours by analysing usage data. After formulating the most suitable scenario for lawn-mowers, the SLR also suggested the most suitable machine learning algorithms to apply to the data for that scenario. Upon applying the machine learning algorithms, after the necessary pre-processing steps, we obtained clusters of lawn-mower usage for every driving pattern selected. We also obtained clusters for different features of the driving patterns that indicate various characteristics, such as change of intensity in the usage and rate of change in the usage. Conclusions: This study identified customer behaviours by clustering their usage data. Moreover, clustering the CAN bus time-series data from lawn-mowers gave fresh insights for studying human behaviours and interaction with lawn-mowers. The formulated clusters have great scope for classifying and developing an individual strategy for each cluster. Further, the clusters can also be useful for identifying outlying behaviour of users and/or individual components.
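The clustering step described above can be sketched as follows; the per-trip features (mean intensity and mean rate of change) echo the characteristics mentioned in the abstract, but the feature extraction, the signals, and the choice of k are assumptions, not the thesis's actual pipeline:

```python
# Sketch: extract summary features from hypothetical CAN-bus signals
# (e.g. engine speed per trip) and group trips with k-means.
import numpy as np
from sklearn.cluster import KMeans

def trip_features(signal):
    """Per-trip usage features: mean intensity and mean rate of change."""
    return [np.mean(signal), np.mean(np.abs(np.diff(signal)))]

trips = [
    np.array([2000, 2100, 2050, 2200.0]),  # steady, moderate usage
    np.array([3000, 3500, 2800, 3600.0]),  # intense, varying usage
    np.array([1200, 1250, 1230, 1210.0]),  # light usage
]

X = np.array([trip_features(t) for t in trips])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(labels)  # one usage cluster per trip
```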
22

Khanna, Gaurav. "A Data-Locality Aware Mapping and Scheduling Framework for Data-Intensive Computing." The Ohio State University, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=osu1218559278.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Hoffmann, Steve. "Genome Informatics for High-Throughput Sequencing Data Analysis." Doctoral thesis, Universitätsbibliothek Leipzig, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-152643.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis introduces three different algorithmic and statistical strategies for the analysis of high-throughput sequencing data. First, we introduce a heuristic method based on enhanced suffix arrays to map short sequences to larger reference genomes. The algorithm builds on the idea of an error-tolerant traversal of the suffix array for the reference genome in conjunction with the concept of matching statistics introduced by Chang and a bitvector-based alignment algorithm proposed by Myers. The algorithm supports paired-end and mate-pair alignments, and the implementation offers methods for primer detection and for primer and poly-A trimming. In our own benchmarks as well as independent benchmarks, this tool outcompetes other currently available tools with respect to sensitivity and specificity in simulated and real data sets for a large number of sequencing protocols. Second, we introduce a novel dynamic programming algorithm for the spliced alignment problem. The advantage of this algorithm is its capability to detect not only co-linear splice events, i.e. local splice events on the same genomic strand, but also circular and other non-collinear splice events. This succinct and simple algorithm handles all these cases at the same time with high accuracy. While it is on par with other state-of-the-art methods for collinear splice events, it outcompetes other tools for many non-collinear splice events. The application of this method to publicly available sequencing data led to the identification of a novel isoform of the tumor suppressor gene p53. Since this gene is one of the best studied genes in the human genome, this finding is quite remarkable and suggests that the application of our algorithm could help to identify a plethora of novel isoforms and genes. Third, we present a data-adaptive method to call single nucleotide variations (SNVs) from aligned high-throughput sequencing reads. We demonstrate that our method, based on empirical log-likelihoods, automatically adjusts to the quality of a sequencing experiment and thus renders a "decision" on when to call an SNV. In our simulations this method is on par with current state-of-the-art tools. Finally, we present biological results that have been obtained using the special features of the presented alignment algorithm.
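As a simplified stand-in for the SNV-calling idea (the thesis derives its log-likelihoods empirically from the data rather than from a fixed model), the sketch below compares a sequencing-error model against a heterozygous-variant model for a single pileup column:

```python
# Sketch: a binomial log-likelihood ratio between "all mismatches are
# sequencing errors" and "site is a heterozygous variant".
from scipy.stats import binom

def snv_log_likelihood_ratio(alt_reads, depth, error_rate=0.01):
    """LLR > 0 favours a heterozygous variant over pure sequencing error."""
    l_error = binom.logpmf(alt_reads, depth, error_rate)
    l_het = binom.logpmf(alt_reads, depth, 0.5)
    return l_het - l_error

print(snv_log_likelihood_ratio(2, 40))   # few alt reads: likely error
print(snv_log_likelihood_ratio(18, 40))  # ~half alt reads: likely SNV
```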
24

Teniente, Avilés Ernesto Homar. "3D mapping and path planning from range data." Doctoral thesis, Universitat Politècnica de Catalunya, 2016. http://hdl.handle.net/10803/392615.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis reports research on mapping, terrain classification and path planning. These are classical problems in robotics, typically studied independently, and here we link them by framing them within a common proprioceptive modality, that of three-dimensional laser range scanning. The ultimate goal is to deliver navigation paths for challenging mobile robotics scenarios. For this reason we also deliver safe traversable regions from a previously computed globally consistent map. We first examine the problem of registering dense point clouds acquired at different instances in time. We contribute a novel range registration mechanism for pairs of 3D range scans using point-to-point and point-to-line correspondences in a hierarchical correspondence search strategy. For the minimization we adopt a metric that takes into account not only the distance between corresponding points, but also the orientation of their relative reference frames. We also propose FaMSA, a fast technique for multi-scan point cloud alignment that takes advantage of the asserted point correspondences during sequential scan matching, using the point match history to speed up the computation of new scan matches. To properly propagate the model of the sensor noise and the scan matching, we employ first-order error propagation, and to correct the error accumulation from local data alignment, we consider the probabilistic alignment of 3D point clouds using a delayed-state Extended Information Filter (EIF). In this thesis we adapt the Pose SLAM algorithm to the case of 3D range mapping; Pose SLAM is the variant of SLAM where only the robot trajectory is estimated and where sensor data is solely used to produce relative constraints between robot poses. These dense mapping techniques are tested in several scenarios acquired with our 3D sensors, producing impressively rich 3D environment models. The computed maps are then processed to identify traversable regions and to plan navigation sequences. We present a pair of methods to attain high-level off-line classification of traversable areas, in which training data is acquired automatically from navigation sequences. Traversable features come from the robot footprint samples during manual robot motion, allowing us to capture terrain constraints not easy to model. Using only some of the traversed areas as positive training samples, our algorithms are tested in real scenarios to find the rest of the traversable terrain, and are compared with a naive parametric approach and some variants of the Support Vector Machine. Later, we contribute a path planner that guarantees reachability at a desired robot pose with significantly lower computation time than competing alternatives. To search for the best path, our planner incrementally builds a tree using the A* algorithm; it includes a hybrid cost policy to efficiently expand the search tree, combining random sampling from the continuous space of kinematically feasible motion commands with a cost-to-goal metric that also takes into account the vehicle's nonholonomic constraints. The planner also allows for node rewiring, and to speed up node search, our method includes heuristics that penalize node expansion near obstacles and limit the number of explored nodes. The method book-keeps visited cells in the configuration space, and disallows node expansion at those configurations in the first full iteration of the algorithm.
We validate the proposed methods with extensive experiments in real scenarios from several very complex 3D outdoor environments, and compare them with other techniques such as the A*, RRT and RRT* algorithms.
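A minimal point-to-point registration loop, sketched below, shows the skeleton that a method like this builds on; the hierarchical correspondence search, point-to-line matches and orientation-aware metric described above are not reproduced here, and the point clouds are synthetic:

```python
# Sketch of point-to-point rigid registration (ICP): match nearest
# neighbours, solve the best rigid transform by SVD, iterate.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """SVD (Kabsch) solution of the rigid transform aligning matched pairs."""
    cs, cd = src.mean(0), dst.mean(0)
    u, _, vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = vt.T @ u.T
    if np.linalg.det(R) < 0:   # avoid reflections
        vt[-1] *= -1
        R = vt.T @ u.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    tree = cKDTree(dst)
    for _ in range(iters):
        _, idx = tree.query(src)               # nearest-neighbour matches
        R, t = best_rigid_transform(src, dst[idx])
        src = src @ R.T + t
    return src

theta = 0.1                                    # small synthetic misalignment
R0 = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1.0]])
dst = np.random.default_rng(1).normal(size=(100, 3))
src = dst @ R0.T + np.array([0.3, -0.2, 0.1])
print(np.abs(icp(src, dst) - dst).mean())      # residual should be small
```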
25

Vennesland, Audun. "Finding and Mapping Expertise Automatically Using Corporate Data." Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2007. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-8831.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:

In an organization, both management as well as new and experienced employees often need to get in touch with experts in a variety of situations. New staff members need to learn how to perform their job, management needs, amongst other things, to staff projects and vacancies, and other employees are often dependent on others' expertise to accomplish their tasks. Traditionally this problem has often been approached with computer applications using semi-automatic methods involving self-assessments of expertise stored in databases. These methods prove to be time-consuming, they do not consider the dynamics of expertise, and the self-assessed expertise is often difficult to validate. This report presents an overview of issues involved in expertise finding and the development of a simple, yet effective prototype which tries to overcome the mentioned problems by using a fully automatic approach. A study of the Urban Development area at the Municipality of Trondheim is carried out to analyze the organization's possessed and sought-after expertise and to collect the information necessary for building the expertise finder prototype. The study found that a lot of expertise evidence is found in the formal correspondence archived in the case handling system's document repository, and that the structure and content of these documents could fit a fully automatic expertise finder well. Four alternative test cases were evaluated during the testing and evaluation of the prototype. One of these test cases, where expert profiles are modelled on-the-fly based on employees' names occurring in formal documents, is able to compete with, and in some cases outperform, evaluation scores presented in related research.

26

Sims, LT Todd E. "Wireless Sensor Node Data Gathering and Location Mapping." Thesis, Monterey, California. Naval Postgraduate School, 2012. http://hdl.handle.net/10945/6869.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
With advances in wireless communications and the miniaturization of mobile sensors, Wireless Sensor Nodes are increasingly being deployed in an ad hoc fashion. Efficiently gathering data from these networks now becomes a larger problem. Collecting sensor data from a group of nodes deployed in an unknown arrangement in the shortest amount of time requires the collector to utilize a methodology that minimizes collection overlap. Inexpensive commercial off-the-shelf wireless routers and mobile platforms that can be utilized to fly over a field of wireless nodes and create a link for connecting to and retrieving the maximum amount of data are examined in this thesis. The problems are two-fold: first, the task of locating the wireless devices in a given area and querying these devices to collect raw data for positioning; and second, the task of then creating a static map of the derived locations. In order to enumerate device locations, the relationship between signal strength measurements and round-trip signal times between wireless nodes and the wireless access router was investigated in this thesis. The results of this research support the conclusion that an inexpensive collection system can be readily configured for the task of automated client surveying and distance approximation.
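The signal-strength-to-distance relationship investigated above is commonly modelled with the log-distance path-loss formula; the sketch below uses hypothetical calibration values for the reference power and path-loss exponent, which in practice would be fitted to the deployed hardware:

```python
# Sketch: approximate transmitter distance from RSSI with the standard
# log-distance path-loss model.
def rssi_to_distance(rssi_dbm, rssi_at_1m=-40.0, path_loss_exp=2.7):
    """Approximate distance in metres from a received signal strength."""
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10 * path_loss_exp))

for rssi in (-40, -55, -70, -85):
    print(rssi, "dBm ->", round(rssi_to_distance(rssi), 1), "m")
```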
27

Gamal, Eldin Tawfik Ahmed. "Autonomous Small Scale Data-logger for Temperature Mapping." UNF Digital Commons, 2015. http://digitalcommons.unf.edu/etd/585.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Modern manufacturing processes require minimal human intervention and a high degree of automation to meet industry demands. Due to variability in industrial process conditions, custom systems are often sought for these applications. These systems must be compact, economical, and capable of operating under different environmental conditions. This work presents the development, fabrication, testing, and validation of a low-cost small-scale temperature data-logger used as a monitoring system for automated applications. The proposed system is battery powered and packaged in a manner able to operate in temperatures up to 100°C, with exposure to chemicals such as isopropyl alcohol, propylene glycol, and de-ionized water for a period of 2 hours, with an accuracy of ±0.5°C. The hydration process used for contact lens manufacturing is proposed as a target application for the developed system. The developed system was bench-top tested and validated using a convection oven and the three chemicals propylene glycol, isopropyl alcohol, and de-ionized water. In addition, the system was tested "in situ" in the hydration lines of a contact lens manufacturing process. The development process illustrated in this work, including the system design, fabrication, and testing, can be used as a base to develop the "best fit" monitoring system for multiple other applications.
28

Seetan, Raed. "A Data Mining Approach to Radiation Hybrid Mapping." Diss., North Dakota State University, 2014. https://hdl.handle.net/10365/27315.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The task of mapping markers from Radiation Hybrid (RH) mapping experiments is typically viewed as equivalent to the traveling-salesman problem, which has combinatorial complexity. As an additional problem, experiments commonly result in some unreliable markers that reduce the overall map quality. Due to the large numbers of markers in current radiation hybrid populations, the use of data mining techniques becomes increasingly important for reducing both the computational complexity and the impact of noise in the original data. In this dissertation, a clustering-based approach is proposed for addressing both the problem of filtering unreliable markers (framework maps) and the problem of mapping large numbers of markers (comprehensive maps) efficiently. Traditional approaches for eliminating unreliable markers use resampling of the full data set, which has an even higher computational complexity than the original mapping problem. In contrast, the proposed algorithms use a divide-and-conquer strategy to construct framework maps based on clusters that exclude unreliable markers. The clusters of markers are ordered using parallel processing and are then combined to form the complete map. Three algorithms are presented that explore the trade-off between the number of markers included in the framework map and placement accuracy. Since the mapping problem is susceptible to noise, it is often beneficial to remove markers that are not trustworthy. Traditional mapping techniques for building comprehensive maps process all markers together, including unreliable markers, in a single-iteration approach, and the accuracy of the constructed maps may be reduced. In this research work, two-stage algorithms are proposed for mapping most markers by first creating a framework map of the reliable markers and then incrementally adding the remaining markers to construct high-quality comprehensive maps. All proposed algorithms have been evaluated on several human chromosomes using radiation hybrid datasets of varying sizes, and the performance of our proposed algorithms is compared with state-of-the-art RH mapping software. Overall, the proposed algorithms are not only much faster than the comparative approaches, but the quality of the resulting maps is also much higher.
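Since the abstract frames marker ordering as a traveling-salesman-type problem, the toy sketch below shows a greedy nearest-neighbour ordering over a pairwise distance matrix; in real RH mapping the distances would come from marker co-retention data, and this cheap heuristic is only a stand-in for the thesis's algorithms:

```python
# Toy stand-in for the within-cluster ordering step: order markers with a
# greedy nearest-neighbour walk over a pairwise distance matrix.
import numpy as np

def greedy_order(dist):
    """Greedy nearest-neighbour ordering, a cheap TSP-style heuristic."""
    n = len(dist)
    order, unvisited = [0], set(range(1, n))
    while unvisited:
        last = order[-1]
        nxt = min(unvisited, key=lambda j: dist[last][j])
        order.append(nxt)
        unvisited.remove(nxt)
    return order

rng = np.random.default_rng(2)
positions = np.sort(rng.uniform(0, 100, size=6))        # true marker order
dist = np.abs(positions[:, None] - positions[None, :])  # pairwise distances
print(greedy_order(dist))  # recovers 0..5 when marker 0 is an endpoint
```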
29

Borck, Michael Geoffery. "Feature Extraction from Multi-modal Mobile Mapping Data." Thesis, Curtin University, 2016. http://hdl.handle.net/20.500.11937/57505.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis investigates many different feature extraction methods and machine learning algorithms for their usefulness in detecting objects from vehicle-based mobile mapping system datasets. A comprehensive analysis using performance measures and graphical techniques is applied to identify the best combination of features and classifiers. A system was built to enable users who are not programmers to manage image data and to customise their analyses by combining common data analysis tools to fit their needs.
30

Ozaygen, Alkim. "Analysing the Usage Data of Open Access Scholarly Books: What Can Data Tell Us?" Thesis, Curtin University, 2019. http://hdl.handle.net/20.500.11937/79585.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This study explores data captured on the Internet related to OA scholarly books to provide a detailed overall picture of the dissemination of these books beginning from the point they were made OA. To uncover the factors that impact on the digital uses of OA books, it explores relationships between book characteristics and the characteristics and motivations of the groups using and sharing books in digital landscapes.
31

Eddie, Sarah Joan. "Hearing Aid Usage in Different Listening Environments." Thesis, University of Canterbury. Communication Disorders, 2007. http://hdl.handle.net/10092/1418.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This study investigates the listening environments of hearing aid users by employing the data logging capacity of their hearing aids. The idea that a hearing aid user's listening environments are important in prescribing desired hearing aid features has been discussed in the literature; however, investigation of listening environments has been limited in the past, as it has relied mainly on subjective recordings. Data logging, the capacity of a hearing aid to continuously store information regarding time spent in different programs, listening environments, and microphone modes, is now available in a number of hearing aid models, and therefore provides an objective tool for studying a hearing aid user's listening environments. The data logging information from fifty-seven new hearing aid wearers, including 50 males and 7 females (mean age = 68 years, SD = 11.3), was obtained during the first routine clinic follow-up session for each individual. Measures of time spent in different listening environments, microphone modes, and overall sound levels were analyzed. Hearing aid usage time was found to be highest in "Speech Only" situations (44.8%), followed by "Quiet" (26.7%), "Noise Only" (16.3%) and "Speech in Noise" (12.3%) situations. The majority of the hearing aid users' time was spent in "Surround" microphone mode (74.3%), followed in order by "Split" (22.3%) and "Full" (3.5%) directional modes. Results of two separate two-way ANOVAs revealed no significant age effect either on time spent in different listening environments [F(3,49) = 0.7, p = 0.5] or on time spent in different microphone modes [F(3,20) = 0.6, p = 0.6]. These findings provide empirical evidence regarding the general listening pattern of hearing aid users, which can be used as a starting point when troubleshooting problems experienced by hearing aid clients, or when assessing a user's need for various hearing aid features.
32

Chen, Rita. "ScratchStats : a site for visualizing and understanding scratch usage data." Thesis, Massachusetts Institute of Technology, 2010. http://scratch.mit.edu.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 65-66).
This thesis introduces ScratchStats, an extension to the Scratch website where users can view and understand Scratch usage data through a series of interactive visualizations. Scratch is a visual programming language that makes it easy to create interactive stories, games, and artwork. Accompanying the Scratch application is the Scratch Online Community, a website that allows users to upload and share their creations. The visualizations created in this thesis describe community, personal, and network statistics. ScratchStats aims to provide answers to questions about Scratch usage, promote reflection and introspective learning, and aid in teaching data literacy.
by Rita Chen.
M.Eng.
33

Erhardt, Keeley Donovan. "Bismuth : a blockchain-based program for verifying responsible data usage." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/119629.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 74-75).
The amount of digital information generated, collected, and stored is growing at a staggering rate. Data-driven insights are increasingly relied upon to make decisions that directly impact individuals. The burgeoning importance of data in shaping the world around us requires a shift in the current data ownership, exchange, and usage paradigm. Responsible data use should be verifiably free from leaks of sensitive information, discriminatory usage, illegal applications, and other misuse. Additionally, a standard of correctness should be enforced for computations executed against datasets. Enlisting trusted parties to vet the code being executed against sensitive data can reduce the prevalence of irresponsible or malevolent data usage. Trusted parties can attest to attributes of the code, for example, that the code is privacy-preserving, that it is legal to execute against data collected from users in a certain country, or that a computation reliably and correctly computes an answer as advertised, to ensure that individuals' personal information is used appropriately. This thesis presents a design to structure, vet, and verify the code that is executed against sensitive data, along with a proposal for using blockchain-based smart contracts to audit and enforce proper usage of vetted code, promoting a paradigm of "safe" question-and-answer exchange. Finally, this thesis demonstrates Bismuth, a blockchain-based program built to implement the ideas presented in this work and to assist in a transition towards more thoughtful and responsible data usage.
by Keeley Donovan Erhardt.
M. Eng.
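As a toy illustration of the vet-then-execute pattern the abstract describes (this is plain Python, not Bismuth and not a smart contract; all names and the sample query are hypothetical): code is identified by its hash, trusted parties attach attestations, and execution against sensitive data is refused unless the required attestations are present.

```python
import hashlib

attestations: dict[str, set[str]] = {}   # code hash -> properties vouched for

def code_id(source: str) -> str:
    return hashlib.sha256(source.encode()).hexdigest()

def attest(source: str, attribute: str) -> None:
    """A trusted party records that the code has a given property."""
    attestations.setdefault(code_id(source), set()).add(attribute)

def run_vetted(source: str, data, required=frozenset({"privacy-preserving"})):
    granted = attestations.get(code_id(source), set())
    if not required <= granted:
        raise PermissionError(f"missing attestations: {set(required) - granted}")
    exec(source, {"data": data})         # execute only code that passed vetting

query = "print(sum(data) / len(data))"   # an aggregate, non-identifying question
attest(query, "privacy-preserving")      # vouched for by a trusted party
run_vetted(query, [3, 5, 7])             # prints 5.0
```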
34

Day, Allen Jason. "The construction and usage of a microarray data warehousing system." Diss., Restricted to subscribing institutions, 2008. http://proquest.umi.com/pqdweb?did=1666908781&sid=1&Fmt=2&clientId=1564&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Stagni, Federico. "On Usage Control for Data Grids: Models, Architectures, and Specifications." Doctoral thesis, Università degli studi di Ferrara, 2009. http://hdl.handle.net/11392/2389199.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis reasons about usage control in Data Grids by presenting models, architectures, and specifications. The work is a step toward continuous monitoring and control of data access and usage in a Data Grid. First, the thesis presents background on Grids, security, and security for Grids, abstracting from current Grid implementations. We argue that usage control in Data Grids should be considered a process composed of two black boxes. We analyse the requirements for Grid security and propose a distributed usage control model suitable for Grids and distributed systems alike. We then apply this model to a Data Grid abstraction and present a usage control architecture for Data Grids that uses the functional components of current Grids. We also present an abstract specification of an enforcement mechanism for usage control policies. To do so, we use a formal requirements engineering methodology with a bottom-up approach, which proves that the specification is sound and complete. With this methodology, we show formally that the abstract specification can enforce all the different types of usage control policies. Finally, we consider how existing prototypes fit into the proposed architecture, and the advantages derived from using Semantic Grid technologies for the specification of policy subjects and objects.
36

Gadea, Cristian. "Collaborative Web-Based Mapping of Real-Time Sensor Data." Thesis, Université d'Ottawa / University of Ottawa, 2011. http://hdl.handle.net/10393/19772.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The distribution of real-time GIS (Geographic Information System) data among users is now more important than ever, as it becomes increasingly affordable and important for scientific and government agencies to monitor environmental phenomena in real time. A growing number of sensor networks are being deployed all over the world, but there is a lack of solutions for their effective monitoring. Increasingly, GIS users need access to real-time sensor data from a variety of sources, and the data must be represented in a visually pleasing way and be easily accessible. In addition, users need to be able to collaborate with each other to share and discuss specific sensor data. The real-time acquisition, analysis, and sharing of sensor data from a large variety of heterogeneous sensor sources is currently difficult due to the lack of a standard architecture to properly represent the dynamic properties of the data and make it readily accessible for collaboration between users. This thesis presents a JEE-based publisher/subscriber architecture that allows real-time sensor data to be displayed collaboratively on the web, requiring users to have nothing more than a web browser and Internet connectivity to gain access to that data. The proposed architecture is evaluated by showing how an AJAX-based and a Flash-based web application are able to represent the real-time sensor data within novel collaborative environments. By using the latest web-based technology and relevant open standards, this thesis shows how map data and GIS data can be made more accessible, more collaborative, and generally more useful.
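The publisher/subscriber core of such an architecture can be sketched independently of JEE. The Python toy below (topic name and payload invented) shows the pattern: sensors publish readings, and each subscribed map client receives them as they arrive.

```python
from collections import defaultdict
from typing import Callable

class SensorBroker:
    """Toy publish/subscribe hub: sensors publish, map views subscribe."""
    def __init__(self) -> None:
        self.subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable[[dict], None]) -> None:
        self.subscribers[topic].append(callback)

    def publish(self, topic: str, reading: dict) -> None:
        for callback in self.subscribers[topic]:
            callback(reading)   # in the thesis, a push to each browser session

broker = SensorBroker()
broker.subscribe("temperature", lambda r: print("map update:", r))
broker.publish("temperature", {"lat": 45.42, "lon": -75.69, "value": 21.3})
```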
37

Block, Lorraine Joy. "Mapping nursing wound care data elements to SNOMED-CT." Thesis, University of British Columbia, 2016. http://hdl.handle.net/2429/60290.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Documentation is a professional responsibility in nursing because it facilitates communication, promotes good nursing care, and acts as a valuable method to demonstrate that legal and agency standards are followed. Nurses are increasingly using health information technologies, such as electronic health records, to document care. To be able to measure and compare the impact of nursing on patient outcomes, standardized clinical terminologies compliant with international standards are necessary. In British Columbia, Canada, nurses use a standardized wound care template to document their assessments and the care they provide to patients; however, the content of this assessment is currently not shared in a computable format between different electronic health records within the province. The purpose of this thesis was to map wound care data elements from the BC Standardized Nursing Wound Documentation standard to SNOMED-CT. To complete this "bottom-up" mapping activity, a conceptual model of knowledge representation for nursing wound care was developed to inform three concurrent methods of mapping (manual, automated, and literature comparison) for 107 data elements. These methods produced candidate lists, which were reviewed by two expert wound care clinicians who created an expert consensus list. Results of this expert consensus list indicated that 40.2% of the terms had direct matches, 1.9% had one-to-many matches, and 57.9% had no matches. The outcomes of this study were a conceptual model of nursing knowledge representation for wound care, a list of wound care data elements mapped to SNOMED-CT, identification of missing and duplicate concepts in SNOMED-CT, and the application of concurrent mapping methods to inform the creation of an expert consensus list. The advancement of standardized clinical terminologies to support semantic interoperability between disparate electronic health records is an important measure to ensure patient information is shared throughout the continuum of care. This thesis provides a method to incorporate local nursing standards into SNOMED-CT, with the intent of ensuring that nursing care is represented.
Applied Science, Faculty of
Nursing, School of
Graduate
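The three concurrent mapping methods feeding the expert review can be illustrated with a small merge-and-vote sketch. The element names below are examples, and the SCT_* identifiers are placeholders, not real SNOMED CT codes.

```python
from collections import Counter

# Candidate (data element -> code) proposals from the three methods;
# SCT_* identifiers are placeholders, not real SNOMED CT codes.
manual     = {"wound depth": "SCT_A", "exudate amount": "SCT_B"}
automated  = {"wound depth": "SCT_A", "odour": "SCT_C"}
literature = {"wound depth": "SCT_A"}

votes = Counter()
for method in (manual, automated, literature):
    for element, code in method.items():
        votes[(element, code)] += 1

# Pairs proposed by all three methods are strong candidates for the
# expert consensus list; the rest go to the clinicians for adjudication.
unanimous = [pair for pair, n in votes.items() if n == 3]
print(unanimous)   # [('wound depth', 'SCT_A')]
```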
38

Polowinski, Jan. "Semi-Automatic Mapping of Structured Data to Visual Variables." Master's thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-108497.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
While semantic web data is machine-understandable and well suited for advanced filtering, in its raw representation it is not conveniently understandable to humans; therefore, visualization is needed. A core challenge when visualizing this structured but heterogeneous data turned out to be a flexible mapping to Visual Variables. This work presents a highly flexible, semi-automatic solution that maximally supports the visualization process by reducing the mapping possibilities to a useful subset. The basis for this is knowledge concerning the metrics and structure of the data on the one hand, and the available visualization structures, platforms, and common graphical facts on the other, provided by a novel basic visualization ontology. A declarative, platform-independent mapping vocabulary and a framework were developed, utilizing current standards from the semantic web and the Model-Driven Architecture (MDA).
39

Abdul, Hamid Juazer Rizal. "Mapping of forests using imaging spectrometry and lidar data." Thesis, University of Nottingham, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.415722.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Lewis, Sian Patricia. "Mapping forest parameters using geostatistics and remote sensing data." Thesis, University College London (University of London), 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.407744.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Gonçalves, José Alberto. "Integration of SAR and SPOT data for topographic mapping." Thesis, University of London, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.268797.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Diener, Matthias. "Automatic task and data mapping in shared memory architectures." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2015. http://hdl.handle.net/10183/131871.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Reducing the cost of memory accesses, both in terms of performance and energy consumption, is a major challenge in shared-memory architectures. Modern systems have deep and complex memory hierarchies with multiple cache levels and memory controllers, leading to Non-Uniform Memory Access (NUMA) behavior. In such systems, there are two ways to improve memory affinity: first, by mapping tasks that share data (communicate) to cores with a shared cache, cache usage and communication performance are improved; second, by mapping memory pages to the memory controllers that perform the most accesses to them and are not overloaded, the average cost of accesses is reduced. We call these two techniques task mapping and data mapping, respectively. For optimal results, task and data mapping need to be performed in an integrated way. Previous work in this area performs the mappings only separately, which limits the achievable gains. Furthermore, most previous mechanisms require expensive operations, such as communication or memory access traces, to perform the mapping, require changes to the hardware or to the parallel application, or use a simple static mapping. Such mechanisms cannot be considered generic solutions for the mapping problem. In this thesis, we make two contributions to the mapping problem. First, we introduce a set of metrics and a methodology to analyze parallel applications in order to determine their suitability for an improved mapping and to evaluate the possible gains that can be achieved using an optimized mapping. Second, we propose two automatic mechanisms that perform task mapping and combined task/data mapping, respectively, during the execution of a parallel application. These mechanisms work at the operating system level and require no changes to the hardware, the applications themselves, or their runtime libraries. An extensive evaluation with parallel applications from multiple benchmark suites, as well as real scientific applications, shows substantial performance and energy efficiency improvements that are significantly higher than those of simple mechanisms and previous work, while maintaining a low overhead.
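The task-mapping half of this idea can be illustrated with a toy greedy placement. This is not the thesis's online OS-level mechanism; the communication matrix and the two-cores-per-cache topology below are assumed purely for illustration.

```python
import numpy as np

# comm[i, j]: communication volume between tasks i and j (invented numbers).
comm = np.array([[0, 9, 1, 1],
                 [9, 0, 1, 1],
                 [1, 1, 0, 8],
                 [1, 1, 8, 0]])
core_groups = [(0, 1), (2, 3)]   # pairs of cores sharing a cache (assumed topology)

placement = {}
unplaced = set(range(len(comm)))
for cores in core_groups:
    # Place the most heavily communicating remaining pair on cores that share a cache.
    _, i, j = max((comm[i, j], i, j) for i in unplaced for j in unplaced if i < j)
    placement[i], placement[j] = cores
    unplaced -= {i, j}

print(placement)   # {0: 0, 1: 1, 2: 2, 3: 3}
```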
43

Gharabaghi, Sara. "Quantitative Susceptibility Mapping (QSM) Reconstruction from MRI Phase Data." Wright State University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=wright1610018553822445.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Griffiths, G. H. "Mapping rangeland vegetation in Northern Kenya from Landsat data." Thesis, Aston University, 1985. http://publications.aston.ac.uk/14254/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Firoozi, Nejad Behnam. "Population mapping using census data, GIS and remote sensing." Thesis, Queen's University Belfast, 2016. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.705917.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This thesis assesses approaches to population surface modelling by pulling together the benefits of reference gridded population data with local regression procedures and geographically weighted regression. The study provides a more detailed assessment of surface modelling accuracy than was achieved in previous studies, in order to assess the factors that explain errors in the predictions. The primary aim of this thesis is to evaluate Martin's (1989) population surface modelling approach and to design and implement a method using secondary data, suitable for application in England and Wales. This research is based on the idea that population data presented for a single zone can be redistributed within the zone using local parameters such as housing density, with a weighted sum performing the spatial redistribution. The thesis also aims to make use of remote sensing (RS) data and image processing techniques, such as maximum likelihood classification and the normalised difference vegetation index, to identify (un)populated cells. The potential of Landsat images and RS data analysis is assessed particularly for countries where high-quality land use data are not readily obtainable and their generation is not feasible in the near future. This thesis focuses on the identification of unpopulated cells, rather than populated units, using RS data. Case studies make use of data from Northern Ireland (NI) and Jönköping in southern Sweden. The outcomes indicate the impact of population density, population variance, and the resolution of source zones on the accuracy of population allocation to grid cells using Martin's (1989) model. The results show significant accuracy in prediction to 100m cells using an alternative approach based on settlement data for NI, and this is recommended as an alternative method for England and Wales. It is also concluded that there is potential to generate population surfaces using Landsat data for areas where local residential data are not easily accessible.
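The weighted-sum redistribution described above can be sketched in a few lines; the zone population, per-cell housing densities, and cell count below are invented for illustration. Cells flagged as unpopulated (for example via the RS-based classification the abstract mentions) simply receive zero weight.

```python
import numpy as np

zone_population = 1200.0
housing_density = np.array([0.0, 5.0, 20.0, 15.0])  # per grid cell within the zone

# Weighted sum: each cell receives population in proportion to its density.
weights = housing_density / housing_density.sum()
cell_population = zone_population * weights
print(cell_population)   # [0., 150., 600., 450.]; the zero-density cell stays empty
```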
46

Järpehult, Oscar, and Martin Lindblom. "Longitudinal measurements of link usage on Twitter." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-159331.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
When Twitter launched with its unique restriction of posts to only 140 characters, link shorteners came into use. They were the only way to fit long URLs in tweets until Twitter addressed this by providing its own integrated link shortener. This study investigates how links are used on Twitter. It includes both careful data collection involving multiple APIs and analysis of the collected data, providing new insight into this topic. It was found that a small set of internet domains accounts for a large part of the links found in posted tweets. This set of top occurring domains did not necessarily reflect the domains typically found on common internet top lists. Looking at link shorteners in posted tweets, we found that "bit.ly" was the most common one. Due to our method of collecting data, we were able to look up the number of clicks "bit.ly" links had received. We compared the click data to the number of retweets received by the tweets containing these links, which led to some interesting discoveries regarding the ratio between the two.
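A domain tally of the kind reported here could be computed roughly as follows. The example URLs are made up, and real tweets would first need their shortened links expanded via the relevant APIs.

```python
from collections import Counter
from urllib.parse import urlparse

# Stand-ins for URLs extracted from collected tweets.
urls = [
    "https://bit.ly/abc123",
    "https://www.youtube.com/watch?v=x",
    "https://bit.ly/def456",
]

# Count occurrences of each host across all collected links.
domains = Counter(urlparse(u).netloc for u in urls)
print(domains.most_common())   # [('bit.ly', 2), ('www.youtube.com', 1)]
```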
47

Hawana, Leila Mohammed. "Knowing Without Knowing: Real-Time Usage Identification of Computer Systems." PDXScholar, 2019. https://pdxscholar.library.pdx.edu/open_access_etds/4680.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Contemporary computers attempt to understand a user's actions and preferences in order to make decisions that better serve the user. In pursuit of this goal, computers can make observations that range from simple pattern recognition to listening in on conversations without the device being intentionally active. While these developments are incredibly useful for customization, the inherent security risks involving personal data are not always worth it. This thesis attempts to tackle one issue in this domain, computer usage identification, and presents a solution that identifies high-level usage of a system at any given moment without looking into any personal data. This solution, what I call "knowing without knowing," gives the computer just enough information to better serve the user without knowing any data that compromises privacy. With prediction accuracy at 99% and system overhead below 0.5%, this solution is not only reliable but is also scalable, giving valuable information that will lead to newer, less invasive solutions in the future.
48

Li, Haiyan. "Data visualization of asymmetric data using Sammon mapping and applications of self-organizing maps." College Park, Md. : University of Maryland, 2005. http://hdl.handle.net/1903/2358.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2005.
Thesis research directed by: Business and Management. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
49

Luczak-Rösch, Markus [Verfasser]. "Usage-dependent maintenance of structured Web data sets / Markus Luczak-Rösch." Berlin : Freie Universität Berlin, 2014. http://d-nb.info/1068253827/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Willenberg, Darren. "Quantifying MyCiTi supply usage via Big Data and Agent Based Modelling." Master's thesis, University of Cape Town, 2017. http://hdl.handle.net/11427/27362.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The MyCiTi is currently generating large volumes of raw transactional information in the form of commuter smartcard transactions, which can be considered Big Data. Agent-Based Modelling (ABM) has been applied internationally as a means of deriving actionable intelligence from Big Data, and it is proposed that ABM can be used to unlock the hidden potential within the aforementioned data. This paper demonstrates how to develop and calibrate a MATSim-based ABM to analyse automated fare collection (AFC) data. It is found that data formatting algorithms are critical in the preparation of data for modelling activities; these algorithms are highly complex, requiring significant time investment prior to development. Furthermore, the development of appropriate ABM calibration parameters requires careful consideration in terms of appropriate data collection, simulation testing, and justification. This study serves as strong evidence that ABM is an appropriate analysis technique for MyCiTi data systems. Validation exercises reveal that the ABM is able to calculate on-board bus usage and system behaviour with a strong degree of accuracy (R-squared 0.85). It is, however, recommended that additional research be conducted into more detailed calibration activities, such as fine-tuning agent behaviour during simulation. Ultimately, this research study achieves its explorative objectives of model development and testing, and paves a way forward for future research into the practical applications of Big Data and ABM in the South African context.
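For reference, the R-squared validation statistic quoted above compares observed and simulated usage along these lines; the counts below are illustrative, not MyCiTi data.

```python
import numpy as np

observed  = np.array([120, 80, 200, 150, 90])   # e.g. boardings per route per hour
simulated = np.array([110, 85, 190, 160, 95])   # the ABM's corresponding output

# Coefficient of determination: 1 minus residual over total sum of squares.
ss_res = np.sum((observed - simulated) ** 2)
ss_tot = np.sum((observed - observed.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
print(round(r_squared, 3))   # about 0.963 for these toy numbers
```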
