To see the other types of publications on this topic, follow the link: Mapping systems.

Dissertations / Theses on the topic 'Mapping systems'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Mapping systems.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses across a wide variety of disciplines and organise your bibliography correctly.

1

Luo, Jiangtao. "Functional mapping of dynamic systems." [Gainesville, Fla.] : University of Florida, 2009. http://purl.fcla.edu/fcla/etd/UFE0041189.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Reis, Julio Cesar Dos. "Mapping Adaptation between Biomedical Knowledge Organization Systems." Thesis, Paris 11, 2014. http://www.theses.fr/2014PA112231/document.

Full text
Abstract:
Modern biomedical information systems require exchanging and retrieving data between them, due to the overwhelming available data generated in this domain. Knowledge Organization Systems (KOSs) offer means to make the semantics of data explicit which, in turn, facilitates their exploitation and management. The evolution of semantic technologies has led to the development and publication of an ever increasing number of large KOSs for specific sub-domains like genomics, biology, anatomy, diseases, etc. The size of the biomedical field demands the combined use of several KOSs, but it is only possible through the definition of mappings. Mappings interconnect entities of domain-related KOSs via semantic relations. They play a key role as references to enable advanced interoperability tasks between systems, allowing software applications to interpret data annotated with different KOSs. However, to remain useful and reflect the most up-to-date knowledge of the domain, the KOSs evolve and new versions are periodically released. This potentially impacts established mappings demanding methods to ensure, as automatic as possible, their semantic consistency over time. Manual maintenance of mappings stands for an alternative only if a restricted number of mappings are available. Otherwise supporting methods are required for very large and highly dynamic KOSs. To address such problem, this PhD thesis proposes an original approach to adapt mappings based on KOS changes detected in KOS evolution. The proposal consists in interpreting the established correspondences to identify the relevant KOS entities, on which the definition relies on, and based on the evolution of these entities to propose actions suited to modify mappings. Through this investigation, (i) we conduct in-depth experiments to understand the evolution of KOS mappings; we propose automatic methods (ii) to analyze mappings affected by KOS evolution, and (iii) to recognize the evolution of involved concepts in mappings via change patterns; finally (iv) we design techniques relying on heuristics explored by novel algorithms to adapt mappings. This research achieved a complete framework for mapping adaptation, named DyKOSMap, and an implementation of a software prototype. We thoroughly evaluated the proposed methods and the framework with real-world datasets containing several releases of mappings between biomedical KOSs. The obtained results from experimental validations demonstrated the overall effectiveness of the underlying principles in the proposed approach to adapt mappings. The scientific contributions of this thesis enable to largely automatically maintain mappings with a reasonable quality, which improves the support for mapping maintenance and consequently ensures a better interoperability over time
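As a rough illustration of the change-driven mapping adaptation described in the abstract above (this is not the DyKOSMap implementation; the class, the change patterns and the adaptation rules below are invented for the sketch), a minimal Python version might look like this:

```python
# Minimal sketch of change-driven mapping adaptation (hypothetical rules,
# not the DyKOSMap code). Concept changes detected between two KOS releases
# trigger simple actions on the mappings that reference the affected concepts.

from dataclasses import dataclass

@dataclass
class Mapping:
    source: str            # concept ID in the source KOS
    target: str            # concept ID in the target KOS
    relation: str          # e.g. "exactMatch", "narrowMatch"
    status: str = "valid"

def adapt_mappings(mappings, changes):
    """Apply a naive adaptation rule per detected change pattern.

    `changes` maps a concept ID to a (pattern, replacement) pair, e.g.
    ("split", ["C12", "C13"]), ("merge", "C99") or ("retire", None).
    """
    adapted = []
    for m in mappings:
        change = changes.get(m.source)
        if change is None:                       # source concept untouched
            adapted.append(m)
            continue
        pattern, replacement = change
        if pattern == "retire":                  # concept removed: flag for review
            adapted.append(Mapping(m.source, m.target, m.relation, "deprecated"))
        elif pattern == "merge":                 # concept merged: move the mapping
            adapted.append(Mapping(replacement, m.target, m.relation, "moved"))
        elif pattern == "split":                 # concept split: duplicate, weaker relation
            for new_id in replacement:
                adapted.append(Mapping(new_id, m.target, "narrowMatch", "revised"))
        else:                                    # unknown pattern: keep and flag
            adapted.append(Mapping(m.source, m.target, m.relation, "review"))
    return adapted

if __name__ == "__main__":
    mappings = [Mapping("C1", "D7", "exactMatch"), Mapping("C2", "D9", "exactMatch")]
    changes = {"C1": ("split", ["C10", "C11"]), "C2": ("retire", None)}
    for m in adapt_mappings(mappings, changes):
        print(m)
```

In the thesis the change patterns are recognised automatically and the adaptation actions are heuristic; the sketch only shows the overall flow from detected concept changes to modified mappings.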
APA, Harvard, Vancouver, ISO, and other styles
3

Marsh, Steven George Edward. "Epitope mapping in major histocompatibility systems." Thesis, Open University, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.361587.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Hibbs, Jeremy, Travis Kibler, Jesse Odle, Rachel Powers, Thomas Schucker, and Alex Warren. "Autonomous Mapping Using Unmanned Aerial Systems." International Foundation for Telemetering, 2015. http://hdl.handle.net/10150/596464.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Vascik, Anne Marie. "Physiographic Mapping of Ohio’s Soil Systems." The Ohio State University, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=osu1471809603.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Salzman, Rhonda A. (Rhonda Ann) 1978. "Manufacturing system design : flexible manufacturing systems and value stream mapping." Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/82697.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2002.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Includes bibliographical references.
by Rhonda A. Salzman.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
7

Sizer, Scott Marshall. "Locating and mapping cemeteries in Loudon County, Virginia." Morgantown, W. Va. : [West Virginia University Libraries], 1999. http://etd.wvu.edu/templates/showETD.cfm?recnum=623.

Full text
Abstract:
Thesis (M.A.)--West Virginia University, 1999.
Title from document title page. Document formatted into pages; contains vi, 31 p. : maps. Includes abstract. Includes bibliographical references (p. 18-19).
APA, Harvard, Vancouver, ISO, and other styles
8

Yang, Jun. "TMDS: thematic map design advisory system for geographical information systems and electronic mapping systems." Carleton University dissertation (Geography), Ottawa, 1993.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Esmaeily, Kaveh. "Ontological mapping between different higher educational systems : The mapping of academic educational system on an international level." Thesis, Växjö University, School of Mathematics and Systems Engineering, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:vxu:diva-876.

Full text
Abstract:

This Master's thesis sets its goals in researching and understanding the structure of different educational systems. The main goal of this paper is to develop a middleware aimed at translating courses between different educational systems.

The procedure is to find the meaning of objects and courses from the different educational systems point of view, this is mainly done through processes such as identifying the context, semantics and state of the objects involved, perhaps in different activities. The middleware could be applied, with small changes, to any structured system of education.

This thesis introduces a framework for using ontologies in the translation and integration of course aspects in different processes. It suggests using ontologies when adopting and structuring different educational systems on an international level. This thesis will, through an understanding of ontologies, construct a middleware for the translation process between different courses in the different educational systems. As an example, courses in Sweden, Germany and Tajikistan have been used for the mapping and for constructing learning goals and qualifications.

APA, Harvard, Vancouver, ISO, and other styles
10

Coêlho, de Araújo Cristiano. "Communication mapping in multiprocessor platforms." Universidade Federal de Pernambuco, 2005. https://repositorio.ufpe.br/handle/123456789/2098.

Full text
Abstract:
Advances in integrated circuit fabrication technology have made it possible to implement entire systems on a single chip, combining high processing power and low power consumption in a small area. So-called Multiprocessor Systems-on-Chip (MPSoCs) include multiple heterogeneous processors, complex interconnection structures and third-party intellectual property components. This technology has enabled portable devices such as mobile phones, PDAs and multimedia devices that combine portability with the capability of earlier desktop computers. However, the specification and validation of these systems has become a very difficult task. There is a gap between the high-level system specification and its implementation on a multiprocessor platform, and this gap is not adequately addressed by existing methodologies and tools, resulting in delays in the development cycle and errors that can compromise the design. This thesis attacks the problem of implementing communication modelled at the system level on multiprocessor platforms. The contributions of this work are: (1) a new approach to modelling multiprocessor platforms; (2) a methodology for mapping communication onto the platform; (3) analysis support for evaluating the communication implementation. The proposed methodologies and tools were validated using two case studies: the first an application with multiple communications, and the second a multimedia application.
APA, Harvard, Vancouver, ISO, and other styles
11

Laskar, Z. (Zakaria). "Robust loop closures in 3D mapping systems." Master's thesis, University of Oulu, 2016. http://urn.fi/URN:NBN:fi:oulu-201606042373.

Full text
Abstract:
Given an image sequence and odometry information from a moving camera, we propose a batch-based approach for robust reconstruction of scene structure and camera motion. A key part of our approach is robust loop closure detection, which is achieved by combining odometry and visual correspondences in a novel manner via bundle-adjustment. First, a large-scale structure-from motion pipeline is utilized to get a set of candidate feature correspondences and the respective triangulated 3D landmarks, which pass primitive geometric consistency checks. Thereafter, the 3D landmarks with similar visual description are arranged in a pairwise manner, where each pair is a loop closure constraint and is subjected to a 3D alignment term which tries to merge the 3D point pairs. The correctness of loop closure constraints are evaluated by including them in a bundle-adjustment optimization which checks their compatibility with the odometry information and point projections of the 3D point pairs on the cameras viewing them. The candidate loop closure constraints are iteratively reweighted such that only compatible constraints retain higher weights in the next iteration of the optimization of structure and motion. The proposed approach is evaluated using real data from a Google Tango device. The results show that we are able to produce high quality reconstructions from challenging data which includes many false loop closure candidates due to repeating scene structures and texture patterns. The results also show that it produces better reconstructions than the device’s built-in software or a state-of-the-art pose-graph formulation. Therefore, we believe that our approach could be widely useful as a backend loop closure engine in robust reconstruction of indoor scenes
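The iterative reweighting of loop-closure constraints can be illustrated with a deliberately simplified sketch; a 1D pose chain stands in for the full bundle-adjustment problem used in the thesis, and the robust kernel, learning rate and all numbers are assumptions made for the toy example only.

```python
# Toy illustration of iteratively reweighted loop-closure selection:
# candidate closures inconsistent with odometry are progressively down-weighted.

import numpy as np

def optimise(odom, loops, weights, n_iters=200, lr=0.1):
    """Gradient-descent fit of 1D poses to odometry + weighted loop closures."""
    x = np.concatenate(([0.0], np.cumsum(odom)))     # initialise from odometry
    for _ in range(n_iters):
        grad = np.zeros_like(x)
        for i in range(len(odom)):                   # odometry residuals
            r = (x[i + 1] - x[i]) - odom[i]
            grad[i + 1] += r
            grad[i] -= r
        for w, (i, j, d) in zip(weights, loops):     # weighted loop-closure residuals
            r = (x[j] - x[i]) - d
            grad[j] += w * r
            grad[i] -= w * r
        x = x - lr * grad
        x = x - x[0]                                 # fix the gauge at the first pose
    return x

def reweight(x, loops, sigma=0.5):
    """Down-weight loop closures with large residuals (robust kernel)."""
    res = np.array([(x[j] - x[i]) - d for i, j, d in loops])
    return 1.0 / (1.0 + (res / sigma) ** 2)

odom = [1.0, 1.0, 1.0, 1.0]                          # noisy forward motion
loops = [(0, 4, 3.9), (1, 3, 0.2)]                   # one plausible, one false closure
weights = np.ones(len(loops))
for _ in range(5):                                   # alternate optimisation and reweighting
    x = optimise(odom, loops, weights)
    weights = reweight(x, loops)
print("weights per loop-closure candidate:", np.round(weights, 2))
```

After a few rounds the closure that contradicts the odometry retains a much lower weight, which is the behaviour the thesis exploits, there within a full bundle adjustment over 3D points and camera poses.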
APA, Harvard, Vancouver, ISO, and other styles
12

Tapp, Keith A. "Mapping democratic practice using soft systems methodologies /." [St. Lucia, Qld.], 2001. http://www.library.uq.edu.au/pdfserve.php?image=thesisabs/absthe16138.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Parnum, Iain Michael. "Benthic habitat mapping using multibeam sonar systems." Thesis, Curtin University, 2007. http://hdl.handle.net/20.500.11937/1131.

Full text
Abstract:
The aim of this study was to develop and examine the use of backscatter data collected with multibeam sonar (MBS) systems for benthic habitat mapping. Backscatter data were collected from six sites around the Australian coastal zone using the Reson SeaBat 8125 MBS system operating at 455 kHz. Benthic habitats surveyed in this study included: seagrass meadows, rhodolith beds, coral reef, rock, gravel, sand, muddy sand, and mixtures of those habitats. Methods for processing MBS backscatter data were developed for the Coastal Water Habitat Mapping (CWHM) project by a team from the Centre for Marine Science and Technology (CMST). The CMST algorithm calculates the seafloor backscatter strength derived from the peak and integral (or average) intensity of backscattered signals for each beam. The seafloor backscatter strength estimated from the mean value of the integral backscatter intensity was shown in this study to provide an accurate measurement of the actual backscatter strength of the seafloor and its angular dependence. However, the seafloor backscatter strength derived from the peak intensity was found to be overestimated when the sonar insonification area is significantly smaller than the footprint of receive beams, which occurs primarily at oblique angles. The angular dependence of the mean backscatter strength showed distinct differences between hard rough substrates (such as rock and coral reef), seagrass, coarse sediments and fine sediments. The highest backscatter strength was observed not only for the hard and rough substrate, but also for marine vegetation, such as rhodolith and seagrass. The main difference in acoustic backscatter from the different habitats was the mean level, or angle-average backscatter strength. However, additional information can also be obtained from the slope of the angular dependence of backscatter strength. It was shown that the distribution of the backscatter intensity can be modelled by a gamma distribution. The shape parameter was shown to relate to the ratio of the insonification area (which can be interpreted as an elementary scattering cell) to the footprint size rather than to the angular dependence of backscatter strength. When this ratio is less than 5, the gamma shape parameter is very similar for different habitats and is nearly linearly proportional to the ratio. Above a ratio of 5, the gamma shape parameter is not significantly dependent on the ratio and there is a noticeable difference in this parameter between different seafloor types. A new approach to producing images of backscatter properties, introduced and referred to as the angle cube method, was developed. The angle cube method uses spatial interpolation to construct a three-dimensional array of backscatter data that is a function of X-Y coordinates and the incidence angle. This allows the spatial visualisation of backscatter properties to be free from artefacts of the angular dependence and provides satisfactory estimates of the backscatter characteristics. Using the angle-average backscatter strength and slope of the angular dependence, derived by the angle cube method, in addition to seafloor terrain parameters, habitat probability and classification maps were produced to show distributions of sand, marine vegetation (e.g. seagrass and rhodolith) and hard substrate (e.g. coral and bedrock) for five different survey areas.
Ultimately, this study demonstrated that the combination of high-resolution bathymetry and backscatter strength data, as collected by MBS, is an efficient and cost-effective tool for benthic habitat mapping in coastal zones.
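The statistical idea behind the gamma shape parameter can be sketched in a few lines; the synthetic intensity samples and the two habitat labels below are assumptions made for illustration, and this is not the CMST processing chain.

```python
# Sketch: backscatter intensity samples are modelled with a gamma distribution
# whose fitted shape parameter differs between seafloor types. Synthetic data
# stands in for real MBS backscatter measurements.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical intensity samples for two habitats (arbitrary units).
sand = rng.gamma(shape=2.0, scale=1.0, size=5000)
reef = rng.gamma(shape=6.0, scale=0.5, size=5000)

for name, samples in [("sand", sand), ("reef", reef)]:
    shape, loc, scale = stats.gamma.fit(samples, floc=0)   # fix location at zero
    print(f"{name}: fitted gamma shape parameter = {shape:.2f}")
```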
APA, Harvard, Vancouver, ISO, and other styles
14

Parnum, Iain Michael. "Benthic habitat mapping using multibeam sonar systems." Curtin University of Technology, Dept. of Imaging and Applied Physics, Centre for Marine Science and Technology, 2007. http://espace.library.curtin.edu.au:80/R/?func=dbin-jump-full&object_id=18584.

Full text
Abstract:
The aim of this study was to develop and examine the use of backscatter data collected with multibeam sonar (MBS) systems for benthic habitat mapping. Backscatter data were collected from six sites around the Australian coastal zone using the Reson SeaBat 8125 MBS system operating at 455 kHz. Benthic habitats surveyed in this study included: seagrass meadows, rhodolith beds, coral reef, rock, gravel, sand, muddy sand, and mixtures of those habitats. Methods for processing MBS backscatter data were developed for the Coastal Water Habitat Mapping (CWHM) project by a team from the Centre for Marine Science and Technology (CMST). The CMST algorithm calculates the seafloor backscatter strength derived from the peak and integral (or average) intensity of backscattered signals for each beam. The seafloor backscatter strength estimated from the mean value of the integral backscatter intensity was shown in this study to provide an accurate measurement of the actual backscatter strength of the seafloor and its angular dependence. However, the seafloor backscatter strength derived from the peak intensity was found to be overestimated when the sonar insonification area is significantly smaller than the footprint of receive beams, which occurs primarily at oblique angles. The angular dependence of the mean backscatter strength showed distinct differences between hard rough substrates (such as rock and coral reef), seagrass, coarse sediments and fine sediments. The highest backscatter strength was observed not only for the hard and rough substrate, but also for marine vegetation, such as rhodolith and seagrass. The main difference in acoustic backscatter from the different habitats was the mean level, or angle-average backscatter strength. However, additional information can also be obtained from the slope of the angular dependence of backscatter strength.
It was shown that the distribution of the backscatter intensity can be modelled by a gamma distribution. The shape parameter was shown to relate to the ratio of the insonification area (which can be interpreted as an elementary scattering cell) to the footprint size rather than to the angular dependence of backscatter strength. When this ratio is less than 5, the gamma shape parameter is very similar for different habitats and is nearly linearly proportional to the ratio. Above a ratio of 5, the gamma shape parameter is not significantly dependent on the ratio and there is a noticeable difference in this parameter between different seafloor types. A new approach to producing images of backscatter properties, introduced and referred to as the angle cube method, was developed. The angle cube method uses spatial interpolation to construct a three-dimensional array of backscatter data that is a function of X-Y coordinates and the incidence angle. This allows the spatial visualisation of backscatter properties to be free from artefacts of the angular dependence and provides satisfactory estimates of the backscatter characteristics.
Using the angle-average backscatter strength and slope of the angular dependence, derived by the angle cube method, in addition to seafloor terrain parameters, habitat probability and classification maps were produced to show distributions of sand, marine vegetation (e.g. seagrass and rhodolith) and hard substrate (e.g. coral and bedrock) for five different survey areas. Ultimately, this study demonstrated that the combination of high-resolution bathymetry and backscatter strength data, as collected by MBS, is an efficient and cost-effective tool for benthic habitat mapping in coastal zones.
APA, Harvard, Vancouver, ISO, and other styles
15

Guzina, Luka. "UNDERSTANDING DIGITAL TWIN: A SYSTEMATIC MAPPING STUDY." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-55636.

Full text
Abstract:
Digital Twin is a concept that refers to a virtual representation of manufacturing elements such as personnel, products, assets and process definitions, a living model that continuously updates and changes as the physical counterpart changes to represent status, working conditions, product geometries and resource states in a synchronous manner [1]. The digital representation provides both the elements and the dynamics of how a physical part operates and lives throughout its life cycle. In recent years, the digital twin caught the attention of many researchers, who investigated its adoption in various fields. In this thesis, we report on the planning, execution and results of a systematic mapping study to examine the current application of the digital twin, its research relevance, application domains, enabling technologies and perceived benefits. We start from an initial set of 675 publications and through a rigorous selection process we obtain a final set of 29 primary studies. Using a classification framework, we extract relevant data. We analyse the extracted data using both quantitative and qualitative analyses, applying vertical and orthogonal analysis techniques. This work aims to investigate the current achievements of the Digital Twin with a focus on revealing the technologies it uses, its applications and the benefits it offers, as well as publication details.
APA, Harvard, Vancouver, ISO, and other styles
16

Siddiqui, Rafid. "On Fundamental Elements of Visual Navigation Systems." Doctoral thesis, Blekinge Tekniska Högskola, Institutionen för kommunikationssystem, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-00601.

Full text
Abstract:
Visual navigation is a ubiquitous yet complex task which is performed by many species for the purpose of survival. Although visual navigation is actively being studied within the robotics community, the determination of elemental constituents of a robust visual navigation system remains a challenge. Motion estimation is mistakenly considered as the sole ingredient to make a robust autonomous visual navigation system and therefore efforts are made to improve the accuracy of motion estimations. On the contrary, there are other factors which are as important as motion and whose absence could result in inability to perform seamless visual navigation such as the one exhibited by humans. Therefore, it is needed that a general model for a visual navigation system be devised which would describe it in terms of a set of elemental units. In this regard, a set of visual navigation elements (i.e. spatial memory, motion memory, scene geometry, context and scene semantics) are suggested as building blocks of a visual navigation system in this thesis. A set of methods are proposed which investigate the existence and role of visual navigation elements in a visual navigation system. A quantitative research methodology in the form of a series of systematic experiments is conducted on these methods. The thesis formulates, implements and analyzes the proposed methods in the context of visual navigation elements which are arranged into three major groupings; a) Spatial memory b) Motion Memory c) Manhattan, context and scene semantics. The investigations are carried out on multiple image datasets obtained by robot mounted cameras (2D/3D) moving in different environments. Spatial memory is investigated by evaluation of proposed place recognition methods. The recognized places and inter-place associations are then used to represent a visited set of places in the form of a topological map. Such a representation of places and their spatial associations models the concept of spatial memory. It resembles the humans’ ability of place representation and mapping for large environments (e.g. cities). Motion memory in a visual navigation system is analyzed by a thorough investigation of various motion estimation methods. This leads to proposals of direct motion estimation methods which compute accurate motion estimates by basing the estimation process on dominant surfaces. In everyday world, planar surfaces, especially the ground planes, are ubiquitous. Therefore, motion models are built upon this constraint. Manhattan structure provides geometrical cues which are helpful in solving navigation problems. There are some unique geometric primitives (e.g. planes) which make up an indoor environment. Therefore, a plane detection method is proposed as a result of investigations performed on scene structure. The method uses supervised learning to successfully classify the segmented clusters in 3D point-cloud datasets. In addition to geometry, the context of a scene also plays an important role in robustness of a visual navigation system. The context in which navigation is being performed imposes a set of constraints on objects and sections of the scene. The enforcement of such constraints enables the observer to robustly segment the scene and to classify various objects in the scene. A contextually aware scene segmentation method is proposed which classifies the image of a scene into a set of geometric classes. The geometric classes are sufficient for most of the navigation tasks. 
However, in order to facilitate the cognitive visual decision making process, the scene ought to be semantically segmented. The semantic of indoor scenes as well as semantic of the outdoor scenes are dealt with separately and separate methods are proposed for visual mapping of environments belonging to each type. An indoor scene consists of a corridor structure which is modeled as a cubic space in order to build a map of the environment. A “flash-n-extend” strategy is proposed which is responsible for controlling the map update frequency. The semantics of the outdoor scenes is also investigated and a scene classification method is proposed. The method employs a Markov Random Field (MRF) based classification framework which generates a set of semantic maps.
APA, Harvard, Vancouver, ISO, and other styles
17

Widoe, Jr Robert Owen. "Systems mapping: access to understanding, cooperation, and action." Thesis, University of Hawaii at Manoa, 2003. http://hdl.handle.net/10125/6846.

Full text
Abstract:
Systems Mapping uses qualitative research methods to capture and display, in graphic and narrative form, the workings of a system of human activity. In the past, Systems Mapping has been used singularly for program and systems development, for conducting research and evaluation, for guiding and informing policy development, for identifying system linkages, for designing and promoting collaborative systems efforts, and for informing funders, stakeholders, policy makers, and citizens. The purpose of this study was to use Systems Mapping to tie many of these diverse elements together, facilitate the successful redesign of a municipality's tourism and events promotion program, and to assess Systems Mapping's contribution to the process. Most Systems Mapping projects have unfolded in a relatively straight-forward manner. The current case required adapting and modifying the use of the Systems Mapping process. The differences and departures may have had a significant impact on the results for a number of reasons. Moreover, the mapping itself was stopped due to the identification of sensitive political issues that could not be resolved through a participatory process such as Systems Mapping. The Systems Mapping data were assessed using a version of Lincoln and Guba's (1985) criteria for establishing trustworthiness, and participant interviews were conducted to assess Systems Mapping's contributions. This use of Systems Mapping did produce some access to understanding and action, but not cooperation. There were a number of lessons learned, both cautionary and confirmatory regarding the use of Systems Mapping in this or other similar contexts.
xii, 219 leaves
APA, Harvard, Vancouver, ISO, and other styles
18

Luotsinen, Linus Jan. "AUTONOMOUS ENVIRONMENTAL MAPPING IN MULTI-AGENT UAV SYSTEMS." Master's thesis, University of Central Florida, 2004. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4421.

Full text
Abstract:
UAV units are considered by many researchers and aviation specialists to be the future and cutting edge of modern flight technology. This thesis discusses methods for efficient autonomous environmental mapping in a multi-agent domain. An algorithm that emphasizes teamwork by sharing the agents' local map information and exploration intentions is presented as a solution to the mapping problem. General theories on how to model and implement rational autonomous behaviour for UAV agents are presented. Three different human and tactical behaviour modeling techniques are evaluated. The author found the CxBR paradigm to be the most interesting approach. Also, in order to test and quantify the theories presented in this thesis, a simulation environment was developed. This simulation software allows UAV agents to operate in a visual 3-D environment with mountains, other various terrain types, danger points and enemies to model unexpected events.
M.S.
Department of Electrical and Computer Engineering
Engineering and Computer Science;
Electrical and Computer Engineering
APA, Harvard, Vancouver, ISO, and other styles
19

Bonney, Colin Andrew. "Fault tolerant task mapping in many-core systems." Thesis, University of York, 2016. http://etheses.whiterose.ac.uk/20899/.

Full text
Abstract:
The advent of many-core systems, networks on chip containing hundreds or thousands of homogeneous processor cores, presents new challenges in managing the cores effectively in response to processing demands, hardware faults and the need for heat management. The continually diminishing feature size of devices increases the probability of fabrication defects and the variability of performance of individual transistors. In many-core systems this can result in the failure of individual processing cores, routing nodes or communication links, which requires the use of fault-tolerant mechanisms. Diminishing feature size also increases the power density of devices, giving rise to the concept of dark silicon, where only a portion of the functionality available on a chip can be active at any one time. Core fault tolerance and management of dark silicon can both be achieved by allocating a percentage of cores to be idle at any one time. Idle cores can be used as dark silicon to evenly distribute heat generated by processing cores and can also be used as spare cores to implement fault tolerance. Both of these can be achieved by the dynamic allocation of processes to tasks in response to changes to the status of hardware resources and the demands placed on the system, which in turn requires real-time task mapping. This research proposes the use of a continuous fault/recovery cycle to implement graceful degradation and amelioration to provide real-time fault tolerance. Objective measures for core fault tolerance, link fault tolerance, network power and excess traffic have been developed for use by a multi-objective evolutionary algorithm that uses knowledge of the processing demands and hardware status to identify optimal task mappings. The fault/recovery cycle is shown to be effective in maintaining a high level of performance of a many-core array when presented with a series of hardware faults.
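Two of the objectives named above (spare cores for fault tolerance, communication distance as a proxy for network power and traffic) can be scored for a single candidate mapping with a toy fitness function; the function names, mesh size and traffic figures below are illustrative assumptions, not the thesis's evolutionary algorithm.

```python
# Illustrative evaluation of one candidate task mapping on a 2D-mesh NoC.

def hops(a, b, width):
    """Manhattan distance between two cores on a `width`-wide 2D mesh."""
    ax, ay = a % width, a // width
    bx, by = b % width, b // width
    return abs(ax - bx) + abs(ay - by)

def evaluate(mapping, traffic, n_cores, width):
    """Return (spare_core_ratio, communication_cost) for one mapping.

    mapping: task -> core index; traffic: (task_a, task_b) -> message volume.
    """
    used = set(mapping.values())
    spare_ratio = 1.0 - len(used) / n_cores              # higher = more spare cores
    comm_cost = sum(vol * hops(mapping[a], mapping[b], width)
                    for (a, b), vol in traffic.items())  # lower = less network power
    return spare_ratio, comm_cost

if __name__ == "__main__":
    width, n_cores = 4, 16
    mapping = {"t0": 0, "t1": 1, "t2": 5}
    traffic = {("t0", "t1"): 10, ("t1", "t2"): 4}
    print(evaluate(mapping, traffic, n_cores, width))
```

A multi-objective optimiser would trade these scores off across a population of candidate mappings; the sketch only shows how a single candidate might be measured.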
APA, Harvard, Vancouver, ISO, and other styles
20

El-Khaldi, Maher (Maher Sami). "Mapping boundaries of generative systems for design synthesis." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/39323.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Architecture, 2007.
Page 123 blank.
Includes bibliographical references (p. 121-122).
Architects have been experimenting with generative systems for design without a clear reference or theory of what, why or how to deal with such systems. In this thesis I argue for three points. The first is that generative systems in architecture are implemented at a skin-deep level, as they are only used to synthesize form within confined domains. The second is that such systems can only be implemented if a design formalism is defined. The third is that generative systems can be more deeply integrated within a design process if they are coupled with performance-based evaluation methods. These arguments are discussed in four chapters: 1- Introduction: a panoramic view of generative systems in architecture and in computing, mapping their occurrences and implementations. 2- Generative Systems for Design: highlights on integrating generative systems in architecture design processes; and discussions of six generative systems including: Algorithmic, Parametrics, L-systems, Cellular Automata, Fractals and Shape Grammars. 3- Provisional taxonomy: a summary table of system properties and a classification of generative systems properties as discussed in the previous chapter. 4- Conclusion: comments and explanations on why such systems are simplistically implemented within design.
by Maher El-Khaldi.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
21

Grewe, Dominik. "Mapping parallel programs to heterogeneous multi-core systems." Thesis, University of Edinburgh, 2014. http://hdl.handle.net/1842/8852.

Full text
Abstract:
Heterogeneous computer systems are ubiquitous in all areas of computing, from mobile to high-performance computing. They promise to deliver increased performance at lower energy cost than purely homogeneous, CPU-based systems. In recent years GPU-based heterogeneous systems have become increasingly popular. They combine a programmable GPU with a multi-core CPU. GPUs have become flexible enough to not only handle graphics workloads but also various kinds of general-purpose algorithms. They are thus used as a coprocessor or accelerator alongside the CPU. Developing applications for GPU-based heterogeneous systems involves several challenges. Firstly, not all algorithms are equally suited for GPU computing. It is thus important to carefully map the tasks of an application to the most suitable processor in a system. Secondly, current frameworks for heterogeneous computing, such as OpenCL, are low-level, requiring a thorough understanding of the hardware by the programmer. This high barrier to entry could be lowered by automatically generating and tuning this code from a high-level and thus more user-friendly programming language. Both challenges are addressed in this thesis. For the task mapping problem a machine learning-based approach is presented in this thesis. It combines static features of the program code with runtime information on input sizes to predict the optimal mapping of OpenCL kernels. This approach is further extended to also take contention on the GPU into account. Both methods are able to outperform competing mapping approaches by a significant margin. Furthermore, this thesis develops a method for targeting GPU-based heterogeneous systems from OpenMP, a directive-based framework for parallel computing. OpenMP programs are translated to OpenCL and optimized for GPU performance. At runtime a predictive model decides whether to execute the original OpenMP code on the CPU or the generated OpenCL code on the GPU. This approach is shown to outperform both a competing approach as well as hand-tuned code.
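The predictive-mapping idea in the abstract above can be sketched with a small classifier; the feature set, feature values and labels below are invented for illustration, and the thesis's actual features and model may differ.

```python
# Hedged sketch: static code features combined with input size feed a
# classifier that picks the device (CPU or GPU) for a kernel.

from sklearn.tree import DecisionTreeClassifier

# Each row: [arithmetic ops, memory ops, data-transfer bytes, input size]
X = [
    [1200,  300,    1_000_000,    10_000],
    [ 400,  900,   50_000_000,   500_000],
    [5000,  200,      500_000,     5_000],
    [ 300, 1100,  100_000_000, 1_000_000],
]
y = ["GPU", "CPU", "GPU", "CPU"]          # best device observed offline

model = DecisionTreeClassifier(max_depth=3).fit(X, y)

# Predict the device for an unseen kernel / input-size combination.
print(model.predict([[2000, 400, 2_000_000, 20_000]]))
```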
APA, Harvard, Vancouver, ISO, and other styles
22

Shieh, Jong-Jiann. "Fine grain mapping strategies for pipelined computer systems." Case Western Reserve University School of Graduate Studies / OhioLINK, 1990. http://rave.ohiolink.edu/etdc/view?acc_num=case1054583150.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Deveci, Mehmet. "Load-Balancing and Task Mapping for Exascale Systems." The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1429199721.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Johnson, Stephen Philip. "Mapping numerical software onto distributed memory parallel systems." Thesis, University of Greenwich, 1992. http://gala.gre.ac.uk/8676/.

Full text
Abstract:
The aim of this thesis is to further the use of parallel computers, in particular distributed memory systems, by providing strategies for parallelisation and developing the core component of tools to aid scalar software porting. The ported code must not only efficiently exploit available parallel processing speed and distributed memory, but also enable existing users of the scalar code to use the parallel version with identical inputs and allow maintenance to be performed by the scalar code author in conjunction with the parallel code. The data partition strategy has been used to parallelise an in-house solidification modelling code where all requirements for the parallel software were successfully met. To confirm the success of this parallelisation strategy, a much sterner test was used, parallelising the HARWELL-FLOW3D fluid flow package. The performance results of the parallel version clearly vindicate the conclusions of the first example. Speedup efficiencies of around 80 percent have been achieved on fifty processors for sizable models. In both these tests, the alterations to the code were fairly minor, maintaining the structure and style of the original scalar code, which can easily be recognised by its original author. The alterations made to these codes indicated the potential for parallelising tools, since the alterations were fairly minor and usually mechanical in nature. The current generation of parallelising compilers rely heavily on heuristic guidance in parallel code generation and other decisions that may be better made by a human. As a result, the code they produce will almost certainly be inferior to manually produced code. Also, in order not to sacrifice parallel code quality when using tools, the scalar code analysis to identify inherent parallelism in an application code, as used in parallelising compilers, has been extended to eliminate conservatively assumed dependencies, since these dependencies can greatly inhibit parallelisation. Extra information has been extracted both from control flow and from processing symbolic information. The tests devised to utilise this information enable the non-existence of a significant number of previously assumed dependencies to be proved. In some cases, the number of true dependencies has been more than halved. The dependence graph produced is of sufficient quality to greatly aid the parallelisation, with user interaction and interpretation, parallelism detection and code transformation validity being less inhibited by assumed dependencies. The use of tools rather than the black box approach removes the handicaps associated with using heuristic methods, if any relevant heuristic methods exist.
APA, Harvard, Vancouver, ISO, and other styles
25

Cook, Anthony. "Automated cartographic name placement using rule-based systems." Thesis, University of South Wales, 1988. https://pure.southwales.ac.uk/en/studentthesis/automated-cartographic-name-placement-using-rulebased-systems(d49af2c8-3a37-44c1-8cb6-a6cd3ec3195f).html.

Full text
Abstract:
This thesis describes automated cartographic name placement using rule-based systems. In particular it describes the problem involved with designing a system which is flexible enough to place names on a variety of maps. This is demonstrated using logic programming techniques written in PROLOG. Most previous name placement systems are either map specific or have demonstrated only a few aspects of name placement. However two of these systems, which use the rule-based approach for solving the name placement problem, do show greater flexibility. Nevertheless all known results from these seem unsophisticated when compared to many manually produced maps. This thesis describes further research into the use of rule-based systems. The systems described have the capability to handle a wider range of maps of greater complexity. Also described is a procedural program which implements an iterative strategy for name placement on the Ordnance Survey Route Planner map. The research attempts to classify label positions and configurations used on a wide range of maps and discusses ways of implementing these in an automated name placement system. A range of name placement rules are also studied in order to decide what type of data a flexible automated name placement system must be able to access. A combined vector and raster data structure approach is adopted. This supplies the necessary "visual" information needed to apply most of the name placement rules. Name placement and database primitives are used to construct the high level rules which make up the rule-based systems. This work has been undertaken in collaboration with the Ordnance Survey. The procedural name placement program, capable of placing names on the 1:625000 Route Planner map, has been implemented at their headquarters.
APA, Harvard, Vancouver, ISO, and other styles
26

Light, Brandon W. "Energy-efficient photon mapping." Link to electronic thesis, 2007. http://www.wpi.edu/Pubs/ETD/Available/etd-051007-092224/.

Full text
Abstract:
Thesis (M.S.) -- Worcester Polytechnic Institute.
Keywords: mobile devices; photon mapping; global illumination; ray tracing; energy; mobile; computer graphics. Includes bibliographical references (leaves 66-68).
APA, Harvard, Vancouver, ISO, and other styles
27

Amstutz, Christian. "Mapping to a Time-predictable Multiprocessor System-on-Chip." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-121296.

Full text
Abstract:
Traditional design methods could not cope with the recent development of multiprocessor systems-on-chip (MPSoC). Especially, hard real-time systems that require time-predictability are cumbersome to develop. What is needed is an efficient, automatic process that abstracts away all the implementation details. ForSyDe, a design methodology developed at KTH, allows this on the system modelling side. The NoC System Generator, another project at KTH, has the ability to automatically create complex systems-on-chip based on a network-on-chip on an FPGA. Both of them support the synchronous model of computation to ensure time-predictability. In this thesis, these two projects are analysed and modelled. Considering the characteristics of the projects and exploiting the properties of the synchronous model of computation, a mapping process to map processes to the processors at the different network nodes of the generated system-on-chip was developed. The mapping process is split into three steps: (1) binding processes to processors, (2) placement of the processors on the network nodes, and (3) scheduling of the processes on the nodes. An implementation of the mapping process is described and some synthetic examples were mapped to show the feasibility of the algorithms.
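The three listed steps can be illustrated with a deliberately naive sketch (round-robin binding, in-order placement, fixed per-node execution order); the function names and heuristics are assumptions made for the example, not the thesis's tool.

```python
# Toy three-step flow: bind processes to processors, place processors on
# network nodes, then schedule the processes bound to each processor.

from itertools import cycle

def bind(processes, processors):
    """Step 1: round-robin binding of processes to processors."""
    return dict(zip(processes, cycle(processors)))

def place(processors, nodes):
    """Step 2: place each processor on a network node (here: in order)."""
    return dict(zip(processors, nodes))

def schedule(binding):
    """Step 3: fixed execution order of the processes bound to each processor."""
    order = {}
    for proc, cpu in binding.items():
        order.setdefault(cpu, []).append(proc)
    return order

processes = ["p0", "p1", "p2", "p3", "p4"]
processors = ["cpu0", "cpu1"]
nodes = [(0, 0), (0, 1)]

binding = bind(processes, processors)
placement = place(processors, nodes)
print(binding, placement, schedule(binding), sep="\n")
```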
APA, Harvard, Vancouver, ISO, and other styles
28

Daga, Mayank. "Architecture-Aware Mapping and Optimization on Heterogeneous Computing Systems." Thesis, Virginia Tech, 2011. http://hdl.handle.net/10919/32535.

Full text
Abstract:
The emergence of scientific applications embedded with multiple modes of parallelism has made heterogeneous computing systems indispensable in high performance computing. The popularity of such systems is evident from the fact that three out of the top five fastest supercomputers in the world employ heterogeneous computing, i.e., they use dissimilar computational units. A closer look at the performance of these supercomputers reveals that they achieve only around 50% of their theoretical peak performance. This suggests that applications that were tuned for erstwhile homogeneous computing may not be efficient for today's heterogeneous computing and hence, novel optimization strategies are required to be exercised. However, optimizing an application for heterogeneous computing systems is extremely challenging, primarily due to the architectural differences in computational units in such systems. This thesis intends to act as a cookbook for optimizing applications on heterogeneous computing systems that employ graphics processing units (GPUs) as the preferred mode of accelerators. We discuss optimization strategies for multicore CPUs as well as for the two popular GPU platforms, i.e., GPUs from AMD and NVIDIA. Optimization strategies for NVIDIA GPUs have been well studied but when applied on AMD GPUs, they fail to measurably improve performance because of the differences in underlying architecture. To the best of our knowledge, this research is the first to propose optimization strategies for AMD GPUs. Even on NVIDIA GPUs, there exists a lesser known but an extremely severe performance pitfall called partition camping, which can affect application performance by up to seven-fold. To facilitate the detection of this phenomenon, we have developed a performance prediction model that analyzes and characterizes the effect of partition camping in GPU applications. We have used a large-scale, molecular modeling application to validate and verify all the optimization strategies. Our results illustrate that if appropriately optimized, AMD and NVIDIA GPUs can provide 371-fold and 328-fold improvement, respectively, over a hand-tuned, SSE-optimized serial implementation.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
29

Kaess, Michael. "Incremental smoothing and mapping." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/26572.

Full text
Abstract:
Thesis (Ph.D)--Computing, Georgia Institute of Technology, 2009.
Committee Chair: Dellaert, Frank; Committee Member: Bobick, Aaron; Committee Member: Christensen, Henrik; Committee Member: Leonard, John; Committee Member: Rehg, James. Part of the SMARTech Electronic Thesis and Dissertation Collection.
APA, Harvard, Vancouver, ISO, and other styles
30

Gadea, Cristian. "Collaborative Web-Based Mapping of Real-Time Sensor Data." Thesis, Université d'Ottawa / University of Ottawa, 2011. http://hdl.handle.net/10393/19772.

Full text
Abstract:
The distribution of real-time GIS (Geographic Information System) data among users is now more important than ever as it becomes increasingly affordable and important for scientific and government agencies to monitor environmental phenomena in real-time. A growing number of sensor networks are being deployed all over the world, but there is a lack of solutions for their effective monitoring. Increasingly, GIS users need access to real-time sensor data from a variety of sources, and the data must be represented in a visually-pleasing way and be easily accessible. In addition, users need to be able to collaborate with each other to share and discuss specific sensor data. The real-time acquisition, analysis, and sharing of sensor data from a large variety of heterogeneous sensor sources is currently difficult due to the lack of a standard architecture to properly represent the dynamic properties of the data and make it readily accessible for collaboration between users. This thesis will present a JEE-based publisher/subscriber architecture that allows real-time sensor data to be displayed collaboratively on the web, requiring users to have nothing more than a web browser and Internet connectivity to gain access to that data. The proposed architecture is evaluated by showing how an AJAX-based and a Flash-based web application are able to represent the real-time sensor data within novel collaborative environments. By using the latest web-based technology and relevant open standards, this thesis shows how map data and GIS data can be made more accessible, more collaborative and generally more useful.
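The publisher/subscriber dispatch at the core of such an architecture can be sketched in a few lines; the thesis builds this on a JEE stack with web clients, whereas the in-memory class, topic name and reading format below are purely illustrative.

```python
# Minimal in-memory publisher/subscriber sketch of the dispatch idea only
# (not the JEE-based architecture described in the abstract).

from collections import defaultdict

class SensorBroker:
    def __init__(self):
        self._subscribers = defaultdict(list)   # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, reading):
        for callback in self._subscribers[topic]:
            callback(reading)

broker = SensorBroker()
broker.subscribe("river/temperature", lambda r: print("map layer update:", r))
broker.publish("river/temperature", {"station": "S1", "celsius": 14.2})
```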
APA, Harvard, Vancouver, ISO, and other styles
31

Mellema, Garfield Richard. "An acoustic scatter-mapping imaging system." Thesis, University of British Columbia, 1990. http://hdl.handle.net/2429/29684.

Full text
Abstract:
The development of improved models of seismic diffraction is assisted by the availability of accurate scattering data. An acoustic scatter-mapping system was developed for the purpose of providing such data rapidly and at low cost. This system uses a source-receiver pair suspended on a trolley over the structure to be mapped. Signal generation, acquisition, processing, and plotting are performed on an AT-compatible microcomputer and a laser printer. The entire process can be performed in an automated manner within five hours, generating scatter-mapping plots in a format familiar to the geophysical industry. The system hardware was similar to those of Hilterman [1] and others referenced by him, but used a controlled source transducer. The available processing power of a microcomputer allowed the use of a 1 to 15 KHz swept-frequency source signal, similar to that used in Vibroseis and Chirp Radar, which is later crosscorrelated with received signal to provide precise scatter-mapping data for the target structure. Several examples of theoretical and experimental acoustic scatter-mappings are provided for comparison. The novelty of this system lies in its use of a swept frequency source signal. While common in the fields of seismology and radar, swept frequency source signals are new to the area of acoustic scatter mapping. When compared to a similar system using a pulsed source signal, this system produces a better controlled source signal of greater energy, resulting in a more useful resultant signal and better mapping characteristics. The system was able to map scattering from features in the target structure smaller than one percent of the crosscorrelated source signal's 37 mm dominant wavelength.
Applied Science, Faculty of
Electrical and Computer Engineering, Department of
Graduate
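The pulse-compression step described above (a 1-15 kHz swept-frequency source cross-correlated with the received signal to locate echoes) can be sketched as follows; the sample rate, sweep length, echo amplitude and simulated delay are assumed values, not the system's actual parameters.

```python
# Sketch: generate a 1-15 kHz linear chirp, embed a delayed echo in noise,
# and recover the delay by cross-correlation (matched filtering).

import numpy as np
from scipy.signal import chirp

fs = 96_000                                   # sample rate (assumed)
t = np.arange(0, 0.05, 1 / fs)                # 50 ms sweep (assumed length)
source = chirp(t, f0=1_000, f1=15_000, t1=t[-1], method="linear")

delay_samples = 480                           # simulated echo delay (5 ms)
received = np.zeros(len(source) + 2 * delay_samples)
received[delay_samples:delay_samples + len(source)] += 0.3 * source
received += 0.05 * np.random.default_rng(1).standard_normal(len(received))

xcorr = np.correlate(received, source, mode="valid")
print("estimated echo delay (samples):", int(np.argmax(np.abs(xcorr))))
```

The cross-correlation concentrates the sweep's energy into a sharp peak at the echo delay, which is the property that let the system map scatterers smaller than the dominant wavelength of the compressed signal.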
APA, Harvard, Vancouver, ISO, and other styles
32

Sedlacko, Michal, Robert-Andre Martinuzzi, Inge Røpke, Nuno Videira, and Paula Antunes. "Participatory systems mapping for sustainable consumption: Discussion of a method promoting systemic insights." Elsevier, 2014. http://dx.doi.org/10.1016/j.ecolecon.2014.11.020.

Full text
Abstract:
The paper describes our usage of and experience with the method of participatory systems mapping. The method, developed for the purpose of facilitating knowledge brokerage, builds on participatory modelling approaches and applications and was used in several events involving both researchers and policy makers. The paper presents and discusses examples of how different types of participatory interaction with causal loop diagrams ("system maps") produced different insights on issues related to sustainable consumption and enabled participatory reflection and sharing of knowledge. Together, these insights support a systemic understanding of the issues and thus the method provides instruments for coping with complexity when formulating policies for sustainable consumption. Furthermore the paper discusses the ability of the method - and its limits - to connect mental models of participants through structured discussion and thus bridge boundaries between different communities.
APA, Harvard, Vancouver, ISO, and other styles
33

Eskilsson, Anton. "Energy flow mapping of a sports facility : Energy flow mapping and suitable key performance indicator formulation for Rocklunda sports facility." Thesis, Mälardalens högskola, Akademin för ekonomi, samhälle och teknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-37280.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Siddiqui, Abujawad Rafid. "On Fundamental Elements of Visual Navigation Systems." Doctoral thesis, Blekinge Tekniska Högskola, Institutionen för kommunikationssystem, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-46484.

Full text
Abstract:
Visual navigation is a ubiquitous yet complex task performed by many species for the purpose of survival. Although visual navigation is actively studied within the robotics community, determining the elemental constituents of a robust visual navigation system remains a challenge. Motion estimation is often mistakenly considered the sole ingredient of a robust autonomous visual navigation system, and efforts are therefore concentrated on improving the accuracy of motion estimates. On the contrary, there are other factors which are as important as motion and whose absence could result in an inability to perform seamless visual navigation such as that exhibited by humans. Therefore, a general model of a visual navigation system needs to be devised which describes it in terms of a set of elemental units. In this regard, a set of visual navigation elements (i.e. spatial memory, motion memory, scene geometry, context and scene semantics) is suggested in this thesis as the building blocks of a visual navigation system. A set of methods is proposed to investigate the existence and role of these elements in a visual navigation system, and a quantitative research methodology in the form of a series of systematic experiments is conducted on them. The thesis formulates, implements and analyzes the proposed methods in the context of the visual navigation elements, which are arranged into three major groupings: a) spatial memory, b) motion memory, and c) Manhattan structure, context and scene semantics. The investigations are carried out on multiple image datasets obtained by robot-mounted cameras (2D/3D) moving in different environments. Spatial memory is investigated by evaluating the proposed place recognition methods. The recognized places and inter-place associations are then used to represent a visited set of places in the form of a topological map. Such a representation of places and their spatial associations models the concept of spatial memory and resembles the human ability to represent and map places in large environments (e.g. cities). Motion memory in a visual navigation system is analyzed by a thorough investigation of various motion estimation methods. This leads to proposals of direct motion estimation methods which compute accurate motion estimates by basing the estimation process on dominant surfaces. In the everyday world, planar surfaces, especially ground planes, are ubiquitous; motion models are therefore built upon this constraint. Manhattan structure provides geometrical cues which are helpful in solving navigation problems, and a few unique geometric primitives (e.g. planes) make up an indoor environment. A plane detection method is therefore proposed as a result of investigations performed on scene structure; the method uses supervised learning to classify the segmented clusters in 3D point-cloud datasets. In addition to geometry, the context of a scene also plays an important role in the robustness of a visual navigation system. The context in which navigation is performed imposes a set of constraints on objects and sections of the scene, and enforcing these constraints enables the observer to robustly segment the scene and classify the objects in it. A contextually aware scene segmentation method is proposed which classifies the image of a scene into a set of geometric classes that are sufficient for most navigation tasks.
However, in order to facilitate the cognitive visual decision-making process, the scene ought to be semantically segmented. The semantics of indoor and outdoor scenes are dealt with separately, and separate methods are proposed for the visual mapping of environments of each type. An indoor scene consists of a corridor structure which is modeled as a cubic space in order to build a map of the environment, and a "flash-n-extend" strategy is proposed to control the map update frequency. The semantics of outdoor scenes is also investigated and a scene classification method is proposed which employs a Markov Random Field (MRF) based classification framework to generate a set of semantic maps.
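To make the spatial-memory idea above concrete, the following minimal Python sketch shows one way recognized places and inter-place associations could be stored as a topological map. It is an illustration only, not the method of the thesis; the cosine-similarity matching, the threshold value and all names are assumptions.

import numpy as np

# Illustrative sketch (assumed names and thresholds): a topological map built from
# place-recognition results. Each place is summarised by an appearance descriptor;
# a new observation is matched against known places by cosine similarity and is
# either merged into the best match or added as a new node, with an edge recording
# the traversal from the previously visited place.

class TopologicalMap:
    def __init__(self, match_threshold=0.9):
        self.descriptors = []      # one appearance descriptor per place
        self.edges = set()         # inter-place associations (traversals)
        self.match_threshold = match_threshold

    def _best_match(self, descriptor):
        best_id, best_sim = None, -1.0
        for pid, d in enumerate(self.descriptors):
            sim = float(np.dot(d, descriptor) /
                        (np.linalg.norm(d) * np.linalg.norm(descriptor) + 1e-12))
            if sim > best_sim:
                best_id, best_sim = pid, sim
        return best_id, best_sim

    def observe(self, descriptor, previous_place=None):
        """Return the id of the recognised place, creating a new one if needed."""
        place_id, similarity = self._best_match(descriptor)
        if place_id is None or similarity < self.match_threshold:
            place_id = len(self.descriptors)
            self.descriptors.append(np.asarray(descriptor, dtype=float))
        if previous_place is not None and previous_place != place_id:
            self.edges.add((previous_place, place_id))
        return place_id

# Usage with stand-in descriptors (a real system would use image features).
tmap = TopologicalMap()
last = None
for desc in np.random.rand(6, 64):
    last = tmap.observe(desc, previous_place=last)
print(len(tmap.descriptors), "places,", len(tmap.edges), "associations")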
APA, Harvard, Vancouver, ISO, and other styles
35

Kleiner, Kevin James. "A satellite derived map of ecological systems in the East Gulf Coast plain, USA." Auburn, Ala., 2007. http://repo.lib.auburn.edu/07M%20Theses/KLEINER_KEVIN_36.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Ghawi, Raji. "Ontology-based cooperation of information systems : contributions to database-to-ontology mapping and XML-to-ontology mapping." Phd thesis, Université de Bourgogne, 2010. http://tel.archives-ouvertes.fr/tel-00559089.

Full text
Abstract:
This thesis addresses the area of ontology-based cooperation of information systems. We propose a global architecture called OWSCIS that is based on ontologies and web services for the cooperation of distributed heterogeneous information systems. In this thesis, we focus on the problem of connecting the local information sources to the local ontologies within the OWSCIS architecture. This problem is articulated around three main axes: 1) the creation of the local ontology from the local information sources, 2) the mapping of local information sources to an existing local ontology, and 3) the translation of queries over the local ontologies into queries over the local information sources.
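As an illustration of the database-to-ontology mapping axis described above, the sketch below shows a naive table-to-class, column-to-property mapping that turns relational rows into RDF triples using the rdflib library. The namespace, table name and data are invented examples; this is not the OWSCIS implementation.

from rdflib import Graph, Literal, Namespace, RDF

# Naive illustration of a database-to-ontology mapping (not the OWSCIS tools):
# one table maps to an ontology class, each column maps to a datatype property,
# and every row becomes an individual described by RDF triples.

EX = Namespace("http://example.org/onto#")   # hypothetical local-ontology namespace

table = "Patient"                            # invented relational data
rows = [
    {"id": 1, "name": "Alice", "age": 34},
    {"id": 2, "name": "Bob", "age": 51},
]

g = Graph()
g.bind("ex", EX)

for row in rows:
    individual = EX[f"{table}_{row['id']}"]
    g.add((individual, RDF.type, EX[table]))             # table  -> class
    for column, value in row.items():
        g.add((individual, EX[column], Literal(value)))  # column -> property

print(g.serialize(format="turtle"))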
APA, Harvard, Vancouver, ISO, and other styles
37

Smith, David Rowland. "Nonlinear system characterization through interpolated cell mapping." Thesis, Georgia Institute of Technology, 1988. http://hdl.handle.net/1853/19161.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

He, Ying. "Generic security templates for information system security arguments : mapping security arguments within healthcare systems." Thesis, University of Glasgow, 2014. http://theses.gla.ac.uk/5773/.

Full text
Abstract:
Industry reports indicate that the number of security incidents happening in healthcare organisations is increasing. Lessons learned (i.e. the causes of a security incident and the recommendations intended to avoid any recurrence) from those security incidents should ideally inform information security management systems (ISMS). The sharing of lessons learned is an essential activity in the “follow-up” phase of the security incident response lifecycle, one that has long been acknowledged but not given enough attention in academia and industry. This dissertation proposes a novel approach, the Generic Security Template (GST), aiming to feed back the lessons learned from real-world security incidents to the ISMS. It adapts the graphical Goal Structuring Notation (GSN) to present the lessons learned in a structured manner by mapping them to the security requirements of the ISMS. The suitability of the GST has been confirmed by demonstrating that instances of the GST can be produced from real-world security incidents in different countries, based on in-depth analysis of case studies. The usability of the GST has been evaluated using a series of empirical studies. The GST is empirically evaluated in terms of its effectiveness in assisting the communication of the lessons learned from security incidents, as compared to the traditional text-based approach alone. The results show that the GST can help to improve accuracy and reduce the mental effort involved in identifying the lessons learned from security incidents, and the results are statistically significant. The GST is further evaluated to determine whether users can apply it to structure insights derived from a specific security incident; the results show that students with a computer science background can create an instance of the GST. The acceptability of the GST is assessed in a healthcare organisation; strengths and weaknesses are identified and the GST has been adjusted to fit organisational needs. The GST is then further tested to examine its capability to feed back the security lessons to the ISMS. The results show that, by using the GST, lessons identified from security incidents in one healthcare organisation in a specific country can be transferred to another and can indeed inform improvements of the ISMS. In summary, the GST provides a unified way to feed back the lessons learned to the ISMS. It fosters an environment where different stakeholders can speak the same language while exchanging the lessons learned from security incidents around the world.
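A rough sketch of the kind of structure such a template imposes is given below: goals supported by strategies and solutions (lessons learned), each mapped to a security requirement. The class names, the example requirement labels and the grouping helper are illustrative assumptions and do not reproduce the thesis's Generic Security Template.

from dataclasses import dataclass, field
from typing import List

# Schematic data model loosely inspired by Goal Structuring Notation (GSN):
# a goal is supported by strategies, which are supported by solutions (here,
# lessons learned from an incident); each solution is mapped to an ISMS
# security requirement. Purely illustrative, not the GST itself.

@dataclass
class Solution:                # a lesson learned / evidence item
    text: str
    requirement: str           # ISMS security requirement it is mapped to

@dataclass
class Strategy:
    rationale: str
    solutions: List[Solution] = field(default_factory=list)

@dataclass
class Goal:
    statement: str
    strategies: List[Strategy] = field(default_factory=list)

    def lessons_by_requirement(self):
        """Group the lessons learned under the requirements they inform."""
        grouped = {}
        for strategy in self.strategies:
            for sol in strategy.solutions:
                grouped.setdefault(sol.requirement, []).append(sol.text)
        return grouped

goal = Goal(
    "Patient data remains confidential",
    [Strategy("Argue over access-control lessons",
              [Solution("Enforce role-based access after incident X", "A.9 Access control"),
               Solution("Review audit logs weekly", "A.12 Operations security")])],
)
print(goal.lessons_by_requirement())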
APA, Harvard, Vancouver, ISO, and other styles
39

Agha, Jafari Wolde Bahareh. "A systematic Mapping study of ADAS and Autonomous Driving." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-42754.

Full text
Abstract:
Nowadays, the autonomous driving revolution is getting closer to reality. The first step towards autonomous driving is to develop Advanced Driver Assistance Systems (ADAS). Driver-assistance systems are one of the fastest-growing segments in automotive electronics, since many forms of ADAS are already available. To investigate the state of the art in the development of ADAS towards autonomous driving, we conduct a Systematic Mapping Study (SMS). The SMS methodology is used to collect, classify, and analyze the relevant publications, and a classification is introduced based on the developments carried out in ADAS towards autonomous driving. Following the SMS methodology, we identified 894 relevant publications about ADAS and its developmental journey toward autonomous driving, covering the years 2012 to 2016. We classify our research area along three dimensions: technical classification, research type and research contribution; the related publications are classified under thirty-three technical classifications. This thesis sheds light on the achievements and shortcomings in this area. By evaluating the collected results, we answer our seven research questions. The results show that most of the publications fall under Models (research contribution) and Solution Proposal (research type), while the fewest publications fall under the Automated…Autonomous driving technical classification, which indicates a lack of publications in this area.
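For readers unfamiliar with how such mapping-study results are tabulated, the short sketch below counts publications over two facets (research type and contribution), the kind of co-occurrence matrix behind the bubble charts typically reported in systematic mapping studies. The example records are placeholders, not data from the thesis.

from collections import Counter
from itertools import product

# Minimal facet tabulation for a systematic mapping study: each publication is
# tagged with a research type and a contribution facet, and the co-occurrence
# counts form the usual bubble-chart matrix. Entries are invented placeholders.

publications = [
    {"research_type": "Solution Proposal", "contribution": "Model"},
    {"research_type": "Validation Research", "contribution": "Method"},
    {"research_type": "Solution Proposal", "contribution": "Model"},
]

counts = Counter((p["research_type"], p["contribution"]) for p in publications)

types = sorted({p["research_type"] for p in publications})
contribs = sorted({p["contribution"] for p in publications})
for rt, c in product(types, contribs):
    print(f"{rt:20s} x {c:10s}: {counts.get((rt, c), 0)}")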
APA, Harvard, Vancouver, ISO, and other styles
40

Hall, Bryan, University of Western Sydney, Faculty of Science and Technology, and School of Science. "A review of the environmental resource mapping system and a proof that it is impossible to write a general algorithm for analysing interactions between organisms distributed at locations described by a locationally linked database and physical properties recorded within the database." THESIS_FST_SS_Hall_B.xml, 1994. http://handle.uws.edu.au:8081/1959.7/750.

Full text
Abstract:
The Environmental Resource Mapping System (E-RMS) is a geographic information system (GIS) that is used by the National Parks and Wildlife Service to assist in the management of national parks. The package is available commercially from the Service and is used by other government departments for environmental management. E-RMS has also been present in Australian universities and used for academic work for a number of years. This thesis demonstrates that existing procedures for product quality and performance have not been followed in the production of the package, and that the package, and therefore much of the work undertaken with it, is fundamentally flawed. The E-RMS software contains and produces a number of serious mistakes; several problems are identified and discussed in this thesis. As a result of these shortcomings, the author recommends that an enquiry be conducted to investigate: (1) the technical feasibility of each project for which the E-RMS package has been used; (2) the full extent and consequences of the failings inherent in the package; and (3) the suitability of the E-RMS GIS package for the purposes for which it is sold. Australian Standard 3898 requires that the purpose, functions and limitations of consumer software be described. To comply with this standard, users of the E-RMS package would have to be informed of several factors related to it; these are discussed in the research. Failure to consider the usefulness and extractable nature of the information in any GIS database will inevitably lead to problems that may endanger the phenomena that the GIS is designed to protect.
Master of Applied Science (Environmental Science)
APA, Harvard, Vancouver, ISO, and other styles
41

Wurm, Kurt B. "The Integration of Cadastral Base Mapping with Cadastral Parcel Attribution." Fogler Library, University of Maine, 2003. http://www.library.umaine.edu/theses/pdf/WurmKB2003.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Gomez, Gloria. "Issues in preschool concept mapping: an interaction design perspective." Swinburne Research Bank, 2009. http://hdl.handle.net.

Full text
Abstract:
Thesis (PhD) - Faculty of Design, Swinburne University of Technology, 2008.
Submitted in partial fulfillment of the requirements of the degree of Doctor of Philosophy, [Faculty of Design], Swinburne University of Technology - 2008. Typescript. Bibliography: p. 348-357.
APA, Harvard, Vancouver, ISO, and other styles
43

Bailey, Ian David. "The design of a mapping language for STEP software systems." Thesis, Brunel University, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.318426.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Wade, Steven David. "The development of geographical information systems for nitrate vulnerability mapping." Thesis, Coventry University, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.336455.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Chacon, de San Baldomero Alejandro. "Read mapping on heterogeneous systems: scalability strategies for bioinformatic primitives." Doctoral thesis, Universitat Autònoma de Barcelona, 2021. http://hdl.handle.net/10803/671736.

Full text
Abstract:
Genomic sequencing is the key component of new advances in medicine, and its democratization is an important step in improving accessibility for the patient. The benefits involved in discovering new genomic variations are vast and include everything from early cancer detection to personalized medicine, drug design and genome editing. All of these potential uses have greatly increased the interest of the scientific community in the field of bioinformatics in recent years. Moreover, the emergence of next-generation sequencing methods has contributed to the rapid reduction of sequencing costs, enabling new applications of genomics in precision medicine. The main goal of this thesis is to improve the state of the art in performance and accuracy for genome sequencing through the use of heterogeneous computing platforms and hybrid hardware systems. More specifically, the work is focused on accelerating the problem of short-read mapping, as it is described as one of the most computationally expensive parts of the pipeline process. Overall, we aim to reduce the processing time and cost of genome sequencing, thus increasing the availability of this type of analysis. The main contribution of this thesis is the full GPU integration of the GEM3 mapper (GEM3-GPU), reporting significant improvements in performance and competitive accuracy results. The mapper reports the same output files for CPU and GPU and is one of the first GPU mappers to allow very long and variable read alignment. The proposals have been validated using real data, since the mapper has been running in production at a genomic sequencing center (Centro Nacional de Análisis Genómico (CNAG)). Together with the GEM3-GPU mapper, a complete bioinformatics CUDA library (GEM-cutter) has been created. The library provides the basic building blocks for genomic applications, which are highly optimised to run on GPUs. Gem-cutter offers an API based on send and receive primitives (message passing) and incorporates a scheduler to balance the work. Furthermore, the library supports all GPU architectures and Multi-GPU execution.
Universitat Autònoma de Barcelona. Programa de Doctorat en Informàtica
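The send-and-receive batching idea described in the abstract can be pictured with a small producer/consumer sketch: batches of reads are enqueued to a worker (standing in for a GPU kernel launch) and results are collected asynchronously. All names, the batch size and the fake scoring function are assumptions for illustration; this is not the GEM-cutter API.

import queue
import threading

# Schematic "send and receive" batching for read mapping: reads are grouped into
# fixed-size batches, sent to a worker thread (a stand-in for launching a GPU
# alignment kernel), and per-batch results are received asynchronously.

BATCH_SIZE = 4
work_q, result_q = queue.Queue(), queue.Queue()

def gpu_worker():
    while True:
        batch = work_q.get()
        if batch is None:                 # poison pill: shut the worker down
            break
        # Stand-in for a real alignment kernel: report read length as a "score".
        result_q.put([(read_id, len(seq)) for read_id, seq in batch])

reads = [(i, "ACGT" * (i + 1)) for i in range(10)]   # fake reads

worker = threading.Thread(target=gpu_worker, daemon=True)
worker.start()

# "Send": enqueue fixed-size batches so the device is kept busy.
for start in range(0, len(reads), BATCH_SIZE):
    work_q.put(reads[start:start + BATCH_SIZE])
work_q.put(None)

# "Receive": collect one result set per batch as it completes.
n_batches = -(-len(reads) // BATCH_SIZE)             # ceiling division
for _ in range(n_batches):
    for read_id, score in result_q.get():
        print(f"read {read_id}: score {score}")
worker.join()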
APA, Harvard, Vancouver, ISO, and other styles
46

Perumal, Kumaresen Pavithra. "AGENT BASED SYSTEMS IN SOFTWARE TESTING – A SYSTEMATIC MAPPING STUDY." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-48672.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Hung, Jonathan (Jonathan W.), and Nicholas Pierce. "A qualitative mapping and evaluation of an aerospace supply chain strategy." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/68823.

Full text
Abstract:
Thesis (M. Eng. in Logistics)--Massachusetts Institute of Technology, Engineering Systems Division, 2011.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 55-56).
An effective supply chain is critical to the success of the products and services sold by companies. Companies must have an explicit understanding of what the supply chain strategy is in order to evaluate it. While most organizations have well-documented business strategies, they lack the same for their supply chain strategy. The methodology proposed by Perez-Franco, Singh, and Sheffi (2011a; 2011b) is a way to evaluate a supply chain strategy by using interviews, surveys, and discussions. The goal for this project was to test the applicability of the Perez-Franco et al. methodology to the aerospace industry through an applied case. We conducted a qualitative mapping of the supply chain strategy for a specific satellite program in Lockheed Martin Space Systems (LMSS). This thesis research is the first time the methodology has been applied and tested with a company in the aerospace industry. As a whole, LMSS has increased focus on their supply chain, and works to directly align their supply chain with their business objectives. For our case, we selected a specific project within the Space Systems division that lacks an explicitly stated supply chain strategy and has a potential gap with objectives. Through our research, we found that the Perez-Franco et al. methodology is applicable to the aerospace industry. As a result of this case application, we propose and present potential deviations and additions to build upon the methodology that yield interesting insights. The results with LMSS revealed areas of disagreement identified through evaluating themes of support, consistency, and sufficiency. Additionally, the methodology allowed us to conduct a diagnostic of the supply chain strategy against business goals. The primary conclusion of the diagnostic was a perceived conflict between quality and affordability initiatives. Our key recommendation is that the company investigate this further to locate the root cause(s) of the disagreement. Outcomes from this case show that the methodology can be applied to a wide number of industries.
by Jonathan Hung and Nicholas Pierce.
M.Eng. in Logistics
APA, Harvard, Vancouver, ISO, and other styles
48

Corner, Robert J. "Knowledge representation in geographic information systems." Thesis, Curtin University, 1999. http://hdl.handle.net/20.500.11937/928.

Full text
Abstract:
In order to satisfy the increasing demand for better, smarter, more flexible land resource information, an alternative form of representation is proposed. That representation is achieved through the coupling of Expert System methods and Geographic Information Systems. Instead of representing resource information using entities such as soil types, defined by rigid boundaries on a map, a more fluid presentation is proposed: individual resource attributes are represented by surfaces that describe their probability of occurrence, at a number of levels, across a landscape. Such flexible representations, which are designed to better capture the mental models behind their creation, are capable of being combined and synthesised to answer a wide range of resource queries. An investigation of methods of knowledge representation in a number of fields of research led to the belief that a Bayesian Network provides a representational calculus that is appropriate to the "fuzzy" and imprecise conceptual models used in resource assessment. The fundamental mathematical principles of such networks have been tailored to provide a representation that is in tune with the intuitive processes of a surveyor's thinking. Software has been written to demonstrate the method and tested on a variety of data sets from Australia and overseas. These tests and demonstrations have used a range of densities of knowledge and a range of acuity in evidential data. In general the results accord with the mental models used as drivers. A number of operational facets of the method have been highlighted during these demonstrations and attention has been given to a discussion of them.
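A toy version of representing a resource attribute as a probability surface is sketched below: each grid cell's prior is updated with Bayes' rule given a binary evidence layer. The prior, reliability values and grid are invented for illustration and are not taken from the thesis.

import numpy as np

# Toy per-cell Bayesian update in the spirit of a probability-of-occurrence
# surface: each grid cell gets P(attribute | evidence) from a prior surface and
# the likelihood of an observed evidence layer. All numbers are invented.

prior = np.full((4, 4), 0.30)                 # prior P(attribute present) per cell
evidence = np.random.rand(4, 4) > 0.5         # a binary evidence layer (e.g. wet/dry)

p_e_given_present = 0.80                      # assumed indicator reliability
p_e_given_absent = 0.20

# Likelihood of the observed evidence value in each cell under both hypotheses.
like_present = np.where(evidence, p_e_given_present, 1 - p_e_given_present)
like_absent = np.where(evidence, p_e_given_absent, 1 - p_e_given_absent)

posterior = (like_present * prior) / (like_present * prior + like_absent * (1 - prior))
print(np.round(posterior, 2))                 # posterior probability surface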
APA, Harvard, Vancouver, ISO, and other styles
49

Pereira, Savio Joseph. "On the utilization of Simultaneous Localization and Mapping (SLAM) along with vehicle dynamics in Mobile Road Mapping Systems." Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/94425.

Full text
Abstract:
Mobile Road Mapping Systems (MRMS) are the current solution to the growing demand for high-definition road surface maps in wide-ranging applications from pavement management to autonomous vehicle testing. The focus of this research work is to improve the accuracy of MRMS by using the principles of Simultaneous Localization and Mapping (SLAM). First, a framework for describing the sensor measurement models in MRMS is developed. Next, the problem of estimating the road surface from the set of sensor measurements is formulated as a SLAM problem and two approaches are proposed to solve it. The first is an incremental solution wherein sensor measurements are processed in sequence using an Extended Kalman Filter (EKF). The second is a post-processing solution wherein the SLAM problem is formulated as an inference problem over a factor graph and existing factor graph SLAM techniques are used to solve it. For the mobile road mapping problem, the road surface being measured is one of the primary inputs to the dynamics of the MRMS. Hence, concurrent to the main objective, this work also investigates the use of the dynamics of the system's host vehicle to improve the accuracy of the MRMS. Finally, a novel method that builds on the concepts of the popular model-fitting algorithm Random Sample Consensus (RANSAC) is developed in order to identify outliers in road surface measurements and to estimate the road elevations at grid nodes from these measurements. The developed methods are validated in a simulated environment and the results demonstrate a significant improvement in the accuracy of MRMS over current state-of-the-art methods.
Doctor of Philosophy
Mobile Road Mapping Systems (MRMS) are the current solution to the growing demand for high-definition road surface maps in wide-ranging applications from pavement management to autonomous vehicle testing. The objective of this research work is to improve the accuracy of MRMS by investigating methods to improve the sensor data fusion process. The main focus of this work is to apply the principles from the field of Simultaneous Localization and Mapping (SLAM) in order to improve the accuracy of MRMS. The concept of SLAM has been successfully applied to the field of mobile robot navigation, and thus the motivation of this work is to investigate its application to the problem of mobile road mapping. For the mobile road mapping problem, the road surface being measured is one of the primary inputs to the dynamics of the MRMS. Hence this work also investigates whether knowledge regarding the dynamics of the system can be used to improve the accuracy. Also developed as part of this work is a novel method for identifying outliers in road surface datasets and estimating elevations at road surface grid nodes. The developed methods are validated in a simulated environment and the results demonstrate a significant improvement in the accuracy of MRMS over current state-of-the-art methods.
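The RANSAC-inspired outlier handling mentioned above can be pictured with the following sketch, which robustly estimates the elevation at a single grid node from noisy point measurements: random one-point hypotheses are scored by consensus and the largest inlier set is averaged. The tolerance, iteration count and constant-elevation model are assumptions for illustration, not the algorithm developed in the thesis.

import numpy as np

# RANSAC-flavoured robust elevation estimation at one grid node: each iteration
# proposes a candidate elevation from a single randomly chosen measurement, the
# consensus set is every measurement within a tolerance, and the final estimate
# averages the largest consensus set found.

def robust_elevation(z, tol=0.02, iters=100, rng=np.random.default_rng(0)):
    best_inliers = np.zeros(len(z), dtype=bool)
    for _ in range(iters):
        candidate = z[rng.integers(len(z))]          # minimal sample: one point
        inliers = np.abs(z - candidate) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return z[best_inliers].mean(), best_inliers

measurements = np.concatenate([
    np.random.normal(0.10, 0.005, 40),   # true surface near 0.10 m
    np.random.normal(0.40, 0.050, 5),    # spurious returns / outliers
])
elevation, inliers = robust_elevation(measurements)
print(f"estimated elevation: {elevation:.3f} m using {inliers.sum()} inliers")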
APA, Harvard, Vancouver, ISO, and other styles
50

Al-Kadid, M. Ajjan. "Identification of nonlinear dynamic systems using the force-state mapping technique." Thesis, Queen Mary, University of London, 1989. http://qmro.qmul.ac.uk/xmlui/handle/123456789/25569.

Full text
Abstract:
The identification of the dynamic characteristics of nonlinear systems is of increasing interest in the field of modal testing. In this work an investigation has been carried out into the force-state mapping approach to identification of nonlinear systems proposed by Masri and Caughey. They originally suggested a nonparametric identification technique based on curve fitting the restoring force in terms of the velocity and displacement using two dimensional Chebyshev polynomials. It has been shown that the use of Chebyshev polynomials is unnecessarily restrictive and that a simpler approach based on ordinary polynomials and special functions provides a simpler, faster and more accurate identification for polynomial and nonpolynomial types of nonlinearity. This simpler approach has allowed the iterative identification technique for multi-degree of freedom systems to be simplified and a direct identification approach, which is not subject to bias errors, has been suggested. A new procedure for identifying both the type and location of nonlinear elements in lumped parameter systems has been developed and has yielded encouraging results. The practical implementation of the force-state mapping technique required the force, acceleration, velocity and displacement signals to be available at the same instants of time for each measurement station. In order to minimise the instrumentation required, only the force and acceleration are measured and the remaining signals are estimated by integrating the acceleration. The integration problem has been investigated using several approaches both in the frequency and time domains. An analysis of the sensitivity of the estimated parameters with respect to any amplitude and phase measurement errors has been carried out for single-d.o.f. linear systems. Estimates are shown to be extremely sensitive to phase errors for lightly damped structures. The estimation of the mass or generalised mass and modal matrices required for the identification of single or multi-d.o.f. nonlinear systems respectively, has also been investigated. Initial estimates were obtained using a linear multi-point force appropriation method, normally used for the excitation of normal modes. These estimates were then refined using a new technique based on studying the sensitivity of the mass with respect to the estimated system parameters obtained using a nonlinear model. This sensitivity approach seemed promising since accurate results were obtained. It was also shown that accurate estimates for the modal matrix were not essential for carrying out a force-state mapping identification. Finally, the technique has been applied experimentally to the identification of a cantilevered T-beam structure with stiffness and damping nonlinearity. The cases of two well separated and then two fairly close modes were considered. Reasonable agreement between the behaviour of the nonlinear mathematical model and the structure was achieved considering inaccuracies in the measurement set-up. Conclusions have been drawn and some ideas for future work presented.
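The core of the force-state mapping idea, fitting the restoring force as an ordinary polynomial in displacement and velocity, can be sketched in a few lines: with mass m, measured force f and acceleration a, the restoring force r = f - m*a is regressed on products x^i v^j by linear least squares. The simulated Duffing-type data and the polynomial order below are assumptions for illustration, not the thesis's experimental procedure.

import numpy as np

# Single-d.o.f. force-state mapping sketch: the restoring force r = f - m*a is
# curve-fitted as an ordinary polynomial in displacement x and velocity v by
# linear least squares. The data below is simulated purely for illustration.

rng = np.random.default_rng(1)
m, k, k3, c = 1.0, 100.0, 5.0e4, 0.5          # assumed "true" parameters
x = rng.uniform(-0.05, 0.05, 2000)             # displacement samples
v = rng.uniform(-1.0, 1.0, 2000)               # velocity samples
a = rng.normal(0.0, 2.0, 2000)                 # acceleration samples
f = m * a + k * x + k3 * x**3 + c * v          # applied force = m*a + r(x, v)

r = f - m * a                                  # restoring force from force-state data

# Basis functions: products x^i * v^j up to cubic order in each state variable.
terms = [(i, j) for i in range(4) for j in range(4)]
A = np.column_stack([x**i * v**j for i, j in terms])
coeffs, *_ = np.linalg.lstsq(A, r, rcond=None)

for (i, j), cij in zip(terms, coeffs):
    if abs(cij) > 1e-2:
        print(f"x^{i} v^{j}: {cij:.4g}")       # expect the k, k3 and c terms to dominate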
APA, Harvard, Vancouver, ISO, and other styles
