Dissertations / Theses on the topic 'Geographic Information Retrieval'

Consult the top 50 dissertations and theses for your research on the topic 'Geographic Information Retrieval'.


1

Lakey, John Christopher. "Hierarchical Geographical Identifiers as an Indexing Technique for Geographic Information Retrieval." Mississippi State University, 2008. http://sun.library.msstate.edu/ETD-db/theses/available/etd-11062008-195327/.

Full text
Abstract:
Location plays an ever increasing role in modern web-based applications. Many of these applications leverage off-the-shelf search engine technology to provide interactive access to large collections of data. Unfortunately, these commodity search engines do not provide special support for location-based indexing and retrieval. Many applications overcome this constraint by applying geographic bounding boxes in conjunction with range queries. We propose an alternative technique based on geographic identifiers and suggest it will yield faster query evaluation and provide higher search precision. Our experiment compared the two approaches by executing thousands of unique queries on a dataset with 1.8 million records. Based on the quantitative results obtained, our technique yielded drastic performance improvements in both query execution time and precision.
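The abstract does not spell out the identifier scheme itself, but the general idea of hierarchical geographic identifiers can be sketched with a geohash-style quadtree encoding. This is a minimal illustration, not the thesis's actual technique; all names and the depth parameter are assumptions. Indexing a record under every prefix of its cell id turns a coarse area query into a single posting-list lookup rather than a numeric range scan over a bounding box.

```python
def cell_id(lat, lon, depth=8):
    """Encode a point as a quadtree cell path such as '2301...'.
    Each digit halves the current lat/lon extent, so every prefix
    names an ancestor cell that contains the point."""
    lat_lo, lat_hi, lon_lo, lon_hi = -90.0, 90.0, -180.0, 180.0
    digits = []
    for _ in range(depth):
        lat_mid = (lat_lo + lat_hi) / 2.0
        lon_mid = (lon_lo + lon_hi) / 2.0
        quad = 0
        if lat >= lat_mid:
            quad |= 2
            lat_lo = lat_mid
        else:
            lat_hi = lat_mid
        if lon >= lon_mid:
            quad |= 1
            lon_lo = lon_mid
        else:
            lon_hi = lon_mid
        digits.append(str(quad))
    return "".join(digits)

index = {}

def add_record(doc_id, lat, lon):
    # Index the record under every prefix of its cell id, so any
    # coarser cell query becomes a single dictionary (posting list) lookup.
    cid = cell_id(lat, lon)
    for i in range(1, len(cid) + 1):
        index.setdefault(cid[:i], set()).add(doc_id)

def query(lat, lon, depth):
    # All records whose cell id shares this prefix, i.e. fall in this cell.
    return index.get(cell_id(lat, lon, depth), set())

add_record("london_doc", 51.5, -0.13)
add_record("paris_doc", 48.85, 2.35)
```

Because each cell id is an ordinary term, a commodity search engine can index it like any other token, which is the constraint the abstract describes.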
APA, Harvard, Vancouver, ISO, and other styles
2

Overell, Simon E. "Geographic information retrieval: Classification, disambiguation and modelling." Thesis, Imperial College London, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.504918.

Full text
3

Zhu, Bin, and Hsinchun Chen. "Validating a Geographic Image Retrieval System." Wiley Periodicals, Inc, 2000. http://hdl.handle.net/10150/105934.

Full text
Abstract:
Artificial Intelligence Lab, Department of MIS, University of Arizona
This paper summarizes a prototype geographical image retrieval system that demonstrates how to integrate image processing and information analysis techniques to support large-scale content-based image retrieval. By using an image as its interface, the prototype system addresses a troublesome aspect of traditional retrieval models, which require users to have complete knowledge of the low-level features of an image. In addition, we describe an experiment to validate the performance of this image retrieval system against that of human subjects, in an effort to address the scarcity of research evaluating the performance of an algorithm against that of human beings. The results of the experiment indicate that the system could do as well as human subjects in accomplishing the tasks of similarity analysis and image categorization. We also found that under some circumstances the texture features of an image are insufficient to represent a geographic image. We believe, however, that our image retrieval system provides a promising approach to integrating image processing techniques and information retrieval algorithms.
4

Rydberg, Christoffer. "Time Efficiency of Information Retrieval with Geographic Filtering." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-172918.

Full text
Abstract:
This study addresses the time efficiency of two major models within Information Retrieval (IR): the Extended Boolean Model (EBM) and the Vector Space Model (VSM). Both models use the same weighting scheme, based on term frequency-inverse document frequency (tf-idf). The VSM ranks document-query similarity with a cosine score. The EBM uses P-norm scores, which rank documents not just by matching terms but also by taking into account the Boolean connectives between the terms of the query. Additionally, this study investigates how documents with a single geographic affiliation can be retrieved based on features such as the location and geometry of the geographic surface, and how this geographic search is best integrated with the two IR models described above. From previous research we conclude that an index based on Z-space-filling curves (Z-SFC) is the best approach for documents with a single geographic affiliation. When documents are retrieved from the Z-SFC index, there is no guarantee that they are relevant to the search area; it is, however, guaranteed that only the retrieved documents can be relevant. Furthermore, the ranked output of the IR models gives the geographic search a great advantage: we can focus on documents with high relevance. We intersect the results of an IR model with the results from the Z-SFC index, sort the resulting list by relevance, and then iterate over it, checking each document's geometry against the search geometry and retrieving only documents whose geometries are relevant to the search. Since the user is interested only in the top results, we can stop as soon as a sufficient number of results has been obtained. The conclusion of this study is that the VSM is an easy-to-implement, time-efficient retrieval model.
It is inferior to the EBM in the sense that it is a rather simple bag-of-words model, while the EBM allows term conjunctions and disjunctions to be specified. The geographic search has proven time efficient and independent of which of the two IR models is used. The gap in efficiency between the VSM and the EBM, however, increases drastically as queries get longer and more results are obtained. Depending on the requirements of the user, the collection size, the length of queries, and so on, the benefits of the EBM might outweigh its performance penalty; for search engines with a large document collection and many users, however, it is likely to be too slow.
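The Z-space-filling-curve index at the heart of the geographic search can be illustrated with a minimal Morton-code sketch (the bit depth and function names below are illustrative, not taken from the thesis). Interleaving the bits of the quantised longitude and latitude yields a one-dimensional key under which nearby points tend to cluster, so an ordinary sorted index can serve coarse two-dimensional lookups; as the abstract notes, the candidates retrieved this way are a superset, and exact geometry intersection must still be checked afterwards.

```python
def interleave(x, y, bits=16):
    """Z-order (Morton) code: bits of x at even positions, bits of y at odd."""
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i)
        z |= ((y >> i) & 1) << (2 * i + 1)
    return z

def morton(lat, lon, bits=16):
    """Quantise a lat/lon pair onto a 2**bits grid and return its Z-code.
    Nearby points tend to share leading bits, so one sorted 1-D index
    can answer coarse 2-D lookups; only the retrieved candidates *can*
    be relevant, and each is then tested against the search geometry."""
    x = int((lon + 180.0) / 360.0 * ((1 << bits) - 1))
    y = int((lat + 90.0) / 180.0 * ((1 << bits) - 1))
    return interleave(x, y, bits)
```

A B-tree (or any sorted structure) over these codes then supports the intersect-and-iterate strategy the abstract describes: scan candidates in relevance order and stop once enough exact matches are found.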
5

Hu, You-Heng. "Development, evaluation and application of a geographic information retrieval system." Thesis, University of New South Wales, School of Surveying & Spatial Information Systems, 2007. http://handle.unsw.edu.au/1959.4/41754.

Full text
Abstract:
Geographic Information Retrieval (GIR) systems provide users with functionalities for the representation, storage, organisation of, and access to various types of electronic information resources based on their textual and geographic context. This thesis explores various aspects of the development, evaluation and application of GIR systems. The first study focuses on the extraction and grounding of geographic information entities. My approach for this study consists of a hierarchical structure-based geographic relationship model that is used to describe connections between geographic information entities, and a supervised machine learning algorithm that is used to resolve ambiguities. The proposed approach has been evaluated on a toponym disambiguation task using a large collection of news articles. The second study details the development and validation of a GIR ranking mechanism. The proposed approach takes advantage of the power of the Genetic Programming (GP) paradigm with the aim of finding an optimal functional form that integrates both textual and geographic similarities between retrieved documents and a given user query. My approach has been validated by applying it to a large collection of geographic metadata documents. The third study addresses the problem of modelling the GIR retrieval process so as to take into account both thematic and geographic criteria. Based on the Spreading Activation Network (SAN), the proposed model consists of a two-layer associative network that is used to construct a structured search space; a constrained spreading activation algorithm that is used to retrieve and rank relevant documents; and a geographic knowledge base that is used to provide the necessary domain knowledge for the network. The retrieval performance of my model has been evaluated using the GeoCLEF 2006 tasks. The fourth study discusses the publishing, browsing and navigation of geographic information on the World Wide Web.
Key challenges in designing and implementing a GIR user interface, through which online content can be systematically organised by its geospatial characteristics and can be efficiently accessed and interrelated, are addressed. The effectiveness and usefulness of the system are shown by applying it to a large collection of geo-tagged web pages.
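The functional form evolved by Genetic Programming in the second study is not given here; as a hedged illustration, the simplest member of the space being searched over is a fixed linear blend of textual and geographic similarity. All function names, the box-overlap measure, and the `alpha` parameter below are assumptions for the sketch, not the thesis's result.

```python
import math

def text_sim(qvec, dvec):
    """Cosine similarity between tf-idf vectors stored as {term: weight} dicts."""
    dot = sum(w * dvec.get(t, 0.0) for t, w in qvec.items())
    nq = math.sqrt(sum(w * w for w in qvec.values()))
    nd = math.sqrt(sum(w * w for w in dvec.values()))
    return dot / (nq * nd) if nq and nd else 0.0

def geo_sim(qbox, dbox):
    """Fraction of the query box (min_lon, min_lat, max_lon, max_lat)
    covered by the document's box."""
    w = min(qbox[2], dbox[2]) - max(qbox[0], dbox[0])
    h = min(qbox[3], dbox[3]) - max(qbox[1], dbox[1])
    if w <= 0 or h <= 0:
        return 0.0
    qarea = (qbox[2] - qbox[0]) * (qbox[3] - qbox[1])
    return (w * h) / qarea

def combined_score(qvec, dvec, qbox, dbox, alpha=0.5):
    # GP searches over functional forms; a linear blend is the simplest
    # candidate in that space.
    return alpha * text_sim(qvec, dvec) + (1 - alpha) * geo_sim(qbox, dbox)
```

GP's role is precisely to replace the hand-picked `alpha` and the linear shape with an evolved expression tree fitted to relevance judgements.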
6

Paiva, Joao Argemiro de Carvalho. "Topological Equivalence and Similarity in Multi-Representation Geographic Databases." Fogler Library, University of Maine, 1998. http://www.library.umaine.edu/theses/pdf/PaivaJA1998.pdf.

Full text
7

McFarland, Sean Alan. "Decision making theory with geographic information systems support." CSUSB ScholarWorks, 2008. https://scholarworks.lib.csusb.edu/etd-project/3393.

Full text
Abstract:
Decisions are made with varying degrees of effectiveness and efficiency and are influenced by a myriad of internal and external forces. Decision Support Systems (DSS) software can effectively aid decision making by processing the facts and producing meaningful outputs for use by the person or team making the final choice. Geographic Information Systems (GIS), a form of DSS, are very effective when locational data are present. This thesis discusses the use of GIS software in decision-making procedures.
8

McCurry, David B. "Provenance Tracking in a Commons of Geographic Data." Fogler Library, University of Maine, 2007. http://www.library.umaine.edu/theses/pdf/McCurryDB2007.pdf.

Full text
9

Fraser, Mark E. "Architecture and methodology for storage, retrieval and presentation of geo-spatial information." [Gainesville, Fla.] : University of Florida, 2001. http://purl.fcla.edu/fcla/etd/UFE0000316.

Full text
Abstract:
Thesis (M.S.)--University of Florida, 2001.
Title from title page of source document. Document formatted into pages; contains xi, 77 p.; also contains graphics. Includes vita. Includes bibliographical references.
10

Wang, Wei. "Automated spatiotemporal and semantic information extraction for hazards." Diss., University of Iowa, 2014. https://ir.uiowa.edu/etd/1415.

Full text
Abstract:
This dissertation explores three research topics related to automated spatiotemporal and semantic information extraction about hazard events from Web news reports and other social media. The dissertation makes a unique contribution in bridging geographic information science, geographic information retrieval, and natural language processing. Geographic information retrieval and natural language processing techniques are applied to extract spatiotemporal and semantic information automatically from Web documents, and to retrieve information about patterns of hazard events that are not explicitly described in the texts. Chapters 2, 3 and 4 can be regarded as three standalone journal papers. The research topics covered by the three chapters are related to each other and are presented sequentially. Chapter 2 begins with an investigation of methods for automatically extracting spatial and temporal information about hazards from Web news reports. A set of rules is developed to combine the spatial and temporal information contained in the reports, based on how this information is presented in text, in order to capture the dynamics of hazard events (e.g., changes in event locations, new events occurring) as they occur over space and time. Chapter 3 presents an approach for retrieving semantic information about hazard events using ontologies and semantic gazetteers. With this work, information on the different kinds of events (e.g., impact, response, or recovery events) can be extracted, as well as information about hazard events at different levels of detail. Using the methods presented in Chapters 2 and 3, an approach for automatically extracting spatial, temporal, and semantic information from tweets is discussed in Chapter 4. Four different elements of tweets are used for assigning appropriate spatial and temporal information to hazard events in tweets.
Since tweets represent shorter, but more current information about hazards and how they are impacting a local area, key information about hazards can be retrieved through extracted spatiotemporal and semantic information from tweets.
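As a rough illustration of the kind of rule-based pairing of spatial and temporal expressions described above: the toy gazetteer, the date pattern, and the nearest-date rule below are assumptions for the sketch, far simpler than the dissertation's actual rule set.

```python
import re

# Toy gazetteer: place name -> (lat, lon). Illustrative entries only.
GAZETTEER = {"Port-au-Prince": (18.54, -72.34), "Miami": (25.76, -80.19)}

# Matches dates such as "January 12, 2010".
DATE = re.compile(
    r"\b(?:January|February|March|April|May|June|July|August|"
    r"September|October|November|December)\s+\d{1,2},\s+\d{4}\b")

def extract_events(sentence):
    """Pair each known place mention with the nearest date expression,
    a crude stand-in for rules that combine spatial and temporal cues
    based on how they are presented in the text."""
    dates = [(m.start(), m.group()) for m in DATE.finditer(sentence)]
    events = []
    for place, coords in GAZETTEER.items():
        pos = sentence.find(place)
        if pos < 0:
            continue
        when = min(dates, key=lambda d: abs(d[0] - pos))[1] if dates else None
        events.append({"place": place, "coords": coords, "time": when})
    return events
```

Tracking how these (place, time) pairs change across successive reports is what lets the dissertation capture event dynamics over space and time.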
11

Mountain, David Michael. "Exploring mobile trajectories : an investigation of individual spatial behaviour and geographic filters for information retrieval." Thesis, City University London, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.435947.

Full text
12

Chen, Hsinchun, Joanne Martinez, Tobun Dorbin Ng, and Bruce R. Schatz. "A Concept Space Approach to Addressing the Vocabulary Problem in Scientific Information Retrieval: An Experiment on the Worm Community System." Wiley Periodicals, Inc, 1997. http://hdl.handle.net/10150/105991.

Full text
Abstract:
Artificial Intelligence Lab, Department of MIS, University of Arizona
This research presents an algorithmic approach to addressing the vocabulary problem in scientific information retrieval and information sharing, using the molecular biology domain as an example. We first present a literature review of cognitive studies related to the vocabulary problem and vocabulary-based search aids (thesauri) and then discuss techniques for building robust and domain-specific thesauri to assist in cross-domain scientific information retrieval. Using a variation of the automatic thesaurus generation techniques, which we refer to as the concept space approach, we recently conducted an experiment in the molecular biology domain in which we created a C. elegans worm thesaurus of 7,657 worm-specific terms and a Drosophila fly thesaurus of 15,626 terms. About 30% of these terms overlapped, which created vocabulary paths from one subject domain to the other. Based on a cognitive study of term association involving four biologists, we found that a large percentage (59.6-85.6%) of the terms suggested by the subjects were identified in the conjoined fly-worm thesaurus. However, we found only a small percentage (8.4-18.1%) of the associations suggested by the subjects in the thesaurus. In a follow-up document retrieval study involving eight fly biologists, an actual worm database (Worm Community System), and the conjoined fly-worm thesaurus, subjects were able to find more relevant documents (an increase from about 9 documents to 20) and to improve the document recall level (from 32.41% to 65.28%) when using the thesaurus, although the precision level did not improve significantly. Implications of adopting the concept space approach for addressing the vocabulary problem in Internet and digital library applications are also discussed.
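The concept space approach can be sketched as a document co-occurrence thesaurus: terms that co-occur with a given term in many of its documents become its suggested associations. The normalisation below is one simple choice for illustration, not necessarily the weighting used in the experiment.

```python
from collections import defaultdict

def concept_space(docs):
    """Build term-association weights from co-occurrence: the weight of
    (a, b) is the fraction of a's documents in which b also occurs."""
    cooc = defaultdict(int)   # (a, b) -> number of shared documents
    df = defaultdict(int)     # term -> document frequency
    for doc in docs:
        terms = set(doc.split())
        for t in terms:
            df[t] += 1
        for a in terms:
            for b in terms:
                if a != b:
                    cooc[(a, b)] += 1
    return {(a, b): n / df[a] for (a, b), n in cooc.items()}

def associations(thesaurus, term, k=3):
    # Top-k terms most strongly associated with `term` - the "vocabulary
    # path" suggestions a searcher would see.
    cands = [(b, w) for (a, b), w in thesaurus.items() if a == term]
    return [b for b, _ in sorted(cands, key=lambda x: -x[1])[:k]]
```

Building one such thesaurus per collection and conjoining them is what creates the cross-domain vocabulary paths the abstract reports.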
13

Katerattanakul, Nitsawan. "A pilot study in an application of text mining to learning system evaluation." Diss., Rolla, Mo. : Missouri University of Science and Technology, 2010. http://scholarsmine.mst.edu/thesis/pdf/Katerattanakul_09007dcc807b614f.pdf.

Full text
Abstract:
Thesis (M.S.)--Missouri University of Science and Technology, 2010.
Vita. The entire thesis text is included in file. Title from title screen of thesis/dissertation PDF file (viewed June 19, 2010) Includes bibliographical references (p. 72-75).
14

Shea, Geoffrey Yu Kai. "A web-based approach to the integration of diverse data sources for GIS /." Sydney : School of Surveying and Spatial Information Systems, University of New South Wales, 2001. http://www.library.unsw.edu.au/~thesis/adt-NUN/public/adt-NUN20011018.170350/index.html.

Full text
15

Thakur, Amritanshu. "Semantic construction with provenance for model configurations in scientific workflows." Master's thesis, Mississippi State : Mississippi State University, 2008. http://library.msstate.edu/etd/show.asp?etd=etd-07312008-092758.

Full text
16

Hakopa, Henry Hauiti. "Ka pu te ruha, ka hao te rangatahi." University of Otago, Department of Information Science, 1998. http://adt.otago.ac.nz./public/adt-NZDU20070524.125029.

Full text
Abstract:
The relationship between Maori and land is imperative. It forms the basis for developing conceptual blueprints fundamental to producing a data model from a Maori paradigm and integrating that cultural paradigm with western information systems technology. The primary objective of this thesis focuses on blending ancient Maori techniques for managing land information with the advanced tools offered by information systems technology. Like other oral traditions, information about ancestral land and resources was registered in the memories of tribal elders and leaders. Today, Maori land information found in the Maori land courts is largely paper-based. By contrast, western civilisations have adapted quickly to computerised systems for managing land information. Unfortunately for Maori, most GIS tend to operate on models influenced by the viewpoint of the dominating culture and their world view. This poses challenges and risks for Maori. This research rejects the idea of adopting technology wholesale, based on western paradigms. Argued from an eclectic theoretical approach incorporating a Maori world view, this study captures the cultural concept of land, develops a conceptual blueprint based on that perspective, and engages that cultural stamp into a western system of managing land information. Thus a blend of the old and the new techniques for managing Maori land information is incorporated, hence ka pu te ruha, ka hao te rangatahi.
17

Leidner, Jochen Lothar. "Toponym resolution in text." Thesis, University of Edinburgh, 2007. http://hdl.handle.net/1842/1849.

Full text
Abstract:
Background. In the area of Geographic Information Systems (GIS), a shared discipline between informatics and geography, the term geo-parsing is used to describe the process of identifying names in text, which in computational linguistics is known as named entity recognition and classification (NERC). The term geo-coding is used for the task of mapping from implicitly geo-referenced datasets (such as structured address records) to explicitly geo-referenced representations (e.g., using latitude and longitude). However, present-day GIS systems provide no automatic geo-coding functionality for unstructured text. In Information Extraction (IE), processing of named entities in text has traditionally been seen as a two-step process comprising a flat text span recognition sub-task and an atomic classification sub-task; relating the text span to a model of the world has been ignored by evaluations such as MUC or ACE (Chinchor (1998); U.S. NIST (2003)). However, spatial and temporal expressions refer to events in space-time, and the grounding of events is a precondition for accurate reasoning. Thus, automatic grounding can improve many applications such as automatic map drawing (e.g. for choosing a focus) and question answering (e.g. for questions like How far is London from Edinburgh?, given a story in which both occur and can be resolved). Whereas temporal grounding has received considerable attention in the recent past (Mani and Wilson (2000); Setzer (2001)), robust spatial grounding has long been neglected. Concentrating on geographic names for populated places, I define the task of automatic Toponym Resolution (TR) as computing the mapping from occurrences of names for places as found in a text to a representation of the extensional semantics of the location referred to (its referent), such as a geographic latitude/longitude footprint. 
The task of mapping from names to locations is hard due to insufficient and noisy databases, and a large degree of ambiguity: common words need to be distinguished from proper names (geo/non-geo ambiguity), and the mapping between names and locations is ambiguous (London can refer to the capital of the UK or to London, Ontario, Canada, or to about forty other Londons on earth). In addition, names of places and the boundaries referred to change over time, and databases are incomplete. Objective. I investigate how referentially ambiguous spatial named entities can be grounded, or resolved, with respect to an extensional coordinate model robustly on open-domain news text. I begin by comparing the few algorithms proposed in the literature, and, comparing semiformal, reconstructed descriptions of them, I factor out a shared repertoire of linguistic heuristics (e.g. rules, patterns) and extra-linguistic knowledge sources (e.g. population sizes). I then investigate how to combine these sources of evidence to obtain a superior method. I also investigate the noise effect introduced by the named entity tagging step that toponym resolution relies on in a sequential system pipeline architecture. Scope. In this thesis, I investigate a present-day snapshot of terrestrial geography as represented in the gazetteer defined and, accordingly, a collection of present-day news text. I limit the investigation to populated places; geo-coding of artifact names (e.g. airports or bridges), compositional geographic descriptions (e.g. 40 miles SW of London, near Berlin), for instance, is not attempted. Historic change is a major factor affecting gazetteer construction and ultimately toponym resolution. However, this is beyond the scope of this thesis. Method. 
While a small number of previous attempts have been made to solve the toponym resolution problem, these were either not evaluated, or evaluation was done by manual inspection of system output instead of curating a reusable reference corpus. Since the relevant literature is scattered across several disciplines (GIS, digital libraries, information retrieval, natural language processing) and descriptions of algorithms are mostly given in informal prose, I attempt to systematically describe them and aim at a reconstruction in a uniform, semi-formal pseudo-code notation for easier re-implementation. A systematic comparison leads to an inventory of heuristics and other sources of evidence. In order to carry out a comparative evaluation procedure, an evaluation resource is required. Unfortunately, to date no gold standard has been curated in the research community. To this end, a reference gazetteer and an associated novel reference corpus with human-labeled referent annotation are created. These are subsequently used to benchmark a selection of the reconstructed algorithms and a novel re-combination of the heuristics catalogued in the inventory. I then compare the performance of the same TR algorithms under three different conditions, namely applying it to the (i) output of human named entity annotation, (ii) automatic annotation using an existing Maximum Entropy sequence tagging model, and (iii) a naïve toponym lookup procedure in a gazetteer. Evaluation. The algorithms implemented in this thesis are evaluated in an intrinsic or component evaluation. To this end, we define a task-specific matching criterion to be used with traditional Precision (P) and Recall (R) evaluation metrics.
This matching criterion is lenient with respect to numerical gazetteer imprecision in situations where one toponym instance is marked up with different gazetteer entries in the gold standard and the test set, respectively, but where these refer to the same candidate referent, caused by multiple near-duplicate entries in the reference gazetteer. Main Contributions. The major contributions of this thesis are as follows: • A new reference corpus in which instances of location named entities have been manually annotated with spatial grounding information for populated places, and an associated reference gazetteer, from which the assigned candidate referents are chosen. This reference gazetteer provides numerical latitude/longitude coordinates (such as 51°32′0″ North, 0°5′0″ West) as well as hierarchical path descriptions (such as London > UK) with respect to a worldwide-coverage geographic taxonomy constructed by combining several large, but noisy gazetteers. This corpus contains news stories and comprises two sub-corpora, a subset of the REUTERS RCV1 news corpus used for the CoNLL shared task (Tjong Kim Sang and De Meulder (2003)), and a subset of the Fourth Message Understanding Contest (MUC-4; Chinchor (1995)), both available pre-annotated with gold-standard annotations.
This corpus will be made available as a reference evaluation resource; • a new method and implemented system to resolve toponyms that is capable of robustly processing unseen text (open-domain online newswire text) and grounding toponym instances in an extensional model using longitude and latitude coordinates and hierarchical path descriptions, using internal (textual) and external (gazetteer) evidence; • an empirical analysis of the relative utility of various heuristic biases and other sources of evidence with respect to the toponym resolution task when analysing free news genre text; • a comparison between a replicated method as described in the literature, which functions as a baseline, and a novel algorithm based on minimality heuristics; and • several exemplary prototypical applications to show how the resulting toponym resolution methods can be used to create visual surrogates for news stories, a geographic exploration tool for news browsing, geographically-aware document retrieval and to answer spatial questions (How far...?) in an open-domain question answering system. These applications only have demonstrative character, as a thorough quantitative, task-based (extrinsic) evaluation of the utility of automatic toponym resolution is beyond the scope of this thesis and left for future work.
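Two of the stock heuristics catalogued in such an inventory, the largest-population bias and a minimality-style bias that prefers referents lying close together, can be sketched with a toy gazetteer. All entries, populations, coordinates, and function names below are illustrative assumptions, not the thesis's data or algorithm.

```python
import itertools
import math

# Toy gazetteer: toponym -> candidates as (referent, population, (lat, lon)).
GAZETTEER = {
    "London": [("London, UK", 8_900_000, (51.5, -0.13)),
               ("London, Ontario", 400_000, (42.98, -81.25))],
    "Paris": [("Paris, France", 2_100_000, (48.85, 2.35)),
              ("Paris, Texas", 25_000, (33.66, -95.55))],
}

def resolve_by_population(name):
    """Population bias: prefer the most populous candidate referent."""
    return max(GAZETTEER[name], key=lambda c: c[1])[0]

def resolve_by_minimality(names):
    """Minimality-style bias: pick the joint assignment whose referents
    lie closest together (smallest sum of pairwise distances)."""
    def d(p, q):  # crude planar distance in degrees; fine for a toy example
        return math.hypot(p[0] - q[0], p[1] - q[1])
    best = min(itertools.product(*(GAZETTEER[n] for n in names)),
               key=lambda combo: sum(d(a[2], b[2])
                                     for a, b in itertools.combinations(combo, 2)))
    return [c[0] for c in best]
```

The thesis's contribution lies in benchmarking such heuristics individually and in combination against human-annotated referents, rather than in any single rule.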
18

Dickinson, Matthew G. "Architecting the spatial enablement of a film location database for enhanced geographic analysis and query." Diss., Columbia, Mo.: University of Missouri-Columbia, 2009. http://hdl.handle.net/10355/6729.

Full text
Abstract:
The entire thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file; a non-technical public abstract appears in the public.pdf file. Title from PDF of title page (University of Missouri--Columbia, viewed on March 19, 2010). Thesis advisor: Dr. Dale R. Musser. Includes bibliographical references.
19

Slabber, Frans Bresler. "Semi-automated extraction of structural orientation data from aerospace imagery combined with digital elevation models." Thesis, Rhodes University, 1996. http://hdl.handle.net/10962/d1005614.

Full text
Abstract:
A computer-based method for determining the orientation of planar geological structures from remotely sensed images, utilizing digital geological images and digital elevation models (DEMs), is developed and assessed. The method relies on operator skill and experience to recognize geological structure traces on images, and then employs software routines (GEOSTRUC©) to calculate the orientation of selected structures. The operator selects three points on the trace of a planar geological feature as seen on a digital geological image that is co-registered with a DEM of the same area. The orientation of the plane that contains the three points is determined using vector algebra equations. The program generates an ASCII data file which contains the orientation data as well as the geographical location of the measurements. This ASCII file can then be utilized in further analysis of the orientation data. The software development kit (SDK) for TNTmips v5.00, from MicroImages Inc. and operating in the X Windows environment, was employed to construct the software. The Watcom C/C++ Development Environment was used to generate the executable program, GEOSTRUC©. GEOSTRUC© was tested in two case studies. The case studies utilized digital data derived from different techniques and different sources, which varied in scale and resolution. This was done to illustrate the versatility of the program and its application to a wide range of data types. On the whole, the results obtained using the GEOSTRUC© analyses compare favourably to field data from each test area. Use of the method to determine the orientation of axial planes in the case study revealed its usefulness as a powerful analytic tool on a macroscopic scale. The method should not be applied in areas with low variation in relief, as it proved to be less accurate there.
Advancements in imaging technology will serve to create images with better resolution, which will, in turn, improve the overall accuracy of the method.
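The three-point construction described in this abstract can be sketched with elementary vector algebra. This is not the GEOSTRUC© code, only a minimal illustration under assumed conventions (x = east, y = north, z = up; dip direction measured clockwise from north):

```python
import math

def plane_orientation(p1, p2, p3):
    """Dip angle and dip-direction azimuth (degrees) of the plane through
    three (x, y, z) points; x = east, y = north, z = up (assumed frame)."""
    u = [b - a for a, b in zip(p1, p2)]          # first in-plane vector
    v = [b - a for a, b in zip(p1, p3)]          # second in-plane vector
    n = [u[1] * v[2] - u[2] * v[1],              # normal = u x v
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    if n[2] < 0:                                 # force the normal to point upward
        n = [-c for c in n]
    norm = math.sqrt(sum(c * c for c in n))
    if norm == 0:
        raise ValueError("the three points are collinear")
    dip = math.degrees(math.acos(n[2] / norm))   # 0 = horizontal, 90 = vertical
    azimuth = math.degrees(math.atan2(n[0], n[1])) % 360.0  # clockwise from north
    return dip, azimuth
```

For the plane z = y (through (0, 0, 0), (1, 0, 0) and (0, 1, 1)), the sketch yields a 45° dip toward azimuth 180° (south), as expected.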
APA, Harvard, Vancouver, ISO, and other styles
20

Kjerne, Daniel. "Modeling cadastral spatial relationships using an object-oriented information structure." PDXScholar, 1987. https://pdxscholar.library.pdx.edu/open_access_etds/3721.

Full text
Abstract:
This thesis identifies a problem in the current practice for storage of locational data of entities in the cadastral layer of a land information system (LIS), and presents as a solution an information model that uses an object-oriented paradigm.
APA, Harvard, Vancouver, ISO, and other styles
21

Martins, Bruno. "Geographically Aware Web Text Mining." Master's thesis, Department of Informatics, University of Lisbon, 2009. http://hdl.handle.net/10451/14301.

Full text
Abstract:
Text mining and search have become important research areas over the past few years, mostly due to the large popularity of the Web. A natural extension for these technologies is the development of methods for exploring the geographic context of Web information. Human information needs often present specific geographic constraints. Many Web documents also refer to specific locations. However, relatively little effort has been spent on developing the facilities required for geographic access to unstructured textual information. Geographically aware text mining and search remain relatively unexplored. This thesis addresses this new area, arguing that Web text mining can be applied to extract geographic context information, and that this information can be explored for information retrieval. Fundamental questions investigated include handling geographic references in text, assigning geographic scopes to documents, and building retrieval applications that handle and use geographic scopes. The thesis presents appropriate solutions for each of these challenges, together with a comprehensive evaluation of their effectiveness. By investigating these questions, the thesis presents several findings on how geographic context can be effectively handled by text processing tools.
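One of the questions named in this abstract, assigning a geographic scope to a document, can be illustrated with a toy resolver that counts toponyms against a small gazetteer and takes a majority vote. The gazetteer entries and the voting rule are assumptions for illustration, not the method of the thesis (real systems use resources such as GeoNames):

```python
import re
from collections import Counter

# Toy gazetteer mapping toponyms to a scope; illustrative entries only.
GAZETTEER = {"lisbon": "Portugal", "porto": "Portugal", "paris": "France"}

def geographic_scope(text):
    """Assign a document scope: the scope of the most frequently
    resolved toponym, or None if no toponym is recognized."""
    tokens = re.findall(r"[A-Za-zÀ-ÿ]+", text.lower())
    scopes = Counter(GAZETTEER[t] for t in tokens if t in GAZETTEER)
    return scopes.most_common(1)[0][0] if scopes else None
```

A document mentioning Lisbon and Porto once each and Paris once would be scoped to Portugal under this rule.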
APA, Harvard, Vancouver, ISO, and other styles
22

Kanaparthy, Venu Madhav Singh. "GML representation for interoperable spatial data exchange in a mobile mapping application." Master's thesis, Mississippi State : Mississippi State University, 2004. http://library.msstate.edu/etd/show.asp?etd=etd-07102004-133629.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Shea, Geoffrey Yu Kai, Surveying & Spatial Information Systems, Faculty of Engineering, UNSW. "A Web-Based Approach to the Integration of Diverse Data Sources for GIS." Awarded by: University of New South Wales. Surveying and Spatial Information Systems, 2001. http://handle.unsw.edu.au/1959.4/17855.

Full text
Abstract:
The rigorous development of GIS over the past decades has enabled application developers to create powerful systems that are used to facilitate the management of spatial data. Unfortunately, each one of these systems is specific to a local service, with little or no interconnection with services in other locales. This makes it virtually impossible to perform dynamic and interactive GIS operations across multiple locales which have similar or dissimilar system configurations. The Spatial Data Transfer Standard (SDTS) partially resolved these problems by offering an excellent conceptual and logical abstraction model for data exchange. Recent advances in Internet technology have brought the GIS community closer to realizing an ideal of information interchange. A suite of new technologies that embraces Extensible Markup Language (XML), Scalable Vector Graphics (SVG), Portable Network Graphics (PNG) and Java creates a powerful new perspective that can be applied to all phases of online GIS system development. The online GIS is a Web-based approach to integrating diverse spatial data sources for GIS applications. To address the spatial data integration options and implications related to the Web-based approach, the investigation was undertaken in five phases: (1) Determine the mapping requirements of graphic and non-graphic spatial data for online GIS applications; (2) Analyze the requirements of spatial data integration for online environments; (3) Investigate a suitable method for integrating different formats of spatial data; (4) Study the feasibility and applicability of setting up the online GIS; and (5) Develop a prototype for online sharing of teaching resources. Resulting from the critical review of current Internet technology, a conceptual framework for spatial data integration was proposed. This framework was based on the emerging Internet technologies XML, SVG, PNG, and Java.
It comprised four loosely coupled modules, namely, the Application Interface, Presentation, Integrator, and Data modules. This loosely coupled framework provides an environment that is independent of the underlying GIS data structure and makes it easy to change or update the system as new tasks or knowledge are acquired. A feasibility study was conducted to test the applicability of the proposed conceptual framework. Detailed user requirements and a system specification were thus devised from the feasibility study, providing guidelines for online GIS application development. They were expressed specifically in terms of six aspects: (1) User; (2) Teaching resources management; (3) Data; (4) Cartography; (5) Functions; and (6) Software development configuration. A prototype system based on some of the devised system specifications was developed. In the prototype software design, the architecture of a three-tier client-server computing model was adopted. Due to the inadequacy of native support for SVG and PNG in the Web browsers available at the time, the prototype was implemented in HTML, Java and a vendor-specific vector format. The prototype demonstrated how teaching resources from a variety of sources and formats (including map data and non-map resources) were integrated and shared. The implementation of the prototype revealed that the Web is still an ideal medium for cost-effectively providing wider accessibility of geographical information to a larger number of users through a corporate intranet or the Internet. The investigation concluded that current WWW technology is limited in its capability for spatial data integration and for delivering online functionality. However, the development of an XML-based GIS data model and the graphic standards SVG and PNG for structuring and transferring spatial data on the Internet appears to be providing solutions to these limitations.
It is believed that the ideal of everyone retrieving spatial information contextually through a Web browser, regardless of the information's format and location, will eventually become reality.
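The idea of delivering vector map data to the browser as SVG, as discussed in this abstract, can be hinted at with a tiny sketch that maps (lon, lat) pairs onto an SVG viewport. The function, its scaling rule, and the viewport size are illustrative assumptions, not the thesis's prototype:

```python
def to_svg_polyline(coords, width=400, height=200):
    """Render (lon, lat) pairs as an SVG <polyline>, mapping the whole
    lon/lat extent onto a width x height viewport (illustrative only)."""
    lons = [c[0] for c in coords]
    lats = [c[1] for c in coords]
    span_x = (max(lons) - min(lons)) or 1.0
    span_y = (max(lats) - min(lats)) or 1.0
    pts = " ".join(
        f"{(lon - min(lons)) / span_x * width:.1f},"
        f"{(max(lats) - lat) / span_y * height:.1f}"   # SVG y axis points down
        for lon, lat in coords)
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{width}" height="{height}">'
            f'<polyline points="{pts}" fill="none" stroke="black"/></svg>')
```

Any SVG-capable browser can render the returned string directly, which is the delivery path the abstract anticipates.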
APA, Harvard, Vancouver, ISO, and other styles
24

Hahmann, Stefan. "Zur Beziehung von Raum und Inhalt nutzergenerierter geographischer Informationen." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-148835.

Full text
Abstract:
In the last ten years there has been significant progress of the World Wide Web, which evolved to become the so-called “Web 2.0”. The most important feature of this new quality of the WWW is the participation of the users in generating contents. This trend facilitates the formation of user communities which collaborate on diverse projects, where they collect and publish information. Prominent examples of such projects are the online encyclopedia “Wikipedia”, the microblogging platform “Twitter”, the photo platform “Flickr” and the database of topographic information “OpenStreetMap”. User-generated content which is directly or indirectly geospatially referenced is often termed more specifically “volunteered geographic information”. The geospatial reference of this information is constituted either directly by coordinates that are given as meta-information or indirectly through georeferencing of toponyms or addresses that are contained in this information. Volunteered geographic information is particularly suited for research, as it can be accessed at low or even no cost. Furthermore, it reflects a variety of human decisions which are linked to geographic space. In this thesis, the relationship of space and content of volunteered geographic information is investigated from two different perspectives. The first part of this thesis addresses the question for which share of information there exists a relationship between space and content, such that the information is locatable in geospace. In this context, the assumption that about 80% of all information has a reference to space has been well known within the community of geographic information system users. Since the 1980s it has served as a marketing tool within the whole geoinformation sector, although there has not been any empirical evidence. This thesis contributes to filling this research gap. For the validation of the ‘80%-hypothesis’, two approaches are presented.
The first approach is based on a corpus of information that is as representative as possible for world knowledge. For this purpose the German language edition of Wikipedia has been selected. This corpus is modeled as a network of information where the articles are considered the nodes and the cross references are considered the edges of a directed graph. With the help of this network a graduated definition of geospatial references is possible. It is implemented by computing the distance of each article to its closest article within the network that is assigned spatial coordinates. Parallel to this, a survey-based approach is developed where participants have the task of assigning pieces of information to one of the categories “direct geospatial reference”, “indirect geospatial reference” and “no geospatial reference”. A synthesis of both approaches leads to an empirically justified figure for the “80%-assertion”. The result of the investigation is that for the corpus of Wikipedia 27% of the information may be categorized as directly geospatially referenced and 30% as indirectly geospatially referenced. In the second part of the thesis it is investigated in how far volunteered geographic information that is produced on mobile devices is related to the locations where it is published. For this purpose, a collection of microblogging texts produced on mobile devices serves as research corpus. Microblogging texts are short texts that are published via the World Wide Web. For this type of information the relationship between the content of the information and its position is less obvious than e.g. for topographic information or photo descriptions. The analysis of microblogging texts offers new possibilities for market and opinion research, the monitoring of natural events and human activities as well as for decision support in disaster management. The spatial analysis of the texts may add extra value.
In fact, for some of the applications the spatial analysis is a necessary condition. For this reason, the investigation of the relationship of the published contents with the locations where they are generated is of interest. Within this thesis, methods are described that support the investigation of this relationship. In the presented approach, classified Points of Interest serve as a model for the environment. For the purpose of investigating the correlation between these points and the microblogging texts, manual classification and natural language processing are used to classify these texts according to their relevance with regard to the respective feature classes. Subsequently, it is tested whether the share of relevant texts in the proximity of objects of the tested classes is above average. The results of the investigation show that the strength of the location-content correlation depends on the tested feature class. While for the feature classes ‘train station’, ‘airport’ and ‘restaurant’ a significant dependency of the share of relevant texts on the distance to the respective objects may be observed, this is not confirmed for objects of other feature classes, such as ‘cinema’ and ‘supermarket’. However, as prior research that describes investigations on small cartographic scales has detected correlations between space and content of microblogging texts, it can be concluded that the strength of the correlation between space and content of microblogging texts depends on scale and topic.
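The graduated definition of geospatial reference described in this abstract, the link-network distance of each article to its closest coordinate-tagged article, can be sketched as a multi-source breadth-first search on the reversed link graph. The toy graph and article names below are illustrative assumptions, not the thesis's Wikipedia data:

```python
from collections import deque

def geo_distance(links, geotagged):
    """Hop distance from each article to the nearest coordinate-tagged
    article reachable via outgoing links: a multi-source BFS that starts
    from the geotagged articles and walks the link graph in reverse."""
    rev = {a: [] for a in links}                 # reversed adjacency
    for a, outs in links.items():
        for b in outs:
            rev.setdefault(b, []).append(a)
    dist = {a: 0 for a in geotagged}             # geotagged articles: distance 0
    q = deque(geotagged)
    while q:
        b = q.popleft()
        for a in rev.get(b, []):                 # a links to b
            if a not in dist:
                dist[a] = dist[b] + 1
                q.append(a)
    return dist                                  # unreachable articles are absent
```

Articles that never reach a geotagged article (no entry in the result) would fall into the “no geospatial reference” end of the graduated scale.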
APA, Harvard, Vancouver, ISO, and other styles
25

ROCHA, Júlio Henrique. "Ranking de relevância baseado em informações geográficas e sociais." Universidade Federal de Campina Grande, 2016. http://dspace.sti.ufcg.edu.br:8080/jspui/handle/riufcg/661.

Full text
Abstract:
Capes
Geographic Information Retrieval is a research field that develops and enables the construction of search engines to retrieve information with geographic context that is available on the Internet. Produced in the GIR field, geographic search engines can be specified to work in many different contexts (e.g., sports, public service exams), seeking proper ways to handle the chosen document type. Nowadays, the scientific community and commerce are focusing efforts on building geographic search engines to find news over the Internet. However, search engines (geographical or otherwise) focused on news should consider the credibility of the information when ranking it. Unfortunately, in most cases this does not happen. Measuring news credibility is a complex and expensive task, since it requires knowledge of the stated facts. Thereby, search engines end up giving the user the responsibility of trusting or not what is being read. In this context, this work proposes a relevance ranking method focused on news and based on information collected from social networks, to compute a credibility factor and thus rank them. The news credibility value is calculated considering the affinity of the users who have shared it on their social network with the locations mentioned in the news. Lastly, the proposed relevance ranking is integrated with a news search and reading tool called GeoSEn News, which enables queries through various spatial operations and allows the results to be visualized from different perspectives. Through experiments using data collected from the social network Twitter and from news media throughout Brazil, this tool was used to evaluate the proposed method. The evaluation presented promising results and attested the feasibility of building a relevance ranking based on information collected from social networks.
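The credibility idea sketched in this abstract, scoring a news item by the affinity of its sharers with the locations it mentions, might look like the toy function below. The function name, its signature, and the affinity table are assumptions for illustration only, not the GeoSEn News implementation:

```python
def credibility(news_locations, sharers, affinity):
    """Credibility of a news item: the mean, over the users who shared it,
    of each user's best affinity (a value in [0, 1]) with any location
    mentioned in the item. Missing (user, location) pairs count as 0."""
    if not sharers or not news_locations:
        return 0.0
    scores = []
    for user in sharers:
        best = max(affinity.get((user, loc), 0.0) for loc in news_locations)
        scores.append(best)
    return sum(scores) / len(scores)
```

A item about Recife shared by one user strongly tied to Recife and one weakly tied to it would score between the two affinities, which is the intuition the ranking exploits.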
APA, Harvard, Vancouver, ISO, and other styles
26

Schreiber, Werner. "GIS and EUREPGAP : applying GIS to increase effective farm management in accordance GAP requirements." Thesis, Stellenbosch : Stellenbosch University, 2003. http://hdl.handle.net/10019.1/53440.

Full text
Abstract:
Thesis (MSc)--Stellenbosch University, 2003.
ENGLISH ABSTRACT: With the inception of precision farming techniques during the last decade, agricultural efficiency has improved, leading to greater productivity and enhanced economic benefits associated with agriculture. The awareness of health risks associated with food-borne diseases has also increased. Systems such as Hazard Analysis and Critical Control Points (HACCP) in the USA and Good Agricultural Practices (GAP) in Europe are trying to ensure that no food showing signs of microbial contamination associated with production techniques is allowed onto the export market. Growers participating in exporting are thus being forced to conform to the requirements set by international customers. The aim of this study was to compile a computerized record-keeping system that would aid farmers with the implementation of GAP on farms, by making use of GIS capabilities. A database consisting of GAP-specific data was developed. ArcView GIS was used to implement the database, while customized analysis procedures through the use of Avenue assisted in GAP-specific farming-related decisions. An agricultural area focusing on the export market was needed for this study, and the nut-producing Levubu district was identified as ideal. By making use of ArcView GIS, distinct relationships between different data sets were portrayed in tabular, graphical, geographical and report format. GAP requirements state that growers must base decisions on timely, relevant information. With information available in the above-mentioned formats, decisions regarding actions taken can be justified. By analysing the complex interaction between datasets, the influences that agronomical inputs have on production were portrayed, moving beyond the standard requirements of GAP. Agricultural activities produce enormous quantities of data, and GIS proved to be an indispensable tool because of its ability to analyse and manipulate data with a spatial component.
The implementation of good agricultural practices lends itself to the use of GIS. With the correct information available at the right time, better decisions can promote optimal cropping, whilst minimizing the negative effects on the consumer and environment.
APA, Harvard, Vancouver, ISO, and other styles
27

Macario, Carla Geovana do Nascimento. "Anotação semantica de dados geoespaciais." [s.n.], 2009. http://repositorio.unicamp.br/jspui/handle/REPOSIP/275838.

Full text
Abstract:
Advisor: Claudia Maria Bauzer Medeiros
Doctoral thesis - Universidade Estadual de Campinas, Instituto de Computação
Abstract: Geospatial data are a basis for decision making in a wide range of domains, such as traffic planning, consumer services and disaster control. However, to be used, this kind of data has to be analyzed and interpreted, which constitutes a hard task, prone to errors, and usually performed by experts. Despite this, the interpretations are not stored; when they are, they correspond to descriptive text stored in technical files. The absence of solutions to efficiently store them leads to problems such as rework and difficulties in information sharing. In this work we present a solution for these problems based on semantic annotations, an approach for a common understanding of the concepts being used. We propose the use of scientific workflows to describe the annotation process for each kind of data, and also the adoption of well-known metadata schemas and ontologies. The contributions of this thesis involve: (i) identification of requirements for semantic search of geospatial data; (ii) identification of desirable features for annotation tools; (iii) proposal, and partial implementation, of a framework for semantic annotation of different kinds of geospatial data; and (iv) identification of challenges in adopting scientific workflows for describing the annotation process. This framework was partially validated through an implementation to produce annotations for applications in agriculture.
Doctorate
Databases
Doctor of Computer Science
APA, Harvard, Vancouver, ISO, and other styles
28

Buscaldi, Davide. "Toponym Disambiguation in Information Retrieval." Doctoral thesis, Universitat Politècnica de València, 2010. http://hdl.handle.net/10251/8912.

Full text
Abstract:
In recent years, geography has acquired a great importance in the context of Information Retrieval (IR) and, in general, of the automated processing of information in text. Mobile devices that are able to surf the web and at the same time inform about their position are now a common reality, together with applications that can exploit this data to provide users with locally customised information, such as directions or advertisements. Therefore, it is important to deal properly with the geographic information that is included in electronic texts. The majority of such information is contained as place names, or toponyms. Toponym ambiguity represents an important issue in Geographical Information Retrieval (GIR), due to the fact that queries are geographically constrained. There has been a struggle to find specific geographical IR methods that actually outperform traditional IR techniques. Toponym ambiguity may constitute a relevant factor in the inability of current GIR systems to take advantage of geographical knowledge. Recently, some Ph.D. theses have dealt with Toponym Disambiguation (TD) from different perspectives, from the development of resources for the evaluation of Toponym Disambiguation (Leidner (2007)) to the use of TD to improve geographical scope resolution (Andogah (2010)). The Ph.D. thesis presented here introduces a TD method based on WordNet and carries out a detailed study of the relationship of Toponym Disambiguation to some IR applications, such as GIR, Question Answering (QA) and Web retrieval. The work presented in this thesis starts with an introduction to the applications in which TD may prove useful, together with an analysis of the ambiguity of toponyms in news collections. It would not be possible to study the ambiguity of toponyms without studying the resources that are used as placename repositories; these resources are the equivalent of language dictionaries, which provide the different meanings of a given word.
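A naive toponym disambiguator of the kind this area studies, choosing a candidate sense by proximity to unambiguous context toponyms with a population fallback, can be sketched as follows. The candidate records, field names, and figures are illustrative assumptions, not the WordNet-based method of the thesis:

```python
import math

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def disambiguate(candidates, context_points):
    """Pick the candidate sense closest to any unambiguous context toponym;
    fall back to the largest population when no context is available."""
    if context_points:
        return min(candidates,
                   key=lambda c: min(haversine_km(c["coord"], p)
                                     for p in context_points))
    return max(candidates, key=lambda c: c["population"])
```

For “Cambridge” in a text that also mentions Boston, the proximity rule selects the Massachusetts sense; with no context, the population prior selects the larger English city.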
Buscaldi, D. (2010). Toponym Disambiguation in Information Retrieval [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/8912
APA, Harvard, Vancouver, ISO, and other styles
29

Syed, Awase Khirni. "Exploratory representations for geographic information retrieved from the internet /." Zürich, 2008. http://opac.nebis.ch/cgi-bin/showAbstract.pl?sys=000254196.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Lin, Tzy Li 1972. "A multimodal framework for geocoding digital objects." [s.n.], 2014. http://repositorio.unicamp.br/jspui/handle/REPOSIP/275493.

Full text
Abstract:
Advisor: Ricardo da Silva Torres
Thesis (doctorate) - Universidade Estadual de Campinas, Instituto de Computação
Abstract: Geographical information is often enclosed in digital objects (like documents, images, and videos) and its use to support the implementation of different services is of great interest. For example, the implementation of map-based browser services and geographic searches may take advantage of geographic locations associated with digital objects. The implementation of such services, however, demands the use of geocoded data collections. This work investigates the combination of textual and visual content to geocode digital objects and proposes a rank aggregation framework for multimodal geocoding. Textual and visual information associated with videos and images are used to define ranked lists. These lists are later combined, and the new resulting ranked list is used to define appropriate locations. An architecture that implements the proposed framework is designed in such a way that specific modules for each modality (e.g., textual and visual) can be developed and evolved independently. Another component is a data fusion module responsible for combining seamlessly the ranked lists defined for each modality. Another contribution of this work is related to the proposal of a new effectiveness evaluation measure named Weighted Average Score (WAS). The proposed measure is based on distance scores that are combined to assess how effective a designed/tested approach is, considering its overall geocoding results for a given test dataset. We validate the proposed framework in two contexts: the MediaEval 2012 Placing Task, whose objective is to automatically assign geographical coordinates to videos; and the task of geocoding photos of buildings from Virginia Tech (VT), USA. In the context of Placing Task, obtained results show how our multimodal approach improves the geocoding results when compared to methods that rely on a single modality (either textual or visual descriptors). 
We also show that the proposed multimodal approach yields results comparable to the best Placing Task 2012 submissions that likewise used no additional information besides the available development/training data. In the context of geocoding VT building photos, the performed experiments demonstrate that some of the evaluated local descriptors yield effective results, and that selecting and combining these descriptors improves the results when the knowledge base used has the same characteristics as the test set.
Doctorate
Computer Science
Doctor of Computer Science
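The rank-aggregation and evaluation ideas summarised above can be sketched in a few lines. The fusion rule below is reciprocal-rank fusion and the distance weighting is an illustrative decay; neither is the exact formula of the thesis, whose WAS definition is not reproduced here.

```python
import math

def fuse(ranked_lists, k=60):
    """Combine per-modality ranked lists of location candidates into one
    list via reciprocal-rank fusion (an assumed aggregation rule)."""
    scores = {}
    for lst in ranked_lists:
        for rank, loc in enumerate(lst, start=1):
            scores[loc] = scores.get(loc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

def weighted_average_score(errors_km):
    """Illustrative distance-weighted effectiveness over a test set: each
    sample contributes a weight that decays with its geocoding error, so
    the whole error distribution matters, not just a fixed threshold."""
    if not errors_km:
        return 0.0
    return sum(1.0 / (1.0 + math.log1p(d)) for d in errors_km) / len(errors_km)
```

Fusing a textual and a visual ranking that agree on the top candidate keeps that candidate first; a perfect geocoder (all errors zero) scores 1.0 under the toy measure.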
APA, Harvard, Vancouver, ISO, and other styles
31

Bae, Sanghoon. "Development of a real-time and geographical information system-based transit management information system." Thesis, This resource online, 1993. http://scholar.lib.vt.edu/theses/available/etd-11242009-020226/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Guemeida, Abdelbasset. "Contributions à une nouvelle approche de Recherche d'Information basée sur la métaphore de l'impédance et illustrée sur le domaine de la santé." Phd thesis, Université Paris-Est, 2009. http://tel.archives-ouvertes.fr/tel-00581322.

Full text
Abstract:
Recent developments in information and communication technologies, with the growth of the Internet, have led to an explosion in the volume of data sources. New information retrieval needs are emerging: processing information in relation to its context of use, increasing the relevance of answers and the usability of results, and exploiting possible correlations between data sources while making their heterogeneities transparent. The research presented in this dissertation contributes to the design of a New Approach to Information Retrieval (NARI) for decision making. NARI is intended to operate on large catalogued, heterogeneous data collections, which may be geo-referenced. It is based on preliminary quality requirements (standardisation, regulations), expressed by users and represented and managed through metadata. These requirements make it possible to compensate for missing or insufficient-quality data, so as to produce information of adequate quality for decision-making needs. Taking the users' perspective, data sources are identified and/or prepared before the content-integration step. The originality of NARI lies in the impedance-mismatch metaphor (a classical phenomenon when connecting two heterogeneous physical systems). This metaphor, originated by R. Jeansoulin, together with attention to the regulatory framework, guides its design. NARI is structured by the geographic dimension (taking into account various territorial levels and correlations between several themes): spatial-analysis techniques support information retrieval tasks that decision makers often perform implicitly.
It relies on data-integration techniques (mediation, data warehouses), knowledge-representation languages, and Semantic Web technologies and tools to support the scalability, generalisation and theoretical robustness of the approach. NARI is illustrated with examples from the health domain.
APA, Harvard, Vancouver, ISO, and other styles
33

Al, Nabhani Yousuf bin Harith bin Nasir. "The role and standardisation of geographical names on maps Oman as a case study /." Connect to e-thesis, 2007. http://theses.gla.ac.uk/460/.

Full text
Abstract:
Thesis (MSc.(R)) - University of Glasgow, 2007.
MSc.(R) thesis submitted to the Department of Geographical and Earth Sciences, Faculty of Physical Sciences, University of Glasgow, 2007. Includes bibliographical references. Print version also available.
APA, Harvard, Vancouver, ISO, and other styles
34

Ren, Fang. "Geovisualizing and modeling physical and Internet activities in space-time toward an integrated analysis of activity patterns in the information age /." Columbus, Ohio : Ohio State University, 2007. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1196200534.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Paul, Nathan J. "Creating a user-friendly multiple natural disaster database with a functioning display using Google mapping systems a thesis presented to the Department of Geology and Geography in candidacy for the degree of Master of Science /." Diss., Maryville, Mo. : Northwest Missouri State University, 2009. http://www.nwmissouri.edu/library/theses/paulnathanj/index.htm.

Full text
Abstract:
Thesis (M.S.)--Northwest Missouri State University, 2009.
The full text of the thesis is included in the pdf file. Title from title screen of full text.pdf file (viewed on April 9, 2010). Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
36

Gouvêa, Cleber. "Uma Abordagem para o Enriquecimento de Gazetteers a partir de Notícias visando o Georreferenciamento de Textos na Web." Universidade Catolica de Pelotas, 2009. http://tede.ucpel.edu.br:8080/jspui/handle/tede/98.

Full text
Abstract:
Georeferencing of texts, that is, the identification of the geographic context of texts, is becoming popular on the Web due to the high demand for geographic information and the rise of query and retrieval services such as Google Earth (geobrowsers). The main challenge is to relate texts to geographic locations; these associations are stored in structures called gazetteers. Although gazetteers such as Geonames and TGN exist, they fall short in coverage, lacking information about some countries, and in specialization, lacking detailed references to locations (fine granularity) such as names of streets, squares, monuments, rivers, neighborhoods, etc. Information of this kind, which acts as an indirect reference to a geographic location, is defined as a Location Indicator. This dissertation presents an approach that identifies Location Indicators related to geographic locations by analyzing news texts published on the Web. The goal is to enrich gazetteers with the identified relations and then georeference news. Location Indicators include non-geographic entities that are dynamic and may change over time. News published on the Web is a useful source for discovering Location Indicators, covering a large number of locations and maintaining detailed information about each location. Different training news corpora are compared for the creation of gazetteers and evaluated by their ability to correctly identify cities in news texts.
With the advent of the Internet and the growing amount of available information, special strategies are needed to give users fast access to relevant information. Since the Web holds a large volume of information, much of it with a geographic focus, it is necessary to retrieve and structure this information so that it can be related to people's context and reality through automatic methods and systems. One requirement is to make the georeferencing of texts possible, that is, to identify the geographic entities present and associate them with their correct spatial location. In this respect toponyms (e.g., names of localities such as cities or countries), because they can precisely identify a given spatial region, are ideal for identifying the geographic context of texts. This task, called Toponym Resolution, nevertheless poses important challenges, mainly from a linguistic point of view, since a locality can exhibit several types of ambiguity. The main strategy for overcoming these problems is to identify evidence that helps detect and disambiguate the localities in texts, usually with the support of one or more toponym dictionaries (gazetteers). Because gazetteers are built manually, however, they lack information about entities that can identify, albeit indirectly, certain kinds of places such as streets, squares, universities, etc., which are defined as Location Indicators. This work proposes an approach for retrieving such entities, taking advantage of the geographic character of news texts.
To illustrate the feasibility of the process, different types of news corpora were tested and compared by their ability to build gazetteers from the retrieved Indicators, and the gazetteers were then evaluated by their capacity to identify the cities related to the test news. The results demonstrate the usefulness of the approach for enriching gazetteers, and hence for retrieving Location Indicators, with greater simplicity and extensibility than current work.
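A minimal sketch of the kind of corpus-based enrichment the abstract describes: sentences that mention both a known city and a cue phrase (street, square, avenue) yield indicator-to-city associations. The city list, cue words and regular expression are hypothetical stand-ins for the thesis's corpora and extraction rules.

```python
import re

# Hypothetical seed data: known cities and Portuguese cue words that
# introduce fine-grained places (street, square, avenue).
CITIES = {"Pelotas", "Porto Alegre"}
INDICATOR_CUES = ("Rua", "Praça", "Avenida")

def enrich_gazetteer(news_sentences):
    """Collect indicator -> cities associations from sentences that
    mention both a known city and a cue-word phrase."""
    gazetteer = {}
    for sent in news_sentences:
        cities = {c for c in CITIES if c in sent}
        if not cities:
            continue  # no anchor city, nothing to associate
        for cue in INDICATOR_CUES:
            # cue word followed by a capitalised name, e.g. "Praça Coronel"
            for m in re.finditer(rf"{cue}\s+[A-ZÀ-Ü]\w+", sent):
                gazetteer.setdefault(m.group(0), set()).update(cities)
    return gazetteer
```

Feeding it a news sentence such as "A feira ocorre na Praça Coronel, em Pelotas." associates the indicator "Praça Coronel" with the city Pelotas.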
APA, Harvard, Vancouver, ISO, and other styles
37

Guénec, Nadège. "Méthodologies pour la création de connaissances relatives au marché chinois dans une démarche d'Intelligence Économique : application dans le domaine des biotechnologies agricoles." Phd thesis, Université Paris-Est, 2009. http://tel.archives-ouvertes.fr/tel-00554743.

Full text
Abstract:
In barely a decade, the opening-up of economies and the worldwide acceleration of trade have transformed the competitive environment of companies. The field of activity has widened, opening up new markets with very attractive potential, such as the BRIC countries (Brazil, Russia, India and China). Of these four countries, impressive in area, population and economic potential, China is the least accessible and the most resistant to our understanding, on the one hand because of a linguistic system distinct from the Indo-European languages, and on the other because of a culture and a system of thought far removed from those of the West. Yet for an international company that wishes to extend its influence, or simply to hold its position on its own market, a presence on the Chinese market is today absolutely indispensable. How does a Western company approach a market which, by its very otherness, at first appears complex and fundamentally enigmatic? Six years of observation in China allowed us to identify the pitfalls in accessing information about the Chinese market. As on many foreign markets, our companies are exposed to sometimes unimaginable destabilisation. The inability to "read" China and to understand what is at stake there despite sustained efforts, and the tactical errors that follow from a poor assessment of the market or a biased understanding of the interplay of actors, prompted us to devise a finer methodology for deciphering the business environment, one that can offer French companies an approach to China as a market.
The methods of Economic Intelligence (EI) then emerged as the most suitable, for several reasons: the goal of EI is to find the right action to take, the specific context in which the organisation operates is taken into account, and the analysis is carried out in real time. While a cultural approach is made of human interactions and subtleties, a "market" approach is now possible through automatic information processing and the modelling that follows from it. Indeed, in any Economic Intelligence process accompanying the establishment of an activity abroad, a large share of the strategically relevant information comes from analysing the interplay of the actors operating in the same sector. Such automation of knowledge creation constitutes, in addition to the human approach "in the field", real added value for understanding the interactions between actors, because it provides a body of knowledge which, by covering larger entities, has a global character that would otherwise be out of reach. Since China has strongly developed the technologies of the knowledge economy, it is now possible to explore Chinese scientific and technical information sources. We are moreover convinced that Chinese information will become ever more crucial over time. It is therefore urgent for organisations to equip themselves with systems that not only give access to this information but also make it possible to process the masses of information coming from these sources. Our work consists mainly in adapting tools and methods from French research to the analysis of Chinese information, with a view to creating elaborated knowledge. The MATHEO tool will provide, through bibliometric processing, a worldwide view of Chinese strategy.
TETRALOGIE, a tool dedicated to data mining, will be adapted to the linguistic and structural environment of Chinese scientific databases. In addition, we are taking part in the development of an information retrieval tool (MEVA) that integrates recent findings from the cognitive sciences, and we are working on its application to the retrieval of relevant and adequate Chinese information. As this thesis was carried out under a CIFRE contract with the Limagrain Group, a contextualised application of our approach is implemented in the field of agricultural biotechnology, and more particularly around the current research stakes of wheat hybridisation techniques. The analysis of this leading-edge sector, which is at once a field of fundamental, experimental and applied research, is currently giving rise to patent filings and to commercial products, and is therefore a highly topical subject. Is China really, as we suppose, a new world territory of twenty-first-century scientific research? Can EI methods be adapted to the Chinese market? After providing elements of an answer to these questions in the first two parts of our study, we set out in the third part the context of agricultural biotechnology and the global stakes, in terms of economic, financial and also geopolitical power, of research on wheat hybridisation. Finally, in the last part, we show how to carry out information retrieval on the Chinese market, and the major added value that the analysis of Chinese information represents.
APA, Harvard, Vancouver, ISO, and other styles
38

Freitas, Sérgio Augusto Sousa. "User interfaces for geographic information retrieval systems." Master's thesis, 2007. http://hdl.handle.net/10451/13870.

Full text
Abstract:
Master's thesis in Informatics, presented to the Universidade de Lisboa through the Faculdade de Ciências, 2007
Current Web search services do not support the geographic context, due to the lack of support for the geographic information that can be inferred from the analysis of Web pages and for the geographic context that can be extracted from user queries. It is therefore important to research geographically aware search services that can improve retrieval efficiency using this information. This thesis investigates whether geographically enabled Web information retrieval user interfaces can improve retrieval efficiency and raise user satisfaction. To achieve this goal, a fully featured geographic information retrieval user interface was designed, implemented, and integrated into a Portuguese research search engine. The user interface was evaluated using a user-centred methodology, which showed that supporting the geographic context brings actual benefits to users during the information-seeking process.
APA, Harvard, Vancouver, ISO, and other styles
39

Abargues, Casanova Carlos. "Discovery and retrieval of Geographic data using Google." Master's thesis, 2009. http://hdl.handle.net/10362/2536.

Full text
Abstract:
Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies
The growth of content on the Internet makes effective ways to retrieve the desired information fundamental, and search engines are the applications that fulfil this need. In recent years the number of services and tools that add and use the geographic component of content published on the World Wide Web has clearly increased, a trend towards the so-called GeoWeb. This web paradigm promotes searching for content based also on its geographic component. This thesis presents a study of the possibilities of using the different services and tools that Google offers to discover and retrieve geographic information. The study is based on the use of Keyhole Markup Language (KML) files to express geographic data and on the analysis of their discovery and indexing. The discovery process is performed by crawlers, and the study aimed to obtain objective measures of the time and effectiveness of the process by simulating a real-case scenario. On the other hand, the different KML elements that could hold information and metadata were analyzed. To better understand which of these elements are effectively used in the indexing process, a test data set composed of KML files containing information in these elements was published, and the obtained results were analyzed and discussed. With the experiment's results, the use of these services and tools is analyzed as a general solution for Geographic Information Retrieval. Finally, some considerations about future studies that could improve the usage of these tools are presented.
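Since the experiment revolves around which KML elements crawlers index, a minimal example of producing such a file may help. The sketch below builds a single Placemark with metadata in the <name> and <description> elements, using only the Python standard library; the element choice mirrors the kind of test files the study describes, not its actual data set.

```python
import xml.etree.ElementTree as ET

KML_NS = "http://www.opengis.net/kml/2.2"

def make_placemark_kml(name, description, lon, lat):
    """Build a minimal KML document with one Placemark, placing candidate
    metadata in the <name> and <description> elements."""
    ET.register_namespace("", KML_NS)  # serialize with the default KML namespace
    kml = ET.Element(f"{{{KML_NS}}}kml")
    pm = ET.SubElement(kml, f"{{{KML_NS}}}Placemark")
    ET.SubElement(pm, f"{{{KML_NS}}}name").text = name
    ET.SubElement(pm, f"{{{KML_NS}}}description").text = description
    point = ET.SubElement(pm, f"{{{KML_NS}}}Point")
    # KML coordinates are longitude,latitude,altitude
    ET.SubElement(point, f"{{{KML_NS}}}coordinates").text = f"{lon},{lat},0"
    return ET.tostring(kml, encoding="unicode")
```

Writing the returned string to a `.kml` file served on a public URL is the kind of artifact a crawler-indexing experiment like this one would publish.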
APA, Harvard, Vancouver, ISO, and other styles
40

Lin, Yu-Yang, and 林育暘. "Store Name Extraction and Name-Address Matching for Geographic Information Retrieval." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/11588483903176116990.

Full text
Abstract:
Master's thesis
National Central University
Department of Computer Science and Information Engineering
102 (ROC academic year, corresponding to 2013)
Mobile devices were the trend of 2014. According to an IDC report, tablet unit shipments exceeded those of PCs for the first time in 2013 Q4, and smartphones already exceed all other devices in unit shipments and market share. Location-based services (LBS) play an important role in this trend: because the devices are mobile, many new needs have arisen, for example navigation or searching for a restaurant or gas station. An LBS usually needs a POI (Point-of-Interest) database to support it. The Web is the largest data source; its data come from website managers, crowdsourcing, and people sharing information, and include addresses, names, phone numbers, and comments. Many methods exist today to extract address-associated information, but they usually face the challenge of extracting POI names, which limits information retrieval. Our system is divided into three parts: Taiwan address normalization, store named-entity recognition (Store NER), and address-store-name matching. Finally, users can search store names on a mobile device and immediately get information such as address, telephone, and comments. For the Store NER part, our research proposes common characteristics of store and organization names and uses them as features in a CRF model, improving the recognition results.
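A small sketch of the suffix-style naming features the abstract mentions, written as the kind of per-token feature dictionary typically fed to a CRF tagger. The suffix list and feature names are invented for illustration; the thesis's actual feature set is not reproduced here.

```python
# Hypothetical suffix cues shared by store/organization names
# (restaurant, bookstore, company, bank).
STORE_SUFFIXES = ("餐廳", "書店", "公司", "銀行")

def token_features(tokens, i):
    """Build a feature dict for token i of a segmented sentence, of the
    kind consumed by CRF toolkits; the suffix features approximate the
    naming cues the thesis exploits for store-name recognition."""
    tok = tokens[i]
    return {
        "token": tok,
        "is_store_suffix": tok.endswith(STORE_SUFFIXES),
        "prev_is_store_suffix": i > 0 and tokens[i - 1].endswith(STORE_SUFFIXES),
    }
```

For the segmented name ["鼎泰豐", "餐廳"], the second token fires the suffix feature, signalling to the tagger that the preceding tokens likely form a store name.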
APA, Harvard, Vancouver, ISO, and other styles
41

Gluck, Myron Henry. "Understanding performance in information systems an investigation of system and user views of geographic information /." 1993. http://books.google.com/books?id=jgHhAAAAMAAJ.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Mearns, Martie Alèt. "Requirements of a web-based geographic information system clearinghouse." Thesis, 2012. http://hdl.handle.net/10210/7532.

Full text
Abstract:
M.Inf.
Users of geographic information systems (GIS) often face challenges in identifying, locating and gaining access to the digital data used in GIS applications. Selecting appropriate data from the large volumes available, gaining access to that data, and establishing the distribution of data from one central source are necessary tasks for improving the dissemination of GIS data. They are difficult tasks, however, because many users are unaware of the full range of available digital GIS data. One mechanism that could improve access to digital GIS data is the Web-based GIS clearinghouse. This study was initiated to determine the requirements of GIS clearinghouses for optimum accessibility to digital GIS data. A literature study investigated the nature of the data used in GIS clearinghouses, current trends in GIS data on the Web, and the unique characteristics of the Web that can increase accessibility to digital GIS data. A selection of clearinghouses was evaluated to determine variables that can be translated into criteria, from which a model for the evaluation of GIS clearinghouses was established. This model can act as a working document or checklist for users evaluating GIS clearinghouses, or for designers creating new or improving existing GIS clearinghouses.
APA, Harvard, Vancouver, ISO, and other styles
43

Hu, Yonggang. "A web-based 2D/3D geospatial image visualization system /." 2004. http://wwwlib.umi.com/cr/yorku/fullcit?pMQ99327.

Full text
Abstract:
Thesis (M.Sc.)--York University, 2004. Graduate Programme in Earth and Space Science & Engineering.
Typescript. Includes bibliographical references (leaves 136-143). Also available on the Internet. MODE OF ACCESS via web browser by entering the following URL: http://wwwlib.umi.com/cr/yorku/fullcit?pMQ99327
APA, Harvard, Vancouver, ISO, and other styles
44

Weyman, Tamara R., University of Western Sydney, College of Health and Science, and School of Natural Sciences. "Spatial information sharing for better regional decision making." 2007. http://handle.uws.edu.au:8081/1959.7/17592.

Full text
Abstract:
The overall aim of this research project was to determine whether a technological spatial innovation, such as online spatial portal (OSP), would provide an effective mechanism to support better policy dialogue between the technical capacity and decision making spheres within and between local government, enabling improved policy development and application. This was addressed by using a qualitative, multi-methodological research methodology to examine both current theory and the practical experiences and opinions of local government professionals. The literature review focused on the emerging theory field of ‘policy dialogue’ - the local governance and the importance of spatial information (SI) and geographic information systems (GIS) for supporting decisions. The interview analysis of sample Greater Western Sydney (GWS) council professionals confirmed the complexity of local government policy. A significant issue that hindered policy development across the participating local governments was the occurrence of silo cultures within internal and external relationships between council officers. The second interview phase with GWS council professionals, followed by a demonstration of an OSP concept (GWSspatial), identified the applications, opportunities and challenges for the development and use of a technological spatial innovation. The key applications identified were- sharing and knowledge management of SI, immediate management of SI, immediate online access and integration of local/regional SI, and analysis opportunities to facilitate purposeful dialogue and informed decision making by council professionals within a region. Policy framework case studies were conducted at three scales: the Pitt Town development – at LGA level; Bushfire emergency management – at cross jurisdictional level; and the Sydney Metropolitan Strategy - at regional level. 
The catalysts that trigger the need for, acceptance of, and commitment to a technological spatial innovation among decision makers, thereby supporting its key applications, include disaster response, critical environmental management challenges, and regional land use planning and management.
Doctor of Philosophy (PhD)
APA, Harvard, Vancouver, ISO, and other styles
45

"Knowledge-Driven Methods for Geographic Information Extraction in the Biomedical Domain." Doctoral diss., 2019. http://hdl.handle.net/2286/R.I.55581.

Full text
Abstract:
Accounting for over a third of all emerging and re-emerging infections, viruses represent a major public health threat, which researchers and epidemiologists across the world have been attempting to contain for decades. Recently, genomics-based surveillance of viruses through methods such as virus phylogeography has grown into a popular tool for infectious disease monitoring. When conducting such surveillance studies, researchers need to manually retrieve geographic metadata denoting the location of infected host (LOIH) of viruses from public sequence databases such as GenBank and any publication related to their study. The large volume of semi-structured and unstructured information that must be reviewed for this task, along with the ambiguity of geographic locations, make it especially challenging. Prior work has demonstrated that the majority of GenBank records lack sufficient geographic granularity concerning the LOIH of viruses. As a result, reviewing full-text publications is often necessary for conducting in-depth analysis of virus migration, which can be a very time-consuming process. Moreover, integrating geographic metadata pertaining to the LOIH of viruses from different sources, including different fields in GenBank records as well as full-text publications, and normalizing the integrated metadata to unique identifiers for subsequent analysis, are also challenging tasks, often requiring expert domain knowledge. Therefore, automated information extraction (IE) methods could help significantly accelerate this process, positively impacting public health research. However, very few research studies have attempted the use of IE methods in this domain.
This work explores the use of novel knowledge-driven geographic IE heuristics for extracting, integrating, and normalizing the LOIH of viruses based on information available in GenBank and related publications; when evaluated on manually annotated test sets, the methods were found to have a high accuracy and shown to be adequate for addressing this challenging problem. It also presents GeoBoost, a pioneering software system for georeferencing GenBank records, as well as a large-scale database containing over two million virus GenBank records georeferenced using the algorithms introduced here. The methods, database and software developed here could help support diverse public health domains focusing on sequence-informed virus surveillance, thereby enhancing existing platforms for controlling and containing disease outbreaks.
Dissertation/Thesis
Doctoral Dissertation Biomedical Informatics 2019
APA, Harvard, Vancouver, ISO, and other styles
46

Hill, Linda Ladd. "Access to geographic concepts in online bibliographic files: effectiveness of current practices and the potential of a graphic interface." 1990. http://books.google.com/books?id=jwHhAAAAMAAJ.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Halbleib, Michael D. "Using advanced spreadsheet features for agricultural GIS applications." Thesis, 2001. http://hdl.handle.net/1957/31441.

Full text
Abstract:
A GIS analysis procedure was developed to explore relationships between imagery, yield data, soil information, and other assessments of a field or orchard. A set of conversion utilities, a spreadsheet, and an inexpensive shape file viewer were used to manipulate, plot, and display data. Specific features described include procedures used to: 1) display automated yield monitoring and aerial imagery data as surface maps for visual analysis, 2) generate maps from gridded soil sampling schemes that display either the collected soil data values or management information derived from further manipulation of the sample values, 3) evaluate relationships among data layers such as yield monitor, imagery, and soil data, and 4) conduct an upper boundary line evaluation of potential yield-limiting factors. The analysis process is demonstrated on wheat, meadowfoam, and hazelnut data from crops grown in Oregon.
Graduation date: 2002
APA, Harvard, Vancouver, ISO, and other styles
48

Ali, Khaled. "Application of GeoDAS and other advanced GIS technologies for modeling stream sediment geochemical distribution patterns to assess gold resources potential in Yunnan Province, South China /." 2005. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&res_dat=xri:pqdiss&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&rft_dat=xri:pqdiss:MR19718.

Full text
Abstract:
Thesis (M.Sc.)--York University, 2005. Graduate Programme in Earth and Space Science.
Typescript. Includes bibliographical references (leaves 136-151). Also available on the Internet via the URL cited above.
APA, Harvard, Vancouver, ISO, and other styles
49

Teka, Brhane Bahrishum. "A systematic comparison of spatial search strategies for open government datasets." Master's thesis, 2019. http://hdl.handle.net/10362/67708.

Full text
Abstract:
Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies
Datasets produced or collected by governments are being made publicly available for re-use. Open government data portals help realize such reuse by providing list of datasets and links to access those datasets. This ensures that users can search, inspect and use the data easily. With the rapidly increasing size of datasets in open government data portals, just like it is the case with the web, nding relevant datasets with a query of few keywords is a challenge. Furthermore, those data portals not only consist of textual information but also georeferenced data that needs to be searched properly. Currently, most popular open government data portals like the data.gov.uk and data.gov.ie lack the support for simultaneous thematic and spatial search. Moreover, the use of query expansion hasn't also been studied in open government datasets. In this study we have assessed di erent spatial search strategies and query expansions' performance and impact on user relevance judgment. To evaluate those strategies we harvested machine readable spatial datasets and their metadata from three English based open government data portals, performed metadata enhancement, developed a prototype and performed theoretical and user evaluation. According to the results from the evaluations keyword based search strategy returned limited number of results but the highest relevance rating. In the other hand aggregated spatial and thematic search improved the number of results of the baseline keyword based strategy with a 1 second increase in response time and but decreased relevance rating. Moreover, strategies based on WordNet Synonyms query expansion exhibited the highest relevance rated rst seven results than all other strategies except the keyword based baseline strategy in three out of the four query terms. 
Regarding the use of Hausdor distance and area of overlap, since documents were returned as results only if they overlap with the query, the number of results returned were the same in both spatial similarities. But strategies using Hausdor distance were of higher relevance rating and average mean than area of overlap based strategies in three of the four queries. In conclusion, while the spatial search strategies assessed in this study can be used to improve the existing keyword based OGDs search approaches, we recommend OGD developers to also consider using WordNet Synonyms based query expansion and hausdor distance as a way of improving relevant spatial data discovery in open government datasets using few keywords and tolerable response time.
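The thesis compares Hausdorff distance with area of overlap as spatial similarity measures between a query region and a dataset's footprint. As a rough, self-contained illustration (not the author's implementation), the two measures over bounding boxes might be sketched as follows, with boxes given as (minx, miny, maxx, maxy) and the Hausdorff distance approximated over the boxes' corner points:

```python
import math

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two finite point sets."""
    def directed(p, q):
        # Greatest distance from any point in p to its nearest point in q.
        return max(min(math.dist(x, y) for y in q) for x in p)
    return max(directed(a, b), directed(b, a))

def corners(box):
    """Corner points of an axis-aligned box (minx, miny, maxx, maxy)."""
    minx, miny, maxx, maxy = box
    return [(minx, miny), (minx, maxy), (maxx, miny), (maxx, maxy)]

def overlap_area(r1, r2):
    """Area of intersection of two axis-aligned boxes; 0 if disjoint."""
    w = min(r1[2], r2[2]) - max(r1[0], r2[0])
    h = min(r1[3], r2[3]) - max(r1[1], r2[1])
    return max(w, 0) * max(h, 0)
```

A smaller Hausdorff distance (closer shapes) or a larger overlap area would then rank a dataset higher against the query footprint; the corner-point approximation is only exact for axis-aligned boxes of the kind used here.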
APA, Harvard, Vancouver, ISO, and other styles
50

Wing, Benjamin Patai. "Data-rich document geotagging using geodesic grids." Thesis, 2011. http://hdl.handle.net/2152/ETD-UT-2011-05-3632.

Full text
Abstract:
This thesis investigates automatic geolocation (i.e. identification of the location, expressed as latitude/longitude coordinates) of documents. Geolocation can be an effective means of summarizing large document collections and is an important component of geographic information retrieval. We describe several simple supervised methods for document geolocation using only the document’s raw text as evidence. All of our methods predict locations in the context of geodesic grids of varying degrees of resolution. We evaluate the methods on geotagged Wikipedia articles and Twitter feeds. For Wikipedia, our best method obtains a median prediction error of just 11.8 kilometers. Twitter geolocation is more challenging: we obtain a median error of 479 km, an improvement on previous results for the dataset.
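The geodesic grids referred to here discretise the globe into latitude/longitude cells of a fixed angular size, so that geolocation becomes a classification problem over cells and the predicted location is a representative point of the winning cell. A minimal sketch of such a discretisation (function names and the one-degree default are illustrative, not taken from the thesis):

```python
def grid_cell(lat, lon, deg=1.0):
    """Index of the geodesic grid cell of size `deg` degrees containing (lat, lon)."""
    row = int((lat + 90) // deg)    # rows count up from the south pole
    col = int((lon + 180) // deg)   # columns count east from the antimeridian
    return row, col

def cell_center(row, col, deg=1.0):
    """Cell centre in degrees, usable as the predicted location for that cell."""
    return (row * deg - 90 + deg / 2, col * deg - 180 + deg / 2)
```

Prediction error can then be measured as the distance between a document's true coordinates and the centre of its predicted cell, which is how grid resolution trades off against achievable accuracy.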
APA, Harvard, Vancouver, ISO, and other styles