Theses on the topic "Open environmental data"

Follow this link to see other types of publications on the topic: Open environmental data.


Consult the 34 best theses for your research on the topic "Open environmental data".

Next to each source in the list of references there is an "Add to bibliography" button. Press the button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication in PDF format and read its abstract online, whenever it is available in the metadata.

Explore theses on a wide variety of disciplines and organise your bibliography correctly.

1

Sadler, Jeffrey Michael. "Hydrologic Data Sharing Using Open Source Software and Low-Cost Electronics". BYU ScholarsArchive, 2015. https://scholarsarchive.byu.edu/etd/4425.

Abstract
While it is generally accepted that environmental data are critical to understanding environmental phenomena, there are still improvements to be made in their consistent collection, curation, and sharing. This thesis describes two research efforts to improve two different aspects of hydrologic data collection and management. First described is a recipe for the design, development, and deployment of a low-cost environmental data logging and transmission system for environmental sensors and its connection to an open source data-sharing network. The hardware is built using several low-cost, open-source, mass-produced components. The system automatically ingests data into HydroServer, a standards-based server in the open source Hydrologic Information System (HIS) created by the Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI). A recipe for building the system is provided along with several test deployment results. Second, a connection between HydroServer and HydroShare is described. While the CUAHSI HIS system is intended to empower the hydrologic sciences community with better data storage and distribution, it lacks support for the kind of “Web 2.0” collaboration and social-networking capabilities that are increasing scientific discovery in other fields. The design, development, and testing of a software system that integrates CUAHSI HIS with the HydroShare social hydrology architecture is presented. The resulting system supports efficient archive, discovery, and retrieval of data; extensive creator and science metadata; assignment of a persistent digital identifier such as a Digital Object Identifier (DOI); scientific discussion and collaboration around the data; and other basic social-networking features. In this system, HydroShare provides functionality for social interaction and collaboration while the existing HIS provides the distributed data management and web services framework. The system is expected to enable scientists, for the first time, to access and share both national- and research lab-scale hydrologic time series in a standards-based web services architecture combined with a social network developed specifically for the hydrologic sciences. These two research projects address and provide a solution for significant challenges in the automatic collection, curation, and feature-rich sharing of hydrologic data.
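The logging-and-transmission loop at the heart of such a system can be sketched compactly. The snippet below is a minimal illustration only: the ingestion URL, site and variable codes, payload shape, and the random "sensor" are hypothetical stand-ins, since the thesis itself targets HydroServer's standards-based interfaces rather than this ad-hoc endpoint.

```python
import time
import random  # stands in for a real sensor driver (e.g. serial/I2C)
import requests

# Hypothetical ingestion endpoint and identifiers, for illustration only.
INGEST_URL = "https://example.org/hydroserver/ingest"
SITE_CODE = "TEST_SITE_01"
VARIABLE_CODE = "WaterTemp_C"

def read_sensor():
    """Placeholder for a real sensor read."""
    return round(random.uniform(10.0, 15.0), 2)

while True:
    observation = {
        "site": SITE_CODE,
        "variable": VARIABLE_CODE,
        "value": read_sensor(),
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }
    # Transmit the reading; a failed attempt is simply retried next cycle.
    try:
        requests.post(INGEST_URL, json=observation, timeout=10)
    except requests.RequestException as err:
        print("transmit failed, will retry:", err)
    time.sleep(900)  # 15-minute logging interval
```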
2

Montori, Federico <1990>. "Delivering IoT Services in Smart Cities and Environmental Monitoring through Collective Awareness, Mobile Crowdsensing and Open Data". Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amsdottorato.unibo.it/8957/1/THESIS_REV.pdf.

Abstract
The Internet of Things (IoT) is the paradigm that allows us to interact with the real world by means of networking-enabled devices and convert physical phenomena into valuable digital knowledge. Such a rapidly evolving field has leveraged the explosion of a number of technologies, standards and platforms. Consequently, different IoT ecosystems behave as closed islands and do not interoperate with each other, so the potential of the number of connected objects in the world is far from being totally unleashed. Typically, research efforts in tackling this challenge tend to propose new IoT platforms or standards; however, such solutions find obstacles in keeping up with the pace at which the field is evolving. Our work is different, in that it originates from the following observation: in use cases that depend on common phenomena, such as Smart Cities or environmental monitoring, a lot of useful data for applications is already in place somewhere, or devices capable of collecting such data are already deployed. For such scenarios, we propose and study the use of Collective Awareness Paradigms (CAPs), which offload data collection to a crowd of participants. We bring three main contributions: (1) we study the feasibility of using Open Data coming from heterogeneous sources, focusing particularly on crowdsourced and user-contributed data, which has the drawback of being incomplete, and we then propose a state-of-the-art algorithm that automatically classifies raw crowdsourced sensor data; (2) we design a data collection framework that uses Mobile Crowdsensing (MCS) and puts the participants and the stakeholders in a coordinated interaction, together with a distributed data collection algorithm that prevents users from collecting too much or too little data; (3) we design a Service Oriented Architecture that constitutes a unique interface to the raw data collected through CAPs by aggregating it into ad-hoc services; moreover, we provide a prototype implementation.
3

Dumpawar, Suruchi. "Open government data intermediaries : mediating data to drive changes in the built environment". Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/97994.

Abstract
Thesis: S.M., Massachusetts Institute of Technology, Department of Comparative Media Studies, 2015.
In recent years open data initiatives, which make government data publicly available in a machine-readable format for reuse and redistribution, have proliferated, driven by the launch of open-data government initiatives such as data.gov and data.gov.uk. Research on open data has focused on its potential for governance, its implications for transparency, accountability, and service delivery, and its limitations and barriers to use. However, less attention has been paid to the practices of data intermediaries, an emerging configuration of actors that plays an essential role in facilitating the use and reuse of data by aggregating open government data and enhancing it through a range of data practices. This thesis assesses the data practices of open government data intermediaries from three perspectives. First, it traces the development of open government data initiatives to contend that, at a moment when open data policy is seeing global diffusion with the potential of increasing social, political, and economic impact, there is a crucial need to assess the practices of intermediaries to understand how open government data is put to use. Second, it develops a framework to analyze the role of open government data intermediaries by proposing a definition for "the data intermediary function", constituted by a range of technical, civic, representational, and critical data practices. Third, it assesses the data practices of two open government data intermediaries, 596 Acres and Transparent Chennai, which as urban actors facilitate the conversion of open government data into actionable information for communities to effect changes in the built environment. In describing and assessing the tools, practices, and methods developed by open data intermediaries, this thesis explores the potential and limitations of data intermediaries, and offers recommendations that might inform future open government data initiatives that seek to mediate open government data to facilitate changes in the built environment.
4

Neira, Maria Elena. "An open architecture for data environments based on context interchange". Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/69352.

5

Wiggins, John Sterling. "Design and specification of a PC-based, open architecture environment controller". Thesis, Georgia Institute of Technology, 2000. http://hdl.handle.net/1853/17299.

6

Miles, Shaun Graeme. "An investigation of issues of privacy, anonymity and multi-factor authentication in an open environment". Thesis, Rhodes University, 2012. http://hdl.handle.net/10962/d1006653.

Abstract
This thesis performs an investigation into issues concerning the broad area of Identity and Access Management, with a focus on open environments. Through literature research the issues of privacy, anonymity and access control are identified. The issue of privacy is an inherent problem due to the nature of the digital network environment. Information can be duplicated and modified regardless of the wishes and intentions of the owner of that information unless proper measures are taken to secure the environment. Once information is published or divulged on the network, there is very little way of controlling the subsequent usage of that information. To address this issue a model for privacy is presented that follows the user-centric paradigm of meta-identity. The lack of anonymity, where security measures can be thwarted through the observation of the environment, is a concern for users and systems. If an attacker observes the communication channel and monitors the interactions between users and systems over a long enough period of time, it is possible to infer knowledge about the users and systems. This knowledge is used to build an identity profile of potential victims to be used in subsequent attacks. To address the problem, mechanisms for providing an acceptable level of anonymity while maintaining adequate accountability (from a legal standpoint) are explored. In terms of access control, the inherent weakness of single-factor authentication mechanisms is discussed. The typical mechanism is the username and password pair, which provides a single point of failure. By increasing the factors used in authentication, the amount of work required to compromise the system increases non-linearly. Within an open network, several aspects hinder wide-scale adoption and use of multi-factor authentication schemes, such as token management and the impact on usability. The framework is developed from a utopian point of view, with the aim of being applicable to many situations as opposed to a single specific domain. The framework incorporates multi-factor authentication over multiple paths using mobile phones and GSM networks, and explores the usefulness of such an approach. The models are in turn analysed, providing a discussion of the assumptions made and the problems faced by each model.
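The thesis's framework builds multi-factor authentication over mobile phones and GSM networks; the mechanics of a second factor can nevertheless be illustrated with a different, widely standardised technique. The sketch below implements a time-based one-time password (TOTP, RFC 6238), chosen here purely as a self-contained example of why a second factor raises the attacker's workload; it is not the mechanism developed in the thesis.

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password using HMAC-SHA1."""
    counter = int(time.time() // step)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# A server and a phone sharing `secret` compute the same short-lived code,
# so a stolen password alone no longer suffices to authenticate.
print(totp(b"shared-secret-key"))
```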
7

Kalibjian, Jeffrey R. "APPLICATION OF INTRUSION DETECTION SOFTWARE TO PROTECT TELEMETRY DATA IN OPEN NETWORKED COMPUTER ENVIRONMENTS". International Foundation for Telemetering, 2000. http://hdl.handle.net/10150/606817.

Abstract
International Telemetering Conference Proceedings / October 23-26, 2000 / Town & Country Hotel and Conference Center, San Diego, California
Over the past few years models for Internet-based sharing and selling of telemetry data have been presented [1] [2] [3] at ITC conferences. A key element of these sharing/selling architectures was security. This element was needed to ensure that information was not compromised while in transit, or to ensure particular parties had a legitimate right to access the telemetry data. While the software managing the telemetry data needs to be security conscious, the networked computer hosting the telemetry data to be shared or sold also needs to be resistant to compromise. Intrusion Detection Systems (IDS) may be used to help identify and protect computers from malicious attacks in which data can be compromised.
8

Triperina, Evangelia. "Visual interactive knowledge management for multicriteria decision making and ranking in linked open data environments". Thesis, Limoges, 2020. http://www.theses.fr/2020LIMO0010.

Abstract
The dissertation herein involves research in the field of visual representations aided by semantic technologies and ontologies in order to support decision and policy making procedures, in the framework of research and academic information systems. The visualizations will also be supported by data mining and knowledge extraction processes in the linked data environment. To elaborate, visual analytics techniques will be employed for the organization of the visualizations in order to present the information in such a way that will utilize human perceptual abilities and will eventually assist decision support and policy making procedures. Furthermore, the visual representation, and consequently the decision and policy making processes, will be ameliorated by means of semantic technologies based on conceptual models in the form of ontologies. Thus, the main objective of the proposed doctoral thesis consists of the combination of key semantic technologies with interactive visualisation techniques, based mainly on graph perception, in order to make decision support systems more effective. The application field will be research and academic information systems.
9

Neumann, Bradley C. "Is All Open Space Created Equal? A Hedonic Application within a Data-Rich GIS Environment". Fogler Library, University of Maine, 2005. http://www.library.umaine.edu/theses/pdf/NeumannBC2005.pdf.

10

Chivarar, Sonia and Haithem Hamdi. "Technology Convergence and Open Innovation : An Empirical Study on How Nexus of Forces Influences the Open Innovation Environment". Thesis, Internationella Handelshögskolan, Högskolan i Jönköping, IHH, Informatik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-23980.

Abstract
This study is conducted within the domains of technology convergence and the Open Innovation environment. Two frameworks have been adopted in the study, namely the Nexus of Forces and the Capability-Based Framework for Open Innovation. The first purpose of the investigation was to identify to what extent and in what ways the Nexus of Forces affects the knowledge capabilities within the Open Innovation environment. The second purpose of the investigation was to identify what practical implications the Nexus of Forces brings to Open Innovation practices. The investigation was conducted on a single company, Swisscom, by following a case study strategy. The methodological approach for collecting the data was a mixed-method approach with a concurrent embedded strategy. The study focused mainly on qualitative data, and the quantitative data was nested with the aim of strengthening the findings. For the primary data collection, six respondents were selected: Expert A and Expert B for interviews, and four managers for a survey. In regard to the first purpose, our findings have shown that practices of the Nexus of Forces have strategic implications for the process of managing knowledge capabilities. The NoF implications extend both directly and indirectly to the departments that work with Open Innovation projects, and at a meta-level to the higher organizational structures within the company. In regard to the second purpose, our findings have shown that practices of the Nexus of Forces have tactical implications for Open Innovation practices. The final outcome of the study is a theoretical model that displays the strategic and tactical implications of the Nexus of Forces on the knowledge capabilities and Open Innovation practices within the Open Innovation environment.
11

Bennett, Stacey Patricia. "An object oriented expert system for specifying computer data security requirements in an open systems environment". Thesis, University of Birmingham, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.341835.

12

Castro Ginard, Alfred. "Detection, characterisation and use of open clusters in a Galactic context in a Big Data environment". Doctoral thesis, Universitat de Barcelona, 2021. http://hdl.handle.net/10803/671790.

Abstract
Open clusters are groups of stars, gravitationally bound together, that were born from the same molecular cloud and thus share similar positions, kinematics, ages and metallicities. Traditional methods to detect open clusters rely on the visual inspection of regions of the sky to look for positional overdensities of stars, which are then checked to follow an isochrone pattern in a colour-magnitude diagram. The publication of the second Gaia data release, with more than 1.3 billion stars with parallax and proper motion measurements together with mean photometry in three broad bands, boosted the development of novel machine learning-based techniques to automatise the search for open clusters, using both the astrometric and photometric information. The characterised open clusters in the Galaxy are popular tracers of properties of the Galactic disc, such as the structure and evolution of the spiral arms, or testbeds for stellar evolution studies, because their astrophysical parameters are estimated with greater precision than for field stars. Therefore, a good understanding of the open cluster population in the Milky Way is key for Galactic archaeology studies. Our aim in this thesis is to transform classical methodologies for detecting patterns in astronomical data, which mostly rely on visual inspection, into an automatic data mining procedure that extracts meaningful information from stellar catalogues. We also aim to use the results of applying machine learning techniques to Gaia data in a broader Galactic context. We have developed a data mining methodology to blindly search for open clusters in the Galactic disc. First, we use a density-based clustering algorithm, DBSCAN, to search for overdensities in the five-dimensional astrometric parameter space in Gaia data. The deployment of the clustering step in a Big Data environment, at the MareNostrum supercomputer located in the Barcelona Supercomputing Center, prevents the search from being constrained by computational limitations. Second, the detected overdensities are classified into mere statistical or physical overdensities using an artificial neural network trained to recognise the isochrone pattern that open cluster member stars follow in a colour-magnitude diagram. We estimate astrophysical parameters such as ages, distances and line-of-sight extinctions for the whole open cluster population using an artificial neural network trained on well-known open clusters. We use this additional information, together with radial velocities gathered from different space-based and ground-based surveys, to trace the present-day Galactic spiral structure, using Gaussian Mixture Models to associate the young (< 30 Myr) open clusters with their mother spiral arms. We also describe the spiral arms' evolution during the last 80 Myr to provide new insights into the nature of the Milky Way spiral structure. The automatisation of the open cluster detection procedure, together with its deployment in a Big Data environment, has resulted in more than 650 new open clusters detected with this methodology. The new UBC clusters (named after the University of Barcelona) represent one-third of the current open cluster census (2017 objects with Gaia DR2 parameters), and they are the largest single contribution to the open cluster catalogue.
We are able to add 264 young open clusters (< 30 Myr) to the 84 high-mass star-forming regions traditionally used to trace spiral arms, to increase the Galactocentric azimuth range where the Milky Way spiral arms are defined, and to better estimate their present-day parameters. By analysing the age distribution of the open clusters across the Galactic spiral arms, and computing the spiral arms' pattern speeds by following the open clusters' orbits from their birthplaces, we are able to disfavour classical density waves as the main mechanism for the formation of the Milky Way spiral arms, favouring a transient behaviour. This thesis has shown that the use of machine learning, with proper treatment of the computational resources, has a long journey ahead in a data-dominated future for Astronomy.
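The clustering step lends itself to a compact sketch. The code below runs DBSCAN on synthetic stand-in data for the five astrometric dimensions; the eps and min_samples values are illustrative defaults, not the tuned parameters of the thesis, whose real pipeline ran over the full Gaia catalogue at the MareNostrum supercomputer.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)

# Synthetic stand-in for the five astrometric dimensions
# (l, b, parallax, pmra, pmdec): a diffuse field plus one compact cluster.
field = rng.normal(0.0, 1.0, size=(2000, 5))
cluster = rng.normal(0.5, 0.05, size=(100, 5))
X = np.vstack([field, cluster])

# Scale each dimension so a single eps radius is meaningful in 5D.
X_scaled = StandardScaler().fit_transform(X)
labels = DBSCAN(eps=0.25, min_samples=10).fit_predict(X_scaled)

n_found = len(set(labels)) - (1 if -1 in labels else 0)
print(f"overdensities found: {n_found}")  # candidates for the CMD-based check
```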
13

Ramoly, Nathan. "Contextual integration of heterogeneous data in an open and opportunistic smart environment : application to humanoid robots". Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLL003/document.

Abstract
Personal robots associated with ambient intelligence are an upcoming solution for domestic care. Helped by devices dispatched in the environment, robots could provide better care to users. However, such robots encounter challenges of perception, cognition and action. Indeed, such an association brings issues of variety, data quality and conflicts, leading to heterogeneous and uncertain data. These are challenges for both perception, i.e. context acquisition, and cognition, i.e. reasoning and decision making. With knowledge of the context, the robot can intervene through actions. However, it may encounter task failures due to a lack of knowledge or to context changes, causing the robot to cancel or delay its agenda. While the literature addresses these topics, it fails to provide complete solutions. In this thesis, we propose contributions exploring both reasoning and learning approaches to cover the whole spectrum of problems. First, we designed a novel context acquisition tool that supports and models the uncertainty of data. Secondly, we proposed a cognition technique that detects anomalous situations over uncertain data and takes decisions accordingly. Then, we proposed a dynamic planner that takes the latest context changes into consideration. Finally, we designed an experience-based reinforcement learning approach to proactively avoid failures. All our contributions were implemented and validated through simulations and/or with a small robot in a smart home platform.
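The final contribution, learning from experience to avoid failures, can be illustrated with a generic tabular Q-learning sketch; the states, actions, and rewards below are toy stand-ins, not the model of the thesis, which is not specified here.

```python
import random
from collections import defaultdict

# Failures yield negative reward, so the estimated value of failure-prone
# actions drops and the agent proactively prefers safer alternatives.
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2
ACTIONS = ["grasp_left", "grasp_right", "ask_for_help"]
Q = defaultdict(float)  # Q[(state, action)] -> estimated value

def choose(state):
    if random.random() < EPSILON:        # explore occasionally
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# One simulated experience: a grasp failed in a cluttered context.
update("table_cluttered", "grasp_left", reward=-1.0, next_state="table_cluttered")
print(choose("table_cluttered"))
```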
14

Rathnayaka, Mudiyanselage Udara Madushantha Somarathna. "Data quality analysis in a GIS environment of OpenStreetMap geodatabase for Sri Lanka". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019.

Abstract
The purpose of the present study is to analyze the data quality of the OpenStreetMap geodatabase in a GIS environment; the case study is a region of Sri Lanka. OpenStreetMap (OSM) is one of the most well-known crowd-sourced products, providing a global map base thanks to the mapping activity carried out by volunteers all around the world. As the quality of the collected information remains a significant concern for the geospatial information community and in geospatial data management, a qualitative and quantitative assessment of OSM data is of great importance, due to the large diffusion and adoption of this kind of volunteered geographic information (VGI). This study concerns the OSM dataset currently available for the Mawanella area in Sri Lanka and has been performed in an open-source Geographic Information System (GIS) environment, QGIS. OSM vector files are the raw materials for the analysis. The evaluation has been realized considering the main quality attributes to be maintained in a mapping product, based both on intrinsic properties and on the relationship with official databases available for the same area. The results of the study suggest that the current quality of OSM maps in the study area is fairly good, but completeness is poor and must be improved.
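One typical extrinsic check, comparing total OSM road length against an official reference layer, can be sketched in a few lines. The file names below are hypothetical, and a length ratio is only one of several completeness measures such a study would use.

```python
import geopandas as gpd

# Hypothetical inputs: an OSM roads extract and an official reference layer.
osm = gpd.read_file("mawanella_osm_roads.gpkg")
ref = gpd.read_file("mawanella_reference_roads.gpkg")

# Project to a metric CRS before measuring lengths (UTM 44N covers Sri Lanka).
osm_km = osm.to_crs(epsg=32644).geometry.length.sum() / 1000
ref_km = ref.to_crs(epsg=32644).geometry.length.sum() / 1000

print(f"OSM: {osm_km:.1f} km, reference: {ref_km:.1f} km")
print(f"length-based completeness: {100 * osm_km / ref_km:.1f}%")
```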
15

Rafes, Karima. "Le Linked Data à l'université : la plateforme LinkedWiki". Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS032/document.

Abstract
The Center for Data Science of the University of Paris-Saclay deployed a platform compatible with Linked Data in 2016. Because researchers face many difficulties utilizing these technologies, an approach and then a platform we call LinkedWiki were designed and tested over the university's cloud (IaaS) to enable the creation of modular virtual research environments (VREs) compatible with Linked Data. We are thus able to offer researchers a means to discover, produce and reuse the research data available within the Linked Open Data, i.e., the global information system emerging at the scale of the Web. This experience enabled us to demonstrate that the operational use of Linked Data within a university is perfectly possible with this approach. However, some problems persist, such as (i) compliance with Linked Data protocols and (ii) the lack of suitable tools for querying the Linked Open Data with SPARQL. We propose solutions to both these problems. In order to verify compliance with the SPARQL protocol within a university's Linked Data, we created the SPARQL Score indicator, which evaluates the compliance of SPARQL services before their deployment in the university's information system. In addition, to help researchers query the LOD, we implemented SPARQLets-Finder, a demonstrator which shows that it is possible to facilitate the design of SPARQL queries using autocompletion tools without prior knowledge of the RDF schemas within the LOD.
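The flavour of such a compliance check can be sketched with a single probe that exercises the standard SPARQL 1.1 Protocol; the real SPARQL Score indicator runs a fuller battery of tests, so this one-query version is only a toy stand-in.

```python
import requests

def sparql_probe(endpoint: str) -> bool:
    """Send a trivial query per the SPARQL 1.1 Protocol and check that the
    endpoint returns a well-formed JSON result set."""
    resp = requests.get(
        endpoint,
        params={"query": "SELECT * WHERE { ?s ?p ?o } LIMIT 1"},
        headers={
            "Accept": "application/sparql-results+json",
            "User-Agent": "sparql-probe-demo/0.1",  # polite client identification
        },
        timeout=15,
    )
    if resp.status_code != 200:
        return False
    body = resp.json()
    return "head" in body and "results" in body

print(sparql_probe("https://query.wikidata.org/sparql"))
```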
16

Paumelle, Martin. "Description multi-dimensionnelle de l'environnement à l'échelle des territoires : contribution pour la recherche de déterminants environnementaux dans l'étiologie des maladies chroniques". Electronic Thesis or Diss., Université de Lille (2022-....), 2023. http://www.theses.fr/2023ULILR050.

Abstract
Among chronic diseases, Crohn's disease (CD) and end-stage renal disease (ESRD) have a multifactorial etiology that remains partly unknown, with a strong suspicion of an environmental link. The spatial distribution of their incidence has been mapped at the municipal level in Northern France, using two health registers (Epimad and Nephronor). These spatial disparities in incidence serve as the starting point to investigate potential environmental determinants that may be involved in the onset of these diseases. The characterization of the environment and its link to health is often approached in a fragmented manner, focusing on a specific emission source, pollutant, or exposure medium. While these approaches are necessary, they may be limited in comprehending the complexity of the relationship between the environment and health, especially for multifactorial diseases with unknown environmental risk factors. In such cases, it is relevant to prioritize territorial and multidimensional strategies before potentially targeting specific environmental risk factors. In this context, how can multiple open environmental data sources be leveraged to identify territorial determinants of multifactorial diseases? The main objective of this thesis is to offer an integrated description of the environment at the territorial level to inform the etiology of the studied diseases. The strategy involved collecting and reusing open environmental data. This approach identified 24 data sources and generated 113 spatial indicators at the municipal level for four departments. These indicators allow for the characterization of contamination levels in various media (air, water, soil), pollutant emissions, the location of emission sources, land use, agricultural practices, the natural features of territories, and climate. Several methodologies were used to exploit these indicators and characterize the environment from a multidimensional perspective. A first approach involved developing composite spatial indices. These indices synthesize information from many indicators into a single global measure. Initially, vulnerability and resilience indices were calculated; they characterize the uneven spatial distribution of environmental determinants that have a beneficial or detrimental impact on health. Subsequently, composite indices of multi-media contamination (air, water, soil) were constructed. A second approach was developed using multivariate classification methods to create territorial typologies and describe the environmental profiles of municipalities. These results provide a more complex view of territories and have allowed us to understand how environmental pressures are distributed in space and overlap with each other. Finally, the results of these multidimensional approaches were linked to spatial variations in the incidence of chronic diseases, suggesting potential connections between the environment and the occurrence of these pathologies. For ESRD, associations were observed with urban pressure and fine particulate air pollution, corroborating the existing literature. For CD, links were suggested with agricultural practices, the natural characteristics of territories, and metallic soil pollution. Further epidemiological approaches are now needed to test these hypotheses and advance research in this area.
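The composite-index construction can be sketched with a toy indicator table; the min-max normalisation and unweighted mean below are simple illustrative choices, not necessarily the aggregation scheme used in the thesis.

```python
import pandas as pd

# Toy indicators at the municipal level; the real study used 113 of them.
df = pd.DataFrame(
    {
        "pm25": [8.1, 12.4, 15.0],       # air contamination
        "nitrates_water": [10, 35, 50],  # water contamination
        "soil_metals": [0.2, 0.5, 0.9],  # soil contamination
    },
    index=["commune_A", "commune_B", "commune_C"],
)

# Min-max normalise each indicator to [0, 1], then average into one index.
normalised = (df - df.min()) / (df.max() - df.min())
df["multi_contamination_index"] = normalised.mean(axis=1)
print(df["multi_contamination_index"])
```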
17

Reski, Nico. "Change your Perspective : Exploration of a 3D Network created with Open Data in an Immersive Virtual Reality Environment using a Head-mounted Display and Vision-based Motion Controls". Thesis, Linnéuniversitetet, Institutionen för medieteknik (ME), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-46779.

Abstract
Year after year, technologies are evolving at an incredibly rapid pace, becoming faster, more complex, more accurate and more immersive. Looking back just a decade, interaction technologies in particular have made a major leap. Just two years ago, in 2013, after being researched for quite some time, the hype around virtual reality (VR) aroused renewed enthusiasm, finally reaching mainstream attention as the so-called head-mounted displays (HMDs), devices worn on the head to grant a visual peek into the virtual world, gained more and more acceptance with end-users. Currently, humans interact with computers in a very counter-intuitive, two-dimensional way. The ability to experience digital content in the most natural human manner, by simply looking around and perceiving information from one's surroundings, has the potential to be a major game changer in how we perceive and eventually interact with digital information. However, this confronts designers and developers with new challenges in how to apply these exciting technologies, supporting interaction mechanisms to naturally explore digital information in the virtual world and ultimately overcome real-world boundaries. Within the virtual world, the only limit is our imagination. This thesis investigates an approach to naturally interacting with and exploring information based on open data within an immersive virtual reality environment using a head-mounted display and vision-based motion controls. For this purpose, an immersive VR application visualizing information as a network of European capital cities has been implemented, offering interaction through gesture input. The application places a major focus on the exploration of the generated network and the consumption of the displayed information. While the conducted user interaction study with eleven participants investigated their acceptance of the developed prototype, estimating their workload and examining their explorative behaviour, an additional dialog with five experts in the form of explorative discussions provided further feedback on the prototype's design and concept. The results indicate the participants' enthusiasm and excitement towards the novelty and intuitiveness of exploring information in a less traditional way than before, while challenging them with the applied interface and interaction design in a positive manner. The design and concept were also accepted by the experts, who valued the idea and implementation. They provided constructive feedback on the visualization of the information and encouraged even bolder use of the available 3D environment. Finally, the thesis discusses these findings and proposes recommendations for future work.
18

RAZZAK, FAISAL. "The Role of Semantic Web Technologies in Smart Environments". Doctoral thesis, Politecnico di Torino, 2013. http://hdl.handle.net/11583/2506366.

Abstract
Today semantic web technologies and Linked Data principles are providing formalism, standards, shared data semantics and data integration for unstructured data over the web. The result is a transformation from the Web of Interaction to the Web of Data and actionable information. At the crossroads lie our daily lives, which contain a plethora of unstructured data originating from everything from low-cost sensors and appliances to every computational element used in our modern lives, including computers, interactive watches, mobile phones, GPS devices, etc. These facts accentuate an opportunity for system designers to combine these islands of data into a large actionable information space which can be utilized by automated and intelligent agents. As a result, this phenomenon is likely to institute a space that is smart enough to provide humans with comfort of living and to build an efficient society. Thus, in this context, the focus of my research has been to propose solutions to problems in the domains of smart environments and energy management, under the umbrella of ambient intelligence. The potential role of semantic web technologies in these proposed solutions has been analysed, and architectures for these solutions were designed, implemented and tested.
19

Dunkel, Alexander. "Assessing the perceived environment through crowdsourced spatial photo content for application to the fields of landscape and urban planning". Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-207927.

Abstract
Assessing information on aspects of identification, perception, emotion, and social interaction with respect to the environment is of particular importance to the fields of natural resource management. Our ability to visualize this type of information has rapidly improved with the proliferation of social media sites throughout the Internet in recent years. While many methods to extract information on human behavior from crowdsourced geodata already exist, this work focuses on visualizing landscape perception for application to the fields of landscape and urban planning. Visualization of people’s perceptual responses to landscape is demonstrated with crowdsourced photo geodata from Flickr, a popular photo sharing community. A basic, general method to map, visualize and evaluate perception and perceptual values is proposed. The approach utilizes common tools for spatial knowledge discovery and builds on existing research, but is specifically designed for implementation within the context of landscape perception analysis and particularly suited as a base for further evaluation in multiple scenarios. To demonstrate the process in application, three novel types of visualizations are presented: the mapping of lines of sight in Yosemite Valley, the assessment of landscape change in the area surrounding the High Line in Manhattan, and individual location analysis for Coit Tower in San Francisco. The results suggest that analyzing crowdsourced data may contribute to a more balanced assessment of the perceived landscape, which provides a basis for a better integration of public values into planning processes.
Perception denotes the conscious process of subjectively understanding one's environment. The basis for this process is information gained through the senses, that is, from visual, olfactory, acoustic and other stimuli, but perception is also substantially shaped by internal processes: the human brain is continuously engaged, both consciously and unconsciously, in comparing sensory impressions with memories, simplifying, associating, predicting or comparing them. For this reason it is difficult to take the perception of places and landscapes into account in planning processes. Yet this is precisely what the European Landscape Convention demands, which defines landscape as an area "as perceived by local people or visitors" (ELC Art. 1, para. 38). While many advances, for example from the cognitive sciences, today help us understand the perception of individual people, urban and landscape planning has hardly benefited; knowledge about the interplay of the perceptions of many people is lacking. The urban planner Kevin Lynch was already concerned with this shared, collective 'image' of the human environment ("generalized mental picture", Lynch, 1960, p. 4), but since then hardly any notable progress has been made in capturing the general public perception of city and landscape. This was the occasion and motivation for the present work. A source of information so far unused in planning for capturing the perception of many people is crowdsourced data (also 'big data'), that is, large volumes of data contributed by many people on the Internet. Compared with conventional data, for example data collected by experts and made available by public agencies, crowdsourced data opens up a previously unavailable source of information for understanding the complex relationships between space, identity and subjective perception. Crowdsourced data contains merely traces of human decisions, but because of its sheer volume it is possible to extract substantial information about the perception of those who contributed it. This allows planners to understand how people perceive and interact with their immediate surroundings. Moreover, it is becoming increasingly important to take the views of many into account in planning processes (Lynam, De Jong, Sheil, Kusumanto, & Evans, 2007; Brody, 2004); the desire for public participation and the number of stakeholders involved grow constantly. Using this new source of information offers an alternative to conventional approaches such as surveys, which are used, for example, to measure the opinions, positions, values, norms or preferences of particular social groups. By making such socio-cultural values easier to determine, the results can help above all with the difficult weighting of conflicting interests and views. The view is shared that the use of crowdsourced data, by complementing expert assessments, can ultimately lead to a fairer, more balanced consideration of the general public in decision-making processes (Erickson, 2011, p. 1).
A large number of methods are already available for extracting important landscape-related information from this data source, for example assessing the attractiveness of landscapes, determining the significance of sights and landmarks, or estimating the travel preferences of user groups. However, many existing methods were found insufficient for the particular needs and the broad spectrum of questions concerning landscape perception in urban and landscape planning. The goal of the present work is to convey practice-relevant knowledge that allows planners to explore, visualize and interpret such data themselves. The key to successful implementation is seen in the synthesis of knowledge from three categories: theoretical foundations (1), technical knowledge of data processing (2), and knowledge of graphical visualization (3). The theoretical foundations are presented in the first part of the work (Part I), which discusses weaknesses of current methods and then proposes a new conceptual-technical approach aimed specifically at complementing existing methods. The second part (Part II) demonstrates the application of the approach on an example dataset, addressing questions ranging from data retrieval, processing, analysis and visualization to the interpretation of graphics in planning processes. The basis is a dataset of 147 million georeferenced photos and 882 million tags from the photo-sharing platform Flickr, contributed by 1.3 million users between 2007 and 2015. Using this data, the development of new visualization techniques is presented, including spatio-temporal tag clouds, an experimental technique for generating perception-weighted maps, the visualization of perceived landscape change, the mapping of perception-weighted lines of sight, and the evaluation of individual perception of and at specific places. The application of these techniques is tested and discussed for various test regions in the USA, Canada and Germany at all scales, including the recording and assessment of sight lines and visual relationships in Yosemite Valley, the monitoring of perceived change around the High Line in New York, the evaluation of individual perception for Coit Tower in San Francisco, and the assessment of regionally perceived, identity-forming landscape values for Baden-Württemberg and the Greater Toronto Area (GTA). Approaches to assessing the quality and validity of the visualizations are then presented. Finally, a specific implementation of the approach and the visualizations is briefly outlined and discussed using a concrete planning example, the London View Management Framework (LVMF). Above all, the work emphasizes the broad potential that the use of crowdsourced data holds for the assessment of landscape perception in urban and landscape planning. Crowdsourced photo data in particular is seen as an important additional source of information, since it provides a previously unavailable perspective on the general public perception of the environment.
While some limits remain for broader application, the experimental methods and techniques presented can already provide important insights into a whole range of perceived landscape values. At the conceptual level, the work represents a first foundation for further research. Before broad application in practice is possible, however, decisive questions must be resolved, for example concerning copyright, the definition of ethical standards within the profession, and the protection of the privacy of those involved. In the longer term, not only the use of the data is considered important, but also the exploitation of the essential opportunities this development offers for better communication with clients, stakeholders and the public in planning and decision-making processes.
20

Sao Pedro, Michael A. "Real-time Assessment, Prediction, and Scaffolding of Middle School Students’ Data Collection Skills within Physical Science Simulations". Digital WPI, 2013. https://digitalcommons.wpi.edu/etd-dissertations/168.

Abstract
Despite widespread recognition by science educators, researchers and K-12 frameworks that scientific inquiry should be an essential part of science education, typical classrooms and assessments still emphasize rote vocabulary, facts, and formulas. One of several reasons for this is that the rigorous assessment of complex inquiry skills is still in its infancy. Though progress has been made, there are still many challenges that hinder inquiry from being assessed in a meaningful, scalable, reliable and timely manner. To address some of these challenges and to realize the possibility of formative assessment of inquiry, we describe a novel approach for evaluating, tracking, and scaffolding inquiry process skills. These skills are demonstrated as students experiment with computer-based simulations. In this work, we focus on two skills related to data collection: designing controlled experiments and testing stated hypotheses. Central to this approach is the use and extension of techniques developed in the Intelligent Tutoring Systems and Educational Data Mining communities to handle the variety of ways in which students can demonstrate skills. To evaluate students' skills, we iteratively developed data-mined models (detectors) that can discern when students test their articulated hypotheses and design controlled experiments. To aggregate and track students' developing latent skill across activities, we use and extend the Bayesian Knowledge-Tracing framework (Corbett & Anderson, 1995). As part of this work, we directly address the scalability and reliability of these models' predictions, because we tested how well they predict for student data not used to build them. In doing so, we found that these models demonstrate the potential to scale because they can correctly evaluate and track students' inquiry skills. The ability to evaluate students' inquiry also enables the system to provide automated, individualized feedback to students as they experiment. As part of this work, we also describe an approach to provide such scaffolding to students. We also tested the efficacy of these scaffolds by conducting a study to determine how scaffolding impacts acquisition and transfer of skill across science topics. In doing so, we found that students who received scaffolding, versus students who did not, were better able to acquire skills in the topic in which they practiced, and were also better able to transfer skills to a second topic when scaffolding was removed. Our overall findings suggest that computer-based simulations augmented with real-time feedback can be used to reliably measure the inquiry skills of interest and can help students learn how to demonstrate these skills. As such, our assessment approach and the system as a whole show promise as a way to formatively assess students' inquiry.
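The skill-tracking core, the Bayesian Knowledge-Tracing update, is compact enough to state directly. The sketch below applies the standard equations from Corbett & Anderson (1995); the parameter values are placeholders rather than the ones fitted in this work.

```python
def bkt_update(p_know: float, correct: bool,
               p_learn: float = 0.1, p_slip: float = 0.1,
               p_guess: float = 0.2) -> float:
    """One Bayesian Knowledge-Tracing step: condition the latent mastery
    estimate on the observed attempt, then apply the learning transition."""
    if correct:
        posterior = (p_know * (1 - p_slip)) / (
            p_know * (1 - p_slip) + (1 - p_know) * p_guess)
    else:
        posterior = (p_know * p_slip) / (
            p_know * p_slip + (1 - p_know) * (1 - p_guess))
    return posterior + (1 - posterior) * p_learn

# Mastery estimate after two detector-labelled demonstrations of a skill.
p = 0.3                          # prior P(L0)
p = bkt_update(p, correct=True)
p = bkt_update(p, correct=False)
print(round(p, 3))
```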
21

Adugna, Leykun y Goran Laic. "Kan projekt med öppen källkod användas delvis eller helt för att uppfylla behoven för routing-applikationer?" Thesis, KTH, Medicinteknik och hälsosystem, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-272732.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
Resumen
In today's society it is not uncommon for companies and organizations to look for better, alternative open-source software to meet their needs. They seek software with the properties required to run their operations, potentially replacing proprietary software in order to save time and avoid unnecessary costs. This thesis examined companies' needs regarding routing applications and produced a recommendation using a purpose-built testbed. The testbed can be used by companies to determine whether a given piece of open-source software is worthwhile to adopt in their operations. The routing application that proved better than the existing one is FRRouting (Free Range Routing). The solution proposed by the study demonstrated its effect through a pilot project in which open source successfully replaced existing software in terms of quality, functionality, and cost-effectiveness.
Companies are looking to the open source community in the hope of finding a better alternative to their existing software suite. They are looking for software that has the properties required to run their business and that may help them avoid unnecessary costs and save time. This thesis examined companies' needs regarding routing applications and presented a recommendation based on a self-developed testbed. The testbed can be used by companies to decide whether implementing a desired open-source routing application would be beneficial. The routing application that gave the best result in this study is FRRouting (Free Range Routing). The solution proposed by the study proved effective through a pilot project in which an open-source program successfully replaced existing software while retaining the expected quality and functionality in a cost-effective way.
22

王珏琄. "The condition of applying open data for sustainable development by environmental groups in Taiwan". Thesis, 2018. http://ndltd.ncl.edu.tw/handle/a4egv9.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
23

Sayan, Bianca. "The Contribution of Open Frameworks to Life Cycle Assessment". Thesis, 2011. http://hdl.handle.net/10012/6336.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
Resumen
Environmental metrics play a significant role in behavioural change, policy formation, education, and industrial decision-making. Life Cycle Assessment (LCA) is a powerful framework for providing information on environmental impacts, but LCA data are under-utilized, difficult to access, and difficult to understand. Among the issues that must be resolved to increase the relevancy and use of LCA are accessibility, validation, reporting and publication, and transparency. This thesis proposes that many of these issues can be resolved through the application of open frameworks for LCA software and data. The open source software (OSS), open data, open access, and semantic web movements advocate the transparent development of software and data, inviting all interested parties to contribute. A survey was presented to the LCA community to gauge its interest in and receptivity to working within open frameworks, as well as its existing concerns with LCA data. Responses indicated dissatisfaction with existing tools and some interest in open frameworks, though interest in contributing was weak. The responses also pointed to transparency, the expansion of LCA information, and feedback as desirable areas for improvement. Software for providing online LCA databases was developed according to open source, open data, and linked data principles and practices. The produced software incorporates features that attempt to resolve issues identified in previous literature, in addition to needs defined from the survey responses. The developed software offers improvements over other databases in transparency, data structure flexibility, and the ability to facilitate user feedback. The software was implemented as a proof of concept, as a test-bed for attracting data contributions from LCA practitioners, and as a tool for interested users. The implementation allows users to add LCA data, to search through LCA data, and to use data from the software in separate independent tools. The research contributes to the LCA field by addressing barriers to improving LCA data and access, and by providing a platform on which LCA database tools and data can develop efficiently, collectively, and iteratively.
24

Sanchis, Huertas Ana. "Providing energy efficiency location-based strategies for buildings using linked open data". Master's thesis, 2012. http://hdl.handle.net/10362/8315.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
Resumen
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Climate change has been a major concern for humanity since the end of the 20th century. To improve and care for our environment, a set of measures has been developed to monitor and manage buildings, reduce their consumption, and raise their efficiency, including the integration of renewable energies and the implementation of passive measures such as improving the building envelope. Complex methodologies are used to achieve these objectives: different tools must be used and data translated between them, and a loss of accuracy from the detailed input information is most of the time unavoidable. Moreover, including these measures in the development of a project has become a trial-and-error process involving building characteristics, location data, and energy efficiency measures. The rise of new technologies capable of dealing with location-based data, and of semantics to relate and structure information in a machine-readable way, may allow us to provide a set of technical measures to improve energy efficiency in an accessible, open, understandable, and easy way from a few data about location and building characteristics. This work tries to define such a model and its necessary and sufficient set of data. Its application will provide customized strategies acting as pre-feasibility constraints to help buildings achieve their energy efficiency objectives from their very conception. The model intends to be useful for non-expert users who want to know about their energy-saving possibilities, and for professionals seeking a sustainable starting point for their projects.
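As a rough illustration of how such location-based constraints might be queried once the data are published as linked open data, the sketch below uses rdflib; the dataset file, the `ex:` vocabulary, and all property names are hypothetical stand-ins, not the model actually defined in the thesis.

```python
# Minimal linked-open-data sketch with rdflib; vocabulary and data are invented.
from rdflib import Graph

g = Graph()
g.parse("buildings.ttl", format="turtle")  # hypothetical building dataset

# Find efficiency measures applicable to buildings in a given climate zone.
query = """
PREFIX ex: <http://example.org/energy#>
SELECT ?building ?measure WHERE {
    ?building ex:climateZone  "B3" ;
              ex:envelopeType ?envelope .
    ?measure  ex:appliesToEnvelope ?envelope .
}
"""
for building, measure in g.query(query):
    print(building, "->", measure)
```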
25

Marshall, Lucianne M. "Progression of marine phytoplankton blooms and environmental dynamics from sea-ice coverage to open waters in the coastal Arctic: comparing experimental data with continuous cabled observations". Thesis, 2018. https://dspace.library.uvic.ca//handle/1828/10131.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
Resumen
In this thesis, I present a unique temporal study of phytoplankton, nutrient and environmental dynamics focussed on the transitional period between sea-ice cover and open waters in a coastal inlet of the Canadian Arctic during 2016. I also compared the 2016 experimental data with continuous observations made by the Ocean Networks Canada (ONC) underwater observatory. Surface seawater sampling was conducted in Cambridge Bay at high temporal resolution from June 16 to August 3 to measure phytoplankton carbon and nitrate utilisation, silica production, phytoplankton biomass, phytoplankton taxonomy and dissolved nutrients. Throughout the study period, nitrate concentrations averaged 0.67 ± 0.08 µmol L-1, and chlorophyll a and primary production were low at 0.11 ± 0.005 µg L-1 and 0.25 ± 0.02 µmol C L-1 d-1, respectively. The presence of sea-ice reduced physical mixing, which resulted in low surface nitrate concentrations. Phytoplankton assemblages, production rates and biomass were dominated by small flagellated cells (<5 µm) until late July, yet increases in temperature and nitrate later in the season enabled larger Chaetoceros spp. diatoms to bloom. The Chaetoceros bloom coincided with a peak in silica production (0.429 µmol Si L-1 d-1), which was otherwise low but variable. The time series was divided into three phases based on changes in environmental conditions; these phases were used to evaluate changes in biological dynamics. Phase I was characterised by sea-ice, low nitrate, and increasing phytoplankton biomass and primary production. Phase II was a transitional period with calm water conditions and a drop in phytoplankton biomass; however, an increase in the mean nitrate concentration enabled more consistent carbon fixation. Phase III showed greater environmental variability driven by mixing events; mixing of the water column enabled larger Chaetoceros spp. to become prevalent in the surface waters, contributing increasingly to biomass and carbon utilisation. Overall, the nutrient concentrations, levels of biomass and production rates in Cambridge Bay were more reflective of those of oligotrophic regions. When comparing the experimental data with observations made by the ONC observatory, a strong relationship between carbon utilisation and apparent oxygen utilisation became evident. This finding suggests that long-term in situ observations can potentially be used to monitor biological rates in the Arctic. The temporal resolution of this field study adds a seasonal perspective to our understanding of Arctic ecosystems, complements studies with greater spatial and interannual coverage, and can contribute to future numerical modelling of Arctic change. Furthermore, this study provides a first-time comparison between experimentally measured phytoplankton production and cabled observations in the Arctic.
Graduate
26

Chuang, Liang-Chieh y 莊良傑. "GOVERNMENT OPEN DATA PLATFORM, ENVIRONMENT SECURITY DATA AND HOUSING PRICES". Thesis, 2018. http://ndltd.ncl.edu.tw/handle/2q5s75.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
Resumen
Master's thesis
Yuan Ze University
Executive Master's Program of Management
ROC year 106 (2017)
To adapt to the digital age and the global open-government movement, the authorities began promoting an open government data platform following the resolution of the Executive Yuan's 3,322nd council meeting on November 8, 2012. According to the latest Global Open Data Index released by Open Knowledge International in 2017, Taiwan ranked first in 2015 and retained the title in 2016. With promotion by government units at all levels, roughly 35,000 datasets are currently available on the government's open data platform. Drawing on four environmental safety datasets (traffic accident locations, car theft, motorcycle theft, and residential burglary), this study focused on regions with high registered actual selling prices of real estate, divided Taoyuan into old and new districts, analyzed the characteristics of the four environmental factors, and generalized the relationships among them, so as to give the public new options when purchasing a house. Finally, based on hands-on experience with the Government Open Data platform, the study offers several suggestions intended to make its application more practical and convenient.
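A minimal sketch of the kind of district-level cross-tabulation such a study involves is shown below, assuming CSV extracts of the open datasets; the file names and columns are invented for illustration.

```python
# Hypothetical sketch: join crime-incident counts with actual-price registrations
# per district. File and column names are assumptions, not the study's data.
import pandas as pd

crimes = pd.read_csv("taoyuan_incidents.csv")  # columns: district, category
counts = (crimes.groupby(["district", "category"]).size()
                .unstack(fill_value=0))        # one column per crime category

prices = pd.read_csv("actual_price_registration.csv")  # columns: district, unit_price
avg_price = prices.groupby("district")["unit_price"].mean()

print(counts.join(avg_price.rename("avg_unit_price")))
```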
27

Tsou, Ya-Lun y 鄒亞崙. "Using XML Technology on The Query in Open GIS Data Environment". Thesis, 2004. http://ndltd.ncl.edu.tw/handle/95229223047375234151.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
Resumen
Master's thesis
National Cheng Kung University
Department of Surveying Engineering (graduate program)
ROC year 92 (2003)
Because different GIS software packages differ in data format, software architecture, and operating procedures, their built-in functions are usually applicable only to their native data formats. This inevitably creates obstacles to the sharing, distribution, and interoperability of geographic data. A typical scenario is the difficulty of issuing a spatial query over geographic data in different formats, because the corresponding query modules are often incompatible. Consequently, it is hard to query and acquire all available data in a distributed geographic data environment. Standardized description of geographic data offers the possibility that we no longer need to manage data in various formats simultaneously and can instead concentrate on a single standardized format, regardless of who created the data or in what format they were originally created. Geography Markup Language (GML), proposed by the OpenGIS Consortium, has emerged in recent years as a strong candidate for the standard description of geographic data. This research investigates querying data recorded in GML format and proposes a feasible operating procedure covering data access, filtering, and representation. The core idea is to return queried results in GML as well, containing only those features that fulfil the constraints. The merit of this approach is that we only need to handle data in GML format, and any GML viewer can be used to display queried results. Interaction with the GML query module is therefore no different from that with the corresponding query modules in current GIS software. We further introduce metadata into the querying process, as they can serve as an important reference for interpreting queried results, particularly when the results come from different GML files. The spatial query module in this research is based on the OGC topological relationship model, later expanded to take human spatial cognition into consideration. The link between primitive topological relationships and natural-language-like spatial predicates reduces the training required for naive users. With the development of the GML query module, differing geographic data formats are no longer a major obstacle in an OpenGIS data environment. We find it necessary to simultaneously manage spatial entities of different dimensionality and spatial data types while processing a spatial query. Moreover, the primitive topological relationships corresponding to an individual spatial query may differ depending on the spatial data types of the entities involved. When dealing with complex types of spatial entities, this deserves serious attention, as users may misinterpret returned results without ever noticing.
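To make the query idea concrete, a module of this kind maps a natural-language-like predicate such as "is inside" onto a primitive topological test. The sketch below parses GML with lxml and evaluates the predicate with shapely; the GML 3 namespace, element paths, and input file are assumptions for illustration, not the thesis's actual implementation.

```python
# Hypothetical sketch: filter GML polygon features by a spatial predicate.
from lxml import etree
from shapely.geometry import Point, Polygon

GML_NS = "http://www.opengis.net/gml"  # assuming a GML 3.1 document

def parse_polygons(gml_file):
    """Yield shapely Polygons built from gml:Polygon exterior rings."""
    tree = etree.parse(gml_file)
    for poly in tree.iter(f"{{{GML_NS}}}Polygon"):
        coords = [float(v) for v in poly.findtext(f".//{{{GML_NS}}}posList").split()]
        yield Polygon(list(zip(coords[0::2], coords[1::2])))

# Natural-language-like predicate "is inside" -> primitive test 'contains'.
query_point = Point(120.22, 23.00)
for polygon in parse_polygons("parcels.gml"):  # invented input file
    if polygon.contains(query_point):
        print("feature fulfils the spatial constraint")
```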
28

Teng, Yueh-chuan y 鄧岳荃. "Map Interface Content Interoperability in Geospatial SOA Environment with Open Geographic Data". Thesis, 2007. http://ndltd.ncl.edu.tw/handle/82459092928047031340.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
Resumen
Master's thesis
National Cheng Kung University
Department of Geomatics (graduate program)
ROC year 95 (2006)
Full accessibility and correct use of distributed geospatial resources are two critical issues for recent GIS developments. With recent progress in geospatial SOA, open geospatial data formats and web services have largely removed data-acquisition obstacles. How to develop a middleware environment that effectively integrates heterogeneous geospatial resources, takes advantage of the chaining capability of geospatial web services, and builds in professional geospatial knowledge has emerged as the next challenge. Map interface operations in a middleware environment were chosen as the major topic of this research. Besides taking full advantage of access to heterogeneous geographic data via web services, we aim to further improve map interface display and application via built-in cartographic knowledge in the middleware environment. To achieve better interoperability of heterogeneous data, a general-purpose data description framework based on the fundamental characteristics of geographic data is proposed. Complying with the ISO/TC 211 19100 series of international standards, the description framework enables all distributed geospatial features to automatically carry common and necessary descriptive information. The middleware can therefore interpret acquired data content in a standardized way and ensure the correct use of map operations. Serving as a common description framework, it can be applied to any application domain and expanded whenever necessary. We further established a prototype geospatial SOA following various OGC standards (WMS, WFS, WCTS, OpenLS, and Catalogue Service) that allows the middleware to collect and process required data via loose coupling of web services. Based on the proposed description framework and built-in cartographic knowledge, the developed middleware meets the demands of correct display and operation of heterogeneous data in a map interface and avoids possible misuse of data by naive users. It is clear that middleware will play a dominant role in bridging the gap between users and data providers in the future GIS environment. Though we focus only on the common characteristics of geographic data in this research, the proposed middleware environment is sufficiently flexible to further improve the integration of heterogeneous data by including additional domain-specific knowledge.
29

YANG, SHUN-WEN y 楊舜文. "An Implementation of ETC Open Data Visualization and Traffic Analysis Using ELK Stack Environment". Thesis, 2018. http://ndltd.ncl.edu.tw/handle/6uuze2.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
Resumen
Master's thesis
Tunghai University
Department of Computer Science and Information Engineering
ROC year 106 (2017)
The government is actively promoting Open Government Data, and its open data platform provides a wide variety of datasets; the most widely used, and most closely related to daily life, are weather and air quality information. After de-identification, data from the freeway electronic toll collection Traffic Data Collection System (TDCS) are also openly available on this platform. TDCS records millions of vehicle trips per day, and traditional analysis tables cannot present analyses effectively and quickly at that scale. This thesis builds an analysis system from the ELK Stack, a combination of three open-source software components (Elasticsearch, Logstash, and Kibana), to perform real-time analysis and statistics on the open TDCS data. Through the system's visualized charts, one can quickly understand current speed conditions and analyze traffic flow and departure statistics. The system fetches the open data with a Linux shell script, reads the cleaned data via Logstash, uses Logstash filters to categorize it, and exports it to the Elasticsearch database, where it is indexed; finally, Kibana displays the analysis results. The system overcomes the row limits of traditional pivot tables: searching and aggregating 500 million records takes about 0.3 seconds per request, and in simple query tests the Elasticsearch database was more than twice as fast as a non-indexed MariaDB.
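As an illustration of the kind of aggregation such a pipeline supports, the sketch below uses the official Elasticsearch Python client (8.x-style API); the index and field names are assumptions, not those used in the thesis.

```python
# Hypothetical sketch: average speed per gantry segment over one day of TDCS data.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

response = es.search(
    index="tdcs-trips",  # assumed index name populated by Logstash
    size=0,              # aggregations only, no raw hits
    query={"range": {"trip_time": {"gte": "2018-01-01", "lt": "2018-01-02"}}},
    aggs={
        "by_segment": {
            "terms": {"field": "gantry_segment", "size": 20},
            "aggs": {"avg_speed": {"avg": {"field": "speed_kmh"}}},
        }
    },
)
for bucket in response["aggregations"]["by_segment"]["buckets"]:
    print(bucket["key"], round(bucket["avg_speed"]["value"], 1), "km/h")
```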
30

Ho, Jheng-Ying y 何政穎. "Integrating Internet of Thing and Open Data into Hadoop Cloud Computing Based on CloudStack Virtual Environment". Thesis, 2015. http://ndltd.ncl.edu.tw/handle/2zjd4f.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
Resumen
Master's thesis
National Formosa University
Institute of Computer Science and Information Engineering
ROC year 103 (2014)
This study proposes a Mashup-technology-based virtualization cloud computing framework (MTVCCF) that integrates the Internet of Things, web services, and open data as the framework for the front-end system. CloudStack and Hadoop were employed to construct a virtualized cloud computing environment as the core framework of the back-end system: Hadoop cloud computing resolves big-data problems, while CloudStack is used to develop, manage, and configure the basic services for cloud computing. A kernel-based virtual machine (KVM) was applied to improve the extensibility of the cloud servers, even out differences in utilization, and reduce the risk of server crashes. To verify the feasibility of the MTVCCF, an elderly care cloud platform (ECCP) was developed to measure the physiological signals of elderly people. The platform integrates the Near Field Communication (NFC) protocol, Bluetooth, an electronic sphygmomanometer, and a wireless network to provide an Internet of Things framework in which objects communicate, transmitting the generated data through web services to the back-end system, where the data are computed and stored. Empirically, the big data generated by the ECCP were subjected to stress testing, which verified that the proposed MTVCCF can resolve the aforementioned problems regarding big data and Internet usage capacity.
31

SUNG, YI-HSUN y 宋羿勳. "An Analysis of the Production Indicators of Society, Economy and Environment with Data Mining - Taking Taiwan's Open Data as an Example". Thesis, 2017. http://ndltd.ncl.edu.tw/handle/574dg2.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
Resumen
Master's thesis
Tunghai University
Department of Information Management
ROC year 105 (2016)
In an age when the concepts of Open Government and open data are prevalent, Open Government Data (OGD) provides an information-sharing platform that enables the public to access governmental data. Using slacks-based-measure Data Envelopment Analysis (DEA-SBM), this paper classifies the data available on the OGD platform for the period 2013-2015 into three categories, identifies input and output factors, and formulates indicators to measure the efficiency of the government's economic, social, and environmental policies. The target variables and output indicators are identified with DEA-SBM and a decision-tree model, respectively. With these, the government can balance economic growth, social development, and environmental sustainability so as to enhance governance efficiency. The 13 indicator variables representing the interaction between economic, social, and environmental factors help us examine the social and environmental implications of Taiwan's economic development from 2013 to 2015.
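For reference, a standard formulation of the slacks-based measure underlying DEA-SBM (Tone, 2001) is reproduced below, with m inputs x, s outputs y, intensity vector λ, and input/output slacks s⁻, s⁺; this is the textbook model, not necessarily the exact variant used in the thesis.

```latex
\rho^{*} \;=\; \min_{\lambda,\, s^{-},\, s^{+}}
\frac{1 - \dfrac{1}{m}\sum_{i=1}^{m} s_i^{-}/x_{i0}}
     {1 + \dfrac{1}{s}\sum_{r=1}^{s} s_r^{+}/y_{r0}}
\quad \text{s.t.} \quad
x_{0} = X\lambda + s^{-}, \qquad
y_{0} = Y\lambda - s^{+}, \qquad
\lambda \ge 0,\; s^{-} \ge 0,\; s^{+} \ge 0 .
```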
32

Richter, Stefan [Verfasser]. "World libraries : towards efficiently sharing large data volumes in open untrusted environments while preserving privacy / vorgelegt von Stefan Richter". 2009. http://d-nb.info/1000139689/34.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
33

Aboualizadehbehbahani, Maziar. "Proposing a New System Architecture for Next Generation Learning Environment". Thesis, 2016. http://hdl.handle.net/1805/10289.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
Resumen
Indiana University-Purdue University Indianapolis (IUPUI)
Exchanging information and offering features through external interfaces is a vast but immensely valuable challenge, and learning environments are no exception. Nowadays, many different service providers compete in the learning systems market, each with its own advantages. On that premise, even large learning management systems are trying to cooperate with one another in order to be the best. For instance, Instructure is a substantial company that could easily staff a dedicated team to develop video conferencing functionality, but it chooses to use an open-source alternative instead: BigBlueButton. Unfortunately, different learning system manufacturers use different technologies for various reasons, making integration that much harder. Standards in learning environments have emerged to resolve problems with exchanging information and with providing and consuming functionality externally, while simultaneously minimizing the effort needed to integrate systems. In addition to defining and simplifying these standards, careful consideration is essential when designing new, comprehensive, and useful systems, as well as when adding interoperability to existing systems, all of which played a part in this research. In this research I reviewed most of the standards and protocols for integration in learning environments and proposed a revised approach for app stores in learning environments. Finally, as a case study, a learning tool was developed to expose essential functionalities of a social educational learning management system integrated with other learning management systems. This tool supports the dominant and most popular interoperability standards and can be added to a learning management system within seconds.
34

Anderson, Winston Noël. "Investigating the universality of a semantic web-upper ontology in the context of the African languages". Diss., 2016. http://hdl.handle.net/10500/21898.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
Resumen
Ontologies are foundational to, and upper ontologies provide semantic integration across, the Semantic Web. Multilingualism has been shown to be a key challenge to the development of the Semantic Web, and is a particular challenge to the universality requirement of upper ontologies. Universality implies a qualitative mapping from lexical ontologies, like WordNet, to an upper ontology, such as SUMO. Are a given natural language family's core concepts currently included in an existing, accepted upper ontology? Does SUMO preserve an ontological non-bias with respect to the multilingual challenge, particularly in the context of the African languages? The approach to developing WordNets mapped to shared core concepts in the non-Indo-European language families has highlighted these challenges and this is examined in a unique new context: the Southern African languages. This is achieved through a new mapping from African language core concepts to SUMO. It is shown that SUMO has no significant natural language ontology bias.
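The kind of lexical-to-upper-ontology lookup examined here can be sketched with NLTK's WordNet interface; the `sumo_map` dictionary below is a hypothetical stand-in for the real WordNet-SUMO mapping files distributed with SUMO.

```python
# Hypothetical sketch: check whether synsets for a word map to a SUMO concept.
from nltk.corpus import wordnet as wn  # requires the NLTK WordNet corpus

sumo_map = {"water.n.01": "Water", "river.n.01": "BodyOfWater"}  # illustrative only

for synset in wn.synsets("water"):
    concept = sumo_map.get(synset.name())
    print(synset.name(), "->", concept or "no SUMO mapping")
```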
Computing
M. Sc. (Computer Science)
