Academic literature on the topic 'LinkedData'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'LinkedData.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "LinkedData"

1

Tanabe, Kosuke, Yuka Egusa, and Masao Takaku. "A Subject Information Sharing System Based on Linked Data and FRSAD Model." Joho Chishiki Gakkaishi 26, no. 3 (2016): 260–76. http://dx.doi.org/10.2964/jsik_2016_030.

2

Piirainen, Esko, Eija-Leena Laiho, Tea von Bonsdorff, and Tapani Lahti. "Managing Taxon Data in FinBIF." Biodiversity Information Science and Standards 3 (June 26, 2019). http://dx.doi.org/10.3897/biss.3.37422.

Abstract:
The Finnish Biodiversity Information Facility, FinBIF (https://species.fi), has developed its own taxon database. This allows FinBIF taxon specialists to maintain their own, expert-validated view of Finnish species. The database covers national needs and can be rapidly expanded by our own development team. Furthermore, each taxon in the database is given a globally unique, persistent URI identifier (https://www.w3.org/TR/uri-clarification), which refers to the taxon concept, not just to the name. The identifier does not change if the taxon concept does not change. We aim to ensure compatibility with checklists from other countries by linking taxon concepts as Linked Data (https://www.w3.org/wiki/LinkedData), work started as part of the Nordic e-Infrastructure Collaboration (NeIC) DeepDive project (https://neic.no/deepdive). The database is used as the basis for observation/specimen searches and e-Learning and identification tools, and it is browsable by users of the FinBIF portal. The data is accessible to everyone under the CC-BY 4.0 license (https://creativecommons.org/licenses/by/4.0) in machine-readable formats.
The taxon specialists maintain the taxon data using a web application; currently, there are 60 specialists. All changes made to the data go live every night, and the nightly update interval allows the specialists a grace period to make their changes. Allowing the taxon specialists to modify the taxonomy database themselves leads to some challenges. To maintain the integrity of critical data, such as lists of protected species, we have had to limit what the specialists can do: changes to critical data are carried out by an administrator. The database has special features for linking observations to the taxonomy, including hidden species aggregates and tools to override how a certain name used in observations is linked to the taxonomy. Misapplied names remain an unresolved problem.
The most precise way to record an observation is to use a taxon concept. Most observations are still recorded using plain names, but it is possible for the observer to pick a concept. Also, when data is published in FinBIF from other information systems, the data providers can link their observations to the concepts using the identifiers of those concepts. The ability to use taxon concepts as the basis of observations means we have to maintain the concepts over time, a task that may become arduous in the future (Fig. 1).
As it stands now, the FinBIF taxon data model, including adjacent classes such as publication, person, image, and endangerment assessments, consists of 260 properties. If the data model were stored in a normalized relational database, there would be approximately 56 tables, which could be difficult to maintain; keeping track of a complete history of the data is also difficult in relational databases. Alternatively, we could use a document store for taxon data. However, document stores come with some difficulties: (1) much work is required to implement a system that does small atomic update operations; (2) batch updates modifying multiple documents usually require writing a script; and (3) they are not ideal for searches. We do use a document store for observation data, however, because document stores are well suited to storing large quantities of complex records.
In FinBIF, we have decided to use a triplestore for all small datasets, such as taxon data. More specifically, the data is stored according to the RDF specification (https://www.w3.org/RDF), with an RDF Schema defining the allowed properties for each class. Our triplestore implementation is an Oracle relational database with two tables (resource and statement), which gives us the ability to do SQL queries and updates. Small atomic updates are easy, as only the affected subset of triples is updated instead of the entire data entity. Maintaining a complete record of history comes with little effort, as it can be done at the level of individual triples. For performance-critical queries, the taxon data is loaded into an Elasticsearch (https://www.elastic.co) search engine.
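The two-table triplestore design the abstract describes (a resource table for URIs and a statement table for triples, in a relational database) can be sketched roughly as follows. This is a minimal illustration using SQLite rather than Oracle, and every table, column, property, and URI name in it is an assumption for demonstration, not FinBIF's actual schema.

```python
import sqlite3

# Two-table triplestore: "resource" maps URIs to ids, "statement"
# holds one row per triple. Names are illustrative assumptions.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE resource (
    id  INTEGER PRIMARY KEY,
    uri TEXT UNIQUE NOT NULL
);
CREATE TABLE statement (
    subject_id   INTEGER NOT NULL REFERENCES resource(id),
    predicate_id INTEGER NOT NULL REFERENCES resource(id),
    object       TEXT NOT NULL
);
""")

def resource_id(uri):
    """Return the id for a URI, inserting it if it is new."""
    conn.execute("INSERT OR IGNORE INTO resource (uri) VALUES (?)", (uri,))
    return conn.execute(
        "SELECT id FROM resource WHERE uri = ?", (uri,)).fetchone()[0]

def set_triple(subject, predicate, obj):
    """Atomically replace one property of one subject: only the
    affected triples are touched, not the whole record."""
    s, p = resource_id(subject), resource_id(predicate)
    with conn:  # one transaction
        conn.execute(
            "DELETE FROM statement WHERE subject_id = ? AND predicate_id = ?",
            (s, p))
        conn.execute("INSERT INTO statement VALUES (?, ?, ?)", (s, p, obj))

set_triple("http://example.org/MX.1", "scientificName", "Lynx lynx")
set_triple("http://example.org/MX.1", "scientificName",
           "Lynx lynx (Linnaeus, 1758)")

row = conn.execute(
    "SELECT object FROM statement s JOIN resource r ON s.subject_id = r.id "
    "WHERE r.uri = ?", ("http://example.org/MX.1",)).fetchone()
print(row[0])  # only the latest value remains
```

Because each triple is its own row, an update can also be logged per triple, which is how a complete history becomes cheap to keep, as the abstract notes.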

Dissertations / Theses on the topic "LinkedData"

1

Spahiu, Blerina. "Profiling Linked Data." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2017. http://hdl.handle.net/10281/151645.

Abstract:
Recently, the increasing diffusion of Linked Data (LD) as a standard way to publish and structure data on the Web has received growing attention from researchers and data publishers. LD adoption is reflected in different domains such as government, media, and the life sciences, building a powerful Web available to anyone. Despite the high number of datasets published as LD, their usage is still not fully exploited because they lack comprehensive metadata. Data consumers need to obtain information about a dataset's content in a fast and summarized form to decide whether it is useful for their use case at hand. Data profiling techniques offer an efficient solution to this problem, as they are used to generate metadata and statistics that describe the content of a dataset. Existing profiling techniques do not cover a wide range of use cases, and many challenges due to the heterogeneous nature of Linked Data are still to be overcome. This thesis presents doctoral research that tackles the problems related to profiling Linked Data. Even though data profiling is an umbrella term for the diverse descriptive information that characterizes a dataset, in this thesis we cover three aspects of profiling: topic-based, schema-based, and linkage-based. The profile provided in this thesis is fundamental for the decision-making process and is the basic requirement for understanding a dataset. We present an approach to automatically classify datasets into one of the topical categories used in the LD cloud. Moreover, we investigate the problem of multi-topic profiling. For schema-based profiling we propose a summarization approach that provides an overview of the relations in the data. Our summaries are concise and informative enough to summarize the whole dataset; moreover, they reveal quality issues and can help users in query formulation tasks. Many datasets in the LD cloud contain similar information about the same entity.
To fully exploit its potential, LD should make this information explicit. Linkage profiling provides information about the number of equivalent entities between datasets and reveals possible errors. The profiling techniques developed during this work are automatic and can be applied to different datasets independently of the domain.
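To give a rough feel for the kind of statistics such profiling produces, the sketch below computes a toy schema-based profile (predicate frequencies) and a toy linkage-based profile (counting owl:sameAs links between datasets) over a handful of invented triples. It is only an illustration of the general idea, not the thesis's actual method or code.

```python
from collections import Counter

# Invented RDF-like triples; all names are illustrative assumptions.
triples = [
    ("ex:ds1", "rdf:type",   "void:Dataset"),
    ("ex:p1",  "rdf:type",   "foaf:Person"),
    ("ex:p1",  "foaf:name",  "Ada"),
    ("ex:p2",  "rdf:type",   "foaf:Person"),
    ("ex:p2",  "foaf:name",  "Alan"),
    ("ex:p2",  "owl:sameAs", "other:p2"),
]

def schema_profile(triples):
    """Schema-based summary: how often each predicate is used."""
    return Counter(p for _, p, _ in triples)

def linkage_profile(triples):
    """Linkage-based summary: count owl:sameAs links, i.e. entities
    declared equivalent to entities in another dataset."""
    return sum(1 for _, p, _ in triples if p == "owl:sameAs")

profile = schema_profile(triples)
print(profile["rdf:type"])       # 3
print(linkage_profile(triples))  # 1
```

Even this crude summary answers the consumer's question from the abstract: what properties does the dataset use, and how well is it linked to others, without reading the whole dataset.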
2

Quadrelli, Davide. "RSLT: trasformazione di Open LinkedData in testi in linguaggio naturale tramite template dichiarativi." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amslaurea.unibo.it/11945/.

Abstract:
The spread of the Semantic Web and of semantic data in RDF format has created the need for a mechanism that transforms such information, easy for a machine to interpret, into natural language that is easy for humans to understand. The dissertation discusses the solutions found in the literature and, in detail, RSLT, a JavaScript library that attempts to solve this problem by enabling the creation of web applications able to perform these transformations through declarative templates. It also illustrates all the changes and modifications introduced in version 1.1 of the library, whose main new feature is support for SPARQL 1.0.
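The core idea the abstract describes, a declarative template that turns RDF triples about a resource into a natural-language sentence, can be illustrated with a toy sketch. This is not RSLT's actual API (RSLT is a JavaScript library, and this sketch is in Python); every name, triple, and template below is invented for illustration.

```python
# Invented triples about one subject, keyed by (subject, predicate).
triples = {
    ("ex:rslt", "ex:type"):    "JavaScript library",
    ("ex:rslt", "ex:purpose"): "transforming RDF data into natural-language text",
}

# A declarative template: slots name the predicates to fill in.
TEMPLATE = "RSLT is a {type} for {purpose}."

def render(subject, template, triples):
    """Fill each template slot with the subject's matching property value."""
    values = {pred.split(":")[1]: obj
              for (subj, pred), obj in triples.items() if subj == subject}
    return template.format(**values)

sentence = render("ex:rslt", TEMPLATE, triples)
print(sentence)
```

The template, not imperative code, decides how the data is verbalized, which is what makes the approach declarative: the same engine can render any resource given a suitable template.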

Book chapters on the topic "LinkedData"

1

Langer, André, Christoph Göpfert, and Martin Gaedke. "CARDINAL: Contextualized Adaptive Research Data Description INterface Applying LinkedData." In Lecture Notes in Computer Science, 11–27. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-74296-6_2.

