
Theses on the topic "IL. Semantic web"



Consult the 50 best theses for your research on the topic "IL. Semantic web".


You can also download the full text of the scholarly publication in PDF format and read its abstract online when this information is included in the metadata.

Browse theses on a wide variety of disciplines and organize your bibliography correctly.

1

Sawant, Anup Satish. « Semantic web search ». Connect to this title online, 2009. http://etd.lib.clemson.edu/documents/1263410119/.

2

Gessler, Damian, Gary Schiltz, Greg May, Shulamit Avraham, Christopher Town, David Grant et Rex Nelson. « SSWAP : A Simple Semantic Web Architecture and Protocol for semantic web services ». BioMed Central, 2009. http://hdl.handle.net/10150/610154.

Abstract:
BACKGROUND: SSWAP (Simple Semantic Web Architecture and Protocol; pronounced "swap") is an architecture, protocol, and platform for using reasoning to semantically integrate heterogeneous disparate data and services on the web. SSWAP was developed as a hybrid semantic web services technology to overcome limitations found in both pure web service technologies and pure semantic web technologies. RESULTS: There are currently over 2400 resources published in SSWAP. Approximately two dozen are custom-written services for QTL (Quantitative Trait Loci) and mapping data for legumes and grasses (grains). The remaining are wrappers to Nucleic Acids Research Database and Web Server entries. As an architecture, SSWAP establishes how clients (users of data, services, and ontologies), providers (suppliers of data, services, and ontologies), and discovery servers (semantic search engines) interact to allow for the description, querying, discovery, invocation, and response of semantic web services. As a protocol, SSWAP provides the vocabulary and semantics to allow clients, providers, and discovery servers to engage in semantic web services. The protocol is based on the W3C-sanctioned first-order description logic language OWL DL. As an open source platform, a discovery server running at http://sswap.info (as in "swap info") uses the description logic reasoner Pellet to integrate semantic resources. The platform hosts an interactive guide to the protocol at http://sswap.info/protocol.jsp, developer tools at http://sswap.info/developer.jsp, and a portal to third-party ontologies at http://sswapmeet.sswap.info (a "swap meet"). CONCLUSION: SSWAP addresses the three basic requirements of a semantic web services architecture (i.e., a common syntax, shared semantics, and semantic discovery) while addressing three technology limitations common in distributed service systems: i) the fatal mutability of traditional interfaces, ii) the rigidity and fragility of static subsumption hierarchies, and iii) the confounding of content, structure, and presentation. SSWAP is novel in establishing the concept of a canonical yet mutable OWL DL graph that allows data and service providers to describe their resources, discovery servers to offer semantically rich search engines, clients to discover and invoke those resources, and providers to respond with semantically tagged data. SSWAP allows for a mix-and-match of terms from both new and legacy third-party ontologies in these graphs.
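
As a rough illustration of the kind of semantically described resource such a protocol exchanges, the sketch below (Python + rdflib) builds a tiny RDF description of a hypothetical service; the namespace, class and property names are invented for the example and are not the actual SSWAP vocabulary.

# Minimal sketch of a semantically described service resource (Python + rdflib).
# The namespace, classes and properties are illustrative assumptions, not the
# real SSWAP protocol vocabulary.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/sswap-like/")    # assumed namespace

g = Graph()
g.bind("ex", EX)

service = URIRef("http://example.org/services/qtl-lookup")
g.add((service, RDF.type, EX.Resource))             # the service itself
g.add((service, RDFS.label, Literal("QTL lookup service")))
g.add((service, EX.operatesOn, EX.QTLRecord))       # what it consumes
g.add((service, EX.produces, EX.GeneticMapRegion))  # what it returns

print(g.serialize(format="turtle"))
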
3

Dingli, Alexiei. « Annotating the semantic web ». Thesis, University of Sheffield, 2005. http://etheses.whiterose.ac.uk/10272/.

Abstract:
The web of today has evolved into a huge repository of rich Multimedia content for human consumption. The exponential growth of the web made it possible for information size to reach astronomical proportions; far more than a mere human can manage, causing the problem of information overload. Because of this, the creators of the web (10) spoke of using computer agents in order to process the large amounts of data. To do this, they planned to extend the current web to make it understandable by computer programs. This new web is being referred to as the Semantic Web. Given the huge size of the web, a collective effort is necessary to extend the web. For this to happen, tools easy enough for non-experts to use must be available. This thesis first proposes a methodology which semi-automatically labels semantic entities in web pages. The methodology first requires a user to provide some initial examples. The tool then learns how to reproduce the user's examples and generalises over them by making use of Adaptive Information Extraction (AIE) techniques. When its level of performance is good enough when compared to the user, it then takes over the process and processes the remaining documents autonomously. The second methodology goes a step further and attempts to gather semantically typed information from web pages automatically. It starts from the assumption that semantics are already available all over the web, and by making use of a number of freely available resources (like databases) combined with AIE techniques, it is possible to extract most information automatically. These techniques will certainly not provide all the solutions for the problems brought about with the advent of the Semantic Web. They are intended to provide a step forward towards making the Semantic Web a reality.
4

Medjahed, Brahim. « Semantic Web Enabled Composition of Web Services ». Diss., Virginia Tech, 2004. http://hdl.handle.net/10919/27364.

Abstract:
In this dissertation, we present a novel approach for the automatic composition of Web services on the envisioned Semantic Web. Automatic service composition requires dealing with three major research thrusts: semantic description of Web services, composability of participant services, and generation of composite service descriptions. This dissertation deals with the aforementioned research issues. We first propose an ontology-based framework for organizing and describing semantic Web services. We introduce the concept of community to cluster Web services based on their domain of interest. Each community is defined as an instance of an ontology called community ontology. We then propose a composability model to check whether semantic Web services can be combined together, hence avoiding unexpected failures at run time. The model defines formal safeguards for meaningful composition through the use of composability rules. We also introduce the notions of composability degree and tau-composability to cater for partial and total composability. Based on the composability model, we propose a set of algorithms that automatically generate detailed descriptions of composite services from high-level specifications of composition requests. We introduce a Quality of Composition (QoC) model to assess the quality of the generated composite services. The techniques presented in this dissertation are implemented in WebDG, a prototype for accessing e-government Web services. Finally, we conduct an extensive performance study (analytical and experimental) of the proposed composition algorithms.
Ph. D.
5

Kaufmann, Esther. « Talking to the semantic web ». Zürich Univ, 2007. http://opac.nebis.ch/exlibris/aleph/u181̲/apachem̲edia/VPYCT7FTV1JRH42F9FUJJETBEUG4I7.pdf.

6

Cloran, Russell Andrew. « Trust on the semantic web ». Thesis, Rhodes University, 2006. http://eprints.ru.ac.za/852/.

7

CUNHA, LEONARDO MAGELA. « A SEMANTIC WEB APPLICATION FRAMEWORK ». PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2006. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=10084@1.

Abstract:
CONSELHO NACIONAL DE DESENVOLVIMENTO CIENTÍFICO E TECNOLÓGICO
FUNDAÇÃO PADRE LEONEL FRANCA
Documents were the main vehicle of the Web until some years ago. With the advent of Web applications, data stored in organizations' databases or legacy systems has been made available to users. However, very often, the exchange of data between those applications themselves, or between them and end-user applications, was not possible since they used different formats for information representation. The development of standards and the use of the eXtensible Markup Language (XML) solved parts of the problem. That was a syntactic solution, and it works for several cases, e.g., schema interoperability in Business-to-Business e-commerce scenarios. Nevertheless, the lack of semantics in these data prevented applications from taking more advantage of them. The idea behind the Semantic Web is to define explicitly the semantics of data available on the Web. Therefore, we expect another step forward where applications, whether corporate or end-user oriented, will understand the meaning of the data available on the Web. Once those applications can understand it, they will be able to help users take advantage of this data-driven Web and perform their daily tasks more easily. This thesis proposes a framework for the development of Semantic Web applications. Considering the scenario described in the previous paragraph, the number of possible applications that can be developed is almost infinite. For this reason, we restricted ourselves to examining the solutions that aim to solve the problem presented at the Semantic Web Challenge, and to proposing a framework that represents those solutions. The challenge is concerned with demonstrating how Semantic Web techniques can provide valuable or attractive applications to end users. Our main concern was therefore to demonstrate, and to help a developer achieve, that added value or attractiveness through Semantic Web techniques, in a Software Engineering approach using frameworks.
8

Isaksen, Leif. « Archaeology and the Semantic Web ». Thesis, University of Southampton, 2011. https://eprints.soton.ac.uk/206421/.

Abstract:
This thesis explores the application of Semantic Web technologies to the discipline of Archaeology. Part One (Chapters 1-3) offers a discussion of historical developments in this field. It begins with a general comparison of the supposed benefits of semantic technologies and notes that they partially align with the needs of archaeologists. This is followed by a literature review which identifies two different perspectives on the Semantic Web: Mixed-Source Knowledge Representation (MSKR), which focuses on data interoperability between closed systems, and Linked Open Data (LOD), which connects decentralized, open resources. Part One concludes with a survey of 40 Cultural Heritage projects that have used semantic technologies and finds that they are indeed divided between these two visions. Part Two (Chapters 4-7) uses a case study, Roman Port Networks, to explore ways of facilitating MSKR. Chapter 4 describes a simple ontology and vocabulary framework, by means of which independently produced digital datasets pertaining to amphora finds at Roman harbour sites can be combined. The following chapters describe two entirely different approaches to converting legacy data to an ontology-compliant semantic format. The first, TRANSLATION, uses a 'Wizard'-style toolkit. The second, 'Introducing Semantics', is a wiki-based cookbook. Both methods are evaluated and found to be technically capable but socially impractical. The final chapter argues that the reason for this impracticality is the small-to-medium scale typical of MSKR projects. This does not allow for sufficient analytical return on the high level of investment required of project partners to convert and work with data in a new and unfamiliar format. It further argues that the scale at which such investment pays off is only likely to arise in an open and decentralized data landscape. Thus, for Archaeology to benefit from semantic technologies would require a severe sociological shift from current practice towards openness and decentralization. Whether such a shift is either desirable or feasible is raised as a topic for future work.
9

ALHARTHI, KHALID AYED B. « AN ARABIC SEMANTIC WEB MODEL ». Kent State University / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=kent1367064711.

10

Foulkes, James. « Semantic web for knowledge networking / ». Leeds, 2001. http://www.leeds.ac.uk/library/counter2/compstmsc/20002001/foulkes.pdf.

11

Zhang, Jane. « Ontology and the Semantic Web ». dLIST, 2007. http://hdl.handle.net/10150/106454.

Abstract:
This paper discusses the development of a new information representation system embodied in ontology and the Semantic Web. The new system differs from other representation systems in that it is based on a more sophisticated semantic representation of information, aims to go well beyond the document level, and is designed to be understood and processed by machines. A common theme underlying these three features, i.e., turning documents into meaningful interchangeable data, reflects a rising use expectation nurtured by modern technology and, at the same time, presents a unique challenge for its enabling technologies.
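
To make the contrast with document-level description concrete, here is a minimal, assumed example (Python + rdflib) of expressing one statement as machine-processable data rather than as free text; the subject URI is hypothetical.

# Minimal sketch: the statement "the paper 'Ontology and the Semantic Web'
# was written by Jane Zhang" expressed as RDF triples instead of free text.
# The subject URI is a hypothetical identifier.
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DC, FOAF, RDF

g = Graph()
paper = URIRef("http://example.org/papers/ontology-and-the-semantic-web")

g.add((paper, RDF.type, FOAF.Document))
g.add((paper, DC.title, Literal("Ontology and the Semantic Web")))
g.add((paper, DC.creator, Literal("Jane Zhang")))

print(g.serialize(format="turtle"))
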
12

Zammuto, Teresa. « Innovazione nel Semantic Web : Evoluzione della base di conoscenza semantica YAGO ». Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amslaurea.unibo.it/11528/.

Abstract:
This research deals with the study of knowledge bases, which aim to facilitate the collection, organization and distribution of knowledge. The topic was chosen because of the ever-growing importance of this research area and the innovation it can bring to the field of the Semantic Web. The YAGO knowledge base is analysed: its state of the art, its applications and plans for future development are described. The work was carried out by examining the publications on the subject and constitutes an Italian-language resource on the topic.
13

Woukeu, Arouna. « Engineering documents and Web applications for the Semantic Web ». Thesis, University of Southampton, 2006. https://eprints.soton.ac.uk/263648/.

14

Ferreira, Jaider Andrade [UNESP]. « Wikis semânticos : da Web para a Web Semântica ». Universidade Estadual Paulista (UNESP), 2014. http://hdl.handle.net/11449/108380.

Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Due to the development of Information and Communication Technologies, Information Science has been forced to rethink the traditional posture of information management. Hypertext, arising mainly through the Web environment, further increased the complexity of the information handling. The popularization of the Internet has led the Web to a more interactive and a more collaborative environment, bringing wiki systems, for example, to manage information in a collaborative way. Conceived by Tim Berners-Lee, there is the Semantic Web initiative in which machines are able to analyze data on the network. In this context, semantic wikis arise: wikis characterized by the use of Semantic Web technologies. Therefore, we believe that Information Science, which cares about the development of the Web and the Semantic Web, should also care about semantic wikis. Thus, by a descriptive and an exploratory research, the objective is to explore, to present and to describe the characteristics of the semantic wikis on the activities of representation, retrieval and exchange of information supported by Semantic Web technologies in order to facilitate the understanding, the discussion, and the use of these technologies in digital information environments. After a presentation about the origins of the Semantic Web, we highlight the data representation, encoding, description, relation, and query standards (URI, XML, RDF, RDFS, OWL and SPARQL) which, with other technologies, form the basis of the Semantic Web and support the functioning of semantic wikis. Semantic wikis are presented and defined as wiki systems that use Semantic Web technologies in order to incorporate formalized knowledge, content, structure and links on their pages. After that, we describe the main activities for information description, retrieval and interchange on Semantic MediaWiki, the most popular and most used semantic wiki engine so far. As conclusion, we consider that semantic wikis can promote understanding, discussions, and use of Semantic Web technologies in digital information environments.
FAPESP: 2011/15085-6
15

Czerwinski, Silvia. « Bibliotheken als Akteure im Semantic Web ». Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2010. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-39083.

Abstract:
The Hochschulbibliothek Zwickau took the topicality of the Semantic Web as an occasion to invite the main actors of this discourse in libraries to give a lecture. The lecture focused not only on introducing the Semantic Web and Linked Open Data, but also on the point that libraries are, or can become, important actors in data structuring and data storage.
16

Han, Wei. « Wrapper application generation for semantic web ». Diss., Georgia Institute of Technology, 2003. http://hdl.handle.net/1853/5407.

17

Haddadi, Makhsous Saeed. « Semantic Web mechanisms in Cloud Environment ». Thesis, Mittuniversitetet, Avdelningen för informations- och kommunikationssystem, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-22696.

Abstract:
Virtual Private Ontology Server (VPOS) is middleware focused on ontologies (semantic models). VPOS offers its users a smart way to access the relevant part of an ontology depending on their context. The user context can be an expertise level, a level of experience, or a job position in a hierarchical structure. Instead of keeping numerous ontologies associated with different user contexts, VPOS keeps only one ontology but offers sub-ontologies to users on the basis of their context. VPOS also supports reasoning to infer new consequences from assertions stated in the ontology. These consequences are also visible to those contexts that have access to enough assertions inside the ontology to be able to deduce them. There are some issues with the current implementation of VPOS. The application loads the ontology into the random-access memory of the local machine, which can cause scalability problems when the ontology size exceeds the available memory. Also, assuming that each VPOS user holds her own instance of the application, this may result in maintainability issues such as inconsistency between the ontologies of different users and wasted computational resources. This thesis project sets out to find practical solutions to the issues of the current implementation, first by upgrading the architecture of the application using a new framework to address the scalability issue, and then by moving to the cloud to address the maintainability issues. The final product of this thesis project is Cloud-VPOS, an application built to deal with semantic web mechanisms and to function on a cloud platform. Cloud-VPOS is an application where the semantic web meets cloud computing by employing semantic web mechanisms as cloud services.
ebbits project (Enabling business-based Internet of Things and Services)
18

Azwari, Sana Al. « Updating RDF in the semantic web ». Thesis, University of Strathclyde, 2016. http://oleg.lib.strath.ac.uk:80/R/?func=dbin-jump-full&object_id=26921.

Abstract:
RDF is widely used in the Semantic Web for representing ontology data. Many real-world RDF collections are large and contain complex graph relationships that represent knowledge in a particular domain. Such large RDF collections evolve as a consequence of their representation of the changing world. Evolution in Semantic Web content produces difference files (deltas) that track changes between ontology versions. These changes may represent ontology modifications or simply changes in application data. An ontology is typically expressed in a combination of OWL, RDFS and RDF knowledge representation languages. A data repository that represents an ontology may be large and may be duplicated over the Internet, often in the form of a relational data store. Although this data may be distributed over the Internet, it needs to be managed and updated in the face of such evolutionary changes. In view of the size of typical collections, it is important to derive efficient ways of propagating updates to distributed datastores. The deltas can be used to reduce the storage and bandwidth overhead involved in disseminating ontology updates. Minimising the delta size can be achieved by reasoning over the underlying knowledge base. OWL 2 is a development of the OWL 1 standard that incorporates new features to aid application construction. Among the sublanguages of OWL 2, OWL 2 RL/RDF provides an enriched rule set that extends the semantic capability of the OWL environment. This additional semantic content can be exploited in change detection approaches that strive to minimise the alterations to be made when ontologies are updated. The presence of blank nodes (i.e. nodes that are neither a URI nor a literal) in RDF collections provides a further challenge to ontology change detection. This is a consequence of the practical problems they introduce when comparing data structures before and after an update. The contribution of this thesis is a detailed analysis of the performance of RDF change detection techniques. In addition, the work proposes a new approach to maintaining the consistency of RDF by using knowledge embedded in the structure to generate efficient update transactions. The evaluation of this approach indicates that it reduces the overall update size, at the cost of increasing the processing time needed to generate the transactions. In the light of OWL 2 RL/RDF, this thesis examines the potential for reducing the delta size by pruning the application of unnecessary rules from the reasoning process and using an approach to delta generation that produces a small number of updates. It also assesses the impact of alternative approaches to handling blank nodes during the change detection process in ontology structures. The results indicate that pruning the rule set is a potentially expensive process but has the benefit of reducing the joins over relational data stores when carrying out the subsequent inferencing.
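
As a toy illustration of triple-level change detection between two ontology versions, the sketch below (Python + rdflib) computes naive added and deleted triple sets; the file names are assumptions, and blank nodes are simply skipped, which is exactly the complication discussed above.

# Naive RDF delta sketch: compare two versions of a collection as sets of triples.
# Triples containing blank nodes are ignored; matching them is the hard part.
from rdflib import Graph
from rdflib.term import BNode

old_g = Graph().parse("ontology_v1.ttl", format="turtle")   # assumed file names
new_g = Graph().parse("ontology_v2.ttl", format="turtle")

def ground_triples(g):
    """Return the triples that contain no blank nodes."""
    return {t for t in g if not any(isinstance(n, BNode) for n in t)}

old_t, new_t = ground_triples(old_g), ground_triples(new_g)
delta_add = new_t - old_t      # triples to insert when propagating the update
delta_del = old_t - new_t      # triples to delete

print(f"{len(delta_add)} additions, {len(delta_del)} deletions")
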
19

Norguet, Jean-Pierre. « Semantic analysis in web usage mining ». Doctoral thesis, Universite Libre de Bruxelles, 2006. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210890.

Abstract:
With the emergence of the Internet and of the World Wide Web, the Web site has become a key communication channel in organizations. To satisfy the objectives of the Web site and of its target audience, adapting the Web site content to the users' expectations has become a major concern. In this context, Web usage mining, a relatively new research area, and Web analytics, a part of Web usage mining that has most emerged in the corporate world, offer many Web communication analysis techniques. These techniques include prediction of the user's behaviour within the site, comparison between expected and actual Web site usage, adjustment of the Web site with respect to the users' interests, and mining and analyzing Web usage data to discover interesting metrics and usage patterns. However, Web usage mining and Web analytics suffer from significant drawbacks when it comes to supporting the decision-making process at the higher levels in the organization.

Indeed, according to organizations theory, the higher levels in the organizations need summarized and conceptual information to take fast, high-level, and effective decisions. For Web sites, these levels include the organization managers and the Web site chief editors. At these levels, the results produced by Web analytics tools are mostly useless. Indeed, most of these results target Web designers and Web developers. Summary reports like the number of visitors and the number of page views can be of some interest to the organization manager but these results are poor. Finally, page-group and directory hits give the Web site chief editor conceptual results, but these are limited by several problems like page synonymy (several pages contain the same topic), page polysemy (a page contains several topics), page temporality, and page volatility.

Web usage mining research projects on their part have mostly left aside Web analytics and its limitations and have focused on other research paths. Examples of these paths are usage pattern analysis, personalization, system improvement, site structure modification, marketing business intelligence, and usage characterization. A potential contribution to Web analytics can be found in research about reverse clustering analysis, a technique based on self-organizing feature maps. This technique integrates Web usage mining and Web content mining in order to rank the Web site pages according to an original popularity score. However, the algorithm is not scalable and does not answer the page-polysemy, page-synonymy, page-temporality, and page-volatility problems. As a consequence, these approaches fail at delivering summarized and conceptual results.

An interesting attempt to obtain such results has been the Information Scent algorithm, which produces a list of term vectors representing the visitors' needs. These vectors provide a semantic representation of the visitors' needs and can be easily interpreted. Unfortunately, the results suffer from term polysemy and term synonymy, are visit-centric rather than site-centric, and are not scalable to produce. Finally, according to a recent survey, no Web usage mining research project has proposed a satisfying solution to provide site-wide summarized and conceptual audience metrics.

In this dissertation, we present our solution to answer the need for summarized and conceptual audience metrics in Web analytics. We first describe several methods for mining the Web pages output by Web servers. These methods include content journaling, script parsing, server monitoring, network monitoring, and client-side mining. These techniques can be used alone or in combination to mine the Web pages output by any Web site. Then, the occurrences of taxonomy terms in these pages can be aggregated to provide concept-based audience metrics. To evaluate the results, we implement a prototype and run a number of test cases with real Web sites.

According to the first experiments with our prototype and SQL Server OLAP Analysis Service, concept-based metrics prove extremely summarized and much more intuitive than page-based metrics. As a consequence, concept-based metrics can be exploited at higher levels in the organization. For example, organization managers can redefine the organization strategy according to the visitors' interests. Concept-based metrics also give an intuitive view of the messages delivered through the Web site and make it possible to adapt the Web site communication to the organization's objectives. The Web site chief editor, for his part, can interpret the metrics to redefine the publishing orders and the sub-editors' writing tasks. As decisions at higher levels in the organization should be more effective, concept-based metrics should significantly contribute to Web usage mining and Web analytics.
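
A minimal sketch of the concept-based aggregation described above, assuming pages have already been mined to plain text and that a flat taxonomy maps concepts to terms (both the taxonomy and the page data below are invented):

# Concept-based audience metrics, minimal sketch: count how often each taxonomy
# concept's terms occur in mined page texts, weighted by page views.
# The taxonomy and the page data are invented examples.
from collections import Counter
import re

taxonomy = {                       # concept -> indicative terms (assumed)
    "admissions": ["enrol", "admission", "apply"],
    "research":   ["laboratory", "publication", "grant"],
}

pages = [                          # (mined page text, number of views)
    ("How to apply for admission and enrol online", 1200),
    ("Our laboratory publication list and grant awards", 300),
]

metrics = Counter()
for text, views in pages:
    words = re.findall(r"[a-z]+", text.lower())
    for concept, terms in taxonomy.items():
        hits = sum(any(w.startswith(t) for t in terms) for w in words)
        if hits:
            metrics[concept] += hits * views   # crude audience-weighted score

for concept, score in metrics.most_common():
    print(concept, score)
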


Doctorate in applied sciences

20

Tusek, Jasna. « Semantic web Einführung, wirtschaftliche Bedeutung, Perspektiven ». Saarbrücken VDM, Müller, 2006. http://deposit.d-nb.de/cgi-bin/dokserv?id=2851683&prov=M&dok_var=1&dok_ext=htm.

21

Hurley, Danielle. « WordWeb : a lexical semantic web resource / ». Leeds : University of Leeds, School of Computer Studies, 2008. http://www.comp.leeds.ac.uk/fyproj/reports/0708/Hurley.pdf.

22

Fang, Ming. « Maintaining Integrity Constraints in Semantic Web ». Digital Archive @ GSU, 2013. http://digitalarchive.gsu.edu/cs_diss/73.

Abstract:
As an expressive knowledge representation language for Semantic Web, Web Ontology Language (OWL) plays an important role in areas like science and commerce. The problem of maintaining integrity constraints arises because OWL employs the Open World Assumption (OWA) as well as the Non-Unique Name Assumption (NUNA). These assumptions are typically suitable for representing knowledge distributed across the Web, where the complete knowledge about a domain cannot be assumed, but make it challenging to use OWL itself for closed world integrity constraint validation. Integrity constraints (ICs) on ontologies have to be enforced; otherwise conflicting results would be derivable from the same knowledge base (KB). The current trends of incorporating ICs into OWL are based on its query language SPARQL, alternative semantics, or logic programming. These methods usually suffer from limited types of constraints they can handle, and/or inherited computational expensiveness. This dissertation presents a comprehensive and efficient approach to maintaining integrity constraints. The design enforces data consistency throughout the OWL life cycle, including the processes of OWL generation, maintenance, and interactions with other ontologies. For OWL generation, the Paraconsistent model is used to maintain integrity constraints during the relational database to OWL translation process. Then a new rule-based language with set extension is introduced as a platform to allow users to specify constraints, along with a demonstration of 18 commonly used constraints written in this language. In addition, a new constraint maintenance system, called Jena2Drools, is proposed and implemented, to show its effectiveness and efficiency. To further handle inconsistencies among multiple distributed ontologies, this work constructs a framework to break down global constraints into several sub-constraints for efficient parallel validation.
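
The SPARQL-based constraint checking mentioned above can be illustrated with a minimal, generic sketch (Python + rdflib); this is not the Jena2Drools system described in the thesis, and the example data, namespace and constraint are assumptions.

# Closed-world integrity check sketch: ASK whether any Employee lacks a
# supervisor in data we choose to treat as complete. Generic SPARQL-based
# illustration only, not the approach implemented in the thesis.
from rdflib import Graph

data = """
@prefix ex: <http://example.org/> .
ex:alice a ex:Employee ; ex:supervisor ex:bob .
ex:carol a ex:Employee .
"""

g = Graph().parse(data=data, format="turtle")

violation_query = """
PREFIX ex: <http://example.org/>
ASK {
  ?e a ex:Employee .
  FILTER NOT EXISTS { ?e ex:supervisor ?s }
}
"""

violated = bool(g.query(violation_query).askAnswer)
print("Integrity constraint violated:", violated)   # True: ex:carol has no supervisor
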
23

Hull, Duncan. « Semantic matching of bioinformatic web services ». Thesis, University of Manchester, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.497578.

Abstract:
Understanding bioinformatic data on the Web often requires the interoperation of heterogeneous and autonomous services. Unfortunately, getting many different services to interoperate is problematic, and frequently requires cumbersome shim components which can be difficult to describe and discover using existing techniques. The use of description logic reasoning has been proposed as a method for improving discovery of services, by classifying advertisements and matchmaking them with requests on the semantic Web. However, theoretical approaches to reasoning with semantic Web services have not been adequately tested on realistic scenarios while practical approaches have not fully investigated or applied useful aspects of current theory.
24

Thavappiragasam, Mathialakan. « A web semantic for SBML merge ». Thesis, University of South Dakota, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=1566784.

Abstract:

The manipulation of XML-based relational representations of biological systems (BioMLs, for Bioscience Markup Languages) is a big challenge in systems biology. The needs of biologists, such as the translational study of biological systems, make this challenge even greater given the material produced by next-generation sequencing. Among these BioMLs, SBML is the de facto standard file format for the storage and exchange of quantitative computational models in systems biology, supported by more than 257 software packages to date. The SBML standard is used by several biological systems modeling tools and several databases for representation and knowledge sharing. Several sub-systems are integrated in order to construct a complex biological system. The issue of combining biological sub-systems by merging SBML files has been addressed by several algorithms and tools, but it remains impossible to build an automatic merge system that implements reusability, flexibility, scalability and sharability. The technique existing algorithms use is name-based component comparison. This does not allow integration into a Workflow Management System (WMS) to build pipelines, and it also does not include the mapping of quantitative data needed for a good analysis of the biological system. In this work, we present a deterministic merging algorithm that is consumable in a given WMS engine and designed using a novel biological model similarity algorithm. This model merging system integrates four sub-modules: SBMLChecker, SBMLAnot, SBMLCompare, and SBMLMerge, for model quality checking, annotation, comparison, and merging respectively. The tools are integrated into the BioExtract server, leveraging iPlant collaborative resources to support users by allowing them to process large models and design workflows. These tools are also embedded into a user-friendly online version, SW4SBMLm.
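
As a rough illustration of the name-based component comparison that the abstract identifies as the technique existing merge tools rely on, the sketch below compares species identifiers in two SBML-like files using only the Python standard library; the file names are assumptions and real SBML levels and namespaces are ignored.

# Name-based comparison sketch for two SBML-like models: list the species ids
# that are shared or unique. The file names are assumptions; real SBML
# namespaces/levels are ignored for brevity.
import xml.etree.ElementTree as ET

def species_ids(path):
    ids = set()
    for el in ET.parse(path).getroot().iter():
        tag = el.tag.rsplit("}", 1)[-1]          # drop the XML namespace, if any
        if tag == "species" and el.get("id"):
            ids.add(el.get("id"))
    return ids

a = species_ids("model_a.sbml")
b = species_ids("model_b.sbml")

print("shared components:", sorted(a & b))
print("only in model A:  ", sorted(a - b))
print("only in model B:  ", sorted(b - a))
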

25

LIMA, FERNANDA. « SEMANTIC MODELING DESIGN OF WEB APPLICATION ». PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2003. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=4000@1.

Abstract:
CONSELHO NACIONAL DE DESENVOLVIMENTO CIENTÍFICO E TECNOLÓGICO
In this thesis we present a method for the design and implementation of web applications for the Semantic Web. Based on the Object Oriented Hypermedia Design Method approach, we used ontology concepts to define an application conceptual model, extending the expressive power of the original method. The navigational models definitions use a query language capable of querying both schema and instances, enabling the specification of flexible access structures. Additionally, we propose the use of faceted access structures to improve the selection of navigational objects organized by multiple criteria. Finally, we present an implementation architecture that allows the direct use of the application specifications when deriving a final application implementation.
26

Andrejev, Andrej. « Semantic Web Queries over Scientific Data ». Doctoral thesis, Uppsala universitet, Datalogi, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-274856.

Abstract:
Semantic Web and Linked Open Data provide a potential platform for interoperability of scientific data, offering a flexible model for providing machine-readable and queryable metadata. However, RDF and SPARQL have gained limited adoption within the scientific community, mainly due to the lack of support for managing massive numeric data, along with certain other important features – such as extensibility with user-defined functions, query modularity, and integration with existing environments and workflows. We present the design, implementation and evaluation of Scientific SPARQL – a language for querying data and metadata combined, represented using the RDF graph model extended with numeric multidimensional arrays as node values – RDF with Arrays. The techniques used to store RDF with Arrays in a scalable way and process Scientific SPARQL queries and updates are implemented in our prototype software – Scientific SPARQL Database Manager, SSDM, and its integrations with data storage systems and computational frameworks. This includes scalable storage solutions for numeric multidimensional arrays and an efficient implementation of array operations. The arrays can be physically stored in a variety of external storage systems, including files, relational databases, and specialized array data stores, using our Array Storage Extensibility Interface. Whenever possible SSDM accumulates array operations and accesses array contents in a lazy fashion. In scientific applications numeric computations are often used for filtering or post-processing the retrieved data, which can be expressed in a functional way. Scientific SPARQL allows expressing common query sub-tasks with functions defined as parameterized queries. This becomes especially useful along with functional language abstractions such as lexical closures and second-order functions, e.g. array mappers. Existing computational libraries can be interfaced and invoked from Scientific SPARQL queries as foreign functions. Cost estimates and alternative evaluation directions may be specified, aiding the construction of better execution plans. Costly array processing, e.g. filtering and aggregation, is thus performed on the server, reducing the amount of communication. Furthermore, common supported operations are delegated to the array storage back-ends, according to their capabilities. Both expressivity and performance of Scientific SPARQL are evaluated on a real-world example, and further performance tests are run using our mini-benchmark for array queries.
27

Alfaries, Auhood. « Ontology learning for Semantic Web Services ». Thesis, Brunel University, 2010. http://bura.brunel.ac.uk/handle/2438/4667.

Abstract:
The expansion of Semantic Web Services is restricted by traditional ontology engineering methods. Manual ontology development is time consuming, expensive and a resource exhaustive task. Consequently, it is important to support ontology engineers by automating the ontology acquisition process to help deliver the Semantic Web vision. Existing Web Services offer an affluent source of domain knowledge for ontology engineers. Ontology learning can be seen as a plug-in in the Web Service ontology development process, which can be used by ontology engineers to develop and maintain an ontology that evolves with current Web Services. Supporting the domain engineer with an automated tool whilst building an ontological domain model, serves the purpose of reducing time and effort in acquiring the domain concepts and relations from Web Service artefacts, whilst effectively speeding up the adoption of Semantic Web Services, thereby allowing current Web Services to accomplish their full potential. With that in mind, a Service Ontology Learning Framework (SOLF) is developed and applied to a real set of Web Services. The research contributes a rigorous method that effectively extracts domain concepts, and relations between these concepts, from Web Services and automatically builds the domain ontology. The method applies pattern-based information extraction techniques to automatically learn domain concepts and relations between those concepts. The framework is automated via building a tool that implements the techniques. Applying the SOLF and the tool on different sets of services results in an automatically built domain ontology model that represents semantic knowledge in the underlying domain. The framework effectiveness, in extracting domain concepts and relations, is evaluated by its appliance on varying sets of commercial Web Services including the financial domain. The standard evaluation metrics, precision and recall, are employed to determine both the accuracy and coverage of the learned ontology models. Both the lexical and structural dimensions of the models are evaluated thoroughly. The evaluation results are encouraging, providing concrete outcomes in an area that is little researched.
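
As a toy illustration of pattern-based concept extraction (not the actual SOLF rules, which the thesis derives from Web Service artefacts), a single "X such as Y" lexico-syntactic pattern applied to service documentation text yields candidate super-concept/sub-concept pairs.

# Toy pattern-based extraction: one "X such as Y, Z and W" pattern applied to
# documentation text. Not the SOLF rule set, just the general technique.
import re

doc = ("The service returns financial instruments such as bonds, "
       "equities and derivatives for a given market.")

pattern = re.compile(r"(\w+)\s+such as\s+([\w ,]+?)(?:\s+for|\.|$)")

candidates = []
for concept, tail in pattern.findall(doc):
    for sub in re.split(r",|\band\b", tail):
        sub = sub.strip()
        if sub:
            candidates.append((concept, sub))    # (super-concept, sub-concept)

print(candidates)
# [('instruments', 'bonds'), ('instruments', 'equities'), ('instruments', 'derivatives')]
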
28

Vesse, Robert. « Link integrity for the Semantic Web ». Thesis, University of Southampton, 2012. https://eprints.soton.ac.uk/346394/.

Abstract:
The usefulness and usability of data on the Semantic Web is ultimately reliant on the ability of clients to retrieve Resource Description Framework (RDF) data from the Web. When RDF data is unavailable clients reliant on that data may either fail to function entirely or behave incorrectly. As a result there is a need to investigate and develop techniques that aim to ensure that some data is still retrievable, even in the event that the primary source of the data is unavailable. Since this problem is essentially the classic link integrity problem from hypermedia and the Web we look at the range of techniques that have been suggested by past research and attempt to adapt these to the Semantic Web. Having studied past research we identified two potentially promising strategies for solving the problem: 1) Replication and Preservation; and 2) Recovery. Using techniques developed to implement these strategies for hypermedia and the Web as a starting point we designed our own implementations which adapted these appropriately for the Semantic Web. We describe the design, implementation and evaluation of our adaptations before going on to discuss the implications of the usage of such techniques. In this research we show that such approaches can be used to successfully apply link integrity to the Semantic Web for a variety of datasets on the Semantic Web but that further research is needed before such solutions can be widely deployed.
29

Cobden, Marcus. « Engineering a Semantic Web trust infrastructure ». Thesis, University of Southampton, 2014. https://eprints.soton.ac.uk/370614/.

Abstract:
The ability to judge the trustworthiness of information is an important and challenging problem in the field of Semantic Web research. In this thesis, we take an end-to-end look at the challenges posed by trust on the Semantic Web, and present contributions in three areas: a Semantic Web identity vocabulary, a system for bootstrapping trust environments, and a framework for trust aware information management. Typically Semantic Web agents, which consume and produce information, are not described with sufficient information to permit those interacting with them to make good judgements of trustworthiness. A descriptive vocabulary for agent identity is required to enable effective inter agent discourse, and the growth of trust and reputation within the Semantic Web; we therefore present such a foundational identity ontology for describing web-based agents. It is anticipated that the Semantic Web will suffer from a trust network bootstrapping problem. In this thesis, we propose a novel approach which harnesses open data to bootstrap trust in new trust environments. This approach brings together public records published by a range of trusted institutions in order to encourage trust in identities within new environments. Information integrity and provenance are both critical prerequisites for well-founded judgements of information trustworthiness. We propose a modification to the RDF Named Graph data model in order to address serious representational limitations with the named graph proposal, which affect the ability to cleanly represent claims and provenance records. Next, we propose a novel graph based approach for recording the provenance of derived information. This approach offers computational and memory savings while maintaining the ability to answer graph-level provenance questions. In addition, it allows new optimisations such as strategies to avoid needless repeat computation, and a delta-based storage strategy which avoids data duplication.
30

Hanzal, Tomáš. « Modeling Events on the Semantic Web ». Master's thesis, Vysoká škola ekonomická v Praze, 2015. http://www.nusl.cz/ntk/nusl-201110.

Abstract:
There are many ontologies and datasets on the semantic web that mention events. Events are important in our perception of the world and in our descriptions of it, therefore also on the semantic web. There is however not one best way to model them. This is connected to the fact that even the question what events are can be approached in different ways. Our aim is to better understand how events are represented on the semantic web and how it could be improved. To this end we first turn to the ways events are treated in philosophy and in foundational ontologies. We ask questions such as what sorts of things we call events, what ontological status we assign to events and if and how can events be distinguished from other entities such as situations. Then we move on to an empirical analysis of particular semantic web ontologies for events. In this analysis we find what kinds of things are usually called events on the semantic web (and what kinds of events there are). We use the findings from the philosophy of events to critically assess these ontologies, show their problems and indicate possible paths to their solution.
31

Andrade, Leandro José Silva. « SWoDS : Semantic Web (of Data) Service ». Instituto de Matemática. Departamento de Ciência da Computação, 2014. http://repositorio.ufba.br/ri/handle/ri/19286.

Abstract:
Originally created to connect essentially HTML documents, the Web has since expanded its capabilities, becoming a highly heterogeneous environment of applications, resources, data and users that interact with one another. The Semantic Web proposal, together with Web Services, seeks to establish standards that enable communication between heterogeneous applications on the Web. The Web of Data, another line of Web evolution, provides guidelines (Linked Data) on how to use Semantic Web technologies to publish data from different sources and define semantic links between them. However, there is a gap in the integration between applications based on Web Services and Web of Data applications. This gap exists because Web Services are "executed", whereas the Web of Data is "queried". This dissertation therefore presents Semantic Web (of Data) Services (SWoDS), whose goal is to provide Web Services on top of Linked Data bases. Semantic Web (of Data) Services can fill the gap between Web Services and applications based on the Web of Data by making the Web of Data "executable" through Semantic Web Services, thus allowing Linked Data, via SWoDS, to be integrated with Web Services through automatic composition and service discovery operations.
32

Arlitsch, Kenning. « Semantic Web Identity of academic organizations ». Doctoral thesis, Humboldt-Universität zu Berlin, Philosophische Fakultät I, 2017. http://dx.doi.org/10.18452/17671.

Abstract:
Semantic Web Identity (SWI) characterizes an entity that has been recognized as such by search engines. The display of a Knowledge Graph Card in Google search results for an academic organization is proposed as an indicator of SWI, as it demonstrates that Google has gathered enough verifiable facts to establish the organization as an entity. This recognition may in turn improve the accuracy and relevancy of its referrals to that organization. This dissertation presents findings from an in-depth survey of the 125 member libraries of the Association of Research Libraries (ARL). The findings show that these academic libraries are poorly represented in the structured data records that are a crucial underpinning of the Semantic Web and a significant factor in achieving SWI. Lack of SWI extends to other academic organizations, particularly those at the lower hierarchical levels of academic institutions, including colleges, departments, centers, and research institutes. A lack of SWI may affect other factors of interest to academic organizations, including ability to attract research funding, increase student enrollment, and improve institutional reputation and ranking. This study hypothesizes that the poor state of SWI is in part the result of a failure by these organizations to populate appropriate Linked Open Data (LOD) and proprietary Semantic Web knowledge bases. The situation represents an opportunity for academic libraries to develop skills and knowledge to establish and maintain their own SWI, and to offer SWI service to other academic organizations in their institutions. The research examines the current state of SWI for ARL libraries and some other academic organizations, and describes case studies that validate the effectiveness of proposed techniques to correct the situation. It also explains new services that are being developed at the Montana State University Library to address SWI needs on its campus, which could be adapted by other academic libraries.
Styles APA, Harvard, Vancouver, ISO, etc.
33

Casamassima, Antonio. « Conversione per il Semantic Web di dati Turistico-Culturali : il progetto QRPlaces - Semantic Events ». Master's thesis, Alma Mater Studiorum - Università di Bologna, 2013. http://amslaurea.unibo.it/5004/.

Texte intégral
Résumé :
The QRPlaces - Semantic Events project, the subject of this work, focuses on the analysis, design and implementation of a system able to model data about the various events that form part of the tourist and cultural heritage of the Emilia-Romagna region, making explicit the advantages of a formal, semantics-centred representation. In this context, tourist and cultural data are understood both as representations of "something that happens at a certain place at a certain time" (such as a concert, a festival, a fundraiser, a theatre performance, and so on) and as the traditions and customs that make up the tourist and cultural heritage, often referred to as "cultural heritage". Such data intrinsically require complete knowledge of several correlated pieces of information, such as geolocation information about the physical venue hosting the event, biographical data about the author or the person featured in the event, or detailed descriptions of all the entities, such as theatres, cinemas and theatre companies, that characterize the event itself. A correct representation of the associated knowledge therefore requires a model in which the data can be interconnected, revealing an informational value that would otherwise remain hidden. The aim of this work was to build a dataset that meets the typical requirements of the Semantic Web, through which it was possible to strengthen the QRPlaces tourist communication and information network. Specifically, through the ontological conversion of various kinds of data about events located across the territory, and by exploiting Linked Data principles and technologies, the goal was to obtain an information model that is as interlinked as possible and enriched with external data. The final objective was a source of data interconnected not only with each other but also with data in external sources, creating a path of links able to reveal an informational richness that can be used to create added value not otherwise obtainable. This was realized through a mash-up interface that uses the created dataset and all its links to the Linked Data network as its source, and is able to retrieve additional multi-domain information.
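As a minimal illustration of the kind of ontological conversion described above (this is not the QRPlaces code; the vocabulary choice, base URI and sample record are assumptions), the following Python sketch turns one flat event record into RDF with rdflib and links the venue to an external Linked Data resource.

    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import RDF, XSD

    SCHEMA = Namespace("https://schema.org/")
    BASE = Namespace("http://example.org/events/")   # invented base URI

    # One flattened source record, as it might come from a tourism database.
    record = {"id": "e42", "title": "Concerto in piazza", "date": "2013-07-14",
              "venue": "http://dbpedia.org/resource/Bologna"}

    g = Graph()
    event = BASE[record["id"]]
    g.add((event, RDF.type, SCHEMA.Event))
    g.add((event, SCHEMA.name, Literal(record["title"])))
    g.add((event, SCHEMA.startDate, Literal(record["date"], datatype=XSD.date)))
    # Interlinking: the venue points to an external Linked Data resource, so
    # consumers can fetch additional facts (coordinates, population, ...) there.
    g.add((event, SCHEMA.location, URIRef(record["venue"])))

    print(g.serialize(format="turtle"))   # rdflib >= 6 returns a string here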
Styles APA, Harvard, Vancouver, ISO, etc.
34

Ermilov, Timofey. « Ubiquitous Semantic Applications ». Doctoral thesis, Universitätsbibliothek Leipzig, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-159065.

Texte intégral
Résumé :
As Semantic Web technology evolves, many open areas emerge and attract more research focus. In addition to the quickly expanding Linked Open Data (LOD) cloud, various embeddable metadata formats (e.g. RDFa, microdata) are becoming more common. Corporations are already using the existing Web of Data to create new technologies that were not possible before; Watson by IBM, an artificial intelligence computer system capable of answering questions posed in natural language, is a good example. On the other hand, ubiquitous devices with a large number of sensors and integrated devices are becoming increasingly powerful, fully featured computing platforms in our pockets and homes. For many people, smartphones and tablet computers have already replaced traditional computers as their window to the Internet and to the Web. Hence, the management and presentation of information that is useful to a user is a main requirement for today's smartphones, and it is becoming extremely important to provide access to the emerging Web of Data from ubiquitous devices. In this thesis we investigate how ubiquitous devices can interact with the Semantic Web. We identify five different approaches for bringing the Semantic Web to ubiquitous devices, and we outline and discuss in detail the existing challenges in implementing these approaches in Section 1.2. We describe a conceptual framework for ubiquitous semantic applications in Chapter 4. We distinguish three client approaches for accessing semantic data using ubiquitous devices, depending on how much of the semantic data processing is performed on the device itself (thin, hybrid and fat clients); these are discussed in Chapter 5 along with solutions to the related challenges. Two provider approaches (fat and hybrid) can be distinguished for exposing data from ubiquitous devices on the Semantic Web; these are discussed in Chapter 6 along with solutions to the related challenges. We conclude with a discussion of each of the contributions of the thesis and propose future work for each of the discussed approaches in Chapter 7.
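The thin/fat client distinction can be sketched as follows (an illustration under assumptions, not code from the thesis; the endpoint URL and file name are placeholders): a thin client ships a SPARQL query to a remote endpoint and only parses the JSON results, while a fat client loads and evaluates the same query locally on the device.

    import json
    import urllib.parse
    import urllib.request

    from rdflib import Graph

    QUERY = "SELECT ?s WHERE { ?s a <https://schema.org/Event> } LIMIT 5"

    def thin_client(endpoint_url, query):
        """Send the query to a remote SPARQL endpoint; the device only parses JSON."""
        data = urllib.parse.urlencode({"query": query}).encode()
        request = urllib.request.Request(
            endpoint_url, data=data,
            headers={"Accept": "application/sparql-results+json"})
        with urllib.request.urlopen(request) as response:
            return json.load(response)

    def fat_client(local_rdf_file, query):
        """Load the data and evaluate the query on the device itself."""
        g = Graph()
        g.parse(local_rdf_file, format="turtle")
        return list(g.query(query))

    # Placeholder endpoint and file, for illustration only:
    # thin_client("http://example.org/sparql", QUERY)
    # fat_client("events.ttl", QUERY)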
Styles APA, Harvard, Vancouver, ISO, etc.
35

Jayalal, S. G. V. S. « Web site link prediction and semantic relatedness of web pages ». Thesis, Keele University, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.421664.

Texte intégral
Résumé :
Relying solely on Web browsers to navigate large Web sites has created navigation problems for users. Many researchers have stressed the importance of improving site user orientation and have suggested the use of information visualisation techniques, in particular "site maps" or "overview diagrams", to address this issue. Link prediction and the semantic relatedness of Web pages have been incorporated into such site maps. This thesis addresses disorientation within Web sites by presenting a visualisation of the site in order to answer the fundamental questions, identified by Nielsen and others, that users might ask when they become disoriented while navigating a Web site, namely: Where am I now? Where have I been? Where can I go next? A method for making link predictions, based on Markov chains, has been developed and implemented in order to answer the third question, "Where can I go next?". The method utilises information about the path already followed by the user. In addition to link prediction, pages which are semantically similar to the "current" page are automatically identified using an approach based on lexical chains. The proposed approach for link prediction, which uses an exponentially smoothed transition probability matrix incorporating site usage data over a time period, was evaluated by comparing it with a similar approach developed by Sarukkai. The proposed semantic relatedness approach, which uses weighted lexical chains, was empirically compared with an earlier approach developed by Green that uses synset weight vectors. In conclusion, this thesis argues that Web site link prediction and the identification of semantically related Web pages can be used to overcome disorientation, and the proposed approaches are demonstrated to be superior to the earlier methods.
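A minimal sketch of the Markov-chain idea follows (an illustration under simplifying assumptions, not the thesis's implementation): first-order transition probabilities are estimated from observed navigation sessions, exponentially smoothed across two time periods, and the most probable next pages are suggested for the page the user is currently on.

    from collections import defaultdict

    def transition_matrix(sessions):
        """Estimate first-order transition probabilities from page-visit sequences."""
        counts = defaultdict(lambda: defaultdict(int))
        for session in sessions:
            for a, b in zip(session, session[1:]):
                counts[a][b] += 1
        return {a: {b: n / sum(nxt.values()) for b, n in nxt.items()}
                for a, nxt in counts.items()}

    def smooth(previous, current, alpha=0.3):
        """Exponentially smooth two per-period transition matrices."""
        out = {}
        for a in set(previous) | set(current):
            targets = set(previous.get(a, {})) | set(current.get(a, {}))
            out[a] = {b: alpha * current.get(a, {}).get(b, 0.0)
                         + (1 - alpha) * previous.get(a, {}).get(b, 0.0)
                      for b in targets}
        return out

    def predict_next(matrix, page, k=3):
        """Return the k most likely next pages from the current page."""
        return sorted(matrix.get(page, {}).items(), key=lambda x: -x[1])[:k]

    last_week = transition_matrix([["home", "research", "contact"],
                                   ["home", "research", "publications"]])
    this_week = transition_matrix([["home", "publications"],
                                   ["home", "research", "publications"]])
    print(predict_next(smooth(last_week, this_week), "home"))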
Styles APA, Harvard, Vancouver, ISO, etc.
36

Kardas, Karani. « Semantic Processes For Constructing Composite Web Services ». Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/12608715/index.pdf.

Texte intégral
Résumé :
In Web service composition, discovering services and combining suitable services by determining the interoperability among different services are important operations. Utilizing semantics improves the quality of these operations and facilitates their automation. Several previous approaches exist for semantic service discovery and service matching. In this work, we exploit and extend these semantic approaches in order to make the Web service composition process easier, less error-prone and more automated. The work includes a service discovery and service interoperability checking technique that extends previous semantic matching approaches. In addition, as a guidance system for the user, a new semantic domain model is proposed that captures semantic relations between concepts in various ontologies.
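A common building block in such semantic matching techniques is the degree of match between an advertised concept and a requested concept in a shared ontology. The sketch below is only illustrative (the toy subclass hierarchy and the exact/more-specific/more-general ranking are standard simplifications, not necessarily the thesis's algorithm); it ranks candidate services by how their advertised output concept relates to the requested one.

    # Toy subclass hierarchy: child -> parent (an invented example ontology).
    SUBCLASS_OF = {"CityHotel": "Hotel", "Hotel": "Accommodation",
                   "Campground": "Accommodation"}

    def ancestors(concept):
        seen = []
        while concept in SUBCLASS_OF:
            concept = SUBCLASS_OF[concept]
            seen.append(concept)
        return seen

    def degree_of_match(advertised, requested):
        """4 = exact match, 3 = advertised concept is more specific than requested,
           2 = advertised concept is more general than requested, 0 = no match."""
        if advertised == requested:
            return 4
        if requested in ancestors(advertised):
            return 3
        if advertised in ancestors(requested):
            return 2
        return 0

    services = {"BookCityHotel": "CityHotel",
                "FindAccommodation": "Accommodation",
                "RentCar": "Car"}
    request = "Hotel"
    ranked = sorted(services.items(),
                    key=lambda s: degree_of_match(s[1], request), reverse=True)
    print(ranked)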
Styles APA, Harvard, Vancouver, ISO, etc.
37

Oberle, Daniel. « Semantic management of middleware / ». New York, NY : Springer, 2006. http://www.loc.gov/catdir/enhancements/fy0663/2005908104-d.html.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
38

Hensel, Stephan, Markus Graube et Leon Urbas. « Methodology for Conflict Detection and Resolution in Semantic Revision Control Systems ». Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-211244.

Texte intégral
Résumé :
Revision control mechanisms are a crucial part of information systems for keeping track of changes. They are a key requirement for the industrial application of technologies like Linked Data, which provides the possibility to integrate data from different systems and domains in a semantic information space. A corresponding semantic revision control system must have the same functionality as established systems (e.g. Git or Subversion). There is also a need for branching to enable parallel work on the same data or concurrent access to it, which directly introduces the requirement of supporting merges. This paper presents an approach that makes it possible to merge branches and to detect inconsistencies before creating the merged revision. We use a structural analysis of triple differences as the smallest comparison unit between the branches. The detected differences can be accumulated into high-level changes, which is an essential step towards semantic merging. We implemented our approach as a prototypical extension of the revision control system R43ples to provide a proof of concept.
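The merge step described above can be pictured with a small sketch (a deliberate simplification, not the R43ples implementation): each branch is reduced to the triples it added and removed relative to the common ancestor revision, the changes are grouped by subject and predicate, and a conflict is reported when the two branches end up disagreeing about the same statement. Which groupings really constitute conflicts is application-dependent; the paper accumulates differences into higher-level changes, which this toy version does not.

    def diff(ancestor, branch):
        """Added and removed triples of a branch relative to the common ancestor."""
        return branch - ancestor, ancestor - branch

    def changed_statements(added, removed):
        """Group changes by (subject, predicate), the unit used to compare branches."""
        changed = {}
        for s, p, o in added | removed:
            entry = changed.setdefault((s, p), {"added": set(), "removed": set()})
            entry["added" if (s, p, o) in added else "removed"].add(o)
        return changed

    def detect_conflicts(ancestor, branch_a, branch_b):
        a = changed_statements(*diff(ancestor, branch_a))
        b = changed_statements(*diff(ancestor, branch_b))
        # Conflict: both branches touched the same subject/predicate pair but
        # ended up with different changes.
        return {sp for sp in a.keys() & b.keys() if a[sp] != b[sp]}

    ancestor = {("ex:pump1", "ex:status", "ok")}
    branch_a = {("ex:pump1", "ex:status", "maintenance")}   # status changed on A
    branch_b = {("ex:pump1", "ex:status", "failed")}        # changed differently on B
    print(detect_conflicts(ancestor, branch_a, branch_b))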
Styles APA, Harvard, Vancouver, ISO, etc.
39

Cregan, Anne, Computer Science & Engineering, Faculty of Engineering, UNSW. « Weaving the semantic web : Contributions and insights ». Publisher: University of New South Wales. Computer Science & Engineering, 2008. http://handle.unsw.edu.au/1959.4/42605.

Texte intégral
Résumé :
The semantic web aims to make the meaning of data on the web explicit and machine processable. Harking back to Leibniz in its vision, it imagines a world of interlinked information that computers 'understand' and 'know' how to process based on its meaning. Spearheaded by the World Wide Web Consortium, the ontology languages OWL and RDF form the core of the current technical offerings. RDF has successfully enabled the construction of virtually unlimited webs of data, whilst OWL gives the ability to express complex relationships between RDF data triples. However, the formal semantics of these languages limit themselves to the aspect of meaning that can be captured by mechanical inference rules, leaving many open questions as to other aspects of meaning and how they might be made machine processable. The Semantic Web has faced a number of problems that are addressed by the included publications. Its germination within academia and logical semantics has seen it struggle to become familiar, accessible and implementable for the general IT population, so an overview of semantic technologies is provided. Faced with competing 'semantic' languages, such as the ISO's Topic Map standards, a method for building ISO-compliant Topic Maps in the OWL DL language has been provided, enabling them to take advantage of the more mature OWL language and tools. Supplementation with rules is needed to deal with many real-world scenarios, and this is explored as a practical exercise. The available syntaxes for OWL have hindered domain experts in ontology building, so a natural language syntax for OWL designed for use by non-logicians is offered and compared with similar offerings. In recent years, the proliferation of ontologies has resulted in far more than are needed in any given domain space, so a mechanism is proposed to facilitate the reuse of existing ontologies by giving contextual information and leveraging social factors to encourage wider adoption of common ontologies and achieve interoperability. Lastly, the question of meaning is addressed in relation to the need to define one's terms and to ground one's symbols by anchoring them effectively, ultimately providing the foundation for evolving a 'Pragmatic Web' of action.
Styles APA, Harvard, Vancouver, ISO, etc.
40

Muniz, Bruno de Azevedo. « SERIN Semantic Restful Interfaces ». Universidade de Fortaleza, 2014. http://dspace.unifor.br/handle/tede/93384.

Texte intégral
Résumé :
RESTful web services have become a widely used standard for manipulating data, called resources, made available on distributed web servers, called hosts. In this context, several proposals have attempted to formalize the semantics of resources and of the web services that manipulate them, and thus to integrate RESTful web services into the Semantic Web scenario. However, these proposals apply to concrete web services rather than to an abstract interface that can be reused by several concrete implementations. This work presents the Semantic RESTful Interfaces (SERIN) specification, which proposes the use of abstract interfaces in the semantic description of resources and RESTful web services. Semantic interfaces are annotated ontologies, written in OWL, whose classes formally describe the semantics of REST resources and whose annotations indicate which web services are available to manipulate the resources of a host. SERIN, much like interfaces in object-oriented programming, specifies abstract interfaces, i.e., interfaces disconnected from any concrete implementation; an interface therefore represents a contract that determines which resources and web services must be available on every host that implements it. Keywords: Semantic Web. Semantic Web Services. SWS. RESTful Web Services. Ontology. Semantic Interfaces. Abstract Interfaces. SERIN.
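The idea of an annotated ontology acting as an abstract service contract can be sketched with rdflib as follows. The class and annotation property names below are invented stand-ins, not the actual SERIN vocabulary: an OWL class represents the REST resource, and annotations list the HTTP methods every host implementing the interface is expected to expose.

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import OWL, RDF, RDFS

    EX = Namespace("http://example.org/serin-like#")   # invented namespace

    g = Graph()
    g.bind("ex", EX)

    # The abstract interface: an OWL class standing for the REST resource...
    g.add((EX.Book, RDF.type, OWL.Class))
    g.add((EX.Book, RDFS.comment, Literal("Abstract resource exposed by any host "
                                          "implementing this interface")))
    # ...annotated with the HTTP methods a conforming host must provide.
    g.add((EX.httpMethod, RDF.type, OWL.AnnotationProperty))
    for method in ("GET", "POST", "PUT", "DELETE"):
        g.add((EX.Book, EX.httpMethod, Literal(method)))

    print(g.serialize(format="turtle"))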
Styles APA, Harvard, Vancouver, ISO, etc.
41

Immaneni, Trivikram. « A HYBRID APPROACH TO RETRIEVING WEB DOCUMENTS AND SEMANTIC WEB DATA ». Wright State University / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=wright1199923822.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
42

Viola, Fabio <1986&gt. « Semantic Web and the Web of Things : concept, platform and applications ». Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amsdottorato.unibo.it/9029/1/main.pdf.

Texte intégral
Résumé :
The ubiquitous presence of devices with computational resources and connectivity is fostering the diffusion of the Internet of Things (IoT), where smart objects interoperate and react to the available information, providing services to users. The pervasiveness of the IoT across many different areas demonstrates the worldwide interest of researchers from both academia and industry. This research has produced new technologies and protocols addressing the different needs of emerging scenarios, making it difficult to develop interoperable applications. The Web of Things was born to address this problem through the standard protocols responsible for the success of the Web, but an even greater contribution can be provided by the standards of the Semantic Web. Semantic Web protocols grant unambiguous identification of resources and a representation of data in which information is machine understandable and computable, and in which information from different sources can easily be aggregated. Semantic Web technologies are therefore interoperability enablers for the IoT. This thesis investigates how to employ Semantic Web protocols in the IoT to realize the Semantic Web of Things (SWoT) vision of an interoperable network of applications. Part I introduces the IoT; Part II investigates the algorithms that efficiently support the publish/subscribe paradigm in semantic brokers for the SWoT and their implementation in Smart-M3 and SEPA, and presents preliminary work toward the first benchmark for SWoT applications. Part IV describes the research activity aimed at applying the developed semantic infrastructures in real-life scenarios (electro-mobility, home automation, semantic audio and the Internet of Musical Things). Part V presents the conclusions. A lack of effective ways to explore and debug Semantic Web datasets emerged during these activities; Part III therefore describes a second line of research aimed at devising a novel way to visualize semantic datasets, based on graphs and the new concept of Semantic Planes.
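The publish/subscribe interaction at the heart of such semantic brokers can be reduced to a toy in-memory sketch (illustrative only; it mimics the behaviour of brokers like Smart-M3 or SEPA without using their actual APIs): clients subscribe with a triple pattern, and every new triple published to the knowledge base notifies the subscribers whose pattern matches it.

    class TinySemanticBroker:
        """In-memory stand-in for a semantic publish/subscribe broker."""

        def __init__(self):
            self.triples = set()
            self.subscriptions = []          # (pattern, callback) pairs

        @staticmethod
        def _matches(pattern, triple):
            return all(p is None or p == t for p, t in zip(pattern, triple))

        def subscribe(self, pattern, callback):
            """Pattern is an (s, p, o) tuple where None acts as a wildcard."""
            self.subscriptions.append((pattern, callback))

        def publish(self, triple):
            if triple in self.triples:
                return
            self.triples.add(triple)
            for pattern, callback in self.subscriptions:
                if self._matches(pattern, triple):
                    callback(triple)

    broker = TinySemanticBroker()
    broker.subscribe((None, "ex:hasTemperature", None),
                     lambda t: print("notification:", t))
    broker.publish(("ex:sensor1", "ex:hasTemperature", "21.5"))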
Styles APA, Harvard, Vancouver, ISO, etc.
43

Elgedawy, Islam Moukhtar. « Correctness-Aware High-Level Functional Matching Approaches For Semantic Web Services ». RMIT University. Computer Science and Information Technology, 2007. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20070511.162143.

Texte intégral
Résumé :
Existing service matching approaches trade precision for recall, creating the need for humans to choose the correct services, which is a major obstacle to automating the service matching and service aggregation processes. To overcome this problem, the matchmaker must automatically determine the correctness of the matching results according to the defined users' goals; that is, only services achieving users' goals are considered correct. This requires the high-level functional semantics of services, users, and application domains to be captured in a machine-understandable format, and it requires the matchmaker to determine the achievement of users' goals without invoking the services. We propose the G+ model to capture the high-level functional specifications of services and users (namely goals, achievement contexts and external behaviors), providing the basis for automated goal-achievement determination; we also propose the concepts substitutability graph to capture the semantics of application domains. To avoid the false negatives that result from adopting existing constraint and behavior matching approaches during service matching, we also propose new constraint and behavior matching approaches that match constraints with different scopes and behavior models with different numbers of state transitions. Finally, we propose two correctness-aware matching approaches (direct and aggregate) that semantically match and aggregate semantic web services according to their G+ models, providing the required theoretical proofs and the corresponding verifying simulation experiments.
Styles APA, Harvard, Vancouver, ISO, etc.
44

Pérez, de Laborda Schwankhart Cristian. « Incorporating relational data into the Semantic Web ». [S.l.] : [s.n.], 2006. http://deposit.ddb.de/cgi-bin/dokserv?idn=982420390.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
45

Nebot, Romero María Victoria. « Scalable methods to analyze Semantic Web data ». Doctoral thesis, Universitat Jaume I, 2013. http://hdl.handle.net/10803/396347.

Texte intégral
Résumé :
Semantic Web data is currently heavily used as a data representation format in scientific communities, social networks, business companies, news portals and other domains. The irruption and availability of Semantic Web data are demanding new methods and tools to efficiently analyze such data and take advantage of the underlying semantics. Although there exist some applications that make use of Semantic Web data, advanced analytical tools are still lacking, preventing users from exploiting the attached semantics.
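As a small example of the kind of semantics-aware analysis the thesis targets (the data and query below are invented, not taken from the thesis), an aggregate SPARQL query can summarize semantically annotated records directly over the RDF graph.

    from rdflib import Graph

    DATA = """
    @prefix ex: <http://example.org/> .
    ex:p1 a ex:Publication ; ex:area ex:SemanticWeb .
    ex:p2 a ex:Publication ; ex:area ex:SemanticWeb .
    ex:p3 a ex:Publication ; ex:area ex:Databases .
    """

    QUERY = """
    SELECT ?area (COUNT(?pub) AS ?n)
    WHERE { ?pub a <http://example.org/Publication> ;
                 <http://example.org/area> ?area . }
    GROUP BY ?area
    """

    g = Graph()
    g.parse(data=DATA, format="turtle")
    for area, n in g.query(QUERY):
        print(area, n)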
Styles APA, Harvard, Vancouver, ISO, etc.
46

Voutsadakis, George. « Federated description logics for the semantic web ». [Ames, Iowa : Iowa State University], 2010. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3403851.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
47

Palmér, Matthias. « Learning Applications based on Semantic Web Technologies ». Doctoral thesis, KTH, Medieteknik och interaktionsdesign, MID, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-104446.

Texte intégral
Résumé :
The interplay between learning and technology is a growing field that is often referred to as Technology Enhanced Learning (TEL). Within this context, learning applications are software components that are useful for learning purposes, such as textbook replacements, information gathering tools, communication and collaboration tools, knowledge modeling tools, rich lab environments that allow experiments, etc. When developing learning applications, the choice of technology depends on many factors: who and how many the intended end-users are, whether there are requirements to support in-application collaboration, platform restrictions, the expertise of the developers, requirements to inter-operate with other systems or applications, etc. This thesis provides guidance on how to develop learning applications based on Semantic Web technology. The focus on Semantic Web technology is due to its basic design that allows expression of knowledge at the web scale. It also allows keeping track of who said what, providing subjective expressions in parallel with more authoritative knowledge sources. The intended readers of this thesis include practitioners such as software architects and developers as well as researchers in TEL and other related fields. The empirical part of this thesis is the experience from the design and development of two learning applications and two supporting frameworks. The first learning application is the web application Confolio/EntryScape, which allows users to collect files and online material into personal and shared portfolios. The second learning application is the desktop application Conzilla, which provides a way to create and navigate a landscape of interconnected concepts. Based upon the experience of design and development as well as on more theoretical considerations outlined in this thesis, three major obstacles have been identified. The first obstacle is the lack of non-expert, user-friendly solutions for presenting and editing Semantic Web data that are not hard-coded to use a specific vocabulary. The thesis presents five categories of tools that support editing and presentation of RDF, and discusses a concrete software solution together with a list of the most important features that have crystallized during six major iterations of development. The second obstacle is the lack of solutions that can handle both private and collaborative management of resources together with related Semantic Web data. The thesis presents five requirements for a reusable read/write RDF framework and a concrete software solution that fulfills these requirements; a list of features that have appeared during four major iterations of development is also presented. The third obstacle is the lack of recommendations for how to build learning applications based on Semantic Web technology. The thesis presents seven recommendations in terms of architectures, technologies, frameworks, and types of application to focus on. In addition, as part of the preparatory work to overcome the three obstacles, the thesis also presents a categorization of applications and a derivation of the relations between standards, technologies and application types.
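The first two obstacles concern generic, vocabulary-agnostic reading and writing of RDF. A minimal sketch of that idea with rdflib is shown below (an illustration of the general pattern, not the Confolio/EntryScape or Conzilla code; the resource URI and property choices are placeholders): the read side lists whatever properties a resource happens to have, and the write side replaces one property value without any hard-coded vocabulary.

    from rdflib import Graph, Literal, URIRef

    def describe(graph, resource):
        """Vocabulary-agnostic read: list every property/value of a resource."""
        return [(p, o) for p, o in graph.predicate_objects(URIRef(resource))]

    def update_literal(graph, resource, prop, new_value):
        """Vocabulary-agnostic write: replace the literal value(s) of one property."""
        s, p = URIRef(resource), URIRef(prop)
        graph.remove((s, p, None))
        graph.add((s, p, Literal(new_value)))

    g = Graph()
    g.parse(data="""
    @prefix dc: <http://purl.org/dc/terms/> .
    <http://example.org/entry/1> dc:title "Draft title" .
    """, format="turtle")

    update_literal(g, "http://example.org/entry/1",
                   "http://purl.org/dc/terms/title", "Final title")
    print(describe(g, "http://example.org/entry/1"))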

Styles APA, Harvard, Vancouver, ISO, etc.
48

Deng, Feng. « Web service matching based on semantic classification ». Thesis, Högskolan Kristianstad, Sektionen för hälsa och samhälle, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:hkr:diva-9750.

Texte intégral
Résumé :
This degree project mainly discusses a web service classification approach based on a suffix tree algorithm. Nowadays, web services comprise WSDL web services, RESTful web services and many traditional component services on the Internet. The cost of manual classification cannot keep pace with the growing number of web services, so this paper proposes an approach to classify web services automatically, relying only on the textual description of each service. Through semantic similarity calculation, web service classification is achieved automatically. Experimental evaluation results show that this approach achieves acceptable and stable precision and recall.
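The classification step can be approximated with a short baseline sketch (a plain cosine-similarity, bag-of-words classifier standing in for the suffix-tree-based similarity the project actually studies; the categories and descriptions are invented): each unlabeled service description is assigned the category whose known descriptions it most resembles.

    import math
    import re
    from collections import Counter

    def vectorize(text):
        return Counter(re.findall(r"[a-z]+", text.lower()))

    def cosine(a, b):
        dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
        norm = math.sqrt(sum(v * v for v in a.values())) * \
               math.sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    labeled = {
        "weather": ["returns the weather forecast for a city",
                    "current temperature and humidity lookup"],
        "finance": ["convert an amount between two currencies",
                    "retrieve latest stock quotes"],
    }

    def classify(description):
        vec = vectorize(description)
        scores = {cat: max(cosine(vec, vectorize(d)) for d in docs)
                  for cat, docs in labeled.items()}
        return max(scores, key=scores.get)

    print(classify("get temperature forecast for tomorrow"))   # -> weather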
Styles APA, Harvard, Vancouver, ISO, etc.
49

Ziembicki, Joanna. « Distributed Search in Semantic Web Service Discovery ». Thesis, University of Waterloo, 2006. http://hdl.handle.net/10012/1103.

Texte intégral
Résumé :
This thesis presents a framework for semantic Web Service discovery using descriptive (non-functional) service characteristics in a large-scale, multi-domain setting. The framework uses the Web Ontology Language for Services (OWL-S) to design a template for describing non-functional service parameters in a way that facilitates service discovery, and presents a layered scheme for organizing the ontologies used in service description. This service description scheme serves as a core for designing the four main functions of a service directory: a template-based user interface, semantic query expansion algorithms, a two-level indexing scheme that combines Bloom filters with a Distributed Hash Table, and a distributed approach for storing service descriptions. The service directory is, in turn, implemented as an extension of the Open Service Discovery Architecture.

The search algorithms presented in this thesis are designed to maximize precision and completeness of service discovery, while the distributed design of the directory allows individual administrative domains to retain a high degree of independence and maintain access control to information about their services.
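The two-level index can be pictured with a compact sketch (parameters and node names are invented; this is not the thesis's implementation): a hash-based assignment plays the role of the Distributed Hash Table that routes a descriptive term to a directory node, and that node's Bloom filter cheaply indicates whether a full lookup of its locally stored service descriptions is worthwhile.

    import hashlib

    class BloomFilter:
        def __init__(self, size=256, hashes=3):
            self.size, self.hashes, self.bits = size, hashes, 0

        def _positions(self, item):
            for i in range(self.hashes):
                digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
                yield int(digest, 16) % self.size

        def add(self, item):
            for pos in self._positions(item):
                self.bits |= 1 << pos

        def might_contain(self, item):
            return all(self.bits >> pos & 1 for pos in self._positions(item))

    NODES = ["node-a", "node-b", "node-c"]

    def responsible_node(term):
        """DHT-like routing: hash the term onto one of the directory nodes."""
        return NODES[int(hashlib.sha256(term.encode()).hexdigest(), 16) % len(NODES)]

    # Level 1: route each indexed term to a node; level 2: that node's Bloom
    # filter says whether searching its local service descriptions is worthwhile.
    filters = {n: BloomFilter() for n in NODES}
    for term in ["payment", "geocoding", "translation"]:
        filters[responsible_node(term)].add(term)

    term = "geocoding"
    node = responsible_node(term)
    print(node, filters[node].might_contain(term))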
Styles APA, Harvard, Vancouver, ISO, etc.
50

Åberg, Cécile. « An evaluation platform for semantic web technology / ». Linköping : Department of Computer and Information Science, Linköping University, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-7904.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
