
Dissertations / Theses on the topic 'Multimedia web ontology language'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Multimedia web ontology language.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Alaca, Aygul Filiz. "Natural Language Query Processing In Ontology Based Multimedia Databases." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12611816/index.pdf.

Full text
Abstract:
In this thesis a natural language query interface is developed for semantic and spatio-temporal querying of MPEG-7 based domain ontologies. The underlying ontology is created by attaching domain ontologies to the core Rhizomik MPEG-7 ontology. The user can pose concept, complex concept (objects connected with an "AND" or "OR" connector), spatial (left, right . . . ), temporal (before, after, at least 10 minutes before, 5 minutes after . . . ), object trajectory and directional trajectory (east, west, southeast . . . , left, right, upwards . . . ) queries to the system. Furthermore, the system handles negative meaning in the user input. When the user enters a natural language (NL) input, it is parsed with the link parser. According to the query type, the objects, attributes, spatial relation, temporal relation, trajectory relation, time filter and time information are extracted from the parser output by using predefined rules. After the information extraction, SPARQL queries are generated and executed against the ontology by using an RDF API. Results are retrieved and used to calculate spatial, temporal, and trajectory relations between objects. The results satisfying the required relations are displayed in a tabular format and the user can navigate through the multimedia content.
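The pipeline described above ends with generated SPARQL queries being executed against the ontology through an RDF API. The snippet below is a minimal sketch of that step in Python with rdflib; it is not the thesis implementation, and the ontology file name, namespace, and property names are hypothetical stand-ins for the MPEG-7-based domain ontology.

```python
# Minimal sketch: run a generated SPARQL query over an MPEG-7-based domain
# ontology with rdflib. File name, namespace and properties are hypothetical.
from rdflib import Graph

g = Graph()
g.parse("mpeg7_domain.owl", format="xml")  # hypothetical ontology file

query = """
PREFIX ex: <http://example.org/mpeg7-domain#>
SELECT ?object ?start ?end
WHERE {
    ?object a ex:VideoObject ;
            ex:appearsIn ?segment .
    ?segment ex:startTime ?start ;
             ex:endTime ?end .
}
"""

# The retrieved rows would then be post-processed to compute the spatial,
# temporal and trajectory relations between objects described in the abstract.
for obj, start, end in g.query(query):
    print(obj, start, end)
```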
APA, Harvard, Vancouver, ISO, and other styles
2

Suresh, Raju Vishnu. "Verifying arbitrary safety-related rules using Web Ontology Language." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-251652.

Full text
Abstract:
This project explores the possibility of verifying arbitrary safety-related rules, in the context of heavy vehicles subject to the ISO 26262 functional safety standard, using semantic web reasoning techniques over data in Linked Data format. The aim is to use this as a method to claim functional safety for different vehicle configurations in a highly automated way. The current tooling for performing this verification involves manual work and is difficult to apply because of the size and complexity of the data. The work was studied and implemented within Scania, where integrated data from the system safety department, in Linked Data format, was used for the implementation of the tool. The project proceeded in two stages. The first stage surveyed existing reasoners and their applications to rule-verification problems, on the basis of different comparison criteria and benchmark results. The second stage determined a suitable way to represent the rules in order to verify them against the available data.
APA, Harvard, Vancouver, ISO, and other styles
3

Farrar, Scott O. "An ontology for linguistics on the Semantic Web." Diss., The University of Arizona, 2003. http://hdl.handle.net/10150/289879.

Full text
Abstract:
The current research presents an ontology for linguistics useful for an implementation on the Semantic Web. By adhering to this model, it is shown that data of the kind routinely collected by field linguists may be represented so as to facilitate automatic analysis and semantic search. The literature concerning typological databases, knowledge engineering, and the Semantic Web is reviewed. It is argued that the time is right for the integration of these three areas of research. Linguistic knowledge is discussed in the overall context of common-sense knowledge representation. A three-layer approach to meaning is assumed, one that includes conceptual, semantic, and linguistic levels of knowledge. In particular, the level of semantics is shown to be crucial for a notional account of grammatical categories such as tense, aspect, and case. The level of semantics is viewed as an encoding of common-sense reality. To develop the ontology an upper model based on the Suggested Upper Merged Ontology (SUMO) is adopted, though elements from other ontologies are utilized as well. A brief comparison of available upper models is presented. It is argued that any ontology for linguistics should provide an account of at least (1) linguistic expressions, (2) mental linguistic units, (3) linguistic categories, and (4) discrete semantic units. The concepts and relations concerning these four domains are motivated as part of the ontology. Finally, an implementation for the Semantic Web is given by discussing the various data constructs necessary for markup (interlinear text, lexicons, paradigms, grammatical descriptions). It is argued that a characterization of the data constructs should not be included in the general ontology, but should be left up to the individual data provider to implement in XML Schema. A search scenario for linguistic data is discussed. It is shown that an ontology for linguistics provides the machinery for pure semantic search, that is, an advanced search framework whereby the user may use linguistic concepts, not just simple strings, as the search query.
APA, Harvard, Vancouver, ISO, and other styles
4

Sengupta, Kunal. "A Language for Inconsistency-Tolerant Ontology Mapping." Wright State University / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=wright1441044183.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Lacy, Lee. "Interchanging Discrete Event Simulation Process Interaction Models Using the Web Ontology Language - OWL." Doctoral diss., University of Central Florida, 2006. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3332.

Full text
Abstract:
Discrete event simulation development requires significant investments in time and resources. Descriptions of discrete event simulation models are associated with world views, including the process interaction orientation. Historically, these models have been encoded using high-level programming languages or special purpose, typically vendor-specific, simulation languages. These approaches complicate simulation model reuse and interchange. The current document-centric World Wide Web is evolving into a Semantic Web that communicates information using ontologies. The Web Ontology Language – OWL, was used to encode a Process Interaction Modeling Ontology for Discrete Event Simulations (PIMODES). The PIMODES ontology was developed using ontology engineering processes. Software was developed to demonstrate the feasibility of interchanging models from commercial simulation packages using PIMODES as an intermediate representation. The purpose of PIMODES is to provide a vendor-neutral open representation to support model interchange. Model interchange enables reuse and provides an opportunity to improve simulation quality, reduce development costs, and reduce development times.
Ph.D.
Department of Industrial Engineering and Management Systems
Engineering and Computer Science
Modeling and Simulation
APA, Harvard, Vancouver, ISO, and other styles
6

Kavalec, Martin. "Ontology Learning and Information Extraction for the Semantic Web." Doctoral thesis, Vysoká škola ekonomická v Praze, 2006. http://www.nusl.cz/ntk/nusl-452.

Full text
Abstract:
The work gives an overview of its three main topics: the semantic web, information extraction and ontology learning. A method for identifying relevant information on web pages is described and experimentally tested on pages of companies offering products and services. The method is based on an analysis of sample web pages and their position in the Open Directory catalogue. Furthermore, a modification of an association rule mining algorithm is proposed and experimentally tested. In addition to identifying a relation between ontology concepts, it suggests a possible name for the relation.
APA, Harvard, Vancouver, ISO, and other styles
7

Tewolde, Noh Teamrat. "Evaluating a Semantic Approach to Address Data Interoperability." Diss., University of Pretoria, 2014. http://hdl.handle.net/2263/46272.

Full text
Abstract:
Semantic approaches have been used to facilitate interoperability in different fields of study. Current literature, however, shows that the semantic approach has not been used to facilitate the interoperability of addresses across domains. Addresses are important reference data used to identify locations and/or delivery points. Interoperability of address data across address or application domains is important because it facilitates the sharing of address data, addressing software and tools across domains. The aim of this research has been to evaluate how a semantic (ontology-based) approach could be used to facilitate address data interoperability and what the challenges and benefits of the semantic approach are. To test the hypothesis and answer the research problems, a multi-tier ontology architecture was designed to integrate address data with different levels of granularity across domains. A four-tier hierarchy of ontologies was argued to be the optimal architecture for address data interoperability. At the top of the hierarchy is the Foundation Tier, which includes vocabularies for location-related information and semantic language rules and concepts. The second tier contains the address reference ontology, called the Base Address Ontology (BAO), which was developed to facilitate interoperability across address domains; developing an optimal address reference ontology was one of the major goals of the research. Different domain ontologies were developed at the third tier of the hierarchy; these extend the vocabulary of the BAO with domain-specific concepts. At the bottom of the hierarchy are application ontologies designed for specific purposes within an address domain or domains. Multiple scenarios of address data usage were considered to answer the research questions from different perspectives. Two interoperable address systems were developed to demonstrate the proof of concept for the semantic approach. These interoperable environments were created using the UKdata+UPUdata ontology and the UKpostal ontology, which illustrate different use cases of ontologies that facilitate interoperability. Ontology reasoning, inference, and SPARQL query tools were used to share, exchange, and process address data across address domains. Ontology inference was used to exchange address data attributes between the UK administrative address data and UK postal service address data systems in the UKdata+UPUdata ontology. SPARQL queries were, furthermore, run to extract and process information from different perspectives of a single address domain and from the combined perspectives of two (UK administrative and UK postal) address domains. The second interoperable system (the UKpostal ontology) illustrated the use of ontology inference tools to share address data between two address data systems that provide different perspectives of a domain.
Dissertation (MSc)--University of Pretoria, 2014.
Computer Science
MSc
APA, Harvard, Vancouver, ISO, and other styles
8

Gao, Yongchun 1977. "The application of Web Ontology Language for information sharing in the dairy industry /." Thesis, McGill University, 2005. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=97957.

Full text
Abstract:
In this thesis the Semantic Web and its core technology, the Web Ontology Language (OWL), were studied. Considering the features of the different units involved in the dairy industry, OWL, in its capacity as an ontology description language, can be used to encode and thus exchange ontologies among those units. After creating an OWL file using Protégé, an OWL parser was programmed to decode the ontology and data contained in the OWL file. Based on these investigations, it was determined that OWL can be used to encode, exchange, and decode data between farms and the units that interact with them, although large volumes of data among the service agencies pose certain challenges in terms of transfer size. A structure of Semantic Web services in the dairy industry is proposed for Semantic Web service registration, search and usage related to certain farm-management tasks. With the help of the Semantic Web and OWL, one can expect more efficient data processing in the future dairy industry.
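As a rough illustration of the decoding step described above (reading the ontology and data back out of an OWL file), the sketch below uses the off-the-shelf owlready2 library rather than the custom parser developed in the thesis; the file name "dairy.owl" and its contents are hypothetical.

```python
# Minimal sketch: load an OWL file and list the classes and individuals it
# contains, i.e. the ontology and the data that a farm or service agency
# could exchange as an OWL document. Uses owlready2 instead of the thesis's
# custom parser; "dairy.owl" is a hypothetical file name.
from owlready2 import get_ontology

onto = get_ontology("file://dairy.owl").load()

for cls in onto.classes():
    print("Class:", cls.name)
    for individual in cls.instances():
        print("  Individual:", individual.name)
```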
APA, Harvard, Vancouver, ISO, and other styles
9

Santos, Laécio Lima dos. "PR-OWL 2 RL : um formalismo para tratamento de incerteza na web semântica." reponame:Repositório Institucional da UnB, 2016. http://repositorio.unb.br/handle/10482/21547.

Full text
Abstract:
Master's dissertation—Universidade de Brasília, Instituto de Ciências Exatas, Departamento de Ciência da Computação, Programa de Pós-Graduação em Informática, 2016.
The Semantic Web (SW) adds semantic information to the traditional Web, allowing computers to understand content previously accessible only to human beings. The Web Ontology Language (OWL), the main language for building ontologies in the SW, allows formal modeling of a knowledge domain based on description logics. OWL, however, does not support uncertainty. This restriction motivated the creation of several extensions of the language. Probabilistic OWL (PR-OWL) extends OWL with the ability to treat uncertainty using Multi-Entity Bayesian Networks (MEBN), a first-order probabilistic logic whose inference consists of generating a Situation-Specific Bayesian Network (SSBN). PR-OWL 2 extends PR-OWL, offering better integration with OWL and its underlying logic and allowing the creation of ontologies with deterministic and probabilistic parts. PR-OWL, however, does not deal with very large assertive bases. This is due to the high computational complexity of the description logic behind OWL and to the fact that the reasoners used in PR-OWL implementations require the data to be fully loaded into memory at inference time. To address this issue, this work proposes PR-OWL 2 RL, a scalable version of PR-OWL based on the OWL 2 RL profile and on triplestores. OWL 2 RL allows reasoning in polynomial time for the main reasoning tasks. Triplestores can store RDF (Resource Description Framework) triples in databases optimized to work with graphs. To allow the generation of SSBNs for databases with a large evidence base, this work proposes a new algorithm that is scalable because it instantiates an evidence node only if it influences the target node. A plug-in for the UnBBayes framework was developed to allow an empirical evaluation of the proposed algorithms. A case study on fraud in public procurement was carried out.
APA, Harvard, Vancouver, ISO, and other styles
10

Johannes, Elisabeth. "DEUTSCH 1, 2, 3!! : an interactive, multimedia, web-based program for the German foreign language classroom." Thesis, Link to the online version, 2007. http://hdl.handle.net/10019/741.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Lossio-Ventura, Juan Antonio. "Towards the French Biomedical Ontology Enrichment." Thesis, Montpellier, 2015. http://www.theses.fr/2015MONTS220/document.

Full text
Abstract:
Big Data in the biomedical domain raises a major issue: the analysis of large volumes of heterogeneous data (e.g. video, audio, text, image). Ontologies, conceptual models of reality, can play a crucial role in biomedicine to automate data processing, querying, and matching of heterogeneous data. Various English resources exist, but considerably fewer are available in French, and there is a strong lack of related tools and services to exploit them. Initially, ontologies were built manually; in recent years, a few semi-automatic methodologies have been proposed. Semi-automatic construction/enrichment of ontologies is mostly induced from texts by using natural language processing (NLP) techniques. NLP methods have to take into account the lexical and semantic complexity of biomedical data: (1) lexical refers to the complex phrases to take into account, (2) semantic refers to sense and context induction of the terminology. In this thesis, we propose methodologies for the enrichment/construction of biomedical ontologies based on two main contributions, in order to tackle the previously mentioned challenges. The first contribution concerns the automatic extraction of specialized biomedical terms (lexical complexity) from corpora. New ranking measures for single- and multi-word term extraction methods have been proposed and evaluated. In addition, we present the BioTex software that implements the proposed measures. The second contribution concerns concept extraction and semantic linkage of the extracted terminology (semantic complexity). This work seeks to induce semantic concepts for new candidate terms and to find their semantic links, i.e. the most relevant locations for the new terms in an existing biomedical ontology. We proposed a methodology that places new terms in the MeSH ontology. The experiments conducted on real data highlight the relevance of the contributions.
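To make the first contribution concrete, the sketch below shows the kind of input and output a term-extraction step works with: word n-grams harvested from a tokenized corpus and ranked by a score. The frequency-times-length score used here is only a generic placeholder, not one of the measures proposed in the thesis or implemented in BioTex.

```python
# Generic illustration of extracting and ranking multi-word candidate terms.
# The frequency-times-length score is a placeholder, not a BioTex measure.
from collections import Counter
from math import log2

def candidate_terms(sentences, max_len=3):
    """Yield word n-grams (1..max_len words) from tokenized sentences."""
    for tokens in sentences:
        for n in range(1, max_len + 1):
            for i in range(len(tokens) - n + 1):
                yield " ".join(tokens[i:i + n])

def rank_terms(sentences):
    freq = Counter(candidate_terms(sentences))
    # Longer, frequent phrases score higher (rough heuristic only).
    return sorted(freq, key=lambda t: freq[t] * log2(1 + len(t.split())), reverse=True)

corpus = [["breast", "cancer", "screening"], ["breast", "cancer", "treatment"]]
print(rank_terms(corpus)[:5])
```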
APA, Harvard, Vancouver, ISO, and other styles
12

Beaulac, Jacqueline. "Interactive multimedia composition on the World Wide Web : a solution for musicians using Java." Thesis, McGill University, 2000. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=33270.

Full text
Abstract:
This thesis attempts to gauge the strengths and limitations of the Java programming language in terms of its use in the production of multimedia compositions: in particular, the ways in which Java supports the creation of interactive, non-deterministic musical works. An original solution to the problem of multimedia design is presented: a hierarchically defined, basic, yet flexible scripting language that is interpreted using Java. This scripting language allows the user to incorporate his/her own media into a coherent and interactive form using a small set of simple keywords and basic operators. It also allows new functionality to be added by advanced users with a basic knowledge of Java. By investigating how such a scripting language may be implemented, the extent to which Java may be applied towards multimedia applications in general is revealed.
APA, Harvard, Vancouver, ISO, and other styles
13

Ferreira, Déborah Mendes. "Adicionando temporalidade à linguagem OWL 2 : um estudo a partir da linguagem tOWL e sua decibilidade." reponame:Repositório Institucional da UnB, 2016. http://repositorio.unb.br/handle/10482/20558.

Full text
Abstract:
Master's dissertation—Universidade de Brasília, Instituto de Ciências Exatas, Departamento de Ciência da Computação, Programa de Pós-Graduação em Informática, 2016.
One major obstacle to providing better support for Web users is the fact that the meaning of most Web content is not accessible to machines. If we want machines to understand Web content, machines and humans need to share knowledge about the real world; in other words, it is necessary to represent the world, or parts of it, within the computer. It is desirable that such a representation be as close to reality as possible, to prevent false assumptions being made about the world, and that includes representing a very important aspect of the real world: time. Time is a very important aspect of human life, and many environments require temporal awareness. One example of such an environment is air traffic control: each aircraft must follow a strict schedule to avoid any incident. Therefore, time should also be part of real-world representations. We present a study of the compatibility between the Temporal Web Ontology Language (tOWL) and the Web Ontology Language 2 (OWL 2), checking which tOWL structures are compatible with OWL 2 and which structures require modifications to maintain the decidability of the language. The tOWL language was developed for a fragment of the first version of OWL; some structures cannot simply be added to OWL 2, since this could affect decidability. This work also presents reasoning algorithms to deal with the changes made in the tOWL language. With these algorithms, we can check database consistency, perform semantic queries and obtain implicit knowledge, learning new facts about the database. We present a case study using a database of aircraft occurrences. A temporal ontology is built to represent these occurrences; thanks to tOWL's ability to deal with temporal aspects, we can link each occurrence to the period in which it occurred, analyze events, find patterns and connect the information with other databases.
APA, Harvard, Vancouver, ISO, and other styles
14

Sanches, Henderson Matsuura. "Onto-mama-nm : um modelo ontológico de tratamento de neoplasia mamária." reponame:Repositório Institucional da UnB, 2017. http://repositorio.unb.br/handle/10482/23517.

Full text
Abstract:
Master's dissertation—Universidade de Brasília, Faculdade Gama, Programa de Pós-Graduação em Engenharia Biomédica, 2017.
The aim of this work was the development of an ontological model of mammary neoplasia (NM) called ONTO-MAMA-NM. This model is a relevant tool to assist experts and students in the health area in the treatment of breast cancer. The ontological model was implemented in the Web Ontology Language (OWL), whose main advantages are the ease of expressing meaning and semantics and its applicability to automated information processing. As a model applied to the medical field, ONTO-MAMA-NM seeks to maintain compatibility with the Digital Imaging and Communications in Medicine (DICOM) and Health Level Seven International (HL7) standards, in order to preserve the interoperability of patient information in hospital environments. As a result, a detailed ontology was developed and implemented in the Protégé 5.1 software with the support of the Methontology methodology. The entire development process is described, from data collection to the final validation of the model with the experts. The model was evaluated in two stages, first by the specialists: physiotherapists, physicians, residents, and physiotherapy and medicine students at the HUB. At the end of the ONTO-MAMA-NM validation process, they reported that they had not previously known of such an ontology and had not seen anything similar for NM treatment, making this the first ontological model of NM treatment.
APA, Harvard, Vancouver, ISO, and other styles
15

Sevindik, Mentes Hilal. "Design and Development of a Mineral Exploration Ontology." Digital Archive @ GSU, 2012. http://digitalarchive.gsu.edu/geosciences_theses/49.

Full text
Abstract:
In this thesis, an ontology for the mineral exploration domain is designed and developed using the Protégé ontology editor. The MinExOnt ontology includes a formal and explicit representation of the terms describing real objects, activities, and processes in mineral exploration. The stages of these activities have their own vocabularies, which are semantically modeled in the ontology with the Web Ontology Language (OWL). The aim of the thesis is to show how ontologies can be designed and developed to help manage and represent geological knowledge. In addition to providing a general workflow for building the ontology, this thesis presents a simple user guide for the software employed: Protégé for ontology development and Knoodl-OntVis for OWL visualization.
APA, Harvard, Vancouver, ISO, and other styles
16

Reul, Quentin H. "Role of description logic reasoning in ontology matching." Thesis, University of Aberdeen, 2012. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=186278.

Full text
Abstract:
Semantic interoperability is essential on the Semantic Web to enable different information systems to exchange data. Ontology matching has been recognised as a means to achieve semantic interoperability on the Web by identifying similar information in heterogeneous ontologies. Existing ontology matching approaches have two major limitations. The first limitation relates to similarity metrics, which provide a pessimistic value when considering complex objects such as strings and conceptual entities. The second limitation relates to the role of description logic reasoning. In particular, most approaches disregard implicit information about entities as a source of background knowledge. In this thesis, we first present a new similarity function, called the degree of commonality coefficient, to compute the overlap between two sets based on the similarity between their elements. The results of our evaluations show that the degree of commonality performs better than traditional set similarity metrics in the ontology matching task. Secondly, we have developed the Knowledge Organisation System Implicit Mapping (KOSIMap) framework, which differs from existing approaches by using description logic reasoning (i) to extract implicit information as background knowledge for every entity, and (ii) to remove inappropriate correspondences from an alignment. The results of our evaluation show that the use of description logic reasoning in the ontology matching task can increase coverage. We identify people interested in ontology matching and reasoning techniques as the target audience of this work.
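The idea behind the degree-of-commonality coefficient (a set overlap driven by element-level similarity rather than exact matches) can be sketched as below. This is only an illustration of the general idea; the exact formula defined in the thesis may differ, and the string similarity used here is an arbitrary placeholder.

```python
# Illustrative element-similarity-aware set overlap: each element contributes
# its best similarity with an element of the other set, unlike Jaccard, which
# only counts exact matches. Not the thesis's exact formula.
from difflib import SequenceMatcher

def sim(a, b):
    return SequenceMatcher(None, a, b).ratio()

def soft_overlap(s1, s2):
    if not s1 or not s2:
        return 0.0
    best1 = sum(max(sim(x, y) for y in s2) for x in s1) / len(s1)
    best2 = sum(max(sim(y, x) for x in s1) for y in s2) / len(s2)
    return (best1 + best2) / 2

print(soft_overlap({"hasAuthor", "title"}, {"has_author", "documentTitle"}))
```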
APA, Harvard, Vancouver, ISO, and other styles
17

Cimiano, Philipp. "Ontology learning and population from text : algorithms, evaluation and applications /." New York, NY : Springer, 2006. http://www.loc.gov/catdir/enhancements/fy0824/2006931701-d.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Hahne, Fredrik, and Åsa Lindgren. "Från luddig verklighet till strikt formalism : Utveckling av en metod för den semantiska webben." Thesis, Växjö University, School of Mathematics and Systems Engineering, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:vxu:diva-477.

Full text
Abstract:

Internet is the world's largest source of information, and it is expanding every day. It is possible to find all kinds of information as long as you know how and where to look for it, but it is still only the words themselves that are searched for. In this thesis we have tried to find an approach that makes it possible to give words a meaning or a context.

As a starting point we used the Socrates method, which breaks texts down into their smallest elements and forms activities. We then turned these activities into ontologies by forming general and specific descriptions of the activities. The ontologies are meant to create a common language for humans as well as computers, in which meaning and context are built in.

After creating our ontologies we used the Web Ontology Language (OWL), the ontology language considered closest to a standard. It was developed for the Semantic Web, which is also the ultimate objective of our thesis. The Semantic Web is meant to be an extension of the existing Web that builds in understanding for computers as well.

We have come to realize that the Semantic Web would be a great improvement for both humans and computers, since it would make it much easier to find the information you are looking for.

APA, Harvard, Vancouver, ISO, and other styles
19

Deyab, Rodwan Bakkar. "Ontology-based information extraction from learning management systems." Master's thesis, Universidade de Évora, 2017. http://hdl.handle.net/10174/20996.

Full text
Abstract:
In this work we present an ontology-based system for information extraction from Learning Management Systems. The system retrieves information according to the structure of the ontology in order to populate the ontology, and presents statistics about the ontology data graphically. These statistics reveal latent knowledge that is difficult to see in a traditional Learning Management System. To answer questions about the ontology, a question answering system was developed that uses Natural Language Processing to convert natural language questions into an ontology query language.
APA, Harvard, Vancouver, ISO, and other styles
20

Tufan, Emrah. "Context Based Interoperability To Support Infrastructure Management In Municipalities." Phd thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12612535/index.pdf.

Full text
Abstract:
Interoperability between the Geographic Information Systems (GIS) of different infrastructure companies is still a problem to be handled. Infrastructure companies deal with many operations as part of their daily routine, such as regular maintenance, and sometimes they deal with unexpected situations such as a malfunction due to a natural event like a flood or an earthquake. These situations may affect all companies, and the affected infrastructure companies respond to these effects. Responses may have consequences, and in order to model these consequences on GIS, the GISs must be able to share information, which brings the interoperability problem into the scene. The present research aims at finding an answer to the interoperability problem between the GISs of different companies by considering contextual information. During the study, geographical features are handled as the major concern and the interoperability problem is examined by targeting them. The model constructed in this research is based on ontology, and because the meaning of the terms in the ontology depends on the context, ontology-based context modeling is also used. In this research, a system implementation is done for two different GISs of two different infrastructure companies.
APA, Harvard, Vancouver, ISO, and other styles
21

Montenegro, Nuno Filipe Santos de Castro. "CityPlan." Doctoral thesis, Universidade de Lisboa. Faculdade de Arquitetura, 2015. http://hdl.handle.net/10400.5/9852.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Broda, Cynthia Marie. "Ontology and Knowledge Base of Brittle Deformation Microstructures for the San Andreas Fault Observatory at Depth (SAFOD) Core Samples." Digital Archive @ GSU, 2010. http://digitalarchive.gsu.edu/geosciences_theses/26.

Full text
Abstract:
The quest to answer fundamental questions and solve complex problems is a principal tenet of Earth science. The pursuit of scientific knowledge has generated profuse research, resulting in a plethora of information-rich resources. This phenomenon offers great potential for scientific discovery. However, a deficiency in information connectivity and processing standards has become evident. This deficiency has resulted in a demand for tools to facilitate and process this upsurge in information. This ontology project is an answer to the demand for information processing tools. The primary purpose of this domain-specific ontology and knowledge base is to organize, connect, and correlate research data related to brittle deformation microstructures. This semantically enabled ontology may be queried to return not only asserted information, but inferred knowledge that may not be evident. In addition, its standardized development in OWL-DL (Web Ontology Language-Description Logic) allows the potential for sharing and reuse among other geologic science communities.
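The asserted-versus-inferred distinction mentioned in the abstract can be illustrated with a small OWL-DL example. The sketch below uses owlready2 and its bundled HermiT reasoner (which requires Java); the class and property names are hypothetical stand-ins, not the actual vocabulary of the brittle deformation microstructure ontology.

```python
# Minimal sketch: a defined class lets a DL reasoner infer knowledge that was
# never asserted. Names are hypothetical; HermiT requires a Java runtime.
from owlready2 import get_ontology, Thing, ObjectProperty, sync_reasoner

onto = get_ontology("http://example.org/microstructures.owl")

with onto:
    class Sample(Thing): pass
    class Microstructure(Thing): pass
    class Fracture(Microstructure): pass
    class exhibits(ObjectProperty):
        domain = [Sample]
        range = [Microstructure]
    # Defined class: any sample exhibiting some fracture counts as deformed.
    class DeformedSample(Sample):
        equivalent_to = [Sample & exhibits.some(Fracture)]

    # Asserted facts only: s17 is a Sample that exhibits fracture f1.
    f1 = Fracture("f1")
    s17 = Sample("s17", exhibits=[f1])

sync_reasoner()  # classify with HermiT

print(DeformedSample in s17.is_a)  # True: inferred, never asserted
```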
APA, Harvard, Vancouver, ISO, and other styles
23

Snead, Brian Johnson. "The Morphic Orator: Transmogrified Delivery on the Audio-Enabled Web." Digital Archive @ GSU, 2008. http://digitalarchive.gsu.edu/english_theses/49.

Full text
Abstract:
Audio is an effective but often overlooked component of World Wide Web delivery. Of the nearly twenty billion web pages estimated to exist, statistically few use sound. Those few using sound often use it poorly and with hardly any regard to theoretical and rhetorical issues. This thesis is an examination of the uses of audio on the World Wide Web, specifically focusing on how that use could be informed by current and historical rhetorical theory. A theoretical methodology is applied to suggest the concepts and disciplines required to make online audio more meaningful and useful. The thesis argues for the connection between the Web and the modern orator, its embodiment, its place in sound reproduction technology, and awareness of the limitations placed on it by design and convention.
APA, Harvard, Vancouver, ISO, and other styles
24

Williams, Rewa Colette. "Patterns Of 4th Graders' Literacy Events In Web Page Development." [Tampa, Fla.] : University of South Florida, 2003. http://purl.fcla.edu/fcla/etd/SFE0000203.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Buys, Nelia. "An interactive, multimedia, web-based program to develop proficiency in specific reading skills for English first-year university students : an empirical study." Thesis, Link to the online version, 2004. http://hdl.handle.net/10019/2935.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Hatcher, Alexandra M. "From the Internet to the streets| Occupy Wall Street, the Internet, and activism." Thesis, Northern Arizona University, 2013. http://pqdtopen.proquest.com/#viewpdf?dispub=1537772.

Full text
Abstract:

In September of 2011 protestors filled the streets of New York City’s Wall Street Financial District as part of the social movement known as Occupy Wall Street. Prior to their protests in the streets, Occupy Wall Street was a movement that originated and spread online through various social media such as Facebook, YouTube, Twitter, and interactive webpages. The strategy of using Internet communication as a tool for activism is not new. Social movements since the 1990s have utilized the Internet.

The growing use of Web 2.0 technologies in our everyday lives is a topic that is not yet fully understood or researched by anthropologists, nor is its potential for ethnographic research fully realized. This thesis addresses both of these points by presenting a case study of how, as anthropologists, we can collect data from both the online and in-person presences of a group.

This thesis focuses on the social movement Occupy Wall Street because of its beginnings and continuing activity online. In-person data on the Occupy Wall Street movement were collected at Occupy movements in Flint, Michigan and New York City, New York using traditional ethnographic methods such as interviews and participant observation. Online data were collected using computer scripts (programs that automate computer tasks) that recursively downloaded websites onto my personal, locally owned hard drive. Once the online data were collected, I also used computer scripts to filter through the data and locate phenomena on the websites on which I had chosen to focus. By analyzing both online and in-person data I am able to gain a more holistic view and new ways of understanding social movements.

APA, Harvard, Vancouver, ISO, and other styles
27

Bezi, Nicole Allison. "Exploring creative writing in the middle school classroom via the effective use of multimedia." CSUSB ScholarWorks, 2005. https://scholarworks.lib.csusb.edu/etd-project/2800.

Full text
Abstract:
The purpose of this project is to develop a website by which students can improve their understanding of literary elements. This project will aid the students in completing some research as part of the initial stages of the WebQuest, to help them better understand the importance of literary elements.
APA, Harvard, Vancouver, ISO, and other styles
28

Baturay, Meltem Huri. "Effects Of Web-based Multimedia Annotated Vocabulary Learning In Context Model On Foreign Language Vocabulary Retention Of Intermediate Level English Language Learners." PhD thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/3/12608905/index.pdf.

Full text
Abstract:
The aim of this study was to investigate the effects of a web-based multimedia annotated vocabulary learning in context model, delivered with spaced repetitions, on the vocabulary retention of intermediate level English language learners. The study encompassed two main phases: the development of the material and its implementation. In WEBVOCLE, which stands for web-based vocabulary learning material, the contextual presentation of vocabulary was enriched with an audible online dictionary, pictures and animations; target words were repeated by the learners with interactive exercises, such as gap-filling, cloze and multiple-choice tests, games and puzzles, in 'spaced repetitions'. In the study both qualitative and quantitative data were gathered through attitude questionnaires, checklists, interviews, focus group interviews and vocabulary retention tests. The qualitative data were analyzed according to qualitative data analysis techniques and the quantitative data were analyzed using the SPSS statistics software. Feedback obtained from the learners demonstrated that they not only developed a positive attitude toward English vocabulary learning but also increased their retention level of the target vocabulary through spaced repetitions.
APA, Harvard, Vancouver, ISO, and other styles
29

Angsuchotmetee, Chinnapong. "Un framework de traitement semantic d'événement dans les réseaux des capteurs multimedias." Thesis, Pau, 2017. http://www.theses.fr/2017PAUU3034/document.

Full text
Abstract:
The dramatic advancement of low-cost hardware technology, wireless communications, and digital electronics has fostered the development of multifunctional (wireless) Multimedia Sensor Networks (MSNs). These are composed of interconnected devices able to ubiquitously sense multimedia content (video, image, audio, etc.) from the environment. Thanks to their interesting features, MSNs have gained increasing attention in recent years from both academic and industrial sectors and have been adopted in a wide range of application domains (such as smart home, smart office, and smart city, to mention a few). One of the advantages of adopting MSNs is that data gathered from related sensors contain rich semantic information (in comparison with using solely scalar sensors), which allows complex events to be detected and copes better with application domain requirements. However, modeling and detecting events in MSNs remain difficult tasks, because translating all gathered multimedia data into events is not straightforward. In this thesis, a full-fledged framework for processing complex events in MSNs is proposed to avoid hard-coded algorithms. The framework is called the Complex Event Modeling and Detection (CEMiD) framework. Its core components are: MSSN-Onto, a newly proposed ontology for modeling MSNs; CEMiD-Language, an original language for modeling multimedia sensor networks and the events to be detected; and GST-CEMiD, a semantic pipelining-based complex event processing engine. The CEMiD framework helps users model their own sensor network infrastructure and the events to be detected through the CEMiD language. The detection engine takes the models provided by users and initiates an event detection pipeline that extracts multimedia data features, translates semantic information, and interprets it into events automatically. Our framework is validated by means of prototyping and simulations. The results show that it can properly detect complex multimedia events in a high-workload scenario (with an average detection latency of less than one second).
APA, Harvard, Vancouver, ISO, and other styles
30

Sazonau, Viachaslau. "General terminology induction in description logics." Thesis, University of Manchester, 2017. https://www.research.manchester.ac.uk/portal/en/theses/general-terminology-induction-in-description-logics(63142865-d610-4041-84fa-764af1759554).html.

Full text
Abstract:
In computer science, an ontology is a machine-processable representation of knowledge about some domain. Ontologies are encoded in ontology languages, such as the Web Ontology Language (OWL) based on Description Logics (DLs). An ontology is a set of logical statements, called axioms. Some axioms make universal statements, e.g. all fathers are men, while others record data, i.e. facts about specific individuals, e.g. Bob is a father. A set of universal statements is called TBox, as it encodes terminology, i.e. schema-level conceptual relationships, and a set of facts is called ABox, as it encodes instance-level assertions. Ontologies are extensively developed and widely used in domains such as biology and medicine. Manual engineering of a TBox is a difficult task that includes modelling conceptual relationships of the domain and encoding those relationships in the ontology language, e.g. OWL. Hence, it requires the knowledge of domain experts and skills of ontology engineers combined together. In order to assist engineering of TBoxes and potentially automate it, acquisition (or induction) of axioms from data has attracted research attention and is usually called Ontology Learning (OL). This thesis investigates the problem of OL from general principles. We formulate it as General Terminology Induction that aims at acquiring general, expressive TBox axioms (called general terminology) from data. The thesis addresses and investigates in depth two main questions: how to rigorously evaluate the quality of general TBox axioms and how to efficiently construct them. We design an approach for General Terminology Induction and implement it in an algorithm called DL-Miner. We extensively evaluate DL-Miner, compare it with other approaches, and run case studies together with domain experts to gain insight into its potential applications. The thesis should be of interest to ontology developers seeking automated means to facilitate building or enriching ontologies. In addition, as our experiments show, DL-Miner can deliver valuable insights into the data, i.e. can be useful for data analysis and debugging.
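In description logic notation, the abstract's own examples of the two kinds of axioms can be written compactly: a TBox axiom stating that all fathers are men, and an ABox assertion recording that Bob is a father.

```latex
% TBox (terminological) axiom: every father is a man
\[ \mathit{Father} \sqsubseteq \mathit{Man} \]
% ABox (assertional) axiom: the individual Bob is a father
\[ \mathit{Father}(\mathit{Bob}) \]
```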
APA, Harvard, Vancouver, ISO, and other styles
31

Mousavi, Seyyed Abbas. "Development and Validation of a Multimedia Computer Package for the Assessment of Oral Proficiency of Adult ESL Learners: Implications for Score Comparability." Thesis, Griffith University, 2008. http://hdl.handle.net/10072/365987.

Full text
Abstract:
This thesis is about the conceptualization, design, development, trial, and validation of a multimedia package for the computer-based administration of an interview in testing the general English language proficiency of adult ESL learners. This research is significant at both theoretical and practical levels. Theoretically, it fills a gap in the comparability studies of computer-based tests and conventional face-to-face interviews. It also sheds light on the usability and validity of a new mode of presentation which treats computerized, test-driven language production as an aspect of target language use in assessing performance and interpreting test scores. This study employs a quantitative and qualitative survey to investigate the interaction between test-takers and computer-delivered tests. It explores the effects of test-taker characteristics (such as age, gender, language background, and computer familiarity) on test performance and draws upon qualitative feedback provided by the examinees in interpreting test usefulness and substantiating validity generalizations. At a practical level, this project contributes to the language testing industry by capturing the potential of the computer and digital media for developing tests and tasks and introducing a new set of innovative tasks for the assessment of speaking. It further formulates a practical process model for future test developers. The end product of this research is a working prototype of a multimedia language testing instrument using video and audio to present the tasks, which may be used as an entry/exit, gate-keeping, accreditation, certification, or placement mechanism. Along with the substantial findings about the comparability of computer-based tests with face-to-face interviews, this study provides a set of practical guidelines for researchers who embark on the design and development of computer-based language tests. Given the rate of innovation in digital media, natural language processing and voice recognition technology, the present era must be considered a transitional one and the future is difficult to predict. This thesis, therefore, concludes with two principal suggestions regarding further research at conceptual and practical levels. First, due to the complexity of the nature of human-machine interaction, researchers in language testing (particularly speaking tests) are advised to exercise caution in validity generalizations, because modifications in the delivery mode can result in changes in the quality and nature of the task and, as a consequence, the quality of the speaking performance. Second, this study was a small-scale prototype and a working example of the use of digital video in oral testing. The results showed that, for large-scale test development projects, language testing professionals need to utilize the services of a team of information technology experts in developing tests of speaking proficiency with a view to increasing the number and variety of tasks as well as enhancing the security and usability of the test.
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Languages and Linguistics
Full Text
APA, Harvard, Vancouver, ISO, and other styles
32

Pujolà, Joan-Tomas̀. "CALL for help : a study of the use of help facilities and language learning strategies in the context of a Web-based multimedia CALL program." Thesis, University of Edinburgh, 2001. http://hdl.handle.net/1842/30661.

Full text
Abstract:
This thesis presents a description of how learners use the help facilities of a Web-based multimedia CALL program designed to foster second language learners' reading and listening skills and language learning strategies. A review of relevant literature in three main areas of Applied Linguistics research and theory is first presented: Language Learning Strategies, Second Language Pedagogy and CALL. These have a direct influence on both the program and the research design. A description of the program then follows: ImPRESSions(c) is a Web-based multimedia program intended for self-study to help learners of English develop their comprehension skills for news in newspapers, on television and on the radio. The targeted users are learners of English from pre-intermediate to advanced levels. The prototype for the research study targeted Spanish learners. The author of this thesis first designed the program using the HTML and JavaScript programming languages, building on the capabilities of the computer to interlink different media. The need for various help facilities and options was then assessed, and these facilities were designed. Help facilities are understood here as the resources of the program which assist the learner in performing language learning tasks. The help facilities in ImPRESSions are divided between Assistance, those that provide learner help for comprehension of the texts, and Guidance, those that are related to the tasks and provide help for performing them. Thus we could state that Assistance facilities are related to cognitive strategies whereas Guidance facilities are more related to metacognitive strategies. This framework helped to conceptualise the design of the program and enabled the researcher to explore how different learners use the help facilities presented. This study investigates the variation of strategy use taking into account students' level and their perceived language learning strategy use. In essence this is an exploratory study of strategy use in a CALL environment. 22 adult Spanish students worked with the program for four sessions. In these sessions learners' computer moves were tracked by online video screen recording, and retrospective questions were audio-recorded after they worked on different written and aural texts.
APA, Harvard, Vancouver, ISO, and other styles
33

Cregan, Anne Computer Science &amp Engineering Faculty of Engineering UNSW. "Weaving the semantic web: Contributions and insights." Publisher:University of New South Wales. Computer Science & Engineering, 2008. http://handle.unsw.edu.au/1959.4/42605.

Full text
Abstract:
The semantic web aims to make the meaning of data on the web explicit and machine processable. Harking back to Leibniz in its vision, it imagines a world of interlinked information that computers `understand' and `know' how to process based on its meaning. Spearheaded by the World Wide Web Consortium, ontology languages OWL and RDF form the core of the current technical offerings. RDF has successfully enabled the construction of virtually unlimited webs of data, whilst OWL gives the ability to express complex relationships between RDF data triples. However, the formal semantics of these languages limit themselves to that aspect of meaning that can be captured by mechanical inference rules, leaving many open questions as to other aspects of meaning and how they might be made machine processable. The Semantic Web has faced a number of problems that are addressed by the included publications. Its germination within academia, and logical semantics has seen it struggle to become familiar, accessible and implementable for the general IT population, so an overview of semantic technologies is provided. Faced with competing `semantic' languages, such as the ISO's Topic Map standards, a method for building ISO-compliant Topic Maps in the OWL DL language has been provided, enabling them to take advantage of the more mature OWL language and tools. Supplementation with rules is needed to deal with many real-world scenarios and this is explored as a practical exercise. The available syntaxes for OWL have hindered domain experts in ontology building, so a natural language syntax for OWL designed for use by non-logicians is offered and compared with similar offerings. In recent years, proliferation of ontologies has resulted in far more than are needed in any given domain space, so a mechanism is proposed to facilitate the reuse of existing ontologies by giving contextual information and leveraging social factors to encourage wider adoption of common ontologies and achieve interoperability. Lastly, the question of meaning is addressed in relation to the need to define one's terms and to ground one's symbols by anchoring them effectively, ultimately providing the foundation for evolving a `Pragmatic Web' of action.
APA, Harvard, Vancouver, ISO, and other styles
34

Goncalves, Joao Rafael Landeiro De sousa. "Impact analysis in description logic ontologies." Thesis, University of Manchester, 2014. https://www.research.manchester.ac.uk/portal/en/theses/impact-analysis-in-description-logic-ontologies(87ee476a-c690-44b5-bd4c-b9afbdf7a0a0).html.

Full text
Abstract:
With the growing popularity of the Web Ontology Language (OWL) as a logic-based ontology language, as well as advancements in the language itself, the need for more sophisticated and up-to-date ontology engineering services increases as well. While, for instance, there is active focus on new reasoners and optimisations, other services fall short of advancing at the same rate (it suffices to compare the number of freely-available reasoners with ontology editors). In particular, very little is understood about how ontologies evolve over time, and how reasoners’ performance varies as the input changes. Given the evolving nature of ontologies, detecting and presenting changes (via a so-called diff) between them is an essential engineering service, especially for version control systems or to support change analysis. In this thesis we address the diff problem for description logic (DL) based ontologies, specifically OWL 2 DL ontologies based on the SROIQ DL. The outcomes are novel algorithms employing both syntactic and semantic techniques to, firstly, detect axiom changes, and what terms had their meaning affected between ontologies, secondly, categorise their impact (for example, determining that an axiom is a stronger version of another), and finally, align changes appropriately, i.e., align source and target of axiom changes (so the stronger axiom with the weaker one, from our example), and axioms with the terms they affect. Subsequently, we present a theory of reasoner performance heterogeneity, based on field observations related to reasoner performance variability phenomena. Our hypothesis is that there exist two kinds of performance behaviour: an ontology/reasoner combination can be performance-homogeneous or performance-heterogeneous. Finally, we verify that performance-heterogeneous reasoner/ontology combinations contain small, performance-degrading sets of axioms, which we call hot spots. We devise a performance hot spot finding technique, and show that hot spots provide a promising basis for engineering efficient reasoners.
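The diff developed in the thesis works at the level of axioms and their impact; as a point of reference only, the sketch below shows the naive syntactic starting point, computing the triples added and removed between two ontology versions with rdflib. The file names are placeholders.

```python
# A naive, purely syntactic diff between two ontology versions, shown only to
# make the notion of "detecting changes" concrete. The thesis goes far beyond
# this: it aligns changed axioms, categorises their impact (e.g. strengthening)
# and identifies affected terms, none of which this sketch attempts.
from rdflib import Graph

def syntactic_diff(old_path: str, new_path: str):
    old, new = Graph(), Graph()
    old.parse(old_path)          # serialisation format is guessed from the extension
    new.parse(new_path)
    added   = set(new) - set(old)   # triples only present in the new version
    removed = set(old) - set(new)   # triples only present in the old version
    return added, removed

if __name__ == "__main__":
    # Hypothetical file names, used purely for illustration.
    added, removed = syntactic_diff("ontology_v1.ttl", "ontology_v2.ttl")
    print(f"{len(added)} triples added, {len(removed)} triples removed")
```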
APA, Harvard, Vancouver, ISO, and other styles
35

Polowinski, Jan. "Ontology-Driven, Guided Visualisation Supporting Explicit and Composable Mappings." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-229908.

Full text
Abstract:
Data masses on the World Wide Web can hardly be managed by humans or machines. One option is the formal description and linking of data sources using Semantic Web and Linked Data technologies. Ontologies written in standardised languages foster the sharing and linking of data as they provide a means to formally define concepts and relations between these concepts. A second option is visualisation. The visual representation allows humans to perceive information more directly, using the highly developed visual sense. Relatively few efforts have been made on combining both options, although the formality and rich semantics of ontological data make it an ideal candidate for visualisation. Advanced visualisation design systems support the visualisation of tabular, typically statistical data. However, visualisations of ontological data still have to be created manually, since automated solutions are often limited to generic lists or node-link diagrams. Also, the semantics of ontological data are not exploited for guiding users through visualisation tasks. Finally, once a good visualisation setting has been created, it cannot easily be reused and shared. Trying to tackle these problems, we had to answer how to define composable and shareable mappings from ontological data to visual means and how to guide the visual mapping of ontological data. We present an approach that allows for the guided visualisation of ontological data, the creation of effective graphics and the reuse of visualisation settings. Instead of generic graphics, we aim at tailor-made graphics, produced using the whole palette of visual means in a flexible, bottom-up approach. It not only allows for visualising ontologies, but uses ontologies to guide users when visualising data and to drive the visualisation process at various places: First, as a rich source of information on data characteristics, second, as a means to formally describe the vocabulary for building abstract graphics, and third, as a knowledge base of facts on visualisation. This is why we call our approach ontology-driven. We suggest generating an Abstract Visual Model (AVM) to represent and »synthesise« a graphic following a role-based approach, inspired by the one used by J. v. Engelhardt for the analysis of graphics. It consists of graphic objects and relations formalised in the Visualisation Ontology (VISO). A mappings model, based on the declarative RDFS/OWL Visualisation Language (RVL), determines a set of transformations from the domain data to the AVM. RVL allows for composable visual mappings that can be shared and reused across platforms. To guide the user, for example, we discourage the construction of mappings that are suboptimal according to an effectiveness ranking formalised in the fact base and suggest more effective mappings instead. The guidance process is flexible, since it is based on exchangeable rules. VISO, RVL and the AVM are additional contributions of this thesis. Further, we initially analysed the state of the art in visualisation and RDF-presentation comparing 10 approaches by 29 criteria. Our approach is unique because it combines ontology-driven guidance with composable visual mappings. Finally, we compare three prototypes covering the essential parts of our approach to show its feasibility. We show how the mapping process can be supported by tools displaying warning messages for non-optimal visual mappings, e.g., by considering relation characteristics such as »symmetry«. 
In a constructive evaluation, we challenge both the RVL language and the latest prototype by trying to regenerate sketches of graphics we created manually during the analysis. We demonstrate how graphics can be varied and how complex mappings can be composed from simple ones. Two thirds of the sketches can be almost or completely specified and half of them can be almost or completely implemented.
Datenmassen im World Wide Web können kaum von Menschen oder Maschinen erfasst werden. Eine Option ist die formale Beschreibung und Verknüpfung von Datenquellen mit Semantic-Web- und Linked-Data-Technologien. Ontologien, in standardisierten Sprachen geschrieben, befördern das Teilen und Verknüpfen von Daten, da sie ein Mittel zur formalen Definition von Konzepten und Beziehungen zwischen diesen Konzepten darstellen. Eine zweite Option ist die Visualisierung. Die visuelle Repräsentation ermöglicht es dem Menschen, Informationen direkter wahrzunehmen, indem er seinen hochentwickelten Sehsinn verwendet. Relativ wenige Anstrengungen wurden unternommen, um beide Optionen zu kombinieren, obwohl die Formalität und die reichhaltige Semantik ontologische Daten zu einem idealen Kandidaten für die Visualisierung machen. Visualisierungsdesignsysteme unterstützen Nutzer bei der Visualisierung von tabellarischen, typischerweise statistischen Daten. Visualisierungen ontologischer Daten jedoch müssen noch manuell erstellt werden, da automatisierte Lösungen häufig auf generische Listendarstellungen oder Knoten-Kanten-Diagramme beschränkt sind. Auch die Semantik der ontologischen Daten wird nicht ausgenutzt, um Benutzer durch Visualisierungsaufgaben zu führen. Einmal erstellte Visualisierungseinstellungen können nicht einfach wiederverwendet und geteilt werden. Um diese Probleme zu lösen, mussten wir eine Antwort darauf finden, wie die Definition komponierbarer und wiederverwendbarer Abbildungen von ontologischen Daten auf visuelle Mittel geschehen könnte und wie Nutzer bei dieser Abbildung geführt werden könnten. Wir stellen einen Ansatz vor, der die geführte Visualisierung von ontologischen Daten, die Erstellung effektiver Grafiken und die Wiederverwendung von Visualisierungseinstellungen ermöglicht. Statt auf generische Grafiken zielt der Ansatz auf maßgeschneiderte Grafiken ab, die mit der gesamten Palette visueller Mittel in einem flexiblen Bottom-Up-Ansatz erstellt werden. Er erlaubt nicht nur die Visualisierung von Ontologien, sondern verwendet auch Ontologien, um Benutzer bei der Visualisierung von Daten zu führen und den Visualisierungsprozess an verschiedenen Stellen zu steuern: Erstens als eine reichhaltige Informationsquelle zu Datencharakteristiken, zweitens als Mittel zur formalen Beschreibung des Vokabulars für den Aufbau von abstrakten Grafiken und drittens als Wissensbasis von Visualisierungsfakten. Deshalb nennen wir unseren Ansatz ontologie-getrieben. Wir schlagen vor, ein Abstract Visual Model (AVM) zu generieren, um eine Grafik rollenbasiert zu synthetisieren, angelehnt an einen Ansatz der von J. v. Engelhardt verwendet wird, um Grafiken zu analysieren. Das AVM besteht aus grafischen Objekten und Relationen, die in der Visualisation Ontology (VISO) formalisiert sind. Ein Mapping-Modell, das auf der deklarativen RDFS/OWL Visualisation Language (RVL) basiert, bestimmt eine Menge von Transformationen von den Quelldaten zum AVM. RVL ermöglicht zusammensetzbare »Mappings«, visuelle Abbildungen, die über Plattformen hinweg geteilt und wiederverwendet werden können. Um den Benutzer zu führen, bewerten wir Mappings anhand eines in der Faktenbasis formalisierten Effektivitätsrankings und schlagen ggf. effektivere Mappings vor. Der Beratungsprozess ist flexibel, da er auf austauschbaren Regeln basiert. VISO, RVL und das AVM sind weitere Beiträge dieser Arbeit. 
Darüber hinaus analysieren wir zunächst den Stand der Technik in der Visualisierung und RDF-Präsentation, indem wir 10 Ansätze nach 29 Kriterien vergleichen. Unser Ansatz ist einzigartig, da er eine ontologie-getriebene Nutzerführung mit komponierbaren visuellen Mappings vereint. Schließlich vergleichen wir drei Prototypen, welche die wesentlichen Teile unseres Ansatzes umsetzen, um seine Machbarkeit zu zeigen. Wir zeigen, wie der Mapping-Prozess durch Tools unterstützt werden kann, die Warnmeldungen für nicht optimale visuelle Abbildungen anzeigen, z. B. durch Berücksichtigung von Charakteristiken der Relationen wie »Symmetrie«. In einer konstruktiven Evaluation fordern wir sowohl die RVL-Sprache als auch den neuesten Prototyp heraus, indem wir versuchen Skizzen von Grafiken umzusetzen, die wir während der Analyse manuell erstellt haben. Wir zeigen, wie Grafiken variiert werden können und komplexe Mappings aus einfachen zusammengesetzt werden können. Zwei Drittel der Skizzen können fast vollständig oder vollständig spezifiziert werden und die Hälfte kann fast vollständig oder vollständig umgesetzt werden
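RVL itself is an RDFS/OWL vocabulary and the AVM a formal model, neither of which is reproduced in this abstract; the sketch below merely illustrates, in plain Python, the general idea of declarative and composable visual mappings that turn data records into a list of abstract graphic objects. The property and attribute names are invented for the example and do not come from VISO, RVL or the AVM.

```python
# Illustrative only: a declarative "visual mapping" in the spirit of RVL,
# turning data records into an abstract-visual-model-like list of graphic
# objects. The property and attribute names are invented for the example.
POPULATION_TO_SIZE = {"property": "population", "visual_attribute": "size",
                      "scale": lambda v: max(4, v / 1_000_000)}
CONTINENT_TO_COLOR = {"property": "continent", "visual_attribute": "color",
                      "scale": {"Europe": "steelblue", "Asia": "darkorange"}.get}

def apply_mappings(records, mappings):
    avm = []  # one "graphic object" (a dict of visual attributes) per record
    for rec in records:
        obj = {"shape": "circle", "label": rec["name"]}
        for m in mappings:                     # mappings compose: each adds one attribute
            obj[m["visual_attribute"]] = m["scale"](rec[m["property"]])
        avm.append(obj)
    return avm

data = [{"name": "France", "population": 68_000_000, "continent": "Europe"},
        {"name": "Japan",  "population": 125_000_000, "continent": "Asia"}]
print(apply_mappings(data, [POPULATION_TO_SIZE, CONTINENT_TO_COLOR]))
```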
APA, Harvard, Vancouver, ISO, and other styles
36

Sauvinet, James A. "Semantic Services for Enterprise Data Exchange." ScholarWorks@UNO, 2013. http://scholarworks.uno.edu/td/1783.

Full text
Abstract:
Data exchange between different information systems is a complex issue. Each system, designed for a specific purpose, is defined using the vocabulary of a specific business. While Web services allow interoperation and data communication between multiple systems, the clients of the services must understand the vocabulary of the targeted data resources to select services or to construct queries. In this thesis we explore an ontology-based approach to facilitate clients’ queries in the vocabulary of the clients’ own domain, and to automate the query processing. A governmental inter-department data query process has been used to illustrate the capability of the semantic approach.
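The governmental data sources from the case study are not available here, so the sketch below only illustrates the underlying idea with invented vocabularies: the provider publishes data in its own terms, an equivalence axiom bridges the client's vocabulary to the provider's, and OWL RL inference lets a query phrased in the client's terms succeed.

```python
# Illustrative sketch of an ontology-based vocabulary bridge: the provider
# publishes data in its own terms (gov:Citizen), the client queries in its own
# terms (biz:Customer), and an equivalence axiom plus inference lets the client
# query succeed. All names are invented for the example.
from rdflib import Graph, Namespace, RDF
from rdflib.namespace import OWL
import owlrl

GOV = Namespace("http://example.org/gov#")
BIZ = Namespace("http://example.org/biz#")

g = Graph()
g.add((GOV.alice, RDF.type, GOV.Citizen))                # provider data
g.add((GOV.Citizen, OWL.equivalentClass, BIZ.Customer))  # bridging axiom

owlrl.DeductiveClosure(owlrl.OWLRL_Semantics).expand(g)  # materialise entailments

CLIENT_QUERY = """
PREFIX biz: <http://example.org/biz#>
SELECT ?x WHERE { ?x a biz:Customer }
"""
for row in g.query(CLIENT_QUERY):
    print("Found in client vocabulary:", row.x)
```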
APA, Harvard, Vancouver, ISO, and other styles
37

Alves, Rachel Cristina Vesú [UNESP]. "Web semântica: uma análise focada no uso de metadados." Universidade Estadual Paulista (UNESP), 2005. http://hdl.handle.net/11449/93690.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Atualmente a nossa sociedade, denominada sociedade da informação, vem sendo caracterizada pela valorização da informação, pelo uso cada vez maior de tecnologias de informação e comunicação e pelo crescimento exponencial dos recursos informacionais disponibilizados em diversos ambientes, principalmente na Web. Essa realidade trouxe algumas mudanças no acesso automatizado às informações. Se por um lado temos uma grande quantidade de recursos informacionais disponibilizados, por outro temos como conseqüência problemas relacionados à busca, localização, acesso e recuperação dessas informações em ambientes digitais. Nesse contexto, o problema que originou essa pesquisa está relacionado com a dificuldade na busca e na recuperação de recursos informacionais digitais na Web e a ausência de tratamento adequado para a representação informacional desses recursos. O maior desafio para a comunidade científica no momento está na identificação de padrões e métodos de representação da informação, ou seja, na construção de formas de representação do recurso informacional de maneira a proporcionar sua busca e recuperação de modo mais eficiente. Assim, a proposição apontada nesse trabalho como solução do problema refere-se ao estabelecimento da Web Semântica e a aplicação de padrões de metadados para a representação da informação, pois são consideradas como iniciativas importantes para proporcionar uma melhor estruturação e representação dos recursos informacionais em ambientes digitais. Com uma metodologia baseada na análise exploratória e descritiva do tema a partir da literatura disponível, apresenta-se uma análise da Web Semântica como uma nova proposta para organização dos recursos informacionais na Web e as ferramentas tecnológicas que permeiam sua construção, com enfoque no uso de metadados como elemento fundamental para proporcionar... .
Nowadays our society, known as the information society, has been characterized by the valorization of information, by the increasing use of information and communication technologies, and by the exponential growth of the informational resources available in various environments, mainly on the Web. This reality has brought some changes to automated access to information. If, on the one hand, we have a large amount of informational resources available, on the other we have, as a consequence, problems related to the search, localization, access and retrieval of this information in digital environments. In this context, the problem that originated this research is related to the difficulty of searching for and retrieving digital informational resources on the Web, and the lack of adequate treatment for the informational representation of these resources. At the moment, the biggest challenge for the scientific community is to identify standards and methods of representation of information, that is, to construct forms of representation of the informational resource that allow its search and retrieval in a more efficient manner. Thus, the proposition put forward in this work as a solution to the problem refers to the establishment of the Semantic Web and the application of metadata standards to the representation of information, since these are considered important initiatives for providing a better structuring and representation of informational resources in digital environments. With a methodology based on the exploratory and descriptive analysis of the theme, starting from the available literature, an analysis of the Semantic Web is presented as a new proposal for the organization of informational resources on the Web, together with the technological tools that permeate its construction, focusing on the use of metadata as the fundamental element for providing a better representation of the informational resources available on the Web, and their.
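As a small illustration of the kind of metadata-based description the abstract argues for, the following hedged sketch records a few Dublin Core properties for a web resource using rdflib; the resource URI and values are made up for the example and do not come from the thesis.

```python
# A small, hedged example of representing descriptive metadata for a web
# resource with the Dublin Core vocabulary, serialised as RDF (Turtle).
# The resource URI and the values are invented for illustration.
from rdflib import Graph, URIRef, Literal
from rdflib.namespace import DCTERMS

doc = URIRef("http://example.org/thesis/semantic-web-metadata")
g = Graph()
g.bind("dcterms", DCTERMS)

g.add((doc, DCTERMS.title, Literal("Web semântica: uma análise focada no uso de metadados", lang="pt")))
g.add((doc, DCTERMS.creator, Literal("Rachel Cristina Vesú Alves")))
g.add((doc, DCTERMS.date, Literal("2005")))
g.add((doc, DCTERMS.subject, Literal("Semantic Web; metadata")))

print(g.serialize(format="turtle"))
```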
APA, Harvard, Vancouver, ISO, and other styles
38

Jayawardhana, Udaya Kumara. "An ontology-based framework for formulating spatio-temporal influenza (flu) outbreaks from twitter." Bowling Green State University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1465941275.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Alves, Rachel Cristina Vesu. "Web semântica : uma análise focada no uso de metadados /." Marília : [s.n.], 2005. http://hdl.handle.net/11449/93690.

Full text
Abstract:
Advisor: Plácida Leopoldina Ventura Amorim da Costa Santos
Committee member: Silvana Ap. B. Gregório Vidotti
Committee member: Edberto Ferneda
Resumo: Atualmente a nossa sociedade, denominada sociedade da informação, vem sendo caracterizada pela valorização da informação, pelo uso cada vez maior de tecnologias de informação e comunicação e pelo crescimento exponencial dos recursos informacionais disponibilizados em diversos ambientes, principalmente na Web. Essa realidade trouxe algumas mudanças no acesso automatizado às informações. Se por um lado temos uma grande quantidade de recursos informacionais disponibilizados, por outro temos como conseqüência problemas relacionados à busca, localização, acesso e recuperação dessas informações em ambientes digitais. Nesse contexto, o problema que originou essa pesquisa está relacionado com a dificuldade na busca e na recuperação de recursos informacionais digitais na Web e a ausência de tratamento adequado para a representação informacional desses recursos. O maior desafio para a comunidade científica no momento está na identificação de padrões e métodos de representação da informação, ou seja, na construção de formas de representação do recurso informacional de maneira a proporcionar sua busca e recuperação de modo mais eficiente. Assim, a proposição apontada nesse trabalho como solução do problema refere-se ao estabelecimento da Web Semântica e a aplicação de padrões de metadados para a representação da informação, pois são consideradas como iniciativas importantes para proporcionar uma melhor estruturação e representação dos recursos informacionais em ambientes digitais. Com uma metodologia baseada na análise exploratória e descritiva do tema a partir da literatura disponível, apresenta-se uma análise da Web Semântica como uma nova proposta para organização dos recursos informacionais na Web e as ferramentas tecnológicas que permeiam sua construção, com enfoque no uso de metadados como elemento fundamental para proporcionar... (Resumo completo, clicar acesso eletrônico abaixo).
Abstract: Nowadays our society, known as the information society, has been characterized by the valorization of information, by the increasing use of information and communication technologies, and by the exponential growth of the informational resources available in various environments, mainly on the Web. This reality has brought some changes to automated access to information. If, on the one hand, we have a large amount of informational resources available, on the other we have, as a consequence, problems related to the search, localization, access and retrieval of this information in digital environments. In this context, the problem that originated this research is related to the difficulty of searching for and retrieving digital informational resources on the Web, and the lack of adequate treatment for the informational representation of these resources. At the moment, the biggest challenge for the scientific community is to identify standards and methods of representation of information, that is, to construct forms of representation of the informational resource that allow its search and retrieval in a more efficient manner. Thus, the proposition put forward in this work as a solution to the problem refers to the establishment of the Semantic Web and the application of metadata standards to the representation of information, since these are considered important initiatives for providing a better structuring and representation of informational resources in digital environments. With a methodology based on the exploratory and descriptive analysis of the theme, starting from the available literature, an analysis of the Semantic Web is presented as a new proposal for the organization of informational resources on the Web, together with the technological tools that permeate its construction, focusing on the use of metadata as the fundamental element for providing a better representation of the informational resources available on the Web, and their.
Master's
APA, Harvard, Vancouver, ISO, and other styles
40

Magableh, Murad. "A generic architecture for semantic enhanced tagging systems." Thesis, De Montfort University, 2011. http://hdl.handle.net/2086/5172.

Full text
Abstract:
The Social Web, or Web 2.0, has recently gained popularity because of its low cost and ease of use. Social tagging sites (e.g. Flickr and YouTube) offer new principles for end-users to publish and classify their content (data). Tagging systems contain free keywords (tags) generated by end-users to annotate and categorise data. Lack of semantics is the main drawback in social tagging due to the use of unstructured vocabulary. Therefore, tagging systems suffer from shortcomings such as low precision, lack of collocation, synonymy, multilinguality, and use of shorthands. Consequently, relevant contents are not visible, and thus not retrievable, while searching in tag-based systems. On the other hand, the Semantic Web, so-called Web 3.0, provides a rich semantic infrastructure. Ontologies are the key enabling technology for the Semantic Web. Ontologies can be integrated with the Social Web to overcome the lack of semantics in tagging systems. In the work presented in this thesis, we build an architecture to address a number of tagging-system drawbacks. In particular, we make use of the controlled vocabularies presented by ontologies to improve information retrieval in tag-based systems. Based on the tags provided by the end-users, we introduce the idea of adding “system tags” from semantic, as well as social, resources. The “system tags” are comprehensive and wide-ranging in comparison with the limited “user tags”. The system tags are used to fill the gap between the user tags and the search terms used for searching in the tag-based systems. We restricted the scope of our work to tackle the following tagging-system shortcomings:
- The lack of semantic relations between user tags and search terms (e.g. synonymy, hypernymy),
- The lack of translation mediums between user tags and search terms (multilinguality),
- The lack of context to define the emergent shorthand user tags.
To address the first shortcoming, we use the WordNet ontology as a semantic lingual resource from which system tags are extracted. For the second shortcoming, we use the MultiWordNet ontology to recognise the cross-language linkages between different languages. Finally, to address the third shortcoming, we use tag clusters that are obtained from the Social Web to create a context for defining the meaning of shorthand tags. A prototype for our architecture was implemented. In the prototype system, we built our own database to host videos that we imported from a real tag-based system (YouTube). The user tags associated with these videos were also imported and stored in the database. For each user tag, our algorithm adds a number of system tags that come either from semantic ontologies (WordNet or MultiWordNet), or from tag clusters that are imported from the Flickr website. Therefore, each system tag added to annotate the imported videos has a relationship with one of the user tags on that video. The relationship might be one of the following: synonymy, hypernymy, similar term, related term, translation, or clustering relation. To evaluate the suitability of our proposed system tags, we developed an online environment where participants submit search terms and retrieve two groups of videos to be evaluated. Each group is produced from one distinct type of tags: user tags or system tags. The videos in the two groups are produced from the same database and are evaluated by the same participants in order to have a consistent and reliable evaluation.
Since user tags are what real tag-based systems are searched with today, we take their efficiency as the reference against which we compare the efficiency of the new system tags. In order to compare the relevance of each group of retrieved videos to the search terms, we carried out a statistical analysis. According to the Wilcoxon signed-rank test, there was no significant difference between using system tags and using user tags. The findings revealed that using the system tags in search is as efficient as using the user tags; the two types of tags produce different results, but at the same level of relevance to the submitted search terms.
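As an illustration of how such system tags can be derived, the hedged sketch below expands a single user tag with synonyms and hypernyms using NLTK's WordNet interface. It stands in for, but is not, the thesis's own implementation (which also uses MultiWordNet for translations and Flickr tag clusters for shorthand tags); nltk.download('wordnet') must be run once beforehand.

```python
# Hedged sketch of deriving "system tags" from a user tag via WordNet, in the
# spirit of the architecture described above. Uses NLTK's WordNet interface,
# not MultiWordNet and not the thesis's own code.
from nltk.corpus import wordnet as wn

def system_tags(user_tag: str) -> set:
    tags = set()
    for synset in wn.synsets(user_tag):
        # Synonyms: other lemmas of the same synset.
        tags.update(l.replace("_", " ") for l in synset.lemma_names())
        # Broader terms: lemmas of the direct hypernyms.
        for hyper in synset.hypernyms():
            tags.update(l.replace("_", " ") for l in hyper.lemma_names())
    tags.discard(user_tag)
    return tags

print(sorted(system_tags("car")))
# e.g. ['auto', 'automobile', 'motor vehicle', 'railcar', ...]
```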
APA, Harvard, Vancouver, ISO, and other styles
41

Ribeiro, Junior Luiz Carlos. "OntoLP: construção semi-automática de ontologias a partir de textos da lingua portuguesa." Universidade do Vale do Rio do Sinos, 2008. http://www.repositorio.jesuita.org.br/handle/UNISINOS/2258.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
O crescimento da Internet provoca a necessidade de estruturas mais consistentes de representação do conhecimento disponível na rede. Nesse contexto, a Web Semântica e as ontologias aparecem como resposta ao problema. Contudo, a construção de ontologias é extremamente custosa, o que estimula diversas pesquisas visando automatizar a tarefa. Em sua maioria, essas pesquisas partem do conhecimento disponível em textos. As ferramentas e métodos são, nesse caso, dependentes de idioma. Para que todos tenham acesso aos benefícios da utilização de ontologias em larga escala, estudos específicos para cada língua são necessários. Nesse sentido, pouco foi feito para o Português. Este trabalho procura avançar nas questões concernentes à tarefa para a língua portuguesa, abrangendo o desenvolvimento e a avaliação de métodos para a construção automática de ontologias a partir de textos. Além disso, foi desenvolvida uma ferramenta de auxílio à construção de ontologias para a língua portuguesa integrada ao ambiente largamente
The evolution of the internet calls for more sophisticated knowledge management techniques. In this context, the Semantic Web and ontologies are being developed as a way to solve this problem. Ontology learning is, however, a difficult and expensive task. Research on ontology learning is usually based on natural language texts, so language-specific tools have to be developed. There is not much research that specifically considers the Portuguese language. This work advances these questions, considering Portuguese in particular. The development and evaluation of methods are presented and discussed. In addition, the developed methods were integrated as a plug-in for the widely used ontology editor Protégé.
APA, Harvard, Vancouver, ISO, and other styles
42

Havlena, Jan. "Distribuovaný informační systém založený na sémantických technologiích." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2010. http://www.nusl.cz/ntk/nusl-237211.

Full text
Abstract:
This master's thesis deals with the design of a distributed information system in which data distribution is based on semantic technologies. The project analyzes semantic web technologies with a focus on information exchange between information systems and the related concepts, mainly ontologies, ontology languages and the Resource Description Framework. Furthermore, it describes a proposed ontology used to describe the data exchanged between the systems, as well as the technologies used to implement the distributed information system, the most important of which are Java Server Faces and Sesame.
APA, Harvard, Vancouver, ISO, and other styles
43

Boban, Vesin. "Personalizacija procesa elektronskog učenja u tutorskom sistemu primenom tehnologija semantičkog veba." Phd thesis, Univerzitet u Novom Sadu, Prirodno-matematički fakultet u Novom Sadu, 2014. https://www.cris.uns.ac.rs/record.jsf?recordId=87677&source=NDLTD&language=en.

Full text
Abstract:
Predmet istraživanja disertacije obuhvata  realizaciju  opšteg  modela tutorskogsistema za elektronsko učenje iz različitih domena  primenom  tehnologija semantičkogveba i primena tog modela za  izgradnju tutorskog sistema za učenje programskog jezika Java sa elementima personalizacije.Cilj disertacije je implementacija  i predstavljanje  svih  elemenata  tutorskog sistema zaučenje programskog jezika Java  pomodu tehnologija semantičkog veba. Ovaj procesobuhvata kreiranje  osnovnih  gradivnih  ontologija  kao i  pravila za izvođenje konkretnihakcija kojim se postiže personalizacija nastavnog materijala.
The subject of the dissertation includes the implementation of a conceptual model of a tutoring system for e-learning in different domains using semantic web technologies, and the application of that model in the design of a tutoring system for personalised learning of the Java programming language. The goal of the dissertation is the implementation and presentation of all elements of the tutoring system for learning the Java programming language using semantic web technologies. This process includes the creation of the fundamental building blocks of ontologies and rules for carrying out the actions for adaptation of teaching materials.
APA, Harvard, Vancouver, ISO, and other styles
44

Borrego, Luís Carlos Moreira. "Criação de uma ontologia e respectiva povoação a partir do processamento de relatórios médicos." Master's thesis, Universidade de Évora, 2010. http://hdl.handle.net/10174/19490.

Full text
Abstract:
A evolução tecnológica tem provocado uma evolução na medicina, através de sistemas computacionais voltados para o armazenamento, captura e disponibilização de informações médicas. Os relatórios médicos são, na maior parte das vezes, guardados num texto livre não estruturado e escritos com vocabulário proprietário, podendo ocasionar falhas de interpretação. Através das linguagens da Web Semântica, é possível utilizar ontologias como modo de estruturar e padronizar a informação dos relatórios médicos, adicionando-lhe anotações semânticas. A informação contida nos relatórios pode desta forma ser publicada na Web, permitindo às máquinas o processamento automático da informação. No entanto, o processo de criação de ontologias é bastante complexo, pois existe o problema de criar uma ontologia que não cubra todo o domínio pretendido. Este trabalho incide na criação de uma ontologia e respectiva povoação, através de técnicas de PLN e Aprendizagem Automática que permitem extrair a informação dos relatórios médicos. Foi desenvolvida uma aplicação, que permite ao utilizador converter relatórios do formato digital para o formato OWL.
ABSTRACT: Technological evolution has driven an evolution in medicine through computer systems aimed at storing, capturing and making medical information available. Medical reports are, most of the time, stored as unstructured free text and written with a proprietary vocabulary, which may lead to misinterpretation. Through Semantic Web languages, it is possible to use ontologies as a way to structure and standardize the information in medical reports by adding semantic annotations. The information contained in the reports can, by these means, be published on the Web, allowing machines to process it automatically. However, the process of creating ontologies is very complex, since there is a risk of creating an ontology that does not cover the whole intended domain. This work focuses on the creation of an ontology and its population through NLP and Machine Learning techniques that extract information from medical reports. An application was developed which allows the user to convert reports from digital format to OWL format.
APA, Harvard, Vancouver, ISO, and other styles
45

Valencia, García Rafael. "Un entorno para la extracción incremental de conocimiento desde texto en lenguaje natural." Doctoral thesis, Universidad de Murcia, 2005. http://hdl.handle.net/10803/10922.

Full text
Abstract:
La creciente necesidad de enriquecer la Web con grandes cantidades de ontologías que capturen el conocimiento del dominio ha generado multitud de estudios e investigaciones en metodologías para poder salvar el cuello de botella que supone la construcción manual de ontologías. Esta necesidad ha conducido a definir una nueva línea de investigación denominada Ontology Learning. La solución que proponemos en este trabajo se basa en el desarrollo de un nuevo entorno para extracción incremental de conocimiento desde texto en lenguaje natural. Se ha adoptado el punto de vista de la ingeniería ontológica, de modo que el conocimiento adquirido se representa por medio de ontologías. Este trabajo aporta un nuevo método para la construcción semiautomática de ontologías a partir de textos en lenguaje natural que no sólo se centra en la obtención de jerarquías de conceptos, sino que tiene en cuenta también un amplio conjunto de relaciones semánticas entre conceptos.
The need for enriching the Web with large amounts of ontologies has increased. This need for domain models has generated several studies and research on methodologies capable of overcoming the bottleneck provoked by the manual construction of ontologies. It has led to a new research area aimed at obtaining semiautomatic methods to build ontologies, called Ontology Learning. The solution proposed in this work is based on the development of a new environment for incremental knowledge extraction from natural language texts. For this purpose, an ontological engineering perspective has been adopted. Hence, the knowledge acquired through the developed environment is represented by means of ontologies. This work presents a new method for the semiautomatic construction of ontologies from natural language texts. The method is not only based on obtaining hierarchies of concepts, but also takes into account a broad set of semantic relations between concepts.
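The environment described in the thesis is considerably richer, but one classic ingredient of ontology learning from text can be shown in a few lines: a Hearst-style lexico-syntactic pattern ("X such as Y and Z") that proposes candidate is-a relations. The sketch below is purely illustrative and is not the thesis's method.

```python
# A tiny illustration of one classic ontology-learning ingredient: a
# Hearst-style pattern ("X such as Y, Z and W") used to propose candidate
# taxonomic (is-a) relations from raw text. Not the thesis's actual method.
import re

PATTERN = re.compile(r"(\w+(?: \w+)?)\s+such as\s+((?:\w+(?:, | and )?)+)", re.IGNORECASE)

def candidate_isa_relations(text: str):
    relations = []
    for match in PATTERN.finditer(text):
        hypernym = match.group(1).lower()
        hyponyms = re.split(r", | and ", match.group(2))
        relations.extend((h.strip().lower(), hypernym) for h in hyponyms if h.strip())
    return relations

sample = "Wetland birds such as herons, egrets and ducks were observed."
print(candidate_isa_relations(sample))
# [('herons', 'wetland birds'), ('egrets', 'wetland birds'), ('ducks', 'wetland birds')]
```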
APA, Harvard, Vancouver, ISO, and other styles
46

Robisch, Katherine A. "Search Engine Optimization: A New Literacy Practice." University of Dayton / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1394533925.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

El, Ghosh Mirna. "Automatisation du raisonnement et décision juridiques basés sur les ontologies." Thesis, Normandie, 2018. http://www.theses.fr/2018NORMIR16/document.

Full text
Abstract:
Le but essentiel de la thèse est de développer une ontologie juridique bien fondée pour l'utiliser dans le raisonnement à base des règles. Pour cela, une approche middle-out, collaborative et modulaire est proposée ou des ontologies fondationnelles et core ont été réutilisées pour simplifier le développement de l'ontologie. L’ontologie résultante est adoptée dans une approche homogène a base des ontologies pour formaliser la liste des règles juridiques du code pénal en utilisant le langage logique SWRL
This thesis analyses the problem of building well-founded domain ontologies for reasoning and decision support purposes. Specifically, it discusses the building of legal ontologies for rule-based reasoning. In fact, building well-founded legal domain ontologies is considered a difficult and complex process due to the complexity of the legal domain and the lack of methodologies. For this purpose, a novel middle-out approach called MIROCL is proposed. MIROCL aims to enhance the building process of well-founded domain ontologies by incorporating several support processes such as reuse, modularization, integration and learning. MIROCL is a novel modular middle-out approach for building well-founded domain ontologies. By applying the modularization process, a multi-layered modular architecture of the ontology is outlined. Thus, the intended ontology will be composed of four modules located at different abstraction levels. These modules are, from the most abstract to the most specific, UOM (Upper Ontology Module), COM (Core Ontology Module), DOM (Domain Ontology Module) and DSOM (Domain-Specific Ontology Module). The middle-out strategy is composed of two complementary strategies: top-down and bottom-up. The top-down strategy applies ODCM (Ontology-Driven Conceptual Modeling) and ontology reuse, starting from the most abstract categories, for building UOM and COM. Meanwhile, the bottom-up strategy starts from textual resources, applying an ontology learning process in order to extract the most specific categories for building DOM and DSOM. After building the different modules, an integration process is performed for composing the whole ontology. The MIROCL approach is applied in the criminal domain for modeling legal norms. A well-founded legal domain ontology called CriMOnto (Criminal Modular Ontology) is obtained. CriMOnto has then been used for modeling the procedural aspect of the legal norms through integration with a logic rule language (SWRL). Finally, a hybrid approach is applied for building a rule-based system called CORBS. This system is grounded on CriMOnto and the set of formalized rules.
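CriMOnto and its rule base are not reproduced in this abstract, so the following minimal sketch only shows one way to pair an OWL ontology with a SWRL rule, using owlready2 and invented class names; it is an assumption-laden stand-in, not the actual CORBS system (and running a reasoner over such rules additionally requires Java and Pellet).

```python
# Minimal sketch of pairing an OWL ontology with a SWRL rule, as the abstract
# describes for CriMOnto. The ontology IRI, classes and the rule below are
# invented stand-ins, not the actual CriMOnto content.
from owlready2 import get_ontology, Thing, ObjectProperty, Imp

onto = get_ontology("http://example.org/crim-demo.owl")

with onto:
    class Person(Thing): pass
    class Offence(Thing): pass
    class Offender(Person): pass
    class committedBy(ObjectProperty):
        domain = [Offence]
        range = [Person]

    # SWRL: anyone who committed an offence is classified as an Offender.
    rule = Imp()
    rule.set_as_rule("Offence(?o), committedBy(?o, ?p) -> Offender(?p)")

print(rule)
```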
APA, Harvard, Vancouver, ISO, and other styles
48

Licheri, Davide. "ANSwER-Sistema informativo ambientale basato su ontologia e logica Fuzzy." Doctoral thesis, Università degli studi di Trieste, 2008. http://hdl.handle.net/10077/2657.

Full text
Abstract:
2006/2007
The data sources relevant to the monitoring of the avifauna of the Friulian lagoons, foreseen by the ANSER project (INTERREG IIIA Adriatic Cross-Border Programme), are fed by three different census methodologies, a capture/marking methodology and a radio-telemetry tracking methodology. The broad spectrum of resulting information has been brought together in an environmental information system that 1) translates all data into the Ecological Metadata Language (EML) following a single object-oriented syntactic model, 2) enriches it semantically with a domain ontology based on Description Logic, and 3) analyses its predictive performance, validating the theoretical model against field data through a fuzzy inference system. The most important results can be described as follows: 1) the complete elimination of heterogeneity between datasets made it possible to atomise the tuples, reifying the contacts between operator and animal at a given place into a single super-class of events in time; 2) the OWL-DL ontology consistently determined the membership of species in the guilds considered and their attraction towards the different available habitats; 3) the fuzzy model revealed that information about the habitat and the water depth at the monitoring point influences the predicted abundance of the different guilds examined in different ways.
XX Ciclo
1969
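The abstract above mentions a fuzzy inference system for validating the predictive model; as a purely illustrative aside, the sketch below shows a triangular membership function and a single invented rule of the kind such a system might contain. The thresholds and the rule do not come from the ANSER work.

```python
# Purely illustrative: a triangular fuzzy membership function and one invented
# rule relating water depth at a monitoring point to expected guild abundance.
def triangular(x: float, a: float, b: float, c: float) -> float:
    """Membership degree of x in a triangular fuzzy set (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def shallow(depth_cm: float) -> float:
    return triangular(depth_cm, 0, 10, 40)       # "shallow water"

def dabbling_duck_abundance(depth_cm: float, mudflat_cover: float) -> float:
    # Rule: IF water is shallow AND mudflat cover is high THEN abundance is high.
    # min() is the usual fuzzy AND; the result is a degree in [0, 1].
    return min(shallow(depth_cm), mudflat_cover)

print(dabbling_duck_abundance(depth_cm=15, mudflat_cover=0.8))
```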
APA, Harvard, Vancouver, ISO, and other styles
49

Піпіч, Артем Андрійович. "Семантична хореографія REST-сервісів." Master's thesis, Київ, 2018. https://ela.kpi.ua/handle/123456789/23432.

Full text
Abstract:
Робота виконана на 82 сторінках, містить 5 ілюстрацій, 24 таблиці. При підготовці використовувалась література з 37 джерел. Актуальність теми На сьогоднішній день з’являється все більше систем, в яких використовується велика кількість веб-сервісів. Для організації їх ефективної взаємодії використовуються різні підходи, проте більшість з них мають свої переваги та недоліки, які часто стають критичними для певної ситуації. Саме тому дослідження семантичної хореографії REST-сервісів як одного з можливих підходів до такої організації є актуальним. Використання даного підходу може дати суттєві результати при застосуванні в системах, в складі яких значну роль відіграють веб-сервіси. Мета та задачі дослідження Метою даної роботи є дослідження семантичної хореографії REST-сервісів а також способів використання даного підходу в системах, в складі яких значну роль відіграють веб-сервіси. Рішення поставлених завдань та досягнуті результати В роботі розглянуто засоби, за допомогою яких семантична хореографія REST-сервісів може бути ефективно реалізована. Запропоновано реалізацію такого підходу на основі обміну сервісами метаданими про запит через брокер повідомлень. Було реалізовано описаний підхід, в реалізації застосовано патерн проектування Сага для ефективної обробки помилок, пов’язаних в тому числі і з комунікацією між сервісами. Реалізацію було протестовано на багатьох тестових сценаріях; зроблено висновки щодо особливостей даного підходу, його переваг та можливостей покращення запропонованої реалізації. 5 Об’єкт досліджень Системи з REST-сервісами. Предмет досліджень Взаємодія REST-сервісів із застосування хореографії, що реалізована за допомогою семантичних засобів. Методи досліджень Для розв’язання зазначеної проблеми в роботі застосовано методи синтезу та аналізу, системного порівняння та аналізу, композиції логічних структур даних та логічного узагальнення отриманих результатів. Наукова новизна Наукова новизна роботи полягає у реалізації нового підходу до семантичної хореографії REST-сервісів, який засновано на використанні брокеру повідомлень та патерні проектування Saga. Практичне значення одержаних результатів Отримані результати реалізації підходу можуть використовуватись в системах, в складі яких значну роль відіграють веб-сервіси. Представлений приклад реалізації демонструє, що отримані результати можуть бути використані для реалізації системи медичного обслуговування.
Work of 82 pages, containing 5 figures and 24 tables; 37 sources were consulted.
Topicality. Today there are more and more systems that use a large number of web services. Various approaches are used to organize their effective interaction; however, most of them have advantages and disadvantages that often become critical in a particular situation. That is why research into the semantic choreography of REST services, as one possible approach to such an organization, is topical. Applying this approach can yield significant results in systems in which web services play a significant role.
Purpose. The aim of this work is to investigate the semantic choreography of REST services, as well as how to apply this approach in systems in which web services play a significant role.
Solution. In this work we examined the means by which the semantic choreography of REST services can be effectively implemented. An implementation of this approach, based on services exchanging request metadata through a message broker, is proposed. The described approach was implemented, and the Saga design pattern was applied for efficient handling of errors, including those related to communication between services. The implementation was tested on many test scenarios; conclusions were drawn regarding the specifics of this approach, its advantages, and possibilities for improving the proposed implementation.
Object of research. Systems with REST services.
Subject of research. The interaction of REST services using choreography realized by semantic means.
Research methods. To solve the stated problem, methods of synthesis and analysis, systematic comparison and analysis, composition of logical data structures, and logical generalization of the obtained results are applied.
Scientific novelty. The scientific novelty of the work lies in the implementation of a new approach to the semantic choreography of REST services, based on the use of a message broker and the Saga design pattern.
Practical value. The obtained results of implementing the approach can be used in systems in which web services play a significant role. The presented example implementation shows that the results can be used to implement a health-care system.
Работа выполнена на 82 страницах, содержит 5 иллюстраций, 24 таблицы. При подготовке использовалась литература из 37 источников. Актуальность темы На сегодняшний день появляется все больше систем, в которых используется большое количество веб-сервисов. Для организации их эффективного взаимодействия используются различные подходы, однако большинство из них имеют свои преимущества и недостатки, которые часто становятся критическими для определенной ситуации. Именно поэтому исследования семантической хореографии REST-сервисов как одного из возможных подходов к такой организации является актуальным. Использование данного подхода может дать существенные результаты при применении в системах, в составе которых значительную роль играют веб-сервисы. Цель и задачи исследования Целью данной работы является исследование семантической хореографии REST-сервисов, а также способов применения данного подхода в системах, в составе которых значительную роль играют веб-сервисы. Решение поставленных задач и достигнутых результатах В работе рассмотрены средства, с помощью которых семантическая хореография REST-сервисов может быть эффективно реализована. Предложена реализация такого подхода на основе обмена сервисами метаданными о запросе через брокер сообщений. Было реализовано описанный подход, в реализации применен паттерн проектирования Сага для эффективной обработки ошибок, связанных в том числе и с коммуникацией между сервисами. Реализацию было протестировано на многих тестовых сценариях; сделаны выводы относительно особенностей данного подхода, его преимуществ и возможностей улучшения предложенной реализации. 7 Объект исследований Системы с REST-сервисами. Предмет исследований Взаимодействие REST-сервисов с применением хореографии, реализованной с помощью семантических средств. Методы исследований Для решения указанной проблемы в работе применены методы синтеза и анализа, системного сравнения и анализа, композиции логических структур данных и логического обобщения полученных результатов. Научная новизна Научная новизна работы заключается в реализации нового подхода к семантической хореографии REST-сервисов, основанной на использовании брокера сообщений и паттерне проектирования Saga. Практическое значение полученных результатов Полученные результаты реализации подхода могут использоваться в системах, в составе которых значительную роль играют веб-сервисы. Представленный пример реализации показывает, что полученные результаты могут быть использованы для реализации системы медицинского обслуживания.
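As a purely illustrative aside, the sketch below shows choreography-style coordination between REST-like services through a message broker, with a Saga-style compensation when a step fails. The in-memory broker and the medical-care-flavoured topic names are invented stand-ins; they are not the thesis's implementation.

```python
# Illustrative sketch of choreography between services via a message broker,
# with a Saga-style compensation when a step fails. The broker is a trivial
# in-memory pub/sub stand-in; topic and service names are invented.
from collections import defaultdict

class Broker:
    def __init__(self):
        self.handlers = defaultdict(list)
    def subscribe(self, topic, handler):
        self.handlers[topic].append(handler)
    def publish(self, topic, message):
        for handler in self.handlers[topic]:
            handler(message)

broker = Broker()

# "Appointment service": reacts to new patient registrations.
def book_appointment(msg):
    if not msg.get("doctor_available", True):
        broker.publish("appointment.failed", msg)      # triggers compensation
    else:
        broker.publish("appointment.booked", msg)

# "Registration service": compensates (cancels registration) if booking fails.
def cancel_registration(msg):
    print(f"Saga compensation: cancelling registration of {msg['patient']}")

broker.subscribe("patient.registered", book_appointment)
broker.subscribe("appointment.failed", cancel_registration)
broker.subscribe("appointment.booked", lambda m: print(f"Booked for {m['patient']}"))

broker.publish("patient.registered", {"patient": "Ann", "doctor_available": True})
broker.publish("patient.registered", {"patient": "Bob", "doctor_available": False})
```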
APA, Harvard, Vancouver, ISO, and other styles
50

Krystal, Ingman. "Nonverbal communication on the net: Mitigating misunderstanding through the manipulation of text and use of images in computer-mediated communication." University of Findlay / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=findlay1557507788275899.

Full text
APA, Harvard, Vancouver, ISO, and other styles