Dissertations / Theses on the topic "Knowledge based data management"




Consult the 50 best dissertations for your research on the topic "Knowledge based data management".

Next to every source in the list of references there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication in PDF format and read its abstract online whenever it is available in the metadata.

Explore dissertations on a wide variety of disciplines and organize your bibliography correctly.

1. Andersson, Kent. "Knowledge Technology Applications for Knowledge Management". Doctoral thesis, Uppsala: Institutionen för informationsvetenskap, Univ. [distributör], 2000. http://w3.ub.uu.se/fulltext/91-506-1437-1.pdf.

2. Maimone, Anthony. "Data and Knowledge Acquisition in Case-based Reasoning for Diabetes Management". Ohio University / OhioLINK, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1156200718.

3. Adam, Elena Daniela. "Knowledge management cloud-based solutions in small enterprises". Thesis, Internationella Handelshögskolan, Högskolan i Jönköping, IHH, Informatik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-28275.

Abstract
Purpose – The aim of this study is to determine if adopting cloud-based knowledge management is a viable way forward for small enterprises and to investigate the main factors that might facilitate or inhibit these companies in adopting such solutions. Design/Methodology/Approach – In order to understand the main factors that could influence the adoption of a cloud-based knowledge management solution in small enterprises, I used a qualitative research approach, based on four semi-structured interviews with four small companies from Romania. Findings – The results of the study suggest that implementing knowledge management in the cloud is particularly beneficial for small enterprises, as a lower investment in IT infrastructure can create a competitive advantage and help them implement knowledge management activities as a strategic resource. Moreover, the study suggests that relative advantage, compatibility and technology readiness will influence companies in moving their knowledge to the cloud. Also, the study reveals that companies which did not adopt such a solution had already established systems for managing knowledge and failed to realize its benefits, did not perceive it as needed, had a low level of awareness, or cited security and uncertainty reasons.

4. Goasdoué, François. "Knowledge Representation meets DataBases for the sake of ontology-based data management". Habilitation à diriger des recherches, Université Paris Sud - Paris XI, 2012. http://tel.archives-ouvertes.fr/tel-00759274.

Abstract
This Habilitation thesis outlines my research activities carried out as an Associate Professor at Univ. Paris-Sud and Inria Saclay Île-de-France. During this period, from 2003 to early 2012, my work was - and still is - at the interface between Knowledge Representation and Databases. I have mainly focused on ontology-based data management using the Semantic Web data models promoted by W3C: the Resource Description Framework (RDF) and the Web Ontology Language (OWL). In particular, my work has covered (i) the design, (ii) the optimization, and (iii) the decentralization of ontology-based data management techniques in these data models. This thesis briefly reports on the results obtained along these lines of research.
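
As a concrete glimpse of what ontology-based data management with the W3C data models mentioned above looks like in code, here is a minimal sketch using the rdflib Python library on invented example data (a generic illustration, not code from the thesis):

    from rdflib import Graph, Namespace, RDF

    # Invented namespace and triples, purely for illustration.
    EX = Namespace("http://example.org/")
    g = Graph()
    g.add((EX.alice, RDF.type, EX.Researcher))
    g.add((EX.alice, EX.worksOn, EX.OntologyBasedDataManagement))

    # Answer a SPARQL query over the small RDF graph.
    results = g.query("""
        PREFIX ex: <http://example.org/>
        SELECT ?who WHERE { ?who ex:worksOn ex:OntologyBasedDataManagement . }
    """)
    for row in results:
        print(row.who)  # http://example.org/alice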

5. Kairouz, Joseph. "Patient data management system medical knowledge-base evaluation". Thesis, McGill University, 1996. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=24060.

Abstract
The purpose of this thesis is to evaluate the medical data management expert system at the Pediatric Intensive Care Unit of the Montreal Children's Hospital. The objective of this study is to provide a systematic method to evaluate and progressively improve the knowledge embedded in the medical expert system.
Following a literature survey on evaluation techniques and architectures of existing expert systems, an overview of the Patient Data Management System hardware and software components is presented. The design of the Expert Monitoring System is elaborated. Following its installation in the Intensive Care Unit, the performance of the Expert Monitoring System, operating on real vital-sign data, is evaluated and corrections are formulated. A progressive evaluation technique, a new methodology for evaluating an expert-system knowledge base, is proposed for subsequent corrections and evaluations of the Expert Monitoring System.
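
As a toy illustration of the kind of knowledge-base evaluation described here, the following Python sketch scores a single alarm rule against clinician-labelled vital-sign records; the threshold and data are invented:

    # Hypothetical alarm rule: flag tachycardia.
    def alarm(record):
        return record["heart_rate"] > 160  # invented threshold

    # Invented evaluation set: (record, clinician_says_alarm)
    cases = [
        ({"heart_rate": 175}, True),
        ({"heart_rate": 150}, False),
        ({"heart_rate": 165}, False),  # rule disagrees: a false positive
        ({"heart_rate": 180}, True),
    ]

    tp = sum(1 for r, y in cases if alarm(r) and y)
    fp = sum(1 for r, y in cases if alarm(r) and not y)
    fn = sum(1 for r, y in cases if not alarm(r) and y)
    tn = sum(1 for r, y in cases if not alarm(r) and not y)

    print("sensitivity:", tp / (tp + fn))  # 1.0
    print("specificity:", tn / (tn + fp))  # 0.5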

6. MILIA, GABRIELE. "Cloud-based solutions supporting data and knowledge integration in bioinformatics". Doctoral thesis, Università degli Studi di Cagliari, 2015. http://hdl.handle.net/11584/266783.

Abstract
In recent years, computer advances have changed the way science progresses and have boosted studies in silico; as a result, the concept of "scientific research" in bioinformatics has quickly changed, shifting from the idea of a local laboratory activity towards Web applications and databases provided over the network as services. Thus, biologists have become among the largest beneficiaries of information technologies, reaching and surpassing the traditional ICT users who operate in the field of so-called "hard science" (i.e., physics, chemistry, and mathematics). Nevertheless, this evolution has to deal with several aspects (including data deluge, data integration, and scientific collaboration, just to cite a few) and presents new challenges related to the proposal of innovative approaches in the wide scenario of emergent ICT solutions. This thesis aims at facing these challenges in the context of three case studies, each devoted to a specific open issue and proposing solutions in line with recent advances in computer science. The first case study focuses on the task of unearthing and integrating information from different web resources, each having its own organization, terminology and data formats, in order to provide users with a flexible environment for accessing the above resources and smartly exploring their content. The study explores the potential of the cloud paradigm as an enabling technology to severely curtail issues associated with the scalability and performance of applications devoted to supporting the above task. Specifically, it presents Biocloud Search EnGene (BSE), a cloud-based application which allows for searching and integrating biological information made available by public large-scale genomic repositories. BSE is publicly available at: http://biocloud-unica.appspot.com/. The second case study addresses scientific collaboration on the Web, with special focus on building a semantic network, where team members, adequately supported by easy access to biomedical ontologies, define and enrich network nodes with annotations derived from available ontologies. The study presents a cloud-based application called Collaborative Workspaces in Biomedicine (COWB) which supports users in the construction of the semantic network by organizing, retrieving and creating connections between contents of different types. Public and private workspaces provide an accessible representation of the collective knowledge that is incrementally expanded. COWB is publicly available at: http://cowb-unica.appspot.com/. Finally, the third case study concerns knowledge extraction from very large datasets. The study investigates the performance of random forests in classifying microarray data. In particular, it faces the problem of reducing the contribution of trees whose nodes are populated by non-informative features. Experiments are presented and the results are analyzed in order to draw guidelines on how to reduce that contribution. With respect to the previously mentioned challenges, this thesis sets out to make two contributions, summarized as follows. First, the potential of cloud technologies has been evaluated for developing applications that support access to bioinformatics resources and collaboration, by improving awareness of users' contributions and fostering user interaction. Second, the positive impact of the decision support offered by random forests has been demonstrated as a way to tackle the curse of dimensionality effectively.
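
The third case study's theme, limiting the influence of non-informative features in random forests, can be loosely illustrated with scikit-learn on synthetic data; this is a generic feature-importance filter under invented settings, not the thesis's actual method:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for high-dimensional microarray data.
    X, y = make_classification(n_samples=200, n_features=500,
                               n_informative=20, random_state=0)
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

    rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(Xtr, ytr)
    print("all features:", rf.score(Xte, yte))

    # Keep only the most informative features and retrain.
    keep = rf.feature_importances_.argsort()[-50:]
    rf2 = RandomForestClassifier(n_estimators=300, random_state=0).fit(Xtr[:, keep], ytr)
    print("top-50 features:", rf2.score(Xte[:, keep], yte))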

7. White, Andrew Murray. "The application of knowledge-based techniques to constraint management in engineering databases". Thesis, Georgia Institute of Technology, 1987. http://hdl.handle.net/1853/16894.

8. Gebhardt, Johan Wilhelm Ludwig. "A comparative study of the business value of computer-based mapping tools in knowledge management". Thesis, Stellenbosch: Stellenbosch University, 2008. http://hdl.handle.net/10019.1/18151.

Abstract
Thesis (MBA)--Stellenbosch University, 2008.
In the past decade or two, companies started to realise that competitive advantage is achieved not only by optimising their business value chain, but also by managing the knowledge in the company. This led to the development of different knowledge management models and to millions of dollars being spent on knowledge management implementations across the world. Although there were huge successes, a large number of initiatives were spectacular failures, believed to be mainly caused by the linear method of capturing and presenting knowledge. Computer-based mapping tools are a new generation of personal computer (PC) based tools that allow people to present knowledge graphically. Since most research into computer-based mapping tools has focused on their educational use, the focus of this study is on the business use of these tools. A number of common, off-the-shelf computer-based mapping tools were evaluated to determine whether they can add business value. From the evaluation, a decision matrix was developed to assist knowledge workers in selecting the best tool for a specific application. The primary activities of the knowledge value chain model were investigated to select a series of business activities where the use of computer-based mapping tools could possibly generate more business value in the execution of the business activity. These activities were then measured against a set of criteria that was developed in order to evaluate the different computer-based mapping tools. It was found that the selected software applications could be clearly separated, based upon their theoretical and philosophical backgrounds, into concept mapping tools and mind mapping tools. It was further found that the possible business value that could be derived through the use of these tools depends more on the selection of the correct type of tool than on the selection of a specific software package. Lastly, it was found that concept mapping tools could be used across a broader spectrum of business activities. The research also reached the conclusion that the use of concept mapping tools will possibly add more value to a business than the use of mind mapping software.

9. Brooks, Brad Walton. "Automated Data Import and Revision Management in a Product Lifecycle Management Environment". Diss., Brigham Young University, 2009. http://contentdm.lib.byu.edu/ETD/image/etd3182.pdf.

10. Meng, Changping and 蒙昌平. "Discovering meta-paths in large knowledge bases". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2014. http://hdl.handle.net/10722/209504.

Abstract
A knowledge base, such as Yago or DBpedia, can be modeled as a large graph with nodes and edges annotated with class and relationship labels. Recent work has studied how to make use of these rich information sources. In particular, meta-paths, which represent sequences of node classes and edge types between two nodes in a knowledge base, have been proposed for such tasks as information retrieval, decision making, and product recommendation. Current methods assume meta-paths are found by domain experts. However, in a large and complex knowledge base, retrieving meta-paths manually can be tedious and difficult. We thus study how to discover meta-paths automatically. Specifically, users are asked to provide example pairs of nodes that exhibit high proximity. We then investigate how to generate meta-paths that can best explain the relationship between these node pairs. Since this problem is computationally intractable, we propose a greedy algorithm to select the most relevant meta-paths. We also present a data structure to enable efficient execution of this algorithm. We further incorporate hierarchical relationships among node classes in our solutions. Finally, we propose an effective similarity join algorithm in order to generate more node pairs using these meta-paths. Extensive experiments on real knowledge bases show that our approach captures important meta-paths in an efficient and scalable manner.
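
As an illustration of what a meta-path is, the following Python sketch enumerates class/edge label sequences connecting example node pairs in a toy typed graph and keeps the most frequent ones; the graph, labels and frequency-based scoring are invented, and the thesis's algorithm and data structures are considerably more sophisticated:

    from collections import Counter

    # Toy typed graph: node -> class label; labeled, undirected edges.
    node_class = {"a": "Author", "b": "Author", "p1": "Paper", "p2": "Paper", "v": "Venue"}
    edges = [("a", "writes", "p1"), ("b", "writes", "p2"),
             ("p1", "publishedIn", "v"), ("p2", "publishedIn", "v")]

    def neighbors(node):
        for u, lab, w in edges:
            if u == node:
                yield lab, w
            elif w == node:
                yield lab, u

    def meta_paths(src, dst, max_edges=4):
        """Enumerate meta-paths (class/edge label sequences) linking src to dst."""
        found, stack = [], [(src, [node_class[src]], 0)]
        while stack:
            node, labels, depth = stack.pop()
            if node == dst and depth > 0:
                found.append(tuple(labels))
            if depth < max_edges:
                for lab, nxt in neighbors(node):
                    stack.append((nxt, labels + [lab, node_class[nxt]], depth + 1))
        return found

    # Example pairs with high proximity; greedily keep the most frequent meta-paths.
    examples = [("a", "b")]
    counts = Counter(mp for s, t in examples for mp in meta_paths(s, t))
    for mp, n in counts.most_common(2):
        print(" -> ".join(mp), f"(explains {n} pair(s))")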

11. Antoine, Emilien. "Distributed data management with a declarative rule-based language webdamlog". PhD thesis, Université Paris Sud - Paris XI, 2013. http://tel.archives-ouvertes.fr/tel-00933808.

Abstract
Our goal is to enable a Web user to easily specify distributed data management tasks in place, i.e. without centralizing the data to a single provider. Our system is therefore not a replacement for Facebook, or any centralized system, but an alternative that allows users to launch their own peers on their machines, processing their own local personal data, and possibly collaborating with Web services. We introduce Webdamlog, a datalog-style language for managing distributed data and knowledge. The language extends datalog in a number of ways, notably with a novel feature, namely delegation, allowing peers to exchange not only facts but also rules. We present a user study that demonstrates the usability of the language. We describe a Webdamlog engine that extends a distributed datalog engine, namely Bud, with the support of delegation and of a number of other novelties of Webdamlog such as the possibility to have variables denoting peers or relations. We mention novel optimization techniques, notably one based on the provenance of facts and rules. We exhibit experiments that demonstrate that the rich features of Webdamlog can be supported at reasonable cost and that the engine scales to large volumes of data. Finally, we discuss the implementation of a Webdamlog peer system that provides an environment for the engine. In particular, a peer supports wrappers to exchange Webdamlog data with non-Webdamlog peers. We illustrate these peers by presenting a picture management application that we used for demonstration purposes.
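
To convey the flavour of exchanging rules as well as facts, here is a deliberately tiny Python simulation of datalog-style peers in which one peer ships a rule to another; the representation and behaviour are invented for illustration and are not actual Webdamlog syntax or semantics:

    class Peer:
        def __init__(self, name):
            self.name = name
            self.facts = set()   # e.g. ("photo", "sunset.jpg")
            self.rules = []      # (head_pred, body_pred) meaning body => head

        def delegate(self, other, rule):
            """Delegation: ship a rule, not just facts, to another peer."""
            other.rules.append(rule)

        def run(self):
            # Naive forward chaining over single-atom rules until fixpoint.
            changed = True
            while changed:
                changed = False
                for head, body in self.rules:
                    for pred, arg in list(self.facts):
                        if pred == body and (head, arg) not in self.facts:
                            self.facts.add((head, arg))
                            changed = True

    alice, bob = Peer("alice"), Peer("bob")
    bob.facts.add(("photo", "sunset.jpg"))
    # Alice asks Bob to publish every photo into the shared album:
    alice.delegate(bob, ("album", "photo"))
    bob.run()
    print(bob.facts)  # {('photo', 'sunset.jpg'), ('album', 'sunset.jpg')}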

12. Xie, Tian and 謝天. "Development of a XML-based distributed service architecture for product development in enterprise clusters". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2005. http://hub.hku.hk/bib/B30477165.

13. Červienka, Juraj. "Aplikace principů znalostního managementu ve vybrané firmě". Master's thesis, Vysoké učení technické v Brně, Fakulta podnikatelská, 2013. http://www.nusl.cz/ntk/nusl-223953.

Abstract
The thesis deals with knowledge management and its principles. The introduction addresses the theoretical basics of knowledge management and is followed by the practical part; the theory provides the starting point for the proposal and application of a system for the chosen company. The main aim of the practical part was to build an application for managing projects and a repository of the knowledge of the chosen company, with the goal of increasing work efficiency and improving access to information. The resulting application will be deployed in the company's operations.

14. Schuster, Alfons. "Supporting data analysis and the management of uncertainty in knowledge-based systems through information aggregation processes". Thesis, University of Ulster, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.264825.

15. Wang, Qing. "Intelligent Data Mining Techniques for Automatic Service Management". FIU Digital Commons, 2018. https://digitalcommons.fiu.edu/etd/3883.

Abstract
Today, as more and more industries enter the artificial intelligence era, business enterprises constantly explore innovative ways to expand their outreach and fulfill the high requirements from customers, with the purpose of gaining a competitive advantage in the marketplace. However, the success of a business relies heavily on its IT services. Value-creating activities of a business cannot be accomplished without solid and continuous delivery of IT services, especially in an increasingly intricate and specialized world. Driven by both the growing complexity of IT environments and rapidly changing business needs, service providers are urgently seeking intelligent data mining and machine learning techniques to build a cognitive "brain" for IT service management, capable of automatically understanding, reasoning and learning from operational data collected from human engineers and virtual engineers during IT service maintenance. The ultimate goal of IT service management optimization is to maximize the automation of IT routine procedures such as problem detection, determination, and resolution. However, fully automating the entire IT routine procedure without any human intervention is still a challenging task. In real IT systems, both step-wise resolution descriptions and scripted resolutions are often logged with their corresponding problematic incidents, and they typically contain abundant, valuable human domain knowledge. Hence, modeling, gathering and utilizing the domain knowledge from IT system maintenance logs plays a crucial role in IT service management optimization. To optimize IT service management from the perspective of intelligent data mining techniques, three research directions are identified and considered to be greatly helpful for automatic service management: (1) efficiently extract and organize the domain knowledge from IT system maintenance logs; (2) collect online and update the existing domain knowledge by interactively recommending possible resolutions; (3) automatically discover the latent relations among scripted resolutions and intelligently suggest proper scripted resolutions for IT problems. My dissertation addresses these challenges by designing and implementing a set of intelligent data-driven solutions, including (1) constructing a domain knowledge base for problem resolution inference; (2) recommending resolutions online in light of the explicit hierarchical resolution categories provided by domain experts; and (3) interactively recommending resolutions with the latent resolution relations learned through a collaborative filtering model.
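
As a rough sketch of the third direction, suggesting scripted resolutions via latent relations, the following performs minimal item-based collaborative filtering over an invented incident-resolution usage matrix; it is a generic technique sketch, not the dissertation's model:

    import numpy as np

    # usage[i, j] = how often resolution j fixed incident type i (invented data).
    usage = np.array([[4., 0., 1.],
                      [3., 1., 0.],
                      [0., 5., 2.]])

    # Cosine similarity between resolutions (columns).
    norms = np.linalg.norm(usage, axis=0)
    sim = (usage.T @ usage) / np.outer(norms, norms)

    def recommend(incident_type, k=2):
        """Score resolutions by similarity to those already used for this incident."""
        scores = sim @ usage[incident_type]
        return np.argsort(scores)[::-1][:k]

    print("suggested resolutions for incident type 0:", recommend(0))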

16. Marano, Federica. "Exploring formal models of linguistic data structuring. Enhanced solutions for knowledge management systems based on NLP applications". Doctoral thesis, Università degli studi di Salerno, 2012. http://hdl.handle.net/10556/349.

Abstract
The principal aim of this research is to describe to what extent formal models for linguistic data structuring are crucial in Natural Language Processing (NLP) applications. In this sense, we will pay particular attention to those Knowledge Management Systems (KMS) which are designed for the Internet, and also to the enhanced solutions they may require. In order to deal appropriately with these topics, we will describe how to achieve computational linguistics applications helpful to humans in establishing and maintaining an advantageous relationship with technologies, especially with those technologies which are based on or produce man-machine interactions in natural language. We will explore the positive relationship which may exist between well-structured Linguistic Resources (LR) and KMS, in order to state that if the information architecture of a KMS is based on the formalization of linguistic data, then the system works better and is more consistent. As for the topics we want to deal with, first of all it is indispensable to state that in order to structure efficient and effective Information Retrieval (IR) tools, understanding and formalizing natural language combinatory mechanisms seems to be the first operation to achieve, also because any piece of information produced by humans on the Internet is necessarily a linguistic act. Therefore, in this research work we will also discuss the NLP structuring of a linguistic formalization Hybrid Model, which we hope will prove to be a useful tool to support, improve and refine KMSs. More specifically, in section 1 we will describe how to structure language resources implementable inside KMSs, to what extent they can improve the performance of these systems and how the problem of linguistic data structuring is dealt with by natural language formalization methods. In section 2 we will proceed with a brief review of computational linguistics, paying particular attention to specific software packages such as Intex, Unitex, NooJ, and Cataloga, which are developed according to the Lexicon-Grammar (LG) method, a linguistic theory established during the 1960s by Maurice Gross. In section 3 we will describe some specific works useful to monitor the state of the art in Linguistic Data Structuring Models, Enhanced Solutions for KMSs, and NLP Applications for KMSs. In section 4 we will cope with problems related to natural language formalization methods, describing mainly Transformational-Generative Grammar (TGG) and LG, plus other methods based on statistical approaches and ontologies. In section 5 we will propose a Hybrid Model usable in NLP applications in order to create effective enhanced solutions for KMSs. Specific features and elements of our Hybrid Model will be shown through some results of experimental research work. The case study we will present is a very complex NLP problem yet little explored in recent years, i.e. Multi Word Units (MWUs) treatment. In section 6 we will close our research by evaluating its results and presenting possible future work perspectives. [edited by author]
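
Since the case study is the treatment of Multi Word Units (MWUs), here is a minimal, hypothetical Python sketch of the dictionary-based flavour of MWU recognition; the lexicon and sentence are invented, and real Lexicon-Grammar resources are far richer:

    # Invented mini-lexicon of multi-word units.
    mwu_lexicon = {("kick", "the", "bucket"), ("data", "warehouse"),
                   ("knowledge", "management", "system")}
    max_len = max(len(m) for m in mwu_lexicon)

    def tag_mwus(tokens):
        """Greedy longest-match lookup of MWUs in a token list."""
        i, out = 0, []
        while i < len(tokens):
            for n in range(min(max_len, len(tokens) - i), 1, -1):
                if tuple(tokens[i:i + n]) in mwu_lexicon:
                    out.append("_".join(tokens[i:i + n]))  # fuse the MWU
                    i += n
                    break
            else:
                out.append(tokens[i])
                i += 1
        return out

    print(tag_mwus("the knowledge management system uses a data warehouse".split()))
    # ['the', 'knowledge_management_system', 'uses', 'a', 'data_warehouse']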

17. Radovanovic, Aleksandar. "Concept Based Knowledge Discovery from Biomedical Literature". Thesis, Online access, 2009. http://etd.uwc.ac.za/usrfiles/modules/etd/docs/etd_gen8Srv25Nme4_9861_1272229462.pdf.

18. Spiegler, Sebastian R. "Comparative study of clustering algorithms on textual databases: clustering of curricula vitae into comptency-based groups to support knowledge management". Saarbrücken: VDM Verl. Müller, 2007. http://deposit.d-nb.de/cgi-bin/dokserv?id=3035354&prov=M&dok_var=1&dok_ext=htm.

19. Kybkalo, Anatoliy. "Znalostní management a znalostní báze". Master's thesis, Vysoká škola ekonomická v Praze, 2015. http://www.nusl.cz/ntk/nusl-203842.

Abstract
The theme of this diploma thesis is Knowledge Management, which is becoming a focus of business companies. The theoretical part of the work is divided into several chapters that discuss the basic principles of Knowledge Management. The aim of this work is to describe the principles of knowledge management and create a basic draft of a knowledge base for the KPMG company. In the introductory part of the work, terms commonly used in knowledge management are explained. Further, the knowledge capital of the company, types of knowledge management and the related knowledge strategies are described. The second half of the theoretical part concerns the responsibilities, tasks and roles in knowledge management. The last chapters of the theoretical part describe the individual phases of introducing knowledge management in a company. The practical part of this thesis focuses on an analysis of knowledge management at KPMG and the design of a knowledge base that can substantially reduce the time necessary for the completion of certain deliverables. The chapter mainly describes the system architecture of the knowledge base that the author has designed for the KPMG company.

20. Olsson, Neve Theresia. "Capturing and Analysing Emotions to Support Organisational Learning: The Affect Based Learning Matrix". Doctoral thesis, Kista: Department of Computer and Systems Sciences, Stockholm University, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-1230.

21. Šmarda, Miroslav. "Aplikace principů znalostního managementů ve vybrané firmě". Master's thesis, Vysoké učení technické v Brně, Fakulta podnikatelská, 2017. http://www.nusl.cz/ntk/nusl-318298.

Abstract
This thesis focuses on the problematics of knowledge management, its principles and application. The thesis is divided into three main parts. The first part presents the theoretical basis, which is later used in the analytical and practical parts. The practical part focuses on designing a custom solution that allows working with knowledge effectively.

22. Pérution-Kihli, Guillaume. "Data Management in the Existential Rule Framework: Translation of Queries and Constraints". Electronic Thesis or Diss., Université de Montpellier (2022-....), 2023. http://www.theses.fr/2023UMONS030.

Abstract
The general context of this work is the issue of designing high-quality systems that integrate multiple data sources via a semantic layer encoded in a knowledge representation and reasoning language. We consider knowledge-based data management (KBDM) systems, which are structured in three layers: the data layer, which comprises the data sources, the knowledge (or ontological) layer, and the mappings between the two. Mappings and knowledge are expressed within the existential rule framework. One of the intrinsic difficulties in designing a KBDM is the need to understand the content of data sources. Data sources are often provided with typical queries and constraints, from which valuable information about their semantics can be drawn, as long as this information is made intelligible to KBDM designers. This motivates our core question: is it possible to translate data queries and constraints at the knowledge level while preserving their semantics? The main contributions of this thesis are the following. We extend previous work on data-to-ontology query translation with new techniques for the computation of perfect, minimally complete, or maximally sound query translations. Concerning data-to-ontology constraint translation, we define a general framework and apply it to several classes of constraints. Finally, we provide a sound and complete query rewriting operator for disjunctive existential rules and disjunctive mappings, as well as undecidability results, which are of independent interest.
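
As a toy illustration of ontology-mediated query rewriting (far simpler than the disjunctive existential rules and mappings treated in the thesis), the sketch below backward-rewrites an atomic query with single-atom rules until fixpoint, yielding a union of queries; the predicates and rules are invented:

    # Rules "body => head" restricted to unary atoms, e.g. Teacher(x) => Person(x).
    rules = [("Teacher", "Person"), ("Student", "Person"), ("PhDStudent", "Student")]

    def rewrite(query_pred):
        """Backward-rewrite an atomic query into the set of predicates entailing it."""
        result, changed = {query_pred}, True
        while changed:
            changed = False
            for body, head in rules:
                if head in result and body not in result:
                    result.add(body)
                    changed = True
        return result

    # Asking for Person(x) over the data amounts to the union below:
    print(rewrite("Person"))  # {'Person', 'Teacher', 'Student', 'PhDStudent'}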

23. Hatem, Muna Salman. "A framework for semantic web implementation based on context-oriented controlled automatic annotation". Thesis, University of Bradford, 2009. http://hdl.handle.net/10454/3207.

Abstract
The Semantic Web is the vision of the future Web. Its aim is to enable machines to process Web documents in a way that makes it possible for computer software to "understand" the meaning of the document contents. Each document on the Semantic Web is to be enriched with meta-data that express the semantics of its contents. Many infrastructures, technologies and standards have been developed and have proven their theoretical use for the Semantic Web, yet very few applications have been created. Most of the current Semantic Web applications were developed for research purposes. This project investigates the major factors restricting the wide spread of Semantic Web applications. We identify the two most important requirements for a successful implementation as the automatic production of semantically annotated documents, and the creation and maintenance of a semantics-based knowledge base. This research proposes a framework for Semantic Web implementation based on context-oriented controlled automatic Annotation; for short, we call the framework the Semantic Web Implementation Framework (SWIF) and the system that implements this framework the Semantic Web Implementation System (SWIS). The proposed architecture provides for a Semantic Web implementation of stand-alone websites that automatically annotates Web pages before they are uploaded to the Intranet or Internet, and maintains persistent storage of Resource Description Framework (RDF) data for both the domain memory, denoted by Control Knowledge, and the meta-data of the Web site's pages. We believe that the presented implementation of the major parts of SWIS introduces a system competitive with current state-of-the-art Annotation tools and knowledge management systems; this is because it handles input documents in the context in which they are created, in addition to the automatic learning and verification of knowledge using only the available computerized corporate databases. In this work, we introduce the concept of Control Knowledge (CK), which represents the application's domain memory, and use it to verify the extracted knowledge. Learning is based on the number of occurrences of the same piece of information in different documents. We introduce the concept of Verifiability in the context of Annotation by comparing the extracted text's meaning with the information in the CK and the use of the proposed database table Verifiability_Tab. We use the linguistic concept of Thematic Role in investigating and identifying the correct meaning of words in text documents, which helps correct relation extraction. The verb lexicon used contains the argument structure of each verb together with the thematic structure of the arguments. We also introduce a new method to chunk conjoined statements and identify the missing subject of the produced clauses. We use the semantic class of verbs that relates a list of verbs to a single property in the ontology, which helps in disambiguating the verb in the input text to enable better information extraction and Annotation. Consequently, we propose the following definition for the annotated document, or what is sometimes called the 'Intelligent Document': 'The Intelligent Document is the document that clearly expresses its syntax and semantics for human use and software automation'. This work introduces a promising improvement to the quality of the automatically generated annotated document and the quality of the automatically extracted information in the knowledge base.
Our approach in the area of using Semantic Web technology opens new opportunities for diverse areas of applications. E-Learning applications can be greatly improved and become more effective.
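
To illustrate the verb-lexicon idea described above (semantic classes that relate several verbs to a single ontology property so as to guide relation extraction), here is a deliberately small, hypothetical Python sketch over already-parsed subject-verb-object tuples; the lexicon and sentences are invented:

    # Invented semantic classes: several verbs map to one ontology property.
    verb_lexicon = {"teaches": "ex:instructs", "instructs": "ex:instructs",
                    "writes": "ex:authors", "authors": "ex:authors"}

    def annotate(svo_sentences):
        """Produce RDF-like triples from (subject, verb, object) tuples."""
        triples = []
        for subj, verb, obj in svo_sentences:
            prop = verb_lexicon.get(verb.lower())
            if prop is not None:              # only verbs we can disambiguate
                triples.append((subj, prop, obj))
        return triples

    sentences = [("Alice", "teaches", "Databases"),
                 ("Bob", "writes", "Thesis42"),
                 ("Carol", "likes", "Tea")]   # unknown verb: skipped
    for t in annotate(sentences):
        print(t)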

24. Salman, Munir [Verfasser], Matthias [Gutachter] Hemmje and Dominic [Gutachter] Heutelbeck. "Flexible Distributed R&D Data Management Supporting Social Network-Based Knowledge, Content, and Software Asset Integration Management in Collaborative and Co-Creative R&D and Innovation / Munir Salman ; Gutachter: Matthias Hemmje, Dominic Heutelbeck". Hagen: FernUniversität in Hagen, 2018. http://d-nb.info/1170389791/34.

25. Krive, Jacob. "Effectiveness of Evidence-Based Computerized Physician Order Entry Medication Order Sets Measured by Health Outcomes". NSUWorks, 2013. http://nsuworks.nova.edu/gscis_etd/202.

Abstract
In the past three years, evidence-based medicine emerged as a powerful force in an effort to improve quality and health outcomes, and to reduce the cost of care. Computerized physician order entry (CPOE) applications brought safety and efficiency features to clinical settings, including ease of ordering medications via pre-defined sets. Order sets offer promise of standardized care beyond convenience features: through evidence-based practices built upon a growing and powerful knowledge of clinical professionals, they can achieve potentially more consistent health outcomes with patients and reduce the frequency of medical errors, adverse drug effects, and unintended side effects during treatment. While order sets existed in paper form prior to the introduction of CPOE, their true potential was only unleashed with the support of clinical informatics, at those healthcare facilities that installed CPOE systems and reaped the rewards of standardized care. Despite ongoing utilization of order sets at facilities that implemented CPOE, there is a lack of quantitative evidence behind their benefits. Comprehensive research into their impact requires a history of electronic medical records large enough to produce population samples that achieve statistically significant results. The study, conducted at a large Midwest healthcare system consisting of several community and academic hospitals, was aimed at quantitatively analyzing the benefits of order sets applied to prevent venous thromboembolism (VTE) and treat pneumonia, congestive heart failure (CHF), and acute myocardial infarction (AMI), testing hospital mortality, readmission, complications, and length of stay (LOS) as health outcomes. Results indicated reduction of acute VTE rates among non-surgical patients in the experimental group, while LOS and complications benefits were inconclusive. Pneumonia patients in the experimental group had lower mortality, readmissions, LOS, and complications rates. CHF patients benefited from order sets in terms of mortality and LOS, while there was insufficient data to display results for readmissions and complications. Utilization of AMI order sets was insufficient to produce statistically significant results. Results will (1) empower health providers with evidence to justify implementation of order sets due to their effectiveness in driving improvements in health outcomes and efficiency of care and (2) provide researchers with new ideas to conduct health outcomes research.
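
A minimal sketch of the kind of outcome comparison such a study relies on, using a chi-square test on an invented 2x2 mortality table for an order-set group versus a control group (the counts are made up and are not the dissertation's data):

    from scipy.stats import chi2_contingency

    # Invented 2x2 table: rows = order-set vs control, cols = died vs survived.
    table = [[30, 970],
             [55, 945]]

    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, p = {p:.4f}")  # a small p suggests the mortality rates differ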

26. Caballé Llobet, Santi. "A Computational Model for the Construction of Knowledge-based Collaborative Learning Distributed Applications". Doctoral thesis, Universitat Oberta de Catalunya, 2008. http://hdl.handle.net/10803/9127.

Abstract
An important research topic in Computer Supported Collaborative Learning (CSCL) is to explore the importance of efficient management of event information generated from group activity in collaborative learning practices for its further use in extracting and providing knowledge on interaction behavior.
The essential issue here is first how to design a CSCL platform that can be used for real, long-term, complex collaborative problem-solving situations and which enables the instructor both to analyze group interaction effectively and to provide adequate support when needed. Secondly, how to extract relevant knowledge from collaboration in order to provide learners with efficient awareness and feedback as regards individual and group performance and assessment. The achievement of these tasks involves the design of a conceptual framework of collaborative learning interaction that structures and classifies the information generated in a collaborative application at several levels of description. Computational models then realize this conceptual approach for an efficient management of the knowledge produced by the individual and group activity, as well as the possibility of exploiting this knowledge further as a metacognitive tool for real-time coaching and regulation of the collaborative learning process. This vision is especially relevant in the current educational scenario, which is moving from a traditional paradigm, centred on the figure of a masterful instructor, to an emerging paradigm that considers students to be central actors in their learning process; in this new scenario, students learn, with the help of instructors, technology and other students, what they will potentially need in their future academic or professional activities.
In addition, CSCL needs have been evolving over recent years in line with ever more demanding pedagogical and technological requirements. On-line collaborative learning environments no longer depend on homogeneous groups, static content and resources, and single pedagogies; high customization and flexibility are a must in this context. As a result, current educational organizations need to extend and move to highly customized learning and teaching forms in a timely fashion, each incorporating its own pedagogical approach, each targeting a specific learning goal, and each incorporating its specific resources. Organizations also demand the effective integration, in terms of cost and time, of legacy and external learning systems belonging to other institutions, departments and courses; such legacy systems are implemented in different languages and supported by heterogeneous, widely distributed platforms, to name some of the most common problems.
All these issues certainly represent a great challenge for current and future research in this field. Therefore, further efforts need to be made to help developers, technologists and pedagogists overcome the demanding requirements currently found in the CSCL domain, as well as to provide modern educational organizations with fast, flexible and effective solutions for the enhancement and improvement of collaborative learning performance and outcomes. This thesis proposes a first step toward these goals.

The main contribution in this thesis is the exploration of the importance of an efficient management of information generated from group activity in Computer-Supported Collaborative Learning (CSCL) practices for its further use in extracting and providing knowledge on interaction behavior. To this end, the first step is to investigate a conceptual model for data analysis and management so as to identify the many kinds of indicators that describe collaboration and learning and classify them into high-level potential categories of effective collaboration. Indeed, there are more evident key discourse elements and aspects than those shown by the literature, which play an important role both for promoting student participation and enhancing group and individual performance, such as, the impact and effectiveness of students' contributions, among others, that are explored in this work. By making these elements explicit, the discussion model proposed accomplishes high students' participation rates and contribution quality in a more natural and effective way. This approach goes beyond a mere interaction analysis of asynchronous discussion in the sense that it builds a multi-functional model that fosters knowledge sharing and construction, develops a strong sense of community among students, provides tutors with a powerful tool for students' monitoring, discussion regulation, while it allows for peer facilitation through self, peer and group awareness and assessment.
The results of the research described so far motivates the development of a computational system as the translation from the conceptual model into a computer system that implements the management of the information and knowledge acquired from the group activity, so as to be efficiently fed back to the collaboration. The achievement of a generic, robust, flexible, interoperable, reusable computational model that meets the fundamental functional needs shared by any collaborative learning experience is largely investigated in this thesis. The systematic reuse of this computational model permits a fast adaptation to new learning and teaching requirements, such as learning by discussion, by relying on the most advanced software engineering processes and methodologies from the field of software reuse, and thus important benefits are expected in terms of productivity, quality, and cost.
Therefore, another important contribution is to explore and extend suitable software reuse techniques, such as Generic Programming, so as to allow the computational model to be successfully particularized in as many situations as possible without losing efficiency in the process. In particular, based on domain analysis techniques, a high-level computational description and formalization of the CSCL domain are identified and modeled. Then, different platform-specific developments that realize the conceptual description are provided. A certain level of automation is also explored by means of advanced techniques based on Service-Oriented Architectures and Web services for passing from the conceptual specification to the desired realization, which greatly facilitates the development of CSCL applications using this computational model.
Based on the outcomes of these investigations, this thesis contributes computational collaborative learning systems which are capable of managing both qualitative and quantitative information and transforming it into useful knowledge for all the implicated parties in an efficient and clear way. This is achieved both by the specific assessment of each contribution by the tutor who supervises the discussion and by rich statistical information about students' participation. This statistical data is automatically provided by the system; for instance, statistical data sheds light on the students' engagement in the discussion forum or on how much interest a student's intervention drew, in the form of participation impact, level of passivity, proactivity, reactivity, and so on. The aim is to provide both a deeper understanding of the actual discussion process and a more objective assessment of individual and group activity.
This information is then processed and analyzed by means of a multivariate statistical model in order to extract useful knowledge about the collaboration. The knowledge acquired is communicated back to the members of the learning group and their tutor in appropriate formats, thus providing valuable awareness and feedback on group interaction and performance, and it may help identify and assess the real skills and intentions of participants. The most important benefit expected from the conceptual model for interaction data analysis and management is a great improvement and enhancement of the learning and teaching collaborative experiences.
Finally, the possibilities of using distributed and Grid technology to support real CSCL environments are also extensively explored in this thesis. The results of this investigation lead to the conclusion that the features provided by these technologies form an ideal context for supporting and meeting the demanding requirements of collaborative learning applications. This approach is taken one step further to enhance the possibilities of the computational model in the CSCL domain and is successfully adopted on an empirical and application basis. From the results achieved, the feasibility of distributed technologies to considerably enhance and improve the collaborative learning experience is demonstrated. In particular, the use of Grid computing is successfully applied for the specific purpose of increasing the efficiency of processing a large amount of information from group activity log files.
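
As a small illustration of turning collaboration event logs into the awareness indicators mentioned above (contributions, passivity), here is a hypothetical Python sketch over an invented forum event log:

    from collections import Counter

    # Invented event log: (student, action) pairs from a discussion forum.
    log = [("ana", "post"), ("ben", "read"), ("ana", "reply"),
           ("ben", "read"), ("cai", "post"), ("ben", "reply")]

    posts = Counter(s for s, a in log if a in ("post", "reply"))
    reads = Counter(s for s, a in log if a == "read")

    for student in sorted({s for s, _ in log}):
        total = posts[student] + reads[student]
        print(f"{student}: contributions={posts[student]}, "
              f"passivity={reads[student] / total:.2f}")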

27. COLOMBARI, RUGGERO. "Digitalization and operational data-driven decision-making: A socio-technical investigation of the implications for front-line production managers and workers". Doctoral thesis, Politecnico di Torino, 2022. http://hdl.handle.net/11583/2963942.

28. Gomis, Marie-Joseph. "Web-based ERP systems: the new generation. Case study: mySAP ERP". Thesis, Jönköping University, JTH, Computer and Electrical Engineering, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-7711.

Abstract

With the proliferation of the Internet, ERP systems, like all domains of information technology, have undergone an important evolution. This final thesis project is a study of the evolution of ERP systems, more precisely of their migration to the Web, giving birth to a new generation of systems: Web-based or Web-enabled ERP systems. This migration to the Web is justified by the difficulty of enabling communication between partners' legacy systems and an organization's ERP system. A historical evolution of these systems is presented in order to understand the reasons that led vendors to adopt Web service technology. Based on different studies, the main technologies, such as Web services, service-oriented architecture and Web application servers, are also presented. From an interpretative research approach, mySAP ERP has been chosen as a case study. This Master's thesis was carried out at AIRBUS France within the framework of the SAP Customer Competence Center (SAPCCC) Web site project. The project is aimed at re-building the SAPCCC Web site. The new characteristic of the Web site is to make it accessible to all AIRBUS partners working with SAP applications. To make the Web site accessible to partners from their own applications located on their own platforms, the development has been done with mySAP ERP, an ERP system using Web service technology. Finally, this thesis presents a comparative study between traditional ERP systems and the new generation of Web-based ERP systems.

29. El Sarraj, Lama. "Exploitation d'un entrepôt de données guidée par des ontologies : application au management hospitalier". Thesis, Aix-Marseille, 2014. http://www.theses.fr/2014AIXM4331.

Abstract
This research is situated in the domain of Data Warehouse (DW) personalization and concerns DW assistance. Specifically, we are interested in assisting a user during online analysis processes in the use of existing operational resources. The application of this research concerns hospital management, for hospital governance, and is limited to the scope of the Program of Medicalization of Information Systems (PMSI). This research was supported by the Public Hospitals of Marseille (APHM). Our proposal is a semantic approach based on ontologies. The support system implementing this approach, called Ontology-based Personalization System (OPS), is based on a knowledge base operated by a personalization engine. The knowledge base is composed of three ontologies: a domain ontology, an ontology of the DW structure, and an ontology of resources. The personalization engine allows, firstly, a personalized search of DW resources based on the user's profile and, secondly, for a particular resource, an expansion of the search by recommending new resources based on the context of the resource. To recommend new resources, we have proposed three possible strategies. To validate our proposal, a prototype of the OPS system was developed, with a personalization engine implemented in Java. This engine exploits an OWL knowledge base composed of three interconnected OWL ontologies. We illustrate three experimental scenarios related to PMSI, defined with APHM domain experts.
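
As a rough sketch of profile-driven resource search in the spirit of the OPS engine described here (plain Python sets standing in for the OWL ontologies; the profiles and annotations are invented):

    # Invented annotations: each exploitation resource is tagged with domain concepts.
    resources = {
        "report_activity": {"PMSI", "activity", "stay"},
        "dashboard_costs": {"costs", "activity"},
        "query_mortality": {"PMSI", "mortality"},
    }

    # Invented user profile: concepts relevant to this hospital manager.
    profile = {"PMSI", "activity"}

    def personalized_search(profile, resources):
        """Rank resources by overlap between their concepts and the user profile."""
        scored = [(len(tags & profile), name) for name, tags in resources.items()]
        return [name for score, name in sorted(scored, reverse=True) if score > 0]

    print(personalized_search(profile, resources))
    # ['report_activity', 'query_mortality', 'dashboard_costs']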
Los estilos APA, Harvard, Vancouver, ISO, etc.
30

Molch, Silke. "Datenmodelle für fachübergreifende Wissensbasen in der interdisziplinären Anwendung". TUDpress, 2019. https://tud.qucosa.de/id/qucosa%3A36574.

Texto completo
Resumen
The aim of this contribution from teaching practice is to demonstrate, using the example of an applied engineering discipline, the approaches required for building interdisciplinary knowledge bases and for using them in student semester projects.
Los estilos APA, Harvard, Vancouver, ISO, etc.
31

Harley, Samuel, Michael Reil, Thea Blunt-Henderson y George Bartlett. "Data, Information, and Knowledge Management". International Foundation for Telemetering, 2005. http://hdl.handle.net/10150/604784.

Texto completo
Resumen
ITC/USA 2005 Conference Proceedings / The Forty-First Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2005 / Riviera Hotel & Convention Center, Las Vegas, Nevada
The Aberdeen Test Center Versatile Information System – Integrated, ONline (VISION) project has developed and deployed a telemetry capability based upon modular instrumentation, seamless communications, and the VISION Digital Library. Each of the three key elements of VISION contributes to a holistic solution to the data collection, distribution, and management requirements of Test and Evaluation. This paper provides an overview of VISION instrumentation, communications, and overall data management technologies, with a focus on engineering performance data.
Los estilos APA, Harvard, Vancouver, ISO, etc.
32

REIS, JUNIOR JOSE S. B. "Métodos e softwares para análise da produção científica e detecção de frentes emergentes de pesquisa". reponame:Repositório Institucional do IPEN, 2015. http://repositorio.ipen.br:8080/xmlui/handle/123456789/26929.

Texto completo
Resumen
Progress on earlier projects highlighted the need to address the problem of software for detecting emerging research and development trends from databases of scientific publications. A lack of efficient computational applications dedicated to this purpose became evident; such tools are of great value for better planning of research and development programs in institutions. A review of the currently available software was therefore carried out in order to clearly delineate the opportunity to develop new tools. As a result, an application called Citesnake was implemented, designed specifically to support the detection and study of emerging trends through the analysis of several types of networks extracted from scientific databases. Using this robust and effective computational tool, analyses of emerging research and development fronts were conducted in the area of Generation IV nuclear power systems, in order to identify, among the reactor types selected as the most promising by the GIF - Generation IV International Forum, those that have developed the most over the last ten years and that currently appear the most capable of fulfilling the promises made about their innovative concepts.
Dissertation (Master's in Nuclear Technology)
IPEN/D
Instituto de Pesquisas Energéticas e Nucleares - IPEN-CNEN/SP
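As an illustration of the kind of analysis described above (not of Citesnake itself), the following Python sketch flags emerging keywords by the growth of their yearly counts in a set of bibliographic records; the records, the split year, and the growth ratio are all illustrative choices.

```python
# Illustrative sketch: flag "emerging" keywords by the growth of their
# yearly publication counts in a bibliographic export. Toy data only.
from collections import Counter, defaultdict

records = [  # (year, keywords) pairs as might be parsed from a database export
    (2006, ["sodium fast reactor", "fuel cycle"]),
    (2012, ["molten salt reactor", "safety"]),
    (2013, ["molten salt reactor", "thorium"]),
    (2014, ["molten salt reactor", "sodium fast reactor"]),
]

by_keyword = defaultdict(Counter)
for year, keywords in records:
    for kw in keywords:
        by_keyword[kw][year] += 1

def growth(counts: Counter, split: int = 2010) -> float:
    """Ratio of recent to early mentions; > 1 suggests an emerging front."""
    early = sum(c for y, c in counts.items() if y < split) or 1
    recent = sum(c for y, c in counts.items() if y >= split)
    return recent / early

for kw, counts in sorted(by_keyword.items(), key=lambda x: -growth(x[1])):
    print(f"{kw}: growth={growth(counts):.1f}")
```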
Los estilos APA, Harvard, Vancouver, ISO, etc.
33

Gängler, Thomas. "Semantic Federation of Musical and Music-Related Information for Establishing a Personal Music Knowledge Base". Master's thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2011. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-72434.

Texto completo
Resumen
Music is perceived and described very subjectively by every individual. Nowadays, people often get lost in their steadily growing digital music collections, scattered across many places. Existing music player and management applications run into trouble when dealing with the poor metadata that is predominant in personal music collections. Several music information services assist users by providing tools for precisely organising their music collection, or for presenting new insights into their own music library and listening habits. However, music consumers still cannot seamlessly interact with all these auxiliary services directly from the place where they access their music. To profit from the manifold music and music-related knowledge that is or can be made available via various information services, this information has to be gathered, semantically federated, and integrated into a uniform knowledge base that can represent this data to users in a personalised, appropriate visualisation. This personalised semantic aggregation of music metadata from several sources is the gist of this thesis. The outlined solution concentrates in particular on users' needs regarding music collection management, which can vary strongly between individuals. The author's proposal, the personal music knowledge base (PMKB), consists of a client-server architecture with uniform communication endpoints and an ontological knowledge representation model format that is able to represent the versatile information of its use cases. The PMKB concept covers the complete information flow life cycle, including the processes of user account initialisation, information service choice, individual information extraction, and proactive update notification. The PMKB implementation makes use of Semantic Web technologies. This work explains in particular the knowledge representation part of the PMKB vision. Several new Semantic Web ontologies are defined, or existing ones heavily modified, to meet the requirements of a personalised semantic federation of music and music-related data for managing personal music collections. The outcome is, amongst others:
• a new vocabulary for describing the playback domain,
• another one for representing information service categorisations and quality ratings, and
• one that unites the beneficial parts of the existing advanced user modelling ontologies.
The introduced vocabularies can be utilised seamlessly in conjunction with the existing Music Ontology framework. Some RDFizers that make use of the outlined ontologies in their mapping definitions illustrate the fitness of these specifications in practice. A social evaluation method is applied to examine the reutilisation, application, and feedback of the vocabularies explained in this work. This analysis shows that it is good practice to properly publish Semantic Web ontologies with the help of Linked Data principles and further basic SEO techniques, to easily reach the searching audience, to avoid duplicates of such KR specifications, and, last but not least, to directly establish a "shared understanding". Due to their project independence, the proposed vocabularies can be deployed in every knowledge representation model that needs their knowledge representation capacities. This thesis added its value to make the vision of a personal music knowledge base come true.
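To make the vocabulary idea concrete, here is a toy RDF description of a listening event in the spirit of the playback vocabulary mentioned above. Only the Music Ontology namespace is real; the EX properties (PlaybackEvent, playedTrack, playedAt) are invented for illustration.

```python
# Toy RDF description of a listening event. Only the Music Ontology
# namespace (MO) is real; everything under EX is invented for illustration.
from rdflib import Graph, Namespace, Literal, URIRef
from rdflib.namespace import RDF, XSD

MO = Namespace("http://purl.org/ontology/mo/")
EX = Namespace("http://example.org/playback#")

g = Graph()
track = URIRef("http://example.org/tracks/42")
event = URIRef("http://example.org/events/1")

g.add((track, RDF.type, MO.Track))
g.add((event, RDF.type, EX.PlaybackEvent))
g.add((event, EX.playedTrack, track))
g.add((event, EX.playedAt,
       Literal("2011-05-01T20:15:00", datatype=XSD.dateTime)))

print(g.serialize(format="turtle"))
```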
Los estilos APA, Harvard, Vancouver, ISO, etc.
34

Muhammad, Fuad Muhammad Marwan. "Similarity Search in High-dimensional Spaces with Applications to Time Series Data Mining and Information Retrieval". Phd thesis, Université de Bretagne Sud, 2011. http://tel.archives-ouvertes.fr/tel-00619953.

Texto completo
Resumen
We present one of the main problems in information retrieval and data mining: the similarity search problem. We address this problem from an essentially metric perspective. We focus on time series data, but our general objective is to develop methods and algorithms that can be extended to other data types. We study new methods for handling the similarity search problem in high-dimensional spaces. The new methods and algorithms we introduce are extensively tested, and they show superiority over the other methods and algorithms in the literature.
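As a baseline illustrating the problem setting (not the thesis's own algorithms), the following sketch performs brute-force 1-nearest-neighbour search over time series under the Euclidean metric, with early abandoning of candidates that can no longer beat the best match found so far.

```python
# Brute-force 1-NN similarity search over time series under the Euclidean
# metric, with early abandoning. A generic baseline, not the thesis's methods.
import math

def euclidean(a, b, best: float) -> float:
    """Squared Euclidean distance, abandoning early once `best` is exceeded."""
    acc = 0.0
    for x, y in zip(a, b):
        acc += (x - y) ** 2
        if acc >= best:          # cannot beat the best match found so far
            return math.inf
    return acc

def nearest(query, series):
    best, best_idx = math.inf, -1
    for i, s in enumerate(series):
        d = euclidean(query, s, best)
        if d < best:
            best, best_idx = d, i
    return best_idx, math.sqrt(best)

db = [[0, 1, 2, 3], [3, 2, 1, 0], [0, 1, 1, 2]]
print(nearest([0, 1, 2, 2], db))  # -> (0, 1.0)
```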
Los estilos APA, Harvard, Vancouver, ISO, etc.
35

Amad, Ashraf. "L’acquisition et l’extraction de connaissances dans un contexte patrimoniale peu documenté". Thesis, Paris 8, 2017. http://www.theses.fr/2017PA080101.

Texto completo
Resumen
The importance of cultural heritage documentation increases in parallel with the risks to which this heritage is exposed, such as wars, uncontrolled urban development, natural disasters, neglect, and inappropriate conservation techniques or strategies. Moreover, such documentation is a fundamental tool for the assessment, conservation, and management of cultural heritage, allowing us to estimate its historical, scientific, social and economic value. According to several international institutions dedicated to the preservation of cultural heritage, there is an urgent need to develop computer solutions that facilitate and support the documentation of poorly documented cultural heritage, especially in developing countries where resources are scarce. Among these countries, Palestine represents a relevant case study for this lack of heritage documentation. To address this issue, we propose an approach for knowledge acquisition and extraction in the context of poorly documented heritage. We take the Church of the Nativity in Palestine as a case study and implement our theoretical approach by developing a platform for the acquisition and extraction of heritage knowledge, built on a framework for cultural heritage documentation. Our solution is based on semantic technologies, which gives us the possibility, from the outset, to provide a rich ontological description, a better structuring of the information, a high level of interoperability, and better automatic processing (machine readability) without additional effort. Additionally, our approach is evolutionary and reciprocal: the acquisition of knowledge (in structured form) improves the extraction of heritage knowledge from unstructured text, and vice versa. The interaction between the two components of our system, as well as the heritage knowledge itself, therefore develops and improves over time, especially as our system uses manual contributions and expert validation of the automatic results (in both components) to optimize its performance.
Los estilos APA, Harvard, Vancouver, ISO, etc.
36

Datta, Roshni. "Knowledge-Based Performance Management Framework". The Ohio State University, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=osu1293725862.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
37

Jäkel, Tobias. "Role-based Data Management". Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-224416.

Texto completo
Resumen
Database systems form an integral component of today's software systems, and as such they are the central point for storing and sharing a software system's data while ensuring global data consistency at the same time. Introducing the primitives of roles and their accompanying metatype distinction into modeling and programming languages results in a novel paradigm for designing, extending, and programming modern software systems. In detail, roles as a modeling concept enable a separation of concerns within an entity. Along with its rigid core, an entity may acquire various roles in different contexts during its lifetime and thus adapt its behavior and structure dynamically at runtime. Unfortunately, database systems, as an important component and the global consistency provider of such systems, do not keep pace with this trend. The absence of a metatype distinction, in terms of an entity's separation of concerns, in the database system results in various problems for the software system in general, for the application developers, and finally for the database system itself. In the case of relational database systems, these problems are concentrated under the term role-relational impedance mismatch. In particular, the whole software system is designed using different semantics on various layers. In the case of role-based software systems in combination with relational database systems, this gap in semantics between applications and the database system increases dramatically. Consequently, the database system cannot directly represent the richer semantics of roles or the accompanying consistency constraints. These constraints have to be ensured by the applications, and the database system loses its single-point-of-truth characteristic in the software system. As the applications are in charge of guaranteeing global consistency, their development requires more effort in data management. Moreover, the software system's data management is distributed over several layers, which results in an unstructured software system architecture. To overcome the role-relational impedance mismatch and bring the database system back into its rightful position as the single point of truth in a software system, this thesis introduces the novel and tripartite RSQL approach. It combines a novel database model that represents the metatype distinction as a first-class citizen in a database system, an adapted query language on the database model's basis, and finally a proper result representation. Precisely, RSQL's logical database model introduces Dynamic Data Types to directly represent the separation of concerns within an entity type on the schema level. On the instance level, the database model defines the notion of a Dynamic Tuple, which combines an entity with the notion of roles and thus allows dynamic structure adaptation at runtime without changing an entity's overall type. These definitions build the main data structures on which the database system operates. Moreover, formal operators connecting the query language statements with the database model's data structures complete the database model. The query language, as the external database system interface, features an individual data definition, data manipulation, and data query language. Their statements directly represent the metatype distinction to address Dynamic Data Types and Dynamic Tuples, respectively. As a consequence of the novel data structures, the query processing of Dynamic Tuples is completely redesigned.
As the last piece of a complete database integration of the role-based notion and its accompanying metatype distinction, we specify the RSQL Result Net as the result representation. It provides a novel result structure and functionalities to navigate through query results. Finally, we evaluate all three RSQL components in comparison to a relational database system. This assessment clearly demonstrates the benefits of fully integrating the roles concept into the database.
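The role notion underlying Dynamic Tuples can be illustrated with a short, purely conceptual sketch: an entity keeps its rigid core while acquiring and dropping roles at runtime, adapting its structure without changing its overall type. This is a modelling toy, not RSQL's implementation.

```python
# Toy illustration of the roles concept behind Dynamic Tuples: a rigid core
# entity acquires and drops roles at runtime. Not RSQL's implementation.
class Entity:
    def __init__(self, name: str):
        self.name = name          # the rigid core
        self.roles = {}           # role name -> role-specific attributes

    def acquire(self, role: str, **attrs):
        self.roles[role] = attrs

    def drop(self, role: str):
        self.roles.pop(role, None)

    def __repr__(self):
        return f"{self.name} playing {sorted(self.roles)}"

p = Entity("Alice")
p.acquire("Student", university="TU Dresden", matriculation=123)
p.acquire("Employee", employer="SAP")
print(p)            # Alice playing ['Employee', 'Student']
p.drop("Student")   # context ends; the core entity is unchanged
print(p)            # Alice playing ['Employee']
```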
Los estilos APA, Harvard, Vancouver, ISO, etc.
38

Montoya, David. "Une base de connaissance personnelle intégrant les données d'un utilisateur et une chronologie de ses activités". Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLN009/document.

Texto completo
Resumen
Typical Internet users today have their data scattered over several devices, applications, and services. Managing and controlling one's data is increasingly difficult. In this thesis, we adopt the viewpoint that users should be given the means to gather and integrate their data, under their full control. In that direction, we designed a system that integrates and enriches the data of a user from multiple heterogeneous sources of personal information into an RDF knowledge base. The system is open source and implements a novel, extensible framework that facilitates the integration of new data sources and the development of new modules for deriving knowledge. We first show how user activity can be inferred from smartphone sensor data. We introduce a time-based clustering algorithm to extract stay points from location history data. Using data from additional mobile phone sensors, geographic information from OpenStreetMap, and public transportation schedules, we introduce a transportation mode recognition algorithm to derive the different modes and routes taken by the user when traveling. The algorithm derives the itinerary followed by the user by finding the most likely sequence in a linear-chain conditional random field whose feature functions are based on the output of a neural network. We also show how the system can integrate information from the user's email messages, calendars, address books, social network services, and location history into a coherent whole. To do so, it uses entity resolution to find the set of avatars used by each real-world contact and performs spatiotemporal alignment to connect each stay point with the event it corresponds to in the user's calendar. Finally, we show that such a system can also be used for multi-device and multi-system synchronization, allowing knowledge to be pushed back to the sources. Results of extensive experiments are presented.
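A minimal version of the time-based stay-point extraction mentioned above might look as follows; the flat-earth distance approximation and the thresholds are illustrative simplifications, not the thesis's exact method or parameters.

```python
# Minimal time-based stay-point extraction: a stay point is emitted when
# consecutive GPS fixes remain within `max_dist` metres for at least
# `min_time` seconds. Illustrative simplification of the idea above.
import math

def dist_m(p, q):
    """Approximate distance in metres between (lat, lon) pairs."""
    dlat = (p[0] - q[0]) * 111_320
    dlon = (p[1] - q[1]) * 111_320 * math.cos(math.radians(p[0]))
    return math.hypot(dlat, dlon)

def stay_points(fixes, max_dist=200.0, min_time=20 * 60):
    """fixes: list of (timestamp_s, lat, lon), time-ordered."""
    out, i = [], 0
    while i < len(fixes):
        j = i + 1
        while j < len(fixes) and dist_m(fixes[i][1:], fixes[j][1:]) <= max_dist:
            j += 1
        if fixes[j - 1][0] - fixes[i][0] >= min_time:
            lat = sum(f[1] for f in fixes[i:j]) / (j - i)
            lon = sum(f[2] for f in fixes[i:j]) / (j - i)
            out.append((fixes[i][0], fixes[j - 1][0], lat, lon))
        i = j
    return out

track = [(0, 48.85, 2.35), (600, 48.8501, 2.3501), (1300, 48.8502, 2.3502),
         (1400, 48.90, 2.40)]
print(stay_points(track))  # one stay point of ~22 minutes
```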
Los estilos APA, Harvard, Vancouver, ISO, etc.
39

Stonehouse, George. "Knowledge based strategy : appraising knowledge creation capability in organisations". Thesis, Edinburgh Napier University, 2008. http://researchrepository.napier.ac.uk/Output/2446.

Texto completo
Resumen
This thesis sets out a journey which culminates in the development of an analytical framework, the "Organisational Creativity Appraisal", intended to assist organisations in evaluating their ability to support and develop creativity. The framework is derived from the common thread of the thesis, drawn from a range of research and consultancy projects, and the resulting published work, spanning an eight-year period and centring on the role of knowledge and creativity in the strategy and performance of organisations. The literature on strategy, learning and creativity increasingly recognises that organisational context is critical to the formation of strategy, to the content of the strategy, and to its successful implementation. The thesis explores the ways in which learning and creativity, the basis of knowledge-based strategy, are influenced by organisational context or social architecture. The research explores the ways in which managers can gain a greater understanding of the social architectures of their organisations so as to support their strategic development. The central core of the thesis is the nine published papers upon which it is based, but it also derives from the broader perspective of my published work in the form of both articles and books. The thesis further draws upon my own experience as a leader and manager in the context of university business schools, and as a consultant, researcher and developer in the context of a range of international private and public sector organisations. The work is based upon the premise that theory should inform practice and that practice should inform theory. The "Organisational Creativity Appraisal" framework is informed by both and is intended to assist management practice. There is no assumption that management research can arrive at prescriptions for managerial and organisational behaviour. On the other hand, management research can usefully inform management and organisational behaviour, as long as it is employed in a critically reflective manner. The "Organisational Creativity Appraisal" presented in this work should be regarded as a framework in its present form, likely to develop further as my research progresses.
Los estilos APA, Harvard, Vancouver, ISO, etc.
40

Rudd, Susan Elizabeth. "Knowledge-based analysis of partial discharge data". Thesis, University of Strathclyde, 2010. http://oleg.lib.strath.ac.uk:80/R/?func=dbin-jump-full&object_id=14447.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
41

Rangaraj, Jithendra Kumar. "Knowledge-based Data Extraction Workbench for Eclipse". The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1354290498.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
42

Leigh, Christopher. "Knowledge management : a practice-based approach". Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2008. https://ro.ecu.edu.au/theses/236.

Texto completo
Resumen
Driven by advances in internet and wireless communications, the information economy is exemplified by the unprecedented speed at which large amounts of information are created and disseminated. As a result, organisations and individuals now need to process more information and create new knowledge faster than ever before. This has forced organisations to consider how individuals and groups create knowledge that enables them to innovate fast enough to compete in the global marketplace. This portfolio contains a collection of studies and an investigation of practice-based approaches to knowing and learning in organisational work settings. It represents a departure from the traditional view of knowledge as belonging to individuals and contained in mental processes, which organisations then attempt to convert into embedded knowledge. This study seeks to explain how knowledge is achieved in the course of practice, being situated in cultural, historical and social contexts. The focus therefore goes beyond the study of knowledge in organisations to shed light on the organising processes involved in knowing, from an institutional and personal perspective. Firstly, it investigates, through a number of practice-based studies, how knowledge is created and learning conducted in an organisational setting, in order to further the existing research in this area. Secondly, it presents two frameworks that may be used to promote organisational learning and knowledge creation. This study draws on a number of theoretical frameworks, including: constructivist theories of learning, which view knowledge as actively constructed, relative, and pluralistic (Denzin and Lincoln, 2003); the sociocultural approaches of Vygotsky (1978, 1986), which emphasised the interdependence of social and individual processes in the creation of knowledge; situated learning theories (Brown, Collins and Duguid, 1989; Lave and Wenger, 1991), in which learning and cognition are situated in the activity in which they occur; and the ideas of the incredulous postmodernist, for whom all knowledge is provisional, temporal and hypothetical. The portfolio concludes that human knowledge is subjectively influenced by a large number of factors, including cultural, social, pedagogical and psychological issues, in addition to language and context. Furthermore, it asserts that knowledge is a mediated achievement which is collectively created, intrinsically situated in people, artefacts and practices, and always temporary and open to debate.
Los estilos APA, Harvard, Vancouver, ISO, etc.
43

Chan, Francis. "Knowledge management in Naval Sea Systems Command : a structure for performance driven knowledge management initiative". Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2002. http://library.nps.navy.mil/uhtbin/hyperion-image/02sep%5FChan.pdf.

Texto completo
Resumen
Thesis (M.S. in Product Development)--Naval Postgraduate School, September 2002.
Thesis advisor(s): Mark E. Nissen, Donald H. Steinbrecher. Includes bibliographical references (p. 113-117). Also available online.
Los estilos APA, Harvard, Vancouver, ISO, etc.
44

Thakkar, Hetal M. "Supporting knowledge discovery in data stream management systems". Diss., Restricted to subscribing institutions, 2008. http://proquest.umi.com/pqdweb?did=1790275561&sid=26&Fmt=2&clientId=1564&RQT=309&VName=PQD.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
45

Groth, Philip. "Knowledge management and discovery for genotype/phenotype data". Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2009. http://dx.doi.org/10.18452/16033.

Texto completo
Resumen
In diseases with a genetic component, examination of the phenotype can aid understanding of the underlying genetics. Technologies to generate high-throughput phenotypes, such as RNA interference (RNAi), have been developed to decipher gene functions. This large-scale characterization of genes strongly increases the amount of phenotypic information. Interpreting the results of such functional screens is a challenge, especially with heterogeneous data sets; thus, there have been only few efforts to make use of phenotype data beyond the single genotype-phenotype relationship. Here, methods are presented for knowledge discovery in phenotypes across species and screening methods. The available databases and various approaches to analyzing their content are reviewed, including a discussion of hurdles to be overcome, e.g. lack of data integration, inadequate ontologies and a shortage of analytical tools. PhenomicDB 2 is an approach to integrate genotype and phenotype data on a large scale, using orthologies for cross-species phenotypes. The focus lies on the uptake of quantitative and descriptive RNAi data and on ontologies of phenotypes, assays and cell lines. Then, the results of a study are presented in which the large set of phenotype data from PhenomicDB is used to predict gene annotations. Text clustering is utilized to group genes based on their phenotype descriptions. It is shown that these clusters correlate well with indicators of biological coherence in gene groups, such as functional annotations from the Gene Ontology (GO) and protein-protein interactions. The clusters are then used to predict gene function by carrying over annotations from well-annotated genes to less well-characterized genes. Finally, the prototype PhenoMIX is presented, integrating genotype and phenotype data with clustered phenotypes, orthologies, interaction data and other similarity measures. Data grouped by these measures are evaluated for their predictiveness of gene functions and phenotype terms.
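The cluster-then-transfer idea can be sketched compactly with scikit-learn: genes are grouped by TF-IDF similarity of their phenotype descriptions, and the majority annotation of each cluster is carried over to unannotated members. The data and parameters below are toy stand-ins, not the thesis's pipeline.

```python
# Compact sketch of cluster-then-transfer gene annotation: group genes by
# TF-IDF similarity of phenotype descriptions, then carry each cluster's
# majority annotation to its unannotated members. Toy data only.
from collections import Counter
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

genes = ["geneA", "geneB", "geneC", "geneD"]
phenotypes = [
    "embryonic lethality and heart defects",
    "heart defects with embryonic lethality",
    "increased body weight and fat storage",
    "fat storage strongly increased",
]
known = {"geneA": "heart development", "geneC": "lipid metabolism"}

X = TfidfVectorizer().fit_transform(phenotypes)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

for cluster in set(labels):
    members = [g for g, l in zip(genes, labels) if l == cluster]
    votes = Counter(known[g] for g in members if g in known)
    if votes:
        prediction = votes.most_common(1)[0][0]
        for g in members:
            if g not in known:
                print(f"{g}: predicted annotation '{prediction}'")
```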
Los estilos APA, Harvard, Vancouver, ISO, etc.
46

Duh, Chinmiin. "Argumentation-based knowledge transformation". Thesis, Royal Holloway, University of London, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.251955.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
47

Zou, Y. "BIM and knowledge based risk management system". Thesis, University of Liverpool, 2017. http://livrepository.liverpool.ac.uk/3010103/.

Texto completo
Resumen
The use of Building Information Modelling (BIM) for construction project risk management has become a growing research trend. However, BIM-based risk management has not been widely used in practice, and two important gaps lead to this problem: 1) very few theories exist that can explain how BIM can be aligned with traditional techniques and integrated into existing processes for project risk management; and 2) current BIM solutions offer very limited support for risk communication and information management during the project development process. To overcome these limitations, this research proposes a new approach in which two traditional risk management techniques, the Risk Breakdown Structure (RBS) and Case-Based Reasoning (CBR), are integrated into BIM-based platforms, and an active linkage between the risk information and BIM is established to support the project lifecycle. The core motivations behind the proposed solution are: 1) a tailored RBS can be used as a knowledge-based approach to classify, store and manage the information of a risk database in a proper structure, and risk information in the RBS can be linked to BIM for review, visualisation and communication; and 2) knowledge and experience stored in past risk reports can contribute to avoiding similar risks in new situations, and the most relevant cases can be linked to BIM to support decision making during the project lifecycle. The scope of this research is limited to bridge projects; however, the basic methods and principles could also be applied to other types of projects. This research proceeded in three phases. In the first phase, the research analysed the conceptual separation of BIM and the linkage rules between different types of risk and BIM. Specifically, an integrated bridge information model was divided into four Levels of Content (LOCs) and six technical systems, based on an analysis of the Industry Foundation Classes (IFC) specification, a critical review of previous studies, and the author's project experience. Then a knowledge-based risk database was developed through an extensive collection of risk data, a process of data mining, and further assessment and translation of the data. Built on the risk database, a tailored RBS was developed to categorise and manage this risk information, and a set of linkage rules between the tailored RBS and the four LOCs and six technical systems of BIM was established. In the second phase, to further implement the linkage rules, a novel method of linking BIM, the RBS, and the Work Breakdown Structure (WBS) into a risk management system was developed. A prototype system was created based on Navisworks and Microsoft SQL Server to support the implementation of the proposed approach. The system allows not only the storage of risk information in a central database but also the linking of related risk information to the BIM model for review, visualisation and simulation. In the third phase, to facilitate the use of previous knowledge and experience for BIM-based risk management, the research proposed an approach combining two Natural Language Processing (NLP) techniques, i.e. the Vector Space Model (VSM) and semantic query expansion, and outlined a new framework for the risk case retrieval system. A prototype was developed using the Python programming language to support the implementation of the proposed method. Preliminary testing results show that the proposed system is capable of retrieving relevant cases automatically and returning, for example, the 10 most similar cases.
The main contribution of this research is the approach of integrating the RBS and CBR into BIM through active linkages. The practical significance is that the proposed approach enables the development of BIM-based risk management software that improves risk identification, analysis and information management during the project development process. This research provides evidence that traditional techniques can be aligned with BIM for risk management. One significant advantage of the proposed method is that it combines the benefits of both traditional techniques and BIM for lifecycle project risk management with minimal disruption to existing working processes.
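The VSM-plus-query-expansion retrieval described above can be sketched as follows: the query is expanded with domain synonyms, and past risk cases are ranked by TF-IDF cosine similarity. The synonym table and case texts are toy stand-ins, not the thesis's data or exact method.

```python
# Sketch of VSM retrieval with semantic query expansion: expand the query
# with domain synonyms, then rank past risk cases by TF-IDF cosine
# similarity. Toy synonym table and cases for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

cases = [
    "settlement of bridge pier foundation during excavation",
    "crane collapse while lifting girder segment",
    "flooding of cofferdam after heavy rainfall",
]
synonyms = {"subsidence": ["settlement"], "lifting": ["hoisting"]}

def expand(query: str) -> str:
    extra = [w for t in query.split() for w in synonyms.get(t, [])]
    return " ".join(query.split() + extra)

query = "pier subsidence risk"
X = TfidfVectorizer().fit_transform(cases + [expand(query)])
scores = cosine_similarity(X[-1], X[:-1]).ravel()

for rank, i in enumerate(scores.argsort()[::-1], 1):
    print(f"{rank}. ({scores[i]:.2f}) {cases[i]}")
```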
Los estilos APA, Harvard, Vancouver, ISO, etc.
48

Dimitrios, Rekleitis. "Cloud-based Knowledge Management in Greek SME’s". Thesis, Linnéuniversitetet, Institutionen för informatik (IK), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-78715.

Texto completo
Resumen
Nowadays, cloud technologies are commonly used by many large organizations to aid knowledge sharing. This brings benefits to the organization by reducing costs, improving security, enhancing content accessibility, improving efficiency, and so on. Small and Medium Enterprises (SMEs), on the other hand, tend to manage their information in a more informal way, without using the specific language or terminology of KM. Moreover, SMEs are hesitant to adopt cloud-based techniques for managing information, for reasons discussed later. This thesis examines the benefits and drawbacks of cloud-based Knowledge Management techniques in Greek SMEs and investigates how knowledge processes are carried out in Greek SMEs with respect to cloud-based Knowledge Management techniques. To accomplish this, I adopted a methodology based on a qualitative approach. More specifically, I provide an exhaustive literature review, and I then contacted five SMEs in Greece to explore, using different techniques, whether these SMEs can benefit from cloud-based Knowledge Management techniques and how inclined they are to adopt such techniques in their organization. I found that three of the SMEs use cloud-based techniques for Knowledge Management, while two do not; one of these two does not manage its knowledge at all. However, all five organizations showed great interest in adopting cloud-based and information system technologies for Knowledge Management. In the end, this work arrives at the following findings and insights: cloud-based Knowledge Management techniques can bring substantial benefits in terms of cost savings and performance, but this presupposes the correct and efficient use of cloud-based techniques; failing to use them efficiently may lead to drawbacks such as reduced organizational performance and reduced savings. This thesis also discusses directions for future work, such as analysing a larger set of organizations, conducting a quantitative analysis, and combining both qualitative and quantitative approaches.
Los estilos APA, Harvard, Vancouver, ISO, etc.
49

Ricks, Wendell R. "Knowledge-Based System for Flight Information Management". W&M ScholarWorks, 1990. https://scholarworks.wm.edu/etd/1539625650.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
50

Killeen, Patrick. "Knowledge-Based Predictive Maintenance for Fleet Management". Thesis, Université d'Ottawa / University of Ottawa, 2020. http://hdl.handle.net/10393/40086.

Texto completo
Resumen
In recent years, advances in information technology have led to an increasing number of devices (or things) being connected to the internet; the resulting data can be used by applications to acquire new knowledge. The Internet of Things (IoT) (a network of computing devices that have the ability to interact with their environment without requiring user interaction) and big data (a field that deals with the exponentially increasing rate of data creation, which is a challenge for the cloud in its current state and for standard data analysis technologies) have become hot topics. With all this data being produced, new applications such as predictive maintenance become possible. One such application is monitoring a fleet of vehicles in real time to predict their remaining useful life, which could help companies lower their fleet management costs by reducing their fleet's average vehicle downtime. The consensus self-organized models (COSMO) approach is an example of a predictive maintenance system for a fleet of public transport buses, which attempts to diagnose faulty buses that deviate from the rest of the bus fleet. The present work proposes a novel IoT-based architecture for predictive maintenance that consists of three primary nodes: namely, the vehicle node (VN), the server leader node (SLN), and the root node (RN). The VN represents the vehicle and performs lightweight data acquisition, data analytics, and data storage. The VN is connected to the fleet via its wireless internet connection. The SLN is responsible for managing a region of vehicles, and it performs heavier-duty data storage, fleet-wide analytics, and networking. The RN is the central point of administration for the entire system. It controls the entire fleet and provides the application interface to the fleet system. A minimum viable prototype (MVP) of the proposed architecture was implemented and deployed to a garage of the Société de Transport de l'Outaouais (STO), Gatineau, Canada. The VN in the MVP was implemented using a Raspberry Pi, which acquired sensor data from an STO hybrid bus by reading from a J1939 network; the SLN was implemented using a laptop; and the RN was deployed using meshcentral.com. The goal of the MVP was to perform predictive maintenance for the STO to help reduce their fleet management costs. The present work also proposes a fleet-wide unsupervised dynamic sensor selection algorithm, which attempts to improve the sensor selection performed by the COSMO approach; I named this algorithm the improved consensus self-organized models (ICOSMO) approach. To analyze the performance of ICOSMO, a fleet simulation was implemented. The J1939 data gathered from an STO hybrid bus using the MVP was used to generate synthetic data to simulate vehicles, faults, and repairs. The deviation detection of the COSMO and ICOSMO approaches was applied to the synthetic sensor data, and the simulation results were used to compare the performance of the two approaches. Results revealed that, in general, ICOSMO improved the accuracy of COSMO when COSMO was not performing optimally, that is, in the following situations: a) when the histogram distance chosen by COSMO was a poor choice, b) in an environment with relatively high sensor white noise, and c) when COSMO selected poor sensors. On average, ICOSMO only rarely reduced the accuracy of COSMO, which is promising since it suggests that deploying ICOSMO as a predictive maintenance system should perform just as well as or better than COSMO.
More experiments are required to better understand the performance of ICOSMO. The goal is to eventually deploy ICOSMO to the MVP.
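The fleet-wide deviation detection at the heart of COSMO can be sketched in simplified form: each vehicle's sensor readings are summarized as a histogram, pairwise histogram distances are computed, and the vehicle most distant from the rest is flagged. The Hellinger distance and the toy data below are illustrative choices, not the exact COSMO/ICOSMO formulation.

```python
# Simplified fleet-wide deviation detection in the spirit of COSMO:
# per-vehicle sensor histograms, pairwise Hellinger distances, and the
# most distant vehicle flagged. Illustrative only, not the exact method.
import numpy as np

def hellinger(p: np.ndarray, q: np.ndarray) -> float:
    return float(np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)))

rng = np.random.default_rng(0)
readings = {                          # per-vehicle sensor samples
    "bus1": rng.normal(50, 5, 1000),
    "bus2": rng.normal(50, 5, 1000),
    "bus3": rng.normal(65, 5, 1000),  # deviating unit
}

bins = np.linspace(30, 90, 25)
hists = {v: np.histogram(x, bins=bins)[0] / len(x)
         for v, x in readings.items()}

names = list(hists)
mean_dist = {v: np.mean([hellinger(hists[v], hists[u])
                         for u in names if u != v]) for v in names}
flagged = max(mean_dist, key=mean_dist.get)
print(mean_dist, "-> most deviating:", flagged)  # bus3
```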
Los estilos APA, Harvard, Vancouver, ISO, etc.