Dissertations on the topic "Knowledge based data management"
Consult the top 50 dissertations for research on the topic "Knowledge based data management".
Andersson, Kent. "Knowledge Technology Applications for Knowledge Management." Doctoral thesis, Uppsala : Institutionen för informationsvetenskap, Univ. [distributör], 2000. http://w3.ub.uu.se/fulltext/91-506-1437-1.pdf.
Maimone, Anthony. "Data and Knowledge Acquisition in Case-based Reasoning for Diabetes Management." Ohio University / OhioLINK, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1156200718.
Adam, Elena Daniela. "Knowledge management cloud-based solutions in small enterprises." Thesis, Internationella Handelshögskolan, Högskolan i Jönköping, IHH, Informatik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-28275.
Goasdoué, François. "Knowledge Representation meets DataBases for the sake of ontology-based data management." Habilitation à diriger des recherches, Université Paris Sud - Paris XI, 2012. http://tel.archives-ouvertes.fr/tel-00759274.
Kairouz, Joseph. "Patient data management system medical knowledge-base evaluation." Thesis, McGill University, 1996. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=24060.
Following a literature survey on evaluation techniques and the architecture of existing expert systems, an overview of the Patient Data Management System hardware and software components is presented. The design of the Expert Monitoring System is elaborated. Following its installation in the Intensive Care Unit, the performance of the Expert Monitoring System was evaluated on real vital-sign data and corrections were formulated. A progressive evaluation technique, a new methodology for evaluating an expert system knowledge base, is proposed for subsequent corrections and evaluations of the Expert Monitoring System.
MILIA, GABRIELE. "Cloud-based solutions supporting data and knowledge integration in bioinformatics." Doctoral thesis, Università degli Studi di Cagliari, 2015. http://hdl.handle.net/11584/266783.
White, Andrew Murray. "The application of knowledge-based techniques to constraint management in engineering databases." Thesis, Georgia Institute of Technology, 1987. http://hdl.handle.net/1853/16894.
Gebhardt, Johan Wilhelm Ludwig. "A comparative study of the business value of computer-based mapping tools in knowledge management." Thesis, Stellenbosch : Stellenbosch University, 2008. http://hdl.handle.net/10019.1/18151.
Повний текст джерелаENGLISH ABSTRACT: In the past decade or two companies started to realise that competitive advantage is not only achieved by optimising their business value chain, but also in managing the knowledge in the company. This led to the development of different knowledge management models and to millions of dollars being spent on knowledge management implementations across the world. Although there were huge successes, a large number of initiatives were spectacular failures - believed to be mainly caused by the linear method of capturing and presenting knowledge. Computer-based mapping tools is a new generation of personal computer (PC) based tools that allow people to present knowledge graphically. Since the focus of most research into computer-based mapping tools has been on the educational use of mapping tools, the focus of this study will be on the business use of these tools. Thus a number of common, off-the-shelf computer-based mapping tools were evaluated to determine whether they can add business value. From the evaluation a decision matrix was developed to assist knowledge workers in selecting the best tool for a specific application. The primary activities of the knowledge value chain model were investigated to select a series of business activities where the use of computer-based mapping tools could possibly generate more business value in the execution of the business activity. These activities were then measured against a set of criteria that was developed in order to evaluate the different computer-based mapping tools. It was found that the selected software applications could be clearly separated based upon their theoretical and philosophical backgrounds into concept mapping tools and mind mapping tools. It was further found that the possible business value that could be derived through the use of these tools is more dependent on the selection of the correct type of tool, than on the selection of a specific software package. Lastly it was found that concept mapping tools could be used across a broader spectrum of business activities. The research also reached the conclusion that the use of concept mapping tools will possibly add more value to a business than the use of mind mapping software.
Brooks, Brad Walton. "Automated Data Import and Revision Management in a Product Lifecycle Management Environment." Diss., Brigham Young University, 2009. http://contentdm.lib.byu.edu/ETD/image/etd3182.pdf.
Meng, Changping, and 蒙昌平. "Discovering meta-paths in large knowledge bases." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2014. http://hdl.handle.net/10722/209504.
Повний текст джерелаpublished_or_final_version
Computer Science
Master
Master of Philosophy
Antoine, Emilien. "Distributed data management with a declarative rule-based language webdamlog." Phd thesis, Université Paris Sud - Paris XI, 2013. http://tel.archives-ouvertes.fr/tel-00933808.
Xie, Tian, and 謝天. "Development of a XML-based distributed service architecture for product development in enterprise clusters." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2005. http://hub.hku.hk/bib/B30477165.
Červienka, Juraj. "Aplikace principů znalostního managementu ve vybrané firmě." Master's thesis, Vysoké učení technické v Brně. Fakulta podnikatelská, 2013. http://www.nusl.cz/ntk/nusl-223953.
Schuster, Alfons. "Supporting data analysis and the management of uncertainty in knowledge-based systems through information aggregation processes." Thesis, University of Ulster, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.264825.
Wang, Qing. "Intelligent Data Mining Techniques for Automatic Service Management." FIU Digital Commons, 2018. https://digitalcommons.fiu.edu/etd/3883.
Marano, Federica. "Exploring formal models of linguistic data structuring. Enhanced solutions for knowledge management systems based on NLP applications." Doctoral thesis, Universita degli studi di Salerno, 2012. http://hdl.handle.net/10556/349.
Повний текст джерелаThe principal aim of this research is describing to which extent formal models for linguistic data structuring are crucial in Natural Language Processing (NLP) applications. In this sense, we will pay particular attention to those Knowledge Management Systems (KMS) which are designed for the Internet, and also to the enhanced solutions they may require. In order to appropriately deal with this topics, we will describe how to achieve computational linguistics applications helpful to humans in establishing and maintaining an advantageous relationship with technologies, especially with those technologies which are based on or produce man-machine interactions in natural language. We will explore the positive relationship which may exist between well-structured Linguistic Resources (LR) and KMS, in order to state that if the information architecture of a KMS is based on the formalization of linguistic data, then the system works better and is more consistent. As for the topics we want to deal with, frist of all it is indispensable to state that in order to structure efficient and effective Information Retrieval (IR) tools, understanding and formalizing natural language combinatory mechanisms seems to be the first operation to achieve, also because any piece of information produced by humans on the Internet is necessarily a linguistic act. Therefore, in this research work we will also discuss the NLP structuring of a linguistic formalization Hybrid Model, which we hope will prove to be a useful tool to support, improve and refine KMSs. More specifically, in section 1 we will describe how to structure language resources implementable inside KMSs, to what extent they can improve the performance of these systems and how the problem of linguistic data structuring is dealt with by natural language formalization methods. In section 2 we will proceed with a brief review of computational linguistics, paying particular attention to specific software packages such Intex, Unitex, NooJ, and Cataloga, which are developed according to Lexicon-Grammar (LG) method, a linguistic theory established during the 60’s by Maurice Gross. In section 3 we will describe some specific works useful to monitor the state of the art in Linguistic Data Structuring Models, Enhanced Solutions for KMSs, and NLP Applications for KMSs. In section 4 we will cope with problems related to natural language formalization methods, describing mainly Transformational-Generative Grammar (TGG) and LG, plus other methods based on statistical approaches and ontologies. In section 5 we will propose a Hybrid Model usable in NLP applications in order to create effective enhanced solutions for KMSs. Specific features and elements of our hybrid model will be shown through some results on experimental research work. The case study we will present is a very complex NLP problem yet little explored in recent years, i.e. Multi Word Units (MWUs) treatment. In section 6 we will close our research evaluating its results and presenting possible future work perspectives. [edited by author]
Radovanovic, Aleksandar. "Concept Based Knowledge Discovery from Biomedical Literature." Thesis, Online access, 2009. http://etd.uwc.ac.za/usrfiles/modules/etd/docs/etd_gen8Srv25Nme4_9861_1272229462.pdf.
Spiegler, Sebastian R. "Comparative study of clustering algorithms on textual databases : clustering of curricula vitae into competency-based groups to support knowledge management /." Saarbrücken : VDM Verl. Müller, 2007. http://deposit.d-nb.de/cgi-bin/dokserv?id=3035354&prov=M&dok_var=1&dok_ext=htm.
Kybkalo, Anatoliy. "Znalostní management a znalostní báze." Master's thesis, Vysoká škola ekonomická v Praze, 2015. http://www.nusl.cz/ntk/nusl-203842.
Olsson, Neve Theresia. "Capturing and Analysing Emotions to Support Organisational Learning : The Affect Based Learning Matrix." Doctoral thesis, Kista : Department of Computer and Systems Sciences, Stockholm University, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-1230.
Šmarda, Miroslav. "Aplikace principů znalostního managementů ve vybrané firmě." Master's thesis, Vysoké učení technické v Brně. Fakulta podnikatelská, 2017. http://www.nusl.cz/ntk/nusl-318298.
Pérution-Kihli, Guillaume. "Data Management in the Existential Rule Framework : Translation of Queries and Constraints." Electronic Thesis or Diss., Université de Montpellier (2022-....), 2023. http://www.theses.fr/2023UMONS030.
Повний текст джерелаThe general context of this work is the issue of designing high-quality systems that integrate multiple data sources via a semantic layer encoded in a knowledge representation and reasoning language. We consider knowledge-based data management (KBDM) systems, which are structured in three layers: the data layer, which comprises the data sources, the knowledge (or ontological) layer, and the mappings between the two. Mappings and knowledge are expressed within the existential rule framework. One of the intrinsic difficulties in designing a KBDM is the need to understand the content of data sources. Data sources are often provided with typical queries and constraints, from which valuable information about their semantics can be drawn, as long as this information is made intelligible to KBDM designers. This motivates our core question: is it possible to translate data queries and constraints at the knowledge level while preserving their semantics?The main contributions of this thesis are the following. We extend previous work on data-to-ontology query translation with new techniques for the computation of perfect, minimally complete, or maximally sound query translations. Concerning data-to-ontology constraint translation, we define a general framework and apply it to several classes of constraints. Finally, we provide a sound and complete query rewriting operator for disjunctive existential rules and disjunctive mappings, as well as undecidability results, which are of independent interest
Hatem, Muna Salman. "A framework for semantic web implementation based on context-oriented controlled automatic annotation." Thesis, University of Bradford, 2009. http://hdl.handle.net/10454/3207.
Повний текст джерелаSalman, Munir [Verfasser], Matthias [Gutachter] Hemmje, and Dominic [Gutachter] Heutelbeck. "Flexible Distributed R&D Data Management Supporting Social Network-Based Knowledge, Content, and Software Asset Integration Management in Collaborative and Co-Creative R&D and Innovation / Munir Salman ; Gutachter: Matthias Hemmje, Dominic Heutelbeck." Hagen : FernUniversität in Hagen, 2018. http://d-nb.info/1170389791/34.
Повний текст джерелаKrive, Jacob. "Effectiveness of Evidence-Based Computerized Physician Order Entry Medication Order Sets Measured by Health Outcomes." NSUWorks, 2013. http://nsuworks.nova.edu/gscis_etd/202.
Повний текст джерелаCaballé, Llobet Santi. "A Computational Model for the Construction of Knowledge-based Collaborative Learning Distributed Applications." Doctoral thesis, Universitat Oberta de Catalunya, 2008. http://hdl.handle.net/10803/9127.
Повний текст джерелаUn camp de recerca important dins del paradigma del Computer-Supported Collaborative Learning (CSCL) és la importància en la gestió eficaç de la informació d'esdeveniments generada durant l'activitat de l'aprenentatge col·laboratiu virtual, per a proporcionar coneixement sobre el comportament dels membres del grup. Aquesta visió és especialment pertinent en l'escenari educatiu actual que passa d'un paradigma tradicional - centrat en la figura d'un instructor magistral - a un paradigma emergent que considera els estudiants com actors centrals en el seu procés d'aprenentatge. En aquest nou escenari, els estudiants aprenen, amb l'ajuda de professors, la tecnologia i els altres estudiants, el que potencialment necessitaran per a desenvolupar les seves activitats acadèmiques o professionals futures.
Els principals aspectes a tenir en compte en aquest context són, primer de tot, com dissenyar una plataforma sota el paradigma del CSCL, que es pugui utilitzar en situacions reals d'aprenentatge col·laboratiu complexe i a llarg termini, basades en el model d'aprenentatge de resolució de problemes. I que permet al professor una anàlisi del grup més eficaç així com donar el suport adequat als estudiants quan sigui necessari.
En segon lloc, com extreure coneixement pertinent de la col·laboració per donar consciència i retorn als estudiants a nivell individual i de rendiment del grup, així com per a propòsits d'avaluació.
L'assoliment d'aquests objectius impliquen el disseny d'un model conceptual d'interacció durant l'aprenentatge col·laboratiu que estructuri i classifiqui la informació generada en una aplicació col·laborativa en diferents nivells de descripció. A partir d'aquesta aproximació conceptual, els models computacionals hi donen resposta per a proporcionar una extracció eficaç del coneixement produït per l'individu i per l'activitat del grup, així com la possibilitat d'explotar aquest coneixement com una eina metacognitiva pel suport en temps real i regulat del procés d'aprenentatge col·laboratiu.
A més a més, les necessitats dels entorns CSCL han evolucionat en gran mesura durant els darrers anys d'acord amb uns requisits pedagògics i tecnològics cada cop més exigents. Els entorns d'aprenentatge col·laboratius virtuals ara ja no depenen de grups d'estudiants homogenis, continguts i recursos d'aprenentatge estàtics, ni pedagogies úniques, sinó que exigeixen una forta personalització i un alt grau de flexibilitat. En aquest nou escenari, les organitzacions educatives actuals necessiten estendre's i moure's cap a paradigmes d'ensenyament altament personalitzats, amb immediatesa i constantment, on cada paradigma incorpora el seu propi model pedagògic, el seu propi objectiu d'aprenentatge i incorpora els seus propis recursos educatius específics.
Les demandes de les organitzacions actuals també inclouen la integració efectiva, en termes de cost i temps, de sistemes d'aprenentatge llegats i externs, que pertanyen a altres institucions, departaments i cursos. Aquests sistemes llegats es troben implementats en llenguatges diferents, suportats per plataformes heterogènies i distribuïdes arreu, per anomenar alguns dels problemes més habituals. Tots aquests problemes representen certament un gran repte per la comunitat de recerca actual i futura. Per tant, els propers esforços han d'anar encarats a ajudar a desenvolupadors, recercaires, tecnòlegs i pedagogs a superar aquests exigents requeriments que es troben actualment en el domini del CSCL, així com proporcionar a les organitzacions educatives solucions ràpides i flexibles per a potenciar i millorar el rendiment i resultats de l'aprenentatge col·laboratiu. Aquesta tesi proposa un primer pas per aconseguir aquests objectius.
An important research topic in Computer Supported Collaborative Learning (CSCL) is to explore the importance of efficient management of event information generated from group activity in collaborative learning practices for its further use in extracting and providing knowledge on interaction behavior.
The essential issue here is, first, how to design a CSCL platform that can be used in real, long-term, complex collaborative problem-solving situations and that enables the instructor both to analyze group interaction effectively and to provide adequate support when needed; and, secondly, how to extract relevant knowledge from the collaboration in order to provide learners with efficient awareness and feedback regarding individual and group performance and assessment. Achieving these tasks involves the design of a conceptual framework of collaborative learning interaction that structures and classifies the information generated in a collaborative application at several levels of description. Computational models then realize this conceptual approach so that the knowledge produced by individual and group activity can be managed efficiently and further exploited as a metacognitive tool for real-time coaching and regulation of the collaborative learning process.
In addition, CSCL needs have been evolving over recent years in line with increasingly demanding pedagogical and technological requirements. On-line collaborative learning environments no longer depend on homogeneous groups, static content and resources, and single pedagogies; high customization and flexibility are a must in this context. As a result, current educational organizations need to extend and move to highly customized learning and teaching forms in a timely fashion, each incorporating its own pedagogical approach, targeting a specific learning goal, and incorporating its own specific resources.
All of these issues certainly represent a great challenge for current and future research in this field. Therefore, further efforts need to be made to help developers, technologists and pedagogues overcome the demanding requirements currently found in the CSCL domain, and to provide modern educational organizations with fast, flexible and effective solutions for enhancing and improving collaborative learning performance and outcomes. This thesis proposes a first step toward these goals.
The main contribution of this thesis is to explore the importance of efficiently managing the information generated from group activity in Computer-Supported Collaborative Learning (CSCL) practices for its further use in extracting and providing knowledge on interaction behavior. To this end, the first step is to investigate a conceptual model for data analysis and management so as to identify the many kinds of indicators that describe collaboration and learning and to classify them into high-level potential categories of effective collaboration. Indeed, there are key discourse elements and aspects beyond those highlighted in the literature that play an important role both in promoting student participation and in enhancing group and individual performance, such as the impact and effectiveness of students' contributions; these are explored in this work. By making these elements explicit, the proposed discussion model achieves high student participation rates and contribution quality in a more natural and effective way. This approach goes beyond a mere interaction analysis of asynchronous discussion in the sense that it builds a multi-functional model that fosters knowledge sharing and construction, develops a strong sense of community among students, and provides tutors with a powerful tool for monitoring students and regulating the discussion, while allowing for peer facilitation through self, peer and group awareness and assessment.
The results of the research described so far motivate the development of a computational system that translates the conceptual model into a computer system implementing the management of the information and knowledge acquired from group activity, so that it can be efficiently fed back to the collaboration. The achievement of a generic, robust, flexible, interoperable, reusable computational model that meets the fundamental functional needs shared by any collaborative learning experience is investigated at length in this thesis. The systematic reuse of this computational model permits fast adaptation to new learning and teaching requirements, such as learning by discussion, by relying on the most advanced software engineering processes and methodologies from the field of software reuse, and thus important benefits are expected in terms of productivity, quality, and cost.
Therefore, another important contribution is to explore and extend suitable software reuse techniques, such as Generic Programming, so as to allow the computational model to be successfully particularized in as many situations as possible without losing efficiency in the process. In particular, based on domain analysis techniques, a high-level computational description and formalization of the CSCL domain are identified and modeled. Then, different platform-specific developments that realize the conceptual description are provided. A certain level of automation is also explored by means of advanced techniques based on Service-Oriented Architectures and Web services when passing from the conceptual specification to the desired realization, which greatly facilitates the development of CSCL applications using this computational model.
Based on the outcomes of these investigations, this thesis contributes computational collaborative learning systems that are capable of managing both qualitative and quantitative information and transforming it into useful knowledge for all the parties involved in an efficient and clear way. This is achieved both by the specific assessment of each contribution by the tutor who supervises the discussion and by rich statistical information about students' participation. This statistical data is automatically provided by the system; for instance, it sheds light on the students' engagement in the discussion forum or on how much interest a student's intervention drew, in the form of participation impact, level of passivity, proactivity, reactivity, and so on (a toy illustration of such indicators is sketched after this abstract). The aim is to provide both a deeper understanding of the actual discussion process and a more objective assessment of individual and group activity.
This information is then processed and analyzed by means of a multivariate statistical model in order to extract useful knowledge about the collaboration. The knowledge acquired is communicated back to the members of the learning group and their tutor in appropriate formats, thus providing valuable awareness and feedback on group interaction and performance, and it may also help identify and assess the real skills and intentions of participants. The most important benefit expected from the conceptual model for interaction data analysis and management is a substantial improvement and enhancement of collaborative learning and teaching experiences.
Finally, the possibilities of using distributed and Grid technology to support real CSCL environments are also extensively explored in this thesis. The results of this investigation lead to the conclusion that the features provided by these technologies form an ideal context for supporting and meeting the demanding requirements of collaborative learning applications. This approach is taken one step further to enhance the possibilities of the computational model in the CSCL domain, and it is successfully adopted on an empirical and application basis. From the results achieved, the feasibility of distributed technologies to considerably enhance and improve the collaborative learning experience is demonstrated. In particular, Grid computing is successfully applied for the specific purpose of increasing the efficiency of processing large amounts of information from group-activity log files.
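The indicators named in the abstract above (participation impact, passivity, proactivity, reactivity) lend themselves to a simple illustration. The sketch below computes crude versions of them from a hypothetical discussion-forum log; the field names, the toy data and the passivity formula are assumptions for illustration only, not the statistical model actually used in the thesis.

```python
from collections import defaultdict

# Hypothetical discussion-forum log: (post_id, author, parent_post_id or None).
posts = [
    (1, "ana", None), (2, "ben", 1), (3, "carla", 1),
    (4, "ana", 2), (5, "ben", 4), (6, "carla", None),
]

author_of = {pid: author for pid, author, _ in posts}
started = defaultdict(int)   # threads opened: a crude proactivity signal
replies = defaultdict(int)   # replies written: a crude reactivity signal
received = defaultdict(int)  # replies received: a crude impact signal

for pid, author, parent in posts:
    if parent is None:
        started[author] += 1
    else:
        replies[author] += 1
        received[author_of[parent]] += 1

for student in sorted(set(author_of.values())):
    activity = started[student] + replies[student]
    print(student, {
        "proactivity": started[student],
        "reactivity": replies[student],
        "impact": received[student],
        # Passivity is modelled here simply as the inverse of activity; the
        # thesis's multivariate model is far richer than this toy indicator.
        "passivity": round(1 / (1 + activity), 2),
    })
```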
COLOMBARI, RUGGERO. "Digitalization and operational data-driven decision-making: A socio-technical investigation of the implications for front-line production managers and workers." Doctoral thesis, Politecnico di Torino, 2022. http://hdl.handle.net/11583/2963942.
Повний текст джерелаGomis, Marie-Joseph. "Web-based ERP systems: the new generation : case study: mySAP ERP." Thesis, Jönköping University, JTH, Computer and Electrical Engineering, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-7711.
With the proliferation of the Internet, ERP systems, like all domains of Information Technology, have undergone a significant evolution. This final thesis project is a study of the evolution of ERP systems, more precisely of their migration to the Web, which has given birth to a new generation of systems: Web-based or Web-enabled ERP systems. This migration to the Web is motivated by the difficulty of enabling communication between partners' legacy systems and organizations' ERP systems. A historical evolution of these systems is presented in order to understand the reasons that led vendors to adopt Web service technology. Based on different studies, the main technologies, such as Web services, Service-Oriented Architecture and Web application servers, are also presented. Following an interpretative research approach, mySAP ERP has been chosen as a case study. This Master's thesis was carried out at AIRBUS France within the framework of the SAP Customer Competence Center (SAPCCC) Web site project. The project aims at rebuilding the SAPCCC Web site. The new characteristic of the Web site is that it is accessible to all AIRBUS partners working with SAP applications. To make the Web site accessible to partners from their own applications located on their own platforms, the development was done with mySAP ERP, an ERP system that uses Web service technology. Finally, this thesis presents a comparative study between traditional ERP systems and the new generation of Web-based ERP systems.
El, Sarraj Lama. "Exploitation d'un entrepôt de données guidée par des ontologies : application au management hospitalier." Thesis, Aix-Marseille, 2014. http://www.theses.fr/2014AIXM4331.
This research is situated in the domain of Data Warehouse (DW) personalization and concerns DW assistance. Specifically, we are interested in assisting a user during an online analysis process in using existing operational resources. The application of this research concerns hospital management, for hospital governance, and is limited to the scope of the Program of Medicalization of Information Systems (PMSI). This research was supported by the Public Hospitals of Marseille (APHM). Our proposal is a semantic approach based on ontologies. The support system implementing this approach, called the Ontology-based Personalization System (OPS), is based on a knowledge base operated by a personalization engine. The knowledge base is composed of three ontologies: a domain ontology, an ontology of the DW structure, and an ontology of resources. The personalization engine allows, firstly, a personalized search of DW resources based on the user's profile and, secondly, for a particular resource, an expansion of the search by recommending new resources based on the context of that resource. To recommend new resources, we have proposed three possible strategies. To validate our proposal, a prototype of the OPS system was developed and a personalization engine was implemented in Java. This engine exploits an OWL knowledge base composed of three interconnected OWL ontologies. We illustrate three experimental scenarios related to the PMSI and defined with APHM domain experts.
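As a rough illustration of the kind of profile-driven resource search described above, the sketch below matches data-warehouse resources to a user profile over a small RDF graph using rdflib and SPARQL. The vocabulary (ex:theme, ex:interestedIn), the resources and the user are all made up; this is not the OPS system's actual ontology or engine.

```python
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/ops/")   # hypothetical vocabulary
g = Graph()
g.bind("ex", EX)

# Toy data-warehouse resources, each tagged with an analysis theme.
g.add((EX.report1, EX.theme, Literal("length-of-stay")))
g.add((EX.report2, EX.theme, Literal("hospital-activity")))
g.add((EX.report3, EX.theme, Literal("length-of-stay")))

# Toy user profile: themes this hospital manager is interested in.
g.add((EX.user42, EX.interestedIn, Literal("length-of-stay")))

# Personalized search: resources whose theme matches the user's interests.
query = """
PREFIX ex: <http://example.org/ops/>
SELECT ?resource WHERE {
    ex:user42 ex:interestedIn ?theme .
    ?resource ex:theme ?theme .
}
"""
for row in g.query(query):
    print(row.resource)
```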
Molch, Silke. "Datenmodelle für fachübergreifende Wissensbasen in der interdisziplinären Anwendung." TUDpress, 2019. https://tud.qucosa.de/id/qucosa%3A36574.
Повний текст джерелаHarley, Samuel, Michael Reil, Thea Blunt-Henderson, and George Bartlett. "Data, Information, and Knowledge Management." International Foundation for Telemetering, 2005. http://hdl.handle.net/10150/604784.
The Aberdeen Test Center Versatile Information System – Integrated, ONline (VISION) project has developed and deployed a telemetry capability based upon modular instrumentation, seamless communications, and the VISION Digital Library. Each of the three key elements of VISION contributes to a holistic solution to the data collection, distribution, and management requirements of Test and Evaluation. This paper provides an overview of VISION instrumentation, communications, and overall data management technologies, with a focus on engineering performance data.
REIS, JUNIOR JOSE S. B. "Métodos e softwares para análise da produção científica e detecção de frentes emergentes de pesquisa." reponame:Repositório Institucional do IPEN, 2015. http://repositorio.ipen.br:8080/xmlui/handle/123456789/26929.
Progress on earlier projects highlighted the need to address the problem of software for detecting emerging research and development trends from databases of scientific publications. A lack of efficient computational applications dedicated to this purpose became evident, even though such tools are of great value for better planning of research and development programmes in institutions. A review of the currently available software was therefore carried out in order to clearly delineate the opportunity to develop new tools. As a result, an application called Citesnake was implemented, designed specifically to support the detection and study of emerging trends through the analysis of networks of several types extracted from scientific databases. Using this robust and effective computational tool, analyses of emerging research and development fronts were conducted in the area of Generation IV nuclear power systems, in order to identify, among the reactor types selected as the most promising by the GIF (Generation IV International Forum), those that have developed the most over the last ten years and that currently appear the most capable of fulfilling the promises made about their innovative concepts.
Dissertation (Master's degree in Nuclear Technology), Instituto de Pesquisas Energéticas e Nucleares (IPEN-CNEN/SP).
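As a rough, hypothetical illustration of the kind of network analysis the abstract above refers to, the sketch below builds a keyword co-occurrence graph from a few invented publication records with networkx and ranks links by weight and recency as a crude emergence signal; it is not the Citesnake implementation.

```python
from itertools import combinations
import networkx as nx

# Hypothetical publication records: publication year and author keywords.
records = [
    (2014, ["sodium fast reactor", "passive safety"]),
    (2016, ["molten salt reactor", "passive safety"]),
    (2021, ["molten salt reactor", "fuel cycle"]),
    (2022, ["molten salt reactor", "passive safety"]),
]

g = nx.Graph()
for year, keywords in records:
    for a, b in combinations(sorted(set(keywords)), 2):
        if g.has_edge(a, b):
            g[a][b]["weight"] += 1
            g[a][b]["last_seen"] = max(g[a][b]["last_seen"], year)
        else:
            g.add_edge(a, b, weight=1, last_seen=year)

# A crude "emergence" signal: frequently co-occurring pairs that are also recent.
for a, b, data in sorted(g.edges(data=True), key=lambda e: -e[2]["weight"]):
    print(a, "--", b, data)
```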
Gängler, Thomas. "Semantic Federation of Musical and Music-Related Information for Establishing a Personal Music Knowledge Base." Master's thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2011. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-72434.
Muhammad, Fuad Muhammad Marwan. "Similarity Search in High-dimensional Spaces with Applications to Time Series Data Mining and Information Retrieval." Phd thesis, Université de Bretagne Sud, 2011. http://tel.archives-ouvertes.fr/tel-00619953.
Amad, Ashraf. "L'acquisition et l'extraction de connaissances dans un contexte patrimoniale peu documenté." Thesis, Paris 8, 2017. http://www.theses.fr/2017PA080101.
The importance of cultural heritage documentation increases in parallel with the risks to which it is exposed, such as wars, uncontrolled urban development, natural disasters, neglect and inappropriate conservation techniques or strategies. In addition, this documentation is a fundamental tool for the assessment, the conservation, and the management of cultural heritage. Consequently, this tool allows us to estimate the historical, scientific, social and economic value of this heritage. According to several international institutions dedicated to the preservation of cultural heritage, there is an urgent need to develop computer solutions to facilitate and support the documentation of poorly documented cultural heritage, especially in developing countries where there is a lack of resources. Among these countries, Palestine represents a relevant case study of this lack of heritage documentation. To address this issue, we propose an approach to knowledge acquisition and extraction in the context of poorly documented heritage. We take the Church of the Nativity in Palestine as a case study and implement our theoretical approach through the development of a platform for the acquisition and extraction of heritage knowledge. Our solution is based on semantic technologies, which make it possible, from the outset, to provide a rich ontological description, a better structuring of the information, a high level of interoperability and better automatic processing without additional effort. Additionally, our approach is evolutionary and reciprocal, because the acquisition of knowledge (in structured form) improves the extraction of heritage knowledge from unstructured text and vice versa. Therefore, the interaction between the two components of our system, as well as the heritage knowledge itself, develops and improves over time, especially as our system uses the experts' manual contributions and validation of the automatic results (in both components) to optimize its performance.
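A minimal sketch of the reciprocal loop described above (structured knowledge acquisition helping extraction from unstructured text), assuming rdflib and a made-up vocabulary; the entities, properties and the naive gazetteer-style extraction rule are illustrative only and do not reflect the platform built in the thesis.

```python
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/heritage/")   # hypothetical vocabulary
g = Graph()

# Structured acquisition: knowledge entered directly by experts.
g.add((EX.NativityChurch, EX.label, Literal("Church of the Nativity")))
g.add((EX.NativityChurch, EX.locatedIn, Literal("Bethlehem")))

# Unstructured source: a sentence from a hypothetical survey report.
text = "The narthex of the Church of the Nativity was restored in 2016."

# Naive extraction step: known labels in the graph act as a gazetteer,
# so every acquired entity makes the text extraction a little better.
for subject, _, label in g.triples((None, EX.label, None)):
    if str(label) in text and "restored" in text:
        g.add((subject, EX.intervention, Literal("restoration, 2016")))

for triple in g:
    print(triple)
```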
Datta, Roshni. "Knowledge-Based Performance Management Framework." The Ohio State University, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=osu1293725862.
Jäkel, Tobias. "Role-based Data Management." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-224416.
Montoya, David. "Une base de connaissance personnelle intégrant les données d'un utilisateur et une chronologie de ses activités." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLN009/document.
Typical Internet users today have their data scattered over several devices, applications, and services. Managing and controlling one's data is increasingly difficult. In this thesis, we adopt the viewpoint that the user should be given the means to gather and integrate her data, under her full control. In that direction, we designed a system that integrates and enriches the data of a user from multiple heterogeneous sources of personal information into an RDF knowledge base. The system is open-source and implements a novel, extensible framework that facilitates the integration of new data sources and the development of new modules for deriving knowledge. We first show how user activity can be inferred from smartphone sensor data. We introduce a time-based clustering algorithm to extract stay points from location history data. Using data from additional mobile phone sensors, geographic information from OpenStreetMap, and public transportation schedules, we introduce a transportation mode recognition algorithm to derive the different modes and routes taken by the user when traveling. The algorithm derives the itinerary followed by the user by finding the most likely sequence in a linear-chain conditional random field whose feature functions are based on the output of a neural network. We also show how the system can integrate information from the user's email messages, calendars, address books, social network services, and location history into a coherent whole. To do so, it uses entity resolution to find the set of avatars used by each real-world contact and performs spatiotemporal alignment to connect each stay point with the event it corresponds to in the user's calendar. Finally, we show that such a system can also be used for multi-device and multi-system synchronization and allow knowledge to be pushed to the sources. We present extensive experiments
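As a minimal sketch in the spirit of the time-based stay-point extraction mentioned above (not the thesis's actual algorithm), the following groups consecutive location fixes that remain within a distance threshold for a minimum duration; the thresholds, the planar distance and the toy track are simplifying assumptions.

```python
from math import hypot

# Hypothetical location history: (timestamp in seconds, x, y) in metres on a
# local planar approximation; a real system would use GPS fixes.
history = [
    (0, 0.0, 0.0), (300, 5.0, 4.0), (600, 8.0, 2.0),          # lingering
    (900, 400.0, 300.0),                                       # moving
    (1200, 820.0, 610.0), (1500, 823.0, 612.0), (1800, 825.0, 608.0),
]

DIST_MAX = 50.0     # metres: points of one stay must remain this close
TIME_MIN = 600      # seconds: a stay must last at least this long

def stay_points(track):
    """Group consecutive fixes that stay within DIST_MAX for at least TIME_MIN."""
    stays, i = [], 0
    while i < len(track):
        j = i + 1
        while j < len(track) and hypot(track[j][1] - track[i][1],
                                       track[j][2] - track[i][2]) <= DIST_MAX:
            j += 1
        if track[j - 1][0] - track[i][0] >= TIME_MIN:
            xs = [p[1] for p in track[i:j]]
            ys = [p[2] for p in track[i:j]]
            # One stay point: (start time, end time, centroid x, centroid y).
            stays.append((track[i][0], track[j - 1][0],
                          sum(xs) / len(xs), sum(ys) / len(ys)))
            i = j
        else:
            i += 1
    return stays

print(stay_points(history))
```

A real pipeline would also merge nearby stays and use geodesic distances on raw GPS coordinates rather than this planar toy model.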
Stonehouse, George. "Knowledge based strategy : appraising knowledge creation capability in organisations." Thesis, Edinburgh Napier University, 2008. http://researchrepository.napier.ac.uk/Output/2446.
Rudd, Susan Elizabeth. "Knowledge-based analysis of partial discharge data." Thesis, University of Strathclyde, 2010. http://oleg.lib.strath.ac.uk:80/R/?func=dbin-jump-full&object_id=14447.
Rangaraj, Jithendra Kumar. "Knowledge-based Data Extraction Workbench for Eclipse." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1354290498.
Leigh, Christopher. "Knowledge management : a practice-based approach." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2008. https://ro.ecu.edu.au/theses/236.
Chan, Francis. "Knowledge management in Naval Sea Systems Command : a structure for performance driven knowledge management initiative." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2002. http://library.nps.navy.mil/uhtbin/hyperion-image/02sep%5FChan.pdf.
Thesis advisor(s): Mark E. Nissen, Donald H. Steinbrecher. Includes bibliographical references (p. 113-117). Also available online.
Thakkar, Hetal M. "Supporting knowledge discovery in data stream management systems." Diss., Restricted to subscribing institutions, 2008. http://proquest.umi.com/pqdweb?did=1790275561&sid=26&Fmt=2&clientId=1564&RQT=309&VName=PQD.
Groth, Philip. "Knowledge management and discovery for genotype/phenotype data." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2009. http://dx.doi.org/10.18452/16033.
In diseases with a genetic component, examination of the phenotype can aid understanding of the underlying genetics. Technologies to generate high-throughput phenotypes, such as RNA interference (RNAi), have been developed to decipher functions for genes. This large-scale characterization of genes strongly increases phenotypic information. It is a challenge to interpret results of such functional screens, especially with heterogeneous data sets. Thus, there have been only a few efforts to make use of phenotype data beyond the single genotype-phenotype relationship. Here, methods are presented for knowledge discovery in phenotypes across species and screening methods. The available databases and various approaches to analyzing their content are reviewed, including a discussion of hurdles to be overcome, e.g. lack of data integration, inadequate ontologies and shortage of analytical tools. PhenomicDB 2 is an approach to integrate genotype and phenotype data on a large scale, using orthologies for cross-species phenotypes. The focus lies on the uptake of quantitative and descriptive RNAi data and ontologies of phenotypes, assays and cell lines. Then, the results of a study are presented in which the large set of phenotype data from PhenomicDB is taken to predict gene annotations. Text clustering is utilized to group genes based on their phenotype descriptions. It is shown that these clusters correlate well with indicators for biological coherence in gene groups, such as functional annotations from the Gene Ontology (GO) and protein-protein interactions. The clusters are then used to predict gene function by carrying over annotations from well-annotated genes to less well-characterized genes. Finally, the prototype PhenoMIX is presented, integrating genotype and phenotype data with clustered phenotypes, orthologies, interaction data and other similarity measures. Data grouped by these measures are evaluated for their predictiveness in gene functions and phenotype terms.
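A minimal sketch of the general strategy described above: cluster genes by their free-text phenotype descriptions, then carry annotations from well-annotated genes over to their cluster mates. It uses scikit-learn; the toy phenotype texts, the example GO labels and the choice of TF-IDF with k-means are assumptions, not the thesis's actual pipeline.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical free-text phenotype descriptions keyed by gene.
phenotypes = {
    "geneA": "reduced cell migration and delayed wound healing",
    "geneB": "impaired cell migration, slow wound closure",
    "geneC": "abnormal lipid storage and enlarged adipocytes",
    "geneD": "excess lipid accumulation in fat tissue",
}
# Hypothetical prior annotations (e.g. GO terms) known for some genes only.
known = {"geneA": "GO:0016477 cell migration",
         "geneC": "GO:0019915 lipid storage"}

genes = list(phenotypes)
vectors = TfidfVectorizer().fit_transform([phenotypes[g] for g in genes])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

# Carry annotations from well-annotated genes over to their cluster mates.
for cluster in set(labels):
    members = [m for m, lab in zip(genes, labels) if lab == cluster]
    carried = sorted({known[m] for m in members if m in known})
    for gene in members:
        print(gene, known.get(gene, "; ".join(carried) + " (predicted)"))
```

The thesis validates such clusters against GO annotations and protein-protein interactions; this toy version only mirrors the annotation carry-over idea.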
Duh, Chinmiin. "Argumentation-based knowledge transformation." Thesis, Royal Holloway, University of London, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.251955.
Zou, Y. "BIM and knowledge based risk management system." Thesis, University of Liverpool, 2017. http://livrepository.liverpool.ac.uk/3010103/.
Dimitrios, Rekleitis. "Cloud-based Knowledge Management in Greek SME's." Thesis, Linnéuniversitetet, Institutionen för informatik (IK), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-78715.
Ricks, Wendell R. "Knowledge-Based System for Flight Information Management." W&M ScholarWorks, 1990. https://scholarworks.wm.edu/etd/1539625650.
Killeen, Patrick. "Knowledge-Based Predictive Maintenance for Fleet Management." Thesis, Université d'Ottawa / University of Ottawa, 2020. http://hdl.handle.net/10393/40086.