Dissertations / Theses on the topic 'KNOWLEDGE DISCOVERY BASED TECHNIQUE'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'KNOWLEDGE DISCOVERY BASED TECHNIQUE.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Mohd, Saudi Madihah. "A new model for worm detection and response : development and evaluation of a new model based on knowledge discovery and data mining techniques to detect and respond to worm infection by integrating incident response, security metrics and apoptosis." Thesis, University of Bradford, 2011. http://hdl.handle.net/10454/5410.
Radovanovic, Aleksandar. "Concept Based Knowledge Discovery from Biomedical Literature." Thesis, Online access, 2009. http://etd.uwc.ac.za/usrfiles/modules/etd/docs/etd_gen8Srv25Nme4_9861_1272229462.pdf.
Aamot, Elias. "Literature-based knowledge discovery in climate science." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for datateknikk og informasjonsvitenskap, 2014. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-27047.
Shelke, Yuri Rajendra. "Knowledge Based Topology Discovery and Geo-localization." The Ohio State University, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=osu1276877783.
Yildiz, Meliha Yetisgen. "Using statistical and knowledge-based approaches for literature-based discovery /." Thesis, Connect to this title online; UW restricted, 2007. http://hdl.handle.net/1773/7178.
Vermilyer, Robert. "Knowledge Discovery in Content-Based Image Retrieval Systems." NSUWorks, 2005. http://nsuworks.nova.edu/gscis_etd/898.
Ajala, Adebunmi Elizabeth. "Acquiring and filtering knowledge : discovery & case-based reasoning." Thesis, University of Surrey, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.433304.
Phan, John H. "Biomarker discovery and clinical outcome prediction using knowledge based-bioinformatics." Diss., Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/33855.
Yu, Zhiguo. "Cooperative Semantic Information Processing for Literature-Based Biomedical Knowledge Discovery." UKnowledge, 2013. http://uknowledge.uky.edu/ece_etds/33.
Siochi, Fernando C. "Building a knowledge based simulation optimization system with discovery learning." Diss., This resource online, 1995. http://scholar.lib.vt.edu/theses/available/etd-06062008-155425/.
Engels, Robert. "Component based user guidance in knowledge discovery and data mining /." Sankt Augustin : Infix, 1999. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=008752552&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.
Emami, Leila. "Conceptual Browser, a concept-based knowledge extraction technique." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape7/PQDD_0001/MQ43162.pdf.
Full textWang, Keqin. "Knowledge discovery in manufacturing quality data to support product design decision making." Troyes, 2010. http://www.theses.fr/2010TROY0005.
Full textThis work studies knowledge extraction in manufacturing quality data (MQD) for support-ing design decisions. Firstly, an ontological approach for analyzing design decisions and identifying designer’s needs for manufacturing quality knowledge is proposed. The decisions are analyzed ranging from task clarification, conceptual design, embodiment design to detail design. A decision model is proposed in which decisions and its knowledge elements are illustrated. An ontology is constructed to represent the decisions and their knowledge needs. Secondly, MQD preparation for further knowledge discovery is described. The nature of data in manufacturing is described. A GT (group technology) and QBOM (Quality Bill of Material)-based method is proposed to classify and organize MQD. As an important factor, the data quality (DQ) issues related with MQD is also analyzed for data mining (DM) application. A QFD (quality function deployment) based approach is proposed for translating data consumers’ DQ needs into specific DQ dimensions and initiatives. Thirdly, a DM-based manufacturing quality knowledge discovery method is proposed and validated through two popular DM functions and related algorithms. The two DM functions are illustrated through real world data sets from two different production lines. Fourthly, a MQD-based design support proto-type is developed. The prototype includes three major functions such as data input, knowledge extraction and input, knowledge search
Ni, Weizeng. "Ontology-based Feature Construction on Non-structured Data." University of Cincinnati / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1439309340.
Neznanov, Alexey A., Dmitry A. Ilvovsky, and Sergei O. Kuznetsov. "FCART: A New FCA-based System for Data Analysis and Knowledge Discovery." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-113161.
Jones, David. "Improving engineering information access and knowledge discovery through model-based information navigation." Thesis, University of Bristol, 2019. http://hdl.handle.net/1983/2d1c1535-e582-41fd-a6f6-cc1178c21d2a.
Al Harbi, H. Y. M. "Semantically aware hierarchical Bayesian network model for knowledge discovery in data : an ontology-based framework." Thesis, University of Salford, 2017. http://usir.salford.ac.uk/43293/.
Zhu, Cheng. "Efficient network based approaches for pattern recognition and knowledge discovery from large and heterogeneous datasets." University of Cincinnati / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1378215769.
Li, Xin. "Graph-based learning for information systems." Diss., The University of Arizona, 2009. http://hdl.handle.net/10150/193827.
Jia, Tao. "Geospatial Knowledge Discovery using Volunteered Geographic Information : a Complex System Perspective." Doctoral thesis, KTH, Geodesi och geoinformatik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-104783.
Full textQC 20121113
Chuddher, Bilal Akbar. "A novel knowledge discovery based approach for supplier risk scoring with application in the HVAC industry." Thesis, Brunel University, 2015. http://bura.brunel.ac.uk/handle/2438/11628.
Zhao, Wei. "Feature-Based Hierarchical Knowledge Engineering for Aircraft Life Cycle Design Decision Support." Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/14639.
Bose, Aishwarya. "Effective web service discovery using a combination of a semantic model and a data mining technique." Thesis, Queensland University of Technology, 2008. https://eprints.qut.edu.au/26425/1/Aishwarya_Bose_Thesis.pdf.
Bose, Aishwarya. "Effective web service discovery using a combination of a semantic model and a data mining technique." Queensland University of Technology, 2008. http://eprints.qut.edu.au/26425/.
Yang, Wanzhong. "Granule-based knowledge representation for intra and inter transaction association mining." Thesis, Queensland University of Technology, 2009. https://eprints.qut.edu.au/30398/1/Wanzhong_Yang_Thesis.pdf.
Yang, Wanzhong. "Granule-based knowledge representation for intra and inter transaction association mining." Queensland University of Technology, 2009. http://eprints.qut.edu.au/30398/.
Cicek, A. Ercument. "METABOLIC NETWORK-BASED ANALYSES OF OMICS DATA." Case Western Reserve University School of Graduate Studies / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=case1372866879.
Qu, Xiaoyan Angela. "Discovery and Prioritization of Drug Candidates for Repositioning Using Semantic Web-based Representation of Integrated Diseasome-Pharmacome Knowledge." University of Cincinnati / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1254403900.
Raje, Satyajeet. "ResearchIQ: An End-To-End Semantic Knowledge Platform For Resource Discovery in Biomedical Research." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1354657305.
Seyedarabi, Faezeh. "Developing a model of teachers' web-based information searching : a study of search options and features to support personalised educational resource discovery." Thesis, University College London (University of London), 2013. http://discovery.ucl.ac.uk/10018062/.
IBARAKI, Toshihide, Endre BOROS, Mutsunori YAGIURA, and Kazuya HARAGUCHI. "A Randomness Based Analysis on the Data Size Needed for Removing Deceptive Patterns." Institute of Electronics, Information and Communication Engineers, 2008. http://hdl.handle.net/2237/15011.
Dam, Hai Huong (Information Technology & Electrical Engineering, Australian Defence Force Academy, UNSW). "A scalable evolutionary learning classifier system for knowledge discovery in stream data mining." Awarded by: University of New South Wales - Australian Defence Force Academy, 2008. http://handle.unsw.edu.au/1959.4/38865.
Cun, Yupeng [Verfasser]. "Network-Based Biomarker Discovery : Development of Prognostic Biomarkers for Personalized Medicine by Integrating Data and Prior Knowledge / Yupeng Cun." Bonn : Universitäts- und Landesbibliothek Bonn, 2014. http://d-nb.info/1051027977/34.
Fornells Herrera, Albert. "Marc integrador de les capacitats de Soft-Computing i de Knowledge Discovery dels Mapes Autoorganitzatius en el Raonament Basat en Casos." Doctoral thesis, Universitat Ramon Llull, 2007. http://hdl.handle.net/10803/9158.
Full textDins de l'ampli ventall de tècniques Soft-Computing per tractar coneixement complex, els Mapes Autoorganitzatius (SOM) destaquen sobre la resta per la seva capacitat en agrupar les dades en patrons, els quals permeten detectar relacions ocultes entre les dades. Aquesta capacitat ha estat explotada en treballs previs d'altres investigadors, on s'ha organitzat la memòria de casos del CBR amb SOM per tal de millorar la recuperació dels casos.
La finalitat de la present tesi és donar un pas més enllà en la simple combinació del CBR i de SOM, de tal manera que aquí s'introdueixen les capacitats de Soft-Computing i de Knowledge Discovery de SOM en totes les fases del CBR per nodrir-les del nou coneixement descobert. A més a més, les mètriques de complexitat apareixen en aquest context com un instrument precís per modelar el funcionament de SOM segons la tipologia de les dades. L'assoliment d'aquesta integració es pot dividir principalment en quatre fites: (1) la definició d'una metodologia per determinar la millor manera de recuperar els casos tenint en compte la complexitat de les dades i els requeriments de l'usuari; (2) la millora de la fiabilitat de la proposta de solucions gràcies a les relacions entre els clústers i els casos; (3) la potenciació de les capacitats explicatives mitjançant la generació d'explicacions simbòliques; (4) el manteniment incremental i semi-supervisat de la memòria de casos organitzada per SOM.
Tots aquests punts s'integren sota la plataforma SOMCBR, la qual és extensament avaluada sobre datasets provinents de l'UCI Repository i de dominis mèdics i telemàtics.
Addicionalment, la tesi aborda de manera secundària dues línies de recerca fruït dels requeriments dels projectes on ha estat ubicada. D'una banda, s'aborda la definició de funcions de similitud específiques per definir com comparar un cas resolt amb un de nou mitjançant una variant de la Computació Evolutiva anomenada Evolució de Gramàtiques (GE). D'altra banda, s'estudia com definir esquemes de cooperació entre sistemes heterogenis per millorar la fiabilitat de la seva resposta conjunta mitjançant GE. Ambdues línies són integrades en dues plataformes, BRAIN i MGE respectivament, i són també avaluades amb els datasets anteriors.
El Razonamiento Basado en Casos (CBR) es un paradigma de aprendizaje basado en establecer analogías con problemas previamente resueltos para resolver otros nuevos. Por tanto, la organización, el acceso y la utilización del conocimiento previo son aspectos clave para tener éxito. No obstante, la mayoría de los problemas presentan grandes volúmenes de datos complejos, inciertos y con conocimiento aproximado y, por tanto, el rendimiento del CBR puede verse afectado debido a la complejidad de gestionarlos. Esto ha hecho que en los últimos años haya surgido una nueva línea de investigación llamada Soft-Computing and Intelligent Information Retrieval focalizada en mitigar estos efectos. Es aquí donde nace el contexto de esta tesis.
Dentro del amplio abanico de técnicas Soft-Computing para tratar conocimiento complejo, los Mapas Autoorganizativos (SOM) destacan por encima del resto por su capacidad de agrupar los datos en patrones, los cuales permiten detectar relaciones ocultas entre los datos. Esta capacidad ha sido aprovechada en trabajos previos de otros investigadores, donde se ha organizado la memoria de casos del CBR con SOM para mejorar la recuperación de los casos.
La finalidad de la presente tesis es dar un paso más en la simple combinación del CBR y de SOM, de tal manera que aquí se introducen las capacidades de Soft-Computing y de Knowledge Discovery de SOM en todas las fases del CBR para alimentarlas del conocimiento nuevo descubierto. Además, las métricas de complejidad aparecen en este contexto como un instrumento preciso para modelar el funcionamiento de SOM en función de la tipología de los datos. La consecución de esta integración se puede dividir principalmente en cuatro hitos: (1) la definición de una metodología para determinar la mejor manera de recuperar los casos teniendo en cuenta la complejidad de los datos y los requerimientos del usuario; (2) la mejora de la fiabilidad en la propuesta de soluciones gracias a las relaciones entre los clusters y los casos; (3) la potenciación de las capacidades explicativas mediante la generación de explicaciones simbólicas; (4) el mantenimiento incremental y semi-supervisado de la memoria de casos organizada por SOM. Todos estos puntos se integran en la plataforma SOMCBR, la cual es ampliamente evaluada sobre datasets procedentes del UCI Repository y de dominios médicos y telemáticos.
Adicionalmente, la tesis aborda secundariamente dos líneas de investigación fruto de los requeri-mientos de los proyectos donde ha estado ubicada la tesis. Por un lado, se aborda la definición de funciones de similitud específicas para definir como comparar un caso resuelto con otro nuevo mediante una variante de la Computación Evolutiva denominada Evolución de Gramáticas (GE). Por otro lado, se estudia como definir esquemas de cooperación entre sistemas heterogéneos para mejorar la fiabilidad de su respuesta conjunta mediante GE. Ambas líneas son integradas en dos plataformas, BRAIN y MGE, las cuales también son evaluadas sobre los datasets anteriores.
Case-Based Reasoning (CBR) is a machine learning approach that solves new problems by identifying analogies with previously solved problems. The organization, access and management of this knowledge are therefore crucial for achieving successful results. Nevertheless, most real problems involve huge amounts of complex data with uncertain and partial knowledge, and CBR performance is affected by the complexity of managing it. For this reason, a new research topic has emerged in recent years to tackle this problem: Soft-Computing and Intelligent Information Retrieval. This is the context in which this thesis was born.
Within the wide variety of Soft-Computing techniques for managing complex data, Self-Organizing Maps (SOM) stand out for their capability to group data into patterns using the relations hidden in the data. This capability has been exploited in previous work by other researchers, where the CBR case memory has been organized with SOM to improve case retrieval.
The goal of this thesis is to go a step beyond the simple combination of CBR and SOM: it presents how to introduce the Soft-Computing and Knowledge Discovery capabilities of SOM into all the steps of CBR in order to enrich them with the newly discovered knowledge. Furthermore, complexity measures appear in this context as a precise instrument for modelling the performance of SOM according to the data topology. The achievement of this goal can be split into four milestones: (1) the definition of a methodology for setting up the best way of retrieving cases, taking into account the data complexity and the user requirements; (2) the improvement of classification reliability through the relations between cases and clusters; (3) the enhancement of the explanatory capabilities by means of the generation of symbolic explanations; (4) the incremental, semi-supervised maintenance of the case memory organized by SOM. All these points are integrated in the SOMCBR framework, which is extensively evaluated on datasets from the UCI Repository and from medical and telematic domains.
Additionally, the thesis tackles two secondary research lines arising from the requirements of the projects in which it was developed. First, the definition of domain-specific similarity functions for comparing a solved case with a new one is analyzed using a variant of Evolutionary Computation called Grammar Evolution (GE). Second, the definition of cooperation schemes between heterogeneous systems to improve the reliability of their joint response is also studied by means of GE. Both lines are integrated in two frameworks, BRAIN and MGE respectively, which are also evaluated on the datasets described above.
Maus, Aaron. "Formulation of Hybrid Knowledge-Based/Molecular Mechanics Potentials for Protein Structure Refinement and a Novel Graph Theoretical Protein Structure Comparison and Analysis Technique." ScholarWorks@UNO, 2019. https://scholarworks.uno.edu/td/2673.
He, Yuanchen. "Fuzzy-Granular Based Data Mining for Effective Decision Support in Biomedical Applications." Digital Archive @ GSU, 2006. http://digitalarchive.gsu.edu/cs_diss/12.
Sirin, Göknur. "Supporting multidisciplinary vehicle modeling : towards an ontology-based knowledge sharing in collaborative model based systems engineering environment." Thesis, Châtenay-Malabry, Ecole centrale de Paris, 2015. http://www.theses.fr/2015ECAP0024/document.
Full textSimulation models are widely used by industries as an aid for decision making to explore and optimize a broad range of complex industrial systems’ architectures. The increased complexity of industrial systems (cars, airplanes, etc.), ecological and economic concerns implies a need for exploring and analysing innovative system architectures efficiently and effectively by using simulation models. However, simulations designers currently suffer from limitations which make simulation models difficult to design and develop in a collaborative, multidisciplinary design environment. The multidisciplinary nature of simulation models requires a specific understanding of each phenomenon to simulate and a thorough description of the system architecture, its components and connections between components. To accomplish these objectives, the Model-Based Systems Engineering (MBSE) and Information Systems’ (IS) methodologies were used to support the simulation designer’s analysing capabilities in terms of methods, processes and design tool solutions. The objective of this thesis is twofold. The first concerns the development of a methodology and tools to build accurate simulation models. The second focuses on the introduction of an innovative approach to design, product and integrate the simulation models in a “plug and play" manner by ensuring the expected model fidelity. However, today, one of the major challenges in full-vehicle simulation model creation is to get domain level simulation models from different domain experts while detecting any potential inconsistency problem before the IVVQ (Integration, Verification, Validation, and Qualification) phase. In the current simulation model development process, most of the defects such as interface mismatch and interoperability problems are discovered late, during the IVVQ phase. This may create multiple wastes, including rework and, may-be the most harmful, incorrect simulation models, which are subsequently used as basis for design decisions. In order to address this problem, this work aims to reduce late inconsistency detection by ensuring early stage collaborations between the different suppliers and OEM. Thus, this work integrates first a Detailed Model Design Phase to the current model development process and, second, the roles have been re-organized and delegated between design actors. Finally an alternative architecture design tool is supported by an ontology-based DSL (Domain Specific Language) called Model Identity Card (MIC). The design tools and mentioned activities perspectives (e.g. decisions, views and viewpoints) are structured by inspiration from Enterprise Architecture Frameworks. To demonstrate the applicability of our proposed solution, engine-after treatment, hybrid parallel propulsion and electric transmission models are tested across automotive and aeronautic industries
Verma, Anju. "Ontology based personalized modeling for chronic disease risk evaluation and knowledge discovery an integrated approach : a thesis submitted to Auckland University of Technology in fulfilment of the requirements for [the] degree of Doctor of Philosophy (PhD), 2009 /." Click here to access this resource online, 2009. http://hdl.handle.net/10292/784.
Crowe, Edward R. "A strategy for the synthesis of real-time statistical process control within the framework of a knowledge based controller." Ohio : Ohio University, 1995. http://www.ohiolink.edu/etd/view.cgi?ohiou1174336725.
Full textLindsey, Daniel Clayton. "A Geospatial Analysis of the Northeastern Plains Village Complex: An Exploration of a GIS-Based Multidisciplinary Method for the Incorporation of Western and Traditional Ecological Knowledge into the Discovery of Diagnostic Prehistoric Settlement Patterns." Thesis, North Dakota State University, 2019. https://hdl.handle.net/10365/31623.
Giacometto Torres, Francisco Javier. "Adaptive load consumption modelling on the user side: contributions to load forecasting modelling based on supervised mixture of experts and genetic programming." Doctoral thesis, Universitat Politècnica de Catalunya, 2017. http://hdl.handle.net/10803/457631.
Full textEste trabajo de investigación propone tres aportaciones principales en el campo de la previsión de consumos: la mejora en la exactitud de la predicción, la mejora en la adaptabilidad del modelo ante diferentes escenarios de consumo y la automatización en la ejecución de los algoritmos de modelado y predicción. La mejora de precisión que ha sido introducida en la estrategia de modelado propuesta ha sido obtenida tras la implementación de algoritmos de aprendizaje supervisados pertenecientes a las siguientes familias de técnicas: aprendizaje de máquinas, inteligencia computacional, redes evolutivas, sistemas expertos y técnicas de regresión. Otras las medidas implementadas para aumentar la calidad de la predicción han sido: la minimización del error de pronóstico a través de la extracción de información basada en análisis multi-variable, la combinación de modelos expertos especializados en atributos específicos del perfil de consumo, el uso de técnicas de pre procesamiento para aumentar la precisión a través de la limpieza de variables, y por último implementación de la algoritmos de clasificación no supervisados para obtener los atributos y las clases características del consumo. La mejora en la adaptación del algoritmo de modelado se ha conseguido mediante la implementación de tres componentes al interior de la estrategia de combinación de modelos expertos. El primer componente corresponde a la implementación de técnicas de muestreo sobre cada conjunto de datos agrupados por clase; esto asegura la replicación de la distribución de probabilidad global en múltiples y estadísticamente independientes subconjuntos de entrenamiento. Estos sub conjuntos son usados para entrenar los modelos expertos que consecuentemente pasaran a formar los modelos base de la estructura jerárquica que combina los modelos expertos. El segundo componente corresponde a técnicas de análisis multi-resolución. A través de la descomposición de variables endógenas en sus componentes tiempo-frecuencia, se abstraen e implementan conocimientos importantes sobre la forma de la estructura jerárquica que adoptaran los modelos expertos. El tercero componente corresponde a los algoritmos de modelado que generan una topología interior auto organizada, que proporciona de modelo experto base completamente personalizado al perfil de consumo analizado. La mejora en la automatización se alcanza mediante la combinación de procedimientos automáticos para minimizar la interacción de un usuario experto en el procedimiento de predicción. Los resultados experimentales obtenidos, a partir de la aplicación de las estrategias de predicción de consumos propuestas, han demostrado la idoneidad de las técnicas y metodologías implementadas; sobre todo en el caso de la novedosa estrategia para la combinación de modelos expertos.
Ma, Sihui. "Discovery and dissemination of new knowledge in food science: Analytical methods for quantification of polyphenols and amino acids in fruits and the use of mobile phone-based instructional technology in food science education." Diss., Virginia Tech, 2019. http://hdl.handle.net/10919/100997.
Griffiths, Kerryn Eva. "Discovering, applying and integrating self-knowledge : a grounded theory study of learning in life coaching." Thesis, Queensland University of Technology, 2008. https://eprints.qut.edu.au/37245/1/Kerryn_Griffiths_Thesis.pdf.
Full textScarinci, Rui Gureghian. "SES : sistema de extração semântica de informações." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 1997. http://hdl.handle.net/10183/18398.
One of the most challenging areas in Computer Science is related to Internet technology. This network offers users a large variety and amount of information, mainly data stored in unstructured or semi-structured formats. However, the vast data volume and heterogeneity make manipulating the retrieved data a very arduous task. This problem was the prime motivation of this work. As with many tools for data retrieval and specific searching, the user has to manipulate an increasing amount of information on his personal computer, because these tools do not perform a precise data selection process, and much of the retrieved data is not interesting for the user. There is also a great diversity of subjects and standards in information transmission and storage, creating highly heterogeneous environments for data searching and retrieval. Due to this heterogeneity, the user has to know many data standards and searching tools to obtain the requested information. The fundamental problem for data manipulation, however, is the partially or fully unstructured data formats, such as text, mail and news data. For files in these formats, the user has to read each file to filter the relevant information, which causes a loss of time, because the document may not be interesting for the user or, if it is, its complete reading may be unnecessary at the moment. Some information, such as calls for papers, product prices and economic statistics, has an associated temporal validity; other information is updated periodically. Some of these temporal characteristics are explicit, others are implicitly embedded in other data types. Since it is very difficult to retrieve such temporal data automatically, invalid information is often used and, as a result, some opportunities are lost. In this work a system for data extraction and summarization is described. The main objective is to satisfy the user's needs for selecting and manipulating information stored in a personal computer. To achieve this goal, the concepts of Information Extraction (IE) and Knowledge-Based Systems are employed. The input data manipulation is done by an extraction procedure configured through a user-defined knowledge base. The objective of this work is to develop a System for Semantic Extraction of Information which classifies the extracted data into classes meaningful for the user and deduces the temporal validity of this data. This goal was achieved by the generation of a structured temporal database.
Lazarski, Adam. "The importance of contextual factors on the accuracy of estimates in project management : an emergence of a framework for more realistic estimation process." Thesis, University of Bradford, 2014. http://hdl.handle.net/10454/13661.
Marsolo, Keith Allen. "A workflow for the modeling and analysis of biomedical data." Columbus, Ohio : Ohio State University, 2007. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1180309265.
Hlosta, Martin. "Modul pro shlukovou analýzu systému pro dolování z dat." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2010. http://www.nusl.cz/ntk/nusl-237158.
Dong, Hai. "A customized semantic service retrieval methodology for the digital ecosystems environment." Thesis, Curtin University, 2010. http://hdl.handle.net/20.500.11937/2345.
Blondet, Gaëtan. "Système à base de connaissances pour le processus de plan d'expériences numériques." Thesis, Compiègne, 2017. http://www.theses.fr/2017COMP2363/document.
Full textIn order to improve industrial competitiveness, product design relies more and more on numerical tools, such as numerical simulation, to develop better and cheaper products faster. Numerical Design of Experiments (NDOE) are more and more used to include variabilities during simulation processes, to design more robust, reliable and optimized product earlier in the product development process. Nevertheless, a NDOE process may be too expensive to be applied to a complex product, because of the high computational cost of the model and the high number of required experiments. Several methods exist to decrease this computational cost, but they required expert knowledge to be efficiently applied. In addition to that, NDoE process produces a large amount of data which must be managed. The aim of this research is to propose a solution to define, as fast as possible, an efficient NDoE process, which produce as much useful information as possible with a minimal number of simulations, for complex products. The objective is to shorten both process definition and execution steps. A knowledge-based system is proposed, based on a specific ontology and a bayesian network, to capitalise, share and reuse knowledge and data to predict the best NDoE process definition regarding to a new product. This system is validated on a product from automotive industry
Rougier, Simon. "Apport des images satellites à très haute résolution spatiale couplées à des données géographiques multi-sources pour l’analyse des espaces urbains." Thesis, Strasbourg, 2016. http://www.theses.fr/2016STRAH019/document.
Full textClimate change presents cities with significant environmental challenges. Urban planners need decision-making tools and a better knowledge of their territory. One objective is to better understand the link between the grey and the green infrastructures in order to analyse and represent them. The second objective is to propose a methodology to map the urban structure at urban fabric scale taking into account the grey and green infrastructures. In current databases, vegetation is not mapped in an exhaustive way. Therefore the first step is to extract tree and grass vegetation using Pléiades satellite images using an object-based image analysis and an active learning classification. Based on those classifications and multi-sources data, an approach based on knowledge discovery in databases is proposed. It is focused on set of indicators mostly coming from urbanism and landscape ecology. The methodology is built on Strasbourg and applied on Rennes to validate and check its reproducibility