Journal articles on the topic "KNOWLEDGE DISCOVERY BASED TECHNIQUE"

To see the other types of publications on this topic, follow the link: KNOWLEDGE DISCOVERY BASED TECHNIQUE.

Format your source in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 journal articles for your research on the topic "KNOWLEDGE DISCOVERY BASED TECHNIQUE."

Next to every source in the list of references there is an "Add to bibliography" button. Press on it, and we will generate automatically the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online, if these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Chen, Po-Chi, Ru-Fang Hsueh, and Shu-Yuen Hwang. "An ILP Based Knowledge Discovery System." International Journal on Artificial Intelligence Tools 06, no. 01 (March 1997): 63–95. http://dx.doi.org/10.1142/s0218213097000050.

Abstract:
Interest in research into knowledge discovery in databases (KDD) has been growing continuously because of the rapid increase in the amount of information embedded in real-world data. Several systems have been proposed for studying the KDD process. One main task in a KDD system is to learn important and user-interesting knowledge from a set of collected data. Most proposed systems use simple machine learning methods to learn patterns. This may yield efficient performance, but the quality of the discovered knowledge is limited. In this paper, we propose a method to integrate a new and more complex machine learning method, inductive logic programming (ILP), into the KDD process to improve KDD quality. Such integration shows how this new learning technique can be easily applied to a KDD system and how it can improve the representation of the discovered knowledge. In our system, we use users' queries to indicate the importance and interestingness of the target knowledge. The system has been implemented on a SUN workstation using the Sybase database system. Detailed examples are also provided to illustrate the benefit of integrating the ILP technique with the KDD system.
2

JONYER, ISTVAN, LAWRENCE B. HOLDER, and DIANE J. COOK. "GRAPH-BASED HIERARCHICAL CONCEPTUAL CLUSTERING." International Journal on Artificial Intelligence Tools 10, no. 01n02 (March 2001): 107–35. http://dx.doi.org/10.1142/s0218213001000441.

Abstract:
Hierarchical conceptual clustering has proven to be a useful, although greatly under-explored, data mining technique. A graph-based representation of structural information combined with a substructure discovery technique has been shown to be successful in knowledge discovery. The SUBDUE substructure discovery system provides the advantages of both approaches. This work presents SUBDUE and the development of its clustering functionalities. Several examples are used to illustrate the validity of the approach in both structured and unstructured domains, and to compare SUBDUE to earlier clustering algorithms. Results show that SUBDUE successfully discovers hierarchical clusterings in both structured and unstructured data.
3

Weng, Cheng-Hsiung. "Knowledge discovery of digital library subscription by RFC itemsets." Electronic Library 34, no. 5 (2016): 772–88. http://dx.doi.org/10.1108/el-06-2015-0086.

Abstract:
Purpose: The paper aims to understand the book subscription characteristics of the students at each college and to help library administrators conduct efficient library management plans for the books in the library. Unlike traditional association rule mining (ARM) techniques, which mine patterns from a single data set, this paper proposes a recency-frequency-college (RFC) model to analyse the book subscription characteristics of library users and then discovers interesting association rules from equivalence-class RFC segments. Design/methodology/approach: A framework that integrates the RFC model and the ARM technique is proposed to analyse the book subscription characteristics of library users. First, the author applies the RFC model to determine library users' RFC values. Then, the author clusters library users' transactions into several RFC segments by their RFC values. Finally, the author discovers RFC association rules and analyses the book subscription characteristics of the RFC segments (library users). Findings: The paper provides experimental results from the survey data, showing that the precision of the frequent itemsets discovered by the proposed RFC model outperforms the traditional approach in predicting library users' subscription itemsets in subsequent time periods. In addition, the proposed approach can discover interesting and valuable patterns from library book circulation transactions. Research limitations/implications: Because the RFC thresholds were assigned based on expert opinion, they constitute an acquisition bottleneck; researchers are therefore encouraged to infer the RFC thresholds automatically from the library book circulation transactions. Practical implications: The paper includes implications for library administrators in conducting library book management plans for different library users. Originality/value: This paper proposes the RFC model to analyse the book subscription characteristics of library users.
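The RFC segmentation step described in the abstract above can be sketched roughly as follows. This is a hypothetical reconstruction, not the author's code: the threshold values, field names and segment labels are illustrative stand-ins for the expert-assigned cuts mentioned in the paper.

```python
from collections import defaultdict

def rfc_segments(loans, recency_cut=30, freq_cut=5):
    """Assign each library user to a recency-frequency-college (RFC) segment.

    loans: list of (user_id, days_since_loan, college) tuples.
    The cut-off values are illustrative stand-ins for expert-assigned thresholds.
    """
    recency = {}                    # smallest days-since-loan seen per user
    freq = defaultdict(int)         # number of loans per user
    college = {}
    for user, days, col in loans:
        recency[user] = min(days, recency.get(user, days))
        freq[user] += 1
        college[user] = col
    segments = defaultdict(list)
    for user in freq:
        r = "recent" if recency[user] <= recency_cut else "stale"
        f = "heavy" if freq[user] >= freq_cut else "light"
        segments[(r, f, college[user])].append(user)
    return dict(segments)

loans = [("u1", 3, "Eng"), ("u1", 40, "Eng"), ("u2", 60, "Arts")]
print(rfc_segments(loans))
```

Association rules would then be mined separately inside each segment rather than over the whole transaction set.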
4

Zhang, Guan Zhu, and Yu Ye Zhu. "Research of After-Sales Management System of Enterprises Based on J2EE and Data Mining Technology." Applied Mechanics and Materials 608-609 (October 2014): 375–81. http://dx.doi.org/10.4028/www.scientific.net/amm.608-609.375.

Abstract:
With the globalization of markets and economies, more and more enterprises realize the importance of the after-sales service system. However, a traditional after-sales service system focuses only on the business process of the system and ignores the important information contained in after-sales service data. Data mining, as a knowledge discovery technique, solves this problem: it can discover potentially valuable information and knowledge hidden in large amounts of data for decision support. The paper analyzes the business process of enterprise after-sales service, uses the ideas of the J2EE design pattern, and expounds the development and design of the system, including the design of the J2EE framework, functional modules, system components and database.
5

Giustolisi, Orazio, and Dragan A. Savic. "A symbolic data-driven technique based on evolutionary polynomial regression." Journal of Hydroinformatics 8, no. 3 (July 1, 2006): 207–22. http://dx.doi.org/10.2166/hydro.2006.020b.

Abstract:
This paper describes a new hybrid regression method that combines the best features of conventional numerical regression techniques with the genetic programming symbolic regression technique. The key idea is to employ an evolutionary computing methodology to search for a model of the system/process being modelled and to employ parameter estimation to obtain the constants using least squares. The new technique, termed Evolutionary Polynomial Regression (EPR), overcomes shortcomings of the GP process, such as computational performance, the number of evolutionary parameters to tune and the complexity of the symbolic models. Similarly, it alleviates issues arising from numerical regression, including difficulties in using physical insight and over-fitting problems. This paper demonstrates that EPR performs well both in interpolating data and in scientific knowledge discovery. As an illustration, EPR is used to identify polynomial formulae with progressively increasing levels of noise, to interpolate the Colebrook-White formula for a pipe resistance coefficient and to discover a formula for a resistance coefficient from experimental data.
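As a rough illustration of the EPR idea above (search over polynomial structures, with least squares supplying the constants), the sketch below replaces the evolutionary search with a tiny exhaustive search over exponent subsets. Everything here is an assumption for illustration, not the authors' implementation.

```python
import itertools
import numpy as np

def fit_terms(X, y, exponents):
    """Least-squares coefficients for the monomial terms x**e, e in exponents."""
    A = np.column_stack([X ** e for e in exponents])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return coef, float(resid @ resid)       # coefficients and sum of squared errors

def epr_search(X, y, max_terms=2, candidate_exps=(0, 1, 2, 3)):
    """Score every small exponent subset (a stand-in for the evolutionary search)."""
    best = None
    for k in range(1, max_terms + 1):
        for exps in itertools.combinations(candidate_exps, k):
            coef, sse = fit_terms(X, y, exps)
            if best is None or sse < best[2]:
                best = (exps, coef, sse)
    return best

x = np.linspace(0.0, 2.0, 30)
y = 1.5 * x ** 2 + 0.5                      # noiseless target: 1.5 x^2 + 0.5
exps, coef, sse = epr_search(x, y)
print(exps, np.round(coef, 3), round(sse, 6))
```

The search recovers the structure (exponents 0 and 2) and least squares recovers the constants; in EPR proper, a genetic algorithm explores the structure space instead of enumerating it.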
6

Guan, Qing, and Jian He Guan. "Knowledge Acquisition of Interval Set-Valued Based on Granular Computing." Applied Mechanics and Materials 543-547 (March 2014): 2017–23. http://dx.doi.org/10.4028/www.scientific.net/amm.543-547.2017.

Abstract:
A new extension of fuzzy rough set theory based on the partition of interval set-values is proposed for granular computing during knowledge discovery. The natural intervals of attribute values in a decision system are transformed into multiple sub-intervals of [0,1] by normalization, and some characteristics of interval set-values of decision systems in fuzzy rough set theory are discussed. The correctness and effectiveness of the approach are shown in experiments. The approach presented in this paper can also be used as a data preprocessing step for symbolic knowledge discovery or machine learning methods other than rough set theory.
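The normalization step described above, mapping natural attribute intervals onto sub-intervals of [0,1], might look like this minimal sketch; the function name and the sample ranges are illustrative assumptions, not the paper's procedure.

```python
def normalize_intervals(intervals, lo, hi):
    """Map natural-valued attribute intervals [a, b] onto sub-intervals of [0, 1]
    by min-max normalization over the attribute's full range [lo, hi]."""
    span = hi - lo
    return [((a - lo) / span, (b - lo) / span) for a, b in intervals]

# An attribute measured on [10, 50], recorded as three interval set-values:
print(normalize_intervals([(10, 20), (20, 35), (35, 50)], 10, 50))
```

After this step every interval endpoint lies in [0,1], so the fuzzy rough operators can be applied uniformly across attributes.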
7

Li, Jian, and Jun Deng. "A Theoretical Study on Knowledge Discovery Technique for Structural Health Monitoring." Applied Mechanics and Materials 166-169 (May 2012): 1250–53. http://dx.doi.org/10.4028/www.scientific.net/amm.166-169.1250.

Abstract:
Based on the similarity between knowledge discovery from databases (KDD) and structural health monitoring (SHM), and considering the particularity of SHM problems, a four-step framework of SHM is proposed. The framework extends the final goal of SHM from detecting damage to extracting knowledge that facilitates decision making. The purposes and proper methods of each step of this framework are discussed. To demonstrate the proposed SHM framework, a specific SHM method is then presented which consists of second-order structural parameter identification for feature extraction and statistical control chart analysis of the identified stiffness for feature analysis. By clarifying the goal and hierarchy of extracting useful knowledge in SHM problems, the framework has the potential to facilitate the further development of SHM.
8

Usai, Antonio, Marco Pironti, Monika Mital, and Chiraz Aouina Mejri. "Knowledge discovery out of text data: a systematic review via text mining." Journal of Knowledge Management 22, no. 7 (October 8, 2018): 1471–88. http://dx.doi.org/10.1108/jkm-11-2017-0517.

Abstract:
Purpose: The aim of this work is to increase awareness of the potential of the text mining technique to discover knowledge and to further promote research collaboration between the knowledge management and information technology communities. Since its emergence, text mining has involved multidisciplinary studies, focused primarily on database technology, Web-based collaborative writing, text analysis, machine learning and knowledge discovery. However, owing to the large amount of research in this field, it is becoming increasingly difficult to identify existing studies and therefore to suggest new topics. Design/methodology/approach: This article offers a systematic review of 85 academic outputs (articles and books) focused on knowledge discovery derived from the text mining technique. The systematic review is conducted by applying "text mining at the term level, in which knowledge discovery takes place on a more focused collection of words and phrases that are extracted from and label each document" (Feldman et al., 1998, p. 1). Findings: The results revealed that the keywords extracted in association with the main labels, i.e. knowledge discovery and text mining, can be categorized into two periods: from 1998 to 2009, the terms knowledge and text were used throughout; from 2010 to 2017, in addition to these terms, sentiment analysis, review manipulation, microblogging data and knowledgeable users were also frequently used. It is also possible to notice the technical, engineering nature of each term present in the first decade, whereas a diverse range of fields such as business, marketing and finance emerged from 2010 to 2017 owing to a greater interest in the online environment. Originality/value: This is the first comprehensive systematic review of knowledge discovery and text mining conducted with a text mining technique at the term level, which helps to reduce redundant research and to avoid missing relevant publications.
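Term-level text mining in the sense quoted above, working on a focused collection of words and phrases extracted from each document, can be approximated by simple term counting. The sketch below is an assumption for illustration, not the authors' pipeline; the stopword list is arbitrary.

```python
import re
from collections import Counter

STOPWORDS = {"the", "of", "and", "a", "to", "in", "is", "for", "on"}

def term_counts(documents):
    """Count unigrams and bigrams across documents, ignoring stopwords,
    as a rough stand-in for term-level labelling of a collection."""
    counts = Counter()
    for doc in documents:
        words = [w for w in re.findall(r"[a-z]+", doc.lower()) if w not in STOPWORDS]
        counts.update(words)                                          # unigram terms
        counts.update(" ".join(words[i:i + 2]) for i in range(len(words) - 1))
    return counts

docs = ["Text mining for knowledge discovery",
        "Knowledge discovery with text mining at the term level"]
print(term_counts(docs).most_common(3))
```

Frequent terms and phrases such as "knowledge discovery" then serve as labels for grouping the reviewed publications.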
9

Mahoto, Naeem Ahmed, Asadullah Shaikh, Mana Saleh Al Reshan, Muhammad Ali Memon, and Adel Sulaiman. "Knowledge Discovery from Healthcare Electronic Records for Sustainable Environment." Sustainability 13, no. 16 (August 9, 2021): 8900. http://dx.doi.org/10.3390/su13168900.

Abstract:
The medical history of a patient is an essential piece of information in healthcare agencies, which keep records of patients. Because each person may have different medical complications, healthcare data remain sparse, high-dimensional and possibly inconsistent, and discovering knowledge such as patient behaviors from them is not easily manageable. It is a challenge for both physicians and healthcare agencies to discover knowledge from such large numbers of healthcare electronic records. Data mining, as evidenced by the existing published literature, has proven its effectiveness in transforming large data collections into meaningful information and knowledge. This paper presents an overview of the data mining techniques used for knowledge discovery in medical records. Furthermore, based on real healthcare data, this paper also demonstrates a case study of discovering knowledge with the help of three data mining techniques: (1) association analysis; (2) sequential pattern mining; (3) clustering. In particular, association analysis is used to extract frequent correlations among examinations done by patients with a specific disease, sequential pattern mining allows extracting frequent patterns of medical events, and clustering is used to find groups of similar patients. The discovered knowledge may enrich healthcare guidelines, improve their processes and detect anomalous patient behavior with respect to the medical guidelines.
10

UYSAL, İLHAN, and H. ALTAY GÜVENIR. "An overview of regression techniques for knowledge discovery." Knowledge Engineering Review 14, no. 4 (December 1999): 319–40. http://dx.doi.org/10.1017/s026988899900404x.

Abstract:
Predicting or learning numeric features is called regression in the statistical literature, and it is the subject of research in both machine learning and statistics. This paper reviews the important techniques and algorithms for regression developed by both communities. Regression is important for many applications, since many real-life problems can be modeled as regression problems. The review includes Locally Weighted Regression (LWR), rule-based regression, Projection Pursuit Regression (PPR), instance-based regression, Multivariate Adaptive Regression Splines (MARS) and recursive partitioning regression methods that induce regression trees (CART, RETIS and M5).
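Of the techniques surveyed above, Locally Weighted Regression is simple enough to sketch: fit a distance-weighted linear model around each query point. The bandwidth `tau` and the toy data below are assumptions for illustration.

```python
import numpy as np

def lwr_predict(x0, X, y, tau=0.3):
    """Locally Weighted Regression: fit a linear model around query point x0,
    with Gaussian weights that decay with distance from x0."""
    w = np.exp(-((X - x0) ** 2) / (2 * tau ** 2))
    A = np.column_stack([np.ones_like(X), X])
    W = np.diag(w)
    # Solve the weighted normal equations (A^T W A) beta = A^T W y
    beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
    return beta[0] + beta[1] * x0

X = np.linspace(0, 3, 40)
y = np.sin(X)                       # smooth nonlinear target, no noise
print(round(lwr_predict(1.5, X, y), 3))
```

Because the linear fit is local, LWR tracks a nonlinear function closely near the query point even though each individual model is linear.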
11

COBLE, JEFFREY A., RUNU RATHI, DIANE J. COOK, and LAWRENCE B. HOLDER. "ITERATIVE STRUCTURE DISCOVERY IN GRAPH-BASED DATA." International Journal on Artificial Intelligence Tools 14, no. 01n02 (February 2005): 101–24. http://dx.doi.org/10.1142/s0218213005002016.

Abstract:
Much of current data mining research is focused on discovering sets of attributes that discriminate data entities into classes, such as shopping trends for a particular demographic group. In contrast, we are working to develop data mining techniques to discover patterns consisting of complex relationships between entities. Our research is particularly applicable to domains in which the data is event-driven or relationally structured. In this paper we present approaches to address two related challenges: the need to assimilate incremental data updates and the need to mine monolithic datasets. Many realistic problems are continuous in nature and therefore require a data mining approach that can evolve discovered knowledge over time. Similarly, many problems present data sets that are too large to fit into dynamic memory on conventional computer systems. We address incremental data mining by introducing a mechanism for summarizing discoveries from previous data increments so that the globally best patterns can be computed by mining only the new data increment. To address monolithic datasets we introduce a technique by which these datasets can be partitioned and mined serially with minimal impact on the result quality. We present applications of our work in both the counter-terrorism and bioinformatics domains.
12

Adala, Asma, Nabil Tabbane, and Sami Tabbane. "A Framework for Automatic Web Service Discovery Based on Semantics and NLP Techniques." Advances in Multimedia 2011 (2011): 1–7. http://dx.doi.org/10.1155/2011/238683.

Abstract:
As a greater number of Web Services are made available today, automatic discovery is recognized as an important task. To promote the automation of service discovery, different semantic languages have been created that allow describing the functionality of services in a machine interpretable form using Semantic Web technologies. The problem is that users do not have intimate knowledge about semantic Web service languages and related toolkits. In this paper, we propose a discovery framework that enables semantic Web service discovery based on keywords written in natural language. We describe a novel approach for automatic discovery of semantic Web services which employs Natural Language Processing techniques to match a user request, expressed in natural language, with a semantic Web service description. Additionally, we present an efficient semantic matching technique to compute the semantic distance between ontological concepts.
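A shortest-path measure is one common way to compute the semantic distance between ontological concepts mentioned above. The toy ontology and function below are purely illustrative assumptions, not the paper's matching technique.

```python
from collections import deque

# A toy concept hierarchy (undirected is-a links) -- illustrative only.
ONTOLOGY = {
    "service": ["travel", "finance"],
    "travel": ["service", "flight", "hotel"],
    "finance": ["service", "payment"],
    "flight": ["travel"],
    "hotel": ["travel"],
    "payment": ["finance"],
}

def semantic_distance(a, b, graph=ONTOLOGY):
    """Shortest-path edge count between two concepts (breadth-first search);
    a common stand-in for ontological distance in semantic matching."""
    if a == b:
        return 0
    seen, queue = {a}, deque([(a, 0)])
    while queue:
        node, d = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt == b:
                return d + 1
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return None  # concepts not connected in the ontology

print(semantic_distance("flight", "payment"))
```

Shorter paths indicate more closely related concepts, so a discovery engine can rank candidate services by this distance between request terms and service-description terms.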
13

Alsukhni, Emad, Ahmed AlEroud, and Ahmad A. Saifan. "A Hybrid Pre-Post Constraint-Based Framework for Discovering Multi-Dimensional Association Rules Using Ontologies." International Journal of Information Technology and Web Engineering 14, no. 1 (January 2019): 112–31. http://dx.doi.org/10.4018/ijitwe.2019010106.

Abstract:
Association rule mining is a very useful knowledge discovery technique for identifying co-occurrence patterns in transactional data sets. In this article, the authors propose an ontology-based framework to discover multi-dimensional association rules at different levels of a given ontology, based on user-defined pre-processing constraints, which may be identified using: (1) a hierarchy discovered in the datasets; (2) the dimensions of those datasets; or (3) the features of each dimension. The proposed framework has post-processing constraints to drill down or roll up based on the rule level, making it possible to check the validity of the discovered rules in terms of the support and confidence rule validity measures without re-applying association rule mining algorithms. The authors conducted several preliminary experiments to test the framework on the Titanic dataset by identifying the association rules after the pre- and post-constraints are applied. The results show that the framework can be practically applied for rule pruning and for discovering novel association rules.
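Checking a discovered rule's validity without re-running the miner, as described above, only requires the two standard measures. A minimal sketch on Titanic-style transactions (the item names are illustrative, not the paper's encoding):

```python
def support(itemset, transactions):
    """Fraction of transactions containing every item of the itemset."""
    hits = sum(1 for t in transactions if itemset <= t)
    return hits / len(transactions)

def confidence(antecedent, consequent, transactions):
    """conf(A -> B) = supp(A union B) / supp(A)."""
    return support(antecedent | consequent, transactions) / support(antecedent, transactions)

transactions = [
    {"3rd_class", "male", "died"},
    {"3rd_class", "male", "died"},
    {"1st_class", "female", "survived"},
    {"3rd_class", "female", "survived"},
]
print(support({"3rd_class"}, transactions))                       # 0.75
print(confidence({"3rd_class", "male"}, {"died"}, transactions))  # 1.0
```

A drill-down or roll-up in the ontology simply swaps items for their finer or coarser concepts and re-evaluates these two measures on the same transactions.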
14

Thilakaratne, Menasha, Katrina Falkner, and Thushari Atapattu. "A systematic review on literature-based discovery workflow." PeerJ Computer Science 5 (November 18, 2019): e235. http://dx.doi.org/10.7717/peerj-cs.235.

Abstract:
As scientific publication rates increase, knowledge acquisition and the research development process have become more complex and time-consuming. Literature-Based Discovery (LBD), supporting automated knowledge discovery, helps facilitate this process by eliciting novel knowledge by analysing existing scientific literature. This systematic review provides a comprehensive overview of the LBD workflow by answering nine research questions related to the major components of the LBD workflow (i.e., input, process, output, and evaluation). With regards to the input component, we discuss the data types and data sources used in the literature. The process component presents filtering techniques, ranking/thresholding techniques, domains, generalisability levels, and resources. Subsequently, the output component focuses on the visualisation techniques used in the LBD discipline. As for the evaluation component, we outline the evaluation techniques, their generalisability, and the quantitative measures used to validate results. To conclude, we summarise the findings of the review for each component by highlighting possible future research directions.
15

Ramzan Begam, M., and P. Sengottuvelan. "Crime Case Reasoning Based Knowledge Discovery Using Sentence Case Relative Clustering for Crime Analyses." International Journal of Engineering & Technology 7, no. 3.27 (August 15, 2018): 91. http://dx.doi.org/10.14419/ijet.v7i3.27.17662.

Abstract:
Day-to-day crime produces ever larger volumes of records describing crime occurrences. Crimes committed in different locations, their points of occurrence and the strategies involved are very tedious to analyse using information records alone. Because the information is collected as attribute-based case records with direct crime-rate scores, identifying valid factors for a crime category is a problem. This paper applies crime clustering, a data mining technique, to analyse criminal records and proposes a sentence case relative clustering algorithm (SCRCA), together with a classification rule mining algorithm, to solve crime analysis problems. A sentence case observer technique is also used for knowledge discovery from crime records, enabling proper case identification from sentence case records and helping to increase predictive accuracy. Crime analysis examines developing methods and identifies fields in law enforcement that lack standard definitions for correct judgment. With the expanding use of automated clustering frameworks to track crimes, data examiners help law enforcement officers and analysts accelerate the process of measuring crime. The main contribution is to analyse attribute cases with relative sentences, using word-count factors to improve crime prediction when categorizing crime types.
16

Bhat, Prashant, and Pradnya Malaganve. "Metadata based Classification Techniques for Knowledge Discovery from Facebook Multimedia Database." International Journal of Intelligent Systems and Applications 13, no. 4 (August 8, 2021): 38–48. http://dx.doi.org/10.5815/ijisa.2021.04.04.

Abstract:
Classification is a data mining term for grouping data of different kinds into particular classes. Social media is an immense platform that allows billions of people to share their thoughts, updates and multimedia information as statuses, photos, videos, links, audio and graphics. Because of this flexibility, the cloud holds enormous amounts of data. Most of the time this data is complicated to retrieve and to understand, may contain a lot of noise, and is often incomplete. To ease this complication, the data on the cloud has to be classified with labels, which is viable through data mining classification techniques. In the present work, we consider a Facebook dataset holding the metadata of a cosmetic company's Facebook page. 19 different metadata fields are used as the main attributes. Of those, the metadata field 'Type' is the focus of classification; it is divided into four classes: link, status, photo and video. We use two favoured data mining classifier families, the Bayes classifiers and the decision tree classifiers, each of which contains several classification algorithms; a few algorithms from each have been chosen for the experiment and are explained in detail in the present work. The percentage split method is used to divide the dataset into training and testing data, which helps in calculating the classification accuracy and forming the confusion matrix. The accuracy results, kappa statistics, root mean squared error, relative absolute error, root relative squared error and confusion matrices of all the algorithms are compared, studied and analysed in depth to produce the best classifier for labelling the company's Facebook data into appropriate classes; knowledge discovery is the ultimate goal of this experiment.
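A bare-bones version of the evaluation loop described above (a Bayes classifier, a percentage split, accuracy and a confusion matrix) could look like the following. The toy features and labels are invented for illustration, and the classifier is a simplified stand-in for the Weka implementations discussed in the paper.

```python
from collections import Counter, defaultdict

def train_nb(rows, labels):
    """Categorical naive Bayes with Laplace smoothing -- a simplified stand-in
    for the Bayes classifiers compared in the study."""
    prior = Counter(labels)
    cond = defaultdict(Counter)            # (feature index, label) -> value counts
    for row, lab in zip(rows, labels):
        for i, v in enumerate(row):
            cond[(i, lab)][v] += 1
    def predict(row):
        def score(lab):
            p = prior[lab] / len(labels)
            for i, v in enumerate(row):
                p *= (cond[(i, lab)][v] + 1) / (prior[lab] + 2)   # Laplace smoothing
            return p
        return max(prior, key=score)
    return predict

# Toy 'Type' data: features (has_url, has_image) -> post type (illustrative only).
rows = [("yes", "no"), ("no", "yes"), ("no", "no")] * 3
labels = ["link", "photo", "status"] * 3

split = int(len(rows) * 2 / 3)             # percentage split: 2/3 train, 1/3 test
predict = train_nb(rows[:split], labels[:split])
pairs = [(actual, predict(row)) for row, actual in zip(rows[split:], labels[split:])]
confusion = Counter(pairs)                 # (actual, predicted) -> count
accuracy = sum(a == p for a, p in pairs) / len(pairs)
print(confusion, accuracy)
```

The off-diagonal entries of the confusion matrix show which post types the classifier confuses, which is the basis for the comparison of algorithms in the study.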
17

Nata, Gusti Ngurah Mega, Steven Anthony, and Putu Pande Yudiastra. "Knowledge Discovery And Virtual Tour To Support Tourism Promotion." IAIC Transactions on Sustainable Digital Innovation (ITSDI) 2, no. 2 (November 26, 2020): 94–106. http://dx.doi.org/10.34306/itsdi.v2i2.387.

Abstract:
Planning a tourism trip is an important part of ensuring that tourists find their tour satisfying. Service bureaus, whose function is to provide information and prepare travel plans for tourists, often offer random destination choices because they do not know the patterns by which tourist destinations are selected. This is detrimental to tourists when service bureaus make wrong tourism travel plans. Tourists also often find it difficult to decide which destination to visit because they do not know the environmental conditions at the destination. To overcome this problem, in this study, knowledge discovery and virtual tours are used to increase the promotion of tourism. Knowledge discovery means finding information or knowledge; it uses data mining techniques to perform data analysis and find patterns. The data mining model used here is frequent pattern mining, which looks for association rules in the data. A virtual tour is a technique that can provide 360 + 180 degree images, so the virtual tour can show the overall environmental conditions at the tourist destination. The results obtained take the form of quick recommendations of tourist attractions matched to tourists' country of origin, based on the association rule mining values. The virtual tour presents a 360 degree panoramic photo view to convey the situation of the environment at the places covered by the system.
18

Deng, Jun, Jian Li, and Daoyao Wang. "Knowledge Discovery from Vibration Measurements." Scientific World Journal 2014 (2014): 1–15. http://dx.doi.org/10.1155/2014/917524.

Abstract:
The framework, as well as the particular algorithms, of the pattern recognition process is widely adopted in structural health monitoring (SHM). However, viewed as part of the overall process of knowledge discovery from databases (KDD), the results of pattern recognition are only changes, and patterns of changes, in data features. In this paper, based on the similarity between KDD and SHM and considering the particularity of SHM problems, a four-step framework of SHM is proposed which extends the final goal of SHM from detecting damage to extracting knowledge to facilitate decision making. The purposes and proper methods of each step of this framework are discussed. To demonstrate the proposed SHM framework, a specific SHM method is then presented which is composed of second-order structural parameter identification, statistical control chart analysis, and system reliability analysis. To examine the performance of this SHM method, real sensor data measured from a lab-size steel bridge model structure are used. The developed four-step framework has the potential to clarify the process of SHM and to facilitate the further development of SHM techniques.
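The statistical control chart step in the method above can be sketched with Shewhart-style mean ± 3-sigma limits computed from the identified stiffness of the undamaged structure. The stiffness values below are invented for illustration, not the bridge-model measurements.

```python
import statistics

def control_limits(baseline):
    """Shewhart-style control limits (mean +/- 3 sigma) from healthy-state data."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return mu - 3 * sigma, mu + 3 * sigma

def out_of_control(values, limits):
    """Return the monitored values that fall outside the control limits."""
    lo, hi = limits
    return [v for v in values if not lo <= v <= hi]

# Identified stiffness (illustrative units) from the undamaged structure ...
baseline = [100.2, 99.8, 100.1, 99.9, 100.0, 100.3, 99.7]
# ... and from later monitoring; the drop in the last value hints at damage.
monitored = [100.1, 99.9, 97.5]
print(out_of_control(monitored, control_limits(baseline)))
```

A point outside the limits flags a statistically significant stiffness change, which the later reliability-analysis step would then interpret.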
19

Rani, Mrs M. Akila, and Dr D. Shanthi. "A Study on Knowledge Discovery of Relevant Web Services with Semantic and Syntactic approaches." INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY 4, no. 1 (February 1, 2013): 8–11. http://dx.doi.org/10.24297/ijct.v4i1a.3026.

Abstract:
Web mining is the application of data mining techniques to discover patterns from the Web. Web services define a set of standards such as WSDL (Web Service Description Language), SOAP (Simple Object Access Protocol) and UDDI (Universal Description, Discovery and Integration) to support service description, discovery and invocation in a uniform, interchangeable format between heterogeneous applications. Due to the huge number of Web services and the short content of WSDL descriptions, identifying the correct Web service becomes a time-consuming process that retrieves a vast amount of irrelevant Web services. This creates the need for an efficient Web service mining framework for Web service discovery. Discovery involves matching, assessment and selection. Various complex relationships may introduce incompatibility in delivering and identifying efficient Web services, so the Web service requester may not obtain the exact useful services. Research has therefore emerged to develop methods that improve the accuracy of Web service discovery and match the best services. Two approaches to Web service discovery are available: the semantic-based approach and the syntactic-based approach. The semantic-based approach gives higher accuracy than the syntactic approach but requires more processing time, while the syntactic-based approach offers high flexibility. Thus, this paper presents a survey of semantic-based and syntactic-based approaches to Web service discovery and proposes a novel approach with better accuracy and flexibility than existing ones. Finally, it compares the existing approaches in Web service discovery.
20

Preiss, Judita, Mark Stevenson, and Robert Gaizauskas. "Exploring relation types for literature-based discovery." Journal of the American Medical Informatics Association 22, no. 5 (May 12, 2015): 987–92. http://dx.doi.org/10.1093/jamia/ocv002.

Abstract:
Objective: Literature-based discovery (LBD) aims to identify "hidden knowledge" in the medical literature by: (1) analyzing documents to identify pairs of explicitly related concepts (terms), then (2) hypothesizing novel relations between pairs of unrelated concepts that are implicitly related via a shared concept to which both are explicitly related. Many LBD approaches use simple techniques to identify semantically weak relations between concepts, for example, document co-occurrence. These generate huge numbers of hypotheses, difficult for humans to assess. More complex techniques rely on linguistic analysis, for example, shallow parsing, to identify semantically stronger relations. Such approaches generate fewer hypotheses, but may miss hidden knowledge. The authors investigate this trade-off in detail, comparing techniques for identifying related concepts to discover which are most suitable for LBD. Materials and methods: A generic LBD system that can utilize a range of relation types was developed. Experiments were carried out comparing a number of techniques for identifying relations. Two approaches were used for evaluation: replication of existing discoveries and the "time slicing" approach. Results: Previous LBD discoveries could be replicated using relations based either on document co-occurrence or on linguistic analysis. Using relations based on linguistic analysis generated many fewer hypotheses, but a significantly greater proportion of them were candidates for hidden knowledge. Discussion and conclusion: The use of linguistic analysis-based relations improves the accuracy of LBD without overly damaging coverage. LBD systems often generate huge numbers of hypotheses, which are infeasible to review manually. Improving their accuracy has the potential to make these systems significantly more usable.
APA, Harvard, Vancouver, ISO, and other styles
21

Turčínek, Pavel, and Arnošt Motyčka. "Knowledge discovery on consumers’ behaviour." Acta Universitatis Agriculturae et Silviculturae Mendelianae Brunensis 61, no. 7 (2013): 2893–901. http://dx.doi.org/10.11118/actaun201361072893.

Full text of the source
Abstract:
This paper summarizes results of the research project “Application of modern methods to data processing in the field of marketing research”, which was carried out at the Department of Informatics, Faculty of Business and Economics of Mendel University in Brno. Most of these results were presented at international conferences. It describes the use of knowledge discovery techniques on data from marketing research on consumers’ behaviour, dealing with issues of classification, cluster analysis, correlation, and association rules. For classification, various algorithms were used: a multi-layer perceptron neural network, self-organizing (Kohonen’s) maps, Bayesian networks, and decision tree induction. Apart from Kohonen’s maps, which were tested in MATLAB, all classification methods were tested in the Weka software. Weka was also used for clustering with the K-means, Expectation-Maximization, and DBSCAN methods. Correlation analysis was done using a statistical approach, and association rules were generated with the Apriori and FP-growth algorithms in Weka. The paper describes the above-mentioned methods, presents the results achieved in exploring the marketing research data, discusses the suitability of these methods for such data sets, and suggests further research possibilities for knowledge discovery on consumers’ behaviour.
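The association-rule step mentioned above can be illustrated with a brute-force frequent-itemset counter in the spirit of Apriori (the baskets and support threshold are invented; Weka's Apriori additionally prunes candidates level by level):

```python
from itertools import combinations
from collections import Counter

def frequent_itemsets(transactions, min_support, max_size=2):
    """Count itemsets of size 1..max_size and keep those meeting min_support
    (a brute-force stand-in for Apriori's level-wise candidate pruning)."""
    counts = Counter()
    for t in transactions:
        for k in range(1, max_size + 1):
            for itemset in combinations(sorted(t), k):
                counts[itemset] += 1
    n = len(transactions)
    return {s: c / n for s, c in counts.items() if c / n >= min_support}

baskets = [{"bread", "milk"}, {"bread", "butter"}, {"bread", "milk", "butter"}]
freq = frequent_itemsets(baskets, min_support=2/3)
print(freq[("bread", "milk")])   # 2 of 3 baskets -> 0.666...
```

From the surviving itemsets, rules such as bread → milk would then be scored by confidence, support(bread, milk) / support(bread).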
APA, Harvard, Vancouver, ISO, and other styles
22

Zhang, Jia, Chris Lee, Petr Votava, Tsengdar J. Lee, Shuai Wang, Venkatesh Sriram, Neeraj Saini, Pujita Rao, and Ramakrishna Nemani. "A Trust-Powered Technique to Facilitate Scientific Tool Discovery and Recommendation." International Journal of Web Services Research 12, no. 3 (July 2015): 25–47. http://dx.doi.org/10.4018/ijwsr.2015070102.

Full text of the source
Abstract:
While the open science community engenders many similar scientific tools as services, how to differentiate them and help scientists select and reuse existing software services developed by peers remains a challenge. Most of the existing service discovery approaches focus on finding candidate services based on functional and non-functional requirements as well as historical usage analysis. Complementary to the existing methods, this paper proposes to leverage human trust to facilitate software service selection and recommendation. A trust model is presented that leverages the implicit human factor to help quantify the trustworthiness of candidate services. A hierarchical Knowledge-Social-Trust (KST) network model is established to extract hidden knowledge from various publication repositories (e.g., DBLP) and social networks (e.g., Twitter and DBLP). As a proof of concept, a prototyping service has been developed to help scientists evaluate and visualize trust of services. The performance factor is studied and experience is reported.
APA, Harvard, Vancouver, ISO, and other styles
23

Pan, Zhiwen, Jiangtian Li, Yiqiang Chen, Jesus Pacheco, Lianjun Dai, and Jun Zhang. "Knowledge discovery in sociological databases." International Journal of Crowd Science 3, no. 3 (September 2, 2019): 315–32. http://dx.doi.org/10.1108/ijcs-09-2019-0023.

Full text of the source
Abstract:
Purpose The General Social Survey (GSS) is a government-funded survey that examines the socio-economic status, quality of life, and structure of contemporary society. The GSS data set is regarded as one of the authoritative sources for government and organization practitioners to make data-driven policies. Previous analytic approaches to GSS data sets combined expert knowledge with simple statistics. By utilizing emerging data mining algorithms, we propose a comprehensive data management and data mining approach for GSS data sets. Design/methodology/approach The approach operates in a two-phase manner: a data management phase, which improves the quality of GSS data by performing attribute pre-processing and filter-based attribute selection, and a data mining phase, which extracts hidden knowledge from the data set through prediction, classification, association, and clustering analysis. Findings According to the experimental evaluation results, the paper has the following findings: performing attribute selection on the GSS data set increases the performance of both classification and clustering analysis; all the data mining analyses can effectively extract hidden knowledge from the GSS data set; and the knowledge generated by the different analyses can to some extent cross-validate each other. Originality/value By leveraging the power of data mining techniques, the proposed approach can explore knowledge in a fine-grained manner with minimal human interference. Experiments on the Chinese General Social Survey data set are conducted to evaluate the performance of the approach.
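The filter-based attribute selection in the data management phase can be sketched with the simplest possible filter, a variance threshold, which scores attributes independently of any later mining model (the attribute names, data, and threshold below are invented):

```python
def variance(xs):
    """Population variance of a numeric column."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def filter_attributes(rows, names, min_var=0.01):
    """Keep only attributes whose variance exceeds a threshold --
    a filter-style selector applied before any classifier or clusterer."""
    cols = list(zip(*rows))
    return [n for n, col in zip(names, cols) if variance(col) > min_var]

rows = [(1.0, 5.0, 0.0), (2.0, 5.0, 0.0), (3.0, 5.0, 0.1)]
print(filter_attributes(rows, ["income", "region_code", "noise"]))
# ['income'] -- the constant and near-constant columns are dropped
```

Real filter methods replace variance with scores like information gain or a chi-squared statistic against the class attribute, but the structure is the same: score each attribute, keep those above a cutoff.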
APA, Harvard, Vancouver, ISO, and other styles
24

NASRAOUI, OLFA, and RAGHU KRISHNAPURAM. "AN EVOLUTIONARY APPROACH TO MINING ROBUST MULTI-RESOLUTION WEB PROFILES AND CONTEXT SENSITIVE URL ASSOCIATIONS." International Journal of Computational Intelligence and Applications 02, no. 03 (September 2002): 339–48. http://dx.doi.org/10.1142/s1469026802000646.

Full text of the source
Abstract:
We present a technique for simultaneously mining Web navigation patterns and maximally frequent context-sensitive itemsets (URL associations) from the historic user access data stored in Web server logs. A new hierarchical clustering technique that exploits the symbiosis between clusters in feature space and genetic biological niches in nature, called Hierarchical Unsupervised Niche Clustering (H-UNC), is presented. We use H-UNC as part of a complete system for knowledge discovery in Web usage data. Our approach does not require fixing the number of clusters in advance, is insensitive to initialization, can handle noisy data and general non-differentiable similarity measures, and automatically provides profiles at multiple resolution levels. Our experiments show that the algorithm is not only capable of extracting meaningful user profiles from real Web sites, but also discovers associations between distinct URL pages on a site at no additional cost. Unlike content-based association methods, our approach discovers associations between different Web pages based only on user access patterns, not on page content. Also, unlike traditional context-blind association discovery methods, H-UNC discovers context-sensitive associations that are only meaningful within a limited context or user profile.
APA, Harvard, Vancouver, ISO, and other styles
25

Shaik, Abdul Naveed, and Ansar Ali Khan. "Physiologically based pharmacokinetic (PBPK) modeling and simulation in drug discovery and development." ADMET and DMPK 7, no. 1 (February 23, 2019): 1–3. http://dx.doi.org/10.5599/admet.667.

Full text of the source
Abstract:
Physiologically based pharmacokinetic (PBPK) modeling is a mechanistic, physiology-based mathematical modeling technique. It integrates knowledge of drug-specific properties, including physicochemical and biopharmaceutical properties, with system-specific (physiological) properties to build a model that predicts the absorption, distribution, metabolism, and excretion (ADME) properties of a drug, as well as its pharmacokinetic behavior in preclinical species and humans.
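At its core, a PBPK model is a system of compartment mass-balance ODEs. A deliberately minimal one-compartment sketch (parameter values are illustrative only; real PBPK models couple many organ compartments via blood flows):

```python
def simulate_concentration(dose_mg, vd_l, cl_l_per_h, hours, dt=0.01):
    """One-compartment IV-bolus model: dC/dt = -(CL/Vd) * C,
    integrated with forward Euler. Real PBPK models chain many such
    mass balances (gut, liver, kidney, ...) linked by organ blood flows."""
    c = dose_mg / vd_l            # initial concentration, mg/L
    k_el = cl_l_per_h / vd_l      # first-order elimination rate, 1/h
    t = 0.0
    while t < hours:
        c -= k_el * c * dt
        t += dt
    return c

c6 = simulate_concentration(dose_mg=100, vd_l=50, cl_l_per_h=5, hours=6)
print(round(c6, 3))   # close to the analytic 2.0 * exp(-0.6) ≈ 1.098 mg/L
```

The same structure scales up: each organ compartment gets its own dC/dt equation, with drug-specific terms (clearance, partition coefficients) separated from system-specific ones (organ volumes, blood flows), which is exactly the drug/system split the abstract describes.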
APA, Harvard, Vancouver, ISO, and other styles
26

Singh, Dharmpal. "An Effort to Design an Integrated System to Extract Information Under the Domain of Metaheuristics." International Journal of Applied Evolutionary Computation 8, no. 3 (July 2017): 13–52. http://dx.doi.org/10.4018/ijaec.2017070102.

Full text of the source
Abstract:
The main objective of this work is to develop an integrated system capable of extracting precise information (knowledge) from any stored information using data mining and soft computing techniques. Prior research on knowledge discovery has typically applied a single data mining or soft computing model without reviewing that model's performance against the alternatives; comparisons across models from the soft computing, statistical, and data mining domains have remained unaddressed, a limitation of the existing surveys. This gap motivates research on effective knowledge discovery from a given set of information that exploits the versatility and view-generation potential of soft computing tools. A modified harmony search technique is proposed in this paper, and it is observed to outperform the other soft computing techniques on both training and test data. The results of the modified harmony search technique are also cross-checked against the residual error, and the technique is applied to other data sets to check the optimality of the models.
APA, Harvard, Vancouver, ISO, and other styles
27

Ltifi, Hela, Emna Ben Mohamed, and Mounir ben Ayed. "Interactive visual knowledge discovery from data-based temporal decision support system." Information Visualization 15, no. 1 (February 8, 2015): 31–50. http://dx.doi.org/10.1177/1473871614567794.

Full text of the source
Abstract:
The article presents a generic interactive visual analytics solution that provides temporal decision support by combining knowledge discovery from data modules with interactive visual representations. Its design decisions are based on a classification of visual representation techniques according to the criteria of temporal data type, periodicity, and dimensionality. The design proposal is applied to an existing medical knowledge discovery from data–based decision support system that assists physicians in the fight against nosocomial infections in intensive care units. Our solution is fully implemented and evaluated.
APA, Harvard, Vancouver, ISO, and other styles
28

Atefi, Soodeh, Sakshyam Panda, Emmanouil Panaousis, and Aron Laszka. "Principled Data-Driven Decision Support for Cyber-Forensic Investigations." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 4 (June 26, 2023): 5010–17. http://dx.doi.org/10.1609/aaai.v37i4.25628.

Full text of the source
Abstract:
In the wake of a cybersecurity incident, it is crucial to promptly discover how the threat actors breached security in order to assess the impact of the incident and to develop and deploy countermeasures that can protect against further attacks. To this end, defenders can launch a cyber-forensic investigation, which discovers the techniques that the threat actors used in the incident. A fundamental challenge in such an investigation is prioritizing the investigation of particular techniques since the investigation of each technique requires time and effort, but forensic analysts cannot know which ones were actually used before investigating them. To ensure prompt discovery, it is imperative to provide decision support that can help forensic analysts with this prioritization. A recent study demonstrated that data-driven decision support, based on a dataset of prior incidents, can provide state-of-the-art prioritization. However, this data-driven approach, called DISCLOSE, is based on a heuristic that utilizes only a subset of the available information and does not approximate optimal decisions. To improve upon this heuristic, we introduce a principled approach for data-driven decision support for cyber-forensic investigations. We formulate the decision-support problem using a Markov decision process, whose states represent the states of a forensic investigation. To solve the decision problem, we propose a Monte Carlo tree search based method, which relies on a k-NN regression over prior incidents to estimate state-transition probabilities. We evaluate our proposed approach on multiple versions of the MITRE ATT&CK dataset, which is a knowledge base of adversarial techniques and tactics based on real-world cyber incidents, and demonstrate that our approach outperforms DISCLOSE in terms of techniques discovered per effort spent.
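The k-NN regression over prior incidents can be sketched as follows: the probability that an uninvestigated technique was used is estimated from the k past incidents most similar to the current investigation state (the incident data and the Hamming-style distance below are invented for illustration):

```python
def knn_probability(past_incidents, state, technique, k=3):
    """Estimate P(technique used | observed state) as the fraction of the
    k nearest past incidents (Hamming distance on confirmed techniques)
    in which `technique` was used."""
    def dist(incident):
        return sum(incident.get(t) != v for t, v in state.items())
    nearest = sorted(past_incidents, key=dist)[:k]
    return sum(i.get(technique, False) for i in nearest) / k

past = [
    {"phishing": True,  "lateral_move": True,  "exfiltration": True},
    {"phishing": True,  "lateral_move": True,  "exfiltration": False},
    {"phishing": True,  "lateral_move": False, "exfiltration": False},
    {"phishing": False, "lateral_move": False, "exfiltration": False},
]
# So far the analyst has confirmed phishing and lateral movement:
p = knn_probability(past, {"phishing": True, "lateral_move": True}, "exfiltration")
print(p)   # 1 of the 3 nearest past incidents involved exfiltration
```

In the paper's setting these estimates serve as state-transition probabilities inside the Monte Carlo tree search, which then prioritizes which technique to investigate next.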
APA, Harvard, Vancouver, ISO, and other styles
29

Lamghari, Zineb. "Process Mining: Auditing Approach Based on Process Discovery Using Frequency Paths Concept." ASM Science Journal 17 (November 2, 2022): 1–11. http://dx.doi.org/10.32802/asmscj.2022.1225.

Full text of the source
Abstract:
In the company environment, the management team is responsible for producing normative models. A normative model is a standard model intended to audit all business processes in the same context. The audit operation encompasses four process mining activities, in a hybrid (offline and online) evaluation: detect, check, compare, and promote. This works well for structured business processes, but complex processes may deviate from the initially defined normative model context, so the latter must be refined for more precise results. This requires combining human knowledge, control-flow discovery algorithms, and process mining activities. To this end, we present a technique for reducing unstructured process models (Spaghetti process models) to structured ones (Lasagna process models). The framework outputs a refined normative model for improving future Business Process (BP) auditing operations. Moreover, this work introduces the sustainability advantage that can be gained by using process mining techniques.
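The frequency-paths idea can be illustrated with a directly-follows graph whose infrequent edges are dropped, which is the essence of reducing a Spaghetti model toward a Lasagna one (the event log is invented):

```python
from collections import Counter

def directly_follows(log):
    """Count how often activity a is directly followed by activity b."""
    dfg = Counter()
    for trace in log:
        for a, b in zip(trace, trace[1:]):
            dfg[(a, b)] += 1
    return dfg

def filter_paths(dfg, min_freq):
    """Keep only frequent paths -- the core of Spaghetti-to-Lasagna reduction."""
    return {edge: n for edge, n in dfg.items() if n >= min_freq}

log = [["register", "check", "approve"],
       ["register", "check", "approve"],
       ["register", "archive", "approve"]]   # rare deviation
dfg = directly_follows(log)
print(filter_paths(dfg, min_freq=2))
# {('register', 'check'): 2, ('check', 'approve'): 2}
```

The rare register → archive path disappears from the filtered graph; in an auditing setting such dropped low-frequency paths are exactly the deviations worth flagging against the normative model.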
APA, Harvard, Vancouver, ISO, and other styles
30

Li, Qiao, and Junming Liu. "Development of an Intelligent NLP-Based Audit Plan Knowledge Discovery System." Journal of Emerging Technologies in Accounting 17, no. 1 (October 1, 2019): 89–97. http://dx.doi.org/10.2308/jeta-52665.

Full text of the source
Abstract:
Auditors' discussions in audit plan brainstorming sessions provide valuable knowledge on how audit engagement teams evaluate information, identify and assess risks, and make audit decisions. Expertise and experience collected from experienced auditors can be used as decision support for future audit plan engagements. With the help of Natural Language Processing (NLP) techniques, this paper proposes an intelligent NLP-based audit plan knowledge discovery system (APKDS) that can collect and extract important content from audit brainstorming discussions and transfer the extracted content into an audit knowledge base for future use.
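A crude stand-in for the extraction stage of such a system is frequency-based term ranking over the discussion text (the stopword list, sample notes, and function name are invented; a real APKDS would use a full NLP pipeline with parsing and entity recognition):

```python
from collections import Counter
import re

STOPWORDS = {"the", "a", "of", "to", "and", "we", "is", "in", "for", "on"}

def key_terms(discussion, top_n=3):
    """Rank candidate risk terms in a brainstorming transcript by frequency --
    a simple stand-in for the extraction stage of an APKDS-like system."""
    words = re.findall(r"[a-z]+", discussion.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return [w for w, _ in counts.most_common(top_n)]

notes = ("Revenue recognition risk is high. Revenue cutoff testing needed. "
         "Inventory obsolescence risk noted; inventory counts planned.")
print(key_terms(notes))
```

The extracted terms would then be stored in the knowledge base alongside the engagement's metadata, so later teams can retrieve which risks peers flagged for similar clients.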
APA, Harvard, Vancouver, ISO, and other styles
31

Ding, Zhipeng, Hongxia Yun, and Enze Li. "A multimedia knowledge discovery-based optimal scheduling approach considering visual behavior in smart education." Mathematical Biosciences and Engineering 20, no. 3 (2023): 5901–16. http://dx.doi.org/10.3934/mbe.2023254.

Full text of the source
Abstract:
Nowadays, the convergence of intelligent computing techniques and education is a topic of keen interest for both academia and industry, producing the conception of smart education. Automatic planning and scheduling of course contents is arguably the most important practical task in smart education. As online and offline educational activities are visual behaviors, capturing and extracting their principal features remains challenging. To break through current barriers, this paper combines visual perception technology with data mining theory and proposes a multimedia knowledge discovery-based optimal scheduling approach for smart education about painting. First, data visualization is carried out to analyze the adaptive design of visual morphologies. On this basis, a multimedia knowledge discovery framework is formulated that can carry out multimodal inference tasks, so as to compute specific course contents for specific individuals. Finally, simulation results show that the proposed optimal scheduling scheme works well in content planning for smart education scenarios.
APA, Harvard, Vancouver, ISO, and other styles
32

Lamghari, Zineb. "An Integrated Approach for Discovering Process Models According to Business Process Types." ASM Science Journal 16 (July 26, 2021): 1–14. http://dx.doi.org/10.32802/asmscj.2021.767.

Full text of the source
Abstract:
Process discovery aims at automatically generating a process model that accurately describes a Business Process (BP) based on event data. Existing discovery algorithms assume that recorded events result only from an operational BP type, while the management community defines three BP types — Management, Support, and Operational — distinguished by properties such as the main business process objective as domain knowledge. This highlights the inability of current process discovery techniques to obtain process models for the Management and Support types. In this paper, we demonstrate that business process types can guide the process discovery technique in generating process models. Special attention is given to the use of process mining to deal with this challenge.
APA, Harvard, Vancouver, ISO, and other styles
33

Li, Jing Min, Jin Yao, and Yong Mou Liu. "A Model for Acquisition of Implicit Design Knowledge Based on KDD." Materials Science Forum 505-507 (January 2006): 505–10. http://dx.doi.org/10.4028/www.scientific.net/msf.505-507.505.

Full text of the source
Abstract:
Knowledge discovery in databases (KDD) represents a new direction in data processing and knowledge innovation. Design is a knowledge-intensive process driven by various design objectives, and acquiring implicit knowledge is a key difficulty for intelligent systems applied to mechanical product design. In this study, the characteristics of implicit design knowledge and KDD are analyzed, a model for product design knowledge acquisition is set up, and the key techniques, including the expression and application of domain knowledge and the methods of knowledge discovery, are discussed. An example illustrates that the proposed method can effectively extract the engineering knowledge in design cases and can improve the quality and intelligence of product design.
APA, Harvard, Vancouver, ISO, and other styles
34

Smiti, Abir, and Zied Elouedi. "Dynamic maintenance case base using knowledge discovery techniques for case based reasoning systems." Theoretical Computer Science 817 (May 2020): 24–32. http://dx.doi.org/10.1016/j.tcs.2019.06.026.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
35

Naik, Midde Venkateswarlu, D. Vasumathi, and A. P. Siva Kumar. "An Improved Intelligent Approach to Enhance the Sentiment Classifier for Knowledge Discovery Using Machine Learning." International Journal of Sensors, Wireless Communications and Control 10, no. 4 (December 18, 2020): 582–93. http://dx.doi.org/10.2174/2210327910999200528114552.

Full text of the source
Abstract:
Aims: The proposed research work is an evolutionarily enhanced method for sentiment (emotion) classification of unstructured review text in the big data field. Sentiment analysis plays a vital role for the current generation in extracting valid decision points about any aspect, such as movie, educational institute, or political ratings. The proposed hybrid approach combines optimal feature selection using Particle Swarm Optimization (PSO) with sentiment classification through a Support Vector Machine (SVM). Its performance is evaluated with statistical measures such as precision, recall, sensitivity, and specificity, and compared with existing approaches, which have so far achieved sentiment classification accuracies on English text of up to 94%. In the proposed scheme, an average sentiment classification accuracy of 99% across distinct data sets is achieved by tuning various SVM parameters, such as the constant c and the kernel gamma value, in combination with the PSO technique. The method uses three publicly available data sets: airline sentiment, weather, and global warming. Results are trained and tested using 10-fold cross-validation (FCV), with a confusion matrix used to assess classifier accuracy.
Background: Sentiment Analysis (SA), or opinion mining, has become a fascinating research domain; its key area is sentiment classification on semi-structured or unstructured data in different languages. User-Generated Content (UGC) from various sources has grown significantly with the rapid growth of the web environment. This huge user-generated social media data provides substantial value for discovering hidden knowledge, correlations, patterns, trends, or sentiment about any specific entity. SA is a computational analysis that determines the actual opinion of an entity expressed as text, also described as the computation of the emotional polarity expressed over social media as natural text in miscellaneous languages. Usually, an effective automatic sentiment classifier model depends on feature selection and classification algorithms. Methods: The proposed work uses a support vector machine as the classification technique and particle swarm optimization for feature selection. Various permutations and combinations of parameters, with and without kernels, were tuned to obtain the desired sentiment classification results on the three data sets (airline, global warming, and weather sentiment), which are freely hosted for research practice. Results: The proposed method outperformed other machine learning techniques with a 99.2% average accuracy in classifying sentiment on the different data sets. The high accuracy attained in classifying opinion in review text demonstrates superior effectiveness over existing sentiment classifiers. Conclusion: Sentiment classifier accuracy was improved with the help of a kernel-based Support Vector Machine (SVM) through parameter optimization, and the optimal features for classifying sentiment in review documents were determined with a particle swarm optimization approach.
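The PSO-based feature selection can be sketched as binary PSO over 0/1 feature masks, where velocities pass through a sigmoid to give bit-flip probabilities. In the paper's setting the fitness would be SVM cross-validation accuracy; the synthetic fitness below simply rewards agreement with a known-good mask so the sketch stays self-contained:

```python
import math
import random

def binary_pso(n_features, fitness, n_particles=10, iters=30, seed=1):
    """Minimal binary PSO: each particle is a 0/1 feature mask; velocities
    are squashed by a sigmoid into bit-flip probabilities. In the paper's
    setting, `fitness` would be SVM cross-validation accuracy on the mask."""
    rng = random.Random(seed)
    pos = [[rng.randint(0, 1) for _ in range(n_features)] for _ in range(n_particles)]
    vel = [[0.0] * n_features for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = max(pbest, key=fitness)[:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_features):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                prob = 1.0 / (1.0 + math.exp(-vel[i][d]))
                pos[i][d] = 1 if rng.random() < prob else 0
            if fitness(pos[i]) > fitness(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = max(pbest + [gbest], key=fitness)[:]
    return gbest

# Synthetic fitness: features 0 and 2 are informative, the rest are noise.
target = [1, 0, 1, 0, 0, 0]
fit = lambda mask: sum(m == t for m, t in zip(mask, target))
print(binary_pso(6, fit))
```

Swapping the synthetic fitness for an SVM accuracy estimate (and adding an early-stopping criterion) turns this sketch into the wrapper-style feature selection the abstract describes.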
APA, Harvard, Vancouver, ISO, and other styles
36

Omidipoor, Morteza, Ara Toomanian, Najmeh Neysani Samany, and Ali Mansourian. "Knowledge Discovery Web Service for Spatial Data Infrastructures." ISPRS International Journal of Geo-Information 10, no. 1 (December 31, 2020): 12. http://dx.doi.org/10.3390/ijgi10010012.

Full text of the source
Abstract:
The size, volume, variety, and velocity of geospatial data collected by geo-sensors, people, and organizations are increasing rapidly. Spatial Data Infrastructures (SDIs) are being developed to facilitate the sharing of stored data in a distributed, homogeneous environment. Extracting high-level information and knowledge from such datasets to support decision making undoubtedly requires a relatively sophisticated methodology. A variety of spatial data mining techniques have been developed to extract knowledge from spatial data, and they work well on centralized systems; applying them to distributed data in an SDI, however, has remained a challenge. This paper proposes a creative solution based on distributed computing and geospatial web service technologies for knowledge extraction in an SDI environment. The proposed approach, called Knowledge Discovery Web Service (KDWS), can be used as a layer on top of SDIs to give spatial data users and decision makers the ability to extract knowledge from massive heterogeneous spatial data. By proposing and testing a system architecture for KDWS, this study contributes a service-oriented framework for performing spatial data mining on top of SDIs. We implemented and tested spatial clustering, classification, and association rule mining in an interoperable environment. In addition to the interface implementation, a prototype web-based system was designed for extracting knowledge from real geodemographic data for the city of Tehran. The proposed solution allows a dynamic, easier, and much faster procedure for extracting knowledge from spatial data.
APA, Harvard, Vancouver, ISO, and other styles
37

Ullah, Zafar, Muhammad Uzair, and Arshad Mehmood. "Extraction of Key Motifs as a Preview from 2017 Nobel Prize Winning Novel, ‘Never Let Me Go’." Journal of Research in Social Sciences 7, no. 2 (January 18, 2021): 83–98. http://dx.doi.org/10.52015/jrss.7i2.80.

Full text of the source
Abstract:
Word clouds present interactive visuals together with their statistical data; knowledge discovery and aesthetic data visualization thus interlink to produce an interactive word cloud that is at once interesting, textual, statistical, and visual. This study generates an interactive word cloud, Cirrus, from statistical data to give readers a preview of the novel's text. The Cirrus tool is selected from the Voyant open-access toolkit, and the generated word cloud and statistical data are analyzed with a mixed method, drawing insight from Rakesh Agrawal's Knowledge Discovery Theory, which seeks innovative and interesting knowledge patterns. The thematic word cloud verifies already known themes and discovers new, interesting ones. The study reveals that all the mentioned key themes can be easily extracted from a voluminous novel with the help of Cirrus, and key motifs are presented in the word cloud for readers. Unwritten themes, on the other hand, cannot be extracted through machine learning tools; that remains the task of human cognition. Primarily, this novel-based study reveals the names of chief characters, for instance “Tommy (496),” “Ruth (455),” and “I (Kathy) (355).” Furthermore, motifs of nostalgic memories are discovered through words such as “remember (143)” and “thought (126)” about “Hailsham (203),” “carer (74),” “sex (80),” and sex “lectures (8).” This previewing technique prepares the reader's mind and gives an epigrammatic digital view of the text; the visual themes of the word cloud leave an indelible mark on the slate of memory.
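The statistics behind a Cirrus-style word cloud are plain frequency counts, with each count driving a word's font size; a minimal sketch (the sample sentence is invented):

```python
from collections import Counter
import re

def word_cloud_counts(text, top_n=5):
    """Frequency counts behind a Cirrus-style word cloud: each word's
    count determines its font size in the rendered cloud."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words).most_common(top_n)

sample = "Tommy and Ruth remember Hailsham. Kathy would remember Tommy."
print(word_cloud_counts(sample, top_n=2))
# [('tommy', 2), ('remember', 2)]
```

Voyant additionally filters stopwords before counting, which is why character names and content words like "remember" dominate the published cloud rather than articles and conjunctions.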
APA, Harvard, Vancouver, ISO, and other styles
38

Sanchez-Segura, Maria-Isabel, Roxana González-Cruz, Fuensanta Medina-Dominguez, and German-Lenin Dugarte-Peña. "Valuable Business Knowledge Asset Discovery by Processing Unstructured Data." Sustainability 14, no. 20 (October 11, 2022): 12971. http://dx.doi.org/10.3390/su142012971.

Full text of the source
Abstract:
Modern organizations are challenged to enact a digital transformation and improve their competitiveness while contributing to the ninth Sustainable Development Goal (SDG), “Build resilient infrastructure, promote sustainable industrialization and foster innovation”. Discovering the knowledge assets hidden in process data may help to digitalize processes. While working on a valuable knowledge asset discovery process, we found a major challenge: organizational data and knowledge are likely to be unstructured and undigitized, constraining the power of today's process mining methodologies (PMM). Whereas PMM has proven itself in digitally mature companies, its scope becomes wider with the complement proposed in this paper, embracing organizations that are still improving their digital maturity based on the data available to them. We propose the C4PM method, which integrates agile principles, systems thinking, and natural language processing techniques to analyze the behavioral patterns in organizational semi-structured or unstructured data from a holistic perspective, discover valuable hidden information, and uncover the related knowledge assets aligned with the organization's strategic or business goals. Those assets are the key to pointing out potential processes amenable to PMM, empowering a sustainable organizational digital transformation. A case study was conducted on a dataset containing information on employees' emails in a multinational company.
APA, Harvard, Vancouver, ISO, and other styles
39

Guo, Yu Dong. "Prototype System of Knowledge Management Based on Data Mining." Applied Mechanics and Materials 411-414 (September 2013): 251–54. http://dx.doi.org/10.4028/www.scientific.net/amm.411-414.251.

Full text of the source
Abstract:
Knowledge is a crucial resource for promoting economic development and societal progress; it includes facts, information, descriptions, and skills acquired through experience or education. As knowledge becomes increasingly prominent, knowledge management has become an important means of promoting the core competences of a corporation. The paper begins with the definition of knowledge management, then studies the process of knowledge discovery in databases (KDD), data mining techniques, and the SECI (Socialization, Externalization, Combination, Internalization) model of knowledge dimensions. Finally, a simple knowledge management prototype system based on KDD and data mining is proposed.
APA, Harvard, Vancouver, ISO, and other styles
40

Liu, Han, Alexander Gegov, and Mihaela Cocea. "Rule Based Networks: An Efficient and Interpretable Representation of Computational Models." Journal of Artificial Intelligence and Soft Computing Research 7, no. 2 (April 1, 2017): 111–23. http://dx.doi.org/10.1515/jaiscr-2017-0008.

Full text of the source
Abstract:
Due to the vast and rapid increase in the size of data, data mining has become an increasingly important tool for knowledge discovery, to prevent the situation of rich data but poor knowledge. In this context, machine learning can be seen as a powerful approach to achieving intelligent data mining, and in practice it is also an intelligent approach for predictive modelling. Rule learning methods, a special type of machine learning method, can be used to build a rule based system, a special type of expert system, for both knowledge discovery and predictive modelling. A rule based system may be represented through different structures. The techniques for representing rules are known as rule representation, which is significant for knowledge discovery in relation to the interpretability of the model, as well as for predictive modelling with regard to efficiency in predicting unseen instances. This paper justifies the significance of rule representation and presents several existing representation techniques. Two novel networked topologies for rule representation are developed and evaluated against existing techniques. The paper also includes a complexity analysis of the networked topologies to show their advantages over existing techniques in terms of model interpretability and computational efficiency.
41

V, Bindhu. "Artificial Intelligence based Business Process Automation for Enhanced Knowledge Management." June 2021 3, no. 2 (July 27, 2021): 65–78. http://dx.doi.org/10.36548/jeea.2021.2.001.

Abstract:
A customer relationship management (CRM) system based on Artificial Intelligence (AI) is used to discover critical success factors (CSFs) in order to improve the automated business process and deliver better knowledge management (KM). Different factors contribute towards achieving efficient knowledge management in CRM systems with AI schemes, and the key elements may be identified in a variety of ways; for this purpose, the Delphi technique, the nominal group technique, and brainstorming are used. Using the interpretive structural modelling (ISM) approach, ten key variables, their degree of significance, and their interactions are determined. Of the ten variables identified for integrating KM, CRM, and AI, CSFs such as funding, leadership, and support are the most important. This approach has the potential to significantly improve business processes.
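The ISM step mentioned above rests on a reachability matrix derived from a binary direct-influence matrix. A minimal sketch using Warshall's transitive closure is shown below; the 4-factor adjacency matrix is invented for illustration and is not the paper's ten-variable model.

```python
# Sketch of the core ISM computation: turn a direct-influence matrix into a
# reachability matrix by boolean transitive closure (Warshall's algorithm).

def reachability(adj):
    """Reachability matrix: r[i][j] = 1 if factor i influences factor j,
    directly or through a chain; every factor reaches itself."""
    n = len(adj)
    r = [[bool(adj[i][j]) or i == j for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                r[i][j] = r[i][j] or (r[i][k] and r[k][j])
    return [[int(v) for v in row] for row in r]

# Hypothetical factors: 0=funding, 1=leadership, 2=training, 3=KM success.
adj = [[0, 0, 1, 0],   # funding drives training
       [1, 0, 1, 0],   # leadership drives funding and training
       [0, 0, 0, 1],   # training drives KM success
       [0, 0, 0, 0]]
print(reachability(adj))
# [[1, 0, 1, 1], [1, 1, 1, 1], [0, 0, 1, 1], [0, 0, 0, 1]]
```

In ISM, the row and column sums of this matrix are what separate driver factors (such as funding and leadership here) from dependent ones.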
42

Chun, Se-Hak, and Young-Woong Ko. "Geometric Case Based Reasoning for Stock Market Prediction." Sustainability 12, no. 17 (September 1, 2020): 7124. http://dx.doi.org/10.3390/su12177124.

Abstract:
Case based reasoning is a knowledge discovery technique that uses similar past problems to solve current new problems. It has been applied to many tasks, including the prediction of temporal variables, alongside learning techniques such as neural networks, genetic algorithms, and decision trees. This paper presents a geometric criterion for selecting similar cases that serve as exemplars for the target. The proposed technique, called geometric Case Based Reasoning, employs a shape-distance method based on the number of sign changes of the target case's features, especially when extracting nearest neighbors. This method thus overcomes a limitation of conventional case-based reasoning, which relies on Euclidean distance and does not consider how similar the nearest neighbors are to the target case in terms of the changes between previous and current features in a time series. These concepts are investigated against the backdrop of a practical application: the prediction of a stock market index. The results show that the proposed technique is significantly better than the random walk model at p < 0.01. However, it was not significantly better than the conventional CBR model in the hit rate measure and did not surpass conventional CBR in mean absolute percentage error.
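The sign-change idea behind the shape distance can be sketched as follows. This is a simplified illustration of the concept (counting positions where two series moved in different directions), not the paper's exact metric, and the price series are invented.

```python
# Sketch of a shape-style distance for case retrieval: compare the signs of
# successive changes in two feature sequences, so cases are "near" when they
# moved in the same direction, not merely when their levels are close.

def sign_changes(series):
    """Direction of movement between consecutive observations: +1, -1, or 0."""
    return [(nxt > cur) - (nxt < cur) for nxt, cur in zip(series[1:], series)]

def shape_distance(case_a, case_b):
    """Number of positions where the two cases moved in different directions."""
    return sum(sa != sb for sa, sb in zip(sign_changes(case_a), sign_changes(case_b)))

target = [100, 103, 101, 104]          # up, down, up
candidates = {
    "case1": [50, 55, 52, 58],         # up, down, up   -> distance 0
    "case2": [50, 48, 49, 47],         # down, up, down -> distance 3
}
nearest = min(candidates, key=lambda k: shape_distance(target, candidates[k]))
print(nearest)  # prints "case1"
```

Note that plain Euclidean distance would rank both candidates as far from the target (their levels are around 50, not 100), while the shape distance picks out the one with the same up-down-up pattern.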
43

Rajakumari, D. "PERFORMANCE VALIDATION OF PRIOR QUANTIZATION TECHNIQUES IN OUTLIERS CLASSIFICATION USING WDBC DATASET." International Journal of Engineering Technologies and Management Research 5, no. 4 (February 26, 2020): 48–56. http://dx.doi.org/10.29121/ijetmr.v5.i4.2018.207.

Abstract:
Data mining is the process of analyzing enormous amounts of data and summarizing them into useful knowledge. The use of data mining approaches is growing quickly; classification techniques in particular offer a very efficient way to classify data, which matters in the decision-making processes of medical practitioners. This study presents quantization and validation (OQV) techniques for fast outlier detection in the large WDBC data set. The use of distance metrics makes the algorithm linear in the number of objects and ensures sequential scanning. The inclusion of a direct quantization technique and explicit cluster discovery keeps the method simple and economical. A comparative analysis of the proposed OQV techniques with triangular boundary-based classification and Weighing-based Feature Selection and Monotonic Classification (WFSMC), in terms of accuracy, precision, recall, and the number of attributes, confirms the effectiveness of OQV for large data sets.
44

Kraker, Peter, Christopher Kittel, and Asura Enkhbayar. "Open Knowledge Maps: Creating a Visual Interface to the World’s Scientific Knowledge Based on Natural Language Processing." 027.7 Zeitschrift für Bibliothekskultur 4, no. 2 (November 11, 2016): 98–103. http://dx.doi.org/10.12685/027.7-4-2-157.

Abstract:
The goal of Open Knowledge Maps is to create a visual interface to the world's scientific knowledge. The base for this visual interface consists of so-called knowledge maps, which enable the exploration of existing knowledge and the discovery of new knowledge. Our open source knowledge mapping software applies a mixture of summarization techniques and similarity measures to article metadata, iteratively chained together. After processing, the representation is saved in a database for use in a web visualization. In the future, we want to create a space for collective knowledge mapping that brings together individuals and communities involved in exploration and discovery. We want to enable people to guide each other in their discovery by collaboratively annotating and modifying the automatically created maps.
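One of the similarity measures such a metadata pipeline could chain is a plain bag-of-words cosine over titles or abstracts. The sketch below is purely illustrative and is not the project's actual implementation, which applies several summarization and similarity steps.

```python
# Sketch of one building block: cosine similarity between two metadata strings
# treated as bags of words. Real pipelines would tokenize, stem, and weight
# terms (e.g. tf-idf); this shows only the core overlap computation.
from collections import Counter
import math

def cosine(text_a, text_b):
    """Cosine similarity of raw term-count vectors; 0.0 for no overlap."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

s = cosine("open knowledge maps visual interface",
           "visual interface to scientific knowledge")
print(round(s, 2))  # prints 0.6
```

Pairwise similarities like this, computed over all articles in a result set, are the kind of input a layout algorithm can turn into a two-dimensional knowledge map.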
45

Hu, Ping, Dong-xiao Gu, and Yu Zhu. "Collaborative Case-Based Reasoning for Knowledge Discovery of Elders Health Assessment System." Open Biomedical Engineering Journal 8, no. 1 (September 29, 2014): 68–74. http://dx.doi.org/10.2174/1874120701408010068.

Abstract:
Existing Elders Health Assessment (EHA) systems based on single-case-library reasoning have a low level of intelligence, poor coordination, and limited capabilities for assessment decision support. To effectively support knowledge reuse in EHA systems, this paper proposes collaborative case reasoning and applies it to the whole knowledge reuse process of an EHA system. It proposes a multi-case-library reasoning application framework for the EHA knowledge reuse system, and studies key techniques such as case representation, case retrieval algorithms, case optimization and correction, and reuse. For case representation, XML-based multi-case representation is applied for case organization and storage, to facilitate case retrieval and management. For retrieval, a knowledge-guided nearest-neighbor approach is proposed. Given the complexity of EHA, Gray Relational Analysis with a weighted Euclidean distance is used to measure similarity and thereby improve case retrieval accuracy.
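The weighted grey relational similarity mentioned above can be sketched as follows. The distinguishing coefficient rho, the feature values, and the weights are illustrative assumptions, not values from the paper.

```python
# Sketch of a weighted grey relational grade between a target case and stored
# cases: higher grade means more similar, so retrieval takes the max.

def grey_relational_grade(target, case, weights, rho=0.5):
    """Weighted grey relational grade in (0, 1]; rho is the usual
    distinguishing coefficient (0.5 is a common default)."""
    diffs = [abs(t - c) for t, c in zip(target, case)]
    d_min, d_max = min(diffs), max(diffs)
    if d_max == 0:                       # identical cases
        return 1.0
    coeffs = [(d_min + rho * d_max) / (d + rho * d_max) for d in diffs]
    return sum(w * g for w, g in zip(weights, coeffs)) / sum(weights)

# Hypothetical normalised health indicators and feature weights.
target = [0.7, 0.4, 0.9]
cases = {"caseA": [0.6, 0.5, 0.8], "caseB": [0.1, 0.9, 0.2]}
weights = [0.5, 0.2, 0.3]
best = max(cases, key=lambda k: grey_relational_grade(target, cases[k], weights))
print(best)  # prints "caseA"
```

Unlike a raw Euclidean distance, the grey relational coefficients normalise each feature's difference against the worst difference in the case, which keeps one badly mismatched feature from dominating the comparison.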
46

Lage, Olga, María Ramos, Rita Calisto, Eduarda Almeida, Vitor Vasconcelos, and Francisca Vicente. "Current Screening Methodologies in Drug Discovery for Selected Human Diseases." Marine Drugs 16, no. 8 (August 14, 2018): 279. http://dx.doi.org/10.3390/md16080279.

Abstract:
The rise of many deadly diseases, such as infections by multidrug-resistant bacteria, calls for reinventing drug discovery. A better comprehension of the metabolism and regulation of diseases, growing knowledge from the study of the genomes of disease-causing microorganisms, the development of more representative disease models, and improvements in techniques, technologies, and computation applied to biology are advances that will foster drug discovery in the coming years. In this paper, several aspects of current methodologies for drug discovery are considered: antibacterials and antifungals, treatments for tropical diseases, antibiofilm and antiquorum-sensing agents, anticancer drugs, and neuroprotectors. For drug discovery, two complementary approaches can be applied: classical pharmacology, also known as phenotypic drug discovery, which is the historical basis of drug discovery, and reverse pharmacology, also designated target-based drug discovery. Screening methods based on phenotypic drug discovery have been used to discover new natural products, mainly of terrestrial origin. Examples of the discovery of marine natural products are provided. A section on future trends gives a comprehensive overview of recent advances that will foster the pharmaceutical industry.
47

Karthikeyani Visalakshi N., Shanthi S., and Lakshmi K. "MapReduce-Based Crow Search-Adopted Partitional Clustering Algorithms for Handling Large-Scale Data." International Journal of Cognitive Informatics and Natural Intelligence 15, no. 4 (October 2021): 1–23. http://dx.doi.org/10.4018/ijcini.20211001.oa32.

Abstract:
Cluster analysis is a prominent data mining technique in knowledge discovery; it uncovers the hidden patterns in data. K-Means, K-Modes, and K-Prototypes are partition-based clustering algorithms that select their initial centroids randomly. Because of this random selection, the algorithms can settle on locally optimal solutions. To address this issue, the strategy of the Crow Search algorithm is employed with these algorithms to obtain globally optimal solutions. With advances in information technology, data sizes have increased drastically, from terabytes to petabytes. To make the proposed algorithms suitable for handling such voluminous data, they are implemented in parallel with the Hadoop MapReduce framework. The proposed algorithms are evaluated on large-scale data, and the results are compared in terms of cluster evaluation measures and computation time across different numbers of nodes.
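To see why centroid initialisation matters, here is a minimal single-node K-Means sketch whose seeding step is exposed as a plug-in point. The paper replaces that step with Crow Search and distributes the computation over Hadoop MapReduce; neither the metaheuristic nor the distribution is reproduced here, and the toy points are invented.

```python
# Minimal Lloyd's K-Means on 2-D points. The `seed_centroids` argument is the
# plug-in point where a metaheuristic (e.g. Crow Search) could supply better
# starting centroids than random sampling.
import random

def kmeans(points, k, iters=20, seed_centroids=None, seed=0):
    rng = random.Random(seed)
    centroids = list(seed_centroids) if seed_centroids else rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                      # assignment step
            nearest = min(range(k),
                          key=lambda j: (p[0] - centroids[j][0]) ** 2
                                        + (p[1] - centroids[j][1]) ** 2)
            clusters[nearest].append(p)
        centroids = [                         # update step (keep empty clusters)
            (sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
            if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids

points = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centres = sorted(kmeans(points, 2, seed_centroids=[(0, 0), (10, 10)]))
print(centres)
```

With well-placed seeds the algorithm converges to the two obvious group means; a bad seeding (both seeds in the same group) can leave it stuck in a worse partition, which is exactly the local optimum the Crow Search strategy targets.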
48

Ltifi, Hela, Mounir Ben Ayed, Ghada Trabelsi, and Adel M. Alimi. "Perspective Wall Technique for Visualizing and Interpreting Medical Data." International Journal of Knowledge Discovery in Bioinformatics 3, no. 2 (April 2012): 45–61. http://dx.doi.org/10.4018/jkdb.2012040104.

Abstract:
Improving the confidence and comprehensibility of medical data, as well as harnessing human capacities for medical pattern recognition, is of significant interest for the coming years. In this context, we have created a visual knowledge discovery in databases application. It has been developed to efficiently and accurately make sense of a large collection of static and temporal patient data in the Intensive Care Unit, in order to prevent the occurrence of nosocomial infections. It is based on a data visualization technique, the perspective wall. This application is a good example of the usefulness of data visualization techniques in the medical domain.
49

Hussain, Zahraa Faiz, Hind Raad Ibraheem, Mohammad Alsajri, Ahmed Hussein Ali, Mohd Arfian Ismail, Shahreen Kasim, and Tole Sutikno. "A new model for iris data set classification based on linear support vector machine parameter's optimization." International Journal of Electrical and Computer Engineering (IJECE) 10, no. 1 (February 1, 2020): 1079. http://dx.doi.org/10.11591/ijece.v10i1.pp1079-1084.

Abstract:
Data mining is known as the process of detecting patterns in large amounts of data, as part of knowledge discovery. Classification is a data analysis task that extracts a model describing important data classes. One of the outstanding classification methods in data mining is the support vector machine (SVM), which is capable of predicting outcomes and is generally more effective than other classification methods. SVM is a well-known supervised machine learning technique that has been applied successfully to a variety of regression, classification, and clustering problems in diverse domains such as gene expression analysis and web text mining. In this study, we propose a new model for classifying the iris data set using an SVM classifier, with a genetic algorithm optimizing the C and gamma parameters of the linear SVM; in addition, the principal components analysis (PCA) algorithm is used for feature reduction.
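The genetic-algorithm search over (C, gamma) can be sketched with a stand-in fitness function; in the paper, the fitness would be cross-validated SVM accuracy on the PCA-reduced iris data. All numeric ranges, GA settings, and the fitness optimum below are illustrative assumptions.

```python
# Toy genetic algorithm over the two SVM hyperparameters. The fitness function
# is a hypothetical stand-in with a known optimum at C=1.0, gamma=0.1; a real
# run would train and cross-validate an SVM at each evaluation.
import random

rng = random.Random(42)

def fitness(c, gamma):
    # Stand-in for cross-validated accuracy (higher is better); invented shape.
    return -((c - 1.0) ** 2 + (gamma - 0.1) ** 2)

def evolve(pop_size=20, generations=40):
    pop = [(rng.uniform(0.01, 10), rng.uniform(0.001, 1)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: fitness(*ind), reverse=True)
        parents = pop[: pop_size // 2]                 # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            (c1, g1), (c2, g2) = rng.sample(parents, 2)
            c, g = (c1 + c2) / 2, (g1 + g2) / 2        # averaging crossover
            c += rng.gauss(0, 0.1)                     # Gaussian mutation
            g += rng.gauss(0, 0.01)
            children.append((max(c, 1e-3), max(g, 1e-4)))
        pop = parents + children                       # elitism: parents survive
    return max(pop, key=lambda ind: fitness(*ind))

best_c, best_gamma = evolve()
print(round(best_c, 2), round(best_gamma, 2))
```

Because the parents are carried over unchanged each generation, the best candidate found never worsens, and after a few dozen generations the population clusters around the fitness optimum.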
50

D'Abrusco, Raffaele, Giuseppina Fabbiano, Omar Laurino, and Francesco Massaro. "Knowledge Discovery Workflows in the Exploration of Complex Astronomical Datasets." Proceedings of the International Astronomical Union 10, H16 (August 2012): 681–82. http://dx.doi.org/10.1017/s1743921314012885.

Abstract:
The massive amount of data produced by recent multi-wavelength large-area surveys has spurred the growth of unprecedentedly large and complex astronomical datasets that are proving traditional data analysis techniques more and more inadequate. Knowledge discovery techniques, while relatively new to astronomy, have been successfully applied in several other quantitative disciplines for the detection of patterns in extremely complex datasets. The concerted use of different unsupervised and supervised machine learning techniques, in particular, can be a powerful approach to answering specific questions involving high-dimensional datasets and degenerate observables. In this paper I present CLaSPS, a data-driven methodology for the discovery of patterns in high-dimensional astronomical datasets based on the combination of clustering techniques and pattern recognition algorithms. I also describe the results of applying CLaSPS to a sample of a peculiar class of AGNs, the blazars.