Dissertations on the topic "Rules database"

To see the other types of publications on this topic, follow the link: Rules database.

Format your source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the top 50 dissertations for your research on the topic "Rules database".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style of your choice: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf and read its abstract online, whenever these are available in the metadata.

Browse dissertations from a wide variety of disciplines and organise your bibliography correctly.

1

Zhang, Heng. "Efficient database management based on complex association rules." Thesis, Mittuniversitetet, Avdelningen för informationssystem och -teknologi, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-31917.

Full text of the source
Abstract:
The large amounts of data accumulated by applications are stored in a database. Because of this volume, name conflicts or missing values sometimes occur, which prevents certain types of analysis. In this work, we solve the name-conflict problem by comparing the similarity of the data and converting the test data into the form of a given template dataset. Studies on data use many methods to discover knowledge from a given dataset. One popular method is association rule mining, which can find associations between items. This study unifies the incomplete data based on association rules. However, most rules produced by traditional association rule mining are item-to-item rules, which is a less than perfect solution to the problem. The data recovery system is based on complex association rules and is able to find two additional types of association rules: prefix pattern-to-item and suffix pattern-to-item rules. Using complex association rules, several missing values are filled in. To find the frequent prefixes and frequent suffixes, the system uses an FP-tree to reduce time, cost, and redundancy. A phrase-segmentation method, which splits a sentence into several phrases based on the viscosity of two adjacent words, can also be used with this system. Additionally, methods such as data compression and hash maps were used to speed up the search.
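The rule-based repair idea described above can be sketched in a few lines. The following is a minimal, hypothetical illustration, not the thesis's actual system: it mines item-to-item rules from complete records and uses them to fill missing values; the prefix/suffix pattern rules and the FP-tree optimisation are omitted, and all field and record names are invented.

```python
from collections import Counter

def mine_item_rules(records, min_support=2):
    """Mine item-to-item rules (field_a, value_a) -> (field_b, value_b)
    from complete records, keeping pairs that co-occur >= min_support times."""
    pair_counts = Counter()
    for rec in records:
        items = [(f, v) for f, v in rec.items() if v is not None]
        for a in items:
            for b in items:
                if a[0] != b[0]:          # never relate a field to itself
                    pair_counts[(a, b)] += 1
    return {ab for ab, n in pair_counts.items() if n >= min_support}

def fill_missing(record, rules):
    """Fill None fields using the first rule whose antecedent is known."""
    filled = dict(record)
    known = {(f, v) for f, v in record.items() if v is not None}
    for field in record:
        if filled[field] is None:
            for antecedent, (rf, rv) in rules:
                if antecedent in known and rf == field:
                    filled[field] = rv
                    break
    return filled
```

A record such as `{"city": "Oslo", "country": None}` would then be completed from rules mined over complete records sharing the same fields.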
APA, Harvard, Vancouver, ISO, and other styles
2

Butylin, Sergei. "Predictive Maintenance Framework for a Vehicular IoT Gateway Node Using Active Database Rules." Thesis, Université d'Ottawa / University of Ottawa, 2018. http://hdl.handle.net/10393/38568.

Full text of the source
Abstract:
This thesis describes a proposed design and implementation of a predictive maintenance engine developed to fulfill the requirements of the STO (Société de transport de l'Outaouais) for maintaining the vehicles in its fleet. Predictive maintenance has proven to be an effective approach and has become an industry standard in many fields. However, in the transportation industry it is still in the stages of development due to the complexity of moving systems and the high dimensionality of the parameters involved. Because it is almost impossible to cover all use cases of the vehicle operational process using one particular approach to predictive maintenance, in our work we take a systematic approach to designing a predictive maintenance system in several steps. Each step is implemented at the corresponding development stage based on the data accumulated during the system's functioning cycle. This thesis delves into the process of designing the general infrastructural model of the fleet management system (FMS), while focusing on the edge gateway module located on the vehicle and its function of detecting maintenance events based on the current vehicle status. Several approaches may be used to detect maintenance events, such as a machine learning approach or an expert system-based approach. While the final version of the fleet management system will use a hybrid approach, in this thesis we chose to focus on the second option, based on expert knowledge; machine learning has been left for future implementation since it requires extensive training data to be gathered prior to conducting experiments and actualizing operations. Inspired by the IDEA methodology, which promotes mapping business rules as software classes and using the object-relational model for mapping objects to database entities, we take active database features as the base for our rule engine implementation.
However, in contrast to the IDEA methodology, which seeks to describe the specific system and its sub-modules and then build active rules based on the interactions between sub-systems, we are not aware of the functional structure of the vehicle due to its complexity. Instead, we develop a framework for creating specific active rules based on abstract classifications structured as ECA (event-condition-action) rules, with some extensions made for the specifics of vehicle maintenance. The thesis describes an implementation of such a framework, and particularly of the rule engine module, using active database features, which makes it possible to encapsulate the active behaviour inside the database and decouple event detection from other functionalities. We provide the system with a set of example rules and then conduct a series of experiments analyzing the system for performance and correctness of event detection.
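The ECA (event-condition-action) structure described above can be sketched as a small in-memory rule engine. This is a hypothetical illustration of the concept, not the thesis's implementation; the class, event, and field names are invented.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EcaRule:
    """Event-condition-action rule: when `event` is raised and
    condition(payload) holds, run action(payload)."""
    event: str
    condition: Callable[[dict], bool]
    action: Callable[[dict], None]

class RuleEngine:
    """Minimal in-memory ECA engine: dispatches raised events to rules."""
    def __init__(self):
        self.rules = []

    def register(self, rule: EcaRule) -> None:
        self.rules.append(rule)

    def raise_event(self, event: str, payload: dict) -> int:
        """Fire every matching rule; return how many rules fired."""
        fired = 0
        for rule in self.rules:
            if rule.event == event and rule.condition(payload):
                rule.action(payload)
                fired += 1
        return fired
```

A maintenance rule might then be registered as `EcaRule("sensor_update", lambda p: p["oil_temp"] > 110, open_ticket)` (thresholds and names are invented for the example).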
APA, Harvard, Vancouver, ISO, and other styles
3

Savasere, Ashok. "Efficient algorithms for mining association rules in large databases of customer transactions." Diss., Georgia Institute of Technology, 1998. http://hdl.handle.net/1853/8260.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
4

Visavapattamawon, Suwanna. "Application of active rules to support database integrity constraints and view management." CSUSB ScholarWorks, 2001. https://scholarworks.lib.csusb.edu/etd-project/1981.

Full text of the source
Abstract:
The project demonstrates the enforcement of integrity constraints in both conventional and active database systems. It implements a more complex user-defined constraint, a complicated view, and more detailed database auditing on the active database system.
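As a concrete illustration of an active rule enforcing a user-defined integrity constraint, here is a minimal sketch using an SQLite trigger driven from Python; the table, trigger, and salary range are invented for the example and are not from the project itself.

```python
import sqlite3

# Hypothetical schema: an active rule (trigger) rejects out-of-range salaries.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employee (name TEXT, salary REAL);
CREATE TRIGGER salary_check BEFORE INSERT ON employee
WHEN NEW.salary < 0 OR NEW.salary > 100000
BEGIN
    SELECT RAISE(ABORT, 'salary out of range');
END;
""")

conn.execute("INSERT INTO employee VALUES ('ann', 50000)")   # passes the rule
try:
    conn.execute("INSERT INTO employee VALUES ('bob', -10)")  # violates it
except sqlite3.IntegrityError as exc:
    rejected = str(exc)
```

The constraint lives inside the database, so every application that inserts into `employee` is checked, independent of application code.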
APA, Harvard, Vancouver, ISO, and other styles
5

李守敦 and Sau-dan Lee. "Maintenance of association rules in large databases." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1997. http://hub.hku.hk/bib/B31215531.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
6

Lee, Sau-dan. "Maintenance of association rules in large databases /." Hong Kong : University of Hong Kong, 1997. http://sunzi.lib.hku.hk/hkuto/record.jsp?B19003250.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
7

Singh, Rohit Ph D. Massachusetts Institute of Technology. "Automatically learning optimal formula simplifiers and database entity matching rules." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/113938.

Full text of the source
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 153-161).
Traditionally, machine learning (ML) is used to find a function from data that optimizes a numerical score. Synthesis, on the other hand, is traditionally used to find a function (or a program) that can be derived from a grammar and satisfies a logical specification. The boundary between ML and synthesis has been blurred by some recent work [56,90], but this interaction between ML and synthesis has not been fully explored. In this thesis, we focus on the problem of finding a function, given large amounts of data, such that the function satisfies a logical specification and also optimizes a numerical score over the input data. We present a framework to solve this problem in two impactful application domains: formula simplification in constraint solvers and database entity matching (EM). First, we present a system called Swapper, based on our framework, that can automatically generate code for efficient formula simplifiers specialized to a class of problems. Formula simplification is an important part of modern constraint solvers, and writing efficient simplifiers has largely been an arduous manual task. Evaluation of Swapper on multiple applications of the Sketch constraint solver showed a 15-60% improvement over the existing hand-crafted simplifier in Sketch. Second, we present a system called EM-Synth, based on our framework, that generates EM rules that are as effective as, and more interpretable than, those of state-of-the-art techniques. Database entity matching is a critical part of data integration and cleaning, and it usually involves learning rules or classifiers from labeled examples. Evaluation of EM-Synth on multiple real-world datasets against other interpretable (shallow decision trees, SIFI [116]) and non-interpretable (SVM, deep decision trees) methods showed that EM-Synth generates more concise and interpretable rules without sacrificing too much accuracy.
by Rohit Singh.
Ph. D.
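A rewrite-rule-based formula simplifier of the kind Swapper generates can be illustrated with a hand-written sketch. The rules below (identity/annihilator elimination for Boolean and/or/not over a small tuple encoding) are a generic textbook example, not Swapper's learned output.

```python
def simplify(expr):
    """Bottom-up rewrite pass over ('and'|'or'|'not', args...) tuples.
    Leaves are True, False, or variable-name strings."""
    if not isinstance(expr, tuple):
        return expr                      # leaf: constant or variable
    op, *args = expr
    args = [simplify(a) for a in args]   # simplify children first
    if op == "not":
        a = args[0]
        if a is True:
            return False
        if a is False:
            return True
        return ("not", a)
    if op == "and":
        if False in args:                # annihilator: x AND False = False
            return False
        args = [a for a in args if a is not True]   # identity: drop True
        if not args:
            return True
        return args[0] if len(args) == 1 else ("and", *args)
    if op == "or":
        if True in args:                 # annihilator: x OR True = True
            return True
        args = [a for a in args if a is not False]  # identity: drop False
        if not args:
            return False
        return args[0] if len(args) == 1 else ("or", *args)
```

Swapper's contribution is to learn which such rules pay off for a given workload and emit specialized simplifier code automatically; this sketch only shows the rewrite mechanism itself.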
APA, Harvard, Vancouver, ISO, and other styles
8

Dudgikar, Mahesh. "A layered optimizer for mining association rules over relational database management systems." [Florida] : State University System of Florida, 2000. http://etd.fcla.edu/etd/uf/2000/ana6135/Master.pdf.

Full text of the source
Abstract:
Thesis (M.S.)--University of Florida, 2000.
Title from first page of PDF file. Document formatted into pages; contains xiii, 94 p.; also contains graphics. Vita. Includes bibliographical references (p. 92-93).
APA, Harvard, Vancouver, ISO, and other styles
9

LOPES, CARLOS HENRIQUE PEREIRA. "CLASSIFICATION OF DATABASE REGISTERS THROUGH EVOLUTION OF ASSOCIATION RULES USING GENETIC ALGORITHMS." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 1999. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=7297@1.

Full text of the source
Abstract:
CONSELHO NACIONAL DE DESENVOLVIMENTO CIENTÍFICO E TECNOLÓGICO
This dissertation investigates the application of Genetic Algorithms (GAs) to the process of implicit knowledge discovery over databases (KDD - Knowledge Discovery in Databases). The objective of the work has been the assessment of GA performance in the classification of database records. In the context of Genetic Algorithms, this classification process consists of the evolution of association rules that best characterise, through their accuracy and coverage, a particular group of database records. This work has encompassed four main steps: a study of the area of Knowledge Discovery in Databases; the definition of a GA model applied to Data Mining; the implementation of the data mining tool Rule-Evolver; and case studies. The study of the KDD area covered the overall process of useful knowledge discovery: problem definition; data selection; data cleaning; data pre-processing; data encoding; data enrichment; data mining; and interpretation of results. In particular, the investigation emphasized the data mining phase and its techniques and algorithms (Neural Networks, Rule Induction, Statistical Models and Genetic Algorithms). A survey of the main research projects in the area resulted from this work. The Genetic Algorithm modelling encompassed, fundamentally, the definition of the chromosome representation, the fitness evaluation function and the genetic operators. Quantitative and categorical attributes must be taken into account in data mining through association rules. Quantitative attributes represent continuous variables (ranges of values), whereas categorical attributes are discrete variables. In the representation employed in this work, each chromosome represents a rule and each gene corresponds to a database attribute, which can be quantitative or categorical, depending on the application.
The evaluation function associates a numerical value with the discovered rule, reflecting a measure of the quality of this solution. Data mining by GA is an optimization problem in which the fitness evaluation function should drive the process towards the best association rules. Accuracy and coverage are performance measures and, in some cases, their values stay at zero during part of the evolutionary process. Therefore, the fitness evaluation function should reward chromosomes containing promising rules, i.e. those presenting accuracy and coverage different from zero. Ten fitness evaluation functions have been implemented. The genetic operators used in this work, crossover and mutation, seek to recombine the rules' clauses so as to obtain new rules with higher accuracy and broader coverage than the ones already found. Four crossover operators and two mutation operators have been implemented and tested. The implementation of the GA modelling tool applied to Data Mining, called Rule-Evolver, evaluated the proposed model on the problem of record classification. Rule-Evolver analyses a database and extracts the association rules that best differentiate a group of records with respect to all records in the database. Its main features are: selection of database attributes; statistical information about the attributes; a choice of evaluation function among the ten implemented ones; a choice of genetic operators; graphical visualization of system performance; and rule interpretation. A genetic operator is selected at each reproduction step according to a rate pre-established by the user. This rate may be kept fixed or may vary along the evolutionary process. The evaluation functions may also be modified (augmented with a reward) according to the rule's coverage and accuracy. Rule-Evolver implements an interface between the database and the GA, endowing the KDD process and the data mining phase with flexibility.
In order to optimise the rule search process and to achieve better quality rules, some evolutionary techniques have been implemented (linear rank and elitism), and different random initialisation methods have been used as well; global averag
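The GA model described above (chromosome = rule, fitness combining accuracy and coverage, crossover recombining clauses) can be sketched as follows. This is a simplified, hypothetical reconstruction, not the Rule-Evolver code: rules are dicts of attribute-value conditions, fitness is accuracy × coverage, crossover takes the union of two parents' clauses, and all data names are invented.

```python
import random

def fitness(rule, records, target):
    """Score a rule by accuracy * coverage on labelled records.
    `rule` maps attribute -> required value; `target` is the class label."""
    covered = [r for r in records if all(r[a] == v for a, v in rule.items())]
    if not covered:
        return 0.0
    accuracy = sum(r["class"] == target for r in covered) / len(covered)
    coverage = len(covered) / len(records)
    return accuracy * coverage

def evolve(records, target, attrs, values, generations=30, pop_size=20, seed=0):
    """Evolve a population of single-clause seed rules towards the target class."""
    rng = random.Random(seed)
    def random_rule():
        a = rng.choice(attrs)
        return {a: rng.choice(values[a])}
    pop = [random_rule() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda r: fitness(r, records, target), reverse=True)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        for _ in range(pop_size - len(survivors)):
            p1, p2 = rng.sample(survivors, 2)
            child = {**p1, **p2}                  # crossover: union of clauses
            if rng.random() < 0.3:                # mutation: reassign one clause
                a = rng.choice(attrs)
                child[a] = rng.choice(values[a])
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda r: fitness(r, records, target))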
APA, Harvard, Vancouver, ISO, and other styles
10

Alsalama, Ahmed. "A Hybrid Recommendation System Based on Association Rules." TopSCHOLAR®, 2013. http://digitalcommons.wku.edu/theses/1250.

Full text of the source
Abstract:
Recommendation systems are widely used in e-commerce applications. The engine of a current recommendation system recommends items to a particular user based on user preferences and previous high ratings. Various recommendation schemes, such as collaborative filtering and content-based approaches, are used to build a recommendation system. Most current recommendation systems were developed to fit a certain domain, such as books, articles, or movies. We propose a hybrid recommendation framework to be applied to two-dimensional spaces (User × Item) with a large number of users and a small number of items. Moreover, our proposed framework makes use of both favorite and non-favorite items of a particular user. The proposed framework is built upon the integration of association rule mining and the content-based approach. The results of experiments show that our proposed framework can provide accurate recommendations to users.
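The use of association rules over both favorite and non-favorite items can be illustrated with a small sketch. This is a hypothetical simplification of such a hybrid recommender, not the thesis's framework: it mines item-to-item rules from users' "like" baskets and penalises consequents reachable from disliked items; the content-based component is omitted.

```python
from collections import defaultdict
from itertools import combinations

def mine_rules(baskets, min_conf=0.6):
    """Mine item -> item rules from per-user 'like' baskets, keeping rules
    whose confidence = support(a, b) / support(a) reaches min_conf."""
    item_count = defaultdict(int)
    pair_count = defaultdict(int)
    for basket in baskets:
        for item in basket:
            item_count[item] += 1
        for a, b in combinations(sorted(basket), 2):
            pair_count[(a, b)] += 1
            pair_count[(b, a)] += 1
    return {(a, b): n / item_count[a]
            for (a, b), n in pair_count.items()
            if n / item_count[a] >= min_conf}

def recommend(liked, disliked, rules):
    """Score unseen items by the best rule confidence from a liked item,
    penalised by rules fired from disliked items."""
    scores = defaultdict(float)
    for (a, b), conf in rules.items():
        if a in liked and b not in liked and b not in disliked:
            scores[b] = max(scores[b], conf)
        if a in disliked:
            scores[b] -= conf      # non-favorite items push consequents down
    return sorted((i for i in scores if scores[i] > 0), key=lambda i: -scores[i])
```

With a handful of baskets, an item frequently co-liked with the user's favorites is recommended, while items associated with dislikes are suppressed.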
APA, Harvard, Vancouver, ISO, and other styles
11

Poon, W. L. "Concurrency control in multiple perspective software development." Thesis, Imperial College London, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.341030.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
12

Korsakas, Artūras. "Veiklos taisyklių manipuliavimo mechanizmo ir duomenų bazės tarpusavio sąveikos tyrimas." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2009. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2009~D_20090715_101402-08242.

Full text of the source
Abstract:
The work addresses the specification and classification of business rules with respect to manipulation mechanisms. It investigates how a database structure can be transformed into a business rule development environment without losing data structure elements, while ensuring full interaction with the database. The concept of a business rule and the possibilities of applying it in a business process architecture are analysed, and modern architectures of business rule manipulation mechanisms are reviewed. The capabilities of the Blaze Advisor system as a business rule management system are examined in more detail. Eighteen literature sources were analysed. A design methodology was developed that reflects the main principles of transforming a database structure into the data structure of a business rule management system. Following this methodology, an experiment was carried out between the Blaze Advisor system and a database. Conceptual schemas were produced depicting the data flows in the information system at run time, from the database to the decision. The resulting methodology is intended for practical use; applying it ensures a smooth binding of business rules to the data stored in the database during the design and implementation stages of a system.
Nowadays, nearly all commercial and government organizations are highly dependent on software systems. Due to the inherently dynamic nature of their business environments, software evolution is inevitable. As the management needs of global organizations grow, so does the number of companies designing expert systems that offer their own organization management systems. The substance of this work is how to transform data structures into business rule management systems without losing database structure elements, while ensuring full interaction with the database. It presents the concept of business rules and analyses the architecture of business rule manipulation mechanisms. It explores Blaze Advisor as a business rule management system tool, which implements components such as rule sets, decision trees, and decision tables. These components model the processes of an organization, helping to efficiently maintain and control the operation of its internal logic. A study was carried out between the Blaze Advisor tool and a database, establishing how to filter data from the database; the data are then transformed into information, and from that information the components derive a solution. To this end, a method was formulated for transforming data structures into a business rule development environment.
APA, Harvard, Vancouver, ISO, and other styles
13

Albhbah, Atia M. "Dynamic web forms development using RuleML. Building a framework using metadata driven rules to control Web forms generation and appearance." Thesis, University of Bradford, 2013. http://hdl.handle.net/10454/5719.

Full text of the source
Abstract:
Web forms development for Web-based applications is often expensive, laborious, error-prone and time-consuming. Web forms are used by many different people with different backgrounds and many demands, and there is a very high cost associated with updating Web application systems to meet these demands. A wide range of techniques and ideas to automate the generation of Web forms exists. These techniques, however, are not capable of generating the most dynamic behaviour of form elements, and they make insufficient use of database metadata to control Web forms' generation and appearance. In this thesis different techniques are proposed that use RuleML and database metadata to build rulebases to improve the automatic and dynamic generation of Web forms. First, this thesis proposes the use of a RuleML-format rulebase, using Reaction RuleML, to support the development of automated Web interfaces. Database metadata can be extracted from system catalogue tables in typical relational database systems and used in conjunction with the rulebase to produce appropriate Web form elements. Results show that this mechanism successfully insulates application logic from code and suggest that the method can be extended from generic metadata rules to more domain-specific rules. Second, it proposes the use of common-sense rules and domain-specific rulebases in Reaction RuleML format, in conjunction with database metadata rules, to extend support for the development of automated Web forms. Third, it proposes the use of rules that involve code to implement more semantics for Web forms. Separation between content, logic and presentation of Web applications has become an important issue for faster development and easy maintenance.
Just as CSS is applied on the client side to control the overall presentation of Web applications, a set of rules can give a similar consistency to the appearance and operation of any set of forms that interact with the same database. We develop rules to order Web form elements and query forms using the Reaction RuleML format in conjunction with database metadata rules. The results show the potential of RuleML formats for representing database structural and active semantics. Fourth, it proposes the use of a RuleML-based approach to provide support for greater semantics, for example advanced domain support, even when this is not a DBMS feature. The approach is to specify most of the semantics associated with data stored in an RDBMS, to overcome some RDBMS limitations. RuleML could be used to represent database metadata as an external format.
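The metadata-driven form generation idea can be sketched without RuleML: the rules below map column metadata (type, nullability, length) to HTML widgets. The rule table and the column-dict format are invented for illustration; the thesis encodes such rules in Reaction RuleML rather than in code.

```python
def form_element(col):
    """Map one column's metadata to an HTML form element using simple
    metadata-driven rules (type -> widget, NOT NULL -> required flag)."""
    widgets = {
        "integer": '<input type="number" name="{name}"{req}>',
        "date":    '<input type="date" name="{name}"{req}>',
        "boolean": '<input type="checkbox" name="{name}">',
        "text":    '<input type="text" name="{name}" maxlength="{maxlen}"{req}>',
    }
    req = "" if col.get("nullable", True) else " required"
    tpl = widgets.get(col["type"], widgets["text"])   # default to text input
    return tpl.format(name=col["name"], maxlen=col.get("max_length", 255), req=req)

def generate_form(columns, action="/submit"):
    """Assemble a full form from a list of column-metadata dicts."""
    rows = "\n".join(form_element(c) for c in columns)
    return f'<form action="{action}" method="post">\n{rows}\n</form>'
```

In a real system the `columns` list would come from the DBMS system catalogue (e.g. an information-schema query) rather than being written by hand.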
APA, Harvard, Vancouver, ISO, and other styles
14

Albhbah, Atia Mahmod. "Dynamic web forms development using RuleML : building a framework using metadata driven rules to control Web forms generation and appearance." Thesis, University of Bradford, 2013. http://hdl.handle.net/10454/5719.

Full text of the source
Abstract:
Web forms development for Web-based applications is often expensive, laborious, error-prone and time-consuming. Web forms are used by many different people with different backgrounds and many demands, and there is a very high cost associated with updating Web application systems to meet these demands. A wide range of techniques and ideas to automate the generation of Web forms exists. These techniques, however, are not capable of generating the most dynamic behaviour of form elements, and they make insufficient use of database metadata to control Web forms' generation and appearance. In this thesis different techniques are proposed that use RuleML and database metadata to build rulebases to improve the automatic and dynamic generation of Web forms. First, this thesis proposes the use of a RuleML-format rulebase, using Reaction RuleML, to support the development of automated Web interfaces. Database metadata can be extracted from system catalogue tables in typical relational database systems and used in conjunction with the rulebase to produce appropriate Web form elements. Results show that this mechanism successfully insulates application logic from code and suggest that the method can be extended from generic metadata rules to more domain-specific rules. Second, it proposes the use of common-sense rules and domain-specific rulebases in Reaction RuleML format, in conjunction with database metadata rules, to extend support for the development of automated Web forms. Third, it proposes the use of rules that involve code to implement more semantics for Web forms. Separation between content, logic and presentation of Web applications has become an important issue for faster development and easy maintenance.
Just as CSS is applied on the client side to control the overall presentation of Web applications, a set of rules can give a similar consistency to the appearance and operation of any set of forms that interact with the same database. We develop rules to order Web form elements and query forms using the Reaction RuleML format in conjunction with database metadata rules. The results show the potential of RuleML formats for representing database structural and active semantics. Fourth, it proposes the use of a RuleML-based approach to provide support for greater semantics, for example advanced domain support, even when this is not a DBMS feature. The approach is to specify most of the semantics associated with data stored in an RDBMS, to overcome some RDBMS limitations. RuleML could be used to represent database metadata as an external format.
APA, Harvard, Vancouver, ISO, and other styles
15

Bogorny, Vania. "Enhancing spatial association rule mining in geographic databases." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2006. http://hdl.handle.net/10183/7841.

Full text of the source
Abstract:
The association rule mining technique emerged with the objective to find novel, useful, and previously unknown associations from transactional databases, and a large amount of association rule mining algorithms have been proposed in the last decade. Their main drawback, which is a well known problem, is the generation of large amounts of frequent patterns and association rules. In geographic databases the problem of mining spatial association rules increases significantly. Besides the large amount of generated patterns and rules, many patterns are well known geographic domain associations, normally explicitly represented in geographic database schemas. The majority of existing algorithms do not warrant the elimination of all well known geographic dependences. The result is that the same associations represented in geographic database schemas are extracted by spatial association rule mining algorithms and presented to the user. The problem of mining spatial association rules from geographic databases requires at least three main steps: compute spatial relationships, generate frequent patterns, and extract association rules. The first step is the most effort demanding and time consuming task in the rule mining process, but has received little attention in the literature. The second and third steps have been considered the main problem in transactional association rule mining and have been addressed as two different problems: frequent pattern mining and association rule mining. Well known geographic dependences which generate well known patterns may appear in the three main steps of the spatial association rule mining process. Aiming to eliminate well known dependences and generate more interesting patterns, this thesis presents a framework with three main methods for mining frequent geographic patterns using knowledge constraints. Semantic knowledge is used to avoid the generation of patterns that are previously known as non-interesting. 
The first method reduces the input problem, and all well known dependences that can be eliminated without loosing information are removed in data preprocessing. The second method eliminates combinations of pairs of geographic objects with dependences, during the frequent set generation. A third method presents a new approach to generate non-redundant frequent sets, the maximal generalized frequent sets without dependences. This method reduces the number of frequent patterns very significantly, and by consequence, the number of association rules.
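The second method above (discarding candidate sets that pair geographic objects with a known dependence) can be illustrated with a small Apriori-style sketch; the item names, the forbidden pair, and the support threshold below are invented for illustration and are not taken from the thesis:

```python
from itertools import combinations

def frequent_sets(transactions, min_support, forbidden_pairs):
    """Level-wise (Apriori-style) frequent-set mining that skips any candidate
    containing a pair of items with a known (e.g. geographic) dependence."""
    transactions = [frozenset(t) for t in transactions]
    items = sorted({i for t in transactions for i in t})
    forbidden = {frozenset(p) for p in forbidden_pairs}

    def support(itemset):
        return sum(1 for t in transactions if itemset <= t) / len(transactions)

    def allowed(itemset):
        # prune candidates that contain a known dependence
        return all(frozenset(p) not in forbidden for p in combinations(itemset, 2))

    frequent = []
    level = [frozenset([i]) for i in items]
    while level:
        level = [c for c in level if allowed(c) and support(c) >= min_support]
        frequent.extend(level)
        # candidate generation: join frequent k-sets into (k+1)-sets
        level = list({a | b for a in level for b in level if len(a | b) == len(a) + 1})
    return frequent
```

Candidates such as {island, water} are never counted, so the dependence cannot resurface as a rule.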
APA, Harvard, Vancouver, ISO, and other styles
16

Lee, John Wan Tung. "The discovery of fuzzy rules from fuzzy databases." Thesis, University of Sunderland, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.298322.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
17

Yang, Yuping. "Theory and mining of association rules over large databases /." The Ohio State University, 2000. http://rave.ohiolink.edu/etdc/view?acc_num=osu1488194825668668.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
18

Kerdprasop, Kittisak. "Active Database Rule Set Reduction by Knowledge Discovery." NSUWorks, 1999. http://nsuworks.nova.edu/gscis_etd/631.

Full text of the source
Abstract:
The advent of active databases enhances the functionality of conventional passive databases. A large number of applications benefit from active database systems because of the powerful active rule language and rule processing algorithm they provide. With the power of active rules, data manipulation operations can be executed automatically when certain events occur and certain conditions are satisfied. Active rules can also impose unique and consistent constraints on the database, independent of the applications, such that no application can violate them. The additional database functionality offered by active rules, however, comes at a price. It is not a straightforward task for database designers to define and maintain a large set of active rules. Moreover, the termination property of an active rule set is difficult to detect because of the subtle interactions among active rules. This dissertation proposes a novel approach of applying machine learning techniques to discover a newly simplified set of active rules. The termination property of the discovered active rule set is also guaranteed via the stratification technique. The approach is proposed in the context of relational active databases and is an attempt to assist database designers by providing a facility to analyze and refine active rules at design time. The main algorithm of active rule discovery is called the ARD algorithm. The usefulness of the algorithm was verified by running it on sample sets of active rules. The results, the corresponding new sets of active rules, were analyzed on the basis of the size and the complexity of the discovered rule sets. The size of a discovered rule set was measured by the number of active rules; the complexity was measured by the number of transition states, that is, the changes in the database state that result from rule execution.
The experimental results revealed that with the proposed approach, the numbers of active rules and transition states could be reduced by 61.11% and 40%, respectively, on average.
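The termination property mentioned in this abstract is usually argued over the triggering graph of the rule set. As a hedged sketch (a generic conservative check, not the ARD algorithm itself), possible non-termination is flagged whenever that graph has a cycle:

```python
def may_not_terminate(rules):
    """Conservative termination check for a set of ECA rules: build the
    triggering graph (an edge r -> s when r's action may raise s's event)
    and report True when it contains a cycle.  `rules` maps a rule name to
    (triggering_event, [events its action may raise])."""
    graph = {
        name: [other for other, (ev, _) in rules.items() if ev in raised]
        for name, (_, raised) in rules.items()
    }
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {n: WHITE for n in graph}

    def has_cycle(n):
        colour[n] = GREY  # on the current DFS path
        for m in graph[n]:
            if colour[m] == GREY or (colour[m] == WHITE and has_cycle(m)):
                return True
        colour[n] = BLACK  # fully explored
        return False

    return any(colour[n] == WHITE and has_cycle(n) for n in graph)
```

An acyclic triggering graph guarantees termination; a cycle only signals that rule execution might loop, which is why the check is conservative.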
APA, Harvard, Vancouver, ISO, and other styles
19

Abdo, Walid Adly Atteya. "Enhancing association rules algorithms for mining distributed databases : integration of fast BitTable and multi-agent association rules mining in distributed medical databases for decision support." Thesis, University of Bradford, 2012. http://hdl.handle.net/10454/5661.

Full text of the source
Abstract:
Over the past few years, mining data located in heterogeneous and geographically distributed sites has been designated as one of the key important issues. Loading distributed data into a centralized location for mining interesting rules is not a good approach, because it violates common requirements such as data privacy and it imposes network overheads. The situation becomes worse when the network has limited bandwidth, which is the case in most real-time systems. This has prompted the need for intelligent data analysis to discover the hidden information in these huge amounts of distributed databases. In this research, we present an incremental approach for building an efficient Multi-Agent based algorithm for mining real-world databases in geographically distributed sites. First, we propose the Distributed Multi-Agent Association Rules algorithm (DMAAR) to minimize the all-to-all broadcasting between distributed sites. Analytical calculations show that DMAAR reduces the algorithm complexity and minimizes the message communication cost. The proposed Multi-Agent based algorithm complies with the Foundation for Intelligent Physical Agents (FIPA) specifications, which are considered the global standard for communication between agents, thus enabling the proposed algorithm's agents to cooperate with other standard agents. Second, the BitTable Multi-Agent Association Rules algorithm (BMAAR) is proposed. BMAAR includes an efficient BitTable data structure which helps in compressing the database so that it can easily fit into the memory of the local sites. It also includes two bitwise AND/OR operations for quick candidate itemset generation and support counting. Moreover, the algorithm includes three transaction trimming techniques to reduce the size of the mined data.
Third, we propose the Pruning Multi-Agent Association Rules algorithm (PMAAR), which includes three candidate itemset pruning techniques for reducing the large number of generated candidate itemsets and, consequently, the total time for the mining process. The proposed PMAAR algorithm has been compared with existing association rules algorithms on different benchmark datasets and has proved to have better performance and execution time. Moreover, PMAAR has been applied to real-world distributed medical databases obtained from more than one hospital in Egypt to discover the hidden association rules in patients' records, to further demonstrate the merits and capabilities of the proposed model. Medical data was obtained anonymously, without the patients' personal details. The analysis helped to identify the existence or absence of the disease based on a minimum number of effective examinations and tests. Thus, the proposed algorithm can help in providing accurate medical decisions based on cost-effective treatments, improving the medical service for the patients, reducing the real-time response of the health system, and improving the quality of clinical decision making.
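The BitTable idea summarized here, one bitmap per item with support obtained by bitwise AND, can be sketched as follows (a generic illustration, not the BMAAR implementation):

```python
def build_bittable(transactions):
    """Map each item to a transaction bitmap: bit t is set iff transaction t
    contains the item.  The BitTable compresses the database this way."""
    table = {}
    for t, items in enumerate(transactions):
        for item in items:
            table[item] = table.get(item, 0) | (1 << t)
    return table

def bit_support(bittable, itemset, n_transactions):
    """Relative support of an itemset from a single chain of bitwise ANDs,
    followed by a population count."""
    bits = (1 << n_transactions) - 1  # start with all transactions
    for item in itemset:
        bits &= bittable.get(item, 0)
    return bin(bits).count("1") / n_transactions
```

Counting support becomes a handful of word-level AND operations instead of a scan over the raw transactions.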
APA, Harvard, Vancouver, ISO, and other styles
20

Abdo, Walid A. A. "Enhancing association rules algorithms for mining distributed databases. Integration of fast BitTable and multi-agent association rules mining in distributed medical databases for decision support." Thesis, University of Bradford, 2012. http://hdl.handle.net/10454/5661.

Full text of the source
Abstract:
Over the past few years, mining data located in heterogeneous and geographically distributed sites have been designated as one of the key important issues. Loading distributed data into centralized location for mining interesting rules is not a good approach. This is because it violates common issues such as data privacy and it imposes network overheads. The situation becomes worse when the network has limited bandwidth which is the case in most of the real time systems. This has prompted the need for intelligent data analysis to discover the hidden information in these huge amounts of distributed databases. In this research, we present an incremental approach for building an efficient Multi-Agent based algorithm for mining real world databases in geographically distributed sites. First, we propose the Distributed Multi-Agent Association Rules algorithm (DMAAR) to minimize the all-to-all broadcasting between distributed sites. Analytical calculations show that DMAAR reduces the algorithm complexity and minimizes the message communication cost. The proposed Multi-Agent based algorithm complies with the Foundation for Intelligent Physical Agents (FIPA), which is considered as the global standards in communication between agents, thus, enabling the proposed algorithm agents to cooperate with other standard agents. Second, the BitTable Multi-Agent Association Rules algorithm (BMAAR) is proposed. BMAAR includes an efficient BitTable data structure which helps in compressing the database thus can easily fit into the memory of the local sites. It also includes two BitWise AND/OR operations for quick candidate itemsets generation and support counting. Moreover, the algorithm includes three transaction trimming techniques to reduce the size of the mined data. 
Third, we propose the Pruning Multi-Agent Association Rules algorithm (PMAAR) which includes three candidate itemsets pruning techniques for reducing the large number of generated candidate itemsets, consequently, reducing the total time for the mining process. The proposed PMAAR algorithm has been compared with existing Association Rules algorithms against different benchmark datasets and has proved to have better performance and execution time. Moreover, PMAAR has been implemented on real world distributed medical databases obtained from more than one hospital in Egypt to discover the hidden Association Rules in patients' records to demonstrate the merits and capabilities of the proposed model further. Medical data was anonymously obtained without the patients' personal details. The analysis helped to identify the existence or the absence of the disease based on minimum number of effective examinations and tests. Thus, the proposed algorithm can help in providing accurate medical decisions based on cost effective treatments, improving the medical service for the patients, reducing the real time response for the health system and improving the quality of clinical decision making.
APA, Harvard, Vancouver, ISO, and other styles
21

Murphy, Brian R. "Order-sensitive XML query processing over relational sources." Link to electronic thesis, 2003. http://www.wpi.edu/Pubs/ETD/Available/etd-0505103-123753.

Full text of the source
Abstract:
Thesis (M.S.)--Worcester Polytechnic Institute.
Keywords: computation pushdown; XML; order-based Xquery processing; relational database; ordered SQL queries; data model mapping; XQuery; XML data mapping; SQL; XML algebra rewrite rules; XML document order. Includes bibliographical references (p. 64-67).
APA, Harvard, Vancouver, ISO, and other styles
22

Steidle, Christina Marie. "A System for Incorporating Time-based Event-Condition-Action Rules into Business Databases." Wright State University / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=wright1250726209.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
23

Hsu, Ing-Miin. "Distributed rule monitoring in distributed active databases /." The Ohio State University, 1993. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487841975356679.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
24

Toprak, Serkan. "Data Mining For Rule Discovery In Relational Databases." Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/12605356/index.pdf.

Full text of the source
Abstract:
Data is mostly stored in relational databases today. However, most data mining algorithms are not capable of working directly on data stored in relational databases; instead they require a preprocessing step that transforms relational data into an algorithm-specific form. Moreover, several data mining algorithms provide solutions for single relations only, so valuable hidden knowledge involving multiple relations remains undiscovered. In this thesis, an implementation is developed for discovering multi-relational association rules in relational databases. The implementation is based on a framework providing a representation of patterns in relational databases, refinement methods for patterns, and primitives for obtaining from the database the record counts needed to calculate pattern measures. The framework exploits the meta-data of relational databases to prune the pattern search space. The implementation extends the framework by employing the Apriori algorithm for further pruning the search space and for discovering relational recursive patterns. The Apriori algorithm is used for finding large itemsets of tables, which are used to refine patterns; it is modified by changing the support calculation method for itemsets. A method for determining recursive relations is described, and a solution is provided for handling recursive patterns using aliases. Additionally, continuous attributes of tables are discretized using equal-depth partitioning. The implementation is tested on the gene localization prediction task of KDD Cup 2001 and the results are compared to those of the winning approach.
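Equal-depth partitioning, used above to discretize continuous attributes, can be sketched as follows (a generic illustration; the thesis implementation may differ in boundary handling):

```python
from bisect import bisect_right

def equal_depth_cuts(values, k):
    """Cut points for equal-depth (equal-frequency) discretization into k bins:
    each bin receives approximately the same number of values."""
    s = sorted(values)
    n = len(s)
    return [s[(i * n) // k] for i in range(1, k)]

def discretize(value, cuts):
    """Bin index of a value given the cut points (bins are half-open on the right)."""
    return bisect_right(cuts, value)
```

Unlike equal-width binning, the cut points adapt to the data distribution, so skewed attributes still yield evenly populated bins.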
APA, Harvard, Vancouver, ISO, and other styles
25

Grilo, Júnior Tarcísio Ferreira. "Aplicação de técnicas de Data Mining para auxiliar no processo de fiscalização no âmbito do Tribunal de Contas do Estado da Paraíba." Universidade Federal da Paraíba, 2010. http://tede.biblioteca.ufpb.br:8080/handle/tede/5238.

Full text of the source
Abstract:
Made available in DSpace on 2015-05-08T14:53:30Z (GMT). No. of bitstreams: 1 arquivototal.pdf: 2082485 bytes, checksum: 0c5cd714d0a43bac80888cfc1dd4e7cb (MD5) Previous issue date: 2010-09-03
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
This research aims to validate the hypothesis that data mining techniques are applicable to the Bidding and Contracts database managed by the Court of Accounts of the State of Paraíba (TCE-PB), enabling the generation of rules and the discovery of hidden or implicit knowledge, and thereby contributing to decision making, auditing, and procedural celerity in this Court of Accounts. For a better understanding of this work, a literature review was carried out, covering first a history of the decision-making process and the evolution of studies on this topic, and then the relation between the bidding processes submitted to the TCE-PB and the process of discovering indications of fraud and irregularities through data mining. The concepts of Business Intelligence (BI) and its main components are discussed, as well as the concepts of Knowledge Discovery in Databases (KDD), together with a comparison of the functionalities present in data mining tools. With the deployment of this data mining tool, a gain in productivity and an increase in the speed of case processing are expected, arising from the analysis of public accounts and the oversight of public funds.
APA, Harvard, Vancouver, ISO, and other styles
26

Huynh, Xuan-Hiep. "Interestingness Measures for Association Rules in a KDD Process : PostProcessing of Rules with ARQAT Tool." Phd thesis, Université de Nantes, 2006. http://tel.archives-ouvertes.fr/tel-00482649.

Full text of the source
Abstract:
This work takes place in the framework of Knowledge Discovery in Databases (KDD), often called "Data Mining". This domain is both a main research topic and an application field for companies. KDD aims at discovering previously unknown and useful knowledge in large databases. In the last decade many studies have been published about association rules, which are frequently used in data mining. Association rules, which are implicative tendencies in data, have the advantage of being an unsupervised model. But, in counterpart, they often deliver a large number of rules. As a consequence, a postprocessing step is required to help the user understand the results. One way to reduce the number of rules, to validate or to select the most interesting ones, is to use interestingness measures adapted to both the user's goals and the dataset studied. Selecting the right interestingness measures is an open problem in KDD. A lot of measures have been proposed to extract knowledge from large databases, and many authors have introduced interestingness properties for selecting a suitable measure for a given application. Some measures are adequate for some applications but others are not. In our thesis, we propose to study the set of interestingness measures available in the literature, in order to evaluate their behavior according to the nature of the data and the preferences of the user. The final objective is to guide the user's choice towards the measures best adapted to his needs and, in fine, to select the most interesting rules. For this purpose, we propose a new approach implemented in a new tool, ARQAT (Association Rule Quality Analysis Tool), in order to facilitate the analysis of the behavior of about 40 interestingness measures. In addition to elementary statistics, the tool allows a thorough analysis of the correlations between measures using correlation graphs based on the coefficients suggested by Pearson, Spearman and Kendall.
These graphs are also used to identify clusters of similar measures. Moreover, we propose a series of comparative studies on the correlations between interestingness measures on several datasets. We discovered a set of correlations not very sensitive to the nature of the data used, which we called stable correlations. Finally, 14 graphical and complementary views structured on 5 levels of analysis (ruleset analysis, correlation and clustering analysis, most interesting rules analysis, sensitivity analysis, and comparative analysis) are illustrated in order to show the interest of both the exploratory approach and the use of complementary views.
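A minimal illustration of the kind of analysis ARQAT performs: compute a few classic interestingness measures from a rule's contingency counts, then compare two measure vectors by rank correlation (the no-ties Spearman formula; the counts below are invented):

```python
def measures(n, n_a, n_b, n_ab):
    """Classic interestingness measures for a rule A -> B from a 2x2
    contingency table: n transactions, n_a containing A, n_b containing B,
    n_ab containing both."""
    support = n_ab / n
    confidence = n_ab / n_a
    lift = confidence / (n_b / n)
    return {"support": support, "confidence": confidence, "lift": lift}

def spearman(xs, ys):
    """Spearman rank correlation between two measure vectors (no-ties case)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))
```

Applying `spearman` to the values two measures assign to the same rule set is one way to quantify how similarly they rank rules, which is what the correlation graphs visualize.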
APA, Harvard, Vancouver, ISO, and other styles
27

Richards, Graeme. "The discovery of association rules from tabular databases comprising nominal and ordinal attributes." Thesis, University of East Anglia, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.268482.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
28

Kopanas, Vassilios. "Relational database support for a rule based approach to information systems." Thesis, University of Manchester, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.358052.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
29

Heyne, Edward J. "A .Net Framework for Rule-Based Symbolic Database Visualization in 3D." University of Akron / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=akron1312664305.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
30

Raj, Himanshu. "Integrated alerting for structured and free-text data in TriggerMan." [Gainesville, Fla.] : University of Florida, 2001. http://purl.fcla.edu/fcla/etd/UFE0000346.

Full text of the source
Abstract:
Thesis (M.S.)--University of Florida, 2001.
Title from title page of source document. Document formatted into pages; contains viii, 60 p.; also contains graphics. Includes vita. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
31

Mondal, Kartick Chandra. "Algorithmes pour la fouille de données et la bio-informatique." Thesis, Nice, 2013. http://www.theses.fr/2013NICE4049.

Full text of the source
Abstract:
Knowledge pattern extraction is one of the major topics in the data mining and background knowledge integration domains. Among data mining techniques, association rule mining and bi-clustering are two major complementary tasks for these topics. These tasks have gained much importance in many domains in recent years. However, no approach has been proposed to perform them in one process. This poses the problems of the resources required (memory, execution time, and data accesses) to perform independent extractions, and of the unification of the different results. We propose an original approach for extracting different categories of knowledge patterns while using minimal resources. This approach is based on the frequent closed patterns theoretical framework and uses a novel suffix-tree based data structure to extract conceptual minimal representations of association rules, bi-clusters, and classification rules. These patterns extend the classical frameworks of association and classification rules and of bi-clusters, as the data objects supporting each pattern and the hierarchical relationships between patterns are also extracted. This approach was applied to the analysis of HIV-1 and human protein-protein interaction data. Analyzing such inter-species protein interactions is a recent major challenge in computational biology. Databases integrating heterogeneous interaction information and biological background knowledge on proteins have been constructed. Experimental results show that the proposed approach can efficiently process these databases and that the extracted conceptual patterns can help the understanding and analysis of the nature of relationships between interacting proteins.
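The frequent closed patterns framework mentioned above rests on a Galois closure operator: a pattern is closed when it equals the set of items shared by every object containing it. A generic sketch of that operator (not the thesis's suffix-tree structure):

```python
def closure(transactions, itemset):
    """Galois closure: the items common to every transaction containing
    `itemset` (the itemset itself when no transaction contains it)."""
    itemset = set(itemset)
    covering = [set(t) for t in transactions if itemset <= set(t)]
    if not covering:
        return itemset
    out = covering[0]
    for t in covering[1:]:
        out &= t
    return out

def is_closed(transactions, itemset):
    """An itemset is closed iff it is a fixed point of the closure operator."""
    return closure(transactions, itemset) == set(itemset)
```

Mining only closed patterns yields a lossless condensed representation: every frequent itemset's support can be recovered from its closure.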
APA, Harvard, Vancouver, ISO, and other styles
32

Le, Guennec Yohann. "Développement d’équations d’état cubiques adaptées à la représentation de mélanges contenant des molécules polaires (eau, alcools, amines …) et des hydrocarbures." Thesis, Université de Lorraine, 2018. http://www.theses.fr/2018LORR0245/document.

Full text of the source
Abstract:
The main objective of this thesis is to develop a cubic equation of state thermodynamic model able to accurately predict the thermodynamic properties of pure compounds (from phase equilibrium data to energetic properties, such as enthalpy and heat capacity, and volumetric properties) and of mixtures (phase equilibria in sub- and supercritical regions, critical points, energetic properties, densities, and so on), including the most complex ones. Starting with pure compounds: relying on the knowledge collected over the years since Van der Waals' seminal work on cubic equations of state, we identified two levers for increasing cubic-model accuracy. The first is the selection of an optimal α function (this function is a key quantity involved in the attractive term of the model), the proper parameterization of which entails an accurate representation of pure-compound saturation properties such as the saturation pressure, the enthalpy of vaporization, and the saturated-liquid heat capacity. In order to safely extrapolate an α function to the high-temperature domain, we defined the mathematical constraints it should satisfy. The second lever is the volume translation parameter, a key parameter for an accurate description of liquid densities. These studies led to the development of the tc-PR and tc-RK models, which use an α function that correctly extrapolates to the high-temperature domain as well as a volume translation parameter, ensuring the most accurate estimation of pure-compound sub- and supercritical properties by a cubic equation of state. In order to extend the tc-PR and tc-RK models to mixtures, it was necessary to develop adequate mixing rules for both equation-of-state parameters: the covolume and the attractive parameter. Recently proposed mixing rules combining an equation of state and an activity coefficient model were retained, and optimal values of their universal parameters were identified in the framework of this thesis.
A linear mixing rule for the volume translation parameter was selected; it was proven that this mixing rule does not change the phase equilibrium and energetic properties when switching from a translated to an untranslated model. In order to select the optimal activity coefficient model to include in the new mixing rule, a database of 200 binary systems was developed. These binary systems were selected to be representative of the different kinds of interactions that can exist in non-electrolytic mixtures. The database includes in particular systems containing associating compounds, which are certainly among the most difficult ones to model with an equation of state. In fine, this thesis lays all the groundwork for the development of a cubic equation of state for mixtures. The selection of the optimal activity coefficient model, the estimation of binary interaction parameters for the 200 binary systems of the database, and their prediction are possible continuations of this work.
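For context, the ingredients named above (an α function inside a cubic equation of state, plus a volume translation parameter c) can be written out with the classical Peng-Robinson constants and a Soave-type α function; this is the textbook model, not the thesis's consistent tc-PR parameterization:

```python
from math import sqrt

R = 8.314462618  # universal gas constant, J/(mol*K)

def pr_pressure(T, v, Tc, Pc, omega, c=0.0):
    """Pressure (Pa) from the classical Peng-Robinson equation of state with a
    Soave-type alpha function and an optional volume translation c (m^3/mol).
    T in K, v is the translated molar volume in m^3/mol."""
    a = 0.45724 * R**2 * Tc**2 / Pc          # attractive parameter
    b = 0.07780 * R * Tc / Pc                # covolume
    m = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1.0 + m * (1.0 - sqrt(T / Tc)))**2
    vu = v + c  # recover the untranslated volume used by the EoS
    return R * T / (vu - b) - a * alpha / (vu * (vu + b) + b * (vu - b))
```

At large molar volumes the attractive and covolume corrections vanish and the model recovers the ideal-gas law, a quick sanity check for any implementation.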
APA, Harvard, Vancouver, ISO, and other styles
33

Fonseca, Adriane de Martini 1988. "Refactoring rules for Graph Databases = Regras de refatoração para banco de dados baseado em grafos." [s.n.], 2015. http://repositorio.unicamp.br/jspui/handle/REPOSIP/267724.

Full text of the source
Abstract:
Advisor: Luiz Camolesi Júnior
Master's dissertation - Universidade Estadual de Campinas, Faculdade de Tecnologia
Made available in DSpace on 2018-08-27T04:46:35Z (GMT). No. of bitstreams: 1 Fonseca_AdrianedeMartini_M.pdf: 1663096 bytes, checksum: 54c4089adea68e67cb7ab43b4b46dfdf (MD5) Previous issue date: 2015
Resumo: A informação produzida atualmente apresenta crescimento em volume e complexidade, representando um desafio tecnológico que demanda mais do que a atual estrutura de Bancos de Dados Relacionais pode oferecer. Tal fato estimula o uso de diferentes formas de armazenamento, como Bancos de Dados baseados em Grafos (BDG). Os atuais Bancos de Dados baseados em Grafos são adaptados para suportar automaticamente a evolução do banco de dados, mas não fornecem recursos adequados para a organização da informação. Esta função é deixada a cargo das aplicações que acessam o banco de dados, comprometendo a integridade dos dados e sua confiabilidade. O objetivo deste trabalho é a definição de regras de refatoração para auxiliar o gerenciamento da evolução de Bancos de Dados baseados em Grafos. As regras apresentadas neste trabalho são adaptações e extensões de regras de refatoração consolidadas para bancos de dados relacionais para atender às características dos Bancos de Dados baseado em Grafos. O resultado deste trabalho é um catálogo de regras que poderá ser utilizado por desenvolvedores de ferramentas de administração de bancos de dados baseados em grafos para garantir a integridade das operações de evolução de esquemas de dados e consequentemente dos dados relacionados
Abstract: The information produced nowadays does not stop growing in volume and complexity, representing a technological challenge that demands more than the relational model for databases can currently offer. This situation stimulates the use of different forms of storage, such as Graph Databases. Current Graph Databases allow automatic database evolution, but do not provide adequate resources for the organization of information; this is mostly left to the applications that access the database, compromising data integrity and reliability. The goal of this work is the definition of refactoring rules to support the management of the evolution of Graph Databases. The rules presented in this document are adaptations and extensions of existing refactoring rules for relational databases, adjusted to the characteristics of Graph Databases. The result of this work is a catalog of refactoring rules that can be used by developers of graph database management tools to guarantee the integrity of database evolution operations and, consequently, of the related data.
Master's
Technology and Innovation
Master in Technology
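One refactoring rule of the kind such a catalog would contain can be sketched as follows. This is a hedged illustration, not a rule taken from the dissertation: the dictionary-based property-graph representation and the rule name `rename_label` are assumptions made for the example.

```python
# A hedged sketch of one graph-refactoring rule (illustrative, not from the
# dissertation's catalog): renaming a node label across a property graph
# while keeping every node consistent.
graph = {
    "nodes": [
        {"id": 1, "label": "Person", "props": {"name": "Ana"}},
        {"id": 2, "label": "Person", "props": {"name": "Bob"}},
        {"id": 3, "label": "City", "props": {"name": "Campinas"}},
    ],
    "edges": [{"from": 1, "to": 3, "type": "LIVES_IN"}],
}

def rename_label(g, old, new):
    """Rename a label on every matching node so no stale label survives."""
    for node in g["nodes"]:
        if node["label"] == old:
            node["label"] = new
    return g

rename_label(graph, "Person", "User")
```

The point of expressing this as a catalog rule is that the rename is applied atomically to every node, so no application can observe a mix of old and new labels.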
34

Haldavnekar, Nikhil. "An algorithm and implementation for extracting schematic and semantic knowledge from relational database systems." [Gainesville, Fla.] : University of Florida, 2002. http://purl.fcla.edu/fcla/etd/UFE0000541.

Full text of the source
35

Schroiff, Anna. "Using a Rule-System as Mediator for Heterogeneous Databases, exemplified in a Bioinformatics Use Case." Thesis, University of Skövde, School of Humanities and Informatics, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-975.

Full text of the source
Abstract:

Databases used nowadays in all kinds of application areas often differ greatly in a number of properties. These differences add complexity to the handling of databases, especially when two or more different databases are interdependent.

The approach described here to propagate updates in an application scenario with heterogeneous, dependent databases is the use of a rule-based mediator. The system EruS (ECA rules updating SCOP) applies active database technologies in a bioinformatics scenario: reactive, rule-based behaviour is used for databases holding protein structures.

The inherent heterogeneities of the Structural Classification of Proteins (SCOP) database and the Protein Data Bank (PDB) cause inconsistencies in the SCOP data derived from PDB. This complicates research on protein structures.

EruS solves this problem by establishing rule-based interaction between the two databases. The system is built on the rule engine ruleCore, using Event-Condition-Action rules to process PDB updates. It is complemented with wrappers that access the databases to generate events and to execute the resulting actions. The resulting system processes deletions and modifications of existing PDB entries and updates the SCOP flatfiles with the relevant information. This is the first step in the development of EruS, which is to be extended in future work.

The project improves bioinformatics research by providing easy access to up-to-date information from PDB to SCOP users. The system can also be considered as a model for rule-based mediators in other application areas.
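The Event-Condition-Action pattern the abstract builds on can be sketched as below. This is a minimal illustration of the ECA idea only; the event name, payload fields and helper names are made up for the example and are not the ruleCore or EruS API.

```python
# A minimal Event-Condition-Action sketch of the pattern EruS builds on;
# event names and payload fields are illustrative, not the actual EruS API.
rules = []

def on(event):
    """Register a rule (condition + action) for an event type."""
    def register(fn):
        rules.append((event, fn))
        return fn
    return register

@on("pdb_entry_deleted")
def remove_from_scop(payload):
    # condition: act only if the PDB entry is referenced by SCOP
    if payload.get("in_scop"):
        # action: update the SCOP flatfile (represented here by a message)
        return f"delete {payload['id']} from SCOP flatfile"

def dispatch(event, payload):
    """Fire every rule registered for the event; collect non-empty actions."""
    return [r for ev, fn in rules if ev == event for r in [fn(payload)] if r]

actions = dispatch("pdb_entry_deleted", {"id": "1abc", "in_scop": True})
```

Separating event detection (the wrappers), the condition and the action is what lets the mediator evolve each part independently.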

36

Morak, Michael. "The impact of disjunction on reasoning under existential rules." Thesis, University of Oxford, 2014. https://ora.ox.ac.uk/objects/uuid:b8f012c4-0210-41f6-a0d3-a9d1ea5f8fac.

Full text of the source
Abstract:
Ontological database management systems are a powerful tool that combine traditional database techniques with ontological reasoning methods. In this setting, a classical extensional database is enriched with an ontology, or a set of logical assertions, that describe how new, intensional knowledge can be derived from the extensional data. Conjunctive queries are therefore answered against this combined knowledge base of extensional and intensional data. Many languages that represent ontologies have been introduced in the literature. In this thesis we will focus on existential rules (also called tuple-generating dependencies or Datalog± rules), and three established languages in this area, namely guarded-based rules, sticky rules and weakly-acyclic rules. The main goal of the thesis is to enrich these languages with non-deterministic constructs (i.e. disjunctions) and investigate the complexity of answering conjunctive queries under these extended languages. As is common in the literature, we will distinguish between combined complexity, where the database, the ontology and the query are considered as input, and data complexity, where only the database is considered as input. The latter case is relevant in practice, as usually the ontology and the query can be considered as fixed, and are usually much smaller than the database itself. After giving appropriate definitions to extend the considered languages to disjunctive existential rules, we establish a series of complexity results, completing the complexity picture for each of the above languages, and four different query languages: arbitrary conjunctive queries, bounded (hyper-)treewidth queries, acyclic queries and atomic queries. For the guarded-based languages, we show a strong 2EXPTIME lower bound for general queries that holds even for fixed ontologies, and establishes 2EXPTIME-completeness of the query answering problem in this case.
For acyclic queries, the complexity can be reduced to EXPTIME, if the predicate arity is bounded, and the problem even becomes tractable for certain restricted languages, if only atomic queries are used. For ontologies represented by sticky disjunctive rules, we show that the problem becomes undecidable, even in the case of data complexity and atomic queries. Finally, for weakly-acyclic rules, we show that the complexity increases from 2EXPTIME to coN2EXPTIME in general, and from tractable to coNP in case of the data complexity, independent of which query language is used. After answering the open complexity questions, we investigate applications and relevant consequences of our results for description logics and give two generic complexity statements, respectively, for acyclic and general conjunctive query answering over description logic knowledge bases. These generic results allow for an easy determination of the complexity of this reasoning task, based on the expressivity of the considered description logic.
37

Manamalkav, Shankar N. "A framework for specifying and generating alerts in relational medical databases." [Gainesville, Fla.] : University of Florida, 2002. http://purl.fcla.edu/fcla/etd/UFE1000139.

Full text of the source
Abstract:
Thesis (M.S.)--University of Florida, 2002.
Title from title page of source document. Document formatted into pages; contains xi, 68 p.; also contains graphics. Includes vita. Includes bibliographical references.
38

Solihin, Wawan. "A simplified BIM data representation using a relational database schema for an efficient rule checking system and its associated rule checking language." Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/54831.

Full text of the source
Abstract:
Efforts to automate building rule checking have not yet brought us near the ultimate goal of a fully automated rule checking process. With the advancement of BIM and the latest tools and computing capability, we have what is necessary to achieve it, and yet challenges still abound. This research takes a holistic approach to the issue by first examining rule complexity and logic structure. Three major aspects of the rules are addressed. The first is a new approach that transforms BIM data into a simple database schema and makes it easily queryable by adopting a data warehouse approach. Geometry and spatial operations are also commonly needed for automating rules, so the second approach integrates these into the database in the form of multiple representations. The third is a standardized rule language, called BIMRL, that leverages database queries integrated with geometry and spatial query capability. It is designed for a non-programmatic approach to rule definition suitable for typical rule experts. A rule definition takes the form of a triplet command, a CHECK – EVALUATE – ACTION statement, and such statements can be chained to support more complex rules. A prototype system has been developed as a proof of concept, using selected rules taken from various sources, to demonstrate the validity of the approach in solving the challenges of automated building rule checking.
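The triplet structure described above can be sketched as follows. This is an illustration of the CHECK – EVALUATE – ACTION idea only, not actual BIMRL syntax; the door-width rule and field names are made-up examples.

```python
# An illustrative sketch of the triplet idea (not actual BIMRL syntax): a rule
# is a CHECK -> EVALUATE -> ACTION pipeline over objects pulled from the
# simplified BIM database. The door-width rule here is a made-up example.
doors = [{"id": "D1", "width": 0.9}, {"id": "D2", "width": 0.7}]

rule = {
    "check": lambda objects: objects,                # CHECK: select candidates
    "evaluate": lambda door: door["width"] >= 0.8,   # EVALUATE: pass/fail test
    "action": lambda door, ok: (door["id"], "PASS" if ok else "FAIL"),
}

results = [rule["action"](d, rule["evaluate"](d)) for d in rule["check"](doors)]
```

Because each stage is a separate function, the output of one rule's ACTION can feed the CHECK of the next, which is how chaining supports more complex rules.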
39

Chen, Si-Wei, and 陳思偉. "Quantitative Association Rules in Transaction Database." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/93572361803151634072.

Full text of the source
Abstract:
Master's thesis
Fu Jen Catholic University
Department of Computer Science and Information Engineering
90
Mining quantitative association rules is to find the items, together with their quantities, that are purchased frequently, and the relationships among them, from a large transaction database. From such rules, we focus on which items, in which quantities, are purchased by "most" of the customers. Before generating quantitative association rules, we need to specify a criterion for what "most" means. If the frequency of an itemset, together with the purchased quantities, satisfies the criterion, then we can say that the items in the itemset are frequently purchased together. If we do not consider the quantities associated with items, then we can specify the criterion directly and find ordinary association rules. For quantitative association rules, however, items with different quantities are regarded as different new items, which makes it difficult for those new items to satisfy the "most" criterion. Previous approaches relaxed the criterion in order to find quantitative association rules, but it is unclear how much to relax it: if it is relaxed too much, many useless rules may be found; if it is not relaxed enough, no rules or only a few may be found. Besides, the range of the quantity associated with an item needs to be partitioned into intervals, and some intervals may be combined; this process of partition and combination may lose information. In this paper, we propose a new approach to discover quantitative association rules. Our algorithm overcomes the problems of the user-specified criterion, range partitioning and interval combination, so that we can find the quantitative association rules in which users are actually interested.
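The base step the abstract describes, treating each (item, quantity) pair as a new item and counting support, can be sketched as below. This is a minimal illustration under assumed toy data, not the thesis's algorithm.

```python
# A minimal sketch (not the thesis's algorithm): each (item, quantity) pair is
# treated as a new item, and support is counted over itemsets of such pairs.
from itertools import combinations
from collections import Counter

transactions = [
    {("bread", 2), ("milk", 1)},
    {("bread", 2), ("milk", 1), ("eggs", 12)},
    {("bread", 1), ("milk", 1)},
]
min_support = 2  # the "most" criterion, as an absolute transaction count

counts = Counter()
for t in transactions:
    for k in (1, 2):  # 1- and 2-itemsets are enough for the sketch
        for itemset in combinations(sorted(t), k):
            counts[itemset] += 1

frequent = {s: c for s, c in counts.items() if c >= min_support}
```

Note how ("bread", 1) and ("bread", 2) are counted as distinct items: this is exactly why quantitative itemsets struggle to reach the same support threshold as plain items.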
40

Wang, Wei-Tse, and 王威澤. "A native XML database association rules mining method and a database compression approach using association rules mining." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/47594741243843634353.

Full text of the source
Abstract:
Master's thesis
Chaoyang University of Technology
Master's Program, Department of Information Management
91
With the advancement of technology and the popularity of applications in enterprise information systems, an ever greater amount of data is generated every day. To properly store and access these data, database applications have come into play and become crucial. The main task of data mining is to help enterprises make decisions by extracting useful information from large amounts of complicated stored data, which is why data mining has recently received more attention than ever. More storage media are also required for the increasing amount of data, so it is wise to provide an efficient data compression technique to reduce the cost. This thesis presents related research on data mining. First, unlike data mining work based primarily on relational databases, we propose a data mining method for native XML databases that can extract knowledge directly from them. Second, we propose semantic association rules: rules extracted by the mining method are converted into semantic association rules through the proposed procedures, making them more legible and easier for users to take as reference. Finally, we propose a database compression method using association rules mining. The method compresses the database to reduce storage cost, and the association rules mined from the data can further be taken as reference when organizations plan their strategic steps.
41

Qui, Ding-Ying, and 邱鼎穎. "Mining Weighted Association Rules from Large Database." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/69625187682793463209.

Full text of the source
Abstract:
Master's thesis
Fu Jen Catholic University
Department of Computer Science and Information Engineering
89
Mining association rules is to find associations among items in large transaction databases. Weighted association rules also describe the associations among items, but the importance of each item is taken into account; hence, weighted association rules can provide more information than ordinary association rules. Many researchers have proposed algorithms for mining association rules. The Apriori algorithm needs to scan the database many times and spends much search time mining association rules, which is very inefficient. Many other algorithms improve the efficiency of Apriori, but a lot of memory space is required. Few approaches have been proposed for mining weighted association rules, and the previous ones also take a lot of time to scan the database and search for the needed information. In this thesis, we propose two new algorithms, Delay Transaction Disassemble (DTD) and Weighted Delay Transaction Disassemble (WDTD), for mining association rules and weighted association rules, respectively; they are very efficient and do not need much memory space. We observe that most of the time spent scanning the database goes to disassembling each transaction. The main idea of the DTD and WDTD algorithms is that a transaction is not disassembled until it needs to be used. For mining weighted association rules, we also propose an AVL-index tree to store the transactions, and a transaction-overlap technique to further reduce execution time and memory space. The experimental results show that our algorithms outperform other algorithms for mining association rules and weighted association rules.
42

Yeh, Ming-Shiow, and 葉明繡. "Generating Fuzzy Rules from Relational Database Systems for Fuzzy Information Retrieval." Thesis, 1995. http://ndltd.ncl.edu.tw/handle/17850078135325377817.

Full text of the source
Abstract:
Master's thesis
National Chiao Tung University
Department of Computer and Information Science
83
In this thesis, we present a fuzzy concept learning system (FCLS) algorithm to construct fuzzy decision trees from relational database systems and to generate fuzzy rules from the constructed fuzzy decision trees. The completeness of the constructed fuzzy decision tree is also discussed in detail. Based on the generated fuzzy rules, we also present a method to forecast null values in relational database systems. Furthermore, we made an experiment to compare the proposed FCLS algorithm with existing methods for analyzing the ability to approximate real-valued functions. The experimental result shows that the overall approximation of the FCLS algorithm is better than the existing methods, especially when f(x)=x/2. Furthermore, we also present a new clustering algorithm to deal with fuzzy query processing for database systems. The proposed algorithm is more flexible and more efficient than the existing method due to the following good features: (1) The number of clusters does not need to be predefined. (2) The ranges of fuzzy terms can be changed dynamically. (3) It does not need to perform complicated membership function calculations. (4) Fuzzy query processing can be much faster.
43

Liu, Jung-Kwi, and 劉榮貴. "Using Active Rules to Maintain Data Consistency in Heterogeneous Database Systems." Thesis, 1996. http://ndltd.ncl.edu.tw/handle/40057821105496884609.

Full text of the source
Abstract:
Master's thesis
Tatung Institute of Technology
Department of Computer Science and Engineering
84
Heterogeneous environments are emerging in current information systems. In this situation, it is desirable to maintain consistency across databases, i.e., to ensure that they do not contradict each other with respect to the existence or value of a real-world entity. Maintaining data consistency is particularly difficult in heterogeneous database systems (HDBSs), due to schematic and operational heterogeneity. Active rules have been used to deal with this complex data consistency problem. The aim of this research is to see how active rules can be used to maintain data consistency in HDBSs. The author examines active rules in different types of autonomous local database systems with different integrity constraint mechanisms. Much current research declares synonym attributes to maintain data consistency within an HDBS; this is called equivalence-level data consistency. In this thesis, the author shows that active rules can maintain data consistency not only at the equivalence level, but also at different semantic levels and at the instance level. The syntax of the active rules includes a rule body and a coupling mode. The coupling mode of active rules provides more flexibility and semantics for defining complex active rules, and defining appropriate coupling modes for related active rules also reduces the problem of rule conflict resolution. The author also describes a new global transaction mechanism that integrates active rule processing and database transactions. A prototype active HDBS has been developed to test this hypothesis. The results of the investigation show that using active rules to maintain data consistency in HDBSs is a good solution.
44

Han, Zhi-Xian, and 韓志賢. "Using Efficient Alogrithms for Mining Association Rules in Large Transactional Database." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/77201372589918332782.

Full text of the source
Abstract:
Master's thesis
Southern Taiwan University of Technology
Department of Information Management
94
With rapidly advancing information technology, our living standard has been rising steadily, and humans can no longer afford to live without information technology. The fast change of information technology and the fast growth of trade make transactional databases complex and make them grow quickly. Hence, efficiently searching for useful information in a huge database becomes crucial; this searching process is called data mining. Data mining technology can be applied widely in many fields, and association rules mining is one of the popular methods of data analysis. Association rule algorithms can currently be categorized into two kinds: those that do not produce candidate itemsets during processing, such as the Frequent-Pattern (FP) tree algorithm, and those that do, such as Apriori-like algorithms. The FP-tree algorithm is faster than Apriori-like algorithms because it uses a data structure to store and compress the transactions of the database, avoiding redundant searching in the database. Generalized association rules are very similar to association rules; the main difference is that generalized association rules add the idea of taxonomies, which makes the algorithm more widely applicable and better suited to business needs. The purpose of this thesis is to take the FP-tree (Frequent-Pattern tree) as a base and modify it to build a new data structure, called "D-tree", that needs to scan the database only once to mine frequent itemsets. We also modify the D-tree algorithm and the FP-tree algorithm to mine generalized association rules, and propose two algorithms, "GFP-tree" and "GD-tree".
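The single-scan, tree-based idea behind FP-tree-style miners can be sketched as below: each transaction is inserted once into a counted prefix tree, so the database never has to be rescanned. The structure is illustrative only and is not the thesis's D-tree.

```python
# A minimal sketch of the single-scan idea behind FP-tree-style miners: each
# transaction is inserted once into a counted prefix tree, so the database is
# never rescanned. (Illustrative only; not the thesis's D-tree.)
class Node:
    def __init__(self, item):
        self.item = item
        self.count = 0
        self.children = {}

def build_tree(transactions):
    root = Node(None)
    for t in transactions:
        node = root
        for item in sorted(t):  # a fixed global item order keeps paths shared
            node = node.children.setdefault(item, Node(item))
            node.count += 1
    return root

root = build_tree([{"a", "b"}, {"a", "b", "c"}, {"a", "c"}])
```

Because transactions sharing a prefix share tree nodes, the tree both compresses the database and carries the counts needed to mine frequent itemsets without another scan.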
45

Chen, Yi-Chun, and 陳奕鈞. "Mining Strong Substitution Rules between Sets of Items in Large Database." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/662h43.

Full text of the source
Abstract:
Master's thesis
National Dong Hwa University
Department of Computer Science and Information Engineering
95
The association rules mining problem has been studied for several years, while few works discuss substitution rules mining. However, substitution rules mining also leads to valuable knowledge for market prediction. In this thesis, the problem of mining substitution rules is discussed and the SSM algorithm is proposed to solve it. Unlike previous works on substitution rules mining, only the strong substitution rules are reported in this work. The SSM algorithm can be decomposed into two stages: (1) generate frequent closed patterns; (2) utilize negative association rules and the Pearson correlation coefficient to mine strong substitution rules based on the frequent closed patterns. Moreover, to make the mining process more efficient, two lemmas are proposed to prune redundant substitution rules. The experimental results show that the SSM algorithm offers better performance and finds fewer, stronger substitution rules.
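The correlation step can be sketched as follows. On binary presence/absence data the Pearson correlation coefficient reduces to the phi coefficient, and a strongly negative value between two items suggests they substitute for each other. The data and function are illustrative, not the SSM implementation.

```python
# A hedged sketch of the correlation step: on binary presence/absence data the
# Pearson coefficient reduces to the phi coefficient; a strongly negative
# value suggests two items substitute for each other. Data is illustrative.
from math import sqrt

def phi(transactions, x, y):
    n = len(transactions)
    nx = sum(x in t for t in transactions)
    ny = sum(y in t for t in transactions)
    nxy = sum(x in t and y in t for t in transactions)
    return (n * nxy - nx * ny) / sqrt(nx * (n - nx) * ny * (n - ny))

baskets = [{"tea"}, {"coffee"}, {"tea"}, {"coffee"},
           {"tea", "sugar"}, {"coffee", "sugar"}]
# tea and coffee never co-occur here, so their correlation is strongly negative
```

A miner like the one described would keep a candidate substitution pair only when this coefficient falls below a negative threshold.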
46

Ho, Wei-hann, and 何維翰. "Weight Algorithm with Multiple Supports for Mining Association Rules in Large Database." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/79415498767118255576.

Full text of the source
Abstract:
Master's thesis
Leader University
Graduate Institute of Applied Informatics
91
Finding useful association rules among itemsets from a large transaction database is a very important issue in the data mining field. Traditional association rules mining focuses almost exclusively on the number of transactions containing an itemset, with the result that itemsets with high profit but low sales volume are ignored. In this paper, we present a new algorithm, called WMMS (Weight Algorithm with Multiple Supports for Mining Association Rules), to explore the relationship between itemsets and profit. We set different support thresholds according to the different profits of itemsets. The association rules generated by the WMMS algorithm solve the problem that itemsets with high prices but few transactions are difficult to find, while the brisk sellers can still be found. The experiments in the fourth section show that the proposed algorithm has considerable benefit.
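The multiple-support idea the abstract describes can be sketched as below: each item gets its own minimum support, lower for high-profit items, so rare but profitable items still surface. The profit-to-threshold mapping and data are assumptions for the example, not the WMMS formula.

```python
# A sketch of the multiple-support idea: each item gets its own minimum
# support, lower for high-profit items, so rare but profitable items still
# surface. The profit-to-threshold mapping is illustrative, not WMMS's.
profits = {"caviar": 50.0, "bread": 1.0}
base_support = 0.3

def min_support(item):
    # higher profit -> lower threshold (one simple possible mapping)
    return base_support / (1 + profits.get(item, 0.0))

transactions = [{"bread"}, {"bread"}, {"bread", "caviar"}, {"bread"}]

def support(item):
    return sum(item in t for t in transactions) / len(transactions)

def is_frequent(item):
    return support(item) >= min_support(item)
```

Under a single uniform threshold of 0.3, "caviar" (support 0.25) would be discarded; under its profit-lowered threshold it survives.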
47

"Mining fuzzy association rules in large databases with quantitative attributes." 1997. http://library.cuhk.edu.hk/record=b5889060.

Full text of the source
Abstract:
by Kuok, Chan Man.
Thesis (M.Phil.)--Chinese University of Hong Kong, 1997.
Includes bibliographical references (leaves 74-77).
Abstract --- p.i
Acknowledgments --- p.iii
Chapter 1 --- Introduction --- p.1
Chapter 1.1 --- Data Mining --- p.2
Chapter 1.2 --- Association Rule Mining --- p.3
Chapter 2 --- Background --- p.6
Chapter 2.1 --- Framework of Association Rule Mining --- p.6
Chapter 2.1.1 --- Large Itemsets --- p.6
Chapter 2.1.2 --- Association Rules --- p.8
Chapter 2.2 --- Association Rule Algorithms For Binary Attributes --- p.11
Chapter 2.2.1 --- AIS --- p.12
Chapter 2.2.2 --- SETM --- p.13
Chapter 2.2.3 --- "Apriori, AprioriTid and AprioriHybrid" --- p.15
Chapter 2.2.4 --- PARTITION --- p.18
Chapter 2.3 --- Association Rule Algorithms For Numeric Attributes --- p.20
Chapter 2.3.1 --- Quantitative Association Rules --- p.20
Chapter 2.3.2 --- Optimized Association Rules --- p.23
Chapter 3 --- Problem Definition --- p.25
Chapter 3.1 --- Handling Quantitative Attributes --- p.25
Chapter 3.1.1 --- Discrete intervals --- p.26
Chapter 3.1.2 --- Overlapped intervals --- p.27
Chapter 3.1.3 --- Fuzzy sets --- p.28
Chapter 3.2 --- Fuzzy association rule --- p.31
Chapter 3.3 --- Significance factor --- p.32
Chapter 3.4 --- Certainty factor --- p.36
Chapter 3.4.1 --- Using significance --- p.37
Chapter 3.4.2 --- Using correlation --- p.38
Chapter 3.4.3 --- Significance vs. Correlation --- p.42
Chapter 4 --- Steps For Mining Fuzzy Association Rules --- p.43
Chapter 4.1 --- Candidate itemsets generation --- p.44
Chapter 4.1.1 --- Candidate 1-Itemsets --- p.45
Chapter 4.1.2 --- Candidate k-Itemsets (k > 1) --- p.47
Chapter 4.2 --- Large itemsets generation --- p.48
Chapter 4.3 --- Fuzzy association rules generation --- p.49
Chapter 5 --- Experimental Results --- p.51
Chapter 5.1 --- Experiment One --- p.51
Chapter 5.2 --- Experiment Two --- p.53
Chapter 5.3 --- Experiment Three --- p.54
Chapter 5.4 --- Experiment Four --- p.56
Chapter 5.5 --- Experiment Five --- p.58
Chapter 5.5.1 --- Number of Itemsets --- p.58
Chapter 5.5.2 --- Number of Rules --- p.60
Chapter 5.6 --- Experiment Six --- p.61
Chapter 5.6.1 --- Varying Significance Threshold --- p.62
Chapter 5.6.2 --- Varying Membership Threshold --- p.62
Chapter 5.6.3 --- Varying Confidence Threshold --- p.63
Chapter 6 --- Discussions --- p.65
Chapter 6.1 --- User guidance --- p.65
Chapter 6.2 --- Rule understanding --- p.67
Chapter 6.3 --- Number of rules --- p.68
Chapter 7 --- Conclusions and Future Works --- p.70
Bibliography --- p.74
48

Liao, Chen-Han, and 廖晨涵. "Mining Positive and Negative Generalized Association Rules among Products from a Transaction Database." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/5etga9.

Full text of the source
Abstract:
Master's thesis
Ming Chuan University
Master's Program, Department of Information Management
95
When consumers buy products, they often make choices. In general, there are two kinds of outcomes in a decision situation: buy and not buy. The buy situation can be regarded as a kind of positive information; on the contrary, not buy is negative information. It is usually very large (not buy represents many different situations), and it is also important information for analyzing customer behavior. In this study, we propose the PNGAR* algorithm for mining positive and negative generalized association rules; it generates suitable positive generalized association rules while pruning redundant negative generalized association rules. Second, the PNGEAR algorithm adds negative information from rules that do not satisfy the minimum confidence. Both algorithms are based on a conditional database to generate association rules of the form X ^ ¬Z.
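One way to measure a negative rule of the form X ^ ¬Z can be sketched as below: its confidence is the share of transactions containing X that do not contain Z. This is an illustration of the general notion, not the PNGAR*/PNGEAR procedure; the data is made up.

```python
# A sketch of measuring a negative rule X -> not-Z: confidence is the share
# of transactions containing X that do NOT contain Z. Data is illustrative.
def neg_confidence(transactions, x, z):
    with_x = [t for t in transactions if x <= t]  # baskets containing all of X
    if not with_x:
        return 0.0
    return sum(1 for t in with_x if not (z & t)) / len(with_x)

baskets = [{"a", "b"}, {"a"}, {"a", "c"}, {"b"}]
conf = neg_confidence(baskets, {"a"}, {"b"})  # of 3 baskets with "a", 2 lack "b"
```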
49

Shih, Bo-kwei, and 施柏魁. "Mining fuzzy association rules from RFM data and transaction data for database marketing." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/08673818974241937707.

Full text of the source
Abstract:
Master's thesis
Nanhua University
Graduate Institute of Information Management
94
In this paper, two important issues of mining association rules are investigated. The first problem is the discovery of generalized fuzzy association rules in a transaction database. It is an important data-mining task, because more general and qualitative knowledge can be uncovered for decision making. However, few algorithms have been proposed in the literature, and the efficiency of these algorithms needs to be improved to handle real-world large datasets. The second problem is to discover association rules from RFM data and the large itemsets identified in the transaction database; this kind of rule is useful for marketing decisions. A cluster-based mining architecture is proposed to address the two problems. First, an efficient fuzzy association rule miner, based on cluster-based fuzzy-set tables, is presented to identify all the large fuzzy itemsets; this method requires fewer comparisons to generate large itemsets. Next, a fuzzy rule discovery method is used to compute the confidence values for discovering the relationships between the transaction database and the RFM database. An illustrative example is given to demonstrate the effectiveness of the proposed methods.
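The fuzzification step underlying such miners can be sketched as below: a quantitative RFM attribute, such as monetary value, is mapped onto fuzzy sets, so a customer contributes partial membership rather than falling into one crisp bucket. The membership function and set boundaries are assumptions for the example, not the thesis's definitions.

```python
# An illustrative sketch (not the thesis's miner) of fuzzifying an RFM
# attribute: a quantitative value gets partial membership in fuzzy sets
# instead of one crisp bucket. Set boundaries are made up.
def triangular(x, a, b, c):
    """Triangular membership: rises from a, peaks at b, falls to zero at c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# membership of a 600-dollar customer in a "medium spender" set over 200..1000
medium = triangular(600, 200, 600, 1000)
```

A fuzzy itemset's support is then the sum of such membership degrees over customers, which is what lets the miner uncover qualitative knowledge like "high recency and medium monetary imply buying product A".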
50

Su, Chien-hao, and 蘇建豪. "Mining the Association Rules of Chronic Diseases from the National Health Insurance Research Database." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/17405638510661693463.

Full text of the source
Abstract:
Master's thesis
Shih Hsin University
Graduate Institute of Information Management (including in-service master's program)
99
According to the World Health Organization in 2008, "Noncommunicable diseases are now the biggest killers." The article pointed out that chronic diseases such as heart disease and stroke are major causes of death in the world, and that the global burden of disease is shifting from infectious diseases to noncommunicable diseases. The annual death toll from non-communicable diseases and injuries worldwide will reach 38 million, 70% of all deaths, and the WHO estimates that deaths from non-communicable diseases worldwide will increase by 17% over the next 10 years. In addition, information from the Bureau of National Health Insurance shows the importance of chronic disease research: the top three items of medical expense are cancer, dialysis, and mechanical ventilation, all of them expenses of chronic diseases. In the past, chronic disease research emphasized medical pathology and clinical statistics, and statistical research mainly used clinical statistical data rather than large-scale data. In this research, we use data mining techniques to mine association rules for chronic diseases from the National Health Insurance databases. After data preprocessing, we applied association and sequence analysis to 98 kinds of chronic diseases. The results show that the proportion of women suffering from chronic diseases increases annually. We found that cerebrovascular disease is associated with high blood pressure and diabetes, and that heart disease is also associated with high blood pressure and diabetes; these results are consistent with previous research. In addition, we found association rules indicating that diabetes is common among patients with other chronic diseases, and identified knowledge concerning five types of chronic diseases: cerebrovascular disease, heart disease, osteoporosis, diabetes and peptic ulcer.
The results of this research can help medical staff reduce errors in medical practice in the future and improve the reliability of medical practice. In addition, they can help predict the complications of chronic diseases, avoiding misdiagnosis and the resulting waste of medical resources. Finally, the research provides knowledge of chronic diseases, their severity and the resources they consume for the general population, and reminds people to care about their own health.
