Dissertations / Theses on the topic 'Rule base'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Rule base.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Hridoy, Md Rafiul Sabbir. "An Intelligent Flood Risk Assessment System using Belief Rule Base." Thesis, Luleå tekniska universitet, Institutionen för system- och rymdteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-65390.

Abstract:
Natural disasters disrupt our daily life and cause much suffering. Among the various natural disasters, flood is one of the most catastrophic. Assessing flood risk helps to take necessary precautions and can save human lives. The assessment of risk involves various factors which cannot be measured with one hundred percent certainty. Therefore, the present methods of flood risk assessment cannot assess the risk of flooding accurately. This research rigorously investigates various types of uncertainties associated with the flood risk factors. In addition, a comprehensive study of the present flood risk assessment approaches has been conducted. Belief Rule Base (BRB) expert systems are widely used to handle various types of uncertainties. Therefore, this research adopts the BRB expert system (BRBES) approach to develop an expert system to assess the risk of flooding. In addition, to facilitate the learning procedures of the BRBES, an optimal learning algorithm has been proposed. The developed BRBES has been applied to a real-world case study area located at Cox's Bazar, Bangladesh. The training data have been collected from the case study area to obtain the trained BRB and to develop the optimal learning model. The BRBES can generate different "what-if" scenarios, which enables the analysis of the flood risk of an area from various perspectives and makes the system robust and sustainable. This system is said to be intelligent as it has a knowledge base, an inference engine, and learning capability.
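For readers unfamiliar with BRB inference, the sketch below illustrates the core mechanics the abstract refers to: input matching degrees activate rules, and the activated rules' belief distributions are aggregated into an overall assessment. It is a minimal illustration with invented fuzzy sets, rules and numbers, and a simple weighted average stands in for the recursive evidential reasoning algorithm used in the thesis.

```python
# Minimal sketch of belief-rule-base (BRB) inference for flood risk.
# Fuzzy sets, rules, weights and inputs are all invented, and a simple
# weighted average replaces the thesis's recursive evidential reasoning.

def triangular(x, a, b, c):
    """Membership degree of x in the triangular fuzzy set (a, b, c)."""
    if x < a or x > c:
        return 0.0
    if x == b:
        return 1.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

RAIN = {"low": (0, 0, 60), "high": (30, 120, 120)}     # rainfall, mm
LEVEL = {"low": (0, 0, 4), "high": (2, 8, 8)}          # river level, m

# Each rule: (rule weight, rain ref, level ref, belief over risk grades).
RULES = [
    (1.0, "low",  "low",  {"low": 0.9, "med": 0.1, "high": 0.0}),
    (1.0, "low",  "high", {"low": 0.2, "med": 0.6, "high": 0.2}),
    (1.0, "high", "low",  {"low": 0.1, "med": 0.7, "high": 0.2}),
    (1.0, "high", "high", {"low": 0.0, "med": 0.2, "high": 0.8}),
]

def infer(rain, level):
    # Activation weight = rule weight * product of antecedent matching degrees.
    acts = [w * triangular(rain, *RAIN[r]) * triangular(level, *LEVEL[l])
            for w, r, l, _ in RULES]
    total = sum(acts) or 1.0
    # Aggregate consequent beliefs, weighted by normalized activations.
    risk = {grade: 0.0 for grade in ("low", "med", "high")}
    for act, (_, _, _, belief) in zip(acts, RULES):
        for grade, b in belief.items():
            risk[grade] += (act / total) * b
    return risk

print(infer(rain=90.0, level=6.0))   # mostly 'high' risk belief
```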
2

Antoine, Emilien. "Distributed data management with a declarative rule-based language Webdamlog." PhD thesis, Université Paris Sud - Paris XI, 2013. http://tel.archives-ouvertes.fr/tel-00933808.

Abstract:
Our goal is to enable a Web user to easily specify distributed data management tasks in place, i.e. without centralizing the data to a single provider. Our system is therefore not a replacement for Facebook, or any centralized system, but an alternative that allows users to launch their own peers on their machines, processing their own local personal data, and possibly collaborating with Web services. We introduce Webdamlog, a datalog-style language for managing distributed data and knowledge. The language extends datalog in a number of ways, notably with a novel feature, namely delegation, allowing peers to exchange not only facts but also rules. We present a user study that demonstrates the usability of the language. We describe a Webdamlog engine that extends a distributed datalog engine, namely Bud, with the support of delegation and of a number of other novelties of Webdamlog, such as the possibility to have variables denoting peers or relations. We mention novel optimization techniques, notably one based on the provenance of facts and rules. We exhibit experiments that demonstrate that the rich features of Webdamlog can be supported at reasonable cost and that the engine scales to large volumes of data. Finally, we discuss the implementation of a Webdamlog peer system that provides an environment for the engine. In particular, a peer supports wrappers to exchange Webdamlog data with non-Webdamlog peers. We illustrate these peers by presenting a picture management application that we used for demonstration purposes.
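The datalog core that Webdamlog extends can be made concrete with a naive fixpoint evaluator. The sketch below is illustrative only: the relations, facts and rules are invented, and the distribution, peer and delegation features that are Webdamlog's actual contribution are omitted entirely.

```python
# A naive datalog-style fixpoint evaluator, to make the rule-evaluation
# flavor of a Webdamlog-like language concrete. Webdamlog itself adds
# distribution, peers, and delegation, all of which this sketch omits.

facts = {("link", ("a", "b")), ("link", ("b", "c")), ("link", ("c", "d"))}
rules = [  # head <- body; uppercase terms are variables
    (("reach", ("X", "Y")), [("link", ("X", "Y"))]),
    (("reach", ("X", "Z")), [("link", ("X", "Y")), ("reach", ("Y", "Z"))]),
]

def unify(pattern, fact, env):
    """Try to extend env by matching a body atom's terms against a fact."""
    env = dict(env)
    for term, const in zip(pattern, fact):
        if term[0].isupper():                 # variable
            if env.setdefault(term, const) != const:
                return None
        elif term != const:                   # constant mismatch
            return None
    return env

def step(facts):
    derived = set(facts)
    for head, body in rules:
        envs = [{}]
        for rel, pattern in body:             # join the body atoms left to right
            envs = [e2 for e in envs
                    for r, t in facts if r == rel
                    for e2 in [unify(pattern, t, e)] if e2 is not None]
        for env in envs:
            derived.add((head[0], tuple(env[v] for v in head[1])))
    return derived

while True:                                   # iterate to the least fixpoint
    new_facts = step(facts)
    if new_facts == facts:
        break
    facts = new_facts

print(sorted(t for r, t in facts if r == "reach"))
```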
3

Wennerholm, Pia. "The Role of High-Level Reasoning and Rule-Based Representations in the Inverse Base-Rate Effect." Doctoral thesis, Uppsala : Acta Universitatis Upsaliensis : Universitetsbiblioteket [distributör], 2001. http://publications.uu.se/theses/91-554-5178-0/.

4

Kong, Guilan. "An online belief rule-based group clinical decision support system." Thesis, University of Manchester, 2011. https://www.research.manchester.ac.uk/portal/en/theses/an-online-belief-rulebased-group-clinical-decision-support-system(c31a65c7-60c3-4e7a-b18e-44fee95f7da1).html.

Abstract:
Around ten percent of patients admitted to National Health Service (NHS) hospitals have experienced a patient safety incident, and an important reason for the high rate of patient safety incidents is medical error. Research shows that an appropriate increase in the use of clinical decision support systems (CDSSs) could help to reduce medical errors and result in substantial improvement in patient safety. However, several barriers continue to impede the effective implementation of CDSSs in clinical settings, among which representation of and reasoning about medical knowledge, particularly under uncertainty, are areas that require refined methodologies and techniques. In particular, the knowledge base in a CDSS needs to be updated automatically based on accumulated clinical cases to provide evidence-based clinical decision support. In this research, we employed the recently developed belief Rule-base Inference Methodology using the Evidential Reasoning approach (RIMER) for the design and development of an online belief rule-based group CDSS prototype. In the system, a belief rule base (BRB) was used to model uncertain clinical domain knowledge, the evidential reasoning (ER) approach was employed to build the inference engine, a BRB training module was developed for learning the BRB from accumulated clinical cases, and an online discussion forum together with an ER-based group preference aggregation tool was developed to provide online clinical group decision support. We used a set of simulated patients with cardiac chest pain provided by our research collaborators in Manchester Royal Infirmary to validate the developed online belief rule-based CDSS prototype. The results show that the prototype can provide reliable diagnosis recommendations and that the diagnostic performance of the system can be improved significantly after training the BRB using accumulated clinical cases.
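The ER-style aggregation mentioned in the abstract can be illustrated with a toy two-expert combination. The sketch below uses a Dempster-Shafer-style conjunctive combination with weight discounting; it is far simpler than the actual recursive ER algorithm in RIMER, and all weights and belief degrees are invented.

```python
# Toy combination of two experts' diagnostic assessments, in the spirit
# of (but much simpler than) the evidential reasoning (ER) algorithm.
# The diagnoses, expert weights, and belief degrees are all invented.

DIAGNOSES = ("cardiac", "non_cardiac")

def to_masses(beliefs, weight):
    """Discount an expert's belief distribution by its weight; the rest
    stays unassigned (mass on the whole frame of discernment)."""
    m = {d: weight * b for d, b in beliefs.items()}
    m["unassigned"] = 1.0 - sum(m.values())
    return m

def combine(m1, m2):
    """Conjunctive combination: agreement reinforces, conflict is discarded."""
    out = {d: m1[d] * m2[d] + m1[d] * m2["unassigned"] + m1["unassigned"] * m2[d]
           for d in DIAGNOSES}
    out["unassigned"] = m1["unassigned"] * m2["unassigned"]
    total = sum(out.values())            # renormalize away the conflict mass
    return {k: v / total for k, v in out.items()}

expert_a = to_masses({"cardiac": 0.7, "non_cardiac": 0.2}, weight=0.9)
expert_b = to_masses({"cardiac": 0.5, "non_cardiac": 0.4}, weight=0.6)
print(combine(expert_a, expert_b))
```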
5

Jacobs, Robert Alan, and John Phillip Steiner. "Improvements to autonomous forces through the use of genetic algorithms and rule base enhancement." Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1993. http://handle.dtic.mil/100.2/ADA275033.

Abstract:
Thesis (M.S. in Information Technology Management), Naval Postgraduate School, September 1993. Thesis advisor(s): Hemant K. Bhargava; B. Ramesh. Bibliography: p. 80-83. Also available online.
6

Jacobs, Robert Alan, and John Phillip Steiner. "Improvements to autonomous forces through the use of genetic algorithms and rule base enhancement." Thesis, Monterey, California. Naval Postgraduate School, 1993. http://hdl.handle.net/10945/39954.

Abstract:
Approved for public release; distribution is unlimited.
This thesis discusses two approaches to enhancing the performance of intelligent autonomous agents in a computer combat simulation environment so that their performances more closely model the tactical decisions made by human players. The first approach a
7

Vafaie, Haleh. "An inferencing procedure for guaranteeing the search time of a production-rule knowledge base." Dissertation (Electrical Engineering), Carleton University, Ottawa, 1986.

8

Sowan, Bilal I. "Enhancing Fuzzy Associative Rule Mining Approaches for Improving Prediction Accuracy. Integration of Fuzzy Clustering, Apriori and Multiple Support Approaches to Develop an Associative Classification Rule Base." Thesis, University of Bradford, 2011. http://hdl.handle.net/10454/5387.

Abstract:
Building an accurate and reliable model for prediction in different application domains is one of the most significant challenges in knowledge discovery and data mining. This thesis focuses on building and enhancing a generic predictive model for estimating a future value by extracting association rules (knowledge) from a quantitative database. This model is applied to several data sets obtained from different benchmark problems, and the results are evaluated through extensive experimental tests. The thesis presents an incremental development process for the prediction model with three stages. Firstly, a Knowledge Discovery (KD) model is proposed by integrating Fuzzy C-Means (FCM) with the Apriori approach to extract Fuzzy Association Rules (FARs) from a database for building a Knowledge Base (KB) to predict a future value. The KD model has been tested with two road-traffic data sets. Secondly, the initial model has been further developed by including a diversification method in order to improve the reliability of the FARs and find the best and most representative rules. The resulting Diverse Fuzzy Rule Base (DFRB) maintains high-quality and diverse FARs, offering a more reliable and generic model. The model uses FCM to transform quantitative data into fuzzy data, while a Multiple Support Apriori (MSapriori) algorithm is adapted to extract the FARs from the fuzzy data. The correlation values for these FARs are calculated, and an efficient filtering of FARs is performed as a post-processing method. The diversity of the FARs is maintained through clustering of the FARs, based on the concept of the sharing function technique used in multi-objective optimization. The best and most diverse FARs are obtained as the DFRB to utilise within the Fuzzy Inference System (FIS) for prediction. The third stage of development proposes a hybrid prediction model called the Fuzzy Associative Classification Rule Mining (FACRM) model. This model integrates the improved Gustafson-Kessel (G-K) algorithm, the proposed Fuzzy Associative Classification Rules (FACR) algorithm and the proposed diversification method. The improved G-K algorithm transforms quantitative data into fuzzy data, while the FACR algorithm generates significant rules (Fuzzy Classification Association Rules (FCARs)) by employing the improved multiple support threshold, associative classification and vertical scanning format approaches. These FCARs are then filtered by calculating the correlation value and the distance between them. The advantage of the proposed FACRM model is that it builds a generalized prediction model able to deal with different application domains. The validation of the FACRM model is conducted using different benchmark data sets from the University of California, Irvine (UCI) machine learning repository and the KEEL (Knowledge Extraction based on Evolutionary Learning) repository, and the results of the proposed FACRM are also compared with other existing prediction models. The experimental results show that the error rate and generalization performance of the proposed model are better than those of the commonly used models for the majority of data sets. A new method for feature selection entitled Weighting Feature Selection (WFS) is also proposed. The WFS method aims to improve the performance of the FACRM model by minimizing the prediction error and reducing the number of generated rules. The prediction results of FACRM employing WFS have been compared with those of the FACRM and Stepwise Regression (SR) models for different data sets. The performance analysis and comparative study show that the proposed prediction model provides an effective approach that can be used within a decision support system.
Sponsor: Applied Science University (ASU) of Jordan.
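To make the fuzzy-rule measures concrete, the sketch below computes fuzzy support and confidence on records that have already been fuzzified. In the thesis the memberships come from FCM (or the improved G-K algorithm) and MSapriori additionally prunes candidates with multiple minimum supports; here the items, memberships and thresholds are invented.

```python
# Sketch of fuzzy association-rule measures on already-fuzzified data.
# Items and membership values are illustrative only.

records = [  # membership of each record in each fuzzy item
    {"traffic_high": 0.9, "speed_low": 0.8, "delay_long": 0.7},
    {"traffic_high": 0.2, "speed_low": 0.1, "delay_long": 0.3},
    {"traffic_high": 0.7, "speed_low": 0.6, "delay_long": 0.9},
]

def fuzzy_support(itemset):
    """Average over records of the min membership across the itemset."""
    return sum(min(r.get(i, 0.0) for i in itemset) for r in records) / len(records)

def confidence(antecedent, consequent):
    return fuzzy_support(antecedent | consequent) / fuzzy_support(antecedent)

ante, cons = {"traffic_high", "speed_low"}, {"delay_long"}
print(f"support={fuzzy_support(ante | cons):.2f} "
      f"confidence={confidence(ante, cons):.2f}")
```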
9

Valenta, Jan. "Automatické ladění vah pravidlových bází znalostí." Doctoral thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2009. http://www.nusl.cz/ntk/nusl-233507.

Abstract:
This dissertation thesis introduces new methods of automated knowledge-base creation and tuning in information and expert systems. The thesis is divided into the two following parts. The first part is focused on the legacy expert system NPS32, developed at the Faculty of Electrical Engineering and Communication, Brno University of Technology. The mathematical basis of the system is the expression of rule uncertainty using two values, which extends the information capability of the knowledge base with values for the absence of information and for conflict in the knowledge base. The expert system has been supplemented by a learning algorithm that sets the weights of the rules in the knowledge base using a differential evolution algorithm, based on patterns acquired from an expert. The learning algorithm is limited to single-layer knowledge bases. The thesis gives a formal proof that the mathematical basis of the NPS32 expert system cannot be used for gradient tuning of the weights in multilayer knowledge bases. The second part is focused on a learning algorithm for multilayer knowledge bases. The knowledge base is based on a specific rule model with uncertainty factors representing the information impact ratio. For adjusting the weights of every single rule in the knowledge-base structure, a back-propagation algorithm is used, modified for the given knowledge-base structure and rule model. For the purpose of testing and verifying the learning algorithm for knowledge-base tuning, the expert system RESLA has been developed in C#. With this expert system, a knowledge base from the field of medicine was created to verify the learning ability for complex knowledge bases: it diagnoses heart malfunctions from acquired ECG (electrocardiogram) parameters. For comparison with existing knowledge bases created by an expert and a knowledge engineer, the expert system was also compared with a professionally designed knowledge base from the field of agriculture, a decision support system for selecting suitable cultivars of winter wheat for planting. The presented algorithms speed up knowledge-base creation while keeping all the advantages that arise from using rules. Contrary to existing solutions based on neural networks, the presented algorithms for tuning knowledge-base weights are faster and simpler, because they do not need rule extraction from another type of knowledge representation.
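The first part's idea, tuning rule weights with differential evolution against expert-provided patterns, can be sketched as follows. The "inference" below is a plain normalized weighted vote standing in for the NPS32 two-value calculus, and the patterns, population size and DE parameters are invented.

```python
# Sketch: tuning rule weights with differential evolution (DE) so that a
# rule base reproduces expert-labelled patterns. Everything numeric here
# is an invented stand-in for the thesis's expert patterns and calculus.

import random

PATTERNS = [([1, 0, 1], 0.8), ([0, 1, 1], 0.4), ([1, 1, 0], 0.6)]  # (fired rules, target)

def output(weights, fired):
    s = sum(w * f for w, f in zip(weights, fired))
    return s / (sum(fired) or 1)              # normalized weighted vote

def loss(weights):
    return sum((output(weights, p) - t) ** 2 for p, t in PATTERNS)

def differential_evolution(dim=3, pop=20, gens=200, F=0.5, CR=0.9):
    population = [[random.random() for _ in range(dim)] for _ in range(pop)]
    for _ in range(gens):
        for i, x in enumerate(population):
            a, b, c = random.sample([p for j, p in enumerate(population) if j != i], 3)
            trial = [a[k] + F * (b[k] - c[k]) if random.random() < CR else x[k]
                     for k in range(dim)]
            trial = [min(1.0, max(0.0, v)) for v in trial]  # weights stay in [0, 1]
            if loss(trial) < loss(x):          # greedy selection
                population[i] = trial
    return min(population, key=loss)

best = differential_evolution()
print([round(w, 2) for w in best], round(loss(best), 4))
```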
10

Wijesekera, Dhammika Harindra. "A form based meta-schema for information and knowledge elicitation." Swinburne University of Technology, 2006. http://adt.lib.swin.edu.au./public/adt-VSWT20060904.123024.

Abstract:
Knowledge is considered important for the survival and growth of an enterprise. Currently knowledge is stored in various places, including the bottom drawers of employees. The human being is considered to be the most important knowledge provider. Over the years, knowledge-based systems (KBS) have been developed to capture and nurture the knowledge of domain experts. However, such systems were considered to be separate and different from traditional information systems development, and many KBS development projects have failed. The main causes for such failures have been recognised as the difficulties associated with the process of knowledge elicitation, in particular the techniques and methods employed. On the other hand, the main emphasis of information systems development has been in the areas of data and information capture relating to transaction-based systems. For knowledge to be effectively captured and nurtured, it needs to be part of the information systems development activity. This thesis reports on a process of investigation and analysis conducted into the areas of information, knowledge and their overlap. The research advocates a hybrid approach, where knowledge and information capture are considered as one in a unified environment. A meta-schema design based on Formal Object Role Modelling (FORM), independent of implementation details, is introduced for this purpose. This is considered to be a key contribution of this research. Both information and knowledge are expected to be captured through this approach. Meta-data types are provided for the capture of business rules, and they form part of the knowledge base of an organisation. The integration of knowledge with data and information is also described. XML is recognised by many as the preferred data interchange language, and it is investigated for the purpose of rule interchange. This approach is expected to enable organisations to interchange business rules and their meta-data, in addition to data and their schema. During interchange, rules can be interpreted and applied by receiving systems, thus providing a basis for intelligent behaviour. With the emergence of new technologies such as the Internet, the modelling of an enterprise as a series of business processes has gained prominence. Enterprises are moving towards integration, establishing well-described business processes within and across enterprises, including their customers and suppliers, in order to derive a common set of objectives and benefit from potential economic efficiencies. The suggested meta-schema design can be used in the early phases of requirements elicitation to specify, communicate, comprehend and refine various artefacts, encouraging domain experts and knowledge analysts to work towards describing each business process and their interactions. Existing business processes can be documented and business efficiencies achieved through a process of refinement. The meta-schema design allows for a "systems view" and the sharing of such views, enabling domain experts to focus on their area of specialisation whilst having an understanding of other business areas and their facts. The design also allows for synchronisation of the mental models of the experts and the knowledge analyst. This has been a major issue with KBS development and one of the main reasons for the failure of such projects, and the intention of this research is to provide a facility to overcome it. The natural-language-based FORM encourages verbalisation of the domain, hence increasing the understanding and comprehension of the available business facts.
11

Coy, Christopher G. "A Hybrid-Genetic Algorithm for Training a Sugeno-Type Fuzzy Inference System with a Mutable Rule Base." University of Toledo / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1289243615.

12

Monrat, Ahmed Afif. "A belief rule based flood risk assessment expert system using real time sensor data streaming." Thesis, Luleå tekniska universitet, Datavetenskap, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-71081.

Abstract:
Among the various natural calamities, flood is considered one of the most catastrophic natural hazards, with a significant impact on the socio-economic lifeline of a country. The assessment of flood risks facilitates taking appropriate measures to reduce the consequences of flooding. Flood risk assessment requires big data coming from different sources, such as sensors, social media, and organizations. However, these data sources contain various types of uncertainties because of the presence of incomplete and inaccurate information. This thesis presents a belief rule-based expert system (BRBES) developed on a big data platform to assess flood risk in real time. The system processes extremely large datasets by integrating the BRBES with Apache Spark, while a web-based interface has been developed to allow the visualization of flood risk in real time. Since the integrated BRBES employs a knowledge-driven learning mechanism, it has been compared with data-driven learning mechanisms to determine its reliability in assessing flood risk. The integrated BRBES produces more reliable results than the other data-driven approaches. Data for the expert system have been collected from different case study areas in Bangladesh to validate the integrated system.
13

Lowe, Joshua Brian. "Quantifying Seismic Risk for Portable Ground Support Equipment at Vandenberg Air Force Base." DigitalCommons@CalPoly, 2010. https://digitalcommons.calpoly.edu/theses/269.

Abstract:
This project develops a quantitative method to evaluate the seismic risk for portable ground support equipment (GSE) at Vandenberg Air Force Base. Using the latest probability data available from the USGS, risk thresholds are defined for portable GSE that has the potential to cause a catastrophic event. Additionally, an example tool for design engineers was developed from the seismic codes, showing that the tipping hazard case can be simplified into strict geometrical terms. The misinterpretation and confusion regarding the Range Safety 24 Hour Rule exemption can be avoided by assessing seismic risk for portable GSE. By using the methods herein to quantify and understand seismic risk, more informed risk decisions can be made by engineering and management. The seismic codes and requirements used and referenced throughout include, but are not limited to, IBC, ASCE 7, EWR 127-1, and AFSPCMAN 91-710.
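The "strict geometrical terms" for the tipping case can be illustrated with the standard quasi-static criterion: a free-standing block tips when the lateral acceleration exceeds g times half the base width divided by the height of the center of gravity. The dimensions below are invented, not taken from the thesis tool.

```python
# Sketch of the quasi-static tipping criterion for a free-standing block.
# All dimensions are illustrative, not from the thesis.

G = 9.81  # gravitational acceleration, m/s^2

def tips(base_width_m, cg_height_m, lateral_accel_ms2):
    """True when the overturning moment exceeds the restoring moment."""
    return lateral_accel_ms2 > G * (base_width_m / 2) / cg_height_m

# A tall, narrow equipment rack: 0.6 m wide base, center of gravity at 1.2 m.
for a in (1.0, 2.0, 3.0):
    print(f"a = {a} m/s^2 -> tips: {tips(0.6, 1.2, a)}")
```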
14

Martins, Filipe Miguel Guerreiro. "eVentos 2 - Autonomous sailboat control." Master's thesis, Faculdade de Ciências e Tecnologia, 2013. http://hdl.handle.net/10362/11211.

Abstract:
Dissertation submitted for the degree of Master in Electrical and Computer Engineering.
Sailboat navigation started as a way to explore the world. Even though their performance is significantly lower than that of a motorboat, in terms of resources these vessels are still the best low-cost solution. In the past, navigation depended greatly on estimates or on the stars. Nowadays it depends on precise data provided by a variety of electronic devices, independent of the user's location. Autonomous sailboats are vessels that use only the wind for propulsion and have the capacity to control their sails and rudders without human intervention. These particularities give them almost unlimited autonomy and a very valuable ability to fulfill long-term missions at sea, such as collecting oceanographic data, search and rescue, or surveillance. This dissertation presents a fuzzy logic controller for autonomous sailboats based on a proposed set of sensors, namely a GPS receiver, a weather meter and an electronic compass. Following a basic navigation approach, the proposed set of sensors was studied in order to obtain an effective group of variables for the controller's fuzzy sets, and rules for its rule base. In the end, four fuzzy logic controllers were designed, one for the sail (to maximize speed) and three for the rudder (to cover all navigation situations). The result is a sailboat control system capable of operating on a low-cost platform such as an Arduino prototyping board. Simulated results obtained from a data set of approximately 100 tests of each controller back up the theory presented for the controller's operation, since physical experimentation was not possible.
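A minimal sketch of one such rudder controller is given below: triangular fuzzy sets over heading error and a Sugeno-style weighted average of singleton consequents. The sets, rules and gains are invented; the thesis controllers also use wind and waypoint information.

```python
# Minimal sketch of a fuzzy rudder controller: heading error (degrees) in,
# rudder angle (degrees) out. Sets, rules and gains are invented.

def tri(x, a, b, c):
    """Membership of x in a triangular fuzzy set (a, b, c)."""
    if x < a or x > c:
        return 0.0
    return 1.0 if x == b else (x - a) / (b - a) if x < b else (c - x) / (c - b)

ERROR_SETS = {                      # fuzzy sets over heading error, degrees
    "neg":  (-90, -45, 0),
    "zero": (-20, 0, 20),
    "pos":  (0, 45, 90),
}
RUDDER_FOR = {"neg": -30.0, "zero": 0.0, "pos": 30.0}   # singleton consequents

def rudder_angle(error):
    """Sugeno-style inference: weighted average of rule consequents."""
    acts = {name: tri(error, *abc) for name, abc in ERROR_SETS.items()}
    total = sum(acts.values()) or 1.0
    return sum(acts[n] * RUDDER_FOR[n] for n in acts) / total

for e in (-60, -10, 0, 25, 80):
    print(f"error={e:+4d}  rudder={rudder_angle(e):+6.1f}")
```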
15

De, Kock Erika. "Decentralising the codification of rules in a decision support expert knowledge base." Pretoria : [s.n.], 2003. http://upetd.up.ac.za/thesis/available/etd-03042004-105746.

16

Le, Truong Giang. "Using Event-Based and Rule-Based Paradigms to Develop Context-Aware Reactive Applications." PhD thesis, Conservatoire national des arts et metiers - CNAM, 2013. http://tel.archives-ouvertes.fr/tel-00953368.

Abstract:
Context-aware pervasive computing has attracted significant research interest from both academia and industry worldwide. It covers a broad range of applications that support many manufacturing and daily-life activities. For instance, industrial robots detect changes in the working environment of the factory and adapt their operations to the requirements. Automotive control systems may observe other vehicles, detect obstacles, and monitor the fuel level or the air quality in order to warn the driver in case of emergency. Another example is power-aware embedded systems that need to work according to current power/energy availability, since power consumption is an important issue. These kinds of systems can also be considered smart applications. In practice, successful implementation and deployment of context-aware systems depend on the mechanism for recognizing and reacting to the variability of the environment. In other words, we need a well-defined and efficient adaptation approach so that the systems' behavior can be dynamically customized at runtime. Moreover, concurrency should be exploited to improve the performance and responsiveness of the systems. All these requirements, along with the need for safety, dependability, and reliability, pose a big challenge for developers. In this thesis, we propose a novel programming language called INI, which supports both the event-based and rule-based programming paradigms and is suitable for building concurrent, context-aware reactive applications. In our language, both events and rules can be defined explicitly, stand-alone or in combination. Events in INI run in parallel (synchronously or asynchronously) in order to handle multiple tasks concurrently, and they may trigger the actions defined in rules. Besides, events can interact with the execution environment to adjust their behavior if necessary and respond to unpredictable changes. We apply INI in both an academic and an industrial case study, namely an object-tracking program running on the humanoid robot Nao and an M2M gateway. This demonstrates the soundness of our approach as well as INI's capabilities for constructing context-aware systems. Additionally, since context-aware programs are widely applicable and more complex than regular ones, they place a higher demand on quality assurance. Therefore, we formalize several aspects of INI, including its type system and operational semantics. Furthermore, we develop a tool called INICheck, which can convert a significant subset of INI to Promela, the input modeling language of the model checker SPIN. Hence, SPIN can be applied to verify properties or constraints that INI programs must satisfy. Our tool gives programmers assurance about their code and its behavior.
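INI builds events and rules into the language itself; the sketch below only emulates that combination in Python with a tiny threaded dispatcher, to make the events-trigger-rules idea concrete. The event names and the robot scenario are invented.

```python
# Emulation of an event + rule combination in the spirit of INI: events
# arrive asynchronously and fire every rule whose condition matches.
# Event names and the scenario are invented for illustration.

import threading
import queue

events = queue.Queue()
rules = []   # list of (predicate over event, action)

def rule(predicate):
    """Decorator registering an action guarded by a condition."""
    def register(action):
        rules.append((predicate, action))
        return action
    return register

@rule(lambda e: e["type"] == "object_detected" and e["distance"] < 0.5)
def stop_robot(e):
    print(f"obstacle at {e['distance']} m: stopping")

@rule(lambda e: e["type"] == "battery" and e["level"] < 0.2)
def adapt_power(e):
    print(f"battery at {e['level']:.0%}: adapting behaviour")

def dispatcher():
    while True:
        e = events.get()
        if e is None:                 # sentinel: shut down
            break
        for predicate, action in rules:   # fire every matching rule
            if predicate(e):
                action(e)

worker = threading.Thread(target=dispatcher)  # events handled off the main thread
worker.start()
for e in ({"type": "object_detected", "distance": 0.3},
          {"type": "battery", "level": 0.15}, None):
    events.put(e)
worker.join()
```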
17

Natario, Romalho Maria Fernanda. "Application of an automatically designed fuzzy logic decision support system to connection admission control in ATM networks." Thesis, Queen Mary, University of London, 1996. http://qmro.qmul.ac.uk/xmlui/handle/123456789/3817.

18

Barros, Matheus Alcântara. "A base de cálculo das contribuições ao PIS/PASEP e da COFINS e os ônus fiscais: método semiótico." Pontifícia Universidade Católica de São Paulo, 2016. https://tede2.pucsp.br/handle/handle/18818.

Abstract:
This dissertation analyses the question of the inclusion or non-inclusion of tax burdens in the calculation basis of the Contributions to PIS/PASEP and of COFINS along the chain of normative positivization. It uses the semiotics developed by Charles Sanders Peirce to organize the juridical system, presenting it as a chain with signical properties. Thus, the elements of the positive law system, that is, the legal rules, are seen as Interpretants, whose types, within the triad of Immediate, Dynamical and Final, determine the different stages at which the matter related to the mentioned social contributions is discussed. Inferences are therefore made about the Immediate Interpretants, which consist of the legal rules on the taxing competency to institute the Contributions to PIS/PASEP and COFINS and their basic rules; about the Dynamical Interpretants, which consist of the legal rules that constitute the duty to pay these Contributions; and about the Final Interpretants, which consist of the legal rules produced, or to be produced, with the force of res judicata, on the topic. The dissertation aims, initially, to introduce Peirce's logic of signs, explaining its premises and some of its great variety of tools, which are used in the analysis of the law, seen in its dependency on language (on "signs", therefore) to manifest itself, and only then to face specifically the question of the inclusion or non-inclusion of tax burdens in the calculation basis of the Contributions to PIS/PASEP and of COFINS.
19

Jiao, Lianmeng. "Classification of uncertain data in the framework of belief functions : nearest-neighbor-based and rule-based approaches." Thesis, Compiègne, 2015. http://www.theses.fr/2015COMP2222/document.

Abstract:
In many classification problems, data are inherently uncertain. The available training data might be imprecise, incomplete, even unreliable. Besides, partial expert knowledge characterizing the classification problem may also be available. These different types of uncertainty bring great challenges to classifier design. The theory of belief functions provides a well-founded and elegant framework to represent and combine a large variety of uncertain information. In this thesis, we use this theory to address uncertain data classification problems based on two popular approaches, i.e., the k-nearest neighbor (kNN) rule and rule-based classification systems. For the kNN rule, one concern is that imprecise training data in class-overlapping regions may greatly affect its performance. An evidential editing version of the kNN rule was developed based on the theory of belief functions in order to model well the imprecise information for those samples in overlapping regions. Another consideration is that sometimes only an incomplete training data set is available, in which case the ideal behavior of the kNN rule degrades dramatically. Motivated by this problem, we designed an evidential fusion scheme for combining a group of pairwise kNN classifiers developed based on locally learned pairwise distance metrics. For rule-based classification systems, in order to improve their performance in complex applications, we extended the traditional fuzzy rule-based classification system in the framework of belief functions and developed a belief rule-based classification system to address uncertain information in complex classification problems. Further, considering that in some applications, apart from training data collected by sensors, partial expert knowledge can also be available, a hybrid belief rule-based classification system was developed to make use of these two types of information jointly for classification.
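The evidential kNN idea can be sketched as follows: each neighbor contributes a mass function whose weight decays with distance, and the masses are combined conjunctively. This follows the spirit of Denoeux's evidential kNN rule, on which the thesis builds; the training points and parameters below are invented.

```python
# Sketch of an evidential k-NN classifier: each neighbor contributes a
# mass function discounted by distance; masses are combined conjunctively.
# Training points, gamma, and alpha are invented for illustration.

import math

TRAIN = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"), ((3.0, 3.2), "B"), ((2.9, 3.0), "B")]
CLASSES = ("A", "B")

def neighbor_mass(dist, label, gamma=1.0, alpha=0.95):
    """Mass on the neighbor's class; the rest is ignorance (whole frame)."""
    support = alpha * math.exp(-gamma * dist ** 2)
    m = {c: 0.0 for c in CLASSES}
    m[label] = support
    m["frame"] = 1.0 - support
    return m

def combine(m1, m2):
    """Conjunctive combination over singletons plus the frame."""
    out = {c: m1[c] * (m2[c] + m2["frame"]) + m1["frame"] * m2[c] for c in CLASSES}
    out["frame"] = m1["frame"] * m2["frame"]
    total = sum(out.values())          # normalize out the conflict
    return {k: v / total for k, v in out.items()}

def classify(x, k=3):
    dists = sorted((math.dist(x, p), label) for p, label in TRAIN)
    masses = [neighbor_mass(d, label) for d, label in dists[:k]]
    m = masses[0]
    for other in masses[1:]:
        m = combine(m, other)
    return m

print(classify((2.0, 2.0)))
```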
20

Estrada, Joel Gaspar. "Prevenção de riscos na fase de projeto com base na metodologia BIM." Master's thesis, Universidade de Aveiro, 2015. http://hdl.handle.net/10773/17338.

Abstract:
Master's degree in Civil Engineering.
The characteristics of the construction sector usually lead to high occupational hazards. To counter this tendency, good safety practices and records create an incident-free and productive environment when integrated into the planning for safety at the early stage of a project. The design phase is the privileged moment to influence all construction results and consequently is also the ideal opportunity to influence construction safety. Thereby Prevention through Design, associated with Building Information Modeling (BIM), can lead to more developed and effective risk prevention. BIM application is growing in architecture, engineering and construction as a fundamental methodology to create digital 3D models of buildings with information embedded from the design phase throughout the construction and operation phases. These models are the digital representation of the physical and functional characteristics of a building, an intelligent repository of elements with relations and attributes, making it an effective vehicle for automatic decision-making in all phases of a project. This work aims to contribute to a new approach to safety management in conjunction with the BIM methodology through 3D and 4D (planning simulation) models. Supported by the literature review, this dissertation aims to highlight the importance of developing a framework based on BIM models, integrating elements to identify hazards and the consequent risks, implement prevention or control measures, and make precise risk prevention during the design phase. With this goal a formalisation has been made, identifying the legislation that can be extracted and translated into computer code. A framework was developed with the primary steps to create a rule-checker system aimed at the automatic checking of the technical and legal requirements that contribute to a high safety level in the design, construction and operation phases. The structural phase of a building case study was modelled with Autodesk Revit 2015, into which all the elements necessary to achieve an acceptable level of safety regarding the risk of falls from height were inserted, and the three-dimensional safety project to prevent those risks was developed. All this information was considered for use during the simulation of the construction planning through Autodesk Navisworks 2015 (the 4D model), in order to optimize the construction and safety scheduling. Three-dimensional objects for the fall-protection systems (guard rails, safety cables, opening protections) were created, making it possible to verify in the 3D model the exact placement of these temporary elements and their temporal sequence, to extract the quantities and to estimate their costs. Summarising, it can be concluded that with a BIM-based framework it is possible to embed safety options that are viable from the constructive point of view and reliable from the point of view of safety performance, from the design stage onwards, giving each detail the necessary characteristics and safety requirements. It is therefore considered that this dissertation contributes to a new approach to safety management in the construction industry.
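A rule-checker of the kind the framework formalizes reduces, at its core, to code that inspects model elements against legislated thresholds. The sketch below is a stand-in: the 3 m threshold, the element schema and the guard-rail rule are invented, not the actual Portuguese regulations or the Revit object model.

```python
# Sketch of an automated safety rule check over BIM-like model elements.
# The element schema and the 3 m fall-height threshold are invented
# stand-ins for the real regulations and the Revit object model.

slabs = [
    {"id": "S1", "edge_height_m": 6.0, "guard_rail": False},
    {"id": "S2", "edge_height_m": 2.0, "guard_rail": False},
    {"id": "S3", "edge_height_m": 9.0, "guard_rail": True},
]

def check_fall_protection(elements, min_height=3.0):
    """Flag every slab edge above the height threshold without a guard rail."""
    return [e["id"] for e in elements
            if e["edge_height_m"] >= min_height and not e["guard_rail"]]

print("missing guard rails:", check_fall_protection(slabs))  # -> ['S1']
```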
21

Kane, Mouhamadou bamba. "Extraction et sélection de motifs émergents minimaux : application à la chémoinformatique." Thesis, Normandie, 2017. http://www.theses.fr/2017NORMC223/document.

Abstract:
Pattern discovery is an important field of knowledge discovery in databases. This work deals with the extraction of minimal emerging patterns. We propose a new efficient method which allows extracting the minimal emerging patterns with or without a support constraint, unlike existing methods that typically extract the most supported minimal emerging patterns, at the risk of missing interesting but less supported patterns. Moreover, our method takes into account the absence of an attribute, which brings new interesting knowledge. Considering the rules associated with highly supported emerging patterns as prototype rules, we have experimentally shown that this set of rules has good confidence on the covered objects but unfortunately does not cover a significant part of the objects, which is a disadvantage for their use in classification. We propose a prototype-based selection method that improves the coverage of the set of prototype rules without a significant loss of confidence. We apply our prototype-based selection method to chemical data relating to the aquatic environment: Aquatox. In a classification context, it allows chemists to better explain the classification of molecules which, without this selection method, would be predicted by the use of a default rule.
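The quantities behind emerging-pattern mining are easy to state in code: the support of an itemset in each class and its growth rate between classes; a pattern is a minimal emerging pattern when no proper subset is itself emerging. The toy transactions below are invented, not the Aquatox data.

```python
# Sketch: support and growth rate of an itemset between two classes,
# the core quantities behind (minimal) emerging pattern mining.
# Transactions are invented toy data, not the Aquatox set.

toxic = [{"ring", "chlorine"}, {"ring", "amine"}, {"ring", "chlorine", "amine"}]
safe  = [{"amine"}, {"ring"}, {"hydroxyl", "amine"}]

def support(itemset, transactions):
    """Fraction of transactions containing the whole itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def growth_rate(itemset, target, other):
    s_t, s_o = support(itemset, target), support(itemset, other)
    return float("inf") if s_o == 0 and s_t > 0 else s_t / s_o if s_o else 0.0

pattern = {"ring", "chlorine"}
print(support(pattern, toxic), support(pattern, safe),
      growth_rate(pattern, toxic, safe))
```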
22

Wang, Olivier. "Adaptive Rules Model : Statistical Learning for Rule-Based Systems." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLX037/document.

Abstract:
Business Rules (BRs) are a commonly used tool in industry for the automation of repetitive decisions. The emerging problem of adapting existing sets of BRs to an ever-changing environment is the motivation for this thesis. Existing supervised machine learning techniques can be used when the adaptation is done knowing in detail which is the correct decision for each circumstance. However, there is currently no algorithm, theoretical or practical, which can solve this problem when the known information is statistical in nature, as is the case for a bank wishing to control the proportion of loan requests its automated decision service forwards to human experts. We study the specific learning problem where the aim is to adjust the BRs so that the decisions are close to a given average value. To do so, we consider sets of Business Rules as programs. After formalizing some definitions and notations in Chapter 2, the BR programming language defined this way is studied in Chapter 3, which proves that there exists no algorithm to learn Business Rules with a statistical goal in the general case. We then restrict the scope to two common cases where BRs are limited in some way: the Iteration Bounded case, in which no matter the input, the number of rules executed when taking the decision is less than a given bound; and the Linear Iteration Bounded case, in which rules are additionally all written in linear form. In those two cases, we then produce a learning algorithm based on Mathematical Programming which can solve this problem. We briefly extend this theory and algorithm to other statistical-goal learning problems in Chapter 5, before presenting the experimental results of this thesis in Chapter 6. The latter includes a proof of concept automating the part of the learning algorithm that does not consist in solving a Mathematical Programming problem, as well as some experimental evidence of the computational complexity of the algorithm.
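The statistical-goal idea can be illustrated in the simplest possible setting: one rule with one threshold, tuned by bisection so that the average decision hits a target forwarding rate. The thesis solves the much harder general problem with Mathematical Programming over whole rule sets; the loan scores below are invented.

```python
# Sketch of the statistical-goal idea: tune a single rule threshold so
# that the average decision matches a target rate, here by bisection.
# The scores are invented; the thesis handles whole rule sets instead.

scores = [0.12, 0.30, 0.45, 0.52, 0.58, 0.63, 0.71, 0.80, 0.88, 0.95]

def forward_rate(threshold):
    """Fraction forwarded by the rule 'score >= threshold -> expert review'."""
    return sum(s >= threshold for s in scores) / len(scores)

def tune(target=0.30, lo=0.0, hi=1.0, iters=40):
    # forward_rate is non-increasing in the threshold, so bisection applies.
    for _ in range(iters):
        mid = (lo + hi) / 2
        if forward_rate(mid) > target:
            lo = mid
        else:
            hi = mid
    return hi

t = tune()
print(f"threshold={t:.3f} forwards {forward_rate(t):.0%} of requests")
```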
23

Ben, Salah Fatma. "Modélisation et simulation à base de règles pour la simulation physique." Thesis, Poitiers, 2018. http://www.theses.fr/2018POIT2293.

Abstract:
The physical simulation of deformable objects is at the core of several computer graphics applications. In this context, we are interested in the creation of a framework that combines a topological model, namely generalized maps, with one or several mechanical models for the physical animation of deformable meshed objects that can undergo topological modifications such as tearing or fractures. To obtain a general framework, we chose to rely on the graph manipulation and transformation rules provided by the JERBOA software. This environment provided us with fast prototyping facilities for different mechanical models. It allowed us to precisely define how to store mechanical properties in the topological description of a mesh and to simulate its deformation in a topologically based manner for interaction computation and force distribution. All mechanical properties are stored in the topological model without any external structure. This framework is general. It allows for the simulation of 2D or 3D objects, with different types of meshes, including non-homogeneous ones. It also allows for the simulation of several continuous or discrete mechanical models with various properties of homogeneity and isotropy. Furthermore, different methods to simulate topological modifications have been implemented in the framework. They include both the selection of a criterion to trigger topological modifications and a transformation type. Our approach also reduces the number of updates of the mechanical model in case of tearing or fracture.
APA, Harvard, Vancouver, ISO, and other styles
25

Honorato-Zimmer, Ricardo. "On a thermodynamic approach to biomolecular interaction networks." Thesis, University of Edinburgh, 2017. http://hdl.handle.net/1842/28904.

Full text
Abstract:
We explore the direct and inverse problem of thermodynamics in the context of rule-based modelling. The direct problem can be concisely stated as obtaining a set of rewriting rules and their rates from the description of the energy landscape, such that their asymptotic behaviour when t → ∞ coincides. To tackle this problem, we describe an energy function as a finite set of connected patterns P and an energy cost function e which associates real values to each of these energy patterns. We use a finite set of reversible graph rewriting rules G to define the qualitative dynamics, by showing which transformations are possible. Given G and P, we construct a finite set of rules Gp which i) has the same qualitative transition system as G and ii) when equipped with rates according to e, defines a continuous-time Markov chain that has detailed balance with respect to the invariant probability distribution determined by the energy function. The construction relies on a technique for rule refinement described in earlier work and allows us to represent thermodynamically consistent models of biochemical interaction networks in a concise manner. The inverse problem, on the other hand, is to i) check whether a rule-based model has an energy function that describes its asymptotic behaviour and, if so, ii) obtain the energy function from the graph rewriting rules and their rates. Although this problem is known to be undecidable in the general case, we find two suitable subsets of Kappa, our rule-based modelling framework of choice, where this question can be answered positively and the form of their energy functions described analytically.
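A minimal sketch of the detailed-balance idea, with invented energy values: forward and backward rates of a reversible rule are chosen so that their ratio equals exp(−ΔE), which gives π(s)k(s→t) = π(t)k(t→s) for π ∝ exp(−E).

```python
# Sketch: assign rates to a reversible rewriting rule so the induced Markov
# chain has detailed balance with respect to exp(-E).  The construction only
# fixes the ratio k_fwd / k_bwd; the symmetric split below is one choice.
import math

def rates(E_source: float, E_target: float, base: float = 1.0):
    """Return (forward, backward) rates with k_f / k_b = exp(-(E_t - E_s))."""
    dE = E_target - E_source
    return base * math.exp(-dE / 2), base * math.exp(dE / 2)

E = {"unbound": 0.0, "bound": -2.0}      # an energy pattern worth -2 when present
k_f, k_b = rates(E["unbound"], E["bound"])

# Detailed balance check: pi(s) * k(s->t) == pi(t) * k(t->s), with pi ~ exp(-E)
lhs = math.exp(-E["unbound"]) * k_f
rhs = math.exp(-E["bound"]) * k_b
print(k_f, k_b, math.isclose(lhs, rhs))  # True
```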
APA, Harvard, Vancouver, ISO, and other styles
26

Cao, Qiushi. "Semantic technologies for the modeling of predictive maintenance for a SME network in the framework of industry 4.0 Smart condition monitoring for industry 4.0 manufacturing processes: an ontology-based approach Using rule quality measures for rule base refinement in knowledge-based predictive maintenance systems Combining chronicle mining and semantics for predictive maintenance in manufacturing processes." Thesis, Normandie, 2020. http://www.theses.fr/2020NORMIR04.

Full text
Abstract:
In the manufacturing domain, the detection of anomalies such as mechanical faults and failures enables the launching of predictive maintenance tasks, which aim to predict future faults, errors, and failures and to enable maintenance actions. With the trend of Industry 4.0, predictive maintenance tasks are benefiting from advanced technologies such as Cyber-Physical Systems (CPS), the Internet of Things (IoT), and Cloud Computing. These advanced technologies enable the collection and processing of sensor data that contain measurements of physical signals of machinery, such as temperature, voltage, and vibration. However, due to the heterogeneous nature of industrial data, the knowledge extracted from such data is sometimes presented in a complex structure. Therefore, formal knowledge representation methods are required to facilitate the understanding and exploitation of the knowledge. Furthermore, as CPSs are becoming more and more knowledge-intensive, uniform knowledge representation of physical resources and reasoning capabilities for analytic tasks are needed to automate the decision-making processes in CPSs. These issues hinder machine operators in performing appropriate maintenance actions. To address the aforementioned challenges, in this thesis we propose a novel semantic approach to facilitate predictive maintenance tasks in manufacturing processes. In particular, we propose four main contributions: i) a three-layered ontological framework that is the core component of a knowledge-based predictive maintenance system; ii) a novel hybrid semantic approach to automate machinery failure prediction tasks, based on the combined use of chronicles (a more descriptive type of sequential patterns) and semantic technologies; iii) a new approach that uses clustering methods with Semantic Web Rule Language (SWRL) rules to assess failures according to their criticality levels; iv) a novel rule base refinement approach that uses rule quality measures as references to refine a rule base within a knowledge-based predictive maintenance system. These approaches have been validated on both real-world and synthetic data sets.
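As an illustration of contribution (iv), the sketch below (hypothetical rules, data, and thresholds) scores each rule with support and confidence over labelled failure records and prunes rules that fail either threshold.

```python
# Sketch of rule base refinement driven by rule quality measures.
records = [  # (observed symptoms, failure occurred?)
    ({"overheat", "vibration"}, True),
    ({"overheat"}, True),
    ({"vibration"}, False),
    ({"overheat", "noise"}, True),
    ({"noise"}, False),
]

rules = {"R1": {"overheat"}, "R2": {"vibration"}}  # antecedent -> predicts failure

def quality(antecedent):
    covered = [failed for symptoms, failed in records if antecedent <= symptoms]
    support = len(covered) / len(records)
    confidence = sum(covered) / len(covered) if covered else 0.0
    return support, confidence

def keep(antecedent, min_sup=0.2, min_conf=0.8):
    sup, conf = quality(antecedent)
    return sup >= min_sup and conf >= min_conf

for name, ant in rules.items():
    print(name, quality(ant), "kept" if keep(ant) else "pruned")
# R1 (0.6, 1.0) kept ; R2 (0.4, 0.5) pruned
```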
APA, Harvard, Vancouver, ISO, and other styles
27

Dugast, Loic. "Introducing corpus-based rules and algorithms in a rule-based machine translation system." Thesis, University of Edinburgh, 2013. http://hdl.handle.net/1842/8249.

Full text
Abstract:
Machine translation offers the challenge of automatically translating a text from one natural language into another. Statistical methods, originating from the field of information theory, have proven to be a major breakthrough in the field of machine translation. Prior to this paradigm, many systems had been developed following a rule-based approach, that is, a system based on a linguistic description of the languages involved and of how translation occurs in the mind of the (human) translator. Statistical models, on the contrary, use empirical means and may work with very few linguistic hypotheses on language and translation as performed by humans. This has implications for rule-based translation systems, in terms of software architecture and the nature of the rules, which are manually input and lack any statistical features. In view of such diverging paradigms, we can imagine trying to combine both in a hybrid system. In the present work, we start by examining the state of the art of both rule-based and statistical systems. We restrict the rule-based approach to transfer-based systems. We compare the rule-based and statistical paradigms in terms of global translation quality and give a qualitative analysis of their respective specific errors. We also introduce initial black-box hybrid models that confirm there is an expected gain in combining the two approaches. Motivated by the qualitative analysis, we focus our study and experiments on lexical phrasal rules. We propose a setup allowing such resources to be extracted from corpora. Going one step further in the integration of the rule-based and statistical approaches, we then examine how to combine the extracted rules with decoding modules that allow for a corpus-based handling of ambiguity. This leads to the final delivery of this work: a rule-based system for which we can learn non-deterministic rules from corpora, and whose decoder can be optimised on a tuning set in the same domain.
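A minimal sketch of corpus-based extraction of lexical phrasal rules, on a toy aligned corpus: co-occurrence counts are collected per source phrase, and this deterministic variant keeps only the most frequent translation (a decoder, as described above, would instead keep all scored candidates to handle ambiguity).

```python
# Sketch: harvest candidate lexical phrasal rules from a phrase-aligned
# corpus by counting co-occurrences.  Corpus and phrases are toy examples.
from collections import Counter, defaultdict

corpus = [
    ("carte mère", "motherboard"),
    ("carte mère", "motherboard"),
    ("carte mère", "mother card"),   # noisy alignment
    ("disque dur", "hard disk"),
]

counts = defaultdict(Counter)
for src, tgt in corpus:
    counts[src][tgt] += 1

# One deterministic rule per source phrase; keeping all candidates with
# scores instead would yield the non-deterministic rules the thesis targets.
rules = {src: tgts.most_common(1)[0][0] for src, tgts in counts.items()}
print(rules)  # {'carte mère': 'motherboard', 'disque dur': 'hard disk'}
```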
APA, Harvard, Vancouver, ISO, and other styles
28

Benmimoune, Lamine. "Une approche pour la conception de systèmes d'aide à la décision médicale basés sur un raisonnement mixte à base de connaissance." Thesis, Belfort-Montbéliard, 2016. http://www.theses.fr/2016BELF0307/document.

Full text
Abstract:
To support health professionals in their clinical processes, several monitoring and medical care systems have been built and deployed in hospital settings. These systems are mainly used to collect medical data on patients, analyze them, and present the outcomes in different ways. They provide support and assistance to health professionals in their decision making regarding the evolution of the health status of the patients they follow. The use of such systems always requires an adaptation to both the medical field and the mode of intervention. It is necessary, in a hospital setting, to adapt and evolve these systems in a simple manner, limiting any corrective or evolutionary maintenance. Moreover, these systems should be able to take into account, dynamically, the domain knowledge of medical experts. To meet these requirements, we propose an approach for the construction of a medical decision support system (MDSS). This MDSS can adapt to the medical field and to the appropriate mode of intervention to assist health professionals in their clinical processes. This approach notably allows the organization of medical data collection to take the patient's context into account, represents the domain knowledge with ontologies, and exploits medical guidelines together with clinical experience. In continuity with our research team's previous work, we chose to extend the E-care platform with our approach; this platform is dedicated to the monitoring and early detection of any abnormality in the health status of patients with chronic diseases. We were able to adapt the E-care platform easily for the various experiments that were conducted, including EPHAD facilities of the Mutualité Française in Anjou-Mayenne, Hautepierre hospital, and Lausanne hospital (CHUV). The outcomes of these experiments have shown the effectiveness of the proposed approach: adapting the platform to the domain and mode of intervention of each experiment required only simple configuration. Furthermore, the proposed approach attracted the interest of the medical staff with regard to the organization of medical data collection, which takes the patient's context into account, and to the exploitation of medical knowledge, which assists health professionals in making better decisions.
APA, Harvard, Vancouver, ISO, and other styles
29

Kearton, Kristian. "Correlating temporal rules to time-series data with rule-based intuition." Thesis, Monterey, California : Naval Postgraduate School, 2010. http://edocs.nps.edu/npspubs/scholarly/theses/2010/Mar/10Mar%5FKearton.pdf.

Full text
Abstract:
Thesis (M.S. in Computer Science)--Naval Postgraduate School, March 2010.
Thesis Advisor(s): Garfinkel, Simson L. Second Reader: Schein, Andrew I. "March 2010." Description based on title screen as viewed on April 28, 2010. Author(s) subject terms: Temporal Analysis, Time-Series Data, Rule Based Evaluation, Supervised Learning. Includes bibliographical references (p. 45-48). Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
30

Wetherall, Jodie. "Investigation into an improved modular rule-based testing framework for business rules." Thesis, University of Greenwich, 2010. http://gala.gre.ac.uk/6602/.

Full text
Abstract:
Rule testing in scheduling applications is a complex and potentially costly business problem. This thesis reports the outcome of research undertaken to develop a system to describe and test scheduling rules against a set of scheduling data. The overall intention of the research was to reduce commercial scheduling costs by minimizing human domain expert interaction within the scheduling process. This thesis reports the outcome of research initiated following a consultancy project to develop a system to test driver schedules against the legal driving rules in force in the UK and the EU. One of the greatest challenges faced was interpreting the driving rules and translating them into the chosen programming language. This part of the project took considerable effort to complete the programming, testing and debugging processes. A potential problem then arises if the Department of Transport or the European Union alter or change the driving rules. Considerable software development is likely to be required to support the new rule set. The approach considered takes into account the need for a modular software component that can be used not just in transport scheduling systems which look at legal driving rules, but may also be integrated into other systems that have the need to test temporal rules. The integration of the rule testing component into existing systems is key to making the proposed solution reusable. The research outcome proposes an alternative approach to rule definition, similar to that of RuleML, but with the addition of rule metadata to provide the ability to describe rules of a temporal nature. The rules can be serialised and deserialised between XML (eXtensible Markup Language) and objects within an object oriented environment (in this case .NET with C#), to provide a means of transmission of the rules over a communication infrastructure. The rule objects can then be compiled into an executable software library, allowing the rules to be tested more rapidly than traditional interpreted rules. Additional support functionality is also defined to provide a means of effectively integrating the rule testing engine into existing applications. Following the construction of a rule testing engine that has been designed to meet the given requirements, a series of tests were undertaken to determine the effectiveness of the proposed approach. This led to the implementation of improvements in the caching of constructed work plans to further improve performance. Tests were also carried out into the application of the proposed solution within alternative scheduling domains and to analyse the difference in computational performance and memory usage across system architectures, software frameworks and operating systems, with the support of Mono. Future work that is expected to follow on from this thesis will likely reside in investigations into the development of graphical design tools for the creation of the rules, improvements in the work plan construction algorithm, parallelisation of elements of the process to take better advantage of multi-core processors and off-loading of the rule testing process onto dedicated or generic computational processors.
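The round-trip idea (a temporal rule with metadata, serialised to XML for transmission, then deserialised and tested against schedule data) can be sketched as follows; the thesis's component is .NET/C# with compiled rules, so this Python stdlib toy with an invented driving-minutes rule only mirrors the shape of the design.

```python
# Sketch: temporal rule with metadata, XML round-trip, and a rolling-window
# test over schedule data.  Rule name, limits, and data are hypothetical.
import xml.etree.ElementTree as ET
from dataclasses import dataclass

@dataclass
class TemporalRule:
    name: str
    max_minutes: int      # metadata: limit over a rolling window
    window_hours: int

    def to_xml(self) -> str:
        e = ET.Element("rule", name=self.name,
                       maxMinutes=str(self.max_minutes),
                       windowHours=str(self.window_hours))
        return ET.tostring(e, encoding="unicode")

    @staticmethod
    def from_xml(text: str) -> "TemporalRule":
        e = ET.fromstring(text)
        return TemporalRule(e.get("name"), int(e.get("maxMinutes")),
                            int(e.get("windowHours")))

    def test(self, driving_minutes: list[int]) -> bool:
        """Each entry is minutes driven in one hour; True if compliant."""
        w = self.window_hours
        return all(sum(driving_minutes[i:i + w]) <= self.max_minutes
                   for i in range(len(driving_minutes) - w + 1))

xml_text = TemporalRule("daily-driving", 270, 6).to_xml()
rule = TemporalRule.from_xml(xml_text)          # transmit, then deserialise
print(rule.test([60, 60, 60, 30, 0, 45, 60]))   # True: no 6h window over 270
```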
APA, Harvard, Vancouver, ISO, and other styles
31

Asbayou, Omar. "L'identification des entités nommées en arabe en vue de leur extraction et classification automatiques : la construction d’un système à base de règles syntactico-sémantique." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSE2136.

Full text
Abstract:
This thesis explains and presents our approach to a rule-based system for Arabic named entity recognition and classification. This work involves two disciplines: linguistics and computer science. Computer tools and linguistic rules are merged within a single discipline, Natural Language Processing, which operates at different levels (morphosyntactic, syntactic, semantic, syntactico-semantic). So, in our particular case, we have put the necessary linguistic information and rules at the service of the software, which should be able to apply and implement them in order to recognise and classify, by syntactic and/or semantic annotations, the different named entity classes. This work is incorporated within the general domain of natural language processing, but it particularly falls within the scope of the continuity of the work accomplished in morphosyntactic analysis and in the realisation of the lexical databases SAMIA and then DIINAR, together with the scientific research that followed from them. This task aims at lexical enrichment with simple and complex named entities and at establishing the transition from morphological analysis to syntactic and syntactico-semantic analysis; the ultimate objective is the analysis of textual content. To understand what this is about, it was important to start with the definition of the named entity. To carry out this task, we distinguished between two main named entity types: pure proper names and descriptive named entities. We have also established a referential classification on the basis of different classes and sub-classes which constitute the reference for our semantic annotations. Nevertheless, we were confronted with two major difficulties: lexical ambiguity and the frontiers of complex named entities. Our system adopts a syntactico-semantic rule-based approach. After Level 0 of morphosyntactic analysis, the system is made up of five levels of syntactic and syntactico-semantic patterns based on the necessary linguistic information (i.e. morphosyntactic, syntactic, semantic and syntactico-semantic information). This work, after evaluation on two corpora, has obtained very good results in terms of precision, recall and F-measure. The output of our system makes an interesting contribution to different applications of natural language processing, especially the two tasks of information retrieval and information extraction, in which we have concretely exploited it. In addition to this unique experience, we envisage in future work extending our system to the extraction and classification of sentences in which the classified entities, mainly named entities and verbs, play respectively the roles of arguments and predicates. A second objective consists in the enrichment of different types of lexical resources, such as ontologies.
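For readers unfamiliar with pattern-based NER, a drastically simplified sketch follows; the trigger words, classes, and English-like tokens are invented stand-ins, since the thesis's five levels of syntactico-semantic patterns operate on Arabic after morphosyntactic analysis.

```python
# Toy syntactico-semantic rule: a trigger word projects a semantic class
# onto the following capitalised token.  Patterns and labels are invented.
import re

RULES = [
    (re.compile(r"\b(?:President|Dr\.)\s+([A-Z][a-z]+)"), "PERSON"),
    (re.compile(r"\b(?:University|Bank)\s+of\s+([A-Z][a-z]+)"), "ORG"),
]

def annotate(text: str):
    entities = []
    for pattern, label in RULES:
        for m in pattern.finditer(text):
            entities.append((m.group(0), label))
    return entities

print(annotate("President Rahal met the University of Lyon delegation."))
# [('President Rahal', 'PERSON'), ('University of Lyon', 'ORG')]
```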
APA, Harvard, Vancouver, ISO, and other styles
32

Grazziottin, Ribeiro Helena. "Un service de règles actives pour fédérations de bases de données." Université Joseph Fourier (Grenoble), 2000. http://www.theses.fr/2000GRE10084.

Full text
Abstract:
In active DBMSs, the notion of automatic reaction to events is offered through active rules of the Event-Condition-Action form. These rules are managed by specific, so-called active, mechanisms integrated into the DBMS. We are interested in introducing these mechanisms into database federations. Federations are characterized by the distribution and autonomy of their components, and active mechanisms must therefore adapt to such characteristics. Our approach proposes to implement these mechanisms as a rule service and an event service that cooperate. In this thesis we focus more precisely on the definition and structuring of a rule service. We propose an adaptable service named ADRUS (for ADaptable RUle Service) which allows the construction and control of rule managers specialized according to the needs of the applications of the database federation. The models implemented by these managers are specified from the three metamodels offered by the service: the rule definition and manipulation metamodel, the rule execution metamodel, and the metamodel of cooperation between managers. Our work concentrates on the definition of the structure and characteristics of the metamodels. We model the cooperation between rule, event, and transaction managers, as this cooperation is fundamental for rule execution in a federation. We present an experience of using our service in the implementation of ODAS systems. These systems are based on open and distributed event and rule services (Open and Distributed Active Services) used at the level of a database federation in the context of an electronic commerce application.
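A minimal Event-Condition-Action sketch in the spirit of the rule service described above, with an invented order-auditing example; in ADRUS the rule manager would be generated from the metamodels and would cooperate with separate event and transaction managers.

```python
# Toy ECA rule manager: an event service notifies the rule manager, which
# evaluates conditions and fires actions.  Names and rule are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ECARule:
    event: str
    condition: Callable[[dict], bool]
    action: Callable[[dict], None]

class RuleManager:
    def __init__(self):
        self.rules: list[ECARule] = []

    def register(self, rule: ECARule):
        self.rules.append(rule)

    def notify(self, event: str, ctx: dict):
        """Called by a cooperating event service when an event is detected."""
        for rule in self.rules:
            if rule.event == event and rule.condition(ctx):
                rule.action(ctx)

manager = RuleManager()
manager.register(ECARule(
    event="order_inserted",
    condition=lambda ctx: ctx["amount"] > 1000,
    action=lambda ctx: print(f"audit order {ctx['id']}"),
))
manager.notify("order_inserted", {"id": 42, "amount": 2500})  # audit order 42
```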
APA, Harvard, Vancouver, ISO, and other styles
33

Gillespie, John. "The Base Erosion and Profit-Shifting Project, Action 7: A Critical Analysis of the Preparatory/Auxiliary Extension and the New Anti-Fragmentation Rule in the 2017 OECD Model Tax Convention." Thesis, Uppsala universitet, Juridiska institutionen, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-364361.

Full text
Abstract:
The PE is a concept under scrutiny. Action 7 of the BEPS Action Plan has laid out a path to defend against artificial avoidance of PE status in light of BEPS concerns that can be associated with modern business practices. Out of Action 7 has come an update to Art. 5(4) MTC, with the extension of the 'preparatory or auxiliary' provision to all of the specific activity exemptions, as well as new provisions at Art. 5(4.1) MTC and Art. 5(8) MTC, together comprising the new 'anti-fragmentation rule'. This 2017 MTC update has expanded existing concepts and introduced new concepts that need to be understood and analysed in order to assess the success of the Action 7 work. This leads to the central research question of this thesis: to critically analyse the extension of the 'preparatory or auxiliary' provision and the new anti-fragmentation rule in light of the appropriateness of the reforms as regards established legal principles and norms, to review the success of the reforms against the objectives and goals of the OECD in the Action 7 work, and to assess whether those goals and objectives ought to have been adjusted, or could be adjusted in the future, in order to bring about a better solution to the artificial avoidance of PE status. This is done by first exploring the background to the PE concept in a wider sense, before offering specific critical analyses of elements contained in the reforms. Amazon, the e-commerce giant, is followed as a case example in order to give context to the impact of the reforms in practice. This thesis concludes that the need to transform the PE concept in light of the BEPS concerns prevails over the concerns that can be associated with the reforms of the 2017 MTC update.
APA, Harvard, Vancouver, ISO, and other styles
34

Bouzeghoub, Mokrane. "Secsi : un système expert en conception de systèmes d'informations, modélisation conceptuelle de schémas de bases de données." Paris 6, 1986. http://www.theses.fr/1986PA066046.

Full text
Abstract:
The main objectives of the system are, on the one hand, the constitution of a knowledge base combining both theoretical results on models and practical experience in database design, and on the other hand, the realization of an open toolbox capable of explaining and justifying its choices and results as well as integrating new concepts and new design rules. Besides the general architecture and functionality of the system, this thesis describes the knowledge representation model based on semantic networks, the inference rules, and the design methodology adopted.
APA, Harvard, Vancouver, ISO, and other styles
35

Marcelino, Sidney Soares. "Geração de processos WS-BPEL com base em um algoritmo de reescrita de regras." Universidade Federal do Rio Grande do Norte, 2013. http://repositorio.ufrn.br:8080/jspui/handle/123456789/18101.

Full text
Abstract:
Web services are computational solutions designed according to the principles of Service Oriented Computing. Web services can be built upon pre-existing services available on the Internet by using composition languages. We propose a method to generate WS-BPEL processes from abstract specifications provided with high-level control-flow information. The proposed method allows the composition designer to concentrate on high-level specifications, in order to increase productivity and generate specifications that are independent of specific web services. We consider service orchestrations, that is, compositions where a central process coordinates all the operations of the application. The process of generating compositions is based on a rule rewriting algorithm, which has been extended to support basic control-flow information. We created a prototype of the extended refinement method and performed experiments over simple case studies.
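A toy version of the rewriting step, assuming a simplified tuple notation for the abstract specification: each rewrite rule maps a control-flow construct to a WS-BPEL-like XML element (the element names follow WS-BPEL; the rules and input notation are inventions).

```python
# Sketch: rewrite an abstract, control-flow-annotated specification into a
# WS-BPEL-like skeleton.  The spec and operation names are hypothetical.
import xml.etree.ElementTree as ET

spec = ("sequence", [("invoke", "checkStock"),
                     ("flow", [("invoke", "billClient"),
                               ("invoke", "scheduleShipping")])])

def rewrite(node):
    kind, body = node
    if kind == "invoke":                  # base case: one service call
        return ET.Element("invoke", operation=body)
    elem = ET.Element(kind)               # sequence / flow containers
    for child in body:
        elem.append(rewrite(child))
    return elem

process = ET.Element("process", name="ordering")
process.append(rewrite(spec))
ET.indent(process)
print(ET.tostring(process, encoding="unicode"))
```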
APA, Harvard, Vancouver, ISO, and other styles
36

Bostan, Burcin. "A Fuzzy Petri Net Model For Intelligent Databases." Phd thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/3/12605962/index.pdf.

Full text
Abstract:
Knowledge intensive applications require an intelligent environment which can perform deductions in response to user queries or events that occur inside or outside the applications. For that, we propose a Fuzzy Petri Net (FPN) model to represent the knowledge and the behavior in an intelligent object-oriented database environment which integrates fuzzy, active and deductive rules with database objects. By gaining intelligent behaviour, the system maintains objects to perceive dynamic occurrences and user queries. Thus, objects can produce new knowledge or keep themselves in a consistent, stable, and up-to-date state. The behavior of a system can be unpredictable due to the rules triggering or untriggering each other (non-termination). Intermediate and final database states may also differ according to the order of rule executions (non-confluence). In order to foresee and solve problematic behavior patterns, we employ static rule analysis on the FPN structure, which provides easy checking of the termination property without requiring any extra construct. In addition, with our proposed inference algorithm, we guarantee confluent rule executions. The techniques and solutions provided in this study can be utilized in various complex systems, such as weather forecasting applications, environmental information systems, defense applications, video database applications, etc. We implement a prototype of the model for weather forecasting in the Central Anatolia Region.
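The static termination analysis can be pictured as a cycle check on a triggering graph, where an edge r1 → r2 records that firing r1 may trigger r2; the rules below are invented, and the real model additionally exploits the FPN structure.

```python
# Sketch: a cycle in the triggering graph signals potential non-termination.
triggers = {            # rule -> rules it may trigger
    "updateForecast": ["alertStorm"],
    "alertStorm": ["notifyUsers"],
    "notifyUsers": [],  # change to ["updateForecast"] to create a cycle
}

def has_cycle(graph):
    WHITE, GREY, BLACK = 0, 1, 2
    color = {n: WHITE for n in graph}

    def visit(n):
        color[n] = GREY
        for m in graph[n]:
            if color[m] == GREY or (color[m] == WHITE and visit(m)):
                return True
        color[n] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in graph)

print("may not terminate" if has_cycle(triggers) else "terminates")
```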
APA, Harvard, Vancouver, ISO, and other styles
37

Seydoux, Nicolas. "Towards interoperable IOT systems with a constraint-aware semantic web of things." Thesis, Toulouse, INSA, 2018. http://www.theses.fr/2018ISAT0035.

Full text
Abstract:
This thesis is situated in the Semantic Web of Things (SWoT) domain, at the interface between the Internet of Things (IoT) and the Semantic Web (SW). The integration of SW approaches into the IoT aims at tackling the important heterogeneity of resources, technologies and applications in the IoT, which creates interoperability issues impeding the deployment of IoT systems. A first scientific challenge is raised by the resource consumption of SW technologies, ill-suited to the limited computation and communication capabilities of IoT devices. Moreover, IoT networks are deployed at a large scale, while SW technologies have scalability issues. This thesis addresses this double challenge with two contributions. The first one is the identification of quality criteria for IoT ontologies, leading to the proposition of IoT-O, a modular IoT ontology. IoT-O is deployed to enrich data from a smart building, and drives semIoTics, our autonomic computing application. The second contribution is EDR (Emergent Distributed Reasoning), a generic approach to dynamically distributed rule-based reasoning. Rules are propagated peer-to-peer, guided by descriptions exchanged among nodes. EDR is evaluated in two use cases, using both a server and some constrained nodes to simulate the deployment.
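A minimal sketch of the propagation idea behind EDR, with invented node names and predicates: a rule is forwarded peer-to-peer, and a node contributes when the capabilities it advertises match part of the rule body.

```python
# Sketch: propagate a rule through a small network; nodes whose advertised
# predicates intersect the rule body take part in evaluating it.
from collections import deque

rule_body = {"Temperature", "Presence"}   # predicates the rule needs

nodes = {
    "sensor-A": {"provides": {"Temperature"}, "neighbours": ["gateway"]},
    "sensor-B": {"provides": {"Presence"}, "neighbours": ["gateway"]},
    "gateway":  {"provides": set(), "neighbours": ["sensor-A", "sensor-B"]},
}

def propagate(start):
    placed, seen, queue = [], {start}, deque([start])
    while queue:
        node = queue.popleft()
        if nodes[node]["provides"] & rule_body:   # node can contribute
            placed.append(node)
        for n in nodes[node]["neighbours"]:       # descriptions exchanged p2p
            if n not in seen:
                seen.add(n)
                queue.append(n)
    return placed

print(propagate("gateway"))  # ['sensor-A', 'sensor-B']
```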
APA, Harvard, Vancouver, ISO, and other styles
38

Fogelqvist, Petter. "Verification of completeness and consistency in knowledge-based systems : A design theory." Thesis, Uppsala universitet, Informationssystem, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-168321.

Full text
Abstract:
Verification of knowledge bases is a critical step to ensure the quality of a knowledge-based system. The success of these systems depends heavily on the quality of the knowledge. Manual verification is however cumbersome and error prone, especially for large knowledge bases. This thesis provides a design theory, based upon the framework suggested by Gregor and Jones (2007). The theory proposes a general design for automated verification tools which are able to verify heuristic knowledge in rule-based systems utilizing certainty factors. Included is a completeness and consistency verification technique customized to this class of knowledge-based systems. The design theory is instantiated in a real-world verification tool development project at Uppsala University. Considerable attention is given to the design and implementation of this artifact, uncovering issues and considerations involved in the development process. For the knowledge management practitioner, this thesis offers guidance and recommendations for automated verification tool development projects. For the IS research community, the thesis contributes with extensions of existing design theory, and reveals some of the complexity involved with verification of a specific rule-based system utilizing certainty factors.
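Two checks such a tool can automate are sketched below with illustrative rules: detecting contradictory rule pairs, and combining certainty factors of confirming rules in the MYCIN style, CF = cf1 + cf2(1 − cf1).

```python
# Sketch of consistency checking and CF combination; rules are invented.
rules = [
    ({"fever", "rash"}, ("measles", 0.6)),
    ({"fever", "rash"}, ("measles", 0.3)),   # confirming evidence
    ({"fever"}, ("measles", 0.2)),
    ({"fever"}, ("not-measles", 0.4)),       # contradicts the rule above
]

def negates(c1, c2):
    return c1 == "not-" + c2 or c2 == "not-" + c1

def contradictions(rule_list):
    found = []
    for i, (ant1, (c1, _)) in enumerate(rule_list):
        for ant2, (c2, _) in rule_list[i + 1:]:
            if ant1 == ant2 and negates(c1, c2):
                found.append((sorted(ant1), c1, c2))
    return found

def combine(cf1, cf2):
    """MYCIN combination of two positive certainty factors."""
    return cf1 + cf2 * (1 - cf1)

print("contradictions:", contradictions(rules))
# [(['fever'], 'measles', 'not-measles')]
print("combined CF for measles:", combine(0.6, 0.3))  # 0.72
```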
APA, Harvard, Vancouver, ISO, and other styles
39

Škrabal, Radek. "Vizualizace asociačních pravidel ve webovém prostředí." Master's thesis, Vysoká škola ekonomická v Praze, 2011. http://www.nusl.cz/ntk/nusl-165135.

Full text
Abstract:
The aim of this thesis is to implement a web interface for the LISp-Miner academic system, which provides association rule mining capability. Knowledge discovery in databases applications are currently being transformed from desktop applications to web applications, and this raises both new opportunities and new issues. This thesis describes a new interactive approach to association rule mining in which the user is an essential part of the algorithm and can alter the task setting. Users can also collaborate by creating a domain knowledge repository, which helps find new and interesting information in the data.
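For context, LISp-Miner's 4ft quantifiers are computed from a four-fold contingency table; the sketch below shows the familiar support and confidence measures over made-up counts.

```python
# Sketch: support and confidence of an association rule from a four-fold
# table (a, b, c, d).  The counts below are fabricated for illustration.
a, b, c, d = 40, 10, 15, 135   # a: antecedent & consequent, b: antecedent only,
                               # c: consequent only, d: neither

support = a / (a + b + c + d)
confidence = a / (a + b)
print(f"support={support:.2f}, confidence={confidence:.2f}")
# support=0.20, confidence=0.80 -> passes thresholds such as
# min_support=0.1 and min_confidence=0.7
```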
APA, Harvard, Vancouver, ISO, and other styles
40

Cárdenas, Edward Hinojosa. "Geração genética multiobjetivo de sistemas fuzzy usando a abordagem iterativa." Universidade Federal de São Carlos, 2011. https://repositorio.ufscar.br/handle/ufscar/486.

Full text
Abstract:
The goal of this work is to study, expand and evaluate the use of multi-objective genetic algorithms and the iterative rule learning approach in fuzzy system generation, especially in fuzzy rule-based systems, both in automatic fuzzy rule generation from datasets and in fuzzy set optimization. This work investigates the use of multi-objective genetic algorithms with a focus on the trade-off between accuracy and interpretability, considered contradictory objectives in the representation of fuzzy systems. With this purpose, we propose and implement an evolutionary multi-objective genetic model composed of three stages. In the first stage, uniformly distributed fuzzy sets are created. In the second stage, the rule base is generated by using an iterative rule learning approach and a multi-objective genetic algorithm. Finally, the fuzzy sets created in the first stage are optimized through a multi-objective genetic algorithm. The proposed model was evaluated with a number of benchmark datasets and the results were compared to three other methods found in the literature. The results obtained with the optimization of the fuzzy sets were compared to the results of another fuzzy set optimizer found in the literature. Statistical comparison methods usually applied in similar contexts show that the proposed method has an improved classification rate and interpretability in comparison with the other methods.
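The accuracy/interpretability trade-off is typically resolved by Pareto dominance; a minimal sketch with fabricated candidates follows, using rule base size as the interpretability proxy.

```python
# Sketch: keep only non-dominated candidates when maximising accuracy and
# minimising rule base size.  Candidate values are fabricated.
candidates = [  # (name, accuracy, number of fuzzy rules)
    ("A", 0.91, 25), ("B", 0.89, 9), ("C", 0.85, 20), ("D", 0.84, 6),
]

def dominates(x, y):
    """x dominates y: no worse on both objectives, better on at least one."""
    (_, acc_x, size_x), (_, acc_y, size_y) = x, y
    return (acc_x >= acc_y and size_x <= size_y and
            (acc_x > acc_y or size_x < size_y))

pareto = [c for c in candidates
          if not any(dominates(o, c) for o in candidates if o is not c)]
print([name for name, *_ in pareto])  # ['A', 'B', 'D']  (C is dominated by B)
```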
APA, Harvard, Vancouver, ISO, and other styles
41

Doe, de Maindreville Christophe. "Conception et integration d'un langage de regles de production dans un sgbd relationnel." Paris 6, 1988. http://www.theses.fr/1988PA066209.

Full text
Abstract:
In the proposed language (RDL1), a rule consists of a condition part, which is a tuple relational calculus formula, and an action part, which is a sequence of insertions/deletions on the database. The operational semantics of an RDL1 program is given in terms of database states. The execution of an RDL1 program thus defines a sequence of instances and is understood as a transfer function, possibly non-deterministic, between two database states. A representation and execution model for the RDL1 language, derived from predicate Petri nets, is introduced. Three algorithms for the evaluation and optimization of RDL1 programs are also presented.
APA, Harvard, Vancouver, ISO, and other styles
42

Nageba, Ebrahim. "Personalizable architecture model for optimizing the access to pervasive ressources and services : Application in telemedicine." Phd thesis, INSA de Lyon, 2011. http://tel.archives-ouvertes.fr/tel-00694445.

Full text
Abstract:
The growing development and use of pervasive systems, equipped with increasingly sophisticated functionalities and communication means, offer tremendous potential for services, particularly in the eHealth and Telemedicine domains, for the benefit of every citizen, patient or healthcare professional. One of the current societal challenges is to enable a better exploitation of the available services for all actors involved in a given domain. Nevertheless, the multiplicity of the offered services, the functional variety of systems, and the heterogeneity of needs require the development of knowledge models of these services, system functions, and needs. In addition, the heterogeneity of distributed computing environments, the availability and potential capabilities of the various human and material resources (devices, services, data sources, etc.) required by different tasks and processes, the variety of services providing users with data, and the interoperability conflicts between schemas and data sources are all issues that we have to consider in our research work. Our contribution aims to empower the intelligent exploitation of ubiquitous resources and to optimize the quality of service in ambient environments. For this, we propose a knowledge meta-model of the main concepts of a pervasive environment, such as Actor, Task, Resource, Object, Service, Location, Organization, etc. This knowledge meta-model is based on ontologies describing the different aforementioned entities of a given domain and their interrelationships. We have then formalized it by using a standard language for knowledge description. After that, we designed an architectural framework called ONOF-PAS (ONtology Oriented Framework for Pervasive Applications and Services), mainly based on ontological models, a set of rules, an inference engine, and object-oriented components for task management and resource processing. Being generic, extensible, and applicable in different domains, ONOF-PAS has the ability to perform rule-based reasoning to handle various contexts of use and enable decision making in dynamic and heterogeneous environments, while taking into account the availability and capabilities of the human and material resources required by the multiple tasks and processes executed by pervasive systems. Finally, we have instantiated ONOF-PAS in the telemedicine domain to handle the scenario of the transfer of persons suffering from health problems during their presence in hostile environments such as high mountain resorts or geographically isolated areas. A prototype implementing this scenario, called T-TROIE (Telemedicine Tasks and Resources Ontologies for Inimical Environments), has been developed to validate our approach and the proposed ONOF-PAS framework.
APA, Harvard, Vancouver, ISO, and other styles
43

Palanisamy, Senthil Kumar. "Association rule based classification." Link to electronic thesis, 2006. http://www.wpi.edu/Pubs/ETD/Available/etd-050306-131517/.

Full text
Abstract:
Thesis (M.S.)--Worcester Polytechnic Institute.
Keywords: Itemset Pruning, Association Rules, Adaptive Minimal Support, Associative Classification, Classification. Includes bibliographical references (p.70-74).
APA, Harvard, Vancouver, ISO, and other styles
44

Shah, Syed Fawad Ali. "Intelligent Algorithms for a Hybrid FuelCell/Photovoltaic Standalone System : Simulation Of Hybrid FuelCell/Photovoltaic Standalone System." Thesis, Högskolan Dalarna, Datateknik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:du-10046.

Full text
Abstract:
The intelligent algorithm is designed for a hybrid fuel cell/photovoltaic standalone system using a battery source. Its main function is to automate the hybrid system so that it takes decisions according to the environmental conditions, utilizing photovoltaic/solar energy and, in its absence, fuel cell energy. To enhance the performance of the fuel cell and the photovoltaic cell, a battery bank is used, which acts as a buffer and supplies continuous current to the load. A fuzzy logic based controller was used to develop the main system, because fuzzy controllers are feasible both for controlling the decision process and for predicting the availability of energy on the basis of the current photovoltaic and battery conditions. The intelligent algorithm is designed to optimize the performance of the system and to select the best available energy source(s) with regard to the input parameters. The enhanced function of this intelligent controller is to predict the use of the available energy resources and turn on the particular source for efficient energy utilization. A fuzzy controller was chosen to take the decisions for efficient energy utilization from the given resources. The fuzzy logic based controller was designed in the Matlab-Simulink environment. Initially, the fuzzy rules were built; then a MATLAB based simulation system was designed and implemented. The whole proposed model was then simulated and tested for the accuracy of the design and the performance of the system.
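A toy flavour of the fuzzy decision (membership shapes, units, and rules are invented, not the thesis's Matlab-Simulink design): triangular memberships fuzzify irradiance and battery state of charge, and two rules pick the source.

```python
# Sketch: fuzzy selection between PV and fuel cell; everything is illustrative.
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def select_source(irradiance, soc):
    solar_high = tri(irradiance, 300, 800, 1200)   # W/m^2
    battery_low = tri(soc, -1, 0, 50)              # % state of charge
    # Rule 1: IF solar high THEN use PV (strength = membership degree)
    # Rule 2: IF solar NOT high AND battery low THEN use fuel cell
    use_pv = solar_high
    use_fc = min(1 - solar_high, battery_low)
    return "PV" if use_pv >= use_fc else "FuelCell"

print(select_source(irradiance=900, soc=40))  # PV
print(select_source(irradiance=100, soc=20))  # FuelCell
```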
APA, Harvard, Vancouver, ISO, and other styles
45

Santana, Maykon Rocha. "Evolsys: um ambiente de configuração e análise de algoritmos evolutivos para sintonia da base de regras fuzzy do sistema de controle de um FMS." Universidade Federal de São Carlos, 2015. https://repositorio.ufscar.br/handle/ufscar/8413.

Full text
Abstract:
In recent years, companies have used Artificial Intelligence (AI) techniques to facilitate the decision-making process in manufacturing systems. The use of these techniques allows increased performance of Flexible Manufacturing Systems (FMS): automating the process with computational resources allows a deeper analysis of the system conditions, which sometimes results in better decision making. In this sense, Fuzzy Logic has been employed to carry out this task, because it deals easily with imprecise information and encodes specialist knowledge in Fuzzy rules. However, as system complexity increases, the task of generating a Fuzzy Rule Base (FRB) appropriate to the proposed system becomes increasingly difficult. To assist this process of FRB generation, several techniques can be used, and among them the search technique called Evolutionary Algorithm (EA) stands out. The EA is used, for example, for tuning the FRB of the control system of an FMS through the reduction of optimization variables such as Makespan or Tardiness. In the case of the variable called Makespan, tuning occurs when the EA generates an FRB that reduces the makespan values of the FMS. However, constructing an EA that effectively generates a tuned FRB is not trivial: the process requires building various EAs with different selection methods, different mutation rates, and other settings, until an EA appropriate to a given situation is found. Therefore, in this study, the objective is to build an environment for the configuration and performance analysis of EAs for tuning the FRB of the Fuzzy Control System of an FMS; that is, we intend to investigate the ideal EA parameter scenario for tuning the FRB of the said control system. In this study, the EA used was an extension of the Genetic Algorithm (GA). To implement the proposal, an evolutionary system for the configuration and analysis of this GA variant was created. In this system, entitled "EvolSys - Evolutionary System", parameters such as the Number of Input Variables of the FRB, the Number of Output Variables of the FRB, the Population Size, the Mutation Rate and the Crossover Rate of the EA, among others, are configured, and an FRB is then generated. This makes it possible to analyse the EA and choose an FRB that reduces the makespan of the FMS. We may therefore conclude that the use of EAs in collaboration with Fuzzy systems may become an important tool for tuning the Rule Base of the system responsible for sequencing the operations of an FMS; accordingly, the environment created fulfills the configuration and performance analysis stage for EAs.
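One way to picture the EA side is sketched below, under the hypothetical assumption that a fuzzy rule base is encoded as a chromosome of rule consequents; the fitness function is a placeholder, where EvolSys would instead simulate the FMS and return a makespan-based value.

```python
# Sketch: FRB as a chromosome of rule consequents, plus mutation.
import random

random.seed(1)
CONSEQUENTS = ["low_priority", "medium_priority", "high_priority"]
N_RULES = 8                  # one gene per combination of rule premises

def random_chromosome():
    return [random.choice(CONSEQUENTS) for _ in range(N_RULES)]

def mutate(chromosome, rate=0.1):
    return [random.choice(CONSEQUENTS) if random.random() < rate else gene
            for gene in chromosome]

def fitness(chromosome):
    # Placeholder only: a real fitness would decode the chromosome into an
    # FRB, simulate the FMS, and return e.g. the negated makespan.
    return -sum(CONSEQUENTS.index(g) for g in chromosome)

parent = random_chromosome()
child = mutate(parent, rate=0.25)
print(parent, fitness(parent))
print(child, fitness(child))
```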
APA, Harvard, Vancouver, ISO, and other styles
46

Nicklasson, Henric, and Måns Ekström. "Monetary Policy Determination: A Taylor Rule Based Approach : A study of the West African Economic and Monetary Union." Thesis, Högskolan i Jönköping, Internationella Handelshögskolan, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-44368.

Full text
Abstract:
The purpose of this paper has been to investigate the monetary policy of the West African Economic and Monetary Union (WAEMU) through a Taylor rule based analysis of its interest rate setting. The evaluation of the different rules was based on both in-sample and out-of-sample forecast errors. Few significant or consistent influences from the variables proposed by the rules can be established, which might suggest that the bank operates primarily under a discretionary framework rather than a rule. Furthermore, our findings indicate that the European Central Bank interest rate (ECB-rate) does not exclusively drive the Central Bank of West African States interest rate (BCEAO-rate), which suggests that the BCEAO does retain some independence of monetary policy to respond to domestic variables, as proposed by earlier research, despite the fixed exchange rate. These results call into question the credibility of the BCEAO in attaining its stated primary goal of price stability, as there seems to be no significant or consistent response to inflation in the setting of its interest rate, despite a suggested ability to react to it. This may explain the currently high volatility of inflation in the area and could give rise to future volatility and instability as well.
APA, Harvard, Vancouver, ISO, and other styles
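As background for the kind of rule this thesis tests: the canonical Taylor (1993) rule sets the policy rate from inflation and the output gap. The sketch below is illustrative only, using Taylor's original coefficients; the thesis estimates the BCEAO's actual reaction coefficients empirically, and the function name and default values here are assumptions.

```python
def taylor_rule_rate(inflation, output_gap, neutral_real_rate=2.0,
                     inflation_target=2.0, a_pi=0.5, a_y=0.5):
    """Taylor (1993) prescription, all values in percent:
    i = r* + pi + a_pi * (pi - pi*) + a_y * (output gap)."""
    return (neutral_real_rate + inflation
            + a_pi * (inflation - inflation_target)
            + a_y * output_gap)

# Example: 3% inflation with output 1% below potential
print(taylor_rule_rate(inflation=3.0, output_gap=-1.0))  # -> 5.0
```

Comparing a central bank's observed rate path against such prescriptions, in sample and out of sample, is the standard way to test whether it follows a rule or acts with discretion.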
47

Gandharva, Kumar. "Study of Effect of Coverage and Purity on Quality of Learned Rules." University of Cincinnati / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1428048034.

Full text
APA, Harvard, Vancouver, ISO, and other styles
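For readers unfamiliar with the two metrics in this title: coverage is the fraction of examples a rule matches, and purity is the fraction of matched examples whose class agrees with the rule's prediction. The sketch below is a generic illustration of these standard definitions, not code from the thesis.

```python
def rule_quality(rule, examples):
    """Coverage and purity of a classification rule.
    rule: (predicate, predicted_class); examples: (features, label) pairs.
    coverage = covered / total; purity = correctly labelled / covered."""
    predicate, predicted = rule
    covered = [(x, y) for x, y in examples if predicate(x)]
    if not examples or not covered:
        return 0.0, 0.0
    purity = sum(1 for _, y in covered if y == predicted) / len(covered)
    return len(covered) / len(examples), purity

# Toy example: rule "x > 5 -> pos" over four labelled points
data = [(3, "neg"), (6, "pos"), (8, "pos"), (7, "neg")]
cov, pur = rule_quality((lambda x: x > 5, "pos"), data)
print(cov, pur)  # 0.75 0.666...
```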
48

Hammoud, Suhel. "MapReduce network enabled algorithms for classification based on association rules." Thesis, Brunel University, 2011. http://bura.brunel.ac.uk/handle/2438/5833.

Full text
Abstract:
There is growing evidence that integrating classification and association rule mining can produce more efficient and accurate classifiers than traditional techniques. This thesis introduces a new MapReduce based association rule miner for extracting strong rules from large datasets. This miner is later used to develop a new large-scale classifier. A new MapReduce simulator was also developed to evaluate the scalability of the proposed algorithms on MapReduce clusters. The developed association rule miner inherits MapReduce's scalability to huge datasets and to thousands of processing nodes. For finding frequent itemsets, it uses a hybrid approach combining miners that apply counting methods to horizontal datasets and miners that use set intersections on datasets in vertical formats. The new miner generates the same rules that are usually generated by Apriori-like algorithms, because it uses the same definitions of the confidence and support thresholds. In the last few years, a number of associative classification algorithms have been proposed, e.g. CPAR, CMAR, MCAR, MMAC and others. This thesis also introduces a new MapReduce classifier based on MapReduce association rule mining. This algorithm employs different approaches to rule discovery, rule ranking, rule pruning, rule prediction and rule evaluation. The new classifier works on multi-class datasets and is able to produce multi-label predictions with probabilities for each predicted label. To evaluate the classifier, 20 different datasets from the UCI data collection were used. Results show that the proposed approach is an accurate and effective classification technique, highly competitive and scalable when compared with other traditional and associative classification approaches. The MapReduce simulator makes it possible to measure the scalability of MapReduce based applications easily and quickly, and to capture the behaviour of algorithms on cluster environments; this also allows optimizing the configuration of MapReduce clusters to obtain better execution times and hardware utilization.
APA, Harvard, Vancouver, ISO, and other styles
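To illustrate the counting phase such a miner builds on, the sketch below mimics, in a single process, the map and reduce steps of Apriori-style support counting with the usual support and confidence definitions. The function names and the toy driver are assumptions for illustration; the thesis's hybrid horizontal/vertical miner is not reproduced here.

```python
from collections import defaultdict
from itertools import combinations

def map_phase(transaction, max_size=2):
    # Emit (itemset, 1) for every candidate itemset in one transaction,
    # as a MapReduce mapper would for its input split.
    for k in range(1, max_size + 1):
        for itemset in combinations(sorted(transaction), k):
            yield itemset, 1

def reduce_phase(pairs):
    # Sum the counts per itemset key, as the reducers would.
    counts = defaultdict(int)
    for itemset, one in pairs:
        counts[itemset] += one
    return counts

transactions = [{"bread", "milk"}, {"bread", "butter"},
                {"bread", "milk", "butter"}, {"milk"}]
counts = reduce_phase(p for t in transactions for p in map_phase(t))

min_support = 2  # absolute support threshold
frequent = {s: c for s, c in counts.items() if c >= min_support}

# Confidence of the rule {bread} -> {milk}:
# support(bread, milk) / support(bread) = 2/3 here
conf = frequent[("bread", "milk")] / frequent[("bread",)]
print(frequent, conf)
```

On a real cluster the mapper output would be partitioned by itemset key across reducers, which is what gives this counting scheme its scalability to thousands of nodes.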
49

Kilinc, Adanali Yurdagul. "How To Follow A Rule: Practice Based Rule Following In Wittgenstein." Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/3/12605900/index.pdf.

Full text
Abstract:
Rule following is a central concept in the philosophy of Wittgenstein, who was one of the pioneers of modern philosophy. Wittgenstein criticizes the traditional concepts of rule, because they were vague, ambiguous, and idealized. He thinks that it is not possible to isolate rules from practice and that a rule takes its meaning in a certain context or in practice. Wittgenstein's concept of rule following is closely related to a set of concepts: internal relation, understanding, criterion. These concepts explain the intimate relation between rule following and practice. Wittgenstein believes that his theory of rule following does not generate problems such as the paradox of interpretation and regression. Furthermore, the concept of practice plays a central role in Wittgenstein's view of rule following. He removes metaphysical speculations that are put forward concerning the "essence" of rule following and locates rule following in a form of life, that is, in a natural context. With this, he provides an explanation that clarifies misuses of language and establishes a correct relation between theory and practice.
APA, Harvard, Vancouver, ISO, and other styles
50

Lewis, Charles Michael. "Identification of rule-based models." Diss., Georgia Institute of Technology, 1986. http://hdl.handle.net/1853/30035.

Full text
APA, Harvard, Vancouver, ISO, and other styles
