Dissertations / Theses on the topic 'Probabilistic Bayesian Network'

To see the other types of publications on this topic, follow the link: Probabilistic Bayesian Network.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Probabilistic Bayesian Network.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Sahin, Elvan. "Discrete-Time Bayesian Networks Applied to Reliability of Flexible Coping Strategies of Nuclear Power Plants." Thesis, Virginia Tech, 2021. http://hdl.handle.net/10919/103817.

Full text
Abstract:
The Fukushima Daiichi accident prompted the nuclear community to find new solutions for reducing risky situations in nuclear power plants (NPPs) caused by beyond-design-basis external events (BDBEEs). An implementation guide for diverse and flexible coping strategies (FLEX) has been presented by the Nuclear Energy Institute (NEI) to manage the challenge of BDBEEs and to enhance reactor safety against extended station blackout (SBO). To assess the effectiveness of FLEX strategies, probabilistic risk assessment (PRA) methods can be used to calculate the reliability of such systems. Due to their uniqueness, FLEX systems can carry dependencies among components that are not commonly modeled in NPPs. Therefore, a suitable method is needed to analyze the reliability of FLEX systems in nuclear reactors. This thesis investigates the effectiveness and applicability of Bayesian networks (BNs) and Discrete-Time Bayesian Networks (DTBNs) in the reliability analysis of FLEX equipment that is utilized to reduce the risk in nuclear power plants. To this end, the thesis compares BNs with two other reliability assessment methods: Fault Tree (FT) and Markov chain (MC). It is also shown that these two methods can be transformed into BNs to perform the reliability analysis of FLEX systems. The comparison of the three reliability methods is shown and discussed in three different applications. The results show that BNs are not only a powerful method for modeling FLEX strategies but also an effective technique for performing reliability analysis of FLEX equipment in nuclear power plants.
Master of Science
Some external events, such as earthquakes, flooding, and severe wind, may cause damage to nuclear reactors. To reduce the consequences of these damages, the Nuclear Energy Institute (NEI) has proposed mitigating strategies known as FLEX (Diverse and Flexible Coping Strategies). After the implementation of FLEX in nuclear power plants, we need to analyze the failure or success probability of these engineering systems through one of the existing methods. However, the existing methods are limited in analyzing the dependencies among components in complex systems. Bayesian networks (BNs) are a graphical and quantitative technique used to model dependencies among events. This thesis shows the effectiveness and applicability of BNs in the reliability analysis of FLEX strategies by comparing them with two other reliability analysis tools, known as Fault Tree Analysis and Markov Chain. According to the reliability analysis results, BNs are a powerful and promising method for modeling and analyzing FLEX strategies.
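To make the kind of component-level reliability modeling described in these abstracts concrete, here is a minimal sketch of a redundant two-component system encoded as a discrete Bayesian network and queried for its failure probability. It assumes the pgmpy library and made-up failure probabilities; it is a generic illustration, not the FLEX model developed in the thesis.

```python
# Minimal sketch: a redundant two-component system as a Bayesian network (pgmpy assumed).
# The system fails only if both components fail (the BN analogue of a fault-tree gate).
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

model = BayesianNetwork([("PumpA", "System"), ("PumpB", "System")])

# State 0 = works, state 1 = fails; illustrative failure probabilities only.
cpd_a = TabularCPD("PumpA", 2, [[0.98], [0.02]])
cpd_b = TabularCPD("PumpB", 2, [[0.95], [0.05]])
# System fails (state 1) only when both pumps fail.
cpd_sys = TabularCPD(
    "System", 2,
    values=[[1.0, 1.0, 1.0, 0.0],   # P(System works | PumpA, PumpB)
            [0.0, 0.0, 0.0, 1.0]],  # P(System fails | PumpA, PumpB)
    evidence=["PumpA", "PumpB"], evidence_card=[2, 2],
)
model.add_cpds(cpd_a, cpd_b, cpd_sys)
assert model.check_model()

print(VariableElimination(model).query(["System"]))  # P(fail) = 0.02 * 0.05 = 0.001
```

The same graph can be extended with shared-cause nodes (e.g., a common power supply feeding both pumps), which is precisely the kind of dependency among components that fault trees struggle to express and BNs handle naturally.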
APA, Harvard, Vancouver, ISO, and other styles
2

Yoo, Keunyoung. "Probabilistic SEM : an augmentation to classical Structural equation modelling." Diss., University of Pretoria, 2018. http://hdl.handle.net/2263/66521.

Full text
Abstract:
Structural equation modelling (SEM) is carried out with the aim of testing hypotheses on the model of the researcher in a quantitative way, using the sampled data. Although SEM has developed in many aspects over the past few decades, there are still numerous advances which can make SEM an even more powerful technique. We propose representing the final theoretical SEM by a Bayesian Network (BN), which we would like to call a Probabilistic Structural Equation Model (PSEM). With the PSEM, we can take things a step further and conduct inference by explicitly entering evidence into the network and performing different types of inferences. Because the direction of the inference is not an issue, various scenarios can be simulated using the BN. The augmentation of SEM with BN provides significant contributions to the field. Firstly, structural learning can mine data for additional causal information which is not necessarily clear when hypothesising causality from theory. Secondly, the inference ability of the BN provides not only insight as mentioned before, but acts as an interactive tool as the 'what-if' analysis is dynamic.
Mini Dissertation (MCom)--University of Pretoria, 2018.
Statistics
MCom
Unrestricted
APA, Harvard, Vancouver, ISO, and other styles
3

Zhao, Wenyu. "A Probabilistic Approach for Prognostics of Complex Rotary Machinery Systems." University of Cincinnati / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1423581651.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Björkman, Peter. "Probabilistic Safety Assessment using Quantitative Analysis Techniques : Application in the Heavy Automotive Industry." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-163262.

Full text
Abstract:
Safety is considered one of the most important areas for future research and development within the automotive industry. New functionality, such as driver support and active/passive safety systems, are examples where development mainly focuses on safety. At the same time, the trend is towards more complex systems, increased software dependence and an increasing number of sensors and actuators, resulting in a higher risk associated with software and hardware failures. In the area of functional safety, standards such as ISO 26262 assess safety mainly through qualitative assessment techniques, whereas the use of quantitative techniques is a growing area in academic research. This thesis considers the field of functional safety, with emphasis on how hardware and software failure probabilities can be used to quantitatively assess the safety of a system/function. More specifically, this thesis presents a method for quantitative safety assessment using Bayesian networks for probabilistic modeling. Since the safety standard ISO 26262 is becoming common in the automotive industry, the developed method is adjusted to use information gathered when implementing this standard. Continuing the discussion about safety, a method for modeling faults and failures using Markov models is presented. These models connect to the previously developed Bayesian network and complete the quantitative safety assessment. Furthermore, the potential for implementing the discussed models in the Modelica language is investigated, aiming to find out whether such models could be useful in practice to simplify design work and meet future safety goals.
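The pairing of Bayesian networks with Markov failure models mentioned above can be illustrated with a few lines of linear algebra. The following sketch is a generic discrete-time Markov failure model with made-up transition probabilities, not the ISO 26262 models developed in the thesis.

```python
# Minimal sketch of a discrete-time Markov failure model (illustrative numbers only).
# States: 0 = OK, 1 = degraded, 2 = failed (absorbing).
import numpy as np

P = np.array([
    [0.995, 0.004, 0.001],   # transitions from OK
    [0.000, 0.980, 0.020],   # transitions from degraded
    [0.000, 0.000, 1.000],   # failed is absorbing
])

state = np.array([1.0, 0.0, 0.0])          # system starts in OK
horizon = 10_000                           # number of time steps (e.g. driving cycles)
dist = state @ np.linalg.matrix_power(P, horizon)
print(f"P(failed within {horizon} steps) = {dist[2]:.4f}")
```

The resulting failure probability is the kind of quantity that can then be fed into a higher-level Bayesian network node representing a hardware or software failure event.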
APA, Harvard, Vancouver, ISO, and other styles
5

Quer, Giorgio. "Optimization of Cognitive Wireless Networks using Compressive Sensing and Probabilistic Graphical Models." Doctoral thesis, Università degli studi di Padova, 2011. http://hdl.handle.net/11577/3421992.

Full text
Abstract:
In-network data aggregation to increase the efficiency of data gathering solutions for Wireless Sensor Networks (WSNs) is a challenging task. In the first part of this thesis, we address the problem of accurately reconstructing distributed signals through the collection of a small number of samples at a Data Collection Point (DCP). We exploit Principal Component Analysis (PCA) to learn the relevant statistical characteristics of the signals of interest at the DCP. Then, at the DCP we use this knowledge to design a matrix required by the recovery techniques, which exploit convex optimization (Compressive Sensing, CS) in order to recover the whole signal sensed by the WSN from the small number of samples gathered. In order to integrate this monitoring model into a compression/recovery framework, we apply the logic of the cognition paradigm: we first observe the network, then we learn the relevant statistics of the signals, and we apply this knowledge to recover the signal and to make decisions, which we enact through the control loop. This compression/recovery framework with a feedback control loop is named "Sensing, Compression and Recovery through ONline Estimation" (SCoRe1). The whole framework is designed for a WSN architecture, called WSN-control, that is accessible from the Internet. We also analyze the whole framework with a Bayesian approach in order to theoretically justify the choices made in our protocol design. The second part of the thesis deals with the application of the cognition paradigm to the optimization of a Wireless Local Area Network (WLAN). In this work, we propose an architecture for cognitive networking that can be integrated with the existing layered protocol stack. Specifically, we suggest the use of a probabilistic graphical model for modeling the layered protocol stack. In particular, we use a Bayesian Network (BN), a graphical representation of statistical relationships between random variables, in order to describe the relationships among a set of stack-wide protocol parameters and to exploit this cross-layer approach to optimize the network. In doing so, we use the knowledge learned from the observation of the data to predict the TCP throughput in a single-hop wireless network and to infer the future occurrence of congestion at the TCP layer in a multi-hop wireless network. The approach followed in the two main topics of this thesis consists of the following phases: (i) we apply the cognition paradigm to learn the specific probabilistic characteristics of the network, (ii) we exploit the knowledge acquired in the first phase to design novel protocol techniques, (iii) we analyze such techniques theoretically and through extensive simulation, comparing them with other state-of-the-art techniques, and (iv) we evaluate their performance in real networking scenarios.
La combinazione delle informazioni nelle reti di sensori wireless è una soluzione promettente per aumentare l'efficienza delle techiche di raccolta dati. Nella prima parte di questa tesi viene affrontato il problema della ricostruzione di segnali distribuiti tramite la raccolta di un piccolo numero di campioni al punto di raccolta dati (DCP). Viene sfruttato il metodo dell'analisi delle componenti principali (PCA) per ricostruire al DCP le caratteristiche statistiche del segnale di interesse. Questa informazione viene utilizzata al DCP per determinare la matrice richiesta dalle tecniche di recupero che sfruttano algoritmi di ottimizzazione convessa (Compressive Sensing, CS) per ricostruire l'intero segnale da una sua versione campionata. Per integrare questo modello di monitoraggio in un framework di compressione e recupero del segnale, viene applicata la logica del paradigma 'cognitive': prima si osserva la rete; poi dall'osservazione si derivano le statistiche di interesse, che vengono applicate per il recupero del segnale; si sfruttano queste informazioni statistiche per prenderere decisioni e infine si rendono effettive queste decisioni con un controllo in retroazione. Il framework di compressione e recupero con controllo in retroazione è chiamato "Sensing, Compression and Recovery through ONline Estimation" (SCoRe1). L'intero framework è stato implementato in una architettura per WSN detta WSN-control, accessibile da Internet. Le scelte nella progettazione del protocollo sono state giustificate da un'analisi teorica con un approccio di tipo Bayesiano. Nella seconda parte della tesi il paradigma cognitive viene utilizzato per l'ottimizzazione di reti locali wireless (WLAN). L'architetture della rete cognitive viene integrata nello stack protocollare della rete wireless. Nello specifico, vengono utilizzati dei modelli grafici probabilistici per modellare lo stack protocollare: le relazioni probabilistiche tra alcuni parametri di diversi livelli vengono studiate con il modello delle reti Bayesiane (BN). In questo modo, è possibile utilizzare queste informazioni provenienti da diversi livelli per ottimizzare le prestazioni della rete, utilizzando un approccio di tipo cross-layer. Ad esempio, queste informazioni sono utilizzate per predire il throughput a livello di trasporto in una rete wireless di tipo single-hop, o per prevedere il verificarsi di eventi di congestione in una rete wireless di tipo multi-hop. L'approccio seguito nei due argomenti principali che compongono questa tesi è il seguente: (i) viene applicato il paradigma cognitive per ricostruire specifiche caratteristiche probabilistiche della rete, (ii) queste informazioni vengono utilizzate per progettare nuove tecniche protocollari, (iii) queste tecniche vengono analizzate teoricamente e confrontate con altre tecniche esistenti, e (iv) le prestazioni vengono simulate, confrontate con quelle di altre tecniche e valutate in scenari di rete realistici.
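As a toy illustration of the compressive-sensing step described in the abstract, the sketch below recovers a sparse coefficient vector from a small number of random measurements with an l1-regularized regression. It assumes scikit-learn's Lasso as the solver and a random sensing matrix; it is not the SCoRe1 framework, in which the sparsifying basis is learned by PCA from past WSN readings.

```python
# Toy compressive-sensing recovery: a sparse vector is reconstructed from m << n
# random measurements via l1-regularized regression (generic illustration only).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, k, m = 256, 5, 60                         # signal length, sparsity, number of samples

coeffs = np.zeros(n)                         # signal that is sparse in a known basis
coeffs[rng.choice(n, k, replace=False)] = rng.normal(size=k)

Phi = rng.normal(size=(m, n)) / np.sqrt(m)   # random sensing matrix
y = Phi @ coeffs                             # compressed measurements at the DCP

lasso = Lasso(alpha=1e-3, max_iter=100_000, fit_intercept=False).fit(Phi, y)
print("relative error:",
      np.linalg.norm(lasso.coef_ - coeffs) / np.linalg.norm(coeffs))
```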
APA, Harvard, Vancouver, ISO, and other styles
6

Ramani, Shiva Shankar. "Graphical Probabilistic Switching Model: Inference and Characterization for Power Dissipation in VLSI Circuits." [Tampa, Fla.] : University of South Florida, 2004. http://purl.fcla.edu/fcla/etd/SFE0000497.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Bortolini, Rafaela. "Enhancing building performance : a Bayesian network model to support facility management." Doctoral thesis, Universitat Politècnica de Catalunya, 2019. http://hdl.handle.net/10803/666187.

Full text
Abstract:
The performance of existing buildings is receiving increased attention due to the need to renovate the aging building stock and provide a better quality of life for end users. The conservation state of buildings and the indoor environment conditions have been related to occupants' well-being, health, and productivity. At the same time, there is a need for more sustainable buildings with reduced energy consumption. Most challenges encountered during the analysis of the performance of existing buildings are associated with the complex relationships among the causal factors involved. The performance of a building is influenced by several factors (e.g., environmental agents, occupant behavior, operation, maintenance), which also generate uncertainties when predicting it. Most previous studies that investigate methods to assess a building's performance do not consider this uncertainty and are often based on linear models. Although different stakeholders' requirements regarding building performance coexist, few studies have centered on the implications of these requirements. Previous studies tend to be highly specific on indicators related to a particular performance aspect, overlooking potential trade-offs that may occur between them. Therefore, a holistic and integrated approach to managing the performance of existing buildings has not been explored. Facility managers need an efficient approach to deal with uncertainty, to manage risks, and to systematically identify, analyze, evaluate and mitigate factors that may impact building performance. Taking into account the aforementioned aspects, the aim of this thesis is to devise a Bayesian network (BN) model to holistically manage the operational performance of buildings and support facility management. The proposed model consists of an integrated probabilistic approach to assess the performance of existing buildings, considering three categories: safety and elements working properly, health and comfort, and energy efficiency. The model also provides an understanding of the causality chain between multiple factors and indicators regarding building performance. The understanding of the relationships between building condition, end-user comfort and building energy efficiency helps facility managers unravel a causal explanation for the performance results in a reasoning process. The proposed model is tested and validated using sensitivity analysis and data from existing buildings. A set of model applications are discussed, including the holistic assessment of a building's performance, the identification of causal factors, the prediction of building performance under renovation and retrofit scenarios, and the prioritization of maintenance actions. Case studies also illustrate the applicability of the model, ensuring that its interactions and outcomes are feasible. Scenario analyses provide a basis for a deeper understanding of the potential responses of the model, helping facility managers to optimize building operation strategies in order to enhance performance. The results of this thesis also include data collection methods for the inputs of the proposed BN model. A building inspection system is proposed to evaluate the technical performance of buildings, a text-mining approach is developed to analyze maintenance requests of end users, and a questionnaire is formulated to collect end-user satisfaction regarding building comfort. To conclude, this work proposes the use of Building Information Modeling (BIM) to store and access building information, which is typically dispersed and not standardized in existing buildings.
Actualmente, el desempeño de los edificios existentes es de gran interés debido a la necesidad de renovar el stock de edificios antiguos, proporcionando así una mejor calidad de vida a los usuarios finales. El estado de conservación de los edificios y las condiciones ambientales interiores se relacionan con el bienestar, la salud y la productividad de los ocupantes. Al mismo tiempo, existe la necesidad de edificios más sostenibles con un menor consumo energético. El desempeño de un edificio se ve afectado por varios factores (p.ej., agentes ambientales, comportamiento de los ocupantes, operación, mantenimiento, etc.). La mayoría de estos aspectos y causas muestran complejas relaciones, y consecuentemente existe una gran incertidumbre para predecirlo. Sin embargo, las investigaciones anteriores no contemplan estas relaciones causales y, a menudo, se basan en modelos lineales. Aunque el desempeño de los edificios se debe abordar teniendo en cuenta los requisitos de las diferentes partes interesadas, pocos estudios se centran en este enfoque. Los estudios anteriores tienden a analizar aspectos particulares del desempeño, ignorando las posibles relaciones que pueden ocurrir entre ellos. Los gestores de edificios deben abordar eficientemente la incertidumbre, gestionar los riesgos e identificar, analizar, evaluar y mitigar sistemáticamente los factores que pueden afectar el desempeño del edificio. Teniendo en cuenta los aspectos comentados anteriormente, el objetivo de esta tesis es desarrollar un modelo de red bayesiana (BN) para gestionar holísticamente el desempeño operativo de los edificios y apoyar su gestión. El modelo propuesto consiste en un enfoque probabilístico para evaluar el desempeño de los edificios existentes, considerando tres categorías: seguridad y funcionalidad, salud y confort, y eficiencia energética. El modelo también proporciona una interpretación de la cadena de causalidad entre los múltiples factores e indicadores relacionados con el desempeño del edificio. El análisis de las relaciones entre los diferentes aspectos del desempeño de los edificios (estado de conservación del edificio, el confort del usuario final y la eficiencia energética del edificio) va a permitir explicar y entender sus factores causales y va a posibilitar mejorar la gestión de estos edificios. La verificación del modelo propuesto se lleva a cabo mediante análisis de sensibilidad y datos de edificios existentes. Las aplicaciones del modelo incluyen: la evaluación del desempeño de edificios de forma integrada; la identificación de factores causales; la predicción del desempeño de los edificios a través de escenarios de renovación y modernización; y la priorización de las acciones de mantenimiento. La implementación del modelo en diversos casos de estudio permite ilustrar su aplicabilidad y validar su uso. Los resultados de esta tesis también incluyen métodos de recogida de datos para las variables del modelo propuesto. De hecho, se propone un sistema de inspección de edificios para evaluar el desempeño técnico de los edificios, se desarrolla un sistema de text mining para analizar las solicitudes de mantenimiento de los usuarios finales y se formula un cuestionario para recoger la satisfacción de los usuarios finales en relación a los espacios de los edificios en los que interactúan. Para concluir, este trabajo propone el uso del Building Information Modeling (BIM) para almacenar y acceder a la información necesaria para el modelo.
APA, Harvard, Vancouver, ISO, and other styles
8

Klukowski, Piotr. "Nuclear magnetic resonance spectroscopy interpretation for protein modeling using computer vision and probabilistic graphical models." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4720.

Full text
Abstract:
Dynamic development of nuclear magnetic resonance (NMR) spectroscopy has allowed fast acquisition of experimental data that determine the structure and dynamics of macromolecules. Nevertheless, due to a lack of appropriate computational methods, NMR spectra are still analyzed manually by researchers, which takes weeks or years depending on protein complexity. Automation of this process is therefore highly desirable and can significantly reduce the time needed to solve protein structures. In the presented work, a new approach to automated three-dimensional protein NMR spectra analysis is presented. It is based on the Histogram of Oriented Gradients and a Bayesian Network, which had not previously been applied in this context. The proposed method was evaluated using benchmark data established by manually labeling 99 spectroscopic images taken from 6 different NMR experiments. Subsequent validation was then performed using spectra of the upstream-of-N-ras protein. With the use of the proposed method, a three-dimensional structure of the mentioned protein was calculated. Comparison with the reference structure from the Protein Data Bank reveals no significant differences, which shows that the proposed method can be used in practice in NMR laboratories.
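The abstract combines Histogram of Oriented Gradients (HOG) features with a Bayesian network classifier; the fragment below shows only the HOG step, applied to a synthetic 2-D spectral patch, assuming the scikit-image library. It is a generic illustration of the feature extraction, not the thesis's pipeline.

```python
# Minimal sketch of the HOG feature step on a synthetic 2-D spectral patch
# (scikit-image assumed); the descriptor would feed a peak/non-peak classifier
# such as the Bayesian network described above.
import numpy as np
from skimage.feature import hog

rng = np.random.default_rng(1)
patch = rng.normal(0.0, 0.05, size=(64, 64))                   # noisy background
yy, xx = np.mgrid[0:64, 0:64]
patch += np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / 30.0)     # a Gaussian "peak"

features = hog(
    patch,
    orientations=9,
    pixels_per_cell=(8, 8),
    cells_per_block=(2, 2),
    block_norm="L2-Hys",
)
print(features.shape)   # fixed-length descriptor for the classifier
```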
APA, Harvard, Vancouver, ISO, and other styles
9

Ramalingam, Nirmal Munuswamy. "A complete probabilistic framework for learning input models for power and crosstalk estimation in VLSI circuits." [Tampa, Fla.] : University of South Florida, 2004. http://purl.fcla.edu/fcla/etd/SFE0000505.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Tran, Thanh Binh. "A Bayesian Network framework for probabilistic identification of model parameters from normal and accelerated tests : application to chloride ingress into concrete." Nantes, 2015. https://archive.bu.univ-nantes.fr/pollux/show/show?id=1bd3c7d5-c357-43f1-b430-bb5e97e9ef3c.

Full text
Abstract:
La pénétration des chlorures dans le béton est l'une des causes principales de dégradation des ouvrages en béton armé. Sous l’attaque des chlorures des dégradations importantes auront lieu après 10 à 20 ans. Par conséquent, ces ouvrages devraient être périodiquement inspectés et réparés afin d’assurer des niveaux optimaux de capacité de service et de sécurité pendant leur durée de vie. Des paramètres matériels et environnementaux pertinents peuvent être déterminés à partir des données d’inspection. En raison de la cinétique longue des mécanismes de pénétration de chlorures et des difficultés pour mettre en place des techniques d'inspection, il est difficile d'obtenir des données d'inspection suffisantes pour caractériser le comportement à moyen et à long-terme de ce phénomène. L'objectif principal de cette thèse est de développer une méthodologie basée sur la mise à jour du réseau bayésien pour améliorer l'identification des incertitudes liées aux paramètres matériels et environnementaux des modèles en cas de quantité limitée de mesures. Le processus d'identification est appuyé sur des résultats provenant de tests normaux et accélérées effectués en laboratoire qui simulent les conditions de marée. Sur la base de ces données, plusieurs procédures sont proposées pour : (1) identifier des variables aléatoires d'entrée à partir de tests normaux ou naturels; (2) déterminer un temps équivalent d'exposition (et un facteur d'échelle) pour les tests accélérés; et (3) caractériser les paramètres en dépendants du temps. Les résultats indiquent que le cadre proposé peut être un outil utile pour identifier les paramètres du modèle, même à partir d’une base de données limitée
Chloride ingress into concrete is one of the major causes leading to the degradation of reinforced concrete (RC) structures. Under chloride attack, important damage is generated after 10 to 20 years. Consequently, these structures should be periodically inspected and repaired to ensure an optimal level of serviceability and safety during their life cycle. Relevant material and environmental parameters for reliability analysis could be determined from inspection data. In natural conditions, chloride ingress involves a large number of uncertainties related to material properties and exposure conditions. However, due to the slow process of chloride ingress and the difficulties of implementing the inspection techniques, it is difficult to obtain sufficient inspection data to characterise the mid- and long-term behaviour of this phenomenon. The main objective of this thesis is to develop a framework based on Bayesian Network updating for improving the identification of uncertainties related to material and environmental model parameters in case of a limited amount of measurements in time and space. The identification process is based on results coming from in-lab normal and accelerated tests that simulate tidal conditions. Based on these data, several procedures are proposed to: (1) identify input random variables from normal or natural tests; (2) determine an equivalent exposure time (and a scale factor) for accelerated tests; and (3) characterise time-dependent parameters combining information from normal and accelerated tests. The results indicate that the proposed framework could be a useful tool to identify model parameters even from a limited database.
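To make the kind of parameter updating described above concrete, here is a minimal grid-based Bayesian calibration of an apparent chloride diffusion coefficient from a few synthetic measurements, assuming the classical error-function solution of Fick's second law as the measurement model. It is a simplified stand-in, not the Bayesian-network framework of the thesis.

```python
# Grid-based Bayesian updating of an apparent chloride diffusion coefficient D,
# assuming the error-function solution of Fick's second law as measurement model.
import numpy as np
from scipy.special import erfc

def chloride(x_m, t_s, D, Cs=0.5):
    """Chloride content at depth x and time t for surface content Cs (% binder)."""
    return Cs * erfc(x_m / (2.0 * np.sqrt(D * t_s)))

true_D = 5e-12                                     # m^2/s, used only to fake data
t = 2 * 365 * 24 * 3600.0                          # 2 years of exposure, in seconds
depths = np.array([0.005, 0.01, 0.02, 0.03])       # measurement depths (m)
rng = np.random.default_rng(3)
obs = chloride(depths, t, true_D) + rng.normal(0, 0.02, depths.size)  # noisy profile

D_grid = np.linspace(1e-12, 2e-11, 400)            # candidate values of D
prior = np.ones_like(D_grid) / D_grid.size         # flat prior on the grid
loglik = np.array([
    -0.5 * np.sum(((obs - chloride(depths, t, D)) / 0.02) ** 2) for D in D_grid
])
posterior = prior * np.exp(loglik - loglik.max())
posterior /= posterior.sum()
print("posterior mean D =", np.sum(D_grid * posterior), "m^2/s")
```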
APA, Harvard, Vancouver, ISO, and other styles
11

Gasse, Maxime. "Apprentissage de Structure de Modèles Graphiques Probabilistes : application à la Classification Multi-Label." Thesis, Lyon, 2017. http://www.theses.fr/2017LYSE1003/document.

Full text
Abstract:
Dans cette thèse, nous nous intéressons au problème spécifique de l'apprentissage de structure de modèles graphiques probabilistes, c'est-à-dire trouver la structure la plus efficace pour représenter une distribution, à partir seulement d'un ensemble d'échantillons D ∼ p(v). Dans une première partie, nous passons en revue les principaux modèles graphiques probabilistes de la littérature, des plus classiques (modèles dirigés, non-dirigés) aux plus avancés (modèles mixtes, cycliques etc.). Puis nous étudions particulièrement le problème d'apprentissage de structure de modèles dirigés (réseaux Bayésiens), et proposons une nouvelle méthode hybride pour l'apprentissage de structure, H2PC (Hybrid Hybrid Parents and Children), mêlant une approche à base de contraintes (tests statistiques d'indépendance) et une approche à base de score (probabilité postérieure de la structure). Dans un second temps, nous étudions le problème de la classification multi-label, visant à prédire un ensemble de catégories (vecteur binaire y ∈ {0, 1}^m) pour un objet (vecteur x ∈ ℝ^d). Dans ce contexte, l'utilisation de modèles graphiques probabilistes pour représenter la distribution conditionnelle des catégories prend tout son sens, particulièrement dans le but minimiser une fonction coût complexe. Nous passons en revue les principales approches utilisant un modèle graphique probabiliste pour la classification multi-label (Probabilistic Classifier Chain, Conditional Dependency Network, Bayesian Network Classifier, Conditional Random Field, Sum-Product Network), puis nous proposons une approche générique visant à identifier une factorisation de p(y|x) en distributions marginales disjointes, en s'inspirant des méthodes d'apprentissage de structure à base de contraintes. Nous démontrons plusieurs résultats théoriques, notamment l'unicité d'une décomposition minimale, ainsi que trois procédures quadratiques sous diverses hypothèses à propos de la distribution jointe p(x, y). Enfin, nous mettons en pratique ces résultats afin d'améliorer la classification multi-label avec les fonctions coût F-loss et zero-one loss
In this thesis, we address the specific problem of probabilistic graphical model structure learning, that is, finding the most efficient structure to represent a probability distribution, given only a sample set D ∼ p(v). In the first part, we review the main families of probabilistic graphical models from the literature, from the most common (directed, undirected) to the most advanced ones (chained, mixed etc.). Then we study particularly the problem of learning the structure of directed graphs (Bayesian networks), and we propose a new hybrid structure learning method, H2PC (Hybrid Hybrid Parents and Children), which combines a constraint-based approach (statistical independence tests) with a score-based approach (posterior probability of the structure). In the second part, we address the multi-label classification problem, which aims at assigning a set of categories (binary vector y ∈ {0, 1}^m) to a given object (vector x ∈ ℝ^d). In this context, probabilistic graphical models provide convenient means of encoding p(y|x), particularly for the purpose of minimizing general loss functions. We review the main approaches based on PGMs for multi-label classification (Probabilistic Classifier Chain, Conditional Dependency Network, Bayesian Network Classifier, Conditional Random Field, Sum-Product Network), and propose a generic approach inspired from constraint-based structure learning methods to identify the unique partition of the label set into irreducible label factors (ILFs), that is, the irreducible factorization of p(y|x) into disjoint marginal distributions. We establish several theoretical results to characterize the ILFs based on the compositional graphoid axioms, and obtain three generic procedures under various assumptions about the conditional independence properties of the joint distribution p(x, y). Our conclusions are supported by carefully designed multi-label classification experiments, under the F-loss and the zero-one loss functions.
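H2PC itself is not packaged in common Python libraries, but the same hybrid idea (a constraint-based step that restricts candidate parents and children, followed by a score-based search) is implemented as MMHC. The sketch below runs it on synthetic data, assuming pgmpy's MmhcEstimator; it illustrates this family of algorithms, not the thesis's method.

```python
# Hybrid structure learning in the MMHC family, assuming pgmpy's MmhcEstimator:
# a constraint-based step restricts each node's candidate neighbours, then a
# score-based hill climb searches within that restricted space.
import numpy as np
import pandas as pd
from pgmpy.estimators import MmhcEstimator

rng = np.random.default_rng(0)
n = 2000
a = rng.integers(0, 2, n)
b = rng.integers(0, 2, n)
c = (a ^ b) | (rng.random(n) < 0.05)          # C depends on A and B, plus noise
d = c & (rng.random(n) < 0.9)                 # D depends on C
data = pd.DataFrame({"A": a, "B": b, "C": c.astype(int), "D": d.astype(int)})

dag = MmhcEstimator(data).estimate()
print(sorted(dag.edges()))
```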
APA, Harvard, Vancouver, ISO, and other styles
12

Ben, Mrad Ali. "Observations probabilistes dans les réseaux bayésiens." Thesis, Valenciennes, 2015. http://www.theses.fr/2015VALE0018/document.

Full text
Abstract:
Dans un réseau bayésien, une observation sur une variable signifie en général que cette variable est instanciée. Ceci signifie que l’observateur peut affirmer avec certitude que la variable est dans l’état signalé. Cette thèse porte sur d’autres types d’observations, souvent appelées observations incertaines, qui ne peuvent pas être représentées par la simple affectation de la variable. Cette thèse clarifie et étudie les différents concepts d’observations incertaines et propose différentes applications des observations incertaines dans les réseaux bayésiens.Nous commençons par dresser un état des lieux sur les observations incertaines dans les réseaux bayésiens dans la littérature et dans les logiciels, en termes de terminologie, de définition, de spécification et de propagation. Il en ressort que le vocabulaire n'est pas clairement établi et que les définitions proposées couvrent parfois des notions différentes.Nous identifions trois types d’observations incertaines dans les réseaux bayésiens et nous proposons la terminologie suivante : observation de vraisemblance, observation probabiliste fixe et observation probabiliste non-fixe. Nous exposons ensuite la façon dont ces observations peuvent être traitées et propagées.Enfin, nous donnons plusieurs exemples d’utilisation des observations probabilistes fixes dans les réseaux bayésiens. Le premier exemple concerne la propagation d'observations sur une sous-population, appliquée aux systèmes d'information géographique. Le second exemple concerne une organisation de plusieurs agents équipés d'un réseau bayésien local et qui doivent collaborer pour résoudre un problème. Le troisième exemple concerne la prise en compte d'observations sur des variables continues dans un RB discret. Pour cela, l'algorithme BN-IPFP-1 a été implémenté et utilisé sur des données médicales de l'hôpital Bourguiba de Sfax
In a Bayesian network, evidence on a variable usually signifies that this variable is instantiated, meaning that the observer can affirm with certainty that the variable is in the signaled state. This thesis focuses on other types of evidence, often called uncertain evidence, which cannot be represented by the simple assignment of the variable. This thesis clarifies and studies different concepts of uncertain evidence in a Bayesian network and offers various applications of uncertain evidence in Bayesian networks. Firstly, we present a review of uncertain evidence in Bayesian networks in terms of terminology, definition, specification and propagation. It shows that the vocabulary is not clear and that some terms are used to represent different concepts. We identify three types of uncertain evidence in Bayesian networks and we propose the following terminology: likelihood evidence, fixed probabilistic evidence and not-fixed probabilistic evidence. We define them and describe updating algorithms for the propagation of uncertain evidence. Finally, we propose several examples of the use of fixed probabilistic evidence in Bayesian networks. The first example concerns evidence on a subpopulation, applied in the context of a geographical information system. The second example is an organization of agent-encapsulated Bayesian networks that have to collaborate to solve a problem. The third example concerns the transformation of evidence on continuous variables into fixed probabilistic evidence. The algorithm BN-IPFP-1 has been implemented and used on medical data from CHU Habib Bourguiba in Sfax.
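The distinction drawn above can be shown on a single binary variable with plain arithmetic: likelihood evidence multiplies the prior by a likelihood vector and renormalizes, whereas fixed probabilistic evidence forces the posterior marginal to a stated distribution (Jeffrey's rule on one variable). The numbers below are a generic illustration, not the thesis's BN-IPFP-1 algorithm.

```python
# Likelihood evidence vs. fixed probabilistic evidence on one binary variable X,
# propagated to a child Y. Generic illustration of the concepts defined above.
import numpy as np

p_x = np.array([0.7, 0.3])                  # prior P(X)
p_y_given_x = np.array([[0.9, 0.2],         # P(Y=0 | X), columns indexed by X
                        [0.1, 0.8]])        # P(Y=1 | X)

# Likelihood evidence: an observation is 4 times more likely if X=1 than if X=0.
likelihood = np.array([1.0, 4.0])
post_x_likelihood = p_x * likelihood
post_x_likelihood /= post_x_likelihood.sum()

# Fixed probabilistic evidence: the observer asserts that P(X) must become (0.2, 0.8),
# regardless of the prior (Jeffrey's rule applied to a single variable).
post_x_fixed = np.array([0.2, 0.8])

for name, post_x in [("likelihood", post_x_likelihood), ("fixed", post_x_fixed)]:
    post_y = p_y_given_x @ post_x           # propagate the updated belief to Y
    print(f"{name:10s} P(X)={post_x.round(3)}  P(Y)={post_y.round(3)}")
```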
APA, Harvard, Vancouver, ISO, and other styles
13

FALOTICO, ROSA. "Modelli ad Equazioni Strutturali e Reti Probabilistiche Bayesiane: due approcci a confronto nello studio di relazioni causali." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2011. http://hdl.handle.net/10281/19444.

Full text
Abstract:
Causality is an extremely important concept in science, but its definition is quite controversial and its detection is not exempt from problems. There are different approaches to deal with causality. Two of them are Structural Equation Models (SEMs) and Probabilistic Bayesian Networks (PBNs). SEMs are confirmatory models. Given a causal structure, they test whether it is coherent with the data. In this context they are estimated using the Partial Least Squares Path Modeling (PLS-PM) technique in order to obtain the scores of the latent variables. PBNs are inductive methods. They attempt to extract the causal scheme from the data, without presupposing any prior knowledge. Both models present advantages and disadvantages with respect to the causality approach they refer to. SEMs are best suited for quantitative data and when there is solid theoretical knowledge on the subject of analysis. PBNs are preferable for nonlinear analysis or when the causal scheme is uncertain. In the thesis, a possible integration of the two methods is proposed in the analysis of data from a customer satisfaction and loyalty survey based on the American Customer Satisfaction Index (ACSI). Results suggest that SEMs are more suitable than PBNs and that the integration of the two statistical models is only partly advantageous. This is related to the kind of data, since the ACSI survey is structured for a PLS-PM analysis. Thus, it could be very interesting to repeat the comparison for different types of data.
APA, Harvard, Vancouver, ISO, and other styles
14

Petiet, Florence. "Réseau bayésien dynamique hybride : application à la modélisation de la fiabilité de systèmes à espaces d'états discrets." Thesis, Paris Est, 2019. http://www.theses.fr/2019PESC2014/document.

Full text
Abstract:
L'analyse de fiabilité fait partie intégrante de la conception et du fonctionnement du système, en particulier pour les systèmes exécutant des applications critiques. Des travaux récents ont montré l'intérêt d'utiliser les réseaux bayésiens dans le domaine de la fiabilité, pour modélisation la dégradation d'un système. Les modèles graphiques de durée sont un cas particulier des réseaux bayésiens, qui permettent de s'affranchir de la propriété markovienne des réseaux bayésiens dynamiques. Ils s'adaptent aux systèmes dont le temps de séjour dans chaque état n'est pas nécessairement distribué exponentiellement, comme c'est le cas dans la plupart des applications industrielles. Des travaux antérieurs ont toutefois montré des limitations à ces modèles en terme de capacité de stockage et de temps de calcul, en raison du caractère discret de la variable temps de séjour. Une solution pourrait consister à considérer une variable de durée continue. Selon les avis d'experts, les variables de temps de séjour suivent une distribution de Weibull dans de nombreux systèmes. L'objectif de la thèse est d'intégrer des variables de temps de séjour suivant une distribution de Weibull dans un modèle de durée graphique en proposant une nouvelle approche. Après une présentation des réseaux bayésiens, et plus particulièrement des modèles graphiques de durée et leur limitation, ce rapport s'attache à présenter le nouveau modèle permettant la modélisation du processus de dégradation. Ce nouveau modèle est appelé modèle graphique de durée hybride Weibull. Un algorithme original permettant l'inférence dans un tel réseau a été mis en place. L'étape suivante a été la validation de l'approche. Ne disposant pas de données, il a été nécessaire de simuler des séquences d'états du système. Différentes bases de données ainsi construites ont permis d'apprendre d'un part un modèle graphique de durée, et d'autre part un modèle graphique de durée hybride-Weibull, afin de les comparer, que ce soit en terme de qualité d’apprentissage, de qualité d’inférence, de temps de calcul, et de capacité de stockage
Reliability analysis is an integral part of system design and operation, especially for systems running critical applications. Recent works have shown the value of using Bayesian Networks in the field of reliability for modeling the degradation of a system. Graphical Duration Models are a specific case of Bayesian Networks which make it possible to overcome the Markovian property of dynamic Bayesian Networks. They adapt to systems whose sojourn time in each state is not necessarily exponentially distributed, which is the case for most industrial applications. Previous works, however, have shown limitations of these models in terms of storage capacity and computing time, due to the discrete nature of the sojourn time variable. A solution might be to allow the sojourn time variable to be continuous. According to expert opinion, sojourn time variables follow a Weibull distribution in many systems. The goal of this thesis is to integrate sojourn time variables following a Weibull distribution into a Graphical Duration Model by proposing a new approach. After a presentation of Bayesian networks, and more particularly graphical duration models and their limitations, this report focuses on presenting the new model allowing the modeling of the degradation process. This new model is called the Weibull Hybrid Graphical Duration Model. An original algorithm allowing inference in such a network has been developed. The next step was the validation of the approach. Since no real data were available, sequences of system states were simulated. The databases built in this way were used to learn, on the one hand, a Graphical Duration Model and, on the other hand, a Weibull Hybrid Graphical Duration Model, in order to compare them in terms of learning quality, inference quality, computation time, and storage space.
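A sense of why a Weibull sojourn time matters can be given in a few lines: the sketch below simulates a toy degradation process whose dwell times in each state are Weibull-distributed rather than exponential, using only numpy. It is a toy semi-Markov trace, not the hybrid model or the inference algorithm of the thesis.

```python
# Toy semi-Markov degradation trace with Weibull-distributed sojourn times,
# illustrating the non-exponential dwell times that motivate the model above.
import numpy as np

rng = np.random.default_rng(42)
states = ["OK", "degraded", "failed"]
shape = {"OK": 2.5, "degraded": 1.8}      # Weibull shape k > 1: ageing behaviour
scale = {"OK": 1000.0, "degraded": 200.0} # Weibull scale (hours)

def sample_history():
    t, s, history = 0.0, "OK", []
    while s != "failed":
        dwell = scale[s] * rng.weibull(shape[s])   # sojourn time in current state
        history.append((s, dwell))
        t += dwell
        s = states[states.index(s) + 1]            # simple monotone degradation
    return t, history

lifetimes = np.array([sample_history()[0] for _ in range(10_000)])
print(f"mean time to failure ~ {lifetimes.mean():.0f} h")
```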
APA, Harvard, Vancouver, ISO, and other styles
15

SYED, MUHAMMAD FARRUKH SHAHID. "Data-Driven Approach based on Deep Learning and Probabilistic Models for PHY-Layer Security in AI-enabled Cognitive Radio IoT." Doctoral thesis, Università degli studi di Genova, 2021. http://hdl.handle.net/11567/1048543.

Full text
Abstract:
Cognitive Radio Internet of Things (CR-IoT) has revolutionized almost every field of life and reshaped the technological world. Several tiny devices are seamlessly connected in a CR-IoT network to perform various tasks in many applications. Nevertheless, CR-IoT suffers from malicious attacks that disrupt communication and perturb network performance. Therefore, it has recently been envisaged to introduce higher-level Artificial Intelligence (AI) by incorporating Self-Awareness (SA) capabilities into CR-IoT objects, enabling CR-IoT networks to autonomously establish secure transmission against malicious attacks. In this context, sub-band information from the Orthogonal Frequency Division Multiplexing (OFDM) modulated transmission in the spectrum is extracted at the radio device receiver terminal, and a generalized state vector (GS) is formed containing low-dimensional in-phase and quadrature components. Accordingly, a probabilistic method based on learning a switching Dynamic Bayesian Network (DBN) from OFDM transmission with no abnormalities is proposed to statistically model signal behaviors inside the CR-IoT spectrum. A Bayesian filter, the Markov Jump Particle Filter (MJPF), is implemented to perform state estimation and capture malicious attacks. Subsequently, a GS containing a higher number of subcarriers is investigated. In this connection, a Variational Autoencoder (VAE) is used as a deep learning technique to map high-dimensional radio signals into a low-dimensional latent space z, and the DBN is learned from a GS containing the latent-space data. Afterward, to perform state estimation and capture abnormalities in the spectrum, an Adapted Markov Jump Particle Filter (A-MJPF) is deployed. The proposed method can capture anomalies that appear either because of jammer attacks on the transmission or because cognitive devices in the network experience transmission sources that have not been observed previously. The performance is assessed using receiver operating characteristic (ROC) curves and the area under the curve (AUC) metric.
APA, Harvard, Vancouver, ISO, and other styles
16

Koudelka, Vlastimil. "Pravděpodobnostní neuronové sítě pro speciální úlohy v elektromagnetismu." Doctoral thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2014. http://www.nusl.cz/ntk/nusl-233661.

Full text
Abstract:
This thesis deals with behavioral modeling techniques for special tasks in electromagnetism that can be formulated as problems of approximation, classification, probability density estimation, or combinatorial optimization. The investigated methods touch on two fundamental problems of machine learning and combinatorial optimization: the bias versus variance dilemma and NP computational complexity. A Boltzmann machine is proposed in this work for the simplification of complex impedance networks. The Bayesian approach to machine learning is adapted to the regularization of the Parzen window, with the aim of establishing a general criterion for the regularization of probabilistic and regression neural networks.
APA, Harvard, Vancouver, ISO, and other styles
17

Tembo, Mouafo Serge Romaric. "Applications de l'intelligence artificielle à la détection et l'isolation de pannes multiples dans un réseau de télécommunications." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2017. http://www.theses.fr/2017IMTA0004/document.

Full text
Abstract:
Les réseaux de télécommunication doivent être fiables et robustes pour garantir la haute disponibilité des services. Les opérateurs cherchent actuellement à automatiser autant que possible les opérations complexes de gestion des réseaux, telles que le diagnostic de pannes.Dans cette thèse nous nous sommes intéressés au diagnostic automatique de pannes dans les réseaux d'accès optiques de l'opérateur Orange. L'outil de diagnostic utilisé jusqu'à présent, nommé DELC, est un système expert à base de règles de décision. Ce système est performant mais difficile à maintenir en raison, en particulier, du très grand volume d'informations à analyser. Il est également impossible de disposer d'une règle pour chaque configuration possible de panne, de sorte que certaines pannes ne sont actuellement pas diagnostiquées.Dans cette thèse nous avons proposé une nouvelle approche. Dans notre approche, le diagnostic des causes racines des anomalies et alarmes observées s'appuie sur une modélisation probabiliste, de type réseau bayésien, des relations de dépendance entre les différentes alarmes, compteurs, pannes intermédiaires et causes racines au niveau des différents équipements de réseau. Ce modèle probabiliste a été conçu de manière modulaire, de façon à pouvoir évoluer en cas de modification de l'architecture physique du réseau.Le diagnostic des causes racines des anomalies est effectué par inférence, dans le réseau bayésien, de l'état des noeuds non observés au vu des observations (compteurs, alarmes intermédiaires, etc...) récoltées sur le réseau de l'opérateur. La structure du réseau bayésien, ainsi que l'ordre de grandeur des paramètres probabilistes de ce modèle, ont été déterminés en intégrant dans le modèle les connaissances des experts spécialistes du diagnostic sur ce segment de réseau. L'analyse de milliers de cas de diagnostic de pannes a ensuite permis de calibrer finement les paramètres probabilistes du modèle grâce à un algorithme EM (Expectation Maximization).Les performances de l'outil développé, nommé PANDA, ont été évaluées sur deux mois de diagnostic de panne dans le réseau GPON-FTTH d'Orange en juillet-août 2015. Dans la plupart des cas, le nouveau système, PANDA, et le système en production, DELC, font un diagnostic identique. Cependant un certain nombre de cas sont non diagnostiqués par DELC mais ils sont correctement diagnostiqués par PANDA. Les cas pour lesquels les deux systèmes émettent des diagnostics différents ont été évalués manuellement, ce qui a permis de démontrer dans chacun de ces cas la pertinence des décisions prises par PANDA
Telecommunication networks must be reliable and robust to ensure high availability of services. Operators are currently seeking to automate complex network management operations, such as fault diagnosis, as much as possible. In this thesis we focus on self-diagnosis of failures in the optical access networks of the operator Orange. The diagnostic tool used up to now, called DELC, is an expert system based on decision rules. This system is efficient but difficult to maintain, due in particular to the very large volume of information to analyze. It is also impossible to have a rule for each possible fault configuration, so that some faults are currently not diagnosed. We proposed a new approach in this thesis. In our approach, the diagnosis of the root causes of malfunctions and alarms is based on a Bayesian network probabilistic model of the dependency relationships between the different alarms, counters, intermediate faults and root causes at the level of the various network components. This probabilistic model has been designed in a modular way, so as to be able to evolve in case of modification of the physical architecture of the network. Self-diagnosis of the root causes of malfunctions and alarms is performed by inferring, in the Bayesian network model, the state of the unobserved nodes given the observations (counters, alarms, etc.) collected on the operator's network. The structure of the Bayesian network, as well as the order of magnitude of the probabilistic parameters of this model, were determined by integrating into the model the knowledge of the diagnostic experts on this segment of the network. The analysis of thousands of fault diagnosis cases then made it possible to fine-tune the probabilistic parameters of the model thanks to an Expectation Maximization (EM) algorithm. The performance of the developed probabilistic tool, named PANDA, was evaluated over two months of fault diagnosis in Orange's GPON-FTTH network in July-August 2015. In most cases, the new system, PANDA, and the system in production, DELC, make an identical diagnosis. However, a number of cases are not diagnosed by DELC but are correctly diagnosed by PANDA. The cases for which the diagnoses of the two systems differ were evaluated manually, which made it possible to demonstrate in each of these cases the relevance of the decisions taken by PANDA.
APA, Harvard, Vancouver, ISO, and other styles
18

Rodriguez, Martinez Andres Florencio. "A probabilistic examplar based model." Thesis, University of Salford, 1998. http://usir.salford.ac.uk/14725/.

Full text
Abstract:
A central problem in case-based reasoning (CBR) is how to store and retrieve cases. One approach to this problem is to use exemplar-based models, where only the prototypical cases are stored. However, the development of an exemplar-based model (EBM) requires the solution of several problems: (i) how can an EBM be represented? (ii) given a new case, how can a suitable exemplar be retrieved? (iii) what makes a good exemplar? (iv) how can an EBM be learned incrementally? This thesis develops a new model, called a probabilistic exemplar-based model, that addresses these research questions. The model utilizes Bayesian networks to develop a suitable representation and uses probability theory to develop the foundations of the developed model. A probability propagation method is used to retrieve exemplars when a new case is presented and for assessing the prototypicality of an exemplar. The model learns incrementally by revising the exemplars retained and by updating the conditional probabilities required by the Bayesian network. The problem of ignorance, encountered when only a few cases have been observed, is tackled by introducing the concept of a virtual exemplar to represent all the unseen cases. The model is implemented in C and evaluated on three datasets. It is also contrasted with related work in CBR and machine learning (ML).
APA, Harvard, Vancouver, ISO, and other styles
19

Todeschini, Adrien. "Probabilistic and Bayesian nonparametric approaches for recommender systems and networks." Thesis, Bordeaux, 2016. http://www.theses.fr/2016BORD0237/document.

Full text
Abstract:
Nous proposons deux nouvelles approches pour les systèmes de recommandation et les réseaux. Dans la première partie, nous donnons d’abord un aperçu sur les systèmes de recommandation avant de nous concentrer sur les approches de rang faible pour la complétion de matrice. En nous appuyant sur une approche probabiliste, nous proposons de nouvelles fonctions de pénalité sur les valeurs singulières de la matrice de rang faible. En exploitant une représentation de modèle de mélange de cette pénalité, nous montrons qu’un ensemble de variables latentes convenablement choisi permet de développer un algorithme espérance-maximisation afin d’obtenir un maximum a posteriori de la matrice de rang faible complétée. L’algorithme résultant est un algorithme à seuillage doux itératif qui adapte de manière itérative les coefficients de réduction associés aux valeurs singulières. L’algorithme est simple à mettre en œuvre et peut s’adapter à de grandes matrices. Nous fournissons des comparaisons numériques entre notre approche et de récentes alternatives montrant l’intérêt de l’approche proposée pour la complétion de matrice à rang faible. Dans la deuxième partie, nous présentons d’abord quelques prérequis sur l’approche bayésienne non paramétrique et en particulier sur les mesures complètement aléatoires et leur extension multivariée, les mesures complètement aléatoires composées. Nous proposons ensuite un nouveau modèle statistique pour les réseaux creux qui se structurent en communautés avec chevauchement. Le modèle est basé sur la représentation du graphe comme un processus ponctuel échangeable, et généralise naturellement des modèles probabilistes existants à structure en blocs avec chevauchement au régime creux. Notre construction s’appuie sur des vecteurs de mesures complètement aléatoires, et possède des paramètres interprétables, chaque nœud étant associé un vecteur représentant son niveau d’affiliation à certaines communautés latentes. Nous développons des méthodes pour simuler cette classe de graphes aléatoires, ainsi que pour effectuer l’inférence a posteriori. Nous montrons que l’approche proposée peut récupérer une structure interprétable à partir de deux réseaux du monde réel et peut gérer des graphes avec des milliers de nœuds et des dizaines de milliers de connections
We propose two novel approaches for recommender systems and networks. In the first part, we first give an overview of recommender systems and concentrate on the low-rank approaches for matrix completion. Building on a probabilistic approach, we propose novel penalty functions on the singular values of the low-rank matrix. By exploiting a mixture model representation of this penalty, we show that a suitably chosen set of latent variables enables us to derive an expectation-maximization algorithm to obtain a maximum a posteriori estimate of the completed low-rank matrix. The resulting algorithm is an iterative soft-thresholding algorithm which iteratively adapts the shrinkage coefficients associated with the singular values. The algorithm is simple to implement and can scale to large matrices. We provide numerical comparisons between our approach and recent alternatives showing the interest of the proposed approach for low-rank matrix completion. In the second part, we first introduce some background on Bayesian nonparametrics and in particular on completely random measures (CRMs) and their multivariate extension, the compound CRMs. We then propose a novel statistical model for sparse networks with overlapping community structure. The model is based on representing the graph as an exchangeable point process, and naturally generalizes existing probabilistic models with overlapping block-structure to the sparse regime. Our construction builds on vectors of CRMs and has interpretable parameters, each node being assigned a vector representing its level of affiliation to some latent communities. We develop methods for simulating this class of random graphs, as well as to perform posterior inference. We show that the proposed approach can recover interpretable structure from two real-world networks and can handle graphs with thousands of nodes and tens of thousands of edges.
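The iterative soft-thresholding scheme summarized above can be written compactly. Below is a generic SVD soft-thresholding completion loop with a fixed threshold, assuming only numpy; it is a simplified stand-in for the adaptive-shrinkage EM algorithm derived in the thesis.

```python
# Generic soft-thresholded SVD loop for low-rank matrix completion (fixed threshold),
# a simplified stand-in for the adaptive-shrinkage EM algorithm described above.
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 100, 80, 3
M = rng.normal(size=(m, r)) @ rng.normal(size=(r, n))   # ground-truth low-rank matrix
mask = rng.random((m, n)) < 0.4                          # 40% of entries observed

X, tau = np.zeros((m, n)), 5.0
for _ in range(200):
    X[mask] = M[mask]                                    # enforce observed entries
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt       # soft-threshold singular values

err = np.linalg.norm((X - M)[~mask]) / np.linalg.norm(M[~mask])
print(f"relative error on unobserved entries: {err:.3f}")
```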
APA, Harvard, Vancouver, ISO, and other styles
20

Luo, Zhiyuan. "A probabilistic reasoning and learning system based on Bayesian belief networks." Thesis, Heriot-Watt University, 1992. http://hdl.handle.net/10399/1490.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Ben, Ishak Mouna. "Probabilistic relational models : learning and evaluation : The relational Bayesian networks case." Nantes, 2015. https://archive.bu.univ-nantes.fr/pollux/show/show?id=28b20e1e-99b3-4956-b3aa-67c1e94e4790.

Full text
Abstract:
L’apprentissage statistique relationnel est apparu au début des années 2000 comme un nouveau domaine de l’apprentissage machine permettant de raisonner d’une manière efficace et robuste directement sur des structures de données relationnelles. Plusieurs méthodes classiques de fouille de données ont été adaptées pour application directe sur des données relationnelles. Les réseaux Bayésiens Relationnels (RBR) présentent une extension des réseaux Bayésiens (RB) dans ce contexte. Pour se servir de ce modèle, il faut tout d’abord le construire : la structure et les paramètres du RBR doivent être définis à la main ou être appris à partir d’une instance de base de données relationnelle. L’apprentissage de la structure reste toujours le problème le plus compliqué puisqu’il se situe dans la classe des problèmes NP-difficiles. Les méthodes d’apprentissage de la structure des RBR existantes sont inspirées des méthodes classique de l’apprentissage de la structure des RB. Pour pouvoir juger la qualité d’un algorithme d’apprentissage de la structure d’un RBR, il faut avoir des données de test et des mesures d’évaluation. Pour les RB les données sont souvent issues de benchmarks existants. Sinon, des processus de génération aléatoire du modèle et des données sont mis en oeuvre. Les deux pratiques sont quasi absentes pour les RBR. De plus, les mesures d’évaluation de la qualité d’un algorithme d ’apprentissage de la structure d’un RBR ne sont pas encore établies. Dans ce travail de thèse, nous proposons deux contributions majeures. I)Une approche de génération de RBR allant de la génération du schéma relationnel, de la structure de dépendance et des tables de probabilités à l’instanciation de ce modèle et la population d’une base de données relationnelle. Nous discutons aussi de l’adaptation des mesures d’évaluation des algorithmes d’apprentissage de RBs dans le contexte relationnel et nous proposons de nouvelles mesures d’évaluation. II) Une approche hybride pour l’apprentissage de la structure des RBR. Cette approche présente une extension de l’algorithme MMHC dans le contexte relationnel. Nous menons une étude expérimentale permettant de comparer ce nouvel algorithme d’apprentissage avec les approches déjà existantes
Statistical relational learning (SRL) appeared in the early 2000s as a new field of machine learning that enables effective and robust reasoning about relational data structures. Several conventional data mining methods have been adapted for direct application to relational data representations. Relational Bayesian Networks (RBNs) extend Bayesian networks (BNs) to the relational data mining context. To use this model, it is first necessary to build it: the structure and parameters of an RBN must be set manually or learned from a relational observational dataset. Learning the structure remains the most complicated issue, as it is an NP-hard problem. Existing approaches for RBN structure learning are inspired by classical methods for learning the structure of BNs. The evaluation of learning approaches requires test datasets and evaluation measures. For BNs, datasets are usually sampled from real, known networks; otherwise, processes to randomly generate the model and the data are already established. Both practices are almost absent for RBNs. Moreover, metrics to evaluate an RBN structure learning algorithm have not yet been proposed. This thesis provides two major contributions. I) A synthetic approach for generating random RBNs from scratch. The proposed method generates RBNs as well as synthetic relational data from a randomly generated relational schema and a random set of probabilistic dependencies. We also discuss the adaptation of the evaluation metrics of BN structure learning algorithms to the relational context and propose new relational evaluation measures. II) A hybrid approach for RBN structure learning, which extends the MMHC algorithm to the relational context. We present an experimental study comparing this new learning algorithm with state-of-the-art approaches.
APA, Harvard, Vancouver, ISO, and other styles
22

Ancell, Trueba Rafael. "Aportaciones de las redes bayesianas en meteorología. Predicción probabilística de precipitación. Applications of Bayesian Networks in Meteorology. Probabilistic Forecast of Precipitation." Doctoral thesis, Universidad de Cantabria, 2009. http://hdl.handle.net/10803/113596.

Full text
Abstract:
Esta tesis está dirigida principalmente a investigadores interesados en la aplicación de técnicas de minería de datos en Meteorología y otras ciencias medioambientales afines. De forma genérica, trata de la modelización probabilística de sistemas definidos por muchas variables, cuyas relaciones de dependencia son inferidas a partir de un conjunto representativo de datos. La idea es resolver algunos problemas prácticos relacionados con el diagnóstico y la predicción probabilística local en Meteorología, considerando el problema de la coherencia espacial. En concreto, el eje central de esta tesis ha sido el desarrollo de redes Bayesianas, para su aplicación en la predicción probabilística local.
This thesis is mainly oriented to researchers interested in data mining techniques applied to Meteorology and other related environmental sciences. It uses probabilistic models to describe systems defined by many variables whose dependencies have to be inferred from a representative set of data. The main purpose is to solve practical problems related to diagnosis and local probabilistic forecasting in Meteorology, taking the problem of spatial coherence into account. Specifically, the focus of this thesis has been the development of Bayesian networks to be applied to local probabilistic forecasting.
APA, Harvard, Vancouver, ISO, and other styles
23

MANFREDOTTI, CRISTINA ELENA. "Modeling and inference with relational dynamic bayesian networks." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2010. http://hdl.handle.net/10281/7829.

Full text
Abstract:
Many domains in the real world are richly structured, containing a diverse set of agents characterized by different sets of features and related to each other in a variety of ways. Moreover, uncertainty may be present both in the observations of the objects and in their relations. This is the case in many problems such as multi-target tracking, activity recognition, automatic surveillance and traffic monitoring. The common ground of these types of problems is the necessity of recognizing and understanding the scene, the activities that are going on, who the actors are, what their roles are, and estimating their positions. When the environment is particularly complex, including several distinct entities whose behaviors might be correlated, automated reasoning becomes particularly challenging. Even in cases where humans can easily recognize activities, current computer programs fail because they lack commonsense reasoning and because of the current limitations of automated reasoning systems. As a result, surveillance supervision is so far mostly delegated to humans. The explicit representation of the interconnected behaviors of agents can provide better models for capturing key elements of the activities in the scene. In this Thesis we propose the use of relations to model particular correlations between agents' features, aimed at improving the inference task. We propose the use of Relational Dynamic Bayesian Networks, an extension of Dynamic Bayesian Networks with First Order Logic, to represent the dependencies between an agent's attributes, the scene's elements and the evolution of state variables over time. In this way, we can combine the advantages of First Order Logic (which can compactly represent structured environments) with those of probabilistic models (which provide a mathematically sound framework for inference in the face of uncertainty). In particular, we investigate the use of Relational Dynamic Bayesian Networks to represent the dependencies between the agents' behaviors in the context of multi-agent tracking and activity recognition. We propose a new formulation of the transition model that accommodates relations, and present a filtering algorithm that extends the Particle Filter algorithm in order to directly track relations between the agents. The explicit recognition of the relationships between interacting objects can improve the understanding of their dynamic domain. The inference algorithm we develop in this Thesis is able to take relations between interacting objects into account, and we demonstrate with experiments that our relational approach outperforms standard non-relational methods. While the goal of emulating human-level inference in scene understanding is out of reach for the current state of the art, we believe that this work represents an important step towards better algorithms and models for inference in complex multi-agent systems. Another advantage of our probabilistic model is its ability to make inferences online, so that the appropriate course of action can be taken when necessary (e.g., raising an alarm). This is an important requirement for the adoption of automatic surveillance systems in the real world, and it helps avoid the common problems associated with human surveillance.
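The filtering idea mentioned in this abstract can be sketched with a plain bootstrap particle filter; the thesis extends this scheme so that each particle also carries the relations between tracked agents, whereas the hypothetical example below only tracks a single scalar state.

import numpy as np

def bootstrap_particle_filter(observations, n_particles, init_sample,
                              transition_sample, likelihood):
    # Each particle is a sampled state; weights come from the observation likelihood.
    particles = np.array([init_sample() for _ in range(n_particles)])
    for y in observations:
        particles = np.array([transition_sample(x) for x in particles])   # predict
        weights = np.array([likelihood(y, x) for x in particles])         # update
        weights /= weights.sum()
        idx = np.random.choice(n_particles, size=n_particles, p=weights)  # resample
        particles = particles[idx]
    return particles   # approximate samples from the filtering distribution

# toy usage: a 1-D random-walk state observed through Gaussian noise
observations = [0.1, 0.3, 0.2, 0.5]
posterior_particles = bootstrap_particle_filter(
    observations, 500,
    init_sample=lambda: np.random.normal(0.0, 1.0),
    transition_sample=lambda x: x + np.random.normal(0.0, 0.1),
    likelihood=lambda y, x: np.exp(-0.5 * ((y - x) / 0.2) ** 2),
)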
APA, Harvard, Vancouver, ISO, and other styles
24

Kieling, Gustavo Luiz. "Inserção de conhecimento probabilístico para construção de agentes BDI modelados em redes bayesianas." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2011. http://hdl.handle.net/10183/28741.

Full text
Abstract:
A representação do conhecimento de maneira mais fiel possível à realidade é uma meta histórica e não resolvida até o momento na área da Inteligência Artificial. Problemas são resolvidos e decisões são tomadas levando-se em conta diversos tipos de conhecimentos, os quais muitos são tendenciosos, inexatos, ambíguos ou ainda incompletos. A fim de tentar emular a capacidade de representação do conhecimento humano, levando-se em conta as diversas dificuldades inerentes, tem-se construído sistemas computacionais que armazenam o conhecimento das mais diversas formas. Dentro deste contexto, este trabalho propõe um experimento que utiliza duas formas distintas de representação do conhecimento: a simbólica, neste caso BDI, e a probabilística, neste caso Redes Bayesianas. Para desenvolvermos uma prova de conceito desta proposta de representação do conhecimento estamos utilizando exemplos que serão construídos através da tecnologia de programação voltada para agentes. Para tal, foi desenvolvida uma implementação de um Sistema MultiAgente, estendendo o framework Jason através da implementação de um plugin chamado COPA. Para a representação do conhecimento probabilístico, utilizamos uma ferramenta de construção de Redes Bayesianas, também adaptada a este sistema. Os estudos de caso mostraram melhorias no gerenciamento do conhecimento incerto em relação às abordagens de construções de agentes BDI clássicos, ou seja, que não utilizam conhecimento probabilístico.
Achieving a faithful representation of knowledge is a historic and still unreached goal in the area of Artificial Intelligence. Problems are solved and decisions are made taking into consideration different kinds of knowledge, many of which are biased, inaccurate, ambiguous or incomplete. Computational systems that store knowledge in many different ways have been built in order to emulate the capacity of human knowledge representation, taking into consideration its several inherent difficulties. Within this context, this work proposes an experiment that utilizes two distinct ways of representing knowledge: symbolic, BDI in this case, and probabilistic, Bayesian Networks in this case. In order to develop a proof of concept of this proposal for knowledge representation, examples built with agent-oriented programming technology are used. For that, an implementation of a MultiAgent System was developed, extending the Jason framework through the implementation of a plugin called COPA. For the representation of probabilistic knowledge, a Bayesian Network building tool, also adapted to this system, was used. The case studies showed improvements in the management of uncertain knowledge in relation to classic BDI agent construction approaches, i.e., those that do not use probabilistic knowledge.
APA, Harvard, Vancouver, ISO, and other styles
25

Haußmann, Manuel [Verfasser], and Fred A. [Akademischer Betreuer] Hamprecht. "Bayesian Neural Networks for Probabilistic Machine Learning / Manuel Haußmann ; Betreuer: Fred A. Hamprecht." Heidelberg : Universitätsbibliothek Heidelberg, 2021. http://d-nb.info/1239116233/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Lingasubramanian, Karthikeyan. "Estimation of switching activity in sequential circuits using dynamic Bayesian Networks." [Tampa, Fla.] : University of South Florida, 2004. http://purl.fcla.edu/fcla/etd/SFE0000411.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Maturana, Marcos Coelho. "Aplicação de Redes Bayesianas na análise da contribuição do erro humano em acidentes de colisão." Universidade de São Paulo, 2010. http://www.teses.usp.br/teses/disponiveis/3/3135/tde-11082010-165909/.

Full text
Abstract:
Recentemente, na indústria naval, a normatização por sociedades classificadoras e pela IMO (International Maritime Organization) tem apresentado uma mudança paulatina, migrando dos procedimentos prescritivos para uma estrutura regulatória baseada em risco. Tal perspectiva oferece algumas vantagens para operadores e armadores (empresas que exploram comercialmente as embarcações): 1) maior capacidade de incorporar projetos inovadores, tecnicamente superiores, a custos aceitáveis; 2) maior confiança quanto à segurança; 3) melhor entendimento de eventos de periculosidade, dos riscos enfrentados em novos projetos e de medidas de mitigação. Especificamente no setor petrolífero, a análise, a avaliação e o gerenciamento de risco são vitais, em face da potencial gravidade dos acidentes no que diz respeito à vida humana, ao meio-ambiente e ao patrimônio. Dado que a maior parte dos acidentes nesta área são motivados por fatores humanos, o propósito deste trabalho é apresentar uma metodologia e técnicas eficientes de análise de confiabilidade humana aplicáveis a esta indústria. Durante as últimas décadas, se desenvolveram várias técnicas para o estudo quantitativo da confiabilidade humana. Na década de oitenta foram desenvolvidas técnicas que modelam o sistema por meio de árvores binárias, não permitindo a representação do contexto em que as ações humanas ocorrem. Desta forma, a representação dos indivíduos, suas inter-relações e a dinâmica do sistema não podem ser bem trabalhadas pela aplicação destas técnicas. Estas questões tornaram latente a necessidade de aprimoramento dos métodos utilizados para a HRA (Human Reliability Analysis). No intuito de extinguir, ou ao menos atenuar, estas limitações alguns autores vêm propondo a modelagem do sistema por meio de Redes Bayesianas. Espera-se que a aplicação desta ferramenta consiga suprimir boa parte das deficiências na modelagem da ação humana com o uso de árvores binárias. Este trabalho apresenta uma breve descrição da aplicação de Redes Bayesianas na HRA. Além disto, apresenta a aplicação desta técnica no estudo da operação de um navio petroleiro, tendo como foco a quantificação da contribuição do fator humano em cenários de colisão. Por fim, são feitas considerações a respeito dos fatores que podem influenciar no desempenho humano e no risco de colisão.
Recently, in the naval industry, the rules issued by classification societies and by the IMO (International Maritime Organization) have been changing gradually, moving from prescriptive procedures to a risk-based regulatory structure. That perspective offers some advantages to operators and shipowners: 1) greater capacity to incorporate innovative, technically superior designs at acceptable cost; 2) greater confidence regarding safety; 3) better understanding of hazardous events, of the risks faced in new projects and of mitigation measures. Specifically in the oil sector, risk analysis, assessment and management are vital, given the potential severity of accidents with respect to human life, the environment and property. Given that the greater part of the accidents in this sector is caused by human factors, the purpose of this dissertation is to present a methodology and efficient techniques for HRA (Human Reliability Analysis) that can be applied in this industry. During the last decades, many techniques were developed for the quantitative study of human reliability. In the eighties, techniques based on modeling the system by means of binary trees were developed. These techniques do not allow the representation of the context in which human actions occur. Thus, the representation of individuals, their inter-relationships and the dynamics of the system cannot be adequately handled by these techniques. These issues made the improvement of the methods used for HRA a pressing need. With the aim of eliminating, or at least attenuating, these limitations, some authors have proposed modeling the system by means of Bayesian Networks. It is expected that the application of this tool can overcome a large part of the deficiencies of modeling human action with binary trees. This work presents a brief description of the application of Bayesian Networks in HRA. In addition, it presents the application of this technique to the study of an oil tanker operation, focusing on the quantification of the contribution of the human factor in collision scenarios. Finally, some considerations are made regarding the factors that can influence human performance and the collision risk.
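To illustrate, in miniature, how a Bayesian network quantifies the human contribution to collision risk, the sketch below uses a hypothetical three-node network with made-up probabilities (none of the variables or numbers come from the dissertation); the marginal and diagnostic queries are computed by direct enumeration in Python.

# Hypothetical network: human error (E) and adverse weather (W) are parents of collision (C).
p_E = {True: 0.05, False: 0.95}                    # prior probability of human error
p_W = {True: 0.20, False: 0.80}                    # prior probability of adverse weather
p_C = {(True, True): 0.30, (True, False): 0.10,    # P(collision | E, W)
       (False, True): 0.02, (False, False): 0.001}

# Marginal probability of a collision, summing over both parents
p_collision = sum(p_E[e] * p_W[w] * p_C[(e, w)]
                  for e in (True, False) for w in (True, False))

# Diagnostic query: how likely is human error given that a collision occurred?
p_joint = sum(p_E[True] * p_W[w] * p_C[(True, w)] for w in (True, False))
p_error_given_collision = p_joint / p_collision
print(round(p_collision, 4), round(p_error_given_collision, 3))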
APA, Harvard, Vancouver, ISO, and other styles
28

Graversen, Therese. "Statistical and computational methodology for the analysis of forensic DNA mixtures with artefacts." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:4c3bfc88-25e7-4c5b-968f-10a35f5b82b0.

Full text
Abstract:
This thesis proposes and discusses a statistical model for interpreting forensic DNA mixtures. We develop methods for estimation of model parameters and assessing the uncertainty of the estimated quantities. Further, we discuss how to interpret the mixture in terms of predicting the set of contributors. We emphasise the importance of challenging any interpretation of a particular mixture, and for this purpose we develop a set of diagnostic tools that can be used in assessing the adequacy of the model to the data at hand as well as in a systematic validation of the model on experimental data. An important feature of this work is that all methodology is developed entirely within the framework of the adopted model, ensuring a transparent and consistent analysis. To overcome the challenge that lies in handling the large state space for DNA profiles, we propose a representation of a genotype that exhibits a Markov structure. Further, we develop methods for efficient and exact computation in a Bayesian network. An implementation of the model and methodology is available through the R package DNAmixtures.
APA, Harvard, Vancouver, ISO, and other styles
29

Ali, Agha Mouhamad Shaker. "Probabilistic analysis of supply chains resilience based on their characteristics using dynamic Bayesian networks." Thesis, University of Strathclyde, 2016. http://digitool.lib.strath.ac.uk:80/R/?func=dbin-jump-full&object_id=27525.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Srinivasan, Vivekanandan. "Real delay graphical probabilistic switching model for VLSI circuits." [Tampa, Fla.] : University of South Florida, 2004. http://purl.fcla.edu/fcla/etd/SFE0000538.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Venkataramani, Praveen. "Sequential quantum dot cellular automata design and analysis using Dynamic Bayesian Networks." [Tampa, Fla] : University of South Florida, 2008. http://purl.fcla.edu/usf/dc/et/SFE0002787.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Faria, Rodrigo Candido. "Redes probabilísticas: aprendendo estruturas e atualizando probabilidades." Universidade de São Paulo, 2014. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-27062014-224607/.

Full text
Abstract:
Redes probabilísticas são modelos muito versáteis, com aplicabilidade crescente em diversas áreas. Esses modelos são capazes de estruturar e mensurar a interação entre variáveis, permitindo que sejam realizados vários tipos de análises, desde diagnósticos de causas para algum fenômeno até previsões sobre algum evento, além de permitirem a construção de modelos de tomadas de decisões automatizadas. Neste trabalho são apresentadas as etapas para a construção dessas redes e alguns métodos usados para tal, dando maior ênfase para as chamadas redes bayesianas, uma subclasse de modelos de redes probabilísticas. A modelagem de uma rede bayesiana pode ser dividida em três etapas: seleção de variáveis, construção da estrutura da rede e estimação de probabilidades. A etapa de seleção de variáveis é usualmente feita com base nos conhecimentos subjetivos sobre o assunto estudado. A construção da estrutura pode ser realizada manualmente, levando em conta relações de causalidade entre as variáveis selecionadas, ou semi-automaticamente, através do uso de algoritmos. A última etapa, de estimação de probabilidades, pode ser feita seguindo duas abordagens principais: uma frequentista, em que os parâmetros são considerados fixos, e outra bayesiana, na qual os parâmetros são tratados como variáveis aleatórias. Além da teoria contida no trabalho, mostrando as relações entre a teoria de grafos e a construção probabilística das redes, também são apresentadas algumas aplicações desses modelos, dando destaque a problemas nas áreas de marketing e finanças.
Probabilistic networks are very versatile models, with growing applicability in many areas. These models are capable of structuring and measuring the interaction among variables, making possible various types of analyses, from diagnosing the causes of a phenomenon to making predictions about some event, besides allowing the construction of automated decision-making models. This work presents the necessary steps to construct those networks and the methods used to do so, emphasizing the so-called Bayesian networks, a subclass of probabilistic networks. Bayesian network modeling is divided into three steps: variable selection, structure learning and estimation of probabilities. The variable selection step is usually based on subjective knowledge about the topic studied. Structure learning can be performed manually, taking into account the causal relations among variables, or semi-automatically, through the use of algorithms. The last step, the estimation of probabilities, can be treated following two main approaches: the frequentist approach, in which parameters are considered fixed, and the Bayesian approach, in which parameters are treated as random variables. Besides the theory contained in this work, showing the relations between graph theory and the construction of probabilistic networks, applications of these models are presented, highlighting problems in marketing and finance.
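The contrast drawn above between the frequentist and Bayesian treatment of the probability tables can be shown on a single node; the counts below are hypothetical and the Beta prior is just one common choice.

# Counts for a binary node X given its single binary parent Y (hypothetical data)
counts = {(0, 0): 40, (0, 1): 10,
          (1, 0): 5,  (1, 1): 45}

for y in (0, 1):
    n0, n1 = counts[(y, 0)], counts[(y, 1)]
    mle = n1 / (n0 + n1)                       # frequentist: parameter treated as fixed
    a = 1.0                                    # Beta(1, 1) prior pseudo-counts
    bayes = (n1 + a) / (n0 + n1 + 2 * a)       # Bayesian: posterior mean of the parameter
    print(f"P(X=1 | Y={y}): MLE = {mle:.3f}, posterior mean = {bayes:.3f}")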
APA, Harvard, Vancouver, ISO, and other styles
33

Boff, Elisa. "Colaboração em ambientes inteligentes de aprendizagem mediada por um agente social probabilístico." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2008. http://hdl.handle.net/10183/15747.

Full text
Abstract:
Este trabalho propõe um modelo probabilístico de conhecimento e raciocínio para um agente, denominado Agente Social, cujo principal objetivo é analisar o perfil dos alunos, usuários de um Sistema Tutor Inteligente chamado AMPLIA, e compor grupos de trabalho. Para formar estes grupos, o Agente Social considera aspectos individuais do aluno e estratégias de formação de grupos. A aprendizagem colaborativa envolve relações sociais cujos processos são complexos e apresentam dificuldade para sua modelagem computacional. A fim de representar alguns elementos deste processo e de seus participantes, devem ser considerados aspectos individuais, tais como estado afetivo, questões psicológicas e cognição. Também devem ser considerados aspectos sociais, tais como a habilidade social, a aceitação e a forma em que as pessoas se relacionam e compõem seus grupos de trabalho ou estudo. Sistemas Tutores Inteligentes, Sistemas Multiagente e Computação Afetiva são áreas de pesquisa que vem sendo investigadas de forma a oferecer alternativas para representar e tratar computacionalmente alguns destes aspectos multidisciplinares que acompanham a aprendizagem individual e colaborativa. O Agente Social está inserido na sociedade de agentes do portal PortEdu que, por sua vez, fornece serviços ao ambiente de aprendizagem AMPLIA O PortEdu é um portal que provê serviços para os ambientes educacionais integrados a ele. Este portal foi modelado em uma abordagem multiagente e cada serviço oferecido é implementado por um agente específico. Os ambientes educacionais que utilizam os serviços do portal também são sociedades de agentes e, em geral, Sistemas Tutores Inteligentes. O ambiente AMPLIA (Ambiente Multiagente Probabilístico Inteligente de Aprendizagem) foi projetado para suportar o treinamento do raciocínio diagnóstico e modelagem de domínios de conhecimento incerto e complexo, como a área médica. Este ambiente usa a abordagem de Redes Bayesianas onde os alunos constróem suas próprias redes para um problema apresentado pelo sistema através de um editor gráfico de Redes Bayesianas. Neste trabalho, o editor do AMPLIA foi adaptado para uma versão colaborativa, que permite a construção das redes por vários alunos remotos conectados ao sistema. É através deste editor que o Agente Social observa e interage com os alunos sugerindo a composição dos grupos. Foram realizados experimentos práticos acompanhados por instrumentos de avaliação, com o objetivo de analisar a composição de grupos sugerida pelo Agente Social e relacioná-la com os grupos formados espontaneamente pelos alunos no ambiente de sala de aula. O resultado do trabalho individual e dos grupos também foi analisado e discutido nesta pesquisa.
This research proposes a probabilistic knowledge and reasoning model for an agent, named Social Agent, whose main goal is to analyze students' profiles and to organize them into workgroups. These students are users of an Intelligent Tutoring System named AMPLIA. In order to suggest those groups, the Social Agent considers individual aspects of the students and also strategies for group formation. Collaborative learning involves social relationships with complex processes which are difficult to model computationally. In order to represent these relationships, we should consider several aspects of the student, such as affective state, psychological issues, and cognition. We should also consider social aspects such as social ability, social acceptance, how people relate to each other, and how they compose their workgroups. Intelligent Tutoring Systems, Multiagent Systems and Affective Computing are research areas which our research group has been investigating in order to represent and deal computationally with the multidisciplinary issues involved in individual and collaborative learning. The Social Agent is part of the agent society of the PortEdu Portal, which provides services to AMPLIA. PortEdu is an educational portal which provides facilities to the educational environments integrated with it. This portal has been modeled using a multiagent approach and each of its services is represented by a specific agent. The educational environments that make use of the portal's services are also agent societies and, in general, Intelligent Tutoring Systems. AMPLIA (Probabilistic Multiagent Learning Environment) has been designed to support diagnostic reasoning and the modeling of diagnostic hypotheses in domains with complex and uncertain knowledge, such as the medical domain. This environment uses a Bayesian Network approach in which students build their own networks for a clinical case through a Bayesian Network graphical editor. Here, the AMPLIA editor has been adapted and extended into a collaborative version, which enables network construction by remote students connected to the system. Through this editor, the Social Agent observes and interacts with students, suggesting the composition of workgroups. Practical experiments using assessment tools have been carried out in order to analyze the workgroups suggested by the Social Agent and to compare them with groups naturally formed by students in the classroom. The results of the work done by individual students and by workgroups were also analyzed and discussed in this research.
APA, Harvard, Vancouver, ISO, and other styles
34

Rejimon, Thara. "Reliability-centric probabilistic analysis of VLSI circuits." [Tampa, Fla] : University of South Florida, 2006. http://purl.fcla.edu/usf/dc/et/SFE0001707.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Srivastava, Saket. "Probabilistic modeling of quantum-dot cellular automata." [Tampa, Fla.] : University of South Florida, 2007. http://purl.fcla.edu/usf/dc/et/SFE0002399.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Ghanem, Amal Saleh. "Probabilistic models for mining imbalanced relational data." Thesis, Curtin University, 2009. http://hdl.handle.net/20.500.11937/2266.

Full text
Abstract:
Most data mining and pattern recognition techniques are designed for learning from flat data files with the assumption of equal populations per class. However, most real-world data are stored as rich relational databases that generally have imbalanced class distributions. For such domains, a rich relational technique is required to accurately model the different objects and relationships in the domain, which cannot be easily represented as a set of simple attributes, and at the same time handle the imbalanced class problem. Motivated by the significance of mining imbalanced relational databases, which represent the majority of real-world data, learning techniques for mining imbalanced relational domains are investigated. In this thesis, the employment of probabilistic models in mining relational databases is explored, in particular the Probabilistic Relational Models (PRMs) that were proposed as an extension of attribute-based Bayesian Networks. The effectiveness of PRMs in mining real-world databases was explored by learning PRMs from a real-world university relational database. A visual data mining tool is also proposed to aid the interpretation of the outcomes of the learned PRM models. Despite the effectiveness of PRMs in relational learning, the performance of PRMs as predictive models is significantly hindered by the imbalanced class problem. This is due to the fact that PRMs share the assumption, common to other learning techniques, of relatively balanced class distributions in the training data. Therefore, this thesis proposes a number of models utilizing the effectiveness of PRMs in relational learning and extending it for mining imbalanced relational domains. The first model introduced in this thesis examines the problem of mining imbalanced relational domains for a single two-class attribute. The model is proposed by enriching PRM learning with the ensemble learning technique. The premise behind this model is that an ensemble of models will attain better performance than a single model, as misclassifications committed by one of the models can often be corrected by the others. Based on this approach, another model is introduced to address the problem of mining multiple imbalanced attributes, in which it is important to predict several attributes rather than a single one. In this model, the ensemble bagging sampling approach is exploited to attain a single model for mining several attributes. Finally, the thesis outlines the problem of imbalanced multi-class classification and introduces a generalized framework to handle this problem for both relational and non-relational domains.
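The ensemble and bagging idea invoked above can be sketched for the flat, non-relational case; the thesis applies it to PRM learning, whereas this hypothetical Python sketch assumes any base classifier with a scikit-learn-style fit/predict interface and a binary label where class 1 is the minority.

import numpy as np

def balanced_bagging_predict(X_train, y_train, X_test, base_learner, n_models=11, seed=0):
    # Train each base model on all minority cases plus an equal-sized bootstrap
    # sample of the majority class, then predict by majority vote.
    rng = np.random.default_rng(seed)
    minority = np.where(y_train == 1)[0]
    majority = np.where(y_train == 0)[0]
    votes = np.zeros(len(X_test))
    for _ in range(n_models):
        maj_sample = rng.choice(majority, size=len(minority), replace=True)
        idx = np.concatenate([minority, maj_sample])
        model = base_learner()
        model.fit(X_train[idx], y_train[idx])
        votes += model.predict(X_test)
    return (votes > n_models / 2).astype(int)

# usage sketch, assuming scikit-learn is available:
#   from sklearn.tree import DecisionTreeClassifier
#   y_pred = balanced_bagging_predict(X, y, X_new, DecisionTreeClassifier)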
APA, Harvard, Vancouver, ISO, and other styles
37

Masood, Adnan. "Measuring Interestingness in Outliers with Explanation Facility using Belief Networks." NSUWorks, 2014. http://nsuworks.nova.edu/gscis_etd/232.

Full text
Abstract:
This research explores the potential of improving the explainability of outliers using Bayesian Belief Networks as background knowledge. Outliers are deviations from the usual trends of data. Mining outliers may help discover potential anomalies and fraudulent activities. Meaningful outliers can be retrieved and analyzed by using domain knowledge. Domain knowledge (or background knowledge) is represented using probabilistic graphical models such as Bayesian belief networks. Bayesian networks are a graph-based representation used to model and encode mutual relationships between entities. Due to their probabilistic graphical nature, belief networks are an ideal way to capture the sensitivity, causal inference, uncertainty and background knowledge present in real-world data sets. Bayesian networks effectively represent the causal relationships between different entities (nodes) using conditional probabilities. This probabilistic relationship expresses the degree of belief between entities; a quantitative measure which computes changes in this degree of belief acts as a sensitivity measure. The first contribution of this research is enhancing the performance of sensitivity measurement based on earlier research work, the Interestingness Filtering Engine Miner algorithm. The algorithm developed (IBOX - Interestingness based Bayesian outlier eXplainer) provides progressive improvement over the performance and sensitivity scoring of earlier works. Earlier approaches compute sensitivity by measuring divergence among the conditional probabilities of training and test data, while using only a couple of probabilistic interestingness measures, such as mutual information and support, to calculate belief sensitivity. With support from the literature as well as quantitative evidence, IBOX provides a framework for using multiple interestingness measures, resulting in better performance and improved sensitivity analysis. The results provide improved performance, and therefore explainability, of rare-class entities. This research quantitatively validated probabilistic interestingness measures as an effective sensitivity analysis technique in rare-class mining. This constitutes a novel, original, and progressive research contribution to the areas of probabilistic graphical models and outlier analysis.
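As a toy version of the belief-sensitivity idea described above, the following sketch compares the mutual information of a two-variable probability table estimated on training data with the one estimated after including a candidate outlier; the tables are made up, and the thesis combines several interestingness measures rather than this single one.

import numpy as np

def mutual_information(joint):
    # Mutual information (in bits) of a 2-D joint probability table.
    joint = np.asarray(joint, dtype=float)
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

p_train = [[0.45, 0.05], [0.05, 0.45]]     # joint of (node value, class) on training data
p_test  = [[0.30, 0.20], [0.20, 0.30]]     # same table re-estimated with the candidate case
sensitivity = abs(mutual_information(p_train) - mutual_information(p_test))
print(round(sensitivity, 3))               # a large change in belief flags the case as interesting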
APA, Harvard, Vancouver, ISO, and other styles
38

Rogge-Solti, Andreas. "Probabilistic Estimation of Unobserved Process Events." Phd thesis, Universität Potsdam, 2014. http://opus.kobv.de/ubp/volltexte/2014/7042/.

Full text
Abstract:
Organizations try to gain competitive advantages, and to increase customer satisfaction. To ensure the quality and efficiency of their business processes, they perform business process management. An important part of process management that happens on the daily operational level is process controlling. A prerequisite of controlling is process monitoring, i.e., keeping track of the performed activities in running process instances. Only by process monitoring can business analysts detect delays and react to deviations from the expected or guaranteed performance of a process instance. To enable monitoring, process events need to be collected from the process environment. When a business process is orchestrated by a process execution engine, monitoring is available for all orchestrated process activities. Many business processes, however, do not lend themselves to automatic orchestration, e.g., because of required freedom of action. This situation is often encountered in hospitals, where most business processes are manually enacted. Hence, in practice it is often inefficient or infeasible to document and monitor every process activity. Additionally, manual process execution and documentation is prone to errors, e.g., documentation of activities can be forgotten. Thus, organizations face the challenge of process events that occur, but are not observed by the monitoring environment. These unobserved process events can serve as basis for operational process decisions, even without exact knowledge of when they happened or when they will happen. An exemplary decision is whether to invest more resources to manage timely completion of a case, anticipating that the process end event will occur too late. This thesis offers means to reason about unobserved process events in a probabilistic way. We address decisive questions of process managers (e.g., "when will the case be finished?", or "when did we perform the activity that we forgot to document?") in this thesis. As main contribution, we introduce an advanced probabilistic model to business process management that is based on a stochastic variant of Petri nets. We present a holistic approach to use the model effectively along the business process lifecycle. Therefore, we provide techniques to discover such models from historical observations, to predict the termination time of processes, and to ensure quality by missing data management. We propose mechanisms to optimize configuration for monitoring and prediction, i.e., to offer guidance in selecting important activities to monitor. An implementation is provided as a proof of concept. For evaluation, we compare the accuracy of the approach with that of state-of-the-art approaches using real process data of a hospital. Additionally, we show its more general applicability in other domains by applying the approach on process data from logistics and finance.
Unternehmen versuchen Wettbewerbsvorteile zu gewinnen und die Kundenzufriedenheit zu erhöhen. Um die Qualität und die Effizienz ihrer Prozesse zu gewährleisten, wenden Unternehmen Geschäftsprozessmanagement an. Hierbei spielt die Prozesskontrolle im täglichen Betrieb eine wichtige Rolle. Prozesskontrolle wird durch Prozessmonitoring ermöglicht, d.h. durch die Überwachung des Prozessfortschritts laufender Prozessinstanzen. So können Verzögerungen entdeckt und es kann entsprechend reagiert werden, um Prozesse wie erwartet und termingerecht beenden zu können. Um Prozessmonitoring zu ermöglichen, müssen prozessrelevante Ereignisse aus der Prozessumgebung gesammelt und ausgewertet werden. Sofern eine Prozessausführungsengine die Orchestrierung von Geschäftsprozessen übernimmt, kann jede Prozessaktivität überwacht werden. Aber viele Geschäftsprozesse eignen sich nicht für automatisierte Orchestrierung, da sie z.B. besonders viel Handlungsfreiheit erfordern. Dies ist in Krankenhäusern der Fall, in denen Geschäftsprozesse oft manuell durchgeführt werden. Daher ist es meist umständlich oder unmöglich, jeden Prozessfortschritt zu erfassen. Zudem ist händische Prozessausführung und -dokumentation fehleranfällig, so wird z.B. manchmal vergessen zu dokumentieren. Eine Herausforderung für Unternehmen ist, dass manche Prozessereignisse nicht im Prozessmonitoring erfasst werden. Solch unbeobachtete Prozessereignisse können jedoch als Entscheidungsgrundlage dienen, selbst wenn kein exaktes Wissen über den Zeitpunkt ihres Auftretens vorliegt. Zum Beispiel ist bei der Prozesskontrolle zu entscheiden, ob zusätzliche Ressourcen eingesetzt werden sollen, wenn eine Verspätung angenommen wird. Diese Arbeit stellt einen probabilistischen Ansatz für den Umgang mit unbeobachteten Prozessereignissen vor. Dabei werden entscheidende Fragen von Prozessmanagern beantwortet (z.B. "Wann werden wir den Fall beenden?", oder "Wann wurde die Aktivität ausgeführt, die nicht dokumentiert wurde?"). Der Hauptbeitrag der Arbeit ist die Einführung eines erweiterten probabilistischen Modells ins Geschäftsprozessmanagement, das auf stochastischen Petri Netzen basiert. Dabei wird ein ganzheitlicher Ansatz zur Unterstützung der einzelnen Phasen des Geschäftsprozesslebenszyklus verfolgt. Es werden Techniken zum Lernen des probabilistischen Modells, zum Vorhersagen des Zeitpunkts des Prozessendes, zum Qualitätsmanagement von Dokumentationen durch Erkennung fehlender Einträge, und zur Optimierung von Monitoringkonfigurationen bereitgestellt. Letztere dient zur Auswahl von relevanten Stellen im Prozess, die beobachtet werden sollten. Diese Techniken wurden in einer quelloffenen prototypischen Anwendung implementiert. Zur Evaluierung wird der Ansatz mit existierenden Alternativen an echten Prozessdaten eines Krankenhauses gemessen. Die generelle Anwendbarkeit in weiteren Domänen wird examplarisch an Prozessdaten aus der Logistik und dem Finanzwesen gezeigt.
APA, Harvard, Vancouver, ISO, and other styles
39

Dabla, Essi Ahoefa. "Approche bayesienne multiéchelle pour la modélisation de la fiabilité d'un module de puissance en environnement ferroviaire." Thesis, Toulouse, INPT, 2019. http://www.theses.fr/2019INPT0102.

Full text
Abstract:
Le contrôle de la fiabilité des composants électroniques critiques est un des enjeux des acteurs du secteur ferroviaire. Les modules de puissance à IGBT (Insulated Gate Bipolar Transistors) appartiennent à cette liste de composants. Ils sont soumis à de fortes contraintes correspondant à celles rencontrées dans des environnements ferroviaires sévères. Les conditions environnementales rencontrées dans l’exploitation ferroviaire et les fortes exigences en termes de disponibilité imposent des niveaux de fiabilité élevés aux IGBT. Dans une optique d’amélioration de leur fiabilité, une méthodologie d’évaluation a été développée basée sur une approche probabiliste et supportée par un réseau bayesien. Pour la mise en place du modèle, plusieurs briques de travail ont été assemblées. En premier lieu, une approche originale nommée « Cycle en U» a été proposée mettant en évidence de façon biunivoque un niveau système associé au train et un niveau composant assimilable à l’IGBT considérés simultanément selon des vues fonctionnelles et dysfonctionnelles. Dans ce cadre, le travail a conduit, dans un premier temps, à mettre en évidence les mécanismes caractérisant, dans une logique descendante, l’influence de la sollicitation du train sur la sollicitation du composant puis, selon une logique ascendante, de l’impact dysfonctionnel de la défaillance au niveau composant sur la fiabilité du système. Dans un deuxième temps, les résultats de cette analyse ont débouché sur la mise en place de la structure d’un modèle bayesien dont le caractère générique lui permet d’être déployé pour la modélisation fiabiliste de tout type de système ferroviaire. Le travail de modélisation basé sur les réseaux bayesiens sert de support au rapprochement entre modèles analytiques (physique de défaillance) et données issues de l’utilisation du composant élémentaire dans son environnement de fonctionnement. Le modèle a été utilisé pour la modélisation de la fiabilité d’un IGBT dans un cadre d’application correspondant au métro de la ville de Chennai en Inde. Les données et connaissances expertes recueillies sur le projet ont permis de déterminer les tables de probabilités du réseau bayesien. Les résultats probabilistes du modèle ont été traduites en indicateurs de fiabilité
The reliability control of critical electronic components is one of the challenges to be faced by railway stakeholders. IGBT (Insulated Gate Bipolar Transistor) power modules belong to this list of components. They are subject to high stresses corresponding to those encountered in harsh railway environments. The environmental conditions encountered in rail operations and the demanding availability requirements impose high levels of reliability on IGBTs. In order to improve their reliability, an evaluation methodology has been developed based on a probabilistic approach and supported by a Bayesian network. For the implementation of the model, several working elements were assembled. First, an original approach called "U-Cycle" was proposed, linking in a one-to-one way a system level associated with the train and a component level corresponding to the IGBT, considered simultaneously from functional and dysfunctional viewpoints. In this context, the work first highlighted the mechanisms characterizing, in a top-down logic, the influence of train loading on component stress and, in a bottom-up logic, the dysfunctional impact of a component-level failure on system reliability. In a second step, the results of this analysis led to the structure of a Bayesian model whose generic nature allows it to be deployed for the reliability modelling of any type of rail system. The modelling work based on Bayesian networks supports the reconciliation between analytical models (physics of failure) and data from the use of the elementary component in its operating environment. The model was used to model the reliability of an IGBT in an application framework corresponding to the metro of the city of Chennai, India. The data and expert knowledge collected on the project made it possible to determine the probability tables of the Bayesian network. The probabilistic results of the model have been translated into reliability indicators.
APA, Harvard, Vancouver, ISO, and other styles
40

König, Johan. "Analyzing Substation Automation System Reliability using Probabilistic Relational Models and Enterprise Architecture." Doctoral thesis, KTH, Industriella informations- och styrsystem, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-145006.

Full text
Abstract:
Modern society is unquestionably heavily reliant on the supply of electricity. Hence, the power system is one of the important infrastructures for future growth. However, the power system of today was designed for a stable radial flow of electricity from large power plants to the customers and not for the type of changes it is presently being exposed to, like large-scale integration of electric vehicles, wind power plants, residential photovoltaic systems etc. One aspect of the power system particularly exposed to these changes is the design of power system control and protection functionality. Problems occur when the flow of electricity changes from a unidirectional radial flow to a bidirectional one. Such a change requires redesign of control and protection functionality as well as the introduction of new information and communication technology (ICT). To make matters worse, the closer the interaction between the power system and the ICT systems, the more complex the matter becomes from a reliability perspective. This problem is inherently cyber-physical, including everything from system software to power cables and transformers, rather than the traditional reliability concern of focusing only on power system components. The contribution of this thesis is a framework for reliability analysis, utilizing system modeling concepts that support the industrial engineering issues that follow from the implementation of modern substation automation systems. The framework is based on a Bayesian probabilistic analysis engine represented by Probabilistic Relational Models (PRMs) in combination with an Enterprise Architecture (EA) modeling formalism. The gradual development of the framework is demonstrated through a number of application scenarios based on substation automation system configurations. This thesis is a composite thesis consisting of seven papers. Paper 1 presents the framework combining EA, PRMs and Fault Tree Analysis (FTA). Paper 2 adds primary substation equipment as part of the framework. Paper 3 presents a mapping between modeling entities from the EA framework ArchiMate and substation automation system configuration objects from the IEC 61850 standard. Paper 4 introduces object definitions and relations in coherence with the EA modeling formalism, suitable for the purpose of the analysis framework. Paper 5 describes an extension of the analysis framework by adding logical operators to the probabilistic analysis engine. Paper 6 presents enhanced failure rates for software components obtained by studying failure logs, and an application of the framework to a utility substation automation system. Finally, Paper 7 describes the ability to utilize domain standards for coherent modeling of functions and their interrelations, and an application of the framework utilizing software-tool support.


APA, Harvard, Vancouver, ISO, and other styles
41

Jackson, Zara. "Basal Metabolic Rate (BMR) estimation using Probabilistic Graphical Models." Thesis, Uppsala universitet, Statistiska institutionen, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-384629.

Full text
Abstract:
Obesity is a growing problem globally. Currently 2.3 billion adults are overweight, and this number is rising. The most common method for weight loss is calorie counting, in which a person aims to maintain a calorie deficit in order to lose weight. Basal Metabolic Rate accounts for the majority of the calories a person burns in a day, and it is therefore a major contributor to accurate calorie counting. This paper uses a Dynamic Bayesian Network to estimate Basal Metabolic Rate (BMR) for a sample of 219 individuals from all Body Mass Index (BMI) categories. The data were collected through the Lifesum app. A comparison of the estimated BMR values was made with the commonly used Harris-Benedict equation, finding that food journaling is a sufficient method to estimate BMR. Next-day weight prediction was also computed based on the estimated BMR. The results showed that the Harris-Benedict equation produced more accurate predictions than the proposed metabolic model; therefore, more work is necessary to find a model that accurately estimates BMR.
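For reference, the baseline mentioned in this abstract is the Harris-Benedict equation; the sketch below uses the commonly cited rounded coefficients of the original equations (the thesis may use a revised variant), with kcal/day as the unit.

def harris_benedict_bmr(sex, weight_kg, height_cm, age_years):
    # Original Harris-Benedict equations, rounded coefficients (kcal/day)
    if sex == "male":
        return 66.5 + 13.75 * weight_kg + 5.003 * height_cm - 6.755 * age_years
    return 655.1 + 9.563 * weight_kg + 1.850 * height_cm - 4.676 * age_years

print(harris_benedict_bmr("female", 65, 170, 30))   # roughly 1451 kcal/day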
APA, Harvard, Vancouver, ISO, and other styles
42

Leite, Filho Hugo Pereira. "APLICABILIDADE DE MEMÓRIA LÓGICA COMO FERRAMENTA COADJUVANTE NO DIAGNÓSTICO DAS DOENÇAS GENÉTICAS." Pontifícia Universidade Católica de Goiás, 2006. http://localhost:8080/tede/handle/tede/3073.

Full text
Abstract:
This study involved the interaction among knowledge from very distinct areas, namely informatics, engineering and genetics, with emphasis on the methodology for building a decision-support system. The aim of this study was the development of a tool to help in the diagnosis of chromosomal aberrations, presenting Turner Syndrome as a tutorial model. To this end, classification techniques based on decision trees, probabilistic networks (Naïve Bayes, TAN and BAN) and an MLP (Multi-Layer Perceptron) neural network trained with the error backpropagation algorithm were used. An algorithm and a tool were chosen that are able to propagate evidence and to support efficient inference techniques capable of combining expert knowledge with data defined in a database. We concluded that the best solution for the problem addressed in this study was the Naïve Bayes model, as it presented the highest accuracy. The decision tree (ID3), TAN and BAN models also provided solutions to the stated problem, but these were not as satisfactory as Naïve Bayes, and the neural network did not provide a satisfactory solution.
O estudo envolveu a interação entre áreas de conhecimento bastante distintas, a saber: informática, engenharia e genética, com ênfase na metodologia da construção de um sistema de apoio à tomada de decisão. Este estudo tem como objetivo o desenvolvimento de uma ferramenta para o auxílio no diagnóstico de anomalias cromossômicas, apresentando como modelo tutorial a Síndrome de Turner. Para isso foram utilizadas técnicas de classificação baseadas em árvores de decisão, redes probabilísticas (Naïve Bayes, TAN e BAN) e rede neural MLP (do inglês, Multi- Layer Perceptron) com algoritmo de treinamento por retropropagação de erro. Foi escolhido um algoritmo e uma ferramenta capaz de propagar evidências e desenvolver as técnicas de inferência eficientes capazes de gerar técnicas apropriadas para combinar o conhecimento do especialista com dados definidos em uma base de dados. Chegamos a conclusão que a melhor solução para o domínio do problema apresentado neste estudo foi o modelo Naïve Bayes, pois este modelo apresentou maior acurácia. Os modelos árvore de decisão-ID3, TAN e BAN apresentaram soluções para o domínio do problema sugerido, mas as soluções não foram tão satisfatória quanto o Naïve Bayes. No entanto, a rede neural não promoveu solução satisfatória.
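A minimal Naïve Bayes computation of the kind compared in this study is sketched below; the two binary indicators and all probabilities are hypothetical and are not taken from the dissertation.

def naive_bayes_posterior(priors, likelihoods, observed):
    # P(class | features) assuming the features are conditionally independent given the class
    scores = {}
    for c, prior in priors.items():
        p = prior
        for feature, value in observed.items():
            p *= likelihoods[c][feature][value]
        scores[c] = p
    total = sum(scores.values())
    return {c: p / total for c, p in scores.items()}

priors = {"Turner": 0.01, "other": 0.99}
likelihoods = {
    "Turner": {"short_stature": {1: 0.95, 0: 0.05}, "webbed_neck": {1: 0.40, 0: 0.60}},
    "other":  {"short_stature": {1: 0.10, 0: 0.90}, "webbed_neck": {1: 0.01, 0: 0.99}},
}
print(naive_bayes_posterior(priors, likelihoods, {"short_stature": 1, "webbed_neck": 1}))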
APA, Harvard, Vancouver, ISO, and other styles
43

Tran, Ngoc Hoang. "Extension des systèmes MES au diagnostic des performances des systèmes de production au travers d'une approche probabiliste Bayésienne." Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAI048/document.

Full text
Abstract:
Cette thèse s'inscrit dans le domaine de la diagnostic, en particulier de Manufacturing Execution System (MES) . Elle apporte sa contribution au diagnostic de système en présence de défaillances potentielles suit à une variation du TRS, un indicateur de performance qui donne une image de l’état de fonctionnement d’un système de production (équipement, ligne, atelier, usine) à travers l’estimation des pertes selon trois origines : disponibilité, performance, qualité. L’objectif est de fournir le maximum d’informations sur les origines d’une variation du TRS afin de permettre à l'exploitant de prendre la bonne décision. Aussi, sur la base d'un tel modèle, nous proposons une méthodologie de déploiement pour intégrer une fonction de diagnostic aux solutions MES existantes dans un contexte industriel
This PhD thesis takes place in the field of diagnosis, especially in the context of Manufacturing Execution Systems (MES). It contributes to system diagnosis in the presence of potential failures following a variation of the OEE, a performance indicator that gives a picture of the state of the production system (equipment, production line, site, enterprise) by estimating losses from three major origins: availability, performance, and quality. Our objective is to provide maximum information on the origins of an OEE variation and to support the best decision for the four categories of OEE users (operator, team leader, supervisor, management). Also, on the basis of that model, we propose a deployment methodology to integrate a diagnostic function with existing MES solutions in an industrial context.
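The OEE (TRS in the French abstract) indicator discussed above is conventionally the product of an availability rate, a performance rate and a quality rate; the figures in the short example below are hypothetical.

# OEE = availability x performance x quality (hypothetical shift data)
planned_time   = 480.0    # minutes in the shift
downtime       = 60.0     # availability losses
ideal_cycle    = 1.0      # minutes per part at nominal speed
parts_produced = 350
parts_good     = 340

run_time     = planned_time - downtime
availability = run_time / planned_time                    # 0.875
performance  = ideal_cycle * parts_produced / run_time    # ~0.833
quality      = parts_good / parts_produced                # ~0.971
oee = availability * performance * quality                # ~0.708
print(round(availability, 3), round(performance, 3), round(quality, 3), round(oee, 3))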
APA, Harvard, Vancouver, ISO, and other styles
44

Foulliaron, Josquin. "Utilisation des modèles graphiques probabilistes pour la mise en place d'une politique de maintenance à base de pronostic." Thesis, Paris Est, 2015. http://www.theses.fr/2015PESC1205/document.

Full text
Abstract:
Une des conséquences les plus marquantes de l'évolution actuelle de l’industrie ferroviaire est l'augmentation des contraintes exercées aussi bien sur les voies que sur les matériels roulants ; tant en termes de sollicitations, de charges, de fréquences, qu'en termes d'exigences de disponibilité et de sécurité. De ce fait, la recherche de politiques de maintenance optimales répondant aux objectifs de disponibilité, de coûts, de sécurité est devenue un sujet particulièrement d'actualité. Pour répondre à cette demande d’ajustement des stratégies de maintenance, le formalisme des réseaux bayésiens est une approche de plus en plus utilisée pour développer des outils d'aide à la décision. Afin de s’affranchir de l’hypothèse markovienne restrictive imposée par l’utilisation « standard » des réseaux bayésiens, une structure originale a été proposée pour modéliser finement un processus de dégradation dans le cadre discret à partir de distributions de temps de séjour quelconques. Cette approche, dénommée Modèles Graphiques de Durée, autorise une finesse de modélisation du processus de dégradation qui permet de reproduire le comportement de systèmes multi-composants et multi-états, tout en tenant compte de variables exogènes. Cette modélisation semi-markovienne de la dégradation a, jusqu'à présent, été utilisée surtout pour évaluer ou comparer des stratégies de maintenance pouvant mêler des approches correctives, systématiques ou conditionnelles. Cette thèse vise à étendre les travaux précédents aux actions de maintenance prévisionnelle. Cette approche, qualifiée également de pronostic, offre en effet l’avantage d’une prédiction de l’instant optimal d’intervention maximisant la durée de fonctionnement du système avant intervention, tout en satisfaisant les contraintes d’exploitation et d’entretien. Les systèmes considérés sont à espaces d’états discrets et finis, périodiquement observables, situation fréquente pour de nombreuses applications industrielles, notamment dans le domaine des transports. Ces travaux de thèse proposent, à partir du formalisme des réseaux bayésiens dynamiques et des modèles graphiques de durée, des outils de pronostic dans le but de permettre la modélisation de politiques de maintenance préventives prévisionnelle. Pour répondre à cet objectif, un algorithme de pronostic basé sur des distributions de temps de séjour a tout d’abord été introduit, dans le but de calculer une estimation de la durée de vie résiduelle (RUL) d'un système et de la mettre à jour à chaque fois qu’un nouveau diagnostic est disponible. Pour améliorer la précision des calculs de pronostic, un nouveau modèle de dégradation a ensuite été proposé pour tenir compte de l'existence éventuelle de plusieurs dynamiques de dégradation coexistantes. Son principe consiste à identifier à chaque instant un mode de dégradation actif, puis à répercuter cette information sur les temps de séjour considérés dans les états suivants par l'utilisation de lois de temps de séjour conditionnelles. Enfin, des solutions pour diminuer la complexité des calculs d'inférence exacte sont proposées
One of the most important consequences of current developments in the rail industry is the increase of the stresses on tracks and rolling stock, both in terms of loads and frequencies and in terms of availability and safety requirements. Therefore, looking for optimal maintenance policies that meet availability, cost and safety objectives has become a particularly topical subject. To address this need for maintenance strategy adjustment, approaches using Bayesian networks have increasingly been used for the development of decision support tools. To overcome the restrictive Markovian assumption induced by the use of standard Bayesian networks, a specific structure has been proposed to accurately model a degradation process in the discrete case using any kind of sojourn-time distribution. This approach, called the Graphical Duration Model, makes it possible to describe the behaviour of multi-component and multi-state systems while taking into account exogenous variables. This semi-Markovian modelling of degradation has mainly been used to evaluate and compare maintenance strategies based on corrective, systematic and condition-based approaches. This PhD thesis aims to extend previous works to predictive maintenance policies. This approach, based on prognosis computations, has the advantage of predicting the optimal intervention time, maximizing the operating time of the system before intervention while satisfying operating and maintenance constraints. The systems considered have finite discrete state spaces and are periodically observable, as is the case for many industrial applications, particularly in the field of transport. Based on the dynamic Bayesian network formalism and the Graphical Duration Model, this work proposes prognostic tools that make it possible to model predictive maintenance policies. A prognosis algorithm based on sojourn-time distributions is first introduced to compute an estimate of the remaining useful life (RUL) of the system and to update it each time a new diagnosis is available. To improve the accuracy of the prognosis, a new degradation model is then proposed to take into account the possible existence of several coexisting degradation dynamics. Its principle is to identify, at each time step, the active degradation mode and then to propagate this information to the sojourn times considered in the following states through conditional sojourn-time distributions. Finally, solutions to reduce the complexity of exact inference computations are proposed.
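The sojourn-time-based RUL update described above can be illustrated with a deliberately simplified expected-value computation over a hypothetical three-state degradation chain; the thesis performs full inference in a dynamic Bayesian network rather than this closed-form shortcut.

import numpy as np

# Hypothetical sojourn-time distributions (in inspection periods) before failure
sojourn = {"OK":       np.array([0.1, 0.2, 0.3, 0.4]),   # P(sojourn = 1..4)
           "degraded": np.array([0.3, 0.4, 0.3])}        # P(sojourn = 1..3)
order = ["OK", "degraded"]

def expected_remaining_sojourn(dist, elapsed):
    # E[time left in the state | the system has already spent `elapsed` periods in it]
    durations = np.arange(1, len(dist) + 1)
    keep = durations > elapsed
    p = dist[keep] / dist[keep].sum()
    return float(((durations[keep] - elapsed) * p).sum())

def expected_rul(current_state, elapsed):
    # Remaining time in the current state plus mean sojourns of the states still to come
    rul = expected_remaining_sojourn(sojourn[current_state], elapsed)
    for s in order[order.index(current_state) + 1:]:
        d = sojourn[s]
        rul += float((np.arange(1, len(d) + 1) * d).sum())
    return rul

print(expected_rul("OK", elapsed=2))   # re-evaluated each time a new diagnosis arrives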
45

Mroszczyk, Przemyslaw. "Computation with continuous mode CMOS circuits in image processing and probabilistic reasoning." Thesis, University of Manchester, 2014. https://www.research.manchester.ac.uk/portal/en/theses/computation-with-continuous-mode-cmos-circuits-in-image-processing-and-probabilistic-reasoning(57ae58b7-a08c-4a67-ab10-5c3a3cf70c09).html.

Full text
Abstract:
The objective of the research presented in this thesis is to investigate alternative ways of information processing employing asynchronous, data-driven, and analogue computation in massively parallel cellular processor arrays, with applications in machine vision and artificial intelligence. The use of cellular processor architectures, with only local neighbourhood connectivity, is considered in VLSI realisations of trigger-wave propagation in binary image processing and in Bayesian inference. Design issues, critical in terms of computational precision and system performance, are extensively analysed, accounting for the non-ideal operation of MOS devices caused by second-order effects, noise and parameter mismatch. In particular, CMOS hardware solutions are considered for two specific tasks, binary image skeletonization and the sum-product algorithm for belief propagation in factor graphs, targeting efficient design in terms of processing speed, power, area, and computational precision. The major contributions of this research are in the area of continuous-time and discrete-time CMOS circuit design, with applications in moderate-precision analogue and asynchronous computation, accounting for parameter variability. Various analogue and digital circuit realisations, operating in the continuous-time and discrete-time domains, are analysed in theory and verified using combined Matlab-Hspice simulations, providing a versatile framework suitable for custom analyses, verification and optimisation of the designed systems. Novel solutions, exhibiting reduced impact of parameter variability on circuit operation, are presented and applied in the design of arithmetic circuits for matrix-vector operations and in data-driven asynchronous processor arrays for binary image processing. Several mismatch optimisation techniques are demonstrated, based on the use of a switched-current approach in the design of a current-mode Gilbert multiplier circuit, a novel biasing scheme in the design of tunable delay gates, and an averaging technique applied to analogue continuous-time circuit realisations of Bayesian networks. The most promising circuit solutions were implemented on the PPATC test chip, fabricated in a standard 90 nm CMOS process, and verified in experiments.
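For a chain-structured factor graph, the sum-product update that such analogue arrays are meant to accelerate amounts to matrix-vector products followed by a normalisation. The sketch below reproduces that computation in software on a three-variable binary chain with invented factor tables; it is not a model of the PPATC chip or of its circuits.

```python
import numpy as np

# Pairwise factors of a binary chain x1 - x2 - x3 (values are illustrative).
psi12 = np.array([[0.9, 0.1],
                  [0.2, 0.8]])
psi23 = np.array([[0.7, 0.3],
                  [0.4, 0.6]])
# Unary evidence factors (e.g. noisy local measurements).
phi1 = np.array([0.6, 0.4])
phi3 = np.array([0.3, 0.7])

# Sum-product messages along the chain: each step is a matrix-vector product.
m_1to2 = psi12.T @ phi1          # message x1 -> x2
m_3to2 = psi23 @ phi3            # message x3 -> x2
belief_x2 = m_1to2 * m_3to2
belief_x2 /= belief_x2.sum()     # normalisation step

# Brute-force check by enumerating the joint distribution.
joint = np.einsum('i,ij,jk,k->ijk', phi1, psi12, psi23, phi3)
marginal_x2 = joint.sum(axis=(0, 2))
marginal_x2 /= marginal_x2.sum()

print(belief_x2, marginal_x2)    # the two vectors should coincide
```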
46

Kosgodagan, Alex. "High-dimensional dependence modelling using Bayesian networks for the degradation of civil infrastructures and other applications." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2017. http://www.theses.fr/2017IMTA0020/document.

Full text
Abstract:
This thesis explores the use of Bayesian networks (BNs) to address high-dimensional degradation problems concerning civil engineering infrastructures. While traditional approaches based on a deterministic physical description of deterioration fall short for large-scale problems, asset managers have developed familiarity with models that require handling uncertainty. The use of probabilistic dependence proves to be a suitable approach in this context, and the ability to model uncertainty is an attractive feature. The concept of dependence within BNs is expressed mainly in two ways: on the one hand, classical conditional probabilities relying on Bayes' theorem and, on the other hand, a class of BNs using copulas and rank correlation as dependence measures. We present both theoretical and practical contributions for these two classes of BNs, namely discrete dynamic BNs and non-parametric BNs, respectively. Issues concerning the parametrisation of each class are also addressed. In a theoretical setting, we show that non-parametric BNs can characterise any Markov process.
This thesis explores high-dimensional deterioration-related problems using Bayesian networks (BNs). Asset managers are becoming more and more familiar with reasoning under uncertainty, as traditional physics-based models fail to fully encompass the dynamics of large-scale degradation issues. Probabilistic dependence is able to achieve this, and the ability to incorporate randomness is enticing. In fact, dependence in BNs is mainly expressed in two ways: on the one hand, classic conditional probabilities that lean on the well-known Bayes rule and, on the other hand, a more recent class of BNs featuring copulae and rank correlation as dependence metrics. Both theoretical and practical contributions are presented for these two classes of BN, referred to as discrete dynamic and non-parametric BNs, respectively. Issues related to the parametrization of each class are addressed. For the discrete dynamic class, we extend the current framework by incorporating an additional dimension; we observe that this dimension gives more control over the deterioration mechanism through the main endogenous variables governing it. For the non-parametric class, we demonstrate its remarkable capacity to handle a high-dimensional crack-growth problem for a steel bridge. We further show that this type of BN can characterize any Markov process.
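In a non-parametric BN, an arc is quantified by a (conditional) rank correlation which, combined with a copula (typically the normal copula) and the one-dimensional margins, specifies the joint distribution. The sketch below samples a single such arc: a Spearman rank correlation is converted to the corresponding normal-copula parameter and coupled with two arbitrary margins. The margins and the correlation value are illustrative and unrelated to the bridge case study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def sample_arc(spearman_rho, margin_parent, margin_child, n=100_000):
    """Sample (parent, child) joined by a normal copula with the given Spearman rank correlation."""
    # Pearson correlation of the underlying bivariate normal that yields this Spearman rho.
    r = 2.0 * np.sin(np.pi * spearman_rho / 6.0)
    cov = np.array([[1.0, r], [r, 1.0]])
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    u = stats.norm.cdf(z)                      # uniform scores, coupled by the copula
    parent = margin_parent.ppf(u[:, 0])        # map back through the margins
    child = margin_child.ppf(u[:, 1])
    return parent, child

# Illustrative margins: a lognormal "degradation" variable and a Weibull "load" variable.
deg, load = sample_arc(0.6, stats.lognorm(s=0.5), stats.weibull_min(c=2.0, scale=10.0))
print(stats.spearmanr(deg, load)[0])           # close to the specified 0.6
```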
47

Nguyen, Dang-Trinh. "Diagnostic en ligne des systèmes à événements discrets complexes : approche mixte logique/probabiliste." Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAT067/document.

Full text
Abstract:
The production systems of interest here are characterised by their high level of flexibility and a strong level of uncertainty linked, for example, to the high variability of demand, the advanced technologies produced, a stressed production flow, and the presence of human operators, products, and so on. The semiconductor industry is a typical example of this type of system. Such systems also feature numerous and costly pieces of equipment, diverse product routes that may even re-enter the same equipment, product metrology equipment, etc. The fact that metrology equipment is not systematically present at the output of every production tool (Patterson et al., 2005) makes the system even more complex. This inevitably leads to failures propagating through the product flow, failures that may only be detected later, either through an unscheduled equipment stop or during a product check on a metrology tool. To cope with such complexity, a hierarchical and modular control structure is generally recommended first, namely the CIM model (Jones et al., 1990). This model decomposes the control system into five levels, ranging from the sensor/actuator layer up through control and supervision. Here we focus more specifically on the last three real-time levels of this model. Indeed, when a failure is detected at the lowest level of this control pyramid, the point is to set up a mechanism that can locate, in real time and efficiently, the possible origin(s) of such a failure, whether propagated or not, in order to provide the decision-support system with the key information needed to guide the human operator in the corrective maintenance phase and thus help to reduce equipment downtime; the origin or cause of the stop may be the equipment itself (sensor or actuator failure, misadjustment), poor maintenance, a poorly qualified recipe, etc. The general idea we defend here is to rely on the online generation of the history model of executed operations, reduced to the suspect ones, to identify the structure of the Bayesian network corresponding to the diagnosis model, and then to carry out the probability computations of the resulting Bayesian model in order to determine which candidates to visit first (notion of score) and thus help optimise decision-making for corrective maintenance. The overall approach therefore lies at the crossroads of a deterministic approach and a probabilistic one in a dynamic context. Beyond these methodological proposals, we developed a software application to validate our proposal on a real-world case study. The results are particularly encouraging and have been published in international conferences and submitted to the International Journal of Risk and Reliability.
Today's manufacturing systems are challenged by increasing demand diversity and volume, which result in short product life cycles and the emergence of high-mix low-volume production. Therefore, one of the main objectives in the manufacturing domain is to reduce cycle time (CT) while ensuring product quality at reduced cost. In such a competitive environment, product quality is ensured by introducing more rigorous controls at each production step, which results in extended cycle times and increased production costs. This can be reduced by introducing R2R (run-to-run) loops, where control of product quality is carried out after multiple consecutive production steps. However, a product quality drift, detected by metrology at the end of the run-to-run loop, results in stopping the respective sequence of production equipment. Manufacturing systems are equipped with sensors that provide a basis for real-time monitoring and diagnosis; however, the placement of these sensors is constrained by the equipment structure and the functions they perform. Besides this, sensors cannot be placed across all the equipment because of the associated big-data analysis challenge. This also results in non-observable components that limit our ability to support effective real-time monitoring and fault diagnosis initiatives. Consequently, production equipment in the R2R loop is stopped upon product quality drift detection at the inspection step, because we are unable to diagnose which equipment or components are responsible for the drift. As a result, production capacities are reduced not because of faulty equipment or components but because of our inability to perform efficient and effective diagnosis. In this scenario, the key challenge is to diagnose faulty equipment and localize the failure(s) behind these unscheduled equipment breakdowns. Moreover, the situation becomes more complex if the potential failure(s) are unknown and require experts' intervention before corrective maintenance can be applied. In addition, new failures can emerge as a consequence of other failures and of the delay in their localization and detection. Therefore, success in the manufacturing domain, in such a competitive environment, depends on quick and accurate fault isolation, detection and diagnosis. This thesis proposes a methodology that exploits historical data over unobserved equipment components to reduce the search space of potentially faulty components, followed by a more accurate diagnosis of failures and causes. The key focus is to improve the effectiveness and efficiency of real-time monitoring of potentially faulty components and of cause diagnosis. This research builds on the Logical Diagnosis model (Deschamps et al., 2007), which offers real-time diagnosis in an automated production system. It reduces the search space for faulty equipment from a given production flow and optimizes the learning step for the subsequent BN. The BN model, based on the graphical structure received from the Logical Diagnosis model, then computes joint and conditional probabilities for each node to support corrective maintenance decisions upon scheduled and unscheduled equipment breakdowns. The proposed method enables real-time diagnosis for corrective maintenance in fully or semi-automated manufacturing systems.
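The scoring idea, ranking the suspect equipment returned by the logical diagnosis step by their posterior probability given the detected drift, can be illustrated with a deliberately small example. The equipment names, priors and likelihoods below are invented, and the hypotheses are assumed mutually exclusive, which is a simplification of the Bayesian network actually learnt in the thesis.

```python
# Suspect set returned by the logical diagnosis step (hypothetical equipment names),
# with prior fault probabilities and the likelihood of the observed quality drift
# under the hypothesis "this equipment alone is at fault".
priors = {"etcher_A": 0.02, "cvd_B": 0.05, "litho_C": 0.01, "no_fault": 0.92}
drift_likelihood = {"etcher_A": 0.60, "cvd_B": 0.30, "litho_C": 0.80, "no_fault": 0.02}

# Bayes rule over mutually exclusive hypotheses: posterior is proportional to prior x likelihood.
unnorm = {h: priors[h] * drift_likelihood[h] for h in priors}
z = sum(unnorm.values())
scores = {h: p / z for h, p in unnorm.items()}

# Visit candidates in decreasing score order during corrective maintenance.
for h, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{h:10s} {s:.3f}")
```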
48

Wang, Zhiyi. "Évaluation du risque sismique par approches neuronales." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLC089/document.

Full text
Abstract:
Seismic probabilistic safety assessment (PSA) is one of the most widely used methodologies to evaluate and ensure the performance of critical infrastructures, such as nuclear power plants, under seismic excitation. The thesis discusses the following aspects: (i) construction of meta-models with neural networks to build the relations between seismic intensity measures and structural demand parameters, in order to accelerate fragility analysis; the uncertainty related to substituting finite element models with neural networks is studied; (ii) proposal of a Bayesian methodology with adaptive neural networks, in order to take into account different sources of information, including numerical simulation results, reference values provided in the literature and post-earthquake assessments, in the computation of fragility curves; (iii) computation of ground motion prediction equations with neural networks, where the epistemic uncertainties of the input parameters, such as the magnitude and the average thirty-metre shear-wave velocity, are taken into account in the developed methodology; (iv) computation of the annual failure rate by combining the results of the fragility and seismic hazard analyses, where the fragility curves are determined by the adaptive neural network while the hazard curves are obtained from the ground motion prediction equations built with neural networks. The proposed methodologies are applied to several industrial cases, such as the KARISMA benchmark and the SMART model.
Seismic probabilistic risk assessment (SPRA) is one of the most widely used methodologies to assess and to ensure the performance of critical infrastructures, such as nuclear power plants (NPPs), faced with earthquake events. SPRA adopts a probabilistic approach to estimate the frequency of occurrence of severe consequences of NPPs under seismic conditions. The thesis provides discussions on the following aspects: (i) Construction of meta-models with ANNs to build the relations between seismic IMs and engineering demand parameters of the structures, for the purpose of accelerating the fragility analysis. The uncertainty related to the substitution of FEM models by ANNs is investigated. (ii) Proposal of a Bayesian-based framework with adaptive ANNs, to take into account different sources of information, including numerical simulation results, reference values provided in the literature and damage data obtained from post-earthquake observations, in the fragility analysis. (iii) Computation of GMPEs with ANNs. The epistemic uncertainties of the GMPE input parameters, such as the magnitude and the averaged thirty-meter shear wave velocity, are taken into account in the developed methodology. (iv) Calculation of the annual failure rate by combining results from the fragility and hazard analyses. The fragility curves are determined by the adaptive ANN, whereas the hazard curves are obtained from the GMPEs calibrated with ANNs. The proposed methodologies are applied to various industrial case studies, such as the KARISMA benchmark and the SMART model.
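Step (iv) combines a fragility curve with a seismic hazard curve. As a rough illustration of that convolution, the sketch below uses a lognormal fragility and a power-law hazard curve as stand-ins for the ANN-based models of the thesis; all numerical values are invented.

```python
import numpy as np
from scipy import stats

# Lognormal fragility curve: P(failure | IM = a), median capacity Am, log-std beta.
Am, beta = 0.9, 0.45                    # illustrative values (units of g)
def fragility(a):
    return stats.norm.cdf(np.log(a / Am) / beta)

# Power-law hazard curve: annual rate of exceeding IM level a.
k0, k = 1e-4, 2.5                       # illustrative fit parameters
def hazard(a):
    return k0 * a ** (-k)

# Annual failure rate: lambda_f = integral of P_f(a) |dH(a)|, approximated on a grid.
a = np.linspace(0.05, 3.0, 2000)        # IM grid truncated for the illustration
d_hazard = -np.diff(hazard(a))          # positive increments of the exceedance rate
a_mid = 0.5 * (a[:-1] + a[1:])
lambda_f = np.sum(fragility(a_mid) * d_hazard)
print(f"annual failure rate ~ {lambda_f:.2e} /yr")
```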
49

Luna, José Eduardo Ochoa. "Lógicas probabilísticas com relações de independência: representação de conhecimento e aprendizado de máquina." Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/3/3152/tde-17082011-090935/.

Full text
Abstract:
The combination of logic and probability (probabilistic logics) has been a much-studied topic in recent decades. Most proposals for these formalisms assume that both the logical sentences and the probabilities are specified by experts. However, the growing availability of relational data suggests using machine learning techniques to produce logical sentences and to estimate probabilities. This work presents contributions in terms of knowledge representation and learning. First, a first-order probabilistic logical language is proposed. Then, three algorithms for learning the probabilistic description logic crALC are presented: a probabilistic algorithm with an emphasis on inducing sentences based on Noisy-OR classifiers; an algorithm that focuses on inducing probabilistic inclusions (the probabilistic component of crALC); and an algorithm of a probabilistic nature that induces either logical sentences or probabilistic inclusions. The learning proposals are evaluated in terms of accuracy on two tasks: learning description logics and learning probabilistic terminologies in crALC. In addition, applications of these algorithms to information retrieval processes are discussed: two approaches for semantic query extension on the Web using probabilistic ontologies are presented.
The combination of logic and probabilities (probabilistic logics) is a topic that has been extensively explored in past decades. The majority of work in probabilistic logics assumes that both logical sentences and probabilities are specified by experts. As relational data is increasingly available, machine learning algorithms have been used to induce both logical sentences and probabilities. This work contributes to knowledge representation and learning. First, a first-order probabilistic logic is proposed. Then, three algorithms for learning the probabilistic description logic crALC are given: a probabilistic algorithm focused on learning logical sentences and based on Noisy-OR classifiers; an algorithm that aims at learning probabilistic inclusions (the probabilistic component of crALC); and an algorithm that, using a probabilistic setting, induces either logical sentences or probabilistic inclusions. These proposals have been evaluated in two settings, by measuring the learning accuracy of description logics and of probabilistic terminologies. In addition, the learning algorithms have been applied to information retrieval processes: two approaches for semantic query extension through probabilistic ontologies are discussed.
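The Noisy-OR model underlying the first learning algorithm assumes that each active parent independently activates the child with its own probability, possibly complemented by a leak term; a minimal sketch of that combination rule follows (parameter values are illustrative).

```python
from typing import Sequence

def noisy_or(active_parents: Sequence[bool], p_activate: Sequence[float], leak: float = 0.0) -> float:
    """P(child = 1 | parents), assuming each active parent i independently
    activates the child with probability p_activate[i], plus a leak term."""
    q = 1.0 - leak
    for active, p in zip(active_parents, p_activate):
        if active:
            q *= 1.0 - p
    return 1.0 - q

# Example: two of three parent concepts hold for an individual.
print(noisy_or([True, False, True], [0.7, 0.5, 0.4], leak=0.05))
```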
50

De Galizia, Antonello. "Évaluation probabiliste de l’efficacité des barrières humaines prises dans leur contexte organisationnel." Thesis, Université de Lorraine, 2017. http://www.theses.fr/2017LORR0018/document.

Full text
Abstract:
The work carried out in this CIFRE thesis is part of a long-standing collaboration between CRAN and EDF R&D, one of whose major results was the development of a risk analysis methodology called Integrated Risk Analysis (AiDR). This methodology addresses sociotechnical systems from the technical, human and organisational angles, for systems whose equipment is subject to maintenance and/or operation actions. The thesis therefore aims to propose an evolution of the so-called "human barrier" model developed within AiDR to evaluate the effectiveness of these human actions taken in their organisational context. Our major contributions are organised around three axes: 1. An improvement of the pre-existing structure of the human barrier model, leading to a model based on performance shaping factors (PSFs) provided by probabilistic Human Reliability Assessment (HRA) methods; 2. The integration of resilience and the modelling of the interaction between resilient and pathogenic mechanisms affecting the effectiveness of actions within probabilistic causal relations; 3. A global treatment of expert judgements consistent with the mathematical structure of the proposed model, allowing its parameters to be estimated objectively. This treatment is based on the construction of a questionnaire designed to guide the expert towards the evaluation of joint effects arising from the interaction between pathogenic and resilient mechanisms. All of the proposed contributions have been validated on an application case concerning a human barrier put in place for external flooding of an EDF electricity generation unit.
The work carried out in this CIFRE PhD thesis is part of a long-term collaboration between CRAN and EDF R&D, one of the major results of which was the development of a risk analysis methodology called Integrated Risk Analysis (AiDR). This methodology deals with sociotechnical systems, considered from technical, human and organizational points of view, whose equipment is subject to maintenance and/or operation activities. This thesis aims to propose an evolution of the so-called "human barrier" model developed in the AiDR in order to evaluate the effectiveness of these human actions taken in their organizational context. Our major contributions are organized around three axes: 1. Improvement of the pre-existing structure of the human barrier model, leading to a model based on performance shaping factors (PSFs) provided by Human Reliability Assessment (HRA) methods; 2. Integration of resilience and modeling of the interaction between resilient and pathogenic mechanisms impacting the effectiveness of activities within a probabilistic causal framework; 3. A global treatment of expert judgments, consistent with the mathematical structure of the proposed model, allowing its parameters to be estimated objectively. This treatment is based on a questionnaire designed to guide experts towards the evaluation of joint effects resulting from the interaction between pathogenic and resilient mechanisms. All of the proposed contributions have been validated on an application case involving a human barrier put in place for an external flooding event at an EDF power plant.
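The thesis combines pathogenic and resilient mechanisms through expert-elicited probabilistic relations; as a much simpler point of reference, the classical SLIM technique from HRA maps a weighted sum of PSF ratings onto a log-scaled human error probability. The sketch below implements only that SLIM-style mapping, with invented weights and anchor points, and is not the AiDR barrier model.

```python
import math

# PSF ratings on [0, 1] (1 = most favourable) and normalised importance weights.
# Ratings, weights and anchor points are invented for illustration.
ratings = {"training": 0.8, "procedures": 0.6, "stress": 0.4, "time_available": 0.7}
weights = {"training": 0.35, "procedures": 0.25, "stress": 0.20, "time_available": 0.20}

sli = sum(weights[f] * ratings[f] for f in ratings)   # success likelihood index

# Calibrate log10(HEP) = a * SLI + b from two anchor tasks of known HEP.
sli_good, hep_good = 0.9, 1e-4
sli_bad, hep_bad = 0.2, 1e-1
a = (math.log10(hep_good) - math.log10(hep_bad)) / (sli_good - sli_bad)
b = math.log10(hep_good) - a * sli_good

hep = 10 ** (a * sli + b)
print(f"SLI = {sli:.2f}, estimated HEP = {hep:.1e}")
```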
