Theses on the topic "Continuous Time Bayesian Networks"

Below are the 50 best theses for research on the topic "Continuous Time Bayesian Networks". Where this information is included in the metadata, the full text of each publication can be downloaded in PDF format and its abstract consulted online. Browse theses from a wide range of disciplines and organize your bibliography.

1

Nodelman, Uri D. « Continuous time Bayesian networks / ». May be available electronically, 2007. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.

Full text
2

ACERBI, ENZO. « Continuous time Bayesian networks for gene networks reconstruction ». Doctoral thesis, Università degli Studi di Milano-Bicocca, 2014. http://hdl.handle.net/10281/52709.

Full text
Abstract:
Dynamic aspects of gene regulatory networks are typically investigated by measuring system variables at multiple time points. Current state-of-the-art computational approaches for reconstructing gene networks directly build on such data, making a strong assumption that the system evolves in a synchronous fashion at fixed points in time. However, omics data are now being generated with increasing time-course granularity. Thus, modellers can represent the system as evolving in continuous time and improve the models' expressiveness. Continuous time Bayesian networks are proposed as a new approach for gene network reconstruction from time-course expression data. Their performance was compared to two state-of-the-art methods: dynamic Bayesian networks and Granger causality analysis. On simulated data, the methods were compared for networks of increasing dimension, for measurements taken at different time granularities, and for measurements evenly vs. unevenly spaced over time. Continuous time Bayesian networks outperformed the other methods in the accuracy of regulatory interactions learnt from data for all network dimensions. Furthermore, their performance degraded smoothly as the dimension of the network increased. Continuous time Bayesian networks were significantly better than dynamic Bayesian networks for all time granularities tested and better than Granger causality for dense time series. Both continuous time Bayesian networks and Granger causality performed robustly for unevenly spaced time series, with no significant loss of performance compared to the evenly spaced case, while the same did not hold true for dynamic Bayesian networks. The comparison included the IRMA experimental datasets, which confirmed the effectiveness of the proposed method. Continuous time Bayesian networks were then applied to elucidate the regulatory mechanisms controlling murine T helper 17 (Th17) cell differentiation and were found to be effective in discovering well-known regulatory mechanisms as well as new plausible biological insights. Continuous time Bayesian networks proved effective on networks of both small and large dimension and particularly suitable when the measurements are not evenly distributed over time. Reconstruction of the murine Th17 cell differentiation network using continuous time Bayesian networks revealed several autocrine loops, suggesting that Th17 cells may be auto-regulating their own differentiation process.
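The quantity at the heart of such comparisons is the CTBN likelihood of a fully observed trajectory, which factorizes per node into exponential sojourn terms and transition terms given the parents' states. The following Python sketch is illustrative only; the conditional intensity matrices and toy trajectory are invented, not taken from the thesis:

```python
import numpy as np

# Hypothetical CIM for a binary node X conditioned on a binary parent U.
# cim[u] is a 2x2 intensity matrix: off-diagonals are transition rates,
# rows sum to zero.
cim = {
    0: np.array([[-0.1, 0.1], [0.5, -0.5]]),
    1: np.array([[-2.0, 2.0], [0.3, -0.3]]),
}

def ctbn_node_loglik(segments, cim):
    """Log-likelihood of one node's fully observed trajectory.

    segments: list of (parent_state, node_state, dwell_time, next_state),
    where next_state is None if the segment ends at the observation cutoff
    rather than by a transition of the node itself.
    """
    ll = 0.0
    for u, x, t, x_next in segments:
        q = -cim[u][x, x]                    # total intensity of leaving x
        ll += -q * t                         # exponential sojourn term
        if x_next is not None:
            ll += np.log(cim[u][x, x_next])  # which transition fired
    return ll

# Toy trajectory: parent on, node dwells in 0 for 0.4, jumps to 1, etc.
traj = [(1, 0, 0.4, 1), (1, 1, 2.0, 0), (0, 0, 5.0, None)]
print(ctbn_node_loglik(traj, cim))
```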
3

CODECASA, DANIELE. « Continuous time Bayesian network classifiers ». Doctoral thesis, Università degli Studi di Milano-Bicocca, 2014. http://hdl.handle.net/10281/80691.

Full text
Abstract:
Streaming data are relevant to finance, computer science, and engineering, and they are becoming increasingly important to medicine and biology. Continuous time Bayesian networks are designed for efficiently analyzing multivariate streaming data, exploiting the conditional independencies in continuous time homogeneous Markov processes. Continuous time Bayesian network classifiers are a specialization of continuous time Bayesian networks designed for multivariate streaming data classification when the time duration of events matters and the class occurs in the future. Continuous time Bayesian network classifiers are presented and analyzed. Structural learning is introduced for this class of models when complete data are available. A conditional log-likelihood scoring is derived to improve on marginal log-likelihood structural learning of continuous time Bayesian network classifiers. The expectation maximization algorithm is developed to address the unsupervised learning of continuous time Bayesian network classifiers when the class is unknown. The performance of continuous time Bayesian network classifiers for classification and clustering is analyzed with the help of a rich set of numerical experiments on synthetic and real data sets. Continuous time Bayesian network classifiers learned by maximizing marginal log-likelihood and conditional log-likelihood are compared with continuous time naive Bayes and dynamic Bayesian networks. Results show that the conditional log-likelihood scoring combined with Bayesian parameter estimation outperforms marginal log-likelihood scoring and dynamic Bayesian networks in the case of supervised classification. Conditional log-likelihood scoring becomes even more effective when the amount of available data is limited. Continuous time Bayesian network classifiers outperform dynamic Bayesian networks even on data sets generated from discrete time models. Clustering results show that in the case of unsupervised learning the marginal log-likelihood score is the most effective way to learn continuous time Bayesian network classifiers. Continuous time models again outperform dynamic Bayesian networks even when applied to discrete time data sets. A Java software toolkit implementing the main theoretical achievements of the thesis has been designed and developed under the name CTBNCToolkit. It provides a free stand-alone toolkit for multivariate trajectory classification and an open source library, which can be extended in accordance with the GPL v.2.0 license. The CTBNCToolkit allows classification and clustering of multivariate trajectories using continuous time Bayesian network classifiers. Structural learning, maximizing the marginal log-likelihood and conditional log-likelihood scores, is provided.
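For intuition, the decision rule of such a classifier reduces, in the simplest one-node case, to comparing class priors plus class-conditional trajectory log-likelihoods. A minimal Python sketch with hypothetical intensity matrices and priors (this is not the CTBNCToolkit API):

```python
import numpy as np

# Illustrative CTBN classifier decision rule: pick the class maximizing
# log P(class) + log P(trajectory | class). Class-conditional dynamics are
# reduced here to a single 2-state node per class.
cims = {  # hypothetical per-class intensity matrices
    "normal":  np.array([[-0.2, 0.2], [0.4, -0.4]]),
    "anomaly": np.array([[-3.0, 3.0], [0.1, -0.1]]),
}
log_prior = {"normal": np.log(0.9), "anomaly": np.log(0.1)}

def traj_loglik(segments, cim):
    # segments: list of (state, dwell_time, next_state or None)
    ll = 0.0
    for x, t, x_next in segments:
        ll += cim[x, x] * t                 # equals -q_x * t (sojourn term)
        if x_next is not None:
            ll += np.log(cim[x, x_next])    # transition term
    return ll

def classify(segments):
    scores = {c: log_prior[c] + traj_loglik(segments, cims[c]) for c in cims}
    return max(scores, key=scores.get)

print(classify([(0, 0.2, 1), (1, 0.1, 0), (0, 0.3, 1)]))  # fast switching
```

On this toy trajectory the rapid state switching favours the high-rate "anomaly" dynamics despite its low prior.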
4

VILLA, SIMONE. « Continuous Time Bayesian Networks for Reasoning and Decision Making in Finance ». Doctoral thesis, Università degli Studi di Milano-Bicocca, 2015. http://hdl.handle.net/10281/69953.

Full text
Abstract:
The analysis of the huge amount of financial data made available by electronic markets calls for new models and techniques to effectively extract knowledge to be exploited in an informed decision-making process. The aim of this thesis is to introduce probabilistic graphical models that can be used to reason and to perform actions in such a context. In the first part of this thesis, we present a framework which exploits Bayesian networks to perform portfolio analysis and optimization in a holistic way. It leverages the compact and efficient representation of high dimensional probability distributions offered by Bayesian networks and their ability to perform evidential reasoning in order to optimize the portfolio according to different economic scenarios. In many cases, we would like to reason about market changes over time, i.e. to express queries as probability distributions over time. Continuous time Bayesian networks can be used to address this issue. In the second part of the thesis, we show how it is possible to use this model to tackle real financial problems and we describe two notable extensions. The first one concerns classification, where we introduce an algorithm for learning these classifiers from Big Data, and we describe their straightforward application to the foreign exchange prediction problem in the high frequency domain. The second one is related to non-stationary domains, where we explicitly model the presence of statistical dependencies in multivariate time series while allowing them to change over time. In the third part of the thesis, we describe the use of continuous time Bayesian networks within the Markov decision process framework, which provides a model for sequential decision-making under uncertainty. We introduce a method to control continuous time dynamic systems, based on this framework, that relies on additive and context-specific features to scale up to large state spaces. Finally, we show the performance of our method in a simplified but meaningful trading domain.
5

Fan, Yu. « Continuous time Bayesian Network approximate inference and social network applications ». Diss., [Riverside, Calif.] : University of California, Riverside, 2009. http://proquest.umi.com/pqdweb?index=0&did=1957308751&SrchMode=2&sid=1&Fmt=2&VInst=PROD&VType=PQD&RQT=309&VName=PQD&TS=1268330625&clientId=48051.

Full text
Abstract:
Thesis (Ph. D.)--University of California, Riverside, 2009.
Includes abstract. Title from first page of PDF file (viewed March 8, 2010). Available via ProQuest Digital Dissertations. Includes bibliographical references (p. 130-133). Also issued in print.
6

GATTI, ELENA. « Graphical models for continuous time inference and decision making ». Doctoral thesis, Università degli Studi di Milano-Bicocca, 2011. http://hdl.handle.net/10281/19575.

Full text
Abstract:
Reasoning about the evolution of systems in time is both an important and challenging task. We are interested in probability distributions over events in time, where observations are often irregularly spaced. Probabilistic models have been widely used to accomplish this task, but they have some limits. Indeed, Hidden Markov Models and Dynamic Bayesian Networks in general require the specification of a time granularity between consecutive observations. This requirement leads to computationally inefficient learning and inference procedures when the adopted time granularity is finer than the time spent between consecutive observations, and to possible losses of information in the opposite case. The framework of Continuous Time Bayesian Networks (CTBN) overcomes this limit, allowing the representation of temporal dynamics over a structured state space. In this dissertation an overview of the semantic and inference aspects of the framework of CTBNs is proposed. The limits of exact inference are overcome using approximate inference; in particular, the cluster-graph message passing algorithm and Gibbs sampling have been investigated. The CTBN has been applied to a real case study of diagnosis of cardiogenic heart failure, developed in collaboration with domain experts. Moving from the task of simply reasoning under uncertainty to the task of deciding how to act in the world, a part of the dissertation is devoted to graphical models that allow the inclusion of decisions. We describe Influence Diagrams, which extend Bayesian Networks by introducing decisions and utilities. We then discuss an approach for approximate representation of optimal strategies in influence diagrams. The contributions of the dissertation are the following: design and development of a CTBN software package implementing two of the most important inference algorithms (Expectation Propagation and Gibbs Sampling), development of a realistic diagnosis scenario of cardiogenic heart failure (to the best of our knowledge the first clinical application of this type), and the approach of information enhancement to reduce the domain of the policy in large influence diagrams, together with an important contribution concerning the identification of informational links to add in the graph.
7

Alharbi, Randa. « Bayesian inference for continuous time Markov chains ». Thesis, University of Glasgow, 2019. http://theses.gla.ac.uk/40972/.

Full text
Abstract:
Continuous time Markov chains (CTMCs) are a flexible class of stochastic models that have been employed in a wide range of applications, from the timing of computer protocols, through analysis of reliability in engineering, to models of biochemical networks in molecular biology. These models are defined as a state system with continuous-time transitions between the states. Extensive work has been historically performed to enable convenient and flexible definition, simulation, and analysis of continuous time Markov chains. This thesis considers the problem of Bayesian parameter inference on these models and investigates computational methodologies to enable such inference. Bayesian inference over continuous time Markov chains is particularly challenging as the likelihood cannot be evaluated in closed form. To overcome the statistical problems associated with evaluation of the likelihood, advanced algorithms based on Monte Carlo have been used to enable Bayesian inference without explicit evaluation of the likelihoods. An additional class of approximation methods, known as approximate Bayesian computation (ABC), has been suggested to handle such inference problems. Novel Markov chain Monte Carlo (MCMC) approaches were recently proposed to allow exact inference. The contribution of this thesis is a discussion of the techniques and challenges in implementing these inference methods and an extensive comparison of these approaches on two case studies in systems biology. We investigate how the algorithms can be designed and tuned to work on CTMC models, and to achieve an accurate estimate of the posteriors with reasonable computational cost. Through this comparison, we investigate how to avoid some practical issues with accuracy and computational cost, for example by selecting an optimal proposal distribution and introducing a resampling step within the sequential Monte Carlo method. Within the implementation of the ABC methods we investigate using an adaptive tolerance schedule to maximise the efficiency of the algorithm and to reduce the computational cost.
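Of the methods compared, rejection ABC is the easiest to sketch: simulate the CTMC forward with the Gillespie algorithm and keep parameter draws whose summary statistics fall within a tolerance of the observed ones. The model, prior, summary statistic and tolerance below are all illustrative assumptions, not the thesis's case studies:

```python
import numpy as np

rng = np.random.default_rng(0)

def gillespie_immigration_death(lam, mu, x0, t_end):
    """Simulate an immigration-death CTMC to time t_end; return final count."""
    t, x = 0.0, x0
    while True:
        rate = lam + mu * x              # total event rate
        t += rng.exponential(1.0 / rate)
        if t > t_end:
            return x
        if rng.random() < lam / rate:
            x += 1                       # immigration
        else:
            x -= 1                       # death

# "Observed" data generated with a known rate, to be rediscovered by ABC.
true_lam, mu, x0, t_end = 2.0, 0.5, 0, 10.0
obs = np.array([gillespie_immigration_death(true_lam, mu, x0, t_end)
                for _ in range(50)])

# Rejection ABC: draw lam from the prior, simulate, keep draws whose
# summary statistic (mean endpoint count) is close to the observed one.
accepted = []
for _ in range(1000):
    lam = rng.uniform(0.1, 5.0)                        # flat prior on lam
    sim = np.array([gillespie_immigration_death(lam, mu, x0, t_end)
                    for _ in range(15)])
    if abs(sim.mean() - obs.mean()) < 0.5:             # tolerance epsilon
        accepted.append(lam)

print(f"ABC posterior mean of lam ~ {np.mean(accepted):.2f} (truth {true_lam})")
```

Shrinking the tolerance trades acceptance rate for accuracy, which is exactly what the adaptive tolerance schedules mentioned above automate.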
8

Parton, Alison. « Bayesian inference for continuous-time step-and-turn movement models ». Thesis, University of Sheffield, 2018. http://etheses.whiterose.ac.uk/20124/.

Full text
Abstract:
This thesis concerns the statistical modelling of animal movement paths given observed GPS locations. With observations being in discrete time, mechanistic models of movement are often formulated as such. This popularity remains despite an inability to compare analyses through scale invariance and common problems handling irregularly timed observations. A natural solution is to formulate in continuous time, yet uptake of this has been slow, often excused by a difficulty in interpreting the ‘instantaneous’ parameters associated with a continuous-time model. The aim here was to bolster usage by developing a continuous-time model with interpretable parameters, similar to those of popular discrete-time models that use turning angles and step lengths to describe the movement process. Movement is defined by a continuous-time, joint bearing and speed process, the parameters of which are dependent on a continuous-time behavioural switching process, thus creating a flexible class of movement models. Further, we allow for the observed locations derived from this process to have unknown error. Markov chain Monte Carlo inference is presented for parameters given irregular, noisy observations. The approach involves augmenting the observed locations with a reconstruction of the underlying continuous-time process. Example implementations showcasing this method are given featuring simulated and real datasets. Data from elk (Cervus elaphus), which have previously been modelled in discrete time, demonstrate the interpretable nature of the model, finding clear differences in behaviour over time and insights into short-term behaviour that could not have been obtained in discrete time. Observations from reindeer (Rangifer tarandus) reveal the effect observation error has on the identification of large turning angles—a feature often inferred in discrete-time modelling. Scalability to realistically large datasets is shown for lesser black-backed gull (Larus fuscus) data.
9

Tucker, Allan Brice James. « The automatic explanation of Multivariate Time Series with large time lags ». Thesis, Birkbeck (University of London), 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.246924.

Full text
10

CRISTINI, ALESSANDRO. « Continuous-time spiking neural networks : paradigm and case studies ». Doctoral thesis, Università degli Studi di Roma "Tor Vergata", 2014. http://hdl.handle.net/2108/202297.

Full text
Abstract:
In the last decades many neuron models have been proposed in order to emulate the spiking behavior of cortical neurons, from the simplest Integrate-and-Fire to the most bio-realistic Hodgkin-Huxley model. The choice of which model to use depends on the trade-off between bio-plausibility and computational cost, which may be related to the specific purpose. The modeling of a continuous-time spiking neural network is the main purpose of this thesis. The term “continuous-time” refers to the fact that a spike can occur at any given time; thus, in order to perform exact computations without loss of information, an exact ad hoc event-driven simulation strategy has been implemented. In particular, the latter is suitable for the simplified neuron model used here. Despite its simplicity, the model shows some important bio-plausible behaviors, such as subthreshold decay, spike latency, refractoriness, etc. Moreover, some bio-inspired synaptic plasticity rules have been implemented (e.g., STDP). With the aim of taking into account non-local interconnections among populations of neurons, gamma-distributed synaptic delays are also introduced. These characteristics make it possible to investigate various scenarios in which the dynamics shown by the network can be more bio-realistic. Further, some case studies are illustrated: the jitter phenomenon and “path multimodality” in feedforward networks, and dynamical activity groups for CNN-like topologies. Finally, future directions of this work are briefly discussed.
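The event-driven idea described above can be sketched in a few lines: instead of stepping a clock, the simulator processes a priority queue of spike-delivery events at real-valued times, updating each neuron's state analytically between events. Everything here (leaky integrate-and-fire dynamics, weights, gamma delay parameters) is an illustrative assumption, far simpler than the thesis's neuron model:

```python
import heapq
import numpy as np

rng = np.random.default_rng(5)
N, tau, v_th = 20, 20.0, 1.0             # neurons, membrane decay (ms), threshold
w_ext, w_rec = 0.30, 0.05                # external / recurrent synaptic weights
delays = rng.gamma(shape=2.0, scale=1.0, size=(N, N))  # gamma-distributed (ms)
v = np.zeros(N)                          # membrane potentials
last = np.zeros(N)                       # time of each neuron's last update

# Seed the event queue with Poisson external input spikes for each neuron.
events = []
for j in range(N):
    t = rng.exponential(2.0)
    while t < 100.0:
        events.append((t, j, w_ext))
        t += rng.exponential(2.0)
heapq.heapify(events)

n_spikes = 0
while events:
    t, j, w = heapq.heappop(events)      # next spike delivery, any real time
    v[j] *= np.exp(-(t - last[j]) / tau) # exact subthreshold decay since last event
    last[j] = t
    v[j] += w                            # synaptic kick
    if v[j] >= v_th:                     # neuron j fires and resets
        v[j] = 0.0
        n_spikes += 1
        for k in range(N):               # schedule delayed recurrent deliveries
            if k != j:
                heapq.heappush(events, (t + delays[j, k], k, w_rec))

print(f"{n_spikes} spikes in ~100 ms of event-driven simulation")
```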
11

Elshamy, Wesam Samy. « Continuous-time infinite dynamic topic models ». Diss., Kansas State University, 2012. http://hdl.handle.net/2097/15176.

Full text
Abstract:
Doctor of Philosophy
Department of Computing and Information Sciences
William Henry Hsu
Topic models are probabilistic models for discovering topical themes in collections of documents. In real world applications, these models provide us with the means of organizing what would otherwise be unstructured collections. They can help us cluster a huge collection into different topics or find a subset of the collection that resembles the topical theme found in an article at hand. The first wave of topic models developed were able to discover the prevailing topics in a big collection of documents spanning a period of time. It was later realized that these time-invariant models were not capable of modeling 1) the time-varying number of topics they discover and 2) the time-changing structure of these topics. A few models were developed to address these two deficiencies. The online-hierarchical Dirichlet process models the documents with a time-varying number of topics, and varies the structure of the topics over time as well. However, it relies on document order, not timestamps, to evolve the model over time. The continuous-time dynamic topic model evolves topic structure in continuous time, but uses a fixed number of topics over time. In this dissertation, I present a model, the continuous-time infinite dynamic topic model, that combines the advantages of these two models: 1) the online-hierarchical Dirichlet process, and 2) the continuous-time dynamic topic model. More specifically, the model I present is a probabilistic topic model that does the following: 1) it changes the number of topics over continuous time, and 2) it changes the topic structure over continuous time. I compared the model I developed with the two other models under different settings. The results obtained were favorable to my model and showed the need for a model that has a continuous-time varying number of topics and topic structure.
12

Thomas, Zachary Micah. « Bayesian Hierarchical Space-Time Clustering Methods ». The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1435324379.

Full text
13

Acciaroli, Giada. « Calibration of continuous glucose monitoring sensors by time-varying models and Bayesian estimation ». Doctoral thesis, Università degli studi di Padova, 2018. http://hdl.handle.net/11577/3425746.

Full text
Abstract:
Minimally invasive continuous glucose monitoring (CGM) sensors are wearable medical devices that provide frequent (e.g., 1-5 min sampling rate) real-time measurements of glucose concentration for several consecutive days. This can be of great help in the daily management of diabetes. Most of the CGM systems commercially available today have a wire-based electrochemical sensor, usually placed in the subcutaneous tissue, which measures a "raw" electrical current signal via a glucose-oxidase electrochemical reaction. Observations of the raw electrical signal are frequently revealed by the sensor on a fine, uniformly spaced time grid. These samples of electrical nature are converted in real time to interstitial glucose (IG) concentration levels through a calibration process by fitting a few blood glucose (BG) concentration measurements, sparsely collected by the patient through fingerprick. Usually, to cope with such a process, CGM sensor manufacturers employ linear calibration models to approximate, albeit in limited time intervals, the nonlinear relationship between electrical signal and glucose concentration. Thus, on the one hand, frequent calibrations (e.g., two per day) are required to guarantee good sensor accuracy. On the other hand, each calibration requires patients to add uncomfortable extra actions to the many already needed in the routine of diabetes management. The aim of this thesis is to develop new calibration algorithms for minimally invasive CGM sensors able to ensure good sensor accuracy with the minimum number of calibrations. In particular, we propose i) to replace the time-invariant gain and offset conventionally used by linear calibration models with more sophisticated time-varying functions valid for multiple-day periods, with unknown model parameters for which an a priori statistical description is available from independent training sets; ii) to numerically estimate the calibration model parameters by means of a Bayesian estimation procedure that exploits the a priori information on model parameters in addition to some BG samples sparsely collected by the patient. The thesis is organized in 6 chapters. In Chapter 1, after a background introduction on CGM sensor technologies, the calibration problem is illustrated. Then, some state-of-the-art calibration techniques are briefly discussed with their open problems, which lead to the aims of the thesis illustrated at the end of the chapter. In Chapter 2, the datasets used for the implementation of the calibration techniques are described, together with the performance metrics and the statistical analysis tools employed to assess the quality of the results. In Chapter 3, we illustrate a recently proposed calibration algorithm (Vettoretti et al., IEEE Trans Biomed Eng 2016), which represents the starting point of the study proposed in this thesis. In particular, we demonstrate that, thanks to the development of a time-varying day-specific Bayesian prior, the algorithm becomes able to reduce the calibration frequency from two to one per day. However, the linear calibration model used by the algorithm has a domain of validity limited to certain time intervals, not allowing calibrations to be reduced further to less than one per day and calling for the development of a new calibration model valid for multiple-day periods like that developed in the remainder of this thesis.
In Chapter 4, a novel Bayesian calibration algorithm working in a multi-day framework (referred to as the Bayesian multi-day, BMD, calibration algorithm) is presented. It is based on a multiple-day model of sensor time-variability with second order statistical priors on its unknown parameters. In each patient-sensor realization, the numerical values of the calibration model parameters are determined by a Bayesian estimation procedure exploiting the BG samples sparsely collected by the patient. In addition, the distortion introduced by the BG-to-IG kinetics is compensated during parameter identification via non-parametric deconvolution. The BMD calibration algorithm is applied to two datasets acquired with the "present-generation" Dexcom (Dexcom Inc., San Diego, CA) G4 Platinum (DG4P) CGM sensor and a "next-generation" Dexcom CGM sensor prototype (NGD). In the DG4P dataset, results show that, despite the reduction of calibration frequency (on average from 2 per day to 0.25 per day), the BMD calibration algorithm significantly improves sensor accuracy compared to the manufacturer's calibration algorithm. In the NGD dataset, performance is even better than that of the present generation, allowing calibrations to be further reduced toward zero. In Chapter 5, we analyze the potential margins for improvement of the BMD calibration algorithm and propose a further extension of the method. In particular, to cope with inter-sensor and inter-subject variability, we propose a multi-model approach and a Bayesian model selection framework (referred to as the multi-model Bayesian framework, MMBF) in which the most likely calibration model is chosen among a finite set of candidates. A preliminary assessment of the MMBF is conducted on synthetic data generated by a well-established type 1 diabetes simulation model. Results show a statistically significant accuracy improvement compared to the use of a unique calibration model. Finally, the major findings of the work carried out in this thesis, possible applications and margins for improvement are summarized in Chapter 6.
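The Bayesian estimation step at the core of such calibration algorithms can be illustrated with a deliberately simplified model: a linear calibration with a linearly drifting gain, a Gaussian prior on the parameters, and a closed-form MAP estimate from a few fingerprick references. All numbers and the model form are assumptions for the sketch; the BMD algorithm's actual model, priors and BG-to-IG deconvolution are richer:

```python
import numpy as np

# MAP estimate for a time-varying linear calibration
#   BG ~= (g0 + g1 * t) * i + b, theta = (g0, g1, b) ~ N(theta0, P0).
rng = np.random.default_rng(1)

# Sparse fingerprick calibration points: times (days), raw current, ref BG.
t = np.array([0.5, 1.0, 2.5, 4.0])          # days since sensor insertion
i = np.array([30.0, 42.0, 35.0, 50.0])      # sensor current (arbitrary units)
bg = np.array([105.0, 155.0, 125.0, 185.0]) # reference glucose (mg/dL)

X = np.column_stack([i, t * i, np.ones_like(i)])   # design matrix
theta0 = np.array([3.5, 0.0, 0.0])                 # prior mean (assumed)
P0 = np.diag([1.0, 0.1, 100.0])                    # prior covariance (assumed)
sigma2 = 10.0 ** 2                                 # measurement noise variance

# Linear-Gaussian posterior mean:
#   theta = (X'X/s2 + P0^-1)^-1 (X'y/s2 + P0^-1 theta0)
A = X.T @ X / sigma2 + np.linalg.inv(P0)
b_vec = X.T @ bg / sigma2 + np.linalg.inv(P0) @ theta0
theta = np.linalg.solve(A, b_vec)

def calibrate(current, day):
    g0, g1, b = theta
    return (g0 + g1 * day) * current + b

print(theta, calibrate(40.0, 3.0))
```

The prior is what lets the fit stay well behaved with only a handful of calibration points, which is precisely why the calibration frequency can be reduced.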
14

Howells, Timothy Paul. « Pattern recognition in physiological time-series data using Bayesian neural networks ». Thesis, University of Edinburgh, 2003. http://hdl.handle.net/1842/24717.

Full text
Abstract:
This thesis describes the application of Bayesian techniques to the analysis of a large database of physiological time series data collected during the management of patients following traumatic brain injury at the Western General Hospital in Edinburgh. The study can be divided into three main sections:
• Model validation using simulated data: Techniques are developed that show that, under certain conditions, the distribution of network outputs generated by these Bayesian neural networks correctly models the desired conditional probability density functions for a wide range of simple problems for which exact solutions can be derived. This provides the basis for using these models in a scientific context.
• Model validation using real data: Statistical prognostic modelling for head-injured patients is well advanced using simple demographic and clinical features. The Bayesian techniques developed in the previous section are applied to this problem, and the results are compared to those obtained using standard statistical techniques.
• Application of these models to physiological data: The models are now applied to the full database and used to interpret the data and provide new insight into the risk factors for head-injured patients in intensive care.
15

Murray, Lawrence. « Bayesian learning of continuous time dynamical systems with applications in functional magnetic resonance imaging ». Thesis, University of Edinburgh, 2009. http://hdl.handle.net/1842/4157.

Full text
Abstract:
Temporal phenomena in a range of disciplines are more naturally modelled in continuous-time than coerced into a discrete-time formulation. Differential systems form the mainstay of such modelling, in fields from physics to economics, geoscience to neuroscience. While powerful, these are fundamentally limited by their determinism. For the purposes of probabilistic inference, their extension to stochastic differential equations permits a continuous injection of noise and uncertainty into the system, the model, and its observation. This thesis considers Bayesian filtering for state and parameter estimation in general non-linear, non-Gaussian systems using these stochastic differential models. It identifies a number of challenges in this setting over and above those of discrete time, most notably the absence of a closed form transition density. These are addressed via a synergy of diverse work in numerical integration, particle filtering and high performance distributed computing, engineering novel solutions for this class of model. In an area where the default solution is linear discretisation, the first major contribution is the introduction of higher-order numerical schemes, particularly stochastic Runge-Kutta, for more efficient simulation of the system dynamics. Improved runtime performance is demonstrated on a number of problems, and compatibility of these integrators with conventional particle filtering and smoothing schemes discussed. Finding compatibility for the smoothing problem most lacking, the major theoretical contribution of the work is the introduction of two novel particle methods, the kernel forward-backward and kernel two-filter smoothers. By harnessing kernel density approximations in an importance sampling framework, these attain cancellation of the intractable transition density, ensuring applicability in continuous time. The use of kernel estimators is particularly amenable to parallelisation, and provides broader support for smooth densities than a sample-based representation alone, helping alleviate the well known issue of degeneracy in particle smoothers. Implementation of the methods for large-scale problems on high performance computing architectures is provided. Achieving improved temporal and spatial complexity, highly favourable runtime comparisons against conventional techniques are presented. Finally, attention turns to real world problems in the domain of Functional Magnetic Resonance Imaging (fMRI), first constructing a biologically motivated stochastic differential model of the neural and hemodynamic activity underlying the observed signal in fMRI. This model and the methodological advances of the work culminate in application to the deconvolution and effective connectivity problems in this domain.
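The baseline that the thesis builds on, a bootstrap particle filter with Euler-Maruyama propagation of the SDE between observations, is compact enough to sketch. The Ornstein-Uhlenbeck model and all settings below are illustrative, not the thesis's fMRI model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Bootstrap particle filter for an Ornstein-Uhlenbeck SDE
#   dx = -theta * x dt + sigma dW,   y_k = x(t_k) + N(0, r^2),
# with Euler-Maruyama pushing particles between observations.
theta, sigma, r, dt, steps_per_obs = 1.0, 0.5, 0.2, 0.01, 10
n_particles, n_obs = 500, 50

# Simulate a "true" path and noisy observations.
x_true, ys = 1.0, []
for _ in range(n_obs):
    for _ in range(steps_per_obs):
        x_true += -theta * x_true * dt + sigma * np.sqrt(dt) * rng.normal()
    ys.append(x_true + r * rng.normal())

particles = rng.normal(0.0, 1.0, n_particles)
estimates = []
for y in ys:
    # Propagate each particle through the SDE (Euler-Maruyama).
    for _ in range(steps_per_obs):
        particles += (-theta * particles * dt
                      + sigma * np.sqrt(dt) * rng.normal(size=n_particles))
    # Weight by the observation likelihood and resample.
    w = np.exp(-0.5 * ((y - particles) / r) ** 2)
    w /= w.sum()
    particles = particles[rng.choice(n_particles, n_particles, p=w)]
    estimates.append(particles.mean())

print(estimates[-5:])
```

The thesis's contributions replace the Euler step with higher-order stochastic Runge-Kutta integrators and the resampling-based smoother with kernel methods that avoid the intractable transition density.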
16

Dodd, Tony. « Prior knowledge for time series modelling ». Thesis, University of Southampton, 2000. https://eprints.soton.ac.uk/254110/.

Full text
17

Anishchenko, Anastasiia [Verfasser], and Oliver [Akademischer Betreuer] Mülken. « Efficiency of continuous-time quantum walks : from networks with disorder to deterministic fractals ». Freiburg : Universität, 2015. http://d-nb.info/1122592876/34.

Full text
18

Arastuie, Makan. « Generative Models of Link Formation and Community Detection in Continuous-Time Dynamic Networks ». University of Toledo / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1596718772873086.

Full text
19

Shaikh, A. D. « Modelling data and voice traffic over IP networks using continuous-time Markov models ». Thesis, Aston University, 2009. http://publications.aston.ac.uk/15385/.

Full text
Abstract:
Common approaches to IP-traffic modelling have featured the use of stochastic models, based on the Markov property, which can be classified into black box and white box models based on the approach used for modelling traffic. White box models are simple to understand, transparent and have a physical meaning attributed to each of the associated parameters. To exploit this key advantage, this thesis explores the use of simple classic continuous-time Markov models based on a white box approach to model not only the network traffic statistics but also the source behaviour with respect to the network and application. The thesis is divided into two parts. The first part focuses on the use of simple Markov and semi-Markov traffic models, starting from the simplest two-state model and moving upwards to n-state models with Poisson and non-Poisson statistics. The thesis then introduces the convenient-to-use, mathematically derived Gaussian Markov models, which are used to model the measured network IP traffic statistics. As one of its most significant contributions, the thesis establishes the significance of the second-order density statistics, revealing that, in contrast to the first-order density, they carry much more unique information on traffic sources and behaviour. The thesis then exploits Gaussian Markov models to model these unique features and finally shows how the use of simple classic Markov models coupled with second-order density statistics provides an excellent tool for capturing maximum traffic detail, which in itself is the essence of good traffic modelling. The second part of the thesis studies the ON-OFF characteristics of VoIP traffic with reference to accurate measurements of the ON and OFF periods, made from a large multi-lingual database of over 100 hours' worth of VoIP call recordings. The impact of the language, prosodic structure and speech rate of the speaker on the statistics of the ON-OFF periods is analysed and relevant conclusions are presented. Finally, an ON-OFF VoIP source model with log-normal transitions is contributed as an ideal candidate for modelling VoIP traffic, and the results of this model are compared with those of previously published work.
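The simplest model in the first part, a two-state ON-OFF source, is easy to sketch, and swapping the exponential holding times for log-normal ones gives the flavour of the VoIP model contributed in the second part. Parameters are illustrative guesses, not the measured values from the thesis:

```python
import numpy as np

rng = np.random.default_rng(3)

# Two-state ON-OFF source: with exponential holding times this is the
# classic continuous-time Markov model; the thesis's VoIP model instead
# uses log-normal ON/OFF durations, offered here as an option.
def on_off_periods(n, dist="exp"):
    if dist == "exp":
        on = rng.exponential(1.0, n)        # mean ON = 1.0 s (illustrative)
        off = rng.exponential(1.35, n)      # mean OFF = 1.35 s (illustrative)
    else:  # log-normal; parameters purely illustrative
        on = rng.lognormal(mean=-0.5, sigma=0.8, size=n)
        off = rng.lognormal(mean=-0.2, sigma=0.9, size=n)
    return on, off

on, off = on_off_periods(10000, dist="lognorm")
activity = on.sum() / (on.sum() + off.sum())   # long-run fraction of talk time
print(f"speech activity factor ~ {activity:.2f}")
```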
20

Burchett, Woodrow. « Improving the Computational Efficiency in Bayesian Fitting of Cormack-Jolly-Seber Models with Individual, Continuous, Time-Varying Covariates ». UKnowledge, 2017. http://uknowledge.uky.edu/statistics_etds/27.

Full text
Abstract:
The extension of the CJS model to include individual, continuous, time-varying covariates relies on the estimation of covariate values on occasions on which individuals were not captured. Fitting this model in a Bayesian framework typically involves the implementation of a Markov chain Monte Carlo (MCMC) algorithm, such as a Gibbs sampler, to sample from the posterior distribution. For large data sets with many missing covariate values that must be estimated, this creates a computational issue, as each iteration of the MCMC algorithm requires sampling from the full conditional distributions of each missing covariate value. This dissertation examines two solutions to address this problem. First, I explore variational Bayesian algorithms, which derive inference from an approximation to the posterior distribution that can be fit quickly in many complex problems. Second, I consider an alternative approximation to the posterior distribution derived by truncating the individual capture histories in order to reduce the number of missing covariates that must be updated during the MCMC sampling algorithm. In both cases, the increased computational efficiency comes at the cost of producing approximate inferences. The variational Bayesian algorithms generally do not estimate the posterior variance very accurately and do not directly address the issues with estimating many missing covariate values. Meanwhile, the truncated CJS model provides a more significant improvement in computational efficiency while inflating the posterior variance as a result of discarding some of the data. Both approaches are evaluated via simulation studies and a large mark-recapture data set consisting of cliff swallow weights and capture histories.
21

Sahin, Elvan. « Discrete-Time Bayesian Networks Applied to Reliability of Flexible Coping Strategies of Nuclear Power Plants ». Thesis, Virginia Tech, 2021. http://hdl.handle.net/10919/103817.

Full text
Abstract:
The Fukushima Daiichi accident prompted the nuclear community to find new solutions to reduce risky situations in nuclear power plants (NPPs) due to beyond-design-basis external events (BDBEEs). An implementation guide for diverse and flexible coping strategies (FLEX) has been presented by the Nuclear Energy Institute (NEI) to manage the challenge of BDBEEs and to enhance reactor safety against extended station blackout (SBO). To assess the effectiveness of FLEX strategies, probabilistic risk assessment (PRA) methods can be used to calculate the reliability of such systems. Due to the uniqueness of FLEX systems, these systems can potentially carry dependencies among components not commonly modeled in NPPs. Therefore, a suitable method is needed to analyze the reliability of FLEX systems in nuclear reactors. This thesis investigates the effectiveness and applicability of Bayesian networks (BNs) and Discrete-Time Bayesian Networks (DTBNs) in the reliability analysis of FLEX equipment that is utilized to reduce the risk in nuclear power plants. To this end, the thesis compares BNs with two other reliability assessment methods: Fault Tree (FT) and Markov chain (MC) analysis. It is also shown that these two methods can be transformed into BNs to perform the reliability analysis of FLEX systems. The comparison of the three reliability methods is shown and discussed in three different applications. The results show that BNs are not only a powerful method for modeling FLEX strategies but also an effective technique for the reliability analysis of FLEX equipment in nuclear power plants.
Master of Science
Some external events like earthquakes, flooding, and severe wind, may cause damage to the nuclear reactors. To reduce the consequences of these damages, the Nuclear Energy Institute (NEI) has proposed mitigating strategies known as FLEX (Diverse and Flexible Coping Strategies). After the implementation of FLEX in nuclear power plants, we need to analyze the failure or success probability of these engineering systems through one of the existing methods. However, the existing methods are limited in analyzing the dependencies among components in complex systems. Bayesian networks (BNs) are a graphical and quantitative technique that is utilized to model dependency among events. This thesis shows the effectiveness and applicability of BNs in the reliability analysis of FLEX strategies by comparing it with two other reliability analysis tools, known as Fault Tree Analysis and Markov Chain. According to the reliability analysis results, BN is a powerful and promising method in modeling and analyzing FLEX strategies.
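The fault-tree-to-BN translation the thesis relies on can be illustrated on a toy system: logic gates become deterministic conditional probability tables, and the top-event probability follows by exact enumeration. The structure and failure probabilities below are invented for the sketch, not the FLEX systems analyzed in the thesis:

```python
from itertools import product

# Toy FLEX-style fault tree: top event = pump fails OR (generator fails
# AND battery fails). Gates become deterministic CPTs in the BN view;
# basic events get (made-up) failure probabilities.
p_fail = {"pump": 0.01, "gen": 0.05, "batt": 0.02}

def top_event(pump, gen, batt):
    return pump or (gen and batt)          # OR / AND gate logic

# Exact inference by enumeration over the (small) joint distribution.
p_top = 0.0
for states in product([0, 1], repeat=3):
    pump, gen, batt = states
    prob = 1.0
    for name, s in zip(("pump", "gen", "batt"), states):
        prob *= p_fail[name] if s else 1.0 - p_fail[name]
    if top_event(pump, gen, batt):
        p_top += prob

print(f"P(top event) = {p_top:.5f}")  # matches 1 - (1-0.01)*(1-0.05*0.02)
```

The advantage of the BN form over the plain fault tree is that the deterministic CPTs can be replaced by probabilistic ones to capture the component dependencies the abstract mentions.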
22

Hill, Laura Anne. « Bayesian networks for modelling time : with an application for modelling survival for gene expression data ». Thesis, Queen's University Belfast, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.527815.

Full text
23

Lebre, Sophie. « Stochastic process analysis for Genomics and Dynamic Bayesian Networks inference ». Phd thesis, Université d'Evry-Val d'Essonne, 2007. http://tel.archives-ouvertes.fr/tel-00260250.

Full text
Abstract:
This thesis is dedicated to the development of statistical and computational methods for the analysis of DNA sequences and gene expression time series.

First we study a parsimonious Markov model called the Mixture Transition Distribution (MTD) model, which is a mixture of Markovian transitions. The overly high number of constraints on the parameters of this model hampers the formulation of an analytical expression of the Maximum Likelihood Estimate (MLE). We propose to approach the MLE with an EM algorithm. After comparing the performance of this algorithm to results from the literature, we use it to evaluate the relevance of MTD modeling for bacterial DNA coding sequences in comparison with standard Markovian modeling.

Then we propose two different approaches for genetic regulation network recovery. We model these genetic networks with Dynamic Bayesian Networks (DBNs), whose edges describe the dependency relationships between time-delayed gene expression levels. The aim is to estimate the topology of this graph despite the very small number of repeated measurements compared with the number of observed genes.

To face this problem of dimension, we first assume that the dependency relationships are homogeneous, that is, the graph topology is constant across time. Then we propose to approximate this graph by considering partial order dependencies. The concept of partial order dependence graphs, already introduced for static and non-directed graphs, is adapted and characterized for DBNs using the theory of graphical models. From these results, we develop a deterministic procedure for DBN inference.

Finally, we relax the homogeneity assumption by considering a succession of several homogeneous phases. We consider a multiple changepoint regression model. Each changepoint indicates a change in the regression model parameters, which corresponds to the way an expression level depends on the others. Using reversible jump MCMC methods, we develop a stochastic algorithm which allows us to simultaneously infer the location of the changepoints and the structure of the network within the phases delimited by the changepoints.

Validation of these two approaches is carried out on both simulated and real data.
24

Hamilton, Benjamin Russell. « Applications of bayesian filtering in wireless networks : clock synchronization, localization, and rf tomography ». Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44707.

Full text
Abstract:
In this work, we investigate the application of Bayesian filtering techniques, such as Kalman filtering and particle filtering, to the problems of network time synchronization, self-localization and radio-frequency (RF) tomography in wireless networks. Networks of large numbers of small, cheap, mobile wireless devices have shown enormous potential in applications ranging from intrusion detection to environmental monitoring. These applications require the devices to have accurate time and position estimates; however, traditional techniques may not be available. Additionally, RF tomography offers a new paradigm to sense the network environment and could greatly enhance existing network capabilities. While there are some existing works addressing these problems, they all suffer from limitations. Current time synchronization methods are not energy efficient on small wireless devices with low quality oscillators. Existing localization methods do not consider additional sources of information available to nodes in the network, such as measurements from accelerometers or models of the shadowing environment in the network. RF tomography has only been examined briefly in such networks, and current algorithms cannot handle node mobility and rely on shadowing models that have not been experimentally verified. We address the time synchronization problem by analyzing the characteristics of the clocks in small wireless devices, developing a model for them, and then applying a Kalman filter to track both clock offset and skew. In our investigation into RF tomography, we present a method using a Kalman filter which jointly estimates and tracks static and dynamic objects in the environment. We also use channel measurements collected from a field test of our RF tomography testbed to compare RF shadowing models. For the localization problem, we present two algorithms incorporating additional information for improved localization: one based on a distributed extended Kalman filter that combines local acceleration measurements with signal strength measurements, and another that uses a distributed particle filter to incorporate a model of the channel environment.
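The clock-tracking idea is a standard two-state Kalman filter: the state holds clock offset and skew, the offset grows by skew times the exchange interval, and each timestamp exchange yields a noisy offset measurement. The following Python sketch uses illustrative noise levels, not the oscillator model identified in the dissertation:

```python
import numpy as np

dt = 1.0                                    # seconds between sync exchanges
F = np.array([[1.0, dt], [0.0, 1.0]])       # state transition: offset += skew*dt
H = np.array([[1.0, 0.0]])                  # only the offset is measured
Q = np.diag([1e-8, 1e-10])                  # process noise (oscillator jitter)
R = np.array([[1e-6]])                      # offset measurement noise

s = np.array([0.0, 0.0])                    # initial [offset (s), skew (s/s)]
P = np.diag([1e-2, 1e-6])                   # initial uncertainty

def kf_step(s, P, z):
    # Predict.
    s = F @ s
    P = F @ P @ F.T + Q
    # Update with the measured offset z.
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    s = s + K @ (np.array([z]) - H @ s)
    P = (np.eye(2) - K @ H) @ P
    return s, P

rng = np.random.default_rng(4)
true_offset, true_skew = 0.05, 2e-5         # 50 ms offset, 20 ppm skew
for _ in range(100):
    true_offset += true_skew * dt
    z = true_offset + 1e-3 * rng.normal()
    s, P = kf_step(s, P, z)

print(f"estimated offset {s[0]:.4f} s, skew {s[1]:.2e} s/s")
```

Tracking the skew explicitly is what allows long intervals between exchanges, which is where the energy savings come from.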
25

Ziegler-Barranco, Ana, Luis Mera-Barco, Vidal Aramburu-Rojas, Carlos Raymundo, Nestor Mamani-Macedo and Francisco Dominguez. « SCAT Model Based on Bayesian Networks for Lost-Time Accident Prevention and Rate Reduction in Peruvian Mining Operations ». Springer, 2020. http://hdl.handle.net/10757/656168.

Full text
Abstract:
The full text of this work is not available in the UPC Academic Repository due to restrictions imposed by the publisher.
Several factors affect the activities of the mining industry. For example, accident rates are critical because they affect company ratings in the stock market (Standard & Poor's). Considering that the corporate image is directly related to its stakeholders, this study conducts an accident analysis using quantitative and qualitative methods. In this way, the contingency rate is controlled, mitigated, and prevented while serving the needs of the stakeholders. The Bayesian network method contributes to decision-making through a set of variables and the dependency relationships between them, establishing a prior probability for unknown variables. Bayesian models have different applications, such as diagnosis, classification, and decision, and establish relationships among variables and cause–effect links. This study uses Bayesian inference to identify the various patterns that influence operator accident rates at a contractor mining company, and thereby to study and assess the possible differences in its future operations.
26

Wu, Xinying. « Reliability Assessment of a Continuous-state Fuel Cell Stack System with Multiple Degrading Components ». Ohio University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1556794664723115.

Full text
27

Lee, Joon-Hee. « Rank-Date Distribution Method (R-D Method) For Daily Time-Series Bayesian Networks And Total Maximum Daily Load Estimation ». DigitalCommons@USU, 2008. https://digitalcommons.usu.edu/etd/132.

Full text
Abstract:
Daily time series-based models are required to estimate the higher frequency fluctuations of nutrient loads and concentrations. Some mechanistic mathematical models can provide daily time series outputs of nutrient concentrations, but it is difficult to incorporate non-numerical data, such as management scenarios, into mechanistic mathematical models. Bayesian networks (BNs) were designed to accept and process inputs of varied types, both numerical and non-numerical. A Rank-Date distribution method (R-D method) was developed to provide large time series of daily predicted flows and Total Phosphorus (TP) loads to BNs driving daily time series estimates of TP concentrations into Hyrum and Cutler Reservoirs, Cache County, Utah. Time series of water resources data may consist of data distributions and time series of the ranks of the data at the measurement times. The R-D method estimates the data distribution by interpolating cumulative failure probability (CFP) plots of observations. This method also estimates the cumulative failure probability of predictions on dates with no data by interpolating the CFP time series of observations. The R-D method estimates time series of mean daily flows with smaller residuals between predicted and observed flows than interpolation of observed flows, using data sets sampled randomly at varying frequencies. Two Bayesian networks, BN 1 (Bayesian Network above Hyrum Reservoir) and BN 2 (Bayesian Network below Hyrum Reservoir), were used to simulate the effect of the Little Bear River Conservation Project (LBRCP) and exogenous variables on water quality, to explore the causes of an observed reduction in TP concentration since 1990 at the mouth of the Little Bear River. A BN provided the fine data distribution of flows and TP loads under scenarios of conservation practices or exogenous variables using daily flows and TP loads estimated by the R-D method. When these BN outputs were connected with the rank time series estimated by interpolation of the ranks of existing observations at measurement dates, time series estimates of TP concentrations into Cutler Reservoir under two different conservation practice options were obtained. These time series showed the duration and starting time of water quality criterion violations. The TMDL processes were executed based on daily TP loads from the R-D method instead of mean or median values.
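The two interpolation steps of the R-D method, as described in this abstract, can be sketched as follows: assign each sparse observation a cumulative failure probability (CFP) by its rank, interpolate the CFP time series to every day, then invert the empirical distribution to recover a daily value. The plotting position, data and units are illustrative assumptions:

```python
import numpy as np

# Sparse observations: day of year and measured flow (values invented).
obs_day = np.array([1, 20, 45, 90, 150, 200, 300, 365])
obs_flow = np.array([5.2, 3.1, 8.4, 12.0, 9.5, 2.8, 4.0, 6.3])  # m^3/s

# Weibull plotting position gives each observation its CFP by rank.
order = obs_flow.argsort()
cfp_sorted = np.arange(1, len(obs_flow) + 1) / (len(obs_flow) + 1)
cfp_at_obs = np.empty_like(obs_flow)
cfp_at_obs[order] = cfp_sorted

# Step 1: interpolate the CFP *time series* to every day of the year.
days = np.arange(1, 366)
cfp_daily = np.interp(days, obs_day, cfp_at_obs)

# Step 2: invert the empirical distribution (CFP -> flow) at each day.
flow_daily = np.interp(cfp_daily, cfp_sorted, obs_flow[order])

print(flow_daily[:10])
```

Interpolating ranks rather than raw values is the design choice that keeps the reconstructed daily series consistent with the observed data distribution.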
Styles APA, Harvard, Vancouver, ISO, etc.
28

Mroszczyk, Przemyslaw. « Computation with continuous mode CMOS circuits in image processing and probabilistic reasoning ». Thesis, University of Manchester, 2014. https://www.research.manchester.ac.uk/portal/en/theses/computation-with-continuous-mode-cmos-circuits-in-image-processing-and-probabilistic-reasoning(57ae58b7-a08c-4a67-ab10-5c3a3cf70c09).html.

Texte intégral
Résumé :
The objective of the research presented in this thesis is to investigate alternative ways of information processing employing asynchronous, data driven, and analogue computation in massively parallel cellular processor arrays, with applications in machine vision and artificial intelligence. The use of cellular processor architectures, with only local neighbourhood connectivity, is considered in VLSI realisations of the trigger-wave propagation in binary image processing, and in Bayesian inference. Design issues, critical in terms of the computational precision and system performance, are extensively analysed, accounting for the non-ideal operation of MOS devices caused by the second order effects, noise and parameter mismatch. In particular, CMOS hardware solutions for two specific tasks: binary image skeletonization and the sum-product algorithm for belief propagation in factor graphs, are considered, targeting efficient design in terms of the processing speed, power, area, and computational precision. The major contributions of this research are in the area of continuous-time and discrete-time CMOS circuit design, with applications in moderate precision analogue and asynchronous computation, accounting for parameter variability. Various analogue and digital circuit realisations, operating in the continuous-time and discrete-time domains, are analysed in theory and verified using combined Matlab-Hspice simulations, providing a versatile framework suitable for custom specific analyses, verification and optimisation of the designed systems. Novel solutions, exhibiting reduced impact of parameter variability on the circuit operation, are presented and applied in the designs of the arithmetic circuits for matrix-vector operations and in the data driven asynchronous processor arrays for binary image processing. Several mismatch optimisation techniques are demonstrated, based on the use of the switched-current approach in the design of a current-mode Gilbert multiplier circuit, a novel biasing scheme in the design of tunable delay gates, and an averaging technique applied to the analogue continuous-time circuit realisations of Bayesian networks. The most promising circuit solutions were implemented on the PPATC test chip, fabricated in a standard 90 nm CMOS process, and verified in experiments.
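On a chain factor graph, the sum-product updates realised by the thesis's analogue arrays reduce to matrix-vector products; a software-only sketch (all numbers invented):

```python
import numpy as np

prior_x1 = np.array([0.6, 0.4])      # unary factor on X1
pair = np.array([[0.9, 0.1],         # pairwise factor f(x1, x2)
                 [0.2, 0.8]])
evidence_x2 = np.array([1.0, 0.5])   # soft evidence on X2

# message X1 -> f, then f -> X2: a single matrix-vector product
msg_to_x2 = pair.T @ prior_x1

# belief at X2 is the normalised product of incoming messages
belief_x2 = msg_to_x2 * evidence_x2
belief_x2 /= belief_x2.sum()
print(belief_x2)
```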
Styles APA, Harvard, Vancouver, ISO, etc.
29

Lenz, Lutz Henning. « Automatic Tuning of Integrated Filters Using Neural Networks ». PDXScholar, 1993. https://pdxscholar.library.pdx.edu/open_access_etds/4604.

Texte intégral
Résumé :
Component values of integrated filters vary considerably due to manufacturing tolerances and environmental changes. Thus it is of major importance that the components of an integrated filter be electronically tunable. The method explored in this thesis is the transconductance-C method. A method of realizing higher-order filters is to use a cascade structure of second-order filters. In this context, a method of tuning second-order filters becomes important. The research objective of this thesis is to determine if the Neural Network methodology can be used to facilitate the filter tuning process for a second-order filter (realized via the transconductance-C method). Since this thesis is, at least to the knowledge of the author, the first effort in this direction, basic principles of filters and of Neural Networks [1-22] are presented. A control structure is proposed which comprises three parts: the filter, the Neural Network, and a digital spectrum analyzer. The digital spectrum analyzer sends a test signal to the filter and measures the magnitude of the output at 49 frequency samples. The Neural Network part includes a memory that stores the 49 sampled values of the nominal spectrum. A comparator subtracts the latter values from the measured (actual) values and feeds the differences as input to the Neural Network. The outputs of the Neural Network are the values of the percentage tuning amount. The adjusting device, which is envisioned as a component of the filter itself, translates the output of the Neural Network into adjustments of the filter's transconductances. Experimental results demonstrate that the Neural Network methodology can be usefully applied to the above problem context. A feedforward, single-hidden-layer Backpropagation Network reduces manufacturing errors of up to 85% for the pole frequency and of up to 41% for the quality factor down to less than approximately 5% each. It is demonstrated that the method can be iterated to further reduce the error.
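A software analogue of the proposed controller might look as follows (all data below is synthetic; the thesis maps measured filter spectra, not a random linear model, to tuning amounts):

```python
# Toy sketch: a single-hidden-layer feedforward network maps the 49-point
# difference between nominal and measured magnitude responses to two tuning
# percentages (pole frequency and quality factor).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_samples, n_freqs = 500, 49
detune = rng.uniform(-0.8, 0.8, size=(n_samples, 2))   # (f0, Q) errors
basis = rng.normal(size=(2, n_freqs))                   # stand-in sensitivities
spectrum_err = detune @ basis + 0.01 * rng.normal(size=(n_samples, n_freqs))

net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
net.fit(spectrum_err, detune)              # learn: spectrum error -> tuning
print(net.predict(spectrum_err[:1]))       # predicted percentage adjustments
```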
Styles APA, Harvard, Vancouver, ISO, etc.
30

Selent, Douglas A. « Creating Systems and Applying Large-Scale Methods to Improve Student Remediation in Online Tutoring Systems in Real-time and at Scale ». Digital WPI, 2017. https://digitalcommons.wpi.edu/etd-dissertations/308.

Texte intégral
Résumé :
"A common problem shared amongst online tutoring systems is the time-consuming nature of content creation. It has been estimated that an hour of online instruction can take up to 100-300 hours to create. Several systems have created tools to expedite content creation, such as the Cognitive Tutors Authoring Tool (CTAT) and the ASSISTments builder. Although these tools make content creation more efficient, they all still depend on the efforts of a content creator and/or past historical. These tools do not take full advantage of the power of the crowd. These issues and challenges faced by online tutoring systems provide an ideal environment to implement a solution using crowdsourcing. I created the PeerASSIST system to provide a solution to the challenges faced with tutoring content creation. PeerASSIST crowdsources the work students have done on problems inside the ASSISTments online tutoring system and redistributes that work as a form of tutoring to their peers, who are in need of assistance. Multi-objective multi-armed bandit algorithms are used to distribute student work, which balance exploring which work is good and exploiting the best currently known work. These policies are customized to run in a real-world environment with multiple asynchronous reward functions and an infinite number of actions. Inspired by major companies such as Google, Facebook, and Bing, PeerASSIST is also designed as a platform for simultaneous online experimentation in real-time and at scale. Currently over 600 teachers (grades K-12) are requiring students to show their work. Over 300,000 instances of student work have been collected from over 18,000 students across 28,000 problems. From the student work collected, 2,000 instances have been redistributed to over 550 students who needed help over the past few months. I conducted a randomized controlled experiment to evaluate the effectiveness of PeerASSIST on student performance. Other contributions include representing learning maps as Bayesian networks to model student performance, creating a machine-learning algorithm to derive student incorrect processes from their incorrect answer and the inputs of the problem, and applying Bayesian hypothesis testing to A/B experiments. We showed that learning maps can be simplified without practical loss of accuracy and that time series data is necessary to simplify learning maps if the static data is highly correlated. I also created several interventions to evaluate the effectiveness of the buggy messages generated from the machine-learned incorrect processes. The null results of these experiments demonstrate the difficulty of creating a successful tutoring and suggest that other methods of tutoring content creation (i.e. PeerASSIST) should be explored."
Styles APA, Harvard, Vancouver, ISO, etc.
31

Jagannathan, Ramanujan. « Evaluation of Crossover Displaced Left-turn (XDL) Intersections and Real-time Signal Control Strategies with Artificial Intelligence Techniques ». Thesis, Virginia Tech, 2003. http://hdl.handle.net/10919/10144.

Texte intégral
Résumé :
Although concepts of the XDL intersection or CFI (Continuous Flow Intersection) have been around for approximately four decades, users do not yet have a simplified procedure to evaluate its traffic performance and compare it with a conventional intersection. Several studies have shown qualitative and quantitative benefits of the XDL intersection without providing accessible tools for traffic engineers and planners to estimate average control delays and queues. Modeling was conducted on typical geometries over a wide distribution of traffic flow conditions for three different design configurations or cases, using VISSIM simulations with pre-timed signal settings. Comparisons with similar conventional designs show considerable savings in average control delay and average queue length, and an increase in intersection capacity. The statistical models provide an accessible tool for a practitioner to assess average delay and average queue length for three types of XDL intersections. Pre-timed signal controller settings are provided for each of the five intersections of the XDL network. In this research, a "real-time" traffic signal control strategy is developed using genetic algorithms and neural networks to provide near-optimal traffic performance for XDL intersections. Knowing the traffic arrival pattern at an intersection in advance, it is possible to derive the best signal control strategy for the respective scenario. Hypothetical cases of traffic arrival patterns are generated and genetic algorithms are used to find a near-optimal signal control strategy for each case. The neural network controller is then trained and tested using pairs of hypothetical traffic scenarios and corresponding signal control strategies. The developed neural network controller produces a near-optimal traffic signal control strategy in "real-time" for all varieties of traffic arrival patterns.
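A compact genetic-algorithm loop of the kind used to search signal timings; the quadratic fitness below is a stand-in for the study's simulator-based evaluation of each candidate plan:

```python
import numpy as np

rng = np.random.default_rng(0)
target = np.array([30, 45, 25, 60, 40])        # "ideal" green splits (toy)

def fitness(plan):                             # lower delay = higher fitness
    return -np.sum((plan - target) ** 2)

pop = rng.uniform(10, 90, size=(40, 5))        # 40 candidate timing plans
for gen in range(200):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-20:]]    # truncation selection
    kids = []
    for _ in range(20):
        a, b = parents[rng.integers(20, size=2)]
        mask = rng.random(5) < 0.5                                 # crossover
        kids.append(np.where(mask, a, b) + rng.normal(0, 1.0, 5))  # mutation
    pop = np.vstack([parents, kids])
print(pop[np.argmax([fitness(p) for p in pop])].round(1))  # near the target splits
```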
Master of Science
Styles APA, Harvard, Vancouver, ISO, etc.
32

Iacopini, Matteo. « Essays on econometric modelling of temporal networks ». Thesis, Paris 1, 2018. http://www.theses.fr/2018PA01E058/document.

Texte intégral
Résumé :
Graph theory has long been studied in mathematics and probability as a tool for describing dependence between nodes. However, only recently has it been applied to data, giving birth to the statistical analysis of real networks. The topology of economic and financial networks is remarkably complex: it is generally unobserved, thus requiring adequate inferential procedures for its estimation; moreover, not only the nodes but the structure of dependence itself evolves over time. Statistical and econometric tools for modelling the dynamics of change of the network structure are lacking, despite their increasing demand in several fields of research. At the same time, with the beginning of the era of "Big data", the size of available datasets is becoming increasingly large and their internal structure is growing in complexity, hampering traditional inferential processes in multiple cases. This thesis aims at contributing to this newborn field of literature, which joins probability, economics, physics and sociology, by proposing novel statistical and econometric methodologies for the study of the temporal evolution of network structures of medium to high dimension.
Styles APA, Harvard, Vancouver, ISO, etc.
33

Romano, Michele. « Near real-time detection and approximate location of pipe bursts and other events in water distribution systems ». Thesis, University of Exeter, 2012. http://hdl.handle.net/10871/9862.

Texte intégral
Résumé :
The research work presented in this thesis describes the development and testing of a new data analysis methodology for the automated near real-time detection and approximate location of pipe bursts and other events which induce similar abnormal pressure/flow variations (e.g., unauthorised consumptions, equipment failures, etc.) in Water Distribution Systems (WDSs). This methodology makes synergistic use of several self-learning Artificial Intelligence (AI) and statistical/geostatistical techniques for the analysis of the stream of data (i.e., signals) collected and communicated on-line by the hydraulic sensors deployed in a WDS. These techniques include: (i) wavelets for the de-noising of the recorded pressure/flow signals, (ii) Artificial Neural Networks (ANNs) for the short-term forecasting of future pressure/flow signal values, (iii) Evolutionary Algorithms (EAs) for the selection of optimal ANN input structure and parameters sets, (iv) Statistical Process Control (SPC) techniques for the short and long term analysis of the burst/other event-induced pressure/flow variations, (v) Bayesian Inference Systems (BISs) for inferring the probability of a burst/other event occurrence and raising the detection alarms, and (vi) geostatistical techniques for determining the approximate location of a detected burst/other event. The results of applying the new methodology to the pressure/flow data from several District Metered Areas (DMAs) in the United Kingdom (UK) with real-life bursts/other events and simulated (i.e., engineered) burst events are also reported in this thesis. The results obtained illustrate that the developed methodology allowed detecting the aforementioned events in a fast and reliable manner and also successfully determining their approximate location within a DMA. The results obtained additionally show the potential of the methodology presented here to yield substantial improvements to the state-of-the-art in near real-time WDS incident management by enabling the water companies to save water, energy, money, achieve higher levels of operational efficiency and improve their customer service. The new data analysis methodology developed and tested as part of the research work presented in this thesis has been patented (International Application Number: PCT/GB2010/000961).
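A stripped-down version of the forecast-residual-plus-control-chart portion of this chain (the naive forecaster and the thresholds are assumptions, standing in for the ANN forecaster and the Bayesian alarm module):

```python
import numpy as np

def cusum_alarms(flow, forecast, k=0.5, h=5.0):
    """One-sided CUSUM on standardised residuals; returns alarm times."""
    resid = flow - forecast
    z = (resid - resid.mean()) / resid.std()
    s, alarms = 0.0, []
    for t, zt in enumerate(z):
        s = max(0.0, s + zt - k)
        if s > h:
            alarms.append(t)
            s = 0.0
    return alarms

rng = np.random.default_rng(1)
flow = 10 + rng.normal(0, 0.5, 500)
flow[300:] += 3.0                          # simulated burst raises the flow
forecast = np.full_like(flow, 10.0)        # stand-in for the ANN prediction
print(cusum_alarms(flow, forecast)[:3])    # first alarms shortly after t = 300
```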
Styles APA, Harvard, Vancouver, ISO, etc.
34

Junuthula, Ruthwik Reddy. « Modeling, Evaluation and Analysis of Dynamic Networks for Social Network Analysis ». University of Toledo / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1544819215833249.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
35

Vigraham, Saranyan A. « An Analog Evolvable Hardware Device for Active Control ». Wright State University / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=wright1195506953.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
36

Tugui, Catalin Adrian. « Design Methodology for High-performance Circuits Based on Automatic Optimization Methods ». Thesis, Supélec, 2013. http://www.theses.fr/2013SUPL0002/document.

Texte intégral
Résumé :
The aim of this thesis is to establish an efficient analog design methodology, with the algorithms and corresponding design tools that can be employed in the dynamic conception of linear continuous-time (CT) functions. The purpose is to ensure that the performance figures for a complete system can be rapidly investigated, with accuracy comparable to transistor-level evaluations. A first research direction involved the development of a novel design methodology based on the automatic optimization of transistor-level cells using a modified Bayesian Kriging approach and the synthesis of robust high-level analog behavioral models in environments like Mathworks Simulink, VHDL-AMS or Verilog-A. The macro-model extraction process involves a complete set of analyses (DC, AC, transient, parametric, Harmonic Balance) which are performed on the analog schematics implemented in a specific technology process. The extraction and calculation of a multitude of figures of merit ensures that the models include the low-level characteristics and can be directly regenerated during the optimization process. The optimization algorithm uses a Bayesian method, where the evaluation space is created by means of a Kriging surrogate model, and the selection is performed using the expected improvement (EI) criterion subject to constraints. A conception tool was developed (SIMECT), integrated as a Matlab toolbox, including all the macro-model extraction and automatic optimization techniques.
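The expected-improvement selection step can be sketched with an off-the-shelf Gaussian-process (Kriging) surrogate; this is a generic, unconstrained illustration, not SIMECT's implementation:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def expected_improvement(gp, X_cand, y_best):
    mu, sigma = gp.predict(X_cand, return_std=True)
    imp = y_best - mu                        # minimisation convention
    z = imp / np.maximum(sigma, 1e-9)
    return imp * norm.cdf(z) + sigma * norm.pdf(z)

f = lambda x: (x - 0.3) ** 2                 # toy objective to minimise
X = np.array([[0.0], [0.5], [1.0]])
for _ in range(10):
    gp = GaussianProcessRegressor(alpha=1e-6).fit(X, f(X).ravel())
    cand = np.linspace(0, 1, 200).reshape(-1, 1)
    x_next = cand[np.argmax(expected_improvement(gp, cand, f(X).min()))]
    X = np.vstack([X, x_next])               # evaluate the chosen point next
print(X[-1])                                 # should approach x = 0.3
```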
Styles APA, Harvard, Vancouver, ISO, etc.
37

FREIRE, Arthur Silva. « Modelo de redes bayesianas para melhoria do trabalho em equipe em projetos ágeis de desenvolvimento de software ». Universidade Federal de Campina Grande, 2016. http://dspace.sti.ufcg.edu.br:8080/jspui/handle/riufcg/766.

Texte intégral
Résumé :
Agile methods consider individuals and interactions more important than processes and tools. In addition, agile teams are required to be self-organized to ensure rapid aggregation of value and responsiveness to change. Thereby, it is necessary that team members collaborate and embrace the concept of whole-team responsibility and commitment. The literature shows that teamwork factors are critical to achieving success in agile projects. Some researchers have proposed tools for assessing and improving teamwork quality. However, in the context of agile software development, these tools are limited because they do not focus on agile projects, depend on subjective assessment, or do not include important teamwork quality key factors. Therefore, we present a Bayesian network model to assess and improve agile teams' teamwork quality. The motivation to use Bayesian networks comes from their suitability for modeling uncertainties in a given domain, in addition to the ease of modeling and quantifying the relationships between the teamwork quality key factors. Besides the model, a procedure for using the model is also presented. Both model and procedure were evaluated in a case study with three units of analysis (i.e., agile software development teams). According to the case study results, the model measures teamwork quality precisely, assisting in the identification of improvement opportunities for this factor, and the cost-benefit of using it with the presented procedure is positive.
Styles APA, Harvard, Vancouver, ISO, etc.
38

Virbalas, Linas. « Informacinių technologijų rizikos valdymo sistema ». Master's thesis, Lithuanian Academic Libraries Network (LABT), 2009. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2008~D_20090908_201756-32001.

Texte intégral
Résumé :
This work presents an IT risk management system capable of modelling and managing risks that arise from IT and are related to IS downtimes and slow response times. The system is implemented using a proposed neural network architecture as the heart of the modelling engine. It is trained with accumulated datasets from existing information systems. The user indicates which statistical time series should be modelled, i.e. which of them express the risk (such as server load, IS response time, etc.). The system automatically determines correlated statistical time series, groups the related ones, and creates a separate model for each group; this model generalizes the previously unknown relationship between time series using a neural network. Each model then accepts values of the input parameters and the system models the value of the risk parameter. Experiments have shown that the proposed system can be successfully used in a mixed IT environment and can model various IT and IS component parameters that give rise to risks.
Styles APA, Harvard, Vancouver, ISO, etc.
39

Harlé, Flore. « Détection de ruptures multiples dans des séries temporelles multivariées : application à l'inférence de réseaux de dépendance ». Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAT043/document.

Texte intégral
Résumé :
This thesis presents a method for multiple change-point detection in multivariate time series, and exploits the results to estimate the relationships between the components of the system. The originality of the model, called the Bernoulli Detector, relies on the combination of local statistics from a robust test, based on the computation of ranks, with a global Bayesian framework. This non-parametric model does not require strong hypotheses on the distribution of the observations. It is applicable without modification to Gaussian data as well as data corrupted by outliers. The detection of a single change-point is controlled even for small samples. In a multivariate context, a term is introduced to model the dependencies between the changes, assuming that if two components are connected, the events occurring in the first one tend to affect the second one instantaneously. Thanks to this flexible model, the segmentation is sensitive to common changes shared by several signals but also to isolated changes occurring in a single signal. The method is compared with other solutions from the literature, especially on real datasets of electrical household consumption and genomic measurements. These experiments highlight the interest of the model for the detection of change-points in independent, conditionally independent or fully connected signals. The synchronization of the change-points within the time series is finally exploited in order to estimate the relationships between the variables, within the Bayesian network formalism. By adapting the score function of a structure learning method, it is verified that the independence model that describes the system can be partly retrieved through the information given by the change-points, as estimated by the Bernoulli Detector.
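A toy version of a rank-based change-point scan in the spirit of the local statistic described here (illustrative only; the Bernoulli Detector embeds such statistics in a Bayesian model rather than scanning p-values):

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(0, 1, 150), rng.normal(2, 1, 150)])
x[40] = 15.0                               # an outlier that the ranks absorb

# Wilcoxon rank-sum test at every candidate split: no distributional assumption
pvals = [ranksums(x[:t], x[t:]).pvalue for t in range(20, 280)]
print(20 + int(np.argmin(pvals)))          # estimated change point, close to 150
```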
Styles APA, Harvard, Vancouver, ISO, etc.
40

Jebreen, Kamel. « Modèles graphiques pour la classification et les séries temporelles ». Thesis, Aix-Marseille, 2017. http://www.theses.fr/2017AIXM0248/document.

Texte intégral
Résumé :
First, in this dissertation, we show that Bayesian network classifiers are very accurate models when compared to other classical machine learning methods. Discretising input variables often increases the performance of Bayesian network classifiers, as does a feature selection procedure. Different types of Bayesian networks may be used for supervised classification. We combine such approaches together with feature selection and discretisation to show that such a combination gives rise to powerful classifiers. A large selection of data sets from the UCI machine learning repository is used in our experiments, and an application to epilepsy type prediction based on PET scan data confirms the efficiency of our approach. Second, we also consider modelling the interaction between a set of variables in the context of time series and high dimension. We suggest two approaches: the first is similar to the neighbourhood lasso, with the lasso model replaced by Support Vector Machines (SVMs); the second is a restricted Bayesian network for time series. We demonstrate the efficiency of our approaches through simulations using linear and nonlinear data sets and a mixture of both.
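A small illustration of the combination studied here, sketched with scikit-learn (not the authors' exact pipeline): feature selection, discretisation, and naive Bayes, the simplest Bayesian network classifier:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import BernoulliNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import KBinsDiscretizer

X, y = load_breast_cancer(return_X_y=True)   # stand-in for a UCI data set
clf = make_pipeline(
    SelectKBest(mutual_info_classif, k=10),              # feature selection
    KBinsDiscretizer(n_bins=5, encode="onehot-dense",    # discretisation
                     strategy="quantile"),
    BernoulliNB(),                                       # discrete NB classifier
)
print(cross_val_score(clf, X, y, cv=5).mean())
```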
Styles APA, Harvard, Vancouver, ISO, etc.
41

Rahier, Thibaud. « Réseaux Bayésiens pour fusion de données statiques et temporelles ». Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAM083/document.

Texte intégral
Résumé :
Prediction and inference on temporal data is very frequently performed using time-series data alone. We believe that these tasks could benefit from leveraging the contextual metadata associated to time series, such as location, type, etc. Conversely, tasks involving prediction and inference on metadata could benefit from information held within time series. However, there exists no standard way of jointly modeling both time-series data and descriptive metadata. Moreover, metadata frequently contains highly correlated or redundant information, and may contain errors and missing values. We first consider the problem of learning the inherent probabilistic graphical structure of metadata as a Bayesian network. This has two main benefits: (i) once structured as a graphical model, metadata is easier to use in order to improve tasks on temporal data, and (ii) the learned model enables inference tasks on metadata alone, such as missing data imputation. However, Bayesian network structure learning is a tremendous mathematical challenge that involves an NP-hard optimization problem. We present a tailor-made structure learning algorithm, inspired by novel theoretical results, that exploits (quasi-)deterministic dependencies that are typically present in descriptive metadata. This algorithm is tested on numerous benchmark datasets and some industrial metadatasets containing deterministic relationships. In both cases it proved to be significantly faster than the state of the art, and even found better-performing structures on industrial data. Moreover, the learned Bayesian networks are consistently sparser and therefore more readable. We then focus on designing a model that includes both static (meta)data and dynamic data. Taking inspiration from state-of-the-art probabilistic graphical models for temporal data (Dynamic Bayesian Networks) and from our previously described approach for metadata modeling, we present a general methodology to jointly model metadata and temporal data as a hybrid static-dynamic Bayesian network. We propose two main algorithms associated to this representation: (i) a learning algorithm which, while being optimized for industrial data, is still generalizable to any task of static and dynamic data fusion, and (ii) an inference algorithm, enabling both the usual tasks on temporal or static data alone and tasks using the two types of data. We then provide results on diverse cross-field applications such as forecasting, metadata replenishment from time series and alarm dependency analysis, using data from some of Schneider Electric's challenging use-cases. Finally, we discuss some of the notions introduced during the thesis, including ways to measure the generalization performance of a Bayesian network by a score inspired by the cross-validation procedure from supervised machine learning. We also propose various extensions to the algorithms and theoretical results presented in the previous chapters, and formulate some research perspectives.
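A cheap screen for the (quasi-)deterministic relations the structure-learning algorithm exploits is the normalised conditional entropy H(Y|X)/H(Y), which is near zero when X (almost) determines Y; a sketch on toy metadata:

```python
import numpy as np
import pandas as pd

def cond_entropy_ratio(x, y):
    df = pd.DataFrame({"x": x, "y": y})
    p_xy = df.value_counts(normalize=True)
    p_x = df["x"].value_counts(normalize=True)
    p_y = df["y"].value_counts(normalize=True)
    # H(Y|X) = -sum over (x, y) of p(x, y) * log p(y|x)
    h_y_given_x = -sum(p * np.log(p / p_x[a]) for (a, b), p in p_xy.items())
    h_y = -sum(p * np.log(p) for p in p_y)
    return h_y_given_x / h_y

city = pd.Series(["paris", "berlin", "lyon", "munich"] * 100)
country = city.map({"paris": "fr", "lyon": "fr", "berlin": "de", "munich": "de"})
print(cond_entropy_ratio(city, country))   # 0.0: city determines country
```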
Styles APA, Harvard, Vancouver, ISO, etc.
42

Kindermann, Lars. « Neuronale Netze zur Berechnung Iterativer Wurzeln und Fraktionaler Iterationen ». Doctoral thesis, Universitätsbibliothek Chemnitz, 2002. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-200201544.

Texte intégral
Résumé :
This thesis develops a method for solving functional equations of the form g(g(x)) = f(x), or more generally g^n(x) = f(x), with the help of neural networks. The goal is a function g(x) which, composed with itself several times, corresponds exactly to a given function f(x). One calls g = f^(1/n) an iterative root or fractional iteration of f. Finding solutions for g constitutes the inverse problem of iteration, or the extension of the root and power operations to the algebra of functions. Closed expressions for iterative roots of a given function are in general impossible or very hard to find, and numerical procedures are neither described in general form nor available as software. Starting from the ability of a neural network, specifically the multilayer perceptron, to approximate a given function f(x) through training, a special network topology also permits the computation of fractional iterations of f. Such a network consists of n identical subnetworks connected in series which, when the complete network approximates f, each individually approximate g = f^(1/n). During training one merely has to ensure that the corresponding weights of all subnetworks take the same values. Several procedures are developed for this: training only the last subnetwork and copying its weights to the other parts; aligning the subnetworks through coupling factors; or introducing an error term that penalizes differences between the subnetworks. As a further approximate solution, an iterated linear model is developed, corrected by a conventional neural network with high approximation quality for nonlinear relationships. The concrete application is the modelling of the strip profile evolution in hot rolling of steel sheet. Steel slabs several centimetres thick are rolled in a mill train of several identical, consecutive roll stands into sheets a few millimetres thick. Besides the thickness, the profile (the difference in thickness between the centre and the edge of the strip) is an important quality measure. It can be measured before and after the finishing train but, for technical reasons, not between the roll stands; accurate knowledge of it is nevertheless important for production. The state of the art is to compute these intermediate profiles by repeatedly evaluating a mathematical model of the rolling process for each stand and continually fitting adaptive terms of this model to the measured data. It was shown that an adaptive neural network, trained with the entry and exit profiles as well as all available characteristic and control variables, can predict the final profile with considerably higher accuracy. The problem is that this network represents the transfer function of the entire train; intermediate profiles cannot be read off. Therefore an attempt is made to combine both properties: the accurate final-profile modelling of a neural network is combined with the ability of the iterated model to compute intermediate profiles. The overall process, known in the form of measured data, is regarded as the iterated composition of technically identical subprocesses. Obtaining a model of the single process thus corresponds to computing the iterative root of the overall process.
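The weight-sharing idea can be sketched in a few lines (a minimal PyTorch sketch with arbitrary hyperparameters): applying one subnetwork g twice and minimising ||g(g(x)) - f(x)||^2 trains g towards an iterative square root of f.

```python
import torch

f = lambda x: 4.0 * x        # toy target; g(x) = 2x (or -2x) is a square root
g = torch.nn.Sequential(
    torch.nn.Linear(1, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1)
)
opt = torch.optim.Adam(g.parameters(), lr=1e-2)

for step in range(3000):
    x = torch.rand(64, 1) * 2 - 1             # training samples in [-1, 1]
    loss = torch.mean((g(g(x)) - f(x)) ** 2)  # the same g is applied twice
    opt.zero_grad()
    loss.backward()
    opt.step()

# near +1.0 or -1.0, since both g(x) = 2x and g(x) = -2x solve g(g(x)) = 4x
print(g(torch.tensor([[0.5]])).item())
```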
Styles APA, Harvard, Vancouver, ISO, etc.
43

Tagscherer, Michael. « Dynamische Neuronale Netzarchitektur für Kontinuierliches Lernen ». Doctoral thesis, Universitätsbibliothek Chemnitz, 2001. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-200100725.

Texte intégral
Résumé :
One of the main requirements for an optimal industrial control system is the availability of a precise model of the process, e.g. for a steel rolling mill. If no model or no analytical description of such a process is available, a sufficient model has to be derived from observations, i.e. by system identification. While nonlinear function approximation is a well-known application for neural networks, the approximation of nonlinear functions that change over time poses many additional problems, which have been the focus of this research. The time-variance, caused for example by aging or attrition, requires a continuous adaptation to process changes throughout the lifetime of the system, here referred to as continuous learning. Based on the analysis of different neural network approaches, the novel incremental construction algorithm ICE for continuous learning tasks has been developed. One of the main advantages of the ICE algorithm is that the number of RBF neurons and the number of local models of the hybrid network do not have to be determined in advance. This is an important feature for fast initial learning. The evolved network is automatically adapted to the time-variant target function. Another advantage of the ICE algorithm is the ability to simultaneously learn the target function and a confidence value for the network output. Finally, a special version of the ICE algorithm with asymmetric receptive fields is introduced, drawing parallels to fuzzy logic. The goal is to automatically derive rules which describe the learned model of the unknown process. In general a neural network is a "black box"; in contrast, an ICE network is more transparent.
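A bare-bones incremental RBF learner in the spirit of such constructive approaches (the allocation rule below is a generic assumption, not the ICE algorithm itself): a new unit is created whenever the current model errs too much on an incoming sample; otherwise the nearest unit is nudged.

```python
import numpy as np

class IncrementalRBF:
    def __init__(self, width=0.3, err_tol=0.1, lr=0.2):
        self.c, self.w = [], []             # centres and output weights
        self.width, self.err_tol, self.lr = width, err_tol, lr

    def predict(self, x):
        if not self.c:
            return 0.0
        act = np.exp(-((x - np.array(self.c)) ** 2) / self.width ** 2)
        return float(act @ np.array(self.w) / max(act.sum(), 1e-12))

    def observe(self, x, y):
        err = y - self.predict(x)
        if not self.c or abs(err) > self.err_tol:
            self.c.append(x)                # allocate a new unit on large error
            self.w.append(y)
        else:
            j = int(np.argmin(np.abs(np.array(self.c) - x)))
            self.w[j] += self.lr * err      # otherwise adapt the nearest unit

model = IncrementalRBF()
for x in np.random.default_rng(0).uniform(0, 2 * np.pi, 2000):
    model.observe(x, np.sin(x))
print(len(model.c), model.predict(np.pi / 2))   # units grown; value near 1.0
```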
Styles APA, Harvard, Vancouver, ISO, etc.
44

Kindermann, Lars. « Neuronale Netze zur Berechnung Iterativer Wurzeln und Fraktionaler Iterationen ». [S.l. : s.n.], 2001. http://www.bsz-bw.de/cgi-bin/xvms.cgi?SWB10424174.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
45

LIU, ZHEN-ZHONG, et 劉振中. « Adaptively controlling nonlinear continuous-time systems using neural networks ». Thesis, 1992. http://ndltd.ncl.edu.tw/handle/71989183416198206106.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
46

Chien, Chia-Yi. « Construction of Continuous-State Bayesian Networks Using D-Separation Property and Partial Correlations ». 2006. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0001-2607200613535800.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
47

Chien, Chia-Yi, et 簡佳怡. « Construction of Continuous-State Bayesian Networks Using D-Separation Property and Partial Correlations ». Thesis, 2006. http://ndltd.ncl.edu.tw/handle/71625418714253282137.

Texte intégral
Résumé :
Master's thesis
National Taiwan University
Institute of Industrial Engineering
Academic year 94 (2005-2006)
The development of microarray technology makes it possible to generate a huge amount of gene expression data at once, helping us analyze whole-genome mechanisms. Many analysis methods have been developed and applied to microarray data, such as clustering analysis, factor analysis and Bayesian networks. Bayesian networks can help biologists better understand the biological meanings behind the microarray data. In general, algorithms for Bayesian network construction can be divided into two categories: the search-and-score approach and the constraint-based approach. How to construct Bayesian networks rapidly and efficiently has become a challenge for biotechnology research. Before constructing a Bayesian network, node ordering is the first difficulty, and the actual node ordering is usually unknown. In this research, we develop a method to search for possible node orderings based on the d-separation property. There are three assigning procedures in the node ordering algorithm; with the proposed ordering procedures, we produce three possible node sequences. We also propose an algorithm for Bayesian network construction that uses the d-separation property and partial correlations to analyze variables with continuous states. Our algorithm belongs to the constraint-based approaches. Finally, we apply our algorithm to two real-world cases: the Saccharomyces cerevisiae cell cycle gene expression data collected by Spellman et al., and the caspases data.
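The partial-correlation test at the heart of such constraint-based learning can be sketched as the correlation of residuals after regressing out the conditioning variable (toy data; a vanishing partial correlation indicates conditional independence, i.e. d-separation by Z):

```python
import numpy as np

def partial_corr(x, y, z):
    z1 = np.column_stack([np.ones_like(z), z])
    rx = x - z1 @ np.linalg.lstsq(z1, x, rcond=None)[0]   # residual of X | Z
    ry = y - z1 @ np.linalg.lstsq(z1, y, rcond=None)[0]   # residual of Y | Z
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(0)
z = rng.normal(size=2000)               # common cause
x = 2 * z + rng.normal(size=2000)
y = -z + rng.normal(size=2000)
print(np.corrcoef(x, y)[0, 1])          # strongly negative: X, Y confounded
print(partial_corr(x, y, z))            # near zero: X, Y d-separated given Z
```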
Styles APA, Harvard, Vancouver, ISO, etc.
48

Walker, James. « Bayesian Inference and Model Selection for Partially-Observed, Continuous-Time, Stochastic Epidemic Models ». Thesis, 2019. http://hdl.handle.net/2440/124703.

Texte intégral
Résumé :
Emerging infectious diseases are an ongoing threat to the health of populations around the world. In response, countries such as the USA, UK and Australia have outlined data collection protocols to surveil these novel diseases. One of the aims of these data collection protocols is to characterise the disease in terms of transmissibility and clinical severity in order to inform an appropriate public health response. This kind of data collection protocol is yet to be enacted in Australia, but such a protocol is likely to be tested during a seasonal influenza (flu) outbreak in the next few years. However, it is important that methods for characterising these diseases are ready and well understood for when an epidemic disease emerges. The epidemic may only be characterised well if its dynamics are well described (by a model) and are accurately quantified (by precisely inferred model parameters). This thesis models epidemics and the data collection process as partially-observed continuous-time Markov chains and aims to choose between models and infer parameters using early outbreak data. It develops Bayesian methods to infer epidemic parameters from data on multiple small outbreaks, and outbreaks in a population of households. An exploratory analysis is conducted to assess the accuracy and precision of parameter estimates under different epidemic surveillance schemes, different models and different kinds of model misspecification. It describes a novel Bayesian model selection method and employs it to infer two important characteristics for understanding emerging epidemics: the shape of the infectious period distribution; and the time of infectiousness relative to symptom onset. Lastly, this thesis outlines a method for jointly inferring model parameters and selecting between epidemic models. This new method is compared with an existing method on two epidemic models and is applied to a difficult model selection problem.
Thesis (Ph.D.) -- University of Adelaide, School of Mathematical Sciences, 2020
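The partially-observed processes studied here are built on continuous-time Markov chain epidemic models; below is a Doob-Gillespie simulation of the simplest such model, the Markovian SIR (the rates are invented):

```python
import numpy as np

def simulate_sir(beta, gamma, n, i0, rng, t_max=100.0):
    """Exact event-by-event simulation of a Markovian SIR epidemic."""
    s, i, t = n - i0, i0, 0.0
    events = [(t, s, i)]
    while i > 0 and t < t_max:
        rates = np.array([beta * s * i / n, gamma * i])   # infection, recovery
        t += rng.exponential(1.0 / rates.sum())           # time to next event
        if rng.random() < rates[0] / rates.sum():
            s, i = s - 1, i + 1                           # infection
        else:
            i -= 1                                        # recovery
        events.append((t, s, i))
    return events

rng = np.random.default_rng(42)
print(simulate_sir(beta=2.0, gamma=1.0, n=100, i0=1, rng=rng)[-1])
```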
Styles APA, Harvard, Vancouver, ISO, etc.
49

BARONE, ROSARIO. « MCMC methods for continuous time multi-state models and high dimensional copula models ». Doctoral thesis, 2020. http://hdl.handle.net/11573/1365737.

Texte intégral
Résumé :
In this thesis we propose Markov chain Monte Carlo (MCMC) methods for several classes of models. We consider both parametric and nonparametric Bayesian approaches, proposing either computational alternatives to existing methods or new computational tools. In particular, we consider continuous-time multi-state models (CTMSM), a class of stochastic processes useful for modelling phenomena that evolve continuously in time with a finite number of states. Inference for these models is straightforward if the processes are fully observed, while it presents computational difficulties if the processes are discretely observed and there is no additional information about the state transitions. In particular, in the semi-Markov case the likelihood function is not available in closed form and approximation techniques are required. In the first chapter we provide a uniformization-based algorithm for simulating continuous-time semi-Markov trajectories between discretely observed points and propose a Metropolis-within-Gibbs algorithm to sample from the posterior distributions of the parameters of that class of processes. As is shown, our method generalizes the Markov case. In the second chapter we present a novel Bayesian nonparametric approach to inference for CTMSM. We propose a Dirichlet process mixture with continuous-time Markov multi-state kernels, providing a Gibbs sampler which exploits the conjugacy between the Markov CTMSM density and the chosen base measure. The method, applicable to fully observed and discretely observed data, is a flexible solution that avoids parametric assumptions on the process and allows for density estimation and clustering. In the last chapter we focus on copulas, a class of models for the dependence between random variables. The copula approach allows for the construction of joint distributions as the product of marginals and a copula function. In particular, we focus on modelling the dependence between more than two random variables. In that case, assuming a multidimensional copula model for the multivariate data implies that pairwise dependencies are assumed to belong to the same parametric family, a constraint that makes this class of models inflexible. One proposed solution to this problem is the vine copula construction, which allows us to rewrite the multivariate copula as a product of pair-copulas which may belong to different copula families. Another solution is the nonparametric approach. We present two Bayesian nonparametric methods for inference on copulas in high dimensions. The first proposal is an alternative to an existing method for high-dimensional copulas. The second is a novel Dirichlet process mixture of conditional multivariate copulas, which accounts for covariates in the dependence between the considered variables. Applications with both simulated and real data are provided in the last sections of the first and second chapters, while the last chapter includes only applications with simulated data.
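Uniformization, the device behind the first chapter's path sampler, expands the transition matrix e^{Qt} as a Poisson mixture of powers of a discretised kernel; a sketch on a toy generator:

```python
import numpy as np
from scipy.stats import poisson

Q = np.array([[-1.0, 0.7, 0.3],      # toy generator (rows sum to zero)
              [ 0.4, -0.9, 0.5],
              [ 0.2, 0.6, -0.8]])
t = 1.5
lam = np.max(-np.diag(Q))            # uniformization rate
R = np.eye(3) + Q / lam              # discrete-time kernel (a stochastic matrix)

P, Rk = np.zeros((3, 3)), np.eye(3)
for k in range(200):                 # truncate the Poisson series
    P += poisson.pmf(k, lam * t) * Rk
    Rk = Rk @ R
print(P)                             # rows sum to ~1; equals expm(Q * t)
```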
Styles APA, Harvard, Vancouver, ISO, etc.
50

謝明佳. « A Bayesian Study on the Plant-Capture Approach for Population Size Estimation in Continuous Time ». Thesis, 2001. http://ndltd.ncl.edu.tw/handle/14406744993162482411.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.