
Dissertations / Theses on the topic 'Continuous Time Bayesian Network'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 44 dissertations / theses for your research on the topic 'Continuous Time Bayesian Network.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

CODECASA, DANIELE. "Continuous time bayesian network classifiers." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2014. http://hdl.handle.net/10281/80691.

Abstract:
Streaming data are relevant to finance, computer science, and engineering, and they are becoming increasingly important to medicine and biology. Continuous time Bayesian networks are designed for efficiently analyzing multivariate streaming data, exploiting the conditional independencies in continuous time homogeneous Markov processes. Continuous time Bayesian network classifiers are a specialization of continuous time Bayesian networks designed for multivariate streaming data classification when the duration of events matters and the class occurs in the future. Continuous time Bayesian network classifiers are presented and analyzed. Structural learning is introduced for this class of models when complete data are available. A conditional log-likelihood scoring is derived to improve on marginal log-likelihood structural learning of continuous time Bayesian network classifiers. The expectation maximization algorithm is developed to address the unsupervised learning of continuous time Bayesian network classifiers when the class is unknown. The performance of continuous time Bayesian network classifiers in classification and clustering is analyzed with the help of a rich set of numerical experiments on synthetic and real data sets. Continuous time Bayesian network classifiers learned by maximizing marginal log-likelihood and conditional log-likelihood are compared with continuous time naive Bayes and dynamic Bayesian networks. Results show that conditional log-likelihood scoring combined with Bayesian parameter estimation outperforms marginal log-likelihood scoring and dynamic Bayesian networks in the case of supervised classification. Conditional log-likelihood scoring becomes even more effective when the amount of available data is limited. Continuous time Bayesian network classifiers outperform dynamic Bayesian networks even on data sets generated from discrete time models. Clustering results show that in the case of unsupervised learning the marginal log-likelihood score is the most effective way to learn continuous time Bayesian network classifiers. Continuous time models again outperform dynamic Bayesian networks even when applied to discrete time data sets. A Java software toolkit implementing the main theoretical achievements of the thesis has been designed and developed under the name of the CTBNCToolkit. It provides a free stand-alone toolkit for multivariate trajectory classification and an open source library, which can be extended in accordance with the GPL v2.0 license. The CTBNCToolkit allows classification and clustering of multivariate trajectories using continuous time Bayesian network classifiers. Structural learning, maximizing the marginal log-likelihood and conditional log-likelihood scores, is provided.
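To make the model class in this abstract concrete, here is a minimal sketch of the basic CTBN ingredient: one conditional intensity matrix per parent configuration, and the log-likelihood of a fully observed trajectory of a single node. The rates and trajectory are invented for illustration; this is not the CTBNCToolkit API.

```python
import numpy as np

# A binary node X with one parent U has one conditional intensity matrix
# (CIM) per parent state. Rows sum to zero; the off-diagonal entry
# q[i, j] is the rate of jumping from state i to state j.
cims = {
    0: np.array([[-0.5, 0.5], [1.0, -1.0]]),   # Q_{X|U=0}
    1: np.array([[-2.0, 2.0], [0.2, -0.2]]),   # Q_{X|U=1}
}

def trajectory_loglik(segments, u):
    """Log-likelihood of an observed trajectory of X while U == u.

    segments: list of (state, dwell_time, next_state or None if censored).
    """
    q = cims[u]
    ll = 0.0
    for state, dwell, nxt in segments:
        ll += q[state, state] * dwell        # survival term: exp(q_ss * t)
        if nxt is not None:
            ll += np.log(q[state, nxt])      # intensity of the jump
    return ll

# X stays in state 0 for 1.2 time units, jumps to 1, then is censored.
print(trajectory_loglik([(0, 1.2, 1), (1, 0.4, None)], u=1))
```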
2

Nodelman, Uri D. "Continuous time bayesian networks." 2007. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.

3

Fan, Yu. "Continuous time Bayesian Network approximate inference and social network applications." Diss., [Riverside, Calif.] : University of California, Riverside, 2009. http://proquest.umi.com/pqdweb?index=0&did=1957308751&SrchMode=2&sid=1&Fmt=2&VInst=PROD&VType=PQD&RQT=309&VName=PQD&TS=1268330625&clientId=48051.

Abstract:
Thesis (Ph. D.)--University of California, Riverside, 2009.
Includes abstract. Title from first page of PDF file (viewed March 8, 2010). Available via ProQuest Digital Dissertations. Includes bibliographical references (p. 130-133). Also issued in print.
4

ACERBI, ENZO. "Continuous time Bayesian networks for gene networks reconstruction." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2014. http://hdl.handle.net/10281/52709.

Abstract:
Dynamic aspects of gene regulatory networks are typically investigated by measuring system variables at multiple time points. Current state-of-the-art computational approaches for reconstructing gene networks build directly on such data, making the strong assumption that the system evolves in a synchronous fashion at fixed points in time. However, omics data are now being generated with increasing time course granularity. Modellers can therefore represent the system as evolving in continuous time and improve the models' expressiveness. Continuous time Bayesian networks are proposed as a new approach for gene network reconstruction from time course expression data. Their performance was compared to two state-of-the-art methods: dynamic Bayesian networks and Granger causality analysis. On simulated data, the methods were compared for networks of increasing dimension, for measurements taken at different time granularities, and for measurements evenly vs. unevenly spaced over time. Continuous time Bayesian networks outperformed the other methods in terms of the accuracy of regulatory interactions learnt from data for all network dimensions. Furthermore, their performance degraded smoothly as the dimension of the network increased. Continuous time Bayesian networks were significantly better than dynamic Bayesian networks for all time granularities tested and better than Granger causality for dense time series. Both continuous time Bayesian networks and Granger causality performed robustly for unevenly spaced time series, with no significant loss of performance compared to the evenly spaced case, while the same did not hold true for dynamic Bayesian networks. The comparison included the IRMA experimental datasets, which confirmed the effectiveness of the proposed method. Continuous time Bayesian networks were then applied to elucidate the regulatory mechanisms controlling murine T helper 17 (Th17) cell differentiation and were found to be effective in discovering well-known regulatory mechanisms as well as new plausible biological insights. Continuous time Bayesian networks proved effective on networks of both small and large dimension and particularly feasible when the measurements are not evenly distributed over time. Reconstruction of the murine Th17 cell differentiation network using continuous time Bayesian networks revealed several autocrine loops, suggesting that Th17 cells may be auto-regulating their own differentiation process.
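The accuracy comparisons described here typically come down to scoring a learnt edge set against the known simulated network. A toy sketch of such a precision/recall computation follows; the edge sets are invented for illustration, not taken from the thesis.

```python
# Score a learnt gene-network edge set against the ground-truth network.
true_edges = {("gA", "gB"), ("gB", "gC"), ("gA", "gD")}
learnt_edges = {("gA", "gB"), ("gB", "gC"), ("gC", "gD")}

tp = len(true_edges & learnt_edges)           # correctly recovered edges
precision = tp / len(learnt_edges)
recall = tp / len(true_edges)
f1 = 2 * precision * recall / (precision + recall)
print(f"precision={precision:.2f} recall={recall:.2f} F1={f1:.2f}")
```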
5

VILLA, SIMONE. "Continuous Time Bayesian Networks for Reasoning and Decision Making in Finance." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2015. http://hdl.handle.net/10281/69953.

Abstract:
The analysis of the huge amount of financial data made available by electronic markets calls for new models and techniques to effectively extract knowledge to be exploited in an informed decision-making process. The aim of this thesis is to introduce probabilistic graphical models that can be used to reason and to perform actions in such a context. In the first part of this thesis, we present a framework which exploits Bayesian networks to perform portfolio analysis and optimization in a holistic way. It leverages the compact and efficient representation of high dimensional probability distributions offered by Bayesian networks and their ability to perform evidential reasoning in order to optimize the portfolio according to different economic scenarios. In many cases, we would like to reason about market changes over time, i.e. to express queries as probability distributions over time. Continuous time Bayesian networks can be used to address this issue. In the second part of the thesis, we show how it is possible to use this model to tackle real financial problems, and we describe two notable extensions. The first one concerns classification, where we introduce an algorithm for learning these classifiers from Big Data, and we describe their straightforward application to the foreign exchange prediction problem in the high frequency domain. The second one is related to non-stationary domains, where we explicitly model the presence of statistical dependencies in multivariate time-series while allowing them to change over time. In the third part of the thesis, we describe the use of continuous time Bayesian networks within the Markov decision process framework, which provides a model for sequential decision-making under uncertainty. We introduce a method to control continuous time dynamic systems, based on this framework, that relies on additive and context-specific features to scale up to large state spaces. Finally, we show the performance of our method in a simplified, but meaningful, trading domain.
6

GATTI, ELENA. "Graphical models for continuous time inference and decision making." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2011. http://hdl.handle.net/10281/19575.

Abstract:
Reasoning about the evolution of a system in time is both an important and challenging task. We are interested in probability distributions of events over time, where observations are often irregularly spaced. Probabilistic models have been widely used to accomplish this task, but they have some limits. Indeed, Hidden Markov Models and Dynamic Bayesian Networks in general require the specification of a time granularity between consecutive observations. This requirement leads to computationally inefficient learning and inference procedures when the adopted time granularity is finer than the time spent between consecutive observations, and to possible losses of information in the opposite case. The framework of Continuous Time Bayesian Networks (CTBNs) overcomes this limit, allowing the representation of temporal dynamics over a structured state space. In this dissertation an overview of the semantic and inference aspects of the CTBN framework is proposed. The limits of exact inference are overcome using approximate inference; in particular, the cluster-graph message passing algorithm and Gibbs sampling have been investigated. The CTBN has been applied to a real case study of diagnosis of cardiogenic heart failure, developed in collaboration with domain experts. Moving from the task of simply reasoning under uncertainty to the task of deciding how to act in the world, a part of the dissertation is devoted to graphical models that allow the inclusion of decisions. We describe Influence Diagrams, which extend Bayesian Networks by introducing decisions and utilities. We then discuss an approach for the approximate representation of optimal strategies in influence diagrams. The contributions of the dissertation are the following: the design and development of a CTBN software package implementing two of the most important inference algorithms (Expectation Propagation and Gibbs Sampling); the development of a realistic diagnosis scenario of cardiogenic heart failure (to the best of our knowledge, the first clinical application of this type); and the approach of information enhancement to reduce the domain of the policy in large influence diagrams, together with an important contribution concerning the identification of informational links to add to the graph.
7

Alharbi, Randa. "Bayesian inference for continuous time Markov chains." Thesis, University of Glasgow, 2019. http://theses.gla.ac.uk/40972/.

Abstract:
Continuous time Markov chains (CTMCs) are a flexible class of stochastic models that have been employed in a wide range of applications, from the timing of computer protocols, through analysis of reliability in engineering, to models of biochemical networks in molecular biology. These models are defined as a state system with continuous time transitions between the states. Extensive work has historically been performed to enable convenient and flexible definition, simulation, and analysis of continuous time Markov chains. This thesis considers the problem of Bayesian parameter inference on these models and investigates computational methodologies to enable such inference. Bayesian inference over continuous time Markov chains is particularly challenging as the likelihood cannot be evaluated in closed form. To overcome the statistical problems associated with evaluation of the likelihood, advanced algorithms based on Monte Carlo methods have been used to enable Bayesian inference without explicit evaluation of the likelihoods. An additional class of approximation methods, known as approximate Bayesian computation (ABC), has been suggested to handle such inference problems. Novel Markov chain Monte Carlo (MCMC) approaches were recently proposed to allow exact inference. The contribution of this thesis is a discussion of the techniques and challenges in implementing these inference methods and an extensive comparison of these approaches on two case studies in systems biology. We investigate how the algorithms can be designed and tuned to work on CTMC models and to achieve an accurate estimate of the posteriors with reasonable computational cost. Through this comparison, we investigate how to avoid some practical issues with accuracy and computational cost, for example by selecting an optimal proposal distribution and introducing a resampling step within the sequential Monte Carlo method. Within the implementation of the ABC methods, we investigate the use of an adaptive tolerance schedule to maximise the efficiency of the algorithm and to reduce the computational cost.
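To illustrate the setting, the following sketch simulates a birth-death CTMC exactly with the Gillespie algorithm and runs plain ABC rejection, the simplest member of the ABC family mentioned above. The model, rates, prior, and tolerance are all invented for illustration; this is not the thesis code.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_birth_death(birth, death, x0=5, t_end=5.0):
    """Gillespie simulation; returns the population at time t_end."""
    t, x = 0.0, x0
    while True:
        total = birth + death * x          # total event rate
        if total <= 0.0:
            return x
        t += rng.exponential(1.0 / total)  # exponential waiting time
        if t > t_end:
            return x
        x += 1 if rng.random() < birth / total else -1

death_rate = 0.2
observed = np.array([simulate_birth_death(2.0, death_rate)  # "data" with
                     for _ in range(10)])                   # birth = 2.0

# ABC rejection: draw the birth rate from its prior, simulate, and keep
# the draw if the summary statistic is within a tolerance of the data.
accepted = []
for _ in range(2000):
    theta = rng.uniform(0.0, 5.0)               # prior on the birth rate
    sim = np.array([simulate_birth_death(theta, death_rate)
                    for _ in range(10)])
    if abs(sim.mean() - observed.mean()) < 2.0:  # tolerance epsilon
        accepted.append(theta)

print(f"ABC posterior mean for the birth rate: {np.mean(accepted):.2f}")
```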
8

Parton, Alison. "Bayesian inference for continuous-time step-and-turn movement models." Thesis, University of Sheffield, 2018. http://etheses.whiterose.ac.uk/20124/.

Abstract:
This thesis concerns the statistical modelling of animal movement paths given observed GPS locations. With observations being in discrete time, mechanistic models of movement are often formulated as such. This popularity remains despite an inability to compare analyses through scale invariance and common problems handling irregularly timed observations. A natural solution is to formulate in continuous time, yet uptake of this has been slow, often excused by a difficulty in interpreting the ‘instantaneous’ parameters associated with a continuous-time model. The aim here was to bolster usage by developing a continuous-time model with interpretable parameters, similar to those of popular discrete-time models that use turning angles and step lengths to describe the movement process. Movement is defined by a continuous-time, joint bearing and speed process, the parameters of which are dependent on a continuous-time behavioural switching process, thus creating a flexible class of movement models. Further, we allow for the observed locations derived from this process to have unknown error. Markov chain Monte Carlo inference is presented for parameters given irregular, noisy observations. The approach involves augmenting the observed locations with a reconstruction of the underlying continuous-time process. Example implementations showcasing this method are given featuring simulated and real datasets. Data from elk (Cervus elaphus), which have previously been modelled in discrete time, demonstrate the interpretable nature of the model, finding clear differences in behaviour over time and insights into short-term behaviour that could not have been obtained in discrete time. Observations from reindeer (Rangifer tarandus) reveal the effect observation error has on the identification of large turning angles—a feature often inferred in discrete-time modelling. Scalability to realistically large datasets is shown for lesser black-backed gull (Larus fuscus) data.
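A toy forward simulation of the kind of joint bearing-and-speed process described in this abstract is sketched below. The parameters and the crude Euler time-stepping are illustrative simplifications; the thesis works with the exact continuous-time process and adds behavioural switching, observation error, and MCMC inference.

```python
import numpy as np

rng = np.random.default_rng(1)

# Bearing follows Brownian motion (a correlated random walk in the
# limit); speed follows a mean-reverting Ornstein-Uhlenbeck process.
dt, n_steps = 0.1, 500
sigma_bearing = 0.4                            # turning volatility
mu_speed, beta, sigma_speed = 1.5, 0.8, 0.5    # OU speed parameters

bearing, speed, x, y = 0.0, mu_speed, 0.0, 0.0
for _ in range(n_steps):
    bearing += sigma_bearing * np.sqrt(dt) * rng.standard_normal()
    speed += beta * (mu_speed - speed) * dt \
        + sigma_speed * np.sqrt(dt) * rng.standard_normal()
    speed = max(speed, 0.0)                    # keep speeds non-negative
    x += speed * np.cos(bearing) * dt
    y += speed * np.sin(bearing) * dt

print(f"final position: ({x:.2f}, {y:.2f})")
```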
9

Elshamy, Wesam Samy. "Continuous-time infinite dynamic topic models." Diss., Kansas State University, 2012. http://hdl.handle.net/2097/15176.

Abstract:
Doctor of Philosophy
Department of Computing and Information Sciences
William Henry Hsu
Topic models are probabilistic models for discovering topical themes in collections of documents. In real world applications, these models provide us with the means of organizing what would otherwise be unstructured collections. They can help us cluster a huge collection into different topics or find a subset of the collection that resembles the topical theme found in an article at hand. The first wave of topic models developed were able to discover the prevailing topics in a big collection of documents spanning a period of time. It was later realized that these time-invariant models were not capable of modeling 1) the time-varying number of topics they discover and 2) the time-changing structure of these topics. A few models were developed to address these two deficiencies. The online hierarchical Dirichlet process models the documents with a time-varying number of topics, and it varies the structure of the topics over time as well. However, it relies on document order, not timestamps, to evolve the model over time. The continuous-time dynamic topic model evolves topic structure in continuous time, but it uses a fixed number of topics over time. In this dissertation, I present a model, the continuous-time infinite dynamic topic model, that combines the advantages of these two models: 1) the online hierarchical Dirichlet process, and 2) the continuous-time dynamic topic model. More specifically, the model I present is a probabilistic topic model that does the following: 1) it changes the number of topics over continuous time, and 2) it changes the topic structure over continuous time. I compared the model I developed with the two other models under different settings. The results obtained were favorable to my model and showed the need for a model that has a continuous-time-varying number of topics and topic structure.
10

Acciaroli, Giada. "Calibration of continuous glucose monitoring sensors by time-varying models and Bayesian estimation." Doctoral thesis, Università degli studi di Padova, 2018. http://hdl.handle.net/11577/3425746.

Abstract:
Minimally invasive continuous glucose monitoring (CGM) sensors are wearable medical devices that provide frequent (e.g., 1-5 min sampling rate) real-time measurements of glucose concentration for several consecutive days. This can be of great help in the daily management of diabetes. Most of the CGM systems commercially available today have a wire-based electrochemical sensor, usually placed in the subcutaneous tissue, which measures a "raw" electrical current signal via a glucose-oxidase electrochemical reaction. Observations of the raw electrical signal are frequently revealed by the sensor on a fine, uniformly spaced time grid. These samples of electrical nature are converted in real time to interstitial glucose (IG) concentration levels through a calibration process by fitting a few blood glucose (BG) concentration measurements, sparsely collected by the patient through fingerprick. Usually, to cope with such a process, CGM sensor manufacturers employ linear calibration models to approximate, albeit in limited time intervals, the nonlinear relationship between electrical signal and glucose concentration. Thus, on the one hand, frequent calibrations (e.g., two per day) are required to guarantee good sensor accuracy. On the other, each calibration requires patients to add uncomfortable extra actions to the many already needed in the routine of diabetes management. The aim of this thesis is to develop new calibration algorithms for minimally invasive CGM sensors able to ensure good sensor accuracy with the minimum number of calibrations. In particular, we propose i) to replace the time-invariant gain and offset conventionally used by linear calibration models with more sophisticated time-varying functions valid for multiple-day periods, with unknown model parameters for which an a priori statistical description is available from independent training sets; and ii) to numerically estimate the calibration model parameters by means of a Bayesian estimation procedure that exploits the a priori information on model parameters in addition to some BG samples sparsely collected by the patient. The thesis is organized in 6 chapters. In Chapter 1, after a background introduction on CGM sensor technologies, the calibration problem is illustrated. Then, some state-of-the-art calibration techniques are briefly discussed together with their open problems, which motivate the aims of the thesis illustrated at the end of the chapter. In Chapter 2, the datasets used for the implementation of the calibration techniques are described, together with the performance metrics and the statistical analysis tools which will be employed to assess the quality of the results. In Chapter 3, we illustrate a recently proposed calibration algorithm (Vettoretti et al., IEEE Trans Biomed Eng 2016), which represents the starting point of the study proposed in this thesis. In particular, we demonstrate that, thanks to the development of a time-varying day-specific Bayesian prior, the algorithm becomes able to reduce the calibration frequency from two to one per day. However, the linear calibration model used by the algorithm has a domain of validity limited to certain time intervals, which does not allow calibrations to be reduced further, to fewer than one per day, and calls for the development of a new calibration model valid for multiple-day periods like that developed in the remainder of this thesis.
In Chapter 4, a novel Bayesian calibration algorithm working in a multi-day framework (referred to as the Bayesian multi-day, BMD, calibration algorithm) is presented. It is based on a multiple-day model of sensor time-variability with second order statistical priors on its unknown parameters. In each patient-sensor realization, the numerical values of the calibration model parameters are determined by a Bayesian estimation procedure exploiting the BG samples sparsely collected by the patient. In addition, the distortion introduced by the BG-to-IG kinetics is compensated during parameter identification via non-parametric deconvolution. The BMD calibration algorithm is applied to two datasets acquired with the "present-generation" Dexcom (Dexcom Inc., San Diego, CA) G4 Platinum (DG4P) CGM sensor and a "next-generation" Dexcom CGM sensor prototype (NGD). In the DG4P dataset, results show that, despite the reduction of calibration frequency (on average from 2 per day to 0.25 per day), the BMD calibration algorithm significantly improves sensor accuracy compared to the manufacturer's calibration algorithm. In the NGD dataset, performance is even better than that of the present generation, allowing calibrations to be further reduced toward zero. In Chapter 5, we analyze the potential margins for improvement of the BMD calibration algorithm and propose a further extension of the method. In particular, to cope with inter-sensor and inter-subject variability, we propose a multi-model approach and a Bayesian model selection framework (referred to as the multi-model Bayesian framework, MMBF) in which the most likely calibration model is chosen among a finite set of candidates. A preliminary assessment of the MMBF is conducted on synthetic data generated by a well-established type 1 diabetes simulation model. Results show a statistically significant accuracy improvement compared to the use of a unique calibration model. Finally, the major findings of the work carried out in this thesis, possible applications, and margins for improvement are summarized in Chapter 6.
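The flavour of this approach can be sketched compactly: with a gain that is linear in time, a Gaussian prior on the calibration parameters, and Gaussian BG measurement noise, the Bayesian (MAP) estimate has a closed form. All numbers below are invented for illustration; this is not the actual BMD algorithm, which among other things also compensates the BG-to-IG kinetics by deconvolution.

```python
import numpy as np

# Time-varying linear calibration model: g(t) = (a0 + a1 * t) * i(t) + b,
# i.e. a gain that drifts over the days of wear plus a constant offset.
# With theta = (a0, a1, b) ~ N(prior_mean, prior_cov) and Gaussian BG
# noise, the MAP estimate is the Bayesian ridge-regression formula.
t_bg = np.array([0.5, 1.0, 2.5, 4.0])          # days of BG fingersticks
current = np.array([30.0, 35.0, 28.0, 40.0])   # raw sensor current
bg = np.array([121.0, 147.0, 125.0, 196.0])    # reference BG (mg/dL)

prior_mean = np.array([4.0, 0.1, 5.0])         # e.g. from a training set
prior_cov = np.diag([1.0, 0.05, 25.0])
noise_var = 25.0                               # BG meter error variance

X = np.column_stack([current, t_bg * current, np.ones_like(t_bg)])
P0 = np.linalg.inv(prior_cov)
post_cov = np.linalg.inv(P0 + X.T @ X / noise_var)
post_mean = post_cov @ (P0 @ prior_mean + X.T @ bg / noise_var)
print("MAP estimate of (gain, gain drift, offset):", post_mean.round(3))
```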
11

Murray, Lawrence. "Bayesian learning of continuous time dynamical systems with applications in functional magnetic resonance imaging." Thesis, University of Edinburgh, 2009. http://hdl.handle.net/1842/4157.

Abstract:
Temporal phenomena in a range of disciplines are more naturally modelled in continuous-time than coerced into a discrete-time formulation. Differential systems form the mainstay of such modelling, in fields from physics to economics, geoscience to neuroscience. While powerful, these are fundamentally limited by their determinism. For the purposes of probabilistic inference, their extension to stochastic differential equations permits a continuous injection of noise and uncertainty into the system, the model, and its observation. This thesis considers Bayesian filtering for state and parameter estimation in general non-linear, non-Gaussian systems using these stochastic differential models. It identifies a number of challenges in this setting over and above those of discrete time, most notably the absence of a closed form transition density. These are addressed via a synergy of diverse work in numerical integration, particle filtering and high performance distributed computing, engineering novel solutions for this class of model. In an area where the default solution is linear discretisation, the first major contribution is the introduction of higher-order numerical schemes, particularly stochastic Runge-Kutta, for more efficient simulation of the system dynamics. Improved runtime performance is demonstrated on a number of problems, and compatibility of these integrators with conventional particle filtering and smoothing schemes discussed. Finding compatibility for the smoothing problem most lacking, the major theoretical contribution of the work is the introduction of two novel particle methods, the kernel forward-backward and kernel two-filter smoothers. By harnessing kernel density approximations in an importance sampling framework, these attain cancellation of the intractable transition density, ensuring applicability in continuous time. The use of kernel estimators is particularly amenable to parallelisation, and provides broader support for smooth densities than a sample-based representation alone, helping alleviate the well known issue of degeneracy in particle smoothers. Implementation of the methods for large-scale problems on high performance computing architectures is provided. Achieving improved temporal and spatial complexity, highly favourable runtime comparisons against conventional techniques are presented. Finally, attention turns to real world problems in the domain of Functional Magnetic Resonance Imaging (fMRI), first constructing a biologically motivated stochastic differential model of the neural and hemodynamic activity underlying the observed signal in fMRI. This model and the methodological advances of the work culminate in application to the deconvolution and effective connectivity problems in this domain.
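As context for the numerical-integration contribution, the default scheme that higher-order stochastic Runge-Kutta methods improve on is the Euler-Maruyama linear discretisation, sketched here for an Ornstein-Uhlenbeck SDE with illustrative parameters.

```python
import numpy as np

rng = np.random.default_rng(3)

# Euler-Maruyama for  dx = theta * (mu - x) dt + sigma dW: each step adds
# the drift times dt and a Gaussian increment of variance sigma^2 * dt.
def euler_maruyama(theta, mu, sigma, x0, dt, n_steps):
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dw = np.sqrt(dt) * rng.standard_normal()
        x[k + 1] = x[k] + theta * (mu - x[k]) * dt + sigma * dw
    return x

path = euler_maruyama(theta=1.0, mu=0.0, sigma=0.5, x0=2.0, dt=0.01,
                      n_steps=1000)
print(f"x(10) = {path[-1]:.3f}")
```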
12

Ratiu, Alin. "Continuous time signal processing for wake-up radios." Thesis, Lyon, INSA, 2015. http://www.theses.fr/2015ISAL0078/document.

Abstract:
Wake-Up Receivers (WU-RX) have recently been proposed as candidates to reduce the communication power budget of wireless networks. Their role is to sense the environment and wake up the main receivers, which then handle the bulk data transfer. Existing WU-RXs achieve very high sensitivities for power consumptions below 50uW but severely degrade their performance in the presence of out-of-band blockers. We attempt to tackle this problem by implementing an ultra low power, tunable, intermediate frequency filtering stage. Its specifications are derived from standard WU-RX architectures; it is shown that classic filtering techniques are either not tunable enough or demand a power consumption beyond the total WU-RX budget of 100uW. We thus turn to the use of Continuous Time Digital Signal Processing (CT-DSP), which offers the same level of programmability as standard DSP solutions while providing excellent scalability of the power consumption with respect to the characteristics of the input signal. A CT-DSP chain can be divided into two parts: the CT-ADC and the CT-DSP itself; the specifications of these two blocks, given the context of this work, are also discussed. The CT-ADC is based on a novel, delta modulator-based architecture which achieves a very low power consumption; its maximum operation frequency was extended by the implementation of a very fast feedback loop. Moreover, the CT nature of the ADC means that it does not do any sampling in time, hence no anti-aliasing filter is required. The proposed ADC requires only 24uW to quantize signals in the 10MHz to 50MHz band for an SNR between 32dB and 42dB, resulting in a figure of merit of 3-10fJ/conv-step, among the best reported for the selected frequency range. Finally, we present the architecture of the CT-DSP, which is divided into two parts: a CT-IIR and a CT-FIR. The CT-IIR is implemented by placing a standard CT-FIR in a feedback loop around the CT-ADC. If designed correctly, the feedback loop can cancel out certain frequencies from the CT-ADC input (corresponding to those of out-of-band interferers) while boosting the power of the useful signal. The effective amplitude of the CT-ADC input is thus reduced, making it generate a smaller number of tokens, thereby reducing the power consumption of the subsequent CT-FIR by a proportional amount. The CT-DSP consumes around 100uW while achieving more than 40dB of out-of-band rejection; for a bandpass implementation, a 2MHz passband can be shifted over the entire ADC bandwidth.
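The delta-modulator principle behind the CT-ADC can be sketched behaviourally: a token is emitted each time the input drifts one quantisation step away from the tracked estimate, so token activity, and hence downstream CT-DSP power, scales with signal activity. This is a discrete-time emulation of a clockless continuous-time circuit, with an invented LSB size and test tone.

```python
import numpy as np

# Emit (time, direction) tokens whenever the input moves one LSB away
# from the running estimate; the estimate then steps toward the input.
def delta_modulate(signal, lsb):
    estimate, tokens = signal[0], []
    for k, s in enumerate(signal):
        while abs(s - estimate) >= lsb:
            step = lsb if s > estimate else -lsb
            estimate += step
            tokens.append((k, 1 if step > 0 else -1))
    return tokens

t = np.linspace(0.0, 1.0, 2000)
x = np.sin(2 * np.pi * 5 * t)          # 5 Hz test tone
tokens = delta_modulate(x, lsb=0.05)
print(f"{len(tokens)} tokens emitted")  # token rate tracks signal slope
```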
13

Arastuie, Makan. "Generative Models of Link Formation and Community Detection in Continuous-Time Dynamic Networks." University of Toledo / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1596718772873086.

14

Murphy, James Kevin. "Hidden states, hidden structures : Bayesian learning in time series models." Thesis, University of Cambridge, 2014. https://www.repository.cam.ac.uk/handle/1810/250355.

Abstract:
This thesis presents methods for the inference of system state and the learning of model structure for a number of hidden-state time series models, within a Bayesian probabilistic framework. Motivating examples are taken from application areas including finance, physical object tracking and audio restoration. The work in this thesis can be broadly divided into three themes: system and parameter estimation in linear jump-diffusion systems, non-parametric model (system) estimation and batch audio restoration. For linear jump-diffusion systems, efficient state estimation methods based on the variable rate particle filter are presented for the general linear case (chapter 3) and a new method of parameter estimation based on Particle MCMC methods is introduced and tested against an alternative method using reversible-jump MCMC (chapter 4). Non-parametric model estimation is examined in two settings: the estimation of non-parametric environment models in a SLAM-style problem, and the estimation of the network structure and forms of linkage between multiple objects. In the former case, a non-parametric Gaussian process prior model is used to learn a potential field model of the environment in which a target moves. Efficient solution methods based on Rao-Blackwellized particle filters are given (chapter 5). In the latter case, a new way of learning non-linear inter-object relationships in multi-object systems is developed, allowing complicated inter-object dynamics to be learnt and causality between objects to be inferred. Again based on Gaussian process prior assumptions, the method allows the identification of a wide range of relationships between objects with minimal assumptions and admits efficient solution, albeit in batch form at present (chapter 6). Finally, the thesis presents some new results in the restoration of audio signals, in particular the removal of impulse noise (pops and clicks) from audio recordings (chapter 7).
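For orientation, the variable rate and Rao-Blackwellized particle filters mentioned in this abstract build on the bootstrap particle filter, sketched here for a toy linear-Gaussian state-space model with invented parameters.

```python
import numpy as np

rng = np.random.default_rng(5)

# Bootstrap particle filter: propagate particles through the dynamics,
# weight them by the observation likelihood, estimate, then resample.
n_steps, n_particles = 50, 500
a, q, r = 0.95, 0.5, 1.0             # dynamics, state noise, obs noise

x_true = np.zeros(n_steps)           # simulate hypothetical data
for t in range(1, n_steps):
    x_true[t] = a * x_true[t - 1] + rng.normal(scale=np.sqrt(q))
y = x_true + rng.normal(scale=np.sqrt(r), size=n_steps)

particles = rng.normal(size=n_particles)
estimates = np.zeros(n_steps)
for t in range(n_steps):
    particles = a * particles + rng.normal(scale=np.sqrt(q),
                                           size=n_particles)
    log_w = -0.5 * (y[t] - particles) ** 2 / r   # Gaussian likelihood
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    estimates[t] = np.sum(w * particles)         # filtered mean
    particles = particles[rng.choice(n_particles, n_particles, p=w)]

rmse = np.sqrt(np.mean((estimates - x_true) ** 2))
print(f"filter RMSE vs truth: {rmse:.2f}")
```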
15

Sahin, Elvan. "Discrete-Time Bayesian Networks Applied to Reliability of Flexible Coping Strategies of Nuclear Power Plants." Thesis, Virginia Tech, 2021. http://hdl.handle.net/10919/103817.

Abstract:
The Fukushima Daiichi accident prompted the nuclear community to find a new solution to reduce risky situations in nuclear power plants (NPPs) due to beyond-design-basis external events (BDBEEs). An implementation guide for diverse and flexible coping strategies (FLEX) has been presented by the Nuclear Energy Institute (NEI) to manage the challenge of BDBEEs and to enhance reactor safety against extended station blackout (SBO). To assess the effectiveness of FLEX strategies, probabilistic risk assessment (PRA) methods can be used to calculate the reliability of such systems. Due to the uniqueness of FLEX systems, these systems can potentially carry dependencies among components not commonly modeled in NPPs. Therefore, a suitable method is needed to analyze the reliability of FLEX systems in nuclear reactors. This thesis investigates the effectiveness and applicability of Bayesian networks (BNs) and Discrete-Time Bayesian Networks (DTBNs) in the reliability analysis of FLEX equipment that is utilized to reduce the risk in nuclear power plants. To this end, the thesis compares BNs with two other reliability assessment methods: Fault Tree (FT) and Markov chain (MC). It is also shown that these two methods can be transformed into BNs to perform the reliability analysis of FLEX systems. The comparison of the three reliability methods is shown and discussed in three different applications. The results show that BNs are not only a powerful method for modeling FLEX strategies but also an effective technique for performing reliability analysis of FLEX equipment in nuclear power plants.
Master of Science
Some external events, like earthquakes, flooding, and severe wind, may cause damage to nuclear reactors. To reduce the consequences of this damage, the Nuclear Energy Institute (NEI) has proposed mitigating strategies known as FLEX (Diverse and Flexible Coping Strategies). After the implementation of FLEX in nuclear power plants, we need to analyze the failure or success probability of these engineering systems through one of the existing methods. However, the existing methods are limited in analyzing the dependencies among components in complex systems. Bayesian networks (BNs) are a graphical and quantitative technique utilized to model dependency among events. This thesis shows the effectiveness and applicability of BNs in the reliability analysis of FLEX strategies by comparing them with two other reliability analysis tools, known as Fault Tree Analysis and Markov Chain. According to the reliability analysis results, BN is a powerful and promising method for modeling and analyzing FLEX strategies.
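The fault-tree-to-BN correspondence exploited here can be illustrated on a toy system: each gate becomes a deterministic node, and the system failure probability follows by summing over component states. The components, gate structure, and failure probabilities below are invented, not taken from the FLEX analyses.

```python
import itertools

# Brute-force enumeration is fine at this scale; a BN engine would do the
# same sum by variable elimination.
p_fail = {"pump": 0.05, "generator": 0.02, "battery": 0.10}

def system_fails(state):
    # Top event: pump fails OR both power sources fail (AND gate).
    return state["pump"] or (state["generator"] and state["battery"])

total = 0.0
for values in itertools.product([True, False], repeat=len(p_fail)):
    state = dict(zip(p_fail, values))
    prob = 1.0
    for comp, failed in state.items():
        prob *= p_fail[comp] if failed else 1.0 - p_fail[comp]
    if system_fails(state):
        total += prob

print(f"P(system failure) = {total:.4f}")   # 0.05 + 0.95 * 0.02 * 0.10
```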
16

Burchett, Woodrow. "Improving the Computational Efficiency in Bayesian Fitting of Cormack-Jolly-Seber Models with Individual, Continuous, Time-Varying Covariates." UKnowledge, 2017. http://uknowledge.uky.edu/statistics_etds/27.

Abstract:
The extension of the CJS model to include individual, continuous, time-varying covariates relies on the estimation of covariate values on occasions on which individuals were not captured. Fitting this model in a Bayesian framework typically involves the implementation of a Markov chain Monte Carlo (MCMC) algorithm, such as a Gibbs sampler, to sample from the posterior distribution. For large data sets with many missing covariate values that must be estimated, this creates a computational issue, as each iteration of the MCMC algorithm requires sampling from the full conditional distributions of each missing covariate value. This dissertation examines two solutions to address this problem. First, I explore variational Bayesian algorithms, which derive inference from an approximation to the posterior distribution that can be fit quickly in many complex problems. Second, I consider an alternative approximation to the posterior distribution derived by truncating the individual capture histories in order to reduce the number of missing covariates that must be updated during the MCMC sampling algorithm. In both cases, the increased computational efficiency comes at the cost of producing approximate inferences. The variational Bayesian algorithms generally do not estimate the posterior variance very accurately and do not directly address the issues with estimating many missing covariate values. Meanwhile, the truncated CJS model provides a more significant improvement in computational efficiency while inflating the posterior variance as a result of discarding some of the data. Both approaches are evaluated via simulation studies and a large mark-recapture data set consisting of cliff swallow weights and capture histories.
17

Yang, Jianxiang. "Time-delay neural network systems for stop and unstop phoneme discrimination in continuous speech signal." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1996. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp04/MQ31661.pdf.

18

Wu, Xinying. "Reliability Assessment of a Continuous-state Fuel Cell Stack System with Multiple Degrading Components." Ohio University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1556794664723115.

19

Arthur, Jacob D. "Enhanced Prediction of Network Attacks Using Incomplete Data." NSUWorks, 2017. http://nsuworks.nova.edu/gscis_etd/1020.

Abstract:
For years, intrusion detection has been considered a key component of many organizations’ network defense capabilities. Although a number of approaches to intrusion detection have been tried, few have been capable of providing security personnel responsible for the protection of a network with sufficient information to make adjustments and respond to attacks in real-time. Because intrusion detection systems rarely have complete information, false negatives and false positives are extremely common, and thus valuable resources are wasted responding to irrelevant events. In order to provide better actionable information for security personnel, a mechanism for quantifying the confidence level in predictions is needed. This work presents an approach which seeks to combine a primary prediction model with a novel secondary confidence level model which provides a measurement of the confidence in a given attack prediction being made. The ability to accurately identify an attack and quantify the confidence level in the prediction could serve as the basis for a new generation of intrusion detection devices, devices that provide earlier and better alerts for administrators and allow more proactive response to events as they are occurring.
20

Lebre, Sophie. "Stochastic process analysis for Genomics and Dynamic Bayesian Networks inference." Phd thesis, Université d'Evry-Val d'Essonne, 2007. http://tel.archives-ouvertes.fr/tel-00260250.

Abstract:
This thesis is dedicated to the development of statistical and computational methods for the analysis of DNA sequences and gene expression time series.

First we study a parsimonious Markov model called the Mixture Transition Distribution (MTD) model, which is a mixture of Markovian transitions. The overly high number of constraints on the parameters of this model hampers the formulation of an analytical expression of the Maximum Likelihood Estimate (MLE). We propose to approach the MLE with an EM algorithm. After comparing the performance of this algorithm to results from the literature, we use it to evaluate the relevance of MTD modeling for bacterial DNA coding sequences in comparison with standard Markovian modeling.

Then we propose two different approaches for recovering genetic regulation networks. We model these genetic networks with Dynamic Bayesian Networks (DBNs), whose edges describe the dependency relationships between time-delayed gene expression levels. The aim is to estimate the topology of this graph despite the overly low number of repeated measurements compared with the number of observed genes.

To face this dimensionality problem, we first assume that the dependency relationships are homogeneous, that is, the graph topology is constant across time. Then we propose to approximate this graph by considering partial order dependencies. The concept of partial order dependence graphs, already introduced for static and non-directed graphs, is adapted and characterized for DBNs using the theory of graphical models. From these results, we develop a deterministic procedure for DBN inference.

Finally, we relax the homogeneity assumption by considering a succession of several homogeneous phases. We consider a multiple changepoint regression model. Each changepoint indicates a change in the regression model parameters, which corresponds to the way an expression level depends on the others. Using reversible jump MCMC methods, we develop a stochastic algorithm which simultaneously infers the changepoint locations and the structure of the network within the phases delimited by the changepoints.

Validation of these two approaches is carried out on both simulated and real data.
21

Van, Lierde Boris. "Developing Box-Pushing Behaviours Using Evolutionary Robotics." Thesis, Högskolan Dalarna, Datateknik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:du-6250.

Abstract:
The context of this report and the IRIDIA laboratory are described in the preface. Evolutionary Robotics and the box-pushing task are presented in the introduction. The building of a test system supporting Evolutionary Robotics experiments is then detailed. This system is made of a robot simulator and a Genetic Algorithm. It is used to explore the possibility of evolving box-pushing behaviours. The bootstrapping problem is explained, and a novel approach for dealing with it is proposed, with results presented. Finally, ideas for extending this approach are presented in the conclusion.
22

Junuthula, Ruthwik Reddy. "Modeling, Evaluation and Analysis of Dynamic Networks for Social Network Analysis." University of Toledo / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1544819215833249.

23

Webb, Jared Anthony. "A Topics Analysis Model for Health Insurance Claims." BYU ScholarsArchive, 2013. https://scholarsarchive.byu.edu/etd/3805.

Abstract:
Mathematical probability has a rich theory and powerful applications. Of particular note is the Markov chain Monte Carlo (MCMC) method for sampling from high dimensional distributions that may not admit a naive analysis. We develop the theory of the MCMC method from first principles and prove its relevance. We also define a Bayesian hierarchical model for generating data. By understanding how data are generated, we may infer hidden structure in these models. We use a specific MCMC method called a Gibbs sampler to discover topic distributions in a hierarchical Bayesian model called Topics Over Time. We propose an innovative use of this model to discover disease and treatment topics in a corpus of health insurance claims data. By representing individuals as mixtures of topics, we are able to consider their future costs on an individual level rather than as part of a large collective.
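A minimal example of the MCMC idea developed from first principles in this thesis is a random-walk Metropolis sampler, shown here for an invented bimodal target; this illustrates the general method, not the Gibbs sampler actually used for Topics Over Time.

```python
import numpy as np

rng = np.random.default_rng(4)

# Accept a proposal x' with probability min(1, pi(x') / pi(x)); the
# unnormalized target is a mixture of two unit-variance Gaussians.
def target(x):
    return np.exp(-0.5 * (x - 2.0) ** 2) + np.exp(-0.5 * (x + 2.0) ** 2)

x, samples = 0.0, []
for _ in range(20000):
    proposal = x + rng.normal(scale=1.0)
    if rng.random() < target(proposal) / target(x):
        x = proposal
    samples.append(x)

print(f"mean={np.mean(samples):.2f} sd={np.std(samples):.2f}")  # ~0, ~2.2
```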
24

Kramer, Gregory Robert. "An analysis of neutral drift's effect on the evolution of a CTRNN locomotion controller with noisy fitness evaluation." Wright State University / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=wright1182196651.

25

Vigraham, Saranyan A. "An Analog Evolvable Hardware Device for Active Control." Wright State University / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=wright1195506953.

26

Tugui, Catalin Adrian. "Design Methodology for High-performance Circuits Based on Automatic Optimization Methods." Thesis, Supélec, 2013. http://www.theses.fr/2013SUPL0002/document.

Abstract:
The aim of this thesis is to establish an efficient analog design methodology, the algorithms and the corresponding design tools which can be employed in the dynamic conception of linear continuous-time (CT) functions. The purpose is to assure that the performance figures for a complete system can be rapidly investigated, but with comparable accuracy to the transistor-level evaluations. A first research direction implied the development of the novel design methodology based on the automatic optimization process of transistor-level cells using a modified Bayesian Kriging approach and the synthesis of robust high-level analog behavioral models in environments like Mathworks – Simulink, VHDL-AMS or Verilog-A.The macro-model extraction process involves a complete set of analyses (DC, AC, transient, parametric, Harmonic Balance) which are performed on the analog schematics implemented on a specific technology process. Then, the extraction and calculus of a multitude of figures of merit assures that the models include the low-level characteristics and can be directly regenerated during the optimization process.The optimization algorithm uses a Bayesian method, where the evaluation space is created by the means of a Kriging surrogate model, and the selection is effectuated by using the expected improvement (EI) criterion subject to constraints.A conception tool was developed (SIMECT), which was integrated as a Matlab toolbox, including all the macro-models extraction and automatic optimization techniques
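The expected improvement criterion mentioned here has a closed form under a Gaussian (Kriging) predictive distribution. The sketch below is a minimal, unconstrained version for minimization; the xi exploration margin and the function name are illustrative, and the constrained variant used in the thesis would further multiply EI by a probability of feasibility.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best, xi=0.0):
    """EI for minimization under a Gaussian surrogate prediction N(mu, sigma^2):
    E[max(f_best - f(x) - xi, 0)], evaluated in closed form."""
    sigma = np.maximum(sigma, 1e-12)                     # guard zero variance
    z = (f_best - mu - xi) / sigma
    return (f_best - mu - xi) * norm.cdf(z) + sigma * norm.pdf(z)

print(expected_improvement(np.array([0.5, 1.2]), np.array([0.3, 0.1]), f_best=1.0))
```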
APA, Harvard, Vancouver, ISO, and other styles
27

Tribastone, Mirco. "Scalable analysis of stochastic process algebra models." Thesis, University of Edinburgh, 2010. http://hdl.handle.net/1842/4629.

Full text
Abstract:
The performance modelling of large-scale systems using discrete-state approaches is fundamentally hampered by the well-known problem of state-space explosion, which causes exponential growth of the reachable state space as a function of the number of the components which constitute the model. Because they are mapped onto continuous-time Markov chains (CTMCs), models described in the stochastic process algebra PEPA are no exception. This thesis presents a deterministic continuous-state semantics of PEPA which employs ordinary differential equations (ODEs) as the underlying mathematics for the performance evaluation. This is suitable for models consisting of large numbers of replicated components, as the ODE problem size is insensitive to the actual population levels of the system under study. Furthermore, the ODE is given an interpretation as the fluid limit of a properly defined CTMC model when the initial population levels go to infinity. This framework allows the use of existing results which give error bounds to assess the quality of the differential approximation. The computation of performance indices such as throughput, utilisation, and average response time is interpreted deterministically as functions of the ODE solution and related to corresponding reward structures in the Markovian setting. The differential interpretation of PEPA provides a framework that is conceptually analogous to established approximation methods in queueing networks based on mean-value analysis, as both approaches aim at reducing the computational cost of the analysis by providing estimates for the expected values of the performance metrics of interest. The relationship between these two techniques is examined in more detail in a comparison between PEPA and the Layered Queueing Network (LQN) model. General patterns of translation of LQN elements into corresponding PEPA components are applied to a substantial case study of a distributed computer system. This model is analysed using stochastic simulation to gauge the soundness of the translation. Furthermore, it is subjected to a series of numerical tests to compare execution runtimes and accuracy of the PEPA differential analysis against the LQN mean-value approximation method. Finally, this thesis discusses the major elements concerning the development of a software toolkit, the PEPA Eclipse Plug-in, which offers a comprehensive modelling environment for PEPA, including modules for static analysis, explicit state-space exploration, numerical solution of the steady-state equilibrium of the Markov chain, stochastic simulation, the differential analysis approach herein presented, and a graphical framework for model editing and visualisation of performance evaluation results.
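The fluid-limit idea in this abstract replaces a CTMC over population counts with an ODE in the expected populations, with synchronisation captured by a min() over the cooperating populations. Below is a toy client/server sketch in that style; the rates and initial populations are invented for illustration and this is not the PEPA semantics itself.

```python
import numpy as np
from scipy.integrate import solve_ivp

r_think, r_serve, r_log = 1.0, 2.0, 4.0      # illustrative rates

def fluid(t, y):
    c_think, c_wait, s_idle, s_busy = y
    serve = r_serve * min(c_wait, s_idle)    # synchronised action: min() law
    return [serve - r_think * c_think,       # served clients resume thinking
            r_think * c_think - serve,
            r_log * s_busy - serve,          # busy servers log, then go idle
            serve - r_log * s_busy]

sol = solve_ivp(fluid, (0.0, 20.0), [100.0, 0.0, 10.0, 0.0], max_step=0.01)
print("population estimates at t=20:", np.round(sol.y[:, -1], 2))
```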
APA, Harvard, Vancouver, ISO, and other styles
28

Tagscherer, Michael. "Dynamische Neuronale Netzarchitektur für Kontinuierliches Lernen." Doctoral thesis, Universitätsbibliothek Chemnitz, 2001. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-200100725.

Full text
Abstract:
One of the main requirements for an optimal industrial control system is the availability of a precise model of the process, e.g. for a steel rolling mill. If no model or analytical description of such a process is available, a sufficient model has to be derived from observations, i.e. by system identification. While nonlinear function approximation is a well-known application for neural networks, the approximation of nonlinear functions that change over time poses many additional problems, which have been the focus of this research. The time-variance, caused for example by aging or attrition, requires continuous adaptation to process changes throughout the lifetime of the system, referred to here as continuous learning. Based on the analysis of different neural network approaches, the novel incremental construction algorithm ICE for continuous learning tasks has been developed. One of the main advantages of the ICE algorithm is that the number of RBF neurons and the number of local models of the hybrid network do not have to be determined in advance. This is an important feature for fast initial learning. The evolved network is automatically adapted to the time-variant target function. Another advantage of the ICE algorithm is the ability to simultaneously learn the target function and a confidence value for the network output. Finally, a special version of the ICE algorithm with asymmetric receptive fields is introduced, with intended similarities to fuzzy logic. The goal is to automatically derive rules which describe the learned model of the unknown process. In general a neural network is a "black box"; in contrast, an ICE network is more transparent.
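To make the incremental flavour of such algorithms concrete, here is a minimal growing-RBF sketch inspired by, but much simpler than, the ICE algorithm described above: a unit is allocated when the prediction error is large, otherwise the nearest unit is adapted online. All hyperparameters and the drifting target are illustrative.

```python
import numpy as np

class GrowingRBF:
    """Minimal growing-RBF sketch: allocate a unit on large error,
    otherwise adapt the nearest unit's weight online."""
    def __init__(self, width=0.5, err_thresh=0.3, lr=0.2):
        self.c, self.w = [], []
        self.width, self.err_thresh, self.lr = width, err_thresh, lr

    def predict(self, x):
        if not self.c:
            return 0.0
        phi = np.exp(-np.sum((np.array(self.c) - x) ** 2, axis=1) / self.width ** 2)
        return float(phi @ np.asarray(self.w))

    def update(self, x, y):
        err = y - self.predict(x)
        if abs(err) > self.err_thresh or not self.c:     # allocate a new unit
            self.c.append(np.atleast_1d(np.float64(x))); self.w.append(err)
        else:                                            # adapt the nearest unit
            j = int(np.argmin([np.sum((ci - x) ** 2) for ci in self.c]))
            self.w[j] += self.lr * err

model = GrowingRBF()
for t in range(2000):                                    # slowly drifting target
    x = np.random.uniform(-1.0, 1.0)
    model.update(x, np.sin(3 * x) + 1e-3 * t * x)
print(len(model.c), "units allocated")
```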
APA, Harvard, Vancouver, ISO, and other styles
29

Kocour, Martin. "Automatic Speech Recognition System Continually Improving Based on Subtitled Speech Data." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2019. http://www.nusl.cz/ntk/nusl-399164.

Full text
Abstract:
Nowadays, large-vocabulary speech recognition systems achieve relatively high accuracy. Behind their results, however, often stand tens or even hundreds of hours of manually annotated training data. Such data are often not readily available, or do not exist at all for the desired language. A possible solution is to use commonly available but lower-quality audiovisual data. This thesis deals with techniques for processing precisely such data and with their use for training acoustic models. It further discusses the possible use of these data for continual model improvement, since such data are practically inexhaustible. For this purpose, a new data selection approach was designed as part of this work.
APA, Harvard, Vancouver, ISO, and other styles
30

Gannon, Mark Andrew. "Passeios aleatórios em redes finitas e infinitas de filas." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-16102017-154842/.

Full text
Abstract:
A set of models composed of queueing networks serving as random environments for one or more random walks, which themselves can affect the behavior of the queues, is developed. Two forms of interaction between the random walkers are considered. For each model, it is proved that the corresponding Markov process is positive recurrent and reversible. The detailed balance equations are analyzed to obtain the functional form of the invariant measure of each model. In all the models analyzed in the present work, the invariant measure on a finite lattice has product form. Models of queueing networks as environments for multiple random walks are extended to infinite lattices. For each model extended, the conditions for the existence of the stochastic process on the infinite lattice are specified. In addition, it is proved that there exists a unique invariant measure on the infinite network whose projection on a finite sublattice is given by the corresponding finite-network measure. Finally, it is proved that this invariant measure on the infinite lattice is reversible.
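The reversibility and product-form claims can be checked numerically on small examples via the detailed balance equations pi_i q_ij = pi_j q_ji. The sketch below does this for a truncated birth-death chain (an M/M/1 queue), whose geometric invariant measure is the simplest instance of the product-form phenomenon; the rates and truncation level are illustrative.

```python
import numpy as np

lam, mu, N = 0.7, 1.0, 20                    # illustrative rates, truncation
Q = np.zeros((N + 1, N + 1))
for n in range(N):
    Q[n, n + 1] = lam                        # arrival
    Q[n + 1, n] = mu                         # service completion
np.fill_diagonal(Q, -Q.sum(axis=1))

pi = (lam / mu) ** np.arange(N + 1)          # geometric (product-form) measure
pi /= pi.sum()

flows = pi[:, None] * Q                      # flows[i, j] = pi_i * q_ij
assert np.allclose(flows, flows.T)           # detailed balance <=> reversible
print("detailed balance holds; pi[:5] =", np.round(pi[:5], 4))
```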
APA, Harvard, Vancouver, ISO, and other styles
31

Gönner, Lorenz, Julien Vitay, and Fred Hamker. "Predictive Place-Cell Sequences for Goal-Finding Emerge from Goal Memory and the Cognitive Map: A Computational Model." Universitätsbibliothek Chemnitz, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-230378.

Full text
Abstract:
Hippocampal place-cell sequences observed during awake immobility often represent previous experience, suggesting a role in memory processes. However, recent reports of goals being overrepresented in sequential activity suggest a role in short-term planning, although a detailed understanding of the origins of hippocampal sequential activity and of its functional role is still lacking. In particular, it is unknown which mechanism could support efficient planning by generating place-cell sequences biased toward known goal locations, in an adaptive and constructive fashion. To address these questions, we propose a model of spatial learning and sequence generation as interdependent processes, integrating cortical contextual coding, synaptic plasticity and neuromodulatory mechanisms into a map-based approach. Following goal learning, sequential activity emerges from continuous attractor network dynamics biased by goal memory inputs. We apply Bayesian decoding on the resulting spike trains, allowing a direct comparison with experimental data. Simulations show that this model (1) explains the generation of never-experienced sequence trajectories in familiar environments, without requiring virtual self-motion signals, (2) accounts for the bias in place-cell sequences toward goal locations, (3) highlights their utility in flexible route planning, and (4) provides specific testable predictions.
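The Bayesian decoding step mentioned in this abstract is typically Poisson population decoding: given spike counts in a window and per-cell place fields, the posterior over position is proportional to a product of Poisson likelihoods. A minimal sketch with an invented one-dimensional track follows; none of the numbers come from the paper.

```python
import numpy as np

def decode_position(counts, rate_maps, dt):
    """Poisson decoding: log P(x | n) = sum_i n_i log f_i(x) - dt sum_i f_i(x) + C."""
    log_post = counts @ np.log(rate_maps * dt + 1e-12) - dt * rate_maps.sum(axis=0)
    log_post -= log_post.max()               # stabilise before exponentiating
    post = np.exp(log_post)
    return post / post.sum()

x = np.linspace(0.0, 1.0, 50)                # 50 position bins on a 1-D track
centers = np.linspace(0.1, 0.9, 5)           # 5 cells with Gaussian place fields
rate_maps = 20.0 * np.exp(-(centers[:, None] - x[None, :]) ** 2 / 0.01)   # Hz
counts = np.array([0, 1, 4, 1, 0])           # spikes in one 20 ms window
print("decoded bin:", decode_position(counts, rate_maps, dt=0.02).argmax())
```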
APA, Harvard, Vancouver, ISO, and other styles
32

Adeli, Mohammad. "Recherche de caractéristiques sonores et de correspondances audiovisuelles pour des systèmes bio-inspirés de substitution sensorielle de l'audition vers la vision." Thèse, Université de Sherbrooke, 2016. http://hdl.handle.net/11143/8194.

Full text
Abstract:
Sensory substitution systems encode stimuli of one modality into stimuli of another modality. They can provide the means for handicapped people to perceive stimuli of an impaired modality through another modality. The purpose of this study was to investigate auditory-to-visual substitution systems. This type of sensory substitution is not well studied, probably because of the complexity of the auditory system and the difficulties arising from the mismatch between audible sounds, which can change with frequencies up to 20000 Hz, and visual stimuli, which must change very slowly with time to be perceived. Two specific problems of auditory-to-visual substitution systems were targeted in this research: the investigation of audiovisual correspondences and the extraction of auditory features. An audiovisual experiment was conducted online to find the associations between auditory features (pitch and timbre) and visual features (shape, color, height). One hundred and nineteen subjects took part in the experiment. A strong association between the timbre of envelope-normalized sounds and visual shapes was observed. Subjects strongly associated soft timbres with blue, green, or light gray rounded shapes; harsh timbres with red, yellow, or dark gray sharp angular shapes; and timbres having elements of both softness and harshness with a mixture of the previous two shapes. Fundamental frequency was not associated with height, grayscale, or color. Given the correspondence between timbre and shape, in the next step a flexible and multipurpose bio-inspired hierarchical model for analyzing timbre and extracting the important timbral features was developed. Inspired by findings in the fields of neuroscience, computational neuroscience, and psychoacoustics, the model not only extracts spectral and temporal characteristics of a signal but also analyzes amplitude modulations on different timescales. It uses a cochlear filter bank to resolve the spectral components of a sound, lateral inhibition to enhance spectral resolution, and a modulation filter bank to extract the global temporal envelope and roughness of the sound from amplitude modulations. To demonstrate its potential for timbre representation, the model was successfully evaluated in three applications: 1) comparison with subjective values of roughness, 2) musical instrument classification, and 3) feature selection for labeled timbres. The correspondence between timbre and shapes revealed by this study, together with the proposed model for timbre analysis, can be used to develop intuitive auditory-to-visual substitution systems that encode timbre into visual shapes.
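The roughness feature described here reflects energy in amplitude modulations at rates of very roughly 20-300 Hz. The sketch below is a crude stand-in for the model's modulation filter bank, using a Hilbert envelope and a single band-pass filter; the cutoffs and test signals are illustrative only.

```python
import numpy as np
from scipy.signal import hilbert, butter, sosfilt

def roughness_proxy(x, fs):
    """Energy of the amplitude envelope in a 20-300 Hz modulation band,
    a crude proxy for perceived roughness."""
    env = np.abs(hilbert(x))                 # temporal envelope
    sos = butter(4, [20.0, 300.0], btype="band", fs=fs, output="sos")
    return float(np.sqrt(np.mean(sosfilt(sos, env) ** 2)))

fs = 16000
t = np.arange(0.0, 1.0, 1.0 / fs)
tone = np.sin(2 * np.pi * 440 * t)                       # smooth pure tone
rough = tone * (1 + 0.8 * np.sin(2 * np.pi * 70 * t))    # 70 Hz AM sounds rough
print(roughness_proxy(tone, fs), roughness_proxy(rough, fs))
```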
APA, Harvard, Vancouver, ISO, and other styles
33

(9847832), Dawei Zhang. "Network-based output tracking control for continuous-time systems." Thesis, 2012. https://figshare.com/articles/thesis/Network-based_output_tracking_control_for_continuous-time_systems/13463123.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Jia, Jin. "Object highlighting : real-time boundary detection using a Bayesian network." Thesis, 2004. http://hdl.handle.net/1957/30045.

Full text
Abstract:
Image segmentation continues to be a fundamental problem in computer vision and image understanding. In this thesis, we present a Bayesian network that we use for object boundary detection in which the MPE (most probable explanation) before any evidence can produce multiple non-overlapping, non-self-intersecting closed contours and the MPE with evidence where one or more connected boundary points are provided produces a single non-self-intersecting, closed contour that accurately defines an object's boundary. We also present a near-linear-time algorithm that determines the MPE by computing the minimum-path spanning tree of a weighted, planar graph and finding the excluded edge (i.e., an edge not in the spanning tree) that forms the most probable loop. This efficient algorithm allows for real-time feedback in an interactive environment in which every mouse movement produces a recomputation of the MPE based on the new evidence (i.e., the new cursor position) and displays the corresponding closed loop. We call this interface "object highlighting" since the boundary of various objects and sub-objects appear and disappear as the mouse cursor moves around within an image.
Graduation date: 2004
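The loop search described in this abstract can be sketched as Dijkstra's algorithm (edge weights as negative log boundary probabilities) followed by a scan over excluded edges. The toy version below ignores the shared-prefix correction and the non-self-intersection test that the thesis handles, so it approximates the idea rather than reproducing the thesis algorithm.

```python
import heapq

def min_path_tree(adj, root):
    """Dijkstra over adj[u] = [(v, w), ...], w = -log P(edge on boundary)."""
    dist, parent = {root: 0.0}, {root: None}
    pq = [(0.0, root)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue                          # stale queue entry
        for v, w in adj[u]:
            if v not in dist or d + w < dist[v]:
                dist[v], parent[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    return dist, parent

def best_loop(adj, root):
    """Scan excluded edges (u, v); score the loop 'tree path to u + edge +
    tree path back from v' as dist[u] + w + dist[v] (shared prefix ignored)."""
    dist, parent = min_path_tree(adj, root)
    tree = {(p, c) for c, p in parent.items() if p is not None}
    return min((dist[u] + w + dist[v], u, v)
               for u in adj for v, w in adj[u]
               if (u, v) not in tree and (v, u) not in tree)

adj = {0: [(1, 0.2), (2, 1.5)], 1: [(0, 0.2), (2, 0.3)], 2: [(0, 1.5), (1, 0.3)]}
print(best_loop(adj, 0))                      # -> (2.0, 0, 2): loop 0-1-2-0
```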
APA, Harvard, Vancouver, ISO, and other styles
35

Walker, James. "Bayesian Inference and Model Selection for Partially-Observed, Continuous-Time, Stochastic Epidemic Models." Thesis, 2019. http://hdl.handle.net/2440/124703.

Full text
Abstract:
Emerging infectious diseases are an ongoing threat to the health of populations around the world. In response, countries such as the USA, UK and Australia have outlined data collection protocols to surveil these novel diseases. One of the aims of these data collection protocols is to characterise the disease in terms of transmissibility and clinical severity in order to inform an appropriate public health response. This kind of data collection protocol is yet to be enacted in Australia, but such a protocol is likely to be tested during a seasonal influenza (flu) outbreak in the next few years. However, it is important that methods for characterising these diseases are ready and well understood for when an epidemic disease emerges. The epidemic may only be characterised well if its dynamics are well described (by a model) and are accurately quantified (by precisely inferred model parameters). This thesis models epidemics and the data collection process as partially-observed continuous-time Markov chains and aims to choose between models and infer parameters using early outbreak data. It develops Bayesian methods to infer epidemic parameters from data on multiple small outbreaks, and outbreaks in a population of households. An exploratory analysis is conducted to assess the accuracy and precision of parameter estimates under different epidemic surveillance schemes, different models and different kinds of model misspecification. It describes a novel Bayesian model selection method and employs it to infer two important characteristics for understanding emerging epidemics: the shape of the infectious period distribution; and the time of infectiousness relative to symptom onset. Lastly, this thesis outlines a method for jointly inferring model parameters and selecting between epidemic models. This new method is compared with an existing method on two epidemic models and is applied to a difficult model selection problem.
Thesis (Ph.D.) -- University of Adelaide, School of Mathematical Sciences, 2020
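The underlying object in this thesis is a partially-observed continuous-time Markov chain such as the Markovian SIR model. A minimal Gillespie simulation of one outbreak, of the kind such Bayesian methods would embed in a likelihood approximation or particle filter, is sketched below with invented parameters.

```python
import numpy as np

def simulate_sir(beta, gamma, n, i0=1, seed=1):
    """Gillespie simulation of a Markovian SIR outbreak (parameters invented)."""
    rng = np.random.default_rng(seed)
    s, i, t = n - i0, i0, 0.0
    times, states = [0.0], [(s, i)]
    while i > 0:
        rate_inf = beta * s * i / n          # S -> I events
        rate_rec = gamma * i                 # I -> R events
        total = rate_inf + rate_rec
        t += rng.exponential(1.0 / total)    # time to next event
        if rng.random() < rate_inf / total:
            s, i = s - 1, i + 1
        else:
            i -= 1
        times.append(t); states.append((s, i))
    return np.array(times), np.array(states)

times, states = simulate_sir(beta=1.8, gamma=1.0, n=200)
print("final size:", 200 - states[-1, 0])
```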
APA, Harvard, Vancouver, ISO, and other styles
36

Lin, Yi-San, and 林怡姍. "A Bayesian-Network Risk Assessment Incorporating Human Factors Based on Continuous Fuzzy Set Theory." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/10170624603696988034.

Full text
Abstract:
Master's thesis
National Taiwan Ocean University
Department of Merchant Marine
2011
Maritime accidents have occurred in an endless stream in recent years, and their causes stem largely from human factors. A comprehensive risk assessment that considers human elements therefore needs to be developed in advance in order to reduce the risk of disasters. It is, however, difficult to acquire sufficient historical data in the maritime industry, so expert judgment is a critical reference source. Fuzzy set theory is one of the methods often applied to convert expert judgment into numerical values. How to properly express the real thoughts of experts and how to reasonably transform fuzzy conclusions into probability values are therefore extremely important issues. Some studies adopt Mass Assignment theory as the conversion mechanism. However, that theory confines the membership functions of the linguistic terms describing variables to a discrete form, which is sometimes unable to represent the data in full. Whether a risk assessment should adopt discrete or continuous membership functions thus depends on the nature of the variables, and applying inappropriate membership functions may violate the logic of human thought and affect the reliability of the risk assessment. To resolve these drawbacks, a new risk assessment method capable of transforming expert judgment into probability values is proposed by combining curve-fitting methods with fuzzy failure rates. The resulting probability values are then incorporated into a Bayesian network so as to infer causal relationships. Validation shows that the proposed framework is capable of overcoming the shortcomings described above.
APA, Harvard, Vancouver, ISO, and other styles
37

BARONE, ROSARIO. "MCMC methods for continuous time multi-state models and high dimensional copula models." Doctoral thesis, 2020. http://hdl.handle.net/11573/1365737.

Full text
Abstract:
In this thesis we propose Markov chain Monte Carlo (MCMC) methods for several classes of models. We consider both parametric and nonparametric Bayesian approaches, proposing either computational alternatives to existing methods or new computational tools. In particular, we consider continuous time multi-state models (CTMSM), a class of stochastic processes useful for modelling phenomena that evolve continuously in time over a finite number of states. Inference for these models is straightforward if the processes are fully observed, but presents computational difficulties if the processes are discretely observed and there is no additional information about the state transitions. In particular, in the semi-Markov case the likelihood function is not available in closed form and approximation techniques are required. In the first Chapter we provide a uniformization-based algorithm for simulating continuous time semi-Markov trajectories between discretely observed points and propose a Metropolis-within-Gibbs algorithm to sample from the posterior distributions of the parameters of that class of processes. As will be shown, our method generalizes the Markov case. In the second Chapter we present a novel Bayesian nonparametric approach to inference for CTMSM. We propose a Dirichlet Process Mixture with continuous time Markov multi-state kernels, providing a Gibbs sampler which exploits the conjugacy between the Markov CTMSM density and the chosen base measure. The method, applicable to both fully observed and discretely observed data, is a flexible solution that avoids parametric assumptions on the process and provides density estimation and clustering. In the last Chapter we focus on copulas, a class of models for the dependence between random variables. The copula approach allows the construction of joint distributions as the product of marginals and a copula function. In particular, we focus on modelling the dependence between more than two random variables. In that case, assuming a multidimensional copula model for the multivariate data implies that paired-data dependencies are assumed to belong to the same parametric family, a constraint that makes this class of models rather inflexible. One proposed solution to this problem is the vine copula construction, which allows us to rewrite the multivariate copula as a product of pair-copulas that may belong to different copula families. Another solution is the nonparametric approach. We present two Bayesian nonparametric methods for inference on copulas in high dimensions. The first proposal is an alternative to an existing method for high dimensional copulas. The second is a novel Dirichlet Process Mixture of conditional multivariate copulas, which accounts for covariates in the dependence between the considered variables. Applications with both simulated and real data are provided in the last sections of the first and second Chapters, while the last Chapter contains only applications with simulated data.
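The uniformization step mentioned in the first Chapter can be illustrated on a plain Markov chain: the CTMC is embedded in a Poisson process of rate L at least as large as every exit rate, with jumps governed by the DTMC kernel P = I + Q/L (self-loops allowed). The sketch below samples an unconditioned path; the thesis's contribution, conditioning between discretely observed points and extending to semi-Markov kernels, is not shown.

```python
import numpy as np

def uniformized_path(Q, x0, T, seed=0):
    """Sample a CTMC trajectory on [0, T] by uniformization."""
    rng = np.random.default_rng(seed)
    L = -Q.diagonal().min()                  # dominating rate >= all exit rates
    P = np.eye(Q.shape[0]) + Q / L           # DTMC kernel; self-loops allowed
    t, path = 0.0, [(0.0, x0)]
    while True:
        t += rng.exponential(1.0 / L)        # candidate jumps ~ Poisson(L)
        if t > T:
            return path
        x = rng.choice(Q.shape[0], p=P[path[-1][1]])
        if x != path[-1][1]:                 # keep only real state changes
            path.append((t, x))

Q = np.array([[-1.0, 1.0], [2.0, -2.0]])
print(uniformized_path(Q, 0, 5.0))
```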
APA, Harvard, Vancouver, ISO, and other styles
38

Chen, Jyun-Lin, and 陳俊霖. "Cost and Survival Prognosis Model for Lung Cancer Patients: A Continuous Gaussian Bayesian Network Approach." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/16087186742646333410.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Industrial Management
2014
In Taiwan, cancer has been one of the leading causes of death since 1982. Ministry of Health and Welfare mortality statistics show that 44,791 people died of cancer in 2013, accounting for 29 percent of all deaths. Lung cancer was the leading cause of cancer mortality in both men and women in 2013, accounting for 19.77% of all cancer deaths, so the medical care resources devoted to lung cancer patients deserve careful consideration. Risk adjustment addresses the issues of equity and efficiency separately by establishing a risk equalization scheme, which is seen as an effective way to evaluate individual medical requirements. This study presents a continuous Gaussian Bayesian network model to evaluate lung cancer patients' survival time and expenditure using data from Taiwan's National Health Insurance databank. Based on previous literature, we summarize related risk adjustment outcomes and provide an overview of factor selection for lung cancer. In addition, this study builds the risk adjustment model by severity stage. For survival time estimation, the adjusted R2 is 93.574% for stage I, 86.827% for stage II, 67.222% for stage III, and 52.940% for stage IV. For expenditure estimation, the adjusted R2 is 32.63% for stage I, 50.301% for stage II, 50.363% for stage III, and 66.578% for stage IV. Compared with previous literature, this study successfully increases the predictive power of the risk adjustment model by using a continuous Gaussian Bayesian network. The study also derives the probability density functions of all factors, as well as healthcare expenditure and overall survivability predictions. Public decision makers can utilize the proposed model to assess lung cancer patients, and requirement planning for lung cancer patients can be evaluated properly.
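In a continuous Gaussian Bayesian network, each node's conditional distribution is a linear regression on its parents, and the adjusted R2 values quoted above measure the fit of those stage-wise regressions. Below is a minimal sketch of fitting one node's linear-Gaussian CPD and computing adjusted R2 on synthetic data; the covariates and coefficients are invented.

```python
import numpy as np

def fit_gaussian_cpd(parents, y):
    """Least-squares fit of a linear-Gaussian CPD y ~ N(b0 + b'x, s^2),
    returning coefficients and the adjusted R^2 of the regression."""
    n, p = parents.shape
    X = np.column_stack([np.ones(n), parents])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    r2 = 1.0 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
    adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)
    return beta, adj_r2

rng = np.random.default_rng(7)
X = rng.normal(size=(500, 3))                # e.g. age, severity, comorbidity
y = 2.0 + X @ np.array([1.0, -0.5, 0.3]) + rng.normal(scale=0.5, size=500)
print("adjusted R^2:", round(fit_gaussian_cpd(X, y)[1], 3))
```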
APA, Harvard, Vancouver, ISO, and other styles
39

謝明佳. "A Bayesian Study on the Plant-Capture Approach for Population Size Estimation in Continuous Time." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/14406744993162482411.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Robinson, Joshua Westly. "Modeling Time-Varying Networks with Applications to Neural Flow and Genetic Regulation." Diss., 2010. http://hdl.handle.net/10161/3109.

Full text
Abstract:

Many biological processes are effectively modeled as networks, but a frequent assumption is that these networks do not change during data collection. However, that assumption does not hold for many phenomena, such as neural growth during learning or changes in genetic regulation during cell differentiation. Approaches are needed that explicitly model networks as they change in time and that characterize the nature of those changes.

In this work, we develop a new class of graphical models in which the conditional dependence structure of the underlying data-generation process is permitted to change over time. We first present the model, explain how to derive it from Bayesian networks, and develop an efficient MCMC sampling algorithm that easily generalizes under varying levels of uncertainty about the data generation process. We then characterize the nature of evolving networks in several biological datasets.

We initially focus on learning how neural information flow networks change in songbirds with implanted electrodes. We characterize how they change in response to different sound stimuli and during the process of habituation. We continue to explore the neurobiology of songbirds by identifying changes in neural information flow in another habituation experiment using fMRI data. Finally, we briefly examine evolving genetic regulatory networks involved in Drosophila muscle differentiation during development.

We conclude by suggesting new experimental directions and statistical extensions to the model for predicting novel neural flow results.
APA, Harvard, Vancouver, ISO, and other styles
41

Lemp, Jason David. "Capturing random utility maximization behavior in continuous choice data : application to work tour scheduling." 2009. http://hdl.handle.net/2152/18643.

Full text
Abstract:
Recent advances in travel demand modeling have concentrated on adding behavioral realism by focusing on an individual’s activity participation. And, to account for trip-chaining, tour-based methods are largely replacing trip-based methods. Alongside these advances and innovations in dynamic traffic assignment (DTA) techniques, however, time-of-day (TOD) modeling remains an Achilles’ heel. As congestion worsens and operators turn to variable road pricing, sensors are added to networks, cell phones are GPS-enabled, and DTA techniques become practical, accurate time-of-day forecasts become critical. In addition, most models highlight tradeoffs between travel time and cost, while neglecting variations in travel time. Research into stated and revealed choices suggests that travel time variability can be highly consequential. This dissertation introduces a method for imputing travel time variability information as a continuous function of time-of-day, while utilizing an existing method for imputing average travel times (by TOD). The methods employ ordinary least squares (OLS) regression techniques, and rely on reported travel time information from survey data (typically available to researchers), as well as travel time and distance estimates by origin-destination (OD) pair for free-flow and peak-period conditions from network data. This dissertation also develops two models of activity timing that recognize the imputed average travel times and travel time variability. Both models are based in random utility theory and both recognize potential correlations across time-of-day alternatives. In addition, both models are estimated in a Bayesian framework using Gibbs sampling and Metropolis-Hastings (MH) algorithms, and model estimation relies on San Francisco Bay Area data collected in 2000. The first model is the continuous cross-nested logit (CCNL) and represents tour outbound departure time choice in a continuous context (rather than discretizing time) over an entire day. The model is formulated as a generalization of the discrete cross-nested logit (CNL) for continuous choice and represents the first random utility maximization model to incorporate the ability to capture correlations across alternatives in a continuous choice context. The model is then compared to the continuous logit, which represents a generalization of the multinomial logit (MNL) for continuous choice. Empirical results suggest that the CCNL out-performs the continuous logit in terms of predictive accuracy and reasonableness of predictions for three tolling policy simulations. Moreover, while this dissertation focuses on time-of-day modeling, the CCNL could be used in a number of other continuous choice contexts (e.g., location/destination, vehicle usage, trip durations, and profit-maximizing production). The second model is a bivariate multinomial probit (BVMNP) model. While the model relies on discretization of time (into 30-minute intervals), it captures both key dimensions of a tour’s timing (rather than just one, as in this dissertation’s application of the CCNL model), which is important for tour- and activity-based models of travel demand. The BVMNP’s ability to capture correlations across scheduling alternatives is something no existing two-dimensional choice models of tour timing can claim. Both models represent substantial contributions for continuous choice modeling in transportation, business, biology, and various other fields. 
In addition, the empirical results of the models evaluated here enhance our understanding of individuals’ time-of-day decisions. For instance, average travel time and its variance are estimated to have a negative effect on workers’ utilities, as expected, but are not found to be that practically relevant here, probably because most workers are rather constrained in their activity scheduling and/or work hours. However, correlations are found to be rather strong in both models, particularly for home-to-work journeys, suggesting that if models fail to accommodate such correlations, biased application results may emerge.
APA, Harvard, Vancouver, ISO, and other styles
42

Chuang, Yin Yin, and 莊茵茵. "Using Bayesian network for analyzing cycle time to find key influenced factors and Constructing cycle time evolution table to predict cycle time in PCB industry with case studies." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/qj262q.

Full text
Abstract:
Master's thesis
National Tsing Hua University
Department of Industrial Engineering and Engineering Management
2016
Competition in the high-tech industry forces firms to monitor cycle times and to keep production efficient within budget. The printed circuit board (PCB) industry is particularly sensitive to this issue because it is characterized by small-volume, large-variety production. PCB products are highly complex, and their manufacture runs through thirty-six process stations, so monitoring each station and estimating the total cycle time are the central concerns of this work. In this thesis, we use a data mining framework to build a model for factor extraction and propose a cycle time evolution table for estimating cycle time. The Bayesian network extracts the main factors that significantly influence total cycle time, and the cycle time evolution table estimates the total cycle time per board. This study cooperates with a PCB company in Taiwan for empirical research. The proposed framework extracts from a large dataset the critical stations that influence total cycle time, validating the results, and the engineers follow these results to trace indirect impact factors. The study also applies the cycle time evolution table to cycle time estimation, giving decision makers a criterion for estimating cycle time and committing to delivery dates.
APA, Harvard, Vancouver, ISO, and other styles
43

Liu, Yen-Ling, and 劉燕玲. "A Comparative Study on Using Supervised Bayesian Network, Unsupervised DINA, G-DINA, and DINO Models in the Cognitive Diagnostic Assessment of "Time Unit" for the Fourth Graders." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/5r5336.

Full text
Abstract:
Master's thesis
National Taichung University of Education
Graduate Institute of Educational Measurement and Statistics
2011
The main purposes of this study are to establish a computerized diagnostic test of the "time unit" topic for fourth graders based on the concept of cognitive diagnostic assessment, and to analyze the test data with a supervised Bayesian network and the unsupervised DINA, G-DINA, and DINO models. The diagnostic accuracy of each model is then estimated and compared to find the best one. The major findings of this study are summarized as follows: 1. The Cronbach α, average difficulty, and average discrimination of the test are 0.82, 0.673, and 0.474 respectively, demonstrating good reliability. 2. The average diagnostic accuracy for the mathematical concepts measured by the test is 0.7210 for the DINA model, 0.7681 for the G-DINA model, 0.9130 for the Bayesian network with concepts and questions, and 0.8338 for the Bayesian network with concepts, error patterns, and questions. These results indicate that the supervised cognitive diagnostic models outperform the unsupervised ones in average diagnostic accuracy on concepts by about 19%. 3. The average diagnostic accuracy for the error patterns measured by the test is 0.7432 for the DINO model, 0.8824 for the Bayesian network with error patterns and questions, and 0.8817 for the Bayesian network with concepts, error patterns, and questions, indicating that the supervised models outperform the unsupervised ones on error patterns by about 14%. 4. Fewer than 50% of the fourth graders were found to possess the following mathematical concepts: (1) two-tier time unit conversion between hours, minutes, and seconds; (2) addition and subtraction of moments and amounts of time across days; (3) two-tier time unit conversion between days, hours, and minutes; (4) solving problems that require subtraction with borrowing on compound units by column computation. 5. The most frequent error patterns, in order, are: treating day-hour conversion as sexagesimal, errors in 12-hour and 24-hour conversion, computing only part of a compound time unit, and converting high-level units without considering the low-level units.
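For context, the DINA model used above is a conjunctive item response model with per-item slip and guess parameters. A minimal sketch of its response function follows; the skill vectors, Q-matrix row, and parameter values are illustrative.

```python
import numpy as np

def dina_prob(alpha, q, slip, guess):
    """P(correct) under DINA: 1 - slip if all skills required by the item's
    Q-matrix row q are mastered (alpha >= q componentwise), else guess."""
    eta = np.all(alpha >= q, axis=-1)        # mastery indicator per respondent
    return np.where(eta, 1.0 - slip, guess)

alphas = np.array([[1, 0, 0],                # student missing skill 2
                   [1, 1, 1]])               # student with all three skills
q_row = np.array([1, 1, 0])                  # item requires skills 1 and 2
print(dina_prob(alphas, q_row, slip=0.1, guess=0.2))   # -> [0.2 0.9]
```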
APA, Harvard, Vancouver, ISO, and other styles
44

Ribeiro, Ana Custódia da Silva. "Contabilidade de custos na definição de tabelas de preços : custeio de uma unidade de cuidados continuados." Master's thesis, 2014. http://hdl.handle.net/10400.14/17694.

Full text
Abstract:
In an industry where managers are increasingly encouraged to provide more and better care at lower prices, cost accounting in healthcare becomes vital as an information tool for decision making. The main objective of this case study was to determine the cost of one inpatient day in a continuing care unit for three types of patients, characterized according to their degree of dependence, based on the actual cost of the activities using the Time-Driven Activity-Based Costing (TDABC) methodology. The choice of this costing methodology is related to the characteristics of the model and of the institution in question. TDABC proved to be more adaptable and easier to construct, and it is a method that reflects the reality and complexity of a hospital better than other methodologies, notably Activity-Based Costing. The analysis identified the key processes and associated costs and allocated them to the patient types. The actual cost of one inpatient day for a moderately dependent and for a totally dependent patient is higher than the price contracted with the National Network for Integrated Continuous Care; under such an agreement, costs would have to be reduced to make the project sustainable. The study also determined the price to charge under private operation, and an analysis of the organization's ability to generate positive results leads to the conclusion that, under private operation, the projected continuing care unit is sustainable. Throughout this work some limitations arose, particularly the lack of an information system providing the institution's actual costs, the use of allocation criteria that may influence the accuracy of the results, and the fact that the study is prospective rather than based on the institution's actual consumption data. It is suggested that this costing system be updated as soon as possible, refreshing the time estimates and the allocation of staff and resources, in pursuit of an increasingly accurate system that better reflects the reality of OT.
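Time-Driven ABC prices each activity as a capacity cost rate multiplied by the time the activity consumes, so a per-day cost by dependence level reduces to simple time equations. The sketch below illustrates the arithmetic with invented figures; none of the rates or minutes come from the study.

```python
# Cost of one inpatient day = sum over resources of
# (capacity cost rate, EUR/min) x (minutes consumed). All figures invented.
cost_rate = {"nursing": 0.55, "physician": 1.80, "hotel_services": 0.20}

minutes_per_day = {                          # time equations by dependence level
    "low":      {"nursing":  90, "physician": 10, "hotel_services": 120},
    "moderate": {"nursing": 180, "physician": 15, "hotel_services": 120},
    "total":    {"nursing": 300, "physician": 20, "hotel_services": 120},
}

for level, minutes in minutes_per_day.items():
    day_cost = sum(cost_rate[r] * m for r, m in minutes.items())
    print(f"{level:9s} dependence: {day_cost:7.2f} EUR/day")
```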
APA, Harvard, Vancouver, ISO, and other styles