Theses on the topic « Continuous Time Bayesian Network »
Consult the 44 best theses for your research on the topic « Continuous Time Bayesian Network ».
CODECASA, DANIELE. « Continuous time bayesian network classifiers ». Doctoral thesis, Università degli Studi di Milano-Bicocca, 2014. http://hdl.handle.net/10281/80691.
Nodelman, Uri D. « Continuous time bayesian networks / ». May be available electronically:, 2007. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.
Fan, Yu. « Continuous time Bayesian Network approximate inference and social network applications ». Diss., [Riverside, Calif.] : University of California, Riverside, 2009. http://proquest.umi.com/pqdweb?index=0&did=1957308751&SrchMode=2&sid=1&Fmt=2&VInst=PROD&VType=PQD&RQT=309&VName=PQD&TS=1268330625&clientId=48051.
Includes abstract. Title from first page of PDF file (viewed March 8, 2010). Available via ProQuest Digital Dissertations. Includes bibliographical references (p. 130-133). Also issued in print.
ACERBI, ENZO. « Continuous time Bayesian networks for gene networks reconstruction ». Doctoral thesis, Università degli Studi di Milano-Bicocca, 2014. http://hdl.handle.net/10281/52709.
VILLA, SIMONE. « Continuous Time Bayesian Networks for Reasoning and Decision Making in Finance ». Doctoral thesis, Università degli Studi di Milano-Bicocca, 2015. http://hdl.handle.net/10281/69953.
The analysis of the huge amount of financial data made available by electronic markets calls for new models and techniques to effectively extract knowledge to be exploited in an informed decision-making process. The aim of this thesis is to introduce probabilistic graphical models that can be used to reason and to perform actions in such a context. In the first part of this thesis, we present a framework which exploits Bayesian networks to perform portfolio analysis and optimization in a holistic way. It leverages the compact and efficient representation of high dimensional probability distributions offered by Bayesian networks and their ability to perform evidential reasoning in order to optimize the portfolio according to different economic scenarios. In many cases, we would like to reason about market changes, i.e. we would like to express queries as probability distributions over time. Continuous time Bayesian networks can be used to address this issue. In the second part of the thesis, we show how it is possible to use this model to tackle real financial problems, and we describe two notable extensions. The first one concerns classification, where we introduce an algorithm for learning these classifiers from Big Data, and we describe their straightforward application to the foreign exchange prediction problem in the high frequency domain. The second one is related to non-stationary domains, where we explicitly model the presence of statistical dependencies in multivariate time-series while allowing them to change over time. In the third part of the thesis, we describe the use of continuous time Bayesian networks within the Markov decision process framework, which provides a model for sequential decision-making under uncertainty. We introduce a method to control continuous time dynamic systems, based on this framework, that relies on additive and context-specific features to scale up to large state spaces. Finally, we show the performance of our method in a simplified but meaningful trading domain.
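For readers new to the underlying formalism: in a continuous time Bayesian network, each node evolves as a Markov process whose conditional intensity matrix (CIM) is selected by the current state of its parents. The following minimal Python sketch, with invented rates and not drawn from the thesis above, samples a trajectory of a binary node for a fixed parent state.

```python
# Minimal CTBN illustration (invented rates, not code from the thesis):
# one binary node X whose conditional intensity matrix depends on a parent U.
import numpy as np

rng = np.random.default_rng(0)

# CIMs for node X given parent U in {0, 1}; row i holds -q_i on the diagonal
# (rate of leaving state i) and the transition rates to the other states.
CIM = {
    0: np.array([[-0.5, 0.5],
                 [ 1.0, -1.0]]),
    1: np.array([[-2.0, 2.0],
                 [ 0.2, -0.2]]),
}

def sample_trajectory(u, x0, t_end):
    """Sample a piecewise-constant trajectory of X on [0, t_end] for fixed parent state u."""
    t, x, path = 0.0, x0, [(0.0, x0)]
    Q = CIM[u]
    while True:
        rate = -Q[x, x]                      # exponential sojourn rate in state x
        t += rng.exponential(1.0 / rate)
        if t >= t_end:
            break
        x = 1 - x                            # binary node: jump to the other state
        path.append((t, x))
    return path

print(sample_trajectory(u=1, x0=0, t_end=5.0))
```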
GATTI, ELENA. « Graphical models for continuous time inference and decision making ». Doctoral thesis, Università degli Studi di Milano-Bicocca, 2011. http://hdl.handle.net/10281/19575.
Alharbi, Randa. « Bayesian inference for continuous time Markov chains ». Thesis, University of Glasgow, 2019. http://theses.gla.ac.uk/40972/.
Parton, Alison. « Bayesian inference for continuous-time step-and-turn movement models ». Thesis, University of Sheffield, 2018. http://etheses.whiterose.ac.uk/20124/.
Elshamy, Wesam Samy. « Continuous-time infinite dynamic topic models ». Diss., Kansas State University, 2012. http://hdl.handle.net/2097/15176.
Texte intégralDepartment of Computing and Information Sciences
William Henry Hsu
Topic models are probabilistic models for discovering topical themes in collections of documents. In real world applications, these models provide us with the means of organizing what would otherwise be unstructured collections. They can help us cluster a huge collection into different topics or find a subset of the collection that resembles the topical theme found in an article at hand. The first wave of topic models developed was able to discover the prevailing topics in a big collection of documents spanning a period of time. It was later realized that these time-invariant models were not capable of modeling 1) the time-varying number of topics they discover and 2) the time-changing structure of these topics. A few models were developed to address these two deficiencies. The online hierarchical Dirichlet process models the documents with a time-varying number of topics, and it varies the structure of the topics over time as well. However, it relies on document order, not timestamps, to evolve the model over time. The continuous-time dynamic topic model evolves topic structure in continuous time, but it uses a fixed number of topics over time. In this dissertation, I present a model, the continuous-time infinite dynamic topic model, that combines the advantages of these two models: 1) the online hierarchical Dirichlet process, and 2) the continuous-time dynamic topic model. More specifically, the model I present is a probabilistic topic model that does the following: 1) it changes the number of topics over continuous time, and 2) it changes the topic structure over continuous time. I compared the model I developed with the two other models under different settings. The results obtained were favorable to my model and showed the need for a model that has a continuous-time-varying number of topics and topic structure.
Acciaroli, Giada. « Calibration of continuous glucose monitoring sensors by time-varying models and Bayesian estimation ». Doctoral thesis, Università degli studi di Padova, 2018. http://hdl.handle.net/11577/3425746.
Minimally invasive continuous glucose monitoring (CGM) sensors are wearable medical devices able to measure glucose concentration in real time, every 1-5 minutes, for several consecutive days. This type of measurement provides a quasi-continuous glucose profile that is very useful information for the daily management of diabetes therapy. Most CGM devices currently on the market employ an electrochemical sensor, usually inserted in the subcutaneous tissue, which measures an electrical current generated by the glucose-oxidase chemical reaction. The current measurements are provided by the sensor with uniform sampling at high temporal frequency and are converted in real time into interstitial glucose values through a calibration process. The calibration procedure requires the patient to acquire a few reference plasma glucose measurements with fingerstick devices. Usually, CGM sensor manufacturers implement a calibration process based on a linear model that approximates, although over time intervals of limited duration, the more complex relationship between electrical current and glucose concentration. Consequently, frequent calibrations (for example, two per day) are needed to update the parameters of the calibration model and guarantee good measurement accuracy. However, each calibration requires the patient to acquire fingerstick glucose measurements, which lengthens the already long list of actions patients must perform daily to manage their therapy. The aim of this thesis is to develop a new calibration algorithm for minimally invasive CGM sensors able to guarantee good measurement accuracy with a minimal number of calibrations. Specifically, we propose i) to replace the time-invariant gain and offset usually adopted in linear calibration models with time-varying functions, able to describe the sensor behaviour over intervals of several days, and for which prior information on the unknown parameters is available; and ii) to estimate the numerical values of the calibration model parameters by a Bayesian method, exploiting the prior information on the calibration parameters in addition to a few reference plasma glucose measurements. The thesis is organized in 6 chapters. In Chapter 1, after an introduction to CGM sensor technologies, the calibration problem is illustrated; some state-of-the-art calibration techniques and their open problems are then discussed, leading to the aims of the thesis described at the end of the chapter. Chapter 2 describes the datasets used for the implementation of the calibration techniques, as well as the accuracy metrics and the statistical analysis techniques used to assess the quality of the results. Chapter 3 illustrates a calibration algorithm recently proposed in the literature (Vettoretti et al., IEEE Trans Biomed Eng, 2016), which represents the starting point of the work carried out in this thesis.
More precisely, it is shown that, thanks to the use of a day-specific Bayesian prior, this algorithm becomes effective in reducing calibrations from two to one per day without loss of accuracy. However, the linear calibration model it uses has a domain of validity limited to short time intervals between two consecutive calibrations, making it impossible to further reduce calibrations below one per day without loss of accuracy. This calls for a new calibration model valid over longer time intervals, up to several consecutive days, such as the one developed in the rest of this thesis. Chapter 4 presents a new Bayesian calibration algorithm (Bayesian multi-day, BMD). The algorithm is based on a model of the time-variability of the sensor characteristics over its days of use and on the availability of prior statistical information on its unknown parameters. For each patient-sensor pair, the numerical values of the model parameters are determined by Bayesian estimation exploiting a few reference plasma measurements acquired by the patient with fingerstick devices. In addition, during parameter estimation, the dynamics introduced by plasma-interstitium kinetics are compensated for by nonparametric deconvolution. The BMD calibration algorithm is applied to two different datasets acquired with the commercial Dexcom (Dexcom Inc., San Diego, CA) G4 Platinum (DG4P) sensor and with a prototype next-generation Dexcom sensor (NGD). On the data acquired with the DG4P sensor, the results show that, despite the calibrations being reduced (on average from 2 per day to 0.25 per day), the BMD algorithm significantly improves sensor accuracy compared with the manufacturer's calibration algorithm. On the data acquired with the NGD sensor, the results are even better, allowing calibrations to be further reduced to zero. Chapter 5 analyzes the potential margins for improvement of the BMD calibration algorithm discussed in the previous chapter and proposes a further extension of it. In particular, to better handle sensor-to-sensor and subject-to-subject variability, a multi-model calibration approach and a Bayesian model selection method (multi-model Bayesian framework, MMBF) are proposed, in which the a posteriori most probable calibration model is chosen among a set of candidates. This multi-model approach is assessed, as a preliminary step, on simulated data generated by a well-established type 1 diabetes patient simulator. The results show that sensor accuracy improves significantly with the MMBF compared with the use of a single calibration model. Finally, Chapter 6 summarizes the main results obtained in this thesis, the possible applications, and the margins for improvement for future developments.
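As a rough illustration of the Bayesian estimation step described in this abstract (a sketch only, not the thesis' BMD algorithm), the snippet below computes a MAP estimate of a linear calibration gain and offset from a few reference glucose values, combining a Gaussian prior on the parameters with a Gaussian measurement model; all numbers are invented.

```python
# Hypothetical sketch of Bayesian (MAP) calibration of a linear sensor model
# glucose = gain * current + offset, with a Gaussian prior on (gain, offset).
# Numbers are illustrative only; this is not the algorithm from the thesis.
import numpy as np

current = np.array([12.0, 18.5, 25.0])       # sensor current at calibration times (nA)
reference = np.array([110.0, 160.0, 210.0])  # fingerstick glucose references (mg/dL)

X = np.column_stack([current, np.ones_like(current)])  # design matrix [current, 1]
sigma2 = 15.0 ** 2                                     # assumed measurement variance

mu0 = np.array([8.0, 10.0])                  # prior mean for (gain, offset)
P0 = np.diag([4.0 ** 2, 30.0 ** 2])          # prior covariance

# For a linear-Gaussian model the MAP estimate equals the posterior mean.
A = X.T @ X / sigma2 + np.linalg.inv(P0)
b = X.T @ reference / sigma2 + np.linalg.inv(P0) @ mu0
gain, offset = np.linalg.solve(A, b)
print(f"gain = {gain:.2f} mg/dL per nA, offset = {offset:.1f} mg/dL")
```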
Murray, Lawrence. « Bayesian learning of continuous time dynamical systems with applications in functional magnetic resonance imaging ». Thesis, University of Edinburgh, 2009. http://hdl.handle.net/1842/4157.
Ratiu, Alin. « Continuous time signal processing for wake-up radios ». Thesis, Lyon, INSA, 2015. http://www.theses.fr/2015ISAL0078/document.
Wake-Up Receivers (WU-RX) have been recently proposed as candidates to reduce the communication power budget of wireless networks. Their role is to sense the environment and wake up the main receivers which then handle the bulk data transfer. Existing WU-RXs achieve very high sensitivities for power consumptions below 50uW but severely degrade their performance in the presence of out-of-band blockers. We attempt to tackle this problem by implementing an ultra low power, tunable, intermediate frequency filtering stage. Its specifications are derived from standard WU-RX architectures; it is shown that classic filtering techniques are either not tunable enough or demand a power consumption beyond the total WU-RX budget of 100uW. We thus turn to the use of Continuous Time Digital Signal Processing (CT-DSP) which offers the same level of programmability as standard DSP solutions while providing an excellent scalability of the power consumption with respect to the characteristics of the input signal. A CT-DSP chain can be divided into two parts: the CT-ADC and the CT-DSP itself; the specifications of these two blocks, given the context of this work, are also discussed. The CT-ADC is based on a novel, delta modulator-based architecture which achieves a very low power consumption; its maximum operation frequency was extended by the implementation of a very fast feedback loop. Moreover, the CT nature of the ADC means that it does not do any sampling in time, hence no anti-aliasing filter is required. The proposed ADC requires only 24uW to quantize signals in the [10MHz 50MHz] bandwidth for an SNR between 32dB and 42dB, resulting in a figure of merit of 3-10fJ/conv-step, among the best reported for the selected frequency range. Finally, we present the architecture of the CT-DSP which is divided into two parts: a CT-IIR and a CT-FIR. The CT-IIR is implemented by placing a standard CT-FIR in a feedback loop around the CT-ADC. If designed correctly, the feedback loop can cancel out certain frequencies from the CT-ADC input (corresponding to those of out-of-band interferers) while boosting the power of the useful signal. The effective amplitude of the CT-ADC input is thus reduced, making it generate a smaller number of tokens, thereby reducing the power consumption of the subsequent CT-FIR by a proportional amount. The CT-DSP consumes around 100uW while achieving more than 40dB of out-of-band rejection; for a bandpass implementation, a 2MHz passband can be shifted over the entire ADC bandwidth.
Arastuie, Makan. « Generative Models of Link Formation and Community Detection in Continuous-Time Dynamic Networks ». University of Toledo / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1596718772873086.
Murphy, James Kevin. « Hidden states, hidden structures : Bayesian learning in time series models ». Thesis, University of Cambridge, 2014. https://www.repository.cam.ac.uk/handle/1810/250355.
Sahin, Elvan. « Discrete-Time Bayesian Networks Applied to Reliability of Flexible Coping Strategies of Nuclear Power Plants ». Thesis, Virginia Tech, 2021. http://hdl.handle.net/10919/103817.
Texte intégralMaster of Science
Some external events, like earthquakes, flooding, and severe wind, may cause damage to nuclear reactors. To reduce the consequences of this damage, the Nuclear Energy Institute (NEI) has proposed mitigating strategies known as FLEX (Diverse and Flexible Coping Strategies). After the implementation of FLEX in nuclear power plants, we need to analyze the failure or success probability of these engineering systems through one of the existing methods. However, the existing methods are limited in analyzing the dependencies among components in complex systems. Bayesian networks (BNs) are a graphical and quantitative technique used to model dependency among events. This thesis shows the effectiveness and applicability of BNs in the reliability analysis of FLEX strategies by comparing it with two other reliability analysis tools, known as Fault Tree Analysis and Markov Chain. According to the reliability analysis results, BN is a powerful and promising method for modeling and analyzing FLEX strategies.
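To make the comparison concrete, here is a toy illustration (not taken from the thesis) of how a Bayesian network computes a system failure probability by summing over component states, the same quantity a fault tree would encode with AND/OR gates; the structure and probabilities are invented.

```python
# Toy Bayesian-network reliability calculation (invented numbers):
# system S fails depending on two components A and B that share a common cause C.
from itertools import product

p_C = 0.05                       # P(common cause present)
p_A = {True: 0.30, False: 0.02}  # P(A fails | C)
p_B = {True: 0.25, False: 0.03}  # P(B fails | C)
p_S = {(True, True): 0.95, (True, False): 0.20,
       (False, True): 0.20, (False, False): 0.01}  # P(S fails | A, B)

p_fail = 0.0
for c, a, b in product([True, False], repeat=3):
    p = (p_C if c else 1 - p_C) \
        * (p_A[c] if a else 1 - p_A[c]) \
        * (p_B[c] if b else 1 - p_B[c]) \
        * p_S[(a, b)]
    p_fail += p                  # marginalize over all component/cause states
print(f"P(system fails) = {p_fail:.4f}")
```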
Burchett, Woodrow. « Improving the Computational Efficiency in Bayesian Fitting of Cormack-Jolly-Seber Models with Individual, Continuous, Time-Varying Covariates ». UKnowledge, 2017. http://uknowledge.uky.edu/statistics_etds/27.
Yang, Jianxiang. « Time-delay neural network systems for stop and unstop phoneme discrimination in continuous speech signal ». Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1996. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp04/MQ31661.pdf.
Wu, Xinying. « Reliability Assessment of a Continuous-state Fuel Cell Stack System with Multiple Degrading Components ». Ohio University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1556794664723115.
Arthur, Jacob D. « Enhanced Prediction of Network Attacks Using Incomplete Data ». NSUWorks, 2017. http://nsuworks.nova.edu/gscis_etd/1020.
Lebre, Sophie. « Stochastic process analysis for Genomics and Dynamic Bayesian Networks inference ». Phd thesis, Université d'Evry-Val d'Essonne, 2007. http://tel.archives-ouvertes.fr/tel-00260250.
First we study a parsimonious Markov model called the Mixture Transition Distribution (MTD) model, which is a mixture of Markovian transitions. The overly high number of constraints on the parameters of this model hampers the formulation of an analytical expression of the Maximum Likelihood Estimate (MLE). We propose to approximate the MLE by means of an EM algorithm. After comparing the performance of this algorithm to results from the literature, we use it to evaluate the relevance of MTD modeling for bacterial DNA coding sequences in comparison with standard Markovian modeling.
Then we propose two different approaches for recovering genetic regulation networks. We model these genetic networks with Dynamic Bayesian Networks (DBNs), whose edges describe the dependency relationships between time-delayed gene expression levels. The aim is to estimate the topology of this graph despite the very small number of repeated measurements compared with the number of observed genes.
To cope with this dimensionality problem, we first assume that the dependency relationships are homogeneous, that is, the graph topology is constant across time. Then we propose to approximate this graph by considering partial order dependencies. The concept of partial order dependence graphs, already introduced for static and non-directed graphs, is adapted and characterized for DBNs using the theory of graphical models. From these results, we develop a deterministic procedure for DBN inference.
Finally, we relax the homogeneity assumption by considering the succession of several homogeneous phases. We consider a multiple changepoint regression model: each changepoint indicates a change in the regression model parameters, which correspond to the way an expression level depends on the others. Using reversible jump MCMC methods, we develop a stochastic algorithm which allows us to simultaneously infer the changepoint locations and the structure of the network within the phases delimited by the changepoints.
Validation of these two approaches is carried out on both simulated and real data.
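For readers unfamiliar with this class of models, the changepoint regression alluded to in the abstract can be written roughly as follows (our notation, not necessarily the thesis').

```latex
% Sketch of a first-order DBN regression with changepoints (notation is ours).
% Within phase h, delimited by changepoints \xi_{h-1} < t \le \xi_h, gene i satisfies
\[
  X_i(t) \;=\; a_{i0}^{(h)} \;+\; \sum_{j \in \mathrm{Pa}_h(i)} a_{ij}^{(h)}\, X_j(t-1) \;+\; \varepsilon_i(t),
  \qquad \varepsilon_i(t) \sim \mathcal{N}\!\bigl(0, \sigma_{i,h}^2\bigr),
\]
% and reversible jump MCMC jointly explores the number and positions of the
% changepoints \xi_h and the parent sets \mathrm{Pa}_h(i) within each phase.
```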
Van, Lierde Boris. « Developing Box-Pushing Behaviours Using Evolutionary Robotics ». Thesis, Högskolan Dalarna, Datateknik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:du-6250.
Junuthula, Ruthwik Reddy. « Modeling, Evaluation and Analysis of Dynamic Networks for Social Network Analysis ». University of Toledo / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1544819215833249.
Webb, Jared Anthony. « A Topics Analysis Model for Health Insurance Claims ». BYU ScholarsArchive, 2013. https://scholarsarchive.byu.edu/etd/3805.
Kramer, Gregory Robert. « An analysis of neutral drift's effect on the evolution of a CTRNN locomotion controller with noisy fitness evaluation ». Wright State University / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=wright1182196651.
Vigraham, Saranyan A. « An Analog Evolvable Hardware Device for Active Control ». Wright State University / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=wright1195506953.
Tugui, Catalin Adrian. « Design Methodology for High-performance Circuits Based on Automatic Optimization Methods ». Thesis, Supélec, 2013. http://www.theses.fr/2013SUPL0002/document.
The aim of this thesis is to establish an efficient analog design methodology, together with the algorithms and the corresponding design tools, which can be employed in the dynamic conception of linear continuous-time (CT) functions. The purpose is to ensure that the performance figures for a complete system can be rapidly investigated, but with accuracy comparable to transistor-level evaluations. A first research direction involved the development of a novel design methodology based on the automatic optimization of transistor-level cells using a modified Bayesian Kriging approach and the synthesis of robust high-level analog behavioral models in environments like Mathworks – Simulink, VHDL-AMS or Verilog-A. The macro-model extraction process involves a complete set of analyses (DC, AC, transient, parametric, Harmonic Balance) which are performed on the analog schematics implemented in a specific technology process. Then, the extraction and calculation of a multitude of figures of merit ensures that the models include the low-level characteristics and can be directly regenerated during the optimization process. The optimization algorithm uses a Bayesian method, where the evaluation space is created by means of a Kriging surrogate model, and the selection is performed using the expected improvement (EI) criterion subject to constraints. A design tool (SIMECT) was developed and integrated as a Matlab toolbox, including all the macro-model extraction and automatic optimization techniques.
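The expected improvement criterion mentioned here is standard in Kriging-based optimization; for minimization, with Kriging predictor mu(x), predictive standard deviation sigma(x), and current best observed value f_min, it is usually written as

```latex
\[
  \mathrm{EI}(x) \;=\; \bigl(f_{\min} - \mu(x)\bigr)\,\Phi(z) \;+\; \sigma(x)\,\varphi(z),
  \qquad z = \frac{f_{\min} - \mu(x)}{\sigma(x)},
\]
```

where Phi and phi are the standard normal CDF and PDF; constrained variants of the kind referred to above typically weight EI(x) by the estimated probability that each constraint is satisfied.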
Tribastone, Mirco. « Scalable analysis of stochastic process algebra models ». Thesis, University of Edinburgh, 2010. http://hdl.handle.net/1842/4629.
Texte intégralTagscherer, Michael. « Dynamische Neuronale Netzarchitektur für Kontinuierliches Lernen ». Doctoral thesis, Universitätsbibliothek Chemnitz, 2001. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-200100725.
One of the main requirements for an optimal industrial control system is the availability of a precise model of the process, e.g. for a steel rolling mill. If no model or analytical description of such a process is available, a sufficient model has to be derived from observations, i.e. by system identification. While nonlinear function approximation is a well-known application for neural networks, the approximation of nonlinear functions that change over time poses many additional problems, which have been the focus of this research. The time variance, caused for example by aging or attrition, requires a continuous adaptation to process changes throughout the lifetime of the system, here referred to as continuous learning. Based on the analysis of different neural network approaches, the novel incremental construction algorithm ICE for continuous learning tasks has been developed. One of the main advantages of the ICE algorithm is that the number of RBF neurons and the number of local models of the hybrid network do not have to be determined in advance. This is an important feature for fast initial learning. The evolved network is automatically adapted to the time-variant target function. Another advantage of the ICE algorithm is the ability to simultaneously learn the target function and a confidence value for the network output. Finally, a special version of the ICE algorithm with asymmetric receptive fields is introduced. Here similarities to fuzzy logic are intended. The goal is to automatically derive rules which describe the learned model of the unknown process. In general a neural network is a "black box"; in contrast, an ICE network is more transparent.
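The following generic sketch of an incrementally grown RBF network (illustrative only; it is not the ICE algorithm itself) shows the basic mechanism the abstract refers to: a new Gaussian unit is inserted whenever the error on an incoming sample exceeds a threshold, otherwise the nearest existing unit is adapted, so the network keeps tracking a slowly drifting target.

```python
# Generic incremental RBF sketch (illustrative; not the ICE algorithm itself).
import numpy as np

centers, widths, weights = [], [], []

def predict(x):
    if not centers:
        return 0.0
    phi = np.exp(-((x - np.array(centers)) ** 2) / (2 * np.array(widths) ** 2))
    return float(np.dot(np.array(weights), phi))

def learn(x, y, err_thresh=0.2, width=0.3, lr=0.1):
    e = y - predict(x)
    if abs(e) > err_thresh or not centers:
        centers.append(x); widths.append(width); weights.append(e)   # grow the network
    else:
        k = int(np.argmin(np.abs(np.array(centers) - x)))            # adapt nearest unit
        weights[k] += lr * e

# Online learning of a slowly time-varying target function
for t in range(2000):
    x = np.random.uniform(-1, 1)
    y = np.sin(3 * x + 0.001 * t)      # the target drifts with time t
    learn(x, y)
print(len(centers), "RBF units grown during online learning")
```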
Kocour, Martin. « Automatic Speech Recognition System Continually Improving Based on Subtitled Speech Data ». Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2019. http://www.nusl.cz/ntk/nusl-399164.
Texte intégralGannon, Mark Andrew. « Passeios aleatórios em redes finitas e infinitas de filas ». Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-16102017-154842/.
A set of models composed of queueing networks serving as random environments for one or more random walks, which themselves can affect the behavior of the queues, is developed. Two forms of interaction between the random walkers are considered. For each model, it is proved that the corresponding Markov process is positive recurrent and reversible. The detailed balance equations are analyzed to obtain the functional form of the invariant measure of each model. In all the models analyzed in the present work, the invariant measure on a finite lattice has product form. Models of queueing networks as environments for multiple random walks are extended to infinite lattices. For each model extended, the conditions for the existence of the stochastic process on the infinite lattice are specified. In addition, it is proved that there exists a unique invariant measure on the infinite network whose projection on a finite sublattice is given by the corresponding finite-network measure. Finally, it is proved that this invariant measure on the infinite lattice is reversible.
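For context (our notation, not the thesis'), reversibility of a Markov process with transition rates q and invariant measure pi is checked through the detailed balance equations, and in classical queueing networks such arguments lead to the kind of product-form measure mentioned above.

```latex
% Detailed balance for a reversible Markov process with rates q and measure \pi:
\[
  \pi(x)\, q(x, y) \;=\; \pi(y)\, q(y, x) \qquad \text{for all states } x \neq y,
\]
% and in the classical Jackson-type case the invariant measure of a finite
% network of M queues factorizes over the queues,
\[
  \pi(n_1, \dots, n_M) \;\propto\; \prod_{i=1}^{M} \rho_i^{\,n_i}.
\]
```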
Gönner, Lorenz, Julien Vitay et Fred Hamker. « Predictive Place-Cell Sequences for Goal-Finding Emerge from Goal Memory and the Cognitive Map : A Computational Model ». Universitätsbibliothek Chemnitz, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-230378.
Adeli, Mohammad. « Recherche de caractéristiques sonores et de correspondances audiovisuelles pour des systèmes bio-inspirés de substitution sensorielle de l'audition vers la vision ». Thèse, Université de Sherbrooke, 2016. http://hdl.handle.net/11143/8194.
Abstract: Sensory substitution systems encode a stimulus modality into another stimulus modality. They can provide the means for handicapped people to perceive stimuli of an impaired modality through another modality. The purpose of this study was to investigate auditory to visual substitution systems. This type of sensory substitution is not well-studied probably because of the complexities of the auditory system and the difficulties arising from the mismatch between audible sounds that can change with frequencies up to 20000 Hz and visual stimuli that should change very slowly with time to be perceived. Two specific problems of auditory to visual substitution systems were targeted in this research: the investigation of audiovisual correspondences and the extraction of auditory features. An audiovisual experiment was conducted online to find the associations between the auditory (pitch and timbre) and visual (shape, color, height) features. One hundred and nineteen subjects took part in the experiments. A strong association between timbre of envelope normalized sounds and visual shapes was observed. Subjects strongly associated soft timbres with blue, green or light gray rounded shapes, harsh timbres with red, yellow or dark gray sharp angular shapes and timbres having elements of softness and harshness together with a mixture of the previous two shapes. Fundamental frequency was not associated with height, grayscale or color. Given the correspondence between timbre and shapes, in the next step, a flexible and multipurpose bio-inspired hierarchical model for analyzing timbre and extracting the important timbral features was developed. Inspired by findings in the fields of neuroscience, computational neuroscience, and psychoacoustics, not only does the model extract spectral and temporal characteristics of a signal, but it also analyzes amplitude modulations on different timescales. It uses a cochlear filter bank to resolve the spectral components of a sound, lateral inhibition to enhance spectral resolution, and a modulation filter bank to extract the global temporal envelope and roughness of the sound from amplitude modulations. To demonstrate its potential for timbre representation, the model was successfully evaluated in three applications: 1) comparison with subjective values of roughness, 2) musical instrument classification, and 3) feature selection for labeled timbres. The correspondence between timbre and shapes revealed by this study and the proposed model for timbre analysis can be used to develop intuitive auditory to visual substitution systems that encode timbre into visual shapes.
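As a loose illustration of the spectro-temporal decomposition described above (a simplified stand-in, not the thesis' cochlear and modulation filter banks), the sketch below splits a signal into a few band-pass channels and extracts each channel's temporal envelope with the Hilbert transform; the channel edges and the test signal are invented.

```python
# Illustrative envelope extraction (not the thesis' model): band-pass channels
# followed by Hilbert envelopes, a crude stand-in for a cochlear filter bank.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 16000
t = np.arange(0, 0.5, 1 / fs)
x = np.sin(2 * np.pi * 440 * t) * (1 + 0.5 * np.sin(2 * np.pi * 8 * t))  # AM tone

bands = [(200, 400), (400, 800), (800, 1600)]   # made-up channel edges (Hz)
envelopes = []
for lo, hi in bands:
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    y = filtfilt(b, a, x)                       # zero-phase band-pass filtering
    envelopes.append(np.abs(hilbert(y)))        # temporal envelope of the channel
print([round(float(e.mean()), 4) for e in envelopes])
```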
Zhang, Dawei. « Network-based output tracking control for continuous-time systems ». Thesis, 2012. https://figshare.com/articles/thesis/Network-based_output_tracking_control_for_continuous-time_systems/13463123.
Jia, Jin. « Object highlighting : real-time boundary detection using a Bayesian network ». Thesis, 2004. http://hdl.handle.net/1957/30045.
Graduation date: 2004
Walker, James. « Bayesian Inference and Model Selection for Partially-Observed, Continuous-Time, Stochastic Epidemic Models ». Thesis, 2019. http://hdl.handle.net/2440/124703.
Thesis (Ph.D.) -- University of Adelaide, School of Mathematical Sciences, 2020
Lin, Yi-San, et 林怡姍. « A Bayesian-Network Risk Assessment Incorporating Human Factors Based on Continuous Fuzzy Set Theory ». Thesis, 2012. http://ndltd.ncl.edu.tw/handle/10170624603696988034.
國立臺灣海洋大學
商船學系所
100
Recently, maritime accidents have occurred one after another, and their causes stem largely from human factors. Thus, a comprehensive risk assessment that considers human elements needs to be developed in advance in order to reduce the risk of disasters. It is, however, difficult to acquire sufficient historical data in the maritime industry, hence expert judgment is a critical reference source. Fuzzy set theory is one of the methods often applied to convert expert judgment into numerical values. How to properly express the real thoughts of experts and reasonably transform fuzzy conclusions into probability values are therefore extremely important issues. Some studies adopt Mass Assignment theory as a conversion mechanism. However, this theory confines the membership functions of the linguistic terms describing variables to a discrete form, which is sometimes unable to preserve the integrity of the data. Therefore, whether a risk assessment should adopt discrete or continuous membership functions depends on the nature of the variables. The application of inappropriate membership functions may violate the logic of human thought and affect the reliability of the risk assessment. In order to overcome these drawbacks, a new risk assessment method capable of transforming expert judgment into probability values is proposed by combining curve fitting methods with fuzzy failure rates. The resulting probability values are then incorporated into a Bayesian network so as to infer causal relationships. After validation, it is concluded that the proposed framework is capable of resolving the shortcomings mentioned above.
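A generic sketch of the fuzzy-aggregation step described above (illustrative only, not the thesis' exact method): experts rate an event with linguistic terms, the corresponding continuous triangular membership functions are averaged, and the aggregate is defuzzified into a crisp possibility score. The membership functions, the expert judgments, and the subsequent mapping of the score to a failure probability are all assumptions here.

```python
# Generic fuzzy aggregation and centroid defuzzification (invented values).
import numpy as np

terms = {  # assumed triangular membership functions (a, b, c) on [0, 1]
    "low":    (0.0, 0.2, 0.4),
    "medium": (0.3, 0.5, 0.7),
    "high":   (0.6, 0.8, 1.0),
}

def tri(x, a, b, c):
    # triangular membership: rises on [a, b], falls on [b, c], zero elsewhere
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

x = np.linspace(0, 1, 1001)
judgments = ["medium", "high", "high", "medium"]          # four experts (invented)
agg = np.mean([tri(x, *terms[j]) for j in judgments], axis=0)

score = float((agg * x).sum() / agg.sum())                # discrete centroid defuzzification
print(f"fuzzy possibility score = {score:.3f}")
# A further step (e.g. a fitted fuzzy failure rate curve, as in the thesis) would
# map this score to a probability before it enters the Bayesian network.
```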
BARONE, ROSARIO. « MCMC methods for continuous time multi-state models and high dimensional copula models ». Doctoral thesis, 2020. http://hdl.handle.net/11573/1365737.
Chen, Jyun-Lin, et 陳俊霖. « Cost and Survival Prognosis Model for Lung Cancer Patients : A Continuous Gaussian Bayesian Network Approach ». Thesis, 2015. http://ndltd.ncl.edu.tw/handle/16087186742646333410.
國立臺灣科技大學
工業管理系
103
In Taiwan, cancer has been one of the leading causes of death since 1982. Ministry of Health and Welfare mortality statistics showed that 44,791 people died of cancer in 2013, accounting for 29 percent of all deaths. Furthermore, lung cancer was the leading cause of cancer mortality for both men and women in 2013, accounting for 19.77% of all cancer deaths. The resources devoted to the medical care of lung cancer patients therefore deserve careful consideration. Risk adjustment deals with the issues of equity and efficiency by establishing a risk equalization scheme, which is seen as an effective way to evaluate individual medical requirements. This study presents a continuous Gaussian Bayesian network model to evaluate lung cancer patients' survival time and expenditure using Taiwan's National Health Insurance databank. Based on previous literature, we summarize related risk adjustment outcomes and provide an overview of factor selection for lung cancer. In addition, this study presents a risk adjustment model stratified by severity stage. For survival time estimation, the adjusted R2 was 93.574% for stage I, 86.827% for stage II, 67.222% for stage III, and 52.940% for stage IV. For expenditure estimation, the adjusted R2 was 32.63% for stage I, 50.301% for stage II, 50.363% for stage III, and 66.578% for stage IV. Compared with previous literature, this study successfully increased the predictive power of the risk adjustment model by using a continuous Gaussian Bayesian network. This study also derived the probability density functions of all factors, as well as healthcare expenditure and overall survivability predictions. Public decision makers can utilize the proposed model to assess lung cancer patients. According to this study, requirement planning for lung cancer patients can be evaluated properly.
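For reference, the adjusted R2 reported above is the usual penalized version of the coefficient of determination,

```latex
\[
  R^2_{\mathrm{adj}} \;=\; 1 - \bigl(1 - R^2\bigr)\,\frac{n - 1}{n - p - 1},
\]
```

where n is the number of patients in the stage and p the number of predictors entering the regression.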
謝明佳. « A Bayesian Study on the Plant-Capture Approach for Population Size Estimation in Continuous Time ». Thesis, 2001. http://ndltd.ncl.edu.tw/handle/14406744993162482411.
Robinson, Joshua Westly. « Modeling Time-Varying Networks with Applications to Neural Flow and Genetic Regulation ». Diss., 2010. http://hdl.handle.net/10161/3109.
Texte intégralMany biological processes are effectively modeled as networks, but a frequent assumption is that these networks do not change during data collection. However, that assumption does not hold for many phenomena, such as neural growth during learning or changes in genetic regulation during cell differentiation. Approaches are needed that explicitly model networks as they change in time and that characterize the nature of those changes.
In this work, we develop a new class of graphical models in which the conditional dependence structure of the underlying data-generation process is permitted to change over time. We first present the model, explain how to derive it from Bayesian networks, and develop an efficient MCMC sampling algorithm that easily generalizes under varying levels of uncertainty about the data generation process. We then characterize the nature of evolving networks in several biological datasets.
We initially focus on learning how neural information flow networks change in songbirds with implanted electrodes. We characterize how they change in response to different sound stimuli and during the process of habituation. We continue to explore the neurobiology of songbirds by identifying changes in neural information flow in another habituation experiment using fMRI data. Finally, we briefly examine evolving genetic regulatory networks involved in Drosophila muscle differentiation during development.
We conclude by suggesting new experimental directions and statistical extensions to the model for predicting novel neural flow results.
Dissertation
Lemp, Jason David. « Capturing random utility maximization behavior in continuous choice data : application to work tour scheduling ». 2009. http://hdl.handle.net/2152/18643.
Chuang, Yin Yin, et 莊茵茵. « Using Bayesian network for analyzing cycle time to find key influenced factors and Constructing cycle time evolution table to predict cycle time in PCB industry with case studies ». Thesis, 2017. http://ndltd.ncl.edu.tw/handle/qj262q.
國立清華大學
工業工程與工程管理學系
105
Competition in the high tech industry forces firms to consider ways to monitor cycle time and to keep production efficiency within budget. The printed circuit board (PCB) industry in particular is sensitive to this issue, since it is characterized by small-volume, large-variety production. PCB products are complex, and their manufacturing goes through thirty-six processes, so how to monitor each station and estimate the total cycle time are the issues we are concerned with. In this study, we use a data mining framework to build a model for factor extraction and propose a cycle time evolution table for estimating cycle time. The Bayesian network extracts the main factors that have a significant influence on the total cycle time, and the cycle time evolution table estimates the total cycle time per piece of board. This study cooperates with a PCB company in Taiwan for empirical research. The proposed framework extracts, from a large dataset, the critical stations that influence the total cycle time, and the results are validated. Furthermore, engineers follow the results to find the indirect impact factors. In addition, the study uses the cycle time evolution table for estimating cycle time. The results give decision makers a criterion for estimating cycle time and committing to delivery dates.
Liu, Yen-Ling, et 劉燕玲. « A Comparative Study on Using Supervised Bayesian Network, Unsupervised DINA, G-DINA, and DINO Models in the Cognitive Diagnostic Assessment of "Time Unit" for the Fourth Graders ». Thesis, 2012. http://ndltd.ncl.edu.tw/handle/5r5336.
國立臺中教育大學
教育測驗統計研究所
100
The main purposes of this study are to establish a computerized diagnostic test for the "time unit" in the fourth grade based on the concept of cognitive diagnostic assessment, and to use the supervised Bayesian network and the unsupervised DINA, G-DINA, and DINO models to analyze the test data. Finally, the diagnostic accuracy is estimated and compared to find the best model. The major findings of this study are summarized as follows: 1. The Cronbach α, average difficulty, and average discrimination of this computerized diagnostic test are 0.82, 0.673, and 0.474 respectively, demonstrating significant reliability of the test. 2. The average diagnosis accuracy of the math concepts measured in this computerized diagnostic test is 0.7210 for the DINA model, 0.7681 for the G-DINA model, 0.9130 for the Bayesian network model with math concepts and questions, and 0.8338 for the Bayesian network model with math concepts, error patterns, and questions. These results indicate the superiority of the supervised cognitive diagnostic models over the unsupervised cognitive diagnostic models in the average diagnosis accuracy of math concepts, by about 19%. 3. The average diagnosis accuracy of the error patterns measured in this computerized diagnostic test is 0.7432 for the DINO model, 0.8824 for the Bayesian network model with error patterns and questions, and 0.8817 for the Bayesian network model with math concepts, error patterns, and questions. These results indicate the superiority of the supervised cognitive diagnostic models over the unsupervised cognitive diagnostic models in the average diagnosis accuracy of error patterns, by about 14%. 4. Less than 50% of the fourth graders are found to possess the following mathematical concepts: (1) the two-tier time unit conversion between hours, minutes and seconds; (2) the addition and subtraction of the moment and the amount of time across days; (3) the two-tier time unit conversion between days, hours, and minutes; (4) solving problems in which subtraction with borrowing of compound time units by straight computation is needed. 5. The most frequent error patterns of the students participating in this test, in order, are treating the conversion between days and hours as sexagesimal, errors in 12-hour and 24-hour conversion, computing only part of a compound time unit, and errors due to converting the high-level units without considering the low-level units.
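For readers unfamiliar with the DINA model compared here, its item response function is commonly written as (standard notation, not taken from the thesis)

```latex
\[
  P\bigl(X_{ij} = 1 \mid \boldsymbol{\alpha}_i\bigr)
  \;=\; (1 - s_j)^{\eta_{ij}}\, g_j^{\,1 - \eta_{ij}},
  \qquad
  \eta_{ij} \;=\; \prod_{k=1}^{K} \alpha_{ik}^{\,q_{jk}},
\]
```

where alpha_i is examinee i's skill-mastery vector, q_jk the Q-matrix entry for item j and skill k, and s_j, g_j the slip and guessing parameters; G-DINA relaxes the conjunctive assumption by giving each combination of required skills its own success probability, while DINO replaces the conjunctive eta with a disjunctive condition.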
Ribeiro, Ana Custódia da Silva. « Contabilidade de custos na definição de tabelas de preços : custeio de uma unidade de cuidados continuados ». Master's thesis, 2014. http://hdl.handle.net/10400.14/17694.
In an industry where managers are increasingly encouraged to provide more and better care at lower prices, cost accounting in healthcare becomes vital as an information tool for decision making. The main objective of this case study was to determine the cost of a day in a continuing care unit for three types of patients, characterized according to their degree of dependence, based on the actual cost of the activities using the Time-Driven Activity-Based Costing (TDABC) methodology. The choice of this costing methodology is related to the characteristics of the model and of the institution in question. TDABC proved to be more adaptable and easier to construct; it is a method that better reflects the reality and complexity of the hospital relative to other methodologies, including activity-based costing. The analysis allowed us to identify the key processes and associated costs, and to assign them to the types of patients. The actual cost of a day in a continuing care unit for a moderately dependent or a totally dependent patient is higher than the price contractually specified by the National Network for Integrated Continuous Care. In the case of such an agreement, reducing costs is necessary to make the project sustainable. This study also made it possible to determine the price to charge in a situation of private operation. Analyzing the organization's ability to generate positive results, we may conclude that, from a private-operation perspective, the projected continuing care unit is sustainable. Throughout this work I came across limitations, particularly the lack of data provided by the information system that would allow obtaining the actual costs of the institution, the use of allocation criteria that may influence the accuracy of the results, and the fact that this study is prospective. It is suggested that this costing system be updated as soon as possible, updating the allocation of professionals' time and of resources, seeking an increasingly accurate system that better captures the reality of the OT.
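As a schematic illustration of the TDABC logic used in the study (with invented numbers, not the unit's actual figures):

```latex
% TDABC assigns costs with two parameters: the capacity cost rate and the unit times.
\[
  \text{capacity cost rate} \;=\; \frac{\text{cost of capacity supplied}}{\text{practical capacity (minutes)}},
  \qquad
  \text{cost of a patient day} \;=\; \sum_{k} \text{rate}_k \times \text{minutes}_k .
\]
```

For example, a nursing team costing 30 000 EUR per month with 25 000 practical minutes gives a rate of 1.2 EUR/min, so a highly dependent patient consuming 150 nursing minutes per day would be assigned 180 EUR of nursing cost for that day (invented figures).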