Theses on the topic "Complex temporal data"


Consult the top 25 theses for your research on the topic "Complex temporal data".


1

Renz, Matthias. "Enhanced query processing on complex spatial and temporal data". Diss., [S.l.] : [s.n.], 2006. http://edoc.ub.uni-muenchen.de/archive/00006231.

2

Pacella, Massimo. "High-dimensional statistics for complex data". Doctoral thesis, Universita degli studi di Salerno, 2018. http://hdl.handle.net/10556/3016.

Abstract
2016 - 2017
High dimensional data analysis has become a popular research topic in recent years, due to the emergence of various new applications in several fields of science underscoring the need for analysing massive data sets. One of the main challenges in analysing high dimensional data regards the interpretability of estimated models as well as the computational efficiency of the procedures adopted. Such a purpose can be achieved through the identification of relevant variables that really affect the phenomenon of interest, so that effective models can subsequently be constructed and applied to solve practical problems. The first two chapters of the thesis are devoted to studying high dimensional statistics for variable selection. We first introduce a short but exhaustive review of the main developed techniques for the general problem of variable selection using nonparametric statistics. Lastly, in chapter 3 we present our proposal regarding a feature screening approach for non-additive models, developed by using conditional information in the estimation procedure... [edited by Author]
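The screening idea this abstract describes, keeping only the variables that really affect the response before building a model, can be illustrated with a marginal (sure-independence-screening-style) first pass. This is a generic sketch, not the author's conditional-information procedure; the data and threshold are invented for the example.

```python
import math
import random

def marginal_screen(X, y, keep):
    """Rank features by absolute marginal Pearson correlation with the
    response and keep the indices of the top `keep` features: a simple
    first-pass screen before model construction."""
    n, p = len(X), len(X[0])
    my = sum(y) / n
    sy = math.sqrt(sum((v - my) ** 2 for v in y))
    scores = []
    for j in range(p):
        col = [row[j] for row in X]
        mj = sum(col) / n
        sj = math.sqrt(sum((v - mj) ** 2 for v in col))
        num = sum((col[i] - mj) * (y[i] - my) for i in range(n))
        scores.append(abs(num / (sj * sy)) if sj > 0 and sy > 0 else 0.0)
    return sorted(range(p), key=lambda j: -scores[j])[:keep]

# Toy data: the response depends only on features 0 and 1 out of 50.
random.seed(0)
X = [[random.gauss(0, 1) for _ in range(50)] for _ in range(200)]
y = [row[0] + 0.8 * row[1] + random.gauss(0, 0.1) for row in X]
print(marginal_screen(X, y, keep=2))  # indices of the two truly active features
```

Marginal screening ignores conditional dependencies between features, which is exactly the limitation the thesis's conditional-information approach is aimed at.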
Cycle XXX
3

Törmänen, Patrik. "Forecasting important disease spreaders from temporal contact data". Thesis, Umeå universitet, Institutionen för fysik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-56747.

4

Schaidnagel, Michael. "Automated feature construction for classification of complex, temporal data sequences". Thesis, University of the West of Scotland, 2016. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.692834.

Abstract
Data collected from internet applications are mainly stored in the form of transactions. All transactions of one user form a sequence, which shows the user's behaviour on the site. Nowadays, it is important to be able to classify this behaviour in real time for various reasons: e.g. to increase the conversion rate of customers while they are in the store, or to prevent fraudulent transactions before they are placed. However, this is difficult due to the complex structure of the data sequences (i.e. a mix of categorical and continuous data types, constant data updates) and the large amounts of data that are stored. Therefore, this thesis studies the classification of complex data sequences. It surveys the fields of time series analysis (temporal data mining), sequence data mining and standard classification algorithms. It turns out that these algorithms are either difficult to apply to data sequences or do not deliver a classification: time series methods need a predefined model and are not able to handle complex data types; sequence classification algorithms such as the apriori algorithm family are not able to utilize the time aspect of the data. The strengths and weaknesses of the candidate algorithms are identified and used to build a new approach to the problem of classifying complex data sequences. The problem is solved by a two-step process. First, feature construction is used to create and discover suitable features in a training phase. Then, the blueprints of the discovered features are used in a formula during the classification phase to perform the real-time classification. The features are constructed by combining and aggregating the original data over the span of the sequence, including the elapsed time, by using a calculated time axis. Additionally, a combination of features and feature selection are used to simplify complex data types. This allows catching behavioural patterns that occur in the course of time.
This proposed approach combines techniques from several research fields. Part of the algorithm originates from the field of feature construction and is used to reveal behaviour over time and express this behaviour in the form of features. A combination of the features is used to highlight relations between them. The blueprints of these features can then be used to achieve classification in real time on an incoming data stream. An automated framework is presented that allows the features to adapt iteratively to a change in the underlying patterns in the data stream. This core feature of the presented work is achieved by separating the feature application step from the computationally costly feature construction step and by iteratively restarting the feature construction step on the new incoming data. The algorithm and the corresponding models are described in detail and applied to three case studies (customer churn prediction, bot detection in computer games, credit card fraud detection). The case studies show that the proposed algorithm is able to find distinctive information in data sequences and use it effectively for classification tasks. The promising results indicate that the suggested approach can be applied to a wide range of other application areas that incorporate data sequences.
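The construct-then-aggregate idea, collapsing a user's transaction sequence into a fixed-length feature vector that includes elapsed time, might look like the following sketch. The field names and the particular features are illustrative, not the thesis's schema.

```python
from datetime import datetime

def sequence_features(transactions):
    """Aggregate one user's transaction sequence into fixed-length
    features over the whole sequence span, including the elapsed time
    between first and last event (simplified construct-then-aggregate)."""
    amounts = [t["amount"] for t in transactions]
    times = sorted(t["time"] for t in transactions)
    span = (times[-1] - times[0]).total_seconds()
    return {
        "n_events": len(transactions),
        "total_amount": sum(amounts),
        "mean_amount": sum(amounts) / len(amounts),
        "elapsed_s": span,
        # Rate-style feature: events per hour over the sequence span.
        "events_per_hour": len(transactions) / (span / 3600) if span else float("inf"),
    }

seq = [
    {"time": datetime(2024, 1, 1, 10, 0), "amount": 20.0},
    {"time": datetime(2024, 1, 1, 10, 30), "amount": 5.0},
    {"time": datetime(2024, 1, 1, 12, 0), "amount": 75.0},
]
print(sequence_features(seq))
```

Because only the aggregation blueprint is needed at classification time, such features can be recomputed cheaply on an incoming stream, which is the separation of construction and application the abstract describes.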
5

Gao, Feng. "Complex medical event detection using temporal constraint reasoning". Thesis, University of Aberdeen, 2010. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=153271.

Abstract
The Neonatal Intensive Care Unit (NICU) is a hospital ward specializing in looking after premature and ill newborn babies. Working in such a busy and complex environment is not easy, and sophisticated equipment is used to help the daily work of the medical staff. Computers are used to analyse the large amount of monitored data and extract hidden information, e.g. to detect interesting events. Unfortunately, one group of important events lacks features that are recognizable by computers. This group includes the actions taken by the medical staff, for example two actions related to the respiratory system: inserting an endotracheal tube into a baby's trachea (ET Intubating) or sucking out the tube (ET Suctioning). These events are very important building blocks for other computer applications aimed at helping the staff. In this research, a strategy for detecting these medical actions based on contextual knowledge is proposed. This contextual knowledge specifies what other events normally occur with each target event and how they are temporally related to each other. The idea behind this strategy is that all medical actions are taken for different purposes and hence may have different procedures (contextual knowledge) for performing them. This contextual knowledge is modelled using a point-based framework with special attention given to various types of uncertainty. Event detection consists in searching for a consistent matching between a model based on the contextual knowledge and the observed event instances - a Temporal Constraint Satisfaction Problem (TCSP). The strategy is evaluated by detecting ET Intubating and ET Suctioning events, using a specially collected NICU monitoring dataset. The results of this evaluation are encouraging and show that the strategy is capable of detecting complex events in an NICU.
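The matching step, checking observed event times against a point-based model of allowed delays, can be sketched as a small consistency test. The event names and delay windows below are invented for illustration; the thesis solves a full TCSP with richer uncertainty handling.

```python
def consistent(obs, constraints):
    """Check whether observed event times satisfy a point-based temporal
    model: each constraint bounds the delay between two events, so event
    detection becomes a search for a consistent matching."""
    for (a, b, lo, hi) in constraints:
        if a not in obs or b not in obs:
            return False          # a required context event is missing
        delay = obs[b] - obs[a]
        if not (lo <= delay <= hi):
            return False          # delay falls outside the allowed window
    return True

# Hypothetical model: pump starts 0-30 s after an alarm; a tube-related
# event follows the pump start within 10-120 s.
model = [("alarm", "pump_on", 0, 30), ("pump_on", "tube_event", 10, 120)]
print(consistent({"alarm": 0, "pump_on": 12, "tube_event": 60}, model))  # True
print(consistent({"alarm": 0, "pump_on": 45, "tube_event": 60}, model))  # False
```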
6

Ahmad, Saif. "A temporal pattern identification and summarization method for complex time serial data". Thesis, University of Surrey, 2007. http://epubs.surrey.ac.uk/843297/.

Abstract
Most real-world time series data is produced by complex systems. For example, the economy is a social system which produces time series of stocks, bonds, and foreign exchange rates, whereas the human body is a biological system which produces time series of heart rate variations, brain activity, and rate of blood circulation. Complex systems exhibit great variety and complexity, and so do the time series emanating from these systems. However, universal principles and tools seem to govern our understanding of highly complex phenomena, processes, and dynamics. It has been argued that one of the universal properties of complex systems and the time series they produce is 'scaling'. Multiscale wavelet analysis shows promise to systematically elucidate complex dynamics in time series data at various timescales. In this research we investigate whether wavelet analysis can be used as a universal tool to study the universal property of scaling in complex systems. We have developed and evaluated a wavelet time series analysis framework for automatically assessing the state and behaviour of complex systems such as the economy and the human body. Our results are good and support the hypothesis that 'scaling' is indeed a universal property of complex systems and that wavelet analysis can be used as a universal tool to study it. We conclude that a system based on universal principles (e.g. 'scaling') and tools (e.g. wavelet analysis) is not only robust but also renders itself useful in diverse environments. Key words: complex systems, scaling, time series analysis, wavelet analysis.
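The multiscale decomposition the abstract relies on can be illustrated with a plain Haar wavelet transform that reports the detail energy at each scale; how that energy varies across scales is the kind of 'scaling' signature being studied. This is a generic sketch, not the thesis's framework.

```python
import random

def haar_energies(x, levels):
    """Haar wavelet decomposition of a length-2^k series, returning the
    total detail (high-pass) energy at each successive scale."""
    energies = []
    approx = list(x)
    for _ in range(levels):
        detail, nxt = [], []
        for a, b in zip(approx[0::2], approx[1::2]):
            detail.append((a - b) / 2 ** 0.5)   # high-pass coefficient
            nxt.append((a + b) / 2 ** 0.5)      # low-pass coefficient
        energies.append(sum(d * d for d in detail))
        approx = nxt
    return energies

# White noise: per-coefficient detail energy is roughly flat across scales.
random.seed(1)
x = [random.gauss(0, 1) for _ in range(1024)]
print([round(e, 1) for e in haar_energies(x, 4)])
```

Fitting a slope to the log-energies across scales is one common way to quantify scaling behaviour.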
7

Jones-Todd, Charlotte M. "Modelling complex dependencies inherent in spatial and spatio-temporal point pattern data". Thesis, University of St Andrews, 2017. http://hdl.handle.net/10023/12009.

Abstract
Point processes are mechanisms that beget point patterns. Realisations of point processes are observed in many contexts, for example, locations of stars in the sky, or locations of trees in a forest. Inferring the mechanisms that drive point processes relies on the development of models that appropriately account for the dependencies inherent in the data. Fitting models that adequately capture the complex dependency structures in either space, time, or both is often problematic. This is commonly due to—but not restricted to—the intractability of the likelihood function, or computational burden of the required numerical operations. This thesis primarily focuses on developing point process models with some hierarchical structure, and specifically where this is a latent structure that may be considered as one of the following: (i) some unobserved construct assumed to be generating the observed structure, or (ii) some stochastic process describing the structure of the point pattern. Model fitting procedures utilised in this thesis include either (i) approximate-likelihood techniques to circumvent intractable likelihoods, (ii) stochastic partial differential equations to model continuous spatial latent structures, or (iii) improving computational speed in numerical approximations by exploiting automatic differentiation. Moreover, this thesis extends classic point process models by considering multivariate dependencies. This is achieved through considering a general class of joint point process model, which utilise shared stochastic structures. These structures account for the dependencies inherent in multivariate point process data. These models are applied to data originating from various scientific fields; in particular, applications are considered in ecology, medicine, and geology. In addition, point process models that account for the second order behaviour of these assumed stochastic structures are also considered.
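A concrete instance of "a mechanism that begets point patterns" is the inhomogeneous Poisson process; the textbook Lewis-Shedler thinning simulation below is offered only to illustrate the object being modelled, not the thesis's latent-structure models.

```python
import math
import random

def thinned_poisson(intensity, t_max, lam_max, rng):
    """Simulate an inhomogeneous Poisson point process on [0, t_max] by
    thinning: propose candidate points at constant rate lam_max, and keep
    each with probability intensity(t) / lam_max."""
    points, t = [], 0.0
    while True:
        t += rng.expovariate(lam_max)        # next candidate point
        if t > t_max:
            return points
        if rng.random() < intensity(t) / lam_max:
            points.append(t)                 # accepted (thinned) point

rng = random.Random(42)
pts = thinned_poisson(lambda t: 2 + 2 * math.sin(t), 100.0, 4.0, rng)
print(len(pts))  # on average about 200 points for this intensity
```

Latent-structure models like those in the thesis replace the fixed intensity function with an unobserved stochastic process, which is what makes the likelihood intractable.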
8

IACOBELLO, GIOVANNI. "Spatio-temporal analysis of wall-bounded turbulence: A multidisciplinary perspective via complex networks". Doctoral thesis, Politecnico di Torino, 2020. http://hdl.handle.net/11583/2829683.

9

El, Ouassouli Amine. "Discovering complex quantitative dependencies between interval-based state streams". Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEI061.

Abstract
The increasing use of sensor devices, in addition to human-generated data, makes it possible to capture the complexity of real-world systems through rich temporal descriptions. More precisely, using many types of data sources allows an environment to be monitored by describing the evolution of several of its dimensions through data streams. One core characteristic of such configurations is heterogeneity, which appears at different levels of the data generation process: data sources, time models and data models. In this context, one challenging task for monitoring systems is to discover non-trivial temporal knowledge that is directly actionable and suitable for human interpretation. In this thesis, we first propose to use a Temporal Abstraction (TA) approach to express the information given by heterogeneous raw data streams with a unified interval-based representation, called state streams. A state reports on a high-level environment configuration that is of interest for an application domain. This approach solves the problems introduced by heterogeneity, provides a high-level pattern vocabulary, and also permits the integration of expert knowledge into the discovery process. Second, we introduce the Complex Temporal Dependency (CTD), a quantitative interval-based pattern model. It is defined similarly to a conjunctive normal form and expresses complex temporal relations between states. Contrary to the majority of existing pattern models, a CTD is evaluated with an automatic statistical assessment of stream intersections, avoiding any user-given significance parameter. Third, we propose CTD-Miner, a first efficient CTD mining framework. CTD-Miner performs incremental dependency construction and benefits from pruning techniques based on a statistical correspondence relationship, which accelerate the exploration of the search space by reducing redundant information and provide a more usable result set.
Finally, we propose the Interval Time Lag Discovery (ITLD) algorithm. ITLD is based on a confidence-variation heuristic that reduces the complexity of the pairwise dependency discovery process from quadratic to linear w.r.t. a temporal constraint Δ on time lags. Experiments on simulated and real-world data show that ITLD efficiently provides more accurate results than existing approaches, and hence significantly enhances the accuracy, performance and scalability of CTD-Miner. The encouraging results given by CTD-Miner on our real-world motion dataset suggest that insights given by real-time video processing approaches can be integrated into a knowledge discovery process, opening interesting perspectives for monitoring smart environments.
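The pairwise scoring at the heart of such dependency discovery can be sketched as a confidence measure between two interval streams under a bounded time lag: the fraction of occurrences of state A that are followed, within the lag window, by activity of state B. This is a simplified stand-in for ITLD, not the authors' code.

```python
def lag_confidence(a_starts, b_intervals, lag_lo, lag_hi):
    """Confidence that stream B is active within [lag_lo, lag_hi] after
    each start of stream A: the share of A occurrences supported by B."""
    hits = 0
    for t in a_starts:
        lo, hi = t + lag_lo, t + lag_hi
        # An interval (s, e) overlaps the lag window iff s < hi and e > lo.
        if any(s < hi and e > lo for (s, e) in b_intervals):
            hits += 1
    return hits / len(a_starts)

a = [0, 10, 20, 30]               # start times of state A
b = [(2, 4), (12, 14), (22, 24)]  # intervals during which state B holds
print(lag_confidence(a, b, 1, 5))  # 3 of 4 A-starts are followed by B: 0.75
```

Scanning candidate lag windows and tracking how this confidence varies is the kind of heuristic that lets pairwise discovery avoid testing every lag exhaustively.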
10

Sherwin, Jason. "A computational approach to achieve situational awareness from limited observations of a complex system". Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/33955.

Abstract
At the start of the 21st century, the topic of complexity remains a formidable challenge in engineering, science and other aspects of our world. It seems that when disaster strikes it is because some complex and unforeseen interaction causes the unfortunate outcome. Why did the financial system of the world melt down in 2008-2009? Why are global temperatures on the rise? These questions and others like them are difficult to answer because they pertain to contexts that require lengthy descriptions. In other words, these contexts are complex. But we as human beings are able to observe and recognize this thing we call 'complexity'. Furthermore, we recognize that there are certain elements of a context that form a system of complex interactions - i.e., a complex system. Many researchers have even noted similarities between seemingly disparate complex systems. Do sub-atomic systems bear resemblance to weather patterns? Or do human-based economic systems bear resemblance to macroscopic flows? Where do we draw the line in their resemblance? These are the kinds of questions that are asked in complex systems research. And the ability to recognize complexity is not limited to analytic research. Rather, there are many known examples of humans who not only observe and recognize but also operate complex systems. How do they do it? Is there something superhuman about these people, or is there something common to human anatomy that makes it possible to fly a plane? Or to drive a bus? Or to operate a nuclear power plant? Or to play Chopin's etudes on the piano? In each of these examples, a human being operates a complex system of machinery, whether it is a plane, a bus, a nuclear power plant or a piano. What is the common thread running through these abilities? The study of situational awareness (SA) examines how people do these types of remarkable feats.
It is not a bottom-up science though because it relies on finding general principles running through a host of varied human activities. Nevertheless, since it is not constrained by computational details, the study of situational awareness provides a unique opportunity to approach complex tasks of operation from an analytical perspective. In other words, with SA, we get to see how humans observe, recognize and react to complex systems on which they exert some control. Reconciling this perspective on complexity with complex systems research, it might be possible to further our understanding of complex phenomena if we can probe the anatomical mechanisms by which we, as humans, do it naturally. At this unique intersection of two disciplines, a hybrid approach is needed. So in this work, we propose just such an approach. In particular, this research proposes a computational approach to the situational awareness (SA) of complex systems. Here we propose to implement certain aspects of situational awareness via a biologically-inspired machine-learning technique called Hierarchical Temporal Memory (HTM). In doing so, we will use either simulated or actual data to create and to test computational implementations of situational awareness. This will be tested in two example contexts, one being more complex than the other. The ultimate goal of this research is to demonstrate a possible approach to analyzing and understanding complex systems. By using HTM and carefully developing techniques to analyze the SA formed from data, it is believed that this goal can be obtained.
11

Sivanathan, Aparajithan. "Ubiquitous Integration and Temporal Synchronisation (UbiITS) framework : a solution for building complex multimodal data capture and interactive systems". Thesis, Heriot-Watt University, 2014. http://hdl.handle.net/10399/2833.

Abstract
Contemporary Data Capture and Interactive Systems (DCIS) are tied in with various technical complexities such as multimodal data types, diverse hardware and software components, time synchronisation issues and distributed deployment configurations. Building these systems is inherently difficult and requires addressing these complexities before the intended and purposeful functionalities can be attained. The technical issues are often common and similar among diverse applications. This thesis presents the Ubiquitous Integration and Temporal Synchronisation (UbiITS) framework, a generic solution to address the technical complexities in building DCISs. The proposed solution is an abstract software framework that can be extended and customised to any application requirements. UbiITS includes all fundamental software components, techniques, system-level layer abstractions and a reference architecture as a collection to enable the systematic construction of complex DCISs. This work details four case studies to showcase the versatility and extensibility of the UbiITS framework's functionalities and demonstrate how it was employed to successfully solve a range of technical requirements. In each case UbiITS operated as the core element of the application. Additionally, these case studies are novel systems by themselves in each of their domains. Longstanding technical issues, such as flexibly integrating and interoperating multimodal tools and precise time synchronisation, were resolved in each application by employing UbiITS. The framework enabled establishing a functional system infrastructure in these cases, essentially opening up new lines of research in each discipline where these research approaches would not have been possible without the infrastructure provided by the framework. The thesis further presents a sample implementation of the framework on a device firmware, exhibiting its capability to be directly implemented on a hardware platform.
Summary metrics are also produced to establish the complexity, reusability, extendibility, implementation and maintainability characteristics of the framework.
12

Pajak, Maciej. "Evolutionary conservation and diversification of complex synaptic function in human proteome". Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/31108.

Abstract
The evolution of synapses from early proto-synaptic protein complexes in unicellular eukaryotes to sophisticated machines comprising thousands of proteins parallels the emergence of finely tuned synaptic plasticity, a molecular correlate for memory and learning. Phenotypic change in organisms is ultimately the result of evolution of their genotype at the molecular level. Selection pressure is a measure of how changes in genome sequence that arise through naturally occurring processes in populations are fixed or eliminated in subsequent generations. Inferring phylogenetic information about proteins, such as the variation of selection pressure across coding sequences, can provide valuable information not only about the origin of proteins, but also about the contribution of specific sites within proteins to their current roles within an organism. Recent evolutionary studies of synaptic proteins have generated attractive hypotheses about the emergence of finely-tuned regulatory mechanisms in the post-synaptic proteome related to learning; however, these analyses are relatively superficial. In this thesis, I establish a scalable molecular phylogenetic modelling framework based on three new inference methodologies to investigate temporal and spatial aspects of selection pressure changes for the whole human proteome using protein orthologs from up to 68 taxa. Temporal modelling of evolutionary selection pressure reveals informative features and patterns for the entire human proteome and identifies groups of proteins that share distinct diversification timelines. Multi-ontology enrichment analysis of these gene cohorts was used to aid biological interpretation, but these approaches are statistically underpowered and do not capture a clear picture of the emergence of synaptic plasticity.
Subsequent pathway-centric analysis of key synaptic pathways extends the interpretation of the temporal data and allows for revision of previous hypotheses about the evolution of complex synaptic function. I proceed to integrate inferred selection pressure timeline information in the context of static protein-protein interaction data. A network analysis of the full human proteome reveals systematic patterns linking the temporal profile of proteins' evolution and their topological role in the interaction graph. These graphs were used to test a mechanistic hypothesis that proposed a propagating diversification signal between interactors, using the temporal modelling data and network analysis tools. Finally, I analyse the data of amino-acid-level spatial modelling of selection pressure events in Arc, one of the master regulators of synaptic plasticity, and its interactors, for which detailed experimental data is available. I use the Arc interactome as an example to discuss episodic and localised diversifying selection pressure events in tightly coupled complexes of proteins, and showcase the potential for a similar systematic analysis of larger complexes of proteins using a pathway-centric approach. Through my work I revised our understanding of the temporal evolutionary patterns that shaped contemporary synaptic function through profiling of the emergence and refinement of proteins in multiple pathways of the nervous system. I also uncovered systematic effects linking dependencies between proteins with their active diversification, and hypothesised about their extension to domain-level selection pressure events.
13

Duong, Thi V. T. "Efficient duration modelling in the hierarchical hidden semi-Markov models and their applications". Thesis, Curtin University, 2008. http://hdl.handle.net/20.500.11937/1408.

Abstract
Modeling patterns in temporal data has arisen as an important problem in engineering and science. This has led to the popularity of several dynamic models, in particular the renowned hidden Markov model (HMM) [Rabiner, 1989]. Despite its widespread success in many cases, the standard HMM often fails to model more complex data whose elements are correlated hierarchically or over a long period. Such problems are, however, frequently encountered in practice. Existing efforts to overcome this weakness often address either one of these two aspects separately, mainly due to computational intractability. Motivated by this modeling challenge in many real-world problems, in particular video surveillance and segmentation, this thesis aims to develop tractable probabilistic models that can jointly model duration and hierarchical information in a unified framework. We believe that jointly exploiting statistical strength from both properties will lead to more accurate and robust models for the needed task. To tackle the modeling aspect, we base our work on an intersection between dynamic graphical models and the statistics of lifetime modeling. Realizing that the key bottleneck found in existing works lies in the choice of the distribution for a state, we have successfully integrated the discrete Coxian distribution [Cox, 1955], a special class of phase-type distributions, into the HMM to form a novel and powerful stochastic model termed the Coxian Hidden Semi-Markov Model (CxHSMM). We show that this model can still be expressed as a dynamic Bayesian network, and inference and learning can be derived analytically. Most importantly, it has four superior features over existing semi-Markov modelling: the parameter space is compact, computation is fast (almost the same as for the HMM), closed-form estimation can be derived, and the Coxian is flexible enough to approximate a large class of distributions.
Next, we exploit hierarchical decomposition in the data by borrowing an analogy from the hierarchical hidden Markov model in [Fine et al., 1998, Bui et al., 2004] and introduce a new type of shallow structured graphical model that combines both duration and hierarchical modelling into a unified framework, termed the Coxian Switching Hidden Semi-Markov Model (CxSHSMM). The top layer is a Markov sequence of switching variables, while the bottom layer is a sequence of concatenated CxHSMMs whose parameters are determined by the switching variable at the top. Again, we provide a thorough analysis along with inference and learning machinery. We also show that semi-Markov models with arbitrary depth structure can easily be developed. In all cases we further address two practical issues: missing observations due to unstable tracking, and the use of partially labelled data to improve training accuracy. Motivated by real-world problems, our application contribution is a framework to recognize complex activities of daily living (ADLs) and detect anomalies to provide better intelligent caring services for the elderly. Coarser activities with their own duration distributions are represented using the CxHSMM. Complex activities are made of a sequence of coarser activities and represented at the top level in the CxSHSMM. Intensive experiments are conducted to evaluate our solutions against existing methods. In many cases, the superiority of the joint modeling and the Coxian parameterization over traditional methods is confirmed. The robustness of our proposed models is further demonstrated in a series of more challenging experiments, in which the tracking is often lost and activities considerably overlap. Our final contribution is an application of the switching Coxian model to segment education-oriented videos into coherent topical units. Our results again demonstrate that such segmentation processes can benefit greatly from the joint modeling of duration and hierarchy.
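The duration model at the core of the CxHSMM, a discrete Coxian (phase-type) distribution, can be illustrated by sampling from one simple variant: the chain starts in phase 0 and at each time step either exits with a phase-specific probability or moves on to a later phase. This is a minimal sketch of a phase-type duration, not the thesis's parameterization; the exit probabilities are invented for the example.

```python
import random

def sample_coxian(p_exit, rng):
    """Sample a state duration from a simple discrete Coxian-style
    phase-type distribution: at each step, exit with probability
    p_exit[phase]; otherwise advance toward the final phase."""
    phase, d = 0, 0
    while True:
        d += 1
        if rng.random() < p_exit[phase]:
            return d                       # absorbed: duration is d steps
        if phase + 1 < len(p_exit):
            phase += 1                     # move to a later phase

rng = random.Random(7)
durations = [sample_coxian([0.05, 0.2, 0.5], rng) for _ in range(10000)]
print(sum(durations) / len(durations))    # empirical mean duration
```

Because the per-phase parameters are few, the parameter space stays compact while the mixture of phases can approximate a wide range of duration shapes, which is the trade-off the abstract highlights.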
14

Pray, Keith A. "Apriori Sets And Sequences: Mining Association Rules from Time Sequence Attributes". Link to electronic thesis, 2004. http://www.wpi.edu/Pubs/ETD/Available/etd-0506104-150831/.

Abstract
Thesis (M.S.) -- Worcester Polytechnic Institute.
Keywords: mining complex data; temporal association rules; computer system performance; stock market analysis; sleep disorder data. Includes bibliographical references (p. 79-85).
Los estilos APA, Harvard, Vancouver, ISO, etc.
15

Duong, Thi V. T. "Efficient duration modelling in the hierarchical hidden semi-Markov models and their applications". Curtin University of Technology, Dept. of Computing, 2008. http://espace.library.curtin.edu.au:80/R/?func=dbin-jump-full&object_id=18610.

Texto completo
Resumen
Modeling patterns in temporal data has arisen as an important problem in engineering and science. This has led to the popularity of several dynamic models, in particular the renowned hidden Markov model (HMM) [Rabiner, 1989]. Despite its widespread success, the standard HMM often fails to model more complex data whose elements are correlated hierarchically or over a long period. Such problems are, however, frequently encountered in practice. Existing efforts to overcome this weakness often address either one of these two aspects separately, mainly due to computational intractability. Motivated by this modeling challenge in many real-world problems, in particular video surveillance and segmentation, this thesis aims to develop tractable probabilistic models that can jointly model duration and hierarchical information in a unified framework. We believe that jointly exploiting statistical strength from both properties will lead to more accurate and robust models. To tackle the modeling aspect, we base our work on an intersection between dynamic graphical models and the statistics of lifetime modeling. Realizing that the key bottleneck in existing work lies in the choice of the distribution for a state, we have successfully integrated the discrete Coxian distribution [Cox, 1955], a special class of phase-type distributions, into the HMM to form a novel and powerful stochastic model termed the Coxian Hidden Semi-Markov Model (CxHSMM). We show that this model can still be expressed as a dynamic Bayesian network, and that inference and learning can be derived analytically.
Most importantly, it has four features superior to existing semi-Markov modelling: the parameter space is compact, computation is fast (almost the same as the HMM), closed-form estimates can be derived, and the Coxian is flexible enough to approximate a large class of distributions. Next, we exploit hierarchical decomposition in the data by borrowing an analogy from the hierarchical hidden Markov model in [Fine et al., 1998, Bui et al., 2004] and introduce a new type of shallow structured graphical model that combines both duration and hierarchical modelling into a unified framework, termed the Coxian Switching Hidden Semi-Markov Model (CxSHSMM). The top layer is a Markov sequence of switching variables, while the bottom layer is a sequence of concatenated CxHSMMs whose parameters are determined by the switching variable at the top. Again, we provide a thorough analysis along with inference and learning machinery. We also show that semi-Markov models with arbitrary depth structure can easily be developed. In all cases we further address two practical issues: missing observations due to unstable tracking, and the use of partially labelled data to improve training accuracy. Motivated by real-world problems, our application contribution is a framework to recognize complex activities of daily living (ADLs) and detect anomalies, to provide better intelligent caring services for the elderly.
Coarser activities with their own duration distributions are represented using the CxHSMM. Complex activities are composed of a sequence of coarser activities and represented at the top level in the CxSHSMM. Intensive experiments are conducted to evaluate our solutions against existing methods. In many cases, the superiority of the joint modeling and the Coxian parameterization over traditional methods is confirmed. The robustness of our proposed models is further demonstrated in a series of more challenging experiments, in which the tracking is often lost and activities considerably overlap. Our final contribution is an application of the switching Coxian model to segment education-oriented videos into coherent topical units. Our results again demonstrate that such segmentation processes can benefit greatly from the joint modeling of duration and hierarchy.
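The discrete Coxian distribution at the heart of the CxHSMM can be pictured as a short chain of geometric phases: the process spends a geometric number of steps in each phase and, on leaving a phase, either absorbs or moves to the next one. The sampler below is a rough, hypothetical sketch of that idea (the phase parameterization is a simplification, not the thesis's exact formulation):

```python
import random


def sample_coxian_duration(stay, absorb, rng=random):
    """Sample a duration (in time steps) from a discrete Coxian-style chain.

    stay[i]   -- probability of remaining in phase i for another step
    absorb[i] -- probability of ending (absorbing) when leaving phase i;
                 the last phase always absorbs.
    """
    m = len(stay)
    d = 0
    for i in range(m):
        # geometric sojourn in phase i (at least one step)
        d += 1
        while rng.random() < stay[i]:
            d += 1
        # on leaving phase i, absorb or proceed to phase i + 1
        if i == m - 1 or rng.random() < absorb[i]:
            return d
    return d
```

With few phases the parameter space stays compact, which is the point the abstract makes about the Coxian parameterization.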
Los estilos APA, Harvard, Vancouver, ISO, etc.
16

Arsenteva, Polina. "Statistical modeling and analysis of radio-induced adverse effects based on in vitro and in vivo data". Electronic Thesis or Diss., Bourgogne Franche-Comté, 2023. http://www.theses.fr/2023UBFCK074.

Texto completo
Resumen
Dans ce travail nous abordons le problème des effets indésirables induits par la radiothérapie sur les tissus sains. L'objectif est de proposer un cadre mathématique pour comparer les effets de différentes modalités d'irradiation, afin de pouvoir éventuellement choisir les traitements qui produisent le moins d'effets indésirables pour l’utilisation potentielle en clinique. Les effets secondaires sont étudiés dans le cadre de deux types de données : en termes de réponse omique in vitro des cellules endothéliales humaines, et en termes d'effets indésirables observés sur des souris dans le cadre d'expérimentations in vivo. Dans le cadre in vitro, nous rencontrons le problème de l'extraction d'informations clés à partir de données temporelles complexes qui ne peuvent pas être traitées avec les méthodes disponibles dans la littérature. Nous modélisons le fold change radio-induit, l'objet qui code la différence d'effet de deux conditions expérimentales, d’une manière qui permet de prendre en compte les incertitudes des mesures ainsi que les corrélations entre les entités observées. Nous construisons une distance, avec une généralisation ultérieure à une mesure de dissimilarité, permettant de comparer les fold changes en termes de toutes leurs propriétés statistiques importantes. Enfin, nous proposons un algorithme computationnellement efficace effectuant le clustering joint avec l'alignement temporel des fold changes. Les caractéristiques clés extraites de ces dernières sont visualisées à l'aide de deux types de représentations de réseau, dans le but de faciliter l'interprétation biologique. Dans le cadre in vivo, l’enjeu statistique est d’établir un lien prédictif entre des variables qui, en raison des spécificités du design expérimental, ne pourront jamais être observées sur les mêmes animaux. Dans le contexte de ne pas avoir accès aux lois jointes, nous exploitons les informations supplémentaires sur les groupes observés pour déduire le modèle de régression linéaire. 
Nous proposons deux estimateurs des paramètres de régression, l'un basé sur la méthode des moments et l'autre basé sur le transport optimal, ainsi que des estimateurs des intervalles de confiance basés sur le bootstrap stratifié
In this work we address the problem of adverse effects induced by radiotherapy on healthy tissues. The goal is to propose a mathematical framework for comparing the effects of different irradiation modalities, so as to ultimately choose the treatments that produce the fewest adverse effects for potential use in the clinical setting. The adverse effects are studied in the context of two types of data: the in vitro omic response of human endothelial cells, and the adverse effects observed on mice in the framework of in vivo experiments. In the in vitro setting, we encounter the problem of extracting key information from complex temporal data that cannot be treated with the methods available in the literature. We model the radio-induced fold change, the object that encodes the difference in effect between two experimental conditions, in a way that takes into account the uncertainties of measurements as well as the correlations between the observed entities. We construct a distance, later generalized to a dissimilarity measure, that allows fold changes to be compared in terms of all their important statistical properties. Finally, we propose a computationally efficient algorithm that performs clustering jointly with temporal alignment of the fold changes. The key features extracted in this way are visualized using two types of network representations, in order to facilitate biological interpretation. In the in vivo setting, the statistical challenge is to establish a predictive link between variables that, due to the specificities of the experimental design, can never be observed on the same animals. Lacking access to the joint distributions, we leverage additional information on the observed groups to infer a linear regression model.
We propose two estimators of the regression parameters, one based on the method of moments and the other on optimal transport, as well as confidence-interval estimators based on a stratified bootstrap procedure.
Los estilos APA, Harvard, Vancouver, ISO, etc.
17

Ferreira, Leonardo Nascimento. "Time series data mining using complex networks". Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-01022018-144118/.

Texto completo
Resumen
A time series is a time-ordered dataset. Due to its ubiquity, time series analysis is of interest to many scientific fields. Time series data mining is a research area intended to extract information from these time-related data. To achieve this, different models are used to describe series and search for patterns. One approach for modeling temporal data is to use complex networks. In this case, temporal data are mapped to a topological space that allows data exploration using network techniques. In this thesis, we present solutions for time series data mining tasks using complex networks. The primary goal was to evaluate the benefits of using network theory to extract information from temporal data. We focused on three mining tasks. (1) In the clustering task, we represented every time series by a vertex and connected vertices that represent similar time series. We used community detection algorithms to cluster similar series. Results show that this approach performs better than traditional clustering methods. (2) In the classification task, we mapped every labeled time series in a database to a visibility graph. We performed classification by transforming an unlabeled time series into a visibility graph and comparing it to the labeled graphs using a distance function. The new label is the most frequent label among the k nearest graphs. (3) In the periodicity detection task, we first transform a time series into a visibility graph. Local maxima in a time series are usually mapped to highly connected vertices that link two communities. We used this community structure to propose a periodicity detection algorithm for time series. The method is robust to noisy data and does not require parameters. With the methods and results presented in this thesis, we conclude that network science is beneficial to time series data mining. Moreover, this approach can provide better results than traditional methods.
It is a new form of extracting information from time series and can be easily extended to other tasks.
Séries temporais são conjuntos de dados ordenados no tempo. Devido à ubiquidade desses dados, seu estudo é interessante para muitos campos da ciência. A mineração de dados temporais é uma área de pesquisa que tem como objetivo extrair informações desses dados relacionados no tempo. Para isso, modelos são usados para descrever as séries e buscar por padrões. Uma forma de modelar séries temporais é por meio de redes complexas. Nessa modelagem, um mapeamento é feito do espaço temporal para o espaço topológico, o que permite avaliar dados temporais usando técnicas de redes. Nesta tese, apresentamos soluções para tarefas de mineração de dados de séries temporais usando redes complexas. O objetivo principal foi avaliar os benefícios do uso da teoria de redes para extrair informações de dados temporais. Concentramo-nos em três tarefas de mineração. (1) Na tarefa de agrupamento, cada série temporal é representada por um vértice e as arestas são criadas entre as séries de acordo com sua similaridade. Os algoritmos de detecção de comunidades podem ser usados para agrupar séries semelhantes. Os resultados mostram que esta abordagem apresenta melhores resultados do que os resultados de agrupamento tradicional. (2) Na tarefa de classificação, cada série temporal rotulada em um banco de dados é mapeada para um gráfico de visibilidade. A classificação é realizada transformando uma série temporal não marcada em um gráfico de visibilidade e comparando-a com os gráficos rotulados usando uma função de distância. O novo rótulo é dado pelo rótulo mais frequente nos k grafos mais próximos. (3) Na tarefa de detecção de periodicidade, uma série temporal é primeiramente transformada em um gráfico de visibilidade. Máximos locais em uma série temporal geralmente são mapeados para vértices altamente conectados que ligam duas comunidades. O método proposto utiliza a estrutura de comunidades para realizar a detecção de períodos em séries temporais. 
Este método é robusto para dados ruidosos e não requer parâmetros. Com os métodos e resultados apresentados nesta tese, concluímos que a teoria da redes complexas é benéfica para a mineração de dados em séries temporais. Além disso, esta abordagem pode proporcionar melhores resultados do que os métodos tradicionais e é uma nova forma de extrair informações de séries temporais que pode ser facilmente estendida para outras tarefas.
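The visibility-graph mapping used in tasks (2) and (3) follows directly from its definition: two points of the series are linked if the straight line between them passes above every intermediate point. The function below is an illustrative O(n³) sketch of the natural visibility graph, not the thesis's code:

```python
def natural_visibility_graph(series):
    """Edge set of the natural visibility graph of a time series:
    points (a, series[a]) and (b, series[b]) are connected if every
    point between them lies strictly below the segment joining them."""
    edges = set()
    n = len(series)
    for a in range(n):
        for b in range(a + 1, n):
            ya, yb = series[a], series[b]
            if all(series[c] < ya + (yb - ya) * (c - a) / (b - a)
                   for c in range(a + 1, b)):
                edges.add((a, b))
    return edges
```

Once the series is a graph, clustering, k-nearest-neighbour classification and community-based periodicity detection can all be phrased as network operations.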
Los estilos APA, Harvard, Vancouver, ISO, etc.
18

MORENO, Bruno Neiva. "Representação e análise de encontros espaço-temporais publicados em redes sociais online". Universidade Federal de Pernambuco, 2016. https://repositorio.ufpe.br/handle/123456789/18621.

Texto completo
Resumen
O crescente uso de redes sociais online tem feito com que usuários compartilhem, também, informações detalhadas a respeito dos locais que os mesmos frequentam, criando uma ligação entre o mundo físico (o movimento destes usuários no globo) e o mundo virtual (o que eles expressam sobre esses movimentos nas redes). O “check-in” é a funcionalidade responsável pelo compartilhamento da localização. Em uma rede social com essa funcionalidade, qualquer usuário pode publicar o local em que o mesmo está em determinado instante de tempo. Esta tese apresenta novas abordagens de análise de redes sociais online considerando as dimensões social, espacial e temporal que são inerentes à publicação de check-ins de usuários. As informações sociais, espaciais e temporais são definidas sob a perspectiva de encontros de usuários, sendo este o objeto de estudo dessa tese. Encontros ocorrem quando duas pessoas (dimensão social), estão em algum local (dimensão espacial), em determinado instante de tempo (dimensão temporal) e decidem publicar esse encontro através de check-ins. Além de apresentar um algoritmo para detecção de encontros, é definido um modelo para representação desses encontros. Este modelo é chamado de SiST (do inglês, SocIal, Spatial and Temporal) e modela encontros por meio de redes complexas. Para validar o modelo proposto, foram utilizados dados reais de redes sociais online. Com esses dados, os encontros foram detectados e analisados sob diferentes perspectivas com o objetivo de investigar a existência de alguma lei que governe a publicação dos mesmos, bem como para identificar padrões relativos a sua ocorrência, como padrões temporais, por exemplo. Além disso, as redes construídas a partir do modelo SiST também foram analisadas em termos de suas propriedades estruturais e topológicas. 
Por meio de redes SiST também foram estudados padrões de movimentação de usuários, como situações em que usuários se movimentam em grupo no globo ou situações em que um usuário é seguido por outros.
The growing use of online social networks has led users to share detailed information about the places they visit, resulting in a clear connection between the physical world (i.e. the movement of these users on the globe) and the virtual world (what they express about these movements in the social network). The functionality responsible for sharing a user's location is called a “check-in”. In a social network with this feature, any user can publish the places they visit. This thesis presents new approaches for online social network analysis considering the social, spatial and temporal dimensions that are implicit in the publication of users' check-ins. Social, spatial and temporal information is defined from the perspective of “user encounters”, which are the study object of this thesis. User encounters occur when two people (social dimension) are somewhere (spatial dimension) at a given time (temporal dimension) and decide to publish this meeting through check-ins. In addition to an algorithm for encounter detection, we define a model for the representation of these encounters, called SiST (SocIal, Spatial and Temporal). The SiST model represents encounters as a graph structure. To validate the proposed approach, we used real data from online social networks. With these data, user encounters were detected and analyzed from different perspectives, aiming to investigate the existence of any law governing the publication of encounters and to identify patterns related to their occurrence, such as temporal patterns. Furthermore, the graphs built from the SiST model were analyzed in terms of their structural and topological properties. Through the SiST networks, user movement patterns were also studied, such as situations in which users move in groups or are followed by other users.
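The encounter detection step can be pictured as a spatio-temporal join over check-ins: two distinct users at the same place within a time window form an encounter edge. The function name and the fixed one-hour window below are illustrative assumptions, not the thesis's algorithm:

```python
from collections import defaultdict


def detect_encounters(checkins, window=3600):
    """checkins: iterable of (user, place, timestamp) tuples.
    Returns the set of user pairs who checked in at the same place
    within `window` seconds of each other -- a toy version of the
    spatio-temporal matching behind a SiST-style graph."""
    by_place = defaultdict(list)
    for user, place, ts in checkins:
        by_place[place].append((user, ts))
    encounters = set()
    for visits in by_place.values():
        for i, (u, tu) in enumerate(visits):
            for v, tv in visits[i + 1:]:
                if u != v and abs(tu - tv) <= window:
                    encounters.add((min(u, v), max(u, v)))
    return encounters
```

The resulting pairs are exactly the edges of an encounter graph, on which structural and topological properties can then be computed.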
Los estilos APA, Harvard, Vancouver, ISO, etc.
19

Serrà, Julià Joan. "Identification of versions of the same musical composition by processing audio descriptions". Doctoral thesis, Universitat Pompeu Fabra, 2011. http://hdl.handle.net/10803/22674.

Texto completo
Resumen
This work focuses on the automatic identification of musical piece versions (alternate renditions of the same musical composition like cover songs, live recordings, remixes, etc.). In particular, we propose two core approaches for version identification: model-free and model-based ones. Furthermore, we introduce the use of post-processing strategies to improve the identification of versions. For all that we employ nonlinear signal analysis tools and concepts, complex networks, and time series models. Overall, our work brings automatic version identification to an unprecedented stage where high accuracies are achieved and, at the same time, explores promising directions for future research. Although our steps are guided by the nature of the considered signals (music recordings) and the characteristics of the task at hand (version identification), we believe our methodology can be easily transferred to other contexts and domains.
Aquest treball es centra en la identificació automàtica de versions musicals (interpretacions alternatives d'una mateixa composició: 'covers', directes, remixos, etc.). En concret, proposem dos tipus d'estratègies: la lliure de model i la basada en models. També introduïm tècniques de post-processat per tal de millorar la identificació de versions. Per fer tot això emprem conceptes relacionats amb l'anàlisi no lineal de senyals, xarxes complexes i models de sèries temporals. En general, el nostre treball porta la identificació automàtica de versions a un estadi sense precedents on s'obtenen bons resultats i, al mateix temps, explora noves direccions de futur. Malgrat que els passos que seguim estan guiats per la natura dels senyals involucrats (enregistraments musicals) i les característiques de la tasca que volem solucionar (identificació de versions), creiem que la nostra metodologia es pot transferir fàcilment a altres àmbits i contextos.
Los estilos APA, Harvard, Vancouver, ISO, etc.
20

El, Assaad Hani. "Modélisation et classification dynamique de données temporelles non stationnaires". Thesis, Paris Est, 2014. http://www.theses.fr/2014PEST1162/document.

Texto completo
Resumen
Cette thèse aborde la problématique de la classification non supervisée de données lorsque les caractéristiques des classes sont susceptibles d'évoluer au cours du temps. On parlera également, dans ce cas, de classification dynamique de données temporelles non stationnaires. Le cadre applicatif des travaux concerne le diagnostic par reconnaissance des formes de systèmes complexes dynamiques dont les classes de fonctionnement peuvent, suite à des phénomènes d'usures, des déréglages progressifs ou des contextes d'exploitation variables, évoluer au cours du temps. Un modèle probabiliste dynamique, fondé à la fois sur les mélanges de lois et sur les modèles dynamiques à espace d'état, a ainsi été proposé. Compte tenu de la structure complexe de ce modèle, une variante variationnelle de l'algorithme EM a été proposée pour l'apprentissage de ses paramètres. Dans la perspective du traitement rapide de flux de données, une version séquentielle de cet algorithme a également été développée, ainsi qu'une stratégie de choix dynamique du nombre de classes. Une série d'expérimentations menées sur des données simulées et des données réelles acquises sur le système d'aiguillage des trains a permis d'évaluer le potentiel des approches proposées
Nowadays, diagnosis and monitoring for the predictive maintenance of railway components are key subjects for both operators and manufacturers. They seek to anticipate upcoming maintenance actions, reduce maintenance costs and increase the availability of the rail network. In order to maintain the components at a satisfactory level of operation, a reliable diagnostic strategy is required. In this thesis, we are interested in a main component of railway infrastructure, the railway switch: an important safety device whose failure could heavily impact the availability of the transportation system. The diagnosis of this system is therefore essential and can be done by exploiting sequential measurements acquired successively while the state of the system evolves over time. These measurements consist of power consumption curves acquired during several switch operations. The shape of these curves is indicative of the operating state of the system. The aim is to track the temporal evolution of the railway component's state under different operating contexts by analyzing these data, in order to detect and diagnose problems that may lead to failure. This thesis tackles the problem of temporal data clustering within the broader context of developing innovative tools and decision-aid methods. We propose a new dynamic probabilistic approach within a temporal data clustering framework. This approach is based on both Gaussian mixture models and state-space models. The main challenge facing this work is the estimation of the model parameters associated with this approach, because of its complex structure. To meet this challenge, a variational approach has been developed. The results obtained on both synthetic and real data highlight the advantage of the proposed algorithms compared to other state-of-the-art methods in terms of clustering and estimation accuracy.
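The core intuition of dynamic clustering of non-stationary classes can be conveyed with a heavily simplified, hypothetical sketch: assign each new observation to the nearest class mean, then nudge that mean toward the observation so slowly drifting classes are tracked over time. The actual model is a Gaussian mixture with a state-space prior estimated by variational EM; this toy rule only illustrates the sequential tracking idea.

```python
def assign_and_update(means, x, lr=0.2):
    """One sequential step of a toy dynamic clustering rule (1-D):
    assign observation x to the nearest class mean, then move that
    mean toward x so the class can follow gradual drift."""
    k = min(range(len(means)), key=lambda j: abs(x - means[j]))
    means[k] += lr * (x - means[k])
    return k
```

Fed a stream of switch-operation features, the means would follow wear-induced drift instead of staying frozen at their initial positions.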
Los estilos APA, Harvard, Vancouver, ISO, etc.
21

Renz, Matthias [Verfasser]. "Enhanced query processing on complex spatial and temporal data / vorgelegt von Matthias Renz". 2006. http://d-nb.info/982631820/34.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
22

Menninghaus, Mathias. "Automated Performance Test Generation and Comparison for Complex Data Structures - Exemplified on High-Dimensional Spatio-Temporal Indices". Doctoral thesis, 2018. https://repositorium.ub.uni-osnabrueck.de/handle/urn:nbn:de:gbv:700-20180823528.

Texto completo
Resumen
There exist numerous approaches to index either spatio-temporal or high-dimensional data, but none of them can efficiently index hybrid data, i.e. data that is both spatio-temporal and high-dimensional. As the best high-dimensional indexing techniques can only index point data, and not now-relative data, while the best spatio-temporal indexing techniques suffer from the curse of dimensionality, this thesis introduces the Spatio-Temporal Pyramid Adapter (STPA). The STPA maps spatio-temporal data onto points, maps now-values onto the median of the data set, and indexes them with the pyramid technique. For high-dimensional and spatio-temporal index structures, no generally accepted benchmark exists. Most index structures are evaluated only with custom benchmarks and compared to a tiny set of competitors. Benchmarks may be biased, as a structure may be created to perform well in a certain benchmark, or a benchmark may not cover a certain speciality of the investigated structures. In this thesis, the Interface Based Performance Comparison (IBPC) technique is introduced. It automatically generates test sets with high code coverage on the system under test (SUT) on the basis of all functions defined by a certain interface which all competitors support. Every test set is run on every SUT and the performance results are weighted by the achieved coverage and summed up. These weighted performance results are then used to compare the structures. An implementation of the IBPC, the Performance Test Automation Framework (PTAF), is compared to a classic custom benchmark, a workload generator whose parameters are optimized by a genetic algorithm, and a specific PTAF alternative which incorporates the specific behavior of the systems under test. This is done for a set of two high-dimensional spatio-temporal indices and twelve variants of the R-tree. The evaluation indicates that PTAF performs at least as well as the other approaches in terms of minimal test cases with maximized coverage.
Several case studies on PTAF demonstrate its broad applicability.
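The pyramid technique the STPA builds on maps a point of the unit hypercube to a single value: the number of the pyramid containing the point plus the point's height within that pyramid. A minimal sketch of that mapping, assuming the data are already normalized to [0,1]^d (this illustrates the classic pyramid mapping, not the STPA's full adapter logic):

```python
def pyramid_value(point):
    """Pyramid-technique mapping of a point in [0,1]^d to one value:
    pyramid number (which axis deviates most from the centre, and on
    which side) plus the height |coordinate - 0.5| within the pyramid."""
    d = len(point)
    # axis along which the point deviates most from the centre 0.5
    jmax = max(range(d), key=lambda j: abs(point[j] - 0.5))
    height = abs(point[jmax] - 0.5)
    i = jmax if point[jmax] < 0.5 else jmax + d
    return i + height
```

The scalar values can then be indexed with any one-dimensional structure (e.g. a B+-tree), which is what makes the technique attractive for high-dimensional data.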
Los estilos APA, Harvard, Vancouver, ISO, etc.
23

Policarpio, Sean R. "An answer set programming based formal language for complex XML authorisations with temporal constraints". Thesis, 2011. http://handle.uws.edu.au:8081/1959.7/506715.

Texto completo
Resumen
The Extensible Markup Language (XML) has widely become the de facto method for encoding stored and shared computer data. Many of today's Internet applications use XML for the exchange of information. In many cases, information stored in XML can be regarded as sensitive or private (i.e. personal, financial, or otherwise classified information). For this reason there is an obvious need to ensure that such information is protected by a method of security or access control. In this thesis we investigate and present such a method, introducing a formal language that can provide an authorisation framework for XML documents. Building on the highly regarded Role-Based Access Control (RBAC) model, we designed a formal language of authorisation for XML documents. With the inherent features of the RBAC model (such as subject- and role-based structuring, authorisation delegation and propagation, conflict resolution, and separation of duty), we developed Axml(T), a formal language capable of specifying a queryable security policy base. Beyond this, we furthered its expressive capabilities by incorporating temporal logic. This gives Axml(T) the ability to specify and reason about access control temporally, something rarely implemented in authorisation languages. For the foundation and semantics of Axml(T), we turned to a relatively new form of declarative programming commonly used in knowledge representation and logic programming. Answer Set Programming (ASP) provides Axml(T) with a semantic definition and translation so that we can treat our security policy base as a logic program. This logic program translation is reasoned upon to produce an answer set (stable model) which dictates the authorisations to XML documents.
As well as describing and presenting this formal language, we also produced a software implementation to demonstrate its use and features. Using case studies, we show the level of complexity that can be accomplished by using Axml(T) to specify access control to XML documents. Finally, we present further extensions and theories that increase the language's capabilities and expressiveness and further differentiate it from other research in XML security. These extensions, such as query containment and aggregates, increase the complexity with which Axml(T) can specify authorisation to XML documents. We formally define these extensions in Axml(T) and demonstrate them through further examples.
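Axml(T) itself is defined through Answer Set Programming; purely for illustration, the flavour of a temporally constrained RBAC authorisation check can be sketched in procedural form. All names, the interval-based permission format, and the path-string document identifiers below are hypothetical, not Axml(T) syntax:

```python
def authorised(user, action, doc, t, user_roles, role_perms):
    """Toy RBAC check in the spirit of a temporal authorisation language:
    a user may perform an action on an XML document node if some role
    assigned to the user grants that action on the node during a time
    interval containing t."""
    for role in user_roles.get(user, ()):
        for perm_action, node, start, end in role_perms.get(role, ()):
            if perm_action == action and node == doc and start <= t <= end:
                return True
    return False
```

In the ASP setting the same rule would be declarative, and the stable model of the program would enumerate every authorised (user, action, node, time) combination at once rather than answering one query at a time.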
Los estilos APA, Harvard, Vancouver, ISO, etc.
24

MINGIONE, MARCO. "On the wide applicability of Bayesian hierarchical models". Doctoral thesis, 2022. http://hdl.handle.net/11573/1613592.

Texto completo
Resumen
This dissertation gathers the main research topics I engaged with during the past four years, in collaboration with several national and international researchers from ``La Sapienza'' and other universities. The primary focus is the application of Bayesian hierarchical models to phenomena in several domains, such as economics, environmental health, and epidemiology. One common thread is the attention to fast implementation and the interpretability of results. Typically, these two goals are difficult to achieve simultaneously in the Bayesian setting, for two main reasons: on the one hand, fast implementation of Bayesian machinery requires an oversimplification of the modeling structure, which does not necessarily reflect the complexity of the analyzed phenomenon; on the other hand, if the estimation of complex models is sought, parameter interpretation may not be straightforward, especially when intricate dependence structures are present. The reader should be aware that all the presented applications, with their solutions, stemmed from these premises. The first chapter of this dissertation introduces, from a conceptual perspective, the advantages of adopting the hierarchical paradigm for model formulation. Following this conceptual introduction, the second chapter delves into the technical aspects of hierarchical model formulation and estimation. Far from being exhaustive, it provides the essential ingredients for a thorough understanding of their theoretical foundations and optimal implementation. These first two chapters pave the road for the four original developments presented thereafter. In particular, the third chapter describes a new statistical protocol for variable selection within a Beta regression model for the estimation of food loss percentages at the country-commodity level.
The work was carried out in collaboration with the Food and Agriculture Organization of the United Nations; it started in 2017 with my Master's thesis and led to the recent publication by Mingione et al. (2021a). The fourth chapter includes an extended version of the work developed during my visiting research period at the University of California, Los Angeles. It describes a modeling framework for the fast estimation of temporal Gaussian processes in the presence of biometric data sampled at high frequency. Nowadays, such data are easily collected using new non-invasive wearable devices (e.g., accelerometers) and generate substantial interest in monitoring human activity. The work is currently under review and is available as a pre-print in Alaimo Di Loro et al. (2021a). The fifth chapter presents two modeling proposals to estimate epidemiological incidence indicators, typically collected during an epidemic for surveillance purposes. The methodology was applied to publicly available Italian data for the monitoring of the COVID-19 epidemic. Both proposals consider probability distributions coherent with the nature of the data, which are counts, and adopt a generalized logistic function for the parametrization of the mean term. The second proposal additionally allows for a latent component accounting for dependence among geographical units. Note that, in the first work, by Alaimo Di Loro et al. (2021b), inference is pursued under a likelihood-based framework. This work further highlights the advantages of using a Bayesian approach, as subsequently described by Mingione et al. (2021b). The last chapter summarizes the main points of the dissertation, underlining the most relevant findings and original contributions, and stressing how Bayesian hierarchical models altogether yield a cohesive treatment of many issues.
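The generalized logistic mean term used for the epidemic count models can be sketched with a Richards curve for cumulative counts, whose day-to-day increments serve as the expected incidence. Parameter names below are the conventional ones (carrying capacity K, growth rate r, inflection time h, asymmetry p), not necessarily those of the cited papers:

```python
import math


def richards_mean(t, K, r, h, p):
    """Generalised (Richards) logistic curve for cumulative counts.
    With p = 1 it reduces to the standard logistic function."""
    return K / (1.0 + p * math.exp(-r * (t - h))) ** (1.0 / p)


def expected_incidence(t, K, r, h, p):
    """Expected new counts on day t: increment of the cumulative curve."""
    return richards_mean(t, K, r, h, p) - richards_mean(t - 1, K, r, h, p)
```

In a hierarchical formulation, daily counts would then be modelled as, e.g., Poisson or negative binomial with this mean, with the curve parameters given priors and possibly a latent spatial component.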
25

Wang, Wen-Jing. "Channel adaptive transmission of big data: a complete temporal characterization and its application". Thesis, 2018. https://dspace.library.uvic.ca//handle/1828/10405.

Full text
Abstract
We investigate the statistics of the transmission time of wireless systems employing adaptive transmission. Unlike traditional transmission systems, where the transmission time of a fixed amount of data is typically regarded as a constant, the transmission time of adaptive transmission systems becomes a random variable, as the transmission rate varies with the fading channel condition. To facilitate the design and optimization of wireless transmission schemes, we present an analytical framework to determine statistical characterizations of the transmission time with adaptive transmission. In particular, we derive the exact statistics of transmission time over block fading channels. The probability mass function (PMF) and cumulative distribution function (CDF) of the transmission time are obtained for both slow and fast fading scenarios. We further extend our analysis to Markov channels, where the transmission time becomes a sequence of exponentially distributed random-length time slots. Analytical expressions for the probability density function (PDF) of the transmission time are derived for both fast and slow fading scenarios. Since energy consumption can be characterized as the product of power consumption and transmission time, we also evaluate the energy consumption of wireless systems with adaptive transmission. Cognitive radio communication can opportunistically access underutilized spectrum for emerging wireless applications. In the interweave cognitive implementation, a secondary user (SU) transmits only if a primary user does not occupy the channel and waits for transmission otherwise. Therefore, secondary packet transmission involves both transmission and waiting periods. The resulting extended delivery time (EDT) is critical to the throughput analysis of the secondary system.
With the statistical results of transmission time, we derive the PDF of EDT considering random-length SU transmission and waiting periods for continuous spectrum sensing and semi-periodic spectrum sensing. Taking spectrum sensing errors into account, we propose a discrete Markov chain modeling slotted secondary transmission coupled with periodic spectrum sensing. Markov modeling is applied to energy efficiency optimization and queuing performance evaluation.
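The core idea above, that transmission time becomes a random variable under rate adaptation, can be illustrated with a small Monte Carlo sketch: over a block fading channel, the number of blocks needed to deliver a fixed amount of data varies from run to run, and its PMF can be estimated empirically. The Rayleigh fading model, Shannon-rate adaptation, and all parameter values below are illustrative assumptions, not taken from the thesis.

```python
import math
import random

def blocks_to_finish(data_bits, block_sec, bandwidth_hz, avg_snr, rng):
    """Count fading blocks needed to deliver data_bits with rate adaptation.

    Each block draws an independent SNR (exponential, i.e., Rayleigh fading
    power with mean avg_snr) and transmits at the Shannon rate
    B * log2(1 + snr) for block_sec seconds.
    """
    sent, n = 0.0, 0
    while sent < data_bits:
        snr = rng.expovariate(1.0 / avg_snr)          # Rayleigh power fading
        sent += bandwidth_hz * math.log2(1.0 + snr) * block_sec
        n += 1
    return n

def empirical_pmf(trials, **kw):
    """Estimate the PMF of the (integer) number of blocks by simulation."""
    rng = random.Random(42)
    counts = {}
    for _ in range(trials):
        n = blocks_to_finish(rng=rng, **kw)
        counts[n] = counts.get(n, 0) + 1
    return {k: v / trials for k, v in sorted(counts.items())}
```

The spread of the resulting PMF makes concrete why a fixed-delay assumption fails for adaptive systems; the thesis derives such distributions analytically rather than by simulation.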
Graduate
