Theses on the topic "Change point and trend detection"

Follow this link to see other types of publications on the topic: Change point and trend detection.

Create an accurate citation in APA, MLA, Chicago, Harvard and other styles


Consult the top 50 theses for your research on the topic "Change point and trend detection".

Next to each source in the list of references there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Explore theses on a wide variety of disciplines and organize your bibliography correctly.

1

Petersson, David and Emil Backman. "Change Point Detection and Kernel Ridge Regression for Trend Analysis on Financial Data". Thesis, KTH, Skolan för teknikvetenskap (SCI), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-230729.

Full text
Abstract
The investing market can be a cold, ruthless place for the layman. To have a chance of making money in this business one must spend countless hours on research, with many different parameters to handle in order to reach success. To reduce risk, one must look to many different companies operating in multiple fields and industries. In other words, it can be a hard feat to manage. With modern technology, there is now great potential to handle this tedious analysis autonomously using machine learning and clever algorithms. With this approach, the number of analyses is limited only by the capacity of the computer, resulting in a number far greater than if done by hand. This study aims at exploring the possibilities of modifying and implementing efficient algorithms in the field of finance. The study utilizes the power of kernel methods to analyze the patterns found in financial data algorithmically and efficiently. By combining the powerful tools of change point detection and nonlinear regression, the computer can classify the different trends and moods in the market. The study culminates in a tool for analyzing data from the stock market in a way that minimizes the influence of short spikes and drops and is instead driven by the underlying pattern, together with an additional tool for predicting future movements in the price.
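The regression half of this combination is easy to make concrete. Below is a minimal numpy sketch of kernel ridge regression with an RBF kernel; the kernel choice, bandwidth and ridge strength are illustrative assumptions, not the authors' settings:

```python
import numpy as np

def rbf_gram(s, t, bandwidth):
    # Gram matrix K[i, j] = exp(-(s_i - t_j)^2 / (2 * bandwidth^2))
    d = s[:, None] - t[None, :]
    return np.exp(-0.5 * (d / bandwidth) ** 2)

def kernel_ridge_trend(times, prices, bandwidth=5.0, ridge=1e-2):
    # Solve (K + ridge * I) alpha = y and return the fitted trend K @ alpha.
    # The ridge term damps the influence of short spikes and drops.
    K = rbf_gram(times, times, bandwidth)
    alpha = np.linalg.solve(K + ridge * np.eye(len(times)), prices)
    return K @ alpha

# Toy usage: a noisy linear trend with one spike the fit should smooth over.
t = np.arange(100.0)
y = 0.05 * t + np.random.normal(0.0, 0.2, 100)
y[50] += 3.0
trend = kernel_ridge_trend(t, y)
```

A change-point procedure would then be run on the fitted trend rather than on the raw prices, which is the division of labour the abstract describes.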
2

Gao, Zhenguo. "Variance Change Point Detection under A Smoothly-changing Mean Trend with Application to Liver Procurement". Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/82351.

Full text
Abstract
Literature on change point analysis mostly requires a sudden change in the data distribution, either in a few parameters or in the distribution as a whole. We are interested in the scenario where the variance of the data may make a significant jump while the mean changes in a smooth fashion. It is motivated by a liver procurement experiment with organ surface temperature monitoring. Blindly applying existing change point analysis methods to this example can yield erratic change point estimates, since the smoothly-changing mean violates the sudden-change assumption. In my dissertation, we propose a penalized weighted least squares approach with an iterative estimation procedure that naturally integrates variance change point detection and smooth mean function estimation. Given the variance components, the mean function is estimated by smoothing splines as the minimizer of the penalized weighted least squares. Given the mean function, we propose a likelihood ratio test statistic for identifying the variance change point. The null distribution of the test statistic is derived together with the rates of convergence of all the parameter estimates. Simulations show excellent performance of the proposed method. The application analysis offers numerical support for non-invasive organ viability assessment by surface temperature monitoring. The method above can only yield the variance change point of temperature at a single point on the surface of the organ at a time. In practice, an organ is often transplanted as a whole or in part. Therefore, it is generally of more interest to study the variance change point for a chunk of the organ. With this motivation, we extend our method to study the variance change point for a chunk of the organ surface. Now the variances become functions on a 2D space of locations (longitude and latitude) and the mean is a function on a 3D space of location and time. We model the variance functions by thin-plate splines and the mean function by the tensor product of thin-plate splines and cubic splines. However, the additional dimensions in these functions incur serious computational problems, since the sample size, as a product of the number of locations and the number of sampling time points, becomes too large to run standard multi-dimensional spline models. To overcome the computational hurdle, we introduce a multi-stage subsampling strategy into our modified iterative algorithm. The strategy involves several down-sampling or subsampling steps informed by preliminary statistical measures. We carry out extensive simulations to show that the new method can efficiently cut down the computational cost and make a practically unsolvable problem solvable within reasonable time and with satisfactory parameter estimates. Application of the new method to the liver surface temperature monitoring data shows its effectiveness in providing accurate status change information for a portion of, or the whole, organ.
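The central idea (remove a smoothly-changing mean, then locate a single variance jump in the residuals) can be sketched compactly. The moving-average mean below is a crude stand-in for the penalized smoothing-spline estimate of the dissertation, and the margin parameter is an assumption of the example:

```python
import numpy as np

def smooth_mean(y, width=15):
    # Crude stand-in for the spline mean: a centered moving average.
    kernel = np.ones(width) / width
    return np.convolve(y, kernel, mode="same")

def variance_change_point(y, width=15, margin=10):
    # Profile the Gaussian log-likelihood of a single variance jump in the
    # residuals over all admissible split points k.
    r = y - smooth_mean(y, width)
    n = len(r)
    best_k, best_ll = None, -np.inf
    for k in range(margin, n - margin):
        v1, v2 = np.var(r[:k]), np.var(r[k:])
        ll = -0.5 * (k * np.log(v1) + (n - k) * np.log(v2))
        if ll > best_ll:
            best_k, best_ll = k, ll
    return best_k
```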
Ph. D.
3

Hedberg, Sofia. "Regional Quantification of Climatic and Anthropogenic Impacts on Streamflows in Sweden". Thesis, Uppsala universitet, Institutionen för geovetenskaper, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-269824.

Full text
Abstract
The anthropogenic impact on earth's systems has rapidly increased since the middle of the last century, and today it is hard to find a stream that is not influenced by human activities. Understanding the causes of changes is important knowledge for future water management and planning, and for that reason climatic and anthropogenic impacts on streamflow changes in Sweden were explored and quantified. In the first step, trends and abrupt changes in annual streamflow were detected and verified with the non-parametric Mann-Kendall and Pettitt tests, all performed as moving window tests. In the second step, HBV, a climate-driven rainfall-runoff model, was used to attribute the causes of the detected changes. Detection and attribution of changes were performed on several catchments in order to investigate regional patterns. Smaller window sizes yielded periods with alternating positive and negative trends, whereas bigger window sizes resulted in positive trends in more than half of the catchments and almost no negative trends. The detected changes were highly dependent on the investigated time frame, due to periodicity, i.e. natural variability in streamflow. In general the anthropogenic impact on streamflow changes was smaller than the changes due to precipitation and temperature; in median, anthropogenic impact could explain 7% of the total change. No regional differences were found, which indicates that anthropogenic impact varies more between individual catchments than following a regional pattern.
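The windowed trend-detection step uses the Mann-Kendall test; a minimal sketch follows (normal approximation, tie correction omitted, and the window size is a placeholder):

```python
import numpy as np

def mann_kendall_z(x):
    # S statistic: sum of signs of all pairwise forward differences.
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0  # Var(S), ignoring ties
    if s > 0:
        return (s - 1) / np.sqrt(var_s)
    if s < 0:
        return (s + 1) / np.sqrt(var_s)
    return 0.0

def moving_window_mk(flow, window):
    # Trend z-score in each sliding window of annual streamflow.
    return [mann_kendall_z(flow[i:i + window])
            for i in range(len(flow) - window + 1)]
```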
4

Jawa, Taghreed Mohammed. "Statistical methods of detecting change points for the trend of count data". Thesis, University of Strathclyde, 2017. http://digitool.lib.strath.ac.uk:80/R/?func=dbin-jump-full&object_id=28854.

Full text
Abstract
In epidemiology, controlling infection is a crucial element. Since healthcare associated infections (HAIs) are correlated with increasing costs and mortality rates, effective healthcare interventions are required. Several healthcare interventions have been implemented in Scotland, and subsequently Health Protection Scotland (HPS) reported a reduction in HAIs [HPS (2015b, 2016a)]. The aim of this thesis is to use statistical methods and change-point analysis to detect the time when the rate of HAIs changed and to determine which associated interventions may have impacted such rates. Change points are estimated from polynomial generalized linear models (GLM), and confidence intervals are constructed using bootstrap and delta methods; the two techniques are compared. Segmented regression is also used to look for change points at times when specific interventions took place. A generalization of segmented regression, known as joinpoint analysis, looks for potential change points at each time point in the data, which allows the change to have occurred at any point over time. The joinpoint model is adjusted by adding a seasonal effect to account for additional variability in the rates. Confidence intervals for joinpoints are constructed using bootstrap and profile likelihood methods, and the two approaches are compared. Change points from the smoother trend of the generalized additive model (GAM) are also estimated, and bootstrapping is used to construct confidence intervals. All methods were found to give similar change points. Segmented regression detects the actual point when an intervention took place. Polynomial GLM, spline GAM and joinpoint analysis models are useful when the impact of an intervention occurs after a period of time. Simulation studies are used to compare polynomial GLM, segmented regression and joinpoint analysis models for detecting change points, along with their confidence intervals.
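A toy version of the segmented-regression idea is to profile a single breakpoint of a piecewise-linear predictor. Least squares on log counts below is a rough stand-in for the Poisson GLM fitting the thesis uses, so treat this purely as an illustration:

```python
import numpy as np

def segmented_fit(t, counts, breakpoint):
    # Piecewise-linear predictor with a kink at `breakpoint`, fitted by
    # least squares on log counts (a crude approximation to a Poisson GLM).
    X = np.column_stack([np.ones_like(t), t, np.maximum(t - breakpoint, 0.0)])
    y = np.log(counts + 0.5)
    beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    return beta, np.sum((y - X @ beta) ** 2)

def best_joinpoint(t, counts):
    # Profile the residual sum of squares over interior candidate breakpoints.
    candidates = t[2:-2]
    sses = [segmented_fit(t, counts, c)[1] for c in candidates]
    return candidates[int(np.argmin(sses))]
```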
5

Garreau, Damien. "Change-point detection and kernel methods". Thesis, Paris Sciences et Lettres (ComUE), 2017. http://www.theses.fr/2017PSLEE061/document.

Full text
Abstract
In this thesis, we focus on a method for detecting abrupt changes in a sequence of independent observations belonging to an arbitrary set on which a positive semidefinite kernel is defined. That method, kernel change-point detection, is a kernelized version of a penalized least-squares procedure. Our main contribution is to show that, for any kernel satisfying some reasonably mild hypotheses, this procedure outputs a segmentation close to the true segmentation with high probability. This result is obtained under a boundedness assumption on the kernel, for a linear penalty and for another penalty function coming from model selection. The proofs rely on a concentration result for bounded random variables in Hilbert spaces, and we prove a less powerful result under relaxed hypotheses (a finite variance assumption). In the asymptotic setting, we show that we recover the minimax rate for the change-point locations without additional hypotheses on the segment sizes. We provide empirical evidence supporting these claims. Another contribution of this thesis is a detailed presentation of the different notions of distance between segmentations. Additionally, we prove a result showing that these different notions coincide for sufficiently close segmentations. From a practical point of view, we demonstrate how the so-called dimension jump heuristic can be a reasonable choice of penalty constant when using kernel change-point detection with a linear penalty. We also show how a key quantity depending on the kernel, which appears in our theoretical results, influences the performance of kernel change-point detection in the case of a single change-point. When the kernel is translation-invariant and parametric assumptions are made, it is possible to compute this quantity in closed form. Thanks to these computations, some of them novel, we are able to study precisely the behavior of the maximal penalty constant. Finally, we study the median heuristic, a popular tool to set the bandwidth of radial basis function kernels. For a large sample size, we show that it behaves approximately as the median of a distribution that we describe completely in the setting of the kernel two-sample test and kernel change-point detection. More precisely, we show that the median heuristic is asymptotically normal around this value.
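The median heuristic studied in the final contribution is simple to state concretely; a minimal sketch for Euclidean data (using the median pairwise distance directly as the RBF bandwidth is one common convention, assumed here):

```python
import numpy as np

def median_heuristic_bandwidth(X):
    # Median of all pairwise Euclidean distances between the rows of X.
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    d = np.sqrt(np.maximum(d2, 0.0))
    return np.median(d[np.triu_indices(len(X), k=1)])
```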
6

Niu, Yue S., Ning Hao and Heping Zhang. "Multiple Change-Point Detection: A Selective Overview". INST MATHEMATICAL STATISTICS, 2016. http://hdl.handle.net/10150/622820.

Full text
Abstract
Very long and noisy sequence data arise in fields ranging from the biological sciences to the social sciences, including high-throughput data in genomics and stock prices in econometrics. Often such data are collected in order to identify and understand shifts in trends, for example from a bull market to a bear market in finance, or from a normal number of chromosome copies to an excessive number of chromosome copies in genetics. Thus, identifying multiple change points in a long, possibly very long, sequence is an important problem. In this article, we review both classical and new multiple change-point detection strategies. Considering the long history and the extensive literature on change-point detection, we provide an in-depth discussion of a normal mean change-point model from the aspects of regression analysis, hypothesis testing, consistency and inference. In particular, we present a strategy to gather and aggregate local information for change-point detection that has become the cornerstone of several emerging methods because of its attractive computational and theoretical properties.
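One classical strategy reviewed for the normal mean change-point model is CUSUM-based binary segmentation; a minimal sketch (unit variance assumed, threshold left to the user):

```python
import numpy as np

def cusum_stat(x):
    # max_k |S_k - (k/n) S_n| / sqrt(k (n - k) / n) for a single mean shift.
    n = len(x)
    k = np.arange(1, n)
    partial = np.cumsum(x)[:-1]
    stat = np.abs(partial - k * np.sum(x) / n) / np.sqrt(k * (n - k) / n)
    j = int(np.argmax(stat))
    return j + 1, stat[j]

def binary_segmentation(x, threshold, offset=0, found=None):
    # Split recursively wherever the CUSUM statistic exceeds the threshold.
    found = [] if found is None else found
    if len(x) < 4:
        return found
    k, stat = cusum_stat(x)
    if stat > threshold:
        found.append(offset + k)
        binary_segmentation(x[:k], threshold, offset, found)
        binary_segmentation(x[k:], threshold, offset + k, found)
    return sorted(found)
```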
7

Yang, Ping. "Adaptive trend change detection and pattern recognition in physiological monitoring". Thesis, University of British Columbia, 2009. http://hdl.handle.net/2429/8932.

Full text
Abstract
Advances in monitoring technology have resulted in the collection of a vast amount of data that exceeds the simultaneous surveillance capabilities of expert clinicians in the clinical environment. To facilitate the clinical decision-making process, this thesis solves two fundamental problems in physiological monitoring: signal estimation and trend-pattern recognition. The general approach is to transform changes in different trend features into nonzero level shifts by calculating model-based forecast residuals, and then to apply a statistical test or Bayesian approach to the residuals to detect changes. The EWMA-Cusum method describes a signal as the exponentially weighted moving average (EWMA) of historical data. This method is simple, robust, and applicable to most variables. The method based on the Dynamic Linear Model (referred to as the Adaptive-DLM method) describes a signal using the linear growth model combined with an EWMA model. An adaptive Kalman filter is used to estimate the second-order characteristics and adjust the change-detection process online. The Adaptive-DLM method is designed for monitoring variables measured at a high sampling rate. To address the intraoperative variability in variables measured at a low sampling rate, a generalized hidden Markov model is used to classify trend changes into different patterns and to describe the transition between these patterns as a first-order Markov-chain process. Trend patterns are recognized online with a quantitative evaluation of the occurrence probability. In addition to the univariate methods, a test statistic based on factor analysis is also proposed to investigate the inter-variable relationship and to reveal subtle clinical events. A novel hybrid median filter is also proposed to fuse heart-rate measurements from the ECG monitor, pulse oximeter, and arterial BP monitor to obtain accurate estimates of HR in the presence of artifacts. These methods have been tested using simulated and clinical data. The EWMA-Cusum and Adaptive-DLM methods have been implemented in a software system, iAssist, and evaluated by clinicians in the operating room. The results demonstrate that the proposed methods can effectively detect trend changes and assist clinicians in tracking the physiological state of a patient during surgery.
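The EWMA-Cusum idea described above can be sketched in a few lines: forecast each value by an EWMA of its past, then run a two-sided CUSUM on the standardized forecast residuals. The smoothing, drift and threshold values here are illustrative assumptions, not the thesis settings:

```python
import numpy as np

def ewma_cusum(signal, alpha=0.2, drift=0.5, threshold=5.0):
    # Two-sided CUSUM on standardized EWMA forecast residuals.
    ewma, gp, gm, alarms = signal[0], 0.0, 0.0, []
    scale = np.std(np.diff(signal)) + 1e-9  # rough residual scale
    for i in range(1, len(signal)):
        r = (signal[i] - ewma) / scale
        gp = max(0.0, gp + r - drift)   # upward trend change
        gm = max(0.0, gm - r - drift)   # downward trend change
        if gp > threshold or gm > threshold:
            alarms.append(i)
            gp = gm = 0.0
        ewma = alpha * signal[i] + (1 - alpha) * ewma
    return alarms
```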
8

Mei, Yajun. "Asymptotically optimal methods for sequential change-point detection". Diss., Pasadena, Calif.: California Institute of Technology, 2003. http://resolver.caltech.edu/CaltechETD:etd-05292003-133431.

Full text
9

Geng, Jun. "Quickest Change-Point Detection with Sampling Right Constraints". Digital WPI, 2015. https://digitalcommons.wpi.edu/etd-dissertations/440.

Full text
Abstract
The quickest change-point detection problems with sampling right constraints are considered. Specifically, an observer sequentially takes observations from a random sequence whose distribution will change at an unknown time. Based on the observation sequence, the observer wants to identify the change-point as quickly as possible. Unlike the classic quickest detection problem, in which the observer can take an observation at each time slot, we impose a causal sampling right constraint on the observer. In particular, sampling rights are consumed when the observer takes an observation and are replenished randomly by a stochastic process. The observer cannot take observations if there are no sampling rights left. The causal sampling right constraint is motivated by several practical applications. For example, in the application of a sensor network for monitoring abrupt changes in its ambient environment, the sensor can only take observations if it has energy left in its battery. With this additional constraint, we design and analyze the optimal detection and sampling right allocation strategies to minimize the detection delay under various problem setups. As one of our main contributions, a greedy sampling right allocation strategy, in which the observer spends sampling rights on observations as long as there are sampling rights left, is proposed. This strategy possesses a low-complexity structure and leads to simple but (asymptotically) optimal detection algorithms for the problems under consideration. Specifically, our main results include: 1) Non-Bayesian quickest change-point detection: we consider the non-Bayesian quickest detection problem with a stochastic sampling right constraint. Two criteria, namely the algorithm-level average run length (ARL) and the system-level ARL, are proposed to control the false alarm rate. We show that the greedy sampling right allocation strategy combined with the cumulative sum (CUSUM) algorithm is optimal for Lorden's setup with the algorithm-level ARL constraint and is asymptotically optimal for both Lorden's and Pollak's setups with the system-level ARL constraint. 2) Bayesian quickest change-point detection: both the limited sampling right constraint and the stochastic sampling right constraint are considered in the Bayesian quickest detection problem. The limited sampling right constraint can be viewed as a special case of the stochastic sampling right constraint with a zero sampling right replenishing rate. The optimal solutions are derived for both sampling right constraints. However, the structure of the optimal solutions is rather complex. For the problem with the limited sampling right constraint, we provide asymptotic upper and lower bounds for the detection delay. For the problem with the stochastic sampling right constraint, we show that the greedy sampling right allocation strategy combined with Shiryaev's detection rule is asymptotically optimal. 3) Quickest change-point detection with unknown post-change parameters: we extend previous results to the quickest detection problem with unknown post-change parameters. Both non-Bayesian and Bayesian setups with stochastic sampling right constraints are considered. For the non-Bayesian problem, we show that the greedy sampling right allocation strategy combined with the M-CUSUM algorithm is asymptotically optimal. For the Bayesian setup, we show that the greedy sampling right allocation strategy combined with the proposed M-Shiryaev algorithm is asymptotically optimal.
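The greedy allocation strategy combined with CUSUM admits a compact sketch. The replenishment sequence and the log-likelihood ratio are left abstract, and the threshold is an assumption of the example:

```python
import numpy as np

def greedy_budget_cusum(samples, llr, replenish, threshold):
    # Greedy strategy: take an observation whenever at least one sampling
    # right is available; rights arrive through the `replenish` sequence.
    # `llr(x)` is the log-likelihood ratio of post- vs pre-change densities.
    budget, g = 0, 0.0
    for t, (x, new_rights) in enumerate(zip(samples, replenish)):
        budget += new_rights
        if budget > 0:
            budget -= 1
            g = max(0.0, g + llr(x))  # CUSUM update on observed samples only
            if g > threshold:
                return t  # alarm time
    return None

# Example llr for a unit-variance Gaussian mean shift from 0 to 1:
# llr = lambda x: x - 0.5
```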
10

Schröder, Anna Louise. "Methods for change-point detection with additional interpretability". Thesis, London School of Economics and Political Science (University of London), 2016. http://etheses.lse.ac.uk/3421/.

Full text
Abstract
The main purpose of this dissertation is to introduce and critically assess some novel statistical methods for change-point detection that help better understand the nature of processes underlying observable time series. First, we advocate the use of change-point detection for local trend estimation in financial return data and propose a new approach developed to capture the oscillatory behaviour of financial returns around piecewise-constant trend functions. The core of the method is a data-adaptive hierarchically-ordered basis of Unbalanced Haar vectors which decomposes the piecewise-constant trend underlying observed daily returns into a binary-tree structure of one-step constant functions. We illustrate how this framework can provide a new perspective for the interpretation of change points in financial returns. Moreover, the approach yields a family of forecasting operators for financial return series which can be adjusted flexibly depending on the forecast horizon or the loss function. Second, we discuss change-point detection under model misspecification, focusing in particular on normally distributed data with changing mean and variance. We argue that ignoring the presence of changes in mean or variance when testing for changes in, respectively, variance or mean, can affect the application of statistical methods negatively. After illustrating the difficulties arising from this kind of model misspecification, we propose a new method to address these using sequential testing on intervals with varying length, and show in a simulation study how this approach compares to competitors in mixed-change situations. The third contribution of this thesis is a data-adaptive procedure to evaluate EEG data, which can improve the understanding of an epileptic seizure recording. This change-point detection method characterizes the evolution of frequency-specific energy as measured on the human scalp. It provides new insights into this high-dimensional high-frequency data and has attractive computational and scalability features. In addition to contrasting our method with existing approaches, we analyse and interpret the method's output in the application to a seizure data set.
11

Björk, Tim. "Exploring Change Point Detection in Network Equipment Logs". Thesis, Karlstads universitet, Institutionen för matematik och datavetenskap (from 2013), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-85626.

Full text
Abstract
Change point detection (CPD) is the method of detecting sudden changes in time series, and it is of great importance for network traffic. With increased knowledge of changes occurring in data logs due to updates in networking equipment, a deeper understanding is possible of the interactions between the updates and the operational resource usage. In a data log that reflects the amount of network traffic, there are large variations in the time series for reasons such as connection count or external changes to the system. Separating these unwanted variation changes from the deliberate ones is a challenge. In this thesis, we utilize data logs retrieved from a network equipment vendor to detect changes, then compare the detected changes to when firmware/signature updates were applied, configuration changes were made, etc., with the goal of achieving a deeper understanding of any interaction between firmware/signature/configuration changes and operational resource usage. Challenges in data quality and data processing are addressed through data manipulation to counteract anomalies and unwanted variation, as well as experimentation with parameters to find the most suitable settings. Results are produced through experiments that test the accuracy of the various change point detection methods and investigate various parameter settings. Through trial and error, a satisfactory configuration is achieved and used in large-scale log detection experiments. The results from the experiments conclude that additional information about how changes in variation arise is required to derive the desired understanding.
12

Shcherbakova, Evgenia and Olga Gogoleva. "On-line change-point detection procedures for Initial Public Offerings". Thesis, Högskolan i Halmstad, Sektionen för Informationsvetenskap, Data– och Elektroteknik (IDE), 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-13940.

Full text
Abstract
In this thesis we investigate the monitoring of stocks that have just been introduced for public trading on the financial market. The empirical distribution of the change-point for 20 assets over 60 days was calculated to check the support for the assumption that the price initially drops or rises to some steady level. The price process $X = \{X_t : t \in \mathbb{Z}\}$ is assumed to be an AR(1) process with a shift in the mean value from a slope to a constant. The Shiryaev-Roberts, Shewhart, EWMA, likelihood ratio and CUSUM procedures for detecting a change-point in such a process are derived. The expected delay of a motivated alarm for these methods is obtained by means of simulations, under the assumption of a Poisson-, uniform-, binomial- or geometrically-distributed change-point.
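Of the procedures listed, the Shiryaev-Roberts recursion is the most compact to state; a minimal sketch with the likelihood ratio and threshold supplied by the user:

```python
import numpy as np

def shiryaev_roberts(samples, lr, threshold):
    # R_n = (1 + R_{n-1}) * LR_n; alarm at the first n with R_n > threshold.
    # `lr(x)` is the likelihood ratio of post- vs pre-change densities.
    R = 0.0
    for t, x in enumerate(samples):
        R = (1.0 + R) * lr(x)
        if R > threshold:
            return t
    return None

# Example lr for a unit-variance Gaussian mean shift from 0 to mu:
# lr = lambda x: np.exp(mu * x - mu ** 2 / 2)
```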
13

Mihalache, Stefan-Radu. "Sequential change-point detection for diffusion processes". Köln: Universitäts- und Stadtbibliothek Köln, 2011. http://d-nb.info/1013739531/34.

Full text
14

Bergsjö, Joline. "Photogrammetric point cloud generation and surface interpolation for change detection". Thesis, KTH, Geodesi och satellitpositionering, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-190882.

Full text
Abstract
In recent years the science revolving around image matching algorithms has seen an upswing, mostly due to its benefits in computer vision. This has led to new opportunities for photogrammetric methods to compete with LiDAR data when it comes to 3D point clouds and generating surface models. In Sweden, a project to create a high-resolution national height model started in 2009, and today almost the entirety of Sweden has been scanned with LiDAR sensors. The objective of this project is to achieve a height model with high spatial resolution and high accuracy in height. As of today no update of this model is planned in the project, so it is up to each municipality or company that needs a recent height model to update it themselves. This thesis aims to investigate the benefits and shortcomings of using photogrammetric measures for generating and updating surface models. Two image matching software packages are used, ERDAS photogrammetry and Spacemetric Keystone, to generate a 3D point cloud of a rural area in Botkyrka municipality. The point clouds are interpolated into surface models using different interpolation percentiles and different resolutions. The photogrammetric point clouds are evaluated on how well they fit a reference point cloud, and the surfaces are evaluated on how they are affected by the different interpolation percentiles and image resolutions. An analysis is performed to see whether the accuracy improves when the point cloud is interpolated into a surface. The results show that photogrammetric point clouds follow the profile of the ground well but contain a lot of noise in forest-covered areas. A lower image resolution improves the accuracy for the forest features in the surfaces. The results also show that noise reduction is essential for generating a surface with decent accuracy. Furthermore, the results identify problem areas in dry deciduous forest where the photogrammetric method fails to capture the forest.
15

Turner, Ryan Darby. "Gaussian processes for state space models and change point detection". Thesis, University of Cambridge, 2012. https://www.repository.cam.ac.uk/handle/1810/242181.

Full text
Abstract
This thesis details several applications of Gaussian processes (GPs) for enhanced time series modeling. We first cover different approaches for using Gaussian processes in time series problems. These are extended to the state space approach to time series in two different problems. We also combine Gaussian processes and Bayesian online change point detection (BOCPD) to increase the generality of the Gaussian process time series methods. These methodologies are evaluated on predictive performance on six real-world data sets, which include three environmental data sets, one financial, one biological, and one from industrial well drilling. Gaussian processes are capable of generalizing standard linear time series models. We cover two approaches: the Gaussian process time series model (GPTS) and the autoregressive Gaussian process (ARGP). We cover a variety of methods that greatly reduce the computational and memory complexity of Gaussian process approaches, which are generally cubic in computational complexity. Two different improvements to state space based approaches are covered. First, Gaussian process inference and learning (GPIL) generalizes linear dynamical systems (LDS), on which the Kalman filter is based, to general nonlinear systems for nonparametric system identification. Second, we address pathologies in the unscented Kalman filter (UKF). We use Gaussian process optimization (GPO) to learn UKF settings that minimize the potential for sigma point collapse. We show how to embed the mentioned Gaussian process approaches to time series into a change point framework. Old data, from an old regime, that hinders predictive performance is automatically and elegantly phased out. The computational improvements for Gaussian process time series approaches are of even greater use in the change point framework. We also present a supervised framework for learning a change point model when change point labels are available in training.
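The BOCPD component admits a compact recursion in a heavily simplified conjugate setting: Gaussian data with known variance, a Normal prior on the mean, and a constant hazard. All of those simplifications are assumptions of this sketch, not the GP-based models of the thesis:

```python
import numpy as np

def bocpd_gaussian(x, hazard=0.01, mu0=0.0, tau2=1.0, sigma2=1.0):
    # Bayesian online change-point detection, simplified to known-variance
    # Gaussian data with a conjugate Normal prior on the mean.
    logp = np.array([0.0])                        # log run-length distribution
    mu, var = np.array([mu0]), np.array([tau2])   # posterior params per run length
    map_run = []
    for xt in x:
        pred_var = var + sigma2
        ll = -0.5 * (np.log(2 * np.pi * pred_var) + (xt - mu) ** 2 / pred_var)
        grow = logp + ll + np.log(1 - hazard)     # run length r -> r + 1
        change = np.logaddexp.reduce(logp + ll) + np.log(hazard)  # r -> 0
        logp = np.concatenate([[change], grow])
        logp -= np.logaddexp.reduce(logp)         # normalize
        post_var = 1.0 / (1.0 / var + 1.0 / sigma2)
        post_mu = post_var * (mu / var + xt / sigma2)
        mu = np.concatenate([[mu0], post_mu])
        var = np.concatenate([[tau2], post_var])
        map_run.append(int(np.argmax(logp)))      # most probable run length
    return map_run
```

Drops in the most probable run length mark candidate change points.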
16

Du, Yang. "Comparison of change-point detection algorithms for vector time series". Thesis, Linköpings universitet, Statistik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-59925.

Full text
Abstract
Change-point detection aims to reveal sudden changes in sequences of data. Special attention has been paid to the detection of abrupt level shifts, and applications of such techniques can be found in a great variety of fields, such as monitoring of climate change, examination of gene expressions and quality control in the manufacturing industry. In this work, we compared the performance of two methods representing frequentist and Bayesian approaches, respectively. The frequentist approach involved a preliminary search for level shifts using a tree algorithm, followed by a dynamic programming algorithm for optimizing the locations and sizes of the level shifts. The Bayesian approach involved an MCMC (Markov chain Monte Carlo) implementation of a method originally proposed by Barry and Hartigan. The two approaches were implemented in R and extensive simulations were carried out to assess both their computational efficiency and their ability to detect abrupt level shifts. Our study showed that the overall performance regarding the estimated location and size of change-points was comparable for the Bayesian and frequentist approaches. However, the Bayesian approach performed better when the number of change-points was small, whereas the frequentist approach became stronger when the change-point proportion increased. The latter method was also better at detecting simultaneous change-points in vector time series. Theoretically, the Bayesian approach has a lower computational complexity than the frequentist approach, but suitable settings for the combined tree and dynamic programming algorithms can greatly reduce the processing time.
17

Diskin, Yakov. "Volumetric Change Detection Using Uncalibrated 3D Reconstruction Models". University of Dayton / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1429293660.

Full text
18

Bulunga, Meshack Linda. "Change-point detection in dynamical systems using auto-associative neural networks". Thesis, Stellenbosch : Stellenbosch University, 2012. http://hdl.handle.net/10019.1/20267.

Full text
Abstract
Thesis (MScEng)--Stellenbosch University, 2012.
In this research work, auto-associative neural networks have been used for change-point detection. This is a nonlinear technique that employs artificial neural networks, inspired among others by Frank Rosenblatt's linear perceptron algorithm for classification. An auto-associative neural network was used successfully to detect change-points for various types of time series data. Its performance was compared to that of the linear singular spectrum analysis (LSSA) developed by Moskvina and Zhigljavsky; below, the auto-associative network approach is referred to as nonlinear singular spectrum analysis (NLSSA). The fraction of explained variance (FEV) was also used to compare the performance of the two methods. FEV indicators are similar to the eigenvalues of the covariance matrix in principal component analysis. Two types of time series data were used for change-point detection: Gaussian data series and nonlinear reaction data series. The Gaussian data had four series with different types of change-points, namely a change in the mean value of the time series (T1), a change in the variance of the time series (T2), a change in the autocorrelation of the time series (T3), and a change in the cross-correlation of two time series (T4). Both linear and nonlinear methods were able to detect the changes for T1, T2 and T4. Neither of them could detect the changes in T3. With the Gaussian data series, LSSA performed as well as NLSSA for change-point detection. This is because the time series was linear and the nonlinearity of NLSSA was therefore not important. LSSA did even better than NLSSA when comparing FEV values, since it is not subject to suboptimal solutions, as can sometimes be the case with auto-associative neural networks. The nonlinear data consisted of Belousov-Zhabotinsky (BZ) reaction data, autocatalytic reaction time series data and data representing a predator-prey system. With the NLSSA method, change points could be detected accurately in all three systems, while LSSA only managed to detect the change-points in the BZ reactions and the predator-prey system. The NLSSA method also fared better than the LSSA method when comparing FEV values for the BZ reactions. The LSSA method was able to model the autocatalytic reactions fairly accurately, being able to explain 99% of the variance in the data with one component only. NLSSA with two nodes in the bottleneck attained an FEV of 87%. The performance of NLSSA and LSSA was comparable for the predator-prey system, where both could attain FEV values of 92% with a single component. An auto-associative neural network is a good technique for change point detection in nonlinear time series data. However, it offers no advantage over linear techniques when the time series data are linear.
19

Elango, Veeresh. "Change Point Detection in Sequential Sensor Data using Recurrent Neural Networks". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-235770.

Full text
Abstract
Change-point detection is the problem of recognizing abrupt variations in sequential data. This covers a wide range of real-world problems within medicine, meteorology and the automotive industry, and has been actively addressed in the statistics and data mining communities. In the automotive industry, sequential data is collected from various components of the vehicles. Changes in the underlying distribution of the sequential data might indicate component failure, sensor degradation or different activity of the vehicle, which explains the need for detecting these deviations in this industry. The research question of this thesis focuses on how different architectures of the recurrent neural network (RNN) perform in detecting the change points of sequential sensor data. In this thesis, the sliding window method was utilised to convert variable-length sequences into fixed-length ones. These fixed-length sequences were then provided to many-input single-output (MISO) and many-input many-output (MIMO) architectures of RNN to perform two different tasks: sequence detection, where the position of the change point in the sequence is recognized, and sequence classification, where the sequence is checked for the presence of a change point. The stacking ensemble technique was employed to combine the results of sequence classification with sequence detection to further enhance performance. The results of the thesis show that the MIMO architecture has higher precision than recall, whereas the MISO architecture has higher recall than precision, but both have similar F1-scores. The ensemble technique exhibits a boost in the performance of both architectures.
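The sliding-window framing is simple to make concrete. In this numpy sketch, the window-level MISO target ("does the window contain a change point?") and the per-step MIMO targets are illustrative choices:

```python
import numpy as np

def sliding_windows(seq, labels, width, step=1):
    # Convert a variable-length sensor sequence into fixed-length windows.
    X, y_miso, y_mimo = [], [], []
    for start in range(0, len(seq) - width + 1, step):
        w_x = seq[start:start + width]
        w_y = labels[start:start + width]
        X.append(w_x)
        y_mimo.append(w_y)                # one target per time step (MIMO)
        y_miso.append(int(np.any(w_y)))   # one target per window (MISO)
    return np.array(X), np.array(y_miso), np.array(y_mimo)
```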
20

Jiang, Tao. "Information Approach for Change Point Detection of Weibull Models with Applications". Bowling Green State University / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1434382384.

Full text
21

Makris, Alexia Melissa. "A Monte Carlo Approach to Change Point Detection in a Liver Transplant". Scholar Commons, 2013. http://scholarcommons.usf.edu/etd/4824.

Full text
Abstract
Patient survival post liver transplant (LT) is important to both the patient and the center's accreditation, but over the years physicians have noticed that distant patients struggle with post-LT care. I hypothesized that a patient's distance from the transplant center had a detrimental effect on post-LT survival. I suspected hepatitis C (HCV) and hepatocellular carcinoma (HCC) patients would deteriorate due to their recurrent disease, and that there is a need for close monitoring post LT. From the current literature it was not clear whether patients' distance from a transplant center affects outcomes post LT. Firozvi et al. (Firozvi AA, 2008) reported no difference in outcomes of LT recipients living 3 hours away or less. This study aimed to examine outcomes of LT recipients based on distance from a transplant center. I hypothesized that the effect of distance from an LT center was detrimental after adjusting for HCV and HCC status. Methods: This was a retrospective single-center study of LT recipients transplanted between 1996 and 2012. 821 LT recipients were identified who qualified for inclusion in the study. Survival analysis was performed using standard methods as well as a newly developed Monte Carlo (MC) approach for change point detection. My new methodology allowed for detection of both a change point in distance and in time by maximizing the two-parameter score function (M2p) over a two-dimensional grid of distance and time values. Extensive simulations using both standard distributions and data resembling the LT data structure were used to prove the functionality of the model. Results: Five-year survival was 0.736 with a standard error of 0.018. Using the Cox PH model it was demonstrated that patients living beyond 180 miles had a hazard ratio (HR) of 2.68 (p-value < 0.004) compared to those within 180 miles of the transplant center. I was able to confirm these results using KM and HCV/HCC-adjusted AFT models, while HCV- and HCC-adjusted LR confirmed the distance effect at 180 miles (p = 0.0246), one year post LT. The new statistic, labeled M2p, allows for simultaneous dichotomization of distance in conjunction with the identification of a change point in the hazard function. It performed much better than the previously available statistics in the standard simulations. The best model for the data was found to be extension 3, which dichotomizes the distance Z, replacing it by I(Z > c), and then estimates the change point c and tau. Conclusions: Distance had a detrimental effect, and this effect was observed at 180 miles from the transplant center. Patients living beyond 180 miles from the transplant center had 2.68 times the death rate of those living within the 180-mile radius. Recipients with HCV fared the worst, with the distance effect being more pronounced (HR of 3.72 vs. 2.68). Extensive simulations using different parameter values, in both standard simulations and simulations resembling LT data, proved that these new approaches work for dichotomizing a continuous variable and finding a point beyond which there is an incremental effect from this variable. The recovered values were very close to the true values and p-values were small.
22

Piyadi, Gamage Ramadha D. "Empirical Likelihood For Change Point Detection And Estimation In Time Series Models". Bowling Green State University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1495457528719879.

Full text
23

Ko, Kyungduk. "Bayesian wavelet approaches for parameter estimation and change point detection in long memory processes". Diss., Texas A&M University, 2004. http://hdl.handle.net/1969.1/2804.

Full text
Abstract
The main goal of this research is to estimate the model parameters and to detect multiple change points in the long memory parameter of Gaussian ARFIMA(p, d, q) processes. Our approach is Bayesian and inference is done in the wavelet domain. Long memory processes have been widely used in many scientific fields such as economics, finance and computer science. Wavelets have a strong connection with these processes. The ability of wavelets to simultaneously localize a process in the time and scale domains results in representing many dense variance-covariance matrices of the process in a sparse form. A wavelet-based Bayesian estimation procedure for the parameters of a Gaussian ARFIMA(p, d, q) process is proposed. This entails calculating the exact variance-covariance matrix of the given ARFIMA(p, d, q) process and transforming it into the wavelet domain using the two-dimensional discrete wavelet transform (DWT2). The Metropolis algorithm is used for sampling the model parameters from the posterior distributions. Simulations with different values of the parameters and of the sample size are performed. A real-data application to the U.S. GNP data is also reported. Detection and estimation of multiple change points in the long memory parameter are also investigated. The reversible jump MCMC is used for posterior inference. Performance is evaluated on simulated data and on the Nile River dataset.
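At the core of the ARFIMA(p, d, q) model is the fractional difference operator (1 - B)^d; its binomial-expansion weights are easy to sketch. This illustrates the long-memory structure only, not the Bayesian wavelet estimation itself:

```python
import numpy as np

def fracdiff_weights(d, n):
    # Weights pi_k of (1 - B)^d: pi_0 = 1, pi_k = pi_{k-1} * (k - 1 - d) / k.
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

def fracdiff(x, d):
    # Apply (1 - B)^d to a series by truncated convolution.
    x = np.asarray(x, dtype=float)
    w = fracdiff_weights(d, len(x))
    return np.array([w[:t + 1] @ x[t::-1] for t in range(len(x))])
```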
24

Dou, Baojun. "Three essays on time series : spatio-temporal modelling, dimension reduction and change-point detection". Thesis, London School of Economics and Political Science (University of London), 2015. http://etheses.lse.ac.uk/3242/.

Full text
Abstract
Modelling high-dimensional time series and non-stationary time series are two important aspects of modern time series analysis, and the main objective of this thesis is to address these two problems. The first two parts deal with high dimensionality; the third part considers a change point detection problem. In the first part, we consider a class of spatio-temporal models which extend popular econometric spatial autoregressive panel data models by allowing the scalar coefficients for each location (or panel) to differ from each other. The model is of the following form: yt = D(λ0)Wyt + D(λ1)yt−1 + D(λ2)Wyt−1 + εt, (1) where yt = (y1,t, . . . , yp,t)^T represents the observations from p locations at time t, D(λk) = diag(λk1, . . . , λkp), λkj is the unknown coefficient parameter for the j-th location, and W is the p×p spatial weight matrix which measures the dependence among different locations. All the elements on the main diagonal of W are zero. It is common practice in spatial econometrics to assume W known. For example, we may let wij = 1/(1 + dij), for i ≠ j, where dij ≥ 0 is an appropriate distance between the i-th and the j-th location; it can simply be the geographical distance between the two locations, or a distance reflecting the correlation or association between the variables at the two locations. In the above model, D(λ0) captures the pure spatial effect, D(λ1) captures the pure dynamic effect, and D(λ2) captures the time-lagged spatial effect. We also assume that the error term εt = (ε1,t, ε2,t, . . . , εp,t)^T in (1) satisfies Cov(yt−1, εt) = 0. When λk1 = · · · = λkp for each k, (1) reduces to the model of Yu et al. (2008), in which there are only 3 unknown regressive coefficient parameters; in general the regression function in (1) contains 3p unknown parameters. To overcome the innate endogeneity, we propose a generalized Yule-Walker estimation method which applies least squares estimation to a Yule-Walker equation. The asymptotic theory is developed as both the sample size and the number of locations (or panels) tend to infinity, in a general framework for stationary and α-mixing processes which includes spatial autoregressive panel data models driven by i.i.d. innovations as special cases. The proposed methods are illustrated using both simulated and real data.
In part 2, we consider a multivariate time series model which decomposes a vector process into a latent factor process and a white noise process. Let yt = (y1,t, · · · , yp,t)^T be an observable p × 1 vector time series. The factor model decomposes yt in the following form: yt = Axt + εt, (2) where xt = (x1,t, · · · , xr,t)^T is an r × 1 latent factor time series with unknown r ≤ p, and A = (a1, a2, · · · , ar) is a p × r unknown constant matrix. εt is a white noise process with mean 0 and covariance matrix Σε. The first term of (2) is the dynamic part, and the serial dependence of yt is driven by xt. We achieve dimension reduction once r ≪ p, in the sense that the dynamics of yt is driven by a much lower-dimensional process xt. Motivated by practical needs and the characteristics of high-dimensional data, a sparsity assumption is imposed on the factor loading matrix. Different from the method of Lam, Yao and Bathia (2011), which is equivalent to an eigenanalysis of a nonnegative definite matrix, we add a constraint to control the number of nonzero elements in each column of the factor loading matrix. Our proposed sparse estimator is then the solution of a constrained optimization problem. The asymptotic theory is developed under the setting that both the sample size and the dimensionality tend to infinity. When the common factor is weak, in the sense that δ > 1/2 in the paper of Lam, Yao and Bathia (2011), the new sparse estimator may have a faster convergence rate. Numerically, we employ the generalized deflation method (Mackey (2009)) and the GSLDA method (Moghaddam et al. (2006)) to approximate the estimator; the tuning parameter is chosen by cross validation. The proposed method is illustrated with both simulated and real data examples.
The third part is a change point detection problem. We consider the following covariance structural break detection problem: Cov(yt) I(tj−1 ≤ t < tj) = Σtj−1, j = 1, · · · , m + 1, where yt is a p × 1 vector time series, Σtj−1 ≠ Σtj, and {t1, . . . , tm} are the change points with 1 = t0 < t1 < · · · < tm+1 = n. In the literature, the number of change points m is usually assumed to be known and small, because a large m would involve a huge computational burden for parameter estimation. By reformulating the problem in a variable selection context, the group least absolute shrinkage and selection operator (LASSO) is proposed to estimate both m and the locations of the change points {t1, . . . , tm}. Our method is model-free, so it applies widely to multivariate time series, including GARCH and stochastic volatility models. It is shown that both m and the locations of the change points can be consistently estimated from the data, and the computation can be performed efficiently. An improved practical version that incorporates group LASSO and the stepwise regression variable selection technique is also discussed. Simulation studies are conducted to assess the finite sample performance.
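To make model (1) concrete, the following minimal sketch simulates such a process by solving the simultaneous spatial equation for yt at each step. The weight construction from random coordinates, the row-normalisation of W, and the coefficient ranges are illustrative assumptions, not choices taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 10, 200

# Spatial weights w_ij = 1/(1 + d_ij) with zero diagonal, as in the abstract;
# random coordinates and row-normalisation are illustrative conventions.
coords = rng.uniform(0, 1, size=(p, 2))
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
W = 1.0 / (1.0 + d)
np.fill_diagonal(W, 0.0)
W /= W.sum(axis=1, keepdims=True)

# Location-specific coefficients, kept small so the simulated process is stable
lam0, lam1, lam2 = (rng.uniform(-0.2, 0.2, p) for _ in range(3))

# y_t = (I - D(lam0) W)^{-1} (D(lam1) y_{t-1} + D(lam2) W y_{t-1} + eps_t)
M = np.linalg.inv(np.eye(p) - np.diag(lam0) @ W)
y = np.zeros((n, p))
for t in range(1, n):
    y[t] = M @ (np.diag(lam1) @ y[t - 1]
                + np.diag(lam2) @ (W @ y[t - 1])
                + rng.normal(0, 1, p))
```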
Los estilos APA, Harvard, Vancouver, ISO, etc.
25

Shakil, Sadia. "Windowing effects and adaptive change point detection of dynamic functional connectivity in the brain". Diss., Georgia Institute of Technology, 2016. http://hdl.handle.net/1853/55006.

Texto completo
Resumen
Evidence of networks in the resting brain reflecting spontaneous brain activity is perhaps the most significant discovery for understanding intrinsic brain functionality. Moreover, subsequent detection of dynamics in these networks can be a milestone in differentiating normal and disordered brain function. However, capturing the correct dynamics is a challenging task, since no ground truths are available for comparison of the results. The change points of these networks can differ between subjects even during normal brain function; even for the same subject and session, dynamics can differ between the start and end of the session, depending on the fatigue level of the subject scanned. Despite the absence of ground truths, studies have analyzed these dynamics using existing methods, and some have developed new algorithms too. One of the most commonly used methods for this purpose is sliding window correlation. However, the result of sliding window correlation depends on many parameters, and without ground truth there is no way of validating the results. In addition, most of the new algorithms are complicated, computationally expensive, and/or focus on just one aspect of these dynamics. This study applies algorithms and concepts from signal processing, image processing, video processing, information theory, and machine learning to analyze the results of sliding window correlation, and develops a novel algorithm to detect change points of these networks adaptively. The findings are divided into three parts: 1) analyzing the extent of variability in well-defined networks of rodents and humans with sliding window correlation, applying concepts from the information theory and machine learning domains; 2) analyzing the performance of sliding window correlation using simulated networks as ground truths for selecting the best parameters, and exploring its dependence on multiple frequency components of the correlating signals by processing the signals in the time and Fourier domains; and 3) development of a novel algorithm, based on image similarity measures from image and video processing, that may be employed to identify change points of these networks adaptively.
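Since the analysis above centres on sliding window correlation, a minimal version of that baseline is sketched below; the window width and the toy signals (whose coupling flips sign halfway through, mimicking a connectivity change) are illustrative assumptions.

```python
import numpy as np

def sliding_window_correlation(x, y, width, step=1):
    """Pearson correlation of x and y over overlapping windows of fixed width."""
    starts = range(0, len(x) - width + 1, step)
    return np.array([np.corrcoef(x[s:s + width], y[s:s + width])[0, 1]
                     for s in starts])

rng = np.random.default_rng(1)
t = np.arange(600)
x = np.sin(2 * np.pi * t / 80) + 0.5 * rng.normal(size=t.size)
y = np.where(t < 300, 1, -1) * np.sin(2 * np.pi * t / 80) + 0.5 * rng.normal(size=t.size)
swc = sliding_window_correlation(x, y, width=60)   # drifts from +1 toward -1 near t = 300
```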
Los estilos APA, Harvard, Vancouver, ISO, etc.
26

Han, Sung Won. "Efficient change detection methods for bio and healthcare surveillance". Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/34828.

Texto completo
Resumen
For the last several decades, sequential change point problems have been studied in both the theoretical area (sequential analysis) and the application area (industrial statistical process control). In the conventional setting, the baseline process is assumed to be stationary, and the shift pattern is a step function that is sustained after the shift. In biosurveillance, however, the underlying assumptions are more complicated. This thesis investigates several issues in biosurveillance, such as non-homogeneous populations, spatiotemporal surveillance methods, and correlated structures in regional data. The first part of the thesis discusses popular surveillance methods for sequential change point problems and off-line problems based on count data. For sequential change point problems, the CUSUM and the EWMA have been used in healthcare and public health surveillance to detect increases in the rates of diseases or symptoms, while for off-line problems scan statistics are widely used. In this part, we link the methods for off-line problems to those for sequential change point problems: we investigate three methods — the CUSUM, the EWMA, and scan statistics — and compare them by conditional expected delay (CED). The second part pertains to the on-line monitoring problem of detecting a change in the mean of Poisson count data with a non-homogeneous population size. The most common detection schemes are based on generalized likelihood ratio statistics, known to be optimal under Lorden's criterion. We propose alternative detection schemes based on weighted likelihood ratios and the adaptive threshold method, which perform better than generalized likelihood ratio statistics in an increasing population. The properties of these three detection schemes are investigated both theoretically and by numerical simulation. The third part investigates spatiotemporal surveillance based on likelihood ratios. It proposes a general framework for spatiotemporal surveillance based on likelihood ratio statistics over time windows, and shows that the CUSUM and other popular likelihood ratio statistics are special cases of this framework. We compare the efficiency of these surveillance methods in spatiotemporal settings for detecting clusters of incidence, using both Monte Carlo simulations and a real example. The fourth part proposes multivariate surveillance methods based on likelihood ratio tests in the presence of spatial correlations. By taking advantage of spatial correlations, the proposed methods can outperform existing surveillance methods, providing faster and more accurate detection. We illustrate the application of these methods with a breast cancer case study in New Hampshire, where observations are spatially correlated.
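As a point of reference for the schemes discussed above, the following sketch implements a one-sided Poisson CUSUM that accounts for a time-varying population size. The rates, population path, and threshold are illustrative assumptions, and the weighted-likelihood and adaptive-threshold alternatives proposed in the thesis are not reproduced here.

```python
import numpy as np

def poisson_cusum(counts, pop, lam0, lam1, h):
    """One-sided CUSUM for a rate increase lam0 -> lam1 with population pop[t].

    Accumulates the log-likelihood ratio increments, reflected at zero;
    returns the first alarm time (or None) and the CUSUM path."""
    llr = counts * np.log(lam1 / lam0) - pop * (lam1 - lam0)
    s, path = 0.0, []
    for t, inc in enumerate(llr):
        s = max(0.0, s + inc)
        path.append(s)
        if s > h:
            return t, np.array(path)
    return None, np.array(path)

rng = np.random.default_rng(2)
pop = np.linspace(1000, 2000, 300)                    # growing population
rate = np.where(np.arange(300) < 200, 0.010, 0.013)   # rate increase at t = 200
counts = rng.poisson(pop * rate)
alarm, _ = poisson_cusum(counts, pop, lam0=0.010, lam1=0.013, h=5.0)
```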
Los estilos APA, Harvard, Vancouver, ISO, etc.
27

Bolton, Alexander. "Bayesian change point models for regime detection in stochastic processes with applications in cyber security". Thesis, Imperial College London, 2016. http://hdl.handle.net/10044/1/48484.

Texto completo
Resumen
Some important cyber security data can be modelled using stochastic processes that undergo changes in behaviour over time. Consider a piece of malicious software (malware) that performs different functions as it runs. Data obtained from this software switch between different behaviours that correspond to different functions. Coders create new strains of similar malware by making minor changes to existing malware; these new samples cannot be detected by methods that only identify whether an exact executable file has been seen before. Comparing data from new malware and existing malware, in order to detect similar behaviours, is a cyber security challenge. Methods that can detect these similar behaviours are used to identify similar malware samples. This thesis presents a generalised change point model for stochastic processes that includes regimes, i.e. recurring parameters. For generality the stochastic processes are assumed to be multivariate. A new reversible jump Markov chain Monte Carlo (RJMCMC) sampler is presented for inferring model parameters. The number of change points or regimes need not be specified before inference as the RJMCMC sampler allows these to be inferred. The RJMCMC sampler is applied in different contexts, including estimating malware similarity. A new sequential Monte Carlo (SMC) sampler is also presented. Like the RJMCMC sampler, the SMC sampler infers change points and regimes, but the SMC inference is computed online. The SMC sampler is also applied to detect regimes in a variety of contexts, including connections made in a computer network.
Los estilos APA, Harvard, Vancouver, ISO, etc.
28

Ratnasingam, Suthakaran. "Sequential Change-point Detection in Linear Regression and Linear Quantile Regression Models Under High Dimensionality". Bowling Green State University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu159050606401363.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
29

AL, Cihan y Kubra Koroglu. "Detection of the Change Point and Optimal Stopping Time by Using Control Charts on Energy Derivatives". Thesis, Högskolan i Halmstad, Tillämpad matematik och fysik (MPE-lab), 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-17371.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
30

Zhai, Hongru. "Prominent variable detection in lipid nanoparticle experiments : A simulation study on non-parametric change point analysis". Thesis, Uppsala universitet, Statistiska institutionen, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-419865.

Texto completo
Resumen
Images are an important source of information, and the newest robotic microscopy technologies make the volume of available data larger than ever before. Hundreds of variables can be calculated from the images. The medical research group within the HASTE project conducts an LNP (lipid nanoparticle) experiment designed to transport a drug to target cells. In the LNP experiment, microscopy images are taken over time to record whether the drug is taken up. We propose that non-parametric change point analysis can be used to identify the variable showing the earliest state change (potentially signifying drug uptake) among all variables calculated from the images. Two algorithms for non-parametric change point analysis, one agglomerative and one divisive, are studied through simulation, leading us to implement the agglomerative algorithm on the LNP experiment data. Furthermore, the simulation results show that the accuracy of prominent variable detection improves when more time points are included in the experiment. In the application, correlation is most likely to be detected as the sole prominent variable.
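For intuition, the sketch below performs one divisive step of energy-based non-parametric change point analysis, the family of methods compared in the thesis; a full implementation (including the agglomerative variant ultimately used, and permutation tests for significance) involves considerably more.

```python
import numpy as np

def energy_distance(a, b):
    """Sample energy distance between two 1-d samples."""
    axb = np.abs(a[:, None] - b[None, :]).mean()
    axa = np.abs(a[:, None] - a[None, :]).mean()
    bxb = np.abs(b[:, None] - b[None, :]).mean()
    return 2 * axb - axa - bxb

def best_split(x, min_size=20):
    """Single divisive step: the split point maximising the energy distance
    between the two resulting segments."""
    stats = {tau: energy_distance(x[:tau], x[tau:])
             for tau in range(min_size, len(x) - min_size)}
    return max(stats, key=stats.get)

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(0, 1, 150), rng.normal(1.5, 1, 150)])
tau_hat = best_split(x)   # should land near 150
```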
Los estilos APA, Harvard, Vancouver, ISO, etc.
31

Zhu, Yanjun. "Some New Methods for Online Change Point Detection in The Covariance Structure of High-dimensional Data". Kent State University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=kent1555861286963026.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
32

Dürre, Alexander [Verfasser], Roland [Akademischer Betreuer] Fried, Daniel [Akademischer Betreuer] Vogel y Christine H. [Gutachter] Müller. "Robust change-point detection and dependence modeling / Alexander Dürre ; Gutachter: Christine H. Müller ; Roland Fried, Daniel Vogel". Dortmund : Universitätsbibliothek Dortmund, 2017. http://d-nb.info/1142519910/34.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
33

Pashami, Sepideh. "Change detection in metal oxide gas sensor signals for open sampling systems". Doctoral thesis, Örebro universitet, Institutionen för naturvetenskap och teknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-46845.

Texto completo
Resumen
This thesis addresses the problem of detecting changes in the activity of a distant gas source from the response of an array of metal oxide (MOX) gas sensors deployed in an Open Sampling System (OSS). Changes can occur due to gas source activity, such as a sudden alteration in concentration, or due to exposure to a different compound. Applications such as gas-leak detection in mines or large-scale pollution monitoring can benefit from reliable change detection algorithms, especially where it is impractical to continuously store or transfer sensor readings, or where reliable calibration is difficult to achieve. Here, it is desirable to detect a change point indicating a significant event, e.g. the presence of gas or a sudden change in concentration. The main challenges are the turbulent dispersion of gas and the slow response and recovery times of MOX sensors; due to these challenges, the gas sensor response exhibits fluctuations that interfere with the changes of interest. The contributions of this thesis centre on developing change detection methods for MOX sensor responses. First, we apply the Generalized Likelihood Ratio algorithm (GLR), a commonly used method that makes no a priori assumption about change events. Next, we propose TREFEX, a novel change point detection algorithm which models the response of MOX sensors as a piecewise exponential signal and treats the junctions between consecutive exponentials as change points. We also propose the rTREFEX algorithm as an extension of TREFEX; its core idea is to improve the exponentials fitted by TREFEX by reducing their number even further. GLR, TREFEX and rTREFEX are evaluated for various MOX sensors and gas emission profiles. A sensor selection algorithm is then introduced, and the change detection algorithms are evaluated with the selected sensor subsets. A comparison of the three algorithms shows clearly superior performance of rTREFEX, both in detection performance and in estimating the change time. Further, rTREFEX is evaluated in real-world experiments where data are gathered by a mobile robot. Finally, a gas dispersion simulation is developed which integrates OpenFOAM flow simulation with a filament-based gas propagation model to simulate gas dispersion for compressible flows with a realistic turbulence model.
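For orientation, a textbook offline GLR detector for a sustained mean shift in Gaussian noise is sketched below. It assumes a zero pre-change mean and known variance, a deliberate simplification of the GLR algorithm the thesis applies to MOX sensor responses.

```python
import numpy as np

def glr_mean_change(x, sigma=1.0):
    """GLR statistic for a sustained shift from mean 0 at an unknown time.

    For each candidate change point tau, the post-change mean is replaced by
    its MLE (the segment average), giving n_seg * mean^2 / (2 sigma^2)."""
    best_stat, best_tau = -np.inf, None
    for tau in range(1, len(x)):
        seg = x[tau:]
        stat = len(seg) * seg.mean() ** 2 / (2 * sigma ** 2)
        if stat > best_stat:
            best_stat, best_tau = stat, tau
    return best_stat, best_tau

rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(0, 1, 200), rng.normal(0.8, 1, 100)])
stat, tau_hat = glr_mean_change(x)   # tau_hat should land near 200
```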
Los estilos APA, Harvard, Vancouver, ISO, etc.
34

Li, Lingjun. "Statistical Inference for Change Points in High-Dimensional Offline and Online Data". Kent State University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=kent1586206330858843.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
35

Gösmann, Josua Nicolas [Verfasser], Holger [Gutachter] Dette y Herold [Gutachter] Dehling. "New aspects of sequential change point detection / Josua Nicolas Gösmann ; Gutachter: Holger Dette, Herold Dehling ; Fakultät für Mathematik". Bochum : Ruhr-Universität Bochum, 2020. http://d-nb.info/1221370189/34.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
37

Hasan, Abeer. "A Study of non-central Skew t Distributions and their Applications in Data Analysis and Change Point Detection". Bowling Green State University / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1371055538.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
38

Huang, Rong [Verfasser], Uwe [Akademischer Betreuer] Stilla, Helmut [Gutachter] Mayer y Uwe [Gutachter] Stilla. "Change detection of construction sites based on 3D point clouds / Rong Huang ; Gutachter: Helmut Mayer, Uwe Stilla ; Betreuer: Uwe Stilla". München : Universitätsbibliothek der TU München, 2021. http://d-nb.info/1240832850/34.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
39

García, Arboleda Isabel Cristina [Verfasser], Herold [Gutachter] Dehling y Martin [Gutachter] Wendler. "Change point detection in mean of short memory process / Isabel Cristina García Arboleda ; Gutachter: Herold Dehling, Martin Wendler ; Fakultät für Mathematik". Bochum : Ruhr-Universität Bochum, 2018. http://d-nb.info/1155588061/34.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
40

Diop, Lamine. "Assessing and predicting stream-flow at different time scales in the context of climate change: Case of the upper Senegal River basin". The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1496332453864627.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
41

Stöhr, Christina [Verfasser] y Claudia [Gutachter] Kirch. "Sequential change point procedures based on U-statistics and the detection of covariance changes in functional data / Christina Stöhr ; Gutachter: Claudia Kirch". Magdeburg : Universitätsbibliothek Otto-von-Guericke-Universität, 2019. http://d-nb.info/1219966282/34.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
42

Osunmadewa, Babatunde A., Christine Wessollek y Pierre Karrasch. "Linear and segmented linear trend detection for vegetation cover using GIMMS normalized difference vegetation index data in semiarid regions of Nigeria". SPIE, 2015. https://tud.qucosa.de/id/qucosa%3A35266.

Texto completo
Resumen
Quantitative analysis of trends in vegetation cover, especially in Kogi state, Nigeria, where agriculture plays a major role in the region's economy, is very important for detecting long-term changes in the phenological behavior of vegetation over time. This study uses normalized difference vegetation index (NDVI) data [global inventory modeling and mapping studies 3g (GIMMS)] from 1983 to 2011, with a detailed methodological and statistical approach for analyzing trends within the NDVI time series for four selected locations in Kogi state. Based on the results of a comprehensive study of seasonalities in the time series, the original signals are decomposed. Different linear regression models are applied and compared. In order to detect structural changes over time, a detailed breakpoint analysis is performed, and the quality of the linear modeling is evaluated by means of statistical analyses of the residuals. Standard deviations of the regressions are between 0.015 and 0.021, with R² of 0.22–0.64. Segmented linear regression modeling improves on this, reducing the standard deviation by 33%–40% (to 0.010–0.013) and raising R² to as much as 0.82. The approach used in this study demonstrates the added value of long-term time series analyses of vegetation cover for the assessment of agricultural and rural development in the Guinea savannah region of Kogi state, Nigeria.
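A brute-force, single-breakpoint version of segmented linear regression conveys the core idea; the synthetic NDVI-like series and the minimum segment size below are illustrative assumptions, and the study's breakpoint analysis is more elaborate.

```python
import numpy as np

def two_segment_fit(t, y, min_size=5):
    """Exhaustive search for the single breakpoint minimising the total SSE
    of two independently fitted linear segments."""
    best_sse, best_k = np.inf, None
    for k in range(min_size, len(t) - min_size):
        sse = 0.0
        for tt, yy in ((t[:k], y[:k]), (t[k:], y[k:])):
            coef = np.polyfit(tt, yy, 1)
            sse += ((yy - np.polyval(coef, tt)) ** 2).sum()
        if sse < best_sse:
            best_sse, best_k = sse, k
    return best_k

rng = np.random.default_rng(5)
t = np.arange(348)   # e.g. a monthly index over ~29 years (illustrative)
y = np.where(t < 200, 0.40 + 0.0002 * t, 0.44 - 0.0003 * (t - 200))
y = y + 0.015 * rng.normal(size=t.size)
k_hat = two_segment_fit(t, y)   # should land near 200
```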
Los estilos APA, Harvard, Vancouver, ISO, etc.
43

Liu, Wenjie. "Estimation and bias correction of the magnitude of an abrupt level shift". Thesis, Linköpings universitet, Statistik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-84618.

Texto completo
Resumen
Consider a time series model which is stationary apart from a single shift in mean. If the time of a level shift is known, the least squares estimator of the magnitude of this level shift is a minimum variance unbiased estimator. If the time is unknown, however, this estimator is biased. Here, we first carry out extensive simulation studies to determine the relationship between the bias and three parameters of our time series model: the true magnitude of the level shift, the true time point and the autocorrelation of adjacent observations. Thereafter, we use two generalized additive models to generalize the simulation results. Finally, we examine to what extent the bias can be reduced by multiplying the least squares estimator with a shrinkage factor. Our results showed that the bias of the estimated magnitude of the level shift can be reduced when the level shift does not occur close to the beginning or end of the time series. However, it was not possible to simultaneously reduce the bias for all possible time points and magnitudes of the level shift.
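A minimal sketch of the setting: the least squares estimate of the shift time and magnitude, followed by a multiplicative shrinkage correction of the kind examined in the thesis. The shrinkage factor shown is an arbitrary placeholder, not a fitted value.

```python
import numpy as np

def estimate_level_shift(x):
    """LS estimate of a single level shift at an unknown time: choose the split
    maximising the SSE reduction n1*n2/n * (m2 - m1)^2; return (tau, delta)."""
    n = len(x)
    best_score, best_tau, best_delta = -np.inf, None, None
    for tau in range(2, n - 1):
        m1, m2 = x[:tau].mean(), x[tau:].mean()
        score = tau * (n - tau) / n * (m2 - m1) ** 2
        if score > best_score:
            best_score, best_tau, best_delta = score, tau, m2 - m1
    return best_tau, best_delta

rng = np.random.default_rng(6)
x = np.concatenate([rng.normal(0, 1, 120), rng.normal(0.8, 1, 80)])
tau_hat, delta_hat = estimate_level_shift(x)
delta_shrunk = 0.9 * delta_hat   # placeholder shrinkage factor; the thesis tunes this
```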
Los estilos APA, Harvard, Vancouver, ISO, etc.
44

Alkalei, Osama. "Developing fixed-point photography methodologies for assessing post-fire mountain fynbos vegetation succession as a tool for biodiversity management". University of Western Cape, 2020. http://hdl.handle.net/11394/8058.

Texto completo
Resumen
Magister Scientiae (Biodiversity and Conservation Biology) - MSc (Biodiv and Cons Biol)
Areas of high biodiversity and complex species assemblages are often difficult to manage, and it is hard to set up meaningful monitoring and evaluation programmes for them. Mountain Fynbos is such an ecosystem, and in the Cape of Good Hope (part of the Table Mountain National Park) plant biodiversity has been in decline over the last five decades. The reasons are difficult to pin down, since large herbivores, altered fire regimes and even climate change could all be contributing to this decline, which has been quantified using fixed quadrats and standard cover-abundance estimates based on a Braun-Blanquet methodology. To provide more detailed data, with higher resolution for identifying ecological processes, fixed-point repeat photography has been presented as a management "solution". However, photography remains a difficult method to standardize across subjects and has certain operational limitations.
Los estilos APA, Harvard, Vancouver, ISO, etc.
45

Mohammadian, Jeela. "Monitoring portfolio weights by means of the Shewhart method". Thesis, Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-4341.

Texto completo
Resumen

The distribution of asset returns may exhibit structural breaks, and these breaks may result in changes to the optimal portfolio weights. For a portfolio investor, the ability to detect any systematic change in the optimal portfolio weights in a timely manner is of great interest. In this master thesis work, the use of the Shewhart method for detecting a sudden parameter change, the implied change in the multivariate portfolio weights, and its performance are reviewed.
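A minimal sketch of the Shewhart rule applied to a monitored weight series follows; assuming the in-control mean and standard deviation are known is an illustrative simplification.

```python
import numpy as np

def shewhart_alarm(x, mu, sigma, k=3.0):
    """Classic Shewhart rule: alarm at the first observation outside mu ± k*sigma."""
    outside = np.abs(x - mu) > k * sigma
    return int(np.argmax(outside)) if outside.any() else None

rng = np.random.default_rng(7)
# e.g. a stream of estimated weights for one asset, shifting at t = 100
w = np.concatenate([rng.normal(0.25, 0.02, 100), rng.normal(0.32, 0.02, 50)])
alarm = shewhart_alarm(w, mu=0.25, sigma=0.02)
```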
Los estilos APA, Harvard, Vancouver, ISO, etc.
46

Lama, Salomon Abraham. "Digital State Models for Infrastructure Condition Assessment and Structural Testing". Diss., Virginia Tech, 2017. http://hdl.handle.net/10919/84502.

Texto completo
Resumen
This research introduces and applies the concept of digital state models for civil infrastructure condition assessment and structural testing. Digital state models are defined herein as any transient or permanent 3D model of an object (e.g. textured meshes and point clouds) combined with any electromagnetic radiation (e.g., visible light, infrared, X-ray) or other two-dimensional image-like representation. In this study, digital state models are built using visible light and used to document the transient state of a wide variety of structures (ranging from concrete elements to cold-formed steel columns and hot-rolled steel shear walls) and civil infrastructure (bridges). The accuracy of digital state models was validated against traditional sensors (e.g., digital caliper, crack microscope, wire potentiometer). Overall, features measured from the 3D point cloud data presented a maximum error of ±0.10 in. (±2.5 mm), and surface features (i.e., crack widths) measured from the texture information in textured polygon meshes had a maximum error of ±0.010 in. (±0.25 mm). Results showed that digital state models perform similarly across specimen surface types and between laboratory and field experiments. It is also shown that digital state models have great potential for structural assessment by significantly improving data collection, automation, change detection, visualization, and augmented reality, with clear opportunities for commercial development. Algorithms to analyze and extract information from digital state models, such as cracks, displacement, and buckling deformation, are developed and tested. Finally, the extensive data sets collected in this effort are shared for research development in computer vision-based infrastructure condition assessment, removing a major obstacle to advancing the field: the absence of publicly available data sets.
Ph. D.
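As a sketch of the kind of change detection such 3D data enables (generic nearest-neighbour cloud-to-cloud distances, not the algorithms developed in this dissertation), two point clouds can be compared as follows; the threshold value is an illustrative assumption.

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_change(reference, current, threshold=0.0025):
    """Distance from each point of `current` to its nearest neighbour in
    `reference`; points farther than `threshold` (in the clouds' units)
    are flagged as changed regions."""
    dist, _ = cKDTree(reference).query(current)
    return dist, dist > threshold

rng = np.random.default_rng(8)
ref = rng.uniform(0, 1, size=(5000, 3))
cur = ref + np.where(ref[:, [0]] > 0.8, 0.01, 0.0)   # displace one region of the cloud
dist, changed = cloud_change(ref, cur, threshold=0.0025)
```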
Los estilos APA, Harvard, Vancouver, ISO, etc.
47

Alisic, Rijad. "Privacy of Sudden Events in Cyber-Physical Systems". Licentiate thesis, KTH, Reglerteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-299845.

Texto completo
Resumen
Cyberattacks against critical infrastructures have been a growing problem for the past couple of years. These infrastructures are a particularly desirable target for adversaries due to their vital importance in society. For instance, a stop in the operation of a critical infrastructure could have a crippling effect on a nation's economy, security or public health. The reason behind this increase is that critical infrastructures have become more complex, often being integrated with a large network of various cyber components. It is through these cyber components that an adversary is able to access the system and conduct attacks. In this thesis, we consider methods which can be used as a first line of defence against such attacks on Cyber-Physical Systems (CPS). Specifically, we start by studying how information leaks about a system's dynamics help an adversary to generate attacks that are difficult to detect. In many cases, such attacks can be detrimental to a CPS since they can drive the system to a breaking point without being detected by the operator tasked with securing the system. We show that an adversary can use small amounts of data procured from information leaks to generate these undetectable attacks. In particular, we provide the minimal amount of information that is needed in order to keep the attack hidden, even if the operator tries to probe the system for attacks. We design defence mechanisms against such information leaks using the Hammersley-Chapman-Robbins lower bound. With it, we study how information leakage can be mitigated by corrupting the data through injected measurement noise. Specifically, we investigate how information about structured input sequences, which we call events, can be obtained through the output of a dynamical system, and how this leakage depends on the system dynamics. For example, it is shown that a system with fast dynamical modes tends to disclose more information about an event than a system with slower modes. However, a slower system leaks information over a longer time horizon, which means that an adversary who starts to collect information long after the event has occurred might still be able to estimate it. Additionally, we show how sensor placement can affect the information leak. These results are then used to help the operator detect privacy vulnerabilities in the design of a CPS. Based on the Hammersley-Chapman-Robbins lower bound, we provide additional defensive mechanisms that an operator can deploy online to minimize information leakage. For instance, we propose a method to modify the structured inputs in order to maximize the usage of the existing noise in the system. This mechanism allows us to deal explicitly with the privacy-utility trade-off, which is of interest when optimal control problems are considered. Finally, we show how the adversary's certainty about the event increases as a function of the number of samples collected. For instance, we provide sufficient conditions for when the estimation variance starts to converge to its final value. This information can be used by an operator to estimate when possible attacks could occur, and to change the CPS before then, rendering the adversary's collected information useless.
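For reference, the Hammersley-Chapman-Robbins bound used throughout states that any unbiased estimator θ̂ of a parameter θ, computed from data Y with density p_θ, satisfies

```latex
\operatorname{Var}_{\theta}\bigl(\hat{\theta}\bigr)
  \;\ge\; \sup_{h \neq 0}
  \frac{h^{2}}
       {\mathbb{E}_{\theta}\!\left[\Bigl(\tfrac{p_{\theta+h}(Y)}{p_{\theta}(Y)} - 1\Bigr)^{2}\right]}
```

Unlike the Cramér-Rao bound, it requires no differentiability in θ, which is presumably what makes it suitable for the discrete, structured events studied here.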
Los estilos APA, Harvard, Vancouver, ISO, etc.
48

Do, Van Long. "Sequential detection and isolation of cyber-physical attacks on SCADA systems". Thesis, Troyes, 2015. http://www.theses.fr/2015TROY0032/document.

Texto completo
Resumen
This PhD thesis is part of the project "SCALA", which received financial support through the programme ANR-11-SECU-0005. Its ultimate objective is the on-line monitoring of Supervisory Control And Data Acquisition (SCADA) systems against cyber-physical attacks. The problem is formulated as the sequential detection and isolation of transient signals in stochastic dynamical systems in the presence of unknown system states and random noises. It is solved using the analytical redundancy approach, which consists of two steps: residual generation and residual evaluation. The residuals are first generated by both Kalman filter and parity space approaches. They are then evaluated using sequential analysis techniques that take into account certain criteria of optimality. However, these classical criteria are not adequate for the surveillance of safety-critical infrastructures. For such applications, it is suggested to minimize the worst-case probability of missed detection subject to acceptable levels of the worst-case probabilities of false alarm and false isolation. For the detection task, the optimization problem is formulated and solved in both scenarios: exactly known and partially known parameters. Sub-optimal tests are obtained and their statistical properties are investigated. Preliminary results for the isolation task are also obtained. The proposed algorithms are applied to the detection and isolation of malicious attacks on a simple SCADA water network.
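A generic innovations-based residual generator of the kind used in the first step is sketched below, assuming a linear state-space model x_{t+1} = A x_t + w_t, y_t = C x_t + v_t; the scalar example system is an illustrative assumption, not the thesis's SCADA model.

```python
import numpy as np

def innovation_residuals(y, A, C, Q, R, x0, P0):
    """Kalman-filter innovations e_t = y_t - C x_{t|t-1}, normalised by their
    standard deviation; under the fault-free model they are white, so they
    can feed a sequential (e.g. CUSUM-type) evaluation stage."""
    x, P = x0, P0
    res = []
    for yt in y:
        x, P = A @ x, A @ P @ A.T + Q           # time update (prediction)
        e = yt - C @ x                          # innovation
        S = C @ P @ C.T + R                     # innovation covariance
        res.append(e / np.sqrt(np.diag(S)))
        K = P @ C.T @ np.linalg.inv(S)          # measurement update
        x, P = x + K @ e, (np.eye(len(x)) - K @ C) @ P
    return np.array(res)

# Illustrative scalar system: x_{t+1} = 0.9 x_t + w_t,  y_t = x_t + v_t
rng = np.random.default_rng(9)
A = np.array([[0.9]]); C = np.array([[1.0]])
Q = np.array([[0.1]]); R = np.array([[0.5]])
xs, ys = np.zeros(1), []
for _ in range(200):
    xs = A @ xs + rng.normal(0, np.sqrt(0.1), 1)
    ys.append(C @ xs + rng.normal(0, np.sqrt(0.5), 1))
res = innovation_residuals(np.array(ys), A, C, Q, R, np.zeros(1), np.eye(1))
```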
Los estilos APA, Harvard, Vancouver, ISO, etc.
49

Hedman, Pontus y Vasilios Skepetzis. "Intrångsdetektering på CAN bus data : En studie för likvärdig jämförelse av metoder". Thesis, Högskolan i Halmstad, Akademin för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-42354.

Texto completo
Resumen
There are known hacker attacks that have been conducted on modern vehicles. These attacks illustrate a need for early threat detection in this environment. Development of security systems in this environment is of special interest due to the increasing interconnection of vehicles and their newfound classification as IoT devices. Known attacks, some even carried out remotely, allow a perpetrator to stop a vehicle or to disable its brakes. This study examines the detection of attacks carried out on a real vehicle by studying CAN bus messages. The two methods CUSUM, from the field of change point detection, and Random Forests, from the field of machine learning, are both applied to real data and then comparably evaluated on simulated data. A new hypothesis definition is introduced which allows the evaluation method conditional expected delay to be used in the case of Random Forests, so that results may be compared with evaluation results from CUSUM. Conditional expected delay had not previously been studied in the machine learning case. Both methods are also evaluated using ROC curves. The combined hypothesis definition for the two separate fields allows a comparison between the two models with respect to each other's established evaluation methods. This study presents a method and hypothesis to bridge the two separate fields of study, change point detection and machine learning, to achieve a comparable evaluation between the two.
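The evaluation criterion on which the two models are compared can be estimated from repeated simulation runs roughly as follows; treating runs with no alarm or a pre-change alarm as excluded is one reasonable convention, not necessarily the exact protocol of the thesis.

```python
import numpy as np

def conditional_expected_delay(alarm_times, change_point):
    """Average detection delay over runs where an alarm occurred at or after
    the true change point; runs with no alarm or a false alarm are excluded."""
    delays = [a - change_point for a in alarm_times
              if a is not None and a >= change_point]
    return float(np.mean(delays)) if delays else float("nan")

# e.g. alarm times from five simulated runs of some detector, change at t = 100
print(conditional_expected_delay([104, 98, None, 110, 101], change_point=100))  # 5.0
```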
Los estilos APA, Harvard, Vancouver, ISO, etc.
50

Le, bars Batiste. "Event detection and structure inference for graph vectors". Thesis, université Paris-Saclay, 2021. http://www.theses.fr/2021UPASM003.

Texto completo
Resumen
This thesis addresses different problems around the analysis and modeling of graph signals, i.e. vector data that are observed over graphs. In particular, we are interested in two tasks. The first is the problem of event detection, i.e. anomaly or change-point detection, in a set of graph vectors. The second concerns the inference of the graph structure underlying the observed graph vectors contained in a data set. At first, our work takes an application-oriented aspect, in which we propose a method for detecting antenna failures or breakdowns in a telecommunication network. The proposed approach is designed to be effective for communication networks in a broad sense and implicitly takes into account the underlying graph structure of the data. Secondly, a new method for graph structure inference within the framework of Graph Signal Processing is investigated. In this problem, notions of both local and global smoothness, with respect to the underlying graph, are imposed on the vectors. Finally, we propose to combine the graph learning task with the change-point detection problem. This time, a probabilistic framework is considered to model the vectors, which are assumed to be distributed according to a specific Markov random field. In the considered modeling, the graph underlying the data is allowed to evolve in time, and a change point is detected whenever this graph changes significantly.
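The global smoothness notion invoked above is usually formalised as the graph Laplacian quadratic form; a minimal sketch, where the rows of X are the observed graph vectors and W is a symmetric weight matrix (both hypothetical inputs):

```python
import numpy as np

def laplacian_quadratic(X, W):
    """Global smoothness of the signals in X over the graph with weights W:
    trace(X L X^T) with L = D - W the combinatorial Laplacian. Small values
    mean the signals vary little across strongly connected nodes."""
    L = np.diag(W.sum(axis=1)) - W
    return float(np.trace(X @ L @ X.T))

W = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # 3-node path graph
X = np.array([[1.0, 1.1, 1.2]])                               # a smooth signal
print(laplacian_quadratic(X, W))                              # small value
```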
Los estilos APA, Harvard, Vancouver, ISO, etc.