Theses on the topic "Data correlation with time stamp"

To see other types of publications on this topic, follow this link: Data correlation with time stamp.

Create an accurate reference in APA, MLA, Chicago, Harvard, and several other citation styles.

Choose a source:

Consult the 36 best dissertations for your research on the topic "Data correlation with time stamp".

Next to each source in the list of references, there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever this information is included in the metadata.

Browse theses on a wide variety of disciplines and compile your bibliography correctly.

1

Yang, Hsueh-szu, and Benjamin Kupferschmidt. "Time Stamp Synchronization in Video Systems". International Foundation for Telemetering, 2010. http://hdl.handle.net/10150/605988.

Full text
Abstract:
ITC/USA 2010 Conference Proceedings / The Forty-Sixth Annual International Telemetering Conference and Technical Exhibition / October 25-28, 2010 / Town and Country Resort & Convention Center, San Diego, California
Synchronized video is crucial for data acquisition and telecommunication applications. For real-time applications, out-of-sync video may cause jitter, choppiness and latency. For data analysis, it is important to synchronize multiple video channels and data that are acquired from PCM, MIL-STD-1553 and other sources. Nowadays, video codecs can be easily obtained to play most types of video. However, a great deal of effort is still required to develop the synchronization methods that are used in a data acquisition system. This paper will describe several methods that TTC has adopted in our system to improve the synchronization of multiple data sources.
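At the analysis stage, the synchronization problem described above comes down to placing independently clocked data sources on a common timeline. As a purely illustrative sketch (not the TTC method, which the abstract does not spell out), the Python snippet below matches each video frame timestamp to its nearest PCM sample by binary search; the sampling rates and clock offset are made-up values.

```python
import numpy as np

def align_to_nearest(frame_ts, pcm_ts):
    """For each frame timestamp, return the index of the nearest PCM timestamp.

    Both arrays must be sorted in ascending order (seconds).
    """
    idx = np.searchsorted(pcm_ts, frame_ts)        # insertion points
    idx = np.clip(idx, 1, len(pcm_ts) - 1)         # keep both neighbours in range
    left, right = pcm_ts[idx - 1], pcm_ts[idx]
    idx -= (frame_ts - left) < (right - frame_ts)  # step back if the left one is closer
    return idx

# Hypothetical streams: 30 fps video vs. 1 kHz PCM, with a small clock offset.
video_ts = np.arange(0, 10, 1 / 30) + 0.0042
pcm_ts = np.arange(0, 10, 1 / 1000)
nearest = align_to_nearest(video_ts, pcm_ts)
print(np.max(np.abs(pcm_ts[nearest] - video_ts)))  # worst-case alignment error
```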
2

Ahsan, Ramoza. "Time Series Data Analytics". Digital WPI, 2019. https://digitalcommons.wpi.edu/etd-dissertations/529.

Full text
Abstract:
Given the ubiquity of time series data and the exponential growth of databases, there has recently been an explosion of interest in time series data mining. Finding similar trends and patterns among time series data is critical for many applications ranging from financial planning, weather forecasting and stock analysis to policy making. With time series being high-dimensional objects, detection of similar trends, especially at the granularity of subsequences or among time series of different lengths and temporal misalignments, incurs prohibitively high computation costs. Finding trends using non-metric correlation measures further compounds the complexity, as traditional pruning techniques cannot be directly applied. My dissertation addresses these challenges while meeting the need to achieve near real-time responsiveness. First, for retrieving exact similarity results using Lp-norm distances, we design a two-layered time series index for subsequence matching. Time series relationships are compactly organized in a directed acyclic graph embedded with similarity vectors capturing subsequence similarities. Powerful pruning strategies leveraging the graph structure greatly reduce the number of time series as well as subsequence comparisons, resulting in a speed-up of several orders of magnitude. Second, to support a rich diversity of correlation analytics operations, we compress time series into Euclidean-based clusters augmented by a compact overlay graph encoding correlation relationships. Such a framework supports a rich variety of operations, including retrieving positive or negative correlations, self-correlations and finding groups of correlated sequences. Third, to support flexible similarity specification using computationally expensive warped distances like Dynamic Time Warping, we design data reduction strategies leveraging the inexpensive Euclidean distance, with subsequent time-warped matching on the reduced data. This facilitates the comparison of sequences of different lengths and with flexible alignment, still within a few seconds of response time. Comprehensive experimental studies using real-world and synthetic datasets demonstrate the efficiency, effectiveness and quality of the results achieved by our proposed techniques as compared to state-of-the-art methods.
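For orientation, the brute-force baseline that such an index must beat can be stated in a few lines: a sliding z-normalized Euclidean subsequence search. The sketch below is this naive O(n·m) reference point, not the dissertation's two-layered index.

```python
import numpy as np

def znorm(x):
    s = x.std()
    return (x - x.mean()) / s if s > 0 else x - x.mean()

def best_match(query, series):
    """Brute-force z-normalized Euclidean subsequence search (O(n*m))."""
    m, best, best_pos = len(query), np.inf, -1
    q = znorm(np.asarray(query, float))
    for i in range(len(series) - m + 1):
        d = np.linalg.norm(q - znorm(np.asarray(series[i:i + m], float)))
        if d < best:
            best, best_pos = d, i
    return best_pos, best

rng = np.random.default_rng(0)
series = rng.standard_normal(2000).cumsum()
print(best_match(series[700:750], series))  # -> (700, 0.0)
```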
3

Hedlund, Tobias, and Xingya Zhou. "Correlation and Graphical Presentation of Event Data from a Real-Time System". Thesis, Uppsala University, Department of Information Technology, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-88741.

Full text
Abstract:

Event data from different parts of a system may be found recorded in event logs. Often the individual logs only show a small part of the system, but by correlating different sources into a consistent context it is possible to gain further information and a wider view. This facilitates finding sources of errors or of certain behaviors within the system.

This thesis presents the correlation possibilities between event data from different layers of the Ericsson Connectivity Packet Platform (CPP). This was done by first developing and using a test base application for the OSE operating system, through which event data can be recorded for the same test cases. The log files containing the event data have been studied, and results are presented regarding format, structure and content. For reading and storing the event data, suggestions for interpreters and data models are also provided. Finally, a prototype application is presented, which provides the defined interpreters, data models and a graphical user interface to represent the event data and event data correlations. The programming was conducted in Java, and the application is implemented as an Eclipse plug-in. With the help of the application the user gets a better overview and a more intuitive way of working with the event data.

4

Zhang, Kang. "Learning time series data using cross correlation and its application in bitcoin price prediction". Thesis (M. Eng.), Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2014. http://hdl.handle.net/1721.1/91884.

Full text
Abstract:
In this work, we developed a quantitative trading algorithm for bitcoin that is shown to be profitable. The algorithm establishes a framework that combines parametric and non-parametric variables in a logistic regression model, capturing information in both the static states and the evolution of states. The combination improves the performance of the strategy. In addition, we demonstrated that we can discover curve similarity of time series using cross correlation and L2 distance. The similarity metrics can be efficiently computed using convolution and can help us learn from past instances using an ensemble voting scheme.
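The claim that cross-correlation and L2 similarity can be computed efficiently with convolution follows from the identity ||q − w||² = ||q||² + ||w||² − 2⟨q, w⟩, where the sliding dot products ⟨q, w⟩ are a convolution with the reversed query. A minimal numpy sketch of that equivalence (illustrative; the thesis's own code is not reproduced here):

```python
import numpy as np

def sliding_l2_via_correlation(query, series):
    """Sliding L2 distances between `query` and every window of `series`,
    using ||q - w||^2 = ||q||^2 + ||w||^2 - 2<q, w>, where the dot products
    <q, w> are a cross-correlation (a convolution with the flipped query)."""
    q, s = np.asarray(query, float), np.asarray(series, float)
    m = len(q)
    dots = np.convolve(s, q[::-1], mode="valid")           # <q, w> for every window
    win_sq = np.convolve(s * s, np.ones(m), mode="valid")  # ||w||^2 for every window
    return np.sqrt(np.maximum(q @ q + win_sq - 2 * dots, 0.0))

rng = np.random.default_rng(0)
series = rng.standard_normal(500).cumsum()  # hypothetical price-like path
query = series[200:250]
print(np.argmin(sliding_l2_via_correlation(query, series)))  # -> 200
```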
5

Huo, Shiyin. "Detecting Self-Correlation of Nonlinear, Lognormal, Time-Series Data via DBSCAN Clustering Method, Using Stock Price Data as Example". The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1321989426.

Full text
6

黎文傑 and Man-kit Lai. "Some results on the statistical analysis of directional data". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1994. http://hub.hku.hk/bib/B31211550.

Full text
7

Lai, Man-kit. "Some results on the statistical analysis of directional data". [Hong Kong: University of Hong Kong], 1994. http://sunzi.lib.hku.hk/hkuto/record.jsp?B13787950.

Full text
8

Zheng, Xueying, and 郑雪莹. "Robust joint mean-covariance model selection and time-varying correlation structure estimation for dependent data". Ph.D. thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2013. http://hub.hku.hk/bib/B50899703.

Full text
Abstract:
In longitudinal and spatio-temporal data analysis, repeated measurements from a subject can be either regional- or temporal-dependent. The correct specification of the within-subject covariance matrix cultivates an efficient estimation for mean regression coefficients. In this thesis, robust estimation for the mean and covariance jointly for the regression model of longitudinal data within the framework of generalized estimating equations (GEE) is developed. The proposed approach integrates the robust method and joint mean-covariance regression modeling. Robust generalized estimating equations using bounded scores and leverage-based weights are employed for the mean and covariance to achieve robustness against outliers. The resulting estimators are shown to be consistent and asymptotically normally distributed. Robust variable selection method in a joint mean and covariance model is considered, by proposing a set of penalized robust generalized estimating equations to estimate simultaneously the mean regression coefficients, the generalized autoregressive coefficients and innovation variances introduced by the modified Cholesky decomposition. The set of estimating equations select important covariate variables in both mean and covariance models together with the estimating procedure. Under some regularity conditions, the oracle property of the proposed robust variable selection method is developed. For these two robust joint mean and covariance models, simulation studies and a hormone data set analysis are carried out to assess and illustrate the small sample performance, which show that the proposed methods perform favorably by combining the robustifying and penalized estimating techniques together in the joint mean and covariance model. Capturing dynamic change of time-varying correlation structure is both interesting and scientifically important in spatio-temporal data analysis. The time-varying empirical estimator of the spatial correlation matrix is approximated by groups of selected basis matrices which represent substructures of the correlation matrix. After projecting the correlation structure matrix onto the space spanned by basis matrices, varying-coefficient model selection and estimation for signals associated with relevant basis matrices are incorporated. The unique feature of the proposed model and estimation is that time-dependent local region signals can be detected by the proposed penalized objective function. In theory, model selection consistency on detecting local signals is provided. The proposed method is illustrated through simulation studies and a functional magnetic resonance imaging (fMRI) data set from an attention deficit hyperactivity disorder (ADHD) study.
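The modified Cholesky decomposition referred to here rewrites a within-subject covariance matrix Σ as TΣTᵀ = D, with T unit lower-triangular holding the generalized autoregressive coefficients and D holding the innovation variances. The numpy sketch below performs just this decomposition; the robust, penalized GEE machinery built on top of it is the thesis's contribution and is not reproduced.

```python
import numpy as np

def modified_cholesky(sigma):
    """Return (T, D) with T unit lower-triangular and D diagonal such that
    T @ sigma @ T.T = D. Row t of -T holds the autoregressive coefficients
    of measurement t on measurements 1..t-1; D holds innovation variances."""
    p = sigma.shape[0]
    T = np.eye(p)
    d = np.empty(p)
    d[0] = sigma[0, 0]
    for t in range(1, p):
        phi = np.linalg.solve(sigma[:t, :t], sigma[:t, t])  # AR coefficients
        T[t, :t] = -phi
        d[t] = sigma[t, t] - sigma[:t, t] @ phi             # innovation variance
    return T, np.diag(d)

sigma = np.array([[1.0, 0.5, 0.25], [0.5, 1.0, 0.5], [0.25, 0.5, 1.0]])
T, D = modified_cholesky(sigma)
print(np.allclose(T @ sigma @ T.T, D))  # True
```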
9

Abou-Galala, Feras Moustafa. "True-time all optical performance monitoring by means of optical correlation". Columbus, Ohio: Ohio State University, 2007. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1180549555.

Full text
10

Aslan, Sipan. "Comparison of Missing Value Imputation Methods for Meteorological Time Series Data". Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12612426/index.pdf.

Full text
Abstract:
Dealing with missing data in spatio-temporal time series constitutes an important branch of the general missing data problem. Since the statistical properties of time-dependent data are characterized by the sequentiality of observations, any interruption of consecutiveness in a time series will cause severe problems. In order to make reliable analyses, missing data must in this case be handled cautiously without disturbing the statistical properties of the series, mainly its temporal and spatial dependencies. In this study we aimed to compare several imputation methods for the appropriate completion of missing values of spatio-temporal meteorological time series. For this purpose, several imputation methods are assessed on their performance for artificially created missing data in monthly total precipitation and monthly mean temperature series obtained from the climate stations of the Turkish State Meteorological Service. Artificially created missing data are estimated by using six methods. Single Arithmetic Average (SAA), Normal Ratio (NR) and NR Weighted with Correlations (NRWC) are the three simple methods used in the study. On the other hand, we used two computationally intensive methods for missing data imputation, called Multi Layer Perceptron type Neural Network (MLPNN) and Monte Carlo Markov Chain based on the Expectation-Maximization Algorithm (EM-MCMC). In addition to these, we propose a modification of the EM-MCMC method in which the results of the simple imputation methods are used as auxiliary variables. Besides using an accuracy measure based on squared errors, we propose the Correlation Dimension (CD) technique for the appropriate evaluation of imputation performance, which is also an important subject of Nonlinear Dynamic Time Series Analysis.
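Of the six estimators compared, the two simplest are easy to state concretely. The sketch below implements Single Arithmetic Average and one common formulation of the Normal Ratio method for a target station with several neighbours; the station values and long-term means are invented for illustration.

```python
import numpy as np

def saa(neighbor_values):
    """Single Arithmetic Average: mean of neighbouring stations' observations."""
    return np.nanmean(neighbor_values)

def normal_ratio(neighbor_values, neighbor_means, target_mean):
    """Normal Ratio: neighbours weighted by the ratio of long-term means."""
    v = np.asarray(neighbor_values, float)
    m = np.asarray(neighbor_means, float)
    ok = ~np.isnan(v)
    return np.mean((target_mean / m[ok]) * v[ok])

# Hypothetical monthly precipitation: two neighbours observe, one is also missing.
obs = np.array([42.0, 55.0, np.nan])      # this month's neighbour values (mm)
long_term = np.array([50.0, 60.0, 45.0])  # neighbour climatological means (mm)
print(saa(obs), normal_ratio(obs, long_term, target_mean=48.0))
```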
11

Tardivo, Gianmarco. "Methods for gap filling in long term meteorological series and correlation analysis of meteorological networks". Doctoral thesis, Università degli studi di Padova, 2013. http://hdl.handle.net/11577/3422634.

Full text
Abstract:
Climate data are very useful in many fields of scientific research. Nowadays, these data are often available through giant databases that are frequently produced by automatic meteorological networks. In order to make research analyses and the running of computational models possible, these databases need to be validated, homogenized, and free of missing values. Validation and homogenization are common operations nowadays: the organizations that manage these databases provide these services. The main problem remains the reconstruction of the missing data. This dissertation deals with two main topics: (a) the reconstruction of missing values in daily precipitation and temperature datasets; (b) a basic analysis of the correlation in time and space between the stations of a meteorological network. (a) First, a new adaptive method to reconstruct temperature data is described. This method is compared with a non-adaptive one. A detailed analysis of the effects of the number of predictors for a regression-based approach (to reconstruct daily temperature data) and of their search strategy is then presented. Precipitation and temperature are the most important climatological variables, so a method to reconstruct daily precipitation data is chosen through a comparison of four techniques. (b) The methods selected in phase (a) make it possible to reconstruct the two databases (precipitation and temperature) that are used for the next and last piece of work: the correlation analysis, through time and space, of network data.
12

Olli, Oscar. "Big Data in Small Tunnels: Turning Alarms Into Intelligence". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-292946.

Full text
Abstract:
In this thesis we examine methods for evaluating a traffic alarm system. Nuisance alarms can quickly increase the volume of alarms experienced by the alarm operator and obstruct their work. We propose two methods for removing a number of these nuisance alarms, so that events of higher priority can be targeted. A parallel correlation analysis demonstrated significant correlation between single alarms and clusters of alarms, presenting a strong case for causality. While a serial correlation analysis was also performed, it could not establish evidence of consequential alarms. In order to assist Trafikverket with maintenance scheduling, a long short-term memory (LSTM) model is used to predict univariate time series of discretely binned alarm sequences. Experiments conclude that the LSTM model provides higher precision for alarm sequences with higher repeatability and recurring patterns. For other, randomly occurring alarms, the model performs unsatisfactorily.
13

Patel, Tejashkumar. "Analysis of the Trend of Historical Temperature and Historic CO2 Levels Over the Past 800,000 Years by Short Time Cross Correlation Technique". Thesis, Linnéuniversitetet, Institutionen för fysik och elektroteknik (IFE), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-105031.

Full text
Abstract:
The carbon dioxide concentration in Earth's atmosphere is currently at 417 parts per million (ppm) and keeps rising. Historic CO2 levels and historic temperature levels have been cycling over the past 800,000 years. To study the trend of CO2 and temperature over the past 800,000 years, one needs to find out the relation between historic CO2 and historic temperature levels. In this project, we perform different tasks to identify the trend influencer between CO2 and temperature. The cross correlation technique is used to find out the relation between two random signals; temperature and CO2 data are considered as two random signals. Re-sampling by interpolation is applied to both the CO2 and temperature data to change the sampling rate. The short time cross correlation technique is then employed on the CO2 and temperature data over different time windows to find out the time lag. Time lag refers to how far the signals are offset.
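The core computation, estimating in each window the lag at which the normalized cross-correlation of the resampled series peaks, can be sketched as follows. This is a generic single-window estimator, not the thesis code; the window length and maximum lag are arbitrary choices.

```python
import numpy as np

def window_lag(x, y, max_lag):
    """Lag (in samples) at which the cross-correlation of two equal-length,
    z-normalized windows peaks; positive means y lags behind x."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    lags = np.arange(-max_lag, max_lag + 1)
    cc = [np.mean(x[max(0, -k):len(x) - max(0, k)] *
                  y[max(0, k):len(y) - max(0, -k)]) for k in lags]
    return lags[int(np.argmax(cc))]

# Short-time version: slide a window over resampled, aligned series x and y, e.g.
# lags = [window_lag(x[i:i+W], y[i:i+W], max_lag=50) for i in range(0, len(x)-W, step)]
```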
14

Su, Yu. "Big Data Management Framework based on Virtualization and Bitmap Data Summarization". The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1420738636.

Full text
15

Schneider, Raimund [author], Joachim von Zanthier [academic supervisor] and Joachim von Zanthier [reviewer]. "Correlation experiments and data evaluation techniques with classical light sources in space and time / Raimund Schneider ; Gutachter: Joachim von Zanthier ; Betreuer: Joachim von Zanthier". Erlangen: Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 2018. http://d-nb.info/1176809768/34.

Full text
16

Chen, I-Chen. "Improved Methods and Selecting Classification Types for Time-Dependent Covariates in the Marginal Analysis of Longitudinal Data". UKnowledge, 2018. https://uknowledge.uky.edu/epb_etds/19.

Full text
Abstract:
Generalized estimating equations (GEE) are popularly utilized for the marginal analysis of longitudinal data. In order to obtain consistent regression parameter estimates, these estimating equations must be unbiased. However, when certain types of time-dependent covariates are present, these equations can be biased unless an independence working correlation structure is employed. Moreover, in this case regression parameter estimation can be very inefficient because not all valid moment conditions are incorporated within the corresponding estimating equations. Therefore, approaches using the generalized method of moments or quadratic inference functions have been proposed for utilizing all valid moment conditions. However, we have found that such methods will not always provide valid inference and can also be improved upon in terms of finite-sample regression parameter estimation. Therefore, we propose a modified GEE approach and a selection method that will both ensure the validity of inference and improve regression parameter estimation. In addition, these modified approaches assume the data analyst knows the type of time-dependent covariate, although this likely is not the case in practice. Whereas hypothesis testing has been used to determine covariate type, we propose a novel strategy to select a working covariate type in order to avoid the potentially high type II error rates of these hypothesis testing procedures. Parameter estimates resulting from our proposed method are consistent and have overall improved mean squared error relative to hypothesis testing approaches. Finally, for some real-world examples the use of mean regression models may be sensitive to skewness and outliers in the data; we therefore extend our approaches to marginal quantile regression, modeling the conditional quantiles of the response variable. Existing and proposed methods are compared in simulation studies and application examples.
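For context, the baseline being improved upon, GEE with an independence working correlation, which the abstract notes keeps the estimating equations unbiased with time-dependent covariates, can be fit with statsmodels. The sketch below uses simulated data; the variable names and the data-generating process are assumptions for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.genmod.cov_struct import Independence

# Hypothetical longitudinal data: repeated measures y on n subjects over t
# occasions, with a time-dependent covariate x.
rng = np.random.default_rng(1)
n, t = 100, 4
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n), t),
    "x": rng.standard_normal(n * t),
})
df["y"] = 0.5 * df["x"] + rng.standard_normal(n * t)

# The independence working correlation keeps the estimating equations unbiased
# for time-dependent covariates, at some cost in efficiency.
model = sm.GEE.from_formula("y ~ x", groups="subject", data=df,
                            family=sm.families.Gaussian(),
                            cov_struct=Independence())
print(model.fit().summary())
```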
17

Al Rababa'a, Abdel Razzaq. "Uncovering hidden information and relations in time series data with wavelet analysis: three case studies in finance". Thesis, University of Stirling, 2017. http://hdl.handle.net/1893/25961.

Full text
Abstract:
This thesis aims to provide new insights into the importance of decomposing aggregate time series data using the Maximum Overlap Discrete Wavelet Transform. In particular, the analysis throughout this thesis involves decomposing aggregate financial time series data at hand into approximation (low-frequency) and detail (high-frequency) components. Following this, information and hidden relations can be extracted for different investment horizons, as matched with the detail components. The first study examines the ability of different GARCH models to forecast stock return volatility in eight international stock markets. The results demonstrate that de-noising the returns improves the accuracy of volatility forecasts regardless of the statistical test employed. After de-noising, the asymmetric GARCH approach tends to be preferred, although that result is not universal. Furthermore, wavelet de-noising is found to be more important at the key 99% Value-at-Risk level compared to the 95% level. The second study examines the impact of fourteen macroeconomic news announcements on the stock and bond return dynamic correlation in the U.S. from the day of the announcement up to sixteen days afterwards. Results conducted over the full sample offer very little evidence that macroeconomic news announcements affect the stock-bond return dynamic correlation. However, after controlling for the financial crisis of 2007-2008 several announcements become significant both on the announcement day and afterwards. Furthermore, the study observes that news released early in the day, i.e. before 12 pm, and in the first half of the month, exhibit a slower effect on the dynamic correlation than those released later in the month or later in the day. While several announcements exhibit significance in the 2008 crisis period, only CPI and Housing Starts show significant and consistent effects on the correlation outside the 2001, 2008 and 2011 crises periods. The final study investigates whether recent returns and the time-scaled return can predict the subsequent trading in ten stock markets. The study finds little evidence that recent returns do predict the subsequent trading, though this predictability is observed more over the long-run horizon. The study also finds a statistical relation between trading and return over the long-time investment horizons of [8-16] and [16-32] day periods. Yet, this relation is mostly a negative one, only being positive for developing countries. It also tends to be economically stronger during bull-periods.
18

Larsson, Klara, and Freja Ling. "Time Series Forecasting of the S&P Global Clean Energy Index Using a Multivariate LSTM". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-301904.

Full text
Abstract:
Clean energy and machine learning are subjects that play significant roles in shaping our future. The current climate crisis has forced the world to take action towards more sustainable solutions. Arrangements such as the UN’s Sustainable Development Goals and the Paris Agreement are causing an increased interest in renewable energy solutions. Further, the EU Taxonomy Regulation, applied in 2020, aims to scale up sustainable investments and to direct cash flows toward sustainable projects and activities. These measures create interest in investing in renewable energy alternatives and predicting future movements of stocks related to these businesses. Machine learning models have previously been used to predict time series with promising results. However, predicting time series in the form of stock price indices has, throughout previous attempts, proved to be a difficult task due to the complexity of the variables that play a role in the indices’ movements. This paper uses the machine learning algorithm long short-term memory (LSTM) to predict the S&P Global Clean Energy Index. The research question revolves around how well the LSTM model performs on this specific index and how the result is affected when past returns from correlating variables are added to the model. The researched variables are crude oil price, gold price, and interest. A model for each correlating variable was created, as well as one with all three, and one standard model which used only historical data from the index. The study found that while the model with the variable which had the strongest correlation performed best among the multivariate models, the standard model using only the target variable gave the most accurate result of any of the LSTM models.
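A minimal univariate version of such a setup (the "standard model" that uses only the index's own history) might look like the tf.keras sketch below; the window length, layer sizes and synthetic series are assumptions, not the thesis configuration. Adding the correlating variables (oil, gold, interest rate) would amount to widening the final input dimension from 1 to 4.

```python
import numpy as np
import tensorflow as tf

def make_windows(series, lookback):
    """Turn a 1-D series into (samples, lookback, 1) inputs and next-step targets."""
    X = np.stack([series[i:i + lookback] for i in range(len(series) - lookback)])
    y = series[lookback:]
    return X[..., None], y

series = np.sin(np.linspace(0, 60, 1000)).astype("float32")  # stand-in for the index
X, y = make_windows(series, lookback=30)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(30, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print(model.predict(X[-1:], verbose=0))  # one-step-ahead forecast
```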
19

Xu, Tianbing. "Nonparametric evolutionary clustering". Diss., 2009. Online access via UMI.

Find full text
20

Kaphle, Manindra R. "Analysis of acoustic emission data for accurate damage assessment for structural health monitoring applications". Thesis, Queensland University of Technology, 2012. https://eprints.qut.edu.au/53201/1/Manindra_Kaphle_Thesis.pdf.

Full text
Abstract:
Structural health monitoring (SHM) refers to the procedure used to assess the condition of structures so that their performance can be monitored and any damage can be detected early. Early detection of damage and appropriate retrofitting will aid in preventing failure of the structure and save money spent on maintenance or replacement and ensure the structure operates safely and efficiently during its whole intended life. Though visual inspection and other techniques such as vibration based ones are available for SHM of structures such as bridges, the use of acoustic emission (AE) technique is an attractive option and is increasing in use. AE waves are high frequency stress waves generated by rapid release of energy from localised sources within a material, such as crack initiation and growth. AE technique involves recording these waves by means of sensors attached on the surface and then analysing the signals to extract information about the nature of the source. High sensitivity to crack growth, ability to locate source, passive nature (no need to supply energy from outside, but energy from damage source itself is utilised) and possibility to perform real time monitoring (detecting crack as it occurs or grows) are some of the attractive features of AE technique. In spite of these advantages, challenges still exist in using AE technique for monitoring applications, especially in the area of analysis of recorded AE data, as large volumes of data are usually generated during monitoring. The need for effective data analysis can be linked with three main aims of monitoring: (a) accurately locating the source of damage; (b) identifying and discriminating signals from different sources of acoustic emission and (c) quantifying the level of damage of AE source for severity assessment. In AE technique, the location of the emission source is usually calculated using the times of arrival and velocities of the AE signals recorded by a number of sensors. But complications arise as AE waves can travel in a structure in a number of different modes that have different velocities and frequencies. Hence, to accurately locate a source it is necessary to identify the modes recorded by the sensors. This study has proposed and tested the use of time-frequency analysis tools such as short time Fourier transform to identify the modes and the use of the velocities of these modes to achieve very accurate results. Further, this study has explored the possibility of reducing the number of sensors needed for data capture by using the velocities of modes captured by a single sensor for source localization. A major problem in practical use of AE technique is the presence of sources of AE other than crack related, such as rubbing and impacts between different components of a structure. These spurious AE signals often mask the signals from the crack activity; hence discrimination of signals to identify the sources is very important. This work developed a model that uses different signal processing tools such as cross-correlation, magnitude squared coherence and energy distribution in different frequency bands as well as modal analysis (comparing amplitudes of identified modes) for accurately differentiating signals from different simulated AE sources. Quantification tools to assess the severity of the damage sources are highly desirable in practical applications. 
Though different damage quantification methods have been proposed in AE technique, not all have achieved universal approval or have been approved as suitable for all situations. The b-value analysis, which involves the study of distribution of amplitudes of AE signals, and its modified form (known as improved b-value analysis), was investigated for suitability for damage quantification purposes in ductile materials such as steel. This was found to give encouraging results for analysis of data from laboratory, thereby extending the possibility of its use for real life structures. By addressing these primary issues, it is believed that this thesis has helped improve the effectiveness of AE technique for structural health monitoring of civil infrastructures such as bridges.
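The b-value analysis mentioned above fits the slope of the amplitude-frequency distribution of AE hits. One common maximum-likelihood formulation (Aki's estimator, with AE peak amplitudes in dB mapped to magnitudes by dividing by 20) is sketched below; this is the generic b-value, not the improved b-value variant the thesis investigates.

```python
import numpy as np

def ae_b_value(amplitudes_db, threshold_db):
    """Aki-style maximum-likelihood b-value from AE peak amplitudes.

    Amplitudes (dB) are mapped to magnitudes m = A/20; only hits at or above
    the completeness threshold contribute.
    """
    m = np.asarray(amplitudes_db, float) / 20.0
    mc = threshold_db / 20.0
    m = m[m >= mc]
    return np.log10(np.e) / (m.mean() - mc)

rng = np.random.default_rng(2)
amps = 40.0 + rng.exponential(scale=10.0, size=5000)  # synthetic AE amplitudes, dB
print(ae_b_value(amps, threshold_db=40.0))
```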
21

dePillis-Lindheim, Lydia. "Disease Correlation Model: Application to Cataract Incidence in the Presence of Diabetes". Scholarship @ Claremont, 2013. http://scholarship.claremont.edu/scripps_theses/294.

Full text
Abstract:
Diabetes is a major risk factor for the development of cataract [3,14,20,22]. In this thesis, we create a model that allows us to understand the incidence of one disease in the context of another; in particular, cataract in the presence of diabetes. The World Health Organization's Vision 2020 blindness-prevention initiative administers surgeries to remove cataracts, the leading cause of blindness worldwide [24]. One of the geographic areas most impacted by cataract-related blindness is Sub-Saharan Africa. In order to plan the number of surgeries to administer, the World Health Organization uses data on cataract prevalence. However, an estimation of the incidence of cataract is more useful than prevalence data for the purpose of resource planning. In 2012, Dray and Williams developed a method for estimating incidence based on prevalence data [5]. Incidence estimates can be further refined by considering associated risk factors such as diabetes. We therefore extend the Dray and Williams model to include diabetes prevalence when calculating cataract incidence estimates. We explore two possible approaches to our model construction, one a detailed extension, and the other, a simplification of that extension. We provide a discussion comparing the two approaches.
22

Chen, Kai. "Mitigating Congestion by Integrating Time Forecasting and Realtime Information Aggregation in Cellular Networks". FIU Digital Commons, 2011. http://digitalcommons.fiu.edu/etd/412.

Full text
Abstract:
An iterative travel time forecasting scheme, named the Advanced Multilane Prediction based Real-time Fastest Path (AMPRFP) algorithm, is presented in this dissertation. This scheme is derived from the conventional kernel estimator based prediction model by associating real-time nonlinear impacts caused by neighboring arcs' traffic patterns with historical traffic behaviors. The AMPRFP algorithm is evaluated by predicting the travel time of congested arcs in the urban area of Jacksonville City. Experiment results illustrate that the proposed scheme is able to significantly reduce both the relative mean error (RME) and the root-mean-squared error (RMSE) of the predicted travel time. To obtain high quality real-time traffic information, which is essential to the performance of the AMPRFP algorithm, a data clean scheme enhanced empirical learning (DCSEEL) algorithm is also introduced. This novel method investigates the correlation between distance and direction in the geometrical map, which is not considered in existing fingerprint localization methods. Specifically, empirical learning methods are applied to minimize the error that exists in the estimated distance. A direction filter is developed to clean joints that have a negative influence on the localization accuracy. Synthetic experiments in urban, suburban and rural environments are designed to evaluate the performance of the DCSEEL algorithm in determining the cellular probe's position. The results show that the cellular probe's localization accuracy can be notably improved by the DCSEEL algorithm. Additionally, a new fast correlation technique is developed to overcome the time-efficiency problem of the existing correlation-algorithm-based floating car data (FCD) technique. The matching process is transformed into a 1-dimensional (1-D) curve matching problem, and the Fast Normalized Cross-Correlation (FNCC) algorithm is introduced to supersede the Pearson product-moment correlation coefficient (PMCC) algorithm in order to meet the real-time requirement of the FCD method. The fast correlation technique shows a significant improvement in reducing the computational cost without affecting the accuracy of the matching process.
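The speed advantage of FNCC over a direct sliding PMCC computation comes from obtaining all window dot products with a single FFT convolution and all window means and variances from cumulative sums. A compact 1-D version of that idea (illustrative, not the dissertation's implementation):

```python
import numpy as np
from scipy.signal import fftconvolve

def fncc_1d(template, signal):
    """Normalized cross-correlation of `template` against every same-length
    window of `signal`; dot products come from one FFT convolution and
    window statistics from cumulative sums, replacing a loop of Pearson
    computations."""
    t = np.asarray(template, float)
    s = np.asarray(signal, float)
    t = t - t.mean()
    m = len(t)
    dots = fftconvolve(s, t[::-1], mode="valid")     # <t_centered, window>
    csum = np.concatenate(([0.0], np.cumsum(s)))
    csum2 = np.concatenate(([0.0], np.cumsum(s * s)))
    wsum = csum[m:] - csum[:-m]                      # per-window sums
    wvar = (csum2[m:] - csum2[:-m]) - wsum ** 2 / m  # per-window squared deviation
    return dots / np.sqrt(np.maximum(wvar * (t @ t), 1e-12))

rng = np.random.default_rng(0)
s = rng.standard_normal(10_000)
print(int(np.argmax(fncc_1d(s[4000:4200], s))))  # -> 4000
```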
23

Lutshete, Sizwe. "An analysis of the correlation between packet loss and network delay on the performance of congested networks and their impact: case study University of Fort Hare". Thesis, University of Fort Hare, 2013. http://hdl.handle.net/10353/d1006843.

Full text
Abstract:
In this paper we study packet delay and loss rate on the University of Fort Hare network. The focus of this paper is to evaluate the information derived from a multipoint measurement of the University of Fort Hare network, collected over a duration of three months, from June 2011 to August 2011, at the TSC uplink and at Ethernet hubs outside and inside the Internet firewall host. The specific value of this data set lies in the end-to-end instrumentation of all devices operating at the packet level, combined with the duration of observation. We provide measures for the normal day-to-day operation of the University of Fort Hare network both at off-peak times and during peak hours. We expect to show the impact of delay and loss rate on the University of Fort Hare network. The data set includes a number of areas where service quality (delay and packet loss) is extreme, moderate or good, and we examine the causes and the impact on network users.
24

Köthur, Patrick. "Visual analytics for detection and assessment of process-related patterns in geoscientific spatiotemporal data". Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät, 2016. http://dx.doi.org/10.18452/17397.

Full text
Abstract:
This thesis studied how visual analytics can facilitate the analysis of processes in geoscientific spatiotemporal data. Three novel visual analytics solutions were developed, each addressing an important analysis perspective. The first solution addresses the analysis of prominent spatial situations in the data and their occurrence over time. Hierarchical clustering is used to arrange all spatial situations in the data in a hierarchy of clusters. The combination with interactive visual analysis enables geoscientists to explore and alter the resulting hierarchy, to extract different sets of representative spatial situations, and to interpret and assess the corresponding spatiotemporal patterns. The second solution supports geoscientists in the analysis of prominent types of temporal behavior and their location in geographic space. Cluster ensembles are integrated with interactive visual exploration to enable users to systematically detect and interpret various types of temporal behavior in different data sets and to use this information for assessment of simulation model output. The third solution enables geoscientists to detect and analyze interrelations of temporal behavior in the data. Windowed cross-correlation, a technique for comparison of two individual time series, was extended to the comparison of entire ensembles of time series through visual analytics. This not only allows scientists to study interrelations, but also to assess how much these interrelations vary between two ensembles. All visual analytics solutions were developed following a rigorous user- and task-centered methodology and successfully applied to use cases in Earth system modeling, ocean modeling, paleoclimatology, and even cognitive science. The results of this thesis demonstrate that visual analytics successfully addresses important analysis perspectives and that it is a valuable approach to the analysis of process-related patterns in geoscientific spatiotemporal data.
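The building block being scaled up here, windowed cross-correlation of two individual time series, evaluates lagged Pearson correlations inside a sliding window. A plain two-series sketch follows (the ensemble extension and the visual analytics are the thesis's contribution and are not reproduced); the window length, lag range and toy signals are arbitrary.

```python
import numpy as np

def windowed_cross_correlation(x, y, window, max_lag, step=1):
    """For each window position, the lag in [-max_lag, max_lag] maximizing the
    Pearson correlation between x and y, plus that correlation value."""
    lags = np.arange(-max_lag, max_lag + 1)
    starts = range(max_lag, len(x) - window - max_lag, step)
    best_lag, best_r = [], []
    for s in starts:
        a = x[s:s + window]
        rs = [np.corrcoef(a, y[s + k:s + k + window])[0, 1] for k in lags]
        i = int(np.argmax(rs))
        best_lag.append(lags[i])
        best_r.append(rs[i])
    return np.array(list(starts)), np.array(best_lag), np.array(best_r)

rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 40, 1000)) + 0.1 * rng.standard_normal(1000)
y = np.roll(x, 7)  # y lags x by 7 samples
starts, lag, r = windowed_cross_correlation(x, y, window=100, max_lag=20, step=50)
print(np.median(lag))  # ~7
```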
25

Uhrich, Pierre. "Etude prospective d'un processeur optique en lumiere incoherente pour le traitement temps reel des donnees de radar a vision laterale". Université Louis Pasteur (Strasbourg) (1971-2008), 1988. http://www.theses.fr/1988STR13189.

Full text
Abstract:
A prospective study of a new optical processor performing real-time processing of side-looking radar (SAR) data is presented. The correlation stage of the optical computer, based on a particular "add and shift" mode of operation of CCD arrays, is validated experimentally. The use of this processor for the processing of other signals is also considered.
26

Sognestrand, Johanna, and Matilda Österberg. "KOLLEKTIVTRAFIKENS GEOGRAFISKA VARIATIONER I TID OCH KOSTNAD – HUR PÅVERKAR DETTA BOSTADSPRISERNA?: Fallstudie Uppsala län med pendlingsomland". Thesis, University of Gävle, Ämnesavdelningen för samhällsbyggnad, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-5881.

Full text
Abstract:

The distance between home and work has increased in recent decades. Through the development of infrastructure and public transport, jobs farther from home have become more accessible, and this development has in turn increased commuting. Commuting travellers often pass over administrative boundaries, which often serve as borders for public transport pricing; the market also influences prices. Research shows that travel times and costs significantly affect the choice to commute. Many people have an upper limit of 60 minutes commuting distance between home and work. How commuting costs affect the individual's choice of commuting varies depending on the individual's income and housing costs. The aim of our study was to see how public transport costs and travel times vary geographically. GIS, Geographic Information Systems, was used to make a network analysis which showed time distances and travel costs on maps. We also examined whether there was a link between towns' accessibility by public transport and the housing market, with the help of correlation and regression analysis. In order to answer our questions we used a study area consisting of Uppsala County with its surrounding commuting area. The maps showed how accessibility to larger towns varies among the smaller towns. Access is often best between bigger towns, while there is less accessibility between smaller towns. The distance to bus stops or the railway station also has a significant effect on the total travel time. Urban areas with access to rail services had the best opportunities to reach larger cities, which also gives better access to the labour market. From our study of Uppsala County, which has a monocentric structure, we could identify a link between accessibility to the bigger cities and housing prices in the surrounding towns: the higher the commuting costs and the longer the travel time to the central place, the lower the housing prices. A similar study of Stockholm, which has a polycentric structure, showed that this relationship between accessibility and house prices is not applicable to all regions. Here we can conclude that housing markets depend on many other factors than access to rapid public transport; house prices can depend on things like closeness to nature and water.


27

Hunter, Brandon. "Channel Probing for an Indoor Wireless Communications Channel". BYU ScholarsArchive, 2003. https://scholarsarchive.byu.edu/etd/64.

Full text
Abstract:
The statistics of the amplitude, time and angle of arrival of multipaths in an indoor environment are all necessary components of multipath models used to simulate the performance of spatial diversity in receive antenna configurations. The model presented by Saleh and Valenzuela, extended by Spencer et al., included all three of these parameters for a 7 GHz channel. A system was built to measure these multipath parameters at 2.4 GHz for multiple locations in an indoor environment. Another system was built to measure the angle of transmission for a 6 GHz channel. The addition of this parameter allows spatial diversity at the transmitter, along with the receiver, to be simulated. The process of going from raw measurement data to discrete arrivals and then to clustered arrivals is analyzed. Many possible errors associated with discrete arrival processing are discussed along with possible solutions. Four clustering methods are compared and their relative strengths and weaknesses are pointed out. The effects that errors in the clustering process have on parameter estimation and model performance are also simulated.
28

Bossi, Luca. "A novel microwave imaging RADAR for anti-personnel landmine detection and its integration on a multi-sensor robotic scanner". Doctoral thesis, 2022. http://hdl.handle.net/2158/1272665.

Full text
Abstract:
Thanks to funding obtained through the North Atlantic Treaty Organization, Science for Peace and Security (NATO SPS) G-5014 project, a multi-sensor robotic platform was developed, capable of identifying buried plastic and metal objects and generating data for subsequent classification of ordnance by specialized operators. Using a robotic platform increases safety for operators, because it can be fully controlled remotely via a web software interface, and it allows different sensors to be used to maximize the probability of mine detection while keeping the probability of false alarms to a minimum. The main sensors installed are two RADARs operating in the microwave spectrum (≃2 GHz): a UWB Ground Penetrating RADAR (GPR), specially developed to detect the position of a buried object within the illuminated area and capable of detecting buried objects while the platform is in motion, and a Holographic Subsurface RADAR (HSR), operating with a continuous wave at a single frequency, capable of generating holographic images that reveal the shape and dimensions of objects buried in the first 15-20 cm of the subsoil and, through processing with electromagnetic field inversion algorithms, of reconstructing the three-dimensional scene in front of the synthetic aperture of the RADAR. The images generated by this device make it possible to discriminate mines from other objects that reflect microwaves but are completely harmless (clutter). The HSR designed during the NATO SPS G-5014 project constitutes a first prototype that met the requirements of the project. The fruit of this work attracted interest in the scientific community and at NATO SPS, generating a sequel: the NATO SPS G-5731 project, which is still underway. My work is part of this latter project: I contributed to the development of a microwave imaging RADAR system capable of improving the performance of the HSR, both in terms of the quality of the images produced (by increasing the signal-to-noise ratio and the resolution) and in depth of penetration (by studying the electromagnetic characteristics of the soil of interest). I identified the parameters on which to intervene: the resolution obtainable by applying the mathematics of holography, the techniques and algorithms of electromagnetic field inversion, the study of the radiated electromagnetic environment, and the requirements of the radiating element (type of antenna, shape, size, radiated power), building one with three-dimensional printing technology. I evaluated and studied a solution to improve electromagnetic compatibility with the robotic system on which the RADAR will operate. To create a working prototype, I defined the requirements of the driving electronics and programmed the implemented devices. This text ends with a demonstration, through experimental tests in a controlled environment, of the performance of the new RADAR, highlighting the differences with respect to the original HSR.
Styles APA, Harvard, Vancouver, ISO, etc.
29

Nguyen, Philon. « Fast and scalable similarity and correlation queries on time series data ». Thesis, 2009. http://spectrum.library.concordia.ca/976363/1/MR63255.pdf.

Texte intégral
Résumé :
Time series are ubiquitous in many fields, ranging from financial applications such as the stock market to scientific applications and sensor data. Hence, interest in time series indexing has grown in recent years, driven by the need for fast methods for analyzing and querying datasets that are often too big for practical brute-force analysis. We begin by surveying the main contributions to the field over the past decade and a half. We then describe new solutions for correlation analysis on time series datasets using an existing index called the Compact Multi-Resolution Index (CMRI). We describe new algorithms for indexed correlation analysis using Pearson's product-moment coefficient and the multidimensional correlation coefficient, and introduce a new measure called Dynamic Time Warping Correlation (DTWC) based on Dynamic Time Warping (DTW). In addition to these linear correlation algorithms, we propose a rank-order correlation algorithm for non-linear monotonic relationships. To support these algorithms, we revised the Compact Multi-Resolution Index and propose a new index for time series datasets that improves on the size, speed and precision of CMRI. We call this index the reduced Compact Multi-Resolution Index (rCMRI). We evaluate the performance of rCMRI against CMRI for range queries and queries built on range queries.
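To show what such an index must beat, here is a hedged brute-force baseline for one of the query types above: scanning every subsequence of a series for high Pearson correlation with a query pattern. The synthetic series, window length and threshold are assumptions; the point of an index like rCMRI is to prune most of these comparisons.

```python
# Naive baseline: report every subsequence whose Pearson correlation with
# the query exceeds a threshold. Index structures such as rCMRI aim to
# prune most of these window comparisons.
import numpy as np

def correlated_subsequences(series, query, threshold=0.9):
    m = len(query)
    hits = []
    for start in range(len(series) - m + 1):
        r = np.corrcoef(series[start:start + m], query)[0, 1]
        if r >= threshold:
            hits.append((start, r))
    return hits

rng = np.random.default_rng(1)
s = np.cumsum(rng.standard_normal(1000))            # synthetic random walk
q = s[200:264] + 0.1 * rng.standard_normal(64)      # noisy copy of a window
print(correlated_subsequences(s, q)[:3])            # should include start ~200
```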
Styles APA, Harvard, Vancouver, ISO, etc.
30

Hou, Cheng-Yu, et 侯承育. « Explore The Correlation Between Appliance Use Time and Decline Curve Based on Big Data ». Thesis, 2016. http://ndltd.ncl.edu.tw/handle/hx9wdt.

Texte intégral
Résumé :
Master's thesis
National Chung Cheng University
Graduate Institute of Computer Science and Information Engineering
104 (ROC academic year)
With the fast development of the Internet of Everything, there has been a rapid rise in the electronic data produced by appliances. Long-term collection, operation, analysis and application of these data give rise to big data problems. To address them, the main purpose of this work is to use RHadoop to build a Bayesian regression model. Appliance data are collected from smart meters and converted into power features. After identifying the states of the power data with a state-identification method, the system builds a regression model in which the dependent variable is appliance use time (in weeks) and the independent variables are power features. Scoring and evaluation models then determine which power feature is the most suitable independent variable. This technology is used to explore the correlation between appliance use time and the decline curve and to predict appliance use time, in order to enhance overall behavior analysis in the Smart Home.
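As a rough illustration of the regression idea (not the thesis's RHadoop pipeline or its exact model), the sketch below fits a conjugate Bayesian linear regression linking a single simulated power feature to appliance use time in weeks; the data generator, prior and noise level are invented for the example.

```python
# Minimal sketch: Bayesian linear regression of a declining power feature
# on appliance use time (weeks). Prior N(0, alpha^-1 I), known noise sigma2.
import numpy as np

def bayes_linreg(X, y, alpha=1.0, sigma2=1.0):
    """Return the posterior mean and covariance of the regression weights."""
    cov = np.linalg.inv(alpha * np.eye(X.shape[1]) + X.T @ X / sigma2)
    mean = cov @ X.T @ y / sigma2
    return mean, cov

rng = np.random.default_rng(2)
weeks = np.arange(1.0, 105.0)                               # two years of use
power = 120.0 - 0.4 * weeks + rng.normal(0, 2, weeks.size)  # decline curve
X = np.column_stack([np.ones_like(weeks), weeks])
w_mean, w_cov = bayes_linreg(X, power, sigma2=4.0)
print("intercept, slope:", w_mean)                          # slope ~ -0.4
```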
Styles APA, Harvard, Vancouver, ISO, etc.
31

Alves, Pedro Miguel Carregueiro Jordão. « The effect of serial correlation in time-aggregation of annual Sharpe ratios from monthly data ». Master's thesis, 2018. http://hdl.handle.net/10362/32318.

Texte intégral
Résumé :
The Sharpe ratio is one of the most widely used measures of risk-adjusted returns. It rests on the estimation of the mean and standard deviation of returns, which is subject to estimation errors. Moreover, it assumes identically and independently distributed returns, normality and no serial correlation, which are very restrictive assumptions in general. By using the Generalized Method of Moments approach to estimate these quantities, the assumptions may be relaxed and a more efficient estimator can be derived, allowing for serial correlation in returns. The purpose of this research is to show how serial correlation can affect the time-aggregation of Sharpe ratios, changing the ordering of a ranking of assets based on the ratio.
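The time-aggregation effect described here can be sketched with Lo's (2002) scaling factor, which replaces the naive sqrt(q) annualization when returns are serially correlated. The plug-in sample autocorrelations below are a simplification of the GMM estimation the thesis uses, and the return series is synthetic.

```python
# Sketch: annualize a monthly Sharpe ratio with Lo's (2002) adjustment
# SR(q) = SR * q / sqrt(q + 2 * sum_{k=1}^{q-1} (q - k) * rho_k),
# which reduces to SR * sqrt(q) when all autocorrelations rho_k are zero.
import numpy as np

def annualized_sharpe(returns, q=12):
    r = np.asarray(returns, dtype=float)
    sr = r.mean() / r.std(ddof=1)                    # monthly Sharpe ratio
    rho = [np.corrcoef(r[:-k], r[k:])[0, 1] for k in range(1, q)]
    scale = q / np.sqrt(q + 2 * sum((q - k) * rho[k - 1] for k in range(1, q)))
    return sr * scale

rng = np.random.default_rng(3)
monthly = 0.01 + 0.03 * rng.standard_normal(120)     # 10 years, i.i.d. case
print(annualized_sharpe(monthly))                    # close to sr * sqrt(12)
```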
Styles APA, Harvard, Vancouver, ISO, etc.
32

Guo, T. « Real-time analytics for complex structure data ». Thesis, 2015. http://hdl.handle.net/10453/38990.

Texte intégral
Résumé :
University of Technology Sydney. Faculty of Engineering and Information Technology.
Advances in data acquisition and analysis technology mean that many real-world data are dynamic and contain rich content and structured information. More specifically, with the fast development of information technology, many real-world data sources are characterized by dynamic changes, such as new instances, new nodes and edges, and modifications to node content. Unlike traditional data, which are represented as feature vectors, data with complex relationships are often represented as graphs that capture the content of the data entries and their structural relationships, where instances (nodes) are not only characterized by their content but are also subject to dependency relationships. Moreover, real-time availability is one of the outstanding features of today's data. Real-time analytics is dynamic analysis and reporting based on data entered into a system before the actual time of use; it emphasizes deriving immediate knowledge from dynamic data sources, such as data streams, so knowledge discovery and pattern mining face complex, dynamic data sources. However, how to combine structure information and node content information for accurate, real-time data mining remains a major challenge. Accordingly, this thesis focuses on real-time analytics for complex structure data. We explore instance correlation in complex structure data and utilize it to make mining tasks more accurate and applicable. Specifically, our objective is to combine node correlation with node content and apply them to three tasks: (1) graph stream classification, (2) super-graph classification and clustering, and (3) streaming network node classification. Understanding the role of structured patterns for graph classification: the thesis first reviews existing work on data mining from a complex-structure perspective. We then propose a graph factorization-based fine-grained representation model whose main objective is to use linear combinations of a set of discriminative cliques to represent graphs for learning. The optimization-oriented factorization approach ensures minimum information loss in the graph representation and avoids the expensive sub-graph isomorphism validation process. Based on this idea, we propose a novel framework for fast graph stream classification. A new structured-data classification algorithm: the second contribution introduces a new super-graph classification and clustering problem. Owing to their inherently complex structure, existing graph classification methods cannot be applied to super-graphs. We therefore propose a weighted random walk kernel that calculates the similarity between two super-graphs by assessing (a) the similarity between their super-nodes and (b) their common walks. Our key contributions are: (1) a new super-node and super-graph structure that enriches existing graph representations for real-world applications; (2) a weighted random walk kernel considering node and structure similarities between graphs; (3) a mixed similarity considering the structured content inside super-nodes and the structural dependency between super-nodes; and (4) an effective kernel-based super-graph classification method with a sound theoretical basis. Empirical studies show that the proposed methods significantly outperform the state-of-the-art.
A real-time analytics framework for dynamic complex structure data: for streaming networks, the essential challenge is to properly capture the dynamic evolution of node content and node interactions in order to support node classification. While streaming networks evolve dynamically, over a short temporal period a subset of salient features is essentially tied to the network content and structure, and can therefore be used to characterize the network for classification. To achieve this, we propose to carry out streaming network feature selection (SNF) on the network and use the selected features as a gauge to classify unlabeled nodes. A Laplacian-based quality criterion guides the node classification, where the Laplacian matrix is generated from node labels and network topology. Node classification is achieved by finding the class label that yields the minimal gauging value with respect to the selected features. By frequently updating the features selected from the network, node classification can quickly adapt to changes in the network for maximal performance gain. Experiments and comparisons on real-world networks demonstrate that SNOC captures the dynamics of network structure and node content, and outperforms baseline approaches with significant performance gains.
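As a toy illustration of the kernel idea (only the common-walk component, without the super-node similarity weighting the thesis adds), the sketch below counts geometrically discounted common walks on the direct-product graph of two small graphs; the discount, walk length and example graphs are arbitrary assumptions.

```python
# Sketch: plain random walk kernel, i.e. discounted common-walk counts on
# the direct-product graph:  k(G1, G2) = sum_i lam^i * sum((A1 (x) A2)^i).
import numpy as np

def random_walk_kernel(A1, A2, lam=0.1, steps=5):
    Ax = np.kron(A1, A2)                 # adjacency of the product graph
    value, power = 0.0, np.eye(Ax.shape[0])
    for i in range(1, steps + 1):
        power = power @ Ax               # counts walks one step longer
        value += (lam ** i) * power.sum()
    return value

triangle = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
print(random_walk_kernel(triangle, triangle))   # self-similarity is largest
print(random_walk_kernel(triangle, path))
```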
Styles APA, Harvard, Vancouver, ISO, etc.
33

Hiah, Pier Juhng, et 連培中. « Data Stream Mining Technology for ECG Signals of Chronic Pain : Real-Time Tracking and Clinical Correlation ». Thesis, 2017. http://ndltd.ncl.edu.tw/handle/67yzx2.

Texte intégral
Résumé :
Master's thesis
National Chiao Tung University
EECS International Graduate Program
105 (ROC academic year)
Evaluating and tracking the progress of treatment for chronic pain is challenging because pain is a subjective experience that can be measured only by self-report. Electrocardiography (ECG) has proven to be a promising source of physiological biomarkers for chronic pain. Previous studies have demonstrated that heart rate variability (HRV) can be associated with different types of pain as well as with pain perception. This study aims to identify the relationship between HRV indices and chronic pain by collecting resting ECG data and subjective pain severity from patients with chronic migraine and fibromyalgia before and after treatment. Resting ECG data from healthy controls were also collected for comparison. Results derived from time-domain, frequency-domain and non-linear analyses showed that the HRV of chronic pain patients was generally lower than that of healthy control subjects. Moreover, the HRV of the chronic pain patients in the responder group increased significantly after medical treatment, indicating a useful biomarker of treatment efficacy. Among the ten HRV indices, the non-linear Poincaré plot analysis is the most promising for monitoring pain severity as well as determining treatment efficacy. Finally, a data stream mining platform was developed for real-time streaming and analysis of multimodal data. The platform is presented so that it can serve as an aid for biofeedback treatment of chronic pain in the future.
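The Poincaré descriptors singled out above have a compact closed form: SD1 and SD2 are the standard deviations of successive RR-interval pairs along the minor and major axes of the Poincaré ellipse. The sketch below computes them from a synthetic RR series; real ECG-derived intervals would replace it.

```python
# Sketch: SD1/SD2 from successive RR intervals (ms). SD1 reflects short-term
# beat-to-beat variability, SD2 longer-term variability.
import numpy as np

def poincare_sd(rr):
    rr = np.asarray(rr, dtype=float)
    x, y = rr[:-1], rr[1:]                        # each interval vs. the next
    sd1 = np.std((x - y) / np.sqrt(2), ddof=1)    # minor axis of the ellipse
    sd2 = np.std((x + y) / np.sqrt(2), ddof=1)    # major axis of the ellipse
    return sd1, sd2

rng = np.random.default_rng(4)
rr = 800 + np.cumsum(rng.normal(0, 5, 300))       # synthetic RR series
print(poincare_sd(rr))
```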
Styles APA, Harvard, Vancouver, ISO, etc.
34

Kudela, Maria Aleksandra. « Statistical methods for high-dimensional data with complex correlation structure applied to the brain dynamic functional connectivity study ». Diss., 2017. http://hdl.handle.net/1805/12835.

Texte intégral
Résumé :
Indiana University-Purdue University Indianapolis (IUPUI)
A popular non-invasive brain activity measurement method is based on functional magnetic resonance imaging (fMRI). Such data are frequently used to study functional connectivity (FC), defined as statistical association among two or more anatomically distinct fMRI signals (Friston, 1994). FC has emerged in recent years as a valuable tool for providing a deeper understanding of neurodegenerative diseases and neuropsychiatric disorders, such as Alzheimer's disease and autism. Information about the complex association structure in high-dimensional fMRI data is often discarded by calculating an average across complex spatiotemporal processes without providing a measure of the uncertainty around it. First, we propose a non-parametric approach to estimate the uncertainty of dynamic FC (dFC) estimates. Our method is based on three components: an extension of a bootstrapping method for multivariate time series recently introduced by Jentsch and Politis (2015); sliding-window correlation estimation; and kernel smoothing. Second, we propose a two-step approach to analyze and summarize dFC estimates from a task-based fMRI study of social-to-heavy alcohol drinkers during stimulation with flavors. In the first step, we apply the method from the first paper to estimate dFC for each region-subject combination. In the second step, we use semiparametric additive mixed models to account for the complex correlation structure and to model dFC at the population level following the study's experimental design. Third, we propose to utilize the estimated dFC to study the system's modularity, defined as the mutually exclusive division of brain regions into blocks with intra-connectivity greater than that obtained by chance. The resulting brain partition suggests the existence of a common functionally-based brain organization. The main contribution of our work stems from combining methods from statistics, machine learning and network theory to provide statistical tools for studying brain connectivity from a holistic, multi-disciplinary perspective.
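Two of the three ingredients named above, sliding-window correlation and kernel smoothing, are easy to sketch; the multivariate time series bootstrap that supplies the uncertainty bands is omitted here. The signals, window length and smoother width are illustrative assumptions.

```python
# Sketch: sliding-window correlation of two signals, then moving-average
# smoothing of the resulting dFC trajectory.
import numpy as np

def sliding_corr(x, y, window=30):
    return np.array([np.corrcoef(x[i:i + window], y[i:i + window])[0, 1]
                     for i in range(len(x) - window + 1)])

def smooth(values, width=5):
    return np.convolve(values, np.ones(width) / width, mode="same")

rng = np.random.default_rng(5)
t = np.linspace(0, 10, 400)
x = np.sin(t) + 0.5 * rng.standard_normal(t.size)         # two noisy
y = np.sin(t + 0.3) + 0.5 * rng.standard_normal(t.size)   # fMRI-like signals
dfc = smooth(sliding_corr(x, y))
print(dfc.min(), dfc.max())
```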
Styles APA, Harvard, Vancouver, ISO, etc.
35

Han, Wen-Hao, et 韓文豪. « The Effect of Time Series Correlation by Using Different Time Spans and KDE Bandwidths – A Case Study of the Price and Volume Data of TSMC and UMC ». Thesis, 2019. http://ndltd.ncl.edu.tw/handle/9j37vs.

Texte intégral
Résumé :
Master's thesis
National Taiwan University
Master's Program in Statistics
107 (ROC academic year)
If stock volumes are treated as events, several events may occur at a single time point, or none at all. When two or more such event variables are present in a dataset, delayed reactions between them are likely. To uncover the true pairwise correlations, methods such as aggregating the data over a common time span, applying kernel density estimation (KDE) to obtain an optimal density, or multivariate time series analysis can be tried and tested. The dataset of this study is the ten-year (2009-2018) daily price and volume data of TSMC (Taiwan Semiconductor Manufacturing Company, Limited) and UMC (United Microelectronics Corporation), retrieved from the official website of the Taiwan Stock Exchange. Using bootstrap confidence intervals for the correlations and t-tests, the study selects appropriate bandwidths and time spans. It then constructs different data-transformation scenarios from these results and fits VARIMA models in each. In conclusion, both KDE and time-span aggregation bring the correlations between variables closer to their true values, and the subsequent VARIMA models gain explanatory power. The VARIMA(1,1,2) model performed best among all scenarios in this empirical study, so it can serve as a reference for pairs-trading strategies involving the two companies.
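A hedged sketch of the bandwidth effect studied here: two synthetic daily count series, one a delayed copy of the other, are smoothed with a Gaussian kernel of bandwidth h before computing Pearson correlation. The data generator and bandwidths are assumptions standing in for the TSMC/UMC series.

```python
# Sketch: Gaussian-kernel smoothing of event-count series, then correlation,
# repeated for several bandwidths h to show how h changes the estimate.
import numpy as np

def kde_smooth(counts, h):
    t = np.arange(len(counts))
    w = np.exp(-0.5 * ((t[:, None] - t[None, :]) / h) ** 2)
    w /= w.sum(axis=1, keepdims=True)        # row-normalized kernel weights
    return w @ counts

rng = np.random.default_rng(6)
base = rng.poisson(100, 500).astype(float)
a = base + rng.poisson(20, 500)              # series 1
b = np.roll(base, 2) + rng.poisson(20, 500)  # series 2, 2-day delayed reaction
for h in (1, 3, 7):
    print(h, np.corrcoef(kde_smooth(a, h), kde_smooth(b, h))[0, 1])
```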
Styles APA, Harvard, Vancouver, ISO, etc.
36

Sebatjane, Phuti. « Understanding patterns of aggregation in count data ». Diss., 2016. http://hdl.handle.net/10500/22067.

Texte intégral
Résumé :
The term aggregation refers to overdispersion, and the two terms are used interchangeably in this thesis. To address the prevalence of infectious parasite species faced by most rural livestock farmers, we model the distribution of faecal egg counts of 15 parasite species (13 internal parasites and 2 ticks) common in sheep and goats. Aggregation and excess zeroes are addressed through the use of generalised linear models. The abundance of each species was modelled using six different distributions: the Poisson, negative binomial (NB), zero-inflated Poisson (ZIP), zero-inflated negative binomial (ZINB), zero-altered Poisson (ZAP) and zero-altered negative binomial (ZANB), and their fits were compared. Excess-zero models (ZIP, ZINB, ZAP and ZANB) were found to fit better than standard count models (Poisson and negative binomial) in all 15 cases. We further investigated how the distributional assumption affects aggregation and zero inflation. Aggregation and zero inflation (measured by the dispersion parameter k and the zero-inflation probability) were found to vary greatly with the distributional assumption; this in turn changed the fixed-effects structure. Serial autocorrelation between adjacent observations was then taken into account by fitting observation-driven time series models to the data. Simultaneously accounting for autocorrelation, overdispersion and zero inflation proved successful, as zero-inflated autoregressive models performed better than zero-inflated models in most cases. Apart from its contribution to scientific knowledge, the predictability of parasite burden will help farmers design effective disease-management interventions. Researchers confronted with the task of analysing count data with excess zeroes can use the findings of this illustrative study as a guideline, irrespective of their research discipline. Statistical methods from model selection and the quantification of zero inflation through to accounting for serial autocorrelation are described and illustrated.
Statistics
M.Sc. (Statistics)
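As an illustration of the excess-zero models compared above, the sketch below fits an intercept-only zero-inflated Poisson by direct maximum likelihood; the simulated counts stand in for faecal egg counts, and the thesis's covariates and autoregressive extensions are omitted.

```python
# Sketch: maximum-likelihood fit of an intercept-only zero-inflated Poisson.
# P(Y=0) = pi + (1-pi) e^{-lam};  P(Y=y) = (1-pi) Poisson(y; lam) for y > 0.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def zip_negloglik(params, y):
    pi = 1.0 / (1.0 + np.exp(-params[0]))     # zero-inflation probability
    lam = np.exp(params[1])                   # Poisson mean
    logpois = -lam + y * np.log(lam) - gammaln(y + 1)
    ll = np.where(y == 0,
                  np.log(pi + (1.0 - pi) * np.exp(-lam)),
                  np.log(1.0 - pi) + logpois)
    return -ll.sum()

rng = np.random.default_rng(7)
y = rng.poisson(4.0, 1000).astype(float)
y[rng.random(1000) < 0.3] = 0.0               # inject ~30% structural zeros
fit = minimize(zip_negloglik, x0=[0.0, 1.0], args=(y,), method="Nelder-Mead")
print(1 / (1 + np.exp(-fit.x[0])), np.exp(fit.x[1]))   # approx. 0.3 and 4.0
```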
Styles APA, Harvard, Vancouver, ISO, etc.