Dissertations / Theses on the topic 'Interval data'
Consult the top 50 dissertations / theses for your research on the topic 'Interval data.'
Oller, Piqué Ramon. "Survival analysis issues with interval-censored data." Doctoral thesis, Universitat Politècnica de Catalunya, 2006. http://hdl.handle.net/10803/6520.
Survival analysis is used in various fields for analyzing data involving the duration between two events. It is also known as event history analysis, lifetime data analysis, reliability analysis or time-to-event analysis. One of the difficulties that arise in this area is the presence of censored data. The lifetime of an individual is censored when it cannot be measured exactly but partial information is available. Different circumstances can produce different types of censoring. Interval censoring refers to the situation in which the event of interest cannot be directly observed and is only known to have occurred during a random interval of time. This kind of censoring has generated a great deal of work in recent years; it typically occurs when individuals in a study are inspected or observed intermittently, so that an individual's lifetime is known only to lie between two successive observation times.
This PhD thesis is divided into two parts that address two important issues of interval-censored data. The first part comprises Chapters 2 and 3 and concerns formal conditions that allow estimation of the lifetime distribution to be based on a well-known simplified likelihood. The second part comprises Chapters 4 and 5 and is devoted to the study of test procedures for the k-sample problem. The present work reproduces material that has already been published or submitted for publication.
In Chapter 1 we give the basic notation used in this PhD thesis. We also describe the nonparametric approach to estimate the distribution function of the lifetime variable. Peto (1973) and Turnbull (1976) were the first authors to propose an estimation method which is based on a simplified version of the likelihood function. Other authors have studied the uniqueness of the solution given by this method (Gentleman and Geyer, 1994) or have improved it with new proposals (Wellner and Zhan, 1997).
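The simplified-likelihood estimation idea referenced here can be illustrated with a minimal self-consistency (EM) iteration in the spirit of Turnbull (1976). This is a hypothetical sketch, not the thesis's method: the candidate support is taken to be the set of interval endpoints rather than Turnbull's innermost intervals.

```python
# Self-consistency (EM) iteration for the NPMLE of a lifetime
# distribution from interval-censored observations (L, R).
# Sketch only: support = set of interval endpoints.

def npmle_interval(intervals, n_iter=500):
    support = sorted({e for pair in intervals for e in pair})
    m, n = len(support), len(intervals)
    p = [1.0 / m] * m  # start from the uniform distribution
    for _ in range(n_iter):
        new = [0.0] * m
        for lo, hi in intervals:
            # mass the current estimate puts inside [lo, hi]
            inside = [p[j] if lo <= support[j] <= hi else 0.0
                      for j in range(m)]
            total = sum(inside)
            for j in range(m):
                new[j] += inside[j] / total / n
        p = new
    return support, p

support, p = npmle_interval([(0, 2), (1, 3), (2, 4), (1, 2)])
```

Because the log-likelihood is concave in the masses for a fixed support, the iteration converges to a global maximizer; in this toy example the mass accumulates at the only point contained in all four observed intervals.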
Chapter 2 reproduces the paper of Oller et al. (2004). We prove the equivalence between the different characterizations of noninformative censoring that have appeared in the literature, and we define a constant-sum condition analogous to the one derived in the context of right censoring. We also prove that when the noninformative condition or the constant-sum condition holds, the simplified likelihood can be used to obtain the nonparametric maximum likelihood estimator (NPMLE) of the failure time distribution function. Finally, we characterize the constant-sum property according to different types of censoring. In Chapter 3 we study the relevance of the constant-sum property to the identifiability of the lifetime distribution. We show that the lifetime distribution is not identifiable outside the class of constant-sum models. We also show that the lifetime probabilities assigned to the observable intervals are identifiable inside the class of constant-sum models. We illustrate all these notions with several examples.
Chapter 4 has been partially published in the survey paper of Gómez et al. (2004). It gives a general view of the procedures that have been applied to the nonparametric problem of comparing two or more interval-censored samples. We also develop S-Plus routines implementing the permutational versions of the Wilcoxon test, the logrank test and the t-test for interval-censored data (Fay and Shih, 1998). This part of the PhD thesis is completed in Chapter 5 with several proposed extensions of Jonckheere's test. In order to test for an increasing trend in the k-sample problem, Abel (1986) gave one of the few generalizations of Jonckheere's test for interval-censored data. We suggest further Jonckheere-type tests based on the tests presented in Chapter 4, using permutational and Monte Carlo approaches. We provide computer programs for each proposal and perform a simulation study in order to compare their power under different parametric assumptions and different alternatives. We motivate both chapters with the analysis of a data set from a study of the benefits of zidovudine in patients in the early stages of HIV infection (Volberding et al., 1995).
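A permutation two-sample test along these lines can be sketched as follows. This is a simplified stand-in: Fay and Shih (1998) permute rank-based scores derived from the NPMLE, whereas here interval midpoints and a difference-of-means statistic are assumed purely for illustration.

```python
import random

# Permutation two-sample test sketch for interval-censored samples.
# Each observation is an interval (L, R); its midpoint serves as a
# crude score, and group labels are permuted to build the null.

def perm_test(sample_a, sample_b, n_perm=2000, seed=0):
    rng = random.Random(seed)
    mid = lambda iv: (iv[0] + iv[1]) / 2.0
    scores = [mid(iv) for iv in sample_a] + [mid(iv) for iv in sample_b]
    n_a, n = len(sample_a), len(sample_a) + len(sample_b)
    obs = abs(sum(scores[:n_a]) / n_a - sum(scores[n_a:]) / (n - n_a))
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(scores)  # random relabelling of the pooled scores
        stat = abs(sum(scores[:n_a]) / n_a - sum(scores[n_a:]) / (n - n_a))
        if stat >= obs:
            hits += 1
    return hits / n_perm  # permutation p-value

p = perm_test([(0, 1), (1, 2), (0, 2), (1, 3)],
              [(4, 6), (5, 7), (4, 5), (6, 8)])
```

With two clearly separated toy samples the returned p-value is small, as expected.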
Finally, Chapter 6 summarizes the results and addresses the aspects that remain to be completed.
Long, Yongxian, and 龙泳先. "Semiparametric analysis of interval censored survival data." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2010. http://hub.hku.hk/bib/B45541152.
Gorelick, Jeremy Sun Jianguo. "Nonparametric analysis of interval-censored failure time data." Diss., Columbia, Mo. : University of Missouri--Columbia, 2009. http://hdl.handle.net/10355/7009.
Zhang, Yue. "Bayesian Cox Models for Interval-Censored Survival Data." University of Cincinnati / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1479476510362603.
Winarko, Edi, and edwin@ugm ac id. "The Discovery and Retrieval of Temporal Rules in Interval Sequence Data." Flinders University. Informatics and Engineering, 2007. http://catalogue.flinders.edu.au./local/adt/public/adt-SFU20080107.164033.
Shuma, Mercy Violet 1957. "Design of a microcomputer "time interval board" for time interval statistical analysis of nuclear systems." Thesis, The University of Arizona, 1988. http://hdl.handle.net/10150/276685.
Full textWang, Lianming. "Statistical analysis of multivariate interval-censored failure time data." Diss., Columbia, Mo. : University of Missouri-Columbia, 2006. http://hdl.handle.net/10355/4375.
The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file (viewed on May 2, 2007). Vita. Includes bibliographical references.
Lim, Hee-Jeong. "Statistical analysis of interval-censored and truncated survival data /." free to MU campus, to others for purchase, 2001. http://wwwlib.umi.com/cr/mo/fullcit?p3025635.
Zhao, Qiang. "Nonparametric treatment comparisons for interval-censored failure time data /." free to MU campus, to others for purchase, 2004. http://wwwlib.umi.com/cr/mo/fullcit?p3144474.
Full textChen, Man-Hua. "Statistical analysis of multivariate interval-censored failure time data." Diss., Columbia, Mo. : University of Missouri-Columbia, 2007. http://hdl.handle.net/10355/4776.
Ding, Lili. "Bayesian Frailty Models for Correlated Interval-Censored Survival Data." University of Cincinnati / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1267454031.
Wan, Lijie. "CONTINUOUS TIME MULTI-STATE MODELS FOR INTERVAL CENSORED DATA." UKnowledge, 2016. http://uknowledge.uky.edu/statistics_etds/19.
Saad, Aya Hassan [Verfasser]. "CDF-intervals: a probabilistic interval constraint framework to reason about data with uncertainty / Aya Hassan Saad." Ulm : Universität Ulm, 2016. http://d-nb.info/1101578289/34.
Full textHauli, D. E. "Spline functions and their application to analysis of interval data : Breastfeeding durations and closed birth intervals." Thesis, University of Southampton, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.375510.
Full textSävhammar, Simon. "Uniform interval normalization : Data representation of sparse and noisy data sets for machine learning." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-19194.
Full textZhu, Chao. "Nonparametric and semiparametric methods for interval-censored failure time data." Diss., Columbia, Mo. : University of Missouri-Columbia, 2006. http://hdl.handle.net/10355/4415.
Wei, Shaoceng. "MULTI-STATE MODELS FOR INTERVAL CENSORED DATA WITH COMPETING RISK." UKnowledge, 2015. http://uknowledge.uky.edu/statistics_etds/10.
Xiao, Fei. "Hexahedral Mesh Generation from Volumetric Data by Dual Interval Volume." The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1532003347814656.
Full textAlparslan, Gok Sirma Zeynep. "Cooperative Interval Games." Phd thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/3/12610337/index.pdf.
Full textHildreth, John C. "The Use of Short-Interval GPS Data for Construction Operations Analysis." Diss., Virginia Tech, 2003. http://hdl.handle.net/10919/26120.
Ph. D.
Topp, Rebekka. "Regression and residual analysis in linear models with interval censored data." Doctoral thesis, Universitat Politècnica de Catalunya, 2002. http://hdl.handle.net/10803/6511.
In the first part of this work I develop an estimation theory for the regression parameters of the linear model where both the dependent and independent variables are interval censored. I use a semi-parametric maximum likelihood approach that determines the parameter estimates via maximization of the likelihood function of the data. Since the density function of the covariate is unknown due to interval censoring, the maximization problem is solved through an algorithm that first determines the unknown density function of the covariate and then maximizes the complete-data likelihood function. The unknown covariate density is determined nonparametrically through a modification of the approach of Turnbull (1976). The resulting parameter estimates are given under the assumption that the distribution of the model errors belongs to the exponential family or is Weibull. In addition, I extend the estimation theory to the case where the regression model includes both an interval-censored and an uncensored covariate. Since the derivation of the theoretical statistical properties of the developed parameter estimates is rather complex, simulations were carried out to determine the quality of the estimates. The estimated values of the regression parameters are always very close to the real ones. Finally, some alternative estimation methods for this regression problem are discussed.
In the second part of this work I develop a residual theory for the linear regression model where the covariate is interval censored but the dependent variable can be observed exactly. In this case the model errors, and hence the residuals, are interval censored. This leads to the problem of residuals that are not directly observable, which is solved in the following way: since one assumption of the linear regression model is that the model errors are N(0, σ²)-distributed, it follows that the distribution of the interval-censored errors is a truncated normal distribution, the truncation being determined by the observed model error intervals. Consequently, the distribution of the interval-censored residuals is a normal distribution truncated to the respective residual interval, where the estimation of the residual variance is accomplished through the method of Gómez et al. (2002). In a simulation study I compare the behaviour of the residuals so constructed with those of Gómez et al. (2002) and with a naive type of residual that takes the middle of the residual interval as the observed residual. The results show that my residuals can be used in most of the simulated scenarios, whereas this is not the case for the other two types of residuals. Finally, the new residual theory is applied to a data set from a clinical study.
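The truncated-normal residual idea can be sketched as follows. As an illustrative choice (not necessarily the author's exact estimator), the point residual for an error known only to lie in [a, b] is taken to be the mean of the N(0, σ²) distribution truncated to that interval.

```python
import math

# Point residual for an interval-censored residual [a, b] under
# model errors ~ N(0, sigma^2): the mean of the truncated normal,
# sigma * (phi(a/sigma) - phi(b/sigma)) / (Phi(b/sigma) - Phi(a/sigma)).

def phi(z):  # standard normal density
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def Phi(z):  # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def truncated_normal_mean(a, b, sigma):
    alpha, beta = a / sigma, b / sigma
    return sigma * (phi(alpha) - phi(beta)) / (Phi(beta) - Phi(alpha))

r = truncated_normal_mean(-1.0, 2.0, 1.0)
```

For a symmetric interval the truncated mean is zero, as the formula shows directly; for the asymmetric interval above it is pulled toward the longer tail.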
Ketchum, Jessica McKinney. "A Normal-Mixture Model with Random-Effects for RR-Interval Data." VCU Scholars Compass, 2006. http://hdl.handle.net/10156/1979.
Tehranchi, Babak 1968. "Time-interval quantization in a high-density optical data storage system." Thesis, The University of Arizona, 1992. http://hdl.handle.net/10150/278151.
Full textWong, Kin-yau, and 黃堅祐. "Analysis of interval-censored failure time data with long-term survivors." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2012. http://hub.hku.hk/bib/B48199473.
Full textpublished_or_final_version
Statistics and Actuarial Science
Master
Master of Philosophy
Bergenholm, Linnéa. "Predicting QRS and PR interval prolongations in humans using nonclinical data." Thesis, University of Warwick, 2017. http://wrap.warwick.ac.uk/92121/.
Full textEl, Ouassouli Amine. "Discovering complex quantitative dependencies between interval-based state streams." Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEI061.
Full textThe increasing utilization of sensor devices in addition to human-given data make it possible to capture real world systems complexity through rich temporal descriptions. More precisely, the usage of a multitude of data sources types allows to monitor an environment by describing the evolution of several of its dimensions through data streams. One core characteristic of such configurations is heterogeneity that appears at different levels of the data generation process: data sources, time models and data models. In such context, one challenging task for monitoring systems is to discover non-trivial temporal knowledge that is directly actionable and suitable for human interpretation. In this thesis, we firstly propose to use a Temporal Abstraction (TA) approach to express information given by heterogeneous raw data streams with a unified interval-based representation, called state streams. A state reports on a high level environment configuration that is of interest for an application domain. Such approach solves problems introduced by heterogeneity, provides a high level pattern vocabulary and also permits also to integrate expert(s) knowledge into the discovery process. Second, we introduced the Complex Temporal Dependencies (CTD) that is a quantitative interval-based pattern model. It is defined similarly to a conjunctive normal form and allows to express complex temporal relations between states. Contrary to the majority of existing pattern models, a CTD is evaluated with automatic statistical assessment of streams intersection avoiding the use of any significance user-given parameter. Third, we proposed CTD-Miner a first efficient CTD mining framework. CTD-Miner performs an incremental dependency construction. CTD-Miner benefits from pruning techniques based on a statistical correspondence relationship that aims to accelerate the exploration search space by reducing redundant information and provide a more usable result set. 
Finally, we proposed the Interval Time Lag Discovery (ITLD) algorithm. ITLD is based on a confidence variation heuristic that permits to reduce the complexity of the pairwise dependency discovery process from quadratic to linear w.r.t a temporal constraint Δ on time lags. Experiments on simulated and real world data showed that ITLD provides efficiently more accurate results in comparison with existing approaches. Hence, ITLD enhances significantly the accuracy, performances and scalability of CTD-Miner. The encouraging results given by CTD-Miner on our real world motion data set suggests that it is possible to integrate insights given by real time video processing approaches in a knowledge discovery process opening interesting perspectives for monitoring smart environments
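A toy version of the pairwise dependency confidence that such time-lag discovery evaluates might look like the following. The streams, the "followed within delta" rule and the statistic are all hypothetical simplifications of the statistical assessment described above, not the ITLD algorithm itself.

```python
# Toy pairwise dependency confidence between two state streams,
# each a list of (start, end) intervals: the fraction of A-state
# intervals whose end is followed, within delta time units, by the
# start of a B-state interval.

def confidence(a_stream, b_stream, delta):
    hits = sum(any(0 <= b_start - a_end <= delta
                   for b_start, _ in b_stream)
               for _, a_end in a_stream)
    return hits / len(a_stream)

A = [(0, 2), (10, 12), (20, 23)]   # hypothetical state A episodes
B = [(3, 5), (13, 14), (40, 41)]   # hypothetical state B episodes
conf = confidence(A, B, delta=2)
```

Here two of the three A episodes are followed by a B episode within the allowed lag, so the confidence is 2/3.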
Neethling, Willem Francois. "Comparison of methods to calculate measures of inequality based on interval data." Thesis, Stellenbosch : Stellenbosch University, 2015. http://hdl.handle.net/10019.1/97780.
ENGLISH ABSTRACT: In recent decades, economists and sociologists have taken an increasing interest in the study of income attainment and income inequality. Many of these studies have used census data, but social surveys have also increasingly been utilised as sources for these analyses. In these surveys, respondents' incomes are most often measured not as exact amounts but in categories, of which the last category is open-ended. The reason is that income is seen as sensitive data and/or is sometimes difficult to reveal. Continuous data divided into categories are often more difficult to work with than ungrouped data. In this study, we compare different methods for converting grouped data to data where each observation has a specific value or point. In some methods, all the observations in an interval receive the same value; an example is the midpoint method, where all the observations in an interval are assigned the midpoint. Other methods are random methods, where each observation receives a random point between the lower and upper bounds of the interval. In some methods, random and non-random, a distribution is fitted to the data and a value is calculated according to the distribution. The non-random methods we use are the midpoint, Pareto means and lognormal means methods; the random methods are the random midpoint, random Pareto and random lognormal methods. Since our focus falls on income data, which usually follow a heavy-tailed distribution, we use the Pareto and lognormal distributions in our methods. The above-mentioned methods are applied to simulated and real datasets. The raw values of these datasets are known and are categorised into intervals. The methods are then applied to the interval data to convert them back to point data. To test the effectiveness of these methods, we calculate some measures of inequality.
The measures considered are the Gini coefficient, quintile share ratio (QSR), the Theil measure and the Atkinson measure. The estimated measures of inequality, calculated from each dataset obtained through these methods, are then compared to the true measures of inequality.
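The pipeline this abstract describes, converting bracketed incomes to points and then computing an inequality measure, can be sketched with the midpoint method and the Gini coefficient. The bracket data and the representative value assumed for the open-ended top bracket are hypothetical.

```python
# Midpoint method for grouped income data plus the Gini coefficient.
# The open-ended top bracket (hi is None) gets an assumed value.

def midpoint_values(brackets, counts, top_value):
    values = []
    for (lo, hi), n in zip(brackets, counts):
        point = top_value if hi is None else (lo + hi) / 2.0
        values.extend([point] * n)
    return values

def gini(values):
    # G = 2 * sum(i * x_(i)) / (n * sum(x)) - (n + 1) / n, i = 1..n
    xs = sorted(values)
    n = len(xs)
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2.0 * cum) / (n * sum(xs)) - (n + 1.0) / n

incomes = midpoint_values([(0, 10), (10, 20), (20, None)],
                          [5, 3, 2], top_value=30.0)
g = gini(incomes)
```

A perfectly equal distribution yields a Gini of zero under this formula, which is a quick sanity check on the implementation.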
Wang, Jiaping. "The generalized MLE with the interval centered and masked competing risks data." Diss., Online access via UMI:, 2009.
Davidse, Alistair. "A comparison of methods for analysing interval-censored and truncated survival data." Master's thesis, University of Cape Town, 2004. http://hdl.handle.net/11427/4376.
This thesis examines three methods for analysing right-censored data: the Cox proportional hazards model (Cox, 1972), the Buckley-James regression model (Buckley and James, 1979) and the accelerated failure time model. These models are extended to incorporate the analysis of interval-censored and left-truncated data. The models are compared in an attempt to determine whether one performs better than the others in terms of goodness-of-fit and predictive power. Plots of the residuals and random effects from the Cox proportional hazards model are also examined.
Pantoja, Galicia Norberto. "Interval Censoring and Longitudinal Survey Data." Thesis, 2007. http://hdl.handle.net/10012/3224.
Lee, Pei-Fen, and 李佩芬. "Technical Efficiency Estimation with Interval Data." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/96328111969308893849.
National Chiao Tung University
Department of Industrial Engineering and Management
97
Non-parametric efficiency and productivity analysis assumes that deterministic data are properly provided. This underlying assumption, however, is not always true in reality. In this work, we investigate how to estimate technical efficiency when only interval data are given due to imprecise information. Rather than assuming probability distributions for the data uncertainty, interval data represent the ranges of possible realizations. We approach the problem by proposing necessary properties for proper estimation of efficient frontiers and technical efficiency based on interval data. Two estimation models corresponding to the conventional deterministic models, free disposal hull (FDH) and variable returns to scale (VRS), are also proposed. It is shown that both proposed models satisfy the necessary properties, and thus they are appropriate estimations.
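The bounding idea for interval data can be illustrated with a toy single-input, single-output FDH efficiency score evaluated at the two extreme endpoint scenarios. This is a hypothetical sketch of the general idea, not the estimation models proposed in the thesis.

```python
# Input-oriented FDH efficiency with one input and one output:
# the smallest observed input achieving at least unit k's output,
# divided by unit k's own input (1.0 means k is on the frontier).

def fdh_input_efficiency(k, xs, ys):
    feasible = [x for x, y in zip(xs, ys) if y >= ys[k]]
    return min(feasible) / xs[k]

# Interval data for three units: bound unit 0's efficiency by the two
# extreme scenarios (unit 0 unfavourable vs. favourable endpoints,
# rivals at the opposite extremes). Toy numbers.
x_iv = [(4, 6), (2, 3), (7, 9)]
y_iv = [(5, 7), (4, 5), (9, 10)]
pess = fdh_input_efficiency(0, [x_iv[0][1], x_iv[1][0], x_iv[2][0]],
                               [y_iv[0][0], y_iv[1][1], y_iv[2][1]])
opt = fdh_input_efficiency(0, [x_iv[0][0], x_iv[1][1], x_iv[2][1]],
                              [y_iv[0][1], y_iv[1][0], y_iv[2][0]])
```

The two scenarios bracket the efficiency that unit 0 could attain for any realization of the intervals, which matches the "range of possible realizations" view in the abstract.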
Kuo, Hsien-Chun, and 郭獻駿. "A clustering method for interval data." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/96657625457981344710.
Chung Yuan Christian University
Graduate Institute of Applied Mathematics
99
Many different clustering methods have been proposed for fuzzy clustering of different data types. The main purpose of clustering algorithms is to cluster a given data set. In this thesis, we propose a clustering algorithm that extends Yang and Wu's clustering algorithm, called SCM, so that it can handle interval data sets and find both the best representative range of each group and the best number of clusters. To demonstrate that this is a good clustering algorithm, we perform simulations with sampled data and with some real data sets. The results show that the algorithm produces good clusterings of interval data.
ZHI, HUANG-WEN, and 黃文志. "Fuzzy clustering algorithms for interval data." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/m9w7eg.
Chung Yuan Christian University
Graduate Institute of Mathematics
102
In cluster analysis, the fuzzy c-means (FCM) clustering algorithm is the most widely used method. The main purpose of clustering analysis is to cluster a given data set. In this thesis, we propose a clustering algorithm that extends the clustering algorithm of Yang and Wu [6], called AFCM, so that it can handle interval data with the best representatives. To show that this method is a good clustering algorithm, we perform simulations with sampled data and with some real data sets, and we also compare it with the interval fuzzy c-means (IFCM) algorithm proposed by de Carvalho [2]. The results show that the proposed method gives good clustering results.
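An interval fuzzy c-means iteration in the spirit of IFCM can be sketched as follows, with the squared distance between two intervals taken as the sum of squared differences of their lower and upper bounds. The initialisation, the two-cluster restriction and the data are illustrative assumptions, not the algorithms compared in the thesis.

```python
# Interval fuzzy c-means sketch (two clusters, fuzzifier m):
# centroids are membership-weighted means of the interval bounds.

def ifcm(data, m=2.0, n_iter=50):
    c = 2
    cents = [data[0], data[-1]]  # crude init with two extreme intervals
    for _ in range(n_iter):
        # squared interval distances, floored to avoid division by zero
        d = [[max((a[0] - v[0]) ** 2 + (a[1] - v[1]) ** 2, 1e-12)
              for v in cents] for a in data]
        u = [[1.0 / sum((d[i][k] / d[i][j]) ** (1.0 / (m - 1.0))
                        for j in range(c))
              for k in range(c)] for i in range(len(data))]
        cents = []
        for k in range(c):
            w = [u[i][k] ** m for i in range(len(data))]
            s = sum(w)
            cents.append((sum(wi * a[0] for wi, a in zip(w, data)) / s,
                          sum(wi * a[1] for wi, a in zip(w, data)) / s))
    return u, cents

data = [(0.0, 1.0), (0.2, 1.1), (5.0, 7.0), (5.5, 7.2)]
u, cents = ifcm(data)
```

On the toy data the two low intervals and the two high intervals end up with near-crisp memberships in separate clusters, and each row of the membership matrix sums to one.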
Yang, Tzung-cheng, and 楊宗承. "Analysis of Interval Time Series Data." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/2wsk9e.
National Sun Yat-sen University
Department of Applied Mathematics
103
Conventional time series methods are developed for analyzing point-valued data. In practice, however, there are many interval-valued time series, which usually contain more information than point-valued data. It is thus important to develop time series modeling and forecasting techniques for interval-valued data. In this paper, we introduce concepts of interval stationarity and related interval statistics and investigate methodology for interval time series analysis. We use vector autoregressive (VAR) and vector error correction (VEC) models to build time series models for interval statistics, including the midpoint, radius, and upper and lower bounds, and obtain interval forecasts. We compare the forecast performance of the proposed methods with a classical filtering technique, the exponential smoothing method, and a nonparametric technique, the k-nearest neighbors (k-NN) algorithm. Finally, in an empirical study, we use stock and index data to evaluate the forecast performance and efficiency of the proposed interval time series models.
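The midpoint/radius decomposition described above can be sketched with a simple AR(1) fitted to each derived series. The AR(1) choice (rather than the VAR/VEC models of the thesis) and the toy data are assumptions for illustration.

```python
# Interval forecast via the midpoint/radius decomposition:
# fit an AR(1) by least squares to each derived series, then
# recombine the forecasts into an interval [c - r, c + r].

def ar1_fit(series):
    x, y = series[:-1], series[1:]
    mx, my = sum(x) / len(x), sum(y) / len(y)
    denom = sum((a - mx) ** 2 for a in x)
    if denom == 0.0:               # constant series: predict its mean
        return my, 0.0
    beta = sum((a - mx) * (b - my) for a, b in zip(x, y)) / denom
    return my - beta * mx, beta    # intercept, slope

def forecast_interval(intervals):
    centres = [(lo + hi) / 2.0 for lo, hi in intervals]
    radii = [(hi - lo) / 2.0 for lo, hi in intervals]
    (a_c, b_c), (a_r, b_r) = ar1_fit(centres), ar1_fit(radii)
    c = a_c + b_c * centres[-1]
    r = max(a_r + b_r * radii[-1], 0.0)  # a radius cannot be negative
    return c - r, c + r

ivs = [(1, 3), (2, 4), (3, 5), (4, 6), (5, 7)]
lo, hi = forecast_interval(ivs)
```

For the toy series, whose midpoints drift upward by one per step with a constant radius, the one-step-ahead forecast continues the drift while keeping the width.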
Cai, Hao Xu, and 蔡皓旭. "Interval regression analysis with fuzzy data." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/48136091858014961077.
National Chengchi University
Department of Applied Mathematics
104
Objective: This study concerns how to develop effective fuzzy regression models. In the literature, little is said about how to evaluate the effectiveness of fuzzy regression models developed with different regression methods. We consider this issue and present a framework for such evaluation. Method: We consider fuzzy regression models developed with different regression approaches and propose a method to evaluate the developed models. We then show that the proposed method possesses desirable mathematical properties, and we apply it to compare the two-dimensional regression method with the traditional least-squares-based regression method in our case studies: predicting the concentration of and the volatility of the weighted price index of the Taiwanese stock exchange. Innovation: We propose a new metric defining a distance between two fuzzy numbers. This metric can be used to evaluate the performance of different fuzzy regression models. When the prediction from one model is closest to the sample data as measured by the proposed metric, it can be recognized as the optimal prediction. Results: Based on the proposed metric, the two-dimensional fuzzy regression method is better than the traditional least-squares-based regression method; in particular, its resulting generalized residuals are smaller. Conclusion: In the literature, no unified framework had previously been proposed for evaluating the effectiveness of fuzzy regression models. In this work, we present a metric that achieves this goal. It helps determine whether a fuzzy regression model suitably fits the obtained samples and whether the model can provide sufficient accuracy for follow-up analysis of the problem considered.
Chen, Ming-Te, and 陳明德. "Modeling Technology of Interval Values Data." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/65322880443915245970.
Full textNational Ilan University
Master's Program, Department of Electrical Engineering
98
Several methods have been proposed in the literature for symbolic interval-valued data, among them the Centre method (CM), the MinMax method, and the Centre and Range method (CRM). All of these require computing a matrix inverse, and when the condition number of that matrix is large the solution may suffer large errors (i.e., the problem is ill-conditioned). Moreover, these methods do not guarantee that the predicted lower bounds of the linear regression models stay below the predicted upper bounds. To overcome these problems, evolutionary computation (EC) is applied to constrain the search range and find the best parameters of the linear regression models. Simulations show that the proposed method provides satisfactory results.
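For context, the Centre and Range method (CRM) mentioned above can be sketched as two ordinary least-squares fits, one on interval midpoints and one on half-ranges. The names below are illustrative, and, as the abstract notes, nothing in this plain fit forces the predicted lower bound below the upper bound (e.g., when range coefficients go negative), which is the problem the thesis's evolutionary search addresses.

```python
import numpy as np

# Minimal CRM sketch: fit one linear model on interval midpoints and one on
# half-ranges, then reconstruct predicted bounds. Inputs are 1-D arrays of
# lower/upper endpoints for a single predictor and the response.

def crm_fit(X_lo, X_hi, y_lo, y_hi):
    Xc = (X_lo + X_hi) / 2          # centers of the predictor intervals
    Xr = (X_hi - X_lo) / 2          # half-ranges of the predictor intervals
    yc = (y_lo + y_hi) / 2
    yr = (y_hi - y_lo) / 2
    A_c = np.column_stack([np.ones(len(Xc)), Xc])
    A_r = np.column_stack([np.ones(len(Xr)), Xr])
    beta_c, *_ = np.linalg.lstsq(A_c, yc, rcond=None)
    beta_r, *_ = np.linalg.lstsq(A_r, yr, rcond=None)
    return beta_c, beta_r

def crm_predict(beta_c, beta_r, x_lo, x_hi):
    c = beta_c[0] + beta_c[1] * (x_lo + x_hi) / 2
    r = beta_r[0] + beta_r[1] * (x_hi - x_lo) / 2
    return c - r, c + r              # predicted interval [lower, upper]
```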
"Survival analysis issues with interval-censored data." Universitat Politècnica de Catalunya, 2006. http://www.tesisenxarxa.net/TDX-1009106-110207/.
Full textChuang, Chung-mine, and 莊重明. "Sampled-data analysis of interval control system." Thesis, 1998. http://ndltd.ncl.edu.tw/handle/64351052890703380396.
Full textChung Cheng Institute of Technology
Graduate Institute of Electrical Engineering
86
The main purpose of this research is to develop a unified state-space formulation of synchronous and nonsynchronous digital control for interval systems. Some important matrix operations are also investigated; it is shown that the traditional method cannot be used directly for such a problem. Based on the derived model, any discrete-time response of the control system is easily obtained. Illustrative examples demonstrate the effectiveness of the proposed method.
Chen, Z.-Ying, and 陳姿穎. "Distribution Function Estimation for Interval-Censored Data." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/cg2hd2.
Full textChung Yuan Christian University
Graduate Institute of Applied Mathematics
105
We study two examination times in this thesis; that is, we analyze the survival cumulative distribution function with Case II interval-censored data. Following Chang et al. (2005), we describe the cumulative distribution function with a Bernstein polynomial and derive the likelihood function. Since computing the maximum likelihood estimate (MLE) directly is not an easy task, we use Markov chain Monte Carlo with simulated annealing to estimate the parameters, and the simulation results are quite good.
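A minimal sketch of the Bernstein-polynomial device used in this line of work: with coefficients satisfying 0 <= w_0 <= ... <= w_m <= 1, the polynomial below is a valid, nondecreasing distribution function on [0, 1]. The estimation itself (MCMC with simulated annealing) is not attempted here, and the function name is an assumption.

```python
from math import comb

# F(t) = sum_k w_k * C(m, k) * t^k * (1 - t)^(m - k) on [0, 1].
# Monotone, ordered weights in [0, 1] guarantee F is a CDF with
# F(0) = w_0 and F(1) = w_m.

def bernstein_cdf(t, weights):
    m = len(weights) - 1
    return sum(w * comb(m, k) * t ** k * (1 - t) ** (m - k)
               for k, w in enumerate(weights))
```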
Liao, Shih-Feng, and 廖士鋒. "Estimation of Odds for Interval-Censored Data." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/syet4r.
Full textChung Yuan Christian University
Graduate Institute of Applied Mathematics
105
We often want to know the odds of a disease; for instance, what is the ratio of deaths to recoveries among people who get the flu? This thesis studies the odds for Case II interval-censored data. We apply the Bernstein polynomial to model the odds function and use maximum approximate likelihood estimation; however, calculating the maximum approximate estimate is complicated and difficult. We therefore adopt Markov chain Monte Carlo with simulated annealing to estimate the parameters. Extensive statistical simulations give quite satisfactory results.
Liu, Fei-Lin, and 劉飛麟. "Model Fitting and Diagnostics for Interval Data." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/48578s.
Full textNational Chung Cheng University
Graduate Institute of Statistical Science, Department of Mathematics
106
Unlike classical data, interval data consist of a collection of intervals instead of single values. Because of this structure, statistical theories and methods developed for classical data may not apply directly. In this thesis we therefore study two aspects of interval data. First, we develop a new approach to model fitting for interval data and compare it with existing methods. Second, since outliers can seriously distort model fitting and lead to inaccurate conclusions, outlier detection is an essential step of statistical analysis, and it forms the second focus of this thesis. To this end, we propose a new approach that constructs likelihood functions of order statistics, where the underlying mean and variance functions involve a linear regression model, and then employ the local influence approach introduced by Cook (1986) to identify aberrant intervals. Finally, simulation studies and real-data examples illustrate the proposed methods.
PAN, CHIEN-CHENG, and 潘建政. "Linear Regression Analysis for Symbolic Interval Data." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/70421291962916502656.
Full textNational Chung Cheng University
Graduate Institute of Statistical Science, Department of Mathematics
104
In the era of network technology, collected data are growing more complex and larger than before, making them difficult to analyze with standard statistical tools. In this thesis, we focus on estimating linear regression parameters for symbolic interval data. We propose two approaches to estimate the regression parameters under two different data models and compare them with existing methods via simulations. Finally, we analyze two real datasets with the proposed methods for illustration.
Chang, Rong-Fong, and 張榮豐. "Applying Interval Indexing on Music Data Retrieval." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/30195408697672069494.
Full textShu-Te University
Graduate Institute of Computer and Communication
92
In recent years, music retrieval has received growing attention in multimedia databases. Extracting efficient music features plays a significant role in music data retrieval. Most retrieval methods extract specific music features for efficient indexing; these features are then organized into tree-like structures, which increases complexity. Besides, such traditional methods have difficulty locating the matched query sequence in the music database. To rectify these limitations, this thesis proposes a novel music retrieval method based on music intervals and sliding-window techniques. Empirical results show that the proposed approach achieves efficient retrieval, with an average search time of less than one second on a database of 1022 songs.
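The core idea of interval-based matching can be sketched as follows: representing a melody by successive pitch differences makes the query invariant to transposition, and a sliding window locates matches. This is only an illustration with assumed function names, not the thesis's index structure.

```python
# Represent melodies by successive pitch intervals, then slide the query's
# interval pattern across the database melody to find exact matches.

def to_intervals(pitches):
    """Successive pitch differences; transposition-invariant."""
    return [b - a for a, b in zip(pitches, pitches[1:])]

def find_melody(db_pitches, query_pitches):
    """Return start positions where the query's interval pattern occurs."""
    db = to_intervals(db_pitches)
    q = to_intervals(query_pitches)
    n = len(q)
    return [i for i in range(len(db) - n + 1) if db[i:i + n] == q]
```

A query hummed a fifth higher than the recording still matches, because only the intervals are compared.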
Shen, Jiun-Yi, and 沈駿壹. "Estimation in parametric model under interval data." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/35402161041552616803.
Full textNational Cheng Kung University
Master's and Doctoral Program in Applied Mathematics, Department of Mathematics
90
In survival data analysis, exact event times are sometimes not ascertainable. For example, suppose a hemophiliac has received blood contaminated with HIV. After six months his blood test is HIV negative, but after one year it is HIV positive; the observed HIV incubation time is therefore (6, 12]. This kind of observation is called interval data. This thesis proposes an approximation method based on the idea of the method of moments and applies it to the exponential and Weibull distributions. We also compare it with the methods used in Dain (1995) via simulation and an example.
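A deliberately naive moment-type estimator in the spirit described, hedged heavily: replace each interval observation (l, r] by its midpoint and match the sample mean to the exponential mean 1/lambda. The thesis's approximation may well differ; the function name is an assumption.

```python
# Naive stand-in for a moment-based approach: midpoints of the observed
# intervals approximate the unobserved event times, and the exponential
# rate is the reciprocal of their sample mean.

def exp_rate_from_intervals(intervals):
    midpoints = [(l + r) / 2 for l, r in intervals]
    mean = sum(midpoints) / len(midpoints)
    return 1.0 / mean   # estimated exponential rate lambda
```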
Lai, Meng-Hua, and 賴盟化. "Robust Radial basis function networks with linear interval regression for symbolic interval-values data." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/06772982827189326151.
Full textNational Ilan University
Master's Program, Department of Electrical Engineering
99
Real-life data often contain anomalous observations, called outliers, and radial basis function networks (RBFNs) with linear interval regression weights cannot handle them effectively. This thesis therefore proposes robust RBFNs with linear interval regression to address the problems outliers cause. Both the MinMax representation and the Center and Range representation are discussed. In the proposed structure, the learning process has two stages: initialization and fine-tuning. In the first stage, the Robust Interval Competitive Agglomeration (RICA) clustering algorithm initializes the proposed structure (i.e., it determines the number of hidden nodes and the adjustable parameters of the radial basis functions). In the second stage, gradient-descent learning algorithms fine-tune the parameters of the radial basis functions and the coefficients of the linear interval regression model. Finally, simulation results show that this method provides better performance than other methods.
Liu, Chin-Lin, and 劉錦霖. "Radial basis function networks with linear interval regression weights for symbolic interval-values data." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/98329780757511352970.
Full textNational Ilan University
Master's Program, Department of Electrical Engineering
97
This thesis introduces a new structure of radial basis function networks (RBFNs) for modeling symbolic interval-valued data. In this structure, the Gaussian functions and synaptic weights of traditional RBFNs are replaced by Gaussian functions with an interval distance measure and by linear interval regression weights, respectively. The linear interval regression weights consider both the lower and upper bounds of the interval-valued data and their centers and ranges. The learning process has two stages. In stage 1, the initial structure (i.e., the number of hidden nodes and the adjustable parameters of the radial basis functions) is obtained by the interval competitive agglomeration clustering algorithm. In stage 2, gradient-descent learning algorithms fine-tune the parameters of the radial basis functions and the coefficients of the linear interval regression weights. In the simulations, the average root mean squared error and the square of the correlation coefficient, in the framework of a Monte Carlo experiment, serve as performance indices. Simulation results show that the proposed structure performs well.
Tolusso, David. "Robust Methods for Interval-Censored Life History Data." Thesis, 2008. http://hdl.handle.net/10012/3868.
Full textYu, Chia-Sheng, and 游家昇. "Advanced Interval Reduction Test on Data Dependence Analysis." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/37614619392539562742.
Full textShu-Te University
Graduate Institute of Computer and Communication
91
In parallel computer processing, data dependence analysis is crucial for compiling loop programs. Briefly, data dependence analysis determines whether variables in a loop program access the same memory address. The literature on data dependence analysis is too extensive to enumerate. In this thesis, a novel, exact, and efficient data dependence test called the advanced interval reduction (AIR) test is proposed; it combines the GCD test, the Banerjee test, and the IR test to analyze the data dependence of loop programs. Since dependent data may also depend on each other, a new dependent-data grouping method called dynamic table grouping (DTG) is proposed, which dynamically groups dependent and independent data and assigns them to hosts for processing. In addition, a new multi-variable data dependence method, the enhanced advanced interval reduction (Enhanced-AIR) test, is proposed for multi-variable loop programs. Finally, real data dependence experiments verify the exactness and high efficiency of the proposed algorithms.
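Of the three tests the AIR test combines, the classical GCD test is the simplest to sketch: the linear dependence equation a_1*i_1 + ... + a_n*i_n = c has an integer solution iff gcd(a_1, ..., a_n) divides c. Failing the test proves independence; passing it only means dependence is possible. The function name below is an assumption.

```python
from math import gcd
from functools import reduce

# GCD test for the dependence equation sum(a_k * i_k) = c with nonzero
# integer coefficients: solvable over the integers iff gcd of the
# coefficients divides the constant term.

def gcd_test(coeffs, c):
    g = reduce(gcd, (abs(a) for a in coeffs))
    return c % g == 0   # True: dependence possible; False: independent
```

For example, for accesses A[2i] and A[2i+1] the equation 2*i1 - 2*i2 = 1 fails the test (gcd 2 does not divide 1), so the two references never touch the same element.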
Luo, Ren yo, and 羅仁佑. "The Competitive Agglomeration Clustering Algorithm for Interval Data." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/39864853665102296120.
Full textNational Ilan University
Master's Program, Department of Electrical Engineering
96
In this study, an interval competitive agglomeration (ICA) clustering algorithm is proposed to overcome the problems of an unknown number of clusters and of prototype initialization in clustering algorithms for interval-valued data. The proposed ICA clustering algorithm independently considers both the Euclidean distance measure and the Hausdorff distance measure for interval-valued data, and it incorporates the advantages of both hierarchical and partitional clustering algorithms. Hence, the ICA clustering algorithm converges in a few iterations regardless of the initial number of clusters, and it converges to the same optimal partition regardless of its initialization. Experiments with simple data sets and real data sets show the merits and usefulness of the ICA clustering algorithm for interval-valued data.
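The two per-interval distance measures the abstract mentions can be sketched for observations given as (lower, upper) pairs: for intervals [a, b] and [c, d] the Hausdorff distance is max(|a - c|, |b - d|), and a Euclidean-type distance combines squared endpoint differences. The function names are assumptions; the ICA objective itself is not reproduced.

```python
# Distance measures between interval-valued observations (lower, upper),
# as commonly used in symbolic data clustering.

def hausdorff(u, v):
    """Hausdorff distance between intervals u and v."""
    return max(abs(u[0] - v[0]), abs(u[1] - v[1]))

def euclidean(u, v):
    """Euclidean-type distance on the interval endpoints."""
    return ((u[0] - v[0]) ** 2 + (u[1] - v[1]) ** 2) ** 0.5
```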
"Covariance structure analysis with polytomous and interval data." Chinese University of Hong Kong, 1992. http://library.cuhk.edu.hk/record=b5887020.
Full textThesis (M.Phil.)--Chinese University of Hong Kong, 1992.
Includes bibliographical references (leaves 95-96).
Chapter 1 --- Introduction --- p.1
Chapter 2 --- Estimation of the Correlation between Polytomous and Interval Data --- p.6
2.1 --- Model --- p.6
2.2 --- Maximum Likelihood Estimation --- p.8
2.3 --- Partition Maximum Likelihood Estimation --- p.10
2.4 --- Optimization Procedure and Simulation Study --- p.18
Chapter 3 --- Three-stage Procedure for Covariance Structure Analysis --- p.25
3.1 --- Model --- p.25
3.2 --- Three-stage Estimation Method --- p.26
3.3 --- Optimization Procedure and Simulation Study --- p.38
Chapter 4 --- Two-stage Procedure for Correlation Structure Analysis --- p.46
4.1 --- Model --- p.47
4.2 --- Two-stage Estimation Method --- p.47
4.3 --- Optimization Procedure and Monte Carlo Study --- p.50
4.4 --- Comparison of Two Methods --- p.53
Chapter 5 --- Conclusion --- p.56
Tables --- p.58
References --- p.95