Dissertations / Theses on the topic 'Interval data'

To see the other types of publications on this topic, follow the link: Interval data.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Interval data.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Oller Piqué, Ramon. "Survival analysis issues with interval-censored data." Doctoral thesis, Universitat Politècnica de Catalunya, 2006. http://hdl.handle.net/10803/6520.

Full text
Abstract:
Survival analysis is used in various fields for analyzing data involving the duration between two events. It is also known as event history analysis, lifetime data analysis, reliability analysis or time-to-event analysis. One of the difficulties that arise in this area is the presence of censored data. The lifetime of an individual is censored when it cannot be exactly measured but partial information is available. Different circumstances can produce different types of censoring. Interval censoring refers to the situation when the event of interest cannot be directly observed and it is only known to have occurred during a random interval of time. This kind of censoring has generated a great deal of research in recent years and typically occurs in studies where individuals are inspected or observed intermittently, so that an individual's lifetime is known only to lie between two successive observation times.

This PhD thesis is divided into two parts which address two important issues concerning interval-censored data. The first part comprises Chapters 2 and 3 and concerns formal conditions which allow estimation of the lifetime distribution to be based on a well-known simplified likelihood. The second part comprises Chapters 4 and 5 and is devoted to the study of test procedures for the k-sample problem. The present work reproduces several materials which have already been published or submitted for publication.

In Chapter 1 we give the basic notation used in this PhD thesis. We also describe the nonparametric approach to estimate the distribution function of the lifetime variable. Peto (1973) and Turnbull (1976) were the first authors to propose an estimation method which is based on a simplified version of the likelihood function. Other authors have studied the uniqueness of the solution given by this method (Gentleman and Geyer, 1994) or have improved it with new proposals (Wellner and Zhan, 1997).

Chapter 2 reproduces the paper of Oller et al. (2004). We prove the equivalence between the different characterizations of noninformative censoring that have appeared in the literature, and we define a constant-sum condition analogous to the one derived in the context of right censoring. We also prove that when the noninformative condition or the constant-sum condition holds, the simplified likelihood can be used to obtain the nonparametric maximum likelihood estimator (NPMLE) of the failure time distribution function. Finally, we characterize the constant-sum property according to different types of censoring. In Chapter 3 we study the relevance of the constant-sum property in the identifiability of the lifetime distribution. We show that the lifetime distribution is not identifiable outside the class of constant-sum models. We also show that the lifetime probabilities assigned to the observable intervals are identifiable inside the class of constant-sum models. We illustrate all these notions with several examples.

Chapter 4 has been partially published in the survey paper of Gómez et al. (2004). It gives a general view of those procedures which have been applied in the nonparametric problem of the comparison of two or more interval-censored samples. We also develop some S-Plus routines which implement the permutational version of the Wilcoxon test, the Logrank test and the t-test for interval-censored data (Fay and Shih, 1998). This part of the PhD thesis is completed in Chapter 5 by different proposed extensions of Jonckheere's test. In order to test for an increasing trend in the k-sample problem, Abel (1986) gives one of the few generalizations of Jonckheere's test for interval-censored data. We also suggest different Jonckheere-type tests according to the tests presented in Chapter 4. We use permutational and Monte Carlo approaches. We give computer programs for each proposal and perform a simulation study in order to compare the power of each proposal under different parametric assumptions and different alternatives. We motivate both chapters with the analysis of a set of data from a study of the benefits of zidovudine in patients in the early stages of HIV infection (Volberding et al., 1995).

Finally, Chapter 6 summarizes the results and addresses those aspects which remain to be completed.
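
The simplified likelihood referred to above is the one maximized by the Peto (1973) and Turnbull (1976) NPMLE. As a rough illustration, here is a minimal self-consistency (EM) sketch in Python; it is a hypothetical simplification, not the thesis's code, and it places mass on the distinct finite interval endpoints rather than on Turnbull's exact innermost intervals.

```python
import numpy as np

def turnbull_npmle(intervals, n_iter=500, tol=1e-8):
    """Self-consistency (EM) estimate of the NPMLE for interval-censored data.

    `intervals` holds (L, R] observation intervals; each lifetime is only
    known to lie in its interval.  For simplicity, candidate support points
    are the distinct finite endpoints (an approximation of Turnbull's
    innermost intervals).
    """
    pts = np.unique([e for lr in intervals for e in lr if np.isfinite(e)])
    # alpha[i, j] = 1 if support point j lies inside interval i.
    alpha = np.array([[(l < p <= r) for p in pts] for (l, r) in intervals],
                     dtype=float)
    p = np.full(len(pts), 1.0 / len(pts))        # initial probability mass
    for _ in range(n_iter):
        denom = alpha @ p                        # P(T in interval i)
        weights = (alpha * p) / denom[:, None]   # E-step: expected mass
        p_new = weights.mean(axis=0)             # M-step: renormalise
        if np.max(np.abs(p_new - p)) < tol:
            return pts, p_new
        p = p_new
    return pts, p

# Four subjects observed in overlapping intervals.
pts, mass = turnbull_npmle([(0, 2), (1, 3), (2, 4), (1, 2)])
print(dict(zip(pts, mass.round(3))))
```

Each E-step spreads every subject's unit mass over the support points inside its interval; a fixed point of this iteration satisfies the self-consistency equation that characterizes the NPMLE.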
APA, Harvard, Vancouver, ISO, and other styles
2

Long, Yongxian (龙泳先). "Semiparametric analysis of interval censored survival data." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2010. http://hub.hku.hk/bib/B45541152.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Gorelick, Jeremy. "Nonparametric analysis of interval-censored failure time data." Diss., Columbia, Mo. : University of Missouri--Columbia, 2009. http://hdl.handle.net/10355/7009.

Full text
Abstract:
Title from PDF of title page (University of Missouri--Columbia, viewed on Feb 26, 2010). The entire thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file; a non-technical public abstract appears in the public.pdf file. Dissertation advisor: Dr. (Tony) Jianguo Sun. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
4

Zhang, Yue. "Bayesian Cox Models for Interval-Censored Survival Data." University of Cincinnati / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1479476510362603.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Winarko, Edi. "The Discovery and Retrieval of Temporal Rules in Interval Sequence Data." Flinders University. Informatics and Engineering, 2007. http://catalogue.flinders.edu.au./local/adt/public/adt-SFU20080107.164033.

Full text
Abstract:
Data mining is increasingly becoming an important tool for extracting interesting knowledge from large databases. Many industries are now using data mining tools for analysing their large collections of databases and making business decisions. Many data mining problems involve temporal aspects, with examples ranging from engineering to scientific research, finance and medicine. Temporal data mining is an extension of data mining which deals with temporal data. Mining temporal data poses more challenges than mining static data. While the analysis of static data sets often comes down to questions about the data items themselves, with temporal data there are many additional possible relations. One of the tasks in temporal data mining is the pattern discovery task, whose objective is to discover time-dependent correlations, patterns or rules between events in large volumes of data. To date, most temporal pattern discovery research has focused on events existing at a point in time rather than over a temporal interval. In comparison to static rules, mining with respect to time points provides semantically richer rules. However, accommodating temporal intervals offers rules that are richer still. This thesis addresses several issues related to pattern discovery from interval sequence data. Despite its importance, this area of research has received relatively little attention and there are still many issues that need to be addressed. Three main issues that this thesis considers are the definition of what constitutes an interesting pattern in interval sequence data, the efficient mining of patterns in the data, and the identification of interesting patterns from a large number of discovered patterns. In order to deal with these issues, this thesis formulates the problem of discovering rules, which we term richer temporal association rules, from interval sequence databases. Furthermore, this thesis develops an efficient algorithm, ARMADA, for discovering richer temporal association rules. The algorithm does not require candidate generation. It utilizes a simple index, and requires at most two database scans. In this thesis, a retrieval system is proposed to facilitate the selection of interesting rules from a set of discovered richer temporal association rules. To this end, a high-level query language specification, TAR-QL, is proposed to specify the criteria of the rules to be retrieved from the rule sets. Three low-level methods are developed to evaluate queries involving rule format conditions. In order to improve the performance of the methods, signature-file-based indexes are proposed. In addition, this thesis proposes the discovery of inter-transaction relative temporal association rules from event sequence databases.
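
ARMADA's internals are not specified in the abstract; purely as an illustration of the counting problem such algorithms solve, here is a hypothetical sketch that computes the support of one temporal rule between two interval-labelled states using a coarse subset of Allen's relations (this is not the ARMADA index or algorithm).

```python
from typing import List, Tuple

Interval = Tuple[str, float, float]   # (state label, start, end)

def relation(a: Interval, b: Interval) -> str:
    """Classify the temporal relation of a to b (coarse Allen subset)."""
    _, s1, e1 = a
    _, s2, e2 = b
    if e1 < s2:
        return "before"
    if s1 <= s2 and e2 <= e1:
        return "contains"
    if s1 < s2 <= e1 < e2:
        return "overlaps"
    return "other"

def support(db: List[List[Interval]], x: str, y: str, rel: str) -> float:
    """Fraction of sequences containing states x and y in relation rel."""
    hits = sum(
        any(relation(a, b) == rel
            for a in seq if a[0] == x
            for b in seq if b[0] == y)
        for seq in db
    )
    return hits / len(db)

db = [
    [("A", 0, 4), ("B", 2, 6)],   # A overlaps B
    [("A", 0, 1), ("B", 3, 5)],   # A before B
    [("A", 0, 9), ("B", 2, 6)],   # A contains B
]
print(support(db, "A", "B", "overlaps"))   # 1/3
```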
APA, Harvard, Vancouver, ISO, and other styles
6

Shuma, Mercy Violet, 1957-. "Design of a microcomputer "time interval board" for time interval statistical analysis of nuclear systems." Thesis, The University of Arizona, 1988. http://hdl.handle.net/10150/276685.

Full text
Abstract:
Microcomputer-based hardware, the Time Interval Board, was designed and the software interface control program was developed. The board measures time intervals between consecutive pulses from a discriminator output. The data are stored in an on-board 16K × 16 memory. The microcomputer empties and processes the data when the on-board memory is filled. Data collection continues until the preset collection period is finished or a forced end is initiated. During this period, control is passed between the hardware and the microcomputer via the interface circuit. The designed hardware is IBM PC compatible.
APA, Harvard, Vancouver, ISO, and other styles
7

Wang, Lianming. "Statistical analysis of multivariate interval-censored failure time data." Diss., Columbia, Mo. : University of Missouri-Columbia, 2006. http://hdl.handle.net/10355/4375.

Full text
Abstract:
Thesis (Ph.D.)--University of Missouri-Columbia, 2006.
The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file (viewed on May 2, 2007). Vita. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
8

Lim, Hee-Jeong. "Statistical analysis of interval-censored and truncated survival data /." free to MU campus, to others for purchase, 2001. http://wwwlib.umi.com/cr/mo/fullcit?p3025635.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Zhao, Qiang. "Nonparametric treatment comparisons for interval-censored failure time data /." free to MU campus, to others for purchase, 2004. http://wwwlib.umi.com/cr/mo/fullcit?p3144474.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Chen, Man-Hua. "Statistical analysis of multivariate interval-censored failure time data." Diss., Columbia, Mo. : University of Missouri-Columbia, 2007. http://hdl.handle.net/10355/4776.

Full text
Abstract:
Thesis (Ph.D.)--University of Missouri-Columbia, 2007.
The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file (viewed on March 6, 2009). Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
11

Ding, Lili. "Bayesian Frailty Models for Correlated Interval-Censored Survival Data." University of Cincinnati / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1267454031.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Wan, Lijie. "CONTINUOUS TIME MULTI-STATE MODELS FOR INTERVAL CENSORED DATA." UKnowledge, 2016. http://uknowledge.uky.edu/statistics_etds/19.

Full text
Abstract:
Continuous-time multi-state models are widely used in modeling longitudinal data of disease processes with multiple transient states, yet the analysis is complex when subjects are observed periodically, resulting in interval-censored data. Recently, most studies have focused on modeling the true disease progression as a discrete-time stationary Markov chain, and only a few studies have been carried out regarding non-homogeneous multi-state models in the presence of interval-censored data. In this dissertation, several likelihood-based methodologies were proposed to deal with interval-censored data in multi-state models. Firstly, a continuous-time version of a homogeneous Markov multi-state model with backward transitions was proposed to handle uneven follow-up assessments or skipped visits resulting in interval-censored data. Simulations were used to compare the performance of the proposed model with the traditional discrete-time stationary Markov chain under different types of observation schemes. We applied these two methods to the well-known Nun Study, a longitudinal study of 672 participants aged ≥ 75 years at baseline and followed longitudinally with up to ten cognitive assessments per participant. Secondly, we constructed a non-homogeneous Markov model for this type of panel data. The baseline intensity was assumed to be Weibull distributed to accommodate the non-homogeneous property. The proportional hazards method was used to incorporate risk factors into the transition intensities. Simulation studies showed that the Weibull assumption does not affect the accuracy of the parameter estimates for the risk factors. We applied our model to data from the BRAiNS study, a longitudinal cohort of 531 subjects each cognitively intact at baseline. Last, we presented a parametric method of fitting semi-Markov models based on Weibull transition intensities to interval-censored cognitive data with death as a competing risk. We relaxed the Markov assumption and took interval censoring into account by integrating out all possible unobserved transitions. The proposed model also allowed for incorporating time-dependent covariates. We provided a goodness-of-fit assessment for the proposed model by means of prevalence counts. To illustrate the methods, we applied our model to the BRAiNS study.
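
The core likelihood computation behind a homogeneous continuous-time Markov model for such panel (interval-censored) observations can be sketched as follows: each pair of consecutive visits contributes one entry of the transition-probability matrix exp(QΔt). The three-state layout and the intensity matrix Q below are hypothetical, not the dissertation's fitted model.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical intensity matrix (rows sum to zero):
# states 0 = intact, 1 = impaired (with a backward transition), 2 = dementia.
Q = np.array([[-0.30,  0.25, 0.05],
              [ 0.10, -0.40, 0.30],
              [ 0.00,  0.00, 0.00]])   # state 2 is absorbing

def panel_log_likelihood(Q, times, states):
    """Log-likelihood of one subject's panel data.

    States are seen only at visit times, so each gap contributes the entry
    P(dt)[s_k, s_{k+1}] with P(dt) = expm(Q * dt).
    """
    ll = 0.0
    for k in range(len(times) - 1):
        P = expm(Q * (times[k + 1] - times[k]))
        ll += np.log(P[states[k], states[k + 1]])
    return ll

# A subject seen at years 0, 1, 2.5 and 4 in states 0 -> 0 -> 1 -> 2.
print(panel_log_likelihood(Q, [0.0, 1.0, 2.5, 4.0], [0, 0, 1, 2]))
```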
APA, Harvard, Vancouver, ISO, and other styles
13

Saad, Aya Hassan [author]. "CDF-intervals: a probabilistic interval constraint framework to reason about data with uncertainty / Aya Hassan Saad." Ulm : Universität Ulm, 2016. http://d-nb.info/1101578289/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Hauli, D. E. "Spline functions and their application to analysis of interval data : Breastfeeding durations and closed birth intervals." Thesis, University of Southampton, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.375510.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Sävhammar, Simon. "Uniform interval normalization : Data representation of sparse and noisy data sets for machine learning." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-19194.

Full text
Abstract:
The uniform interval normalization technique is proposed as an approach to handle sparse and noisy data. The technique is evaluated by transforming and normalizing the MoodMapper and Safebase data sets, and the predictive capabilities are compared by forecasting the data sets with an LSTM model. The results are compared to both the commonly used MinMax normalization technique and MinMax normalization with a time2vec layer. It was found that uniform interval normalization performed better on both the sparse MoodMapper data set and the denser Safebase data set. Future work consists of studying the performance of uniform interval normalization on other data sets and with other machine learning models.
APA, Harvard, Vancouver, ISO, and other styles
16

Zhu, Chao. "Nonparametric and semiparametric methods for interval-censored failure time data." Diss., Columbia, Mo. : University of Missouri-Columbia, 2006. http://hdl.handle.net/10355/4415.

Full text
Abstract:
Thesis (Ph.D.)--University of Missouri-Columbia, 2006.
The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file (viewed on May 2, 2007). Vita. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
17

Wei, Shaoceng. "MULTI-STATE MODELS FOR INTERVAL CENSORED DATA WITH COMPETING RISK." UKnowledge, 2015. http://uknowledge.uky.edu/statistics_etds/10.

Full text
Abstract:
Multi-state models are often used to evaluate the effect of death as a competing event to the development of dementia in a longitudinal study of the cognitive status of elderly subjects. In this dissertation, both a multi-state Markov model and a semi-Markov model are used to characterize the flow of subjects from intact cognition to dementia, with mild cognitive impairment and global impairment as intervening transient cognitive states and death as a competing risk. Firstly, a multi-state Markov model with three transient states, intact cognition, mild cognitive impairment (M.C.I.) and global impairment (G.I.), and one absorbing state, dementia, is used to model the cognitive panel data. A Weibull model and a Cox proportional hazards (Cox PH) model are used to fit the time to death based on age at entry and APOE4 status. A shared random effect correlates this survival time with the transition model. Secondly, we further apply a semi-Markov process in which we assume that the waiting times are Weibull distributed except for transitions from the baseline state, which are exponentially distributed, and we assume no additional changes in cognition occur between two assessments. We implement a quasi-Monte Carlo (QMC) method to calculate the higher-order integration needed for the likelihood-based estimation. At the end of this dissertation we extend a non-parametric "local EM algorithm" to obtain a smooth estimator of the cause-specific hazard function (CSH) in the presence of competing risks. All the proposed methods are justified by simulation studies and applications to the Nun Study data, a longitudinal study of late-life cognition in a cohort of 461 subjects.
APA, Harvard, Vancouver, ISO, and other styles
18

Xiao, Fei. "Hexahedral Mesh Generation from Volumetric Data by Dual Interval Volume." The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1532003347814656.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Alparslan Gok, Sirma Zeynep. "Cooperative Interval Games." PhD thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/3/12610337/index.pdf.

Full text
Abstract:
Interval uncertainty affects our decision-making activities on a daily basis, making the data structure of intervals of real numbers more and more popular in theoretical models and related software applications. Natural questions for people or businesses that face interval uncertainty in their data when dealing with cooperation are how to form coalitions and how to distribute the collective gains or costs. The theory of cooperative interval games is a suitable tool for answering these questions. In this thesis, the classical theory of cooperative games is extended to cooperative interval games. First, basic notions and facts from classical cooperative game theory and interval calculus are given. Then, the model of cooperative interval games is introduced and basic definitions are given. Solution concepts of selection-type and interval-type for cooperative interval games are intensively studied. Further, special classes of cooperative interval games like convex interval games and big boss interval games are introduced and various characterizations are given. Some economic and operations research situations such as airport, bankruptcy and sequencing situations with interval data and related interval games have also been studied. Finally, some algorithmic aspects related to the interval Shapley value and the interval core are considered.
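
For size-monotonic interval games, the interval Shapley value can be computed coordinate-wise by applying the classical Shapley formula to the lower-bound game and to the upper-bound game. A small sketch with a hypothetical three-player interval game (the coalition worths are made up for illustration):

```python
from itertools import permutations
from math import factorial

def shapley(v, n):
    """Classical Shapley value of a TU game v: frozenset -> worth."""
    phi = [0.0] * n
    for order in permutations(range(n)):
        coalition = frozenset()
        for player in order:
            joined = coalition | {player}
            phi[player] += v[joined] - v[coalition]   # marginal contribution
            coalition = joined
    return [x / factorial(n) for x in phi]

# Each coalition's worth is an interval [lower, upper] (hypothetical game).
n = 3
worth = {
    frozenset(): (0, 0),
    frozenset({0}): (1, 2), frozenset({1}): (1, 2), frozenset({2}): (0, 1),
    frozenset({0, 1}): (4, 6), frozenset({0, 2}): (3, 5),
    frozenset({1, 2}): (2, 4), frozenset({0, 1, 2}): (8, 10),
}
lower = {S: w[0] for S, w in worth.items()}
upper = {S: w[1] for S, w in worth.items()}
# Interval Shapley value: pair the lower-game and upper-game Shapley values.
print(list(zip(shapley(lower, n), shapley(upper, n))))
```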
APA, Harvard, Vancouver, ISO, and other styles
20

Hildreth, John C. "The Use of Short-Interval GPS Data for Construction Operations Analysis." Diss., Virginia Tech, 2003. http://hdl.handle.net/10919/26120.

Full text
Abstract:
The global positioning system (GPS) makes use of extremely accurate measures of time to determine position. The times required for electronic signals to travel at the speed of light from at least four orbiting satellites to a receiver on earth are measured precisely and used to calculate the distances from the satellites to the receiver. The calculated distances are used to determine the position of the receiver through triangulation. This research takes an approach opposite to the original GPS research, focusing on the use of position to determine the time at which events occur. Specifically, this work addresses the question: can the information pertaining to position and speed contained in a GPS record be used to autonomously identify the times at which critical events occur within a production cycle? The research question was answered by determining the hardware needs for collecting the desired data in a usable format and developing a unique data collection tool to meet those needs. The tool was field-evaluated, and the data collected were used to determine the software needs for automated reduction of the data to the times at which key events occurred. The software tools were developed in the form of Time Identification Modules (TIMs). The TIMs were used to reduce data collected from a load-and-haul earthmoving operation to duration measures for the load, haul, dump, and return activities. The value of the developed system was demonstrated by investigating correlations between performance times in construction operations and by using field data to verify the results obtained from productivity estimating tools. Use of the system was shown to improve knowledge and provide additional insight into operations analysis studies.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
21

Topp, Rebekka. "Regression and residual analysis in linear models with interval censored data." Doctoral thesis, Universitat Politècnica de Catalunya, 2002. http://hdl.handle.net/10803/6511.

Full text
Abstract:
This work consists of two parts, both related to regression analysis for interval-censored data. Interval-censored data x have the property that their value cannot be observed exactly; only the respective interval [xL, xR], which contains the true value x with probability one, can be observed.

In the first part of this work I develop an estimation theory for the regression parameters of the linear model where both dependent and independent variables are interval censored. In doing so I use a semi-parametric maximum likelihood approach which determines the parameter estimates via maximization of the likelihood function of the data. Since the density function of the covariate is unknown due to interval censoring, the maximization problem is solved through an algorithm which firstly determines the unknown density function of the covariate and then maximizes the complete-data likelihood function. The unknown covariate density is hereby determined nonparametrically through a modification of the approach of Turnbull (1976). The resulting parameter estimates are given under the assumption that the distribution of the model errors belongs to the exponential family or is Weibull. In addition I extend my estimation theory to the case where the regression model includes both an interval-censored and an uncensored covariate. Since the derivation of the theoretical statistical properties of the developed parameter estimates is rather complex, simulations were carried out to determine the quality of the estimates. As a result it can be seen that the estimated values for the regression parameters are always very close to the real ones. Finally, some alternative estimation methods for this regression problem are discussed.

In the second part of this work I develop a residual theory for the linear regression model where the covariate is interval censored but the dependent variable can be observed exactly. In this case the model errors appear to be interval censored, and so do the residuals. This leads to the problem of not directly observable residuals, which is solved in the following way: since one assumption of the linear regression model is the N(0, σ²) distribution of the model errors, it follows that the distribution of the interval-censored errors is a truncated normal distribution, the truncation being determined by the observed model error intervals. Consequently, the distribution of the interval-censored residuals is a normal distribution with estimated variance, truncated to the respective residual interval, where the estimation of the residual variance is accomplished through the method of Gómez et al. (2002). In a simulation study I compare the behaviour of the residuals constructed in this way with those of Gómez et al. (2002) and a naïve type of residuals which takes the middle of the residual interval as the observed residual. The results show that my residuals can be used for most of the simulated scenarios, whereas this is not the case for the other two types of residuals. Finally, my new residual theory is applied to a data set from a clinical study.
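
The key step in this residual theory can be illustrated briefly: given an observed residual interval and an estimated error variance, a natural point summary is the mean of a zero-mean normal distribution truncated to that interval. A sketch using scipy's truncnorm (illustrative values, not the thesis's implementation):

```python
from scipy.stats import truncnorm

def truncated_residual(e_low, e_high, sigma):
    """Mean of N(0, sigma^2) truncated to the observed residual interval.

    truncnorm takes the truncation bounds in standardised units, so the
    interval endpoints are divided by sigma first.
    """
    a, b = e_low / sigma, e_high / sigma
    return truncnorm.mean(a, b, loc=0.0, scale=sigma)

# A residual known only to lie in [-0.5, 2.0], with estimated sigma = 1.2:
print(truncated_residual(-0.5, 2.0, 1.2))
```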
APA, Harvard, Vancouver, ISO, and other styles
22

Ketchum, Jessica McKinney. "A Normal-Mixture Model with Random-Effects for RR-Interval Data." VCU Scholars Compass, 2006. http://hdl.handle.net/10156/1979.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Tehranchi, Babak, 1968-. "Time-interval quantization in a high-density optical data storage system." Thesis, The University of Arizona, 1992. http://hdl.handle.net/10150/278151.

Full text
Abstract:
A hardware system for investigating intersymbol interference (ISI) in an optical data storage system has been designed and constructed by the author. The system consists of a pattern generator which produces data patterns of variable lengths and bit rates to be recorded on the optical disk. Data marks of the readback signal are quantized by a light-speed clock-counter system and transferred in parallel to a personal computer for analysis. SNR values for collected data are obtained by computing mark size deviations of the readback signal from the original marks. A pseudo-random pattern of 31 bits is used for calculating SNR values for different spot sizes. Finally, the Additive Interleaving Detection (AID) technique is implemented to compute another set of SNR values. A 3-5 dB SNR improvement is observed when the AID technique is used.
APA, Harvard, Vancouver, ISO, and other styles
24

Wong, Kin-yau (黃堅祐). "Analysis of interval-censored failure time data with long-term survivors." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2012. http://hub.hku.hk/bib/B48199473.

Full text
Abstract:
Failure time data analysis, or survival analysis, is involved in various research fields, such as medicine and public health. One basic assumption in standard survival analysis is that every individual in the study population will eventually experience the event of interest. However, this assumption is usually violated in practice, for example when the variable of interest is the time to relapse of a curable disease, resulting in the existence of long-term survivors. Also, presence of unobservable risk factors in the group of susceptible individuals may introduce heterogeneity to the population, which is not properly addressed in standard survival models. Moreover, the individuals in the population may be grouped in clusters, where there are associations among observations from a cluster. There are methodologies in the literature to address each of these problems, but there is yet no natural and satisfactory way to accommodate the coexistence of a non-susceptible group and the heterogeneity in the susceptible group under a univariate setting. Also, various kinds of associations among survival data with a cure are not properly accommodated. To address the above-mentioned problems, a class of models is introduced to model univariate and multivariate data with long-term survivors. A semiparametric cure model for univariate failure time data with long-term survivors is introduced. It accommodates a proportion of non-susceptible individuals and the heterogeneity in the susceptible group using a compound-Poisson distributed random effect term, which is commonly called a frailty. It is a frailty-Cox model which does not place any parametric assumption on the baseline hazard function. An estimation method using multiple imputation is proposed for right-censored data, and the method is naturally extended to accommodate interval-censored data. The univariate cure model is extended to a multivariate setting by introducing correlations among the compound-Poisson frailties for individuals from the same cluster. This multivariate cure model is similar to a shared frailty model where the degree of association among each pair of observations in a cluster is the same. The model is further extended to accommodate repeated measurements from a single individual leading to serially correlated observations. Similar estimation methods using multiple imputation are developed for the multivariate models. The univariate model is applied to a breast cancer data set and the multivariate models are applied to the hypobaric decompression sickness data from the National Aeronautics and Space Administration, although the methodologies are applicable to a wide range of data sets.
Statistics and Actuarial Science
Master of Philosophy
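
The cure fraction in a compound-Poisson frailty model arises because the frailty has positive mass at zero: subjects whose Poisson count is zero have zero frailty and never experience the event. A simulation sketch under assumed parameter values (hypothetical, not the thesis's model or code):

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_cure_times(n, rho=1.0, shape=0.8, scale=1.0, base_rate=0.2):
    """Simulate event times under a compound-Poisson frailty cure model.

    Frailty Z is a sum of N iid Gamma(shape, scale) terms, N ~ Poisson(rho),
    so P(cured) = P(N = 0) = exp(-rho).  Given Z > 0 the event time is taken
    to be exponential with hazard Z * base_rate; cured subjects get +inf.
    """
    N = rng.poisson(rho, size=n)
    Z = np.array([rng.gamma(shape, scale, size=k).sum() for k in N])
    times = np.full(n, np.inf)
    susceptible = Z > 0
    times[susceptible] = rng.exponential(1.0 / (Z[susceptible] * base_rate))
    return times

t = simulate_cure_times(10_000)
print("empirical cure fraction:", np.isinf(t).mean())   # about exp(-1) = 0.368
```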
APA, Harvard, Vancouver, ISO, and other styles
25

Bergenholm, Linnéa. "Predicting QRS and PR interval prolongations in humans using nonclinical data." Thesis, University of Warwick, 2017. http://wrap.warwick.ac.uk/92121/.

Full text
Abstract:
Risk of cardiac conduction slowing (QRS/PR interval prolongations in monitored electrocardiograms) is assessed in nonclinical studies, where the current AstraZeneca strategy involves ensuring high margins to in vitro effects and statistical tests to identify in vivo effects. This thesis aims to improve QRS/PR risk assessment by using pharmacokinetic-pharmacodynamic modelling to describe QRS/PR effects and evaluate translation to human effects. Data for six compounds were collected from the literature and previously performed in vitro (sodium/calcium channel), in vivo (guinea pig/dog) and clinical AstraZeneca studies. Mathematical models were developed and evaluated to describe and compare effects across compounds and species. Key results were that proportional drug effect models often suffice for small QRS/PR changes (up to 20%), while larger effects require nonlinear models. Heart rate correction and circadian rhythm models reduced residuals primarily for describing baseline PR intervals, with the highest impact in humans followed by dogs and guinea pigs. Meaningful (10%) human QRS/PR changes correlated with low levels of sodium channel block (3-7%) and calcium channel binding (13-21%) and with small effects in guinea pigs and dogs (QRS 2.3-4.6% and PR 2.3-10%). This suggests that worst-case human effects can be predicted by assuming effects four times greater at the same concentration than in dog/guinea pig. Small changes in vitro and in vivo consistently translate to meaningful PR/QRS changes in humans across compounds. Accurate characterisation of concentration-effect relationships therefore requires a model-based approach. Although the presented work is limited by the small number of investigated compounds, it provides a starting point for predicting human risk using routine QRS/PR data to improve the safety of new drugs.
APA, Harvard, Vancouver, ISO, and other styles
26

El Ouassouli, Amine. "Discovering complex quantitative dependencies between interval-based state streams." Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEI061.

Full text
Abstract:
The increasing utilization of sensor devices, in addition to human-given data, makes it possible to capture real-world systems' complexity through rich temporal descriptions. More precisely, the usage of a multitude of data source types allows an environment to be monitored by describing the evolution of several of its dimensions through data streams. One core characteristic of such configurations is heterogeneity, which appears at different levels of the data generation process: data sources, time models and data models. In such a context, one challenging task for monitoring systems is to discover non-trivial temporal knowledge that is directly actionable and suitable for human interpretation. In this thesis, we firstly propose to use a Temporal Abstraction (TA) approach to express information given by heterogeneous raw data streams with a unified interval-based representation, called state streams. A state reports on a high-level environment configuration that is of interest for an application domain. Such an approach solves problems introduced by heterogeneity, provides a high-level pattern vocabulary and also permits integrating expert knowledge into the discovery process. Second, we introduce the Complex Temporal Dependency (CTD), a quantitative interval-based pattern model. It is defined similarly to a conjunctive normal form and allows complex temporal relations between states to be expressed. Contrary to the majority of existing pattern models, a CTD is evaluated with an automatic statistical assessment of stream intersections, avoiding any user-given significance parameter. Third, we propose CTD-Miner, a first efficient CTD mining framework. CTD-Miner performs an incremental dependency construction. CTD-Miner benefits from pruning techniques based on a statistical correspondence relationship that aim to accelerate the exploration of the search space by reducing redundant information and to provide a more usable result set. Finally, we propose the Interval Time Lag Discovery (ITLD) algorithm. ITLD is based on a confidence variation heuristic that reduces the complexity of the pairwise dependency discovery process from quadratic to linear with respect to a temporal constraint Δ on time lags. Experiments on simulated and real-world data showed that ITLD efficiently provides more accurate results in comparison with existing approaches. Hence, ITLD significantly enhances the accuracy, performance and scalability of CTD-Miner. The encouraging results given by CTD-Miner on our real-world motion data set suggest that it is possible to integrate insights given by real-time video processing approaches into a knowledge discovery process, opening interesting perspectives for monitoring smart environments.
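
Purely as an illustration of the pairwise step that ITLD addresses, here is a toy confidence measure for the rule "state A is followed by state B within a lag of at most delta", computed from two state streams; the streams and the measure are simplified assumptions, not the ITLD algorithm itself.

```python
from typing import List, Tuple

Interval = Tuple[float, float]   # (start, end) of one state occurrence

def confidence_within(a_stream: List[Interval],
                      b_stream: List[Interval],
                      delta: float) -> float:
    """Fraction of A-intervals whose end is followed, within at most
    delta time units, by the start of some B-interval."""
    hits = sum(
        any(0.0 <= s_b - e_a <= delta for (s_b, _) in b_stream)
        for (_, e_a) in a_stream
    )
    return hits / len(a_stream)

door_open = [(0, 2), (10, 11), (20, 21)]
light_on = [(2.5, 8), (21.5, 30)]
print(confidence_within(door_open, light_on, delta=1.0))   # 2/3
```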
APA, Harvard, Vancouver, ISO, and other styles
27

Neethling, Willem Francois. "Comparison of methods to calculate measures of inequality based on interval data." Thesis, Stellenbosch : Stellenbosch University, 2015. http://hdl.handle.net/10019.1/97780.

Full text
Abstract:
Thesis (MComm)—Stellenbosch University, 2015.
In recent decades, economists and sociologists have taken an increasing interest in the study of income attainment and income inequality. Many of these studies have used census data, but social surveys have also increasingly been utilised as sources for these analyses. In these surveys, respondents' incomes are most often not measured in true amounts, but in categories of which the last category is open-ended. The reason is that income is seen as sensitive data and/or is sometimes difficult to reveal. Continuous data divided into categories is often more difficult to work with than ungrouped data. In this study, we compare different methods to convert grouped data to data where each observation has a specific value or point. For some methods, all the observations in an interval receive the same value; an example is the midpoint method, where all the observations in an interval are assigned the midpoint. Other methods include random methods, where each observation receives a random point between the lower and upper bound of the interval. For some methods, random and non-random, a distribution is fitted to the data and a value is calculated according to the distribution. The non-random methods that we use are the midpoint-, Pareto means- and lognormal means methods; the random methods are the random midpoint-, random Pareto- and random lognormal methods. Since our focus falls on income data, which usually follows a heavy-tailed distribution, we use the Pareto and lognormal distributions in our methods. The above-mentioned methods are applied to simulated and real datasets. The raw values of these datasets are known, and are categorised into intervals. These methods are then applied to the interval data to reconvert the interval data to point data. To test the effectiveness of these methods, we calculate some measures of inequality. The measures considered are the Gini coefficient, quintile share ratio (QSR), the Theil measure and the Atkinson measure. The estimated measures of inequality, calculated from each dataset obtained through these methods, are then compared to the true measures of inequality.
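
The simplest of the compared approaches, the midpoint method, and the Gini coefficient it feeds into can be sketched as follows. The income brackets, counts and the open-ended top-bracket convention below are hypothetical assumptions for illustration, not the study's data.

```python
import numpy as np

def midpoints(brackets, counts, top_multiplier=1.5):
    """Midpoint method: every observation in a bracket gets its midpoint.

    `brackets` are (lower, upper) income bounds; the open-ended top bracket
    (upper = inf) is assigned lower * top_multiplier, one common convention.
    """
    values = []
    for (lo, hi), n in zip(brackets, counts):
        point = lo * top_multiplier if np.isinf(hi) else (lo + hi) / 2.0
        values.extend([point] * n)
    return np.array(values)

def gini(x):
    """Gini coefficient via the sorted-data formula
    G = sum_i (2i - n - 1) x_(i) / (n^2 * mean)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    i = np.arange(1, n + 1)
    return np.sum((2 * i - n - 1) * x) / (n ** 2 * x.mean())

brackets = [(0, 10_000), (10_000, 25_000), (25_000, 50_000), (50_000, np.inf)]
counts = [120, 300, 240, 40]
print(round(gini(midpoints(brackets, counts)), 3))
```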
APA, Harvard, Vancouver, ISO, and other styles
28

Wang, Jiaping. "The generalized MLE with the interval centered and masked competing risks data." Diss., Online access via UMI, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
29

Davidse, Alistair. "A comparison of methods for analysing interval-censored and truncated survival data." Master's thesis, University of Cape Town, 2004. http://hdl.handle.net/11427/4376.

Full text
Abstract:
Bibliography: leaves 49-50.
This thesis examines three methods for analysing right-censored data: the Cox proportional hazards model (Cox, 1972), the Buckley-James regression model (Buckley and James, 1979) and the accelerated failure time model. These models are extended to incorporate the analysis of interval-censored and left-truncated data. The models are compared in an attempt to determine whether one model performs better than the others in terms of goodness-of-fit and in terms of predictive power. Plots of the residuals and random effects from the Cox proportional hazards model are also examined.
APA, Harvard, Vancouver, ISO, and other styles
30

Pantoja-Galicia, Norberto. "Interval Censoring and Longitudinal Survey Data." Thesis, 2007. http://hdl.handle.net/10012/3224.

Full text
Abstract:
Being able to explore a relationship between two life events is of great interest to scientists from different disciplines. Some issues of particular concern are, for example, the connection between smoking cessation and pregnancy (Thompson and Pantoja-Galicia 2003), the interrelation between entry into marriage for individuals in a consensual union and first pregnancy (Blossfeld and Mills 2003), and the association between job loss and divorce (Charles and Stephens 2004, Huang 2003 and Yeung and Hofferth 1998). Establishing causation in observational studies is seldom possible. Nevertheless, if one of two events tends to precede the other closely in time, a causal interpretation of an association between these events can be more plausible. The role of longitudinal surveys is crucial, then, since they allow sequences of events for individuals to be observed. Thompson and Pantoja-Galicia (2003) discuss in this context several notions of temporal association and ordering, and propose an approach to investigate a possible relationship between two lifetime events. In longitudinal surveys individuals might be asked questions of particular interest about two specific lifetime events. Therefore the joint distribution might be advantageous for answering questions of particular importance. In follow-up studies, however, it is possible that interval-censored data may arise for several reasons. For example, actual dates of events might not have been recorded, or are missing, for a subset of (or all) the sampled population, and can be established only to within specified intervals. Along with the notions of temporal association and ordering, Thompson and Pantoja-Galicia (2003) also discuss the concept of one type of event "triggering" another. In addition they outline the construction of tests for these temporal relationships. The aim of this thesis is to implement some of these notions using interval-censored data from longitudinal complex surveys. Therefore, we present some proposed tools that may be used for this purpose. This dissertation is divided into five chapters. The first chapter presents a notion of a temporal relationship along with a formal nonparametric test. The mechanisms of right censoring, interval censoring and left truncation are also overviewed. Issues concerning complex survey designs are discussed at the end of this chapter. For the remaining chapters of the thesis, we note that the corresponding formal nonparametric test requires estimation of a joint density; therefore, in the second chapter a nonparametric approach for bivariate density estimation with interval-censored survey data is provided. The third chapter is devoted to modelling shorter-term triggering using complex survey bivariate data. The semiparametric models in Chapter 3 consider both noncensoring and interval censoring situations. The fourth chapter presents some applications using data from the National Population Health Survey and the Survey of Labour and Income Dynamics from Statistics Canada. An overall discussion is included in the fifth chapter, and topics for future research are also addressed in this last chapter.
APA, Harvard, Vancouver, ISO, and other styles
31

Lee, Pei-Fen (李佩芬). "Technical Efficiency Estimation with Interval Data." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/96328111969308893849.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Department of Industrial Engineering and Management
Academic year 97 (2008)
Non-parametric efficiency and productivity analysis assumes that deterministic data are properly provided. This underlying assumption, however, is not always true in reality. In this work, we investigate how to estimate technical efficiency when only interval data are given due to imprecise information. Rather than assuming probability distributions for the data uncertainty, interval data represent the ranges of possible realizations. We approach the problem by proposing some necessary properties for proper estimations of efficient frontiers and technical efficiency based on interval data. Two estimation models corresponding to the conventional deterministic models, free disposal hull (FDH) and variable returns to scale (VRS), are also proposed. It is shown that both proposed models satisfy the necessary properties, and thus they are appropriate estimations.
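
For reference, the classical deterministic input-oriented FDH score that the interval models generalise compares a unit against every unit whose outputs weakly dominate its own. A sketch with hypothetical data (the interval extensions themselves are not reproduced here):

```python
import numpy as np

def fdh_input_efficiency(X, Y, k):
    """Input-oriented FDH efficiency of unit k.

    X: (n_units, n_inputs), Y: (n_units, n_outputs).  Among units whose
    outputs weakly dominate unit k's,
        theta_k = min_j max_i X[j, i] / X[k, i],
    the smallest uniform input contraction unit k could achieve.
    """
    dominating = np.all(Y >= Y[k], axis=1)   # units producing at least Y[k]
    ratios = X[dominating] / X[k]
    return np.min(np.max(ratios, axis=1))

X = np.array([[2.0, 4.0], [3.0, 3.0], [4.0, 2.0], [4.0, 5.0]])  # two inputs
Y = np.ones((4, 1))                                             # one output
print([round(fdh_input_efficiency(X, Y, k), 3) for k in range(4)])
# Units 0-2 are FDH-efficient (score 1.0); unit 3 is dominated (0.75).
```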
APA, Harvard, Vancouver, ISO, and other styles
32

Kuo, Hsien-Chun (郭獻駿). "A clustering method for interval data." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/96657625457981344710.

Full text
Abstract:
Master's thesis
Chung Yuan Christian University
Graduate Institute of Applied Mathematics
Academic year 99 (2010)
In fuzzy clustering, there are many different clustering methods for different data types. The main purpose of clustering algorithms is to cluster a given data set. In this thesis, we propose a clustering algorithm by extending Yang and Wu's clustering algorithm, called SCM, such that it can handle interval data sets, finding the best representative range for each group and also the best number of clusters. To demonstrate that this method is a good clustering algorithm, we perform some simulations with sampled data and also some real data sets. The results show that the algorithm produces good clustering results for interval data.
APA, Harvard, Vancouver, ISO, and other styles
33

Huang, Wen-Zhi (黃文志). "Fuzzy clustering algorithms for interval data." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/m9w7eg.

Full text
Abstract:
Master's thesis
Chung Yuan Christian University
Graduate Institute of Mathematics
Academic year 102 (2013)
In cluster analysis, the fuzzy c-means (FCM) clustering algorithm is the most widely used method. The main purpose of clustering analysis is to cluster a given data set. In this thesis, we propose a clustering algorithm by extending Yang and Wu's [6] clustering algorithm, called AFCM, such that it can handle interval data with the best representatives. To show that this method is a good clustering algorithm, we perform some simulations with sampled data and also some real data sets, and compare it with the interval fuzzy c-means (IFCM) algorithm proposed by de Carvalho [2]. The results show that the proposed method has good clustering results.
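
The flavour of interval fuzzy c-means can be sketched by running FCM with a squared distance that sums lower-bound and upper-bound differences, with prototypes given by membership-weighted means of the bounds. This is a minimal sketch in the spirit of de Carvalho-style IFCM, not the thesis's AFCM implementation.

```python
import numpy as np

def interval_fcm(L, U, c=2, m=2.0, n_iter=100, seed=0):
    """Fuzzy c-means for interval data [L, U], each (n samples x p features).

    Squared distance between object and prototype intervals is the sum of
    squared lower-bound and squared upper-bound differences.
    """
    rng = np.random.default_rng(seed)
    mem = rng.dirichlet(np.ones(c), size=L.shape[0])   # fuzzy memberships
    for _ in range(n_iter):
        W = mem ** m
        Lc = (W.T @ L) / W.sum(axis=0)[:, None]        # prototype lower bounds
        Uc = (W.T @ U) / W.sum(axis=0)[:, None]        # prototype upper bounds
        d2 = np.stack([((L - Lc[j]) ** 2 + (U - Uc[j]) ** 2).sum(axis=1)
                       for j in range(c)], axis=1) + 1e-12
        # Standard FCM membership update with fuzzifier m.
        mem = 1.0 / (d2 ** (1 / (m - 1))
                     * np.sum(d2 ** (-1 / (m - 1)), axis=1, keepdims=True))
    return mem, Lc, Uc

# Two well-separated groups of unit-width intervals on the line.
L = np.array([[0.0], [0.5], [1.0], [8.0], [8.5], [9.0]])
U = L + 1.0
mem, Lc, Uc = interval_fcm(L, U)
print(mem.round(2))
```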
APA, Harvard, Vancouver, ISO, and other styles
34

Yang, Tzung-cheng (楊宗承). "Analysis of Interval Time Series Data." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/2wsk9e.

Full text
Abstract:
Master's thesis
National Sun Yat-sen University
Department of Applied Mathematics
Academic year 103 (2014)
Conventional time series methods are developed for analyzing point-valued data. However, in practice there are many interval-valued time series data, which usually contain more information than point-valued data. It is thus important to develop time series modeling and forecasting techniques for interval-valued data. In this paper, we introduce concepts of interval stationarity and related interval statistics and investigate methodology for interval time series analysis. We use vector autoregressive (VAR) and vector error correction (VEC) models to build time series models for interval statistics, including the midpoint, radius, and upper and lower bounds, and obtain interval forecasts. We compare the forecast performance of the proposed methods with a classical filtering technique, the exponential smoothing method, and a nonparametric technique, the k-nearest neighbors (k-NN) algorithm. Finally, in the empirical study, we use stock and index data to evaluate the forecast performance and efficiency of the proposed interval time series models.
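
The midpoint-radius modelling idea can be sketched with statsmodels: fit a bivariate VAR to the (midpoint, radius) series and map forecasts back to interval bounds. The simulated series and lag order below are assumptions for illustration, not the paper's data or fitted models (the paper additionally considers VEC models for non-stationary series).

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)

# Simulate an interval time series via a stationary AR(1) midpoint
# and a positive radius (half-width).
n = 200
center = np.empty(n)
center[0] = 100.0
for t in range(1, n):
    center[t] = 100.0 + 0.7 * (center[t - 1] - 100.0) + rng.normal(0, 1)
radius = np.abs(rng.normal(2.0, 0.3, n))

df = pd.DataFrame({"center": center, "radius": radius})
fit = VAR(df).fit(maxlags=2)
fc = fit.forecast(df.values[-fit.k_ar:], steps=5)   # 5-step-ahead forecast

lower = fc[:, 0] - np.maximum(fc[:, 1], 0.0)        # clip radius at zero
upper = fc[:, 0] + np.maximum(fc[:, 1], 0.0)
print(np.column_stack([lower, upper]).round(2))     # forecast intervals
```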
APA, Harvard, Vancouver, ISO, and other styles
35

Cai, Hao-Xu (蔡皓旭). "Interval regression analysis with fuzzy data." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/48136091858014961077.

Full text
Abstract:
Master's thesis
National Chengchi University
Department of Applied Mathematics
Academic year 104 (2015)
Objective: This study concerns how to develop effective fuzzy regression models. In the literature, little is addressed on how to evaluate the effectiveness of fuzzy regression models developed with different regression methods. We consider this issue in this work and present a framework for such evaluation. Method: We consider fuzzy regression models developed with different regression approaches. A method to evaluate the developed models is proposed. We then show that the proposed method possesses desirable mathematical properties, and it is applied to compare the two-dimensional regression method and the traditional least-squares-based regression method in our case studies: predicting the concentration of and the volatility of the weighted price index of the Taiwanese stock exchange. Innovation: We propose a new metric to define a distance between two fuzzy numbers. This metric can be used to evaluate the performance of different fuzzy regression models. When a prediction from one model is closest to the sample data measured in terms of the proposed metric, it can be recognized as the optimal prediction. Results: Based on the proposed metric, it can be concluded that the two-dimensional fuzzy regression method is better than the traditional least-squares-based regression method. In particular, its resulting generalized residual is smaller. Conclusion: In the literature, no unified framework has previously been proposed for evaluating the effectiveness of developed fuzzy regression models. In this work, we present a metric to achieve this goal. It facilitates the work of determining whether a fuzzy regression model suitably fits the obtained samples and whether the model has the potential to provide sufficient accuracy for follow-up analysis in a considered problem.
APA, Harvard, Vancouver, ISO, and other styles
36

Chen, Ming-Te, and 陳明德. "Modeling Technology of Interval Values Data." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/65322880443915245970.

Full text
Abstract:
Master's thesis
National Ilan University
Master's Program, Department of Electrical Engineering
Academic year 98 (2009–10)
In the literature, several methods have been proposed for symbolic interval-valued data, notably the Centre method (CM), the MinMax method, and the Centre and Range method (CRM). All of them require computing a matrix inverse, and the condition number of this matrix can be large, which may cause large errors in the solution (ill-conditioning). Moreover, these methods do not guarantee that the predicted lower bounds of the linear regression models stay below the predicted upper bounds. To overcome these problems, evolutionary computation (EC) is applied with a constrained search range to find the best parameters of the linear regression models. Simulations show that the proposed method provides satisfactory results.
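For orientation, here is a minimal sketch of the CRM baseline the thesis improves on: two separate least-squares fits, one on interval centers and one on ranges. The one-predictor setup and the use of numpy's lstsq are our assumptions; note that the sketch exhibits exactly the weakness the thesis targets, since nothing forces the predicted ranges to stay nonnegative.

```python
import numpy as np

def crm_fit(x_low, x_up, y_low, y_up):
    """Centre and Range method (CRM) sketch: separate ordinary
    least-squares fits for the interval centers and the interval
    ranges.  The thesis replaces this step with a constrained
    evolutionary search to avoid ill-conditioning."""
    xc, xr = (x_low + x_up) / 2, x_up - x_low
    yc, yr = (y_low + y_up) / 2, y_up - y_low
    Xc = np.column_stack([np.ones_like(xc), xc])
    Xr = np.column_stack([np.ones_like(xr), xr])
    bc, *_ = np.linalg.lstsq(Xc, yc, rcond=None)   # center model
    br, *_ = np.linalg.lstsq(Xr, yr, rcond=None)   # range model
    return bc, br

def crm_predict(bc, br, x_low, x_up):
    xc, xr = (x_low + x_up) / 2, x_up - x_low
    yc = bc[0] + bc[1] * xc
    yr = br[0] + br[1] * xr       # not guaranteed nonnegative!
    return yc - yr / 2, yc + yr / 2
```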
APA, Harvard, Vancouver, ISO, and other styles
37

"Survival analysis issues with interval-censored data." Universitat Politècnica de Catalunya, 2006. http://www.tesisenxarxa.net/TDX-1009106-110207/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Chuang, Chung-mine, and 莊重明. "Sampled-data analysis of interval control system." Thesis, 1998. http://ndltd.ncl.edu.tw/handle/64351052890703380396.

Full text
Abstract:
Master's thesis
Chung Cheng Institute of Technology
Institute of Electrical Engineering
Academic year 86 (1997–98)
The main purpose of this research is to develop a unified state-space formulation of synchronous and nonsynchronous digital control for interval systems. Some important matrix operations are also investigated; it is shown that the traditional method cannot be used directly for this problem. Based on the derived model, any discrete-time response of the control system is easily obtained. Illustrative examples demonstrate the effectiveness of the proposed method.
APA, Harvard, Vancouver, ISO, and other styles
39

Chen, Z.-Ying, and 陳姿穎. "Distribution Function Estimation for Interval-Censored Data." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/cg2hd2.

Full text
Abstract:
Master's thesis
Chung Yuan Christian University
Graduate Institute of Applied Mathematics
Academic year 105 (2016–17)
We study designs with two examination times per subject; in other words, we analyze the cumulative distribution function of the survival time from Case II interval-censored data. Following Chang et al. (2005), we use a Bernstein polynomial to describe the cumulative distribution function and derive the corresponding likelihood function. Computing the maximum likelihood estimate (MLE) directly is not an easy task, so we use a Markov chain Monte Carlo simulated-annealing algorithm to estimate the parameters; the simulation results are quite good.
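The Bernstein parameterisation of a CDF and the Case II interval-censoring likelihood are standard constructions, sketched below under the assumption that event times are rescaled to [0, 1]; the thesis's exact parameterisation and its simulated-annealing sampler are not reproduced.

```python
import numpy as np
from scipy.stats import binom

def bernstein_cdf(t, theta):
    """CDF modeled as a Bernstein polynomial on [0, 1]:
    F(t) = sum_k theta_k * C(m, k) * t^k * (1 - t)^(m - k),
    with 0 <= theta_0 <= ... <= theta_m <= 1 ensuring monotonicity."""
    m = len(theta) - 1
    k = np.arange(m + 1)
    basis = binom.pmf(k[None, :], m, np.asarray(t)[:, None])  # Bernstein basis
    return basis @ theta

def case2_loglik(theta, u, v, delta1, delta2):
    """Case II interval censoring: each subject has two examination
    times u < v, with delta1 = 1{T <= u} and delta2 = 1{u < T <= v}."""
    Fu, Fv = bernstein_cdf(u, theta), bernstein_cdf(v, theta)
    eps = 1e-12
    return np.sum(delta1 * np.log(Fu + eps)
                  + delta2 * np.log(Fv - Fu + eps)
                  + (1 - delta1 - delta2) * np.log(1 - Fv + eps))
```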
APA, Harvard, Vancouver, ISO, and other styles
40

Liao, Shih-Feng, and 廖士鋒. "Estimation of Odds for Interval-Censored Data." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/syet4r.

Full text
Abstract:
Master's thesis
Chung Yuan Christian University
Graduate Institute of Applied Mathematics
Academic year 105 (2016–17)
The odds of a disease are often of interest: for instance, among people who catch the flu, what is the ratio of the number of deaths to the number of recoveries? This thesis studies the odds for Case II interval-censored data. We apply a Bernstein polynomial to describe the odds function. We use an approximate maximum likelihood estimation method; however, calculating the approximate maximum likelihood estimate is complicated and difficult, so we adopt a Markov chain Monte Carlo simulated-annealing algorithm to estimate the parameters. We also conducted extensive statistical simulations, and the results are quite satisfactory.
APA, Harvard, Vancouver, ISO, and other styles
41

Liu, Fei-Lin, and 劉飛麟. "Model Fitting and Diagnostics for Interval Data." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/48578s.

Full text
Abstract:
Master's thesis
National Chung Cheng University
Graduate Institute of Statistics, Department of Mathematics
Academic year 106 (2017–18)
Unlike classical data, interval data consist of a collection of intervals instead of single values. Because of this structure, statistical theories and methods developed for classical data may not apply directly. This thesis therefore studies two aspects of interval data. First, we develop a new approach to model fitting for interval data and compare it with existing methods. Second, because outliers can seriously distort model fitting and lead to inaccurate conclusions, outlier detection is an essential step in statistical analysis. To this end, we propose a new approach that constructs likelihood functions of order statistics, in which the underlying mean and variance functions come from a linear regression model, and then employs the local influence approach introduced by Cook (1986) to identify aberrant intervals. Finally, simulation studies and real data examples illustrate the proposed methods.
APA, Harvard, Vancouver, ISO, and other styles
42

PAN, CHIEN-CHENG, and 潘建政. "Linear Regression Analysis for Symbolic Interval Data." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/70421291962916502656.

Full text
Abstract:
Master's thesis
National Chung Cheng University
Graduate Institute of Statistics, Department of Mathematics
Academic year 104 (2015–16)
In the era of network technology, collected data are growing more complex and larger than before, which makes them difficult to analyze with standard statistical tools. This thesis focuses on estimating linear regression parameters for symbolic interval data. We propose two approaches to estimating the regression parameters under two different data models and compare them with existing methods via simulations. Finally, we analyze two real data sets with the proposed methods for illustration.
APA, Harvard, Vancouver, ISO, and other styles
43

Chang, Rong-Fong, and 張榮豐. "Applying Interval Indexing on Music Data Retrieval." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/30195408697672069494.

Full text
Abstract:
Master's thesis
Shu-Te University
Graduate School of Computer and Communication
Academic year 92 (2003–04)
In recent years, music retrieval has received growing attention in multimedia databases, and extracting efficient music features plays a significant role in music data retrieval. Most retrieval methods extract specific music features for efficient indexing and then organize those features into tree-like structures, which increases complexity. Moreover, these traditional methods have difficulty finding the matching query sequence in the music database. To rectify these limitations, this thesis proposes a novel music retrieval method based on pitch intervals and sliding-window techniques. Empirical results show promise for the proposed approach to achieve efficient retrieval: the average search time over a database of 1022 songs is less than one second.
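A toy sketch of the two ingredients named in the abstract, pitch intervals and a sliding window, is given below; the melody representation and the exact-match rule are our assumptions, not the thesis's indexing structure.

```python
def pitch_intervals(notes):
    """Represent a melody by successive pitch intervals (semitone
    differences), which makes matching transposition-invariant."""
    return [b - a for a, b in zip(notes, notes[1:])]

def sliding_window_search(song_notes, query_notes):
    """Return the start positions where the query's interval sequence
    occurs in the song's, scanning with a sliding window."""
    song_iv = pitch_intervals(song_notes)
    query_iv = pitch_intervals(query_notes)
    w = len(query_iv)
    return [i for i in range(len(song_iv) - w + 1)
            if song_iv[i:i + w] == query_iv]

# MIDI pitches: the query is a transposed fragment of the song
song = [60, 62, 64, 65, 67, 69, 71, 72]    # C major scale
query = [67, 69, 71, 72]                   # G-A-B-C fragment
print(sliding_window_search(song, query))  # [0, 4]: the contour occurs twice
```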
APA, Harvard, Vancouver, ISO, and other styles
44

Shen, Jiun-Yi, and 沈駿壹. "Estimation in parametric model under interval data." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/35402161041552616803.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Master's and Doctoral Program in Applied Mathematics, Department of Mathematics
Academic year 90 (2001–02)
In survival data analysis, exact event times are sometimes not ascertainable. For example, suppose a hemophiliac received HIV-contaminated blood; after six months his blood test is HIV negative, but after one year it is HIV positive. The observation of the HIV incubation time is therefore (6, 12]. This kind of observation is called interval data. This thesis proposes an approximation method based on the idea of method-of-moments estimation. We apply it to the exponential distribution and the Weibull distribution and compare it with the methods used in Dain (1995) via simulation and an example.
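To illustrate the method-of-moments idea on interval data, the sketch below matches the first moment of an exponential distribution using interval midpoints; this crude midpoint approximation is our own assumption and is not the refined method proposed in the thesis.

```python
import numpy as np

def exp_rate_mom(low, up):
    """Naive method-of-moments estimate of an exponential rate from
    interval observations (low, up]: match E[T] = 1/lambda using the
    interval midpoints as stand-ins for the unobserved event times."""
    midpoints = (np.asarray(low) + np.asarray(up)) / 2.0
    return 1.0 / midpoints.mean()

# simulate: true rate 0.5; each event time is only known to +/- 1
rng = np.random.default_rng(7)
t = rng.exponential(scale=2.0, size=2000)
print(exp_rate_mom(np.maximum(t - 1, 0), t + 1))   # roughly 0.5 (midpoints bias it slightly)
```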
APA, Harvard, Vancouver, ISO, and other styles
45

Lai, Meng-Hua, and 賴盟化. "Robust Radial basis function networks with linear interval regression for symbolic interval-values data." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/06772982827189326151.

Full text
Abstract:
Master's thesis
National Ilan University
Master's Program, Department of Electrical Engineering
Academic year 99 (2010–11)
Real-life data often contain anomalous observations, called outliers, which radial basis function networks (RBFNs) with linear interval regression weights cannot handle effectively. This thesis therefore proposes a robust RBFN with linear interval regression weights to address the problems caused by outliers. Both the MinMax representation and the Centre and Range representation of interval values are discussed. In the proposed structure, the learning process has two stages: initialization and fine-tuning. In the first stage, the robust interval competitive agglomeration (RICA) clustering algorithm initializes the proposed structure (that is, it determines the number of hidden nodes and the adjustable parameters of the radial basis functions). In the second stage, gradient-descent learning algorithms fine-tune the coefficients of the radial basis functions and the parameters of the linear interval regression model. Finally, simulation results show that this method performs better than the other methods.
APA, Harvard, Vancouver, ISO, and other styles
46

Liu, Chin-Lin, and 劉錦霖. "Radial basis function networks with linear interval regression weights for symbolic interval-values data." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/98329780757511352970.

Full text
Abstract:
Master's thesis
National Ilan University
Master's Program, Department of Electrical Engineering
Academic year 97 (2008–09)
This thesis introduces a new structure of radial basis function networks (RBFNs) for modeling symbolic interval-valued data. In this structure, the Gaussian functions and the synaptic weights of traditional RBFNs are replaced by Gaussian functions with an interval distance measure and by linear interval regression weights, respectively. The linear interval regression weights consider both the lower and upper bounds of the interval-valued data and their centers and ranges. The learning process has two stages. In stage 1, the initial structure (i.e., the number of hidden nodes and the adjustable parameters of the radial basis functions) is obtained by the interval competitive agglomeration clustering algorithm. In stage 2, a gradient-descent learning algorithm fine-tunes the parameters of the radial basis functions and the coefficients of the linear interval regression weights. In the simulations, the average root mean squared error and the squared correlation coefficient in a Monte Carlo experiment are used as performance indices; the results show that the proposed structure performs well.
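The following sketch shows the forward pass of such a network on one interval input, with a bound-based interval distance inside the Gaussian units and an interval-valued output; the array shapes, the particular distance, and the bound-ordering step are our assumptions rather than the thesis's exact formulation.

```python
import numpy as np

def interval_distance2(c_low, c_up, x_low, x_up):
    """Squared distance between intervals built from their lower and
    upper bounds (one simple choice among those discussed)."""
    return (x_low - c_low) ** 2 + (x_up - c_up) ** 2

def rbfn_interval_predict(x_low, x_up, centers, widths, W):
    """Forward pass of an RBFN with interval inputs and outputs.
    centers: (h, 2) interval prototypes; widths: (h,) Gaussian widths;
    W: (h + 1, 2) linear interval regression weights (bias included)
    producing [lower, upper] outputs."""
    d2 = np.array([interval_distance2(c[0], c[1], x_low, x_up)
                   for c in centers])
    phi = np.exp(-d2 / (2 * widths ** 2))       # Gaussian hidden activations
    phi1 = np.concatenate([[1.0], phi])         # prepend bias term
    y_low, y_up = phi1 @ W
    return min(y_low, y_up), max(y_low, y_up)   # keep the bounds ordered
```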
APA, Harvard, Vancouver, ISO, and other styles
47

Tolusso, David. "Robust Methods for Interval-Censored Life History Data." Thesis, 2008. http://hdl.handle.net/10012/3868.

Full text
Abstract:
Interval censoring arises frequently in life history data, as individuals are often only observed at a sequence of assessment times. This leads to a situation where we do not know when an event of interest occurs, only that it occurred somewhere between two assessment times. Here, the focus will be on methods of estimation for recurrent event data, current status data, and multistate data, subject to interval censoring.

With recurrent event data, the focus is often on estimating the rate and mean functions. Nonparametric estimates are readily available, but are not smooth. Methods based on local likelihood and the assumption of a Poisson process are developed to obtain smooth estimates of the rate and mean functions without specifying a parametric form. Covariates and extra-Poisson variation are accommodated by using a pseudo-profile local likelihood. The methods are assessed by simulations and applied to a number of datasets, including data from a psoriatic arthritis clinic.

Current status data is an extreme form of interval censoring that occurs when each individual is observed at only one assessment time. If current status data arise in clusters, this must be taken into account in order to obtain valid conclusions. Copulas offer a convenient framework for modelling the association separately from the margins. Estimating equations are developed for estimating marginal parameters as well as association parameters. Efficiency and robustness to the choice of copula are examined for first and second order estimating equations. The methods are applied to data from an orthopedic surgery study as well as data on joint damage in psoriatic arthritis.

Multistate models can be used to characterize the progression of a disease as individuals move through different states. Considerable attention is given to a three-state model to characterize the development of a back condition known as spondylitis in psoriatic arthritis, along with the associated risk of mortality. Robust estimates of the state occupancy probabilities are derived based on a difference in distribution functions of the entry times. A five-state model which differentiates between left-side and right-side spondylitis is also considered, which allows us to characterize what effect spondylitis on one side of the body has on the development of spondylitis on the other side. Covariate effects are considered through multiplicative time homogeneous Markov models. The robust state occupancy probabilities are also applied to data on CMV infection in patients with HIV.
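As a pointer to how a copula separates the association structure from the margins, here is the Clayton copula, a standard choice in this literature; it is shown purely for illustration, and the thesis's estimating equations are not reproduced.

```python
def clayton_copula(u, v, theta):
    """Clayton copula C(u, v) = (u^-theta + v^-theta - 1)^(-1/theta),
    theta > 0.  It couples two marginal distribution functions into a
    joint one, with theta controlling the strength of association."""
    return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)

# joint probability that both members of a cluster have failed by their
# assessment times, given marginal failure probabilities 0.3 and 0.4
print(clayton_copula(0.3, 0.4, theta=2.0))   # between 0.12 (independence) and 0.3
```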
APA, Harvard, Vancouver, ISO, and other styles
48

Yu, Chia-Sheng, and 游家昇. "Advanced Interval Reduction Test on Data Dependence Analysis." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/37614619392539562742.

Full text
Abstract:
Master's thesis
Shu-Te University
Graduate School of Computer and Communication
Academic year 91 (2002–03)
In parallel computing, data dependence analysis is crucial when compiling loop programs. Briefly, data dependence analysis determines whether variables in different iterations of a loop access the same memory address; the literature on it is extensive. This thesis proposes a novel, exact, and efficient data dependence test called the advanced interval reduction (AIR) test, which combines the GCD test, the Banerjee test, and the IR test to analyze the data dependence of loop programs. Because dependent data may also depend on each other, a new grouping method called dynamic table grouping (DTG) is proposed; it dynamically groups dependent and independent data and assigns them to hosts for processing. In addition, a new multi-variable data dependence method, the enhanced advanced interval reduction (Enhanced-AIR) test, is proposed for multi-variable loop programs. Finally, experiments on real data dependence problems verify the exactness and high efficiency of the proposed algorithms.
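Of the three ingredients the AIR test combines, the GCD test is the simplest to state: an integer solution of the dependence equation can exist only if the GCD of the coefficients divides the constant term. A minimal sketch follows; the array-access framing in the comments is illustrative.

```python
from math import gcd

def gcd_test(a, b, c):
    """GCD test for the dependence equation a*i - b*j = c, which arises
    from accesses like A[a*i + k1] and A[b*j + k2] with c = k2 - k1.
    An integer solution can exist only if gcd(a, b) divides c; bounds
    checks (Banerjee / IR) are omitted from this sketch."""
    return c % gcd(a, b) == 0

# A[2*i] written, A[2*j + 1] read: 2*i - 2*j = 1 has no integer solution
print(gcd_test(2, 2, 1))   # False -> the accesses are independent
print(gcd_test(2, 4, 6))   # True  -> a dependence is possible
```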
APA, Harvard, Vancouver, ISO, and other styles
49

Luo, Ren yo, and 羅仁佑. "The Competitive Agglomeration Clustering Algorithm for Interval Data." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/39864853665102296120.

Full text
Abstract:
Master's thesis
National Ilan University
Master's Program, Department of Electrical Engineering
Academic year 96 (2007–08)
In this study, an interval competitive agglomeration (ICA) clustering algorithm is proposed to overcome the problems of an unknown number of clusters and of prototype initialization in clustering algorithms for interval-valued data. In the proposed ICA clustering algorithm, the Euclidean distance measure and the Hausdorff distance measure for interval-valued data are considered independently. The algorithm also incorporates the advantages of both hierarchical and partitional clustering. Hence, the ICA clustering algorithm converges in a few iterations regardless of the initial number of clusters, and it converges quickly to the same optimal partition regardless of its initialization. Experiments with simple data sets and real data sets show the merits and usefulness of the ICA clustering algorithm for interval-valued data.
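The two distance measures named in the abstract are easy to state for one-dimensional intervals; the sketch below gives both, leaving out the extension to vectors of intervals that the full algorithm needs.

```python
import numpy as np

def hausdorff_interval(a, b):
    """Hausdorff distance between intervals a = [a1, a2], b = [b1, b2]:
    max(|a1 - b1|, |a2 - b2|)."""
    return max(abs(a[0] - b[0]), abs(a[1] - b[1]))

def euclidean_interval(a, b):
    """Euclidean distance on the interval bounds."""
    return np.hypot(a[0] - b[0], a[1] - b[1])

print(hausdorff_interval((0, 2), (1, 5)))   # 3
print(euclidean_interval((0, 2), (1, 5)))   # sqrt(1 + 9) ~ 3.16
```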
APA, Harvard, Vancouver, ISO, and other styles
50

"Covariance structure analysis with polytomous and interval data." Chinese University of Hong Kong, 1992. http://library.cuhk.edu.hk/record=b5887020.

Full text
Abstract:
by Yin-Ping Leung.
Thesis (M.Phil.)--Chinese University of Hong Kong, 1992.
Includes bibliographical references (leaves 95-96).
Contents:
Chapter 1. Introduction
Chapter 2. Estimation of the Correlation between Polytomous and Interval Data
2.1 Model
2.2 Maximum Likelihood Estimation
2.3 Partition Maximum Likelihood Estimation
2.4 Optimization Procedure and Simulation Study
Chapter 3. Three-stage Procedure for Covariance Structure Analysis
3.1 Model
3.2 Three-stage Estimation Method
3.3 Optimization Procedure and Simulation Study
Chapter 4. Two-stage Procedure for Correlation Structure Analysis
4.1 Model
4.2 Two-stage Estimation Method
4.3 Optimization Procedure and Monte Carlo Study
4.4 Comparison of Two Methods
Chapter 5. Conclusion
Tables
References
APA, Harvard, Vancouver, ISO, and other styles
