Dissertations / Theses on the topic 'Discrete data analysis'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Discrete data analysis.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Brinza, Dumitru. "Discrete Algorithms for Analysis of Genotype Data." Digital Archive @ GSU, 2007. http://digitalarchive.gsu.edu/cs_diss/19.

Full text
Abstract:
The accessibility of high-throughput genotyping technology makes genome-wide association studies for common complex diseases possible. When dealing with common diseases, it is necessary to search for and analyze multiple independent causes resulting from interactions of multiple genes scattered over the entire genome. Optimization formulations for searching for disease-associated risk/resistance factors and predicting disease susceptibility for a given case-control study have been introduced. Several discrete methods for disease association search exploiting a greedy strategy and topological properties of case-control studies have been developed. New disease susceptibility prediction methods based on the developed search methods have been validated on datasets from case-control studies for several common diseases. In our experiments, the proposed algorithms compare favorably with existing association search and susceptibility prediction methods.
APA, Harvard, Vancouver, ISO, and other styles
2

Bagshaw, Richard William. "Production data analysis for discrete component manufacture." Thesis, Loughborough University, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.313311.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Solis-Trapala, Ivonne Lissette. "Likelihood methods for the analysis of discrete longitudinal data." Thesis, University of Cambridge, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.615839.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Moore, Page Casey Seaman John Weldon. "A restriction method for the analysis of discrete longitudinal missing data." Waco, Tex. : Baylor University, 2006. http://hdl.handle.net/2104/4880.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Günther, David [Verfasser], and Tino [Akademischer Betreuer] Weinkauf. "Topological analysis of discrete scalar data / David Günther. Betreuer: Tino Weinkauf." Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2013. http://d-nb.info/1052779700/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Wang, Yang. "High-order pattern discovery and analysis of discrete-valued data sets." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq22245.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Lee, J. "Nonparametric structural analysis of discrete data : the quantile-based control function approach." Thesis, University College London (University of London), 2010. http://discovery.ucl.ac.uk/516136/.

Full text
Abstract:
The first chapter is the introduction, and Chapter 2 proposes formal frameworks for the identifiability and testability of structural features, allowing for set identification. The results in Chapter 2 are used in the other chapters. The second section of Chapter 3, as well as Chapters 4 and 5, contains new results. Chapter 3 has two sections. The first section introduces the quantile-based control function approach (QCFA) proposed by Chesher (2003) to compare and contrast the other results in Chapters 4 and 5. The second section contains new findings on the local endogeneity bias and the testability of endogeneity. Chapter 4 assumes that the structural relations are differentiable and applies the QCFA to several models for discrete outcomes. Chapter 4 reports point identification results for partial derivatives with respect to a continuously varying endogenous variable. Chapter 5 relaxes the differentiability assumptions and applies the QCFA with an ordered discrete endogenous variable. The model in Chapter 5 set-identifies partial differences of a nonseparable structural function.
APA, Harvard, Vancouver, ISO, and other styles
8

Rao, Shaoqi. "Genetic Analysis of Sheep Discrete Reproductive Traits Using Simulation and Field Data." Diss., Virginia Tech, 1997. http://hdl.handle.net/10919/30490.

Full text
Abstract:
The applicability of restricted maximum likelihood (REML) in genetic analyses of categorical data was evaluated using simulation and field data. Four genetic models were used to simulate underlying phenotypic variates, which were derived as the sum of additive genetic and environmental effects (Model 1A and 1B) or additive genetic and permanent and temporary environmental effects (Model 2A and 2B). Fifty-eight replicates were simulated, each of which contained 5000 ewes by 500 sires and 5000 dams and with up to five records per ewe. The usual transformation of heritability estimated on the categorical scale to the normal scale for fertility and litter size performed better for a simple animal model than for a repeatability model. Genetic correlation estimates between the two categorical traits for Model 1B and 2B were .49 ± .01 and .48 ± .04, respectively, and were close to the expected value of .50. However, permanent and temporary environmental correlations whose input values were each .50 were underestimated with estimates of .41 ± .05 and .26 ± .03, respectively for Model 2B, and .33 ± .02 for the temporary environmental correlation for Model 1B. Bivariate genetic analyses of litter size with growth and fleece traits were carried out by REML for the data of Suffolk, Targhee and Polypay. Direct heritabilities for most growth traits in all the breeds were low (<.20). Maternal genetic and maternal permanent environmental effects were important for all body weights except for the weaning weight at 120 d for Polypay sheep. Estimates of heritability and permanent environmental effects for litter size for these breeds ranged from .09 to .12 and .00 to .05, respectively. Heritabilities for grease fleece weight and fiber diameter were high for Targhee and Polypay sheep. Direct genetic correlations between growth and litter size were favorable for Suffolk and Targhee but weak for Polypay sheep. Genetic correlations between maternal effects for growth and direct effects for litter size for the breeds were generally small. Within-trait maternal-direct genetic correlations for growth in the breeds were variable and generally negative. Direct genetic correlations of litter size with grease fleece weight and fiber diameter were variable across the breeds.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
9

Gaumer, Madelyn. "Using Neural Networks to Classify Discrete Circular Probability Distributions." Scholarship @ Claremont, 2019. https://scholarship.claremont.edu/hmc_theses/226.

Full text
Abstract:
Given the rise in the application of neural networks to all sorts of interesting problems, it seems natural to apply them to statistical tests. This senior thesis studies whether neural networks built to classify discrete circular probability distributions can outperform a class of well-known statistical tests for uniformity for discrete circular data that includes the Rayleigh Test [1], the Watson Test [2], and the Ajne Test [3]. Each neural network used is relatively small with no more than 3 layers: an input layer taking in discrete data sets on a circle, a hidden layer, and an output layer outputting probability values between 0 and 1, with 0 mapping to uniform and 1 mapping to nonuniform. In evaluating performances, I compare the accuracy, type I error, and type II error of this class of statistical tests and of the neural networks built to compete with them.
[1] Jammalamadaka, S. Rao, and SenGupta, A. Topics in Circular Statistics. Series on Multivariate Analysis, 5. World Scientific Publishing Co., River Edge, NJ, 2001.
[2] Watson, G. S. Goodness-of-fit tests on a circle. II. Biometrika 49 (1962), 57–63.
[3] Ajne, B. A simple test for uniformity of a circular distribution. Biometrika 55 (1968), 343–354.
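As a concrete point of reference for the uniformity tests named above, the following is a minimal sketch of the Rayleigh test (the formulas are the standard ones; the simulated angles, sample sizes, and the first-order p-value approximation are illustrative assumptions rather than material from the thesis):

```python
import numpy as np

def rayleigh_test(angles):
    """Rayleigh test of uniformity for circular data given as angles in radians."""
    n = len(angles)
    c, s = np.cos(angles).sum(), np.sin(angles).sum()
    r_bar = np.sqrt(c**2 + s**2) / n              # mean resultant length
    z = n * r_bar**2                              # Rayleigh statistic
    p = np.exp(-z) * (1 + (2*z - z**2) / (4*n))   # common large-sample approximation
    return z, min(max(p, 0.0), 1.0)

rng = np.random.default_rng(0)
print(rayleigh_test(rng.uniform(0, 2*np.pi, 200)))   # uniform sample: large p expected
print(rayleigh_test(rng.vonmises(0.0, 2.0, 200)))    # concentrated sample: tiny p expected
```

A neural classifier of the kind described would be trained to reproduce this same uniform-versus-nonuniform decision from discrete data on the circle.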
APA, Harvard, Vancouver, ISO, and other styles
10

Freeland, R. Keith. "Statistical analysis of discrete time series with application to the analysis of workers' compensation claims data." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq27144.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Cannon, Paul C. "Extending the information partition function : modeling interaction effects in highly multivariate, discrete data /." Diss., CLICK HERE for online access, 2008. http://contentdm.lib.byu.edu/ETD/image/etd2263.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Carvalho, Fernando Pedro. "'Powellsnakes' : a fast Bayesian approach to discrete object detection in multi-frequency astronomical data sets." Thesis, University of Cambridge, 2014. https://www.repository.cam.ac.uk/handle/1810/245147.

Full text
Abstract:
In this work we introduce a fast Bayesian algorithm designed for detecting compact objects immersed in a diffuse background. A general methodology is presented in terms of formal correctness and optimal use of all the available information in a consistent unified framework, where no distinction is made between point sources (unresolved objects), SZ clusters, single or multi-channel detection. An emphasis is placed on the necessity of a multi-frequency, multi-model detection algorithm in order to achieve optimality. We have chosen to use the Bayes/Laplace probability theory as it grants a fully consistent extension of formal deductive logic to a more general inferential system with optimal inclusion of all ancillary information [Jaynes, 2004]. Nonetheless, probability theory only informs us about the plausibility, a ‘degree-of-belief’, of a proposition given the data, the model that describes it and all ancillary (prior) information. However, detection or classification is mostly about making educated choices, and a wrong decision always carries a cost/loss. Only by resorting to ‘Decision Theory’, supported by probability theory, can one make the best decisions in terms of maximum yield at minimal cost. Despite the rigorous and formal approach employed, practical efficiency and applicability have always been kept as primary design goals. We have attempted to select and employ the relevant tools to explore a likelihood form and its manifold symmetries to achieve the very high computational performance required not only by our ‘decision machine’ but mostly to tackle large realistic contemporary cosmological data sets. As an illustration, we successfully applied the methodology to ESA’s (European Space Agency) Planck satellite data [Planck Collaboration et al., 2011d]. This data set is large, complex and typical of the contemporary precision observational cosmology state-of-the-art. Two catalogue products have already been released: (i) a point source catalogue [Planck Collaboration et al., 2011e], (ii) a catalogue of galaxy clusters [Planck Collaboration et al., 2011f]. Many other contributions, in science products, as an estimation device, have recently been issued [Planck et al., 2012; Planck Collaboration et al., 2011g,i, 2012a,b,c]. This new method is called ‘PowellSnakes’ (PwS).
APA, Harvard, Vancouver, ISO, and other styles
13

Brueckman, Christina. "Reliability analysis of discrete fracture network projections from borehole to shaft scale discontinuity data." Thesis, University of British Columbia, 2016. http://hdl.handle.net/2429/58405.

Full text
Abstract:
When listing the risk factors that may impact the feasibility and success of a block cave operation, Brown (2003) highlights the adequacy of the geotechnical data available as a primary risk. Detailed data on the major structures, rock mass properties, and in situ stresses are necessary to assess the caveability of the orebody, and the excavation stability on the operating levels below, including the potential for fault slip and rockburst hazards when mining in a higher stress environment. The source of this essential data, especially at feasibility-level design, is almost always limited to borehole data. This is emphasized by Laubscher (2000) who notes that most block cave mines are designed solely on borehole data. When restricted to borehole data, significant effort is expended on obtaining oriented core and/or televiewer logs to derive critical data regarding the frequency and orientation of discontinuities and the presence of major faults. Subsequent analysis of the spatial relationships between discontinuities is facilitated by the use of Discrete Fracture Network (DFN) modelling. The value of DFN models for assessing in situ fragmentation and rock mass strength identifies a critical limitation of borehole data. Required DFN inputs include the orientation, intensity, and size distributions of the discontinuities to allow the stochastic generation of a representative fracture network. The evaluation of the discontinuity orientation is relatively easy, intensity or spacing is possible with sufficient effort, but the discontinuity size is not possible given the small “observation window” of a borehole. This thesis reports the results from research carried out to compare analyses of discontinuity data sampled across different spatial scales to improve our understanding and reduce uncertainty in the characterization and projection of discontinuity networks, specifically with respect to fracture spacing and persistence within the rock mass. This work is undertaken using discontinuity data from a deep geotechnical borehole and co-located large diameter shaft. The close proximity of the borehole and shaft provided an opportunity to ground-truth borehole projections based on traditional core logging and televiewer logs. The comparative analysis was completed with the use of DFN modelling. The improved understanding of the spacing and persistence of the discontinuities will aid in further development of guidelines for rapid geotechnical characterization.
Science, Faculty of
Earth, Ocean and Atmospheric Sciences, Department of
Graduate
APA, Harvard, Vancouver, ISO, and other styles
14

Zureiqat, Hazem Marwan. "Fare policy analysis for public transport : a discrete-continuous modeling approach using panel data." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/43748.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering, 2008.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Includes bibliographical references (p. 115-117).
In many large metropolitan areas, public transport is very heavily used, and ridership is approaching system capacity in the peak periods. This has caused a shift in attention by agency decision-makers to strategies that can more effectively manage the demand for public transport, rather than simply increase overall demand. In other words, a need has arisen to understand not only why people use public transport as opposed to other modes but also how they use public transport, in terms of their ticket, mode, and time-of-day choices. To that end, fares become an increasingly important policy tool that can trigger certain behavioral changes among riders. This thesis develops a methodology to model, at the disaggregate level, the response of public transport users to fare changes. A discrete-continuous framework is proposed in which ticket choice is modeled at the higher (discrete) level and frequencies of public transport use, based on mode and time-of-day, are modeled at the lower (continuous) level. This framework takes advantage of the availability of smartcard data over time, allowing individual-specific behavioral changes with various fare policies to be captured. This methodology is applied to London's public transport system using Oyster smartcard data collected between November 2005 and February 2008. The results indicate a strong inertia effect in terms of ticket choice among public transport users in London. An individual's prior ticket choice is found to be a very important factor in determining their future ticket choice. This is also evident when we simulate the effects of two policy changes on ticket choices. We find that the impact of changing the prices of period tickets may take several months or more to fully materialize. In terms of the frequency of public transport use, the results indicate estimated short and long-run fare elasticities of -0.40 and -0.64, respectively, for travel on the London Underground and equivalent estimates of -0.08 and -0.13 for travel on bus.
The estimated Underground fare elasticities are comparable to those in the literature. The bus fare elasticities, on the other hand, are relatively smaller, in absolute value, than prior estimates. This difference reflects the small variations in bus fares in the dataset on which the model was estimated and the low fare sensitivity for users under such variations. Furthermore, we apply the model, in conjunction with related assumptions and findings from previous research, to evaluate an AM peak pricing scheme on the London Underground, in which travelers are charged £2.00 between 8:30am and 9:15am, rather than the current fare of £1.50. This application estimates that such a policy could potentially decrease AM "peak-of-the-peak" demand on the Underground by about 9%, with the reduction in ridership shifting either to a different mode or to a different time period.
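For readers unfamiliar with fare elasticities, the reported figures can be read as follows (a worked example with a hypothetical fare change, not a number taken from the thesis): with a short-run Underground elasticity of $-0.40$, a 10% fare increase would be expected to reduce Underground travel by roughly $0.40 \times 10\% = 4\%$ in the short run, and by about $6.4\%$ in the long run given the long-run elasticity of $-0.64$.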
by Hazem Marwan Zureiqat.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
15

Amrani, Naoufal, Joan Serra-Sagrista, Miguel Hernandez-Cabronero, and Michael Marcellin. "Regression Wavelet Analysis for Progressive-Lossy-to-Lossless Coding of Remote-Sensing Data." IEEE, 2016. http://hdl.handle.net/10150/623190.

Full text
Abstract:
Regression Wavelet Analysis (RWA) is a novel wavelet-based scheme for coding hyperspectral images that employs multiple regression analysis to exploit the relationships among wavelet-transformed spectral components. The scheme is based on a pyramidal prediction, using different regression models, to increase the statistical independence in the wavelet domain. For lossless coding, RWA has proven to be superior to other spectral transforms such as PCA and to the best and most recent coding standard in remote sensing, CCSDS-123.0. In this paper we show that RWA also allows progressive lossy-to-lossless (PLL) coding and that it attains a rate-distortion performance superior to those obtained with state-of-the-art schemes. To take into account the predictive significance of the spectral components, we propose a Prediction Weighting scheme for JPEG2000 that captures the contribution of each transformed component to the prediction process.
APA, Harvard, Vancouver, ISO, and other styles
16

Cannon, Paul C. "Extending the Information Partition Function: Modeling Interaction Effects in Highly Multivariate, Discrete Data." BYU ScholarsArchive, 2007. https://scholarsarchive.byu.edu/etd/1234.

Full text
Abstract:
Because of the huge amounts of data made available by the technology boom in the late twentieth century, new methods are required to turn data into usable information. Much of this data is categorical in nature, which makes estimation difficult in highly multivariate settings. In this thesis we review various multivariate statistical methods, discuss various statistical methods of natural language processing (NLP), and discuss a general class of models described by Erosheva (2002) called generalized mixed membership models. We then propose extensions of the information partition function (IPF) derived by Engler (2002), Oliphant (2003), and Tolley (2006) that will allow modeling of discrete, highly multivariate data in linear models. We report results of the modified IPF model on the World Health Organization's Survey on Global Aging (SAGE).
APA, Harvard, Vancouver, ISO, and other styles
17

Egger, Peter Johann. "Event history analysis : discrete-time models including unobserved heterogeneity, with applications to birth history data." Thesis, University of Southampton, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.386202.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Zheng, Xiyu. "SENSITIVITY ANALYSIS IN HANDLING DISCRETE DATA MISSING AT RANDOM IN HIERARCHICAL LINEAR MODELS VIA MULTIVARIATE NORMALITY." VCU Scholars Compass, 2016. http://scholarscompass.vcu.edu/etd/4403.

Full text
Abstract:
In a two-level hierarchical linear model (HLM2), the outcome as well as the covariates may have missing values at any of the levels. One way to analyze all available data in the model is to estimate a multivariate normal joint distribution of the variables, including the outcome, subject to missingness conditional on completely observed covariates by maximum likelihood (ML); draw multiple imputations (MI) of the missing values given the estimated joint model; and analyze the hierarchical model given the MI [1,2]. The assumption is that data are missing at random (MAR). While this method yields efficient estimation of the hierarchical model, it often estimates the model given discrete missing data that is handled under multivariate normality. In this thesis, we evaluate how robust it is to estimate a hierarchical linear model given discrete missing data by this method. We simulate incompletely observed data from a series of hierarchical linear models given discrete covariates MAR, estimate the models by the method, and assess the sensitivity of handling discrete missing data under the multivariate normal joint distribution by computing bias, root mean squared error, standard error, and coverage probability in the estimated hierarchical linear models via a series of simulation studies. We aim to evaluate the performance of the method in handling binary covariates MAR. We let the missing patterns of the level-1 and level-2 binary covariates depend on completely observed variables and assess how the method handles binary missing data given different values of success probabilities and missing rates. Based on the simulation results, the missing data analysis is robust under certain parameter settings. Efficient analysis performs very well for estimation of level-1 fixed and random effects across varying success probabilities and missing rates. The level-2 binary covariate is not well estimated under MAR when its missing rate is greater than 10%. The rest of the thesis is organized as follows: Section 1 introduces the background, including conventional methods for hierarchical missing data analysis, different missing data mechanisms, and the innovation and significance of this study. Section 2 explains the efficient missing data method. Section 3 presents the sensitivity analysis of the missing data method and explains how we carry out the simulation study using SAS, the software package HLM7, and R. Section 4 illustrates the results and gives useful recommendations for researchers who want to use the missing data method for binary covariates MAR in HLM2. Section 5 presents an illustrative analysis of the National Growth and Health Study (NGHS) by the missing data method. The thesis ends with a list of useful references that will guide future study and the simulation code we used.
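The following is a minimal sketch of the kind of procedure the thesis evaluates, for a single binary covariate missing at random: impute under a normal model conditional on a completely observed variable, round the draws back to {0, 1}, and compare estimates. All variable names, effect sizes, and the missingness mechanism are illustrative assumptions, not the thesis's actual simulation design:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5_000

# Binary covariate x (success probability 0.3) and outcome y.
x = rng.binomial(1, 0.3, size=n).astype(float)
y = 1.0 + 2.0 * x + rng.normal(size=n)

# Make x missing at random (MAR): missingness depends only on the observed outcome y.
miss = rng.random(n) < 1.0 / (1.0 + np.exp(-(y - 2.5)))
x_obs = x.copy()
x_obs[miss] = np.nan

# Normal-theory imputation of the binary covariate: regress x on y over complete cases,
# draw imputations from the fitted conditional normal, then round back to {0, 1}.
cc = ~np.isnan(x_obs)
slope, intercept = np.polyfit(y[cc], x_obs[cc], 1)
resid_sd = np.std(x_obs[cc] - (intercept + slope * y[cc]))
draws = intercept + slope * y[miss] + rng.normal(scale=resid_sd, size=miss.sum())
x_imp = x_obs.copy()
x_imp[miss] = np.clip(np.round(draws), 0, 1)

# Compare the x -> y regression slope using complete cases only versus the imputed data.
print("complete-case slope:", np.polyfit(x_obs[cc], y[cc], 1)[0])
print("imputed-data slope: ", np.polyfit(x_imp, y, 1)[0])
print("true slope:          2.0")
```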
APA, Harvard, Vancouver, ISO, and other styles
19

Kriström, Bengt. "Valuing environmental benefits using the contingent valuation method : an econometric analysis." Doctoral thesis, Umeå universitet, Institutionen för nationalekonomi, 1990. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-90578.

Full text
Abstract:
The purpose of this study is to investigate methods for assessing the value people place on preserving our natural environments and resources. It focuses on the contingent valuation method, which is a method for directly asking people about their preferences. In particular, the study focuses on the use of discrete response data in contingent valuation experiments. The first part of the study explores the economic theory of the total value of a natural resource, where the principal components of total value are analyzed: use values and non-use values. Our application is a study of the value Swedes attach to the preservation of eleven forest areas that contain high recreational values and unique environmental qualities. Six forests were selected on the basis of an official investigation which includes virgin forests and other areas with unique environmental qualities. In addition, five virgin forests were selected. Two types of valuation questions are analyzed, the continuous and the discrete. The first type of question asks directly about willingness to pay, while the second type suggests a price that the respondent may reject or accept. The results of the continuous question suggest an average willingness to pay of about 1,000 SEK per household for preservation of the areas. Further analysis of the data suggests that this value depends on several characteristics of the respondent, such as the respondent's income and whether or not the respondent is an altruist. Two econometric approaches are used to analyze the discrete responses: a flexible parametric approach and a non-parametric approach. In addition, a Bayesian approach is described. It is shown that the results of a contingent valuation experiment may depend to some extent on the choice of the probability model. A re-sampling approach and a Monte-Carlo approach are used to shed light on the design of a contingent valuation experiment with discrete responses. The econometric analysis ends with an analysis of the often observed disparity between discrete and continuous valuation questions. A cost-benefit analysis is performed in the final chapter. The purpose of this analysis is to illustrate how the contingent valuation approach may be combined with opportunity cost data to improve the decision basis in the environmental policy domain. This analysis does not give strong support for a cutting alternative. Finally, the results of this investigation are compared with evidence from other studies. The main conclusion of this study is that assessment of people's sentiments towards changes in our natural environments and resources can be a useful supplement to decisions about the proper husbandry of those environments and resources. It also highlights the importance of careful statistical analysis of data gained from contingent valuation experiments.
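As a small illustration of how discrete (accept/reject) responses of this kind are typically turned into a willingness-to-pay estimate, here is a sketch of a dichotomous-choice analysis with a linear logit acceptance model. The bid design, the latent logistic willingness to pay, and the use of scikit-learn are illustrative assumptions; the thesis itself uses flexible parametric, non-parametric, and Bayesian estimators:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 1_000
bid = rng.choice([100.0, 300.0, 500.0, 1000.0, 2000.0], size=n)   # suggested prices in SEK
wtp = rng.logistic(loc=800.0, scale=200.0, size=n)                # latent willingness to pay
accept = (wtp >= bid).astype(int)                                 # respondent accepts if WTP >= bid

# Fit P(accept) = logit^-1(a + b * bid); b is expected to be negative.
model = LogisticRegression(C=1e6).fit(bid.reshape(-1, 1), accept)
a, b = model.intercept_[0], model.coef_[0, 0]

# Under the linear logit specification, mean (= median) WTP is -a / b.
print("estimated mean WTP:", -a / b)   # should land near the true location of 800 SEK
```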
APA, Harvard, Vancouver, ISO, and other styles
20

Wang, Jiayuan. "Algorithms for Guaranteed Denoising of Data and Their Applications." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1577966043088083.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Löhr, Andrea. "A noise reduction method based upon statistical analysis for the detection of weak signals in discrete data." [S.l.] : [s.n.], 2003. http://deposit.ddb.de/cgi-bin/dokserv?idn=968817505.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Johnston, Ann. "Markov Bases for Noncommutative Harmonic Analysis of Partially Ranked Data." Scholarship @ Claremont, 2011. http://scholarship.claremont.edu/hmc_theses/4.

Full text
Abstract:
Given the result $v_0$ of a survey and a nested collection of summary statistics that could be used to describe that result, it is natural to ask which of these summary statistics best describe $v_0$. In 1998 Diaconis and Sturmfels presented an approach for determining the conditional significance of a higher order statistic, after sampling a space conditioned on the value of a lower order statistic. Their approach involves the computation of a Markov basis, followed by the use of a Markov process with stationary hypergeometric distribution to generate a sample. This technique for data analysis has become an accepted tool of algebraic statistics, particularly for the study of fully ranked data. In this thesis, we explore the extension of this technique for data analysis to the study of partially ranked data, focusing on data from surveys in which participants are asked to identify their top $k$ choices of $n$ items. Before we move on to our own data analysis, though, we present a thorough discussion of the Diaconis–Sturmfels algorithm and its use in data analysis. In this discussion, we attempt to collect together all of the background on Markov bases, Markov processes, Gröbner bases, implicitization theory, and elimination theory that is necessary for a full understanding of this approach to data analysis.
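For concreteness, here is a minimal sketch of the Diaconis–Sturmfels random walk in its simplest setting, a two-way contingency table with fixed row and column sums, where the Markov basis is the set of +1/−1 moves on 2×2 minors and the stationary law is hypergeometric. The table and step count are made up, and the partially ranked data setting of the thesis requires a different, computed basis:

```python
import numpy as np
from math import lgamma

def log_hypergeom(table):
    # Log of the unnormalised hypergeometric probability: 1 / prod(cell!).
    return -sum(lgamma(c + 1) for c in table.ravel())

def basic_moves(rows, cols):
    # Markov basis for two-way tables with fixed margins: +1/-1 moves on 2x2 minors.
    moves = []
    for i in range(rows):
        for j in range(i + 1, rows):
            for k in range(cols):
                for l in range(k + 1, cols):
                    m = np.zeros((rows, cols), dtype=int)
                    m[i, k] = m[j, l] = 1
                    m[i, l] = m[j, k] = -1
                    moves.append(m)
    return moves

def sample_tables(table, n_steps, rng):
    # Metropolis walk over tables with the same margins; stationary law is hypergeometric.
    moves = basic_moves(*table.shape)
    current = table.copy()
    for _ in range(n_steps):
        proposal = current + moves[rng.integers(len(moves))] * rng.choice([-1, 1])
        if (proposal >= 0).all():
            if np.log(rng.random()) < log_hypergeom(proposal) - log_hypergeom(current):
                current = proposal
        yield current.copy()

rng = np.random.default_rng(0)
observed = np.array([[12, 3], [5, 10]])
reference_sample = list(sample_tables(observed, 5_000, rng))
```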
APA, Harvard, Vancouver, ISO, and other styles
23

Smolinskas, Mindaugas. "Dvejetainės informacijos kodavimo naudojant bazinius skaidinius analizė: teoriniai ir praktiniai aspektai." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2005. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2005~D_20050113_155133-50925.

Full text
Abstract:
A new approach (designed to lower computational complexity) to the compression of finite binary data, based on the application of the “exclusive-or” operation, is presented in the paper. The new concept of an identifying sequence, associated with a particular decomposition scheme, is introduced. A new heuristic algorithm for calculating the identifying sequences has been developed. Two robust algorithms – one for compressing and streaming the sequences (using a priori compiled decomposition tables), and one for real-time decoding of the compressed streams – were proposed and implemented. The experimental results confirmed that the developed approach achieved a data compression effect on an ordinary computer system.
APA, Harvard, Vancouver, ISO, and other styles
24

Tacic, Ivan. "Efficient Synchronized Data Distribution Management in Distributed Simulations." Diss., Georgia Institute of Technology, 2005. http://hdl.handle.net/1853/6822.

Full text
Abstract:
Data distribution management (DDM) is a mechanism to interconnect data producers and data consumers in a distributed application. Data producers provide useful data to consumers in the form of messages. For each message produced, DDM determines the set of data consumers interested in receiving the message and delivers it to those consumers. We are particularly interested in DDM techniques for parallel and distributed discrete event simulations. Thus far, researchers have treated synchronization of events (i.e., time management) and DDM independently of each other. This research focuses on how to realize time-managed DDM mechanisms. The main reason for time-managed DDM is to ensure that changes in the routing of messages from producers to consumers occur in a correct sequence. Time-managed DDM also avoids non-determinism in the federation execution, which may result in non-repeatable executions. An optimistic approach to time-managed DDM is proposed where one allows DDM events to be processed out of time stamp order, but a detection and recovery procedure is used to recover from such errors. These mechanisms are tailored to the semantics of the DDM operations to ensure an efficient realization. A correctness proof is presented to verify that the algorithm correctly synchronizes DDM events. We have developed a fully distributed implementation of the algorithm within the framework of the Georgia Tech Federated Simulation Development Kit (FDK) software. A performance evaluation of the synchronized DDM mechanism has been completed in a loosely coupled distributed system consisting of a network of workstations connected over a local area network (LAN). We compare time-managed versus unsynchronized DDM for two applications that exercise different mobility patterns: one based on a military simulation and a second utilizing a synthetic workload. The experiments and analysis illustrate that synchronized DDM performance depends on several factors: the simulation model (e.g., lookahead), the application's mobility patterns, and the network hardware (e.g., the size of network buffers). Under certain mobility patterns, time-managed DDM is as efficient as unsynchronized DDM. There are also mobility patterns where time-managed DDM overheads become significant, and we show how they can be reduced.
APA, Harvard, Vancouver, ISO, and other styles
25

Adams, Richelle Vive-Anne. "Infinitesimal Perturbation Analysis for Active Queue Management." Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/19844.

Full text
Abstract:
Active queue management (AQM) techniques for congestion control in Internet Protocol (IP) networks have been designed using both heuristic and analytical methods. But so far, no AQM scheme has been designed in the realm of stochastic optimization. Of the many options available in this arena, the gradient-based stochastic approximation method using Infinitesimal Perturbation Analysis (IPA) gradient estimators within the Stochastic Fluid Model (SFM) framework is very promising. The research outlined in this thesis provides the theoretical basis and foundational layer for the development of IPA-based AQM schemes. Algorithms for computing the IPA gradient estimators for loss volume and queue workload were derived for the following cases: a single-stage queue with instantaneous, additive loss-feedback, a single-stage queue with instantaneous, additive loss-feedback and an unresponsive competing flow, a single-stage queue with delayed, additive loss-feedback, and a multi-stage tandem network of $m$ queues with instantaneous, additive loss-feedback. For all cases, the IPA gradient estimators were derived with the control parameter, $\theta$, being the buffer-limits of the queue(s). For the single-stage case and the multi-stage case with instantaneous, additive loss-feedback, the IPA gradient estimators for when the control parameter, $\theta$, is the loss-feedback constant were also derived. Sensitivity analyses and optimizations were performed with the control parameter, $\theta$, being the buffer-limits of the queue(s), as well as the loss-feedback constant.
APA, Harvard, Vancouver, ISO, and other styles
26

Al, Rababa'A Abdel Razzaq. "Uncovering hidden information and relations in time series data with wavelet analysis : three case studies in finance." Thesis, University of Stirling, 2017. http://hdl.handle.net/1893/25961.

Full text
Abstract:
This thesis aims to provide new insights into the importance of decomposing aggregate time series data using the Maximum Overlap Discrete Wavelet Transform. In particular, the analysis throughout this thesis involves decomposing the aggregate financial time series data at hand into approximation (low-frequency) and detail (high-frequency) components. Following this, information and hidden relations can be extracted for different investment horizons, as matched with the detail components. The first study examines the ability of different GARCH models to forecast stock return volatility in eight international stock markets. The results demonstrate that de-noising the returns improves the accuracy of volatility forecasts regardless of the statistical test employed. After de-noising, the asymmetric GARCH approach tends to be preferred, although that result is not universal. Furthermore, wavelet de-noising is found to be more important at the key 99% Value-at-Risk level compared to the 95% level. The second study examines the impact of fourteen macroeconomic news announcements on the stock and bond return dynamic correlation in the U.S. from the day of the announcement up to sixteen days afterwards. Results over the full sample offer very little evidence that macroeconomic news announcements affect the stock-bond return dynamic correlation. However, after controlling for the financial crisis of 2007-2008, several announcements become significant both on the announcement day and afterwards. Furthermore, the study observes that news released early in the day, i.e. before 12 pm, and in the first half of the month, exhibits a slower effect on the dynamic correlation than news released later in the month or later in the day. While several announcements exhibit significance in the 2008 crisis period, only CPI and Housing Starts show significant and consistent effects on the correlation outside the 2001, 2008 and 2011 crisis periods. The final study investigates whether recent returns and the time-scaled return can predict subsequent trading in ten stock markets. The study finds little evidence that recent returns do predict subsequent trading, though this predictability is observed more over the long-run horizon. The study also finds a statistical relation between trading and return over the long investment horizons of [8-16] and [16-32] day periods. Yet, this relation is mostly a negative one, only being positive for developing countries.
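A minimal sketch of the decomposition step described above, using PyWavelets' stationary wavelet transform as a stand-in for the MODWT and a universal soft threshold on the detail (high-frequency) coefficients. The toy return series, wavelet choice, decomposition level, and threshold rule are all assumptions for illustration, not the thesis's settings:

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
returns = rng.normal(0.0, 0.01, size=256)   # toy daily returns; length is a multiple of 2**level

level = 3
coeffs = pywt.swt(returns, 'db4', level=level)   # list of (approximation, detail) pairs per level

# Soft-threshold the detail coefficients to de-noise, leaving the approximation part untouched.
denoised = []
for cA, cD in coeffs:
    sigma = np.median(np.abs(cD)) / 0.6745            # robust noise estimate
    thr = sigma * np.sqrt(2.0 * np.log(len(cD)))      # universal threshold
    denoised.append((cA, pywt.threshold(cD, thr, mode='soft')))

clean_returns = pywt.iswt(denoised, 'db4')   # de-noised series, e.g. as input to a GARCH model
```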
APA, Harvard, Vancouver, ISO, and other styles
27

Gustafsson, Rasmus. "Time study and flow-simulation : Current and future analysis." Thesis, Högskolan i Skövde, Institutionen för ingenjörsvetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-17378.

Full text
Abstract:
Discrete-event simulations are increasingly being used to solve problems and to aid in decision making, which is proving useful in the manufacturing industry. The main aim of this thesis was to compare the current production line with a proposed future state, as a supporting basis for the decision on whether to implement the changes. The theoretical framework focused on the Lean philosophy merged with simulation-based methods. The simulation model was built using the collected data. A time study was conducted in order to verify the process and setup times, since these were only estimated at the time. Two simulation models were built, one for the current production line and one for the future state. The future state was based on the current one, since no changes in the process and setup times would be made between the two simulation models. Experiments were then done to compare the different states, one with batch flow and the other with single-piece flow. The parameters were set on equal terms, and the compared values were throughput (TH), lead time (LT) and work in process (WIP). The conclusion drawn from the results is that the future state would be more efficient.
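Although the abstract does not state it, the three compared measures are tied together by Little's law, $\mathit{WIP} = \mathit{TH} \times \mathit{LT}$: for a hypothetical line producing 100 units per day with a lead time of 1.5 days, the average work in process is $100 \times 1.5 = 150$ units, so any change that shortens lead time at constant throughput reduces WIP in the same proportion.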
APA, Harvard, Vancouver, ISO, and other styles
28

Shreedharan, Srisharan. "Stability Investigations of Tunnels in a Coal Mine in China Through 3D-Discontinuum Numerical Modeling and Field Deformation Monitoring Data." Thesis, The University of Arizona, 2016. http://hdl.handle.net/10150/612541.

Full text
Abstract:
An imperative task for successful underground mining is to ensure the stability of underground structures, since it influences the safety, and in turn, the production capacity and economic performance of the mine. This is more so for deep excavations in soft rock which may be under significantly high stresses. In this thesis, stability studies on two tunnels, a horseshoe-shaped and an inverted arch-shaped tunnel, have been presented. The tunnels, running at a depth of 1325 m, are part of the Xiezhuang Coal Mine, in the Xinwen mining area, in China. Using the available information on stratigraphy, geological structures, in-situ stress measurements and geo-mechanical properties of intact rock and discontinuity interfaces, a three-dimensional numerical model has been built using the 3DEC 3-Dimensional Distinct Element Code to simulate the stress conditions around the tunnels. Based on available discontinuity geometry constraints, the rock mass has been modelled as a mixture of a discontinuum medium close to the tunnels and as an equivalent-continuum in the far field. Due to the unavailability of field measurements for rock mass mechanical parameters, the parameters have been estimated by incorporating the available intact rock mechanical properties and field deformation monitoring data into a strength reduction model calibration procedure. This back-analysis (calibration) has been carried out through a pseudo-time dependent support installation routine which incorporates the effect of time through a stress-relaxation mechanism. The results from the back-analysis indicate that the rock mass cohesion, tensile strength, uniaxial compressive strength, and elastic modulus values are about 35-45 % of the corresponding intact rock property values. Additionally, the importance of incorporating stress relaxation before support installation in numerical modeling has been illustrated, for the first time in literature, through the increased support factors of safety and reduced grout failures. The calibrated models have been analyzed for different supported and unsupported cases in an attempt to quantify the effect of supports in stabilizing the tunnels and to estimate the adequacy of the existing supports being used in the mine. A direct outcome is that the findings indicate that longer supports may be better suited for the existing geo-mining conditions around the tunnels since they have fractured zones that are larger than the supports currently in use at the mine. The effects of supports have been demonstrated using changes in deformations and yield zones around the tunnels, and changes in the average factors of safety and grout failures of the supports. The use of longer supports and floor bolting has provided greater stability for the rock masses around the tunnels. A comparison between the closure strains in the two differently shaped tunnels indicates that the inverted arch tunnel may be more efficient in reducing roof sag and floor heave for the existing geo-mining conditions. Additional analyses focusing on parametric sensitivity studies on the rock and joint mechanical properties show that the tunnel stability is highly sensitive to changes in cohesion and internal friction angle of the intact rock, and changes in joint basic friction angle. Tunnel stability is seen to not be very sensitive to changes in intact rock tensile strength and joint shear stiffness for the tunnels being studied. 
Finally, support optimization studies conducted by studying the effect of changing cable diameters and grout uniaxial compressive strengths on support factors of safety and grout failures show the trade-off that is necessary in selecting cable strength vis-à-vis grout strength. The results indicate that simply increasing either one of cable or grout strength parameters without considering their interactions and compatibilities could be detrimental to the stability of the support system.
APA, Harvard, Vancouver, ISO, and other styles
29

Kaeslin, Alain E. "Performance Optimisation of Discrete-Event Simulation Software on Multi-Core Computers." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-191132.

Full text
Abstract:
SIMLOX is a discrete-event simulation software developed by Systecon AB for analysing logistic support solution scenarios. To cope with ever larger problems, SIMLOX's simulation engine was recently enhanced with a parallel execution mechanism in order to take advantage of multi-core processors. However, this extension did not result in the desired reduction in runtime for all simulation scenarios even though the parallelisation strategy applied had promised linear speedup. Therefore, an in-depth analysis of the limiting scalability bottlenecks became necessary and has been carried out in this project. Through the use of a low-overhead profiler and microarchitecture analysis, the root causes were identified: atomic operations causing a high communication overhead, poor locality leading to translation lookaside buffer thrashing, and hot spots that consume significant amounts of CPU time. Subsequently, appropriate optimisations to overcome the limiting factors were implemented: eliminating the expensive operations, more efficient handling of heap memory through the use of a scalable memory allocator, and data structures that make better use of caches. Experimental evaluation using real world test cases demonstrated a speedup of at least 6.75x on an eight-core processor. Most cases even achieve a speedup of more than 7.2x. The various optimisations implemented further helped to lower run times for sequential execution by 1.5x or more. It can be concluded that achieving nearly linear speedup on a multi-core processor is possible in practice for discrete-event simulation.
APA, Harvard, Vancouver, ISO, and other styles
30

Comas, Cufí Marc. "Aportacions de l'anàlisi composicional a les mixtures de distribucions." Doctoral thesis, Universitat de Girona, 2018. http://hdl.handle.net/10803/664902.

Full text
Abstract:
The present thesis is a compendium of three original works produced between 2014 and 2018. The papers have a common link: they are different contributions made by compositional data analysis to the study of models based on mixtures of probability distributions. In brief, we could say that compositional data analysis is a methodology that consists of studying a sample of measures that are strictly positive from a relative point of view. Mixtures of distributions are a specific type of probability distribution defined as the convex linear combination of other distributions.
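As a small illustration of the "relative point of view" mentioned above, here is a sketch of the centred log-ratio (clr) transform applied to the mixing weights of a mixture, which maps positive, sum-to-one weights into an unconstrained real space. The three-component weights are a made-up example, not data from the thesis:

```python
import numpy as np

def clr(parts):
    # Centred log-ratio transform: log of each part relative to the geometric mean of all parts.
    parts = np.asarray(parts, dtype=float)
    geometric_mean = np.exp(np.mean(np.log(parts)))
    return np.log(parts / geometric_mean)

weights = np.array([0.7, 0.2, 0.1])   # hypothetical mixing proportions of a three-component mixture
print(clr(weights))                   # clr coordinates sum to zero and are free of the unit-sum constraint
```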
APA, Harvard, Vancouver, ISO, and other styles
31

Kularatne, Thamarasi. "Economics of optimal management of tourism resources : a demand and supply analysis." Thesis, Queensland University of Technology, 2017. https://eprints.qut.edu.au/104115/1/Telwadana%20Mudiyanselage%20Thamarasi_Kularatne_Thesis.pdf.

Full text
Abstract:
This thesis is a study of tourism demand and supply with the objective of revealing the optimal management of resources used in tourism. The first part investigates consumer preferences with respect to nature-based tourism together with a study of the manner in which tourists’ experiences impact on non-market valuations. The second part describes an efficiency evaluation of the hotel industry which focuses on the impact of eco-friendly practices on hotel operations. The analytical techniques used include discrete choice modelling, structural choice modelling and data envelopment analysis. The findings contribute to the important goal of creating a sustainable tourism industry in Sri Lanka and similar destinations.
APA, Harvard, Vancouver, ISO, and other styles
32

Memedi, Mevludin. "Mobile systems for monitoring Parkinson's disease." Licentiate thesis, Örebro universitet, Akademin för naturvetenskap och teknik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-20552.

Full text
Abstract:
This thesis presents the development and evaluation of IT-based methods and systems for supporting assessment of symptoms and enabling remote monitoring of Parkinson's disease (PD) patients. PD is a common neurological disorder associated with impaired body movements. Its clinical management regarding treatment outcomes and follow-up of patients is complex. In order to reveal the full extent of a patient's condition, there is a need for repeated and time-stamped assessments related to both patient's perception towards common symptoms and motor function. In this thesis, data from a mobile device test battery, collected during a three year clinical study, was used for the development and evaluation of methods. The data was gathered from a series of tests, consisting of self-assessments and motor tests (tapping and spiral drawing). These tests were carried out repeatedly in a telemedicine setting during week-long test periods. One objective was to develop a computer method that would process traced spiral drawings and generate a score representing PD-related drawing impairments. The data processing part consisted of using the discrete wavelet transform and principal component analysis. When this computer method was evaluated against human clinical ratings, the results showed that it could perform quantitative assessments of drawing impairment in spirals comparatively well. As a part of this objective, a review of systems and methods for detecting the handwriting and drawing impairment using touch screens was performed. The review showed that measures concerning forces, accelerations, and radial displacements were the most important ones in detecting fine motor movement anomalies. Another objective of this thesis work was to design and evaluate an information system for delivering assessment support information to the treating clinical staff for monitoring PD symptoms in their patients. The system consisted of a patient node for data collection based on the mobile device test battery, a service node for data storage and processing, and a web application for data presentation. A system module was designed for compiling the test battery time series into summary scores on a test period level. The web application allowed adequate graphic feedback of the summary scores to the treating clinical staff. The evaluation results for this integrated system indicate that it can be used as a tool for frequent PD symptom assessments in home environments.
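A minimal sketch of the kind of pipeline described (discrete wavelet transform of a drawing signal followed by principal component analysis), with made-up radial traces, a log-energy feature per sub-band, and the first principal component used as a single score; none of these specific choices are taken from the thesis:

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
traces = rng.normal(size=(40, 256))   # toy data: each row is one spiral drawing's radial signal

def wavelet_features(signal, wavelet="db4", level=4):
    # Discrete wavelet transform; summarise each sub-band by its log-energy.
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.log(np.sum(c ** 2) + 1e-12) for c in coeffs])

X = np.vstack([wavelet_features(t) for t in traces])

# First principal component of the wavelet-domain features, used here as a single drawing score.
score = PCA(n_components=1).fit_transform(X).ravel()
print(score[:5])
```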
APA, Harvard, Vancouver, ISO, and other styles
33

Olson, Brent. "Evaluating the error of measurement due to categorical scaling with a measurement invariance approach to confirmatory factor analysis." Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/332.

Full text
Abstract:
It has previously been determined that using 3 or 4 points on a categorized response scale will fail to produce a continuous distribution of scores. However, there is no evidence, thus far, revealing the number of scale points that may indeed possess an approximate or sufficiently continuous distribution. This study provides the evidence to suggest the level of categorization in discrete scales that makes them directly comparable to continuous scales in terms of their measurement properties. To do this, we first introduced a novel procedure for simulating discretely scaled data that was both informed and validated through the principles of the Classical True Score Model. Second, we employed a measurement invariance (MI) approach to confirmatory factor analysis (CFA) in order to directly compare the measurement quality of continuously scaled factor models to that of discretely scaled models. The simulated design conditions of the study varied with respect to item-specific variance (low, moderate, high), random error variance (none, moderate, high), and discrete scale categorization (number of scale points ranged from 3 to 101). A population analogue approach was taken with respect to sample size (N = 10,000). We concluded that there are conditions under which response scales with 11 to 15 scale points can reproduce the measurement properties of a continuous scale. Using response scales with more than 15 points may be, for the most part, unnecessary. Scales having from 3 to 10 points introduce a significant level of measurement error, and caution should be taken when employing such scales. The implications of this research and future directions are discussed.
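The following is a minimal sketch of the kind of simulation described, built directly on the Classical True Score Model (observed = true + error) and equal-width category thresholds; the error variance and the particular scale-point values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
true_score = rng.normal(size=n)            # latent true scores
error = rng.normal(scale=0.5, size=n)      # random measurement error
continuous = true_score + error            # Classical True Score Model: X = T + E

def discretize(x, k):
    # Cut the continuous scores into k equal-width categories.
    edges = np.linspace(x.min(), x.max(), k + 1)
    return np.digitize(x, edges[1:-1])

for k in (3, 5, 11, 15, 101):
    d = discretize(continuous, k)
    print(f"{k:3d} scale points: r(true, discrete) = {np.corrcoef(true_score, d)[0, 1]:.3f} "
          f"vs r(true, continuous) = {np.corrcoef(true_score, continuous)[0, 1]:.3f}")
```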
APA, Harvard, Vancouver, ISO, and other styles
34

Najman, Laurent. "Morphologie mathématique, systèmes dynamiques et applications au traitement des images." Habilitation à diriger des recherches, Université de Marne la Vallée, 2006. http://tel.archives-ouvertes.fr/tel-00715406.

Full text
Abstract:
This habilitation thesis summarises some fifteen years of research in industry and academia. It is divided into three parts, dealing respectively with the theory of mathematical morphology, with dynamical systems, and finally with applications to image processing. In mathematical morphology, our work has focused mainly on image filtering and segmentation. Concerning filtering, we proposed a quasi-linear algorithm for computing the component tree, one of the natural structures organising the level sets of an image. In segmentation, we were mainly interested in the watershed, for which we proposed a continuous definition. Using the formalism recently introduced by Gilles Bertrand, we compared the properties of several discrete definitions and proposed a quasi-linear algorithm for computing the topological watershed; this algorithm relies in part on the component-tree algorithm. Finally, we proposed hierarchical schemes for using the watershed, and in particular we proposed to assign to the produced contours a saliency value giving the importance of each contour in the hierarchy. These studies led us to propose classes of graphs suited to region merging, highlighting in particular the equivalence between one of these classes and the class of graphs in which every binary watershed is thin. Concerning dynamical systems, we used tools from set-valued analysis and viability theory to propose an algorithm (known as the "Montagnes Russes" algorithm) that converges to the global minimum of a function whose infimum is known. Combining the algebraic framework of complete lattices, algebra and the theory of differential inclusions allowed us to establish algebraic and continuity properties of maps acting on closed sets, such as the reachable set or the viability kernel. We used the framework of mutational equations, which makes it possible to derive tubes of shape deformations in metric spaces, to prove rigorously, and without any regularity assumption on the shape, the intuition that dilation deforms a shape in the direction of its normals at every point. We adapted Euler's theorem to the framework of mutational equations, making it possible to approximate a solution of a mutational equation by a sequence of points in a metric space. Finally, we proposed a generic simulation approach based on particle systems, whose effectiveness has been demonstrated in industrial settings, in particular for crowd simulation, image synthesis and the simulation of airbag deployment. We believe that good theoretical studies help to produce high-quality applications; conversely, good theoretical problems can originate in good applied problems. This thesis summarises a number of works whose industrial interest has been demonstrated by patents or software, for example a 4D (3D+time) segmentation tool for cardiac imaging used by physicians in their clinical practice.
We also worked for several years in the field of document image processing. We proposed a tool based on mathematical morphology for estimating the skew angle of a scanned document. More specifically, we studied problems related to the indexing and recognition of technical drawings.
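The quasi-linear component-tree construction mentioned above rests on union-find. As an illustration only, here is a minimal Python sketch of a max-tree (the component tree of upper level sets) for a 1-D signal; the function name, the restriction to one dimension and the toy example are our assumptions, and the algorithms of the thesis operate on arbitrary images and graphs.

```python
def max_tree_1d(values):
    """Toy max-tree (component tree of upper level sets) of a 1-D signal,
    built with union-find by visiting samples in decreasing order.
    Returns a parent array encoding the tree (the root points to itself)."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i], reverse=True)
    parent = [None] * n          # component-tree parent of each sample
    zpar = [None] * n            # union-find parent (path compressed)

    def find(i):
        while zpar[i] != i:
            zpar[i] = zpar[zpar[i]]
            i = zpar[i]
        return i

    for p in order:                          # decreasing values
        parent[p] = p
        zpar[p] = p
        for q in (p - 1, p + 1):             # 1-D neighbours
            if 0 <= q < n and zpar[q] is not None:
                r = find(q)                  # root of the neighbouring component
                if r != p:
                    parent[r] = p            # hang that component below p
                    zpar[r] = p
    for p in reversed(order):                # canonicalisation pass
        q = parent[p]
        if values[parent[q]] == values[q]:
            parent[p] = parent[q]
    return parent

# Two peaks (values 3 and 4) merge through the valley of value 0 at index 2.
print(max_tree_1d([1, 3, 0, 2, 4, 1]))       # -> [2, 0, 2, 5, 3, 2]
```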
APA, Harvard, Vancouver, ISO, and other styles
35

Sjölander, Martina, and Philip Viklund. "Simuleringsarbete i medicinteknisk produktion : En fallstudie med dataanalys, verifiering och validering samt flaskhalsanalys av en simuleringsmodell." Thesis, Uppsala universitet, Industriell teknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-415351.

Full text
Abstract:
The world is constantly changing and evolving ever faster. In industry, it is important to evolve continuously in order to adapt to the market and maintain competitiveness. The ongoing fourth industrial revolution is urging companies to increase digitalisation and automate production. Simulation is a tool that can be used to develop, test and control production. In the pharmaceutical industry, customers require that both the manufacturing process and the product ensure good quality. At the same time, the demand for pharmaceuticals and medical devices is high. With simulation, production systems can be evaluated, developed and improved in a safer and more efficient way. The purpose of the study is to investigate, through a simulation, whether the mass-producing pharmaceutical industry can use Discrete Event Simulation to improve production planning and control. The purpose was partly fulfilled by a situation assessment which, together with a literature study, was used to find out what data are required to establish a simulation model. Finally, a simulation model representative of reality was established, and a bottleneck analysis was performed as part of the validation. The study is based on a case study in which qualitative and quantitative methods were used to answer the research questions. The method was designed so that it could be carried out remotely, since it was not possible to be present at the company where the study was conducted. Two simulation models, which differ in level of detail, are presented in the results of the study. The more detailed model could be used as a basis for improvement work. The simplified model reflects production at a general level and can be used for production planning and production capacity estimation. The bottleneck analysis identifies which process is the bottleneck, and the result agrees with the company's own perception: assembly and blister packing form the largest bottleneck. The most important improvement suggestion is to increase the availability of the blister-packing process, which increases production throughput by 6 percent and reduces the lead time by 1.5 days. To make use of the detailed simulation model, further data collection at sub-process level is also proposed.
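Discrete Event Simulation of a serial production line can be illustrated in a few lines of Python. The sketch below is not the case-study model: the two stations (assembly followed by blister packing), the exponential processing times and the infinite-buffer assumption are illustrative choices; it only shows how station utilisation exposes the bottleneck.

```python
import random

def simulate_line(n_jobs, stations, seed=1):
    """Minimal discrete-event simulation of a serial line with infinite buffers.
    `stations` maps a station name to a function drawing one processing time.
    Returns the utilisation of each station and the makespan."""
    random.seed(seed)
    names = list(stations)
    free_at = {s: 0.0 for s in names}   # time at which each station becomes idle
    busy = {s: 0.0 for s in names}      # accumulated processing time per station
    makespan = 0.0
    for _ in range(n_jobs):
        t = 0.0                          # raw material is assumed always available
        for s in names:
            start = max(t, free_at[s])   # wait until the station is free
            duration = stations[s]()
            t = start + duration
            free_at[s] = t
            busy[s] += duration
        makespan = max(makespan, t)
    return {s: busy[s] / makespan for s in names}, makespan

# Illustrative processing times (minutes); not taken from the case study.
stations = {
    "assembly":        lambda: random.expovariate(1 / 4.0),
    "blister_packing": lambda: random.expovariate(1 / 6.0),
}
utilisation, makespan = simulate_line(5000, stations)
print(utilisation)   # the station with the highest utilisation is the bottleneck
```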
APA, Harvard, Vancouver, ISO, and other styles
36

Spinnato, Juliette. "Modèles de covariance pour l'analyse et la classification de signaux électroencéphalogrammes." Thesis, Aix-Marseille, 2015. http://www.theses.fr/2015AIXM4727/document.

Full text
Abstract:
This thesis is set within the framework of analyzing and classifying electroencephalogram (EEG) signals using discriminant analysis. These multi-sensor signals, which are by nature highly correlated spatially and temporally, are considered in the time-frequency domain. In particular, we focus on low-frequency event-related potential-type signals (ERPs) that are well described in the wavelet domain. Thereafter, we consider signals represented by multi-scale coefficients that have a matrix structure electrodes × coefficients. Moreover, EEG signals are seen as a mixture between the signal of interest that we want to extract and spontaneous activity (also called "background noise"), which is largely predominant. The main problem here is to distinguish signals from different experimental conditions (classes). In the binary case, we focus on the probabilistic approach to discriminant analysis, and Gaussian mixture models are used, describing the signals in each class in terms of fixed (mean) and random components. The latter, characterized by its covariance matrix, makes it possible to model different sources of variability. The estimation of this matrix (and of its inverse) is essential for the implementation of the discriminant analysis and can be degraded by high dimensions and/or small training samples, which is the application setting of this thesis. We are interested in alternatives that are based on specific covariance model(s) and that reduce the number of parameters to estimate.
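A standard remedy when the number of wavelet coefficients exceeds the number of training trials is to shrink the sample covariance towards a scaled identity before inverting it. The NumPy sketch below is a generic illustration of that idea, not the structured covariance models developed in the thesis; the function names, the shrinkage weight `lam` and the common-covariance assumption are ours.

```python
import numpy as np

def shrunk_cov(X, lam=0.1):
    """Sample covariance shrunk towards a scaled identity: (1-lam)*S + lam*(tr(S)/p)*I."""
    S = np.cov(X, rowvar=False)
    p = S.shape[0]
    return (1 - lam) * S + lam * (np.trace(S) / p) * np.eye(p)

def fit_gaussian_discriminant(X0, X1, lam=0.1):
    """Two-class Gaussian discriminant with a common, regularised covariance."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Sigma = shrunk_cov(np.vstack([X0 - mu0, X1 - mu1]), lam)
    return mu0, mu1, np.linalg.inv(Sigma)

def predict(x, mu0, mu1, Sigma_inv):
    """Assign x to the class with the smaller Mahalanobis distance."""
    d0 = (x - mu0) @ Sigma_inv @ (x - mu0)
    d1 = (x - mu1) @ Sigma_inv @ (x - mu1)
    return int(d1 < d0)

# Toy example: 20 trials per class, 50 coefficients (singular covariance without shrinkage).
rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 1.0, size=(20, 50))
X1 = rng.normal(0.3, 1.0, size=(20, 50))
params = fit_gaussian_discriminant(X0, X1)
print(predict(X1[0], *params))
```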
APA, Harvard, Vancouver, ISO, and other styles
37

Mercat, Christian. "Holomorphie discrète et modèle d'Ising." Phd thesis, Université Louis Pasteur - Strasbourg I, 1998. http://tel.archives-ouvertes.fr/tel-00001851.

Full text
Abstract:
My thesis generalises the notion of criticality for the two-dimensional Ising model. I define a new notion of discrete holomorphy on a cell decomposition of a Riemann surface. The Ising model converges, in the thermodynamic limit, to a continuous conformal theory when the limit is taken on a lattice (square, triangular) near the critical temperature. I extend this criticality to general cell decompositions and decompose the spinor into discrete holomorphic and antiholomorphic parts, the discrete analogues of conformal blocks. A discrete Cauchy-Riemann equation is defined on the double of a cell decomposition. Classical theorems still carry over: harmonicity, a basis of differentials, poles, the residue theorem. There are differences: the pointwise product does not preserve holomorphy, poles are of order one, and the space of holomorphic forms has dimension twice the genus. A map is defined as semi-critical if, from a discrete holomorphic function $f$ and a flat local chart $Z$, one can build a closed $1$-form $fdZ$, and critical if $fdZ$ is holomorphic. This class contains the lattices, but much more. A convergent sequence of discrete holomorphic functions on a convergent sequence of critical maps has as its limit a holomorphic function on the Riemann surface. In the case of triangular and square lattices, we show that the statistical criticality of the Ising model is equivalent to our criticality for a conformal structure related to the interaction constants. A massless Dirac equation is defined, and the existence of a solution is equivalent to criticality. The Dirac spinor then makes it possible to decompose the Ising fermion into a holomorphic part and an antiholomorphic part.
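For orientation, the discrete Cauchy-Riemann equation in this setting is usually stated face by face on the double: each quadrilateral has a primal edge $(x, x')$ and the dual edge $(y, y')$ as its diagonals, and a function $f$ on the vertices is discrete holomorphic when, for a positive constant $\rho(x,x')$ attached to the quadrilateral (this paraphrase is ours, not a quotation from the thesis),
\[
f(y') - f(y) = \mathrm{i}\,\rho(x,x')\,\bigl(f(x') - f(x)\bigr).
\]
For critical maps the constants $\rho$ are tied to the geometry of the faces, and in the Ising case the thesis relates them to the interaction constants of the model.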
APA, Harvard, Vancouver, ISO, and other styles
38

Suriadi, Suriadi. "Strengthening and formally verifying privacy in identity management systems." Thesis, Queensland University of Technology, 2010. https://eprints.qut.edu.au/39345/1/Suriadi_Suriadi_Thesis.pdf.

Full text
Abstract:
In a digital world, users’ Personally Identifiable Information (PII) is normally managed with a system called an Identity Management System (IMS). There are many types of IMSs. There are situations when two or more IMSs need to communicate with each other (such as when a service provider needs to obtain some identity information about a user from a trusted identity provider). There could be interoperability issues when communicating parties use different types of IMS. To facilitate interoperability between different IMSs, an Identity Meta System (IMetS) is normally used. An IMetS can, at least theoretically, join various types of IMSs to make them interoperable and give users the illusion that they are interacting with just one IMS. However, due to the complexity of an IMS, attempting to join various types of IMSs is a technically challenging task, let alone assessing how well an IMetS manages to integrate these IMSs. The first contribution of this thesis is the development of a generic IMS model called the Layered Identity Infrastructure Model (LIIM). Using this model, we develop a set of properties that an ideal IMetS should provide. This idealized form is then used as a benchmark to evaluate existing IMetSs. Different types of IMS provide varying levels of privacy protection support. Unfortunately, as observed by Jøsang et al. (2007), there is insufficient privacy protection in many of the existing IMSs. In this thesis, we study and extend a type of privacy enhancing technology known as an Anonymous Credential System (ACS). In particular, we extend the ACS which is built on the cryptographic primitives proposed by Camenisch, Lysyanskaya, and Shoup. We call this system the Camenisch, Lysyanskaya, Shoup - Anonymous Credential System (CLS-ACS). The goal of CLS-ACS is to let users be as anonymous as possible. Unfortunately, CLS-ACS has problems, including (1) the concentration of power in a single entity - known as the Anonymity Revocation Manager (ARM) - who, if malicious, can trivially reveal a user’s PII (resulting in an illegal revocation of the user’s anonymity), and (2) poor performance due to the resource-intensive cryptographic operations required. The second and third contributions of this thesis are the proposal of two protocols that reduce the trust dependencies on the ARM during users’ anonymity revocation. Both protocols distribute trust from the ARM to a set of n referees (n > 1), resulting in a significant reduction of the probability of an anonymity revocation being performed illegally. The first protocol, called the User Centric Anonymity Revocation Protocol (UCARP), allows a user’s anonymity to be revoked in a user-centric manner (that is, the user is aware that his/her anonymity is about to be revoked). The second protocol, called the Anonymity Revocation Protocol with Re-encryption (ARPR), allows a user’s anonymity to be revoked by a service provider in an accountable manner (that is, there is a clear mechanism to determine which entity can eventually learn - and possibly misuse - the identity of the user). The fourth contribution of this thesis is the proposal of a protocol called the Private Information Escrow bound to Multiple Conditions Protocol (PIEMCP). This protocol is designed to address the performance issue of CLS-ACS by applying the CLS-ACS in a federated single sign-on (FSSO) environment.
Our analysis shows that PIEMCP can both reduce the amount of expensive modular exponentiation operations required and lower the risk of illegal revocation of users’ anonymity. Finally, the protocols proposed in this thesis are complex and need to be formally evaluated to ensure that their required security properties are satisfied. In this thesis, we use Coloured Petri nets (CPNs) and their corresponding state space analysis techniques. All of the protocols proposed in this thesis have been formally modeled and verified using these techniques. Therefore, the fifth contribution of this thesis is a demonstration of the applicability of CPNs and their analysis techniques in modeling and verifying privacy enhancing protocols. To our knowledge, this is the first time that CPNs have been comprehensively applied to model and verify privacy enhancing protocols. From our experience, we also propose several CPN modeling approaches, including the modeling of complex cryptographic primitives (such as zero-knowledge proof protocols), attack parameterization, and others. The proposed approaches can be applied to other security protocols, not just privacy enhancing protocols.
APA, Harvard, Vancouver, ISO, and other styles
39

Li, Yunming. "Machine vision algorithms for mining equipment automation." Thesis, Queensland University of Technology, 2000.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
40

Gland, Nicolas. "Etude numérique des propriétés mécaniques et des processus de déformation et d'endommagement des matériaux granulaires." Phd thesis, Université Paris Sud - Paris XI, 2003. http://tel.archives-ouvertes.fr/tel-00006671.

Full text
Abstract:
Understanding the structural and mechanical properties of granular assemblies remains an open problem. Numerical modelling of these media by Molecular Dynamics helps to answer some of these questions. First, we study the effects of the mixing parameters on the degree of order and the stress distribution in these grain packings. We then turn to the acoustic properties of these media, which are highly nonlinear compared with continuous elastic solids; our simulations help clarify the shortcomings of Effective Medium Theory, which only partially accounts for the pressure dependence of the elastic moduli, and they show that grain relaxation after an infinitesimal deformation is an essential component of the shear modulus. There is therefore a need for an alternative formulation, and the traditional elasto-plastic approach has been challenged by new models. While the numerical experiment measuring the response function (the stress field generated by a point force) shows elastic rescaling, it also reveals important relaxation phenomena. The deformation mechanisms of granular materials were also studied by simulating biaxial tests; they qualitatively reproduce the classical mechanical data (stress peak, compaction/dilatancy transition) as well as strain localisation; at low pressure, grain rotations accommodate the deformation, whereas at high pressure, friction mobilisation is more pronounced. Finally, we propose a probabilistic damage model for granular rocks that predicts a dependence of the damage on both the cementation heterogeneity and the sample size. These predictions are confirmed by simulations, which also reveal two successive damage regimes, first diffuse and then concentrated into clusters.
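For background, molecular-dynamics (discrete-element) simulations of grain packings integrate Newton's equations with a soft-sphere contact law; a common linear spring-dashpot form, given here only as a generic illustration and not as the specific law used in the thesis, is
\[
F_n \;=\; \begin{cases} k_n\,\delta + \gamma_n\,\dot{\delta}, & \delta > 0,\\ 0, & \delta \le 0, \end{cases}
\qquad \lVert \mathbf{F}_t \rVert \;\le\; \mu\,F_n ,
\]
where $\delta$ is the overlap between two grains, $k_n$ a normal stiffness, $\gamma_n$ a damping coefficient, and the tangential force $\mathbf{F}_t$ is capped by Coulomb friction with coefficient $\mu$.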
APA, Harvard, Vancouver, ISO, and other styles
41

Bringmann, Philipp. "Adaptive least-squares finite element method with optimal convergence rates." Doctoral thesis, Humboldt-Universität zu Berlin, 2021. http://dx.doi.org/10.18452/22350.

Full text
Abstract:
The least-squares finite element methods (LSFEMs) are based on the minimisation of the least-squares functional, which consists of the squared norms of the residuals of a first-order system of partial differential equations. This functional provides a reliable and efficient built-in a posteriori error estimator and allows for adaptive mesh-refinement. The established convergence analysis with rates for adaptive algorithms, as summarised in the axiomatic framework by Carstensen, Feischl, Page, and Praetorius (Comp. Math. Appl., 67(6), 2014), fails for two reasons. First, the least-squares estimator lacks prefactors in terms of the mesh-size, which seemingly prevents a reduction under mesh-refinement. Second, the first-order divergence LSFEMs measure the flux or stress errors in the H(div) norm and thus involve a data resolution error of the right-hand side f. These difficulties led to a twofold paradigm shift in the convergence analysis with rates for adaptive LSFEMs in Carstensen and Park (SIAM J. Numer. Anal., 53(1), 2015) for the lowest-order discretisation of the 2D Poisson model problem with homogeneous Dirichlet boundary conditions. Accordingly, a novel explicit residual-based a posteriori error estimator accomplishes the reduction property. Furthermore, a separate marking strategy in the adaptive algorithm ensures sufficient data resolution. This thesis presents the generalisation of these techniques to three linear model problems, namely the Poisson problem, the Stokes equations, and the linear elasticity problem. It verifies the axioms of adaptivity with separate marking by Carstensen and Rabus (SIAM J. Numer. Anal., 55(6), 2017) in three spatial dimensions. The analysis covers discretisations with arbitrary polynomial degree and inhomogeneous Dirichlet and Neumann boundary conditions. Numerical experiments confirm the theoretically proven optimal convergence rates of the h-adaptive algorithm.
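For orientation, in the Poisson case the first-order system reads $\sigma = \nabla u$ and $-\operatorname{div}\sigma = f$, so the least-squares functional minimised over the discrete spaces is
\[
LS(f;\,\sigma, u) \;=\; \lVert f + \operatorname{div}\sigma \rVert_{L^2(\Omega)}^2 \;+\; \lVert \sigma - \nabla u \rVert_{L^2(\Omega)}^2 ,
\]
and its value at the discrete minimiser serves as the built-in a posteriori error estimator mentioned above.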
APA, Harvard, Vancouver, ISO, and other styles
42

Durant, Stéphane. "Propagation de la lumière en milieu aléatoire : rôle de l'absorption, de la diffusion dépendante et du couplage surface-volume." Phd thesis, Ecole Centrale Paris, 2003. http://tel.archives-ouvertes.fr/tel-00004154.

Full text
Abstract:
L'étude de la propagation de la lumière à travers un milieu diffusant est un sujet de recherche d'une portée à la fois fondamentale et appliquée : - fondamentale pour l'étude des phénomènes de transport en général; - et appliquée par exemple pour évaluer les propriétés radiatives de revêtements (comme des peintures diffusantes), pour faire de l'imagerie à travers un milieu diffusant (tissus vivants) etc... De manière générale, la lumière est diffusée (scattered en anglais) lorsqu'elle se propage dans un milieu hétérogène. Du point de vue de la théorie électromagnétique, un milieu est dit hétérogène quand l'indice de réfraction varie spatialement. En principe, la solution exacte du problème s'obtient par la résolution numérique des équations de Maxwell, mais si le système hétérogène étudié dépasse en taille quelques longueurs d'onde, alors le nombre d'inconnues est beaucoup trop important pour que l'on puisse résoudre exactement le problème. Il est nécessaire de moyenner les propriétés de diffusion du système. Pour un milieu fortement diffusant, et de taille macroscopique, on peut décrire phénoménologiquement le transport de la lumière au moyen d'une équation de diffusion (diffusion en anglais) tout comme est décrit le transport de chaleur ou de masse. On peut aussi décrire le transport de la lumière au moyen d'une Equation de Transfert Radiatif (ETR) plus générale que l'équation de la diffusion. Cette équation, une fois résolue, donne des résultats qui sont en très bon accord avec l'expérience si les paramètres de cette équation sont correctement évalués.

La majeure partie de cette thèse est consacrée à cet aspect : la détermination des paramètres de l'ETR pour un milieu contenant des particules de taille comparable à la longueur d'onde, aléatoirement disposées dans un milieu absorbant. Alors que les modèles pour l'obtention de ces paramètres décrits dans la littérature en présence d'absorption sont tous phénoménologiques, nous présentons une méthode basée sur une théorie de champ rigoureuse et qui permet de définir sans ambiguïté ces paramètres. Nous analysons par ailleurs le rôle des corrélations en milieu absorbant (diffusion dépendante).

Nous nous intéressons aux milieux fortement chargés pour lesquels les corrélations sur les positions des diffuseurs jouent un rôle fondamental et rendent le calcul très complexe. Nous présentons les premiers pas d'une méthode numérique capable de prendre en compte toutes les corrélations entre les paires de particules, ce qui permet de calculer le coefficient d'extinction au delà de l'approximation de la diffusion indépendante.

Enfin, nous cherchons à évaluer les propriétés radiatives d'un système couramment rencontré : celui d'une couche diffusante a la fois en surface et en volume. Nous étudions en particulier le rôle de la diffusion multiple entre l'interface rugueuse et le volume chargé.
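For reference, the steady-state radiative transfer equation referred to above has the generic form (written without an internal emission term; normalisation conventions for the phase function vary)
\[
\mathbf{u}\cdot\nabla I(\mathbf{r},\mathbf{u}) \;=\; -(\mu_a + \mu_s)\, I(\mathbf{r},\mathbf{u}) \;+\; \mu_s \int_{4\pi} p(\mathbf{u},\mathbf{u}')\, I(\mathbf{r},\mathbf{u}')\, \mathrm{d}\mathbf{u}' ,
\]
where $I$ is the specific intensity, $\mu_a$ and $\mu_s$ are the absorption and scattering coefficients and $p$ is the phase function; evaluating these parameters for wavelength-sized particles in an absorbing host is the problem addressed above.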
APA, Harvard, Vancouver, ISO, and other styles
43

Rouquier, Jean-Baptiste. "Robustesse et émergence dans les systèmes complexes : le modèle des automates cellulaires." Phd thesis, Ecole normale supérieure de lyon - ENS LYON, 2008. http://tel.archives-ouvertes.fr/tel-00354042.

Full text
Abstract:
L'objet de ce travail est de mieux comprendre ce qui se produit lorsque l'on perturbe un système complexe, en utilisant les automates cellulaires comme modèle. Nous nous intéressons principalement à deux perturbations. La première concerne l'écoulement du temps : contrairement au modèle habituel, nous utilisons des mises à jour asynchrones, c'est-à-dire que, à chaque étape, seulement une partie des cellules sont mises à jour. L'autre perturbation concerne la topologie, c'est-à-dire le graphe d'interaction entre les cellules.
Une première partie étudie expérimentalement l'apparition de la percolation dirigée dans les automates cellulaires, notamment dans le cadre du "damage spreading". Le dernier chapitre de cette partie prouve une équivalence entre une classe d'automates cellulaires probabilistes et les automates cellulaires asynchrones.
La seconde partie étudie dans un premier chapitre l'interaction des deux perturbations évoquées: asynchronisme et topologie. Alors que le modèle habituel utilise une grille Zd, nous étudions une grille où certains liens sont temporairement coupés. Puis un second chapitre démontre des propriétés théoriques sur la règles minorité lorsque la topologie est un arbre.
Nous avons dans cette thèse mené à la fois des études expérimentales et des études théoriques. Une préoccupation transversale est la simulation formelle entre modèles. L'enjeu de ces travaux est, à terme, de savoir comment obtenir des systèmes ayant un comportement global prédéfini, ou bien comment rendre robuste à certaines perturbations un système complexe donné.
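To make α-asynchronous updating concrete, the NumPy sketch below applies the two-state minority rule on a torus, updating each cell independently with probability `alpha` at every step; the grid size, the von Neumann neighbourhood and the periodic boundary are illustrative assumptions, not the exact settings studied in the thesis.

```python
import numpy as np

def async_minority_step(grid, alpha, rng):
    """One alpha-asynchronous step of the 2-state minority rule on a torus.
    Each cell is updated with probability alpha; an updated cell takes the
    minority state of its von Neumann neighbourhood (itself plus 4 neighbours)."""
    neigh_sum = (
        grid
        + np.roll(grid, 1, axis=0) + np.roll(grid, -1, axis=0)
        + np.roll(grid, 1, axis=1) + np.roll(grid, -1, axis=1)
    )
    minority = (neigh_sum < 3).astype(int)      # 1 is the minority iff fewer than 3 ones
    update = rng.random(grid.shape) < alpha     # cells selected for update this step
    return np.where(update, minority, grid)

rng = np.random.default_rng(0)
grid = rng.integers(0, 2, size=(64, 64))
for _ in range(200):
    grid = async_minority_step(grid, alpha=0.5, rng=rng)
print(grid.mean())   # density of state 1 after 200 asynchronous steps
```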
APA, Harvard, Vancouver, ISO, and other styles
44

Tröger, Ralph. "Supply Chain Event Management – Bedarf, Systemarchitektur und Nutzen aus Perspektive fokaler Unternehmen der Modeindustrie." Doctoral thesis, Universitätsbibliothek Leipzig, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-155014.

Full text
Abstract:
Supply Chain Event Management (SCEM) is a subdiscipline of supply chain management and gives companies a starting point for optimising logistics performance and costs by reacting early to critical exception events in the value chain. Owing to conditions such as global logistics structures, a large variety of articles and volatile business relationships, the fashion industry is among the sectors that are particularly vulnerable to critical disruption events. In this spirit, after covering the essential foundations, this dissertation first examines to what extent there actually is a need for SCEM systems in the fashion industry. Building on this, and after presenting existing SCEM architecture concepts, it shows design options for a system architecture based on the design principles of service orientation. In this context, SCEM-relevant business services are identified, among other things. The advantages of a service-oriented design are illustrated in detail using the EPCIS (EPC Information Services) specification. The work is rounded off by a consideration of the benefit potential of SCEM systems. After presenting approaches suitable for determining this benefit, the benefit is demonstrated with a practical example and, together with the results of a literature review, consolidated into an overview of SCEM benefit effects. It is also examined which additional advantages a service-oriented architecture design offers companies. The conclusion summarises the main findings of the work and gives an outlook on both the relevance of the results for mastering future challenges and the starting points they provide for subsequent research.
APA, Harvard, Vancouver, ISO, and other styles
45

Chang, Hsin-Yun, and 張馨云. "A Study on the Discrete Data in Data Envelopment Analysis." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/jn7263.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Kuo, Han-Wei, and 郭漢偉. "The Analysis of Grey GM(1,1) Model for Discrete Data." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/61409882204374352773.

Full text
Abstract:
Master's thesis
清雲科技大學 (Ching Yun University of Science and Technology)
Graduate Institute of Mechanical Engineering
Academic year 93 (ROC calendar)
Using series generated from similar random distributions and from function values, it is found that the prediction accuracy of the three-alpha GM(1,1) four-point rolling models is better than that of the traditional grey-system GM(1,1) model, but worse for function-value prediction. Because of repeated accumulated generation, the exponential law becomes blurred ("grey"), so that neither model is accurate enough to predict the crossing point of function values. Only the traditional GM(1,1) model is effective in foreseeing the crossing point of the 20-week and 120-week moving-average lines. The trend of a moving-average line can be judged by Granville's eight rules; knowing the tendency of the moving-average line in advance helps to forecast the trend.
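For reference, the traditional GM(1,1) procedure can be sketched as follows: the series is accumulated once (1-AGO), the development coefficient a and grey input b are fitted by least squares on the greyed differential equation, and forecasts are obtained from the exponential response and differenced back. This is the textbook construction only; it does not reproduce the rolling three-alpha variants compared in the thesis.

```python
import numpy as np

def gm11_forecast(x0, n_ahead=1):
    """Textbook GM(1,1): fit on the 1-AGO series and forecast n_ahead steps."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                  # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])                       # background (mean) values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]    # fits x0(k) = -a*z1(k) + b
    k = np.arange(len(x0) + n_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a   # response of the whitened equation
    x0_hat = np.diff(x1_hat, prepend=0.0)               # inverse accumulation
    return x0_hat[len(x0):]                             # the n_ahead forecasts

print(gm11_forecast([2.87, 3.28, 3.34, 3.62, 3.87], n_ahead=2))
```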
APA, Harvard, Vancouver, ISO, and other styles
47

Huang, Yi-Jing, and 黃翊倞. "Applications of discrete-time survival analysis to biological control data." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/21541765207424426357.

Full text
Abstract:
Master's thesis
國立臺灣大學 (National Taiwan University)
Graduate Institute of Agronomy
Academic year 97 (ROC calendar)
This thesis uses the discrete-time survival model to analyze biological control data. The data were provided by the Biological Control Laboratory, Department of Plant Protection, National Pingtung University of Science and Technology. In the experiment, three kinds of Rhizoctonia solani Kühn (R1, R2 and R3) and seven Trichoderma spp. (T1-T7) were used. The main purpose is to determine which Trichoderma spp. best restrains the propagation of Rhizoctonia solani Kühn. When event times are highly discrete, so that ties are common, the discrete-time survival model is a suitable way to analyze the data. By fitting the models and plotting the hazard and survival functions, we can assess how well the different Trichoderma spp. restrain the propagation of Rhizoctonia solani Kühn. Moreover, this thesis discusses alternative specifications of time and the effect of using another link function, and checks the assumptions of the model. Finally, the results are compared with those obtained by Chan (2003) and Lin (2007).
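In practice, a discrete-time survival model with a logit link is fitted as a binary regression on person-period data: each experimental unit contributes one row per period it remains at risk, with outcome 1 in the period where the event occurs. The sketch below uses statsmodels on toy data; the numbers, variable names and the four-period horizon are illustrative assumptions, not the thesis data.

```python
import numpy as np
import statsmodels.api as sm

def person_period(times, events, n_periods):
    """Expand (event time, event indicator) pairs into person-period rows.
    Returns the binary outcome y and a design matrix with one dummy per period."""
    rows, y = [], []
    for t, d in zip(times, events):
        for j in range(1, t + 1):                 # periods the unit is at risk
            rows.append([1.0 if j == k else 0.0 for k in range(1, n_periods + 1)])
            y.append(1.0 if (j == t and d == 1) else 0.0)
    return np.array(y), np.array(rows)

# Toy data: observation (event or censoring) times and event indicators for 8 units.
times  = [2, 3, 3, 4, 1, 4, 2, 4]
events = [1, 1, 0, 1, 1, 0, 1, 1]
y, X = person_period(times, events, n_periods=4)

fit = sm.Logit(y, X).fit(disp=0)                  # logit hazard model, no intercept:
print(fit.params)                                 # one baseline log-odds per period
hazard = 1 / (1 + np.exp(-fit.params))            # discrete-time hazard per period
survival = np.cumprod(1 - hazard)                 # S(t) = prod_{j<=t} (1 - h_j)
print(hazard, survival)
```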
APA, Harvard, Vancouver, ISO, and other styles
48

Saei, Ayoub. "Random component threshold models for ordered and discrete response data." Phd thesis, 1996. http://hdl.handle.net/1885/144100.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Taylor, Christopher M. "Algorithmic analysis of human DNA replication timing from discrete microarray data." 2008. http://wwwlib.umi.com/dissertations/fullcit/3312155.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Braslavsky, Julio Hernán. "Frequency domain analysis of sampled-data control systems." 1995. http://www.newcastle.edu.au/services/library/adt/public/adt-NNCU20021218.164930/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles