Dissertations / Theses on the topic 'Nonlinear regression analysis'

To see the other types of publications on this topic, follow the link: Nonlinear regression analysis.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations / theses for your research on the topic 'Nonlinear regression analysis.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Lopresti, Mattia. "Non-destructive X-ray based characterization of materials assisted by multivariate methods of data analysis: from theory to application." Doctoral thesis, Università del Piemonte Orientale, 2022. http://hdl.handle.net/11579/143020.

Full text
Abstract:
X-ray based non-destructive techniques are an increasingly important tool in many fields, ranging from industry to fine arts and from medicine to basic research. Over the last century, the study of the physical phenomena underlying the interaction between X-rays and matter has led to the development of many different techniques suitable for morphological, textural, elemental, and compositional analysis. Furthermore, with the development of hardware technology and its automation through IT advances, enormous progress has also been made in data collection, and nowadays it is possible to carry out measurement campaigns that collect many gigabytes of data in a few hours. These already huge data sets grow further when samples are analyzed with a multi-technique approach and/or under in situ conditions, with time, space, temperature, and concentration becoming additional variables. In the present work, new data collection and analysis methods are presented along with applicative studies in which innovative materials have been developed and characterized. These materials are currently of high applicative interest and include composites for radiation protection, ultralight magnesium alloys, and eutectic mixtures. The new approaches were developed both from an instrumental viewpoint and with regard to the analysis of the resulting data, for which the use and development of multivariate methods was central. In this context, extensive use was made of principal component analysis and experimental design methods. One prominent topic of the study was the development of methods for the in situ analysis of samples evolving in response to different types of gradients.
Indeed, while carrying out analyses under variable conditions is now consolidated practice at large facilities such as synchrotrons, on the laboratory scale this type of experiment is still relatively young, and the methods for analyzing data sets from evolving systems have large prospects for development, especially when integrated with multivariate methods.
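Where the abstract mentions principal component analysis applied to evolving in situ data sets, the idea can be sketched in a few lines. This is purely illustrative: the simulated two-phase patterns, peak positions, and noise level are invented, not data from the thesis.

```python
# Toy PCA of an in situ experiment: a stack of spectra in which phase A
# gradually converts to phase B; the first principal component should
# capture the conversion trajectory.
import numpy as np

rng = np.random.default_rng(8)
channels = np.linspace(0.0, 1.0, 300)
phase_a = np.exp(-((channels - 0.3) ** 2) / 0.002)   # synthetic pattern of phase A
phase_b = np.exp(-((channels - 0.7) ** 2) / 0.002)   # synthetic pattern of phase B

frac = np.linspace(0.0, 1.0, 50)                     # A converts to B over "time"
X = np.outer(1.0 - frac, phase_a) + np.outer(frac, phase_b)
X += rng.normal(0.0, 0.01, X.shape)                  # measurement noise

# PCA via SVD of the mean-centered data matrix.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * s                                       # sample coordinates on the PCs
explained = s ** 2 / np.sum(s ** 2)                  # explained-variance ratios
```

Since the simulated evolution is one-dimensional, nearly all variance falls on the first component, whose score tracks the conversion fraction.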
APA, Harvard, Vancouver, ISO, and other styles
2

NARBAEV, TIMUR. "Forecasting cost at completion with growth models and Earned Value Management." Doctoral thesis, Politecnico di Torino, 2012. http://hdl.handle.net/11583/2506248.

Full text
Abstract:
Reliable forecasting of the final cost at completion is a vital component of project monitoring. Accuracy and stability in the forecast for an ongoing project are critical criteria for ensuring the project's on-budget and timely completion. The purpose of this dissertation is to develop a new Cost Estimate at Completion (CEAC) methodology to assist project managers in forecasting the final cost at completion of ongoing projects. This forecasting methodology interpolates intrinsic characteristics of an S-shaped growth model and combines Earned Schedule (ES) concepts into its equation to provide more accurate and stable cost estimates. Widely used conventional index-based methods for CEAC have inherent limitations, such as reliance on past performance only, unreliable forecasts in the early stages of a project's life, and the absence of forecasting statistics. To achieve its purpose, the dissertation carried out five tasks. First, it developed the method's equation based on the integration of four candidate S-shaped models with the earned schedule concepts. Second, the models' equations were tested on past projects to assess their applicability, and the accuracy of the resulting CEACs was compared with those found by the Cost Performance Index (CPI)-based formula. The third task compared CEACs found by the statistically valid and most accurate Gompertz model (GM)-based equation against those computed with the CPI-based method at each time point of the projects' life. Then, a stability test was performed to determine whether the method whose corresponding performance index achieves earlier stability provides the more accurate CEAC. Finally, an analysis was conducted to determine whether a correlation exists between schedule progress and CEAC accuracy.
Based on the research results, it was determined that the GM-based method is the only valid model for cost estimation in all three stages and that it provides more accurate estimates than the CPI-based formula does. Further comparative analysis showed that, of the two methods (GM-based and CPI-based), the one whose performance index achieved earlier stability provided more accurate CEACs; finally, the new methodology takes the schedule impact into account as a factor of cost performance in forecasting the CEAC. The developed methodology enhances the forecasting capabilities of existing Earned Value Management methods by refining the traditional index-based approach through nonlinear regression analysis. The main novelty of the research is its cost-schedule integrated approach, which interpolates characteristics of a sigmoidal growth model with the ES technique to calculate a project's CEAC. Two major contributions are made to project management. First, the dissertation extends the body of knowledge by introducing a methodology that combines into one statistical technique two methods that, so far, have been treated as separate streams of project management research. Second, this technique advances project management practice as a practical cost-schedule integrated approach that takes schedule progress (advance/delay) into account as a factor of cost behavior in the calculation of CEAC.
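The growth-model side of the methodology can be illustrated with a small sketch: fit a Gompertz curve to early cumulative-cost data and read the asymptote as the cost estimate at completion. This is a generic illustration, not Narbaev's exact CEAC equation; the model form, parameter names, and synthetic data are assumptions.

```python
# Fitting a Gompertz growth curve to partial cumulative-cost data by
# nonlinear least squares, then extrapolating a cost estimate at completion.
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, a, b, c):
    # a is the asymptote (final cost); b and c shape the S-curve.
    return a * np.exp(-b * np.exp(-c * t))

# Synthetic "actual cost" observations over the first 60% of the project life.
t_obs = np.linspace(0.05, 0.6, 12)
rng = np.random.default_rng(0)
cost_obs = gompertz(t_obs, 1000.0, 5.0, 6.0) + rng.normal(0.0, 5.0, t_obs.size)

# Levenberg-Marquardt fit from a rough initial guess.
popt, _ = curve_fit(gompertz, t_obs, cost_obs, p0=[800.0, 4.0, 5.0], maxfev=10000)
ceac = popt[0]  # fitted asymptote = cost estimate at completion
```

In this synthetic case the true final cost is 1000, and the extrapolated asymptote lands close to it even though only the first 60% of the cost curve was observed.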
APA, Harvard, Vancouver, ISO, and other styles
3

Sulieman, Hana. "Parametric sensitivity analysis in nonlinear regression." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape15/PQDD_0004/NQ27858.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Carvalho, Renato de Souza. "Nonlinear regression application to well test analysis /." Access abstract and link to full text, 1993. http://0-wwwlib.umi.com.library.utulsa.edu/dissertations/fullcit/9416602.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Neugebauer, Shawn Patrick. "Robust Analysis of M-Estimators of Nonlinear Models." Thesis, Virginia Tech, 1996. http://hdl.handle.net/10919/36557.

Full text
Abstract:
Estimation of nonlinear models finds applications in every field of engineering and the sciences. Much work has been done to build solid statistical theories for its use and interpretation. However, there has been little analysis of the tolerance of nonlinear model estimators to deviations from assumptions and normality. We focus on analyzing the robustness properties of M-estimators of nonlinear models by studying the effects of deviations from assumptions and normality on these estimators. We discuss St. Laurent and Cook's Jacobian Leverage and identify the relationship of the technique to the robustness concept of influence. We derive influence functions for M-estimators of nonlinear models and show that influence of position becomes, more generally, influence of model. The result shows that, for M-estimators, we must bound not only influence of residual but also influence of model. Several examples highlight the unique problems of nonlinear model estimation and demonstrate the utility of the influence function.
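The practical upshot of bounding influence can be sketched with a robust nonlinear fit: swapping the squared loss for Huber's rho caps the pull of residual outliers. The exponential-decay model, parameter values, and injected outlier below are illustrative, not taken from the thesis.

```python
# Comparing ordinary least squares with a Huber M-estimator for a
# nonlinear model in the presence of one gross outlier.
import numpy as np
from scipy.optimize import least_squares

def model(theta, x):
    return theta[0] * np.exp(-theta[1] * x)

x = np.linspace(0.0, 4.0, 40)
rng = np.random.default_rng(1)
y = model([2.0, 0.7], x) + rng.normal(0.0, 0.05, x.size)
y[5] += 3.0  # a gross outlier (60 noise standard deviations)

def residuals(theta):
    return model(theta, x) - y

ls = least_squares(residuals, x0=[1.0, 1.0])                            # squared loss
m = least_squares(residuals, x0=[1.0, 1.0], loss="huber", f_scale=0.1)  # Huber M-estimate
```

The Huber fit stays near the true parameters (2.0, 0.7), while the squared-loss fit is visibly dragged by the single contaminated point, which is exactly the influence-of-residual effect the abstract discusses.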
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
6

Galarza, Morales Christian Eduardo 1988. "Quantile regression for mixed-effects models = Regressão quantílica para modelos de efeitos mistos." [s.n.], 2015. http://repositorio.unicamp.br/jspui/handle/REPOSIP/306681.

Full text
Abstract:
Advisor: Víctor Hugo Lachos Dávila
Dissertation (master's) - Universidade Estadual de Campinas, Instituto de Matemática, Estatística e Computação Científica
Abstract: Longitudinal data are frequently analyzed using normal mixed effects models. Moreover, the traditional estimation methods are based on mean regression, which leads to non-robust parameter estimation for non-normal error distributions. Compared to the conventional mean regression approach, quantile regression (QR) can characterize the entire conditional distribution of the outcome variable and is more robust to the presence of outliers and misspecification of the error distribution. This thesis develops a likelihood-based approach to analyzing QR models for correlated continuous longitudinal data via the asymmetric Laplace distribution (ALD). Exploiting the nice hierarchical representation of the ALD, our classical approach follows the stochastic Approximation of the EM (SAEM) algorithm for deriving exact maximum likelihood (ML) estimates of the fixed-effects and variance components in linear and nonlinear mixed effects models. We evaluate the finite sample performance of the algorithm and the asymptotic properties of the ML estimates through empirical experiments and applications to four real life datasets. The proposed SAEMs algorithms are implemented in the R packages qrLMM() and qrNLMM() respectively
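As a minimal illustration of the mean-versus-quantile contrast discussed above, one can fit a single conditional quantile by minimizing the check (pinball) loss directly; this toy is unrelated to the SAEM machinery or the qrLMM/qrNLMM packages, and all data are simulated.

```python
# Median (tau = 0.5) regression for a simple linear model by direct
# minimization of the check (pinball) loss.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
x = rng.uniform(0.0, 10.0, 500)
y = 1.0 + 2.0 * x + rng.normal(0.0, 1.0, 500)

def pinball(beta, tau):
    # Check loss: tau * r for positive residuals, (tau - 1) * r for negative.
    r = y - (beta[0] + beta[1] * x)
    return np.mean(np.where(r >= 0, tau * r, (tau - 1.0) * r))

fit_median = minimize(pinball, x0=[0.0, 1.0], args=(0.5,), method="Nelder-Mead")
intercept, slope = fit_median.x
```

Varying `tau` (e.g. 0.1 or 0.9) traces out other conditional quantiles, which is the sense in which QR characterizes the whole conditional distribution rather than only its mean.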
Master's
Statistics
Master in Statistics
APA, Harvard, Vancouver, ISO, and other styles
7

Cui, Chenhao. "Nonlinear multiple regression methods for spectroscopic analysis : application to NIR calibration." Thesis, University College London (University of London), 2018. http://discovery.ucl.ac.uk/10058694/.

Full text
Abstract:
Chemometrics has been applied to analyse near-infrared (NIR) spectra for decades. Linear regression methods such as partial least squares (PLS) regression and principal component regression (PCR) are simple and widely used solutions for spectroscopic calibration. My dissertation connects spectroscopic calibration with nonlinear machine learning techniques. It explores the feasibility of applying nonlinear methods for NIR calibration. Investigated nonlinear regression methods include least squares support vector machine (LS-SVM), Gaussian process regression (GPR), Bayesian hierarchical mixture of linear regressions (HMLR) and convolutional neural networks (CNN). Our study focuses on the discussion of various design choices, interpretation of nonlinear models and providing novel recommendations and insights for the construction of nonlinear regression models for NIR data. Performances of the investigated nonlinear methods were benchmarked against traditional methods on multiple real-world NIR datasets. The datasets have different sizes (varying from 400 samples to 7000 samples) and are from various sources. Hypothesis tests on separate, independent test sets indicated that nonlinear methods give significant improvements in most practical NIR calibrations.
APA, Harvard, Vancouver, ISO, and other styles
8

Fernández-Val, Iván. "Three essays on nonlinear panel data models and quantile regression analysis." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/32408.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Economics, 2005.
Includes bibliographical references.
This dissertation is a collection of three independent essays in theoretical and applied econometrics, organized in the form of three chapters. In the first two chapters, I investigate the properties of parametric and semiparametric fixed effects estimators for nonlinear panel data models. The first chapter focuses on fixed effects maximum likelihood estimators for binary choice models, such as probit, logit, and linear probability model. These models are widely used in economics to analyze decisions such as labor force participation, union membership, migration, purchase of durable goods, marital status, or fertility. The second chapter looks at generalized method of moments estimation in panel data models with individual-specific parameters. An important example of these models is a random coefficients linear model with endogenous regressors. The third chapter (co-authored with Joshua Angrist and Victor Chernozhukov) studies the interpretation of quantile regression estimators when the linear model for the underlying conditional quantile function is possibly misspecified.
by Iván Fernández-Val.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
9

Hyung, Namwon. "Essays on panel and nonlinear time series analysis /." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 1999. http://wwwlib.umi.com/cr/ucsd/fullcit?p9958858.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Arai, Yoichi. "Nonlinear nonstationary time series analysis and its application /." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2004. http://wwwlib.umi.com/cr/ucsd/fullcit?p3144311.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Zhang, Ying. "Symbolic Regression of Thermo-Physical Model Using Genetic Programming." Scholar Commons, 2004. https://scholarcommons.usf.edu/etd/1316.

Full text
Abstract:
The symbolic regression problem is to find a function, in symbolic form, that fits a given data set. Symbolic regression provides a means for function identification. This research describes an adaptive hybrid system for symbolic function identification of a thermo-physical model that combines genetic programming with a modified Marquardt nonlinear regression algorithm. A genetic programming (GP) system can extract knowledge from the data in the form of symbolic expressions, i.e., tree structures, which are used to model and derive equations of state, mixing rules, and phase behavior from the experimental data (property estimation). During the automatic evolution process of GP, the functional structure of a generated individual can become highly complicated. To ensure the convergence of the regression, a modified Marquardt regression algorithm is used. Two stopping criteria are attached to the traditional Marquardt algorithm to make the algorithm repeat the regression process before it stops. Statistical analysis is applied to the fitted model. A residual plot is used to test the goodness of fit. The χ2-test is used to test the model's adequacy. Ten experiments are run with different forms of input variables, numbers of data points, standard errors added to the data set, and fitness functions. The results show that the system is able to find models and optimize their parameters successfully.
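The parameter-refinement stage can be sketched in isolation: given a symbolic form (here a hand-picked stand-in for a GP-proposed expression), a Levenberg-Marquardt solver tunes its numeric constants. The candidate function and data are illustrative; the modified stopping criteria described in the abstract are not reproduced.

```python
# Levenberg-Marquardt refinement of the constants in a candidate symbolic
# expression, as would follow a GP structure-search step.
import numpy as np
from scipy.optimize import curve_fit

def candidate(x, a, b, c):
    # Stand-in for a GP-evolved expression tree: a * exp(b * x) + c.
    return a * np.exp(b * x) + c

x = np.linspace(0.0, 2.0, 30)
rng = np.random.default_rng(4)
y = candidate(x, 1.5, 0.8, 0.3) + rng.normal(0.0, 0.02, x.size)

# method="lm" selects the Levenberg-Marquardt algorithm explicitly.
popt, pcov = curve_fit(candidate, x, y, p0=[1.0, 1.0, 0.0], method="lm")
```

In the hybrid system described above, this refinement would be invoked on every candidate tree so that fitness reflects the best achievable constants, not the random ones GP happened to generate.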
APA, Harvard, Vancouver, ISO, and other styles
12

Kim, Hyun-Joo. "Model selection criteria based on Kullback information measures for Weibull, logistic, and nonlinear regression frameworks /." free to MU campus, to others for purchase, 2000. http://wwwlib.umi.com/cr/mo/fullcit?p9988677.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Sahin, Mehmet Altug. "Regional Flood Frequency Analysis For Ceyhan Basin." Master's thesis, METU, 2013. http://etd.lib.metu.edu.tr/upload/12615439/index.pdf.

Full text
Abstract:
Regional flood frequency techniques are commonly used to estimate flood quantiles when flood data are unavailable or the record length at an individual gauging station is insufficient for reliable analyses. These methods compensate for limited or unavailable data by pooling data from nearby gauged sites. This requires the delineation of hydrologically homogeneous regions in which the flood regime is sufficiently similar to allow the spatial transfer of information. Therefore, several Regional Flood Frequency Analysis (RFFA) methods are applied to the Ceyhan Basin. The Dalrymple (1960) method is applied as a common RFFA method used in Turkey. Multivariate statistical techniques, namely stepwise and nonlinear regression analysis, are also applied to flood statistics and basin characteristics for the gauging stations. Rainfall, perimeter, length of main river, circularity, relative relief, basin relief, Hmax, Hmin, Hmean and HΔ are the simple additional basin characteristics. Moreover, before the analysis, stations are clustered according to their basin characteristics using a combination of Ward's and k-means clustering techniques. At the end of the study, the results are compared in terms of root mean squared error, the Nash-Sutcliffe efficiency index, and the percentage difference of results. Using additional basin characteristics and applying multivariate statistical techniques yields more accurate results than the Dalrymple (1960) method in the Ceyhan Basin. Clustered region data give more accurate results than non-clustered region data: comparing Q100/Q2.33 reduced variate values, the whole region gives 3.53, cluster-2 gives 3.43, and cluster-3 gives 3.65, which shows that clustering has a positive effect on the results. Nonlinear regression analysis with three clusters gives the smallest errors (29.54 RMSE and 0.735 Nash-Sutcliffe index) compared to the other methods in the Ceyhan Basin.
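The evaluation step lends itself to a compact sketch: fit a nonlinear (power-law) regional model relating a flood quantile to a basin characteristic, then score it with RMSE and the Nash-Sutcliffe efficiency index. Every number below is synthetic, not Ceyhan Basin data, and the single-predictor model is a stand-in for the multivariate regressions of the thesis.

```python
# Power-law regional regression Q = a * Area**b scored with RMSE and the
# Nash-Sutcliffe efficiency index.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(5)
area = rng.uniform(50.0, 2000.0, 30)                       # basin areas, km^2
q100 = 2.5 * area ** 0.75 * rng.lognormal(0.0, 0.1, 30)    # synthetic 100-yr floods

popt, _ = curve_fit(lambda A, a, b: a * A ** b, area, q100,
                    p0=[1.0, 0.8], maxfev=10000)
pred = popt[0] * area ** popt[1]

rmse = float(np.sqrt(np.mean((q100 - pred) ** 2)))
# Nash-Sutcliffe: 1 minus residual variance over variance about the mean.
nse = float(1.0 - np.sum((q100 - pred) ** 2) / np.sum((q100 - np.mean(q100)) ** 2))
```

An NSE of 1 is a perfect fit and 0 means the model is no better than predicting the mean, which is why the thesis reports it alongside RMSE.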
APA, Harvard, Vancouver, ISO, and other styles
14

Pitrun, Ivet 1959. "A smoothing spline approach to nonlinear inference for time series." Monash University, Dept. of Econometrics and Business Statistics, 2001. http://arrow.monash.edu.au/hdl/1959.1/8367.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Khartabil, Hussam. "Using nonlinear regression analysis and statistical experimental design principles to determine thermal resistance from overall heat exchanger measurements /." The Ohio State University, 1991. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487758680162453.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Lind, Ingela. "Regressor and Structure Selection : Uses of ANOVA in System Identification." Doctoral thesis, Linköping : Linköpings universitet, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-7000.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Cheng, Yueming. "Pressure transient testing and productivity analysis for horizontal wells." Texas A&M University, 2003. http://hdl.handle.net/1969.1/1187.

Full text
Abstract:
This work studied the productivity evaluation and well test analysis of horizontal wells. The major components of this work consist of a 3D coupled reservoir/wellbore model, a productivity evaluation, a deconvolution technique, and a nonlinear regression technique improving horizontal well test interpretation. A 3D coupled reservoir/wellbore model was developed using the boundary element method for realistic description of the performance behavior of horizontal wells. The model is able to flexibly handle multiple types of inner and outer boundary conditions, and can accurately simulate transient tests and long-term production of horizontal wells. Thus, it can serve as a powerful tool in productivity evaluation and analysis of well tests for horizontal wells. Uncertainty of productivity prediction was preliminarily explored. It was demonstrated that the productivity estimates can be distributed in a broad range because of the uncertainties of reservoir/well parameters. A new deconvolution method based on a fast-Fourier-transform algorithm is presented. This new technique can denoise "noisy" pressure and rate data, and can deconvolve pressure drawdown and buildup test data distorted by wellbore storage. For cases with no rate measurements, a "blind" deconvolution method was developed to restore the pressure response free of wellbore storage distortion, and to detect the afterflow/unloading rate function using Fourier analysis of the observed pressure data. This new deconvolution method can unveil the early time behavior of a reservoir system masked by variable-wellbore-storage distortion, and thus provides a powerful tool to improve pressure transient test interpretation. The applicability of the method is demonstrated with a variety of synthetic and actual field cases for both oil and gas wells. A practical nonlinear regression technique for analysis of horizontal well testing is presented. 
This technique can provide accurate and reliable estimation of well-reservoir parameters if the downhole flow rate data are available. In the situation without flow rate measurement, reasonably reliable parameter estimation can be achieved by using the detected flow rate from blind deconvolution. It has the advantages of eliminating the need for estimation of the wellbore storage coefficient and providing reasonable estimates of effective wellbore length. This technique provides a practical tool for enhancement of horizontal well test interpretation, and its practical significance is illustrated by synthetic and actual field cases.
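The frequency-domain deconvolution idea can be miniaturized: if the measured pressure is the convolution of a rate signal with the reservoir's impulse response, dividing spectra (with a small regularizer) recovers that response. The exponentially decaying afterflow rate and toy response below are invented, and the noise-free setup sidesteps the denoising that the actual method performs.

```python
# FFT-based deconvolution of a convolved "pressure" record to recover the
# underlying unit-rate response.
import numpy as np

n = 256
t = np.arange(n)
g = np.exp(-t / 10.0)                 # "true" impulse response (toy reservoir)
q = np.exp(-t / 30.0)                 # afterflow rate, decaying after shut-in
p = np.convolve(q, g)                 # distorted pressure record (noise-free toy)

eps = 1e-6                            # regularizer against near-zero spectral bins
Q = np.fft.rfft(q, 2 * n)             # zero-pad to avoid circular wraparound
P = np.fft.rfft(p, 2 * n)
G = P * np.conj(Q) / (np.abs(Q) ** 2 + eps)   # regularized spectral division
g_rec = np.fft.irfft(G, 2 * n)[:n]    # recovered impulse response
```

With noisy field data the regularizer (or a Wiener-style noise model) does real work; here it only guards against division by tiny spectral values, so the recovery is essentially exact.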
APA, Harvard, Vancouver, ISO, and other styles
18

Yerlikaya, Fatma. "A New Contribution To Nonlinear Robust Regression And Classification With Mars And Its Applications To Data Mining For Quality Control In Manufacturing." Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/3/12610037/index.pdf.

Full text
Abstract:
Multivariate adaptive regression splines (MARS) denotes a modern methodology from statistical learning which is very important in both classification and regression, with an increasing number of applications in many areas of science, economy and technology. MARS is very useful for high-dimensional problems and shows great promise for fitting nonlinear multivariate functions. The MARS technique does not impose any particular class of relationship between the predictor variables and the outcome variable of interest. In other words, a special advantage of MARS lies in its ability to estimate the contributions of the basis functions so that both the additive and interaction effects of the predictors are allowed to determine the response variable. The function fitted by MARS is continuous, whereas the one fitted by classical classification methods (CART) is not. Herewith, MARS becomes an alternative to CART. The MARS algorithm for estimating the model function consists of two complementary algorithms: the forward and backward stepwise algorithms. In the first step, the model is built by adding basis functions until a maximum level of complexity is reached. Then the backward stepwise algorithm begins by removing the least significant basis functions from the model. In this study, we propose not to use the backward stepwise algorithm. Instead, we construct a penalized residual sum of squares (PRSS) for MARS as a Tikhonov regularization problem, which is also known as ridge regression. We treat this problem using continuous optimization techniques, which we consider an important complementary technology and alternative to the concept of the backward stepwise algorithm. In particular, we apply the elegant framework of conic quadratic programming, an area of convex optimization that is very well structured, herewith resembling linear programming and hence permitting the use of interior point methods.
The boundaries of this optimization problem are determined by a multiobjective optimization approach which provides many alternative solutions. Based on these theoretical and algorithmic studies, this MSc thesis also contains applications to data investigated in a TÜBİTAK project on quality control. In these applications, MARS and our new method are compared.
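The flavor of the proposal, hinge basis functions with a Tikhonov (ridge) penalty in place of backward pruning, can be sketched directly; note that this skips the forward knot-selection pass and the conic quadratic programming formulation. The knot grid, penalty weight, and data are illustrative choices.

```python
# MARS-style hinge basis expansion fitted with a Tikhonov (ridge) penalty
# instead of backward-stepwise pruning.
import numpy as np

rng = np.random.default_rng(6)
x = rng.uniform(0.0, 10.0, 200)
# Piecewise-linear truth with a kink at x = 4, plus noise.
y = np.where(x < 4.0, 1.0, 1.0 + 0.9 * (x - 4.0)) + rng.normal(0.0, 0.1, 200)

knots = np.linspace(1.0, 9.0, 9)
# Design matrix: intercept plus hinge pairs max(0, x - k) and max(0, k - x).
B = np.column_stack([np.ones_like(x)]
                    + [np.maximum(0.0, x - k) for k in knots]
                    + [np.maximum(0.0, k - x) for k in knots])

lam = 1.0  # Tikhonov regularization weight (illustrative)
coef = np.linalg.solve(B.T @ B + lam * np.eye(B.shape[1]), B.T @ y)
pred = B @ coef
rmse = float(np.sqrt(np.mean((y - pred) ** 2)))
```

The penalty shrinks redundant hinge coefficients continuously instead of deleting basis functions discretely, which is the substitution for the backward pass that the abstract argues for.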
APA, Harvard, Vancouver, ISO, and other styles
19

Allgaier, Nicholas. "Reverse Engineering the Human Brain: An Evolutionary Computation Approach to the Analysis of fMRI." ScholarWorks @ UVM, 2015. http://scholarworks.uvm.edu/graddis/383.

Full text
Abstract:
The field of neuroimaging has truly become data rich, and as such, novel analytical methods capable of gleaning meaningful information from large stores of imaging data are in high demand. Those methods that might also be applicable on the level of individual subjects, and thus potentially useful clinically, are of special interest. In this dissertation we introduce just such a method, called nonlinear functional mapping (NFM), and demonstrate its application in the analysis of resting state fMRI (functional Magnetic Resonance Imaging) from a 242-subject subset of the IMAGEN project, a European study of risk-taking behavior in adolescents that includes longitudinal phenotypic, behavioral, genetic, and neuroimaging data. Functional mapping employs a computational technique inspired by biological evolution to discover and mathematically characterize interactions among ROI (regions of interest), without making linear or univariate assumptions. Statistics of the resulting interaction relationships comport with recent independent work, constituting a preliminary cross-validation. Furthermore, nonlinear terms are ubiquitous in the models generated by NFM, suggesting that some of the interactions characterized here are not discoverable by standard linear methods of analysis. One such nonlinear interaction is discussed in the context of a direct comparison with a procedure involving pairwise correlation, designed to be an analogous linear version of functional mapping. Another such interaction suggests a novel distinction in brain function between drinking and non-drinking adolescents: a tighter coupling of ROI associated with emotion, reward, and interoceptive processes such as thirst, among drinkers. Finally, we outline many improvements and extensions of the methodology to reduce computational expense, complement other analytical tools like graph-theoretic analysis, and possibly allow for voxel level functional mapping to eliminate the necessity of ROI selection.
APA, Harvard, Vancouver, ISO, and other styles
20

Sando, Simon Andrew. "Estimation of a class of nonlinear time series models." Thesis, Queensland University of Technology, 2004. https://eprints.qut.edu.au/15985/1/Simon_Sando_Thesis.pdf.

Full text
Abstract:
The estimation and analysis of signals that have polynomial phase and constant or time-varying amplitude in additive noise is considered in this dissertation. Much work has been undertaken on this problem over the last decade or so, and a number of estimation schemes are available. The fundamental problem when trying to estimate the parameters of these types of signals is the nonlinear characteristics of the signal, which lead to computational difficulties when applying standard techniques such as maximum likelihood and least squares. When considering only the phase data, we also encounter the well-known problem of the unobservability of the true noise-free phase curve. The methods that are currently most popular involve differencing in phase followed by regression, or nonlinear transformations. Although these methods perform quite well at high signal-to-noise ratios, their performance worsens at low signal-to-noise ratios, and there may be significant bias. One of the biggest obstacles to efficient estimation of these models is that the majority of methods rely on sequential estimation of the phase coefficients: the highest-order parameter is estimated first, its contribution is removed via demodulation, and the same procedure is applied to the estimation of the next parameter, and so on. This is clearly an issue in that errors in the estimation of high-order parameters affect the ability to estimate the lower-order parameters correctly. As a result, statistical analysis of the parameters is also difficult. In this dissertation, we aim to circumvent the issues of bias and sequential estimation by considering full-parameter iterative refinement techniques, i.e., given a possibly biased initial estimate of the phase coefficients, we aim to create computationally efficient iterative refinement techniques that produce statistically efficient estimators at low signal-to-noise ratios.
Updating is done in a multivariable manner to remove the inaccuracies and biases due to sequential procedures. Statistical analysis and extensive simulations attest to the performance of the presented schemes, which include likelihood, least squares and Bayesian estimation schemes. Other results of importance to the full estimation problem, namely when there is error in the time variable, when the amplitude is not constant, and when the model order is not known, are also considered.
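The baseline this work improves upon can be sketched for the noiseless case: unwrap the phase of a constant-amplitude polynomial-phase signal and fit a polynomial to it. The coefficients below are arbitrary; with noise, this naive estimator degrades in exactly the way the abstract describes.

```python
# Recovering polynomial phase coefficients from a noiseless constant-amplitude
# signal via phase unwrapping and polynomial regression.
import numpy as np

t = np.linspace(0.0, 1.0, 400)
coefs = [2.0, 30.0, 15.0]                     # phase = 2 + 30 t + 15 t^2 (radians)
phase = coefs[0] + coefs[1] * t + coefs[2] * t ** 2
z = np.exp(1j * phase)                        # polynomial-phase signal

# np.angle wraps to (-pi, pi]; np.unwrap restores the continuous phase
# because successive phase increments here are well below pi.
est = np.polyfit(t, np.unwrap(np.angle(z)), 2)  # highest-order coefficient first
```

In the noiseless case the fit is essentially exact; the thesis's contribution concerns the low-SNR regime, where unwrapping fails and sequential estimate-demodulate schemes accumulate bias.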
APA, Harvard, Vancouver, ISO, and other styles
21

Sando, Simon Andrew. "Estimation of a class of nonlinear time series models." Queensland University of Technology, 2004. http://eprints.qut.edu.au/15985/.

Full text
Abstract:
The estimation and analysis of signals that have polynomial phase and constant or time-varying amplitude with additive noise is considered in this dissertation. Much work has been undertaken on this problem over the last decade or so, and a number of estimation schemes are available. The fundamental difficulty in estimating the parameters of this type of signal is the nonlinear characteristic of the signal, which leads to computational difficulties when applying standard techniques such as maximum likelihood and least squares. When considering only the phase data, we also encounter the well-known problem of the unobservability of the true, noise-free phase curve. The methods currently most popular involve differencing in phase followed by regression, or nonlinear transformations. Although these methods perform quite well at high signal-to-noise ratios, their performance worsens at low signal-to-noise ratios, and there may be significant bias. One of the biggest obstacles to efficient estimation of these models is that the majority of methods rely on sequential estimation of the phase coefficients: the highest-order parameter is estimated first, its contribution is removed via demodulation, and the same procedure is applied to the next parameter, and so on. This is clearly an issue in that errors in the estimation of the high-order parameters affect the ability to estimate the lower-order parameters correctly. As a result, statistical analysis of the parameters is also difficult. In this dissertation, we aim to circumvent the issues of bias and sequential estimation by considering full-parameter iterative refinement techniques, i.e., given a possibly biased initial estimate of the phase coefficients, we aim to create computationally efficient iterative refinement techniques that produce statistically efficient estimators at low signal-to-noise ratios. 
Updating is done in a multivariable manner to remove the inaccuracies and biases due to sequential procedures. Statistical analysis and extensive simulations attest to the performance of the schemes presented, which include likelihood, least-squares and Bayesian estimation schemes. Other results of importance to the full estimation problem (namely, when there is error in the time variable, when the amplitude is not constant, and when the model order is not known) are also considered.
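The high-SNR baseline the abstract critiques (recover the phase, then regress) can be sketched in a few lines; the coefficient values and noise level below are assumptions for illustration, not the refinement scheme developed in the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)
a0, a1, a2 = 1.0, 4.0, 10.0                      # assumed phase coefficients
phase = a0 + a1 * t + a2 * t**2
signal = np.exp(1j * phase) + 0.01 * (rng.normal(size=t.size)
                                      + 1j * rng.normal(size=t.size))

unwrapped = np.unwrap(np.angle(signal))          # undo 2*pi phase wrapping
est = np.polyfit(t, unwrapped, deg=2)[::-1]      # [a0_hat, a1_hat, a2_hat]
```

At low SNR the unwrapping step fails intermittently, which is one reason the thesis pursues iterative refinement instead.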
APA, Harvard, Vancouver, ISO, and other styles
22

Alegria, Elvis Omar Jara 1986. "Estimação On-Line de parâmetros dependentes do estado (State Dependent Parameter - SDP) em modelos de regressão não lineares." [s.n.], 2015. http://repositorio.unicamp.br/jspui/handle/REPOSIP/258834.

Full text
Abstract:
Advisor: Celso Pascoli Bottura
Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Resumo: This work concerns the real-time recursive identification of parameter-state dependencies in regression models of stochastic time series. Discovering these dependencies is useful for obtaining a new, more accurate model structure. Conventional recursive methods for estimating time-varying parameters do not achieve good results when the models have state-dependent parameters (SDP), because these exhibit highly nonlinear and even chaotic behaviour. Our proposal is based on Peter Young's work on SDPs in the off-line case. The method he proposes to reduce the entropy of the series in SDP models is discussed, and some data transformations are presented for this purpose. Changes to his off-line algorithm are proposed that make it faster, more efficient, and more tractable for an on-line implementation. Finally, three numerical examples are shown to validate our proposals and their application to parametric fault detection. All functions were implemented in MATLAB and form a toolbox for SDP identification in regression models
Abstract: This work is about the identification of the dependency between parameters and states in regression models of stochastic time series. The discovery of that dependency can be useful for obtaining a more accurate model structure. Conventional recursive algorithms for the estimation of time-varying parameters do not provide good results in models with state-dependent parameters (SDP), because these may have highly nonlinear and even chaotic behavior. This work is based on Peter Young's studies of off-line SDP estimation. Young's methods for data-entropy reduction are discussed, and some data transformations are proposed for this purpose. Changes to the off-line algorithm are then proposed in order to improve its speed, accuracy, and tractability, and to generate the on-line version. Finally, three numerical examples are shown to validate our proposal. All the functions were implemented in MATLAB and form a toolbox for SDP identification in regression models
Master's
Automation
Master in Electrical Engineering
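As a hedged illustration of the difficulty described above (not the thesis's algorithm): a conventional recursive least squares estimator with a forgetting factor, applied to a model whose parameter depends on the state, settles around an averaged parameter value and misses the state dependence entirely.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 2000
x = rng.uniform(-1.0, 1.0, n)              # regressor, also the "state"
a_true = 1.0 + 0.5 * np.sin(np.pi * x)     # state-dependent parameter a(x)
y = a_true * x + 0.05 * rng.normal(size=n)

lam, P, a_hat = 0.95, 1e3, 0.0             # forgetting factor, covariance, estimate
est = np.empty(n)
for k in range(n):
    phi = x[k]
    K = P * phi / (lam + phi * P * phi)    # RLS gain (scalar case)
    a_hat = a_hat + K * (y[k] - phi * a_hat)
    P = (P - K * phi * P) / lam
    est[k] = a_hat
```

The estimate hovers near the effective average parameter (1.0 here) while the true parameter swings between 0.5 and 1.5 with the state, which is the failure mode SDP methods are designed to fix.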
APA, Harvard, Vancouver, ISO, and other styles
23

Santos, Alessandra dos. "Regressão não linear no desdobramento da interação em experimentos com mais de um fator." Universidade de São Paulo, 2013. http://www.teses.usp.br/teses/disponiveis/11/11134/tde-19022013-153816/.

Full text
Abstract:
In experiments involving a quantitative and a qualitative factor, it is advisable, if a significant interaction effect between the factors is detected in the analysis of variance, to use regression analysis in the splitting of the interaction; however, linear regression models are not always the most adequate way to evaluate the effect of the quantitative factor. This work presents how to fit a nonlinear regression model in an experiment with measurements repeated over time. The experiment considered the weight gain, in kilograms, of male and female Santa Inês sheep at twelve different ages. Conducted as a split-plot design, since the time factor was not randomized, the analysis of variance requires a degrees-of-freedom correction, as the sphericity condition is not satisfied. The Geisser-Greenhouse (G-G) correction was used for the interaction and time effects. The F test in the analysis of variance was significant for the interaction between the factors and, in the splitting of the interaction, the fit of the Gompertz model was proposed to evaluate the effect of the time factor at each level of the sex factor, as well as a goodness-of-fit test for the model. After fitting the model to the sheep weight data, the study also compared the curve parameters of males and females. From the proposed analysis it was possible to conclude that the univariate model, with a split-plot scheme, can be used in animal-growth experiments, although its application is subject to verification of the sphericity condition. It was also verified that incorporating the fit of the Gompertz model into the splitting of interactions is a viable procedure and allowed the real quality of the model fit to the data to be assessed. 
By comparing the parameters of the fitted curves, it was verified that male and female sheep present statistically equal values for the parameters α and γ, both related to the animals' birth weight. The expected maximum weight for females (40.7 kg) is statistically lower than that found for males (57.3 kg); however, their growth rate (0.011 kg/day for females) is higher (0.007 kg/day for males), i.e., females reach their stabilization weight faster than males.
In experiments involving a qualitative and a quantitative factor, it is advisable that, if a significant interaction is detected between the factors in the analysis of variance, one should perform regression analysis in the splitting of the interaction. However, the use of linear regression models is not always the most appropriate way to assess the effect of the quantitative factor. This work presents a way to fit a nonlinear regression model in an experiment with repeated measurements over time. In the experiment, the weight gain of male and female Santa Inês breed sheep, in kilograms, at twelve different ages is measured. Conducted in a split-plot design, as the time factor was not randomized, the analysis of variance requires correction of the degrees of freedom, since the sphericity condition is not satisfied. The Greenhouse and Geisser (G-G) correction was used for the interaction and time effects. The F test in the analysis of variance showed a significant result for the interaction between the factors and the splitting of the interaction. In order to evaluate the effect of the time factor at each level of the gender factor, a Gompertz model was proposed, as well as a test of model adherence. After fitting the model to the data, a comparison study of the parameters for males and females was also made. For the proposed analysis, we concluded that the univariate model, with split-plot design, can be used in experiments of animal growth, but its application is subject to verification of the sphericity condition. It was also found that incorporating the fit of the Gompertz model into the splitting of interactions is a viable procedure and allowed the real quality of fit to be evaluated. By comparing the fitted parameter values, it was found that males and females have statistically identical values for the parameters α and γ, both related to the birth weight of the animals. 
The maximum weight expected for a female (40.7 kg) is statistically lower than that found for the males (57.3 kg); however, their growth rate (0.011 kg/day for females) is greater than the males' (0.007 kg/day), i.e., females reach weight stabilization faster than males.
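The kind of fit described above can be sketched with a standard nonlinear least-squares routine; the Gompertz parameterization and the synthetic weights below are assumptions for illustration, not the thesis's data:

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, alpha, beta, gamma):
    # alpha: asymptotic weight; gamma: growth-rate parameter
    return alpha * np.exp(-beta * np.exp(-gamma * t))

# Synthetic weights (kg) at 12 ages (days), loosely mimicking the setup
t = np.linspace(0.0, 400.0, 12)
rng = np.random.default_rng(1)
w = gompertz(t, 40.7, 3.0, 0.011) + rng.normal(0.0, 0.5, size=t.size)

popt, pcov = curve_fit(gompertz, t, w, p0=[40.0, 3.0, 0.01])
se = np.sqrt(np.diag(pcov))   # asymptotic standard errors, usable for comparing curves
```

Comparing male and female curves, as in the thesis, amounts to fitting each group separately and testing whether corresponding parameters differ by more than their standard errors warrant.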
APA, Harvard, Vancouver, ISO, and other styles
24

Smejkalová, Veronika. "Aproximace prostorově distribuovaných hierarchicky strukturovaných dat." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2018. http://www.nusl.cz/ntk/nusl-392841.

Full text
Abstract:
The forecast of waste production is important information for planning in waste management. The historical data often consist of short time series, so traditional prognostic approaches fail. A mathematical model for forecasting future waste production based on spatially distributed, hierarchically structured data is suggested in this thesis. The approach is based on the principles of regression analysis, with a final balancing step to ensure the consistency of aggregated data values. The selection of the regression function is part of the mathematical model, to obtain a high-quality description of the data trend. In addition, outlying values, which occur abundantly in the database, are removed. The emphasis is on the decomposition of the extensive model into subtasks, which leads to a simpler implementation. The output of this thesis is a tool tested within a case study on municipal waste production data in the Czech Republic.
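The regression-plus-balancing idea can be illustrated in miniature (all numbers below are invented, and the thesis's regression-function selection and outlier cleaning are omitted): fit a trend per region and per country, then rescale the regional forecasts so they respect the aggregated national forecast.

```python
import numpy as np

t = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])             # years centred on 2015
regional = np.array([[100, 104, 107, 111, 115],        # waste per region (kt)
                     [ 60,  61,  63,  66,  67],
                     [ 40,  41,  43,  44,  46]], float)
national = regional.sum(axis=0)

t_new = 3.0                                            # one year ahead
reg_fc = np.array([np.polyval(np.polyfit(t, r, 1), t_new) for r in regional])
nat_fc = np.polyval(np.polyfit(t, national, 2), t_new) # richer national model

# Final balance: proportional rescaling enforces the aggregate constraint
balanced = reg_fc * nat_fc / reg_fc.sum()
```

After balancing, the regional forecasts sum exactly to the national forecast, which is the compliance property the abstract refers to.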
APA, Harvard, Vancouver, ISO, and other styles
25

Altintas, Suleyman Serkan. "Attenuation Relationship For Peak Ground Velocity Based On Strong Ground Motion Data Recorded In Turkey." Master's thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/12608067/index.pdf.

Full text
Abstract:
Estimation of ground motion parameters is extremely important for engineers in making structures safer and more economical, so it is one of the main issues of earthquake engineering. Peak values of ground motions, obtained either from existing records or with the help of attenuation relationships, have been used as a useful parameter to estimate the effect of an earthquake at a specific location. The peak ground velocity (PGV) of a ground motion has been used extensively in recent years as a measure of intensity and as the primary input for energy-related analysis of structures. Consequently, PGV values are used to construct emergency response systems such as shake maps, or to determine the deformation demands of structures. Despite the importance of earthquakes for Turkey, there is a lack of suitable attenuation relationships for velocity developed specifically for the country. The aim of this study is to address this deficiency by developing an attenuation relationship for the peak ground velocities of the chosen database, based on the strong ground motion records of Turkey. The database is processed with established techniques, and a corrected database for the chosen ground motions is formed. Five different forms of equations used in previous studies are selected as models and, using nonlinear regression analysis, the best-fitting mathematical relation for attenuation is obtained. The result of this study can be used as an effective tool for seismic hazard assessment studies for Turkey. Moreover, as a by-product of this study, the corrected database of strong ground motion recordings of Turkey may prove to be a valuable source for future researchers.
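A minimal sketch of fitting one such attenuation form by nonlinear regression follows; the functional form, coefficients, and synthetic records are assumptions, not the thesis's five candidate models or its Turkish strong-motion database:

```python
import numpy as np
from scipy.optimize import curve_fit

def ln_pgv(X, b1, b2, b3):
    # Generic attenuation form: ln(PGV) = b1 + b2*M - b3*ln(R + 10)
    M, R = X
    return b1 + b2 * M - b3 * np.log(R + 10.0)

rng = np.random.default_rng(2)
M = rng.uniform(4.0, 7.5, 200)           # magnitudes
R = rng.uniform(1.0, 200.0, 200)         # source-to-site distances (km)
y = ln_pgv((M, R), -1.0, 1.2, 1.5) + rng.normal(0.0, 0.3, 200)

b, cov = curve_fit(ln_pgv, (M, R), y, p0=[0.0, 1.0, 1.0])
```

With real records, the residual standard deviation from such a fit is what feeds into probabilistic seismic hazard calculations.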
APA, Harvard, Vancouver, ISO, and other styles
26

Frazão, Italo Marcus da Mota. "Modelos com sobreviventes de longa duração paramétricos e semi-paramétricos aplicados a um ensaio clínico aleatorizado." Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/11/11134/tde-13032013-093628/.

Full text
Abstract:
Several models have been proposed in the literature with the aim of analyzing survival data in which the population under study is assumed to be a mixture of individuals susceptible (at risk) and not susceptible to a specific event of interest. Such models are usually called long-term survivor models or cure-rate models. In this work, several of these models (in the parametric and semi-parametric settings) were considered to analyze data from a randomized clinical trial conducted to compare three therapeutic strategies (surgery, angioplasty, and medical treatment) used in the treatment of patients with multivessel coronary disease. In all models, the logit and complementary log-log link functions were used to model the proportion of long-term survivors (non-susceptible individuals). For the survival function of the susceptible individuals, the Weibull and Cox models were used. Covariates were considered both in the proportion of long-term survivors and in the survival function of the susceptible individuals. In general, the models considered proved adequate for analyzing the data of the randomized clinical trial, indicating surgery as the most efficient therapeutic strategy. They also indicated that the covariates age, hypertension, and diabetes mellitus influence the occurrence of cardiac death, but not the time until its occurrence in susceptible patients.
Several models have been proposed in the literature with the aim of analyzing survival data when the population under study is assumed to be a mixture of individuals susceptible (at risk) and not susceptible to a specific event of interest. Such models are usually called long-term survivor models or cure-rate models. In this work, several of these models (under both parametric and semi-parametric approaches) were considered to analyze the data from a randomized clinical trial conducted in order to compare three therapeutic strategies (surgery, angioplasty and medical treatment) used in the treatment of patients with multivessel coronary artery disease. For all models, the logit and complementary log-log link functions were used to model the proportion of long-term survivors (non-susceptible individuals). With regard to the survival function of the susceptible individuals, the Weibull and Cox models were used. Covariates were considered both in the proportion of long-term survivors and in the survival function of the susceptible individuals. Overall, the models considered were suitable for analyzing the data from the randomized clinical trial, indicating surgery as the most effective therapeutic strategy. They also indicated that the covariates age, hypertension and diabetes mellitus influence the occurrence of cardiac death, but not the time to the occurrence of this death in susceptible patients.
APA, Harvard, Vancouver, ISO, and other styles
27

Malik, Mohammad Rafi. "Reduced-orderCombustion Models for Innovative Energy Conversion Technologies." Doctoral thesis, Universite Libre de Bruxelles, 2021. https://dipot.ulb.ac.be/dspace/bitstream/2013/318799/4/TOC.pdf.

Full text
Abstract:
The present research seeks to advance the understanding and application of Principal Component Analysis (PCA)-based combustion modelling for practical systems. This work is a consistent extension of the standard PC-transport model and integrates Gaussian Process Regression (GPR) in order to increase the accuracy and the potential for size reduction offered by PCA. This new model, labelled PC-GPR, is successively applied and validated in a priori and a posteriori studies. In the first part of this dissertation, the PC-GPR model is validated in an a priori study based on steady and unsteady perfectly stirred reactor (PSR) calculations. The model showed great accuracy in the predictions for methane and propane, using large kinetic mechanisms. In particular, for methane, the use of GPR made it possible to model the system accurately with only 2 principal components (PCs) instead of the 34 variables in the original GRI-3.0 kinetic mechanism. For propane, the model was applied to two different mechanisms, consisting of 50 species and 162 species respectively. The PC-GPR model achieved a very significant reduction, and the thermo-chemical state-space was accurately predicted using only 2 PCs for both mechanisms. The second part of this work is dedicated to the application of the PC-GPR model in the framework of non-premixed turbulent combustion in a fully three-dimensional Large Eddy Simulation (LES). To this end, an a posteriori validation is performed on the Sandia flames D, E and F. The PC-GPR model showed very good accuracy in the predictions of the three flames when compared with experimental data using only 2 PCs, instead of the 35 species originally present in the GRI-3.0 mechanism. Moreover, the PC-GPR model was also able to handle the extinction and re-ignition phenomena in flames E and F, thanks to the unsteady data in the training manifold. 
A comparison with the FPV model showed that the combination of the unsteady data set and the best controlling variables for the system, defined by PCA, provides an alternative to the use of steady flamelets parameterized by user-defined variables and combined with a PDF approach. The last part of this research focuses on the application of the PC-GPR model in a more challenging case, a lifted methane/air flame. Several key features of the model are investigated: the sensitivity to the training data set, the influence of the scaling methods, the issue of data sampling, and the potential of a subgrid-scale (SGS) closure. In particular, it is shown that the training data set must contain the effects of diffusion in order to accurately predict the different properties of the lifted flame. Moreover, the kernel density weighting method, used to address the non-homogeneous data density usually found in numerical data sets, improved the predictions of the PC-GPR model. Finally, the integration of a subgrid-scale closure into the PC-GPR model significantly improved the simulation results using a presumed-PDF closure. A qualitative comparison with the FPV model showed that the results provided by the PC-GPR model are overall very comparable to the FPV results, at a reduced numerical cost, as PC-GPR requires a 4D lookup table instead of the 5D table needed by FPV.
The twin challenges of energy and climate change highlight the need to develop new combustion technologies, since the most realistic projections show that the largest increase in energy supply over the coming decades will come from fossil fuels. This is therefore a strong motivation for research on energy efficiency and clean technologies. Among these, flameless combustion is a newly developed concept that achieves high thermal efficiency and fuel savings while keeping pollutant emissions at a very low level. The growing interest in this technology is also driven by its great fuel flexibility, which represents a valuable opportunity for low-calorific-value fuels, high-calorific-value industrial waste, and hydrogen-based fuels. Since this technology is rather recent, it is still poorly understood. The solutions of one industrial application are very difficult to transpose to others. To improve knowledge in the field of flameless combustion, fundamental studies of this new combustion process are needed to support its development. In particular, there are two major differences with respect to conventional flames: on the one hand, the turbulence levels encountered in flameless combustion are raised, owing to the recirculating gases, thereby reducing the mixing scales. On the other hand, the chemical scales are increased, owing to the dilution of the reactants. 
Consequently, the turbulent and chemical scales are of the same order of magnitude, which leads to very strong coupling. After a thorough review of the state of the art in flameless-combustion modelling, the core of the project is the development of a new approach for treating the turbulence/chemistry interaction in flameless systems in the context of Large Eddy Simulation (LES). This approach is based on Principal Component Analysis (PCA) to identify the leading chemical scales of the oxidation process. This procedure makes it possible to follow on the LES grid only a reduced number of non-conserved scalars, those controlling the evolution of the system. Nonlinear regression techniques are coupled with PCA to increase the accuracy and the reducibility of the model. After being validated against experimental data on simplified problems, the model is scaled up to handle larger applications relevant to flameless combustion. Experimental and numerical data are compared using appropriate validation metrics to assess the experimental and numerical uncertainties.
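The core PC-GPR idea (project the state onto a few principal components, then regress quantities of interest on those components with a Gaussian process) can be sketched on toy data; the synthetic rank-2 "states" and the use of scikit-learn are assumptions for illustration, not the solver used in the thesis:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Stand-in "state" data: 5 observed variables driven by 2 latent factors,
# mimicking a thermo-chemical state-space with low intrinsic dimensionality.
rng = np.random.default_rng(3)
z = rng.uniform(-1.0, 1.0, size=(300, 2))
A = rng.normal(size=(5, 2))                # mixing matrix
X = z @ A.T
target = np.exp(-z[:, 0] ** 2) * z[:, 1]   # a nonlinear quantity to regress

pca = PCA(n_components=2).fit(X)           # 2 PCs recover the latent plane
pcs = pca.transform(X)
gpr = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-6)
gpr.fit(pcs, target)
r2 = gpr.score(pcs, target)                # training-set fit quality
```

Because the nonlinear target is a smooth function of the retained PCs, the GPR maps the reduced coordinates back to it accurately, which is the mechanism PC-GPR exploits for full kinetic mechanisms.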
Doctorate in Engineering Sciences and Technology
APA, Harvard, Vancouver, ISO, and other styles
28

Kucukarslan, Sertac. "A Finite Element Study On The Effective Width Of Flanged Sections." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12612180/index.pdf.

Full text
Abstract:
Most reinforced concrete systems are monolithic. During construction, concrete is placed at once from the bottom of the deepest beam to the top of the slab. Therefore the slab serves as the top flange of the beams; such a beam is referred to as a T-beam. In a floor system made of T-beams, the compressive stress is a maximum over the web, dropping between the webs. The distribution of compressive stress on the flange depends on the relative dimensions of the cross-section, the span length, and the support and loading conditions. For simplification, the varying distribution of compressive stress can be replaced by an equivalent uniform distribution. This gives an effective flange width, which is smaller than the real flange width. Various codes recommend effective flange width formulas, but these formulas are expressed only in terms of span length or flange and web thicknesses and ignore the other important variables. In this thesis, three-dimensional finite element analysis has been carried out on continuous T-beams under different loading conditions to assess the effective flange width based on a displacement criterion. The formulation is based on a combination of the elementary bending theory and the finite element method, accommodating partial interaction in between. The beam spacing, beam span length, total depth of the beam, and the web and flange thicknesses are considered as independent variables. Depending on the type of loading, the numerical value of the moment of inertia of the transformed beam cross-section, and hence the effective flange width, is calculated. The input data and the finite element displacement results are then used in a nonlinear regression analysis, and two explicit design formulas for effective flange width are derived. Comparisons are made between the proposed formulas and the ACI, Eurocode, TS-500 and BS-8110 code recommendations.
APA, Harvard, Vancouver, ISO, and other styles
29

Tarek, Md Tawhid Bin. "Optimal High-Speed Design and Rotor Shape Modification of Multiphase Permanent Magnet Assisted Synchronous Reluctance Machines for Stress Reduction." University of Akron / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=akron1510617496931844.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Lindén, David. "Exploration of implicit weights in composite indicators : The case of resilience assessment of countries’ electricity supply." Thesis, KTH, Hållbar utveckling, miljövetenskap och teknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-239687.

Full text
Abstract:
Composite indicators, also called indices, are widely used synthetic measures for ranking and benchmarking alternatives across complex concepts. The aim of constructing a composite indicator is, among other things, to simplify and condense the information of a plurality of underlying indicators. However, to avoid misleading results, it is important to ensure that the construction is performed in a transparent and representative manner. To this end, this thesis aims to aid the construction of the Electricity Supply Resilience Index (ESRI) – a novel energy index, developed within the Future Resilient Systems (FRS) programme at the Singapore-ETH Centre (SEC) – by looking at a complementary and fundamental component of index construction, namely the weighting of the indicators during aggregation. Normally, weights are assigned to reflect the relative importance of each indicator, based on stakeholders' or decision-makers' preferences. Consequently, the weights are often perceived to be importance coefficients, independent of the dataset under analysis. However, it has recently been shown that the structure of the dataset and correlations between the indicators often have a decisive effect on each indicator's importance in the index. In fact, their importance rarely coincides with the assigned weights. This phenomenon is sometimes referred to as implicit weights. The aim of this thesis is to assess the implicit weights in the aggregation of ESRI. For this purpose, a six-step analytical framework, based on a novel variance-based sensitivity analysis approach, is presented and applied to ESRI. The resulting analysis shows that statistical dependencies between ESRI's underlying indicators have direct implications on the outcome values – the equal weights assigned a priori do not correspond to an equal influence from each indicator. 
Furthermore, when attempting to optimise the weights to balance the contribution of each indicator, it is found that this would require a highly unbalanced set of weights and come at the expense of representing the indicators in an effective manner. Thereby, it can be concluded that there are significant dependencies between the indicators and that their correlations need to be accounted for to achieve a balanced and representative index construction. Guided by these findings, this thesis provides three recommendations for improving the statistical representation and conceptual coherence of ESRI. These include: (1) avoid aggregating a negatively correlated indicator – keep it aside, (2) remove a conceptually problematic indicator – revise its construction or conceptual contribution, and (3) aggregate three collinear and conceptually intersecting indicators into a sub-index, prior to aggregation – limit their overrepresentation. By revising the index according to these three recommendations, it is found that ESRI showcases a greater conceptual and statistical coherence. It can thus be concluded that the analytical framework, proposed in this thesis, can aid the development of representative indices.
Composite indicators (or indices) are popular tools often used for ranking and benchmarking alternatives with respect to complex concepts. The purpose of constructing an index is, among other things, to simplify and summarize the information from a number of underlying indicators. To avoid misleading results, it is therefore important to construct indices in a transparent and representative way. With this in mind, this thesis aims to support the construction of the Electricity Supply Resilience Index (ESRI) – a newly developed energy index, produced within the Future Resilient Systems (FRS) programme at the Singapore-ETH Centre (SEC). This is done by studying a common phenomenon (so-called implicit weights) that arises in one of the construction steps, when the underlying indicators are weighted and aggregated into an index. In this step, weights are usually assigned to the individual indicators to reflect their relative importance in the index. It has recently been shown, however, that the data structure and correlations between the indicators have a decisive influence on each indicator's importance in the index, which can sometimes be entirely independent of the assigned weight. This phenomenon is sometimes called implicit weights, since they are not explicitly assigned but arise from the data structure. The purpose of this thesis is thus to investigate the implicit weights in the aggregation of ESRI. To this end, a newly developed variance-based sensitivity analysis, based on nonlinear regression, is applied and extended for the assessment of implicit weights in composite indicators. The results of this analysis show that statistical dependencies between ESRI's underlying indicators have a direct impact on each indicator's importance in the index. This means that the weights do not agree with the indicators' importance. Consequently, a weight optimization is carried out to balance the contribution of each indicator. 
From the results of this weight optimization it can be concluded that it is not feasible to balance the contribution of each indicator by adjusting the weights; doing so would come at the expense of representing each indicator effectively. It can therefore be concluded that there are clear dependencies between the indicators and that their correlations must be taken into account to achieve a balanced and representative index construction. Based on these insights, three recommendations are presented for improving the statistical representation and conceptual coherence of ESRI. These comprise: (1) avoid aggregating a negatively correlated indicator – keep it aside, (2) remove a conceptually problematic indicator – revise its construction or conceptual contribution, and (3) combine three collinear and conceptually overlapping indicators into a sub-index before aggregation – limit their overrepresentation. Once these recommendations are implemented, the revised ESRI shows improved conceptual and statistical coherence. It can thus be established that the analytical tool presented in this thesis can contribute to the development of representative indices.
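The implicit-weights phenomenon can be demonstrated in a few lines; this sketch uses a simple correlation-with-the-index proxy for "influence", not the variance-based sensitivity indices applied in the thesis:

```python
import numpy as np

# Two correlated indicators plus one independent one, all equally weighted.
rng = np.random.default_rng(4)
n = 5000
common = rng.normal(size=n)
x1 = common + 0.3 * rng.normal(size=n)   # x1 and x2 share a common factor
x2 = common + 0.3 * rng.normal(size=n)
x3 = rng.normal(size=n)                  # independent indicator
index = (x1 + x2 + x3) / 3.0             # equal explicit weights

# Pearson correlation with the composite as a crude influence measure
infl = [np.corrcoef(x, index)[0, 1] for x in (x1, x2, x3)]
```

Despite identical nominal weights, the two correlated indicators end up far more influential on the composite than the independent one, which is exactly the gap between explicit and implicit weights.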
APA, Harvard, Vancouver, ISO, and other styles
31

Havlíček, David. "Vytvoření nových predikčních modulů v systému pro dolování z dat na platformě NetBeans." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2009. http://www.nusl.cz/ntk/nusl-236768.

Full text
Abstract:
The topic of this master's thesis is the creation of a new prediction unit for an existing knowledge-discovery-in-databases system. The first part of the project deals with the general problems of knowledge discovery in databases and predictive analysis. The second part deals with the system developed at FIT, for which the module is implemented, the technologies used, and the design and implementation of the mining module for this system. The solution is implemented in the Java language and built on the NetBeans platform.
APA, Harvard, Vancouver, ISO, and other styles
32

Yamin, Moh'd. "LANDSLIDE STABILIZATION USING A SINGLE ROW OF ROCK-SOCKETED DRILLED SHAFTS AND ANALYSIS OF LATERALLY LOADED DRILLED SHAFTS USING SHAFT DEFLECTION DATA." University of Akron / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=akron1196960547.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Sánchez, Rocio Paola Maehara. "An extension of Birnbaum-Saunders distributions based on scale mixtures of skew-normal distributions with applications to regression models." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-14052018-202935/.

Full text
Abstract:
The aim of this work is to present an inference and diagnostic study of an extension of the lifetime distribution family proposed by Birnbaum and Saunders (1969a,b). This extension is obtained by considering a skew-elliptical distribution instead of the normal distribution. Specifically, in this work we develop a type of Birnbaum-Saunders (BS) distribution based on scale mixtures of skew-normal distributions (SMSN). The resulting family of lifetime distributions represents a robust extension of the usual BS distribution. Based on this family, we reproduce the usual properties of the BS distribution and present an estimation method based on the EM algorithm. In addition, we present regression models associated with the BS distributions (based on scale mixtures of skew-normal distributions), which are developed as an extension of the sinh-normal distribution (Rieck and Nedelman, 1991). For this model we consider an estimation and diagnostic study for uncensored data.
APA, Harvard, Vancouver, ISO, and other styles
34

MORASCHINI, LUCA. "Likelihood free and likelihood based approaches to modeling and analysis of functional antibody titers with applications to group B Streptococcus vaccine development." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2015. http://hdl.handle.net/10281/76794.

Full text
Abstract:
Opsonophagocytic killing assays (OPKA) are routinely used for the quantification of bactericidal antibodies against Gram-positive bacteria in clinical trial samples. The OPKA readout, the titer, is traditionally estimated using non-linear dose-response regressions as the highest serum dilution yielding a predefined threshold level of bacterial killing. Therefore, these titers depend on a specific killing threshold value and on a specific dose-response model. This thesis describes a novel OPKA titer definition, the threshold-free titer, which preserves biological interpretability whilst not depending on any killing threshold. First, a model-free version of this titer is presented and shown to be more precise than the traditional threshold-based titers when using simulated and experimental group B Streptococcus (GBS) OPKA experimental data. Second, a model-based threshold-free titer is introduced to automatically take into account the potential saturation of the OPKA killing curve. The posterior distributions of threshold-based and threshold-free titers are derived for each analysed sample using importance sampling embedded within a Markov chain Monte Carlo sampler of the coefficients of a 4PL logistic dose-response model. The posterior precision of threshold-free titers is again shown to be higher than that of threshold-based titers. The biological interpretability and operational characteristics demonstrated here indicate that threshold-free titers can substantially improve the routine analysis of OPKA experimental and clinical data.
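The dose-response step described in this abstract can be sketched as a 4PL fit followed by inversion for a threshold-based titer; the dilution grid, parameter values and the 50% killing threshold below are invented for illustration, and the thesis's Bayesian threshold-free titers are not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, d, c, b):
    # 4PL logistic: upper asymptote a, lower asymptote d,
    # inflection point c, slope b (killing decreases with dilution)
    return d + (a - d) / (1.0 + (x / c) ** b)

rng = np.random.default_rng(0)
dilutions = np.logspace(1, 5, 9)                    # serum dilution series
truth = four_pl(dilutions, a=0.9, d=0.05, c=1e3, b=1.5)
killing = truth + rng.normal(0, 0.02, truth.size)   # noisy killing fractions

popt, _ = curve_fit(four_pl, dilutions, killing, p0=[1.0, 0.0, 1e3, 1.0],
                    bounds=([0.0, -1.0, 1.0, 0.1], [2.0, 1.0, 1e6, 10.0]))
a, d, c, b = popt

# threshold-based titer: the dilution at which fitted killing crosses 50%
threshold = 0.5
titer = c * ((a - d) / (threshold - d) - 1.0) ** (1.0 / b)
```

A threshold-free titer would instead summarise the whole fitted curve rather than read off a single crossing point.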
APA, Harvard, Vancouver, ISO, and other styles
35

González, Rojas Victor Manuel. "Análisis conjunto de múltiples tablas de datos mixtos mediante PLS." Doctoral thesis, Universitat Politècnica de Catalunya, 2014. http://hdl.handle.net/10803/284659.

Full text
Abstract:
The fundamental content of this thesis is the development of the GNM-NIPALS, GNM-PLS2 and GNM-RGCCA methods, used to quantify qualitative variables starting from the first k components given by the appropriate methods in the analysis of J matrices of mixed data. These methods, called GNM-PLS (General Non-Metric Partial Least Squares), extend the NM-PLS methods, which take only the first principal component in the quantification function. The transformation of the qualitative variables is carried out through optimization processes, usually maximizing covariance or correlation functions, taking advantage of the flexibility of the PLS algorithms and preserving group membership and, where it exists, order; the metric variables likewise keep their original state, except for standardization. GNM-NIPALS was created to treat a single (J = 1) mixed data matrix by quantifying the qualitative variables via PCA-type reconstruction from an aggregated function of k components. GNM-PLS2 relates two (J = 2) mixed data sets Y~X through PLS regression, quantifying the qualitative variables of one space with the aggregated function of the first H PLS components of the other space, obtained through cross-validation under PLS2 regression. When the endogenous matrix Y contains only one response variable, the method is called GNM-PLS1. Finally, to analyze more than two blocks (J > 2) of mixed data Y~X1+...+XJ through their latent variables (LV), GNM-RGCCA was created, based on the RGCCA (Regularized Generalized Canonical Correlation Analysis) method; it modifies the PLS-PM algorithm by implementing the new mode A and specifies the covariance or correlation maximization functions associated with the process. The quantification of the qualitative variables in each block Xj is done through the inner function Zj, of dimension J due to the aggregation of the outer estimates Yj. Both Zj and Yj estimate the component ξj associated with the j-th block.
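The NIPALS iteration underlying these GNM extensions can be sketched for a purely numeric matrix; the quantification of qualitative variables, which is the actual contribution of the thesis, is omitted here and the data are random.

```python
import numpy as np

def nipals(X, n_comp=2, tol=1e-10, max_iter=500):
    # extract principal components one at a time by the NIPALS iteration
    X = X - X.mean(axis=0)              # column-center (local copy)
    T, P = [], []                       # scores and loadings
    for _ in range(n_comp):
        t = X[:, 0].copy()              # initialise score with a column
        for _ in range(max_iter):
            p = X.T @ t / (t @ t)       # project data onto current score
            p /= np.linalg.norm(p)      # normalise the loading
            t_new = X @ p               # update the score
            if np.linalg.norm(t_new - t) < tol:
                t = t_new
                break
            t = t_new
        X = X - np.outer(t, p)          # deflate before the next component
        T.append(t)
        P.append(p)
    return np.array(T).T, np.array(P).T

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 5))
T, P = nipals(X, n_comp=2)
```

Each component is a power-type iteration followed by deflation, which is what makes the algorithm easy to generalize with a quantification step.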
APA, Harvard, Vancouver, ISO, and other styles
36

Běhounek, Tomáš. "Imaging Reflectometry Measuring Thin Films Optical Properties." Doctoral thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2009. http://www.nusl.cz/ntk/nusl-233857.

Full text
Abstract:
This thesis presents an innovative method called Imaging Reflectometry, which is based on the principle of spectroscopic reflectometry and is intended for evaluating the optical properties of thin films. The reflectance spectrum is obtained from intensity maps recorded by a CCD camera. Each record corresponds to a preset wavelength, and the reflectance spectrum can be determined at a chosen point or over a selected region. The theoretical reflectance model is fitted to the measured data by the Levenberg-Marquardt algorithm, which yields the optical properties of the film, their accuracy, and an assessment of the reliability of the results via a sensitivity analysis of changes in the initial settings of the optimization algorithm.
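The fitting step can be illustrated with a Levenberg-Marquardt fit of a toy interference model to a synthetic reflectance spectrum; the cosine model, wavelength grid and parameter values below are invented and stand in for the full thin-film reflectance model used in the thesis.

```python
import numpy as np
from scipy.optimize import least_squares

# toy reflectance: baseline plus an interference oscillation whose period
# in 1/lambda encodes the optical thickness n*d (hypothetical model form)
def model(params, lam):
    r0, amp, nd, phi = params
    return r0 + amp * np.cos(4.0 * np.pi * nd / lam + phi)

def residuals(params, lam, r_meas):
    return model(params, lam) - r_meas

lam = np.linspace(400.0, 800.0, 200)        # wavelength grid, nm
true = np.array([0.3, 0.1, 600.0, 0.2])     # r0, amp, n*d (nm), phase
rng = np.random.default_rng(2)
r_meas = model(true, lam) + rng.normal(0, 0.005, lam.size)

# initial guess taken near the truth; a real fit would scan the
# oscillation period first, since the cost surface is periodic in n*d
fit = least_squares(residuals, x0=[0.25, 0.08, 590.0, 0.0],
                    args=(lam, r_meas), method='lm')
r0, amp, nd, phi = fit.x
```

The sensitivity analysis mentioned in the abstract amounts to repeating such fits from perturbed starting values and comparing the recovered parameters.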
APA, Harvard, Vancouver, ISO, and other styles
37

Cayemitte, Jean-Marie. "Accumulation des biens, croissance et monnaie." Thesis, Paris 2, 2014. http://www.theses.fr/2014PA020001/document.

Full text
Abstract:
This thesis constructs a theoretical model that renews the traditional approach to market equilibrium. By introducing into the neoclassical paradigm the principle of preference for quantity, it optimally generates inventories within a competitive market. The results are very important since they explain both the emergence of unsold goods and the existence of economic cycles. In addition, it studies the optimal behavior of a monopolist whose market power depends not only on the quantity of displayed goods but also on the quantity of goods purchased. Contrary to the traditional assumption that the monopolist chooses the price or quantity that maximizes its profit, through a generalized Lerner index (GLI) it attracts customers' demand by both the price and the quantity of displayed goods. Whatever the market structure, the phenomenon of inventory accumulation appears in the economy. Furthermore, the model has the advantage of explicitly explaining impulse purchases, not yet treated by economic theory. To check the robustness of the results, the theoretical model is fitted to U.S. data. Due to its nonlinearity, the Gauss-Newton method is appropriate to highlight the impact of consumers' preference for quantity on production and the accumulation of goods, and consequently on GDP forecasts. Finally, this thesis builds a two-country overlapping generations (OLG) model which extends the dynamic OLG equilibrium to a frictionless dynamic OLG gamma-equilibrium. Based on the cash-in-advance constraint, it highlights the conditions for over-accumulation of capital and the welfare implications of capital mobility in a context of accumulation of a stock of unsold goods.
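The Gauss-Newton method mentioned above can be sketched on a toy exponential model; the specification actually fitted to U.S. data in the thesis is not reproduced, and the data below are simulated.

```python
import numpy as np

# Gauss-Newton for the toy model y = a * exp(b * x): linearise the
# residual around the current parameters, solve a least-squares step,
# and repeat until the step size is negligible
rng = np.random.default_rng(3)
x = np.linspace(0.0, 2.0, 40)
a_true, b_true = 2.0, 0.7
y = a_true * np.exp(b_true * x) + rng.normal(0, 0.05, x.size)

# initialise from the log-linearised fit, then refine by Gauss-Newton
A = np.column_stack([np.ones_like(x), x])
c0, *_ = np.linalg.lstsq(A, np.log(y), rcond=None)
theta = np.array([np.exp(c0[0]), c0[1]])

for _ in range(50):
    a, b = theta
    f = a * np.exp(b * x)
    r = y - f                                     # current residuals
    J = np.column_stack([np.exp(b * x),           # df/da
                         a * x * np.exp(b * x)])  # df/db
    step, *_ = np.linalg.lstsq(J, r, rcond=None)
    theta = theta + step
    if np.linalg.norm(step) < 1e-10:
        break
a_hat, b_hat = theta
```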
APA, Harvard, Vancouver, ISO, and other styles
38

"Weighted quantile regression and oracle model selection." Thesis, 2009. http://library.cuhk.edu.hk/record=b6074984.

Full text
Abstract:
In this dissertation I suggest a new (regularized) weighted quantile regression estimation approach for nonlinear regression models and double threshold ARCH (DTARCH) models. I allow the number of parameters in the nonlinear regression models to be fixed or to diverge. The proposed estimation method is robust and efficient and is applicable to other models. I use adaptive-LASSO and SCAD regularization to select parameters in the nonlinear regression models. I simultaneously estimate the AR and ARCH parameters in the DTARCH model using the proposed weighted quantile regression. The value of the proposed methodology is demonstrated.
Keywords: Weighted quantile regression, Adaptive-LASSO, High dimensionality, Model selection, Oracle property, SCAD, DTARCH models.
Under regularity conditions, I establish asymptotic distributions of the proposed estimators, which show that the model selection methods perform as well as if the correct submodels are known in advance. I also suggest an algorithm for fast implementation of the proposed methodology. Simulations are conducted to compare different estimators, and a real example is used to illustrate their performance.
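A minimal, unregularized version of weighted quantile regression can be written as direct minimisation of the weighted check loss; the adaptive-LASSO/SCAD penalties and the DTARCH application of the dissertation are omitted, and the linear model and data below are simulated for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def pinball(u, tau):
    # check (pinball) loss rho_tau(u)
    return np.where(u >= 0, tau * u, (tau - 1.0) * u)

def weighted_qr(X, y, tau, w=None):
    # weighted linear quantile regression by direct minimisation of the
    # weighted check loss (a simple stand-in for the thesis's estimator)
    if w is None:
        w = np.ones(len(y))
    obj = lambda beta: np.sum(w * pinball(y - X @ beta, tau))
    beta0 = np.linalg.lstsq(X, y, rcond=None)[0]   # OLS starting value
    res = minimize(obj, beta0, method='Nelder-Mead',
                   options={'xatol': 1e-8, 'fatol': 1e-8, 'maxiter': 20000})
    return res.x

rng = np.random.default_rng(4)
n = 400
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)
beta_med = weighted_qr(X, y, tau=0.5)              # median regression
```

Non-uniform weights w reproduce the weighted variant, and a penalty term would simply be added to the objective.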
Jiang, Xuejun.
Adviser: Xinyuan Song.
Source: Dissertation Abstracts International, Volume: 73-01, Section: B, page: .
Thesis (Ph.D.)--Chinese University of Hong Kong, 2009.
Includes bibliographical references (leaves 86-92).
Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [201-] System requirements: Adobe Acrobat Reader. Available via World Wide Web.
APA, Harvard, Vancouver, ISO, and other styles
39

Velázquez, Ricardo. "Nonlinear measurement error models with multivariate and differently scaled surrogates /." 2002. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&res_dat=xri:pqdiss&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&rft_dat=xri:pqdiss:3070224.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Hsu, Wen-Jun, and 許文俊. "The Application of Focus Measure Algorithms Combined with the Nonlinear Regression Analysis." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/16272799303948633960.

Full text
Abstract:
Master's thesis
National Kaohsiung Normal University
Department of Physics
93
In this research, we use a stepper motor, a CCD camera and a GPIB interface card to build an optical autofocusing system. Three kinds of filter algorithms are applied separately to the images captured by the CCD, and nonlinear regression is then used to estimate the extremum so that the stepper motor can be moved to it quickly. In addition, we test the accuracy of the autofocusing system on three different patterns and compare the results of the nonlinear-regression estimate with a search over the whole range of the stepper motor (global search). The results can be applied to the autofocusing procedures of microscopy, adaptive optics, materials processing, storage media and optical inspection.
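The nonlinear-regression shortcut described above can be sketched by fitting a Gaussian to a coarse scan of focus scores and jumping straight to its estimated peak; the Gaussian score profile, motor positions and noise level below are simulated, and the thesis's three filter algorithms are not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

# the focus score is assumed to peak at the in-focus motor position;
# model the coarse scan with a Gaussian-plus-offset curve
gauss = lambda z, a, mu, s, c: a * np.exp(-0.5 * ((z - mu) / s) ** 2) + c

rng = np.random.default_rng(5)
positions = np.arange(0.0, 100.0, 10.0)     # coarse stepper-motor scan
true_focus = 42.0
scores = gauss(positions, 5.0, true_focus, 15.0, 1.0) \
         + rng.normal(0, 0.05, positions.size)

# nonlinear regression on the handful of coarse samples, then a single
# motor jump to the estimated extremum instead of a full global search
p0 = [np.ptp(scores), positions[scores.argmax()], 20.0, scores.min()]
popt, _ = curve_fit(gauss, positions, scores, p0=p0, maxfev=20000)
best_focus = popt[1]
```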
APA, Harvard, Vancouver, ISO, and other styles
41

Lee, Sooyoung. "Statistical inference of nonlinear Granger causality: a semiparametric time series regression analysis." Thesis, 2013. http://hdl.handle.net/2440/81576.

Full text
Abstract:
Since the seminal work of Granger (1969), Granger causality has become a useful concept and tool in the study of the dynamic linkages between economic variables and in exploring whether or not an economic variable helps forecast another one. Researchers have suggested a variety of methods to test the existence of Granger causality in the literature. In particular, linear Granger causality testing has been remarkably developed (see, for example, Toda & Phillips (1993), Sims, Stock & Watson (1990), Geweke (1982), Hosoya (1991) and Hidalgo (2000)). However, in practice, the real economic relationship between different variables may often be nonlinear. Hiemstra & Jones (1994) and Nishiyama, Hitomi, Kawasaki & Jeong (2011) recently proposed different methods to test the existence of any nonlinear Granger causality between a pair of economic variables under an α-mixing framework of the data generating process. Their methods are general with nonparametric features, which however suffer from the curse of dimensionality when high lag orders need to be taken into consideration in applications. In this thesis, the main objective is to develop a class of semiparametric time series regression models of partially linear structure, with statistical theory established under a more general framework of near epoch dependent (NED) data generating processes, which can be easily used in exploring nonlinear Granger causality. This general NED framework, which takes the α-mixing one as a very special case, is popular in nonlinear econometric analysis. The reasons why we adopt such a semiparametric model structure for time series regression under the general NED framework are not only that it is a natural extension of the structure of linear Granger causality analysis but also that it enables us to estimate, and hence understand, the likely structure of the nonlinear Granger causality if it exists.
To the best of our knowledge, this is still an early effort in the literature that seeks the causality structure in nonlinear Granger causality beyond testing for its existence. Furthermore, semiparametric structures help to reduce the curse of dimensionality that purely nonparametric methods suffer from in Granger causality testing. We study the semiparametric regression models under a more general framework of NED time series processes, which includes, for example, the popular ARMA(p,q)-GARCH(r,m) model in financial econometrics, which is hard to show to be α-mixing except in some very special cases. By using the idea of Robinson (1988) and the theory developed in Lu & Linton (2007) under NED, we construct estimators of both the parameters and the unknown function in our model, and we establish asymptotic theory for these estimators under NED. The estimated unknown functional part and its confidence interval can tell us whether or not there exists nonlinear Granger causality between the variables, and moreover we can find the functional form of the nonlinear Granger causality. In order to examine the finite-sample performance of the proposed estimators, besides the developed large-sample theory, we have also conducted Monte Carlo simulation studies. The simulation results clearly demonstrate that we can estimate both the parameters and the unknown function rather accurately under moderate sample sizes. Finally, we have empirically applied the proposed methodology to examining the existence or effects of nonlinear Granger causality between each pair of the financial markets involving Australia and the USA, UK and China, which are closely related to Australia in economy. Weekly return data are used to avoid possible market microstructure differences.
Interestingly, we have found that there is a strongly nonlinear Granger causality from the UK FTSE100 to the Australian ASX200, some linear or nonlinear Granger causality from the USA S&P500 to the Australian ASX200, and some negatively linear Granger causality from the China SSE index to the Australian ASX200. From these results, we can fairly say that the USA, UK and Chinese stock markets have close linkages with and impacts on the Australian market, with stock prices from these foreign countries one week ago helping to predict current Australian stock market behavior. We have also found some linear Granger causality from the UK FTSE100 and the USA S&P500 to the China SSE index, respectively, but we cannot see Granger causality from the Australian ASX200 to the China SSE. Nor can we find apparent evidence of Granger causality from the other countries to the UK or the USA. In addition, we have examined the lag-2 Granger causality effect between each pair of these markets, and only find that the Chinese stock price two weeks ago appears to help predict this week's Australian stock price. In summary, the main contributions of this thesis are as follows:
• We have suggested semiparametric time series regression models of partially linear structure to examine possibly nonlinear Granger causality, with methods proposed to estimate both the parameters and the unknown nonparametric function.
• We have established the consistency and asymptotic normality of the estimators under a more general, popular data generating framework of near epoch dependence.
• Monte Carlo simulation studies reveal that the proposed methodology works well for both linear and nonlinear functional forms of Granger causality in finite samples.
• Interesting empirical applications find clear dynamic linkages whereby the Australian stock market is impacted by the USA, UK and Chinese markets.
Thesis (M.Phil.) -- University of Adelaide, School of Mathematical Sciences, 2013
APA, Harvard, Vancouver, ISO, and other styles
42

Jheng, Yu-lun, and 鄭宇倫. "Nonlinear Regression Models on Site Traffic Impact Analysis–The Case of Taipei County." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/2dtj74.

Full text
Abstract:
Master's thesis
National Central University
Graduate Institute of Civil Engineering
97
In metropolitan areas, the shortage of transport facilities around site developments often causes serious traffic impacts in the neighborhood, yet a detailed traffic impact assessment (TIA) prior to planning and evaluation costs a large amount of money for surveying and analysis. For the common site developments of a metropolis, this study therefore seeks a simpler regression-based way to establish a forecasting model and estimate road traffic impacts more efficiently. The study builds a road-traffic-impact forecasting model based on nonlinear regression analysis, drawing on past literature, implementation experience and the relevant provisions on traffic impact. Considering the feasibility of the collected data, an initial model is constructed with variables relating the major and target sites selected from each development project. Because nonlinear relationships exist between the variables, the variables are transformed into a linear relationship, the relevant parameters are estimated, and a complete nonlinear traffic-impact forecasting model is established. The experimental objects are the final drafts of traffic impact reports for Taipei County site cases in 2007 and 2008; the results show that the regression forecasting model performs well, although the model accuracy also reveals the poor characteristics of the sample data.
The model can be applied in the future to forecast the impact of general site developments in other metropolitan areas, in combination with the current traffic situation or other impact evaluation criteria, and can even be combined with local geographic information systems and spatial databases, facilitating sample acquisition and providing a basis for the preliminary investigation and assessment of traffic impacts from site development.
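The variable-transformation step mentioned above can be illustrated with a power-law relation made linear in logs; the variables and coefficients below are invented for illustration and are not those of the Taipei County model.

```python
import numpy as np

# power-law trip-generation-style relation y = a * x**b becomes linear
# in logs: log y = log a + b * log x, fit by ordinary least squares
rng = np.random.default_rng(6)
n = 200
floor_area = rng.uniform(1e3, 5e4, n)          # hypothetical regressor
a_true, b_true = 0.8, 0.6
trips = a_true * floor_area ** b_true * np.exp(rng.normal(0, 0.1, n))

X = np.column_stack([np.ones(n), np.log(floor_area)])
coef, *_ = np.linalg.lstsq(X, np.log(trips), rcond=None)
a_hat, b_hat = np.exp(coef[0]), coef[1]        # back-transform intercept
```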
APA, Harvard, Vancouver, ISO, and other styles
43

HSU, CHEN-WANG, and 徐鎮旺. "Analysis Nonlinear Threshold Effects on Oversea Tour aspiration – Panel Smooth Transition Regression Approach." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/6928p8.

Full text
Abstract:
Master's thesis
Kainan University
In-service Master's Program, College of Tourism and Transportation
103
With the coming of an ageing society as average age increases, the age distribution of the population is undergoing structural change; in addition, economic growth has tended to flatten. In an atmosphere that emphasizes recreation, overseas travel has become a domestic trend, so the factors affecting overseas travel are all the more worth investigating. This paper investigates the factors affecting Taiwanese aspirations for overseas travel using the panel smooth transition regression method, which controls for other factors that can affect the aspiration for overseas travel and potentially explains the heterogeneity that exists over time and across destination countries. First, we provide evidence that the number of people travelling overseas varies with time and changes smoothly with nonlinear characteristics, determined by exchange rate and oil price volatility. Second, explanatory variables crossing the threshold of the transition variable show negative marginal effects, meaning that their effect on overseas travel is not constant but varies with time and price cost. Finally, when the exchange rate is adopted as the threshold, the transition speed of the model is 20 times that of the oil-price threshold, meaning that the exchange rate reflects overseas travel costs more strongly and faster than the oil price does and can reflect the aspiration for overseas travel immediately.
APA, Harvard, Vancouver, ISO, and other styles
44

Sun, Jia-Wei, and 孫嘉偉. "Nonlinear regression analysis in the Presence of Censored Response and Error-Prone Predictors." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/30546346636481989590.

Full text
Abstract:
Master's thesis
National Changhua University of Education
Graduate Institute of Statistics and Information Science
95
In many regression problems the regressor is usually measured with errors. Errors in the predictors can cause severe bias in estimation unless some auxiliary adjustment is made. Nonlinear measurement error models have received increasing attention since Carroll and Li's (1992) paper appeared. Besides, regression techniques, e.g. the scatterplot, can suggest the forms of the models, but for censored responses these techniques might fail. In order to resolve the problem caused by unobserved censored data, Fan and Gijbels (1994) proposed a Kaplan-Meier-like approach. Based on the regression calibration proposed by Carroll and Li (1992), as well as the Kaplan-Meier-like transformation for censored data proposed by Fan and Gijbels (1994) for the nonparametric regression model with censored response, we may consider several general models for defective data sets with both censored responses and error-prone explanatory variables. The aims of this thesis are twofold: first, to consider a general partially linear single-index model with censored response and error-prone regressors, and to estimate the parameters as well as the unknown link function; second, to consider the problem of dimension reduction in a nonlinear model with unknown link function for a censored dependent variable and independent variables with measurement errors. In seeking to reach these objectives, we modify both Lu and Cheng's (2007) and Lue's (2004) approaches to simultaneously overcome the difficulty of estimation caused by censored responses and error-prone regressors. Moreover, we generalize both of their works. The illustrative simulation results verify the validity of our method.
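The regression-calibration idea can be sketched in the classical linear case, with censoring ignored; the measurement-error variance is assumed known, and all values below are simulated for illustration.

```python
import numpy as np

# regression calibration: replace the error-prone regressor W = X + U by
# its best linear predictor E[X | W], then regress Y on that proxy
rng = np.random.default_rng(7)
n = 5000
x = rng.normal(0.0, 1.0, n)          # true regressor (unobserved)
w = x + rng.normal(0.0, 0.5, n)      # observed, with measurement error
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.3, n)

# the naive slope is attenuated by the reliability ratio var(X)/var(W)
naive = np.cov(w, y)[0, 1] / np.var(w)

sigma_u2 = 0.25                                # error variance, assumed known
lam = (np.var(w) - sigma_u2) / np.var(w)       # estimated reliability ratio
x_hat = w.mean() + lam * (w - w.mean())        # calibrated regressor E[X | W]
calibrated = np.cov(x_hat, y)[0, 1] / np.var(x_hat)
```

The naive slope comes out well below the true value of 2, while the calibrated slope recovers it.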
APA, Harvard, Vancouver, ISO, and other styles
45

Chen, Yakuan. "Methods for functional regression and nonlinear mixed-effects models with applications to PET data." Thesis, 2017. https://doi.org/10.7916/D87W6QJ9.

Full text
Abstract:
The overall theme of this thesis focuses on methods for functional regression and nonlinear mixed-effects models with applications to PET data. The first part considers the problem of variable selection in regression models with functional responses and scalar predictors. We pose the function-on-scalar model as a multivariate regression problem and use group-MCP for variable selection. We account for residual covariance by "pre-whitening" using an estimate of the covariance matrix, and establish theoretical properties for the resulting estimator. We further develop an iterative algorithm that alternately updates the spline coefficients and covariance. Our method is illustrated by the application to two-dimensional planar reaching motions in a study of the effects of stroke severity on motor control. The second part introduces a functional data analytic approach for the estimation of the IRF, which is necessary for describing the binding behavior of the radiotracer. Virtually all existing methods have three common aspects: summarizing the entire IRF with a single scalar measure; modeling each subject separately; and the imposition of parametric restrictions on the IRF. In contrast, we propose a functional data analytic approach that regards each subject's IRF as the basic analysis unit, models multiple subjects simultaneously, and estimates the IRF nonparametrically. We pose our model as a linear mixed effect model in which shrinkage and roughness penalties are incorporated to enforce identifiability and smoothness of the estimated curves, respectively, while monotonicity and non-negativity constraints impose biological information on estimates. We illustrate this approach by applying it to clinical PET data. The third part discusses a nonlinear mixed-effects modeling approach for PET data analysis under the assumption of a compartment model. 
The traditional NLS estimators of the population parameters are applied in a two-stage analysis, which raises instability issues and neglects the variation in rate parameters. In contrast, we propose to estimate the rate parameters by fitting nonlinear mixed-effects (NLME) models, in which all the subjects are modeled simultaneously by allowing rate parameters to have random effects, so that population parameters can be estimated directly from the joint model. Simulations are conducted to compare the power of detecting group effects, in both rate parameters and summarized measures, of tests based on NLS and NLME models. We apply our NLME approach to clinical PET data to illustrate the model building procedure.
APA, Harvard, Vancouver, ISO, and other styles
46

Yang, Jung-Li, and 楊榮力. "Financial Time Series Forecasting using Nonlinear Independent Component Analysis, Support Vector Regression and Neural Network." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/n756e7.

Full text
Abstract:
Master's thesis
National Taipei University of Technology
Graduate Institute of Commerce Automation and Management
97
A time series involves many factors with nonlinear relationships among them, so useful information often cannot be obtained by direct observation; forecasting is a genuinely difficult task because of the many correlated factors involved. Nonlinear independent component analysis (NLICA) is a novel feature-extraction technique that aims at recovering independent sources from their mixtures, without knowing the mixing procedure or any specific knowledge of the sources. In this research a time series prediction model integrating NLICA and BPN/SVR is proposed for stock prices. The proposed approach first uses NLICA to transform the input space, composed of the original forecasting variables, into a feature space consisting of independent components (ICs) that represent hidden information of the original data. The ICs are then used as the input variables of the BPN and SVR for building the prediction model. To evaluate the performance of the proposed approach, a stock price time series is used as the illustrative example.
APA, Harvard, Vancouver, ISO, and other styles
47

Robledo, Ricardo Luis. "Nonlinear Stochastic Analysis of Motorcycle Dynamics." Thesis, 2013. http://hdl.handle.net/1911/72032.

Full text
Abstract:
Off-road and racing motorcycles require a particular setup of the suspension to improve the comfort and the safety of the rider. Further, due to ground unevenness, off-road motorcycle suspensions usually experience extreme and erratic excursions in performing their function. In this regard, the adoption of nonlinear devices, such as progressive springs and hydropneumatic shock absorbers, can help limit both the acceleration experienced by the sprung mass and the excursions of the suspensions. For dynamic analysis purposes, this option involves the solution of the nonlinear differential equations that govern the motion of the motorcycle, which is excited by the stochastic road ground profile. In this study a 4-degrees-of-freedom (4-DOF) nonlinear motorcycle model is considered. The model involves suspension elements with asymmetric behaviour. Further, it is assumed that the motorcycle is exposed to loading of a stochastic nature as it moves with a specified speed over a road profile defined by a particular power spectrum. It is shown that a meaningful analysis of the motorcycle response can be conducted by using the technique of statistical linearization. The validity of the proposed approach is established by comparison with results from pertinent Monte Carlo studies. In this context the applicability of auto-regressive (AR) filters for efficient implementation of the Monte Carlo simulation is pointed out. The advantages of these methods for the synthesis of excitation signals from a given power spectrum are shown by comparison with other methods. It is shown that the statistical linearization method allows the analysis of multi-degree-of-freedom (M-DOF) systems that present strong nonlinearities, exceeding other nonlinear analysis methods in both accuracy and applicability. It is expected that the proposed approaches can be used for a variety of parameter/ride-quality studies and as a preliminary design tool by the motorcycle industry.
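The AR-filter synthesis mentioned above can be sketched with a first-order filter: white noise shaped by an AR(1) recursion has a known rational spectrum, which can be checked against the sample statistics. Real road-profile spectra would need higher filter orders; the pole and noise level below are invented.

```python
import numpy as np

rng = np.random.default_rng(8)
phi = 0.9        # AR(1) pole; sets the spectrum S(w) ~ 1/|1 - phi*exp(-iw)|^2
sigma = 1.0
n = 200_000
e = rng.normal(0.0, sigma, n)       # white-noise excitation

z = np.empty(n)                     # synthesised stationary signal
z[0] = e[0]
for t in range(1, n):
    z[t] = phi * z[t - 1] + e[t]

var_emp = z[1000:].var()                      # drop the start-up transient
var_theory = sigma ** 2 / (1.0 - phi ** 2)    # stationary variance of AR(1)
rho1_emp = np.corrcoef(z[1000:-1], z[1001:])[0, 1]   # lag-1 autocorrelation
```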
APA, Harvard, Vancouver, ISO, and other styles
48

Fan, Cheng-Jui, and 范成瑞. "Financial Time Series Forecasting using Nonlinear Independent Component Analysis, Particle Swarm Optimization and Support Vector Regression." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/446ay9.

Full text
Abstract:
Master's thesis
National Taipei University of Technology
Graduate Institute of Commerce Automation and Management
98
The stock market is complicated and sensitive, and can easily be influenced by many factors. Because the relationships between these factors are nonlinear and the data contain noise, predictions made from raw data often perform poorly. We therefore employ feature extraction methods to transform stock price data into a feature space, reducing the above-mentioned drawbacks and raising the effectiveness of the prediction outputs. In this research, nonlinear independent component analysis (NLICA) is applied as the feature extraction method for time series data in order to discover hidden information, and particle swarm optimization (PSO) is applied as the parameter-optimizing tool for support vector regression (SVR), building an integrated NLICA-PSO-SVR time series forecasting model. The proposed model is compared with models integrated with other feature extraction methods. Because NLICA can effectively extract independent components (ICs) representing the main trend of the stock price from the observed data, the performance of the proposed model is better than that of the other integrated models. In addition, integrating PSO into the forecasting model dramatically reduces computing cost and time and is also beneficial to forecasting performance.
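The PSO-for-SVR step can be sketched as below: a minimal particle swarm searches the log-scaled SVR hyperparameters (C, gamma, epsilon) by cross-validated error. The swarm constants, search ranges, and synthetic data (standing in for the extracted ICs) are all assumptions for illustration, not the thesis's configuration.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
# Synthetic stand-in for features extracted from the price series
X = rng.normal(size=(200, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.normal(size=200)

def fitness(params):
    """Negative CV error of an SVR with log-scaled (C, gamma, epsilon)."""
    C, gamma, eps = np.exp(params)
    model = SVR(C=C, gamma=gamma, epsilon=eps)
    return cross_val_score(model, X, y, cv=3,
                           scoring="neg_mean_squared_error").mean()

# Minimal PSO over the three log-parameters
n_particles, n_iter = 10, 20
pos = rng.uniform(-3, 3, size=(n_particles, 3))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmax()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, 3))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -5.0, 5.0)          # keep the search bounded
    f = np.array([fitness(p) for p in pos])
    improved = f > pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmax()].copy()

best_C, best_gamma, best_eps = np.exp(gbest)
```

The same loop would apply unchanged with NLICA-derived ICs in place of the synthetic `X`.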
APA, Harvard, Vancouver, ISO, and other styles
49

Sadri, Sara. "Frequency Analysis of Droughts Using Stochastic and Soft Computing Techniques." Thesis, 2010. http://hdl.handle.net/10012/5198.

Full text
Abstract:
In the Canadian Prairies recurring droughts are one of the realities which can have significant economic, environmental, and social impacts. For example, droughts in 1997 and 2001 cost different sectors over $100 million. Drought frequency analysis is a technique for analyzing how frequently a drought event of a given magnitude may be expected to occur. In this study the state of the science related to frequency analysis of droughts is reviewed and studied. The main contributions of this thesis include the development of a model in Matlab which uses the qualities of Fuzzy C-Means (FCM) clustering and corrects the formed regions to meet the criteria of effective hydrological regions. In FCM each site has a degree of membership in each of the clusters. The algorithm developed is flexible enough to take the number of regions and the return period as inputs and to show the final corrected clusters as output in most scenarios. While drought is considered a bivariate phenomenon, with the two statistical variables of duration and severity to be analyzed simultaneously, an important step in this study is increasing the complexity of the initial Matlab model to correct regions based on L-comoment statistics (as opposed to L-moments). Implementing a reasonably straightforward approach for bivariate drought frequency analysis using bivariate L-comoments and copulas is another contribution of this study. Quantile estimation at ungauged sites for return periods of interest is studied by introducing two classes of neural network and machine learning methods: Radial Basis Function (RBF) networks and Support Vector Machine Regression (SVM-R). These two techniques are selected based on their good reviews in the literature on function estimation and nonparametric regression. The functionalities of RBF and SVM-R are compared with the traditional nonlinear regression (NLR) method. As well, a nonlinear regression with regionalization method, in which catchments are first regionalized using FCM, is applied and its results are compared with the other three models. Drought data from 36 natural catchments in the Canadian Prairies are used in this study. This study provides a methodology for bivariate drought frequency analysis that can be practiced in any part of the world.
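The Fuzzy C-Means step, in which each site holds a degree of membership in every cluster, can be sketched with the standard FCM update below. This is the generic algorithm only, not the thesis's corrected-region procedure; the fuzzifier m = 2 and the random initialization are assumptions.

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, n_iter=100, seed=0):
    """Standard Fuzzy C-Means: returns cluster centers and a membership
    matrix U of shape (n, c), where U[i, k] is the degree of membership of
    site i in cluster k (each row sums to 1)."""
    rng = np.random.default_rng(seed)
    n = len(X)
    U = rng.dirichlet(np.ones(c), size=n)              # random fuzzy partition
    for _ in range(n_iter):
        W = U ** m                                     # fuzzified weights
        centers = (W.T @ X) / W.sum(axis=0)[:, None]   # weighted centroids
        # Distances from every site to every center (small floor avoids 0-div)
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # Membership update: u_ik proportional to d_ik^(-2/(m-1)), normalized
        U = 1.0 / (d ** (2.0 / (m - 1.0)))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U
```

In a regionalization setting, the rows of `U` would then be inspected or corrected so the resulting clusters satisfy homogeneity criteria for hydrological regions.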
APA, Harvard, Vancouver, ISO, and other styles
50

Chen, Chien-Feng, and 陳建豐. "The Application of Discrete Cosine Transform (DCT) Combined with the Nonlinear Regression Analysis on Optical Auto-Focusing." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/15902479623677292509.

Full text
Abstract:
Master's thesis
National Kaohsiung Normal University
Department of Physics
96
96
This research presents a fast and accurate real-time optical auto-focusing system, which utilizes a frequency component of the discrete cosine transform (DCT) as the focus measure. In addition, a nonlinear regression routine is incorporated into the algorithm to quickly move a rotational stepper motor to the best focus. The concise and effective algorithm can be applied to digital cameras, microscopes and optical inspection instruments.
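The two ingredients named in the abstract can be sketched as below, with assumed details throughout: the DCT band, the synthetic test image, and the focus-measure samples are placeholders, and a simple quadratic fit stands in for the thesis's nonlinear regression routine.

```python
import numpy as np
from scipy.fft import dctn
from scipy.ndimage import uniform_filter

def dct_focus_measure(img, band=(4, 32)):
    """Focus measure: energy in a mid-frequency band of the 2-D DCT.
    Well-focused images keep more energy away from the DC corner."""
    coeffs = dctn(img.astype(float), norm="ortho")
    lo, hi = band
    return float(np.sum(coeffs[lo:hi, lo:hi] ** 2))

# Sharp vs. blurred versions of a synthetic 64x64 block-pattern image
rng = np.random.default_rng(0)
sharp = np.kron(rng.integers(0, 2, (8, 8)).astype(float), np.ones((8, 8)))
blurred = uniform_filter(sharp, size=5)          # stand-in for defocus

# Peak-location step: fit a parabola to (motor position, focus measure)
# samples and drive the stepper motor to the estimated vertex
pos = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
fm = np.array([1.0, 3.5, 5.0, 3.4, 0.9])         # illustrative measurements
a2, a1, _ = np.polyfit(pos, fm, 2)
best_pos = -a1 / (2 * a2)                        # vertex of the fitted parabola
```

Since defocus attenuates mid and high spatial frequencies, the band energy drops as the image blurs, and the vertex estimate lets the motor jump near the peak in one move instead of stepping through every position.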
APA, Harvard, Vancouver, ISO, and other styles