Dissertations / Theses on the topic 'Weighted least squares'

To see the other types of publications on this topic, follow the link: Weighted least squares.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 dissertations / theses for your research on the topic 'Weighted least squares.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Wang, Zhen. "Semi-parametric Bayesian Models Extending Weighted Least Squares." The Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=osu1236786934.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Wei, Fei. "Weighted least-squares finite element methods for PIV data assimilation." Thesis, Montana State University, 2011. http://etd.lib.montana.edu/etd/2011/wei/WeiF0811.pdf.

Full text
Abstract:
The ability to diagnose irregular flow patterns clinically in the left ventricle (LV) is currently very challenging. One potential approach for non-invasively measuring blood flow dynamics in the LV is particle image velocimetry (PIV) using microbubbles. To obtain local flow velocity vectors and velocity maps, PIV software calculates displacements of microbubbles over a given time interval, which is typically determined by the actual frame rate. In addition to the PIV, ultrasound images of the left ventricle can be used to determine the wall position as a function of time, and the inflow and outflow fluid velocity during the cardiac cycle. Despite the abundance of data, ultrasound and PIV alone are insufficient for calculating the flow properties of interest to clinicians. Specifically, the pressure gradient and total energy loss are of primary importance, but their calculation requires a full three-dimensional velocity field. Echo-PIV only provides 2D velocity data along a single plane within the LV. Further, numerous technical hurdles prevent three-dimensional ultrasound from having a sufficiently high frame rate (currently approximately 10 frames per second) for 3D PIV analysis. Beyond microbubble imaging in the left ventricle, there are a number of other settings where 2D velocity data are available using PIV, but a full 3D velocity field is desired. This thesis develops a novel methodology to assimilate two-dimensional PIV data into a three-dimensional Computational Fluid Dynamics (CFD) simulation with moving domains. To illustrate and validate our approach, we tested it on three different problems: a flap displaced by a fluid jet; an expanding hemisphere; and an expanding half ellipsoid representing the left ventricle of the heart. To account for the changing shape of the domain in each problem, the CFD mesh was deformed using a pseudo-solid domain mapping technique at each time step. The incorporation of experimental PIV data can help to identify when the imposed boundary conditions are incorrect. This approach can also help to capture effects that are not modeled directly, such as the impact of heart valves on the flow of blood into the left ventricle.
APA, Harvard, Vancouver, ISO, and other styles
3

Rosopa, Patrick. "A COMPARISON OF ORDINARY LEAST SQUARES, WEIGHTED LEAST SQUARES, AND OTHER PROCEDURES WHEN TESTING FOR THE EQUALITY OF REGRESSION." Doctoral diss., University of Central Florida, 2006. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2311.

Full text
Abstract:
When testing for the equality of regression slopes based on ordinary least squares (OLS) estimation, extant research has shown that the standard F test performs poorly when the critical assumption of homoscedasticity is violated, resulting in increased Type I error rates and reduced statistical power (Box, 1954; DeShon & Alexander, 1996; Wilcox, 1997). Overton (2001) recommended weighted least squares estimation, demonstrating that it outperformed OLS and performed comparably to various statistical approximations. However, Overton's method was limited to two groups. In this study, a generalization of Overton's method is described. Then, using a Monte Carlo simulation, its performance was compared to that of three alternative weight estimators and three other methods. The results suggest that the generalization provides power levels comparable to the other methods without sacrificing control of Type I error rates. Moreover, in contrast to the statistical approximations, the generalization (a) is computationally simple, (b) can be conducted in commonly available statistical software, and (c) permits post hoc analyses. Various unique findings are discussed, along with implications for theory and practice in psychology and future research directions.
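
To make the weighting idea concrete, here is a minimal sketch (not Rosopa's generalization itself) of a WLS-based test of slope equality across several groups under heteroscedasticity; the inverse-variance weights estimated from per-group OLS fits are an illustrative choice.

```python
# Minimal sketch: WLS test of equality of regression slopes across groups
# under heteroscedasticity. Weights = 1 / estimated group error variance.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Simulate 3 groups with equal slopes but unequal error variances
groups, n = 3, 100
x = rng.normal(size=(groups, n))
sigmas = [0.5, 1.0, 2.0]                       # heteroscedasticity across groups
y = np.stack([1.0 + 0.8 * x[g] + rng.normal(scale=sigmas[g], size=n)
              for g in range(groups)])

# Estimate each group's error variance from a per-group OLS fit
var_hat = []
for g in range(groups):
    res = sm.OLS(y[g], sm.add_constant(x[g])).fit()
    var_hat.append(res.mse_resid)

# Stack the data; build group dummies and group-by-x interactions
X = np.concatenate(x)
Y = np.concatenate(y)
gid = np.repeat(np.arange(groups), n)
w = 1.0 / np.repeat(var_hat, n)                # observation weights

D = np.column_stack([(gid == g).astype(float) for g in range(1, groups)])
design_full = np.column_stack([np.ones_like(X), D, X, D * X[:, None]])
design_null = np.column_stack([np.ones_like(X), D, X])   # common slope

full = sm.WLS(Y, design_full, weights=w).fit()
null = sm.WLS(Y, design_null, weights=w).fit()

# F test of the interaction terms (H0: all slopes equal)
f_stat = ((null.ssr - full.ssr) / (groups - 1)) / full.mse_resid
print(f"F = {f_stat:.3f}")
```

Scaling each observation by the reciprocal of its group's estimated error variance is what restores the nominal Type I error rate when variances differ across groups.
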
Ph.D.
Department of Psychology
Sciences
Psychology
APA, Harvard, Vancouver, ISO, and other styles
4

Fang, Xing [Verfasser]. "Weighted total least squares solutions for applications in geodesy / Xing Fang." Hannover : Technische Informationsbibliothek und Universitätsbibliothek Hannover (TIB), 2011. http://d-nb.info/1015446590/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Viklands, Thomas. "Algorithms for the Weighted Orthogonal Procrustes Problem and other Least Squares Problems." Doctoral thesis, Umeå : Umeå universitet, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-730.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Cheng, Chao-heh. "Calculations for positioning with the Global Navigation Satellite System." Ohio : Ohio University, 1998. http://www.ohiolink.edu/etd/view.cgi?ohiou1176839268.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Kamunge, Daniel. "A non-linear weighted least squares gas turbine diagnostic approach and multi-fuel performance simulation." Thesis, Cranfield University, 2011. http://dspace.lib.cranfield.ac.uk/handle/1826/5612.

Full text
Abstract:
The gas turbine, which has found numerous air, land and sea applications as a propulsion system, electricity generator and prime mover, is subject to deterioration of its individual components. In the past, various methodologies have been developed to quantify this deterioration with varying degrees of success. No single method addresses all issues pertaining to gas turbine diagnostics, and thus room for improvement exists. The first part of this research investigates the feasibility of non-linear weighted least squares as a gas turbine component deterioration quantification tool. Two new weighting schemes have been developed to address measurement noise. Four cases have been run to demonstrate the non-linear weighted least squares method, in conjunction with the new weighting schemes. Results demonstrate that the non-linear weighted least squares method effectively addresses measurement noise and quantifies gas path component faults with improved accuracy over its linear counterpart and over methods that do not address measurement noise. Since gas turbine diagnostics is based on analysis of engine performance at given ambient and power-setting conditions, accurate and reliable engine performance modelling and simulation models are essential for meaningful gas turbine diagnostics. The second part of this research therefore sought to develop a multi-fuel and multi-caloric simulation method with a view to improving simulation accuracy. The method developed is based on non-linear interpolation of fuel tables. Fuel tables for Jet-A, UK natural gas, kerosene and diesel were produced. Six case studies were carried out and the results demonstrate that the method has significantly improved accuracy over linear interpolation based methods and methods that assume thermal perfection.
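
As a hedged illustration of the core technique (not the thesis's engine model or its weighting schemes), non-linear weighted least squares amounts to scaling each residual by its measurement's noise level before the non-linear solve:

```python
# Minimal sketch of non-linear weighted least squares: residuals are scaled
# by 1/sigma_i so noisy sensors contribute less. The model and noise levels
# are illustrative, not the thesis's gas-path model.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)

def model(theta, t):
    a, b = theta
    return a * np.exp(-b * t)        # stand-in for an engine performance map

t = np.linspace(0.0, 5.0, 40)
theta_true = (2.0, 0.7)
sigma = 0.02 + 0.10 * rng.random(t.size)      # per-measurement noise levels
z = model(theta_true, t) + sigma * rng.normal(size=t.size)

def weighted_residuals(theta):
    return (z - model(theta, t)) / sigma      # weighting by inverse std. dev.

fit = least_squares(weighted_residuals, x0=[1.0, 1.0])
print("estimated parameters:", fit.x)
```
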
APA, Harvard, Vancouver, ISO, and other styles
8

Shulga, Yelena A. "Model-based calibration of a non-invasive blood glucose monitor." Digital WPI, 2006. https://digitalcommons.wpi.edu/etd-theses/58.

Full text
Abstract:
This project was dedicated to the problem of improving a non-invasive blood glucose monitor being developed by the VivaScan Corporation. The company had made some progress in the development of the non-invasive blood glucose device and approached WPI for statistical assistance in improving their model in order to predict the glucose level more accurately. The main goal of this project was to improve the ability of the non-invasive blood glucose monitor to predict glucose values more precisely. The goal was achieved by finding and implementing the best regression model. The methods included ordinary least squares regression, partial least squares regression, robust regression, weighted least squares regression, local regression, and ridge regression. VivaScan calibration data for seven patients were analyzed in this project. For each of these patients, individual regression models were built and compared based on two factors that evaluate the model's prediction ability. It was determined that partial least squares and ridge regression are the two best methods among those considered in this work. Using these two methods gave better glucose prediction. The additional problem of data reduction to minimize the data collection time was also considered in this work.
APA, Harvard, Vancouver, ISO, and other styles
9

Oxby, Paul W. "Multivariate weighted least squares as a preferable alternative to the determinant criterion for multiresponse parameter estimation." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq22225.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Kim, Wonho. "An improved bus signal priority system for networks with nearside bus stops." Texas A&M University, 2004. http://hdl.handle.net/1969.1/1460.

Full text
Abstract:
Bus Signal Priority (BSP), which has been deployed in many cities around the world, is a traffic signal enhancement strategy that facilitates efficient movement of buses through signalized intersections. Most BSP systems do not work well in transit networks with nearside bus stops because of the uncertainty in dwell time. Unfortunately, most bus stops on arterial roadways in the U.S. are of this type. This dissertation showed that dwell time at nearside bus stops could be modeled using weighted least squares regression. More importantly, the prediction intervals associated with the estimated dwell time were calculated. These prediction intervals were subsequently used in an improved BSP algorithm that attempted to reduce the negative effects of nearside bus stops on BSP operations. The improved BSP algorithm was tested on an urban arterial section of Bellaire Boulevard in Houston, Texas. VISSIM, a microsimulation model, was used to evaluate the performance of the BSP operations. Prior to evaluating the algorithm, the parameters of the microsimulation model were calibrated using an automated Genetic Algorithm based methodology in order to make the model accurately represent the traffic conditions observed in the field. It was shown that the improved BSP algorithm significantly improved bus operations in terms of bus delay. In addition, it was found that the delay to other vehicles on the network was not statistically different from that under other BSP algorithms currently being deployed. It is hypothesized that the new approach would be particularly useful in North America, where many transit systems utilize nearside bus stops in their networks.
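
A minimal sketch of the two statistical ingredients named here, WLS estimation of dwell time and the associated prediction interval, using statsmodels; the variance model (error spread growing with boardings) and the variable names are illustrative assumptions, not the dissertation's fitted model:

```python
# Minimal sketch: WLS regression of dwell time with a prediction interval,
# assuming (for illustration) that error variance grows with boardings.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
boardings = rng.poisson(8, size=200).astype(float)
dwell = 5.0 + 2.2 * boardings + rng.normal(scale=1.0 + 0.4 * boardings)

X = sm.add_constant(boardings)
w = 1.0 / (1.0 + 0.4 * boardings) ** 2        # inverse-variance weights
res = sm.WLS(dwell, X, weights=w).fit()

# 95% prediction interval for a bus with 12 boarding passengers
pred = res.get_prediction(np.array([[1.0, 12.0]]),
                          weights=[1.0 / (1.0 + 0.4 * 12.0) ** 2])
print(pred.summary_frame(alpha=0.05)[["mean", "obs_ci_lower", "obs_ci_upper"]])
```

It is exactly these observation-level intervals (rather than the point estimate alone) that let a priority algorithm hedge against dwell-time uncertainty at a nearside stop.
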
APA, Harvard, Vancouver, ISO, and other styles
11

Zheng, Shimin, and A. K. Gupta. "A New Approach to Statistical Efficiency of Weighted Least Squares Fitting Algorithms for Reparameterization of Nonlinear Regression Models." Digital Commons @ East Tennessee State University, 2012. https://dc.etsu.edu/etsu-works/36.

Full text
Abstract:
We study nonlinear least-squares problems that can be transformed into linear problems by a change of variables. We derive a general formula for the statistically optimal weights and prove that the resulting linear regression gives an optimal estimate (one that satisfies an analogue of the Rao-Cramer lower bound) in the limit of small noise.
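
A hedged sketch of the weighting idea in generic notation (the paper's general formula is not reproduced here): if a change of variables linearizes the model, the noise variance is no longer constant after transformation, and the optimal weights undo that distortion.

```latex
% Model: y_i = f(x_i;\theta) + \varepsilon_i with \varepsilon_i \sim (0,\sigma^2),
% linearized by the change of variables z_i = g(y_i). To first order in the
% noise (delta method),
%   \operatorname{Var}(z_i) \approx [g'(f(x_i;\theta))]^2 \sigma^2,
% so the weighted least-squares estimator
\hat{\theta} = \arg\min_{\theta} \sum_{i=1}^{n} w_i \left( z_i - h(x_i;\theta) \right)^2,
\qquad w_i \propto \frac{1}{\left[ g'(f(x_i;\theta)) \right]^2},
% with h the linearized regression function, attains (in the small-noise
% limit) the analogue of the Rao-Cramer lower bound mentioned above.
```
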
APA, Harvard, Vancouver, ISO, and other styles
12

Nusrat, Nazia. "Development of novel electrical power distribution system state estimation and meter placement algorithms suitable for parallel processing." Thesis, Brunel University, 2015. http://bura.brunel.ac.uk/handle/2438/10902.

Full text
Abstract:
The increasing penetration of distributed generation, responsive loads and emerging smart metering technologies will continue the transformation of distribution systems from passive to active network conditions. In such active networks, State Estimation (SE) tools will be essential in order to enable extensive monitoring and enhanced control technologies. In future distribution management systems, the novel electrical power distribution system SE requires development in a scalable manner in order to accommodate small to massive size networks, and must be operable with limited real-time measurements and within a restricted time frame. Furthermore, a significant phase of new sensor deployment is inevitable to enable distribution system SE, since present-day distribution networks lack the required level of measurement and instrumentation. In the above context, the research presented in this thesis investigates five SE optimization solution methods with various case studies related to expected scenarios of future distribution networks to determine their suitability. Hachtel's Augmented Matrix method is proposed and developed as a potential SE optimizer for distribution systems due to its potential performance characteristics with regard to accuracy and convergence. The Differential Evolution Algorithm (DEA) and the Overlapping Zone Approach (OZA) are investigated to achieve scalability of SE tools, following which the network-division-based OZA is proposed and developed. An OZA requiring additional measurements is also proposed to provide a feasible solution for voltage estimation at a reduced computation cost. Recognising the requirement for additional measurement deployment to enable distribution system SE, the development of a novel meter placement algorithm that provides economical and feasible solutions is demonstrated. The algorithm is strongly focused on reducing the voltage estimation errors and is capable of reducing the error below a desired threshold with limited measurements. The scalable SE solution and meter placement algorithm are applied on a multi-processor system in order to examine the effective reduction of computation time. Significant improvement in computation time is observed in both cases by dividing the problem into smaller segments. However, it is important to note that further network division reduces computation time at the cost of estimation accuracy. Different networks, including both idealised (16-, 77-, 356- and 711-node UKGDS) and real (40- and 43-node EG) distribution network data, are used as appropriate to the requirements of the applications throughout this thesis.
APA, Harvard, Vancouver, ISO, and other styles
13

Can, Mutan Oya. "Comparison Of Regression Techniques Via Monte Carlo Simulation." Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/3/12605175/index.pdf.

Full text
Abstract:
The ordinary least squares (OLS) method is one of the most widely used methods for modelling the functional relationship between variables. However, this estimation procedure relies on several assumptions, and the violation of these assumptions may lead to non-robust estimates. In this study, the simple linear regression model is investigated for conditions in which the distribution of the error terms is Generalised Logistic. Some robust and nonparametric methods such as modified maximum likelihood (MML), least absolute deviations (LAD), Winsorized least squares, least trimmed squares (LTS), Theil and weighted Theil are compared via computer simulation. In order to evaluate estimator performance, the mean, variance, bias, mean square error (MSE) and relative mean square error (RMSE) are computed.
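
A minimal Monte Carlo sketch in the same spirit, comparing just two of the estimators (OLS and Theil) under skewed errors; the gamma error distribution stands in for the Generalised Logistic family used in the thesis:

```python
# Minimal Monte Carlo sketch: bias and MSE of OLS and Theil slope estimators
# under skewed (non-normal) errors. The gamma errors are an illustrative
# stand-in for the Generalised Logistic errors used in the thesis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, reps, true_slope = 30, 2000, 1.5
x = np.linspace(0.0, 1.0, n)

ols_est = np.empty(reps)
theil_est = np.empty(reps)
for r in range(reps):
    e = rng.gamma(shape=2.0, size=n) - 2.0     # mean-zero, right-skewed
    y = 1.0 + true_slope * x + e
    ols_est[r] = np.polyfit(x, y, 1)[0]
    theil_est[r] = stats.theilslopes(y, x)[0]

for name, est in [("OLS", ols_est), ("Theil", theil_est)]:
    bias = est.mean() - true_slope
    mse = np.mean((est - true_slope) ** 2)
    print(f"{name:6s} bias = {bias:+.4f}   MSE = {mse:.4f}")
```
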
APA, Harvard, Vancouver, ISO, and other styles
14

Freeman, Laura J. "Statistical Methods for Reliability Data from Designed Experiments." Diss., Virginia Tech, 2010. http://hdl.handle.net/10919/37729.

Full text
Abstract:
Product reliability is an important characteristic for all manufacturers, engineers and consumers. Industrial statisticians have been planning experiments for years to improve product quality and reliability. However, rarely do experts in the field of reliability have expertise in design of experiments (DOE) and the implications that experimental protocol has on data analysis. Additionally, statisticians who focus on DOE rarely work with reliability data. As a result, analysis methods for lifetime data from experimental designs that are more complex than a completely randomized design are extremely limited. This dissertation provides two new analysis methods for reliability data from life tests. We focus on data from a sub-sampling experimental design. The new analysis methods are illustrated on a popular reliability data set, which contains sub-sampling. Monte Carlo simulation studies evaluate the capabilities of the new modeling methods. Additionally, Monte Carlo simulation studies highlight the principles of experimental design in a reliability context. The dissertation provides multiple methods for statistical inference for the new analysis methods. Finally, implications for the reliability field are discussed, especially in future applications of the new analysis methods.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
15

Portas, Lance O. "An unstructured numerical method for computational aeroacoustics." Thesis, Loughborough University, 2009. https://dspace.lboro.ac.uk/2134/16694.

Full text
Abstract:
The successful application of Computational Aeroacoustics (CAA) requires high accuracy numerical schemes with good dissipation and dispersion characteristics. Unstructured meshes have a greater geometrical flexibility than existing high order structured mesh methods. This work investigates the suitability of unstructured mesh techniques by computing a two-dimensional linearised Euler problem with various discretisation schemes and different mesh types. The goal of the present work is the development of an unstructured numerical method with the high accuracy, low dissipation and low dispersion required to be an effective tool in the study of aeroacoustics. The suitability of the unstructured method is investigated using aeroacoustic test cases taken from CAA Benchmark Workshop proceedings. Comparisons are made with exact solutions and a high order structured method. The higher order structured method was based upon a standard central differencing spatial discretisation. For the unstructured method a vertex-based data structure is employed. A median-dual control volume is used for the finite volume approximation, with the option of using a Green-Gauss gradient approximation technique or a Least Squares approximation. The temporal discretisation used for both the structured and unstructured numerical methods is an explicit Runge-Kutta method with local timestepping. For the unstructured method, the gradient approximation technique is used to compute gradients at each vertex; these are then used to reconstruct the fluxes at the control volume faces. The unstructured mesh types used to evaluate the numerical method include semi-structured and purely unstructured triangular meshes. The semi-structured meshes were created directly from the associated structured mesh. The purely unstructured meshes were created using a commercial paving algorithm. The Least Squares method has the potential to allow high order reconstruction. Results show that a weighted least squares gradient approximation gives better solutions than unweighted and Green-Gauss gradient computation. The solutions are of acceptable accuracy on these problems, with the absolute error of the unstructured method approaching that of a high order structured solution on an equivalent mesh for specific aeroacoustic scenarios.
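
For readers unfamiliar with the vertex-based reconstruction step, here is a minimal sketch of a weighted least-squares gradient estimate at a single vertex; the inverse-distance weighting is a common choice and an assumption here, not necessarily the thesis's exact scheme:

```python
# Minimal sketch of a weighted least-squares gradient reconstruction at a
# mesh vertex: fit grad(u) from differences to neighbouring vertices, with
# inverse-distance weights. Illustrative only; the thesis's discretisation
# details (median-dual volumes, flux reconstruction) are not reproduced.
import numpy as np

def wls_gradient(p0, u0, neighbours, u_nb):
    """Estimate grad(u) at point p0 from neighbouring values (2D)."""
    d = neighbours - p0                       # displacement vectors, (m, 2)
    du = u_nb - u0                            # value differences, (m,)
    w = 1.0 / np.linalg.norm(d, axis=1)       # inverse-distance weights
    A = d * w[:, None]                        # row-scaling both sides is
    b = du * w                                # WLS with weights w**2
    grad, *_ = np.linalg.lstsq(A, b, rcond=None)
    return grad

# Test on a linear field u(x, y) = 3x - 2y, whose gradient is recovered exactly
rng = np.random.default_rng(4)
p0 = np.array([0.5, 0.5])
nb = p0 + rng.normal(scale=0.1, size=(6, 2))
u = lambda p: 3.0 * p[..., 0] - 2.0 * p[..., 1]
print(wls_gradient(p0, u(p0), nb, u(nb)))     # approx [ 3. -2.]
```
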
APA, Harvard, Vancouver, ISO, and other styles
16

Sporre, Göran. "On Some Properties of Interior Methods for Optimization." Doctoral thesis, KTH, Mathematics, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-3472.

Full text
Abstract:

This thesis consists of four independent papers concerning different aspects of interior methods for optimization. Three of the papers focus on theoretical aspects while the fourth one concerns some computational experiments.

The systems of equations solved within an interior method applied to a convex quadratic program can be viewed as weighted linear least-squares problems. In the first paper, it is shown that the sequence of solutions to such problems is uniformly bounded. Further, boundedness of the solution to weighted linear least-squares problems for more general classes of weight matrices than the one in the convex quadratic programming application is obtained as a byproduct.

In many linesearch interior methods for nonconvex nonlinear programming, the iterates can "falsely" converge to the boundary of the region defined by the inequality constraints in such a way that the search directions do not converge to zero, but the step lengths do. In the second paper, it is shown that the multiplier search directions then diverge. Furthermore, the direction of divergence is characterized in terms of the gradients of the equality constraints along with the asymptotically active inequality constraints.

The third paper gives a modification of the analytic center problem for the set of optimal solutions in linear semidefinite programming. Unlike the normal analytic center problem, the solution of the modified problem is the limit point of the central path, without any strict complementarity assumption. For the strict complementarity case, the modified problem is shown to coincide with the normal analytic center problem, which is known to give a correct characterization of the limit point of the central path in that case.

The final paper describes some computational experiments concerning possibilities of reusing previous information when solving systems of equations arising in interior methods for linear programming.

Keywords: Interior method, primal-dual interior method, linear programming, quadratic programming, nonlinear programming, semidefinite programming, weighted least-squares problems, central path.

Mathematics Subject Classification (2000): Primary 90C51, 90C22, 65F20, 90C26, 90C05; Secondary 65K05, 90C20, 90C25, 90C30.
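
The boundedness result described for the first paper is easy to probe empirically; this small demo (an illustration, not a proof) solves the same weighted least-squares problem under randomly varying positive diagonal weights and records the solution norms:

```python
# Numerical illustration: solutions of min ||W^(1/2)(Ax - b)|| stay bounded
# over wildly varying positive diagonal weight matrices W, for a fixed
# full-column-rank A (a classical result; this demo only illustrates it).
import numpy as np

rng = np.random.default_rng(5)
m, n = 20, 5
A = rng.normal(size=(m, n))
b = rng.normal(size=m)

norms = []
for _ in range(2000):
    w = 10.0 ** rng.uniform(-8, 8, size=m)    # weights spanning 16 decades
    Aw = A * np.sqrt(w)[:, None]
    bw = b * np.sqrt(w)
    x, *_ = np.linalg.lstsq(Aw, bw, rcond=None)
    norms.append(np.linalg.norm(x))

print("max ||x(W)|| over random weightings:", max(norms))
```
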

APA, Harvard, Vancouver, ISO, and other styles
17

Nilsson, Max. "Performance Comparison of Localization Algorithms for UWB Measurements with Closely Spaced Anchors." Thesis, Luleå tekniska universitet, Rymdteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-70996.

Full text
Abstract:
Tracking objects or people in an indoor environment has a wide variety of uses in many different areas, similarly to positioning systems outdoors. Indoor positioning systems operate in a very different environment, however, having to deal with obstructions while also requiring high accuracy. A common solution for indoor positioning systems is to have three or more stationary anchor antennas spread out around the perimeter of the area that is to be monitored. The position of a tag antenna moving in range of the anchors can then be found using trilateration. One downside of such a setup is that the anchors must be set up in advance, meaning that rapid deployment of such a system to new areas may be impractical. This thesis aims to investigate the possibility of using a different setup, where three anchors are placed close together, so as to fit in a small hand-held device. This would allow the system to be used without any prior setup of anchors, making rapid deployment into new areas more feasible. The measurements made by the antennas for use in trilateration will always contain noise, and as such, algorithms have had to be developed to obtain an approximation of the position of a tag in the presence of noise. These algorithms have been developed with the setup of three spaced-out anchors in mind, and may not be sufficiently accurate when the anchors are spaced very closely together. To investigate the feasibility of such a setup, this thesis tested four different algorithms with the proposed setup, to see its impact on the performance of the algorithms. The algorithms tested are the Weighted Block Newton, Weighted Clipped Block Newton, Linear Least Squares and Non-Linear Least Squares algorithms. The Linear Least Squares algorithm was also run with measurements that were first run through a simple Kalman filter. Previous studies have used the algorithms to find an estimated position of the tag and compared their efficiency using the positional error of the estimate. This thesis also uses the positional estimates to determine the angular position of the estimate in relation to the anchors, and uses that to compare the algorithms. Measurements were made using DWM1001 Ultra Wideband (UWB) antennas, and four different cases were tested. In case 1 the anchors and tag were 10 meters apart in line-of-sight; case 2 was the same as case 1 but with a person standing between the tag and the anchors. In case 3 the tag was moved behind a wall with an adjacent open door, and in case 4 the tag was in the same place as in case 3 but the door was closed. The Linear Least Squares algorithm using the filtered measurements was found to be the most effective in all cases, with a maximum angular error of less than 5° in the worst case. The worst case here was case 2, showing that the influence of a human body has a strong effect on the UWB signal, causing large errors in the estimates of the other algorithms. The presence of a wall between the anchors and tag was found to have a minimal impact on the angular error, while having a larger effect on the spatial error. Further studies regarding the effects of the human body on UWB signals may be necessary to determine the feasibility of handheld applications, as well as the effect of the tag and/or the anchors moving on the efficiency of the algorithms.
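
A minimal sketch of the Linear Least Squares step in this setting: subtracting one anchor's range equation removes the quadratic term in the unknown position, leaving a small linear system. The closely spaced anchor layout and noise level below are illustrative, not the thesis's DWM1001 test geometry:

```python
# Minimal sketch of linear least squares trilateration from noisy ranges.
import numpy as np

def lls_position(anchors, ranges):
    """anchors: (k, 2) coordinates; ranges: (k,) measured distances."""
    a0, r0 = anchors[0], ranges[0]
    # ||p - ai||^2 = ri^2 minus the same equation for a0 linearizes in p:
    A = 2.0 * (anchors[1:] - a0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

anchors = np.array([[0.0, 0.0], [0.3, 0.0], [0.15, 0.25]])  # closely spaced
tag = np.array([4.0, 9.0])
rng = np.random.default_rng(6)
ranges = np.linalg.norm(anchors - tag, axis=1) + rng.normal(scale=0.05, size=3)

est = lls_position(anchors, ranges)
angle_err = np.degrees(np.arctan2(*est[::-1]) - np.arctan2(*tag[::-1]))
print("estimate:", est, " angular error (deg):", angle_err)
```

With anchors only tens of centimetres apart, the matrix A is nearly rank-deficient, which is exactly why small range errors blow up the positional (though less so the angular) estimate.
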
APA, Harvard, Vancouver, ISO, and other styles
18

Boruvka, Audrey. "Data-driven estimation for Aalen's additive risk model." Thesis, Kingston, Ont. : [s.n.], 2007. http://hdl.handle.net/1974/489.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Luo, Hao. "Some Aspects on Confirmatory Factor Analysis of Ordinal Variables and Generating Non-normal Data." Doctoral thesis, Uppsala universitet, Statistiska institutionen, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-149423.

Full text
Abstract:
This thesis, which consists of five papers, is concerned with various aspects of confirmatory factor analysis (CFA) of ordinal variables and the generation of non-normal data. The first paper studies the performances of different estimation methods used in CFA when ordinal data are encountered.  To take ordinality into account the four estimation methods, i.e., maximum likelihood (ML), unweighted least squares, diagonally weighted least squares, and weighted least squares (WLS), are used in combination with polychoric correlations. The effect of model sizes and number of categories on the parameter estimates, their standard errors, and the common chi-square measure of fit when the models are both correct and misspecified are examined. The second paper focuses on the appropriate estimator of the polychoric correlation when fitting a CFA model. A non-parametric polychoric correlation coefficient based on the discrete version of Spearman's rank correlation is proposed to contend with the situation of non-normal underlying distributions. The simulation study shows the benefits of using the non-parametric polychoric correlation under conditions of non-normality. The third paper raises the issue of simultaneous factor analysis. We study the effect of pooling multi-group data on the estimation of factor loadings. Given the same factor loadings but different factor means and correlations, we investigate how much information is lost by pooling the groups together and only estimating the combined data set using the WLS method. The parameter estimates and their standard errors are compared with results obtained by multi-group analysis using ML. The fourth paper uses a Monte Carlo simulation to assess the reliability of the Fleishman's power method under various conditions of skewness, kurtosis, and sample size. Based on the generated non-normal samples, the power of D'Agostino's (1986) normality test is studied. The fifth paper extends the evaluation of algorithms to the generation of multivariate non-normal data.  Apart from the requirement of generating reliable skewness and kurtosis, the generated data also need to possess the desired correlation matrices.  Four algorithms are investigated in terms of simplicity, generality, and reliability of the technique.
APA, Harvard, Vancouver, ISO, and other styles
20

Davis, Brett Andrew. "Inference for Discrete Time Stochastic Processes using Aggregated Survey Data." The Australian National University. Faculty of Economics and Commerce, 2003. http://thesis.anu.edu.au./public/adt-ANU20040806.104137.

Full text
Abstract:
We consider a longitudinal system in which transitions between the states are governed by a discrete time finite state space stochastic process X. Our aim, using aggregated sample survey data of the form typically collected by official statistical agencies, is to undertake model based inference for the underlying process X. We will develop inferential techniques for continuing sample surveys of two distinct types. First, longitudinal surveys in which the same individuals are sampled in each cycle of the survey. Second, cross-sectional surveys which sample the same population in successive cycles but with no attempt to track particular individuals from one cycle to the next. Some of the basic results have appeared in Davis et al (2001) and Davis et al (2002).

Longitudinal surveys provide data in the form of transition frequencies between the states of X. In Chapter Two we develop a method for modelling and estimating the one-step transition probabilities in the case where X is a non-homogeneous Markov chain and transition frequencies are observed at unit time intervals. However, due to their expense, longitudinal surveys are typically conducted at widely, and sometimes irregularly, spaced time points. That is, the observable frequencies pertain to multi-step transitions. Continuing to assume the Markov property for X, in Chapter Three, we show that these multi-step transition frequencies can be stochastically interpolated to provide accurate estimates of the one-step transition probabilities of the underlying process. These estimates for a unit time increment can be used to calculate estimates of expected future occupation time, conditional on an individual's state at initial point of observation, in the different states of X.

For reasons of cost, most statistical collections run by official agencies are cross-sectional sample surveys. The data observed from an on-going survey of this type are marginal frequencies in the states of X at a sequence of time points. In Chapter Four we develop a model based technique for estimating the marginal probabilities of X using data of this form. Note that, in contrast to the longitudinal case, the Markov assumption does not simplify inference based on marginal frequencies. The marginal probability estimates enable estimation of future occupation times (in each of the states of X) for an individual of unspecified initial state. However, in the applications of the technique that we discuss (see Sections 4.4 and 4.5) the estimated occupation times will be conditional on both gender and initial age of individuals.

The longitudinal data envisaged in Chapter Two is that obtained from the surveillance of the same sample in each cycle of an on-going survey. In practice, to preserve data quality it is necessary to control respondent burden using sample rotation. This is usually achieved using a mechanism known as rotation group sampling. In Chapter Five we consider the particular form of rotation group sampling used by the Australian Bureau of Statistics in their Monthly Labour Force Survey (from which official estimates of labour force participation rates are produced). We show that our approach to estimating the one-step transition probabilities of X from transition frequencies observed at incremental time intervals, developed in Chapter Two, can be modified to deal with data collected under this sample rotation scheme. Furthermore, we show that valid inference is possible even when the Markov property does not hold for the underlying process.
APA, Harvard, Vancouver, ISO, and other styles
21

Walczak, Katarzyna I. "Prototype decision support framework using geospatial technologies for analysing human health risk." Thesis, Queensland University of Technology, 2017. https://eprints.qut.edu.au/103630/1/Katarzyna%20Izabella_Walczak_Thesis.pdf.

Full text
Abstract:
This thesis concentrates on the development of a prototype Decision Support Framework based on the landscape epidemiology concept and using GIS to determine human health risk in Semarang (Indonesia). This site was selected as representative of a rapidly urbanizing area in a developing country. The decision support framework examines climatic, landscape and socio-economic factors identified as having significant impacts on water quality and subsequent causation of waterborne and water-related diseases. The research outcomes potentially may be applied worldwide to identify and isolate areas most vulnerable to the effects of the mentioned diseases thus improving quality of life in developing countries.
APA, Harvard, Vancouver, ISO, and other styles
22

Katsikatsou, Myrsini. "Composite Likelihood Estimation for Latent Variable Models with Ordinal and Continuous, or Ranking Variables." Doctoral thesis, Uppsala universitet, Statistiska institutionen, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-188342.

Full text
Abstract:
The estimation of latent variable models with ordinal and continuous, or ranking variables is the research focus of this thesis. The existing estimation methods are discussed and a composite likelihood approach is developed. The main advantages of the new method are its low computational complexity, which remains unchanged regardless of the model size, and that it yields an asymptotically unbiased, consistent, and normally distributed estimator. The thesis consists of four papers. The first one investigates the two main formulations of the unrestricted Thurstonian model for ranking data along with the corresponding identification constraints. It is found that the extra identification constraints required in one of them lead to unreliable estimates unless the constraints coincide with the true values of the fixed parameters. In the second paper, a pairwise likelihood (PL) estimation is developed for factor analysis models with ordinal variables. The performance of PL is studied in terms of bias and mean squared error (MSE) and compared with that of the conventional estimation methods via a simulation study and through some real data examples. It is found that the PL estimates and standard errors have very small bias and MSE, both decreasing with the sample size, and that the method is competitive with the conventional ones. The results of the first two papers lead to the next one, where PL estimation is adjusted to the unrestricted Thurstonian ranking model. As before, the performance of the proposed approach is studied through a simulation study with respect to relative bias and relative MSE and in comparison with the conventional estimation methods. The conclusions are similar to those of the second paper. The last paper extends the PL estimation to the whole structural equation modeling framework, where data may include both ordinal and continuous variables as well as covariates. The approach is demonstrated through an example run in the R software. The code used has been incorporated in the R package lavaan (version 0.5-11).
APA, Harvard, Vancouver, ISO, and other styles
23

Frazão, Rodrigo José Albuquerque. "MÉTODOS ALTERNATIVOS PARA ESTIMAÇÃO DE ESTADO EM SISTEMAS DE ENERGIA ELÉTRICA." Universidade Federal do Maranhão, 2012. http://tedebc.ufma.br:8080/jspui/handle/tede/475.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
The state estimation process applied to electric power systems aims to provide a trustworthy, coherent and complete "image" of the system operation, allowing efficient monitoring. State estimation is one of the most important functions of energy management systems. In this work, alternative methods of state estimation are proposed for electric power systems at the transmission, subtransmission and distribution levels. For transmission systems, two hybrid methods are proposed, considering the insertion of conventional measurements combined with phasor measurements based on the phasor measurement unit (PMU). To estimate the state in subtransmission systems, an alternative method is proposed which, upon failures of active and/or reactive power meters in the substations, uses a load forecasting model based on a similar-days criterion and the application of artificial neural networks. This load forecasting process is used as a generator of pseudo-measurements in the state estimation problem, which proceeds through the propagation of phasor measurements provided by a PMU placed at the boundary busbar. For distribution system state estimation, the proposed method applies weighted least squares with equality constraints, modifying the measurement set and the state variables. A methodology is also proposed for evaluating the availability of the PMU's measurement channels for observability analysis. The application of the proposed methods to test systems shows that the results are satisfactory.
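
A minimal sketch of the weighted least-squares core that all of these estimators share, on a linearized (DC) three-bus model; the network, measurement set and noise levels are illustrative assumptions, and the thesis's hybrid PMU handling and equality constraints are not shown:

```python
# Minimal sketch of WLS state estimation on a DC (linearized) model:
# z = H x + e, solved via the gain matrix G = H' W H.
import numpy as np

rng = np.random.default_rng(7)

# 3-bus DC example: states are voltage angles at buses 2 and 3 (bus 1 slack).
# Measurements: flows 1-2, 1-3, 2-3 and injection at bus 2 (b = 10 p.u.).
H = np.array([[-10.0,   0.0],    # P12 = 10*(th1 - th2), th1 = 0
              [  0.0, -10.0],    # P13
              [ 10.0, -10.0],    # P23 = 10*(th2 - th3)
              [ 20.0, -10.0]])   # P2  = P21 + P23
x_true = np.array([-0.05, -0.10])
sigma = np.array([0.01, 0.01, 0.01, 0.02])
z = H @ x_true + sigma * rng.normal(size=4)

W = np.diag(1.0 / sigma**2)
G = H.T @ W @ H                          # gain matrix
x_hat = np.linalg.solve(G, H.T @ W @ z)
print("estimated angles (rad):", x_hat)
```
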
APA, Harvard, Vancouver, ISO, and other styles
24

Oqielat, Moa'ath Nasser. "Modelling water droplet movement on a leaf surface." Thesis, Queensland University of Technology, 2009. https://eprints.qut.edu.au/30232/1/Moa%27ath_Oqielat_Thesis.pdf.

Full text
Abstract:
The central aim for the research undertaken in this PhD thesis is the development of a model for simulating water droplet movement on a leaf surface and to compare the model behavior with experimental observations. A series of five papers has been presented to explain systematically the way in which this droplet modelling work has been realised. Knowing the path of the droplet on the leaf surface is important for understanding how a droplet of water, pesticide, or nutrient will be absorbed through the leaf surface. An important aspect of the research is the generation of a leaf surface representation that acts as the foundation of the droplet model. Initially a laser scanner is used to capture the surface characteristics for two types of leaves in the form of a large scattered data set. After the identification of the leaf surface boundary, a set of internal points is chosen over which a triangulation of the surface is constructed. We present a novel hybrid approach for leaf surface fitting on this triangulation that combines Clough-Tocher (CT) and radial basis function (RBF) methods to achieve a surface with a continuously turning normal. The accuracy of the hybrid technique is assessed using numerical experimentation. The hybrid CT-RBF method is shown to give good representations of Frangipani and Anthurium leaves. Such leaf models facilitate an understanding of plant development and permit the modelling of the interaction of plants with their environment. The motion of a droplet traversing this virtual leaf surface is affected by various forces including gravity, friction and resistance between the surface and the droplet. The innovation of our model is the use of thin-film theory in the context of droplet movement to determine the thickness of the droplet as it moves on the surface. Experimental verification shows that the droplet model captures reality quite well and produces realistic droplet motion on the leaf surface. Most importantly, we observed that the simulated droplet motion follows the contours of the surface and spreads as a thin film. In the future, the model may be applied to determine the path of a droplet of pesticide along a leaf surface before it falls from or comes to a standstill on the surface. It will also be used to study the paths of many droplets of water or pesticide moving and colliding on the surface.
APA, Harvard, Vancouver, ISO, and other styles
25

Oqielat, Moa'ath Nasser. "Modelling water droplet movement on a leaf surface." Queensland University of Technology, 2009. http://eprints.qut.edu.au/30232/.

Full text
Abstract:
The central aim for the research undertaken in this PhD thesis is the development of a model for simulating water droplet movement on a leaf surface and to compare the model behavior with experimental observations. A series of five papers has been presented to explain systematically the way in which this droplet modelling work has been realised. Knowing the path of the droplet on the leaf surface is important for understanding how a droplet of water, pesticide, or nutrient will be absorbed through the leaf surface. An important aspect of the research is the generation of a leaf surface representation that acts as the foundation of the droplet model. Initially a laser scanner is used to capture the surface characteristics for two types of leaves in the form of a large scattered data set. After the identification of the leaf surface boundary, a set of internal points is chosen over which a triangulation of the surface is constructed. We present a novel hybrid approach for leaf surface fitting on this triangulation that combines Clough-Tocher (CT) and radial basis function (RBF) methods to achieve a surface with a continuously turning normal. The accuracy of the hybrid technique is assessed using numerical experimentation. The hybrid CT-RBF method is shown to give good representations of Frangipani and Anthurium leaves. Such leaf models facilitate an understanding of plant development and permit the modelling of the interaction of plants with their environment. The motion of a droplet traversing this virtual leaf surface is affected by various forces including gravity, friction and resistance between the surface and the droplet. The innovation of our model is the use of thin-film theory in the context of droplet movement to determine the thickness of the droplet as it moves on the surface. Experimental verification shows that the droplet model captures reality quite well and produces realistic droplet motion on the leaf surface. Most importantly, we observed that the simulated droplet motion follows the contours of the surface and spreads as a thin film. In the future, the model may be applied to determine the path of a droplet of pesticide along a leaf surface before it falls from or comes to a standstill on the surface. It will also be used to study the paths of many droplets of water or pesticide moving and colliding on the surface.
APA, Harvard, Vancouver, ISO, and other styles
26

Pereira, Larissa Rocha. "Ajuste de curva B-spline fechada com peso." Universidade Federal de Uberlândia, 2014. https://repositorio.ufu.br/handle/123456789/14947.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
The aim of this work is to develop a method of curve fitting using closed B-splines for application to the reconstruction of cross-sections of objects. Since in this study the sections are specifically closed curves, a method was implemented to close the B-spline curve in such a way that the curve is smooth at the closing point. The developed method is based on least squares approximation with weights, which requires that the curve be as close as possible to the real curve. The weights in this case control the tightness of the approximation at each data point; these points represent the coordinates of the object section to be reconstructed. Moreover, adjustments and constraints on the curve have been proposed so that it yields a better result and represents the desired cross-section more accurately. Particular characteristics of the curve were used to help enforce and define these adjustments. For the analysis, B-spline curves were obtained using the developed method, showing good results.
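
A minimal sketch of a weighted, closed (periodic) B-spline fit using SciPy's smoothing splines; the contour, weights and smoothing factor are illustrative choices rather than the thesis's formulation:

```python
# Minimal sketch: weighted, periodic (closed) B-spline fit to noisy
# cross-section points. Per-point weights pull the curve closer to trusted
# points; the periodicity constraint keeps it smooth at the joining point.
import numpy as np
from scipy.interpolate import splprep, splev

rng = np.random.default_rng(8)
t = np.linspace(0.0, 2.0 * np.pi, 60, endpoint=False)
r = 1.0 + 0.2 * np.cos(3 * t)                 # a closed section contour
x = r * np.cos(t) + rng.normal(scale=0.02, size=t.size)
y = r * np.sin(t) + rng.normal(scale=0.02, size=t.size)

w = np.full(t.size, 1.0)
w[::5] = 10.0          # illustrative: trust every 5th point much more

tck, u = splprep([x, y], w=w, per=True, s=0.1)   # modest smoothing
xs, ys = splev(np.linspace(0, 1, 400), tck)
print("closed fit sampled at", len(xs), "points; curve closes:",
      np.allclose([xs[0], ys[0]], splev(1.0, tck), atol=1e-8))
```
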
Master's degree in Mechanical Engineering
APA, Harvard, Vancouver, ISO, and other styles
27

Zhang, Desheng. "The Effect of Psychometric Parallelism among Predictors on the Efficiency of Equal Weights and Least Squares Weights in Multiple Regression." Thesis, University of North Texas, 1996. https://digital.library.unt.edu/ark:/67531/metadc278996/.

Full text
Abstract:
There are several conditions for applying equal weights as an alternative to least squares weights. Psychometric parallelism, one of the conditions, has been suggested as a necessary and sufficient condition for equal-weights aggregation. The purpose of this study is to investigate the effect of psychometric parallelism among predictors on the efficiency of equal weights and least squares weights. Target correlation matrices with 10,000 cases were simulated so that the matrices had varying degrees of psychometric parallelism. Five hundred samples with six ratios of observation to predictor = 5/1, 10/1, 20/1, 30/1, 40/1, and 50/1 were drawn from each population. The efficiency is interpreted as the accuracy and the predictive power estimated by the weighting methods. The accuracy is defined by the deviation between the population R² and the sample R² . The predictive power is referred to as the population cross-validated R² and the population mean square error of prediction. The findings indicate there is no statistically significant relationship between the level of psychometric parallelism and the accuracy of least squares weights. In contrast, the correlation between the level of psychometric parallelism and the accuracy of equal weights is significantly negative. Under different conditions, the minimum p value of χ² for testing psychometric parallelism among predictors is also different in order to prove equal weights more powerful than least squares weights. The higher the number of predictors is, the higher the minimum p value. The higher the ratio of observation to predictor is, the higher the minimum p value. The higher the magnitude of intercorrelations among predictors is, the lower the minimum p value. This study demonstrates that the most frequently used levels of significance, 0.05 and 0.01, are no longer the only p values for testing the null hypotheses of psychometric parallelism among predictors when replacing least squares weights with equal weights.
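
A minimal Monte Carlo sketch of the underlying trade-off (not this dissertation's simulation design): with exchangeable, roughly parallel predictors and few observations per predictor, a unit-weighted composite can cross-validate as well as least squares weights. The correlation structure below is an illustrative assumption:

```python
# Minimal sketch: cross-validated R^2 of OLS weights vs. equal (unit) weights
# under a roughly parallel predictor structure and a 5/1 observation ratio.
import numpy as np

rng = np.random.default_rng(9)
p, n_train, n_test, reps = 6, 30, 10_000, 500
rho_xx, rho_xy = 0.5, 0.3

# Population covariance: exchangeable predictors, equal validities for y
S = np.full((p + 1, p + 1), rho_xx)
np.fill_diagonal(S, 1.0)
S[-1, :-1] = S[:-1, -1] = rho_xy              # last variable is y
C = np.linalg.cholesky(S)

r2_ols, r2_equal = [], []
for _ in range(reps):
    train = rng.normal(size=(n_train, p + 1)) @ C.T
    test = rng.normal(size=(n_test, p + 1)) @ C.T
    Xtr, ytr = train[:, :p], train[:, p]
    Xte, yte = test[:, :p], test[:, p]
    beta, *_ = np.linalg.lstsq(
        np.column_stack([np.ones(n_train), Xtr]), ytr, rcond=None)
    yhat_ols = beta[0] + Xte @ beta[1:]
    yhat_eq = Xte.mean(axis=1)                # unit-weighted composite
    for store, yhat in [(r2_ols, yhat_ols), (r2_equal, yhat_eq)]:
        store.append(np.corrcoef(yhat, yte)[0, 1] ** 2)

print(f"cross-validated R^2: OLS = {np.mean(r2_ols):.3f}, "
      f"equal weights = {np.mean(r2_equal):.3f}")
```
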
APA, Harvard, Vancouver, ISO, and other styles
28

Ly, Mouhamadou Moustapha. "Trois essais sur les effets de la politique budgétaire dans les pays en développement." Phd thesis, Université d'Auvergne - Clermont-Ferrand I, 2011. http://tel.archives-ouvertes.fr/tel-00606175.

Full text
Abstract:
Interest in the use of fiscal policy as a tool for stabilization and stimulus has clearly revived in recent years. After nearly three decades dominated by neoclassical ideas, the financial crisis of 2008 marked a return to Keynesian ideas on the effectiveness of the fiscal instrument. This thesis addresses this theme and attempts to characterize fiscal policy in the context of developing countries; its ultimate objective is to determine to what extent this economic policy tool is effective for these countries. Chapter 2 deals with the effects of fiscal policy surprises. In other words, starting from a structural VAR model, this part asks whether the budget can be used as a surprise instrument to stimulate an economy, and what challenges such a measure poses in the context of a developing economy. The third chapter, using a gravity model, analyzes the relationships between the fiscal situation in advanced economies as well as in emerging countries and investment flows toward middle-income economies. This study shows that a crowding-out effect between (developed and emerging) countries exists, but also that the world economy is moving toward a new paradigm. The last chapter studies the cyclicality of fiscal policies for a sample of sub-Saharan African and Latin American countries. The chosen method makes it possible to track the evolution of the procyclicality of fiscal policies from year to year and to show that developing countries, especially African ones, are progressively adopting increasingly disciplined and prudent policies.
APA, Harvard, Vancouver, ISO, and other styles
29

Gulliksson, Mårten. "Algorithms for overdetermined systems of equations." Doctoral thesis, Umeå universitet, Institutionen för datavetenskap, 1993. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-111107.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Pavlík, Vít. "Sledování objektů ve videosekvencích." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2016. http://www.nusl.cz/ntk/nusl-255397.

Full text
Abstract:
Master's thesis addresses long-term image tracking in video sequences. The project was intended to demonstrate the techniques that are needed for handling long-term tracking. It primarily describes the techniques whose application leads to the construction of an adaptive tracking system that is able to deal appropriately with changes in the appearance of the object and the unstable character of the surrounding environment.
APA, Harvard, Vancouver, ISO, and other styles
31

Yeo, Wan Sieng. "Adaptive Soft Sensors for Non-Gaussian Chemical Process Plant Data Based on Locally Weighted Partial Least Square." Thesis, Curtin University, 2019. http://hdl.handle.net/20.500.11937/77028.

Full text
Abstract:
This thesis proposes an improved algorithm notable for its ability to deal with non-Gaussian distributed and nonlinear data and missing measurements. It was formulated through a modification of locally weighted partial least squares, incorporating an ensemble method, a Kernel function, independent component analysis and expectation maximisation algorithms. The algorithm was then tested using process data generated from six simulated plants. Simulation results indicate the superiority of this algorithm compared to existing algorithms.
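
A minimal sketch of the locally weighted PLS idea that the thesis builds on (the ensemble, Kernel, ICA and EM extensions are not shown): for each query, similarity weights reweight the data before an ordinary PLS fit. Weighting via sqrt-scaled, weighted-centered data is one common recipe and an assumption here; note that scikit-learn's PLSRegression also re-centers internally, which this sketch tolerates:

```python
# Minimal sketch of a locally weighted PLS prediction for one query sample.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def lwpls_predict(X, y, x_query, n_components=2, phi=1.0):
    d = np.linalg.norm(X - x_query, axis=1)
    w = np.exp(-d / (phi * d.std()))          # similarity weights
    xm = np.average(X, axis=0, weights=w)     # weighted means
    ym = np.average(y, weights=w)
    sw = np.sqrt(w)[:, None]                  # sqrt-weight row scaling
    pls = PLSRegression(n_components=n_components, scale=False)
    pls.fit(sw * (X - xm), sw.ravel() * (y - ym))
    return ym + pls.predict((x_query - xm)[None, :])[0, 0]

rng = np.random.default_rng(10)
X = rng.normal(size=(200, 5))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.05 * rng.normal(size=200)
xq = X[0]
print("prediction:", lwpls_predict(X, y, xq), " observed:", y[0])
```
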
APA, Harvard, Vancouver, ISO, and other styles
32

Carvalho, Breno Elias Bretas de. "Estimação de estado: a interpretação geométrica aplicada ao processamento de erros grosseiros em medidas." Universidade de São Paulo, 2013. http://www.teses.usp.br/teses/disponiveis/18/18154/tde-29042013-114003/.

Full text
Abstract:
This work was proposed with the objective of implementing a computer program to estimate the states (complex nodal voltages) in an electrical power system (EPS) and to apply alternative methods for processing gross errors (GEs), based on the geometrical interpretation of the measurement errors and the innovation concept. Through the geometrical interpretation, BRETAS et al. (2009), BRETAS; PIERETI (2010), BRETAS; BRETAS; PIERETI (2011) and BRETAS et al. (2013) proved mathematically that the measurement error is composed of detectable and undetectable components, and also showed that the detectable component of the error is exactly the residual of the measurement. The methods hitherto used for processing GEs consider only the detectable component of the error and, as a consequence, may fail. In an attempt to overcome this limitation, and based on the works cited previously, two alternative methodologies for processing measurements with GEs were studied and implemented. The first one is based on the direct analysis of the components of the errors of the measurements; the second one, in a similar way to the traditional methods, is based on the analysis of the measurement residuals. However, the differential of the second proposed methodology lies in the fact that it does not consider a fixed threshold value for detecting measurements with GEs. In this case, we adopted a new threshold value (TV), characteristic of each measurement, as presented in the work of PIERETI (2011). Furthermore, in order to reinforce this theory, we propose an alternative way to calculate these thresholds, by analyzing the geometry of the probability density function of the multivariate normal distribution of the measurement residuals.
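
For context, here is a minimal sketch of the classical residual-based baseline that these methods refine, the largest normalized residual test; the network matrix and noise levels are synthetic, and the thesis's detectable/undetectable error decomposition and per-measurement TV thresholds go beyond this:

```python
# Minimal sketch of residual-based gross error detection in WLS state
# estimation: the largest normalized residual test.
import numpy as np

rng = np.random.default_rng(11)
m, n = 12, 4
H = rng.normal(size=(m, n))
sigma = np.full(m, 0.02)
x_true = rng.normal(size=n)
z = H @ x_true + sigma * rng.normal(size=m)
z[5] += 0.5                                    # inject a gross error

W = np.diag(1.0 / sigma**2)
G = H.T @ W @ H
x_hat = np.linalg.solve(G, H.T @ W @ z)
r = z - H @ x_hat                              # residuals

# Residual covariance: Omega = R - H G^{-1} H', with R = diag(sigma^2)
R = np.diag(sigma**2)
Omega = R - H @ np.linalg.solve(G, H.T)
r_norm = np.abs(r) / np.sqrt(np.diag(Omega))   # normalized residuals

suspect = int(np.argmax(r_norm))
print("largest normalized residual at measurement", suspect,
      "value %.1f (threshold commonly 3.0)" % r_norm[suspect])
```
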
APA, Harvard, Vancouver, ISO, and other styles
33

Zhang, Zongjun. "Adaptive Robust Regression Approaches in data analysis and their Applications." University of Cincinnati / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1445343114.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Gaspard, Guetchine. "FLOOD LOSS ESTIMATE MODEL: RECASTING FLOOD DISASTER ASSESSMENT AND MITIGATION FOR HAITI, THE CASE OF GONAIVES." OpenSIUC, 2013. https://opensiuc.lib.siu.edu/theses/1236.

Full text
Abstract:
This study aims at developing a model to estimate the flood damage cost caused in Gonaives, Haiti by Hurricane Jeanne in 2004. To reach this goal, the influence of income, inundation duration, inundation depth, slope, population density and distance to major roads on the loss costs was investigated. Surveyed data were analyzed using Excel and ArcGIS 10 software. Ordinary least squares and geographically weighted regression analyses were used to predict flood damage costs, and the estimates were then delineated using the Voronoi geostatistical map tool. The factors account for as much as 83% of the costs. The flood damage cost for a household varies between 24,315 and 37,693 Haitian Gourdes (approximately 607.875 through 942.325 U.S. Dollars). Severe damages were spotted in the urban area and in the rural section of Bassin, whereas very low and low losses are essentially found in Labranle. The urban area was more severely affected than the rural area: damages in the urban area are estimated at 41,206,869.57 USD, against 698,222,174.10 Haitian Gourdes (17,455,554.35 USD) in the rural area. In the urban part, damages were more severe in Raboteau-Jubilée and in Downtown, but Bigot-Parc Vincent had the highest overall damage cost, estimated at 9,729,368.95 USD; the lowest cost, 7,602,040.42 USD, was recorded in Raboteau. Approximately 39.38% of the rural area underwent very low to moderate damages. Bassin was the most severely struck by the 2004 floods, but Bayonnais turned out to have the highest loss cost: 4,988,487.66 USD. Bassin along with Labranle had the least damage cost, 2,956,131.11 and 2,268,321.41 USD respectively. Based on the findings, we recommended the implementation and diversification of income-generating activities; the maintenance, improvement and cleaning of drains, sewers and gullies; and the establishment of conservation practices upstream of the watersheds. In addition, the model should be applied and validated using actual official records as reference data. Finally, the use of a calculation-based approach is suggested to determine flood damage costs in order to reduce subjectivity during surveys.
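
For readers unfamiliar with geographically weighted regression, the sketch below illustrates the core idea: at each location a weighted least-squares fit is computed, with weights decaying with distance. The coordinates, covariates, bandwidth and the spatially varying depth effect are synthetic placeholders, not the Gonaives survey data.

```python
import numpy as np

# Toy geographically weighted regression (GWR): local WLS fits whose
# weights come from a Gaussian spatial kernel around the query point.
rng = np.random.default_rng(11)
n = 300
coords = rng.uniform(0, 10, size=(n, 2))        # household locations
income = rng.uniform(1, 5, size=n)
depth = rng.uniform(0.1, 3, size=n)             # inundation depth (m)
beta_depth = 5 + coords[:, 0]                   # depth effect varies in space
cost = 10 + 2 * income + beta_depth * depth + rng.standard_normal(n)

X = np.column_stack([np.ones(n), income, depth])
bandwidth = 2.0

def gwr_at(point):
    d2 = ((coords - point) ** 2).sum(axis=1)
    w = np.exp(-0.5 * d2 / bandwidth**2)        # spatial kernel weights
    Xw = X * w[:, None]
    return np.linalg.solve(X.T @ Xw, Xw.T @ cost)   # local WLS coefficients

print("local depth coefficient, west:", gwr_at(np.array([1.0, 5.0]))[2])
print("local depth coefficient, east:", gwr_at(np.array([9.0, 5.0]))[2])
```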
APA, Harvard, Vancouver, ISO, and other styles
35

Haberstich, Cécile. "Adaptive approximation of high-dimensional functions with tree tensor networks for Uncertainty Quantification." Thesis, Ecole centrale de Nantes, 2020. http://www.theses.fr/2020ECDN0045.

Full text
Abstract:
Les problèmes de quantification d'incertitudes des modèles numériques nécessitent de nombreuses simulations, souvent très coûteuses (en temps de calcul et/ou en mémoire). C'est pourquoi il est essentiel de construire des modèles approchés qui sont moins coûteux à évaluer. En pratique, si la réponse d'un modèle numérique est représentée par une fonction, on cherche à en construire une approximation.L'objectif de cette thèse est de construire l'approximation d'une fonction qui soit contrôlée tout en utilisant le moins d'évaluations possible de la fonction.Dans un premier temps, nous proposons une nouvelle méthode basée sur les moindres carrés pondérés pour construire l'approximation d'une fonction dans un espace vectoriel. Nous prouvons que la projection vérifie une propriété de stabilité numérique presque sûrement et une propriété de quasi-optimalité en espérance. En pratique on observe que la taille de l'échantillon est plus proche de la dimension de l'espace d'approximation que pour les autres techniques de moindres carrés pondérées existantes.Pour l'approximation en grande dimension et afin d’exploiter de potentielles structures de faible dimension, nous considérons dans cette thèse des approximations dans des formats de tenseurs basés sur des arbres. Ces formats admettent une paramétrisation multilinéaire avec des paramètres formant un réseau de tenseurs de faible ordre et sont ainsi également appelés réseaux de tenseurs basés sur des arbres. Dans cette thèse, nous proposons un algorithme pour construire l'approximation de fonctions dans des formats de tenseurs basés sur des arbres. Il consiste à construire une hiérarchie de sous-espaces imbriqués associés aux différents niveaux de l'arbre. La construction de ces espaces s'appuie sur l'analyse en composantes principales étendue aux fonctions multivariées et sur l'utilisation de la nouvelle méthode des moindres carrés pondérés. Afin de réduire le nombre d'évaluations nécessaires pour construire l'approximation avec une certaine précision, nous proposons des stratégies adaptatives pour le contrôle de l'erreur de discrétisation, la sélection de l'arbre, le contrôle des rangs et l'estimation des composantes principales
Uncertainty quantification for numerical models requires many simulations, often very costly in computation time and/or memory. It is therefore essential to build surrogate models that are cheaper to evaluate. In practice, the output of a numerical model is represented by a function, and the objective is to construct an approximation of it. The aim of this thesis is to construct a controlled approximation of a function while using as few evaluations as possible. First, we propose a new method based on weighted least squares to construct the approximation of a function in a linear approximation space. We prove that the projection satisfies a numerical stability property almost surely and a quasi-optimality property in expectation. In practice, we observe that the sample size is closer to the dimension of the approximation space than with existing weighted least-squares methods. For high-dimensional approximation, and in order to exploit potential low-rank structures of functions, we consider the model class of functions in tree-based tensor formats. These formats admit a multilinear parametrization with parameters forming a tree network of low-order tensors and are therefore also called tree tensor networks. In this thesis we propose an algorithm for approximating functions in tree-based tensor formats. It consists in constructing a hierarchy of nested subspaces associated with the different levels of the tree. The construction of these subspaces relies on principal component analysis extended to multivariate functions and on the new weighted least-squares method. To reduce the number of evaluations necessary to build the approximation to a certain precision, we propose adaptive strategies for the control of the discretization error, the tree selection, the control of the ranks and the estimation of the principal components.
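
The optimal weighted least-squares projection mentioned above can be sketched in one dimension with a Legendre basis: points are drawn from the inverse-Christoffel density and weighted accordingly. The target function, basis size and sample size below are illustrative choices, not the thesis's experiments.

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(0)
m = 8  # dimension of the polynomial approximation space

def basis(x):
    # Legendre polynomials orthonormal w.r.t. the uniform measure on [-1, 1]
    return legendre.legvander(x, m - 1) * np.sqrt(2 * np.arange(m) + 1)

def density(x):
    # optimal sampling density (inverse Christoffel function) w.r.t. dx/2
    return (basis(x) ** 2).sum(axis=1) / m

def sample_optimal(n):
    # rejection sampling; the density is bounded by m on [-1, 1]
    out = []
    while len(out) < n:
        x = rng.uniform(-1.0, 1.0, size=4 * n)
        u = rng.uniform(0.0, m, size=4 * n)
        out.extend(x[u < density(x)])
    return np.array(out[:n])

f = lambda x: np.exp(x) * np.sin(3 * x)   # function to approximate

x = sample_optimal(3 * m)                 # sample size close to dim(space)
w = 1.0 / density(x)                      # weights w = m / sum_j phi_j(x)^2
sw = np.sqrt(w)
coef, *_ = np.linalg.lstsq(basis(x) * sw[:, None], f(x) * sw, rcond=None)

xt = np.linspace(-1, 1, 1001)
print("max error:", np.max(np.abs(f(xt) - basis(xt) @ coef)))
```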
APA, Harvard, Vancouver, ISO, and other styles
36

Oliveira, Bráulio César de. "Estimação de estados em sistemas de distribuição: uma abordagem trifásica e descentralizada." Universidade Federal de Juiz de Fora (UFJF), 2016. https://repositorio.ufjf.br/jspui/handle/ufjf/3141.

Full text
Abstract:
O presente trabalho tem por objetivo apresentar uma metodologia para estimação de estados em sistemas de distribuição de energia elétrica. São utilizadas como variáveis de estado as correntes nos ramos. As medições são obtidas por meio de medições fasoriais sincronizadas (Phasor Measurement Units - PMUs), sendo que os tipos de medidas advindos desses equipamentos são as tensões nodais e as correntes nos ramos. A abordagem é trifásica, portanto representa as características próprias de um sistema de distribuição. A metodologia consiste em resolver um problema de otimização não linear cuja função objetivo associa o erro quadrático das medidas em relação aos estados estimados sujeito às restrições de carga das barras da rede que não possuem PMUs instaladas baseadas em estimativas de cargas obtidas para o instante “t-1”, partindo-se da premissa que em curtos intervalos de tempo a carga não sofre grandes variações, sendo esta em conjunto com a abordagem trifásica as principais contribuições deste trabalho. Outra contribuição do trabalho é a descentralização; com esta técnica pode-se dividir uma determinada rede em vários subsistemas que podem ser resolvidos de forma separada e independente. Isso torna o processo mais rápido do ponto de vista computacional além de permitir o uso do processamento paralelo, visto que já existe um paralelismo natural entre as tarefas que devem ser resolvidas. Outra vantagem da divisão em subsistemas reside no fato do monitoramento de áreas de interesse. Para utilizar a descentralização foi proposta uma alternativa de alocação de PMUs que consiste em posicionar duas unidades em cada ramificação do sistema, uma no começo e outra no final do trecho, procurando utilizar o menor número possível e que não comprometa a qualidade dos estados estimados. A resolução do problema de otimização é realizada através da implementação computacional do Método de Pontos Interiores com Barreira de Segurança (Safety Barrier Interior Point Method - SFTB - IPM) proposto na literatura especializada. As medidas das PMUs foram obtidas através de um Fluxo de Potência Trifásico via Injeção de Correntes (FPTIC). Foram realizadas diversas simulações variando-se o percentual da carga e os resultados obtidos foram comparados com outra metodologia existente na literatura e com os valores verdadeiros que foram obtidos através do FPTIC para as barras não monitoradas. Foi também comparado o tempo computacional entre a execução serial e a execução utilizando o processamento paralelo. Os testes mostraram bons resultados, o que torna a metodologia proposta aplicável na supervisão de sistemas de distribuição.
This work aims to present a methodology for static state estimation in electric power distribution systems. Branch currents are used as state variables. Measurements are obtained by means of Phasor Measurement Units (PMUs), providing nodal voltage and branch current measurements. The approach is three-phase and thus represents the distribution system characteristics. The methodology consists of solving a nonlinear optimization problem minimizing a quadratic objective function associated with the measurements and estimated states, subject to load constraints for the non-monitored loads based on load estimates obtained at the instant 't-1', starting from the assumption that over short time intervals the load does not vary greatly; this, together with the three-phase approach, is the main contribution of this work. Another contribution is the decentralized approach: the network can be divided into several subnetworks that can be solved separately and independently. This speeds up the solution from a computational point of view and allows the use of parallel processing, since there is a natural parallelism among the tasks to be solved. A further advantage of the division into subsystems is the ability to monitor areas of interest. To enable the decentralization, a PMU allocation strategy was proposed that consists of allocating two units to each lateral feeder, one at the beginning and one at the end, trying to use as few PMUs as possible in such a way that the quality of the estimated states is not compromised. The optimization problem is solved through a computer implementation of the Safety Barrier Interior Point Method (SFTB-IPM) proposed in the literature. The PMU measurements were emulated using a Three-Phase Power Flow via the Current Injection method (FPTIC). Several simulations were performed varying the load percentage, and the results were compared with another methodology existing in the literature and with the true values obtained from the FPTIC for the non-monitored buses. The computational times of serial and parallel execution were also compared. The tests showed good results, which makes the proposed methodology applicable to the monitoring of distribution systems.
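
The constrained estimation idea can be sketched on a toy radial feeder: the PMU measurements enter the objective, and the load at the non-monitored bus, estimated at instant t-1, enters as an equality (KCL) constraint. All numbers below are illustrative, not from the dissertation.

```python
import numpy as np
from scipy.optimize import minimize

z = np.array([10.2, 4.1])          # PMU branch-current measurements (A)
w = np.array([1 / 0.1**2] * 2)     # weights = inverse measurement variances
i_load2_prev = 6.0                 # bus-2 load current estimated at t-1

def objective(x):                  # states x = [I_12, I_23]
    r = z - x                      # branch currents are measured directly
    return np.sum(w * r**2)

# KCL at the non-monitored bus 2: I_12 - I_23 = load current at bus 2
cons = {"type": "eq", "fun": lambda x: x[0] - x[1] - i_load2_prev}

res = minimize(objective, x0=np.array([10.0, 4.0]),
               constraints=[cons], method="SLSQP")
print("estimated branch currents:", res.x)
```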
APA, Harvard, Vancouver, ISO, and other styles
37

Chen, Jiaxiong. "Power System State Estimation Using Phasor Measurement Units." UKnowledge, 2013. http://uknowledge.uky.edu/ece_etds/35.

Full text
Abstract:
State estimation is widely used as a tool to evaluate real-time power system conditions. State estimation algorithms can diverge under stressed system conditions. This dissertation first investigates the impacts of variations in load levels and of topology errors on the convergence of the commonly used weighted least squares (WLS) state estimator. The influence of topology errors on the condition number of the gain matrix in the state estimator is also analyzed. The minimum singular value of the gain matrix is proposed as a measure of the distance between the operating point and state estimation divergence. To study the impact of load increments on the convergence of the WLS state estimator, two types of load increment are utilized: one increments all load buses, and the other increments a single load. In addition, phasor measurement unit (PMU) measurements are applied in state estimation to verify whether they can solve the divergence problem and improve state estimation accuracy. The dissertation investigates the impacts of variations in line power flow and of topology errors on the convergence of the WLS state estimator; a simple 3-bus system and the IEEE 118-bus system are used as test cases to verify the observed pattern. The simulation results show that adding PMU measurements generally improves the robustness of state estimation. Two new approaches for improving the robustness of state estimation with PMU measurements are proposed: one is equality-constrained state estimation with PMU measurements, and the other is a Hachtel's matrix state estimation approach with PMU measurements. The dissertation also proposes a new heuristic approach for optimal placement of PMUs in a power system to improve state estimation accuracy. For adding PMU measurements to the estimator, two methods are investigated: method I mixes PMU measurements with conventional measurements in the estimator, and method II adds PMU measurements through a post-processing step. The two methods achieve very similar state estimation results, but method II is more time-efficient, as it does not modify the existing state estimation software.
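
Both diagnostics are easy to compute once the gain matrix is formed; the sketch below evaluates them for a toy system (the Jacobian and covariance are placeholders, not the IEEE test cases used in the dissertation).

```python
import numpy as np

# Condition number of the WLS gain matrix G = H' R^-1 H and its minimum
# singular value, used as a distance-to-divergence indicator.
H = np.array([[1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0],
              [1.0, 0.0, -1.0],
              [1.0, 0.0, 0.0]])        # measurement Jacobian
R = np.diag([0.01, 0.01, 0.02, 0.01])  # measurement error covariance

G = H.T @ np.linalg.inv(R) @ H
sv = np.linalg.svd(G, compute_uv=False)
print(f"condition number of G : {sv[0] / sv[-1]:.3e}")
print(f"minimum singular value: {sv[-1]:.3e}  (small => near divergence)")
```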
APA, Harvard, Vancouver, ISO, and other styles
38

Blandin, Vassili. "Estimation de paramètres pour des processus autorégressifs à bifurcation." Phd thesis, Université Sciences et Technologies - Bordeaux I, 2013. http://tel.archives-ouvertes.fr/tel-00842856.

Full text
Abstract:
Bifurcating autoregressive (BAR) processes have been the focus of much research in recent years. These processes, which adapt autoregressive processes to a binary tree, are of interest in biology, since the binary tree structure allows an easy analogy with cell division. The objective of this thesis is to estimate the parameters of variants of these bifurcating autoregressive processes, namely integer-valued BAR processes and random-coefficient BAR processes. First, we focus on integer-valued BAR processes. Via a martingale approach, we establish the almost sure convergence of the weighted least squares estimators under consideration, together with their rate of convergence, a quadratic strong law and their asymptotic normality. Second, we study random-coefficient BAR processes. This study extends the concept of bifurcating autoregressive processes by generalizing the random character of the evolution. We establish the same asymptotic results as in the first study. Finally, we conclude the thesis with another approach to random-coefficient BAR processes, in which the least squares estimators are no longer weighted, by taking advantage of the Rademacher-Menchov theorem.
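
A minimal simulation of a first-order BAR process, with the two pairs of parameters recovered by (here unweighted) least squares, might look as follows; parameter values, noise level and tree depth are illustrative, and the thesis studies weighted variants and their asymptotics.

```python
import numpy as np

rng = np.random.default_rng(1)
a0, b0, a1, b1 = 1.0, 0.5, -0.5, 0.3     # true parameters
depth = 14                                # generations in the binary tree

X = np.zeros(2 ** depth)                  # X[n] = trait of cell n (root = 1)
for n in range(1, 2 ** (depth - 1)):
    X[2 * n] = a0 + b0 * X[n] + 0.2 * rng.standard_normal()
    X[2 * n + 1] = a1 + b1 * X[n] + 0.2 * rng.standard_normal()

parents = np.arange(1, 2 ** (depth - 1))
D = np.column_stack([np.ones(parents.size), X[parents]])

# one regression per daughter type (even- and odd-indexed offspring)
est0, *_ = np.linalg.lstsq(D, X[2 * parents], rcond=None)
est1, *_ = np.linalg.lstsq(D, X[2 * parents + 1], rcond=None)
print("estimated (a0, b0):", est0)
print("estimated (a1, b1):", est1)
```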
APA, Harvard, Vancouver, ISO, and other styles
39

Alves, Guilherme de Oliveira. "Uma nova metodologia para estimação de estados em sistemas de distribuição radiais utilizando PMUs." Universidade Federal de Juiz de Fora, 2015. https://repositorio.ufjf.br/jspui/handle/ufjf/1528.

Full text
Abstract:
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
O presente trabalho tem por objetivo apresentar uma nova metodologia para estimação estática de estados em sistemas de distribuição de energia elétrica que estima as correntes nos ramos como variáveis de estado utilizando medições de tensão e corrente de ramo fasoriais oriundas de unidades de medição fasorial (Phasor Measurement Units - PMUs). A metodologia consiste em resolver um problema de otimização não linear minimizando uma função objetivo quadrática associada com as medições e estados estimados sujeito às restrições de carga das barras da rede que não apresentam PMUs instaladas baseadas em dados históricos, sendo esta a principal contribuição deste trabalho. Uma proposta de alocação de PMUs também é apresentada e que consiste em alocar duas unidades em cada ramificação do sistema, uma no começo e outra no final do trecho, procurando utilizar o menor número possível e que não comprometa a qualidade dos estados estimados. A resolução do problema de otimização é realizada de duas formas, através da ‘toolbox fmincon’ do software Matlab, que é uma ferramenta muito utilizada na resolução de problemas de otimização, e através da implementação computacional do Método de Pontos Interiores com Barreira de Segurança (Safety Barrier Interior Point Method - SFTB - IPM) proposto na literatura utilizada. Durante o processo de estimação de estados são utilizadas medidas obtidas através de um fluxo de potência que simulam as PMUs instaladas nos sistemas analisados variando o carregamento de cada sistema em torno da sua média histórica de carga até atingir os limites superior e inferior estabelecidos, sendo verificado o comportamento do estimador de estados perante a ocorrência de ruídos brancos nas medidas de todos os sistemas analisados. Foram analisados um sistema de distribuição tutorial de 15 barras e três sistemas encontrados na literatura contendo 33, 50 e 70 barras respectivamente. No sistema tutorial e no de 70 barras foram incluídas unidades de geração distribuída para se verificar o comportamento do estimador de estados. Todos os resultados do processo de estimação de estados são obtidos com os dois métodos de resolução apresentados e são comparados o desempenho de cada método, principalmente em relação ao tempo computacional. Todos os resultados obtidos foram validados usando um programa de fluxo de potência convencional e apresentam boa precisão com valor de função objetivo baixo mesmo na presença de ruídos nas medidas refletindo de maneira confiável o real estado do sistema de distribuição, o que torna a metodologia proposta atraente.
This work aims at presenting a new methodology for static state estimation in electric power distribution systems which estimates the branch currents as state variables, using voltage and branch current phasor measurements obtained from Phasor Measurement Units (PMUs). The methodology consists of solving a nonlinear optimization problem minimizing a quadratic objective function associated with the measurements and estimated states, subject to load constraints for the non-monitored loads based on historical data, which is the main contribution of this work. A PMU allocation strategy is also presented, which consists of allocating two PMUs to each system branch, one at the beginning and another at the end, trying to use as few PMUs as possible in such a way that the quality of the estimated states is not compromised. The optimization problem is solved in two ways: the first uses the 'fmincon' toolbox of the Matlab software, a widely used optimization tool, and the second is a computer implementation of the Safety Barrier Interior Point Method (SFTB-IPM) proposed in the literature. Comparisons of the computing times and results obtained with both methods are shown. A power flow program is used to obtain the voltages and branch currents in order to emulate the PMU data in the state estimation process. Additionally, the non-monitored loads are varied between their minimum and maximum bounds, allowing for white-noise errors in the PMU measurements. A tutorial test system of 15 buses is fully explored, and three IEEE test systems of 33, 50 and 70 buses are used to show the effectiveness of the proposed methodology. For the tutorial and 70-bus systems, distributed generation units were included to examine the state estimator's behavior. All results of the state estimation process are obtained with the two solution methods, and the computing-time performance of each is compared. The results were validated using a conventional power flow program and show good accuracy, with a low objective function value even in the presence of white-noise errors in the measurements, reliably reflecting the real state of the distribution system and making the proposed methodology attractive for distribution system monitoring.
APA, Harvard, Vancouver, ISO, and other styles
40

Vieira, Camila Silva. "Processamento de erros grosseiros através do índice de não-detecção de erros e dos resíduos normalizados." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/18/18154/tde-22032018-144505/.

Full text
Abstract:
Esta dissertação trata do problema de processamento de Erros Grosseiros (EGs) com base na aplicação do chamado Índice de Não-Detecção de Erros, ou apenas UI (Undetectability Index), na análise dos resíduos do estimador de estado por mínimos quadrados ponderados. O índice UI foi desenvolvido recentemente e possibilita a classificação das medidas de acordo com as suas características de não refletirem grande parcela de seus erros nos resíduos daquele estimador. As medidas com maiores UIs são aquelas cujos erros são mais difíceis de serem detectados através de métodos que fazem uso da análise dos resíduos, pois grande parcela do erro dessas medidas não aparece no resíduo. Inicialmente demonstrou-se, nesta dissertação, que erros das estimativas das variáveis de estado em um sistema com EG não-detectável (em uma medida de alto índice UI) podem ser mais significativos que em medidas com EGs detectáveis (em medidas com índices UIs baixos). Justificando, dessa forma, a importância de estudos para tornar possível o processamento de EGs em medidas com alto índice UI. Realizou-se, então, nesta dissertação, diversas simulações computacionais buscando analisar a influência de diferentes ponderações de medidas no UI e também nos erros das estimativas das variáveis de estado. Encontrou-se, então, uma maneira que destacou-se como a mais adequada para ponderação das medidas. Por fim, ampliaram-se, nesta dissertação, as pesquisas referentes ao UI para um estimador de estado por mínimos quadrados ponderados híbrido.
This dissertation deals with the problem of gross error processing based on the use of the so-called Undetectability Index, or UI. This index was developed recently and classifies the measurements according to how little of their errors is reflected in the residuals of the weighted least squares state estimation process. Gross errors in measurements with high UIs are very difficult to detect by methods based on residual analysis, as the errors in those measurements are masked, i.e., they are not reflected in the residuals. This dissertation first demonstrates that a non-detectable gross error (an error in a measurement with high UI) may affect the accuracy of the estimated state variables more than a detectable gross error (an error in a measurement with low UI), thereby justifying the importance of studies that make gross error processing possible for measurements with high UI. Several computational simulations are then carried out to analyze the influence of different measurement weights on the UI index and on the accuracy of the estimated state variables, and the weighting scheme that stood out as the most appropriate is identified. Finally, the studies of the UI are extended to a hybrid weighted least squares state estimator.
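
One formulation found in this line of work compares the undetectable part Ke_i of a unit error on measurement i with its detectable part (I - K)e_i; the sketch below computes that ratio for a toy system. H and W are placeholders, and the exact normalization used in the dissertation may differ.

```python
import numpy as np

H = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0],
              [2.0, 1.0]])               # toy measurement Jacobian
W = np.eye(4)                            # measurement weights

K = H @ np.linalg.solve(H.T @ W @ H, H.T @ W)   # WLS hat matrix
I = np.eye(4)

for i in range(4):
    e = I[:, i]                          # unit gross error on measurement i
    ui = np.linalg.norm(K @ e) / np.linalg.norm((I - K) @ e)
    print(f"measurement {i}: UI = {ui:.3f}")   # high UI => error is masked
```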
APA, Harvard, Vancouver, ISO, and other styles
41

Sundin, Daniel. "Natural gas storage level forecasting using temperature data." Thesis, Linköpings universitet, Produktionsekonomi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-169856.

Full text
Abstract:
Even though the theory of storage is historically a popular way to explain commodity futures prices, many authors focus on the oil price link. Past studies have shown increased futures price volatility on Mondays and on days when natural gas storage levels are released, which could both imply that storage levels and temperature data are incorporated in the prices. In this thesis, the U.S. natural gas storage level change is studied as a function of consumption and production. Consumption and production are further segmented and separately forecasted by modelling inverse problems that are solved by least squares regression using temperature data and time-series analysis. The results indicate that each consumer consumption segment is highly dependent on temperature, with R2-values above 90%. However, modelling each segment entirely by time-series analysis proved to be more efficient, due to the lack of flexibility in the polynomials, the limited number of weather stations used, and seasonal patterns beyond those driven by temperature. Although the forecasting models could not beat analysts' consensus estimates, they identify the drivers of natural gas storage levels and can thus be used to incorporate temperature forecasts when estimating futures prices.
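
The temperature dependence can be illustrated with a plain least-squares fit of a consumption segment against heating degree days (HDD); the synthetic data below stand in for the actual consumption and weather-station series.

```python
import numpy as np

rng = np.random.default_rng(2)
temp = rng.uniform(-15, 25, size=365)            # daily mean temperature (C)
hdd = np.maximum(18.0 - temp, 0.0)               # heating degree days
consumption = 30.0 + 4.5 * hdd + 5 * rng.standard_normal(365)

A = np.column_stack([np.ones_like(hdd), hdd])    # base load + HDD slope
coef, *_ = np.linalg.lstsq(A, consumption, rcond=None)

fitted = A @ coef
ss_res = np.sum((consumption - fitted) ** 2)
ss_tot = np.sum((consumption - consumption.mean()) ** 2)
print(f"base load {coef[0]:.1f}, HDD slope {coef[1]:.2f}, "
      f"R^2 = {1 - ss_res / ss_tot:.3f}")
```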
APA, Harvard, Vancouver, ISO, and other styles
42

Esquincalha, Agnaldo da Conceição. "Estimação de parâmetros de sinais gerados por sistemas lineares invariantes no tempo." Universidade do Estado do Rio de Janeiro, 2009. http://www.bdtd.uerj.br/tde_busca/arquivo.php?codArquivo=1238.

Full text
Abstract:
Fundação Carlos Chagas Filho de Amparo a Pesquisa do Estado do Rio de Janeiro
Nesta dissertação é apresentado um estudo sobre a recuperação de sinais modelados por somas ponderadas de exponenciais complexas. Para tal, são introduzidos conceitos elementares em teoria de sinais e sistemas, em particular, os sistemas lineares invariantes no tempo, SLITs, que podem ser representados matematicamente por equações diferenciais, ou equações de diferenças, para sinais analógicos ou digitais, respectivamente. Equações deste tipo apresentam como solução somas ponderadas de exponenciais complexas, e assim fica estabelecida a relação entre os sistemas de tipo SLIT e o modelo em estudo. Além disso, são apresentadas duas combinações de métodos utilizadas na recuperação dos parâmetros dos sinais: métodos de Prony e mínimos quadrados, e métodos de Kung e mínimos quadrados, onde os métodos de Prony e Kung recuperam os expoentes das exponenciais e o método dos mínimos quadrados recupera os coeficientes lineares do modelo. Finalmente, são realizadas cinco simulações de recuperação de sinais, sendo a última, uma aplicação na área de modelos de qualidade de água.
A study on the recovery of signals modeled by weighted sums of complex exponentials is presented. For this, basic concepts of signals and systems theory are introduced. In particular, linear time-invariant (LTI) systems are considered, which can be mathematically represented by differential equations or difference equations, for analog or digital signals respectively. The solution of these types of equations is given by a weighted sum of complex exponentials, which establishes the relationship between LTI systems and the model under study. Furthermore, two combinations of methods are used to recover the parameters of the signals: Prony and least squares methods, and Kung and least squares methods, where the Prony and Kung methods recover the exponents of the exponentials and the least squares method recovers the linear coefficients of the model. Finally, five signal-recovery simulations are performed, the last one being an application in the area of water quality models.
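
The Prony-plus-least-squares combination is concrete enough to sketch: a linear-prediction fit gives the exponents as polynomial roots, and a Vandermonde least-squares solve gives the weights. The two-mode test signal below is illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
N, p = 200, 2                              # samples, number of exponentials
z_true = np.array([0.95 * np.exp(1j * 0.3), 0.8 * np.exp(-1j * 0.7)])
c_true = np.array([2.0, 1.0 - 0.5j])
n = np.arange(N)
x = (c_true[None, :] * z_true[None, :] ** n[:, None]).sum(axis=1)
x += 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

# Stage 1 (Prony): fit linear-prediction coefficients by least squares
A = np.column_stack([x[p - 1 - k:N - 1 - k] for k in range(p)])
a, *_ = np.linalg.lstsq(A, x[p:], rcond=None)
z_hat = np.roots(np.concatenate(([1.0], -a)))   # estimated poles

# Stage 2: least squares on the Vandermonde system for the weights
V = z_hat[None, :] ** n[:, None]
c_hat, *_ = np.linalg.lstsq(V, x, rcond=None)

print("poles  :", np.sort_complex(z_hat))
print("weights:", c_hat)
```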
APA, Harvard, Vancouver, ISO, and other styles
43

Chen, Long. "Méthodes itératives de reconstruction tomographique pour la réduction des artefacts métalliques et de la dose en imagerie dentaire." Thesis, Paris 11, 2015. http://www.theses.fr/2015PA112015/document.

Full text
Abstract:
Cette thèse est constituée de deux principaux axes de recherche portant sur l'imagerie dentaire par la tomographie à rayons X : le développement de nouvelles méthodes itératives de reconstruction tomographique afin de réduire les artefacts métalliques et la réduction de la dose délivrée au patient. Afin de réduire les artefacts métalliques, nous prendrons en compte le durcissement du spectre des faisceaux de rayons X et le rayonnement diffusé. La réduction de la dose est abordée dans cette thèse en diminuant le nombre des projections traitées. La tomographie par rayons X a pour objectif de reconstruire la cartographie des coefficients d'atténuations d'un objet inconnu de façon non destructive. Les bases mathématiques de la tomographie repose sur la transformée de Radon et son inversion. Néanmoins des artefacts métalliques apparaissent dans les images reconstruites en inversant la transformée de Radon (la méthode de rétro-projection filtrée), un certain nombre d'hypothèse faites dans cette approche ne sont pas vérifiées. En effet, la présence de métaux exacerbe les phénomènes de durcissement de spectre et l'absence de prise en compte du rayonnement diffusé. Nous nous intéressons dans cette thèse aux méthodes itératives issues d'une méthodologie Bayésienne. Afin d'obtenir des résultats de traitement compatible avec une application clinique de nos nouvelles approches, nous avons choisi un modèle direct relativement simple et classique (linéaire) associé à des approches de corrections de données. De plus, nous avons pris en compte l'incertitude liée à la correction des données en utilisant la minimisation d'un critère de moindres carrés pondérés. Nous proposons donc une nouvelle méthode de correction du durcissement du métal sans connaissances du spectre de la source et des coefficients d'atténuation des matériaux. Nous proposons également une nouvelle méthode de correction du diffusé associée sur les mesures sous certaines conditions notamment de faible dose. En imagerie médicale par tomographie à rayons X, la surexposition ou exposition non nécessaire irradiante augmente le risque de cancer radio-induit lors d'un examen du patient. Notre deuxième axe de recherche porte donc sur la réduction de la dose en diminuant le nombre de projections. Nous avons donc introduit un nouveau mode d'acquisition possédant un échantillonnage angulaire adaptatif. On utilise pour définir cette acquisition notre connaissance a priori de l'objet. Ce mode d'acquisition associé à un algorithme de reconstruction dédié, nous permet de réduire le nombre de projections tout en obtenant une qualité de reconstruction comparable au mode d'acquisition classique. Enfin, dans certains modes d’acquisition des scanners dentaires, nous avons un détecteur qui n'arrive pas à couvrir l'ensemble de l'objet. Pour s'affranchir aux problèmes liés à la tomographie locale qui se pose alors, nous utilisons des acquisitions multiples suivant des trajectoires circulaires. Nous avons adaptés les résultats développés par l’approche « super short scan » [Noo et al 2003] à cette trajectoire très particulière et au fait que le détecteur mesure uniquement des projections tronquées. Nous avons évalué nos méthodes de réduction des artefacts métalliques et de réduction de la dose en diminuant le nombre des projections sur les données réelles. 
Grâce à nos méthodes de réduction des artefacts métalliques, l'amélioration de qualité des images est indéniable et il n'y a pas d'introduction de nouveaux artefacts en comparant avec la méthode de l'état de l'art NMAR [Meyer et al 2010]. Par ailleurs, nous avons réussi à réduire le nombre des projections avec notre nouveau mode d'acquisition basé sur un « super short scan » appliqué à des trajectoires multiples. La qualité obtenue est comparable aux reconstructions obtenues avec les modes d'acquisition classiques ou short-scan mais avec une réduction d’au moins 20% de la dose radioactive
This thesis contains two main themes: the development of new iterative approaches for metal artifact reduction (MAR) and dose reduction in dental CT (Computed Tomography). The metal artifacts are mainly due to beam hardening, scatter and photon starvation in the presence of metal against a contrasting background, such as metallic dental implants in teeth. The first theme concerns data correction to account for these effects; the second involves reducing the radiation dose delivered to the patient by decreasing the number of projections. The polychromatic spectrum of the X-ray beam and the scatter can be modeled by a non-linear direct model within statistical methods for metal artifact reduction; however, reconstruction by statistical methods is too time-consuming. Consequently, we proposed an iterative algorithm with a linear direct model based on data correction (beam hardening and scatter). We introduced a new beam-hardening correction requiring no knowledge of the X-ray source spectrum or of the linear attenuation coefficients of the materials, as well as a new scatter estimation method based on the measurements themselves. We then studied iterative approaches for dose reduction, since over-exposure or unnecessary exposure to radiation during a CT scan increases the patient's risk of radio-induced cancer. In practice, it may be useful to reconstruct an object larger than the scanner's field of view. For this case we proposed an iterative algorithm based on super-short scans over multiple circular trajectories, which contain a minimal set of projections for an optimal dose. Furthermore, we introduced a new scanning mode with variable angular sampling to reduce the number of projections in a single scan, adapted to the properties and predefined regions of interest of the scanned object; it needs fewer projections than the standard scanning mode of uniform angular sampling to reconstruct the object. All of our approaches for MAR and dose reduction have been evaluated on real data. Thanks to our MAR methods, the quality of the reconstructed images was improved noticeably, and no new artifacts were introduced compared with the state-of-the-art MAR method NMAR [Meyer et al 2010]. We could also clearly reduce the number of projections with the proposed scanning mode and the scheme of super-short scans over multiple trajectories in particular cases.
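
The weighted least-squares criterion underlying the reconstruction can be sketched with a toy system: after data correction, the image minimizes ||Ax - b||_W^2, with W encoding confidence in each corrected measurement. The random "projection" matrix and the steepest-descent solver below are illustrative simplifications, not the thesis's reconstruction code.

```python
import numpy as np

rng = np.random.default_rng(10)
npix, nray = 16, 40
A = rng.uniform(0, 1, size=(nray, npix))      # toy "projection" matrix
x_true = rng.uniform(0, 1, size=npix)
b = A @ x_true + 0.01 * rng.standard_normal(nray)
W = np.diag(rng.uniform(0.5, 1.5, size=nray)) # per-ray confidence weights

x = np.zeros(npix)
L = np.linalg.norm(A.T @ W @ A, 2)            # Lipschitz constant of grad
for _ in range(500):                          # gradient descent iterations
    grad = A.T @ (W @ (A @ x - b))
    x = x - grad / L
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```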
APA, Harvard, Vancouver, ISO, and other styles
44

Annamalai, Andy S. K. "An adaptive autopilot design for an uninhabited surface vehicle." Thesis, University of Plymouth, 2014. http://hdl.handle.net/10026.1/3100.

Full text
Abstract:
The work described herein concerns the development of an innovative approach to autopilot design for uninhabited surface vehicles. In order to fulfil the requirements of autonomous missions, uninhabited surface vehicles must be able to operate with a minimum of external intervention. Existing strategies are limited by their dependence on a fixed model of the vessel; thus, any change in plant dynamics has a non-trivial, deleterious effect on performance. This thesis presents an approach based on adaptive model predictive control that is capable of retaining full functionality even in the face of sudden changes in dynamics. In the first part of this work, recent developments in the field of uninhabited surface vehicles and trends in marine control are discussed. Historical developments and different strategies for model predictive control as applicable to surface vehicles are also explored. This thesis also presents innovative work done to improve the hardware on the existing Springer uninhabited surface vehicle to serve as an effective test and research platform. Advanced controllers such as a model predictive controller rely on the accuracy of the model to accomplish missions successfully. Hence, different techniques to obtain the model of Springer are investigated. Data obtained from experiments at Roadford Reservoir, United Kingdom are utilised to derive a generalised model of Springer by employing an innovative hybrid modelling technique that incorporates the different forward speeds and variable payload on board the vehicle. Waypoint line-of-sight guidance provides the reference trajectory essential to complete missions successfully. The performance of traditional autopilots, such as proportional-integral-derivative controllers, when applied to Springer is analysed. Autopilots based on modern controllers such as linear quadratic Gaussian and its innovative variants are integrated with the navigation and guidance systems on board Springer. The modified linear quadratic Gaussian is obtained by combining various state estimators based on the interval Kalman filter and the weighted interval Kalman filter. A change in system dynamics is a challenge faced by uninhabited surface vehicles that results in erroneous autopilot behaviour. To overcome this challenge, different adaptive algorithms are analysed and an innovative adaptive autopilot based on model predictive control is designed. The acronym 'aMPC' is coined to refer to adaptive model predictive control, obtained by combining the advances made to weighted least squares during this research with model predictive control. Successful experimentation is undertaken to validate the performance and autonomous mission capabilities of the adaptive autopilot despite changes in system dynamics.
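
The online model adaptation that such an autopilot requires can be sketched with recursive least squares under exponential forgetting; the first-order yaw model and the mid-run parameter jump below are illustrative, not the exact aMPC formulation of the thesis.

```python
import numpy as np

rng = np.random.default_rng(4)
lam = 0.98                          # forgetting factor (< 1 discounts old data)
theta = np.zeros(2)                 # estimates of [a, b] in y[k] = a y[k-1] + b u[k-1]
P = 1e3 * np.eye(2)

y = 0.0
a_true, b_true = 0.9, 0.1
for k in range(400):
    if k == 200:                    # sudden change in dynamics (e.g. payload)
        a_true, b_true = 0.7, 0.3
    u = rng.uniform(-1, 1)
    phi = np.array([y, u])          # regressor: previous output, current input
    y = a_true * y + b_true * u + 0.01 * rng.standard_normal()

    # RLS update with exponential forgetting
    g = P @ phi / (lam + phi @ P @ phi)
    theta = theta + g * (y - phi @ theta)
    P = (P - np.outer(g, phi @ P)) / lam

print("estimated (a, b) after the change:", theta)
```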
APA, Harvard, Vancouver, ISO, and other styles
45

Benamar, Mohamed Amine. "Développement d’une approche numérique et expérimentale par la mesure VLD pour la propagation acoustique multimodale en conduit avec écoulement." Thesis, Compiègne, 2021. http://www.theses.fr/2021COMP2624.

Full text
Abstract:
La Vélocimétrie Laser à effet Doppler (VLD) est un moyen de mesure non intrusif de la vitesse particulaire classiquement utilisé en mécanique des fluides. La vitesse acoustique est une grandeur très importante en acoustique car elle permet de caractériser les champs de propagation acoustique indispensable pour la compréhension de certains phénomènes de propagation en proche paroi ou pour des géométries complexes. Le banc DUCAT installé au laboratoire de l’équipe acoustique et vibration de l’Université de Technologie de Compiègne avait pour but de caractériser les performances acoustiques de différents systèmes d’absorption acoustique tel que les SDOF ou les poreux métalliques pour des utilisations aéronautiques à travers la mesure amont/aval de la vitesse et la pression acoustiques à travers deux sondes automatisées contenants un capteur à fil chaud ainsi qu’un microphone avec ogive. L’objectif de cette thèse est de permettre la mesure de la vitesse acoustique en propagation multimodale et en présence d’écoulement en utilisant la VLD. Le signal mesuré par la VLD est échantillonné aléatoirement et présente un bruit de fond assez important dû à la présence de l’écoulement dans le conduit. La nature complexe du signal mesuré demande des méthodes de traitement de signal particulières pour pouvoir en extraire la vitesse acoustique qui nous importe. La première partie de cette thèse, présente un benchmark des différentes méthodes présentes dans la littérature ainsi que leur validité pour les conditions expérimentales actuelles du banc DUCAT. Une simulation du signal VLD mesuré est développé en guise de référence de validation des méthodes qu’elles soient spectrales ou temporelles. La méthode des moindres carrés pondérés est finalement sélectionnée et adaptée suite à cette étude pour l’estimation des différents paramètres acoustiques à partir du signal brut. La deuxième partie concerne la présentation des outils numériques utilisés ou développés pour la simulation de la propagation acoustique dans les conduits infinis. L’outil numérique est un code éléments finis aéroacoustique basé sur les équations de Galbrun couplées à une couche absorbante virtuelle dite PML (Perfect Matched Layer). En raison de la présence de la PML, la résolution numérique du problème inverse devient compliquée et un code de résolution des problèmes aux valeurs propres non linéaires basé sur la méthode du Contour Intégral a dû être développé. La troisième partie de ce travail présente les différents composants du banc expérimental. Le banc permet la propagation acoustique multimodale (jusqu’à 5000 Hz) en présence d’un écoulement en aspiration/expiration pouvant atteindre une vitesse de Mach 0.25. La quatrième partie présente une comparaison numérique et expérimentale des outils présentés et développés durant la thèse. Une première comparaison pour une propagation multimodale dans un conduit droit permet de conclure sur l’efficacité du système de mesure et de traitement de signal avec une erreur relative inférieure à 1 dB. Une seconde comparaison a été réalisée pour l’étude des modes piégés acoustiques dans le cas d’un conduit cylindrique avec changement brusque de section
Laser Doppler Velocimetry (LDV) is a non-intrusive measurement of particle velocity classically used in fluid mechanics. The acoustic velocity is a very important quantity in acoustics for the characterization of acoustic propagation fields, which is essential for the understanding of certain propagation phenomena near walls or in complex geometries. The DUCAT bench installed in the laboratory of the Acoustics and Vibration team of the Université de Technologie de Compiègne was designed to characterize the acoustic performance of various acoustic absorption systems, such as SDOF liners or metallic porous materials for aeronautical applications, through the measurement of acoustic velocity and pressure with two automated probes containing a hot-wire sensor and a microphone fitted with a nose cone. The objective of this thesis is to enable the measurement of acoustic velocity in multimodal propagation and in the presence of flow using LDV. The signal measured by the LDV is randomly sampled and has fairly strong background noise due to the presence of flow in the duct. The complex nature of the measured signal requires special signal processing methods to extract the acoustic velocity of interest. The first part of this thesis presents a benchmark of the different methods available in the literature and their validity for the current experimental conditions of the DUCAT bench. A simulation of the measured LDV signal is developed as a reference to validate the methods, whether spectral or temporal. Following this study, the weighted least squares method is selected and adapted for the estimation of the various acoustic parameters from the raw signal. The second part presents the numerical tools used or developed for the simulation of acoustic propagation in infinite ducts. The main numerical tool is an aeroacoustic finite element code developed in the laboratory, based on Galbrun's equations coupled to a virtual absorbing layer called a PML (Perfectly Matched Layer). Due to the presence of the PML, the numerical solution of the inverse problem becomes complicated, which led us to develop a code for solving nonlinear eigenvalue problems based on the contour integral method. The third part of this work presents the different components of the modified version of the bench and their characteristics. The bench allows experiments on multimodal acoustic propagation (up to 5000 Hz) in the presence of a suction/blowing flow that can reach a speed of Mach 0.25. The fourth and last part presents a numerical and experimental validation protocol for all the tools presented and developed. Test/calculation comparisons are first presented for multimodal propagation in a straight duct; the results demonstrate the efficiency of the measurement and signal processing system, with a relative error lower than 1 dB. The same protocol is then used for the experimental study of acoustic trapped modes in the case of a cylindrical duct with an abrupt change of section.
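
The retained weighted least-squares estimation can be sketched on a synthetic case: a tone at a known frequency is fitted to randomly sampled, noisy data. The sampling law, noise level and the (here uniform) weights are illustrative simplifications of the LDV setting.

```python
import numpy as np

rng = np.random.default_rng(5)
f0 = 1000.0                          # acoustic frequency (Hz), assumed known
t = np.sort(rng.uniform(0.0, 0.05, size=2000))   # random sample instants (s)

amp, phase = 0.02, 0.6               # true acoustic velocity amplitude/phase
v = amp * np.cos(2 * np.pi * f0 * t + phase)
v += 0.05 * rng.standard_normal(t.size)          # broadband flow "noise"

# linear model v(t) ~ A cos(w t) + B sin(w t); weights could reflect local
# seeding density or noise estimates, taken uniform here
w = np.ones_like(t)
sw = np.sqrt(w)
X = np.column_stack([np.cos(2 * np.pi * f0 * t), np.sin(2 * np.pi * f0 * t)])
coef, *_ = np.linalg.lstsq(X * sw[:, None], v * sw, rcond=None)

A, B = coef                          # v = A cos - (-B) sin
print(f"estimated amplitude {np.hypot(A, B):.4f}, "
      f"phase {np.arctan2(-B, A):.3f} rad")
```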
APA, Harvard, Vancouver, ISO, and other styles
46

Deilami, Kaveh. "Modelling the urban heat island intensities of alternative urban growth management policies in Brisbane." Thesis, Queensland University of Technology, 2017. https://eprints.qut.edu.au/107656/1/Kaveh_Deilami_Thesis.pdf.

Full text
Abstract:
When urban areas experience higher temperatures than their surrounding rural areas, the phenomenon is called the urban heat island (UHI) effect. UHI contributes to global warming, and urban planning policy plays a significant role in controlling it. This study examines the UHI effects of urban planning policy scenarios for Brisbane, including: a) business as usual; b) transit oriented development; c) infill development; d) motorway oriented development; and e) sprawl development. The findings show that infill development will be effective but will generate pockets of extreme UHI, while sprawl development will generate a moderate UHI effect distributed throughout the city.
APA, Harvard, Vancouver, ISO, and other styles
47

Destino, G. (Giuseppe). "Positioning in wireless networks: non-cooperative and cooperative algorithms." Doctoral thesis, Oulun yliopisto, 2012. http://urn.fi/urn:isbn:9789514299742.

Full text
Abstract:
In the last few years, location-awareness has emerged as a key technology for the future development of mobile, ad hoc and sensor networks. Thanks to location information, several network optimization strategies as well as services can be developed. However, the problem of determining an accurate location, i.e. positioning, is still a challenge and robust algorithms are yet to be developed. In this thesis, we focus on the development of distance-based non-cooperative and cooperative algorithms, derived within a non-parametric, non-Bayesian framework, specifically a Weighted Least Squares (WLS) optimization. From a theoretical perspective, we study the WLS problem and establish its optimality through the relationship with a Maximum Likelihood (ML) estimator. We investigate the fundamental limits and derive the consistency conditions by creating a connection between Euclidean geometry and inference theory. Furthermore, we derive the closed-form expression of a distance-model-based Cramér-Rao Lower Bound (CRLB), as well as the formulas that characterize information coupling in the Fisher information matrix. Non-cooperative positioning is addressed as follows. We propose a novel framework, namely Distance Contraction, to develop robust non-cooperative positioning techniques. We prove that distance contraction can mitigate the global minimum problem and that structured distance contraction yields nearly optimal performance in severe channel conditions. Based on these results, we show how classic algorithms such as the Weighted Centroid (WC) and the Non-Linear Least Squares (NLS) can be modified to cope with biased ranging. For cooperative positioning, we derive a novel, low-complexity and nearly optimal global optimization algorithm, namely the Range-Global Distance Continuation method, for use in centralized and distributed positioning schemes. We propose an effective weighting strategy to cope with biased measurements, which consists of a dispersion weight that captures the effect of noise while maximizing the diversity of the information, and a geometric-based penalty weight that penalizes the assumption of bias-free measurements. Finally, we show the results of a positioning test in which we employ the proposed algorithms and utilize commercial Ultra-Wideband (UWB) devices.
Tiivistelmä Viime vuosina paikkatietoisuudesta on tullut eräs merkittävä avainteknologia mobiili- ja sensoriverkkojen tulevaisuuden kehitykselle. Paikkatieto mahdollistaa useiden verkko-optimointistrategioiden sekä palveluiden kehittämisen. Kuitenkin tarkan paikkatiedon määrittäminen, esimerkiksi kohteen koordinaattien, on edelleen vaativa tehtävä ja robustit algoritmit vaativat kehittämistä. Tässä väitöskirjassa keskitytään etäisyyspohjaisten, yhteistoiminnallisten sekä ei-yhteistoiminnallisten, algoritmien kehittämiseen. Algoritmit pohjautuvat parametrittömään ei-bayesilaiseen viitekehykseen, erityisesti painotetun pienimmän neliösumman (WLS) optimointimenetelmään. Väitöskirjassa tutkitaan WLS ongelmaa teoreettisesti ja osoitetaan sen optimaalisuus todeksi tarkastelemalla sen suhdetta suurimman todennäköisyyden (ML) estimaattoriin. Lisäksi tässä työssä tutkitaan perustavanlaatuisia raja-arvoja sekä johdetaan yhtäpitävyysehdot luomalla yhteys euklidisen geometrian ja inferenssiteorian välille. Väitöskirjassa myös johdetaan suljettu ilmaisu etäisyyspohjaiselle Cramér-Rao -alarajalle (CRLB) sekä esitetään yhtälöt, jotka karakterisoivat informaation liittämisen Fisherin informaatiomatriisiin. Väitöskirjassa ehdotetaan uutta viitekehystä, nimeltään etäisyyden supistaminen, robustin ei-yhteistoiminnallisen paikannustekniikan perustaksi. Tässä työssä todistetaan, että etäisyyden supistaminen pienentää globaali minimi -ongelmaa ja jäsennetty etäisyyden supistaminen johtaa lähes optimaaliseen suorituskykyyn vaikeissa radiokanavan olosuhteissa. Näiden tulosten pohjalta väitöskirjassa esitetään, kuinka klassiset algoritmit, kuten painotetun keskipisteen (WC) sekä epälineaarinen pienimmän neliösumman (NLS) menetelmät, voidaan muokata ottamaan huomioon etäisyysmittauksen harha. Yhteistoiminnalliseksi paikannusmenetelmäksi johdetaan uusi, lähes optimaalinen algoritmi, joka on kompleksisuudeltaan matala. Algoritmi on etäisyyspohjainen globaalin optimoinnin menetelmä ja sitä käytetään keskitetyissä ja hajautetuissa paikannusjärjestelmissä. Lisäksi tässä työssä ehdotetaan tehokasta painotusstrategiaa ottamaan huomioon mittausharha. Strategia pitää sisällään dispersiopainon, joka tallentaa häiriön aiheuttaman vaikutuksen maksimoiden samalla informaation hajonnan, sekä geometrisen sakkokertoimen, joka rankaisee harhattomuuden ennakko-oletuksesta. Lopuksi väitöskirjassa esitetään tulokset kokeellisista mittauksista, joissa ehdotettuja algoritmeja käytettiin kaupallisissa erittäin laajakaistaisissa (UWB) laitteissa
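
A minimal sketch of distance-based WLS positioning, solved by Gauss-Newton iterations on the weighted range residuals, is given below; the anchor layout, noise and weights are illustrative, and the thesis develops more elaborate weighting and global-optimization strategies.

```python
import numpy as np

rng = np.random.default_rng(6)
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
x_true = np.array([3.0, 7.0])

d = np.linalg.norm(anchors - x_true, axis=1)
d += 0.1 * rng.standard_normal(4)          # noisy (possibly biased) ranges
w = np.full(4, 1 / 0.1**2)                 # weights ~ inverse range variance

x = np.array([5.0, 5.0])                   # initial guess
for _ in range(20):                        # Gauss-Newton on the WLS cost
    diff = x - anchors
    dist = np.linalg.norm(diff, axis=1)
    J = diff / dist[:, None]               # Jacobian of the range model
    r = dist - d
    step = np.linalg.solve(J.T @ (w[:, None] * J), J.T @ (w * r))
    x = x - step

print("estimated position:", x)
```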
APA, Harvard, Vancouver, ISO, and other styles
48

Hussain, Sibt Ul. "Apprentissage machine pour la détection des objets." Phd thesis, Université de Grenoble, 2011. http://tel.archives-ouvertes.fr/tel-00722632.

Full text
Abstract:
The goal of this thesis is to develop more effective practical methods for detecting instances of everyday object classes in images. We present a family of detectors that incorporate three types of powerful visual features - Histograms of Oriented Gradients (HOG), Local Binary Patterns (LBP) and Local Ternary Patterns (LTP) - into efficient latent SVM discriminative methods, under two dimensionality reduction regimes - Partial Least Squares (PLS) and feature selection by SVM weight truncation. On several important datasets, notably PASCAL VOC2006 and VOC2007, INRIA Person and ETH Zurich, we demonstrate that our methods improve on the state of the art. Our main contributions are as follows. We study the LTP feature for object detection and show that its performance is globally better than that of the well-established HOG and LBP features, because it encodes both the local texture of the object and its global shape while remaining resistant to illumination changes. Thanks to these strengths, LTP works as well for classes characterized mainly by their structure as for those characterized by their texture. Moreover, we show that the HOG, LBP and LTP features are strongly complementary, so that an extended feature set integrating all three further improves performance. Since these powerful feature sets are of rather high dimension, we propose two dimensionality reduction methods to improve their speed and reduce their memory usage. The first, based on Partial Least Squares projection, significantly decreases the training time of linear detectors without any loss of accuracy or run-time speed. The second, based on feature selection by truncation of the SVM weights, allows us to reduce the number of active features by an order of magnitude with a minimal reduction, or even a small increase, in detector accuracy. Despite its simplicity, this feature selection method outperforms all the other approaches we tested.
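
The LTP coding itself is compact enough to sketch: each neighbour is coded +1/0/-1 against the centre pixel with a tolerance t, and the ternary code is split into two LBP-like binary patterns. The neighbourhood ordering and tolerance below are illustrative choices.

```python
import numpy as np

def ltp_codes(img, t=5):
    # 8-neighbour offsets, clockwise from top-left
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    c = img[1:-1, 1:-1].astype(np.int32)         # centre pixels
    upper = np.zeros_like(c)
    lower = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offs):
        nb = img[1 + dy:img.shape[0] - 1 + dy,
                 1 + dx:img.shape[1] - 1 + dx].astype(np.int32)
        upper |= (nb > c + t).astype(np.int32) << bit   # "brighter" pattern
        lower |= (nb < c - t).astype(np.int32) << bit   # "darker" pattern
    return upper, lower      # two LBP-like codes per interior pixel

img = np.random.default_rng(7).integers(0, 256, size=(8, 8), dtype=np.uint8)
up, lo = ltp_codes(img)
print("upper pattern codes:\n", up)
print("lower pattern codes:\n", lo)
```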
APA, Harvard, Vancouver, ISO, and other styles
49

Sheu, Yu Jen, and 許玉珍. "Compare M-Smoothers with Iterative Weighted Least Squares." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/99556913680728148219.

Full text
Abstract:
Master's thesis
National Dong Hwa University
Department of Applied Mathematics
Academic year 89 (ROC calendar, 2000/2001)
An important operation in image processing is denoising, where a digital signal is extracted from data consisting of the signal plus noise; the goal is to recover as much of the signal as possible from the noisy data. A well-known method that accomplishes some of these goals is classical smoothing, that is, local averaging. But smoothing is inappropriate when the image has jumps or edges between regions (which happens very frequently), because smoothing tends to blur the edges: since the smooth is continuous, it cannot properly capture the discontinuous jumps of the underlying signal. The purpose of this thesis is to discuss improvements to edge-preserving M-smoothers and iterative weighted least squares, and to compare M-smoothers with iterative weighted least squares in terms of bias, standard deviation, mean square error (MSE) and other criteria. Key words: image processing; denoising; edge-preserving; M-smoothers; iterative weighted least squares.
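
An edge-preserving iterative weighted least-squares smoother of the kind compared in the thesis can be sketched as follows: each point is replaced by a local weighted average whose weights downweight neighbours that differ strongly in value, so jumps are not blurred. Kernel widths and the noisy step signal are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 200
signal = np.where(np.arange(n) < n // 2, 0.0, 2.0)   # step edge
y = signal + 0.3 * rng.standard_normal(n)

x = np.arange(n, dtype=float)
fit = y.copy()
h_x, h_y = 5.0, 0.5                                   # spatial / range scales
for _ in range(10):                                   # IWLS-style iterations
    new = np.empty_like(fit)
    for i in range(n):
        w = np.exp(-0.5 * ((x - x[i]) / h_x) ** 2)       # spatial kernel
        w *= np.exp(-0.5 * ((fit - fit[i]) / h_y) ** 2)  # edge-stopping term
        new[i] = np.sum(w * y) / np.sum(w)            # local weighted average
    fit = new

print("RMSE after edge-preserving smoothing:",
      np.sqrt(np.mean((fit - signal) ** 2)))
```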
APA, Harvard, Vancouver, ISO, and other styles
50

Lin, Hon-Ron, and 林鴻蓉. "Weighted Least Squares Analysis for Repeated Ordinal Data." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/07172440518309483691.

Full text
Abstract:
Master's thesis
Chung Yuan Christian University
Graduate Institute of Applied Mathematics
Academic year 94 (ROC calendar, 2005/2006)
A new approach to analyzing repeated outcomes is proposed. By transforming each subject's responses to a vector of rank components and then applying the multivariate central limit theorem and the delta method, the proposed method can be used to test differences within and between groups. This methodology makes no assumptions concerning the time dependence among the repeated measurements; it is based only on the multinomial distribution for count data. Practical examples testing the linear and quadratic components of the time effect illustrate the use of the proposed method. The underlying model for the weighted least squares approach is the multinomial distribution. Although the distributional assumptions are much weaker, one still must make some basic assumptions concerning the marginal distributions at each time point. In addition, the assumptions of specific ordinal data methods, such as the proportional odds model, may be inappropriate. In all of these situations, nonparametric methods for analyzing repeated measurements may be of use. The proposed method assigns ranks to the repeated measurements, from the smallest value to the largest, for each subject. The vector of rank means is computed by a linear transformation of these ranks, and the multivariate central limit theorem and the delta method are then applied to obtain the test statistics. The method makes no assumptions concerning the distribution of the response variable. Two practical examples illustrate the use of the proposed method.
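
The rank-based construction can be sketched directly: within-subject ranks are averaged, their covariance estimated, and a Wald-type statistic formed with a contrast matrix (multivariate CLT plus delta method). The synthetic data and the adjacent-time contrast below are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
n, T = 60, 4                                     # subjects, time points
data = rng.normal(0, 1, size=(n, T)) + 0.3 * np.arange(T)  # mild time trend

ranks = stats.rankdata(data, axis=1)             # within-subject ranks 1..T
rbar = ranks.mean(axis=0)                        # mean rank vector
S = np.cov(ranks, rowvar=False) / n              # covariance of the mean

C = np.diff(np.eye(T), axis=0)                   # contrasts of adjacent times
wald = (C @ rbar) @ np.linalg.solve(C @ S @ C.T, C @ rbar)
pval = stats.chi2.sf(wald, df=T - 1)
print(f"Wald statistic {wald:.2f}, p-value {pval:.4f}")
```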
APA, Harvard, Vancouver, ISO, and other styles