Dissertations / Theses on the topic 'Mean square errors'

Consult the top 50 dissertations / theses for your research on the topic 'Mean square errors.'

1

Ponce, Hector F. "Is It More Advantageous to Administer LibQUAL+® Lite Over LibQUAL+®? An Analysis of Confidence Intervals, Root Mean Square Errors, and Bias." Thesis, University of North Texas, 2013. https://digital.library.unt.edu/ark:/67531/metadc283825/.

Full text
Abstract:
The Association of Research Libraries (ARL) provides an option for librarians to administer a combination of LibQUAL+® and LibQUAL+® Lite to measure users' perceptions of library service quality. LibQUAL+® Lite is a shorter version of LibQUAL+® that uses planned missing data in its design. The present study investigates the loss of information in commonly administered proportions of LibQUAL+® and LibQUAL+® Lite when compared to administering LibQUAL+® alone. Data from previous administrations of the LibQUAL+® protocol (2005, N = 525; 2007, N = 3,261; and 2009, N = 2,103) were used to create simulated datasets representing various proportions of LibQUAL+® versus LibQUAL+® Lite administration (0.2:0.8, 0.4:0.6, 0.5:0.5, 0.6:0.4, and 0.8:0.2). Statistics (i.e., means, adequacy and superiority gaps, standard deviations, Pearson product-moment correlation coefficients, and polychoric correlation coefficients) from simulated and real data were compared. Confidence intervals captured the original values. Root mean square errors and absolute and relative biases of correlations showed that accuracy in the estimates decreased as the percentage of planned missing data increased. The recommendation is to avoid using combinations with more than 20% planned missing data.
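The error measures used in this kind of comparison can be sketched in a few lines; the values below are made-up illustrative numbers, not data from the study:

```python
import numpy as np

def rmse_and_bias(estimates, true_value):
    """RMSE, absolute bias, and relative bias of simulated estimates
    against the value obtained from the complete (no missing data) dataset."""
    estimates = np.asarray(estimates, dtype=float)
    errors = estimates - true_value
    rmse = np.sqrt(np.mean(errors ** 2))
    abs_bias = np.mean(errors)
    rel_bias = abs_bias / true_value
    return rmse, abs_bias, rel_bias

# e.g. correlation estimates from five hypothetical simulated datasets
# compared against a "true" correlation of 0.60 from the full administration
rmse, abs_b, rel_b = rmse_and_bias([0.58, 0.61, 0.57, 0.62, 0.59], 0.60)
```

As the proportion of planned missing data grows, both the RMSE and the bias of such estimates would be expected to grow, which is the pattern the study reports.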
2

Ainkaran, Ponnuthurai. "Analysis of Some Linear and Nonlinear Time Series Models." Thesis, The University of Sydney, 2004. http://hdl.handle.net/2123/582.

Full text
Abstract:
This thesis considers some linear and nonlinear time series models. In the linear case, the analysis of a large number of short time series generated by a first-order autoregressive type model is considered. The conditional and exact maximum likelihood procedures are developed to estimate the parameters. Simulation results are presented to compare the bias and the mean square errors of the parameter estimates. In Chapter 3, five important nonlinear models are considered and their time series properties are discussed. The estimating function approach for nonlinear models is developed in detail in Chapter 4, and examples are added to illustrate the theory. A simulation study is carried out to examine the finite sample behavior of the proposed estimates based on the estimating functions.
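For a first-order autoregressive model x_t = φ x_{t-1} + ε_t, the conditional maximum likelihood estimate of φ (conditioning on the first observation, with Gaussian errors) reduces to least squares; a minimal sketch with an illustrative simulated series:

```python
import numpy as np

def ar1_conditional_mle(x):
    """Conditional MLE of phi in x_t = phi * x_{t-1} + eps_t,
    conditioning on x_0 (equivalent to least squares)."""
    x = np.asarray(x, dtype=float)
    return np.sum(x[1:] * x[:-1]) / np.sum(x[:-1] ** 2)

# illustrative simulated AR(1) series with phi = 0.5
rng = np.random.default_rng(0)
phi = 0.5
x = np.zeros(200)
for t in range(1, 200):
    x[t] = phi * x[t - 1] + rng.standard_normal()
phi_hat = ar1_conditional_mle(x)
```

The bias and mean square error the thesis reports would be obtained by repeating such a simulation over many short series and aggregating the estimates.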
3

Ainkaran, Ponnuthurai. "Analysis of Some Linear and Nonlinear Time Series Models." University of Sydney. Mathematics & statistics, 2004. http://hdl.handle.net/2123/582.

Full text
Abstract:
This thesis considers some linear and nonlinear time series models. In the linear case, the analysis of a large number of short time series generated by a first-order autoregressive type model is considered. The conditional and exact maximum likelihood procedures are developed to estimate the parameters. Simulation results are presented to compare the bias and the mean square errors of the parameter estimates. In Chapter 3, five important nonlinear models are considered and their time series properties are discussed. The estimating function approach for nonlinear models is developed in detail in Chapter 4, and examples are added to illustrate the theory. A simulation study is carried out to examine the finite sample behavior of the proposed estimates based on the estimating functions.
4

Degtyarena, Anna Semenovna. "The window least mean square error algorithm." CSUSB ScholarWorks, 2003. https://scholarworks.lib.csusb.edu/etd-project/2385.

Full text
Abstract:
In order to improve the performance of the LMS (least mean square) algorithm by decreasing the amount of calculation, this research proposes to perform an update at each step only for those elements from the input data set that fall within a small window W near the separating hyperplane. This work aims to describe in detail the results that can be achieved by using the proposed LMS-with-window learning algorithm in information systems that employ neural network methodology for classification purposes.
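A minimal sketch of the windowed-update idea, with hypothetical settings for the step size `mu` and window `window` (not values from the thesis):

```python
import numpy as np

def window_lms(X, d, mu=0.05, window=0.5, epochs=20):
    """LMS-style weight update applied only to samples whose output
    falls within +/-window of the separating hyperplane (w . x = 0).
    Illustrative sketch; 'mu' and 'window' are hypothetical settings."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, target in zip(X, d):
            y = w @ x
            if abs(y) <= window:              # skip samples far from the boundary
                w += mu * (target - y) * x    # standard LMS correction
    return w

# linearly separable toy data; first column is a constant bias input
X = np.array([[1, 2.0], [1, 1.5], [1, -1.0], [1, -2.0]])
d = np.array([1.0, 1.0, -1.0, -1.0])
w = window_lms(X, d)
```

Samples far from the hyperplane are skipped entirely, which is where the claimed saving in calculation comes from: once a sample's output leaves the window, it no longer triggers an update.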
5

Cui, Xiangchen. "Mean-Square Error Bounds and Perfect Sampling for Conditional Coding." DigitalCommons@USU, 2000. https://digitalcommons.usu.edu/etd/7107.

Full text
Abstract:
In this dissertation, new theoretical results are obtained for bounding convergence and mean-square error in conditional coding, and new statistical methods for the practical application of conditional coding are developed. Criteria for uniform convergence are first examined. Conditional coding Markov chains are aperiodic, π-irreducible, and Harris recurrent. By applying the general theory of uniform ergodicity of Markov chains on general state spaces, one can conclude that conditional coding Markov chains are uniformly ergodic, and theoretical convergence rates based on Doeblin's condition can be found. Conditional coding Markov chains can also be viewed as having a finite state space. This allows the use of techniques that bound the second-largest eigenvalue, which in turn lead to bounds on the convergence rate and on the mean-square error of sample averages. The results are applied in two examples showing that these bounds are useful in practice. Next, some algorithms for perfect sampling in conditional coding are studied. An application of exact sampling to the independence sampler is shown to be equivalent to standard rejection sampling. In the case of single-site updating, traditional perfect sampling is not directly applicable when the state space has large cardinality and is not stochastically ordered, so a new procedure is developed that gives perfect samples at a predetermined confidence level. In the last chapter, procedures for and possibilities of applying conditional coding to mixture models are explored. Conditional coding can be used for the analysis of a finite mixture model. This methodology is general and easy to use.
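The second-largest eigenvalue that drives such bounds is easy to compute numerically for a small illustrative chain (not one from the dissertation); for a reversible finite chain, the distance to stationarity decays on the order of λ₂ⁿ:

```python
import numpy as np

# a small reversible transition matrix (symmetric, hence reversible
# with respect to the uniform distribution) -- purely illustrative
P = np.array([[0.6, 0.3, 0.1],
              [0.3, 0.4, 0.3],
              [0.1, 0.3, 0.6]])

eigvals = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
lambda2 = eigvals[1]   # second-largest eigenvalue modulus
# ||P^n - pi|| decays roughly like lambda2 ** n, so lambda2 controls both
# the convergence rate and the MSE of sample averages along the chain
```

Here the spectrum is {1, 0.5, 0.1}, so λ₂ = 0.5 and the chain mixes geometrically at that rate.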
6

Strobel, Matthias. "Estimation of minimum mean squared error with variable metric from censored observations." [S.l. : s.n.], 2008. http://nbn-resolving.de/urn:nbn:de:bsz:93-opus-35333.

Full text
7

Fodor, Balázs [Author]. "Contributions to Statistical Modeling for Minimum Mean Square Error Estimation in Speech Enhancement / Balázs Fodor." Aachen : Shaker, 2015. http://d-nb.info/1070151815/34.

Full text
8

Xing, Chengwen, and 邢成文. "Linear minimum mean-square-error transceiver design for amplify-and-forward multiple antenna relaying systems." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2010. http://hub.hku.hk/bib/B44769738.

Full text
9

Nicolson, Aaron M. "Deep Learning for Minimum Mean-Square Error and Missing Data Approaches to Robust Speech Processing." Thesis, Griffith University, 2020. http://hdl.handle.net/10072/399974.

Full text
Abstract:
Speech corrupted by background noise (or noisy speech) can cause misinterpretation and fatigue during phone and conference calls, and for hearing aid users. Noisy speech can also severely impact the performance of speech processing systems such as automatic speech recognition (ASR), automatic speaker verification (ASV), and automatic speaker identification (ASI) systems. Currently, deep learning approaches are employed in an end-to-end fashion to improve robustness. The target speech (or clean speech) is used as the training target, or large noisy speech datasets are used to facilitate multi-condition training. In this dissertation, we propose competitive alternatives to the preceding approaches by updating two classic robust speech processing techniques using deep learning: minimum mean-square error (MMSE) estimation and missing data approaches. An MMSE estimator aims to improve the perceived quality and intelligibility of noisy speech. This is accomplished by suppressing any background noise without distorting the speech. Prior to the introduction of deep learning, MMSE estimators were the standard speech enhancement approach. MMSE estimators require accurate estimation of the a priori signal-to-noise ratio (SNR) to attain a high level of speech enhancement performance. However, current methods produce a priori SNR estimates with a large tracking delay and a considerable amount of bias. Hence, we propose Deep Xi, a deep learning approach to a priori SNR estimation that is significantly more accurate than previous estimators. Through objective and subjective testing across multiple conditions, such as real-world non-stationary and coloured noise sources at multiple SNR levels, we show that Deep Xi allows MMSE estimators to produce the highest quality enhanced speech amongst all clean speech magnitude spectrum estimators.
Missing data approaches improve robustness by performing inference only on noisy speech features that reliably represent clean speech. In particular, the marginalisation method was able to significantly increase the robustness of Gaussian mixture model (GMM)-based speech classification systems (e.g. GMM-based ASR, ASV, or ASI systems) in the early 2000s. However, deep neural networks (DNNs) used in current speech classification systems are non-probabilistic, a requirement for marginalisation. Hence, multi-condition training or noisy speech pre-processing is used to increase the robustness of DNN-based speech classification systems. Recently, sum-product networks (SPNs) were proposed, which are deep probabilistic graphical models that can perform the probabilistic queries required for missing data approaches. While available toolkits for SPNs are in their infancy, we show through an ASI task that SPNs using missing data approaches could be a strong alternative for robust speech processing in the future. This dissertation demonstrates that MMSE estimators and missing data approaches are still relevant approaches to robust speech processing when assisted by deep learning.
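As one concrete illustration of why the a priori SNR ξ matters, the classic Wiener gain applied per spectral bin is ξ/(1+ξ); this is a textbook MMSE-family gain rule used here for illustration, not the Deep Xi model itself:

```python
import numpy as np

def wiener_gain(xi):
    """Wiener filter gain as a function of the a priori SNR xi (linear scale).
    MMSE-family estimators differ in the exact gain function, but all of
    them depend critically on an accurate estimate of xi."""
    xi = np.asarray(xi, dtype=float)
    return xi / (1.0 + xi)

# high a priori SNR -> gain near 1 (keep the bin);
# low a priori SNR -> gain near 0 (suppress the bin as noise)
gains = wiener_gain(np.array([0.1, 1.0, 10.0]))
```

A biased or delayed ξ estimate shifts every gain in the wrong direction, which is why accurate a priori SNR estimation is the bottleneck the dissertation targets.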
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Eng & Built Env
Science, Environment, Engineering and Technology
Full Text
10

Dear, K. B. G. "A generalisation of mean squared error and its application to variance component estimation." Thesis, University of Reading, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.379691.

Full text
11

Septarina, Septarina. "Micro-Simulation of the Roundabout at Idrottsparken Using Aimsun : A Case Study of Idrottsparken Roundabout in Norrköping, Sweden." Thesis, Linköpings universitet, Kommunikations- och transportsystem, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-79964.

Full text
Abstract:
Microscopic traffic simulation is a useful tool for analysing traffic and estimating the capacity and level of service of road networks. In this thesis, the four-legged Idrottsparken roundabout in the city of Norrkoping in Sweden is analysed using the microscopic traffic simulation package AIMSUN. For this purpose, data on traffic flow counts, travel times, and queue lengths were collected over three consecutive weekdays during both the morning and afternoon peak periods. The data were then used to build a simulation model of traffic at the roundabout. The root mean square error (RMSE) between observed and simulated queue length and travel time is minimised to obtain the optimal parameter values, and validation against travel time data is carried out to obtain a basic model that represents the existing condition of the system. Afterwards, the results of the new models were evaluated and compared to the results of a SUMO model for the same scenario. Based on the calibrated and validated model, three alternative scenarios were simulated and analysed to improve the efficiency of the traffic network at the roundabout: (1) adding one free right turn in the north and east sections; (2) adding one free right turn in the east and south sections; and (3) adding one lane in the roundabout. The analysis of these scenarios shows that the first and second scenarios are only able to reduce the queue length and travel time in two or three legs, while the third scenario is not able to improve the performance of the roundabout. It can be concluded that the first scenario is the best of the three. The comparison between AIMSUN and SUMO for the same scenario shows no significant differences between the results.
In the calibration process, to find the optimal parameter values matching the model measurements to the field measurements, both AIMSUN and SUMO use two parameters that significantly influence queue and travel time. The AIMSUN package uses the driver reaction time and the maximum acceleration, while the SUMO package uses the driver imperfection and, likewise, the driver reaction time.
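One way a combined queue/travel-time calibration objective could be written; the normalisation by observed means is an assumption for illustration, not necessarily the thesis's exact formulation:

```python
import numpy as np

def rmse(observed, simulated):
    """Root mean square error between observed and simulated values."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return np.sqrt(np.mean((observed - simulated) ** 2))

def combined_objective(obs_queue, sim_queue, obs_tt, sim_tt):
    """Calibration objective combining queue-length and travel-time RMSE.
    Dividing each RMSE by the observed mean (an assumption here) keeps the
    two quantities, measured in different units, on a comparable scale."""
    return (rmse(obs_queue, sim_queue) / np.mean(obs_queue)
            + rmse(obs_tt, sim_tt) / np.mean(obs_tt))

# illustrative numbers: queues in vehicles, travel times in seconds
score = combined_objective([10, 12, 8], [11, 12, 9], [45, 50, 40], [44, 52, 41])
```

Calibration then amounts to searching over the simulator parameters (e.g. reaction time, maximum acceleration) for the values that minimise this score.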
12

Nassr, Husam, and Kurt Kosbar. "PERFORMANCE EVALUATION FOR DECISION-FEEDBACK EQUALIZER WITH PARAMETER SELECTION ON UNDERWATER ACOUSTIC COMMUNICATION." International Foundation for Telemetering, 2017. http://hdl.handle.net/10150/626999.

Full text
Abstract:
This paper investigates the effect of parameter selection for decision feedback equalization (DFE) on communication performance over a dispersive underwater acoustic wireless channel (UAWC). A DFE based on the minimum mean-square error criterion (MMSE-DFE) has been employed in the implementation for evaluation purposes. The output from the MMSE-DFE is input to the decoder to estimate the transmitted bit sequence. The main goal of this experimental simulation is to determine the best parameter selection, such that a reduction in the computational load is achieved without degrading the performance of the system; the computational complexity can be reduced by selecting an equalizer of proper length. The system performance is tested for BPSK, QPSK, 8PSK, and 16QAM modulation, and the system is simulated over Proakis channel A and a real underwater acoustic channel estimated during the SPACE08 measurements to verify the selection.
13

Kahaei, Mohammad Hossein. "Performance analysis of adaptive lattice filters for FM signals and alpha-stable processes." Thesis, Queensland University of Technology, 1998. https://eprints.qut.edu.au/36044/7/36044_Digitised_Thesis.pdf.

Full text
Abstract:
The performance of an adaptive filter may be studied through the behaviour of the optimal and adaptive coefficients in a given environment. This thesis investigates the performance of finite impulse response adaptive lattice filters for two classes of input signals: (a) frequency modulated signals with polynomial phases of order p in complex Gaussian white noise (as nonstationary signals), and (b) impulsive autoregressive processes with alpha-stable distributions (as non-Gaussian signals). Initially, an overview is given of linear prediction and adaptive filtering. The convergence and tracking properties of the stochastic gradient algorithms are discussed for stationary and nonstationary input signals. It is explained that the stochastic gradient lattice algorithm has many advantages over the least-mean square algorithm, including a modular structure, easily guaranteed stability, less sensitivity to the eigenvalue spread of the input autocorrelation matrix, and easy quantization of the filter coefficients (normally called reflection coefficients). We then characterize the performance of the stochastic gradient lattice algorithm for frequency modulated signals through the optimal and adaptive lattice reflection coefficients. This is a difficult task due to the nonlinear dependence of the adaptive reflection coefficients on the preceding stages and the input signal. To ease the derivations, we assume that the reflection coefficients of each stage are independent of the inputs to that stage. The optimal lattice filter is then derived for frequency modulated signals by computing the optimal values of the residual errors, reflection coefficients, and recovery errors. Next, we show the tracking behaviour of the adaptive reflection coefficients for frequency modulated signals by computing, on average, the tracking model of these coefficients for the stochastic gradient lattice algorithm.
The second-order convergence of the adaptive coefficients is investigated by modeling the theoretical asymptotic variance of the gradient noise at each stage. The accuracy of the analytical results is verified by computer simulations. Using the previous analytical results, we show a new property of adaptive lattice filters: the polynomial-order reducing property. This property may be used to reduce the order of the polynomial phase of input frequency modulated signals. Considering two examples, we show how this property may be used in processing frequency modulated signals. In the first example, a detection procedure is carried out on a frequency modulated signal with a second-order polynomial phase in complex Gaussian white noise. We show that using this technique a better probability of detection is obtained for the reduced-order phase signals compared to that of the traditional energy detector. Also, it is empirically shown that the distribution of the gradient noise in the first adaptive reflection coefficient approximates the Gaussian law. In the second example, the instantaneous frequency of the same observed signal is estimated. We show that by using this technique a lower mean square error is achieved for the estimated frequencies at high signal-to-noise ratios in comparison to that of the adaptive line enhancer. The performance of adaptive lattice filters is then investigated for the second type of input signals, i.e., impulsive autoregressive processes with alpha-stable distributions. The concept of alpha-stable distributions is first introduced. We discuss how the stochastic gradient algorithm, which performs desirably for finite-variance input signals (like frequency modulated signals in noise), does not converge quickly for infinite-variance stable processes (due to its use of the minimum mean-square error criterion).
To deal with such problems, the concepts of the minimum dispersion criterion and fractional lower-order moments, and recently developed algorithms for stable processes, are introduced. We then study the possibility of using the lattice structure for impulsive stable processes. Accordingly, two new algorithms, the least-mean p-norm lattice algorithm and its normalized version, are proposed for lattice filters based on fractional lower-order moments. Simulation results show that, using the proposed algorithms, faster convergence speeds are achieved for parameter estimation of autoregressive stable processes with low to moderate degrees of impulsiveness in comparison to many other algorithms. We also discuss the effect of the impulsiveness of stable processes in generating misalignment between the estimated parameters and the true values. Due to the infinite variance of stable processes, the performance of the proposed algorithms is investigated only through extensive computer simulations.
14

Garcia-Alis, Daniel. "On adaptive MMSE receiver strategies for TD-CDMA." Thesis, University of Strathclyde, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.366896.

Full text
15

Yapici, Yavuz. "A Bidirectional LMS Algorithm For Estimation Of Fast Time-varying Channels." PhD thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613220/index.pdf.

Full text
Abstract:
Effort to estimate unknown time-varying channels as a part of high-speed mobile communication systems is of interest, especially for next-generation wireless systems. The high computational complexity of the optimal Wiener estimator usually makes its use impractical in fast time-varying channels. As a powerful candidate, the adaptive least mean squares (LMS) algorithm offers a computationally efficient solution with its simple first-order weight-vector update equation. However, the performance of the LMS algorithm deteriorates in time-varying channels as a result of the eigenvalue disparity, i.e., spread, of the input correlation matrix in such channels. In this work, we incorporate the LMS algorithm into the well-known bidirectional processing idea to produce an extension called the bidirectional LMS. This algorithm is shown to be robust to the adverse effects of time-varying channels such as large eigenvalue spread. The associated tracking performance is observed to be very close to that of the optimal Wiener filter in many cases, and the bidirectional LMS algorithm is therefore referred to as near-optimal. The computational complexity is increased by the bidirectional employment of the LMS algorithm, but is nevertheless significantly lower than that of the optimal Wiener filter. The tracking behavior of the bidirectional LMS algorithm is also analyzed, and a steady-state, step-size dependent mean square error (MSE) expression is derived for single-antenna flat-fading channels with various correlation properties. The aforementioned analysis is then generalized to single-antenna frequency-selective channels, where the so-called independence assumption is no longer applicable due to the channel memory at hand, and then to multi-antenna flat-fading channels. The optimal selection of the step-size values is also presented using the results of the MSE analysis.
The numerical evaluations show a very good match between the theoretical and the experimental results under various scenarios. The tracking analysis of the bidirectional LMS algorithm is believed to be novel in the sense that, although there are several works in the literature on bidirectional estimation, none of them provides a theoretical analysis of the underlying estimators. An iterative channel estimation scheme is also presented as a more realistic application for each of the estimation algorithms and channel models under consideration. As a result, the bidirectional LMS algorithm is observed to be very successful in this real-life application, with its increased but still practical level of complexity, near-optimal tracking performance, and robustness to imperfect initialization.
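One illustrative reading of the bidirectional idea for a static flat-fading tap: run LMS forward and backward over the training block and average the two estimates. This is a simplified sketch under stated assumptions (single real tap, known training symbols), not the thesis's full algorithm or analysis:

```python
import numpy as np

def lms_flat(x, d, mu=0.1):
    """Scalar LMS for a single flat-fading channel tap:
    h_hat <- h_hat + mu * e * x, with e = d - h_hat * x."""
    h_hat = 0.0
    for xn, dn in zip(x, d):
        e = dn - h_hat * xn
        h_hat += mu * e * xn
    return h_hat

def bidirectional_lms(x, d, mu=0.1):
    """Run LMS forward and backward over the block and average the two
    estimates -- an illustrative reading of bidirectional processing."""
    return 0.5 * (lms_flat(x, d, mu) + lms_flat(x[::-1], d[::-1], mu))

rng = np.random.default_rng(1)
x = rng.choice([-1.0, 1.0], size=300)          # BPSK training symbols
h = 0.8                                        # true (static) channel tap
d = h * x + 0.05 * rng.standard_normal(300)    # noisy received samples
h_hat = bidirectional_lms(x, d)
```

For a time-varying tap, the forward pass lags the channel in one direction and the backward pass in the other, which is the intuition for why combining the two directions improves tracking.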
16

Ding, Minhua. "Multiple-input multiple-output wireless system designs with imperfect channel knowledge." Thesis, Kingston, Ont. : [s.n.], 2008. http://hdl.handle.net/1974/1335.

Full text
17

Potter, Chris. "Modeling Channel Estimation Error in Continuously Varying MIMO Channels." International Foundation for Telemetering, 2007. http://hdl.handle.net/10150/604490.

Full text
Abstract:
ITC/USA 2007 Conference Proceedings / The Forty-Third Annual International Telemetering Conference and Technical Exhibition / October 22-25, 2007 / Riviera Hotel & Convention Center, Las Vegas, Nevada
The accuracy of channel estimation plays a crucial role in the demodulation of data symbols sent across an unknown wireless medium. In this work a new analytical expression for the channel estimation error of a multiple input multiple output (MIMO) system is obtained when the wireless medium is continuously changing in the temporal domain. Numerical examples are provided to illustrate our findings.
18

Thompson, Grant. "Effects of DEM resolution on GIS-based solar radiation model output: A comparison with the National Solar Radiation Database." University of Cincinnati / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1258663688.

Full text
19

Leksono, Catur Yudo, and Tina Andriyana. "Roundabout Microsimulation using SUMO : A Case Study in Idrottsparken Roundabout, Norrköping, Sweden." Thesis, Linköpings universitet, Kommunikations- och transportsystem, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-79771.

Full text
Abstract:
Idrottsparken roundabout in Norrkoping is located in the denser part of the city. Congestion occurs in peak hours, causing queues and extended travel times. This thesis aims to provide alternative models to reduce queue length and travel time. The types of observation data are flow, length of queue, and travel time, observed during peak hours in the morning and afternoon. The calibration process is done by minimising the root mean square error of queue, travel time, and the combination of both between the observed and calibrated model. SUMO version 0.14.0 is used to perform the microsimulation. There are two proposed alternatives, namely Scenario 1: an additional lane for right turns from the East leg to the North and from the North leg to the West, and Scenario 2: restriction of heavy goods vehicles passing Kungsgatan, which is located in the Northern leg of the Idrottsparken roundabout, during peak hours. For Scenario 1, the results from SUMO are compared with AIMSUN in terms of queue and travel time. The result of the microsimulation shows that the parameters with the biggest influence in the calibration process for SUMO are the driver imperfection and the driver's reaction time, while for AIMSUN they are the driver's reaction time and the maximum acceleration. From the analysis it is found that the current situation at Idrottsparken can be represented by a simulation model that uses a combination of the root mean square errors of queue and travel time in the calibration and validation process. Moreover, Scenario 2 is the best alternative for SUMO because it decreases queue length and travel time in almost all legs at the morning and afternoon peak hours without a significant accompanying increase in the other legs. The comparison between SUMO and AIMSUN shows that, in general, AIMSUN exhibits larger changes in queue and travel time due to the limited precision in SUMO for roundabout modelling.
20

Williams, Ian E. "Channel Equalization and Spatial Diversity for Aeronautical Telemetry Applications." International Foundation for Telemetering, 2010. http://hdl.handle.net/10150/605946.

Full text
Abstract:
ITC/USA 2010 Conference Proceedings / The Forty-Sixth Annual International Telemetering Conference and Technical Exhibition / October 25-28, 2010 / Town and Country Resort & Convention Center, San Diego, California
This work explores aeronautical telemetry communication performance with the SOQPSK-TG ARTM waveforms when frequency-selective multipath corrupts received information symbols. A multi-antenna equalization scheme is presented where each antenna's unique multipath channel is equalized using a pilot-aided optimal linear minimum mean-square error filter. Following independent channel equalization, a maximal ratio combining technique is used to generate a single receiver output for detection. This multi-antenna equalization process is shown to improve detection performance over maximal ratio combining alone.
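The maximal ratio combining step after per-antenna equalization can be sketched as follows; the SNR weights and symbol values are illustrative, not from the paper:

```python
import numpy as np

def mrc_combine(branch_symbols, branch_snrs):
    """Maximal ratio combining: weight each equalized branch by its
    post-equalization SNR and normalise, producing a single stream
    for detection. A simplified real-valued sketch."""
    w = np.asarray(branch_snrs, dtype=float)
    w = w / w.sum()
    return np.sum(w[:, None] * np.asarray(branch_symbols, dtype=float), axis=0)

# two equalized antenna branches carrying the same three symbols,
# the first branch with three times the SNR of the second
branches = [[1.1, -0.9, 1.0], [0.8, -1.2, 1.1]]
combined = mrc_combine(branches, branch_snrs=[3.0, 1.0])
```

Because each branch is equalized before combining, the weights reflect the residual quality of each branch rather than the raw multipath-corrupted signal, which is where the reported gain over plain MRC comes from.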
21

Stephens, Christopher Neil. "An investigation into the psychometric properties of the proportional reduction of mean squared error and augmented scores." Diss., University of Iowa, 2012. https://ir.uiowa.edu/etd/3539.

Full text
Abstract:
Augmentation procedures are designed to provide better estimates for a given test or subtest through the use of collateral information. The main purpose of this dissertation was to use Haberman's and Wainer's augmentation procedures on a large-scale, standardized achievement test to understand the relationship between the reliability and the correlation that combine to create the proportional reduction of mean squared error (PRMSE) statistic, and to compare the practical effects of Haberman's augmentation procedure with those of Wainer's. In this dissertation, Haberman's and Wainer's augmentation procedures were used on a data set consisting of a large-scale, standardized achievement test with tests in three content areas (reading, language arts, and mathematics) in both 4th and 8th grade. Each test could be broken down into between two and five content-area subtests, depending on the content area. The data sets contained between 2,500 and 3,000 examinees for each test. The PRMSE statistic was used on all of the data sets to evaluate the two augmentation procedures, one proposed by Haberman and one by Wainer. Following the augmentation analysis, the relationship between the reliability of the subtest to be augmented and that subtest's correlation with the rest of the test was investigated using a pseudo-simulated data set, which consisted of different values for those variables. Lastly, the Haberman and Wainer augmentation procedures were used on the data sets, and the augmented data were analyzed to determine the magnitude of the effects of using these procedures.
The main findings based on the data analysis and pseudo-simulated data analysis were as follows: (1) the more questions the better the estimates and the better the augmentation procedures; (2) there is virtually no difference between the Haberman and Wainer augmentation procedures, except for certain correlational relationships; (3) there is a significant effect of using the Haberman or Wainer augmentation procedures, however as the reliability increases, this effect lessens. The limitations of the study and possible future research are also discussed in the dissertation.
22

Kulkarni, Aditya. "Performance Analysis of Zero Forcing and Minimum Mean Square Error Equalizers on Multiple Input Multiple Output System on a Spinning Vehicle." International Foundation for Telemetering, 2014. http://hdl.handle.net/10150/577482.

Full text
Abstract:
ITC/USA 2014 Conference Proceedings / The Fiftieth Annual International Telemetering Conference and Technical Exhibition / October 20-23, 2014 / Town and Country Resort & Convention Center, San Diego, CA
Channel equalizers based on minimum mean square error (MMSE) and zero forcing (ZF) criteria have been formulated for a general scalable multiple input multiple output (MIMO) system and implemented for a 2x2 MIMO system with spatial multiplexing (SM) over a Rayleigh channel with additive white Gaussian noise. A model to emulate transmitters and receivers on a spinning vehicle has been developed, and a transceiver based on the BLAST architecture is developed in this work. A mathematical framework to explain the behavior of the ZF and MMSE equalizers is formulated. The performance of the equalizers has been validated for a case in which one of the communicating entities is a spinning aero vehicle. Performance analysis with respect to variation of the angular separation between the antennas and the relative antenna gain for each case is presented. Based on the simulation results, a setup with optimal design parameters for the placement of antennas, the choice of equalizer, and the transmit power is proposed.
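The two equalization criteria have standard closed forms; a minimal real-valued sketch for a noiseless 2x2 example (the paper's spinning-vehicle model and BLAST transceiver are not reproduced here, and the channel matrix below is illustrative):

```python
import numpy as np

def zf_equalizer(H):
    """Zero forcing: pseudo-inverse of the channel. Removes inter-stream
    interference completely but can amplify noise when H is ill-conditioned."""
    return np.linalg.pinv(H)

def mmse_equalizer(H, noise_var):
    """Linear MMSE: W = (H^H H + sigma^2 I)^-1 H^H (unit-power symbols
    assumed), trading residual interference against noise enhancement."""
    n_tx = H.shape[1]
    return np.linalg.inv(H.conj().T @ H + noise_var * np.eye(n_tx)) @ H.conj().T

# 2x2 spatial-multiplexing example with an illustrative channel matrix
H = np.array([[1.0, 0.5],
              [0.3, 1.0]])
s = np.array([1.0, -1.0])      # transmitted symbol vector
y = H @ s                      # noiseless received vector
s_zf = zf_equalizer(H) @ y     # ZF recovers s exactly in the noiseless case
```

As the noise variance goes to zero, the MMSE solution converges to the ZF solution; at realistic SNRs the MMSE equalizer gives the better error performance, which is the trade-off the paper analyzes for the spinning-vehicle link.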
APA, Harvard, Vancouver, ISO, and other styles
23

Alexandridis, Roxana Antoanela. "Minimum disparity inference for discrete ranked set sampling data." Connect to resource, 2005. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1126033164.

Full text
Abstract:
Thesis (Ph. D.)--Ohio State University, 2005.
Title from first page of PDF file. Document formatted into pages; contains xi, 124 p.; also includes graphics. Includes bibliographical references (p. 121-124). Available online via OhioLINK's ETD Center
APA, Harvard, Vancouver, ISO, and other styles
24

Karaer, Arzu. "Optimum bit-by-bit power allocation for minimum distortion transmission." Texas A&M University, 2005. http://hdl.handle.net/1969.1/4760.

Full text
Abstract:
In this thesis, bit-by-bit power allocation in order to minimize mean-squared error (MSE) distortion of a basic communication system is studied. This communication system consists of a quantizer. There may or may not be a channel encoder and a Binary Phase Shift Keying (BPSK) modulator. In the quantizer, natural binary mapping is used. First, the case where there is no channel coding is considered. In the uncoded case, hard decision decoding is done at the receiver. It is seen that errors that occur in the more significant information bits contribute more to the distortion than less significant bits. For the uncoded case, the optimum power profile for each bit is determined analytically and through computer-based optimization methods like differential evolution. For low signal-to-noise ratio (SNR), the less significant bits are allocated negligible power compared to the more significant bits. For high SNRs, it is seen that the optimum bit-by-bit power allocation gives constant MSE gain in dB over the uniform power allocation. Second, the coded case is considered. Linear block codes like (3,2), (4,3) and (5,4) single parity check codes and (7,4) Hamming codes are used and soft-decision decoding is done at the receiver. Approximate expressions for the MSE are considered in order to find a near-optimum power profile for the coded case. The optimization is done through a computer-based optimization method (differential evolution). For a simple code like the (7,4) Hamming code, simulations show that up to 3 dB MSE gain can be obtained by changing the power allocation on the information and parity bits. A systematic method to find the power profile for linear block codes is also introduced, given knowledge of the input-output weight enumerating function of the code. The information bits have the same power, and parity bits have the same power, and the two power levels can be different.
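The distortion being minimized can be illustrated numerically. Under natural binary mapping with hard decisions, an error in bit i shifts the reconstruction by 2^i quantization steps, so the channel-induced MSE is approximately a bit-error-probability-weighted sum, as sketched below (the 4-bit quantizer, SNR, and power profile are hypothetical, not optimized profiles from the thesis):

```python
import math

def q_func(x):
    # Gaussian tail probability Q(x).
    return 0.5 * math.erfc(x / math.sqrt(2))

def channel_mse(bit_powers, snr, step):
    """Approximate channel-induced MSE for an uncoded, natural-binary-mapped
    quantizer with BPSK and hard decisions: an error in bit i (weight 2**i)
    shifts the reconstruction level by 2**i * step."""
    return sum((2 ** i * step) ** 2 * q_func(math.sqrt(2 * p * snr))
               for i, p in enumerate(bit_powers))

# Illustrative 4-bit example: uniform vs. MSB-weighted power, same total power.
snr, step = 4.0, 1.0
uniform = [1.0, 1.0, 1.0, 1.0]
weighted = [0.2, 0.6, 1.2, 2.0]  # hypothetical profile favouring significant bits

mse_uniform = channel_mse(uniform, snr, step)
mse_weighted = channel_mse(weighted, snr, step)
```

Shifting power toward the most significant bits lowers the weighted sum, which is the qualitative effect the thesis optimizes.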
APA, Harvard, Vancouver, ISO, and other styles
25

DeNooyer, Eric-Jan D. "Statistical Idealities and Expected Realities in the Wavelet Techniques Used for Denoising." Scholar Commons, 2010. http://scholarcommons.usf.edu/etd/3929.

Full text
Abstract:
In the field of signal processing, one of the underlying enemies in obtaining a good quality signal is noise. The most common examples of signals that can be corrupted by noise are images and audio signals. Since the early 1980s, a time when wavelet transformations became a modernly defined tool, statistical techniques have been incorporated into processes that use wavelets with the goal of maximizing signal-to-noise ratios. We provide a brief history of wavelet theory, going back to Alfréd Haar's 1909 dissertation on orthogonal functions, as well as its important relationship to the earlier work of Joseph Fourier (circa 1801), which brought about that famous mathematical transformation, the Fourier series. We demonstrate how wavelet theory can be used to reconstruct an analyzed function, ergo, that it can be used to analyze and reconstruct images and audio signals as well. Then, in order to ground the understanding of the application of wavelets to the science of denoising, we discuss some important concepts from statistics. From all of these, we introduce the subject of wavelet shrinkage, a technique that combines wavelets and statistics into a "thresholding" scheme that effectively reduces noise without doing too much damage to the desired signal. Subsequently, we discuss how the effectiveness of these techniques is measured, both in the ideal sense and in the expected sense. We then look at an illustrative example in the application of one technique. Finally, we analyze this example more generally, in accordance with the underlying theory, and make some conclusions as to when wavelets are an effective technique in increasing a signal-to-noise ratio.
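A minimal sketch of the wavelet-shrinkage idea, using a one-level orthonormal Haar transform and the universal (VisuShrink) threshold; the test signal, noise level, and signal length are arbitrary choices for illustration:

```python
import numpy as np

def soft_threshold(coeffs, t):
    # Wavelet shrinkage: shrink coefficients toward zero by t, zeroing small ones.
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

def haar_step(x):
    # One level of the orthonormal Haar transform: averages and details.
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def inverse_haar_step(a, d):
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

rng = np.random.default_rng(1)
n, sigma = 1024, 0.5
clean = np.sign(np.sin(np.linspace(0, 4 * np.pi, n)))   # piecewise-constant signal
noisy = clean + sigma * rng.standard_normal(n)

a, d = haar_step(noisy)
t = sigma * np.sqrt(2 * np.log(n))        # universal (VisuShrink) threshold
denoised = inverse_haar_step(a, soft_threshold(d, t))
```

Detail coefficients of a piecewise-constant signal are mostly zero, so soft-thresholding them removes noise while largely preserving the signal, which is the trade-off the thesis discusses.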
APA, Harvard, Vancouver, ISO, and other styles
26

Gagakuma, Edem Coffie. "Multipath Channel Considerations in Aeronautical Telemetry." BYU ScholarsArchive, 2017. https://scholarsarchive.byu.edu/etd/6529.

Full text
Abstract:
This thesis describes the use of scattering functions to characterize time-varying multipath radio channels. Channel impulse responses were measured at Edwards Air Force Base (EAFB) and scattering functions generated from the impulse response data. From the scattering functions we compute the corresponding Doppler power spectrum and multipath intensity profile. These functions completely characterize the signal delay and the time-varying nature of the channel in question and are used by systems engineers to design reliable communications links. We observe from our results that flight paths with ample reflectors exhibit significant multipath events. We also examine the bit error rate (BER) performance of a reduced-complexity equalizer for a truncated version of the pulse amplitude modulation (PAM) representation of SOQPSK-TG in a multipath channel. Since this reduced-complexity equalizer is based on the maximum likelihood (ML) principle, we expect it to perform better than any of the filter-based equalizers used in estimating received SOQPSK-TG symbols. As such, we present a comparison between this ML detector and a minimum mean square error (MMSE) equalizer for the same example channel. The example channel used was motivated by the statistical channel characterizations described in this thesis. Our analysis shows that the ML equalizer outperforms the MMSE equalizer in estimating received SOQPSK-TG symbols.
APA, Harvard, Vancouver, ISO, and other styles
27

Savaux, Vincent. "Contribution to multipath channel estimation in an OFDM modulation context." Phd thesis, Supélec, 2013. http://tel.archives-ouvertes.fr/tel-00988283.

Full text
Abstract:
In wireless communications systems, the transmission channel between the transmitter and receiver antennas is one of the main sources of disruption for the signal. Multicarrier modulations, such as orthogonal frequency division multiplexing (OFDM), are very robust against the multipath effect and allow the transmitted signal to be recovered with a low error rate when combined with channel encoding. Channel estimation then plays a key role in the performance of communications systems. In this PhD thesis, we study techniques based on least square (LS) and minimum mean square error (MMSE) estimators. The MMSE estimator is optimal, but is much more complex than LS and requires a priori knowledge of the second-order moments of the channel and the noise. Two methods that reach performance close to that of the LMMSE estimator while getting around its drawbacks are investigated. A third part of the thesis then investigates the estimation errors due to interpolation.
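The relation between the two estimators can be sketched at the pilot subcarriers: LS simply divides out the pilots, while the LMMSE estimate smooths the LS estimate using the second-order channel statistics. The covariance model, pilot layout, and noise level below are invented for the illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 64                      # number of pilot subcarriers (illustrative)
sigma2 = 0.05               # noise variance

# Hypothetical frequency-domain channel with an exponential correlation profile.
idx = np.arange(N)
R_hh = 0.9 ** np.abs(idx[:, None] - idx[None, :])  # assumed channel covariance
L = np.linalg.cholesky(R_hh + 1e-9 * np.eye(N))
h = L @ (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

x = np.ones(N)              # unit-power pilot symbols
y = h * x + np.sqrt(sigma2 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

h_ls = y / x                               # least-squares estimate
W = R_hh @ np.linalg.inv(R_hh + sigma2 * np.eye(N))
h_mmse = W @ h_ls                          # LMMSE smoothing of the LS estimate

mse_ls = np.mean(np.abs(h_ls - h) ** 2)
mse_mmse = np.mean(np.abs(h_mmse - h) ** 2)
```

The LMMSE gain comes entirely from the assumed covariance, which is exactly the a priori knowledge the abstract identifies as the estimator's practical drawback.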
APA, Harvard, Vancouver, ISO, and other styles
28

Nounagnon, Jeannette Donan. "Using Kullback-Leibler Divergence to Analyze the Performance of Collaborative Positioning." Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/86593.

Full text
Abstract:
Geolocation accuracy is a crucial, life-or-death factor for rescue teams. Natural disasters and man-made disasters are just a few convincing reasons why fast and accurate position location is necessary. One way to unleash the potential of positioning systems is through the use of collaborative positioning. It consists of simultaneously solving for the position of two nodes that need to locate themselves. Although the literature has addressed the benefits of collaborative positioning in terms of accuracy, a theoretical foundation for the performance of collaborative positioning has been largely lacking. This dissertation uses information theory to perform a theoretical analysis of the value of collaborative positioning. The main research problem addressed is: 'Is collaboration always beneficial? If not, can we determine theoretically when it is and when it is not?' We show that the immediate advantage of collaborative estimation is in the acquisition of another set of information between the collaborating nodes. This acquisition of new information reduces the uncertainty on the localization of both nodes. Under certain conditions, this reduction in uncertainty occurs for both nodes by the same amount. Hence collaboration is beneficial in terms of uncertainty. However, reduced uncertainty does not necessarily imply improved accuracy. So, we define a novel theoretical model to analyze the improvement in accuracy due to collaboration. Using this model, we introduce a variational analysis of collaborative positioning to determine factors that affect the improvement in accuracy due to collaboration. We derive range conditions under which collaborative positioning starts to degrade the performance of standalone positioning. We derive and test criteria to determine on-the-fly (ahead of time) whether it is worth collaborating or not in order to improve accuracy.
The potential applications of this research include, but are not limited to: intelligent positioning systems, collaborating manned and unmanned vehicles, and improvement of GPS applications.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
29

Enqvist, Martin. "Linear Models of Nonlinear Systems." Doctoral thesis, Linköping : Linköpings universitet, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-5330.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Behrle, Charles D. "Computer simulation studies of multiple broadband target localization via frequency domain beamforming for planar arrays." Thesis, Monterey, California. Naval Postgraduate School, 1988. http://hdl.handle.net/10945/22976.

Full text
Abstract:
Approved for public release; distribution is unlimited
Computer simulation studies of a frequency domain adaptive beamforming algorithm are presented. These simulation studies were conducted to determine the multiple broadband target localization capability and the full angular coverage capability of the algorithm. The algorithm was evaluated at several signal-to-noise ratios with varying sampling rates. The number of iterations that the adaptive algorithm took to reach a minimum estimation error was determined. Results of the simulation studies indicate that the algorithm can localize multiple broadband targets and has full angular coverage capability.
http://archive.org/details/computersimulati00behr
Lieutenant, United States Navy
APA, Harvard, Vancouver, ISO, and other styles
31

Syntetos, Argyrios. "Forecasting of intermittent demand." Thesis, Online version, 2001. http://bibpurl.oclc.org/web/26215.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Drvoštěp, Tomáš. "Ekonomie vychýleného odhadu." Master's thesis, Vysoká škola ekonomická v Praze, 2014. http://www.nusl.cz/ntk/nusl-193409.

Full text
Abstract:
This thesis investigates the optimality of heuristic forecasting. According to Goldstein and Gigerenzer (2009), heuristics can be viewed as predictive models whose simplicity exploits the bias-variance trade-off. Economic agents learning in the context of rational expectations (Marcet and Sargent, 1989) employ, on the contrary, complex models of the whole economy. Both of these approaches can be perceived as an optimal response to the complexity of the prediction task and the availability of observations. This work introduces a straightforward extension to the standard model of decision making under uncertainty, where agents' utility depends on the accuracy of their predictions and where model complexity is moderated by a regularization parameter. Results of Monte Carlo simulations reveal that in complicated environments, where few observations are available, it is beneficial to construct simple models resembling heuristics. Unbiased models are preferred in more convenient conditions.
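The bias-variance intuition can be reproduced in a toy Monte Carlo, loosely in the spirit of the simulations described above (the target function, noise level, and model classes here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

def experiment(n_train, degree, trials=200):
    """Average out-of-sample MSE of a polynomial fit to a noisy nonlinear
    target -- a toy version of the bias-variance trade-off."""
    errors = []
    for _ in range(trials):
        x = rng.uniform(-1, 1, n_train)
        y = np.sin(2 * x) + 0.3 * rng.standard_normal(n_train)
        coeffs = np.polyfit(x, y, degree)
        x_test = np.linspace(-1, 1, 100)
        errors.append(np.mean((np.polyval(coeffs, x_test) - np.sin(2 * x_test)) ** 2))
    return float(np.mean(errors))

# With few observations, the simple (biased) "heuristic" model wins over the
# flexible, unbiased one because its variance is far smaller.
mse_simple_small = experiment(n_train=10, degree=1)
mse_complex_small = experiment(n_train=10, degree=7)
```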
APA, Harvard, Vancouver, ISO, and other styles
33

Vavruška, Marek. "Realised stochastic volatility in practice." Master's thesis, Vysoká škola ekonomická v Praze, 2012. http://www.nusl.cz/ntk/nusl-165381.

Full text
Abstract:
The Realised Stochastic Volatility model of Koopman and Scharth (2011) is applied to five stocks listed on the NYSE in this thesis. The aim of this thesis is to investigate the effect of speeding up trade data processing by skipping the cleaning rule that requires quote data. The framework of the Realised Stochastic Volatility model allows the realised measures to be biased estimates of the integrated volatility, which further supports this approach. The number of errors in recorded trades has decreased significantly during the past years. Different sample lengths were used to construct one-day-ahead forecasts of realised measures to examine the sensitivity of forecast precision to the rolling window length. Use of the longest window length does not lead to the lowest mean square error. The dominance of the Realised Stochastic Volatility model in terms of the lowest mean square errors of one-day-ahead out-of-sample forecasts has been confirmed.
APA, Harvard, Vancouver, ISO, and other styles
34

Thomas, Robin Rajan. "Optimisation of adaptive localisation techniques for cognitive radio." Diss., University of Pretoria, 2012. http://hdl.handle.net/2263/27076.

Full text
Abstract:
Spectrum, environment and location awareness are key characteristics of cognitive radio (CR). Knowledge of a user’s location as well as the surrounding environment type may enhance various CR tasks, such as spectrum sensing, dynamic channel allocation and interference management. This dissertation deals with the optimisation of adaptive localisation techniques for CR. The first part entails the development and evaluation of an efficient bandwidth determination (BD) model, which is a key component of the cognitive positioning system. This bandwidth efficiency is achieved using the Cramer-Rao lower bound derivations for a single-input-multiple-output (SIMO) antenna scheme. The performances of the single-input-single-output (SISO) and SIMO BD models are compared using three different generalised environmental models, viz. rural, urban and suburban areas. In the case of all three scenarios, the results reveal a marked improvement in the bandwidth efficiency for a SIMO antenna positioning scheme, especially for the 1×3 urban case, where a 62% root mean square error (RMSE) improvement over the SISO system is observed. The second part of the dissertation involves the presentation of a multiband time-of arrival (TOA) positioning technique for CR. The RMSE positional accuracy is evaluated using a fixed and dynamic bandwidth availability model. In the case of the fixed bandwidth availability model, the multiband TOA positioning model is initially evaluated using the two-step maximum-likelihood (TSML) location estimation algorithm for a scenario where line-of-sight represents the dominant signal path. Thereafter, a more realistic dynamic bandwidth availability model has been proposed, which is based on data obtained from an ultra-high frequency spectrum occupancy measurement campaign. The RMSE performance is then verified using the non-linear least squares, linear least squares and TSML location estimation techniques, using five different bandwidths. 
The proposed multiband positioning model performs well in poor signal-to-noise ratio conditions (-10 dB to 0 dB) when compared to a single band TOA system. These results indicate the advantage of opportunistic TOA location estimation in a CR environment.
Dissertation (MEng)--University of Pretoria, 2012.
Electrical, Electronic and Computer Engineering
unrestricted
APA, Harvard, Vancouver, ISO, and other styles
35

Zang, Xin. "Over-the-air Computation for Large-scale Wireless Data Fusion." Thesis, The University of Sydney, 2021. https://hdl.handle.net/2123/25100.

Full text
Abstract:
For future Internet-of-Things based Big Data applications, systems need to collect a vast volume of data from sensors and the environment. To interpret the meaning behind the collected data, it is challenging for an edge fusion center with limited computation capacity to run extensive computing tasks over large data sets. To tackle the challenge, by exploiting the superposition property of the multiple-access channel and functional decomposition, the recently proposed over-the-air computation (AirComp) technique enables effective joint data collection and computation from concurrent sensor transmissions. In this thesis, we first consider a single-antenna AirComp system consisting of K sensors and one receiver. We formulate an optimization problem to minimize the computation mean-squared error (MSE) of the K sensors' signals at the receiver by optimizing the transmission and receiver processing policy, under the peak power constraint of each sensor. Although the problem is not convex, we derive the computation-optimal policy in closed form. We comprehensively investigate the ergodic performance of the AirComp system, and the scaling laws of the average computation MSE and the average power consumption of different policies with respect to K. For the computation-optimal policy, we show that the policy has a vanishing average computation MSE and a vanishing average power consumption as K increases. In most of the existing work on AirComp, the optimal system-parameter design is considered under the peak-power constraint of each sensor. In my second work, we propose an optimal transmitter-receiver parameter design problem to minimize the computation MSE of an AirComp system under the sum-power constraint of the sensors. We solve the non-convex problem and obtain a closed-form solution. We investigate another problem that minimizes the sum power of the sensors under a constraint on the computation MSE.
Our results show that in both problems, the sensors with poor and good channel conditions should use less power than the ones with moderate channel conditions. Most existing work on AirComp assumes computation of spatial-and-temporal independent sensor signals, though in practice different sensor measurements are usually correlated and the current measurements are normally related to the previous ones. In my third work, we propose an AirComp system with spatial-and-temporal correlated sensor signals for the first time in the literature, and formulate the optimal AirComp policy design problem for achieving the minimum computation MSE. We derive the optimal AirComp policy to achieve the minimum computation MSE in each time step by utilizing the current and the previously received signals. We also propose and optimize a low-complexity AirComp policy in a closed form, which approaches the performance of the optimal policy.
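The kind of MSE-optimal scaling problem described above has a simple single-antenna caricature. In this sketch (the gains and variances are illustrative, and the thesis's actual policies also optimize the transmit side), the receiver estimates the sum of K independent sensor signals from y = Σ_k b_k s_k + n by choosing one scale a, and the quadratic computation MSE has a closed-form minimizer:

```python
import numpy as np

# Toy AirComp link: K sensors transmit simultaneously; the receiver observes
# y = sum_k b_k s_k + n and applies a single scale a to estimate sum_k s_k.
# Values below are illustrative, not from the thesis.
b = np.array([0.9, 0.5, 1.2, 0.7])   # effective (power-scaled) channel gains
sigma_s2, sigma_n2 = 1.0, 0.2        # signal and noise variances

def comp_mse(a):
    # E[(a*y - sum_k s_k)^2] for independent zero-mean signals.
    return sigma_s2 * np.sum((a * b - 1) ** 2) + a ** 2 * sigma_n2

# Setting d(MSE)/da = 0 for the quadratic gives the closed-form minimizer.
a_star = sigma_s2 * b.sum() / (sigma_s2 * np.sum(b ** 2) + sigma_n2)
```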
APA, Harvard, Vancouver, ISO, and other styles
36

Chitte, Sree Divya. "Source localization from received signal strength under lognormal shadowing." Thesis, University of Iowa, 2010. https://ir.uiowa.edu/etd/477.

Full text
Abstract:
This thesis considers statistical issues in source localization from received signal strength (RSS) measurements at sensor locations, under the practical assumption of log-normal shadowing. Distance information of the source from sensor locations can be estimated from RSS measurements, and many algorithms directly use powers of distances to localize the source, even though distance measurements are not directly available. The first part of the thesis considers the statistical analysis of distance estimation from RSS measurements. We show that the underlying problem is inefficient and there is only one unbiased estimator for this problem, and its mean square error (MSE) grows exponentially with noise power. Later, we provide the linear minimum mean square error (MMSE) estimator, whose bias and MSE are bounded in the noise power. The second part of the thesis establishes an isomorphism between estimates of differences between squares of distances and the source location. This is used to completely characterize the class of unbiased estimates of the source location and to show that their MSEs grow exponentially with noise powers. Later, we propose an estimate based on the linear MMSE estimate of distances that has error variance and bias that are bounded in the noise variance.
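The bias at the heart of this problem can be seen directly in a Monte Carlo sketch of the log-distance model P = P0 − 10n·log10(d) + w with Gaussian shadowing w (all parameter values below are illustrative): naively inverting a single RSS reading overestimates the distance by a factor exp(β²/2), with β = σ·ln(10)/(10n), which a multiplicative correction removes.

```python
import math, random

random.seed(4)

# Log-distance path-loss model with lognormal shadowing (illustrative values).
P0, n_exp, sigma = -40.0, 2.0, 4.0   # dBm at 1 m, path-loss exponent, shadowing std (dB)
d_true = 10.0

def rss(d):
    return P0 - 10 * n_exp * math.log10(d) + random.gauss(0.0, sigma)

# Naive inversion of one RSS reading overestimates distance on average,
# because E[10**(w/(10n))] = exp(beta**2 / 2) > 1 with beta = sigma*ln(10)/(10n).
beta = sigma * math.log(10) / (10 * n_exp)
bias_factor = math.exp(beta ** 2 / 2)

trials = 20000
naive = [10 ** ((P0 - rss(d_true)) / (10 * n_exp)) for _ in range(trials)]
mean_naive = sum(naive) / trials          # ~ d_true * bias_factor
corrected = mean_naive / bias_factor      # multiplicative bias removed
```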
APA, Harvard, Vancouver, ISO, and other styles
37

Ogorodnikova, Natalia. "Pareto πps sampling design vs. Poisson πps sampling design. : Comparison of performance in terms of mean-squared error and evaluation of factors influencing the performance measures." Thesis, Örebro universitet, Handelshögskolan vid Örebro Universitet, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-67978.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Jones, Haley M., and Haley Jones@anu edu au. "On multipath spatial diversity in wireless multiuser communications." The Australian National University. Research School of Information Sciences and Engineering, 2001. http://thesis.anu.edu.au./public/adt-ANU20050202.152811.

Full text
Abstract:
The study of the spatial aspects of multipath in wireless communications environments is an increasingly important addition to the study of the temporal aspects in the search for ways to increase the utilization of the available wireless channel capacity. Traditionally, multipath has been viewed as an encumbrance in wireless communications, two of the major impairments being signal fading and intersymbol interference. However, recently the potential advantages of the diversity offered by multipath-rich environments in multiuser communications have been recognised. Space-time coding, for example, is a recent technique which relies on a rich scattering environment to create many practically uncorrelated signal transmission channels. Most often, statistical models have been used to describe the multipath environments in such applications. This approach has met with reasonable success but is limited when the statistical nature of a field is not easily determined or is not readily described by a known distribution.
Our primary aim in this thesis is to probe further into the nature of multipath environments in order to gain a greater understanding of their characteristics and diversity potential. We highlight the shortcomings of beamforming in a multipath multiuser access environment. We show that the ability of a beamformer to resolve two or more signals in angle directly limits its achievable capacity.
We test the probity of multipath as a source of spatial diversity, the limiting case of which is co-located users. We introduce the concept of separability to define the fundamental limits of a receiver to extract the signal of a desired user from interfering users' signals and noise. We consider the separability performances of the minimum mean square error (MMSE), decorrelating (DEC) and matched filter (MF) detectors as we bring the positions of a desired and an interfering user closer together. We show that both the MMSE and DEC detectors are able to achieve acceptable levels of separability with the users as close as λ/10.
In seeking a better understanding of the nature of multipath fields themselves, we take two approaches. In the first we take a path-oriented approach. The effects on the variation of the field power of the relative values of parameters such as amplitude and propagation direction are considered for a two-path field. The results are applied to a theoretical analysis of the behaviour of linear detectors in multipath fields. This approach is insightful for fields with small numbers of multipaths, but quickly becomes mathematically complex.
In a more general approach, we take a field-oriented view, seeking to quantify the complexity of arbitrary fields. We find that a multipath field has an intrinsic dimensionality of (πe)R/λ ≈ 8.54R/λ for a field in a two-dimensional circular region, increasing only linearly with the radius R of the region. This result implies that there is no such thing as an arbitrarily complicated multipath field. That is, a field generated by any number of nearfield and farfield, specular and diffuse multipath reflections is no more complicated than a field generated by a limited number of plane waves. As such, there are limits on how rich multipath can be. This result has significant implications, including means i) of determining a parsimonious parameterization for arbitrary multipath fields and ii) of synthesizing arbitrary multipath fields with arbitrarily located nearfield or farfield, spatially discrete or continuous sources. The theoretical results are corroborated by examples of multipath field analysis and synthesis.
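The quoted dimensionality bound is simple enough to evaluate numerically; the snippet below just restates the formula N ≈ (πe)R/λ from the abstract.

```python
import math

# Intrinsic dimensionality of a multipath field in a 2-D circular region of
# radius R, as quoted in the abstract: N ~ (pi * e) * R / wavelength.
def field_dimension(radius, wavelength):
    return math.pi * math.e * radius / wavelength

approx = field_dimension(1.0, 1.0)   # region radius equal to one wavelength
```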
APA, Harvard, Vancouver, ISO, and other styles
39

Huang, Deng. "Experimental planning and sequential kriging optimization using variable fidelity data." Connect to this title online, 2005. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1110297243.

Full text
Abstract:
Thesis (Ph. D.)--Ohio State University, 2005.
Title from first page of PDF file. Document formatted into pages; contains xi, 120 p.; also includes graphics (some col.). Includes bibliographical references (p. 114-120). Available online via OhioLINK's ETD Center
APA, Harvard, Vancouver, ISO, and other styles
40

Jun, Shi. "Frequentist Model Averaging For Functional Logistic Regression Model." Thesis, Uppsala universitet, Statistiska institutionen, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-352519.

Full text
Abstract:
Frequentist model averaging as a newly emerging approach provides us a way to overcome the uncertainty caused by traditional model selection in estimation. It acknowledges the contribution of multiple models, instead of making inference and prediction purely based on one single model. Functional logistic regression is also a burgeoning method in studying the relationship between functional covariates and a binary response. In this paper, the frequentist model averaging approach is applied to the functional logistic regression model. A simulation study is implemented to compare its performance with model selection. The analysis shows that when conditional probability is taken as the focus parameter, model averaging is superior to model selection based on BIC. When the focus parameter is the intercept and slopes, model selection performs better.
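One standard concrete instantiation of frequentist model averaging uses smooth BIC weights; the candidate models, likelihoods, and focus-parameter estimates below are hypothetical, and the weighting scheme studied in the thesis may differ:

```python
import math

def bic(log_lik, k, n):
    # Bayesian information criterion for a model with k parameters, n observations.
    return -2 * log_lik + k * math.log(n)

def bic_weights(bics):
    # Smooth-BIC weights: exp(-BIC/2), normalized (shifted by the minimum
    # for numerical stability).
    m = min(bics)
    w = [math.exp(-(b - m) / 2) for b in bics]
    s = sum(w)
    return [v / s for v in w]

# Hypothetical candidate models: (log-likelihood, number of parameters).
models = [(-120.0, 2), (-118.5, 4), (-118.3, 7)]
n_obs = 100
weights = bic_weights([bic(ll, k, n_obs) for ll, k in models])

# Model-averaged estimate of a focus parameter, here hypothetical
# per-model estimates of a conditional probability.
thetas = [0.40, 0.46, 0.47]
theta_avg = sum(w * t for w, t in zip(weights, thetas))
```

Rather than committing to the single best-BIC model, every candidate contributes in proportion to its weight, which is the uncertainty-acknowledging behaviour the abstract describes.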
APA, Harvard, Vancouver, ISO, and other styles
41

Glickman, Mark. "Disturbance monitoring in distributed power systems." Thesis, Queensland University of Technology, 2007. https://eprints.qut.edu.au/16497/1/Mark_Glickman_Thesis.pdf.

Full text
Abstract:
Power system generators are interconnected in a distributed network to allow sharing of power. If one of the generators cannot meet the power demand, spare power is diverted from neighbouring generators. However, this approach also allows for propagation of electric disturbances. An oscillation arising from a disturbance at a given generator site will affect the normal operation of neighbouring generators and might cause them to fail. Hours of production time will be lost in the time it takes to restart the power plant. If the disturbance is detected early, appropriate control measures can be applied to ensure system stability. The aim of this study is to improve existing algorithms that estimate the oscillation parameters from acquired generator data to detect potentially dangerous power system disturbances. When disturbances occur in power systems (due to load changes or faults), damped oscillations (or "modes") are created. Modes which are heavily damped die out quickly and pose no threat to system stability. Lightly damped modes, by contrast, die out slowly and are more problematic. Of more concern still are "negatively damped" modes which grow exponentially with time and can ultimately cause the power system to fail. Widespread blackouts are then possible. To avert power system failures it is necessary to monitor the damping of the oscillating modes. This thesis proposes a number of damping estimation algorithms for this task. If the damping is found to be very small or even negative, then additional damping needs to be introduced via appropriate control strategies. This thesis presents a number of new algorithms for estimating the damping of modal oscillations in power systems. The first of these algorithms uses multiple orthogonal sliding windows along with least-squares techniques to estimate the modal damping. 
This algorithm produces results which are superior to those of earlier sliding window algorithms (that use only one pair of sliding windows to estimate the damping). The second algorithm uses a different modification of the standard sliding window damping estimation algorithm - the algorithm exploits the fact that the Signal to Noise Ratio (SNR) within the Fourier transform of practical power system signals is typically constant across a wide frequency range. Accordingly, damping estimates are obtained at a range of frequencies and then averaged. The third algorithm applied to power system analysis is based on optimal estimation theory. It is computationally efficient and gives optimal accuracy, at least for modes which are well separated in frequency.
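The sliding-window principle behind these estimators can be sketched for a single noiseless mode: the DFT peak magnitude of a window starting at t0 scales as exp(σ·t0), so two offset windows yield the damping directly. The sample rate, mode parameters, and window sizes below are illustrative, not from the thesis:

```python
import numpy as np

# A damped mode A*exp(sigma*t)*cos(2*pi*f0*t): the DFT peak magnitude of a
# window starting at t0 scales as exp(sigma*t0).
fs = 100.0                       # sample rate (Hz), illustrative
t = np.arange(0, 10, 1 / fs)
sigma_true, f0 = -0.15, 1.5      # damping (1/s) and modal frequency (Hz)
x = np.exp(sigma_true * t) * np.cos(2 * np.pi * f0 * t)

win = 256                        # window length in samples
offset = 200                     # window separation in samples
X1 = np.fft.rfft(x[:win])
X2 = np.fft.rfft(x[offset:offset + win])
k = np.argmax(np.abs(X1))        # bin of the modal peak

# Ratio of peak magnitudes gives the damping estimate.
sigma_hat = np.log(np.abs(X2[k]) / np.abs(X1[k])) / (offset / fs)
```

With noise, a single window pair becomes unreliable; averaging over multiple orthogonal windows or over many frequency bins, as the thesis proposes, stabilizes the estimate.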
APA, Harvard, Vancouver, ISO, and other styles
42

Glickman, Mark. "Disturbance monitoring in distributed power systems." Queensland University of Technology, 2007. http://eprints.qut.edu.au/16497/.

Full text
Abstract:
Power system generators are interconnected in a distributed network to allow sharing of power. If one of the generators cannot meet the power demand, spare power is diverted from neighbouring generators. However, this approach also allows for propagation of electric disturbances. An oscillation arising from a disturbance at a given generator site will affect the normal operation of neighbouring generators and might cause them to fail. Hours of production time will be lost in the time it takes to restart the power plant. If the disturbance is detected early, appropriate control measures can be applied to ensure system stability. The aim of this study is to improve existing algorithms that estimate the oscillation parameters from acquired generator data to detect potentially dangerous power system disturbances. When disturbances occur in power systems (due to load changes or faults), damped oscillations (or "modes") are created. Modes which are heavily damped die out quickly and pose no threat to system stability. Lightly damped modes, by contrast, die out slowly and are more problematic. Of more concern still are "negatively damped" modes which grow exponentially with time and can ultimately cause the power system to fail. Widespread blackouts are then possible. To avert power system failures it is necessary to monitor the damping of the oscillating modes. This thesis proposes a number of damping estimation algorithms for this task. If the damping is found to be very small or even negative, then additional damping needs to be introduced via appropriate control strategies. This thesis presents a number of new algorithms for estimating the damping of modal oscillations in power systems. The first of these algorithms uses multiple orthogonal sliding windows along with least-squares techniques to estimate the modal damping. 
This algorithm produces results which are superior to those of earlier sliding window algorithms (that use only one pair of sliding windows to estimate the damping). The second algorithm uses a different modification of the standard sliding window damping estimation algorithm - the algorithm exploits the fact that the Signal to Noise Ratio (SNR) within the Fourier transform of practical power system signals is typically constant across a wide frequency range. Accordingly, damping estimates are obtained at a range of frequencies and then averaged. The third algorithm applied to power system analysis is based on optimal estimation theory. It is computationally efficient and gives optimal accuracy, at least for modes which are well separated in frequency.
APA, Harvard, Vancouver, ISO, and other styles
43

Hsiao, Wen-Hsin. "Aspects of Fourier imaging." Thesis, University of Canterbury. Electrical and Computer Engineering, 2008. http://hdl.handle.net/10092/1245.

Full text
Abstract:
A number of topics related to Fourier imaging are investigated. Relationships between the magnitude of errors in the amplitude and phase of the Fourier transform of images and the mean square error in reconstructed images are derived. The differing effects of amplitude and phase errors are evaluated, and "equivalent" amplitude and phase errors are derived. A model of the probability density function of the Fourier amplitudes of images is derived. The fundamental basis of phase dominance is studied and quantified. Inconsistencies in published counter-examples of phase dominance are highlighted. The key characteristics of natural images that lead to their observed power spectral behaviour with spatial frequency are determined.
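The basic link between Fourier-domain errors and reconstruction error follows from Parseval's theorem, sketched below on a 1-D "image" (a hypothetical toy, not the thesis's derivations): the mean square error of the reconstruction equals the mean square error of the Fourier coefficients, up to the 1/N normalization.

```python
import cmath

N = 16
x = [(n % 5) * 0.3 for n in range(N)]            # arbitrary real "image" row

def dft(v):
    return [sum(v[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(V):
    return [sum(V[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

X = dft(x)
Xerr = [c * 1.05 for c in X]                     # 5% amplitude error on every coefficient
xerr = idft(Xerr)                                # reconstruct from the corrupted spectrum

mse_image = sum(abs(a - b) ** 2 for a, b in zip(xerr, x)) / N
mse_fourier = sum(abs(a - b) ** 2 for a, b in zip(Xerr, X)) / N ** 2
# by Parseval's theorem the two quantities agree
```

Distinguishing how amplitude-only versus phase-only perturbations of equal Fourier-domain energy distribute this error across the image is the kind of question the abstract's "equivalent" errors address.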
APA, Harvard, Vancouver, ISO, and other styles
44

Zhao, Zhanlue. "Performance Appraisal of Estimation Algorithms and Application of Estimation Algorithms to Target Tracking." ScholarWorks@UNO, 2006. http://scholarworks.uno.edu/td/394.

Full text
Abstract:
This dissertation consists of two parts. The first part deals with the performance appraisal of estimation algorithms. The second part focuses on the application of estimation algorithms to target tracking. Performance appraisal is crucial for understanding, developing and comparing various estimation algorithms. In particular, as estimation theory evolves and problem complexity increases, it is becoming more and more challenging for engineers to draw comprehensive conclusions about performance, and the existing theoretical results are inadequate for practical reference. The first part of this dissertation is dedicated to performance measures, which include local performance measures, global performance measures and a model distortion measure. The second part focuses on the application of recursive best linear unbiased estimation (BLUE), or linear minimum mean square error (LMMSE) estimation, to nonlinear measurement problems in target tracking. The Kalman filter has been the dominant basis for dynamic state filtering for several decades. Beyond the Kalman filter, a more fundamental basis for recursive best linear unbiased filtering has been thoroughly investigated in a series of papers by Dr. X. Rong Li. Based on this quasi-recursive best linear unbiased filtering technique, the linear-Gaussian assumptions of the Kalman filter can be relaxed, yielding a general linear filtering technique for nonlinear systems. An approximate optimal BLUE filter is implemented for nonlinear measurements in target tracking and outperforms the existing method significantly in terms of accuracy, credibility and robustness.
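For a scalar toy problem (a minimal sketch, not the BLUE filter of the dissertation), the LMMSE estimator of a zero-mean x from the measurement y = x + n is x_hat = [var_x / (var_x + var_n)] * y, and its empirical mean square error matches the theoretical minimum var_x * var_n / (var_x + var_n):

```python
import random

random.seed(0)
var_x, var_n = 4.0, 1.0
gain = var_x / (var_x + var_n)        # LMMSE gain for y = x + n, zero means
trials = 20000
se = 0.0
for _ in range(trials):
    x = random.gauss(0, var_x ** 0.5)
    y = x + random.gauss(0, var_n ** 0.5)
    se += (gain * y - x) ** 2
mse = se / trials
# theoretical minimum MSE = var_x * var_n / (var_x + var_n) = 0.8
```

The recursive BLUE filtering investigated in the dissertation generalizes this projection idea to dynamic state estimation with nonlinear measurements.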
APA, Harvard, Vancouver, ISO, and other styles
45

Shikhar. "COMPRESSIVE IMAGING FOR DIFFERENCE IMAGE FORMATION AND WIDE-FIELD-OF-VIEW TARGET TRACKING." Diss., The University of Arizona, 2010. http://hdl.handle.net/10150/194741.

Full text
Abstract:
Use of imaging systems for performing various situational awareness tasks in military and commercial settings has a long history. There is increasing recognition, however, that a much better job can be done by developing non-traditional optical systems that exploit the task-specific system aspects within the imager itself. In some cases, a direct consequence of this approach can be real-time data compression along with increased measurement fidelity of the task-specific features. In others, compression can potentially allow us to perform high-level tasks such as direct tracking using the compressed measurements without reconstructing the scene of interest. In this dissertation we present novel advancements in feature-specific (FS) imagers for large field-of-view surveillance, and estimation of temporal object-scene changes utilizing the compressive imaging paradigm. We develop these two ideas in parallel. In the first case we show a feature-specific (FS) imager that optically multiplexes multiple, encoded sub-fields of view onto a common focal plane. Sub-field encoding enables target tracking by creating a unique connection between target characteristics in superposition space and the target's true position in real space. This is accomplished without reconstructing a conventional image of the large field of view. System performance is evaluated in terms of two criteria: average decoding time and probability of decoding error. We study these performance criteria as a function of resolution in the encoding scheme and signal-to-noise ratio. We also include simulation and experimental results demonstrating our novel tracking method. In the second case we present a FS imager for estimating temporal changes in the object scene over time by quantifying these changes through a sequence of difference images. The difference images are estimated by taking compressive measurements of the scene. Our goals are twofold. First, to design the optimal sensing matrix for taking compressive measurements. In scenarios where such sensing matrices are not tractable, we consider plausible candidate sensing matrices that either use the available a priori information or are non-adaptive. Second, we develop closed-form and iterative techniques for estimating the difference images. We present results to show the efficacy of these techniques and discuss the advantages of each.
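The premise that difference images can be estimated directly from compressive data rests on the linearity of the measurement model y = Ax: the difference of two measurement vectors equals a measurement of the difference image itself. A hypothetical sketch (the ±1 sensing matrix and the scene sizes are illustrative, not from the dissertation):

```python
import random

random.seed(1)
n, m = 12, 4                     # scene size and number of compressive measurements
A = [[random.choice([-1.0, 1.0]) for _ in range(n)] for _ in range(m)]
x1 = [random.random() for _ in range(n)]                        # scene at time 1
x2 = [v + (0.5 if i == 3 else 0.0) for i, v in enumerate(x1)]   # one pixel changed

def measure(x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

dy = [b - a for a, b in zip(measure(x1), measure(x2))]
# linearity of y = A x: dy equals a direct measurement of the difference image
dy_direct = measure([b - a for a, b in zip(x1, x2)])
```

Recovering the (typically sparse) difference image from the m < n values in dy is then exactly the estimation problem that the dissertation's closed-form and iterative techniques address.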
APA, Harvard, Vancouver, ISO, and other styles
46

Lin, Lizhen. "Nonparametric Inference for Bioassay." Diss., The University of Arizona, 2012. http://hdl.handle.net/10150/222849.

Full text
Abstract:
This thesis proposes some new model-independent, or nonparametric, methods for estimating the dose-response curve and the effective dosage curve in the context of bioassay. The research problem is also of importance in environmental risk assessment and other areas of the health sciences. It is shown in the thesis that our new nonparametric methods, while possessing optimal asymptotic properties, also exhibit strong finite-sample performance. Although our specific emphasis is on bioassay and environmental risk assessment, the methodology developed in this dissertation applies broadly to general order-restricted inference.
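A standard building block of order-restricted inference for dose-response curves is isotonic regression, computed by the pool-adjacent-violators algorithm (PAVA). The sketch below shows the generic algorithm, not necessarily the estimators proposed in the thesis:

```python
def pava(y):
    """Pool Adjacent Violators: least-squares non-decreasing fit to y."""
    out = []                                  # blocks of [mean, count]
    for v in y:
        out.append([v, 1])
        # merge backwards while monotonicity is violated
        while len(out) > 1 and out[-2][0] > out[-1][0]:
            m2, c2 = out.pop()
            m1, c1 = out.pop()
            out.append([(m1 * c1 + m2 * c2) / (c1 + c2), c1 + c2])
    fit = []
    for m, c in out:
        fit.extend([m] * c)
    return fit
```

For example, pava([0.1, 0.3, 0.2, 0.6]) pools the violating pair (0.3, 0.2) into their average, giving [0.1, 0.25, 0.25, 0.6], a monotone dose-response estimate from observed response proportions.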
APA, Harvard, Vancouver, ISO, and other styles
47

Challakere, Nagaravind. "Carrier Frequency Offset Estimation for Orthogonal Frequency Division Multiplexing." DigitalCommons@USU, 2012. https://digitalcommons.usu.edu/etd/1423.

Full text
Abstract:
This thesis presents a novel method to solve the problem of estimating the carrier frequency offset in an Orthogonal Frequency Division Multiplexing (OFDM) system. The approach is based on the minimization of the probability of symbol error and is hence called the Minimum Symbol Error Rate (MSER) approach. An existing approach based on Maximum Likelihood (ML) is chosen to benchmark the performance of the MSER-based algorithm. The MSER approach is computationally intensive, so the thesis evaluates the approximations that can be made to the MSER-based objective function to make the computation tractable. A modified gradient function based on the MSER objective is developed which provides better performance characteristics than the ML-based estimator. The estimates produced by the MSER approach exhibit lower mean squared error than the ML benchmark. The performance of the MSER-based estimator is simulated with Quaternary Phase Shift Keying (QPSK) symbols, but the algorithm presented is applicable to all complex symbol constellations.
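As a point of reference, the classic correlation-based ML-style frequency-offset estimator that such work benchmarks against can be sketched in a few lines. This is a generic Moose-type illustration on a noiseless repeated block, not the MSER algorithm of the thesis: the offset rotates the second copy of a repeated block by a fixed phase, and the angle of the cross-correlation between the two halves recovers it.

```python
import cmath, math, random

random.seed(2)
N = 64
eps = 0.05                       # true normalized carrier frequency offset
qpsk = [cmath.exp(2j * math.pi * p) for p in (0.125, 0.375, 0.625, 0.875)]
s = [random.choice(qpsk) for _ in range(N)]
tx = s + s                       # one QPSK block transmitted twice
rx = [v * cmath.exp(2j * math.pi * eps * n / N) for n, v in enumerate(tx)]

# the phase of the correlation between the two halves reveals the offset
corr = sum(rx[n + N] * rx[n].conjugate() for n in range(N))
eps_hat = cmath.phase(corr) / (2 * math.pi)
```

In noise, the variance of such estimates is the mean-squared-error figure against which the MSER approach is compared.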
APA, Harvard, Vancouver, ISO, and other styles
48

Thiebaut, Nicolene Magrietha. "Statistical properties of forward selection regression estimators." Diss., University of Pretoria, 2011. http://hdl.handle.net/2263/29520.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Ramalho, Guilherme Matiussi. "Uma abordagem estatística para o modelo do preço spot da energia elétrica no submercado sudeste/centro-oeste brasileiro." Universidade de São Paulo, 2014. http://www.teses.usp.br/teses/disponiveis/3/3139/tde-26122014-145848/.

Full text
Abstract:
The objective of this work is the development of a statistical tool to study the spot price of electrical energy in the Southeast/Middle-West (SE/CO) subsystem of the Brazilian National Interconnected System, using least-squares linear regression and the likelihood ratio test as instruments for developing and evaluating the models. Examining the descriptive statistical results of the models, and differently from what is reported in the literature, the first conclusion is that the seasonal variables, when analyzed alone, track the PLD spot price poorly. After the seasonal component is analyzed, the influence of energy supply and energy demand as input variables is examined, from which it is concluded that stored energy and thermoelectric power production are, specifically, the variables that most influence spot prices in the studied subsystem. Among the models tested, the one that offered the best results was a mixed model built from the best input variables of the preliminarily tested models, achieving a coefficient of determination R2 of 0.825, a result that can be considered adherent to the spot price. The final chapter presents an introduction to a spot-price prediction model, allowing the behavior of the price to be analyzed as the input variables change.
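The coefficient of determination reported above comes from a least-squares fit, R2 = 1 - SS_res / SS_tot. A minimal sketch on hypothetical data (the numbers are illustrative, not the PLD price series):

```python
# hypothetical regressor and response values
xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [2.1, 3.9, 6.2, 8.1, 9.8, 12.2]

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
# least-squares slope and intercept for the simple linear model y = a + b x
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = my - b * mx
ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
ss_tot = sum((y - my) ** 2 for y in ys)
r2 = 1 - ss_res / ss_tot
```

With several regressors, as in the mixed model above, the fit is computed the same way from the multivariate normal equations, and R2 retains the same interpretation as the fraction of variance explained.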
APA, Harvard, Vancouver, ISO, and other styles
50

Bernal, Regina Tomie Ivata. "Inquéritos por telefone: inferências válidas em regiões com baixa taxa de cobertura de linhas residenciais." Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/6/6132/tde-09092011-120701/.

Full text
Abstract:
Introduction: Telephone surveys have several attractive features compared with household surveys, in particular low operating cost and rapid dissemination of results. However, the exclusion of households without a landline telephone can raise a serious validity issue for the estimates obtained. Objective: To assess potential biases in the results published by the Surveillance System for Risk Factors for Chronic Diseases by Telephone Interview (VIGITEL) in a municipality with low landline coverage. Methods: Using results from the household survey carried out in the municipality of Rio Branco-AC, where 41 per cent of households have a landline, we sought to locate biases introduced into the VIGITEL results. An alternative weighting method was used to reduce the bias of the VIGITEL estimates. Results: VIGITEL underestimates most of the estimated prevalences. Post-stratification weights partially eliminate the bias, which originates in the low rate of landline coverage. On the other hand, using these weights when they are not needed amplified the bias of variables not associated with landline ownership. Conclusions: In municipalities with a low rate of landline coverage, it is necessary to implement a new weighting method and a strategy for selecting the external variables used to build the post-stratification weights, so as to minimize the bias in the estimates of the surveyed variables.
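Post-stratification weighting of the kind evaluated here can be sketched as follows (a hypothetical two-stratum toy, not VIGITEL's actual weighting scheme): each respondent is weighted by the ratio of the known population share of their stratum to that stratum's share in the telephone sample.

```python
# census shares (assumed known) and a landline sample that over-represents "young"
pop_share = {"young": 0.40, "old": 0.60}
sample = ["young"] * 70 + ["old"] * 30

n = len(sample)
samp_share = {g: sample.count(g) / n for g in pop_share}
weight = {g: pop_share[g] / samp_share[g] for g in pop_share}

# true prevalence of some indicator by stratum: 20% among young, 50% among old
prev = {"young": 0.20, "old": 0.50}
naive = sum(prev[g] for g in sample) / n                    # biased toward "young"
adjusted = sum(prev[g] * weight[g] for g in sample) / n     # matches population value
```

Here the unweighted estimate is 0.29 while the weighted one recovers the population prevalence of 0.38; the abstract's caution is that applying such weights to variables unrelated to landline ownership can instead add bias.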
APA, Harvard, Vancouver, ISO, and other styles
