
Dissertations / Theses on the topic 'Estimation multiple de moyennes'


Consult the top 50 dissertations / theses for your research on the topic 'Estimation multiple de moyennes.'


1

Fermanian, Jean-Baptiste. "High dimensional multiple means estimation and testing with applications to machine learning." Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASM035.

Full text
Abstract:
In this thesis, we study the influence of high dimension in testing and estimation problems. We analyze the dimension dependence of the separation rate of a closeness test and of the quadratic risk of multiple vector estimation. We complement existing results by studying these dependencies in the case of non-isotropic distributions. For such distributions, the role of the dimension is played by notions of effective dimension defined from the covariance of the distributions. This framework covers infinite-dimensional data such as the kernel mean embedding, a machine learning tool that we seek to estimate. Using this analysis, we construct methods for simultaneously estimating the mean vectors of different distributions from independent samples of each. These estimators perform better, both theoretically and practically, than the empirical means, particularly in unfavorable situations where the (effective) dimension is large. These methods make explicit or implicit use of the relative ease of testing compared with estimation. They are based on the construction of estimators of distances and of moments of the covariance, for which we provide non-asymptotic concentration bounds. Particular attention is given to the study of bounded data, for which a specific analysis is required. Our methods are accompanied by a minimax analysis justifying their optimality. In a final part, we propose an interpretation of the attention mechanism used in Transformer neural networks as a multiple vector estimation problem. In a simplified framework, this mechanism shares similar ideas with our approaches, and we highlight its denoising effect in high dimension.
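The high-dimensional shrinkage phenomenon this abstract builds on can be sketched with the classical James-Stein estimator (a textbook illustration in plain numpy, not the estimators constructed in the thesis; the dimension and distributions below are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 500                            # a "high" dimension
theta = np.zeros(d)                # true means, taken at the origin for simplicity
x = theta + rng.normal(size=d)     # one noisy observation per coordinate

# Empirical (maximum likelihood) estimate: the observation itself.
mse_empirical = np.sum((x - theta) ** 2)

# James-Stein estimator: shrink the observation toward the origin.
shrink = max(0.0, 1.0 - (d - 2) / np.sum(x ** 2))
mse_js = np.sum((shrink * x - theta) ** 2)

print(mse_empirical, mse_js)       # shrinkage gives a much smaller total error here
```

The gain is largest when, as here, the true means are close to the shrinkage target; the thesis's estimators pursue the same kind of improvement in more general, non-isotropic settings.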
APA, Harvard, Vancouver, ISO, and other styles
2

Tran, Nguyen Duy. "Performance bounds in terms of estimation and resolution and applications in array processing." Phd thesis, École normale supérieure de Cachan - ENS Cachan, 2012. http://tel.archives-ouvertes.fr/tel-00777503.

Full text
Abstract:
This manuscript concerns performance analysis in signal processing and consists of two parts. First, we study lower bounds for characterizing and predicting estimation performance in terms of mean square error (MSE). The lower bounds on the MSE give the minimum variance that an estimator can expect to achieve, and they can be divided into two categories depending on the parameter assumption: the so-called deterministic bounds, dealing with deterministic unknown parameters, and the so-called Bayesian bounds, dealing with random unknown parameters. In particular, we derive closed-form expressions of the lower bounds for two applications in two different fields: (i) the first is target localization using multiple-input multiple-output (MIMO) radar, for which we derive the lower bounds in the contexts with and without modeling errors, respectively; (ii) the other is the pulse phase estimation of X-ray pulsars, a potential solution for autonomous deep space navigation. In this application, we show the potential universality of lower bounds for tackling problems whose parameterized probability density function (pdf) differs from the classical Gaussian pdf, since in X-ray pulse phase estimation the observations are modeled with a Poisson distribution. Second, we study the statistical resolution limit (SRL), the minimal distance in terms of the parameter of interest between two signals that allows the parameters of interest to be correctly separated/estimated. More precisely, we derive the SRL in two contexts, array processing and MIMO radar, using two approaches based on estimation theory and information theory. We also present in this thesis the usefulness of the SRL in optimizing the array system.
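As a minimal numerical illustration of the kind of bound studied here (not the MIMO-radar or pulsar derivations themselves), the sample mean of Gaussian data attains the Cramer-Rao lower bound sigma^2/n:

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials, sigma = 50, 20000, 2.0

# Monte Carlo MSE of the sample mean for N(mu, sigma^2) data ...
x = rng.normal(loc=1.0, scale=sigma, size=(trials, n))
mse = np.mean((x.mean(axis=1) - 1.0) ** 2)

# ... matches the Cramer-Rao lower bound sigma^2 / n for this problem.
crlb = sigma ** 2 / n
print(mse, crlb)
```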
3

Ferrieux, Dominique. "Estimation de densités de mesures moyennes de processus ponctuels associés." Montpellier 2, 1996. http://www.theses.fr/1996MON20245.

Full text
Abstract:
The main purpose of this work is kernel estimation of the density of the mean measure of a discrete random measure, or of a point process, under an association hypothesis. The first chapter gives general properties of sequences of associated random measures, together with examples. The second chapter establishes the main asymptotic properties of the estimator, such as convergence in probability and almost surely, the limiting distribution, and the optimal bandwidth choice. In the third chapter these results are exploited for the estimation of the derivative of two mean measures. The fourth chapter studies a new point of view on the statistics of point processes when only a single observation is available. The last chapter presents some simulations.
4

Wiklund, Åsa. "Multiple Platform Bias Error Estimation." Thesis, Linköping University, Department of Electrical Engineering, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2126.

Full text
Abstract:

Sensor fusion has long been recognized as a means to improve target tracking. Sensor fusion deals with the merging of several signals into one to obtain a better and more reliable result. For the result to improve, the incoming data must be trusted to be correct and free of unknown systematic errors. This thesis tries to find and estimate the size of the systematic errors that appear in a multi-platform environment where data is shared among the units. To be more precise, the error estimated within the scope of this thesis appears when platforms cannot determine their positions correctly and share target tracking data with their own corrupted position as a basis for determining the target's position. The algorithms developed in this thesis use Kalman filter theory, including the extended Kalman filter and the information filter, to estimate the platform location bias error. Three algorithms are developed with satisfying results. Depending on time constraints and computational demands, either one of the algorithms could be preferred.
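A toy version of the bias-estimation idea, assuming the platform offset reduces to a constant scalar observed in noise (a sketch only, not the thesis's multi-platform algorithms; all values are made up):

```python
import numpy as np

rng = np.random.default_rng(2)
true_bias = 3.0                    # hypothetical platform position offset
noise_std = 1.0
n_meas = 200

# In this toy model, each residual between a shared target report and the
# platform's own observation equals the bias plus measurement noise.
z = true_bias + rng.normal(scale=noise_std, size=n_meas)

# Scalar Kalman filter for a constant state (zero process noise).
x_hat, p = 0.0, 100.0              # initial estimate and its variance
r = noise_std ** 2
for zk in z:
    k = p / (p + r)                # Kalman gain
    x_hat += k * (zk - x_hat)      # measurement update
    p *= (1 - k)                   # variance update

print(x_hat)                       # close to the true bias of 3.0
```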

5

Helversen, Bettina von. "Quantitative estimation from multiple cues." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2008. http://dx.doi.org/10.18452/15718.

Full text
Abstract:
How do people make quantitative estimations, such as estimating a car's selling price? Often people rely on cues, information that is probabilistically related to the quantity they are estimating. For instance, to estimate the selling price of a car they could use information such as the car's manufacturer, age, mileage, or general condition. Traditionally, linear regression type models have been employed to capture the estimation process. In my dissertation, I propose an alternative cognitive theory for quantitative estimation: the mapping model, which offers a heuristic approach to quantitative estimation. In the first part of my dissertation I test the mapping model against established alternative models of estimation, namely linear regression, an exemplar model, and a simple estimation heuristic. The mapping model provided a valid account of people's estimates, outperforming the other models in a variety of conditions. Consistent with the "adaptive toolbox" approach to decision making, which model was best at predicting participants' estimations was a function of the task environment. In the second part of my dissertation, I examined further how different task features affect the performance of the models. My results indicate that explicit knowledge about the cues is decisive. When knowledge about the cues was available, the mapping model was the best model; however, when knowledge about the task was difficult to abstract, participants' estimations were best described by the exemplar model. In the third part of my dissertation, I applied the mapping model to the field of legal decision making. In an analysis of fining and incarceration decisions, I showed that prosecutors' sentence recommendations were better captured by the mapping model than by legal policy modeled with a linear regression. These results indicate that the mapping model is a valid model that can be applied to actual estimation processes outside the laboratory.
6

Hemmendorff, Magnus. "Single and Multiple Motion Field Estimation." Licentiate thesis, Linköping University, Linköping University, Computer Vision, 1999. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-54343.

Full text
Abstract:

This thesis presents a framework for estimation of motion fields, both for single and multiple layers. All the methods have in common that they generate or use constraints on the local motion. Motion constraints are represented by vectors whose directions describe one component of the local motion and whose magnitudes indicate confidence.

Two novel methods for estimating these motion constraints are presented. Both methods take two images as input and apply orientation sensitive quadrature filters. One method is similar to a gradient method applied on the phase from the complex filter outputs. The other method is based on novel results using canonical correlation presented in this thesis.

Parametric models, e.g. affine or FEM, are used to estimate motion from constraints on local motion. In order to estimate smooth fields for models with many parameters, cost functions on deformations are introduced.

Motions of transparent multiple layers are estimated by implicit or explicit clustering of motion constraints into groups. General issues and difficulties in analysis of multiple motions are described. An extension of the known EM algorithm is presented together with experimental results on multiple transparent layers with affine motions. Good accuracy in estimation allows reconstruction of layers using a backprojection algorithm. As an alternative to the EM algorithm, this thesis also introduces a method based on higher order tensors.
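The clustering of motion constraints by EM can be sketched on a toy 1-D problem with two transparent affine "layers" (an illustrative mixture-of-regressions EM, not the thesis's algorithm; all values below are made up):

```python
import numpy as np

rng = np.random.default_rng(5)
# Two transparent "layers", each a 1-D affine motion y = a*x + b plus noise.
n = 400
x = rng.uniform(-1, 1, size=n)
labels = rng.integers(0, 2, size=n)
params_true = np.array([[2.0, 0.5], [-1.0, 0.0]])   # (slope, intercept) per layer
a, b = params_true[labels].T
y = a * x + b + rng.normal(scale=0.05, size=n)

# EM: the E-step soft-assigns each point to a layer, the M-step refits
# each affine model by weighted least squares.
theta = np.array([[1.5, 0.0], [-0.5, 0.0]])         # rough initial guesses
sigma2 = 0.1                                        # fixed noise scale
X = np.column_stack([x, np.ones(n)])
for _ in range(50):
    resid = y[:, None] - X @ theta.T                # residual under each layer
    w = np.exp(-resid ** 2 / (2 * sigma2))
    w /= w.sum(axis=1, keepdims=True)               # responsibilities
    for k in range(2):
        sw = np.sqrt(w[:, k])
        theta[k] = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]

print(np.sort(theta[:, 0]))                         # recovered slopes
```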

A result with potential applications in a number of different research fields is the extension of canonical correlation to handle complex variables. Correlation is maximized using a novel method that can handle singular covariance matrices.

7

Burney, S. M. A. "Estimation methods for multiple time series." Thesis, University of Strathclyde, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.382231.

Full text
8

PLAKSIENKO, ANNA. "Joint estimation of multiple graphical models." Doctoral thesis, Gran Sasso Science Institute, 2021. http://hdl.handle.net/20.500.12571/21632.

Full text
Abstract:
The fast development of high-throughput technologies such as microarray or next-generation sequencing, and the consequent in-depth investigation of the genome in several international large scale projects, have led to the generation of large amounts of high-dimensional omics datasets. Scientists can use such data to acquire a deep understanding of complex cellular mechanisms, the molecular basis of diseases’ development, etc. Among other questions, relationships between different genes or other similar units can reveal regulatory mechanisms whose disruption can be associated with diseases. Network inference methods and, more specifically, graphical models estimation can be used to identify gene relationships and direct interactions not mediated by other factors. Simply speaking, a graphical model is a graph whose vertices correspond to random variables and edges denote conditional dependence relationships between them. There are plenty of methods for carrying out graphical model inference from a given dataset, even in the high-dimensional setting where the number of variables is much larger than the number of samples (a common situation in omics studies for the enormous number of genes involved and a limited number of samples collected). However, nowadays, it is common to collect and analyze more than one dataset. Multiple datasets can be obtained in different laboratories or with different technologies, arise from various studies, or be of different omics types. Their joint analysis can lead to a more accurate characterization of the underlying biological system, but it also requires specific techniques. In this thesis, we propose jewel – a novel method for the joint analysis of multiple datasets under the assumption that they are drawn from Gaussian distributions that share the same network dependency. In this context, the conditional dependence relationships between variables (genes) are encoded by the inverse covariance matrix. 
Although we assume that the conditional dependence structure is the same between different conditions, we let the covariance matrices be different to account for different sources of data origin. In this setting, combining the individual datasets into a single one and estimating a sole graphical model would mask the covariance matrices' heterogeneity, while estimating separate models for each case would not take advantage of the common underlying structure. Therefore, a joint analysis of the datasets is preferable, and to this aim we present in this thesis a novel joint estimation method, jewel. It extends the Meinshausen and Bühlmann regression-based approach to the case of multiple datasets by means of a group lasso penalty, which guarantees the symmetry of the solution. We design a fast algorithm for the method's implementation, incorporating the smart active shooting approach for a fixed regularization parameter and a warm start approach for an entire grid of regularization parameters. We also state a theorem for jewel's consistency, providing upper and lower bounds for the regularization parameter. Moreover, we extend the Bayesian information criterion and cross-validation procedures to the multiple-datasets framework to provide a practical tool for real applications. We explore the behavior of jewel in different simulation settings, analyzing the influence of various input parameters and comparing the method to other available alternatives for joint estimation, revealing good and competitive performance. Finally, we illustrate the method's performance in a real data example concerning transcriptional regulatory networks based on gene expression data. We implement the proposed method in the novel R package jewel.
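The core device, a group lasso penalty that forces the same sparsity pattern across datasets, can be illustrated on a toy multi-dataset regression solved by proximal gradient descent (a sketch of the penalty only, not the jewel algorithm; all sizes and the regularization value are made up):

```python
import numpy as np

rng = np.random.default_rng(6)
K, n, p = 2, 200, 10
beta_true = np.zeros((K, p))
beta_true[:, :3] = rng.uniform(0.5, 1.5, size=(K, 3))   # same support, different values
Xs = [rng.normal(size=(n, p)) for _ in range(K)]
ys = [Xs[k] @ beta_true[k] + 0.1 * rng.normal(size=n) for k in range(K)]

def group_soft(B, t):
    # Column-wise soft-thresholding: predictor j is kept or dropped
    # jointly in all K datasets, enforcing a shared sparsity pattern.
    norms = np.maximum(np.linalg.norm(B, axis=0, keepdims=True), 1e-12)
    return B * np.maximum(0.0, 1.0 - t / norms)

B = np.zeros((K, p))
step = 1.0 / max(np.linalg.norm(X, 2) ** 2 for X in Xs)  # 1 / Lipschitz constant
lam = 8.0
for _ in range(500):                                     # proximal gradient (ISTA)
    grad = np.stack([Xs[k].T @ (Xs[k] @ B[k] - ys[k]) for k in range(K)])
    B = group_soft(B - step * grad, step * lam)

support = np.linalg.norm(B, axis=0) > 1e-8
print(support)                                           # shared support estimate
```

In jewel the regressions are nodewise (each variable on the others) so that the recovered support encodes graph edges; the toy above only shows how the group penalty couples the datasets.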
9

Lee, Joonsung. "Acoustic signal estimation using multiple blind observations." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/35603.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006.
Includes bibliographical references (p. 109-111).
This thesis proposes two algorithms for recovering an acoustic signal from multiple blind measurements made by sensors (microphones) over an acoustic channel. Unlike other algorithms that use a posteriori probabilistic models to fuse the data in this problem, the proposed algorithms use results obtained in the context of data communication theory. This constitutes a new approach to this sensor fusion problem. The proposed algorithms determine inverse channel filters with a predestined support (number of taps). The Coordinated Recovery of Signals From Sensors (CROSS) algorithm is an indirect method, which uses an estimate of the acoustic channel. Using the estimated channel coefficients from a Least-Squares (LS) channel estimation method, we propose an initialization process (zero-forcing estimate) and an iteration process (MMSE estimate) to produce optimal inverse filters accounting for the room characteristics, additive noise and errors in the estimation of the parameters of the room characteristics.
(cont.) Using a measured room channel, we analyze the performance of the algorithm through simulations and compare its performance with the theoretical performance. Also, in this thesis, the notion of channel diversity is generalized and the Averaging Row Space Intersection (ARSI) algorithm is proposed. The ARSI algorithm is a direct method, which does not use the channel estimate.
by Joonsung Lee.
S.M.
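The idea of an inverse channel filter with a predetermined number of taps can be sketched by least squares (a generic zero-forcing-style example with a made-up 3-tap channel, not the CROSS algorithm):

```python
import numpy as np

h = np.array([1.0, 0.5, 0.2])        # made-up short FIR channel
L = 12                               # number of taps of the inverse filter
N = len(h) + L - 1

# Convolution matrix H so that conv(h, g) == H @ g.
H = np.zeros((N, L))
for i in range(L):
    H[i:i + len(h), i] = h

d = np.zeros(N)
d[0] = 1.0                           # target response: a unit impulse

g = np.linalg.lstsq(H, d, rcond=None)[0]   # least-squares inverse filter
residual = np.linalg.norm(H @ g - d)
print(residual)                      # small: g nearly inverts the channel
```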
10

De, Melo F. E. "Multiple-object estimation techniques for challenging scenarios." Thesis, University of Liverpool, 2017. http://livrepository.liverpool.ac.uk/3013627/.

Full text
Abstract:
A series of methods for solving the multi-object estimation problem in the context of sequential Bayesian inference is presented. These methods concentrate on dealing with challenging scenarios of multiple target tracking, involving fundamental problems of nonlinearity and non-Gaussianity of processes, high state dimensionality, high numbers of targets, statistical dependence between target states, and degenerate cases of low signal-to-noise ratio, high uncertainty, poorly observable states or uninformative observations. These difficulties pose obstacles to most practical multi-object inference problems, lying at the heart of the shortcomings reported for state-of-the-art methods, and so elicit novel treatments to enable tackling a broader class of real problems. The novel algorithms offered as solutions in this dissertation address such challenges by acting on the root causes of the associated problems. Often this involves essential dilemmas commonly manifested in Statistics and Decision Theory, such as trading off estimation accuracy against algorithm complexity, soft versus hard decisions, generality versus tractability, conciseness versus interpretability, etc. All proposed algorithms constitute stochastic filters, each of which is formulated to address specific aspects of the challenges at hand while offering tools to achieve judicious compromises in the aforementioned dilemmas. Two of the filters address the weight degeneracy observed in sequential Monte Carlo filters, particularly for nonlinear processes. One of these filters is designed for nonlinear non-Gaussian high-dimensional problems, delivering representativeness of the uncertainty in high-dimensional states while mitigating part of the inaccuracies that arise from the curse of dimensionality. This filter is shown to cope well with scenarios of multimodality, high state uncertainty, uninformative observations and high numbers of false alarms.
A multi-object filter deals with the problem of considering dependencies between target states in a way that is scalable to a large number of targets, by resorting to probabilistic graphical structures. Another multi-object filter treats the problem of reducing the computational complexity of a state-of-the-art cardinalized filter to deal with a large number of targets, without compromising accuracy significantly. Finally, a framework for associating measurements across observation sessions in scenarios of low state observability is proposed, with application to an important space surveillance task: cataloging space debris in the geosynchronous/geostationary belt. The devised methods treat the considered challenges by raising rather general questions, and provide not only principled solutions but also an analysis of the essence of the investigated problems, extrapolating the implemented techniques to a wider spectrum of similar problems in signal processing.
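The weight degeneracy and resampling discussed above can be seen in a minimal bootstrap particle filter on a toy 1-D model (a generic sketch, unrelated to the specific filters proposed in the thesis; all parameters are made up):

```python
import numpy as np

rng = np.random.default_rng(7)
T, Np = 50, 2000
q, r = 0.1, 0.5                           # process / measurement noise std

# Simulate a 1-D random-walk state observed in noise.
xs = np.cumsum(rng.normal(scale=q, size=T))
ys = xs + rng.normal(scale=r, size=T)

# Bootstrap particle filter with multinomial resampling.
particles = rng.normal(scale=1.0, size=Np)
est = []
for y in ys:
    particles = particles + rng.normal(scale=q, size=Np)   # propagate
    w = np.exp(-0.5 * ((y - particles) / r) ** 2)          # likelihood weights
    w /= w.sum()
    est.append(np.sum(w * particles))                      # posterior-mean estimate
    particles = particles[rng.choice(Np, size=Np, p=w)]    # resample

rmse_filter = np.sqrt(np.mean((np.array(est) - xs) ** 2))
rmse_raw = np.sqrt(np.mean((ys - xs) ** 2))
print(rmse_filter, rmse_raw)              # filtering improves on raw measurements
```

Without the resampling step, after a few iterations almost all weight concentrates on a handful of particles; that collapse is the degeneracy the thesis's filters are designed to combat in far harder settings.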
11

Sellami, Moncef. "Recoverable reserve estimation in multiple seam mines." Thesis, This resource online, 1993. http://scholar.lib.vt.edu/theses/available/etd-09192009-040606/.

Full text
12

Murthy, Chandra Ramabhadra. "Channel estimation and feedback for multiple antenna communication." Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2006. http://wwwlib.umi.com/cr/ucsd/fullcit?p3220395.

Full text
Abstract:
Thesis (Ph. D.)--University of California, San Diego, 2006.
Title from first page of PDF file (viewed September 5, 2006). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (p. 158-164).
13

Senses, Engin Utku. "Blur Estimation And Superresolution From Multiple Registered Images." Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/3/12609929/index.pdf.

Full text
Abstract:
Resolution is the most important criterion for the clarity of details in an image. Therefore, high resolution images are required in numerous areas. However, obtaining high resolution images has an evident technological cost, and that cost varies with the quality of the optical systems used. Image processing methods are used to obtain high resolution images at low cost. This kind of image improvement is known as superresolution (SR) image reconstruction. This thesis focuses on two main topics: identification methods for blur parameters, one of the degradation operators, and stochastic SR image reconstruction methods. The performances of different stochastic SR image reconstruction methods and blur identification methods are shown and compared. The identified blur parameters are then used in superresolution algorithms and the results are shown.
14

Miao, H. (Honglei). "Channel estimation and positioning for multiple antenna systems." Doctoral thesis, University of Oulu, 2007. http://urn.fi/urn:isbn:9789514284113.

Full text
Abstract:
Abstract The multiple–input multiple–output (MIMO) technique, applying several transmit and receive antennas in wireless communications, has emerged as one of the most prominent technical breakthroughs of the last decade. Wideband MIMO parameter estimation and its applications to the MIMO orthogonal frequency division multiplexing (MIMO–OFDM) channel estimation and mobile positioning are studied in this thesis. Two practical MIMO channel models, i.e., correlated-receive independent-transmit channel and correlated-transmit-receive channel, and associated space-time parameter estimation algorithms are considered. Thanks to the specified structure of the proposed training signals for multiple transmit antennas, the iterative quadrature maximum likelihood (IQML) algorithm is applied to estimate the time delay and spatial signature for the correlated-receive independent-transmit MIMO channels. For the correlated-transmit-receive MIMO channels, the spatial signature matrix corresponding to a time delay can be further decomposed in such a way that the angle of arrival (AOA) and the angle of departure (AOD) can be estimated simultaneously by the 2-D unitary ESPRIT algorithm. Therefore, the combination of the IQML algorithm and the 2-D unitary ESPRIT algorithm provides a novel solution to jointly estimate the time delay, the AOA and the AOD for the correlated-transmit-receive MIMO channels. It is demonstrated from the numerical examples that the proposed algorithms can obtain good performance at a reasonable cost. Considering the correlated-receive independent-transmit MIMO channels, channel coefficient estimation for the MIMO–OFDM system is studied. Based on the parameters of the correlated-receive independent-transmit MIMO channels, the channel statistics in terms of the correlation matrix are developed. 
By virtue of the derived channel statistics, a joint spatial-temporal (JST) filtering based MMSE channel estimator is proposed which takes full advantage of the channel correlation properties. The mean square error (MSE) of the proposed channel estimator is analyzed, and its performance is also demonstrated by Monte Carlo computer simulations. It is shown that the proposed JST minimum mean square error (MMSE) channel estimator outperforms the more conventional temporal MMSE channel estimator in terms of the MSE when the signals in the receive antenna array elements are significantly correlated. The closed form bit error probability of the space-time block coded OFDM system with correlation at the receiver is also developed by taking the channel estimation errors and channel statistics, i.e., correlation at the receiver, into account. Mobile positioning in the non-line of sight (NLOS) scenarios is studied. With the knowledge of the time delay, the AOA and the AOD associated with each NLOS propagation path, a novel geometric approach is proposed to calculate the MS's position by only exploiting two NLOS paths. On top of this, the least squares and the maximum likelihood (ML) algorithms are developed to utilize multiple NLOS paths to improve the positioning accuracy. Moreover, the ML algorithm is able to estimate the scatterers' positions as well as those of the MSs. The Cramer-Rao lower bound related to the position estimation in the NLOS scenarios is derived. It is shown both analytically and through computer simulations that the proposed algorithms are able to estimate the mobile position only by employing the NLOS paths.
15

Kabzinska, Ewa Joanna. "Empirical likelihood approach for estimation from multiple sources." Thesis, University of Southampton, 2017. https://eprints.soton.ac.uk/422166/.

Full text
Abstract:
Empirical likelihood is a non-parametric, likelihood-based inference approach. In the design-based empirical likelihood approach introduced by Berger and De La Riva Torres (2016), the parameter of interest is expressed as a solution to an estimating equation. The maximum empirical likelihood point estimator is obtained by maximising the empirical likelihood function under a system of constraints. A single vector of weights, which can be used to estimate various parameters, is created. Design-based empirical likelihood confidence intervals are based on the χ2 approximation of the empirical likelihood ratio function. The confidence intervals are range-preserving and asymmetric, with the shape driven by the distribution of the data. In this thesis we focus on the extension and application of design-based empirical likelihood methods to various problems occurring in survey inference. First, a design-based empirical likelihood methodology for parameter estimation in two surveys context, in presence of alignment and benchmark constraints, is developed. Second, a design-based empirical likelihood multiplicity adjusted estimator for multiple frame surveys is proposed. Third, design-based empirical likelihood is applied to a practical problem of census coverage estimation. The main contribution of this thesis is defining the empirical likelihood methodology for the studied problems and showing that the aligned and multiplicity adjusted empirical likelihood estimators are √n-design-consistent. We also discuss how the original proofs presented by Berger and De La Riva Torres (2016) can be adjusted to show that the empirical likelihood ratio statistic is pivotal and follows a χ2 distribution under alignment constraints and when the multiplicity adjustments are used. We evaluate the asymptotic performance of the empirical likelihood estimators in a series of simulations on real and artificial data. 
We also discuss the computational aspects of the calculations necessary to obtain empirical likelihood point estimates and confidence intervals and propose a practical way to obtain empirical likelihood confidence intervals in situations when they might be difficult to obtain using standard approaches.
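To make the weight construction described above concrete, the sketch below shows the textbook empirical likelihood computation for the simplest case, a population mean with i.i.d. data. This is an illustration of the general mechanism only, not the design-based estimator of the thesis, and the function names are invented for the example.

```python
import numpy as np
from scipy.optimize import brentq

def el_weights(x, mu0):
    """Empirical likelihood weights for the mean constraint sum(w_i*(x_i - mu0)) = 0.

    Maximises sum(log w_i) subject to sum(w_i) = 1; the standard Lagrange
    argument gives w_i = 1 / (n * (1 + lam * (x_i - mu0))), with lam chosen
    so that the estimating-equation constraint holds.
    """
    z = np.asarray(x, dtype=float) - mu0
    n = len(z)
    # lam must keep every 1 + lam*z_i strictly positive; bracket accordingly.
    lo = (-1.0 / z.max()) + 1e-8 if z.max() > 0 else -1e6
    hi = (-1.0 / z.min()) - 1e-8 if z.min() < 0 else 1e6
    lam = brentq(lambda l: np.sum(z / (1.0 + l * z)), lo, hi)
    return 1.0 / (n * (1.0 + lam * z))

def el_ratio_statistic(x, mu0):
    """-2 log empirical likelihood ratio; approximately chi^2(1) under H0."""
    w = el_weights(x, mu0)
    return -2.0 * np.sum(np.log(len(w) * w))
```

At the sample mean the weights collapse to the uniform 1/n and the ratio statistic is zero; moving mu0 away makes the weights tilt and the statistic grow, which is what drives the χ2-based confidence intervals.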
APA, Harvard, Vancouver, ISO, and other styles
16

KWON, YEIL. "NONPARAMETRIC EMPIRICAL BAYES SIMULTANEOUS ESTIMATION FOR MULTIPLE VARIANCES." Diss., Temple University Libraries, 2018. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/495491.

Full text
Abstract:
Statistics
Ph.D.
Shrinkage estimation has proven to be very useful when dealing with a large number of mean parameters. In this dissertation, we consider the problem of simultaneous estimation of multiple variances and construct a shrinkage-type, non-parametric estimator. We take the non-parametric empirical Bayes approach by starting with an arbitrary prior on the variances. Under an invariant loss function, the resultant Bayes estimator relies on the marginal cumulative distribution function of the sample variances. Replacing the marginal cdf by the empirical distribution function, we obtain a Non-parametric Empirical Bayes estimator for multiple Variances (NEBV). The proposed estimator converges to the corresponding Bayes version uniformly over a large set. Consequently, the NEBV works well in a post-selection setting. We then apply the NEBV to construct confidence intervals for mean parameters in a post-selection setting. It is shown that the intervals based on the NEBV are the shortest among all intervals that guarantee a desired coverage probability. Through real data analysis, we have further shown that the NEBV-based intervals lead to the smallest number of discordances, a desirable property when we are faced with the current "replication crisis".
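The general idea of shrinking many sample variances toward a common target can be sketched as below. This is a minimal moment-based illustration, not the NEBV estimator itself (which is nonparametric and built from the empirical cdf of the sample variances); the plug-in weight formula and function name are assumptions of the sketch.

```python
import numpy as np

def shrink_variances(sample_vars, df, shrinkage=None):
    """Shrink per-group sample variances toward their grand mean.

    sample_vars: sample variances s_i^2, one per group.
    df: degrees of freedom of each sample variance.
    shrinkage: weight in [0, 1]; if None, a crude plug-in is used that
    shrinks more when sampling noise dominates the spread of the s_i^2.
    """
    s2 = np.asarray(sample_vars, dtype=float)
    target = s2.mean()
    if shrinkage is None:
        noise = 2.0 * target**2 / df        # approx Var(s^2) under normality
        total = s2.var() + 1e-12            # observed dispersion of the s_i^2
        shrinkage = min(1.0, noise / total)
    return (1.0 - shrinkage) * s2 + shrinkage * target
```

With shrinkage = 0 the raw variances are returned, with shrinkage = 1 every group gets the pooled value; the empirical Bayes machinery of the dissertation can be read as a principled, data-driven replacement for this hand-tuned weight.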
Temple University--Theses
APA, Harvard, Vancouver, ISO, and other styles
17

Baştürk, İlhan Özbek Berna. "Iterative Channel Estimation Techniques For Multiple Input Multiple Output Orthogonal Frequency Division Multiplexing Systems/." [s.l.]: [s.n.], 2007. http://library.iyte.edu.tr/tezler/master/elektrikveelektronikmuh/T000653.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Chiong, Choo Wee Raymond. "Efficient channel estimation algorithms for cooperative multiple-input multiple-output (MIMO) wireless communication networks." Thesis, Curtin University, 2014. http://hdl.handle.net/20.500.11937/1780.

Full text
Abstract:
Multiple-input multiple-output (MIMO) relay communication systems have been identified to be one of the promising solutions to high rate wireless communications. In optimizing the MIMO relay networks, the knowledge of channel state information (CSI) is essential. This thesis develops novel channel estimation algorithms for MIMO relay communication networks, considering the amplify-and-forward relaying scheme. The proposed algorithms outperform existing schemes in estimating the CSI of each hop in MIMO relay networks.
APA, Harvard, Vancouver, ISO, and other styles
19

Kane, Roma. "Multiuser TDMA channel estimation." Diss., Columbia, Mo. : University of Missouri-Columbia, 2004. http://hdl.handle.net/10355/5810.

Full text
Abstract:
Thesis (M.S.)--University of Missouri-Columbia, 2004.
The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file, viewed on June 30, 2006. Vita. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
20

Tang, On-yee. "Estimation for generalized linear mixed model via multiple imputations." Click to view the E-thesis via HKUTO, 2005. http://sunzi.lib.hku.hk/hkuto/record/B30687652.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Ayer, Serge. "Sequential and competitive methods for estimation of multiple motions /." Lausanne : Ecole polytechnique fédérale, 1995. http://library.epfl.ch/theses/?nr=1405.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Wang, Xudong. "Vehicle health monitoring system using multiple-model adaptive estimation." Thesis, University of Hawaii at Manoa, 2003. http://hdl.handle.net/10125/7051.

Full text
Abstract:
In this thesis, we propose two failure detection and identification (FDI) approaches based on the multiple-model estimation algorithm to monitor the health of vehicles, specifically for aircraft applications. They detect and identify failing components of the vehicle, as well as system variations. The dynamics of the vehicle are modeled as a stochastic hybrid system with uncertainty: unknown model structure or parameters. FDI performance is evaluated for each approach. We demonstrate the reliability and validity of these approaches by applying them to simulated aircraft machinery experiencing component failures or structural variations. The approaches that we surveyed are: (i) the Multiple-Hypothesis Kalman Filter, and (ii) the Interacting Multiple-Model (IMM) Estimator. By coupling the fault detection and identification (FDI) scheme with the reconfigurable controller design scheme, a fault-tolerant control system based on the multiple-model estimation algorithm is defined.
vii, 59 leaves
APA, Harvard, Vancouver, ISO, and other styles
23

Usabiaga, Jorge. "Global hand pose estimation by multiple camera ellipse tracking." abstract and full text PDF (free order & download UNR users only), 2004. http://0-gateway.proquest.com.innopac.library.unr.edu/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:1433386.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Mauroy, Gilles Patrick. "Multiple target tracking using neural networks and set estimation." Diss., Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/13748.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Allen, Randal. "Robust Estimation and Adaptive Guidance for Multiple UAVS' Cooperation." Doctoral diss., University of Central Florida, 2009. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4350.

Full text
Abstract:
In this paper, an innovative cooperative navigation method is proposed for multiple Unmanned Air Vehicles (UAVs) based on online target position measurements. These noisy position measurement signals are used to estimate the target's velocity for non-maneuvering targets, or the target's velocity and acceleration for maneuvering targets. The estimator's tracking capability is physically constrained by the target's kinematic limitations and is therefore potentially improvable by designing a higher-performance estimator. An H-infinity filter is implemented to increase the robustness of the estimation accuracy. The performance of the robust estimator is compared to a Kalman filter, and the results illustrate more precise estimation of the target's motion in compensating for surrounding noises and disturbances. Furthermore, an adaptive guidance algorithm, based on the seeker's field-of-view and linear region, is used to deliver the pursuer to the maneuvering target. The initial guidance algorithm utilizes the velocity pursuit guidance law because of its insensitivity to target motion, while the terminal guidance algorithm leverages the acceleration estimates (from the H-infinity filter) to augment the proportional navigation guidance law for increased accuracy in engaging maneuvering targets. The main objective of this work is to develop a robust estimator/tracker and an adaptive guidance algorithm that are directly applicable to UAVs.
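The Kalman baseline that the robust estimator is compared against can be sketched as a constant-velocity filter recovering the target's velocity from noisy position measurements. The noise tunings q and r and the function name are illustration choices, not values from the dissertation.

```python
import numpy as np

def track_cv(positions, dt, q=1.0, r=1.0):
    """Constant-velocity Kalman filter over scalar position measurements.

    State is [position, velocity]; only position is measured, so the
    velocity estimate is inferred, as in the target-tracking setup above.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])          # state transition
    H = np.array([[1.0, 0.0]])                      # measure position only
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],       # CV-model process noise
                      [dt**2 / 2, dt]])
    R = np.array([[r]])                             # measurement noise
    x = np.array([positions[0], 0.0])
    P = np.eye(2) * 10.0
    for z in positions[1:]:
        x = F @ x                                   # predict
        P = F @ P @ F.T + Q
        y = z - H @ x                               # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
        x = x + K @ y                               # update
        P = (np.eye(2) - K @ H) @ P
    return x  # [position, velocity] after the last measurement
```

An H-infinity filter replaces the gain computation with a minimax criterion so that performance degrades gracefully under noise and disturbance models the Kalman assumptions do not cover, which is the comparison the abstract reports.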
Ph.D.
Department of Mechanical, Materials and Aerospace Engineering;
Engineering and Computer Science
Mechanical Engineering PhD
APA, Harvard, Vancouver, ISO, and other styles
26

Ižo, Tomáš 1979. "Simultaneous camera calibration and pose estimation from multiple views." Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/87435.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Allen, Randal T. "Robust estimation and adaptive guidance for multiple UAVs' cooperation." Orlando, Fla. : University of Central Florida, 2009. http://purl.fcla.edu/fcla/etd/CFE0002535.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Yang, Xiaoyi. "Background reconstruction from multiple images." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLT020/document.

Full text
Abstract:
La problématique générale de cette thèse est de reconstituer la scène de fond à partir d'une séquence d'images en présence de masques d'avant-plan. Nous nous sommes intéressés aux méthodes pour détecter ce qui constitue le fond ainsi qu'aux solutions pour corriger les parties cachées et les distorsions géométrique et chromatique introduites lors de la photographie. Une série de processus est proposée, dont la mise en œuvre comporte dans l'ordre l'alignement géométrique, le réglage chromatique, la fusion des images et la correction des défauts. Nous nous plaçons dans l'hypothèse où le fond est porté sur une surface plane. L'alignement géométrique est alors réalisé par calcul de l'homographie entre une image quelconque et l'image qui sert de référence, suivi d'une interpolation bilinéaire. Le réglage chromatique vise à retrouver un même contraste dans les différentes images. Nous proposons de modéliser la mise en correspondance chromatique entre images par une approximation linéaire dont les paramètres sont déterminés par les résultats de la mise en correspondance des points de contrôle (SIFT). Ces deux étapes sont suivies par une étape de fusion. Plusieurs techniques sont comparées. La première proposition est d'étendre la définition de la médiane dans l'espace vectoriel. Elle est robuste lorsqu'il y a plus de la moitié des images qui voient les pixels d'arrière-plan. En outre, nous concevons un algorithme original basé sur la notion de clique. Il permet de détecter le plus grand nuage de pixels dans l'espace RGB. Cette approche est fiable même lorsque les pixels d'arrière-plan sont minoritaires. Lors de la mise en œuvre de ce protocole, on constate que certains résultats de fusion présentent des défauts de type flou dus à l'existence d'erreurs d'alignement géométrique. Nous proposons donc un traitement complémentaire. Il est basé sur une comparaison entre le résultat de fusion et les images alignées après passage d'un filtre gaussien.
Sa sortie est un assemblage des morceaux très détaillés d'images alignées qui ressemblent le plus au résultat de fusion associé. La performance de nos méthodes est évaluée par un ensemble de données contenant de nombreuses images de qualités différentes. Les expériences confirment la fiabilité et la robustesse de notre conception dans diverses conditions de photographie.
The general topic of this thesis is to reconstruct the background scene from a burst of images in the presence of masks. We focus on background detection methods as well as on solutions to the geometric and chromatic distortions introduced during photography. A series of processes is proposed, consisting of geometric alignment, chromatic adjustment, image fusion and defect correction. We consider the case where the background scene is a flat surface. The geometric alignment between a reference image and any other image in the sequence depends on the computation of a homography followed by a bilinear interpolation. The chromatic adjustment aims to attach a similar contrast to the scene in different images. We propose to model the chromatic mapping between images with linear approximations whose parameters are determined from pixels matched with SIFT. These two steps are followed by a discussion of image fusion, for which several methods are compared. The first proposition is a generalisation of the classical median filter to the vector range. It is robust when more than half of the images convey the background information. Besides, we design an original algorithm based on the notion of clique. It serves to distinguish the biggest cloud of pixels in RGB space. This approach is highly reliable even when the background pixels are in the minority. During the implementation, we notice that some fusion results bear blur-like defects due to geometric alignment errors. We therefore provide a combination method as a complementary step to ameliorate the fusion results. It is based on a comparison between the fusion image and the other aligned images after applying a Gaussian filter. The output is a mosaic of patches with clear details issued from the aligned images which are the most similar to their related fusion patches. The performance of our methods is evaluated on a data set containing extensive images of different qualities.
Experiments confirm the reliability and robustness of our design under a variety of photography conditions.
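One natural reading of "extending the median to the vector space" is the geometric median of each pixel's colour samples across the aligned images. The Weiszfeld iteration below is a standard way to compute it and is offered as a plausible sketch of such a per-pixel fusion step, not as the thesis's exact formulation.

```python
import numpy as np

def geometric_median(points, iters=100, eps=1e-9):
    """Geometric median of a set of RGB samples via Weiszfeld iterations.

    Returns the colour minimising the sum of Euclidean distances to the
    observed colours; like the scalar median, it resists a minority of
    foreground (outlier) samples at a pixel.
    """
    pts = np.asarray(points, dtype=float)
    m = pts.mean(axis=0)                       # start from the centroid
    for _ in range(iters):
        d = np.linalg.norm(pts - m, axis=1)
        d = np.where(d < eps, eps, d)          # guard against division by zero
        w = 1.0 / d
        m_new = (w[:, None] * pts).sum(axis=0) / w.sum()
        if np.linalg.norm(m_new - m) < eps:
            break
        m = m_new
    return m
```

When more than half of the samples at a pixel show the background colour, the iteration is pulled to that colour, which matches the robustness property the abstract claims for the vector-median fusion.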
APA, Harvard, Vancouver, ISO, and other styles
29

Ganesh, Nadarajasundaram. "Small area estimation and prediction problems spatial models, Bayesian multiple comparisons and robust MSE estimation /." College Park, Md. : University of Maryland, 2007. http://hdl.handle.net/1903/7241.

Full text
Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2007.
Thesis research directed by: Mathematics. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
APA, Harvard, Vancouver, ISO, and other styles
30

Chouha, Michel. "Contribution au calcul de puissance résiduelle : estimation des énergies moyennes béta et gamma des produits de fission." Paris 11, 1986. http://www.theses.fr/1986PA112311.

Full text
Abstract:
When a nuclear reactor is shut down, after the chain-fission process has completely died away, a non-negligible amount of energy (about seven percent of the reactor's total power) continues to be released in the core; it is called the residual power. The main source of this energy is the radioactive decay of the fission products. Its value at any instant is the sum of the powers released by these different nuclides (P = ΣiPi). Each power Pi is the product of three terms: the concentration of the nuclide, its decay constant, and the mean total (beta + gamma) energy it releases per disintegration. Determining the first two terms presents no particular difficulty, as the relevant data are quite precise; the calculation of the mean energies, however, raises some problems. Indeed, the beta and gamma spectra of many fission products are only partially known, and in some cases they are still completely unknown. After a critical analysis of the current method for determining the mean energies, the present study proposes a new model for this calculation. The new model, tested on a number of well-known nuclides, proves to be quite accurate; its application to some sixty insufficiently known nuclides leads to a clear improvement of the computed results in comparison with measurements of the beta and gamma components of the residual power. Consequently, this new model, which accounts for the missing beta branchings to the highest levels of the daughter nucleus in beta decay, can be adopted for computing the mean energies of nuclides with insufficiently known spectra.
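The summation at the heart of the abstract, P = ΣiPi with each Pi the product of a nuclide's concentration, decay constant and mean energy per decay, is a one-line computation; the units used here (atoms, 1/s, MeV) are for illustration only.

```python
def residual_power(nuclides):
    """Decay-heat sum P = sum_i N_i * lambda_i * E_i over fission products.

    Each entry is (concentration N_i in atoms, decay constant lambda_i in
    1/s, mean total beta+gamma energy E_i per decay in MeV); the result is
    the released power in MeV/s. The hard part addressed by the thesis is
    supplying reliable E_i values, not evaluating this sum.
    """
    return sum(n * lam * e for n, lam, e in nuclides)
```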
APA, Harvard, Vancouver, ISO, and other styles
31

Chouha, Michel. "Contribution au calcul de puissance résiduelle estimation des énergies moyennes béta et gamma des produits de fission." Grenoble 2 : ANRT, 1986. http://catalogue.bnf.fr/ark:/12148/cb375967264.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Yang, Xiaoyi. "Background reconstruction from multiple images." Electronic Thesis or Diss., Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLT020.

Full text
Abstract:
La problématique générale de cette thèse est de reconstituer la scène de fond à partir d'une séquence d'images en présence de masques d'avant-plan. Nous nous sommes intéressés aux méthodes pour détecter ce qui constitue le fond ainsi qu'aux solutions pour corriger les parties cachées et les distorsions géométrique et chromatique introduites lors de la photographie. Une série de processus est proposée, dont la mise en œuvre comporte dans l'ordre l'alignement géométrique, le réglage chromatique, la fusion des images et la correction des défauts. Nous nous plaçons dans l'hypothèse où le fond est porté sur une surface plane. L'alignement géométrique est alors réalisé par calcul de l'homographie entre une image quelconque et l'image qui sert de référence, suivi d'une interpolation bilinéaire. Le réglage chromatique vise à retrouver un même contraste dans les différentes images. Nous proposons de modéliser la mise en correspondance chromatique entre images par une approximation linéaire dont les paramètres sont déterminés par les résultats de la mise en correspondance des points de contrôle (SIFT). Ces deux étapes sont suivies par une étape de fusion. Plusieurs techniques sont comparées. La première proposition est d'étendre la définition de la médiane dans l'espace vectoriel. Elle est robuste lorsqu'il y a plus de la moitié des images qui voient les pixels d'arrière-plan. En outre, nous concevons un algorithme original basé sur la notion de clique. Il permet de détecter le plus grand nuage de pixels dans l'espace RGB. Cette approche est fiable même lorsque les pixels d'arrière-plan sont minoritaires. Lors de la mise en œuvre de ce protocole, on constate que certains résultats de fusion présentent des défauts de type flou dus à l'existence d'erreurs d'alignement géométrique. Nous proposons donc un traitement complémentaire. Il est basé sur une comparaison entre le résultat de fusion et les images alignées après passage d'un filtre gaussien.
Sa sortie est un assemblage des morceaux très détaillés d'images alignées qui ressemblent le plus au résultat de fusion associé. La performance de nos méthodes est évaluée par un ensemble de données contenant de nombreuses images de qualités différentes. Les expériences confirment la fiabilité et la robustesse de notre conception dans diverses conditions de photographie.
The general topic of this thesis is to reconstruct the background scene from a burst of images in the presence of masks. We focus on background detection methods as well as on solutions to the geometric and chromatic distortions introduced during photography. A series of processes is proposed, consisting of geometric alignment, chromatic adjustment, image fusion and defect correction. We consider the case where the background scene is a flat surface. The geometric alignment between a reference image and any other image in the sequence depends on the computation of a homography followed by a bilinear interpolation. The chromatic adjustment aims to attach a similar contrast to the scene in different images. We propose to model the chromatic mapping between images with linear approximations whose parameters are determined from pixels matched with SIFT. These two steps are followed by a discussion of image fusion, for which several methods are compared. The first proposition is a generalisation of the classical median filter to the vector range. It is robust when more than half of the images convey the background information. Besides, we design an original algorithm based on the notion of clique. It serves to distinguish the biggest cloud of pixels in RGB space. This approach is highly reliable even when the background pixels are in the minority. During the implementation, we notice that some fusion results bear blur-like defects due to geometric alignment errors. We therefore provide a combination method as a complementary step to ameliorate the fusion results. It is based on a comparison between the fusion image and the other aligned images after applying a Gaussian filter. The output is a mosaic of patches with clear details issued from the aligned images which are the most similar to their related fusion patches. The performance of our methods is evaluated on a data set containing extensive images of different qualities.
Experiments confirm the reliability and robustness of our design under a variety of photography conditions.
APA, Harvard, Vancouver, ISO, and other styles
33

Zhang, Chunxiao. "Estimation of 3D human motion kinematics model from multiple cameras." Thesis, University of Central Lancashire, 2009. http://clok.uclan.ac.uk/19932/.

Full text
Abstract:
Estimation of articulated human motion based on video sequences acquired from multiple synchronised cameras is an active and challenging research area. This is mainly due to the need for high-dimensional non-linear models to describe the human motion, cluttered data, and occlusions present in the captured images. Although many diverse techniques have been proposed to solve this problem, none of the existing solutions is fully satisfactory. In this thesis, upper body motion tracking and full body motion tracking based on the annealed particle filter (APF) approach are presented. To successfully implement a body motion tracking algorithm, the first requirement is to prepare and pre-process the data. The work performed in this area includes calibration of multiple cameras, colour image segmentation to extract body silhouettes from the cluttered background, and visual hull reconstruction to provide voxels representing a human volume in 3D space. The second requirement is to build the models. Two sets of models are proposed in this thesis. The first set is for upper body tracking and contains point models and two-segment articulated arm models; the second set is for full body tracking and contains five articulated chains as a full human model. The final requirement is to design a measurement method for aligning the models to the data. Two novel measurement methods are proposed for the motion tracking: one is based on a combination of different penalties tailored to each body part, based on the percentage of the 3D-to-2D projected body points falling inside and outside the body silhouette, and the other is based on the symmetrical property of the intensity profile obtained from the body silhouette bisected by the 3D-to-2D projection of the estimated skeletal model. Various evaluations were carried out to demonstrate the effectiveness of the algorithms implemented and the excellent performance of the proposed methods for upper body and full body motion tracking.
These include the accuracy analysis of camera calibration and image segmentation; the accuracy and speed of the APF applied to the articulated arm model in tracking infra-red marker-based human motion data; as well as visual and quantitative assessments of the final results obtained from the proposed upper body and full body motion tracking.
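The inside/outside silhouette penalty described in the abstract reduces, in its simplest form, to counting what fraction of the projected model points land on the silhouette mask. The sketch below assumes points are already projected to (row, col) image coordinates; it is an illustration of the measurement idea, not the thesis's full per-part weighting.

```python
import numpy as np

def silhouette_score(points_2d, silhouette):
    """Fraction of projected model points falling inside a binary silhouette.

    points_2d: (N, 2) array of (row, col) projections of 3D body points.
    silhouette: 2D binary mask from the segmentation stage.
    Points projecting outside the image count as misses.
    """
    h, w = silhouette.shape
    pts = np.round(np.asarray(points_2d)).astype(int)
    rows, cols = pts[:, 0], pts[:, 1]
    in_image = (rows >= 0) & (rows < h) & (cols >= 0) & (cols < w)
    hits = silhouette[rows[in_image], cols[in_image]] > 0
    return hits.sum() / len(pts)
```

A pose hypothesis whose projected points mostly fall inside the silhouette scores close to 1, which is the signal the annealed particle filter's weighting stage needs.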
APA, Harvard, Vancouver, ISO, and other styles
34

Uggla, Lingvall Kristoffer. "Remote heart rate estimation by evaluating measurements from multiple signals." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-210303.

Full text
Abstract:
Heart rate can say a lot about a person's health. While most conventional methods for heart rate measurement require contact with the subject, these are not always applicable. In this thesis, a non-invasive method for pulse detection is implemented and analyzed. Different signals from the color of the forehead—including the green channel, the hue channel and different ICA and PCA components—are inspected, and their resulted heart rates are weighted together according to the significance of their FFT peaks. The system is tested on videos with different difficulties regarding the amount of movement and setting of the scene. The results show that the approach of weighting measurements from different signals together has great potential. The system in this thesis, however, does not perform very well on videos with a lot of movement because of motion noise. Though, with better, less noisy signals, good results can be expected.
En människas puls säger en hel del om dennes hälsa. För att mäta pulsen används vanligtvis metoder som vidrör människan, vilket ibland är en nackdel. I det här examensarbetet tas en metod för pulsmätning på avstånd fram, som endast använder klipp från en vanlig videokamera. Färgen i pannan mäts och utifrån den genereras flera signaler som analyseras, vilket resulterar i olika mätvärden för pulsen. Genom att värdera dessa mätvärden med avseende på hur tydliga signalerna är, beräknas ett viktat medelvärde som ett slutgiltigt estimat på medelpulsen. Metoden testas på videoklipp med varierande svårighetsgrad, beroende på hur mycket rörelser som förekommer och på vilket avstånd från kameran försökspersonen står. Resultaten visar att metoden har mycket god potential och att man kan förvänta sig fina resultat med bättre, mindre brusiga signaler.
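The weighting idea, trusting each colour signal's FFT peak in proportion to how clearly it stands out, can be sketched as follows. The band limits and the prominence measure here are assumptions of the sketch, not the report's exact choices.

```python
import numpy as np

def heart_rate_bpm(signals, fs, band=(0.7, 4.0)):
    """Weighted heart-rate estimate (BPM) from several candidate signals.

    For each signal (e.g. green channel, hue, ICA/PCA components), take the
    FFT peak inside the plausible pulse band and weight its frequency by how
    much the peak dominates the in-band spectrum, so cleaner signals count more.
    """
    freqs_bpm, weights = [], []
    for s in signals:
        s = np.asarray(s, dtype=float)
        s = s - s.mean()                             # drop the DC component
        spec = np.abs(np.fft.rfft(s)) ** 2
        f = np.fft.rfftfreq(len(s), d=1.0 / fs)
        in_band = (f >= band[0]) & (f <= band[1])
        p, fb = spec[in_band], f[in_band]
        k = int(np.argmax(p))
        freqs_bpm.append(60.0 * fb[k])
        weights.append(p[k] / (p.mean() + 1e-12))    # peak significance
    return float(np.average(freqs_bpm, weights=np.asarray(weights)))
```

A signal swamped by motion noise spreads its in-band power, so its peak-to-mean ratio, and hence its vote, shrinks; that is the mechanism behind the report's observation that noisy movement-heavy clips degrade the estimate.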
APA, Harvard, Vancouver, ISO, and other styles
35

Saur, Stephan. "Channel estimation for mobile wideband code division multiple access (WCDMA)." Aachen: Shaker, 2008. http://d-nb.info/989680819/04.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Campanella, Gianluca. "Estimation of infection rate in epidemic models with multiple populations." Master's thesis, Faculdade de Ciências e Tecnologia, 2011. http://hdl.handle.net/10362/6118.

Full text
Abstract:
Dissertation for the degree of Master in Mathematics and Applications, specialisation in Actuarial Science, Statistics and Operational Research
The effect of infectious diseases on human development throughout history is well established, and investigation on the causes of infectious epidemics, and plagues in particular, dates back at least to Hippocrates, the father of Western medicine. The mechanisms by which diseases spread, however, could not be fully understood until the late nineteenth century, with the discovery of microorganisms and the understanding of their role as infectious agents. Eventually, at the turn of the twentieth century, the foundations of the mathematical epidemiology of infectious diseases were laid by the seminal work of En'ko, Ross, and Kermack and McKendrick. More recently, the application of graph theory to epidemiology has given rise to models that consider the spread of diseases not only at the level of individuals belonging to a single population (population models), but also in systems with multiple populations linked by a transportation network (meta-population models). The aim of meta-population models is to understand how movement of individuals between populations generates the geographical spread of diseases, a challenging goal whose importance is all the greater now that long-range displacements are facilitated by inexpensive air travel possibilities. A problem of particular interest in all epidemic models is the estimation of parameters from sparse and inaccurate real-world data, especially the so-called infection rate, whose estimation cannot be carried out directly through clinical observation. Focusing on meta-population models, in this thesis we introduce a new estimation method for this crucial parameter that is able to accurately infer it from the arrival times of the first infective individual in each population. Moreover, we test our method and its accuracy by means of computer simulations.
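For reference, the infection rate enters the classic single-population SIR model of Kermack and McKendrick as the parameter beta below. This toy Euler integration (not the thesis's meta-population machinery) shows why beta is only observable indirectly, through the shape of the epidemic curve.

```python
import numpy as np

def sir_step(s, i, r, beta, gamma, dt):
    """One Euler step of the SIR model with infection rate beta, recovery rate gamma.

    s, i, r are susceptible/infective/recovered fractions; the three flows
    conserve s + i + r exactly.
    """
    new_inf = beta * s * i * dt
    new_rec = gamma * i * dt
    return s - new_inf, i + new_inf - new_rec, r + new_rec

def epidemic_curve(beta, gamma, i0=1e-3, dt=0.1, steps=2000):
    """Trajectory of the infective fraction, starting from a small seed i0."""
    s, i, r = 1.0 - i0, i0, 0.0
    curve = []
    for _ in range(steps):
        s, i, r = sir_step(s, i, r, beta, gamma, dt)
        curve.append(i)
    return np.array(curve)
```

When beta exceeds gamma the infective fraction first grows and then dies out; estimation methods like the one in the thesis work backwards from observed features of such trajectories (here, arrival times across populations) to the unobservable beta.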
APA, Harvard, Vancouver, ISO, and other styles
37

Deneire, Luc. "Estimation aveugle de canal et accès multiple par répartition spatiale." Paris, ENST, 1998. http://www.theses.fr/1998ENST0020.

Full text
Abstract:
This thesis addresses blind channel identification using only second-order statistics, in a communications context, with either a single user or several users sharing the same frequency at the same time without spectrum spreading. We explored second-order cyclic statistics, leading to a family of algorithms whose performance is independent of the noise colour, and to a new channel model. We developed a subspace method without eigenvalue decomposition that is asymptotically equivalent to the original method. Incorporating the optimal weighting matrix allows asymptotic equivalences between various methods to be derived. Based on complexity and robustness criteria, we explored several algorithms derived from linear prediction. This study showed that fast Levinson-type algorithms give good performance and exhibit good robustness properties. These algorithms allow the order of the AR process associated with the channel to be detected, by extending order-detection methods for multivariate processes to the case of singular processes. We developed an original multi-user channel identification method based on the LDU decomposition of the covariance matrix and on the generalised Schur algorithm. This low-complexity algorithm is a serious candidate for blind channel identification in SDMA. We present a maximum likelihood algorithm for the deterministic model, which shows promising performance, with a good number of theoretical guarantees.
APA, Harvard, Vancouver, ISO, and other styles
38

McDonagh, Steven George. "Building models from multiple point sets with kernel density estimation." Thesis, University of Edinburgh, 2015. http://hdl.handle.net/1842/10568.

Full text
Abstract:
One of the fundamental problems in computer vision is point set registration. Point set registration finds use in many important applications and in particular can be considered one of the crucial stages involved in the reconstruction of models of physical objects and environments from depth sensor data. The problem of globally aligning multiple point sets, representing spatial shape measurements from varying sensor viewpoints, into a common frame of reference is a complex task that is imperative due to the large number of critical functions that accurate and reliable model reconstructions contribute to. In this thesis we focus on improving the quality and feasibility of model and environment reconstruction through the enhancement of multi-view point set registration techniques. The thesis makes the following contributions: First, we demonstrate that employing kernel density estimation to reason about the unknown generating surfaces that range sensors measure allows us to express measurement variability, uncertainty and also to separate the problems of model design and viewpoint alignment optimisation. Our surface estimates define novel view alignment objective functions that inform the registration process. Our surfaces can be estimated from point clouds in a data-driven fashion. Through experiments on a variety of datasets we demonstrate that we have developed a novel and effective solution to the simultaneous multi-view registration problem. We then focus on constructing a distributed computation framework capable of solving generic high-throughput computational problems. We present a novel task-farming model that we call Semi-Synchronised Task Farming (SSTF), capable of modelling and subsequently solving computationally distributable problems that benefit from both independent and dependent distributed components and a level of communication between process elements.
We demonstrate that this framework is a novel schema for parallel computer vision algorithms and evaluate the performance to establish computational gains over serial implementations. We couple this framework with an accurate computation-time prediction model to contribute a novel structure appropriate for addressing expensive real-world algorithms with substantial parallel performance and predictable time savings. Finally, we focus on a timely instance of the multi-view registration problem: modern range sensors provide large numbers of viewpoint samples that result in an abundance of depth data information. The ability to utilise this abundance of depth data in a feasible and principled fashion is of importance to many emerging application areas making use of spatial information. We develop novel methodology for the registration of depth measurements acquired from many viewpoints capturing physical object surfaces. By defining registration and alignment quality metrics based on our density estimation framework we construct an optimisation methodology that implicitly considers all viewpoints simultaneously. We use a non-parametric data-driven approach to consider varying object complexity and guide large view-set spatial transform optimisations. By aligning large numbers of partial, arbitrary-pose views we evaluate this strategy quantitatively on large view-set range sensor data where we find that we can improve registration accuracy over existing methods and contribute increased registration robustness to the magnitude of coarse seed alignment. This allows large-scale registration on problem instances exhibiting varying object complexity with the added advantage of massive parallel efficiency.
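The core idea of the first contribution, replacing raw point samples with a smooth kernel density estimate and scoring candidate alignments by likelihood under it, can be toy-modelled in 2D with SciPy's off-the-shelf KDE. The thesis builds its own surface estimates; this only conveys the flavour of a density-based alignment objective.

```python
import numpy as np
from scipy.stats import gaussian_kde

# One noisy scan of a unit circle stands in for a range-sensor view of a
# surface; a KDE over its points gives a smooth density whose value can
# score how well other points (e.g. a second, transformed view) lie on it.
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 400)
scan = np.vstack([np.cos(theta), np.sin(theta)])
scan += rng.normal(0.0, 0.02, scan.shape)

kde = gaussian_kde(scan)                          # density model of the "surface"

density_on = kde(np.array([[1.0], [0.0]]))[0]     # a point on the circle
density_off = kde(np.array([[0.0], [0.0]]))[0]    # the circle's centre
```

An alignment optimiser would sum such log-densities over a transformed view and adjust the transform to maximise them, which is how a density model turns registration into a smooth optimisation problem.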
APA, Harvard, Vancouver, ISO, and other styles
39

Deneire, Luc. "Estimation aveugle de canal et accès multiple par répartition spatiale /." Paris : École nationale supérieure des télécommunications, 1998. http://catalogue.bnf.fr/ark:/12148/cb37047180m.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Sakamaki, Joshua Y. "Cooperative Estimation for a Vision-Based Multiple Target Tracking System." BYU ScholarsArchive, 2016. https://scholarsarchive.byu.edu/etd/6072.

Full text
Abstract:
In this thesis, the Recursive-Random Sample Consensus (R-RANSAC) algorithm is applied to a vision-based, cooperative target tracking system. Unlike previous applications, which focused on a single camera platform tracking targets in the image frame, this work uses multiple camera platforms to track targets in the inertial or world frame. The process of tracking targets in the inertial frame is commonly referred to as geolocation. In practical applications sensor biases cause the geolocated target estimates to be biased from truth. The method for cooperative estimation developed in this thesis first estimates the relative rotational and translational biases that exist between tracks from different vehicles. It then accounts for the biases and performs the track-to-track association, which determines whether the tracks originate from the same target. The track-to-track association is based on a sliding window approach that accounts for the correlation between tracks sharing common process noise and the correlation in time between individual estimation errors, yielding a chi-squared distribution. Typically, accounting for the correlation in time requires the inversion of an Nnx × Nnx covariance matrix, where N is the length of the window and nx is the number of states. Note that this inversion must occur every time the track-to-track association is performed. However, it is shown that by making a steady-state assumption, the inverse has a simple closed-form solution, requiring the inversion of only two nx × nx matrices, and can be calculated offline. Distributed data fusion is performed on tracks where the hypothesis test is satisfied. The proposed method is demonstrated on data collected from an actual vision-based tracking system. A novel method is also developed to cooperatively estimate the location and size of occlusions.
This capability is important for future target tracking research involving optimized path planning/gimbal pointing, where a geographical map is unavailable. The method is demonstrated in simulation.
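The chi-squared gate at the core of track-to-track association can be sketched as follows. This is a deliberately simplified version: the cross-covariance terms and the sliding window that the thesis accounts for are omitted, and all numbers are hypothetical.

```python
import numpy as np
from scipy.stats import chi2

def same_target(x1, P1, x2, P2, alpha=0.05):
    """Chi-squared gate on the difference of two track estimates.
    Cross-covariance and the sliding window are omitted for brevity."""
    d = x1 - x2
    S = P1 + P2                      # covariance of the track difference
    m2 = d @ np.linalg.solve(S, d)   # squared Mahalanobis distance
    return m2 <= chi2.ppf(1 - alpha, df=d.size)

x1 = np.array([10.0, 2.0]); P1 = 0.5 * np.eye(2)
x2 = np.array([10.3, 1.8]); P2 = 0.5 * np.eye(2)
print(same_target(x1, P1, x2, P2))                      # nearby tracks: True
print(same_target(x1, P1, np.array([20.0, 2.0]), P2))   # distant tracks: False
```

In the thesis, accounting for temporal correlation turns `S` into the large windowed covariance whose inversion the steady-state assumption makes tractable.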
APA, Harvard, Vancouver, ISO, and other styles
41

Potter, Chris. "Modeling Channel Estimation Error in Continuously Varying MIMO Channels." International Foundation for Telemetering, 2007. http://hdl.handle.net/10150/604490.

Full text
Abstract:
ITC/USA 2007 Conference Proceedings / The Forty-Third Annual International Telemetering Conference and Technical Exhibition / October 22-25, 2007 / Riviera Hotel & Convention Center, Las Vegas, Nevada
The accuracy of channel estimation plays a crucial role in the demodulation of data symbols sent across an unknown wireless medium. In this work a new analytical expression for the channel estimation error of a multiple input multiple output (MIMO) system is obtained when the wireless medium is continuously changing in the temporal domain. Numerical examples are provided to illustrate our findings.
APA, Harvard, Vancouver, ISO, and other styles
42

Muhammed, Salih Muhsin. "Nanoémulsion multiple autoémulsionnable pour délivrance de médicaments polaires : applications à la gentamicine." Paris 5, 2008. http://www.theses.fr/2008PA05P642.

Full text
Abstract:
La gentamicine (GM) est un médicament hydrophile polaire référence de la classe III BCS, reconnu comme non absorbé par voie orale. L’objectif du présent travail est de formuler un véhicule capable de permettre l’absorption orale de la GM. Une nouvelle forme galénique Nanoémulsion Multiple H/L/H Autoémulsionnable a été mise au point. Cette formule est obtenue par autoémulsification d’une microémulsion H/L primaire, composée d’huile (triglycérides à chaîne moyenne), de surfactifs (polysorbate85 et Labrasol®) et d’une phase hydrophile (eau). La GM est incorporée dans la phase interne aqueuse. L’émulsion finale est formée spontanément par l’addition de la microémulsion H/L primaire dans l’eau (ratio1-2). Les diagrammes de phases pseudo-ternaires ont été établis pour déterminer la zone isotrope et limpide de la microémulsion H/L. Les préparations ont été caractérisées par différentes méthodes ; comme l’aspect macroscopique, la viscosité, la taille, la forme et la charge de particules, en utilisant la spectroscopie à corrélation de photon, la diffraction laser et la microscopie électronique à transmission. Les études in vivo ont été conduites sur le modèle de rat éveillé. Les concentrations sanguines de GM ont été déterminées par HPLC-spectroscopie de masse. Les résultats ont montré une biodisponibilité élevée avec un profil d’absorption prolongé permettant d’envisager une administration orale. La nouvelle formulation a été dénommée Multiple Self Emulsifying Nanoemulsion for Polar Drugs Delivery MSEN-PDD®
The aminoglycoside gentamicin sulphate is a representative orally non-absorbable hydrophilic polar drug corresponding to class III of the BCS. This work proposes a new lipid-based formulation designed to improve the oral absorption of class III drugs. The formulation is a Multiple w/o/w Self-Emulsifying Nanoemulsion obtained by efficient spontaneous emulsification of a former w/o microemulsion consisting of Medium Chain Triglycerides (MCT) as the oil and polysorbate85/Labrasol® as surfactants; gentamicin was incorporated in the internal aqueous phase of the former w/o microemulsions. Pseudoternary phase diagrams were established to determine the w/o microemulsion zones. The formulations were characterized: physical properties such as macroscopic aspect and viscosity, as well as particle size, shape and charge, were studied using photon correlation spectroscopy (PCS), laser diffraction, transmission electron microscopy (TEM) and other techniques. Oral absorption studies were conducted in vivo using a conscious rat model. Plasma levels were quantified by HPLC-Mass Spectrometry to assess oral bioavailability. After administration, the results show high bioavailability, and the absorption shows a sustained release profile which permits oral administration. The new formulation was named Multiple Self Emulsifying Nanoemulsion for Polar Drugs Delivery (MSEN-PDD®)
APA, Harvard, Vancouver, ISO, and other styles
43

Abd, El-Sallam Amar. "Low order channel estimation for CDMA systems." Thesis, Curtin University, 2005. http://hdl.handle.net/20.500.11937/2420.

Full text
Abstract:
New approaches and algorithms are developed for the identification and estimation of low order models that represent multipath channel effects in Code Division Multiple Access (CDMA) communication systems. Based on these parsimonious channel models, low complexity receivers such as RAKE receivers are considered to exploit these propagation effects and enhance the system performance. We consider the scenario where the multipath is frequency-selective and slowly fading, and where the channel components, including delays and attenuation coefficients, are assumed to be constant over one or a few signalling intervals. We model the channel as a long FIR-like filter (or tapped delay line filter) with the number of taps related to the ratio between the channel delay spread and the chip duration. Due to the high data rate of new CDMA systems, the channel length in terms of the chip duration will be very large. With classical channel estimation techniques this will result in poor estimates of many of the channel parameters, most of which are zero, leading to a reduction in system performance. Unlike classical techniques, which directly estimate the channel response given the number of taps or an estimate of the channel length, the techniques proposed in this work first identify the significant multipath parameters using model selection techniques, then estimate these identified parameters. Statistical tests are proposed to determine whether or not each individual parameter is significant. A low complexity RAKE receiver is then considered based on estimates of these identified parameters only. The level of significance with which we make this assertion is controlled using statistical tests such as multiple hypothesis tests.
Frequency and time domain based approaches and model selection techniques are proposed to achieve the objectives above. The frequency domain approach for parsimonious channel estimation results in an efficient implementation of RAKE receivers in DS-CDMA systems. In this approach, we consider a training-based strategy and estimate the channel delays and attenuation using the averaged periodogram and modified time delay estimation techniques. We then use model selection techniques such as the sphericity test and multiple hypothesis tests based on F-statistics to identify the model order and select the significant channel paths. Simulations show that for a pre-defined level of significance, the proposed technique correctly identifies the significant channel parameters and the parsimonious RAKE receiver shows improved statistical as well as computational performance over classical methods. The time domain approach is based on the bootstrap, which is appropriate when the distribution of the test statistics required by the multiple hypothesis tests is unknown. In this approach we also use short training data and model the channel response as an FIR filter of unknown length. Model parameters are then estimated using low complexity algorithms in the time domain. Based on these estimates, bootstrap-based multiple hypothesis tests are applied to identify the non-zero coefficients of the FIR filter. Simulation results demonstrate the power of this technique for RAKE receivers in unknown noise environments. Finally, we propose adaptive blind channel estimation algorithms for CDMA systems. Using only the spreading code of the user of interest and the received data sequence, four different adaptive blind estimation algorithms are proposed to estimate the impulse response of frequency-selective and frequency non-selective fading channels. The approach is based on minimum variance receiver techniques.
Tracking of a frequency-selective time-varying fading channel is also considered. A blind hierarchical MDL-based model selection method is also proposed to select the non-zero parameters of the channel response. Simulation results show that the proposed algorithms perform better than previously proposed algorithms: they have lower complexity and a faster convergence rate. The proposed algorithms can also be applied to the design of RAKE receivers based on adaptive blind channel estimation.
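A toy illustration of the significant-tap selection idea described above: estimate a long, mostly-zero FIR channel by least squares from training chips, then keep only the taps clearing a noise-based threshold. The threshold here is a crude stand-in for the F-statistic and bootstrap tests the thesis develops, and all channel and noise values are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sparse "long" channel: only three significant multipath taps out of 40.
L = 40
h = np.zeros(L)
h[[0, 7, 23]] = [1.0, -0.6, 0.4]

# Training chips and noisy received signal r[t] = sum_k h[k] s[t-k] + noise.
n, sigma = 500, 0.1
s = rng.choice([-1.0, 1.0], size=n)
r = np.convolve(s, h)[:n] + sigma * rng.standard_normal(n)

# Least-squares estimate of all L taps from delayed copies of the chips.
X = np.column_stack([np.concatenate([np.zeros(k), s[:n - k]]) for k in range(L)])
h_hat, *_ = np.linalg.lstsq(X, r, rcond=None)

# Keep only taps that clear roughly three standard errors of the LS estimate.
keep = np.abs(h_hat) > 3 * sigma / np.sqrt(n)
print(np.flatnonzero(keep))   # indices of the taps declared significant
```

A RAKE receiver built on the retained taps only is the "parsimonious" receiver in the sense of the abstract.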
APA, Harvard, Vancouver, ISO, and other styles
44

Sadough, Seyed Mohammad Sajad. "Ultra wideband OFDM systems : channel estimation and improved detection accounting for estimation inaccuracies." Paris 11, 2008. http://www.theses.fr/2008PA112001.

Full text
Abstract:
Les travaux présentés dans cette thèse se situent dans le cadre de la transmission OFDM appliqué au contexte Ultra Large Bande (UWB). L’objectif principal va être l’estimation du canal de propagation et la conception de récepteurs en prenant en compte la connaissance non idéale du canal. On propose d’abord une approche semi-aveugle d’estimation du canal qui bénéficie de la parcimonie du canal UWB dans le domaine des ondelettes pour réduire le nombre de paramètre à estimer. Ensuite, on propose des structures de réception itérative où la conception du détecteur prend en compte la présence des erreurs d’estimation de canal. La détection au sens maximum de vraisemblance (MV) est améliorée en modifiant sa métrique de détection pour le cas où le canal est estimé de manière imparfaite. La métrique MV améliorée ainsi obtenue nous permet de remettre en question le schéma de détection de type turbo MAP dans un contexte BICM et l’adapter à la connaissance imparfaite du canal. De plus, on dérive les débits de coupure atteignables (achievable outage rates) avec le détecteur MV améliorée ou un détecteur MV désadapté (utilisant les estimés de canal comme s’il s’agissait des vraies) qu’on comparera avec un détecteur théorique défini comme étant le meilleur récepteur possible en présence d’erreurs d’estimation de canal. Enfin, un récepteur itératif à complexité réduite basé sur un filtrage MMSE et l’annulation parallèle d’interférence (turbo-PIC) est introduit et ensuite adapté à la connaissance imparfaite du canal. Il est important de souligner que les améliorations proposées dans cette thèse n’impliquent pas une augmentation sensible de la complexité au niveau du récepteur
The aim of this thesis is to study the problem of iterative data detection in an ultra wideband (UWB) OFDM system, where the receiver disposes only of an imperfect (and possibly poor) estimate of the unknown channel parameters. First, we propose an efficient receiver jointly estimating the channel and the transmitted symbols in an iterative manner. This receiver is based on a wavelet representation of the unknown channel and exploits the sparseness property of UWB channels in the wavelet domain to reduce the receiver's computational complexity. Second, we rely on the statistics characterizing the quality of the channel estimation as a means to integrate the imperfect channel knowledge into the design of iterative receivers. In this way, we formulate an improved maximum likelihood (ML) detection metric taking into account the presence of channel estimation errors. A modified iterative MAP detector is derived by an appropriate use of this metric. The results are compared to those obtained by using the classical mismatched ML detector, which uses the channel estimate as if it were the perfect channel. Furthermore, we calculate the throughputs associated with both the improved and mismatched ML detectors, in terms of achievable outage rates. Finally, we propose an improved low-complexity iterative detector based on soft parallel interference cancellation and linear MMSE filtering, where we take into account the presence of channel estimation errors in the formulation of the detector. The important point is that the performance improvements reported in this thesis are obtained while imposing practically no additional complexity on the receiver.
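The contrast between the mismatched and improved ML metrics can be sketched as below. The exact improved form used here is the standard Gaussian-estimation-error modification (our assumption for illustration, not a quotation from the thesis): the residual is normalised by a variance inflated by the estimation-error power, plus a log term.

```python
import numpy as np

def mismatched_metric(r, h_hat, s, sigma2):
    """Classical ML metric that treats the channel estimate as exact."""
    return np.abs(r - h_hat * s) ** 2 / sigma2

def improved_metric(r, h_hat, s, sigma2, sigma2_e):
    """ML metric averaged over a Gaussian channel-estimation error of
    variance sigma2_e (a standard form, assumed here for illustration)."""
    v = sigma2 + sigma2_e * np.abs(s) ** 2
    return np.abs(r - h_hat * s) ** 2 / v + np.log(v)

# 4-QAM detection: pick the symbol minimising the metric.
alphabet = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
r, h_hat = 0.9 + 0.8j, 1.0 + 0.0j
best = min(alphabet, key=lambda s: improved_metric(r, h_hat, s, 0.1, 0.05))
print(best)   # the symbol closest to r under the inflated variance
```

For constant-modulus constellations the two metrics pick the same symbol; the difference matters for multi-amplitude constellations and for the soft values fed to the iterative MAP detector.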
APA, Harvard, Vancouver, ISO, and other styles
45

Muralidhar, Aditya. "Multiple-Input Multiple Output System on a Spinning Vehicle with Unknown Channel State Information." International Foundation for Telemetering, 2012. http://hdl.handle.net/10150/581828.

Full text
Abstract:
This paper presents investigations into the performance of a multiple-input multiple-output (MIMO) system with its transmitters on a spinning vehicle and no channel state information (CSI) available at the transmitter or the receiver. The linear least squares approach is used to estimate the channel and the estimation error is measured. Spinning gives rise to a periodic component in the channel, which can be estimated based on the spin rate relative to the data rate of the system. It is also determined that spinning causes the bit error rate performance of the system to degrade by a few dB.
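The linear least squares channel estimate mentioned in the abstract can be sketched, for a generic static 2×2 channel (spinning and its periodic component left out) and hypothetical training parameters, as:

```python
import numpy as np

rng = np.random.default_rng(2)

# 2x2 MIMO channel to be estimated from a known training block.
nt, nr, n = 2, 2, 64
H = rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))

# QPSK training matrix S (nt x n) and received block Y = H S + noise.
S = (rng.choice([-1.0, 1.0], (nt, n)) + 1j * rng.choice([-1.0, 1.0], (nt, n))) / np.sqrt(2)
Y = H @ S + 0.05 * (rng.standard_normal((nr, n)) + 1j * rng.standard_normal((nr, n)))

# Linear least-squares estimate: H_hat = Y S^H (S S^H)^{-1}.
H_hat = Y @ S.conj().T @ np.linalg.inv(S @ S.conj().T)

# Relative estimation error, small for a 64-symbol training block.
err = np.linalg.norm(H_hat - H) / np.linalg.norm(H)
print(err < 0.1)
```

In the paper's setting, `H` additionally varies periodically with the spin rate, which is what makes the estimation-error analysis non-trivial.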
APA, Harvard, Vancouver, ISO, and other styles
46

Rambach, Kilian [Verfasser]. "Direction of Arrival Estimation Using a Multiple-Input-Multiple-Output Radar with Applications to Automobiles / Kilian Rambach." München : Verlag Dr. Hut, 2017. http://d-nb.info/1126298379/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Bhati, Amit. "Envelope: estimation of bottleneck and available bandwidth over multiple congested links." Thesis, Texas A&M University, 2004. http://hdl.handle.net/1969.1/3288.

Full text
Abstract:
Bandwidth estimation has been extensively researched in the past. The majority of existing methods assume either negligible or fluid cross-traffic in the network during the analysis. However, on the present-day Internet, these assumptions do not always hold, and over such paths the existing bandwidth estimation techniques become inaccurate. In this thesis, we explore the problem assuming arbitrary cross-traffic and develop a new probing method called Envelope, which can simultaneously estimate bottleneck and available bandwidth over an end-to-end path with multiple heavily congested links. Envelope is based on a recursive extension of the stochastic queuing model first proposed by Kang, Liu, Dai and Loguinov (2004), and a modified packet-train methodology. We use two small packets to surround the probing packet-trains and preserve the inter-packet spacing of probe traffic at each router in the path-suffix. The preserved spacings are then used by the receiver to estimate bandwidth. We first reproduce results for the single congested router case using the model proposed by Kang et al. Next, we extend it to the case of multiple congested routers with arbitrary cross-traffic and develop the Envelope methodology. We evaluate the performance of Envelope in various network path topologies and cross-traffic conditions through extensive NS-2 simulations. We also evaluate various probe-traffic parameters which affect the accuracy of this method and obtain the range of values for these parameters that provide good estimation results. Finally, we compare the bandwidth estimation results of our method with the results of other existing methods such as IGI (2003), Spruce (2003), Pathload (2002), and CapProbe (June 2004) using simulation in the Network Simulator (NS-2) with varied network topologies and cross-traffic.
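For context, the packet-train dispersion principle that probing tools of this family build on can be sketched as below. This is the textbook zero-cross-traffic relation, not Envelope's recursive queuing model, and the link numbers are illustrative.

```python
def capacity_estimate(packet_bits, arrival_times):
    """Bottleneck capacity (bit/s) from receiver-side train dispersion:
    after the bottleneck, back-to-back packets of size L are spaced L/C."""
    dispersion = (arrival_times[-1] - arrival_times[0]) / (len(arrival_times) - 1)
    return packet_bits / dispersion

# 1500-byte probes leaving a 10 Mbit/s bottleneck back-to-back.
L = 1500 * 8                 # packet size in bits
gap = L / 10e6               # 1.2 ms spacing imposed by the bottleneck
times = [i * gap for i in range(5)]
print(capacity_estimate(L, times) / 1e6)   # recovers ~10 Mbit/s
```

Arbitrary cross-traffic perturbs the spacings this inversion relies on, which is exactly the difficulty Envelope's surrounding small packets and recursive model are designed to overcome.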
APA, Harvard, Vancouver, ISO, and other styles
48

Chotikakamthorn, Nopporn. "A pre-filtering maximum likelihood approach to multiple source direction estimation." Thesis, Imperial College London, 1996. http://hdl.handle.net/10044/1/8634.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Lakshmanan, Nithya M. "Estimation and control of nonlinear batch processes using multiple linear models." Thesis, Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/11835.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Pichette, Alexandre. "Multiple model estimation and detection for adaptive guidance of hybrid systems." Thesis, McGill University, 2004. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=80134.

Full text
Abstract:
The interception of a tactical ballistic missile (TBM) represents a great challenge to the guidance and control community, as the maneuver potential of the TBM is almost of the same order as that of the interceptor missile. The maneuver advantage of the interceptor over the TBM being reduced, a new approach to estimation and guidance for the interception of a maneuvering TBM is presented in this research. The target is assumed to use its maneuvering potential by performing a single evasive bang-bang maneuver. A simple detection algorithm for an evasive maneuver of the target, combined with the use of a multiple-model estimator such as the interacting multiple model (IMM) estimator, led to a new adaptive guidance law, the DGL/Adapt law. Simulation results using this approach gave a reduction in the miss distance in comparison with a static guidance law.
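The mode-probability update at the heart of IMM-style estimators can be sketched as follows; the two-mode numbers below are hypothetical, not the thesis's models.

```python
def imm_mode_update(mu, trans, likelihoods):
    """One IMM mode-probability step: mix with the Markov transition
    matrix, then reweight by each model's measurement likelihood."""
    n = len(mu)
    predicted = [sum(trans[i][j] * mu[i] for i in range(n)) for j in range(n)]
    post = [predicted[j] * likelihoods[j] for j in range(n)]
    c = sum(post)
    return [p / c for p in post]

# Two modes: ballistic flight vs. bang-bang evasive maneuver.
mu = [0.9, 0.1]                       # current mode probabilities
trans = [[0.95, 0.05], [0.05, 0.95]]  # mode transition matrix
# A measurement far from the ballistic prediction favours the maneuver model.
mu = imm_mode_update(mu, trans, likelihoods=[0.01, 0.2])
print([round(p, 3) for p in mu])      # → [0.235, 0.765]
```

A guidance law can then adapt to the dominant mode, which is the mechanism an adaptive law of this kind exploits once the evasive maneuver is detected.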
APA, Harvard, Vancouver, ISO, and other styles