Dissertations / Theses on the topic 'Statistics - Applications'

To see the other types of publications on this topic, follow the link: Statistics - Applications.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Statistics - Applications.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Marriott, Paul. "Applications of differential geometry to statistics." Thesis, University of Warwick, 1990. http://wrap.warwick.ac.uk/55719/.

Full text
Abstract:
Chapters 1 and 2 are both surveys of current work in applying geometry to statistics. Chapter 1 is a broad outline of all the work done so far, while Chapter 2 studies, in particular, the work of Amari and that of Lauritzen. In Chapters 3 and 4 we study some open problems which have been raised by Lauritzen's work. In particular we look in detail at some of the differential geometric theory behind Lauritzen's definition of a Statistical manifold. The following chapters follow a different line of research. We look at a new non-symmetric differential geometric structure which we call a preferred point manifold. We show how this structure encompasses the work of Amari and Lauritzen, and how it points the way to many generalizations of their results. In Chapter 5 we define this new structure and compare it to the Statistical manifold theory. Chapter 6 develops some examples of the new geometry in a statistical context. Chapter 7 starts the development of the pure theory of these preferred point manifolds. In Chapter 8 we outline possible paths of research in which the new geometry may be applied to statistical theory. We include, in an appendix, a copy of a joint paper which looks at some direct applications of differential geometry to a statistical problem: the behaviour of the Wald test with nonlinear restriction functions.
APA, Harvard, Vancouver, ISO, and other styles
2

Kasebzadeh, Pedram. "Clutter Detection in Radar Applications." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-171547.

Full text
Abstract:
Radars have been used extensively for detection purposes in safety applications (e.g., blind-spot detection radar in cars). The existing detection methods, however, are not flawless. So far, the main focus of these methods has been on detecting an object based on its reflectiveness. In this thesis, the limitations of conventional methods are addressed, and alternative approaches are proposed. The main objective is to model/identify the noise with statistical and machine learning approaches, as an alternative to conventional methods that focus on the object. The second objective is to improve the time efficiency of these methods. The data for this thesis contain measurements collected from radars at ABB AB, Sweden; these measurements reflect the received signal strength. The radars are meant to be used in safety applications, such as industrial environments, so the trade-off between accuracy and complexity of the algorithms is crucial. One way to ensure there is nothing but noise in the surveillance field of the radar is to model the noise only. A new input can then be compared to this model and classified as noise or not noise (object). One-class classifiers can be employed for this problem, as they only need noise for training; hence they have been one of the initial proposals in this thesis. Alternatively, binary classifiers are investigated to classify noise and object given new input data. Moreover, a mathematical model for the noise is computed using the Fourier series expansion. While the derived model holds useful information in itself, it can also be used, e.g., for hypothesis-testing purposes. Furthermore, to make the classification more time-efficient, dimension-reduction methods are considered; feature extraction has been performed for this purpose with the help of the derived noise model. In order to evaluate the performance of the considered methods, three different datasets have been formed. In the first dataset,
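The noise-only modelling idea in this abstract can be sketched as a simple one-class classifier: fit the noise distribution from noise-only training data, then flag new inputs that fall outside it. A minimal numpy sketch; the Gaussian noise model, the Mahalanobis-distance threshold rule, and the synthetic data are illustrative assumptions, not the thesis's actual method:

```python
import numpy as np

def fit_noise_model(noise, quantile=0.99):
    """Fit a Gaussian model to noise-only data; pick the decision threshold
    as an empirical quantile of the training Mahalanobis distances."""
    mu = noise.mean(axis=0)
    prec = np.linalg.inv(np.cov(noise, rowvar=False))
    d2 = np.einsum('ij,jk,ik->i', noise - mu, prec, noise - mu)
    return mu, prec, np.quantile(d2, quantile)

def is_object(x, mu, prec, thresh):
    """Classify each row of x: True = object, False = noise."""
    d2 = np.einsum('ij,jk,ik->i', x - mu, prec, x - mu)
    return d2 > thresh

rng = np.random.default_rng(1)
noise_train = rng.normal(0.0, 1.0, size=(2000, 4))   # noise-only training data
mu, prec, thresh = fit_noise_model(noise_train)
objects = rng.normal(6.0, 1.0, size=(200, 4))        # strong reflections
print(is_object(objects, mu, prec, thresh).mean())   # almost all flagged
```

By construction, about 1% of genuine noise is falsely flagged (the 0.99 quantile), which mirrors the accuracy-versus-false-alarm trade-off the abstract mentions.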
3

RAMAZZOTTI, DANIELE. "An Observational Study: The Effect of Diuretics Administration on Outcomes of Mortality and Mean Duration of I.C.U. Stay." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2012. http://hdl.handle.net/10281/54268.

Full text
Abstract:
This thesis conducts an observational study into whether diuretics should be administered to ICU patients with sepsis when length of stay in the ICU and 30-day post-hospital mortality are considered. The central contribution of the thesis is a stepwise, reusable software-based approach for examining the outcome of treatment vs no-treatment decisions with observational data. The thesis implements, demonstrates and draws findings via three steps: Step 1. Form a study group and prepare modeling variables. Step 2. Model the propensity of the study group with respect to the administration of diuretics with a propensity score function and create groups of patients balanced in this propensity. Step 3. Statistically model each outcome with study variables to decide whether the administration of diuretics has a significant impact. Additionally, the thesis presents a preliminary machine learning based method using Genetic Programming to predict mortality and length of stay in ICU outcomes for the study group. The thesis finds, for its study group, in three of five propensity balanced quintiles, a statistically significant longer length of stay when diuretics are administered. For a less sick subset of patients (SAPS ICU admission score < 17) the administration of diuretics has a significant negative effect on mortality.
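The three-step procedure described in this abstract amounts to propensity-score stratification. A compact numpy sketch on synthetic data; the logistic propensity model, quintile split, outcome model, and effect size are all illustrative assumptions, not the thesis's data or software:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
x = rng.normal(size=(n, 2))                       # covariates (e.g. severity scores)
p_treat = 1 / (1 + np.exp(-(x[:, 0] - 0.5)))      # true propensity (depends on x0)
t = rng.binomial(1, p_treat)                      # treatment indicator
y = 5 + x[:, 0] + 0.5 * x[:, 1] + 2.0 * t + rng.normal(size=n)  # true effect = 2

# Step 2: fit a logistic-regression propensity score by Newton's method.
X = np.column_stack([np.ones(n), x])
beta = np.zeros(3)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    grad = X.T @ (t - p)                          # score of the log-likelihood
    hess = -(X.T * (p * (1 - p))) @ X             # Hessian of the log-likelihood
    beta -= np.linalg.solve(hess, grad)
ps = 1 / (1 + np.exp(-X @ beta))

# Step 3: stratify on propensity quintiles and average the
# within-stratum treated-minus-control outcome differences.
edges = np.quantile(ps, [0.2, 0.4, 0.6, 0.8])
stratum = np.digitize(ps, edges)
effects = [y[(stratum == s) & (t == 1)].mean() - y[(stratum == s) & (t == 0)].mean()
           for s in range(5)]
est = float(np.mean(effects))
print(est)   # close to the true effect of 2
```

The naive treated-minus-control difference is biased here because the same covariate drives both treatment and outcome; stratifying on the estimated propensity removes most of that confounding.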
4

Gustin, Sara. "Investigation of some tests for homogeneity of intensity with applications to insurance data." Thesis, Uppsala universitet, Matematisk statistik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-164588.

Full text
5

Li, Xiaoxi. "Applications of nonparametric regression in survey statistics." [Ames, Iowa : Iowa State University], 2006.

Find full text
6

Ahmad, Muhammad Idrees. "Applications of statistics in flood frequency analysis." Thesis, University of St Andrews, 1989. http://hdl.handle.net/10023/2666.

Full text
Abstract:
Estimation of the probability of occurrence of future flood events at one or more locations across a river system is frequently required for the design of bridges, culverts, spillways, dams and other engineering works. This study investigates some of the statistical aspects of estimating the flood frequency distribution at a single site and on a regional basis. It is demonstrated that the generalized logistic (GL) distribution has many properties well suited to the modelling of flood frequency data. The GL distribution performs better than the other commonly recommended flood frequency distributions in terms of several key properties. Specifically, it is capable of reproducing almost the same degree of skewness typically present in observed flood data. It appears to be more robust to the presence of extreme outliers in the upper tail of the distribution, and it has a relatively simpler mathematical form, so all the well-known methods of parameter estimation can be easily implemented. It is shown that the method of probability weighted moments (PWM), using the conventionally recommended plotting position, substantially affects the estimation of the shape parameter of the generalized extreme value (GEV) distribution by relocating the annual maximum flood series. A location-invariant plotting position is introduced for estimating, by the method of PWM, the parameters of the GEV and the GL distributions. Tests based on empirical distribution function (EDF) statistics are proposed to assess the goodness of fit of the flood frequency distributions. A modified EDF test is derived that gives greater emphasis to the upper tail of a distribution, which is more important for flood frequency prediction. Significance points are derived for the GEV and GL distributions when the parameters are to be estimated from the sample data by the method of PWMs. The critical points are considerably smaller than for the case where the parameters of a distribution are assumed to be specified.
Approximate formulae over the whole range of the distribution for these tests are also developed, which can be used for regional assessment of GEV and GL models based on all the annual maximum series in a hydrological region simultaneously. In order to pool at-site flood data across a region into a single series for regional analysis, the effect of standardization by the at-site mean on the estimation of the regional shape parameter of the GEV distribution is examined. Our simulation study, based on various synthetic regions, reveals that standardization by the at-site mean underestimates the shape parameter of the GEV by about 30% of its true value and also contributes to the separation of skewness of observed and simulated floods. A two-parameter standardization by the at-site estimates of the location and scale parameters is proposed. It does not distort the shape of the flood frequency data in the pooling process; therefore, it offers a significantly improved estimate of the shape parameter, allows pooling of data with heterogeneous coefficients of variation, and helps to explain the separation-of-skewness effect. Regions based on the flood statistics L-CV and USKEW are derived for Scotland and North England. Only about 50% of the basins could be correctly identified as belonging to these regions by a set of seven catchment characteristics. The alternative approach of grouping basins solely on the basis of physical properties is preferable. Six physically homogeneous groups of basins are identified by Ward's multivariate clustering algorithm using the same seven characteristics. These regions have hydrological homogeneity in addition to their physical homogeneity. Dimensionless regional flood frequency curves are produced by fitting GEV and GL distributions for each region. The GEV regional growth curves imply a larger return period for a given magnitude of flood; when floods are described by the GL model, the respective return periods are considerably smaller.
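The PWM estimation this abstract builds on can be illustrated with Hosking's standard probability-weighted-moment / L-moment estimators for the GEV distribution. This is the textbook recipe with the conventional unbiased PWM estimators, not the thesis's modified location-invariant plotting position:

```python
import numpy as np
from math import gamma, log

def gev_pwm(sample):
    """Estimate GEV parameters (location xi, scale alpha, shape k)
    from sample probability-weighted moments, using Hosking's
    approximation for the shape parameter."""
    x = np.sort(sample)
    n = len(x)
    j = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((j - 1) / (n - 1) * x) / n
    b2 = np.sum((j - 1) * (j - 2) / ((n - 1) * (n - 2)) * x) / n
    l1, l2, l3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0   # L-moments
    t3 = l3 / l2                                          # L-skewness
    c = 2 / (3 + t3) - log(2) / log(3)
    k = 7.8590 * c + 2.9554 * c ** 2                      # Hosking's approximation
    alpha = l2 * k / ((1 - 2 ** (-k)) * gamma(1 + k))
    xi = l1 + alpha * (gamma(1 + k) - 1) / k
    return xi, alpha, k

# Check on synthetic annual-maximum data from a known GEV (xi=0, alpha=1, k=0.2).
rng = np.random.default_rng(0)
u = rng.uniform(size=20000)
sample = (1 - (-np.log(u)) ** 0.2) / 0.2   # GEV quantile transform of uniforms
print(gev_pwm(sample))                     # roughly (0, 1, 0.2)
```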
7

Warnes, J. "Applications of spatial statistics in petroleum geology." Thesis, University of Strathclyde, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.382393.

Full text
8

Vrahimis, Andreas. "Smoothing methodology with applications to nonparametric statistics." Thesis, University of Manchester, 2011. https://www.research.manchester.ac.uk/portal/en/theses/smoothing-methodology-with-applications-to-nonparametric-statistics(6d6567f2-1bfa-4e77-8dbb-71fea7564185).html.

Full text
Abstract:
The work in this thesis is based on kernel smoothing techniques with applications to nonparametric statistical methods, especially kernel density estimation and nonparametric regression. We examine a bootstrap iterative method of choosing the smoothing parameter in univariate kernel density estimation, and propose an empirical smoothness correction that generally improves the method for the small-to-medium sample sizes tested. In a simulation study, the corrected bootstrap iterative method shows consistent overall performance and can compete with other popular, widely used methods. The theoretical asymptotic properties of the smoothed bootstrap method in univariate kernel density estimation are examined, and an adaptive data-based choice of the fixed pilot smoothing parameter is formed that provides a good performance trade-off among distributions of various shapes, with a fast relative rate of convergence to the optimal. The asymptotic and practical differences of the smoothed bootstrap method, when the diagonal terms of the error criterion are included or omitted, are also examined. Excluding the diagonal terms yields faster relative rates of convergence of the smoothing parameter to the optimal, but a simulation study shows that for smaller sample sizes, including the diagonal terms can be favourable. In a real data set application both methods produced similar smoothing parameters, and the resulting kernel density estimates were of reasonable smoothness. Existing methods of kernel density estimation in two dimensions are discussed, and the corrected bootstrap iterative method is adapted to work in bivariate kernel density estimation, with considerable success.
Additionally, the theoretical asymptotic properties of the smoothed bootstrap method in bivariate kernel density estimation are examined, and adaptive data-based choices for the fixed pilot smoothing parameters are formed that provide fast relative rates of convergence to the optimal, compared to other popular methods. The smoothed bootstrap method with the diagonal terms of the error criterion omitted exhibits slightly faster relative rates of convergence than the method which includes the diagonal terms, and in a simulation study both performed well compared to other methods. We also discover that a scaling transformation of the data before applying the method leads to poor results for distributions of various shapes, and should generally be avoided. In an application using the iris flowers data set, both suggested smoothed bootstrap versions produce reasonable kernel density estimates. We also look at various methods of estimating the variance of the errors in nonparametric regression and suggest a simple robust method of estimating the error variance for the homoscedastic fixed design. The method is based on a multiplicative correction of the variance of the residuals, and a comparison with popular difference-based methods shows favourable results, especially when the local linear estimator is employed.
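For context, the fixed-bandwidth Gaussian kernel density estimator that all of these bandwidth selectors plug into takes only a few lines. The sketch below uses Silverman's rule of thumb as the default bandwidth, a common baseline, not the corrected bootstrap selector studied in the thesis:

```python
import numpy as np

def kde_gaussian(data, grid, h=None):
    """Fixed-bandwidth Gaussian kernel density estimate evaluated on a grid.
    Default bandwidth: Silverman's rule of thumb."""
    n = len(data)
    if h is None:
        h = 1.06 * data.std(ddof=1) * n ** (-1 / 5)
    u = (grid[:, None] - data[None, :]) / h       # scaled distances
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (n * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
data = rng.normal(size=500)
grid = np.linspace(-4, 4, 801)
fhat = kde_gaussian(data, grid)
print(fhat.sum() * (grid[1] - grid[0]))   # integrates to about 1
```

Bandwidth selection is the whole game here: too small a bandwidth produces a spiky estimate, too large oversmooths, which is why iterative and bootstrap selectors of the kind the thesis studies are needed.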
9

Björnberg, Dag. "Wavelets : Introduction and Applications for Economic Time Series." Thesis, Uppsala universitet, Tillämpad matematik och statistik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-325555.

Full text
10

Pienaar, Etienne A. D. "Non-Linear diffusion processes and applications." Doctoral thesis, University of Cape Town, 2016. http://hdl.handle.net/11427/22973.

Full text
Abstract:
Diffusion models are useful tools for quantifying the dynamics of continuously evolving processes. Using diffusion models it is possible to formulate compact descriptions of the dynamics of real-world processes in terms of stochastic differential equations. Despite the flexibility of these models, they can often be extremely difficult to work with. This is especially true for non-linear and/or time-inhomogeneous diffusion models, where even basic statistical properties of the process can be elusive. As such, we explore various techniques for analysing non-linear diffusion models in contexts ranging from conducting inference under discrete observation and solving first passage time problems, to the analysis of jump diffusion processes and highly non-linear diffusion processes. We apply the methodology to a number of real-world ecological and financial problems of interest and demonstrate how non-linear diffusion models can be used to better understand such phenomena. In conjunction with the methodology, we develop a series of software packages that can be used to accurately and efficiently analyse various classes of non-linear diffusion models.
11

Nounou, Mohamed Numan. "Multiscale bayesian linear modeling and applications /." The Ohio State University, 2000. http://rave.ohiolink.edu/etdc/view?acc_num=osu1488203552781115.

Full text
12

Chiu, Jing-Er. "Applications of bayesian methods to arthritis research /." free to MU campus, to others for purchase, 2001. http://wwwlib.umi.com/cr/mo/fullcit?p3036813.

Full text
13

Rossiter, Jane E. "Epidemiological applications of density estimation." Thesis, University of Oxford, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.291543.

Full text
14

譚維新 and Wai-san Wilson Tam. "Implementation and applications of additive models." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1999. http://hub.hku.hk/bib/B31221671.

Full text
15

Tam, Wai-san Wilson. "Implementation and applications of additive models /." Hong Kong : University of Hong Kong, 1999. http://sunzi.lib.hku.hk/hkuto/record.jsp?B20715444.

Full text
16

Rogers, James Anthony. "Confidence sets for multiplicity problems : two applications /." The Ohio State University, 2001. http://rave.ohiolink.edu/etdc/view?acc_num=osu14863985285588.

Full text
17

Ho, Christine. "Statistical Modeling and Analysis for Biomedical Applications." Thesis, University of California, Berkeley, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10248676.

Full text
Abstract:

This dissertation discusses approaches to two different applied statistical challenges arising from the fields of genomics and biomedical research. The first takes advantage of the richness of whole genome sequencing data, which can uncover both regions of chromosomal aberration and highly specific information on point mutations. We propose a method, the first of its kind, to reconstruct parts of a tumor's history of chromosomal aberration using only data from a single time point. We apply the method to data from eight patients with squamous cell skin cancer, finding that knockout of the tumor suppressor gene TP53 occurs early in that cancer type.

While the first chapter highlights what is possible with a deep analysis of data from a single patient, the second chapter of this dissertation looks at the opposite situation, aggregating data from several patients to identify gene expression signals for disease phenotypes. In this chapter, we provide a method for hierarchical multilabel classification from several separate classifiers for each node in the hierarchy. The first calls produced by our method improve upon the state of the art, resulting in better performance in the early part of the precision-recall curve. We apply the method to disease classifiers constructed from public microarray data, whose relationships to each other are given in a known medical hierarchy.

18

Sayrol, Clols Elisa. "Higher-order statistics applications in image sequence processing." Doctoral thesis, Universitat Politècnica de Catalunya, 1994. http://hdl.handle.net/10803/6950.

Full text
Abstract:
This thesis addresses two applications of higher-order statistics to image processing. First, we propose the use of methods based on higher-order statistics for image restoration. We initially consider images degraded by linear- or zero-phase blurring filters and additive Gaussian noise. A second degradation model is examined for astronomical images, where the blurring is caused by atmospheric turbulence and telescope aberrations. The restoration strategy in both cases relies on the fact that the phase of the original signal, and that of its higher-order statistics, is not distorted by the blurring function. The difficulties associated with combining two-dimensional signals and their higher-order statistics are reduced by using the Radon transform. The projection of the two-dimensional image at each angle is a one-dimensional signal that can be processed by any one-dimensional reconstruction method. In this part of the thesis we develop methods that use the Bicepstrum Iterative Reconstruction Algorithm and the Weight Slice Algorithm. Once the original projections are reconstructed, the inverse Radon transform yields the restored image. In the second part of the thesis we propose a class of cost functions, again based on higher-order statistics, to estimate the motion vector between consecutive images of a sequence. When the images are degraded by additive Gaussian noise of unknown covariance, the use of higher-order statistics is particularly appropriate, since the cumulants of Gaussian processes are zero. Consistent estimates require several realizations of the same sequence, which is generally not possible; however, previous images of the sequence, for which the motion estimation problem has already been solved, can be used to obtain asymptotically unbiased estimates. This is possible when stationarity can be assumed across the images of the sequence that are used. The aim of this part of the research is the use of techniques based on higher-order statistics that can estimate motion even for relatively small regions or blocks. We also define an alternative estimate for the case where only two images are available, which outperforms other existing techniques. Finally, a recursive version is developed for cases where a priori information is accessible.
19

Dolan, David M. "Spatial statistics using quasi-likelihood methods with applications." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape10/PQDD_0029/NQ66201.pdf.

Full text
20

Teterukovskiy, Alexei. "Computational statistics with environmental and remote sensing applications /." Umeå : Dept. of Forest Economics, Swedish Univ. of Agricultural Sciences, 2003. http://epsilon.slu.se/s277.pdf.

Full text
21

Pouli, Foteini Tania. "Statistics of image categories for computer graphics applications." Thesis, University of Bristol, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.540879.

Full text
22

Castro, Rodríguez Daniel Alberto. "Applications of robust multivariate statistics in process monitoring." Thesis, University of Newcastle Upon Tyne, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.445594.

Full text
23

Bogle, S. M. "Linear structural models in statistics and their applications." Thesis, University of Leeds, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.353806.

Full text
24

Lochner, Michelle Aileen Anne. "New applications of statistics in astronomy and cosmology." Doctoral thesis, University of Cape Town, 2014. http://hdl.handle.net/11427/12864.

Full text
Abstract:
Includes bibliographical references.
Over the last few decades, astronomy and cosmology have become data-driven fields. The parallel increase in computational power has naturally led to the adoption of more sophisticated statistical techniques for data analysis in these fields, in particular Bayesian methods. As the next generation of instruments comes online, this trend should continue, since previously ignored effects must be considered rigorously in order to avoid biases and incorrect scientific conclusions being drawn from the ever-improving data. In the context of supernova cosmology, an example of this is the challenge of contamination, as supernova datasets will become too large to spectroscopically confirm the types of all objects. The technique known as BEAMS (Bayesian Estimation Applied to Multiple Species) handles this contamination with a fully Bayesian mixture-model approach, which allows unbiased estimates of the cosmological parameters. Here, we extend the original BEAMS formalism to deal with correlated systematics in supernova data, which we test extensively on thousands of simulated datasets using numerical marginalization and Markov Chain Monte Carlo (MCMC) sampling over the unknown type of each supernova, showing that it recovers unbiased cosmological parameters with good coverage. We then apply Bayesian statistics to the field of radio interferometry. This is particularly relevant in light of the SKA telescope, where the data will be of such high quantity and quality that current techniques will not be adequate to fully exploit them. We show that the current approach to deconvolution of radio interferometric data is susceptible to biases induced by ignored and unknown instrumental effects such as pointing errors, which in general are correlated with the science parameters. We develop an alternative approach, Bayesian Inference for Radio Observations (BIRO), which is able to determine the joint posterior for all scientific and instrumental parameters.
We test BIRO on several simulated datasets and show that it is superior to the standard CLEAN and source extraction algorithms. BIRO fits all parameters simultaneously while providing unbiased estimates - and errors - for the noise, beam width, pointing errors and the fluxes and shapes of the sources.
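The core BEAMS idea, marginalizing over the unknown type of each object instead of discarding contaminated data, can be sketched in a toy form. The two Gaussian populations, the known type probability, and the grid posterior below are illustrative assumptions, not the thesis's actual likelihood or data:

```python
import numpy as np

# Toy setup: 80% of objects are the "science" population with observable
# ~ N(mu, 1); 20% are contaminants ~ N(2, 1). Types are unknown; only the
# probability p_ia = 0.8 is known. We infer mu by marginalizing over type.
rng = np.random.default_rng(0)
n = 2000
is_ia = rng.uniform(size=n) < 0.8
x = np.where(is_ia, rng.normal(0.0, 1.0, n), rng.normal(2.0, 1.0, n))

def log_post(mu, x, p_ia=0.8):
    """Log posterior (flat prior) of mu under the two-component mixture."""
    l_ia = np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2 * np.pi)
    l_cont = np.exp(-0.5 * (x - 2.0) ** 2) / np.sqrt(2 * np.pi)
    return np.log(p_ia * l_ia + (1 - p_ia) * l_cont).sum()

grid = np.linspace(-1, 1, 401)
mu_hat = grid[np.argmax([log_post(m, x) for m in grid])]
print(mu_hat)   # close to the true value of 0
```

A naive mean of all objects would be pulled toward the contaminant population; the mixture likelihood removes that bias without needing to know any individual object's type.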
25

Walker, Andrew D. "Statistics of the Earth's magnetic field with applications /." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 1997. http://wwwlib.umi.com/cr/ucsd/fullcit?p9737387.

Full text
26

吳浩存 and Hao-cun Wu. "Independent component analysis and its applications in finance." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2007. http://hub.hku.hk/bib/B39559099.

Full text
27

Lindell, Andreas. "Theoretical and Practical Applications of Probability : Excursions in Brownian Motion, Risk Capital Stress Testing, and Hedging of Power Derivatives." Doctoral thesis, Stockholm : Department of Mathematics, Stockholm university, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-8570.

Full text
28

Sonesson, Christian. "On statistical surveillance issues of optimality and medical applications /." Göteborg, Sweden : Stockholm : Statistical Research Unit, Göteborg University ; Almqvist & Wiksell International, 2003. http://catalog.hathitrust.org/api/volumes/oclc/53500706.html.

Full text
29

Li, Yehua. "Topics in functional data analysis with biological applications." [College Station, Tex. : Texas A&M University, 2006. http://hdl.handle.net/1969.1/ETD-TAMU-1867.

Full text
30

Bulla, Jan. "Computational Advances and Applications of Hidden (Semi-)Markov Models." Habilitation à diriger des recherches, Université de Caen, 2013. http://tel.archives-ouvertes.fr/tel-00987183.

Full text
Abstract:
This document is my habilitation thesis, a prerequisite for obtaining the "habilitation à diriger des recherches" (HDR) in France (https://fr.wikipedia.org/wiki/Habilitation_universitaire#En_France). The thesis is cumulative in form, providing an overview of my published works up to summer 2013.
31

Quinn, Kathleen Anne Sara. "Combinatorial structures with applications to information theory." Thesis, Royal Holloway, University of London, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.261791.

Full text
32

Fritsch, Kathleen R. Steiner. "Sharper and more accurate multiple comparisons methods, with applications /." The Ohio State University, 1997. http://rave.ohiolink.edu/etdc/view?acc_num=osu14879461035669.

Full text
33

Zhang, Shen. "Prediction of deterministic functions with applications in computer experiments /." The Ohio State University, 1998. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487951214939115.

Full text
34

Janlow, Christoffer. "EVALUATING THE EFFECT OF SKILL COMPETITIONS ON APPLICATIONS TO HIGH SCHOOL PROGRAMS." Thesis, Uppsala universitet, Statistiska institutionen, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-412844.

Full text
35

Jung, Min Kyung. "Statistical methods for biological applications." [Bloomington, Ind.] : Indiana University, 2007. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3278454.

Full text
Abstract:
Thesis (Ph.D.)--Indiana University, Dept. of Mathematics, 2007.
Source: Dissertation Abstracts International, Volume: 68-10, Section: B, page: 6740. Adviser: Elizabeth A. Housworth. Title from dissertation home page (viewed May 20, 2008).
36

Wei, Wutao. "Model Based Clustering Algorithms with Applications." Thesis, Purdue University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10830711.

Full text
Abstract:

In the predictive area of machine learning, unsupervised learning is applied when the labels of the data are unavailable, laborious to obtain, or available only for a limited proportion of the data. Based on the special properties of the data, we can build models by understanding those properties and making some reasonable assumptions. In this thesis, we introduce three practical problems and discuss them in detail. This thesis produces three papers, as follows: Wei, Wutao, et al. "A Non-parametric Hidden Markov Clustering Model with Applications to Time Varying User Activity Analysis." ICMLA 2015. Wei, Wutao, et al. "Dynamic Bayesian predictive model for box office forecasting." IEEE Big Data 2017. Wei, Wutao, Bowei Xi, and Murat Kantarcioglu. "Adversarial Clustering: A Grid Based Clustering Algorithm Against Active Adversaries." Submitted

User Profiling Clustering: Activity data of individual users on social media are easily accessible in this big data era. However, proper modeling strategies for user profiles have not been well developed in the literature. Existing methods or models usually have two limitations. The first limitation is that most methods target the population rather than individual users, and the second is that they cannot model non-stationary time-varying patterns. Different users in general demonstrate different activity modes on social media. Therefore, one population model may fail to characterize activities of individual users. Furthermore, online social media are dynamic and ever evolving, so are users’ activities. Dynamic models are needed to properly model users’ activities. In this paper, we introduce a non-parametric hidden Markov model to characterize the time-varying activities of social media users. In addition, based on the proposed model, we develop a clustering method to group users with similar activity patterns.

Adversarial Clustering: Nowadays more and more data are gathered for detecting and preventing cyber-attacks. Unique to the cyber security applications, data analytics techniques have to deal with active adversaries that try to deceive the data analytics models and avoid being detected. The existence of such adversarial behavior motivates the development of robust and resilient adversarial learning techniques for various tasks. In the past most of the work focused on adversarial classification techniques, which assumed the existence of a reasonably large amount of carefully labeled data instances. However, in real practice, labeling the data instances often requires costly and time-consuming human expertise and becomes a significant bottleneck. Meanwhile, a large number of unlabeled instances can also be used to understand the adversaries' behavior. To address the above mentioned challenges, we develop a novel grid based adversarial clustering algorithm. Our adversarial clustering algorithm is able to identify the core normal regions, and to draw defensive walls around the core positions of the normal objects utilizing game theoretic ideas. Our algorithm also identifies sub-clusters of attack objects, the overlapping areas within clusters, and outliers which may be potential anomalies.

Dynamic Bayesian Update for Profiling Clustering: The movie industry has become one of the most important consumer businesses, and it is increasingly competitive. For a movie producer, production and marketing carry large costs; for the owner of a movie theater, allocating a limited number of screens among the movies currently showing is also a problem. However, current models in the movie industry can only give an estimate for the opening week. We improve the dynamic linear model with a Bayesian updating framework. Using this updating method, we are also able to update streaming adversarial data and make defensive recommendations for defensive systems.

APA, Harvard, Vancouver, ISO, and other styles
37

Coupal, Louis. "The EM algorithm : an overview with applications to medical data." Thesis, McGill University, 1992. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=56644.

Full text
Abstract:
Owing to their complex design and use of live subjects as experimental units, missing or incomplete data are commonplace in medical experiments. The great increase in difficulty of maximum likelihood based analysis of incomplete-data experiments, compared to a similar complete-data analysis, encourages many medical researchers to ignore cases with missing data in favour of performing a "complete-cases" analysis.
The expectation-maximization algorithm (EM for short) is an often easily implemented algorithm that provides estimates of parameters in models with missing data. The EM algorithm unifies the theory of maximum likelihood estimation in the context of "missing" data. The general problem of missing data also includes structurally unobservable quantities such as parameters, hyperparameters and latent variables. The nature of its defining steps, the expectation or E-step and the maximization or M-step, gives the user an intuitive understanding of the maximization process.
In this thesis, the EM algorithm is first illustrated through an example borrowed from the field of genetics. The theory of the EM algorithm is formally developed, and the special case of exponential families is considered. Issues concerning convergence and inference are discussed. Many examples taken from the medical literature serve to highlight the method's broad spectrum of application in both missing-data and unobservable-parameter problems.
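The genetics example alluded to here is commonly the classic linkage illustration; assuming the standard four-category multinomial with cell probabilities (1/2 + t/4, (1-t)/4, (1-t)/4, t/4), the E- and M-steps reduce to two closed-form updates:

```python
# Classic genetics linkage illustration of the EM algorithm: observed counts
# fall in four categories with cell probabilities
# (1/2 + t/4, (1-t)/4, (1-t)/4, t/4) for an unknown t in (0, 1).
def em_linkage(y, t=0.5, tol=1e-10, max_iter=500):
    y1, y2, y3, y4 = y
    for _ in range(max_iter):
        # E-step: expected split of y1 into its two latent sub-categories.
        x2 = y1 * (t / 4) / (1 / 2 + t / 4)
        # M-step: closed-form maximizer of the complete-data log-likelihood.
        t_new = (x2 + y4) / (x2 + y2 + y3 + y4)
        if abs(t_new - t) < tol:
            return t_new
        t = t_new
    return t

print(round(em_linkage((125, 18, 20, 34)), 4))  # converges to about 0.6268
```

With the textbook counts (125, 18, 20, 34), the iteration converges to the well-known maximum likelihood estimate of roughly 0.6268 in a handful of steps.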
APA, Harvard, Vancouver, ISO, and other styles
38

Fook, Chong Stéphanie M. C. "A study of Hougaard distributions, Hougaard processes and their applications /." Thesis, McGill University, 1992. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=57001.

Full text
Abstract:
This thesis describes an investigation of Hougaard distributions, Hougaard processes and their applications. The aim is to assemble and synthesize known results about the subject, to provide further insight into its theoretical foundations, to extend existing methods and develop some new methods, to discuss and illustrate applications, and finally to motivate other statisticians to make greater use of Hougaard distributions and Hougaard processes in their own investigations. Although the family of Hougaard distributions is relatively unknown, it includes the inverse Gaussian, gamma and positive stable distributions as special cases and these are well known.
APA, Harvard, Vancouver, ISO, and other styles
39

French, Alan Paul. "Specification transformation techniques with applications to operations research." Thesis, University of East Anglia, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.293238.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Saad, Nadia Abdel Samie Basyouni Kotb. "Random Matrix Theory with Applications in Statistics and Finance." Thèse, Université d'Ottawa / University of Ottawa, 2013. http://hdl.handle.net/10393/23698.

Full text
Abstract:
This thesis investigates a technique to estimate the risk of the mean-variance (MV) portfolio optimization problem. We call this technique the Scaling technique. It provides a better estimator of the risk of the MV optimal portfolio. We obtain this result for a general estimator of the covariance matrix of the returns which includes the correlated sampling case as well as the independent sampling case and the exponentially weighted moving average case. This gave rise to the paper [CMcS]. Our result concerning the Scaling technique relies on the moments of the inverse of compound Wishart matrices, an open problem in the theory of random matrices. We actually tackle a much more general setup, where we consider any random matrix provided that its distribution has an appropriate invariance property (orthogonal or unitary) under an appropriate action (by conjugation, or by a left-right action). Our approach is based on Weingarten calculus. As an interesting byproduct of our study, and as a preliminary to computing the moments of the inverse of a compound Wishart random matrix, we obtain explicit moment formulas for the pseudo-inverse of Ginibre random matrices. These results are also given in the paper [CMS]. Using the moments of the inverse of compound Wishart matrices, we obtain asymptotically unbiased estimators of the risk and the weights of the MV portfolio. Finally, we have some numerical results which are part of our future work.
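For readers unfamiliar with inverse-Wishart moments, the classical white-Wishart identity E[W⁻¹] = I/(n − p − 1) (a standard result, not the compound-Wishart formulas derived in the thesis) can be checked numerically in a few lines:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, reps = 3, 10, 20000

# Monte Carlo check of the classical identity E[W^{-1}] = I_p / (n - p - 1),
# with W = X X' and X a p x n matrix of iid standard normal entries (n > p + 1).
acc = np.zeros((p, p))
for _ in range(reps):
    X = rng.standard_normal((p, n))
    acc += np.linalg.inv(X @ X.T)
est = acc / reps

print(np.round(est * (n - p - 1), 2))  # close to the 3 x 3 identity matrix
```

Rescaling the Monte Carlo average by (n − p − 1) recovers the identity matrix up to simulation noise, which is the kind of moment computation that risk estimators for the MV portfolio build on.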
APA, Harvard, Vancouver, ISO, and other styles
41

Schmelter, Mark L. "Applications of Bayesian Statistics in Fluvial Bed Load Transport." DigitalCommons@USU, 2013. http://digitalcommons.usu.edu/etd/1515.

Full text
Abstract:
Fluvial sediment transport is a process that has long been important in managing water resources. While we intuitively recognize that increased flow amounts to increased sediment discharge, there is still significant uncertainty in the details. Because sediment transport---and in the context of this dissertation, bed load transport---is a strongly nonlinear process that is usually modeled using empirical or semi-empirical equations, there exists a large amount of uncertainty around model parameters, predictions, and model suitability. The focus of this dissertation is to develop and demonstrate a series of physically- and statistically-based sediment transport models that build on the scientific knowledge of the physics of sediment transport while evaluating the phenomenon in an environment that leads us to robust estimates of parametric, predictive, and model selection uncertainty. The success of these models permits us to attach theoretically and procedurally sound uncertainty estimates to a process that is widely acknowledged to be variable and uncertain but has, to date, not developed robust statistical tools to quantify this uncertainty. This dissertation comprises four individual papers that methodically develop and prove the concept of Bayesian statistical sediment transport models. A simple pedagogical model is developed using synthetic and laboratory flume data---this model is then compared to traditional statistical approaches that are more familiar to the discipline. A single-fraction sediment transport model is developed on the Snake River to develop a probabilistic sediment budget whose results are compared to a sediment budget developed through an ad hoc uncertainty analysis. Lastly, a multi-fraction sediment transport model is developed in which multiple fractions of laboratory flume experiments are modeled and the results are compared to the standard theory that has been already published.
The results of these models demonstrate that a Bayesian approach to sediment transport has much to offer the discipline as it is able to 1) accurately provide estimates of model parameters, 2) quantify parametric uncertainty of the models, 3) provide a means to evaluate relative model fit between different deterministic equations, 4) provide predictive uncertainty of sediment transport, 5) propagate uncertainty from the root causes into secondary and tertiary dependent functions, and 6) provide a means by which testing of established theory can be performed.
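The flavor of such an approach can be sketched with a toy Bayesian rating curve, a power law q = aQᵇ with lognormal error fit by random-walk Metropolis; the data, flat priors, and constants below are synthetic and illustrative, not the dissertation's models:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic bed load "rating curve" q = a * Q^b with multiplicative
# lognormal error; a simple Metropolis sampler gives parametric uncertainty.
Q = rng.uniform(5.0, 50.0, size=60)          # discharge
true_a, true_b, sigma = 0.02, 1.5, 0.3
q = true_a * Q**true_b * rng.lognormal(0.0, sigma, size=60)

x = np.log(Q) - np.log(Q).mean()             # centered covariate for better mixing

def log_post(theta):                         # flat priors on (mu, b)
    mu, b = theta
    resid = np.log(q) - (mu + b * x)
    return -0.5 * np.sum(resid**2) / sigma**2

theta, lp = np.array([0.0, 1.0]), -np.inf
draws = []
for i in range(20000):                       # random-walk Metropolis
    prop = theta + rng.normal(0.0, 0.05, size=2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    if i >= 5000:                            # discard burn-in
        draws.append(theta)
draws = np.asarray(draws)

b_mean = draws[:, 1].mean()
print(round(b_mean, 2))                      # posterior mean of b, near 1.5
```

The spread of the retained draws is exactly the parametric uncertainty the dissertation argues for, and pushing each draw through the rating curve propagates that uncertainty into predictive sediment loads.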
APA, Harvard, Vancouver, ISO, and other styles
42

Herrera, Rodrigo. "Statistics of Multivariate Extremes with Applications in Risk Management." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2009. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-24962.

Full text
Abstract:
The contributions of this thesis serve a dual purpose: introducing several multivariate statistical methodologies in which, in the majority of cases, only stationarity of the random variables is assumed, and highlighting some applied problems in risk management where extreme value theory may play a role. Almost every chapter is self-contained, with its own detailed introduction and short conclusion.
APA, Harvard, Vancouver, ISO, and other styles
43

Mawk, Russell Lynn. "A survey of applications of spline functions to statistics." [Johnson City, Tenn. : East Tennessee State University], 2001. http://etd-submit.etsu.edu/etd/theses/available/etd-0714101-104229/restricted/mawksr0809.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Bhattacharya, Abhishek. "Nonparametric Statistics on Manifolds With Applications to Shape Spaces." Diss., The University of Arizona, 2008. http://hdl.handle.net/10150/194508.

Full text
Abstract:
This thesis presents certain recent methodologies and some new results for the statistical analysis of probability distributions on non-Euclidean manifolds. The notions of Frechet mean and variation as measures of center and spread are introduced and their properties are discussed. The sample estimates from a random sample are shown to be consistent under fairly broad conditions. Depending on the choice of distance on the manifold, intrinsic and extrinsic statistical analyses are carried out. In both cases, sufficient conditions are derived for the uniqueness of the population means and for the asymptotic normality of the sample estimates. Analytic expressions for the parameters in the asymptotic distributions are derived. The manifolds of particular interest in this thesis are the shape spaces of k-ads. The statistical analysis tools developed on general manifolds are applied to the spaces of direct similarity shapes, planar shapes, reflection similarity shapes, affine shapes and projective shapes. Two-sample nonparametric tests are constructed to compare the mean shapes and variation in shapes for two random samples. The samples in consideration can be either independent of each other or be the outcome of a matched pair experiment. The testing procedures are based on the asymptotic distribution of the test statistics, or on nonparametric bootstrap methods suitably constructed. Real life examples are included to illustrate the theory.
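For intuition, the extrinsic mean on the simplest manifold, the unit circle, can be sketched as the Euclidean mean of the embedded sample projected back onto the manifold; this toy example is only illustrative of the definition, not of the shape-space machinery developed in the thesis:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy extrinsic mean on the unit circle: embed directions in R^2, take the
# ordinary Euclidean mean, then project back to the circle. Uniqueness
# requires the Euclidean mean to be nonzero, which holds here.
theta = rng.normal(np.pi / 4, 0.3, size=500)          # angles near 45 degrees
pts = np.column_stack([np.cos(theta), np.sin(theta)])

m = pts.mean(axis=0)                        # ambient-space mean (inside the disk)
ext = m / np.linalg.norm(m)                 # nearest point on the circle
ang = np.arctan2(ext[1], ext[0])
print(round(ang, 2))                        # close to pi/4, about 0.785
```

The same project-the-ambient-mean recipe is what "extrinsic analysis" generalizes to shape spaces, with the choice of embedding determining which mean is estimated.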
APA, Harvard, Vancouver, ISO, and other styles
45

De, Pascale Marco. "MACHINE LEARNING AND ADVANCED STATISTICS IN ASTRONOMY: TWO APPLICATIONS." Doctoral thesis, Università degli studi di Padova, 2015. http://hdl.handle.net/11577/3424205.

Full text
Abstract:
In spectroscopy and photometry, the amount of data produced by surveys is rapidly increasing, and this trend will continue with upcoming surveys. To extract information from these data on a useful time scale, the analysis can be carried out with techniques from statistics and computer science. This work presents the development and application of two automatic methods. The thesis is in two parts. The first part describes the use of MATISSE, a parameterisation algorithm for stellar spectra developed at the Observatoire de la Cote d'Azur as part of the AMBRE project. It has been applied to ~ 126 000 spectra observed by the ESO:HARPS spectrograph. The parameters extracted by MATISSE are effective temperature, gravity, metallicity and α-element abundance, and come with associated errors. Quality selection criteria have been defined. The accepted subsample of parameters has been compared with results from independent works, showing very good agreement. Additionally, these parameters identify the great majority of stars as of spectral type G and K, in agreement with the type of targets observed by HARPS. This confirms MATISSE as an excellent parameterisation algorithm. The second part is concerned with the analysis of large amounts of photometric observations. It describes the development of a supernova classifier and its application to a set of simulated light curves. The method is developed using a "data-driven" approach: the aim is to extract from the data all the information necessary to solve the problem, using as few assumptions as possible. For this purpose, techniques from the machine learning domain are exploited; these techniques enable a computer to learn the rule transforming input into output from example observations. The machine learning algorithms used are Gaussian processes for light curve interpolation, diffusion maps for parameter extraction, and random forests to build the classification model.
The goal is to reproduce the spectroscopy-based classification into the three classes of type Ia, Ib/c and II using only light curves. In this respect the method fails, since it is not reliable in classifying type Ib/c. The main cause of this failure lies in the set of example light curves, which is not representative of the observed population of supernovae. On the other hand, when compared with independent results, the method proves competitive in identifying type Ia supernovae.
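Two of the pipeline's three stages, Gaussian-process interpolation of irregularly sampled light curves onto a common grid and random-forest classification, can be sketched with scikit-learn; the "light curves" below are synthetic Gaussians of two different widths, the diffusion-map step is omitted, and the interpolated grid values are used directly as features:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

grid = np.linspace(0.0, 1.0, 20)            # common epoch grid

def make_curve(width):
    # One irregularly sampled, noisy "light curve", GP-interpolated onto grid.
    t = np.sort(rng.uniform(0.0, 1.0, 15))
    flux = np.exp(-((t - 0.4) ** 2) / width) + rng.normal(0, 0.05, t.size)
    gp = GaussianProcessRegressor(kernel=RBF(0.2), alpha=0.05**2)
    gp.fit(t[:, None], flux)
    return gp.predict(grid[:, None])

# Two synthetic "classes": narrow peaks versus broad peaks.
X = np.array([make_curve(w) for w in [0.02] * 60 + [0.1] * 60])
y = np.array([0] * 60 + [1] * 60)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
print(round(clf.score(Xte, yte), 2))        # high accuracy on this toy task
```

On this easy two-class toy problem the classifier separates the curves almost perfectly; the abstract's point is precisely that real supernova classes, with an unrepresentative training set, are much harder.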
APA, Harvard, Vancouver, ISO, and other styles
46

Chan, Yuen-fai, and 陳遠輝. "On exact algorithms for small-sample bootstrap iterations and their applications." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B31222298.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Alphonse, Sebastian Anand. "Low Dimensional Signal Sets for Radar Applications." Thesis, Illinois Institute of Technology, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=10980036.

Full text
Abstract:

In this dissertation we view radar signals as elements of a high-dimensional signal set, where the dimension equals the number of discrete samples (M) of the signal. Because radar signals should satisfy certain conditions for good performance, most lie in much smaller subsets or subspaces. By developing appropriate lower-dimensional signal spaces that approximate the regions where radar signals live, we can realize potential advantages from the greater parametric simplicity. In this dissertation we apply this low-dimensional signal concept to radar signal processing, focusing on radar signal design and radar signal estimation. Signal design falls under radar measures and signal estimation under radar countermeasures.

In the signal design problem, one searches for the signal element that has smaller sidelobes and also satisfies certain constraints such as bandwidth occupancy, AC mainlobe width, etc. The sidelobe levels are quantified by the Peak Sidelobe Ratio (PSLR) and Integrated Sidelobe Ratio (ISLR). We use a linear combination of these two metrics as the cost function to determine the quality of the designed signal. Considerable effort has gone into designing parameterized signal sets, including our proposed Asymmetric Time Exponentiated Frequency Modulated (ATEFM) signal and Odd Polynomial Frequency Signal (OPFS). Our contribution is to demonstrate that the best signal elements from these low dimensional signal sets (LDSS) mostly outperform the best signal elements randomly chosen from the radar signal subset with dimensionality M. Since searching for the best signal element from the LDSS requires fewer computational resources, it is prudent to search for the best signal elements from the low dimensional signal sets.

In the signal estimation problem, we try to estimate the signal transmitted by a non-cooperating radar and intercepted by multiple passive sensors. The intercepted signals often have low SNR, and only a few intercepted signals may be available for estimation. The predominant method for estimating radar signals is Principal Component Analysis (PCA). When the SNR is low (< 0 dB), a large number of intercepted signals is needed to obtain accurate estimates from the PCA method. Our contribution is to demonstrate that by limiting the search for the best signal estimate to the low dimensional signal sets, one can obtain more accurate estimates of the unknown transmitted signal at low SNRs with a smaller number of sensors compared to PCA.
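The PCA baseline in this setting can be sketched with plain numpy: stack the noisy intercepted copies, take the leading right singular vector of the uncentered data matrix, and compare it to the true waveform; the chirp and the constants below are synthetic and illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# N noisy intercepted copies of one unknown M-sample waveform at low SNR;
# the leading right singular vector of the (uncentered) data matrix is the
# PCA estimate of the waveform, up to scale and sign.
M, N = 256, 40
t = np.arange(M)
s = np.cos(2 * np.pi * (0.05 + 0.0005 * t) * t)       # chirp-like signal
X = s[None, :] + 1.5 * rng.standard_normal((N, M))    # per-sample SNR below 0 dB

_, _, Vt = np.linalg.svd(X, full_matrices=False)
est = Vt[0]                                           # PCA signal estimate
corr = abs(np.corrcoef(est, s)[0, 1])
print(round(corr, 2))                                 # strong correlation with s
```

With fewer sensors or lower SNR the leading singular vector degrades quickly, which is exactly the regime where constraining the estimate to a low dimensional signal set can help.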

APA, Harvard, Vancouver, ISO, and other styles
48

Xu, Maochao. "Stochastic Orders in Heterogeneous Samples with Applications." PDXScholar, 2010. https://pdxscholar.library.pdx.edu/open_access_etds/391.

Full text
Abstract:
The statistics literature has mostly focused on the case when the available data form a random sample. In many cases, however, the observations are not identically distributed; such samples are called heterogeneous samples. The study of heterogeneous samples is of great interest in many areas, such as statistics, econometrics, reliability engineering, operations research and risk analysis. Stochastic orders between probability distributions are a widely studied concept. There are several kinds of stochastic orders that are used to compare different aspects of probability distributions, like location, variability, skewness and dependence. In this dissertation, most of the work is devoted to investigating the properties of statistics based on heterogeneous samples with the aid of stochastic orders. We will see the effect of changes in the parameters on the stochastic properties of various functions of the observations. The theory of majorization will be used for this purpose. First, order statistics from heterogeneous samples are investigated. Order statistics appear everywhere in statistics and related areas. The k-out-of-n systems are building blocks of coherent systems, and the lifetime of such a system is the same as that of the (n-k+1)th order statistic in a sample of size n. Stochastic comparisons between order statistics have been studied extensively in the literature when the parent observations are independent and identically distributed. In practice, however, this assumption is often violated, as different components in a system may not have the same distribution. Comparatively less work has been done in the case of heterogeneous random variables, mainly because their distribution theory is very complicated. Some open problems in the literature are solved in this dissertation, and some new problems associated with order statistics are investigated.
Next, stochastic properties of spacings based on heterogeneous observations are studied. Spacings are of great interest in many areas of statistics, in particular in characterizations of distributions, goodness-of-fit tests, life testing and reliability models. The stochastic properties of the sample range are investigated in detail, and applications in reliability theory are highlighted. The relative dependence between extreme order statistics is investigated in Chapter 4; in particular, the open problem discussed in Dolati et al. (2008) is solved there. In the last chapter, convolutions of random variables from heterogeneous samples are investigated. Convolutions have been widely used to model many practical situations: in reliability theory, a convolution arises as the lifetime of a redundant standby system; in queuing theory, it models the total service time of an agent in a system; in insurance, it models total claims on a number of policies in the individual risk model. I compare the dispersion and skewness properties of convolutions of different heterogeneous samples, and the tail behavior of convolutions is investigated as well. The work in this dissertation has significant applications in many diverse areas of applied probability and statistics. For example, statistics based on order statistics and spacings from heterogeneous samples arise in studying the robust properties of statistical procedures; the work on order statistics will also provide better estimation of the lifetime of a coherent system in reliability engineering; and the convolution results will be of great interest in insurance and actuarial science for evaluating risks.
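One classical comparison of this kind (a well-known Pledger-Proschan-type result, stated here only as an illustration, not one of the dissertation's new theorems) says that the maximum of independent exponentials with heterogeneous rates is stochastically larger than the maximum under the common averaged rate. A quick Monte Carlo check:

```python
import numpy as np

rng = np.random.default_rng(5)

# Maximum of independent exponentials with rates (1, 2, 3) versus the
# maximum of an iid sample with the common rate 2 (the arithmetic mean).
# Majorization theory says the heterogeneous maximum is stochastically larger.
reps = 200_000
scales = 1.0 / np.array([1.0, 2.0, 3.0])              # numpy uses scale = 1/rate
het = rng.exponential(scales, size=(reps, 3)).max(axis=1)
hom = rng.exponential(0.5, size=(reps, 3)).max(axis=1)

t = 1.0
p_het, p_hom = (het > t).mean(), (hom > t).mean()
print(round(p_het, 3), round(p_hom, 3))   # exact values: about 0.481 and 0.353
```

The survival probabilities at t = 1 can be checked in closed form, 1 − (1 − e⁻¹)(1 − e⁻²)(1 − e⁻³) ≈ 0.481 against 1 − (1 − e⁻²)³ ≈ 0.353, matching the simulation.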
APA, Harvard, Vancouver, ISO, and other styles
49

Foley, Kristen Madsen. "Multivariate Spatial Temporal Statistical Models for Applications in Coastal Ocean Prediction." NCSU, 2006. http://www.lib.ncsu.edu/theses/available/etd-07042006-110351/.

Full text
Abstract:
Estimating the spatial and temporal variation of surface wind fields plays an important role in modeling atmospheric and oceanic processes. This is particularly true for hurricane forecasting, where numerical ocean models are used to predict the height of the storm surge and the degree of coastal flooding. We use multivariate spatial-temporal statistical methods to improve coastal storm surge prediction using disparate sources of observation data. An Ensemble Kalman Filter is used to assimilate water elevation into a three-dimensional primitive-equation ocean model. We find that data assimilation is able to improve the estimates of water elevation for a case study of Hurricane Charley of 2004. In addition, we investigate the impact of inaccuracies in the wind field inputs, which are the main forcing of the numerical model in storm surge applications. A new multivariate spatial statistical framework is developed to improve the estimation of these wind inputs. A spatial linear model of coregionalization (LMC) is used to account for the cross-dependency between the two orthogonal wind components. A Bayesian approach is used for estimation of the parameters of the multivariate spatial model and a physically based wind model, while accounting for potential additive and multiplicative bias in the observed wind data. This spatial model consistently improves parameter estimation and prediction for surface wind data for the Hurricane Charley case study when compared to the original physical wind model. These methods are also shown to improve storm surge estimates when used as the forcing fields for the coastal ocean model. Finally, we describe a new framework for estimating multivariate nonstationary spatial-temporal processes based on an extension of the LMC model. We compare this approach to other multivariate spatial models and describe an application to surface wind fields from Hurricane Floyd of 1999.
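The analysis step of a perturbed-observation Ensemble Kalman Filter can be sketched on a toy two-state system, observing only the first component (loosely analogous to assimilating water elevation into a larger model state); all numbers are illustrative, not the ocean-model configuration.

```python
import numpy as np

rng = np.random.default_rng(6)

# Perturbed-observation EnKF analysis step on a toy 2-state system.
Ne = 500                                             # ensemble size
truth = np.array([1.0, -0.5])
H = np.array([[1.0, 0.0]])                           # observe first component only
R = np.array([[0.1**2]])                             # observation error covariance

ens = truth + rng.normal(0.0, 1.0, size=(Ne, 2))     # forecast ensemble
obs = H @ truth + rng.normal(0.0, 0.1, size=1)       # one noisy observation

X = ens - ens.mean(axis=0)
P = X.T @ X / (Ne - 1)                               # sample forecast covariance
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)         # Kalman gain

perturbed = obs + rng.normal(0.0, 0.1, size=(Ne, 1)) # perturbed observations
analysis = ens + (perturbed - ens @ H.T) @ K.T       # analysis ensemble

print(np.round(analysis.mean(axis=0), 1))            # first component pulled to obs
```

The observed component's ensemble mean is pulled toward the observation and its spread collapses toward the observation error, while the unobserved component moves only through whatever covariance it shares with the observed one, which is the mechanism that lets elevation data correct a full ocean state.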
APA, Harvard, Vancouver, ISO, and other styles
50

Huang, Yilan. "Applications of Markov chains to reliability of long-haul communication systems." Thesis, McGill University, 1995. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=23275.

Full text
Abstract:
This thesis applies Markov chain methods to evaluate the reliability of multipath switching networks and optical amplified systems in long-haul communication systems. When conventional methods proved impractical to evaluate the reliability of multipath switching networks and optical amplified systems, some practical methods based on Markov chains were developed by Whitmore and others (1987, 1988, 1991) and used successfully to evaluate the reliability of such systems. This work aims at describing the details of Markov chain methods for reliability calculation, and demonstrating their application to reliability evaluation of multipath switching networks and optical amplified systems.
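The core computation in such reliability evaluations, propagating a state distribution through a transition matrix, can be sketched on a toy two-state repairable unit; the failure and repair probabilities are illustrative, not the thesis's network models.

```python
import numpy as np

# Toy discrete-time Markov chain for a repairable unit: state 0 = up,
# state 1 = down, with failure probability 0.01 and repair probability 0.2
# per time step. Long-run availability is the stationary probability of
# the up state, repair / (repair + failure).
P = np.array([[0.99, 0.01],
              [0.20, 0.80]])

dist = np.array([1.0, 0.0])          # start in the up state
for _ in range(1000):
    dist = dist @ P                  # propagate one time step

print(round(dist[0], 3))             # availability: 0.2 / (0.2 + 0.01) ~ 0.952
```

Multipath and amplified systems replace this 2x2 matrix with a much larger state space enumerating component combinations, but the propagation and stationary-distribution calculations are the same.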
APA, Harvard, Vancouver, ISO, and other styles