Theses on the topic "Statistical"

Create an accurate citation in APA, MLA, Chicago, Harvard and other styles.

Consult the top 50 theses for your research on the topic "Statistical".

Next to each source in the reference list there is an "Add to bibliography" button. Click this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication in PDF format and read its abstract online whenever it is available in the metadata.

Explore theses on a wide variety of disciplines and organise your bibliography correctly.

1

Tauber, Vojtěch. "Vývoj právní úpravy postavení a organizace statistické služby v Československu 1918 - 1938". Master's thesis, Vysoká škola ekonomická v Praze, 2015. http://www.nusl.cz/ntk/nusl-201593.

Full text
Abstract
This diploma thesis examines the legislation governing the status and organisation of the statistical service in Czechoslovakia in the years 1918-1938, a subject which has so far lacked sufficient exploration. In this period, statistics relating to public administration was centralised and placed under the responsibility of the State Statistical Council and the State Statistical Office. These institutions are given special attention, particularly the issues associated with their most intricate aspects: personnel, budget and location. While the State Statistical Office was one of the central government authorities in Czechoslovakia, the State Statistical Council had the nature of an advisory body and an independent decision-making authority. The centralisation did not apply to statistical surveys carried out by certain cities. This thesis draws primarily on unpublished archive materials and on published sources concerning the activities of the State Statistical Office and the State Statistical Council.
2

Kouba, Pavel. "Možnost zavedení a využívání metody SPC ve výrobě v organizaci s.n.o.p CZ, a.s". Master's thesis, Vysoká škola ekonomická v Praze, 2009. http://www.nusl.cz/ntk/nusl-16319.

Full text
Abstract
This diploma thesis verifies the application of SPC methods and evaluates the statistical stability and process capability of steel-stamping production in a real manufacturing process. In the second part, the author attempts to design the optimal form of the SPC method for use in the specified manufacturing process.
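The two steps described in this abstract, checking statistical stability with a control chart and then assessing process capability, can be illustrated with a short sketch. The chart constants below are the standard ones for subgroups of five; the specification limits and simulated stamping data are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def xbar_r_limits(samples, a2=0.577, d3=0.0, d4=2.114):
    """Shewhart X-bar and R chart limits; A2, D3, D4 are the standard
    constants for subgroups of size n = 5."""
    samples = np.asarray(samples)                    # shape (k, 5)
    xbar = samples.mean(axis=1)                      # subgroup means
    r = samples.max(axis=1) - samples.min(axis=1)    # subgroup ranges
    xbarbar, rbar = xbar.mean(), r.mean()
    return {"xbar": (xbarbar - a2 * rbar, xbarbar + a2 * rbar),
            "r": (d3 * rbar, d4 * rbar)}

def capability(data, lsl, usl):
    """Cp and Cpk process capability indices."""
    mu, sigma = np.mean(data), np.std(data, ddof=1)
    return (usl - lsl) / (6 * sigma), min(usl - mu, mu - lsl) / (3 * sigma)

rng = np.random.default_rng(0)
subgroups = rng.normal(10.0, 0.1, size=(25, 5))      # simulated stamping dimension
print(xbar_r_limits(subgroups))
print(capability(subgroups.ravel(), lsl=9.7, usl=10.3))
```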
3

Postler, Štěpán. "Statistická analýza ve webovém prostředí". Master's thesis, Vysoká škola ekonomická v Praze, 2013. http://www.nusl.cz/ntk/nusl-199226.

Full text
Abstract
The aim of this thesis is to create a web application that allows importing datasets and analysing the data with statistical methods. The application provides user accounts so that several people can work with a single dataset and interact with one another. Data are stored on a remote server, and the application is accessible from any computer connected to the Internet. The application is written in PHP with the MySQL database system, and the user interface is built in HTML with CSS styles. All parts of the application are stored on an attached CD as text files. In addition to the web application, the thesis includes a written output with a theoretical part, describing the chosen statistical analysis methods, and a practical part containing a list of the application's functions, a description of its data model, and a demonstration of the data analysis options on specific examples.
4

Marohn, Frank [Verfasser]. "On statistical information of extreme order statistics / Frank Marohn". Würzburg : Universität Würzburg, 2010. http://d-nb.info/1101947373/34.

Full text
5

Obi, Jude Chukwura. "Application of statistical computing to statistical learning". Thesis, University of Leeds, 2016. http://etheses.whiterose.ac.uk/16741/.

Full text
Abstract
This study focuses on supervised learning, an aspect of statistical learning. Supervised learning is concerned with prediction, and prediction problems are distinguished by the output predicted. The output of prediction is either a categorical or a continuous variable: if the output is a categorical variable, we have classification; otherwise, regression. We therefore identify classification and regression as two prediction tools. We further identify many features commonly shared by these prediction tools, and as a result suggest that it may be possible to use a regression function in classification or vice versa. Thus, we direct our research towards classification, and intend to: (i) compare the differences and similarities between two main classifiers, namely Fisher's Discriminant Analysis (FDA) and the Support Vector Machine (SVM); (ii) introduce a regression-based classification function, with acronym RDA (Regression Discriminant Analysis); (iii) provide proof that RDA and FDA are identical; (iv) introduce other classification functions based on multiple regression variants (ridge regression and the Lasso), namely Lasso Discriminant Analysis (LaDA) and Ridge Regression Discriminant Analysis (RRDA). We further conduct experiments using real-world datasets to verify whether the error rates of RDA and FDA on the same datasets are identical. We also conduct similar experiments to verify whether differences arising from the error rates of LaDA, RRDA, FDA and Regularized Fisher's Discriminant Analysis (RFDA) on the same datasets are statistically different from each other. In the end, we explore benefits that may derive from the use of LaDA as a classifier, particularly in connection with variable selection.
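The claimed identity between the regression-based classifier (RDA) and FDA can be checked numerically: in the two-class case, least-squares regression on coded class labels is known to give a weight vector proportional to Fisher's discriminant direction. The sketch below illustrates that classical result on simulated Gaussian data; it is our own illustration, not code from the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)
x0 = rng.normal([0.0, 0.0], 1.0, size=(100, 2))   # class 0
x1 = rng.normal([2.0, 1.0], 1.0, size=(120, 2))   # class 1
X = np.vstack([x0, x1])
y = np.r_[np.zeros(len(x0)), np.ones(len(x1))]

# regression route: least squares on coded labels (intercept + features)
beta = np.linalg.lstsq(np.c_[np.ones(len(X)), X], y, rcond=None)[0]
w_reg = beta[1:]

# Fisher route: Sw^{-1} (mu1 - mu0) with pooled within-class scatter
sw = (np.cov(x0, rowvar=False) * (len(x0) - 1)
      + np.cov(x1, rowvar=False) * (len(x1) - 1))
w_fda = np.linalg.solve(sw, x1.mean(0) - x0.mean(0))

# the two directions agree up to a positive scale factor
print(w_reg / np.linalg.norm(w_reg))
print(w_fda / np.linalg.norm(w_fda))
```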
6

Zhang, Bo. "Machine Learning on Statistical Manifold". Scholarship @ Claremont, 2017. http://scholarship.claremont.edu/hmc_theses/110.

Full text
Abstract
This senior thesis project explores and generalizes some fundamental machine learning algorithms from Euclidean space to the statistical manifold, an abstract space in which each point is a probability distribution. In this thesis, we adapt the optimal separating hyperplane, the k-means clustering method, and the hierarchical clustering method for classifying and clustering probability distributions. In these modifications, we use statistical distances as a measure of the dissimilarity between objects. We describe a situation where the clustering of probability distributions is needed and useful. We present many interesting and promising empirical clustering results, which demonstrate that the statistical-distance-based clustering algorithms often outperform the same algorithms with the Euclidean distance in many complex scenarios. In particular, we apply our statistical-distance-based hierarchical and k-means clustering algorithms to univariate normal distributions with k = 2 and k = 3 clusters, bivariate normal distributions with diagonal covariance matrix and k = 3 clusters, and discrete Poisson distributions with k = 3 clusters. Finally, we prove that the k-means clustering algorithm applied to discrete distributions with the Hellinger distance converges not only to the partial optimal solution but also to the local minimum.
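A minimal sketch of one of the adapted algorithms: Lloyd-style k-means over discrete probability distributions with the Hellinger distance, tried on Poisson distributions with three underlying rates as in the abstract. Using the renormalised cluster average as the centroid is our assumption; the thesis's exact centroid definition may differ.

```python
import numpy as np
from scipy.stats import poisson

def hellinger(p, q):
    """Hellinger distance between discrete distributions p and q."""
    return np.linalg.norm(np.sqrt(p) - np.sqrt(q)) / np.sqrt(2)

def kmeans_hellinger(dists, k, iters=100, seed=0):
    """Lloyd iterations with Hellinger assignment; the centroid is the
    renormalised mean of the member distributions (an assumption)."""
    rng = np.random.default_rng(seed)
    centers = dists[rng.choice(len(dists), k, replace=False)]
    for _ in range(iters):
        d = np.array([[hellinger(p, c) for c in centers] for p in dists])
        labels = d.argmin(axis=1)
        new = np.array([dists[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        new /= new.sum(axis=1, keepdims=True)
        if np.allclose(new, centers):
            break
        centers = new
    return labels

rng = np.random.default_rng(1)
grid = np.arange(20)                                   # truncated support 0..19
rates = np.concatenate([rng.normal(m, 0.3, 30) for m in (2.0, 5.0, 9.0)])
pmfs = np.array([poisson.pmf(grid, r) for r in rates])
pmfs /= pmfs.sum(axis=1, keepdims=True)                # renormalise after truncation
print(kmeans_hellinger(pmfs, k=3))
```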
7

Houska, Lukáš. "SÚS a rozvoj statistické vědy v meziválečném období". Master's thesis, Vysoká škola ekonomická v Praze, 2010. http://www.nusl.cz/ntk/nusl-17112.

Full text
Abstract
The thesis focuses on the creation and functioning of the State Statistical Office and its contribution to the development of statistical science and theory. Its main goal is to acquaint readers with the first period of Czechoslovak state statistics and to enable a thorough look into the institution's publication activities. The thesis is divided into three parts. The first describes the modus operandi of the statistical office itself; the second presents biographical data on the State Statistical Office's most influential personalities; the third describes and analyses the books, journals and other publications. At the end of the third part, the key works of statistical theory are analysed. The appendix contains the published laws and regulations of the Czechoslovak Republic that are directly tied to the statistical office's activities, together with a list of works published in the two key series of the publication system. The contribution of the thesis is a comprehensive view of Czechoslovak statistics in the interwar period, of which until now only fragments have been compiled and described.
8

Whitehead, Andile. "Statistical-thermodynamical analysis, using Tsallis statistics, in high energy physics". Master's thesis, University of Cape Town, 2014. http://hdl.handle.net/11427/13391.

Full text
Abstract
Includes bibliographical references.
Obtained via the maximisation of a modified entropy, the Tsallis distribution has been used to fit the transverse momentum distributions of identified particles from several high energy experiments. We propose a form of the distribution described in Cleymans and Worku, 2012, and show it to be thermodynamically consistent. Transverse momenta distributions and fits from ALICE, ATLAS, and CMS using both Tsallis and Boltzmann distributions are presented.
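For reference, the thermodynamically consistent Tsallis form proposed in Cleymans and Worku (2012) is commonly written, at mid-rapidity and zero chemical potential, as the transverse-momentum spectrum below. This is quoted from the general literature as an assumption about the exact variant used here, so the normalisation should be checked against the thesis.

```latex
\left.\frac{d^{2}N}{dp_{T}\,dy}\right|_{y=0}
  = \frac{g\,V\,p_{T}\,m_{T}}{(2\pi)^{2}}
    \left[1+(q-1)\,\frac{m_{T}}{T}\right]^{-\frac{q}{q-1}},
\qquad m_{T}=\sqrt{p_{T}^{2}+m^{2}},
```

where g is the degeneracy factor, V the volume, T the temperature and q the Tsallis parameter; the Boltzmann distribution is recovered in the limit q → 1.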
9

Murphy, Toriano A. "Statistical debugging". Thesis, Monterey, Calif. : Naval Postgraduate School, 2008. http://bosun.nps.edu/uhtbin/hyperion-image.exe/08Mar%5FMurphy_Toriano.pdf.

Full text
Abstract
Thesis (M.S. in Computer Science)--Naval Postgraduate School, March 2008.
Thesis Advisor(s): Auguston, Mikhail. "March 2008." Description based on title screen as viewed on May 5, 2008. Includes bibliographical references (p. 91). Also available in print.
10

Mueller, Erich H. "Statistical properties of high-energy rod vibrations". Thesis, Georgia Institute of Technology, 1994. http://hdl.handle.net/1853/12116.

Full text
11

Jinn, Nicole Mee-Hyaang. "Toward Error-Statistical Principles of Evidence in Statistical Inference". Thesis, Virginia Tech, 2014. http://hdl.handle.net/10919/48420.

Full text
Abstract
The context for this research is statistical inference, the process of making predictions or inferences about a population from observation and analysis of a sample. In this context, many researchers want to grasp what inferences can be made that are valid, in the sense of being able to be upheld or justified by argument or evidence. Another pressing question among users of statistical methods is: how can spurious relationships be distinguished from genuine ones? Underlying both of these issues is the concept of evidence. In response to these (and similar) questions, the two questions I work on in this essay are: (1) what is a genuine principle of evidence? and (2) do error probabilities have more than a long-run role? Concisely, I propose that felicitous genuine principles of evidence should provide concrete guidelines on precisely how to examine error probabilities, with respect to a test's aptitude for unmasking pertinent errors, which leads to establishing sound interpretations of results from statistical techniques. The starting point for my definition of genuine principles of evidence is Allan Birnbaum's confidence concept, an attempt to control misleading interpretations. However, Birnbaum's confidence concept is inadequate for interpreting statistical evidence, because using only pre-data error probabilities would not pick up on a test's ability to detect a discrepancy of interest (e.g., "even if the discrepancy exists") with respect to the actual outcome. Instead, I argue that Deborah Mayo's severity assessment is the most suitable characterization of evidence, based on my definition of genuine principles of evidence.
Master of Arts
12

Deng, Xinwei. "Contributions to statistical learning and statistical quantification in nanomaterials". Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/29777.

Full text
Abstract
Thesis (Ph.D)--Industrial and Systems Engineering, Georgia Institute of Technology, 2009.
Committee Chair: Wu, C. F. Jeff; Committee Co-Chair: Yuan, Ming; Committee Member: Huo, Xiaoming; Committee Member: Vengazhiyil, Roshan Joseph; Committee Member: Wang, Zhonglin. Part of the SMARTech Electronic Thesis and Dissertation Collection.
13

Chiarella, Andrew. "Statistical reasoning and scientific inquiry : statistics in the physical science classroom". Thesis, McGill University, 2001. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=33882.

Full text
Abstract
Teaching science using an inquiry approach is encouraged by several organisations responsible for defining teaching and learning guidelines in North America. However, using this approach can be difficult because of the complexity of inquiry. One source of difficulty is an inability to make sense of the data. Error variation, in particular, poses a significant barrier to the correct interpretation of data and therefore successful inquiry learning. A study was conducted to examine middle school students' ability to make sense of the data they collected in three related experiments. These data involved taking measurements of two continuous variables that were affected by error variation. The results indicated that students tended not to use abstract patterns to describe the data but rather used more local patterns that did not make use of the whole data set. However, many students also indicated an intuitive understanding that a greater amount of data could be used to generate results that are more accurate. This suggests a disparity between what the students understand about data and what they are capable of doing with data. Educational implications are that students may benefit from learning ideal patterns that can be compared to non-ideal data they collect.
14

Raj, Alvin Andrew. "Ambiguous statistics - how a statistical encoding in the periphery affects perception". Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/79214.

Full text
Abstract
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2013.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 159-163).
Recent understanding of human vision suggests that the periphery compresses visual information into a set of summary statistics. Some visual information is robust to this lossy compression, but other information, such as spatial location and phase, is not perfectly represented, leading to ambiguous interpretations. Using the statistical encoding, we can visualize the information available in the periphery to gain intuitions about human performance in visual tasks, which has implications for user interface design, or more generally, for whether the periphery encodes sufficient information to perform a task without additional eye movements. The periphery is most of the visual field; if it undergoes these losses of information, then our perception and our ability to perform tasks efficiently are affected. We show that the statistical encoding explains human performance in classic visual search experiments. Based on the statistical understanding, we also propose a quantitative model that can estimate the average number of fixations humans would need to find a target in a search display. Further, we show that the ambiguities in the peripheral representation predict many aspects of some illusions. In particular, the model correctly predicts how polarity and width affect the Pinna-Gregory illusion. Visualizing the statistical representation of the illusion shows that many of its qualitative aspects are captured by the statistical ambiguities. We also investigate a phenomenon known as Object Substitution Masking (OSM), in which the identity of an object is impaired when a sparse, non-overlapping, temporally trailing mask surrounds that object. We find that different types of grouping of object and mask produce different levels of impairment. This contradicts a theory of OSM which predicts that grouping should always increase masking strength. We speculate on some reasons why the statistical model of the periphery may explain OSM.
by Alvin Andrew Raj.
Ph.D.
15

Wang, Tao. "Statistical design and analysis of microarray experiments". Connect to this title online, 2005. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1117201363.

Full text
Abstract
Thesis (Ph. D.)--Ohio State University, 2005.
Title from first page of PDF file. Document formatted into pages; contains ix, 146 p.; also includes graphics (some col.) Includes bibliographical references (p. 145-146). Available online via OhioLINK's ETD Center
16

Hardy, James C. (James Clifford). "A Monte Carlo Study of the Robustness and Power Associated with Selected Tests of Variance Equality when Distributions are Non-Normal and Dissimilar in Form". Thesis, University of North Texas, 1990. https://digital.library.unt.edu/ark:/67531/metadc332130/.

Full text
Abstract
When selecting a method for testing variance equality, a researcher should select a method which is robust to distribution non-normality and dissimilarity. The method should also possess sufficient power to ascertain departures from the equal variance hypothesis. This Monte Carlo study examined the robustness and power of five tests of variance equality under specific conditions. The tests examined included one procedure proposed by O'Brien (1978), two by O'Brien (1979), and two by Conover, Johnson, and Johnson (1981). Specific conditions included assorted combinations of the following factors: k=2 and k=3 groups, normal and non-normal distributional forms, similar and dissimilar distributional forms, and equal and unequal sample sizes. Under the k=2 group condition, a total of 180 combinations were examined. A total of 54 combinations were examined under the k=3 group condition. The Type I error rates and statistical power estimates were based upon 1000 replications in each combination examined. Results of this study suggest that when sample sizes are relatively large, all five procedures are robust to distribution non-normality and dissimilarity, as well as being sufficiently powerful.
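The simulation design described here, estimating Type I error rates by repeatedly sampling under a true null of equal variances, can be sketched as follows. Levene's test is used as a stand-in because the O'Brien and Conover procedures studied in the thesis are not available in SciPy; the distributions, sample sizes and replication count are illustrative.

```python
import numpy as np
from scipy import stats

def type1_rate(gen_a, gen_b, n_a, n_b, reps=1000, alpha=0.05, seed=0):
    """Monte Carlo Type I error rate of a variance-equality test when
    both groups truly have unit variance."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(reps):
        a, b = gen_a(rng, n_a), gen_b(rng, n_b)
        if stats.levene(a, b, center="median").pvalue < alpha:
            rejections += 1
    return rejections / reps

normal = lambda rng, n: rng.normal(0.0, 1.0, n)
skewed = lambda rng, n: rng.exponential(1.0, n) - 1.0   # variance 1, skewed
print(type1_rate(normal, normal, 30, 30))               # similar forms
print(type1_rate(normal, skewed, 30, 30))               # dissimilar forms
```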
17

Asenov, Plamen. "Accurate statistical circuit simulation in the presence of statistical variability". Thesis, University of Glasgow, 2013. http://theses.gla.ac.uk/4996/.

Full text
Abstract
Semiconductor device performance variation due to the granular nature of charge and matter has become a key problem in the semiconductor industry. The main sources of this ‘statistical’ variability include random discrete dopants (RDD), line edge roughness (LER) and metal gate granularity (MGG). These variability sources have been studied extensively; however, no methodology had been developed to accurately represent this variability at circuit and system level. In order to accurately represent statistical variability in real devices, the GSS simulation toolchain was utilised to simulate 10,000 20/22nm n- and p-channel transistors including the RDD, LER and MGG variability sources. A statistical compact modelling methodology was developed which accurately captures the behaviour of the simulated transistors and produces compact model parameter distributions suitable for advanced compact model generation strategies like PCA and NPM. The resulting compact model libraries were then utilised to evaluate the impact of statistical variability on SRAM design, and to quantitatively evaluate the difference between accurate compact model generation using NPM and the Gaussian VT methodology. Over 5 million dynamic write simulations were performed, showing that at advanced technology nodes statistical variability cannot be accurately represented using a Gaussian VT. The results also show that accurate modelling techniques can help reduce design margins by eliminating some of the pessimism of standard variability modelling approaches.
18

Lesser, Elizabeth Rochelle. "A New Right Tailed Test of the Ratio of Variances". UNF Digital Commons, 2016. http://digitalcommons.unf.edu/etd/719.

Full text
Abstract
It is important to be able to compare variances efficiently and accurately regardless of the parent populations. This study proposes a new right-tailed test for the ratio of two variances using an Edgeworth expansion. To study the Type I error rate and power performance, simulations were performed on the new test with various combinations of symmetric and skewed distributions. The test is found to have better-controlled Type I error rates than the existing tests, and it also has sufficient power. The newly derived test therefore provides a good robust alternative to the existing methods.
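For context, the normal-theory baseline that such robust proposals are measured against is the classical right-tailed F test on the ratio of sample variances; a minimal sketch follows (this is the textbook test, not the thesis's new Edgeworth-based statistic).

```python
import numpy as np
from scipy import stats

def f_test_right(x, y):
    """Right-tailed F test of H0: var(x) <= var(y) against var(x) > var(y),
    assuming normal parent populations."""
    f = np.var(x, ddof=1) / np.var(y, ddof=1)
    return f, stats.f.sf(f, len(x) - 1, len(y) - 1)

rng = np.random.default_rng(2)
x = rng.normal(0.0, 1.5, 40)    # larger true variance
y = rng.normal(0.0, 1.0, 40)
print(f_test_right(x, y))       # statistic and right-tail p-value
```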
19

Onnis, Luca. "Statistical language learning". Thesis, University of Warwick, 2003. http://wrap.warwick.ac.uk/54811/.

Full text
Abstract
Theoretical arguments based on the "poverty of the stimulus" have denied a priori the possibility that abstract linguistic representations can be learned inductively from exposure to the environment, given that the linguistic input available to the child is both underdetermined and degenerate. I reassess such learnability arguments by exploring (a) the type and amount of statistical information implicitly available in the input in the form of distributional and phonological cues; (b) psychologically plausible inductive mechanisms for constraining the search space; (c) the nature of linguistic representations, algebraic or statistical. To do so I use three methodologies: experimental procedures, linguistic analyses based on large corpora of naturally occurring speech and text, and computational models implemented in computer simulations. In Chapters 1, 2, and 5, I argue that long-distance structural dependencies - traditionally hard to explain with simple distributional analyses based on n-gram statistics - can indeed be learned associatively provided the amount of intervening material is highly variable or invariant (the Variability effect). In Chapter 3, I show that simple associative mechanisms instantiated in Simple Recurrent Networks can replicate the experimental findings under the same conditions of variability. Chapter 4 presents successes and limits of such results across perceptual modalities (visual vs. auditory) and perceptual presentation (temporal vs. sequential), as well as the impact of long and short training procedures. In Chapter 5, I show that generalisation to abstract categories from stimuli framed in non-adjacent dependencies is also modulated by the Variability effect. In Chapter 6, I show that the putative separation of algebraic and statistical styles of computation based on successful speech segmentation versus unsuccessful generalisation experiments (as published in a recent Science paper) is premature and is the effect of a preference for phonological properties of the input. In Chapter 7, computer simulations of learning irregular constructions suggest that it is possible to learn from positive evidence alone, despite Gold's celebrated arguments on the unlearnability of natural languages. Evolutionary simulations in Chapter 8 show that irregularities in natural languages can emerge from full regularity and remain stable across generations of simulated agents. In Chapter 9 I conclude that the brain may be endowed with a powerful statistical device for detecting structure, generalising, segmenting speech, and recovering from overgeneralisations. The experimental and computational evidence gathered here suggests that statistical language learning is more powerful than heretofore acknowledged by the current literature.
20

Choakjarernwanit, Naruetep. "Statistical pattern recognition". Thesis, University of Surrey, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.306586.

Full text
21

Wells, William Mercer. "Statistical object recognition". Thesis, Massachusetts Institute of Technology, 1993. http://hdl.handle.net/1721.1/12606.

Full text
Abstract
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1993.
Includes bibliographical references (p. 169-177).
by William Mercer Wells, III.
Ph.D.
22

Gašić, Milica. "Statistical dialogue modelling". Thesis, University of Cambridge, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.609496.

Full text
23

Ortega, San Martín Luis. "Some Statistical Data". Revista de Química, 2013. http://repositorio.pucp.edu.pe/index/handle/123456789/100075.

Full text
24

Ortega, San Martín Luis. "Some Statistical Data". Revista de Química, 2012. http://repositorio.pucp.edu.pe/index/handle/123456789/99036.

Full text
25

Lee, Yun-Soo. "On some aspects of distribution theory and statistical inference involving order statistics". Virtual Press, 1991. http://liblink.bsu.edu/uhtbin/catkey/834141.

Full text
Abstract
Statistical methods based on nonparametric and distribution-free procedures require the use of order statistics. Order statistics are also used in many parametric estimation and testing problems. With the introduction of modern high-speed computers, order statistics have gained more importance in statistical inference in recent years - the main reason being that manually ranking a large number of observations was difficult and time-consuming in the past, which is no longer the case given the availability of high-speed computers. Many applications of order statistics also require numerical tables, and a computer is needed to construct these tables. In this thesis, some basic concepts and results involving order statistics are provided. In particular, applications of the theory of permanents to the distribution of order statistics are discussed. Further, the correlation coefficient between the smallest observation (Y1) and the largest observation (Yn) of a random sample of size n from two gamma populations, where (n-1) observations of the sample are from one population and the remaining observation is from the other, is presented.
Department of Mathematical Sciences
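The correlation described at the end of this abstract lends itself to a quick Monte Carlo check: draw n-1 observations from one gamma population and one observation from another, then estimate the correlation between the sample minimum and maximum. The shape parameters and sample size below are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def min_max_corr(n, shape1, shape2, reps=20000, seed=3):
    """Estimate corr(Y1, Yn) when n-1 observations are Gamma(shape1, 1)
    and the remaining one is Gamma(shape2, 1)."""
    rng = np.random.default_rng(seed)
    base = rng.gamma(shape1, size=(reps, n - 1))
    extra = rng.gamma(shape2, size=(reps, 1))
    sample = np.hstack([base, extra])
    y1, yn = sample.min(axis=1), sample.max(axis=1)
    return np.corrcoef(y1, yn)[0, 1]

print(min_max_corr(n=10, shape1=2.0, shape2=5.0))
```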
26

Kim, Woosuk. "Statistical Inference on Dual Generalized Order Statistics for Burr Type III Distribution". University of Cincinnati / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1396533232.

Full text
27

Corrado, Charles J. "Nonparametric statistical methods in financial market research". Diss., The University of Arizona, 1988. http://hdl.handle.net/10150/184608.

Full text
Abstract
This dissertation presents an exploration of the use of nonparametric statistical methods based on ranks for use in financial market research. Applications to event study methodology and the estimation of security systematic risk are analyzed using a simulation methodology with actual daily security return data. The results indicate that procedures based on ranks are more efficient than normal theory procedures currently in common use.
28

Andrés, Ferrer Jesús. "Statistical approaches for natural language modelling and monotone statistical machine translation". Doctoral thesis, Universitat Politècnica de València, 2010. http://hdl.handle.net/10251/7109.

Full text
Abstract
This thesis gathers several contributions to statistical pattern recognition and, more specifically, to various natural language processing tasks. Several well-known statistical techniques are revisited: parameter estimation, loss function design and statistical modelling. These techniques are applied to natural language processing tasks such as document classification, natural language modelling and statistical machine translation. Regarding parameter estimation, we approach the smoothing problem by proposing a new constrained-domain maximum likelihood estimation technique (CDMLE). The CDMLE technique avoids the need for a smoothing step, which causes the estimator to lose the properties of maximum likelihood. This technique is applied to document classification with the Naive Bayes classifier. The CDMLE technique is then extended to leaving-one-out maximum likelihood estimation and applied to language model smoothing. The results obtained on several natural language modelling tasks show an improvement in terms of perplexity. Regarding the loss function, we carefully study the design of loss functions other than the 0-1 loss. The study focuses on those loss functions that, while retaining a decoding complexity similar to that of the 0-1 loss, provide greater flexibility. We analyse and present several loss functions on various machine translation tasks and with various translation models. We also analyse some translation rules that stand out for practical reasons, such as the direct translation rule, and we deepen the understanding of log-linear models, which are in fact particular cases of loss functions. Finally, several monotone translation models based on statistical modelling techniques are proposed.
Andrés Ferrer, J. (2010). Statistical approaches for natural language modelling and monotone statistical machine translation [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/7109
29

尹再英 and Choi-ying Wan. "Statistical analysis for capture-recapture experiments in discrete time". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2001. http://hub.hku.hk/bib/B31225287.

Full text
30

Abdalmajid, Mohammed Babekir Elmalik. "An application of factor analysis on a 24-item scale on the attitudes towards AIDS precautions using Pearson, Spearman and Polychoric correlation matrices". Thesis, University of the Western Cape, 2006. http://etd.uwc.ac.za/index.php?module=etd&action=viewtitle&id=gen8Srv25Nme4_8765_1184324798.

Full text
Abstract
The 24-item scale has been used extensively to assess the attitudes towards AIDS precautions. This study investigated the usefulness and validity of the instrument in a South African setting, fourteen years after the development of the instrument. If a new structure could be found statistically, the HIV/AIDS prevention strategies could be more effective in aiding campaigns to change attitudes and sexual behaviour.

31

Vaitkevičius, Robertas. "Duomenų kompiuterinės statistinės analizės technologijos". Master's thesis, Lithuanian Academic Libraries Network (LABT), 2008. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2008~D_20080929_140053-98826.

Full text
Abstract
The aim of the work "The technologies of computer-based statistical analysis of data" is to analyse and compare the capabilities of various popular statistical packages and to offer recommendations to the user. The work analyses the SPSS 8.0 for Windows, STATISTICA 7 and Minitab 15 English statistical packages. Using these packages, statistical calculations were carried out on data from the questionnaire "About that, how do you live". The capabilities of the statistical packages were assessed, comparative analysis tables were compiled, and recommendations were formulated to help the user make a well-founded choice of the most suitable package, taking the user's requirements and resources into account. Two macros were created for the statistical package STATISTICA 7, using the VISUAL BASIC programming language integrated in the package: the first calculates the degree of completeness of the filled-in questionnaires, and the second filters the data of a chosen variable according to a chosen criterion. The work is innovative in that these two macros extend the capabilities of the statistical package STATISTICA 7.
32

KIM, NAMHEE. "A semiparametric statistical approach to Functional MRI data". The Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=osu1262295445.

Full text
33

Jeng, Tian-Tzer. "Some contributions to asymptotic theory on hypothesis testing when the model is misspecified /". The Ohio State University, 1987. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487332636473942.

Full text
34

Nunez, Yelen. "Statistical Models for Predicting College Success". FIU Digital Commons, 2013. http://digitalcommons.fiu.edu/etd/1036.

Full text
Abstract
Colleges base their admission decisions on a number of factors to determine which applicants have the potential to succeed. This study utilized data for students who graduated from Florida International University between 2006 and 2012. Two models were developed (one using SAT and the other using ACT as the principal explanatory variable) to predict college success, measured by the student's college grade point average at graduation. Other factors used to make these predictions included high school performance, socioeconomic status, major, gender, and ethnicity. The model using ACT had a higher R^2, but the model using SAT had a lower mean square error. African Americans had a significantly lower college grade point average than graduates of other ethnicities, and females had a significantly higher college grade point average than males.
35

Shen, Zhiyuan. "EMPIRICAL LIKELIHOOD AND DIFFERENTIABLE FUNCTIONALS". UKnowledge, 2016. http://uknowledge.uky.edu/statistics_etds/14.

Full text
Abstract
Empirical likelihood (EL) is a recently developed nonparametric method of statistical inference. It has been shown by Owen (1988, 1990) and many others that the empirical likelihood ratio (ELR) method can be used to produce nice confidence intervals or regions. Owen (1988) shows that -2 log ELR converges to a chi-square distribution with one degree of freedom subject to a linear statistical functional in terms of distribution functions. However, a generalization of Owen's result to the right-censored data setting is difficult, since no explicit maximization can be obtained under a constraint in terms of distribution functions. Pan and Zhou (2002), instead, study EL with right-censored data using a linear statistical functional constraint in terms of cumulative hazard functions. In this dissertation, we extend Owen's (1988) and Pan and Zhou's (2002) results to non-linear but Hadamard differentiable statistical functional constraints. For this purpose, a study of differentiable functionals with respect to hazard functions is carried out. We also generalize our results to two-sample problems. Stochastic process and martingale theories are applied to prove the theorems. The confidence intervals based on the EL method are compared with other available methods. Real data analysis and simulations are used to illustrate our proposed theorem, with an application to Gini's absolute mean difference.
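Owen's limit theorem cited here can be illustrated numerically for the mean: the Lagrange multiplier is found by root finding on the interval where all implied weights stay positive, and -2 log ELR is referred to its chi-square limit. This is a generic sketch of Owen (1988), not the dissertation's extension to censored data or Hadamard-differentiable functionals.

```python
import numpy as np
from scipy.optimize import brentq
from scipy import stats

def neg2_log_elr_mean(x, mu0):
    """-2 log empirical likelihood ratio for H0: E[X] = mu0 (Owen 1988)."""
    z = x - mu0
    eps = 1e-10
    lo = -1.0 / z.max() + eps          # keep every 1 + lam*z_i positive
    hi = -1.0 / z.min() - eps
    g = lambda lam: np.sum(z / (1.0 + lam * z))   # estimating equation in lambda
    lam = brentq(g, lo, hi)
    return 2.0 * np.sum(np.log1p(lam * z))

rng = np.random.default_rng(4)
x = rng.exponential(1.0, 200)
t = neg2_log_elr_mean(x, mu0=1.0)      # H0 true: mean of Exp(1) is 1
print(t, stats.chi2.sf(t, df=1))       # statistic and chi-square p-value
```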
36

Kamsani, Noor 'Ain. "Statistical circuit simulations - from ‘atomistic’ compact models to statistical standard cell characterisation". Thesis, University of Glasgow, 2011. http://theses.gla.ac.uk/2720/.

Full text
Abstract
This thesis describes the development and application of statistical circuit simulation methodologies to analyse digital circuits subject to intrinsic parameter fluctuations. The specific nature of intrinsic parameter fluctuations is discussed, and we explain the crucial importance to the semiconductor industry of developing design tools which accurately account for their effects. Current work in the area is reviewed, and three important factors are made clear: any statistical circuit simulation methodology must be based on physically correct, predictive models of device variability; the statistical compact models describing device operation must be characterised for accurate transient analysis of circuits; and analysis must be carried out on realistic circuit components. Improving on previous efforts in the field, we posit a statistical circuit simulation methodology which accounts for all three of these factors. The established 3-D Glasgow atomistic simulator is employed to predict electrical characteristics for devices aimed at digital circuit applications, with gate lengths from 35 nm to 13 nm. Using these electrical characteristics, BSIM4 compact models are extracted, and their accuracy in transient analysis using SPICE is validated against well-characterised mixed-mode TCAD simulation results for 35 nm devices. Static d.c. simulations are performed to test the methodology, and a useful analytic model to predict hard logic fault limitations on CMOS supply voltage scaling is derived as part of this work. Using our toolset, the effect of statistical variability introduced by random discrete dopants on the dynamic behaviour of inverters is studied in detail. As devices scale, the dynamic noise margin variation of an inverter increases, while higher output load or input slew rate improves the noise margins and their variation. Intrinsic delay variation based on the CV/I delay metric is also compared using the ION and IEFF definitions, where the best estimate is obtained when considering ION and input transition time variations. The critical delay distribution of a path is also investigated and shown to be non-Gaussian. Finally, the impact of the cell input slew rate definition on the accuracy of inverter cell timing characterisation in NLDM format is investigated.
37

Massip, Florian. "The Statistical Fate of Genomic DNA : Modelling Match Statistics in Different Evolutionary Scenarios". Thesis, Université Paris-Saclay (ComUE), 2015. http://www.theses.fr/2015SACLS008/document.

Full text
Abstract
In this thesis, we study the length distribution of maximal exact matches within and between eukaryotic genomes. These distributions strongly deviate from what one would expect from simple probabilistic models and, surprisingly, exhibit a power-law behavior. To analyze these deviations, we develop mathematical frameworks that take more complex evolutionary mechanisms into account and that reproduce the observed distributions. We also implement in silico sequence evolution models that reproduce these behaviors. Finally, we show that our framework can be used to assess the quality of recently sequenced genomes and to highlight the importance of unexpected biological mechanisms in eukaryotic genomes.
38

Buchan, Iain Edward. "The development of a statistical computer software resource for medical research". Thesis, University of Liverpool, 2000. http://www.manchester.ac.uk/escholar/uk-ac-man-scw:71360.

Full text
Abstract
Medical research is often weakened by poor statistical practice, and inappropriate use of statistical computer software is part of this problem. The statistical knowledge that medical researchers require has traditionally been gained in both dedicated and ad hoc learning time, often separate from the research processes in which the statistical methods are applied. Computer software, however, can be written to flexibly support statistical practice. The work of this thesis was to explore the possibility of, and if possible, to create, a resource supporting medical researchers in statistical knowledge and calculation at the point of need. The work was carried out over eleven years, and was directed towards the medical research community in general. Statistical and Software Engineering methods were used to produce a unified statistical computational and knowledge support resource. Mathematically and computationally robust approaches to statistical methods were continually sought from current literature. The type of evaluation undertaken was formative; this included monitoring uptake of the software and feedback from its users, comparisons with other software, reviews in peer reviewed publications, and testing of results against classical and reference data. Large-scale opportunistic feedback from users of this resource was employed in its continuous improvement. The software resulting from the work of this thesis is provided herein as supportive evidence. Results of applying the software to classical reference data are shown in the written thesis. The scope and presentation of statistical methods are considered in a comparison of the software with common statistical software resources. This comparison showed that the software written for this thesis more closely matched statistical methods commonly used in medical research, and contained more statistical knowledge support materials. Up to October 31st 2000, uptake of the software was recorded for 5621 separate instances by individuals or institutions. The development has been self-sustaining. Medical researchers need to have sufficient statistical understanding, just as statistical researchers need to sufficiently understand the nature of data. Statistical software tools may damage statistical practice if they distract attention from statistical goals and tasks, onto the tools themselves. The work of this thesis provides a practical computing framework supporting statistical knowledge and calculation in medical research. This work has shown that sustainable software can be engineered to improve statistical appreciation and practice in ways that are beyond the reach of traditional medical statistical education.
39

Bakk, Audun. "Statistical Thermodynamics of Proteins". Doctoral thesis, Norwegian University of Science and Technology, Department of Physics, 2002. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-494.

Full text
Abstract
The subject of this thesis is the formulation of effective energy expressions (Hamiltonians) for proteins and protein-related systems. Using equilibrium statistical mechanics, we calculate thermodynamical functions, whereupon we compare the theoretical results with experimental data. Papers 1-7 and 10-12 concern this problem. In addition, Paper 8 (P8) and Paper 9 (P9) are attached; both were finalized during the Ph.D. study but are not related to proteins.
Papers II, III, V, VII, VIII, XI and XII are reprinted with kind permission of Elsevier (sciencedirect.com). Papers VI and IX are reprinted with kind permission of the American Physical Society.
40

Mehmood, Raja Majid and Gulraiz Iqbal. "Visualization of Statistical Contents". Thesis, Växjö University, School of Mathematics and Systems Engineering, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:vxu:diva-5556.

Full text
Abstract
This project presents research on the visualization of statistical contents. We introduce the concepts of visualization, software quality metrics and the proposed visualization technique (line chart). Our aim is to study the existing techniques for visualizing software metrics and then to propose a visualization approach that is more time-efficient and easier for the viewer to perceive. We focus on the practical aspects of visualizing multiple projects with respect to their versions and metrics, and we give an implementation of the proposed visualization techniques for software metrics. In this research-based work, we compare the proposed visualization approaches in practice. We discuss the software development life cycle of the proposed visualization system and describe its complete software implementation.

41

Heim, Susanne. "Statistical Diffusion Tensor Imaging". Diss., lmu, 2007. http://nbn-resolving.de/urn:nbn:de:bvb:19-72610.

Full text
42

Mehmood, Raja Majid and Gulraiz Iqbal. "Visualization of Statistical Contents". Thesis, Linnéuniversitetet, Institutionen för datavetenskap, fysik och matematik, DFM, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-8583.

Full text
Abstract
This project presents research on the visualization of statistical contents. We introduce the concepts of visualization, software quality metrics and the proposed visualization technique (line chart). Our aim is to study the existing techniques for visualizing software metrics and then to propose a visualization approach that is more time-efficient and easier for the viewer to perceive. We focus on the practical aspects of visualizing multiple projects with respect to their versions and metrics, and we give an implementation of the proposed visualization techniques for software metrics. In this research-based work, we compare the proposed visualization approaches in practice. We discuss the software development life cycle of the proposed visualization system and describe its complete software implementation.
43

Gao, Yu. "Statistical modelling of games". Thesis, University of Nottingham, 2016. http://eprints.nottingham.ac.uk/33298/.

Full text
Abstract
This thesis mainly focuses on the statistical modelling of a selection of games, namely, the minority game, the urn model and the Hawk-Dove game. Chapters 1 and 2 give a brief introduction and survey of the field. In Chapter 3, the key characteristics of the minority game are reproduced. In addition, the minority game is extended to include wealth distribution and leverage effect. By assuming that each player has initial wealth which rises and falls according to profit and loss, with the potential of borrowing and bankruptcy, we find that modelled wealth distribution may be power law distributed and leverage increases the instability of the system. In Chapter 4, to explore the effects of memory, we construct a model where agents with memories of different lengths compete for finite resources. Using analytical and numerical approaches, our research demonstrates that an instability exists at a critical memory length; and players with different memory lengths are able to compete with each other and achieve a state of co-existence. The analytical solution is found to be connected to the well-known urn model. Additionally, our findings reveal that the temperature is related to the agent's memory. Due to its general nature, this memory model could potentially be relevant for a variety of other game models. In Chapter 5, our main finding is extended to the Hawk-Dove game, by introducing the memory parameter to each agent playing the game. An assumption is made that agents try to maximise their profits by learning from past experiences, stored in their finite memories. We show that the analytical results obtained from these two games are in agreement with the results from our simulations. It is concluded that the instability occurs when agents' memory lengths reach the critical value. Finally, Chapter 6 provides some concluding remarks and outlines some potential future work.
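A minimal version of the minority game reproduced in Chapter 3 fits in a few lines: an odd number of agents each hold a couple of random strategies keyed on the last m outcomes, score them virtually, and act on the currently best one. The parameter values are illustrative, and the wealth and leverage extensions of the thesis are not included.

```python
import numpy as np

def minority_game(n_agents=301, m=3, n_strategies=2, rounds=2000, seed=5):
    """Basic minority game: strategies are lookup tables from the encoded
    last m outcomes to an action in {-1, +1}; the minority side wins."""
    rng = np.random.default_rng(seed)
    n_hist = 2 ** m
    strat = rng.choice([-1, 1], size=(n_agents, n_strategies, n_hist))
    scores = np.zeros((n_agents, n_strategies))
    history = rng.integers(n_hist)                    # encoded recent outcomes
    attendance = []
    for _ in range(rounds):
        best = scores.argmax(axis=1)                  # each agent's best strategy
        actions = strat[np.arange(n_agents), best, history]
        a = actions.sum()
        attendance.append(a)
        winner = -np.sign(a)                          # minority action wins
        scores += strat[:, :, history] * winner       # virtual payoffs
        history = ((history << 1) | int(winner > 0)) % n_hist
    return np.array(attendance)

att = minority_game()
print(att.std())   # volatility of attendance, the game's key observable
```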
44

Chung, Moo K. 1969. "Statistical morphometry in Neuroanatomy". Thesis, McGill University, 2001. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=37880.

Full text
Abstract
The scientific aim of computational neuroanatomy using magnetic resonance imaging (MRI) is to quantify inter- and intra-subject morphological variabilities. A unified statistical framework for analyzing temporally varying brain morphology is presented. Based on the mathematical framework of differential geometry, the deformation of the brain is modeled and key morphological descriptors such as length, area, volume dilatation and curvature change are computed. To increase the signal-to-noise ratio, Gaussian kernel smoothing is applied to 3D images. For 2D curved cortical surface, diffusion smoothing, which generalizes Gaussian kernel smoothing, has been developed. Afterwards, statistical inference is based on the excursion probability of random fields defined on manifolds.
This method has been applied in localizing the regions of brain tissue growth and loss in a group of 28 normal children and adolescents. It is shown that children's brains change dramatically in localized areas even after age 12.
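The Gaussian kernel smoothing step for 3D images can be sketched directly with SciPy; the FWHM-to-sigma conversion below is the standard one, and the synthetic volume is an illustrative stand-in for MRI data (the diffusion smoothing used on the cortical surface is not shown).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(6)
volume = rng.normal(0.0, 1.0, size=(64, 64, 64))   # noisy 3-D "image"
volume[28:36, 28:36, 28:36] += 2.0                 # small embedded signal

fwhm = 6.0                                         # kernel width in voxels
sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # FWHM -> standard deviation
smoothed = gaussian_filter(volume, sigma=sigma)
print(volume.std(), smoothed.std())                # noise variability shrinks
```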
45

Young, G. A. "Data-based statistical methods". Thesis, University of Cambridge, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.383307.

Full text
46

Bridges, M. "Statistical methods in cosmology". Thesis, University of Cambridge, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.596904.

Full text
Abstract
We outline the application of a new method of evidence calculation called nested sampling (Skilling, 2004). We use a clustered ellipsoidal bound to restrict the parameter space sampled, which is generic enough to be used even for complex multimodal posteriors. We demonstrate that our algorithm, COSMOCLUST, makes important savings in computational time when compared with previous methods. The study of the primordial power spectrum, which seeded the structure formation observed in both the CMB and large-scale structure, is crucial in unravelling early-universe physics. In this thesis we analyse a number of spectral parameterisations motivated on both physical and observational grounds. Using the evidence we determine the most appropriate model for both WMAP 1-year and WMAP 3-year data (additionally including a selection of high-resolution CMB and large-scale structure data). We conclude that the evidence currently does suggest the need for a tilt in the spectrum, while the presence of running of the spectral index depends on the inclusion of, specifically, Ly-α data. Bayesian analysis in cosmology is computationally demanding. We have succeeded in improving the efficiency of inference for a wide variety of cosmological applications by training neural networks to 'learn' how observables such as the CMB spectrum change with the input cosmological parameters. We demonstrate that improvements in speed of several orders of magnitude are possible using our algorithm COSMONET.
47

Grützun, Verena, Johannes Quaas, Cyril J. Morcrette and Felix Ament. "Evaluating statistical cloud schemes". Universitätsbibliothek Leipzig, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-177257.

Full text
Abstract
Statistical cloud schemes with prognostic probability distribution functions have become more important in atmospheric modeling, especially since they are in principle scale adaptive and capture cloud physics in more detail. While in theory the schemes have a great potential, their accuracy is still questionable. High-resolution three-dimensional observational data of water vapor and cloud water, which could be used for testing them, are missing. We explore the potential of ground-based remote sensing such as lidar, microwave, and radar to evaluate prognostic distribution moments using the “perfect model approach.” This means that we employ a high-resolution weather model as virtual reality and retrieve full three-dimensional atmospheric quantities and virtual ground-based observations. We then use statistics from the virtual observation to validate the modeled 3-D statistics. Since the data are entirely consistent, any discrepancy occurring is due to the method. Focusing on total water mixing ratio, we find that the mean ratio can be evaluated decently but that it strongly depends on the meteorological conditions as to whether the variance and skewness are reliable. Using some simple schematic description of different synoptic conditions, we show how statistics obtained from point or line measurements can be poor at representing the full three-dimensional distribution of water in the atmosphere. We argue that a careful analysis of measurement data and detailed knowledge of the meteorological situation is necessary to judge whether we can use the data for an evaluation of higher moments of the humidity distribution used by a statistical cloud scheme.
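The paper's core point, that point or line measurements can misrepresent the full three-dimensional distribution, is easy to demonstrate on synthetic data: compare the first three moments of a whole model level with those seen along a single line through it. The lognormal stand-in for total water below is an illustrative assumption, not the study's model output.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
qt = rng.lognormal(mean=-6.0, sigma=0.4, size=(60, 60, 40))  # synthetic total water

level = 20
domain = qt[:, :, level].ravel()   # full horizontal distribution at one level
line = qt[:, 30, level]            # a "line measurement" through the domain

for name, d in [("domain", domain), ("line  ", line)]:
    print(name, d.mean(), d.var(), stats.skew(d))
```

Even in this uncorrelated toy field, the line estimate of the variance and especially the skewness scatters noticeably around the domain value; in a real, spatially correlated atmosphere the mismatch is larger.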
48

Bach, Christoph. "Improving statistical seismicity models". PhD thesis, Universität Potsdam, 2013. http://opus.kobv.de/ubp/volltexte/2014/7059/.

Full text
Abstract
Several mechanisms are proposed to be part of the earthquake triggering process, including static stress interactions and dynamic stress transfer. Significant differences of these mechanisms are particularly expected in the spatial distribution of aftershocks. However, testing the different hypotheses is challenging because it requires the consideration of the large uncertainties involved in stress calculations as well as the appropriate consideration of secondary aftershock triggering which is related to stress changes induced by smaller pre- and aftershocks. In order to evaluate the forecast capability of different mechanisms, I take the effect of smaller--magnitude earthquakes into account by using the epidemic type aftershock sequence (ETAS) model where the spatial probability distribution of direct aftershocks, if available, is correlated to alternative source information and mechanisms. Surface shaking, rupture geometry, and slip distributions are tested. As an approximation of the shaking level, ShakeMaps are used which are available in near real-time after a mainshock and thus could be used for first-order forecasts of the spatial aftershock distribution. Alternatively, the use of empirical decay laws related to minimum fault distance is tested and Coulomb stress change calculations based on published and random slip models. For comparison, the likelihood values of the different model combinations are analyzed in the case of several well-known aftershock sequences (1992 Landers, 1999 Hector Mine, 2004 Parkfield). The tests show that the fault geometry is the most valuable information for improving aftershock forecasts. Furthermore, they reveal that static stress maps can additionally improve the forecasts of off--fault aftershock locations, while the integration of ground shaking data could not upgrade the results significantly. In the second part of this work, I focused on a procedure to test the information content of inverted slip models. This allows to quantify the information gain if this kind of data is included in aftershock forecasts. For this purpose, the ETAS model based on static stress changes, which is introduced in part one, is applied. The forecast ability of the models is systematically tested for several earthquake sequences and compared to models using random slip distributions. The influence of subfault resolution and segment strike and dip is tested. Some of the tested slip models perform very good, in that cases almost no random slip models are found to perform better. Contrastingly, for some of the published slip models, almost all random slip models perform better than the published slip model. Choosing a different subfault resolution hardly influences the result, as long the general slip pattern is still reproducible. Whereas different strike and dip values strongly influence the results depending on the standard deviation chosen, which is applied in the process of randomly selecting the strike and dip values.
Various mechanisms are held responsible for the triggering of earthquakes, among them static stress changes and dynamic stress transfer. Clear differences between these mechanisms are expected in particular in the spatial aftershock distribution. However, testing these hypotheses is difficult, since the large uncertainties of the stress calculations must be taken into account, as must the triggering of secondary aftershocks caused by local secondary stress changes. To assess the forecast capability of the different mechanisms, I account for the effects of small-magnitude earthquakes by using the epidemic-type aftershock sequence (ETAS) model, correlating the distribution of direct aftershocks, where available, with alternative source information. Ground shaking, rupture geometry, and slip models are tested. ShakeMaps are used as an approximation of the ground shaking; they are available in near real time after large earthquakes and can therefore be used for preliminary forecasts of the spatial aftershock distribution. Alternatively, empirical relations as a function of the minimum distance to the rupture plane can be used, or Coulomb stress changes based on published or random slip models. For comparison, the likelihood values of the hybrid models are analyzed for several well-known aftershock sequences (1992 Landers, 1999 Hector Mine, 2004 Parkfield). The tests show that the source geometry is the most important additional information for improving aftershock forecasts. Furthermore, static stress changes can particularly improve the forecast of aftershocks at larger distances from the rupture plane, whereas the inclusion of ground-motion maps could not improve the results substantially. In the second part of my work, I introduce a new procedure for examining the information content of inverted slip models. This makes it possible to quantify the information gain obtained by including such data in aftershock forecasts. Here, the extended ETAS model introduced in the first part is used, which employs static stress changes to forecast the spatial aftershock distribution. The forecast capability of the models is tested systematically for several earthquake sequences and compared with models based on random slip distributions. The influence of changing the resolution of the slip models, as well as the strike and dip angles of the source segments, is examined. Some of the considered slip models correlate very well; in these cases, hardly any random slip models are found that explain the aftershock distribution better. In contrast, for some examples nearly all random slip models correlate better than the published model. Changing the resolution of the slip models has little influence on the results as long as the general slip patterns remain reproducible, i.e. one to two larger slip maxima per segment. By contrast, a random change of the strike and dip angles of the segments strongly influences the results, depending on the standard deviation chosen.
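As background for the ETAS-based forecasts described above, a minimal Python sketch of the model's temporal core follows. It evaluates the conditional intensity lambda(t) = mu + sum_i K*exp(alpha*(m_i - m0)) / (t - t_i + c)^p over a toy catalog; all parameter values and events are arbitrary assumptions for illustration, and the thesis's correlation of the spatial aftershock distribution with stress and slip information is not reproduced here.

import numpy as np

def etas_intensity(t, times, mags, mu=0.1, K=0.05, alpha=1.5,
                   c=0.01, p=1.1, m0=3.0):
    """Temporal ETAS conditional intensity at time t, given past events."""
    past = times < t
    dt = t - times[past]
    # Productivity of each past event grows exponentially with magnitude.
    productivity = K * np.exp(alpha * (mags[past] - m0))
    # Omori-type power-law decay of triggering with elapsed time.
    return mu + np.sum(productivity / (dt + c) ** p)

# Toy catalog: event times (days) and magnitudes, including secondary
# aftershocks that themselves contribute to the triggering rate.
times = np.array([0.0, 0.5, 0.6, 2.0])
mags = np.array([6.5, 4.0, 4.2, 3.8])

# Intensity shortly after the mainshock vs. two weeks later.
print(etas_intensity(0.1, times, mags))   # high: dominated by the m=6.5 event
print(etas_intensity(14.0, times, mags))  # decayed toward the background rate mu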
Los estilos APA, Harvard, Vancouver, ISO, etc.
49

Ibrahim, Kamarul Asri. "Active statistical process control". Thesis, University of Newcastle Upon Tyne, 1989. http://hdl.handle.net/10443/407.

Texto completo
Resumen
Most Statistical Process Control (SPC) research has focused on the development of charting techniques for process monitoring. Unfortunately, little attention has been paid to bringing the process into control automatically via these charting techniques. This thesis shows that, by drawing on concepts from Automatic Process Control (APC), it is possible to devise schemes whereby the process is monitored and automatically controlled via SPC procedures. It is shown that Partial Correlation Analysis (PCorrA) or Principal Component Analysis (PCA) can be used to determine the variables that have to be monitored and manipulated, as well as the corresponding control laws. We call this proposed procedure Active SPC, and the capabilities of the various strategies that arise are demonstrated by application to a simulated reaction process. Reactor product concentration was controlled using different manipulated input configurations, e.g. manipulating all input variables, only two input variables, or only a single input variable. The last two schemes consider cases in which all input variables can be measured on-line but not all can be manipulated on-line. Different types of control charts are also tested with the new Active SPC method: a Shewhart chart with action limits, a Shewhart chart with action and warning limits for individual observations, and the Exponentially Weighted Moving Average (EWMA) control chart. The effects of calculating control limits on-line, to accommodate possible changes in process characteristics, were also studied. The results indicate that the EWMA control chart, with limits calculated using partial correlations, shows the most promise for further development. It is also shown that this particular combination can provide better performance than a conventional Proportional-Integral (PI) controller when manipulations incur costs.
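For reference, the EWMA chart singled out above follows the standard recursion z_t = lam*x_t + (1 - lam)*z_{t-1} with time-varying limits mu0 +/- L*sigma0*sqrt(lam/(2 - lam)*(1 - (1 - lam)^(2t))). The Python sketch below applies it to simulated data with a sustained mean shift; the smoothing constant, limit width, and shift size are illustrative assumptions, not values from the thesis.

import numpy as np

def ewma_chart(x, mu0, sigma0, lam=0.2, L=3.0):
    """Return the EWMA statistic z_t and its time-varying control limits."""
    z = np.empty(len(x))
    zt = mu0                                 # start the recursion at the target
    for i, xt in enumerate(x):
        zt = lam * xt + (1 - lam) * zt       # z_t = lam*x_t + (1-lam)*z_{t-1}
        z[i] = zt
    t = np.arange(1, len(x) + 1)
    half_width = L * sigma0 * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
    return z, mu0 + half_width, mu0 - half_width

rng = np.random.default_rng(1)
x = rng.normal(10.0, 1.0, 100)
x[60:] += 1.5                                # sustained 1.5-sigma mean shift

z, ucl, lcl = ewma_chart(x, mu0=10.0, sigma0=1.0)
out = (z > ucl) | (z < lcl)
# np.argmax returns the first flagged index (0 if no sample is flagged).
print("first sample flagged out of control:", int(np.argmax(out)))

Because the EWMA statistic accumulates small sustained shifts, it flags deviations that a Shewhart chart with action limits alone would miss, which is consistent with the thesis's finding that the EWMA variant is the most promising.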
Los estilos APA, Harvard, Vancouver, ISO, etc.
50

Okasha, Mahmoud Khaled Mohamed. "Statistical methods in dendrochronology". Thesis, University of Sheffield, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.295760.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.