Dissertations on the topic "Probability-based method"

To view other types of publications on this topic, follow the link: Probability-based method.

Format your source in APA, MLA, Chicago, Harvard, and other citation styles


Consult the top 28 dissertations for your research on the topic "Probability-based method".

Next to each work in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its online abstract, if these are available in the metadata.

Browse dissertations across a wide variety of disciplines and compile your bibliography correctly.

1

PINHO, Luis Gustavo Bastos. "Building new probability distributions: the composition method and a computer based method." Universidade Federal de Pernambuco, 2017. https://repositorio.ufpe.br/handle/123456789/24966.

Full text available
Abstract:
We discuss the creation of new probability distributions for continuous data in two distinct approaches. The first one is, to our knowledge, novel and consists of using Estimation of Distribution Algorithms (EDAs) to obtain new cumulative distribution functions. This class of algorithms works as follows. A population of solutions for a given problem is randomly selected from a space of candidates, which may contain candidates that are not feasible solutions to the problem. The selection occurs by following a set of probability rules that, initially, assign a uniform distribution to the space of candidates. Each individual is ranked by a fitness criterion. A fraction of the most fit individuals is selected and the probability rules are then adjusted to increase the likelihood of obtaining solutions similar to the most fit in the current population. The algorithm iterates until the set of probability rules is able to provide good solutions to the problem. In our proposal, the algorithm is used to generate cumulative distribution functions to model a given continuous data set. We tried to keep the mathematical expressions of the new functions as simple as possible. The results were satisfactory. We compared the models provided by the algorithm to the ones in already published papers. In every situation, the models proposed by the algorithm had advantages over the ones already published. The main advantage is the relative simplicity of the mathematical expressions obtained. Still in the context of computational tools and algorithms, we show the performance of simple neural networks as a method for parameter estimation in probability distributions. The motivation for this was the need to solve a large number of non-linear equations when dealing with SAR images (SAR stands for synthetic aperture radar) in the statistical treatment of such images. The estimation process requires solving, iteratively, a non-linear equation. This is repeated for every pixel, and an image usually consists of a large number of pixels. We trained a neural network to approximate an estimator for the parameter of interest. Once trained, the network can be fed the data and it will return an estimate of the parameter of interest without the need for iterative methods. The training of the network can take place even before collecting the data from the radar. The method was tested on simulated and real data sets with satisfactory results. The same method can be applied to different distributions. The second part of this thesis shows two new probability distribution classes obtained from the composition of already existing ones. In each situation, we present the new class and general results such as power series expansions for the probability density functions, expressions for the moments, entropy and the like. The first class is obtained from the composition of the beta-G and Lehmann-type II classes. The second class is obtained from the transmuted-G and Marshall-Olkin-G classes. Distributions in these classes are compared to already existing ones as a way to illustrate the performance of applications to real data sets.
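A minimal sketch of the EDA idea described in this abstract: a Gaussian probability model over a two-parameter candidate CDF family is sampled, the fittest candidates (by Kolmogorov-Smirnov distance) are kept, and the model is updated. The candidate family, fitness measure and update rule are illustrative assumptions, not the ones used in the thesis.

```python
# Continuous estimation-of-distribution algorithm (EDA) searching for a simple
# CDF that fits a data sample. Illustrative sketch only.
import numpy as np

rng = np.random.default_rng(0)
data = np.sort(rng.gamma(shape=2.0, scale=1.5, size=500))  # data to be modelled
ecdf = np.arange(1, data.size + 1) / data.size             # empirical CDF

def candidate_cdf(x, a, b):
    """Simple two-parameter candidate CDF (Weibull-type expression)."""
    return 1.0 - np.exp(-(x / b) ** a)

def fitness(theta):
    """Kolmogorov-Smirnov distance between candidate CDF and empirical CDF."""
    a, b = theta
    if a <= 0 or b <= 0:
        return np.inf
    return np.max(np.abs(candidate_cdf(data, a, b) - ecdf))

# Probability model over the candidate space: independent Gaussians.
mean, std = np.array([1.0, 1.0]), np.array([2.0, 2.0])
for generation in range(60):
    population = rng.normal(mean, std, size=(200, 2))          # sample candidates
    scores = np.array([fitness(t) for t in population])
    elite = population[np.argsort(scores)[:40]]                # keep the fittest 20%
    mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6   # adjust the probability rules

print("estimated (a, b):", mean, " KS distance:", fitness(mean))
```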
2

Hoang, Tam Minh Thi 1960. "A joint probability model for rainfall-based design flood estimation." Monash University, Dept. of Civil Engineering, 2001. http://arrow.monash.edu.au/hdl/1959.1/8892.

Full text available
3

Alkhairy, Ibrahim H. "Designing and Encoding Scenario-based Expert Elicitation for Large Conditional Probability Tables." Thesis, Griffith University, 2020. http://hdl.handle.net/10072/390794.

Full text available
Abstract:
This thesis focuses on the general problem of asking experts to assess the likelihood of many scenarios when there is insufficient time to ask about all possible scenarios. The challenge addressed here is one of experimental design: how to choose which scenarios are assessed, and how to use that limited data to extrapolate information about the scenarios that remain unasked. In a mathematical sense, this problem can be constructed as a problem of expert elicitation, where experts are asked to quantify conditional probability tables (CPTs). Experts may be relied on, for example, when empirical data is unavailable or limited. CPTs are used widely in statistical modelling to describe probabilistic relationships between an outcome and several factors. I consider two broad situations where CPTs are important components of quantitative models. Firstly, experts are often asked to quantify CPTs that form the building blocks of Bayesian Networks (BNs). In one case study, CPTs describe how habitat suitability of feral pigs is related to various environmental factors, such as water quality and food availability. Secondly, CPTs may also support a sensitivity analysis for large computer experiments, by examining how some outcome changes as various factors are changed. Another case study uses CPTs to examine sensitivity to settings, for algorithms available through virtual laboratories, to map the geographic distribution of species such as the koala. An often-encountered problem is the sheer amount of information asked of the expert: the number of scenarios. Each scenario corresponds to a row of the CPT and concerns a particular combination of factors and the likely outcome. Currently most researchers arrange elicitation of CPTs by keeping the number of rows and columns in the CPT to a minimum, so that they need ask experts about no more than twenty or so scenarios. However, in some practical problems CPTs may need to involve more rows and columns, for example involving more than two factors, or factors which can take on more than two or three possible values. Here we propose a new way of choosing the scenarios that underpin the elicitation strategy, taking advantage of experimental design to ensure adequate coverage of all scenarios and to make the best use of scarce resources such as the experts' valuable time. I show that this can essentially be constructed as a problem of how to better design the choice of scenarios to elicit from a CPT. The main advantage of these designs is that they explore more of the design space compared to usual design choices like the one-factor-at-a-time (OFAT) design that underpins the popular encoding approach embedded in “CPT Calculator”. In addition, this work tailors an under-utilized scenario-based elicitation method to ensure that the expert's uncertainty is captured, together with their assessments of the likelihood of each possible outcome. I adopt the more intuitive Outside-In Elicitation method to elicit the expert's plausible range of assessed values, rather than the more common, reverse-order approach of eliciting their uncertainty around their best guess. Importantly, this plausible range of values is more suitable for input into the new approach proposed for encoding scenario-based elicitation: a Bayesian (rather than a frequentist) interpretation. While eliciting some scenarios from large CPTs, another challenge arises from the remaining CPT entries that are not elicited. This thesis shows how to adopt a statistical model to interpolate not only the missing CPT entries but also to quantify the uncertainty for each scenario, which is new for these two situations: BNs and sensitivity analyses. For this purpose, I introduce the use of Bayesian generalized linear models (GLMs). The Bayesian updating framework also enables us to update the results of elicitation by incorporating empirical data. The idea is to utilise scenarios elicited from experts to construct an informative Bayesian "prior" model. The prior information (e.g. about scenarios) is then combined with the empirical data (e.g. from computer model runs) to update the posterior estimates of plausible outcomes (affecting all scenarios). The main findings showed that Bayesian inference suits the small-data problem of encoding the expert's mental model underlying their assessments, allowing uncertainty to vary about each scenario. In addition, Bayesian inference provides rich feedback to the modeller and experts on the plausible influence of factors on the response, and on whether any information was gained on their interactions. That information could be pivotal to designing the next phase of elicitation about habitat requirements or another phase of computer models. In this way, the Bayesian paradigm naturally supports a sequential approach to gradually accruing information about the issue at hand. As summarised above, the novel statistical methodology presented in this thesis also contributes to computer science. Specifically, computation for Bayesian Networks and sensitivity analyses of large computer experiments can be re-designed to be more efficient. Here the expert knowledge usefully complements the empirical data to inform a more comprehensive analysis.
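As a rough illustration of the workflow the abstract describes (elicit only a designed subset of CPT rows, then statistically interpolate the unasked rows), here is a minimal sketch. The factor names, elicited values and the ordinary-least-squares fit on the logit scale are assumptions standing in for the thesis' scenario designs and Bayesian GLMs.

```python
# Designed subset of CPT rows + logit-scale regression to interpolate the rest.
import itertools
import numpy as np

levels = [0, 1, 2]                       # three ordered levels per factor
factors = ["water", "food", "cover"]     # hypothetical habitat factors
all_rows = np.array(list(itertools.product(levels, repeat=len(factors))))  # 27 scenarios

# Designed subset: a small "spread out" choice of scenarios instead of OFAT.
asked_idx = [0, 4, 8, 10, 13, 16, 18, 22, 26]
elicited_p = np.array([0.05, 0.20, 0.45, 0.25, 0.50, 0.70, 0.40, 0.75, 0.95])  # expert's answers

def design_matrix(rows):
    return np.column_stack([np.ones(len(rows)), rows])   # intercept + main effects

# Fit a main-effects model on the logit scale by ordinary least squares.
X = design_matrix(all_rows[asked_idx])
y = np.log(elicited_p / (1.0 - elicited_p))
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Interpolate every CPT row, including the unasked ones.
logits = design_matrix(all_rows) @ beta
cpt = 1.0 / (1.0 + np.exp(-logits))
for row, p in zip(all_rows, cpt):
    print(dict(zip(factors, row)), f"P(suitable) ~ {p:.2f}")
```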
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Info & Comm Tech
Science, Environment, Engineering and Technology
Full Text
4

Mansour, Rami. "Reliability Assessment and Probabilistic Optimization in Structural Design." Doctoral thesis, KTH, Hållfasthetslära (Avd.), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-183572.

Full text available
Abstract:
Research in the field of reliability based design is mainly focused on two sub-areas: The computation of the probability of failure and its integration in the reliability based design optimization (RBDO) loop. Four papers are presented in this work, representing a contribution to both sub-areas. In the first paper, a new Second Order Reliability Method (SORM) is presented. As opposed to the most commonly used SORMs, the presented approach is not limited to hyper-parabolic approximation of the performance function at the Most Probable Point (MPP) of failure. Instead, a full quadratic fit is used leading to a better approximation of the real performance function and therefore more accurate values of the probability of failure. The second paper focuses on the integration of the expression for the probability of failure for general quadratic function, presented in the first paper, in RBDO. One important feature of the proposed approach is that it does not involve locating the MPP. In the third paper, the expressions for the probability of failure based on general quadratic limit-state functions presented in the first paper are applied for the special case of a hyper-parabola. The expression is reformulated and simplified so that the probability of failure is only a function of three statistical measures: the Cornell reliability index, the skewness and the kurtosis of the hyper-parabola. These statistical measures are functions of the First-Order Reliability Index and the curvatures at the MPP. In the last paper, an approximate and efficient reliability method is proposed. Focus is on computational efficiency as well as intuitiveness for practicing engineers, especially regarding probabilistic fatigue problems where volume methods are used. The number of function evaluations to compute the probability of failure of the design under different types of uncertainties is a priori known to be 3n+2 in the proposed method, where n is the number of stochastic design variables.
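A minimal illustration of the FORM/SORM ideas this abstract builds on: for an assumed parabolic limit state in standard normal space, the first-order estimate Φ(−β) and Breitung's second-order curvature correction are compared with crude Monte Carlo. The limit state, β and curvature values are made up for the example, not results from the thesis.

```python
# FORM vs SORM (Breitung) vs Monte Carlo for a parabolic limit state.
import math
import numpy as np

beta, kappa = 3.0, 0.3                     # reliability index and curvature at the MPP

def g(u1, u2):
    """Limit state: failure when g <= 0 (surface curves away from the origin)."""
    return beta - u1 + 0.5 * kappa * u2**2

phi = lambda x: 0.5 * math.erfc(-x / math.sqrt(2.0))   # standard normal CDF

pf_form = phi(-beta)                                    # first-order estimate
pf_sorm = phi(-beta) / math.sqrt(1.0 + beta * kappa)    # Breitung's second-order correction

rng = np.random.default_rng(1)
u = rng.standard_normal((2_000_000, 2))
pf_mc = np.mean(g(u[:, 0], u[:, 1]) <= 0.0)             # Monte Carlo reference

print(f"FORM: {pf_form:.3e}  SORM (Breitung): {pf_sorm:.3e}  Monte Carlo: {pf_mc:.3e}")
```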


5

Chapman, Gary. "Computer-based musical composition using a probabilistic algorithmic method." Thesis, University of Southampton, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.341603.

Full text available
6

Liu, Xiang. "Identification of indoor airborne contaminant sources with probability-based inverse modeling methods." Connect to online resource, 2008. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3337124.

Full text available
7

Dedovic, Ines [Verfasser], Jens-Rainer [Akademischer Betreuer] Ohm, and Dorit [Akademischer Betreuer] Merhof. "Efficient probability distribution function estimation for energy based image segmentation methods / Ines Dedovic ; Jens-Rainer Ohm, Dorit Merhof." Aachen : Universitätsbibliothek der RWTH Aachen, 2016. http://d-nb.info/1130871541/34.

Full text available
8

Dedović, Ines [Verfasser], Jens-Rainer [Akademischer Betreuer] Ohm, and Dorit [Akademischer Betreuer] Merhof. "Efficient probability distribution function estimation for energy based image segmentation methods / Ines Dedovic ; Jens-Rainer Ohm, Dorit Merhof." Aachen : Universitätsbibliothek der RWTH Aachen, 2016. http://d-nb.info/1130871541/34.

Full text available
9

Kraum, Martin. "Fischer-Tropsch synthesis on supported cobalt based catalysts: Influence of various preparation methods and supports on catalyst activity and chain growth probability." [S.l. : s.n.], 1999. http://deposit.ddb.de/cgi-bin/dokserv?idn=959085181.

Full text available
10

Good, Norman Markus. "Methods for estimating the component biomass of a single tree and a stand of trees using variable probability sampling techniques." Thesis, Queensland University of Technology, 2001. https://eprints.qut.edu.au/37097/1/37097_Good_2001.pdf.

Full text available
Abstract:
This thesis developed multistage sampling methods for estimating the aggregate biomass of selected tree components, such as leaves, branches, trunk and total, in woodlands in central and western Queensland. To estimate the component biomass of a single tree, randomised branch sampling (RBS) and importance sampling (IS) were trialed. RBS and IS were found to reduce the amount of time and effort needed to sample tree components in comparison with other standard destructive sampling methods such as ratio sampling, especially when sampling small components such as leaves and small twigs. However, RBS did not estimate leaf and small twig biomass to an acceptable degree of precision using current methods for creating path selection probabilities. In addition to providing an unbiased estimate of tree component biomass, individual estimates were used for developing allometric regression equations. Equations based on large components such as total biomass produced narrower confidence intervals than equations developed using ratio sampling. However, RBS does not estimate small component biomass, such as leaves and small wood components, with an acceptable degree of precision, and should mainly be used in conjunction with IS for estimating larger component biomass. A whole tree was completely enumerated to set up a sampling space with which RBS could be evaluated under a number of scenarios. To achieve a desired precision, RBS sample size and branch diameter exponents were varied, and the RBS method was simulated using both analytical and re-sampling methods. It was found that there is a significant amount of natural variation present when relating the biomass of small components to branch diameter, for example. This finding validates earlier decisions to question the efficacy of RBS for estimating small component biomass in eucalypt species. In addition, significant improvements can be made to increase the precision of RBS by increasing the number of samples taken, but more importantly by varying the exponent used for constructing selection probabilities. To further evaluate RBS on trees with growth forms differing from that enumerated, virtual trees were generated. These virtual trees were created using L-systems algebra. Decision rules for creating trees were based on easily measurable characteristics that influence a tree's growth and form. These characteristics included child-to-child and children-to-parent branch diameter relationships, branch length and branch taper. They were modelled using probability distributions of best fit. By varying the size of a tree and/or the variation in the model describing tree characteristics, it was possible to simulate the natural variation between trees of similar size and form. By creating visualisations of these trees, it is possible to determine by visual means whether RBS could be effectively applied to particular trees or tree species. Simulation also aided in identifying which characteristics most influenced the precision of RBS, namely branch length and branch taper. After evaluation of RBS/IS for estimating the component biomass of a single tree, methods for estimating the component biomass of a stand of trees (or plot) were developed and evaluated. A sampling scheme was developed which incorporated both model-based and design-based biomass estimation methods. This scheme clearly illustrated the strong and weak points associated with both approaches for estimating plot biomass.

Using ratio sampling was more efficient than using RBS/IS in the field, especially for larger tree components. Probability proportional to size (PPS) sampling, with size being the trunk diameter at breast height, generated estimates of component plot biomass that were comparable to those generated using model-based approaches. The research did, however, indicate that PPS is more precise than the use of regression prediction (allometric) equations for estimating larger components such as trunk or total biomass, and the precision increases in areas of greater biomass. Using more reliable auxiliary information for identifying suitable strata would reduce the amount of within-plot variation, thereby increasing precision. PPS had the added advantage of being unbiased and unhindered by the numerous assumptions about the population of interest that apply to a model-based approach. The application of allometric equations in predicting the component biomass of tree species other than that for which the allometric was developed is problematic. Differences in wood density need to be taken into account, as well as differences in growth form and within-species variability, as outlined in the virtual tree simulations. However, the development and application of allometric prediction equations in local species-specific contexts is more desirable than PPS.
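A minimal sketch of probability-proportional-to-size (PPS) sampling, one of the design-based approaches compared in this abstract. The simulated stand, the allometric form of the "true" biomass and the choice of squared DBH as the size variable are assumptions; the Hansen-Hurwitz estimator shown is the standard PPS-with-replacement estimator.

```python
# PPS sampling of a simulated stand versus simple random sampling.
import numpy as np

rng = np.random.default_rng(7)
N = 500
dbh = rng.lognormal(mean=3.0, sigma=0.3, size=N)          # trunk diameters (cm)
biomass = 0.05 * dbh**2.4 * rng.lognormal(0.0, 0.15, N)   # "true" tree biomass (kg)
true_total = biomass.sum()

size = dbh**2                                             # auxiliary size variable
p = size / size.sum()                                     # selection probabilities

n = 30
sample = rng.choice(N, size=n, replace=True, p=p)          # PPS with replacement
hh_estimate = np.mean(biomass[sample] / p[sample])         # Hansen-Hurwitz estimator

srs = rng.choice(N, size=n, replace=False)                 # simple random sample, for contrast
srs_estimate = N * biomass[srs].mean()

print(f"true total: {true_total:,.0f} kg")
print(f"PPS estimate: {hh_estimate:,.0f} kg   SRS estimate: {srs_estimate:,.0f} kg")
```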
11

Rayner, Glen. "Statistical methodologies for quantile-based distributional families." Thesis, Queensland University of Technology, 1999.

Find full text
12

Lelièvre, Nicolas. "Développement des méthodes AK pour l'analyse de fiabilité. Focus sur les évènements rares et la grande dimension." Thesis, Université Clermont Auvergne‎ (2017-2020), 2018. http://www.theses.fr/2018CLFAC045/document.

Full text available
Abstract:
Engineers increasingly use numerical models to replace physical experimentation during the design of new products. With the increase in computer performance and numerical power, these models have become more and more complex and time-consuming in order to better represent reality. In practice, optimization is very challenging when considering real mechanical problems since they exhibit uncertainties. Reliability is an interesting metric of the failure risks of designed products due to uncertainties. The estimation of this metric, the failure probability, requires a high number of evaluations of the time-consuming model and thus becomes intractable in practice. To deal with this problem, surrogate modeling is used here, and more specifically AK-based methods, to enable the approximation of the physical model with far fewer time-consuming evaluations. The first objective of this thesis work is to discuss the mathematical formulations of design problems under uncertainties. This formulation has a considerable impact on the solution identified by the optimization during the design process of new products. A definition of both concepts of reliability and robustness is also proposed. These works are presented in a publication in the international journal Structural and Multidisciplinary Optimization (Lelièvre et al., 2016). The second objective of this thesis is to propose a new AK-based method to estimate failure probabilities associated with rare events. This new method, named AK-MCSi, presents three enhancements of AK-MCS: (i) sequential Monte Carlo simulations to reduce the time associated with the evaluation of the surrogate model, (ii) a new, stricter stopping criterion on learning evaluations to ensure the correct classification of the Monte Carlo population, and (iii) a multipoint enrichment permitting the parallelization of the evaluation of the time-consuming model. This work has been published in Structural Safety (Lelièvre et al., 2018). The last objective of this thesis is to propose new AK-based methods to estimate the failure probability of a high-dimensional reliability problem, i.e. a problem defined by both a time-consuming model and a high number of input random variables. Two new methods, AK-HDMR1 and AK-PCA, are proposed to deal with this problem, based respectively on a functional decomposition and a dimension reduction technique. AK-HDMR1 was submitted to Reliability Engineering and Structural Safety on 1 October 2018.
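A minimal AK-MCS-style sketch (the baseline that AK-MCSi improves on), using a scikit-learn Gaussian process as the Kriging surrogate, the usual U learning function and the min U ≥ 2 stopping rule. The limit state and every tuning value are illustrative assumptions; the thesis' AK-MCSi adds sequential simulations and multi-point enrichment.

```python
# Active-learning Kriging + Monte Carlo (AK-MCS baseline) sketch.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def g(x):        # "expensive" performance function (failure when g <= 0)
    return 10.0 - x[:, 0] ** 2 - 2.0 * x[:, 1]

rng = np.random.default_rng(3)
mc_pop = rng.normal(size=(10_000, 2))            # Monte Carlo population (standard normal inputs)

doe_idx = list(rng.choice(len(mc_pop), size=12, replace=False))   # initial design of experiments
for iteration in range(60):
    X, y = mc_pop[doe_idx], g(mc_pop[doe_idx])
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
    gp.fit(X, y)
    mu, sigma = gp.predict(mc_pop, return_std=True)
    U = np.abs(mu) / np.maximum(sigma, 1e-12)    # learning function: classification confidence
    best = int(np.argmin(U))
    if U[best] >= 2.0:                           # stopping criterion: population well classified
        break
    doe_idx.append(best)                         # enrich the design with the most uncertain point

pf_surrogate = np.mean(mu <= 0.0)
pf_reference = np.mean(g(mc_pop) <= 0.0)
print(f"calls to g: {len(doe_idx)}  Pf (surrogate): {pf_surrogate:.4f}  Pf (direct MC): {pf_reference:.4f}")
```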
13

Slowik, Ondřej. "Pravděpodobnostní optimalizace konstrukcí." Master's thesis, Vysoké učení technické v Brně. Fakulta stavební, 2014. http://www.nusl.cz/ntk/nusl-226801.

Full text available
Abstract:
This thesis introduces the reader to the importance of optimization and probabilistic assessment of structures in civil engineering problems. Chapter 2 further investigates the combination of previously proposed optimization techniques with probabilistic assessment in the form of optimization constraints. Academic software has been developed for the purpose of demonstrating the effectiveness of the suggested methods and for their statistical testing. Chapter 3 summarizes the results of testing the previously described optimization method (called Aimed Multilevel Sampling), including a comparison with other optimization techniques. In the final part of the thesis, the described procedures are demonstrated on selected optimization and reliability problems. The methods described in the text represent an engineering approach to optimization problems and aim to introduce a simple and transparent optimization algorithm that could serve practical engineering purposes.
14

Beisler, Matthias Werner. "Modelling of input data uncertainty based on random set theory for evaluation of the financial feasibility for hydropower projects." Doctoral thesis, Technische Universitaet Bergakademie Freiberg Universitaetsbibliothek "Georgius Agricola", 2011. http://nbn-resolving.de/urn:nbn:de:bsz:105-qucosa-71564.

Full text available
Abstract:
The design of hydropower projects requires a comprehensive planning process in order to achieve the objective of maximising exploitation of the existing hydropower potential as well as the future revenues of the plant. For this purpose, and to satisfy approval requirements for a complex hydropower development, it is imperative at the planning stage that the conceptual development contemplates a wide range of influencing design factors and ensures appropriate consideration of all related aspects. Since the majority of technical and economic parameters that are required for detailed and final design cannot be precisely determined at early planning stages, crucial design parameters such as design discharge and hydraulic head have to be examined through an extensive optimisation process. One disadvantage inherent in commonly used deterministic analysis is the lack of objectivity in the selection of input parameters. Moreover, it cannot be ensured that the entire existing parameter ranges and all possible parameter combinations are covered. Probabilistic methods utilise discrete probability distributions or parameter input ranges to cover the entire range of uncertainties resulting from an information deficit during the planning phase and integrate them into the optimisation by means of an alternative calculation method. The investigated method assists with the mathematical assessment and integration of uncertainties into the rational economic appraisal of complex infrastructure projects. The assessment includes an exemplary verification of the extent to which Random Set Theory can be utilised for the determination of input parameters that are relevant for the optimisation of hydropower projects, and evaluates possible improvements with respect to the accuracy and suitability of the calculated results.
15

Chen, Chi-Fan, and 陳祈帆. "Prediction of indoor pollutant source with the probability-based inverse method." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/36ng49.

Full text available
Abstract:
Master's thesis, National Taipei University of Technology (國立臺北科技大學), Department of Energy and Refrigerating Air-Conditioning Engineering, academic year 100 (2011–2012).
Studies of pollutant dispersion and spreading behavior in a cleanroom, whether experimental or numerical, are generally based on artificial emission sources. Detection of spreading pollutants in an operating cleanroom can easily be achieved with the respective indoor air quality monitoring systems, but the reverse problem of source identification cannot. Identification of the pollutant source is possible with the use of an inverse numerical method. This study proposes a probability-based inverse method coupled with a computational fluid dynamics (CFD) method, aiming to predict the pollutant source in an operating cleanroom with a unilateral recirculation airflow field, and compares the results with those obtained using the simulation model with an artificial source. The diffusion of pollutants from an artificial source relies mainly on the airflow fields in the cleanroom. For the proposed probability-based inverse method, with the airflow field in the reversed direction, the CFD results showed the aggregation of pollutants in the unilateral airflow field. By assessing the proposed probability weighting model, the location with the highest probability is found to be consistent with the default location of the artificial pollution source. The results also showed that increasing the number of sensor detection points helps minimize the calculation errors in the assessment of the proposed probability weighting function, and vice versa. Besides that, the pollutant emission method has no significant effect on the identification of the pollutant source, although the calculation results based on the proposed probability weighting function are relatively lower.
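A minimal sketch of the probability-weighting idea: each grid cell is treated as a candidate source, a cheap forward model predicts what the sensors would read for that candidate, and cells are weighted by how well the prediction matches the measurements. The isotropic decay forward model and all numbers below are assumptions standing in for the CFD airflow simulation.

```python
# Probability-weighted source identification on a grid of candidate cells.
import numpy as np

rng = np.random.default_rng(5)
nx, ny = 30, 20
sensors = np.array([[5, 4], [25, 4], [15, 16], [8, 12]])    # sensor positions (grid units)

def forward(src_x, src_y):
    """Predicted concentration at each sensor for a unit source at (src_x, src_y)."""
    d = np.hypot(sensors[:, 0] - src_x, sensors[:, 1] - src_y)
    return 10.0 / (1.0 + d)

true_source = (22, 14)
noise_sd = 0.05
measured = forward(*true_source) + rng.normal(0.0, noise_sd, size=len(sensors))  # noisy readings

# Probability weight for every candidate cell (Gaussian misfit model).
misfit = np.array([[np.sum((forward(i, j) - measured) ** 2) for j in range(ny)] for i in range(nx)])
weights = np.exp(-misfit / (2 * noise_sd**2))
weights /= weights.sum()

i, j = np.unravel_index(np.argmax(weights), weights.shape)
print(f"true source {true_source}, highest-probability cell ({i}, {j}), weight {weights[i, j]:.3f}")
```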
16

Hwang, Guan-Lin, and 黃冠霖. "A Web Services Selection Method based on the QoS Probability Distribution." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/85469546678794054306.

Full text available
Abstract:
Master's thesis, National Sun Yat-sen University (國立中山大學), Department of Information Management, academic year 102 (2013–2014).
Service-Oriented Architecture (SOA) provides a flexible framework for service composition. A composite web service is represented by a workflow, and for each task within the workflow, several candidate Web services which offer the same functionality may be available. In previous work (Hwang, Hsu, & Lee, 2014), Hwang et al. propose a service selection framework based on probabilistic QoS distributions of component services. Their method decomposes a global QoS constraint into a number of local constraints using the average QoS value of each candidate service. However, heterogeneous deviation among candidate services may lead to suboptimal selection. We propose an initial service assignment method that considers the standard deviation of the QoS distributions. The objective of service selection is to maximize the global QoS conformance. Experimental results show that the proposed approach significantly improves the performance of the probabilistic QoS-based service selection method, in terms of both global QoS conformance and running time.
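A minimal sketch of probabilistic QoS-based selection under a global constraint: response times are modelled as independent normals, so the end-to-end conformance probability of a sequential workflow has a closed form, and the best combination is found by enumeration. The workflow, candidate values and deadline are assumptions; the thesis decomposes the global constraint rather than enumerating.

```python
# Choose one candidate per task to maximize P(total response time <= deadline).
import itertools
import math

# (mean, standard deviation) of response time in ms for each candidate per task
candidates = [
    [(120, 10), (100, 40)],            # task 1
    [(200, 15), (180, 60), (210, 5)],  # task 2
    [(90, 20), (95, 5)],               # task 3
]
deadline = 430.0                       # global QoS constraint on the sum

def conformance(selection):
    """P(total response time <= deadline) for a sequential workflow of normals."""
    mean = sum(candidates[t][s][0] for t, s in enumerate(selection))
    var = sum(candidates[t][s][1] ** 2 for t, s in enumerate(selection))
    z = (deadline - mean) / math.sqrt(var)
    return 0.5 * math.erfc(-z / math.sqrt(2.0))   # standard normal CDF at z

best = max(itertools.product(*[range(len(c)) for c in candidates]), key=conformance)
print("selected candidate per task:", best, f" conformance: {conformance(best):.3f}")
```

Note how a low-mean but high-variance candidate can lose to a slightly slower, more predictable one, which is exactly the deviation effect the abstract highlights.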
17

Lee, Ya-fen, and 李雅芬. "An evaluation method of liquefaction probability based on the reliability theory." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/01549575868989143003.

Full text available
Abstract:
Doctoral dissertation, National Cheng Kung University (國立成功大學), Department of Civil Engineering, academic year 95 (2006–2007).
Soil liquefaction is one of the earthquake-induced hazards. During the 1999 Chi-Chi earthquake, the damage caused by liquefaction was most serious in central Taiwan. The traditional binary representation, liquefied or non-liquefied, is incapable of reflecting uncertainty and risk, which are important characteristics in geotechnical engineering. Therefore, this study develops a new evaluation model of the annual probability of liquefaction (APL) that considers the uncertainties of the soil parameters and of the model, based on reliability theory. The SPT-based and CPT-based simplified methods suggested by Youd and Idriss (2001), called herein the Seed method and the RW method, are taken as the basic equations. The reliability index proposed by Hasofer and Lind (1974), which has the invariance property, is used to calculate the probability of liquefaction. Seven random variables are used under the uncertainty consideration in this study. For this reason, the knowledge nested partition method (KNPM) is employed to establish a new global-search method for determining the reliability index that satisfies both calculation efficiency and the need for liquefaction evaluation over large areas. Functions consisting of one length L and three angles θ1, θ2 and θ3 can then be used to represent the foregoing seven random variables. Through liquefied and non-liquefied case histories, the analysis results of the KNPM are compared with and verified against the results of Monte Carlo simulation and an iteration technique, showing that the results obtained by the KNPM are correct and can reach the optimum. Within a rigorous framework of reliability theory, an evaluation of the probability of liquefaction has to involve the uncertainties of the model and of the parameters at the same time. Methods to quantify the model and parameter uncertainties are proposed by way of a great number of case histories and field data, and these quantitative results can be used in the liquefaction probability evaluation. For the model uncertainty, random sampling and the analysis of alternatives resulting from different quantities of SPT-based and CPT-based case histories are adopted. The SPT-based results reveal that the uncertainty of the Seed method can be defined by c1 = 1.06 and COV(c1) = 0.06. The CPT-based results demonstrate that the uncertainty of the RW method can be expressed by c1 = 1.16 and COV(c1) = 0.12. Both methods are conservative models. For the soil parameter uncertainties, the SPT-based and CPT-based field data in the Yuanlin and Mailiao areas are taken, and a geostatistical method is utilized to quantify the uncertainties of soil parameters, including the standard penetration value (N), fines content (FC), soil weight (Wt), cone tip resistance (qc) and sleeve friction (fs). These results show that the soil parameter uncertainties differ somewhat between the two areas but are fairly close. Hence, this study suggests that the soil parameter uncertainties of N, FC, Wt, qc and fs are 0.15, 0.14, 0.02, 0.04 and 0.14, respectively. Summing up the above-mentioned results, including the reliability index and the uncertainties of the model and soil parameters, this study develops a new evaluation model of the probability of soil liquefaction. Owing to the lack of earthquake hazard records and related data in early Taiwan, verification of the annual probability of liquefaction induced by earthquakes is fairly difficult. By comparison with the APL calculated from the energy dissipation theory, the proposed model is shown to possess a certain degree of accuracy. Finally, the APL in the Yuanlin area is evaluated for future re-rupture of the Chelungpu fault and the Changhua fault using the proposed model. The average annual probability of liquefaction (AAPL) is also calculated. These results show that the AAPLs for the Chelungpu fault and the Changhua fault are 0.0007 to 0.0050 and 0.0001 to 0.0021, respectively. The contour map of the average liquefaction return period is then drawn. These results can serve as a reference for regional liquefaction prevention.
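A minimal sketch of computing a Hasofer-Lind reliability index with the classical HL-RF iteration in standard normal space and converting it to a probability via Φ(−β). The limit-state function below is a made-up illustration, not the calibrated SPT/CPT liquefaction models of the dissertation.

```python
# Hasofer-Lind reliability index via HL-RF iteration, checked with Monte Carlo.
import math
import numpy as np

def g(u):
    """Limit state in standard normal space; failure when g(u) <= 0."""
    return 2.0 + u[0] ** 2 / 10.0 - u[1]

def grad(u, h=1e-6):
    """Forward-difference gradient of g."""
    base = g(u)
    return np.array([(g(u + h * e) - base) / h for e in np.eye(len(u))])

u = np.zeros(2)
for _ in range(50):                          # HL-RF iteration
    gu, dg = g(u), grad(u)
    u_new = (dg @ u - gu) / (dg @ dg) * dg
    if np.linalg.norm(u_new - u) < 1e-8:
        u = u_new
        break
    u = u_new

beta = np.linalg.norm(u)                     # Hasofer-Lind reliability index
pf = 0.5 * math.erfc(beta / math.sqrt(2.0))  # Phi(-beta)

mc = np.random.default_rng(2).standard_normal((1_000_000, 2))
pf_mc = np.mean(g(mc.T) <= 0.0)
print(f"beta = {beta:.3f}, Pf (FORM) = {pf:.4e}, Pf (Monte Carlo) = {pf_mc:.4e}")
```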
18

Huang, Guan-Chin, and 黃冠欽. "Excimer Laser Micromachining of 3D Microstructures Based on Method of Probability Distribution." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/13637051303417213795.

Full text available
Abstract:
Master's thesis, National Cheng Kung University (國立成功大學), Department of Mechanical Engineering, academic year 94 (2005–2006).
This study applies excimer laser micromachining technology to the manufacture of 3D microstructures with continuous profiles. Two different excimer laser machining methods based on the idea of probability distribution are used to fabricate axially symmetrical and non-axially symmetrical microstructures. Both theoretical and experimental studies are carried out to verify the feasibility and machining accuracy of these excimer laser micromachining processes. Firstly, an "innovated hole area modulation method" is applied to fabricate non-axially symmetrical microstructures. We modify several parameters of the machining contour paths and the mask design process to minimize the roughness of the machined microstructures. The experimental results show that this method can successfully improve the surface roughness across different types and contour ranges of excimer laser machining. However, it still has some problems with machining accuracy because the probability distribution of the masks is not continuous. If a mask alignment system with high-precision orientation can be designed, allowing non-inverse and inverse masks to be used together, this machining method will have great potential for manufacturing arbitrary non-axially symmetrical micro-optical devices in the future. In order to manufacture axially symmetrical spherical microlenses, the "excimer laser planetary contour scanning method" is adopted in this work. The basic idea is based on a specific mask design method and a sample rotation method, which includes both self-spinning and circular revolving, to provide a probability function for laser machining. The probability function created by the planetary scanning ensures a continuous, smooth, and precise surface profile for the machined microstructures. The surface profiles are measured and compared with their theoretical counterparts. Excellent agreement in both profile shape and dimensions is achieved. The machined microlenses will be combined with plastic optical fiber (POF) to verify their potential for fabricating micro-optic components such as refractive microlenses or other optical-fiber-related micro-devices.
19

Shen, Wei-min, and 沈暐閔. "Development of a quantitative human-error-probability method based on fuzzy set theory." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/90770927507729188482.

Full text available
Abstract:
Master's thesis, National Taiwan Ocean University (國立臺灣海洋大學), Department of Merchant Marine, academic year 97 (2008–2009).
Human errors occur wherever activities involve human beings, regardless of the domain or operation in which such performances are undertaken. Statistically, human error is one of the crucial factors contributing to accidents. Accordingly, the study of human error is a very important topic, and a variety of human reliability assessment (HRA) methods have been developed to tackle such problems. HRA approaches can be divided into three categories: those using a database, those using expert judgment, and those using quasi-expert-judgment. The approaches in the first category apply a database containing generic Human Error Probabilities (HEPs) to the specific circumstance being assessed. In the second category, HEPs are obtained by asking experts directly about the scenario under consideration. Alternatively, some approaches in the third category generate HEPs by manipulating and interrogating a quasi-database combined with expert judgment. However, risk analysis based on the techniques in the second and third categories may involve a high level of uncertainty due to the lack of data, which may jeopardize the reliability of the results. Some research has been devised to resolve such a difficulty, and human error studies based on the fuzzy-number concept are one example, owing to their ability to transform qualitative information into quantitative attributes under circumstances where data are lacking or incomplete. However, a drawback occurs in situations in which some variables have sufficient data to evaluate risks while others do not, since the discriminating ability of studies based on the fuzzy-number concept is too low. In order to overcome this difficulty, this research establishes a framework equipped with a flexible data-acquirement method whose objective is to provide a high level of discriminating ability. This is achieved by first establishing membership functions for linguistic variables, secondly combining such variables using the fuzzy rule base method, thirdly obtaining crisp values through the defuzzification process, and finally transforming such crisp values into Fuzzy Failure Rates (FFR). The established methodology is verified and examined using data from traditional HRA studies.
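A minimal Mamdani-style sketch of the workflow described above: triangular membership functions for linguistic inputs, a small fuzzy rule base, centroid defuzzification, and a final mapping of the crisp score to a human error probability. All membership functions, rules and the log-scale HEP mapping are illustrative assumptions.

```python
# Fuzzy rule base -> defuzzified crisp score -> human error probability.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

# Linguistic inputs on a 0-10 scale (hypothetical ratings from an assessor).
complexity, time_pressure = 7.0, 4.0
cx_low, cx_high = tri(complexity, 0, 0, 10), tri(complexity, 0, 10, 10)
tp_low, tp_high = tri(time_pressure, 0, 0, 10), tri(time_pressure, 0, 10, 10)

# Rule base: antecedent strength (min) -> output fuzzy set on a 0-1 "error likelihood" axis.
z = np.linspace(0.0, 1.0, 501)
out_low, out_med, out_high = tri(z, 0.0, 0.0, 0.5), tri(z, 0.0, 0.5, 1.0), tri(z, 0.5, 1.0, 1.0)
rules = [
    (min(cx_low, tp_low), out_low),
    (min(cx_low, tp_high), out_med),
    (min(cx_high, tp_low), out_med),
    (min(cx_high, tp_high), out_high),
]
aggregated = np.max([np.minimum(strength, shape) for strength, shape in rules], axis=0)

crisp = np.sum(aggregated * z) / np.sum(aggregated)   # centroid defuzzification
hep = 10.0 ** (-4.0 + 3.0 * crisp)                    # assumed mapping of the crisp score to a failure rate
print(f"crisp error likelihood: {crisp:.2f}, human error probability ~ {hep:.1e}")
```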
20

Hong, Chung-Ming, and 洪崇銘. "A Cluster Group Method Based on the Priority Queue to Reduce Collision Probability for Wireless Sensor Network." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/38194755213671413752.

Full text available
Abstract:
Master's thesis, National Chung Hsing University (國立中興大學), Department of Computer Science and Engineering, academic year 99 (2010–2011).
In a widely used wireless sensor network (WSN) environment, when a node is in the busy state, an integer value is selected randomly from the contention window. When the back-off time calculated from this random value counts down to zero, the node tries to grab the wireless channel in the environment. Therefore, as long as nodes are in the busy state, they all have the same chance of grabbing the channel. However, this is unfair: because the various groups within the busy state have different amounts of information and degrees of congestion, each group should be given a corresponding back-off time depending on its situation. In order to achieve this goal, we propose a mechanism for grouping back-off times and use this mechanism to reduce the probability of collision in the environment. In this paper, we first establish a multi-priority-queue environment. The sensor nodes in the environment assign the collected environmental information different priorities according to its importance, and store each priority in the corresponding priority queue. After that, we analyze all of the groups within the busy state in this environment. According to the amount of information in these groups, each is given a relative priority. Based on the priority of a group, we establish a grouping mechanism by setting the back-off time. This mechanism gives the nodes of a high-priority group a greater chance of grabbing the channel sooner than the nodes of a low-priority group, and can reduce the packet drop rate. Then, we design a collision probability formula to calculate the collision probability with and without grouping. Finally, we construct a dual-priority-queue module using Matlab and incorporate the grouping mechanism into this environment. The experimental results show that our method can not only give each group an appropriate back-off time, but can also effectively reduce the collision probability between any of the nodes and external nodes. Since the collision probability is related to power consumption and throughput, reducing it achieves the goals of saving power and enhancing throughput.
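A minimal slotted back-off simulation contrasting one shared contention window with priority-dependent windows, in the spirit of the grouping mechanism described above. Node counts, window sizes and the collision rule (two or more nodes picking the earliest slot) are illustrative assumptions, not the parameters of the thesis.

```python
# Collision probability with a shared contention window vs priority-grouped windows.
import numpy as np

rng = np.random.default_rng(11)
trials = 100_000
n_high, n_low = 4, 8                       # nodes in the high- and low-priority groups

def collision_rate(draw_high, draw_low):
    """Fraction of trials in which the earliest chosen slot is not unique."""
    slots = np.concatenate([draw_high, draw_low], axis=1)
    earliest = slots.min(axis=1, keepdims=True)
    return np.mean((slots == earliest).sum(axis=1) > 1)

# Ungrouped: every node draws its back-off slot from the same window [0, 32).
same = collision_rate(rng.integers(0, 32, (trials, n_high)),
                      rng.integers(0, 32, (trials, n_low)))

# Grouped: high-priority nodes draw from [0, 16), low-priority from [16, 48).
grouped = collision_rate(rng.integers(0, 16, (trials, n_high)),
                         rng.integers(16, 48, (trials, n_low)))

print(f"collision probability, shared window: {same:.3f}, priority windows: {grouped:.3f}")
```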
21

Chang, Hsun-Chen, and 張恂禎. "On the Prediction Various Locations of Contaminant Sources in a Cleanroom with the Probability-based Inverse Method." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/4483mp.

Full text available
Abstract:
Master's thesis, National Taipei University of Technology (國立臺北科技大學), Department of Energy and Refrigerating Air-Conditioning Engineering, academic year 101 (2012–2013).
Studies of pollutant dispersion and spreading behavior in a cleanroom, whether experimental or numerical, are generally based on artificial emission sources. Detection of spreading pollutants in an operating cleanroom can easily be achieved with the respective indoor air quality monitoring systems, but the reverse problem of source identification cannot. Identification of the pollutant source is possible with the use of an inverse numerical method. This study proposes a probability-based inverse method coupled with a computational fluid dynamics (CFD) method, aiming to predict the pollutant source in an operating cleanroom with a unilateral recirculation airflow field, and compares the results with those obtained using the simulation model with an artificial source. The experiments were conducted in a cleanroom of the same size. Toluene was used as a tracer gas to simulate gas leakage in the fab. PID sensors were used to measure the toluene concentration field, and the collected data were then used for comparison with the simulation results. The agreement is seen to be quite good. By assessing the proposed probability weighting model, the location with the highest probability is found to be consistent with the default location of the artificial pollution sources.
22

Donde, Pratik Prakash. "LES/PDF approach for turbulent reacting flows." 2012. http://hdl.handle.net/2152/19481.

Full text available
Abstract:
The probability density function (PDF) approach is a powerful technique for large eddy simulation (LES) based modeling of turbulent reacting flows. In this approach, the joint-PDF of all reacting scalars is estimated by solving a PDF transport equation, thus providing detailed information about small-scale correlations between these quantities. The objective of this work is to further develop the LES/PDF approach for studying flame stabilization in supersonic combustors, and for soot modeling in turbulent flames. Supersonic combustors are characterized by strong shock-turbulence interactions which preclude the application of conventional Lagrangian stochastic methods for solving the PDF transport equation. A viable alternative is provided by quadrature based methods which are deterministic and Eulerian. In this work, it is first demonstrated that the numerical errors associated with LES require special care in the development of PDF solution algorithms. The direct quadrature method of moments (DQMOM) is one quadrature-based approach developed for supersonic combustion modeling. This approach is shown to generate inconsistent evolution of the scalar moments. Further, gradient-based source terms that appear in the DQMOM transport equations are severely underpredicted in LES leading to artificial mixing of fuel and oxidizer. To overcome these numerical issues, a new approach called semi-discrete quadrature method of moments (SeQMOM) is formulated. The performance of the new technique is compared with the DQMOM approach in canonical flow configurations as well as a three-dimensional supersonic cavity stabilized flame configuration. The SeQMOM approach is shown to predict subfilter statistics accurately compared to the DQMOM approach. For soot modeling in turbulent flows, an LES/PDF approach is integrated with detailed models for soot formation and growth. The PDF approach directly evolves the joint statistics of the gas-phase scalars and a set of moments of the soot number density function. This LES/PDF approach is then used to simulate a turbulent natural gas flame. A Lagrangian method formulated in cylindrical coordinates solves the high dimensional PDF transport equation and is coupled to an Eulerian LES solver. The LES/PDF simulations show that soot formation is highly intermittent and is always restricted to the fuel-rich region of the flow. The PDF of soot moments has a wide spread leading to a large subfilter variance. Further, the conditional statistics of soot moments conditioned on mixture fraction and reaction progress variable show strong correlation between the gas phase composition and soot moments.
23

Weaver, George W. "Model based estimation of parameters of spatial populations from probability samples." Thesis, 1996. http://hdl.handle.net/1957/34124.

Full text available
Abstract:
Many ecological populations can be interpreted as response surfaces; the spatial patterns of the population vary in response to changes in the spatial patterns of environmental explanatory variables. Collection of a probability sample from the population provides unbiased estimates of the population parameters using design-based estimation. When information is available for the environmental explanatory variables, model-based procedures are available that provide more precise estimates of population parameters in some cases. In practice, not all of these environmental explanatory variables will be known. When the spatial coordinates of the population units are available, a spatial model can be used as a surrogate for the unknown, spatially patterned explanatory variables. Design-based and model-based procedures will be compared for estimating parameters of the population of Acid Neutralizing Capacity (ANC) of lakes in the Adirondack Mountains in New York. Results from the analysis of this population will be used to elucidate some general principles for model-based estimation of parameters of spatial populations. Results indicate that model-based estimates of population parameters are more precise than design-based estimates in some cases. In addition, including spatial information as a surrogate for spatially patterned missing covariates improves the precision of the estimates in some cases, the degree to which depends upon the model chosen to represent the spatial pattern. When the probability sample selected from the spatial population is a stratified sample, differences in stratum variances need to be accounted for when residual spatial covariance estimation is desired for the entire population. This can be accomplished by scaling residuals by their estimated stratum standard deviation functions, and calculating the residual covariance using these scaled residuals. Results here demonstrate that the form of scaling influences the estimated strength of the residual correlation and the estimated correlation range.
Graduation date: 1997
24

Upadhyay, Rochan Raj. "Simulation of population balance equations using quadrature based moment methods." Thesis, 2006. http://hdl.handle.net/2152/2943.

Повний текст джерела
APA, Harvard, Vancouver, ISO, and other styles
25

Cao, Jian. "Computation of High-Dimensional Multivariate Normal and Student-t Probabilities Based on Matrix Compression Schemes." Diss., 2020. http://hdl.handle.net/10754/662613.

Full text of the source
Abstract:
The first half of the thesis focuses on the computation of high-dimensional multivariate normal (MVN) and multivariate Student-t (MVT) probabilities. Chapter 2 generalizes the bivariate conditioning method to a d-dimensional conditioning method and combines it with a hierarchical representation of the n × n covariance matrix. The resulting two-level hierarchical-block conditioning method requires Monte Carlo simulations to be performed only in d dimensions, with d ≪ n, and allows the dominant complexity term of the algorithm to be O(n log n). Chapter 3 improves the block reordering scheme from Chapter 2 and integrates it into the Quasi-Monte Carlo simulation under the tile-low-rank representation of the covariance matrix. Simulations up to dimension 65,536 suggest that this method can improve the run time by one order of magnitude compared with the hierarchical Monte Carlo method. The second half of the thesis discusses a novel matrix compression scheme with Kronecker products, an R package that implements the methods described in Chapter 3, and an application study with the probit Gaussian random field. Chapter 4 studies the potential of using the sum of Kronecker products (SKP) as a compressed covariance matrix representation. Experiments show that this new SKP representation can reduce the memory footprint by one order of magnitude compared with the hierarchical representation for covariance matrices from large grids, and that the Cholesky factorization in one million dimensions can be completed within 600 seconds. Chapter 5 introduces an R package that implements the methods in Chapter 3 and shows how the package improves the accuracy of the computed excursion sets. Chapter 6 derives the posterior properties of the probit Gaussian random field, based on which model selection and posterior prediction are performed. With the tlrmvnmvt package, the computation becomes feasible in tens of thousands of dimensions, where the prediction errors are significantly reduced.
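For context, the conditioning methods mentioned in the abstract are, broadly, refinements of the classical separation-of-variables (sequential conditioning) Monte Carlo estimator for MVN probabilities. The Python sketch below shows that baseline idea in its simplest one-variable-at-a-time form; it is plain Monte Carlo on a dense Cholesky factor and does not include the variable reordering, Quasi-Monte Carlo sampling, hierarchical or tile-low-rank compression, or the tlrmvnmvt interface developed in the thesis.

```python
import numpy as np
from scipy.stats import norm

def mvn_prob_sov(lower, upper, cov, n_samples=10000, seed=0):
    """Plain Monte Carlo separation-of-variables (sequential conditioning)
    estimator of P(lower <= X <= upper) for X ~ N(0, cov)."""
    n = len(lower)
    L = np.linalg.cholesky(cov)
    rng = np.random.default_rng(seed)
    u = rng.random((n_samples, n - 1))        # the last dimension needs no new variable
    total = 0.0
    for s in range(n_samples):
        y = np.zeros(n)
        f = 1.0
        for i in range(n):
            shift = L[i, :i] @ y[:i]          # conditioning on earlier variables
            d = norm.cdf((lower[i] - shift) / L[i, i])
            e = norm.cdf((upper[i] - shift) / L[i, i])
            f *= (e - d)                      # conditional probability of this slab
            if i < n - 1:                     # draw the conditioning variable
                y[i] = norm.ppf(d + u[s, i] * (e - d))
        total += f
    return total / n_samples

# Example: P(X <= 0) for a 4-dimensional equicorrelated normal (rho = 0.5).
# For this special case the exact orthant probability is 1/(n + 1) = 0.2.
n, rho = 4, 0.5
cov = rho * np.ones((n, n)) + (1 - rho) * np.eye(n)
print(mvn_prob_sov(np.full(n, -np.inf), np.zeros(n), cov))
```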
APA, Harvard, Vancouver, ISO, and other styles
26

Kraum, Martin [Verfasser]. "Fischer-Tropsch synthesis on supported cobalt based Catalysts : Influence of various preparation methods and supports on catalyst activity and chain growth probability / submitted by Martin Kraum." 1999. http://d-nb.info/959085181/34.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
27

(6630578), Yellamraju Tarun. "n-TARP: A Random Projection based Method for Supervised and Unsupervised Machine Learning in High-dimensions with Application to Educational Data Analysis." Thesis, 2019.

Find the full text of the source
Abstract:
Analyzing the structure of a dataset is a challenging problem in high-dimensions as the volume of the space increases at an exponential rate and typically, data becomes sparse in this high-dimensional space. This poses a significant challenge to machine learning methods which rely on exploiting structures underlying data to make meaningful inferences. This dissertation proposes the n-TARP method as a building block for high-dimensional data analysis, in both supervised and unsupervised scenarios.

The basic element, n-TARP, consists of a random projection framework to transform high-dimensional data to one-dimensional data in a manner that yields point separations in the projected space. The point separation can be tuned to reflect classes in supervised scenarios and clusters in unsupervised scenarios. The n-TARP method finds linear separations in high-dimensional data. This basic unit can be used repeatedly to find a variety of structures. It can be arranged in a hierarchical structure like a tree, which increases the model complexity, flexibility and discriminating power. Feature space extensions combined with n-TARP can also be used to investigate non-linear separations in high-dimensional data.
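A minimal sketch of the basic element described above (random projection to one dimension followed by a threshold split) is given below in Python; the separation score, the number of trial directions, and all names are illustrative assumptions rather than the exact criterion used in the dissertation.

```python
import numpy as np

def best_random_projection_split(X, n_projections=50, seed=0):
    """Try several random directions, project the data to 1-D, and keep the
    direction whose projected values split most cleanly into two groups.
    The score (within-group / total variance of the best threshold split)
    is illustrative, not the thesis's exact criterion; lower = cleaner."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    best = None
    for _ in range(n_projections):
        w = rng.normal(size=d)
        w /= np.linalg.norm(w)                 # random unit direction
        p = X @ w                              # 1-D projection
        order = np.sort(p)
        scores = []
        for k in range(1, n):                  # sweep all thresholds on the line
            left, right = order[:k], order[k:]
            within = left.var() * k + right.var() * (n - k)
            scores.append((within / (n * p.var()), order[k - 1]))
        score, threshold = min(scores)
        if best is None or score < best[0]:
            best = (score, w, threshold)
    return best                                # (score, direction, threshold)

# Toy usage: two Gaussian blobs in 50 dimensions.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 50)), rng.normal(2.0, 1, (100, 50))])
score, w, t = best_random_projection_split(X)
labels = (X @ w > t).astype(int)               # cluster / class assignment
print("separation score:", round(score, 3), "group sizes:", np.bincount(labels))
```

Applying this element recursively within each resulting group gives the hierarchical, tree-like arrangement mentioned above.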

The application of n-TARP to both supervised and unsupervised problems is investigated in this dissertation. In the supervised scenario, a sequence of n-TARP based classifiers with increasing complexity is considered. The point separations are measured by classification metrics like accuracy, Gini impurity or entropy. The performance of these classifiers on image classification tasks is studied. This study provides an interesting insight into the working of classification methods. The sequence of n-TARP classifiers yields benchmark curves that put in context the accuracy and complexity of other classification methods for a given dataset. The benchmark curves are parameterized by classification error and computational cost to define a benchmarking plane. This framework splits this plane into regions of "positive-gain" and "negative-gain" which provide context for the performance and effectiveness of other classification methods. The asymptotes of benchmark curves are shown to be optimal (i.e. at Bayes Error) in some cases (Theorem 2.5.2).

In the unsupervised scenario, the n-TARP method highlights the existence of many different clustering structures in a dataset. However, not all structures present are statistically meaningful. This issue is amplified when the dataset is small, as random events may yield sample sets that exhibit separations that are not present in the distribution of the data. Thus, statistical validation is an important step in data analysis, especially in high-dimensions. However, in order to statistically validate results, often an exponentially increasing number of data samples are required as the dimensions increase. The proposed n-TARP method circumvents this challenge by evaluating statistical significance in the one-dimensional space of data projections. The n-TARP framework also results in several different statistically valid instances of point separation into clusters, as opposed to a unique "best" separation, which leads to a distribution of clusters induced by the random projection process.
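Because validation happens on one-dimensional projections, a simple Monte Carlo test is enough to illustrate the idea. The sketch below is one plausible check under stated assumptions, not necessarily the statistical test used in the dissertation: the separation score of the observed projection is compared with the scores obtained from unimodal Gaussian null samples of the same size, mean, and standard deviation.

```python
import numpy as np

def split_score(p):
    """Within-group / total variance of the best two-way threshold split of
    1-D data (same illustrative score as in the sketch above; lower = cleaner)."""
    order, n = np.sort(p), len(p)
    return min((order[:k].var() * k + order[k:].var() * (n - k)) / (n * p.var())
               for k in range(1, n))

def projection_split_pvalue(p, n_null=200, seed=0):
    """Monte Carlo p-value: how often does a Gaussian null sample of the same
    size, mean and spread split at least as cleanly as the observed projection?
    A small value suggests the split is not a small-sample artefact."""
    rng = np.random.default_rng(seed)
    observed = split_score(p)
    null = [split_score(rng.normal(p.mean(), p.std(), len(p))) for _ in range(n_null)]
    return (1 + sum(s <= observed for s in null)) / (n_null + 1)

# Bimodal projections should validate; unimodal ones should not.
rng = np.random.default_rng(2)
bimodal = np.concatenate([rng.normal(-2, 1, 40), rng.normal(2, 1, 40)])
unimodal = rng.normal(0, 1, 80)
print(projection_split_pvalue(bimodal), projection_split_pvalue(unimodal))
```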

The distributions of clusters resulting from n-TARP are studied. This dissertation focuses on small sample high-dimensional problems. A large number of distinct clusters are found, which are statistically validated. The distribution of clusters is studied as the dimensionality of the problem evolves through the extension of the feature space using monomial terms of increasing degree in the original features, which corresponds to investigating non-linear point separations in the projection space.

A statistical framework is introduced to detect patterns of dependence between the clusters formed with the features (predictors) and a chosen outcome (response) in the data that is not used by the clustering method. This framework is designed to detect the existence of a relationship between the predictors and response. This framework can also serve as an alternative cluster validation tool.

The concepts and methods developed in this dissertation are applied to a real-world data analysis problem in Engineering Education. Specifically, engineering students' Habits of Mind are analyzed. The data at hand is qualitative, in the form of text, equations and figures. To use the n-TARP based analysis method, the source data must be transformed into quantitative data (vectors). This is done by modeling it as a random process based on the theoretical framework defined by a rubric. Since the number of students is small, this problem falls into the small sample high-dimensions scenario. The n-TARP clustering method is used to find groups within this data in a statistically valid manner. The resulting clusters are analyzed in the context of education to determine what is represented by the identified clusters. The dependence of student performance indicators, such as the course grade, on the clusters formed with n-TARP is studied in the pattern dependence framework, and the observed effect is statistically validated. The data obtained suggest the presence of a large variety of different patterns of Habits of Mind among students, many of which are associated with significant grade differences. In particular, the course grade is found to be dependent on at least two Habits of Mind: "computation and estimation" and "values and attitudes."
APA, Harvard, Vancouver, ISO, and other styles
28

Beisler, Matthias Werner. "Modelling of input data uncertainty based on random set theory for evaluation of the financial feasibility for hydropower projects." Doctoral thesis, 2010. https://tubaf.qucosa.de/id/qucosa%3A22775.

Full text of the source
Abstract:
The design of hydropower projects requires a comprehensive planning process in order to achieve the objective of maximising exploitation of the existing hydropower potential as well as the future revenues of the plant. For this purpose, and to satisfy approval requirements for a complex hydropower development, it is imperative at the planning stage that the conceptual development contemplates a wide range of influencing design factors and ensures appropriate consideration of all related aspects. Since the majority of technical and economic parameters that are required for detailed and final design cannot be precisely determined at early planning stages, crucial design parameters such as design discharge and hydraulic head have to be examined through an extensive optimisation process. One disadvantage inherent in commonly used deterministic analysis is the lack of objectivity in the selection of input parameters. Moreover, it cannot be ensured that the entire existing parameter ranges and all possible parameter combinations are covered. Probabilistic methods utilise discrete probability distributions or parameter input ranges to cover the entire range of uncertainties resulting from an information deficit during the planning phase and integrate them into the optimisation by means of an alternative calculation method. The investigated method assists with the mathematical assessment and integration of uncertainties into the rational economic appraisal of complex infrastructure projects. The assessment includes an exemplary verification of the extent to which Random Set Theory can be utilised for the determination of input parameters that are relevant for the optimisation of hydropower projects, and evaluates possible improvements with respect to the accuracy and suitability of the calculated results.
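The random-set propagation idea summarised in the abstract can be sketched as follows: each uncertain input is described by focal intervals carrying probability mass, the (monotone) performance function is evaluated at the interval endpoints, and lower/upper probability bounds (belief and plausibility) are read off for any threshold of interest. All numbers, the simplified energy formula, and the independence assumption below are illustrative only and are not taken from the thesis.

```python
import itertools

# Illustrative focal elements (interval, probability mass) for two uncertain
# design parameters -- the values are made up, not from the thesis.
discharge = [((8.0, 12.0), 0.6), ((12.0, 16.0), 0.4)]   # design discharge, m^3/s
head      = [((45.0, 55.0), 0.7), ((55.0, 60.0), 0.3)]  # hydraulic head, m

def annual_energy_mwh(q, h, eta=0.85, full_load_hours=8760 * 0.5):
    """Simplified (and monotone) estimate: P = rho * g * q * h * eta, in MWh/year."""
    return 1000 * 9.81 * q * h * eta / 1e6 * full_load_hours

# Random-set propagation: combine focal elements (assuming independence) and,
# because the mapping is monotone in q and h, evaluate it at the interval
# endpoints to obtain the image interval of each combined focal element.
focal_images = []
for (qi, pq), (hj, ph) in itertools.product(discharge, head):
    lo = annual_energy_mwh(qi[0], hj[0])
    up = annual_energy_mwh(qi[1], hj[1])
    focal_images.append(((lo, up), pq * ph))

def belief_plausibility(threshold):
    """Lower/upper probability that annual energy exceeds `threshold` MWh."""
    bel = sum(m for (lo, up), m in focal_images if lo > threshold)
    pl  = sum(m for (lo, up), m in focal_images if up > threshold)
    return bel, pl

print(belief_plausibility(20000.0))   # (lower bound, upper bound)
```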
APA, Harvard, Vancouver, ISO, and other styles