Theses on the topic "Probability-based method"
Create a precise citation in APA, MLA, Chicago, Harvard, and other styles
Consult the 28 best theses for your research on the topic "Probability-based method".
Next to each source in the list of references there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Explore theses on a wide variety of disciplines and organize your bibliography correctly.
Pinho, Luis Gustavo Bastos. "Building new probability distributions: the composition method and a computer based method". Universidade Federal de Pernambuco, 2017. https://repositorio.ufpe.br/handle/123456789/24966.
We discuss the creation of new probability distributions for continuous data in two distinct approaches. The first one is, to our knowledge, novel and consists of using Estimation of Distribution Algorithms (EDAs) to obtain new cumulative distribution functions. This class of algorithms works as follows. A population of solutions for a given problem is randomly selected from a space of candidates, which may contain candidates that are not feasible solutions to the problem. The selection occurs by following a set of probability rules that, initially, assign a uniform distribution to the space of candidates. Each individual is ranked by a fitness criterion. A fraction of the most fit individuals is selected, and the probability rules are then adjusted to increase the likelihood of obtaining solutions similar to the most fit in the current population. The algorithm iterates until the set of probability rules is able to provide good solutions to the problem. In our proposal, the algorithm is used to generate cumulative distribution functions to model a given continuous data set. We tried to keep the mathematical expressions of the new functions as simple as possible. The results were satisfactory. We compared the models provided by the algorithm to those in already published papers. In every situation, the models proposed by the algorithm had advantages over the ones already published, the main advantage being the relative simplicity of the mathematical expressions obtained. Still in the context of computational tools and algorithms, we show the performance of simple neural networks as a method for parameter estimation in probability distributions. The motivation for this was the need to solve a large number of nonlinear equations in the statistical treatment of SAR (synthetic aperture radar) images. The estimation process requires solving a nonlinear equation iteratively; this is repeated for every pixel, and an image usually consists of a large number of pixels. We trained a neural network to approximate an estimator for the parameter of interest. Once trained, the network can be fed the data and returns an estimate of the parameter of interest without the need for iterative methods. The training of the network can take place even before collecting the data from the radar. The method was tested on simulated and real data sets with satisfactory results. The same method can be applied to different distributions. The second part of this thesis presents two new probability distribution classes obtained from the composition of already existing ones. In each situation, we present the new class and general results such as power series expansions for the probability density functions and expressions for the moments, entropy, and the like. The first class is obtained from the composition of the beta-G and Lehmann-type II classes; the second, from the transmuted-G and Marshall-Olkin-G classes. Distributions in these classes are compared to already existing ones to illustrate their performance in applications to real data sets.
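For readers unfamiliar with estimation of distribution algorithms, the loop this abstract describes (sample candidates from probability rules, rank by fitness, refit the rules to the best fraction) can be illustrated with a minimal sketch. The Gaussian search distribution, the function names, and the toy fitness below are illustrative assumptions, not code from the thesis.

```python
import numpy as np

def eda_fit(fitness, dim, pop_size=200, elite_frac=0.2, n_iter=50, seed=0):
    """Minimal estimation-of-distribution loop: sample, rank, refit the sampler."""
    rng = np.random.default_rng(seed)
    mean, std = np.zeros(dim), np.ones(dim)        # initial (broad) probability rules
    n_elite = int(elite_frac * pop_size)
    for _ in range(n_iter):
        pop = rng.normal(mean, std, size=(pop_size, dim))  # draw candidate solutions
        scores = np.apply_along_axis(fitness, 1, pop)      # rank by fitness criterion
        elite = pop[np.argsort(scores)[-n_elite:]]         # keep the most fit fraction
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-8  # adjust the rules
    return mean

# Toy use: recover the parameters that maximize a negative squared error
best = eda_fit(lambda x: -np.sum((x - 1.5) ** 2), dim=3)
print(best)
```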
Hoang, Tam Minh Thi 1960. "A joint probability model for rainfall-based design flood estimation". Monash University, Dept. of Civil Engineering, 2001. http://arrow.monash.edu.au/hdl/1959.1/8892.
Alkhairy, Ibrahim H. "Designing and Encoding Scenario-based Expert Elicitation for Large Conditional Probability Tables". Thesis, Griffith University, 2020. http://hdl.handle.net/10072/390794.
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Info & Comm Tech
Science, Environment, Engineering and Technology
Mansour, Rami. "Reliability Assessment and Probabilistic Optimization in Structural Design". Doctoral thesis, KTH, Hållfasthetslära (Avd.), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-183572.
Chapman, Gary. "Computer-based musical composition using a probabilistic algorithmic method". Thesis, University of Southampton, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.341603.
Liu, Xiang. "Identification of indoor airborne contaminant sources with probability-based inverse modeling methods". Connect to online resource, 2008. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3337124.
Dedović, Ines [Verfasser], Jens-Rainer [Akademischer Betreuer] Ohm and Dorit [Akademischer Betreuer] Merhof. "Efficient probability distribution function estimation for energy based image segmentation methods / Ines Dedovic ; Jens-Rainer Ohm, Dorit Merhof". Aachen : Universitätsbibliothek der RWTH Aachen, 2016. http://d-nb.info/1130871541/34.
Kraum, Martin. "Fischer-Tropsch synthesis on supported cobalt based Catalysts: Influence of various preparation methods and supports on catalyst activity and chain growth probability". [S.l. : s.n.], 1999. http://deposit.ddb.de/cgi-bin/dokserv?idn=959085181.
Good, Norman Markus. "Methods for estimating the component biomass of a single tree and a stand of trees using variable probability sampling techniques". Thesis, Queensland University of Technology, 2001. https://eprints.qut.edu.au/37097/1/37097_Good_2001.pdf.
Texto completoRayner, Glen. "Statistical methodologies for quantile-based distributional families". Thesis, Queensland University of Technology, 1999.
Lelièvre, Nicolas. "Développement des méthodes AK pour l'analyse de fiabilité. Focus sur les évènements rares et la grande dimension". Thesis, Université Clermont Auvergne (2017-2020), 2018. http://www.theses.fr/2018CLFAC045/document.
Engineers increasingly use numerical models to replace experimentation during the design of new products. With the increase in computing power, these models are more and more complex and time-consuming, for a better representation of reality. In practice, optimization is very challenging when considering real mechanical problems, since they exhibit uncertainties. Reliability is an interesting metric for the failure risk of designed products due to uncertainties. Estimating this metric, the failure probability, requires a large number of evaluations of the time-consuming model and thus becomes intractable in practice. To deal with this problem, surrogate modeling is used here, and more specifically AK-based methods, to approximate the physical model with far fewer time-consuming evaluations. The first objective of this thesis work is to discuss the mathematical formulation of design problems under uncertainties. This formulation has a considerable impact on the solution identified by optimization during the design process of new products. A definition of both concepts of reliability and robustness is also proposed. These works are presented in a publication in the international journal Structural and Multidisciplinary Optimization (Lelièvre et al. 2016). The second objective of this thesis is to propose a new AK-based method to estimate failure probabilities associated with rare events. This new method, named AK-MCSi, presents three enhancements of AK-MCS: (i) sequential Monte Carlo simulations to reduce the time associated with the evaluation of the surrogate model, (ii) a new, stricter stopping criterion on learning evaluations to ensure the correct classification of the Monte Carlo population, and (iii) a multi-point enrichment permitting the parallelization of the evaluation of the time-consuming model. This work has been published in Structural Safety (Lelièvre et al. 2018). The last objective of this thesis is to propose new AK-based methods to estimate the failure probability of a high-dimensional reliability problem, i.e. a problem defined by both a time-consuming model and a high number of input random variables. Two new methods, AK-HDMR1 and AK-PCA, are proposed to deal with this problem, based respectively on a functional decomposition and a dimensional reduction technique. AK-HDMR1 was submitted to Reliability Engineering and Structural Safety on 1 October 2018.
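As a rough illustration of the AK family of methods discussed above — a Kriging surrogate classifying a Monte Carlo population, enriched where the sign of the limit state is most in doubt — here is a minimal AK-MCS-style sketch. It uses scikit-learn's GaussianProcessRegressor as a stand-in for the Kriging model and a toy limit state; the sample sizes, U-threshold, and stopping rule are illustrative assumptions, not the thesis's AK-MCSi settings.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def ak_mcs(limit_state, dim, n_mc=10_000, n_init=12, n_max=60, seed=0):
    """Crude AK-MCS-style loop: Kriging surrogate on a Monte Carlo population,
    enriched at the point with the lowest U = |mu|/sigma (most doubtful sign)."""
    rng = np.random.default_rng(seed)
    X_mc = rng.standard_normal((n_mc, dim))              # Monte Carlo population
    idx = rng.choice(n_mc, n_init, replace=False)
    X_doe = X_mc[idx]
    y_doe = np.array([limit_state(x) for x in X_doe])
    gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True)
    while len(y_doe) < n_max:
        gp.fit(X_doe, y_doe)
        mu, sigma = gp.predict(X_mc, return_std=True)
        U = np.abs(mu) / np.maximum(sigma, 1e-12)
        if U.min() >= 2.0:                                # stopping criterion on learning
            break
        x_new = X_mc[np.argmin(U)]                        # enrich where classification is uncertain
        X_doe = np.vstack([X_doe, x_new])
        y_doe = np.append(y_doe, limit_state(x_new))
    mu, _ = gp.predict(X_mc, return_std=True)
    return np.mean(mu <= 0)                               # failure probability estimate

# Toy limit state: failure when the sum of two standard normals exceeds 3
pf = ak_mcs(lambda x: 3.0 - x.sum(), dim=2)
print(pf)
```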
Slowik, Ondřej. "Pravděpodobnostní optimalizace konstrukcí". Master's thesis, Vysoké učení technické v Brně. Fakulta stavební, 2014. http://www.nusl.cz/ntk/nusl-226801.
Beisler, Matthias Werner. "Modelling of input data uncertainty based on random set theory for evaluation of the financial feasibility for hydropower projects". Doctoral thesis, Technische Universitaet Bergakademie Freiberg Universitaetsbibliothek "Georgius Agricola", 2011. http://nbn-resolving.de/urn:nbn:de:bsz:105-qucosa-71564.
The design of hydropower plants is a complex planning process, the aim of which is to exploit the available hydropower potential as fully as possible and to maximize the plant's future economic returns. To achieve this while ensuring that a complex hydropower project remains permittable, a large number of factors relevant to the concept design must be captured and adequately taken into account during the project planning phase. In early planning stages, most of the technical and economic parameters that are decisive for the detailed design cannot be determined exactly, so the governing design parameters of the plant, such as discharge and head, must pass through an extensive optimization process. A drawback of the usual deterministic calculation approaches is the generally insufficient objectivity in determining the input parameters, as well as the fact that the parameters cannot be captured across their full ranges and in all relevant parameter combinations. Probabilistic methods use input parameters in the form of statistical distributions or ranges, with the aim of mathematically capturing the uncertainties that arise from the unavoidable information deficit in the planning phase and incorporating them into the calculation by means of an alternative computational method. The investigated approach helps to capture, objectively and mathematically, the vagueness that results from an information deficit in the economic appraisal of complex infrastructure projects and to include it in the planning process. It is assessed and exemplarily examined to what extent the random set method can be applied in determining the input variables relevant to the optimization process of hydropower plants, and to what extent this yields improvements in the accuracy and significance of the calculation results.
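To make the random set idea concrete, the following sketch propagates two interval-valued inputs with basic probability masses through a simple hydropower output formula and reads off lower and upper probabilities for a threshold. The focal elements, masses, efficiency, and threshold are made-up illustrations, not values from the thesis.

```python
# Hypothetical focal elements (intervals) with basic probability masses for two inputs:
# discharge Q [m^3/s] and head H [m]; all numbers are illustrative only.
Q = [((18.0, 22.0), 0.6), ((15.0, 25.0), 0.4)]
H = [((48.0, 52.0), 0.7), ((45.0, 55.0), 0.3)]

def power(q, h, eta=0.85, rho=1000.0, g=9.81):
    """Hydropower output P = eta * rho * g * Q * H, in watts."""
    return eta * rho * g * q * h

# Random set propagation: combine focal elements and map interval bounds through the
# model (monotone in both inputs, so interval bounds map to output bounds).
focal_out = [((power(ql, hl), power(qu, hu)), mq * mh)
             for (ql, qu), mq in Q for (hl, hu), mh in H]

# Belief (lower probability) and plausibility (upper probability) that output <= 10 MW
threshold = 10.0e6
belief = sum(m for (lo, hi), m in focal_out if hi <= threshold)
plausibility = sum(m for (lo, hi), m in focal_out if lo <= threshold)
print(belief, plausibility)
```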
Chen, Chi-Fan and 陳祈帆. "Prediction of indoor pollutant source with the probability-based inverse method". Thesis, 2012. http://ndltd.ncl.edu.tw/handle/36ng49.
National Taipei University of Technology
Master's program, Department of Energy and Refrigerating Air-Conditioning Engineering
100
The studies of pollutant dispersion and spreading behavior in a cleanroom, whether experimental or numerical, are generally based on artificial emission sources. Detection of spreading pollutants in an operating cleanroom can easily be achieved using indoor air quality monitoring systems, but the reverse, source identification, cannot. Identification of a pollutant source is possible with an inverse numerical method. This study proposes a probability-based inverse method coupled with computational fluid dynamics (CFD), aiming to predict the pollutant source in an operating cleanroom with a unilateral recirculation airflow field, and compares the results with those obtained using the simulation model with an artificial source. The diffusion of pollutants from an artificial source relies mainly on the airflow field in the cleanroom. For the proposed probability-based inverse method, with the airflow field reversed, the CFD results show the aggregation of pollutants in the unilateral airflow field. By assessing the proposed probability weighting model, the location with the highest probability is found to be consistent with the preset location of the artificial pollution source. The results also show that increasing the number of sensor detection points helps minimize the calculation errors in the assessment of the proposed probability weighting function, and vice versa. Besides that, varying the pollutant emission method has no significant effect on the identification of the pollutant source, although the calculation results based on the proposed probability weighting function are relatively lower.
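The probability weighting step can be pictured, very loosely, as combining the reversed-flow concentration fields computed from each sensor into a weight per candidate source location. The sketch below is only an illustrative stand-in: the arrays, the linear weighting, and the cell indexing are assumptions, since the abstract does not spell out the actual weighting function used in the thesis.

```python
import numpy as np

# Hypothetical inputs: reverse_fields[s][i] is the concentration that reversed-flow CFD
# attributes to candidate cell i when the tracer is released at sensor s, and
# readings[s] is the concentration actually measured at sensor s.
reverse_fields = np.array([[0.1, 0.6, 0.3],
                           [0.2, 0.7, 0.1],
                           [0.0, 0.5, 0.5]])
readings = np.array([1.8, 2.1, 0.9])

# Illustrative weighting: each sensor votes for cells in proportion to its reading and
# its reversed-flow field; the cell with the largest normalized weight is the source guess.
weights = readings @ reverse_fields
probability = weights / weights.sum()
print(probability, "-> most likely source cell:", int(np.argmax(probability)))
```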
Hwang, Guan-Lin and 黃冠霖. "A Web Services Selection Method based on the QoS Probability Distribution". Thesis, 2014. http://ndltd.ncl.edu.tw/handle/85469546678794054306.
National Sun Yat-sen University
Graduate program, Department of Information Management
102
Service-Oriented Architecture (SOA) provides a flexible framework for service composition. A composite web service is represented by a workflow, and for each task within the workflow, several candidate web services offering the same functionality may be available. In previous work (Hwang, Hsu, & Lee, 2014), Hwang et al. propose a service selection framework based on probabilistic QoS distributions of component services. Their method decomposes a global QoS constraint into a number of local constraints using the average QoS value of each candidate service. However, heterogeneous deviation among candidate services may lead to suboptimal selection. We propose a method for finding an initial service assignment that considers the standard deviation of the QoS distributions. The objective of service selection is to maximize the global QoS conformance. Experimental results show that the proposed approach significantly improves the performance of probabilistic QoS-based service selection, in terms of both global QoS conformance and running time.
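A tiny worked example of probability-based selection under normally distributed QoS: for a sequential workflow, the global conformance of a candidate assignment is the probability that the summed response times meet a deadline, and the assignment maximizing that probability is chosen. The candidate means, standard deviations, deadline, and the brute-force search are illustrative assumptions; the cited framework decomposes constraints rather than enumerating combinations.

```python
import itertools
import numpy as np
from scipy.stats import norm

# Hypothetical candidates per task: (mean, std) of response time, assumed normal
candidates = {
    "t1": [(200.0, 20.0), (180.0, 60.0)],
    "t2": [(150.0, 10.0), (140.0, 50.0)],
    "t3": [(300.0, 30.0), (310.0, 15.0)],
}
deadline = 700.0   # global QoS constraint on end-to-end response time

def conformance(selection):
    """P(sum of response times <= deadline) for independent normal QoS values."""
    mu = sum(m for m, _ in selection)
    var = sum(s ** 2 for _, s in selection)
    return norm.cdf((deadline - mu) / np.sqrt(var))

# Exhaustive search over the tiny example; the thesis targets much larger spaces
best = max(itertools.product(*candidates.values()), key=conformance)
print(best, round(conformance(best), 3))
```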
Lee, Ya-fen and 李雅芬. "An evaluation method of liquefaction probability based on the reliability theory". Thesis, 2007. http://ndltd.ncl.edu.tw/handle/01549575868989143003.
National Cheng Kung University
Master's and doctoral program, Department of Civil Engineering
95
Soil liquefaction is one of the hazards induced by earthquakes. During the 1999 Chi-Chi earthquake, the damage caused by liquefaction was most serious in central Taiwan. The traditional binary representation, liquefied or non-liquefied, cannot reflect uncertainty and risk, which are important characteristics in geotechnical engineering. Therefore, this paper develops a new evaluation model of the annual probability of liquefaction (APL) that considers the uncertainties of the soil parameters and of the model, based on reliability theory. The SPT-based and CPT-based simplified methods suggested by Youd and Idriss (2001), called herein the Seed method and the RW method, are taken as the basic equations. The reliability index proposed by Hasofer and Lind (1974), which has the invariance property, is used to calculate the probability of liquefaction. Seven random variables are treated as uncertain in this paper. For this reason, the knowledge nested partition method (KNPM) is employed to establish a new, global-search method for determining the reliability index that satisfies both calculation efficiency and the need for liquefaction evaluation over large areas. Functions consisting of one length L and three angles θ1, θ2, and θ3 can then be used to represent the foregoing random variables. Through liquefied and non-liquefied case histories, the analysis results of the KNPM method are compared with and verified against the results of Monte Carlo simulation and an iteration technique, showing that the results obtained by the KNPM are correct and reach the optimum. Within a rigorous reliability framework, an evaluation of the probability of liquefaction has to involve the uncertainties of the model and the parameters at the same time. Methods to quantify the model and parameter uncertainties are proposed by way of a great number of case histories and field data, and these quantitative results can be used in the liquefaction probability evaluation. For the model uncertainty, random sampling and analysis of alternatives resulting from different numbers of SPT-based and CPT-based case histories are adopted. The SPT-based results reveal that the uncertainty of the Seed method can be defined by c1 = 1.06 and COV(c1) = 0.06. The CPT-based results demonstrate that the uncertainty of the RW method can be expressed by c1 = 1.16 and COV(c1) = 0.12. Both methods are conservative models. For the soil parameter uncertainties, SPT-based and CPT-based field data in the Yuanlin and Mailiao areas are taken, and geostatistical methods are used to quantify the uncertainties of soil parameters, including the standard penetration value (N), fines content (FC), soil weight (Wt), cone tip resistance (qc), and sleeve friction (fs). These results show that the soil parameter uncertainties differ somewhat between the two areas but remain fairly close, so this paper suggests that the uncertainties of N, FC, Wt, qc, and fs are 0.15, 0.14, 0.02, 0.04, and 0.14, respectively. Summing up the above results, including the reliability index and the uncertainties of the model and soil parameters, this paper develops a new evaluation model of the probability of soil liquefaction. Owing to the lack of earthquake hazard records and related data in early Taiwan, verification of the annual probability of liquefaction induced by earthquakes is fairly difficult.
By comparison with the APL calculated from energy dissipation theory, the proposed model is shown to possess a certain degree of accuracy. Finally, the APL in the Yuanlin area is evaluated with the proposed model for future re-rupture of the Chelungpu fault and the Changhua fault. The average annual probability of liquefaction (AAPL) is also calculated. These results show that the AAPLs for the Chelungpu fault and the Changhua fault are 0.0007 to 0.0050 and 0.0001 to 0.0021, respectively. The contour of the average liquefaction return period is then drawn. These results can serve as a reference for regional liquefaction prevention.
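The reliability-index-to-probability step underlying this kind of model can be shown with a linear limit state and independent normal variables, where the Hasofer-Lind index has a closed form and the failure probability is Phi(-beta). The resistance/load numbers below are illustrative, and the sketch does not reproduce the thesis's KNPM search or its liquefaction limit-state functions.

```python
import numpy as np
from scipy.stats import norm

def hasofer_lind_linear(a, b, mu, sigma):
    """Hasofer-Lind reliability index for a linear limit state g(X) = a.X + b
    with independent normal variables X_i ~ N(mu_i, sigma_i); Pf = Phi(-beta)."""
    a, mu, sigma = map(np.asarray, (a, mu, sigma))
    beta = (a @ mu + b) / np.sqrt(np.sum((a * sigma) ** 2))
    return beta, norm.cdf(-beta)

# Toy example: g = R - S with resistance R ~ N(120, 15) and load S ~ N(80, 20)
beta, pf = hasofer_lind_linear(a=[1.0, -1.0], b=0.0, mu=[120.0, 80.0], sigma=[15.0, 20.0])
print(beta, pf)   # beta = 1.6, Pf ~ 0.055
```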
Huang, Guan-Chin and 黃冠欽. "Excimer Laser Micromachining of 3D Microstructures Based on Method of Probability Distribution". Thesis, 2006. http://ndltd.ncl.edu.tw/handle/13637051303417213795.
National Cheng Kung University
Master's and doctoral program, Department of Mechanical Engineering
94
This study applies excimer laser micro-machining technology to the manufacture of 3D microstructures with continuous profiles. Two different excimer laser machining methods based on the idea of probability distribution are used to fabricate axially symmetric and non-axially symmetric microstructures. Both theoretical and experimental studies are carried out to verify the feasibility and machining accuracy of these excimer laser micromachining processes. First, an innovative "hole area modulation method" is applied to fabricate non-axially symmetric microstructures. We modify several parameters of the machining contour paths and the mask design process to minimize the roughness of the machined microstructures. The experimental results show that this method can successfully improve the surface roughness across different types and contour ranges of excimer laser machining. However, it still has some problems with machining accuracy, because the probability distribution of the masks is not continuous. If a high-precision mask alignment system can be designed, allowing non-inverse and inverse masks to be used together, this machining method will have great potential for manufacturing arbitrary non-axially symmetric micro-optical devices in the future. In order to manufacture axially symmetric spherical microlenses, the "excimer laser planetary contour scanning method" is adopted in this work. The basic idea is a specific mask design method combined with a sample rotation method, including both self-spinning and circular revolving, which provides a probability function for laser machining. The probability function created by the planetary scanning assures a continuous, smooth, and precise surface profile of the machined microstructures. The surface profiles are measured and compared with their theoretical counterparts; excellent agreement in both profile shape and dimensions is achieved. The machined microlenses will be combined with plastic optical fiber (POF) to verify their potential in fabricating micro-optic components such as refractive microlenses or other optical-fiber-related micro-devices.
Shen, Wei-min and 沈暐閔. "Development of a quantitative human-error-probability method based on fuzzy set theory". Thesis, 2009. http://ndltd.ncl.edu.tw/handle/90770927507729188482.
National Taiwan Ocean University
Department of Merchant Marine
97
Human errors occur wherever activities involve human beings, regardless of the domain or operation in which such performances are undertaken. Statistically, human error is one of the crucial factors contributing to accidents. Accordingly, the study of human error is a very important topic, and a variety of human reliability assessment (HRA) methods have been developed to tackle such problems. HRA approaches can be divided into three categories: those using a database, those using expert judgment, and those using quasi-expert judgment. Approaches in the first category apply a database of generic Human Error Probabilities (HEPs) to the specific circumstance being assessed. HEPs in the second category are obtained by asking experts directly about the scenario under consideration. Alternatively, some approaches in the third category generate HEPs by manipulating and interrogating a quasi-database combined with expert judgment. However, risk analysis based on the techniques in the second and third categories may involve a high level of uncertainty due to the lack of data, which may jeopardize the reliability of the results. Some research has been devoted to resolving this difficulty, and human error studies based on the fuzzy-number concept are one approach, owing to their ability to transform qualitative information into quantitative attributes when data are lacking or incomplete. However, a drawback arises in situations where some variables have sufficient data to evaluate risks while others do not, because the discriminating ability of studies based on the fuzzy-number concept is too low. In order to overcome this difficulty, this research establishes a framework equipped with a flexible data-acquisition method whose objective is to provide a high level of discriminating ability. This is achieved by first establishing membership functions for linguistic variables, secondly combining such variables using the fuzzy rule base method, thirdly obtaining crisp values through the defuzzification process, and finally transforming such crisp values into a Fuzzy Failure Rate (FFR). The methodology established is verified and examined using data from traditional HRA studies.
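The chain described at the end of this abstract (membership functions for linguistic variables, rule-based combination, defuzzification to a crisp value) can be sketched as follows. The triangular membership functions, the universe of discourse, and the firing strengths are made-up placeholders, and the final conversion of the crisp value into a Fuzzy Failure Rate is left out.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Hypothetical linguistic variable "error likelihood" on a 0-10 universe of discourse
u = np.linspace(0.0, 10.0, 1001)
low, medium, high = tri(u, 0, 2, 4), tri(u, 3, 5, 7), tri(u, 6, 8, 10)

# Mamdani-style aggregation: each output set is clipped by the firing strength of its
# rule (the strengths 0.2 / 0.7 / 0.3 stand in for evaluated rule antecedents).
fired = np.maximum.reduce([np.minimum(low, 0.2),
                           np.minimum(medium, 0.7),
                           np.minimum(high, 0.3)])

# Centroid defuzzification yields the crisp value that would feed the FFR conversion
crisp = float(np.sum(fired * u) / np.sum(fired))
print(round(crisp, 2))
```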
Hong, Chung-Ming and 洪崇銘. "A Cluster Group Method Based on the Priority Queue to Reduce Collision Probability for Wireless Sensor Network". Thesis, 2011. http://ndltd.ncl.edu.tw/handle/38194755213671413752.
National Chung Hsing University
Department of Computer Science and Engineering
99
In a widely used wireless sensor network (WSN) environment, when a node is in the busy state, an integer value is selected randomly from the contention window. When the back-off time calculated from this random value has counted down, the node tries to grab the wireless channel. Therefore, as long as nodes are in the busy state, they all contend for the channel under the same conditions, which is unfair: the various groups within the busy state carry different amounts of information and experience different degrees of congestion, so each group should be given a back-off time corresponding to its situation. In order to achieve this goal, we propose a mechanism for grouping back-off times and use it to reduce the probability of collision in the environment. In this paper, we first establish a multi-priority-queue environment. Sensor nodes assign different priorities to the collected environmental information according to its importance and store it in the corresponding priority queue. We then analyze all groups in the busy state and assign each a relative priority according to its amount of information. Based on group priority, we establish a grouping mechanism that sets the back-off time. This mechanism gives high-priority groups of nodes a greater chance of grabbing the channel faster than low-priority groups and can reduce the packet drop rate. We also derive a collision probability formula for the grouped and ungrouped cases. Finally, we construct a dual-priority-queue module in Matlab that incorporates the grouping mechanism. The experimental results show that our method not only gives each group an appropriate back-off time but also effectively reduces the collision probability between any node and external nodes. Since the collision probability is related to power consumption and throughput, reducing it also serves the purposes of power saving and throughput enhancement.
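The effect of splitting the contention window by priority group can be illustrated with a small Monte Carlo comparison: nodes either all draw back-off slots from the full window, or each priority group draws only from its own slice. The window size, group slices, node counts, and the uniform-draw model are illustrative assumptions and do not reproduce the collision probability formula derived in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)
cw = 64                                      # total contention window, in slots
slices = [(0, 16), (16, 40), (40, 64)]       # hypothetical per-priority slot ranges

def collision_rate(draw, trials=100_000):
    """Fraction of trials in which at least two busy nodes pick the same back-off slot."""
    hits = 0
    for _ in range(trials):
        slots = draw()
        hits += len(set(slots)) < len(slots)
    return hits / trials

# Ungrouped: six busy nodes all draw from the full window
ungrouped = collision_rate(lambda: list(rng.integers(0, cw, 6)))

# Grouped: two busy nodes per priority class, each drawing only from its own slice
grouped = collision_rate(lambda: [s for lo, hi in slices for s in rng.integers(lo, hi, 2)])

print(ungrouped, grouped)
```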
Chang, Hsun-Chen and 張恂禎. "On the Prediction Various Locations of Contaminant Sources in a Cleanroom with the Probability-based Inverse Method". Thesis, 2013. http://ndltd.ncl.edu.tw/handle/4483mp.
National Taipei University of Technology
Master's program, Department of Energy and Refrigerating Air-Conditioning Engineering
101
The studies of pollutant dispersion and spreading behavior in a cleanroom, whether experimental or numerical, are generally based on artificial emission sources. Detection of spreading pollutants in an operating cleanroom can easily be achieved using indoor air quality monitoring systems, but the reverse, source identification, cannot. Identification of a pollutant source is possible with an inverse numerical method. This study proposes a probability-based inverse method coupled with computational fluid dynamics (CFD), aiming to predict the pollutant source in an operating cleanroom with a unilateral recirculation airflow field, and compares the results with those obtained using the simulation model with an artificial source. The experiments were conducted in a cleanroom of the same size. Toluene was used as a tracer gas to simulate gas leakage in the fab, and PID sensors were used to measure the toluene concentration field; the collected data were then compared with the simulation results, and the agreement is quite good. By assessing the proposed probability weighting model, the location with the highest probability is found to be consistent with the preset location of the artificial pollution source.
Donde, Pratik Prakash. "LES/PDF approach for turbulent reacting flows". 2012. http://hdl.handle.net/2152/19481.
Weaver, George W. "Model based estimation of parameters of spatial populations from probability samples". Thesis, 1996. http://hdl.handle.net/1957/34124.
Graduation date: 1997
Upadhyay, Rochan Raj. "Simulation of population balance equations using quadrature based moment methods". Thesis, 2006. http://hdl.handle.net/2152/2943.
Texto completoCao, Jian. "Computation of High-Dimensional Multivariate Normal and Student-t Probabilities Based on Matrix Compression Schemes". Diss., 2020. http://hdl.handle.net/10754/662613.
Kraum, Martin [Verfasser]. "Fischer-Tropsch synthesis on supported cobalt based Catalysts : Influence of various preparation methods and supports on catalyst activity and chain growth probability / submitted by Martin Kraum". 1999. http://d-nb.info/959085181/34.
Yellamraju, Tarun. "n-TARP: A Random Projection based Method for Supervised and Unsupervised Machine Learning in High-dimensions with Application to Educational Data Analysis". Thesis, 2019.
Buscar texto completoBeisler, Matthias Werner. "Modelling of input data uncertainty based on random set theory for evaluation of the financial feasibility for hydropower projects". Doctoral thesis, 2010. https://tubaf.qucosa.de/id/qucosa%3A22775.