Dissertations on the topic "Bayesian interpretation"

To view other types of publications on this topic, follow the link: Bayesian interpretation.

Format your source in APA, MLA, Chicago, Harvard, and other citation styles


Consult the top 17 dissertations for your research on the topic "Bayesian interpretation".

Next to every work in the list of references there is an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read the abstract of the work online, provided these details are available in the record's metadata.

Browse dissertations in a wide variety of disciplines and compile an accurate bibliography.

1

Christen, José Andrés. "Bayesian interpretation of radiocarbon results." Thesis, University of Nottingham, 1994. http://eprints.nottingham.ac.uk/11035/.

Full text source
Abstract:
Over the last thirty years radiocarbon dating has been widely used in archaeology and related fields to address a wide range of chronological questions. Because of some inherent stochastic factors of a complex nature, radiocarbon dating presents a rich source of challenging statistical problems. The chronological questions posed commonly involve the interpretation of groups of radiocarbon determinations, and often substantial amounts of a priori information are available. The statistical techniques used until very recently could only deal with the analysis of one determination at a time, and no prior information could be included in the analysis. However, over the last few years some problems have been successfully tackled using the Bayesian paradigm. In this thesis we expand that work and develop a general statistical framework for the Bayesian interpretation of radiocarbon determinations. Firstly we consider the problem of radiocarbon calibration and develop a novel approach. Secondly we develop a statistical framework which permits the inclusion of prior archaeological knowledge and illustrate its use with a wide range of examples. We discuss various generic problems, including replication, summarisation, floating chronologies and archaeological phase structures. The techniques used to obtain the posterior distributions of interest are numerical and, in most cases, we have used Markov chain Monte Carlo (MCMC) methods. We also discuss the sampling routines needed for the implementation of the MCMC methods used in our examples. Thirdly we address the very important problem of outliers in radiocarbon dating and develop an original methodology for the identification of outliers in sets of radiocarbon determinations. We show how our framework can be extended to permit the identification of outliers. Finally we apply this extended framework to the analysis of a substantial archaeological dating problem.
Styles: APA, Harvard, Vancouver, ISO, etc.
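As an illustration of the core computation behind Bayesian calibration (not Christen's actual phase models), here is a toy Metropolis sampler for the posterior of a single calendar date given one determination and a hypothetical, linearised calibration curve; every number is invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linearised calibration curve: radiocarbon age mu(theta) as a
# function of calendar age theta (both in years BP). Real curves (e.g. IntCal)
# are wiggly lookup tables; this straight line is only a stand-in.
def calib_mu(theta):
    return 0.95 * theta + 100.0

sigma_curve = 20.0          # assumed calibration-curve uncertainty
y, sigma_y = 3000.0, 30.0   # invented determination: 3000 +/- 30 14C yr BP

def log_post(theta):
    # Flat prior on [0, 10000] cal BP; Gaussian likelihood with measurement
    # and curve errors added in quadrature.
    if not (0.0 < theta < 10000.0):
        return -np.inf
    var = sigma_y**2 + sigma_curve**2
    return -0.5 * (y - calib_mu(theta))**2 / var

# Random-walk Metropolis over the calendar date.
theta, samples = 3000.0, []
lp = log_post(theta)
for i in range(20000):
    prop = theta + rng.normal(0.0, 50.0)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    if i >= 5000:               # discard burn-in
        samples.append(theta)

samples = np.array(samples)
print(f"posterior mean {samples.mean():.0f} cal BP, "
      f"95% interval {np.percentile(samples, [2.5, 97.5]).round(0)}")
```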
2

Calder, Brian. "Bayesian spatial models for SONAR image interpretation." Thesis, Heriot-Watt University, 1997. http://hdl.handle.net/10399/1249.

Full text source
Abstract:
This thesis is concerned with the utilisation of spatial information in the processing of high-frequency sidescan SONAR imagery, and particularly with how such information can be used in developing techniques to assist in mapping functions. Survey applications aim to generate maps of the seabed, but are time-consuming and expensive; automatic processing is required to improve efficiency. Current techniques have had some success, but utilise little of the available spatial information. Previously, inclusion of such knowledge was prohibitively expensive; recent improvements in numerical simulation techniques have reduced the costs involved. This thesis attempts to exploit these improvements in a method for including spatial information in SONAR processing and, more generally, in image and signal analysis. Bayesian techniques for the inclusion of prior knowledge and the structuring of complex problems are developed and applied to problems of texture segmentation, object detection and parameter extraction. It is shown through experiments on ground-truth and real datasets that the inclusion of spatial context can be very effective in improving poor techniques or, conversely, in allowing simpler techniques to be used with the same objective outcome (with obvious computational advantages). The thesis also considers some of the implementation problems with the techniques used, and develops simple modifications to improve common algorithms.
Styles: APA, Harvard, Vancouver, ISO, etc.
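The spatial priors referred to here are Markov random fields. A standard minimal illustration of the idea (not Calder's actual SONAR model) is binary segmentation of a noisy synthetic image under an Ising prior, optimised by iterated conditional modes:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic scene: a bright square on a dark background, heavy noise.
truth = np.zeros((64, 64), dtype=int); truth[20:44, 20:44] = 1
img = truth + rng.normal(0.0, 0.8, truth.shape)

beta, sigma, mu = 1.5, 0.8, (0.0, 1.0)   # assumed prior weight, noise, class means
labels = (img > 0.5).astype(int)         # pixelwise maximum-likelihood start

# Iterated conditional modes: each pixel takes the label minimising
# likelihood misfit + beta * (number of disagreeing 4-neighbours).
for sweep in range(10):
    for i in range(1, 63):
        for j in range(1, 63):
            n1 = labels[i-1, j] + labels[i+1, j] + labels[i, j-1] + labels[i, j+1]
            cost0 = (img[i, j] - mu[0])**2 / (2 * sigma**2) + beta * n1
            cost1 = (img[i, j] - mu[1])**2 / (2 * sigma**2) + beta * (4 - n1)
            labels[i, j] = int(cost1 < cost0)

print("pixel accuracy without prior:", ((img > 0.5) == truth).mean())
print("pixel accuracy with MRF prior:", (labels == truth).mean())
```

The contrast between the two printed accuracies is the abstract's point in miniature: the same weak likelihood does much better once spatial context is included.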
3

Maimon, Geva. "A Bayesian approach to the statistical interpretation of DNA evidence." Thesis, McGill University, 2010. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=92221.

Full text source
Abstract:
This dissertation sets forth a foundation for a continuous model for the interpretation of DNA mixture evidence. We take a new approach to modelling electropherogram data by modelling the actual electropherogram as a curve rather than modelling the allelic peak areas under the curve. This shift allows us to retain all the data available and to bypass the approximation of peak areas by GeneMapper® (Applied Biosystems, 2003). The two problems associated with the use of this programme - prohibitive costs and patented processes - are thus avoided.
To establish a model for electropherogram data, we explore two Bayesian wavelet approaches to modelling functions (Chipman et al., 1997; M. Clyde et al., 1998) as well as a Bayesian Adaptive Regression Splines approach (DiMatteo et al., 2001). Furthermore, we establish our own genotyping algorithm, once again circumventing the need for GeneMapper®, and obtain posterior probabilities for the resulting genotypes.
With a model in place for single-source DNA samples, we develop an algorithm that deconvolves a two-person mixture into its separate components and provides the posterior probabilities for the resulting genotype combinations.
In addition, because of the widely recognized need to perform further research on continuous models in mixture interpretation and the difficulty in obtaining the necessary data to do so (due to privacy laws and laboratory restrictions), a tool for simulating realistic data is of the utmost importance. PCRSIM (Gill et al., 2005) is the most popular simulation software for this purpose. We propose a method for refining the parameter estimates used in PCRSIM in order to simulate more accurate data.
Styles: APA, Harvard, Vancouver, ISO, etc.
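As a much-reduced sketch of the genotyping step described above (not the thesis's curve-based model), the following hypothetical example applies Bayes' rule over three candidate single-source genotypes given two observed allelic peak heights; the alleles, heights, noise level and priors are all invented.

```python
import numpy as np

# Observed (invented) peak heights at alleles 12 and 14 of one locus.
obs = {12: 950.0, 14: 880.0}
sigma = 120.0   # assumed peak-height noise (RFU)

# Candidate genotypes and the mean heights they predict: a heterozygote
# puts roughly half the signal on each allele, a homozygote all on one.
total = 1900.0
candidates = {
    (12, 14): {12: total / 2, 14: total / 2},
    (12, 12): {12: total, 14: 0.0},
    (14, 14): {12: 0.0, 14: total},
}
prior = {g: 1 / 3 for g in candidates}   # flat prior, for illustration only

def loglike(pred):
    return sum(-0.5 * (obs[a] - pred[a])**2 / sigma**2 for a in obs)

log_post = {g: np.log(prior[g]) + loglike(p) for g, p in candidates.items()}
m = max(log_post.values())
z = sum(np.exp(v - m) for v in log_post.values())
for g, v in log_post.items():
    print(g, f"posterior = {np.exp(v - m) / z:.3g}")
```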
4

Haan, Benjamin J. "Decomposing Bayesian network representations of distributed sensor interpretation problems using weighted average conditional mutual information /." Available to subscribers only, 2007. http://proquest.umi.com/pqdweb?did=1421626381&sid=1&Fmt=2&clientId=1509&RQT=309&VName=PQD.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
5

Bringmann, Oliver. "Symbolische Interpretation Technischer Zeichnungen." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2003. http://nbn-resolving.de/urn:nbn:de:swb:14-1045648731734-96098.

Full text source
Abstract:
Scanned and vectorised technical drawings are automatically migrated into a high-quality data structure using a network of models. The models describe the contents of the drawings hierarchically and declaratively. Models for individual components of the drawings can be developed pairwise independently, which makes even very complex drawing classes such as electrical supply networks or building plans accessible. The models are used by the new, so-called Y-algorithm: hypotheses about the interpretation of local drawing contents are generated hierarchically. Conflicts that arise when competing models are applied are recorded. Using this notion of conflict, consistent interpretations of a complete drawing can be defined abstractly and determined during the analysis of a concrete drawing. A probability-based quality measure scores each of these alternative global interpretations. Finding an interpretation that is optimal with respect to this measure is an NP-hard problem; a branch-and-bound algorithm provides an adequate solution.
Styles: APA, Harvard, Vancouver, ISO, etc.
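The translated abstract ends with an NP-hard search for an optimal consistent interpretation, solved by branch and bound. A generic sketch of that search pattern, on invented hypotheses with pairwise conflicts (not the Y-algorithm itself), might look like:

```python
# Hypotheses with (invented) quality scores; conflicting pairs may not
# both appear in one interpretation. Depth-first branch and bound.
scores = {"wall": 4.0, "door": 3.0, "window": 2.5, "pipe": 1.5}
conflicts = {("door", "window"), ("wall", "pipe")}

def consistent(chosen, h):
    return all((h, c) not in conflicts and (c, h) not in conflicts for c in chosen)

best = (float("-inf"), set())

def search(items, chosen, score):
    global best
    if score > best[0]:
        best = (score, set(chosen))
    if not items:
        return
    # Bound: even accepting every remaining hypothesis cannot beat the best.
    if score + sum(scores[h] for h in items) <= best[0]:
        return
    h, rest = items[0], items[1:]
    if consistent(chosen, h):
        search(rest, chosen | {h}, score + scores[h])   # branch: include h
    search(rest, chosen, score)                          # branch: exclude h

search(sorted(scores, key=scores.get, reverse=True), set(), 0.0)
print("best interpretation:", best[1], "score:", best[0])
```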
6

Bringmann, Oliver. "Symbolische Interpretation Technischer Zeichnungen." Doctoral thesis, Technische Universität Dresden, 2001. https://tud.qucosa.de/id/qucosa%3A24202.

Full text source
Abstract:
Scanned and vectorised technical drawings are automatically migrated into a high-quality data structure using a network of models. The models describe the contents of the drawings hierarchically and declaratively. Models for individual components of the drawings can be developed pairwise independently, which makes even very complex drawing classes such as electrical supply networks or building plans accessible. The models are used by the new, so-called Y-algorithm: hypotheses about the interpretation of local drawing contents are generated hierarchically. Conflicts that arise when competing models are applied are recorded. Using this notion of conflict, consistent interpretations of a complete drawing can be defined abstractly and determined during the analysis of a concrete drawing. A probability-based quality measure scores each of these alternative global interpretations. Finding an interpretation that is optimal with respect to this measure is an NP-hard problem; a branch-and-bound algorithm provides an adequate solution.
Styles: APA, Harvard, Vancouver, ISO, etc.
7

LeSage, James P., and Manfred M. Fischer. "Spatial Growth Regressions: Model Specification, Estimation and Interpretation." WU Vienna University of Economics and Business, 2007. http://epub.wu.ac.at/3968/1/SSRN%2Did980965.pdf.

Full text source
Abstract:
This paper uses Bayesian model comparison methods to simultaneously specify both the spatial weight structure and explanatory variables for a spatial growth regression involving 255 NUTS 2 regions across 25 European countries. In addition, a correct interpretation of the spatial regression parameter estimates that takes into account the simultaneous feedback nature of the spatial autoregressive model is provided. Our findings indicate that incorporating model uncertainty in conjunction with appropriate parameter interpretation decreased the importance of explanatory variables traditionally thought to exert an important influence on regional income growth rates. (authors' abstract)
Styles: APA, Harvard, Vancouver, ISO, etc.
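The "correct interpretation" at issue is the partitioning of spatial autoregressive impacts into direct and indirect (spillover) effects via the multiplier matrix (I - rho*W)^(-1). A minimal sketch with an invented weight matrix and assumed estimates:

```python
import numpy as np

# Invented row-standardised spatial weight matrix for 4 regions.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
W /= W.sum(axis=1, keepdims=True)

rho, beta_k = 0.6, 2.0   # assumed SAR estimates for one regressor

# SAR model y = rho*W*y + X*beta + eps, so dy/dx_k = (I - rho*W)^(-1) * beta_k:
# a change in one region's regressor feeds back through all regions.
S = np.linalg.inv(np.eye(4) - rho * W) * beta_k

direct = np.mean(np.diag(S))      # average own-region impact
total = np.mean(S.sum(axis=1))    # average total impact per region
print(f"direct {direct:.3f}, indirect {total - direct:.3f}, total {total:.3f}")
```

Note that the direct effect exceeds beta_k itself because of feedback loops, which is precisely why reading the raw coefficient as a marginal effect misleads.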
8

Klukowski, Piotr. "Nuclear magnetic resonance spectroscopy interpretation for protein modeling using computer vision and probabilistic graphical models." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4720.

Full text source
Abstract:
The dynamic development of nuclear magnetic resonance (NMR) spectroscopy has allowed fast acquisition of the experimental data that determine the structure and dynamics of macromolecules. Nevertheless, for lack of appropriate computational methods, NMR spectra are still analyzed manually by researchers, which takes weeks or years depending on protein complexity. Automation of this process is therefore highly desirable and can significantly reduce the time needed to solve a protein structure. This work presents a new approach to automated analysis of three-dimensional protein NMR spectra, based on Histograms of Oriented Gradients and a Bayesian network, which had not previously been applied in this context. The proposed method was evaluated on benchmark data established by manually labelling 99 spectroscopic images taken from 6 different NMR experiments. Subsequent validation was carried out using spectra of the upstream-of-N-ras protein, and a three-dimensional structure of that protein was calculated with the proposed method. Comparison with the reference structure from the Protein Data Bank reveals no significant differences, demonstrating that the proposed method can be used in practice in NMR laboratories.
Styles: APA, Harvard, Vancouver, ISO, etc.
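The thesis pairs Histogram of Oriented Gradients features with a Bayesian network. The toy below substitutes a hand-rolled orientation histogram and a Gaussian naive Bayes classifier on synthetic peak/no-peak patches; the patch size, noise level and features are all invented, and the point is only the "classify candidate peaks from local image descriptors" idea:

```python
import numpy as np

rng = np.random.default_rng(2)

def orientation_histogram(patch, bins=8):
    # Tiny stand-in for HOG: a magnitude-weighted histogram of gradient
    # orientations over the whole patch, normalised to sum to one.
    gy, gx = np.gradient(patch)
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-9)

def make_patch(peak):
    # Synthetic 16x16 spectrum window: Gaussian blob (a peak) or pure noise.
    x = np.linspace(-1, 1, 16)
    g = np.exp(-(x[:, None]**2 + x[None, :]**2) / 0.1) if peak else 0.0
    return g + rng.normal(0, 0.15, (16, 16))

# Train a Gaussian naive Bayes on synthetic peak / no-peak patches.
X = np.array([orientation_histogram(make_patch(k)) for k in (1, 0) for _ in range(200)])
y = np.array([1] * 200 + [0] * 200)
mu = np.array([X[y == c].mean(0) for c in (0, 1)])
var = np.array([X[y == c].var(0) + 1e-6 for c in (0, 1)])

def log_posterior(f):
    # Flat class prior; independent Gaussian per feature dimension.
    ll = -0.5 * ((f - mu)**2 / var + np.log(2 * np.pi * var)).sum(axis=1)
    return ll - np.logaddexp(*ll)

test = orientation_histogram(make_patch(peak=True))
print("P(peak | features) =", np.exp(log_posterior(test))[1])
```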
9

Button, Zach. "The application and interpretation of the two-parameter item response model in the context of replicated preference testing." Kansas State University, 2015. http://hdl.handle.net/2097/20113.

Full text source
Abstract:
Master of Science, Statistics, Suzanne Dubnicka
Preference testing is a popular method of determining consumer preferences for a variety of products in areas such as sensory analysis, animal welfare, and pharmacology. However, many prominent models for this type of data do not allow different probabilities of preferring one product over the other for each individual consumer, called overdispersion, which intuitively exists in real-world situations. We investigate the Two-Parameter variation of the Item Response Model (IRM) in the context of replicated preference testing. Because the IRM is most commonly applied to multiple-choice testing, our primary focus is the interpretation of the model parameters with respect to preference testing and the evaluation of the model’s usefulness in this context. We fit a Bayesian version of the Two-Parameter Probit IRM (2PP) to two real-world datasets, Raisin Bran and Cola, as well as five hypothetical datasets constructed with specific parameter properties in mind. The values of the parameters are sampled via the Gibbs Sampler and examined using various plots of the posterior distributions. Next, several different models and prior distribution specifications are compared over the Raisin Bran and Cola datasets using the Deviance Information Criterion (DIC). The Two-Parameter IRM is a useful tool in the context of replicated preference testing, due to its ability to accommodate overdispersion, its intuitive interpretation, and its flexibility in terms of parameterization, link function, and prior specification. However, we find that this model brings computational difficulties in certain situations, some of which require creative solutions. Although the IRM can be interpreted for replicated preference testing scenarios, this data typically contains few replications, while the model was designed for exams with many items. We conclude that the IRM may provide little evidence for marketing decisions, and it is better-suited for exploring the nature of consumer preferences early in product development.
Styles: APA, Harvard, Vancouver, ISO, etc.
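A hedged sketch of the data-generating side of the Two-Parameter Probit (2PP) model in a replicated preference setting: consumer i prefers one product on replication j with probability Phi(a_j * theta_i - b_j), so consumer-level traits theta_i induce exactly the overdispersion the abstract mentions. All parameter values below are invented.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

n_consumers, n_reps = 500, 4
theta = rng.normal(0.0, 1.0, n_consumers)   # latent consumer preference traits
a = np.array([1.2, 0.8, 1.0, 1.5])          # invented discrimination parameters
b = np.array([-0.2, 0.1, 0.0, 0.3])         # invented difficulty parameters

# 2PP: P(prefer product A on replication j | theta_i) = Phi(a_j*theta_i - b_j).
p = norm.cdf(a * theta[:, None] - b)
prefs = rng.uniform(size=p.shape) < p
counts = prefs.sum(axis=1)                  # preferences per consumer, out of 4

# Overdispersion check: a binomial model with one shared probability
# understates the spread that the consumer-level traits create.
p_bar = counts.mean() / n_reps
print("binomial variance:", n_reps * p_bar * (1 - p_bar))
print("observed variance:", counts.var())
```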
10

Li, Bin. "Statistical learning and predictive modeling in data mining." Columbus, Ohio : Ohio State University, 2006. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1155058111.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
11

Hörberg, Thomas. "Probabilistic and Prominence-driven Incremental Argument Interpretation in Swedish." Doctoral thesis, Stockholms universitet, Institutionen för lingvistik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-129763.

Full text source
Abstract:
This dissertation investigates how grammatical functions in transitive sentences (i.e., `subject' and `direct object') are distributed in written Swedish discourse with respect to morphosyntactic as well as semantic and referential (i.e., prominence-based) information. It also investigates how assignment of grammatical functions during on-line comprehension of transitive sentences in Swedish is influenced by interactions between morphosyntactic and prominence-based information. In the dissertation, grammatical functions are assumed to express role-semantic (e.g., Actor and Undergoer) and discourse-pragmatic (e.g., Topic and Focus) functions of NP arguments. Grammatical functions correlate with prominence-based information that is associated with these functions (e.g., animacy and definiteness). Because of these correlations, both prominence-based and morphosyntactic information are assumed to serve as argument interpretation cues during on-line comprehension. These cues are utilized in a probabilistic fashion; their weightings, interplay and availability are reflected in their distribution in language use, as shown in corpus data. The dissertation investigates these assumptions by using various methods in a triangulating fashion. The first contribution of the dissertation is an ERP (event-related brain potentials) experiment that investigates the ERP response to grammatical function reanalysis, i.e., a revision of a tentative grammatical function assignment, during on-line comprehension of transitive sentences. Grammatical function reanalysis engenders a response that correlates with the (re-)assignment of thematic roles to the NP arguments. This suggests that the comprehension of grammatical functions involves assigning role-semantic functions to the NPs. The second contribution is a corpus study that investigates the distribution of prominence-based, verb-semantic and morphosyntactic features in transitive sentences in written discourse. The study finds that overt morphosyntactic information about grammatical functions is used more frequently when the grammatical functions cannot be determined on the basis of word order or animacy. This suggests that writers are inclined to accommodate the understanding of their recipients by more often providing formal markers of grammatical functions in potentially ambiguous sentences. The study also finds that prominence features and their interactions with verb-semantic features are systematically distributed across grammatical functions and therefore can predict these functions with a high degree of confidence. The third contribution consists of three computational models of incremental grammatical function assignment. These models are based upon the distribution of argument interpretation cues in written discourse. They predict processing difficulties during grammatical function assignment in terms of on-line change in the expectation of different grammatical function assignments over the presentation of sentence constituents. The most prominent model predictions are qualitatively consistent with reading times in a self-paced reading experiment on Swedish transitive sentences. These findings indicate that grammatical function assignment draws upon statistical regularities in the distribution of morphosyntactic and prominence-based information in language use. Processing difficulties in the comprehension of Swedish transitive sentences can therefore be predicted on the basis of corpus distributions.
Styles: APA, Harvard, Vancouver, ISO, etc.
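A schematic of the probabilistic cue-combination idea, not the dissertation's fitted models: naive-Bayes updating of the expectation that the first NP is the subject, as cues arrive incrementally. The cue likelihoods and base rate are invented.

```python
import numpy as np

# Invented likelihoods P(cue | order) for subject-first (SO) vs
# object-first (OS) transitive sentences.
cue_likelihoods = {
    "NP1_animate":     (0.80, 0.45),
    "NP1_definite":    (0.75, 0.55),
    "no_case_marking": (0.60, 0.40),
}

log_odds = np.log(0.85 / 0.15)   # assumed base rate favouring SO order
for cue, (p_so, p_os) in cue_likelihoods.items():
    log_odds += np.log(p_so / p_os)        # incremental Bayesian update
    p = 1 / (1 + np.exp(-log_odds))
    print(f"after {cue:16s} P(SO) = {p:.3f}")
```

A cue pointing the other way would lower P(SO); a sharp drop of this kind is the model-side analogue of the reanalysis cost measured in the ERP experiment.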
12

French, J. "Transfers of gunshot residue (GSR) to hands : an experimental study of mechanisms of transfer and deposition carried out using SEM-EDX, with explorations of the implications for forensic protocol and the application of Bayesian Networks to interpretation." Thesis, University College London (University of London), 2013. http://discovery.ucl.ac.uk/1417086/.

Full text source
Abstract:
Gunshot residue (GSR) is produced during a firearm discharge and its recovery from the hands of a suspect may be used to support an inference that the suspect discharged a firearm. Various mechanisms of GSR transfer and deposition involving the hands of subjects were studied through a series of experimental scenarios that were intended to mimic real-world forensic situations. Samples were analysed using SEM-EDX with an automated search and detection package (INCAGSR, Oxford Instruments, U.K.). The results demonstrate the possibility of recovering considerable quantities of GSR from the hands of subjects as a result of a secondary transfer via a handshake with a shooter, or through handling a recently discharged firearm. As many as 129 particles were recovered from a handshake recipient. Additionally, GSR particles were found to undergo tertiary transfer following successive handshakes, while the possibility of GSR deposition on the hands of a bystander was confirmed. Particle size analysis revealed that very large (>50µm and >100µm) particles may undergo secondary transfer. The implications of these findings for forensic investigations are considered, particularly for interpreting the presence of GSR under competing activity level propositions about its deposition and the actions of the suspect. Bayesian Networks are inferential tools that are increasingly being employed in the interpretation of forensic evidence. Using the empirical data derived during the experimentation, the utility of Bayesian Networks for reasoning about mechanisms of GSR deposition is demonstrated. Further research aimed at unlocking the interpretative potential of GSR through empirical research and establishing the use of Bayesian Networks in forensic applications is recommended. It is anticipated that this emphasis on empirical support and probabilistic interpretation, in combination with the findings of this study, will strengthen the scientific basis of inferences made about GSR evidence and contribute to the accurate interpretation of evidence in legal settings.
Styles: APA, Harvard, Vancouver, ISO, etc.
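A minimal three-node Bayesian network in the spirit described here, evaluated by brute-force enumeration; the conditional probabilities are invented placeholders, not the thesis's empirical values:

```python
from itertools import product

# Invented CPTs: P(fired), P(handshake with a shooter),
# P(GSR found on hands | fired, handshake).
p_fired = 0.10
p_shake = 0.20
p_gsr = {(True, True): 0.99, (True, False): 0.95,
         (False, True): 0.60, (False, False): 0.01}

# Enumerate the joint distribution and condition on GSR being found.
num = den = 0.0
for fired, shake in product([True, False], repeat=2):
    pf = p_fired if fired else 1 - p_fired
    ps = p_shake if shake else 1 - p_shake
    joint = pf * ps * p_gsr[(fired, shake)]
    den += joint
    if fired:
        num += joint
print(f"P(fired | GSR on hands) = {num / den:.3f}")
```

Raising the secondary-transfer probability (the thesis's empirical contribution) visibly weakens the inference from GSR presence to discharge, which is the activity-level point the abstract makes.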
13

Jinn, Nicole Mee-Hyaang. "Toward Error-Statistical Principles of Evidence in Statistical Inference." Thesis, Virginia Tech, 2014. http://hdl.handle.net/10919/48420.

Full text source
Abstract:
The context for this research is statistical inference, the process of making predictions or inferences about a population from observation and analyses of a sample. In this context, many researchers want to grasp what inferences can be made that are valid, in the sense of being able to uphold or justify by argument or evidence. Another pressing question among users of statistical methods is: how can spurious relationships be distinguished from genuine ones? Underlying both of these issues is the concept of evidence. In response to these (and similar) questions, two questions I work on in this essay are: (1) what is a genuine principle of evidence? and (2) do error probabilities have more than a long-run role? Concisely, I propose that felicitous genuine principles of evidence should provide concrete guidelines on precisely how to examine error probabilities, with respect to a test's aptitude for unmasking pertinent errors, which leads to establishing sound interpretations of results from statistical techniques. The starting point for my definition of genuine principles of evidence is Allan Birnbaum's confidence concept, an attempt to control misleading interpretations. However, Birnbaum's confidence concept is inadequate for interpreting statistical evidence, because using only pre-data error probabilities would not pick up on a test's ability to detect a discrepancy of interest (e.g., "even if the discrepancy exists") with respect to the actual outcome. Instead, I argue that Deborah Mayo's severity assessment is the most suitable characterization of evidence based on my definition of genuine principles of evidence.
Master of Arts
Styles: APA, Harvard, Vancouver, ISO, etc.
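Mayo's severity assessment, endorsed in the abstract, has a simple closed form for a one-sided Normal test: after a rejection, SEV(mu > mu1) is the probability the test would have produced a less extreme result were mu equal to mu1. A sketch with invented numbers:

```python
from math import sqrt
from scipy.stats import norm

# One-sided test H0: mu <= 0 vs H1: mu > 0, sigma known (invented numbers).
sigma, n, xbar_obs = 2.0, 25, 0.9

def severity(mu1):
    # SEV(mu > mu1) = P(Xbar <= xbar_obs ; mu = mu1).
    return norm.cdf((xbar_obs - mu1) / (sigma / sqrt(n)))

for mu1 in (0.0, 0.4, 0.8, 1.2):
    print(f"SEV(mu > {mu1}) = {severity(mu1):.3f}")
```

The output shows the post-data grading the abstract argues for: the claim mu > 0 passes with high severity, while the stronger claim mu > 1.2 does not, even though both rest on the same rejection.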
14

Maimon, Geva. "A Bayesian approach to the statistical interpretation of DNA evidence." 2009. http://digitool.Library.McGill.CA:8881/R/?func=dbin-jump-full&object_id=82284.

Full text source
Styles: APA, Harvard, Vancouver, ISO, etc.
15

Rhodes, E. J., Ramsey C. Bronk, Zoe Outram, Catherine M. Batt, Laura H. Willis, Stephen J. Dockrill, and Julie M. Bond. "Bayesian methods applied to the interpretation of multiple OSL dates: high precision sediment ages from Old Scatness Broch excavations, Shetland Isles." 2009. http://hdl.handle.net/10454/3637.

Full text source
Abstract:
In this paper, we illustrate the ways in which Bayesian statistical techniques may be used to enhance chronological resolution when applied to a series of OSL sediment dates. Such application can achieve an optimal chronological model by incorporating stratigraphic and age information. The application to luminescence data is not straightforward owing to the sources of uncertainty in each date, and here we present one solution to overcoming these difficulties, and introduce the concept of "unshared systematic" errors. Using OSL sediment dates from the site of Old Scatness Broch, Shetland Isles, UK, many measured with a high degree of precision, we illustrate some of the ways in which Bayesian techniques may be applied, as a tool for assessing systematic errors when combined with independent chronological information, and to determine the optimum chronological information for specific events and contexts. We provide a detailed procedure for the application of Bayesian methods to OSL dates using the widely available radiocarbon calibration programme OxCal.
Styles: APA, Harvard, Vancouver, ISO, etc.
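One core ingredient, the stratigraphic prior that ages must be ordered, can be sketched as a Metropolis sampler whose prior rejects out-of-order ages, with a single shared shift standing in for a systematic error component (a simplification of the paper's more careful shared/unshared error bookkeeping). All values are invented.

```python
import numpy as np

rng = np.random.default_rng(4)

# Invented OSL dates (ka) with independent errors, listed in stratigraphic
# order: deeper samples must be older. Note dates 2 and 3 violate the order.
dates = np.array([2.1, 2.4, 2.3, 3.0])
rand_err = np.array([0.15, 0.20, 0.15, 0.25])
sys_err = 0.10   # shared error: shifts every date in this suite together

def log_post(ages, shift):
    if np.any(np.diff(ages) < 0):      # stratigraphic prior: ordered ages
        return -np.inf
    ll = -0.5 * np.sum((dates + shift - ages)**2 / rand_err**2)
    return ll - 0.5 * shift**2 / sys_err**2

ages, shift = np.sort(dates), 0.0
lp, keep = log_post(ages, shift), []
for i in range(40000):
    prop_a = ages + rng.normal(0, 0.05, 4)
    prop_s = shift + rng.normal(0, 0.02)
    lp_new = log_post(prop_a, prop_s)
    if np.log(rng.uniform()) < lp_new - lp:
        ages, shift, lp = prop_a, prop_s, lp_new
    if i > 10000:
        keep.append(ages.copy())

post = np.array(keep)
print("posterior mean ages (ka):", post.mean(axis=0).round(2))
```

The ordering prior pulls the two inverted central dates back into sequence, which is the precision gain the abstract describes.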
16

Sayyafzadeh, Mohammad. "Uncertainty reduction in reservoir characterisation through inverse modelling of dynamic data: an evolutionary computation approach." Thesis, 2013. http://hdl.handle.net/2440/81813.

Full text source
Abstract:
Precise reservoir characterisation is the basis for reliable flow performance predictions and unequivocal decision making concerning field development. History matching is an indispensable phase of reservoir characterisation in which the flow performance history is integrated into the initially constructed reservoir model to reduce uncertainties. It is a computationally intensive nonlinear inverse problem and typically suffers from ill-posedness. Developing an efficient automatic history matching framework is the core goal of almost all studies on this subject. To overcome some of the existing challenges in history matching, this thesis introduces new techniques which are mostly based on evolutionary computation concepts. In order to examine the techniques, the foundations of an automatic history matching framework are first developed, in which a reservoir simulator (ECLIPSE) is coupled with a programming language (MATLAB). The introduced methods, along with a number of conventional methods, are then installed in the framework and compared with each other using different case studies. Thus far, numerous optimisation algorithms have been studied for history matching problems to conduct the calibration step accurately and efficiently. In this thesis, the application of a recently developed algorithm, artificial bee colony (ABC), is assessed for the first time. It is compared with three conventional optimisers, Levenberg-Marquardt, genetic algorithm and simulated annealing, using a synthetic reservoir model. The comparison indicates that ABC can deliver better results and is insensitive to the landscape shape of the problem, most likely because it strikes a suitable balance between exploration and exploitation in the search. Of course, similar to all stochastic optimisers, its main drawbacks are its computational expense and its inefficiency in high-dimensional problems. Fitness approximation (proxy-modelling) approaches are common methods for reducing computational costs. All of the fitness approximation methods applied to history-matching problems so far use a similar approach, called uncontrolled fitness approximation, which has been shown to be capable of misleading the optimisation direction. To prevent this issue, a new fitness approximation method is developed in which a model management (evolution-control) technique is included. The results of the controlled (proposed) approach are compared with those of the conventional one using a case study (the PUNQ-S3 model), and it is shown that the computation can be reduced by up to 75% with the proposed method. Proxy-modelling methods should be applied when the problem is not high-dimensional. None of the current forms of the applied stochastic optimisers can deal with high-dimensional problems efficiently, so they must be applied in conjunction with a reparameterisation technique, which introduces modelling errors. On the other hand, gradient-based optimisers may be trapped in a local minimum, due to the nonlinearity of the problem. In this thesis, an inventive stochastic algorithm for high-dimensional problems is therefore developed based on wavelet image-fusion and evolutionary algorithm concepts.
The developed algorithm is compared with six algorithms (genetic algorithm with pilot-point reparameterisation, BFGS with zonation reparameterisation, BFGS with spectral-decomposition reparameterisation, artificial bee colony, genetic algorithm and BFGS in full parameterisation) using two different case studies; notably, the best results are obtained by the introduced method. Furthermore, achieving high-quality history-matched models with any of these methods depends on the reliability of the objective function formulation, the most widespread formulation being the Bayesian framework. Because of complexities in quantifying the reliability of measurements, of the modelling and of the prior model, the weighting factors in the objective function may themselves be uncertain. The influence of these uncertainties on the outcome of history matching is studied in this thesis, and an approach based on Pareto optimisation (a multi-objective genetic algorithm) is developed to deal with this issue. The approach is compared with a conventional (random selection) one, and the results confirm that a large amount of computation can be saved by the Pareto approach. In the last part of this thesis, a new analytical simulator is developed using the transfer function (TF) approach. The developed method does not require expensive history matching, and it can be used when a quick forecast is sought and/or history matching of a grid-based reservoir simulation is impractical. In the developed method, a reservoir is assumed to consist of a combination of TFs, and the order and arrangement of the TFs are chosen based on the physical conditions of the reservoir, ascertained by examining several cases. The results show good agreement with those obtained from grid-based simulators. As an additional piece of work, the optimal infill drilling plan is estimated for a coal seam gas reservoir (a semi-synthetic model constructed based on the Tiffany unit in the San Juan basin) using the developed framework, with the objective function and the decision variables set to the net present value and the locations of the infill wells, respectively.
Thesis (Ph.D.) -- University of Adelaide, Australian School of Petroleum, 2013
Styles: APA, Harvard, Vancouver, ISO, etc.
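A compact sketch of the artificial bee colony optimiser assessed in the thesis, run on a stand-in two-parameter misfit function rather than a coupled reservoir simulator; the colony size, abandonment limit and bounds are invented defaults.

```python
import numpy as np

rng = np.random.default_rng(5)

def misfit(x):
    # Stand-in for a history-matching objective (true optimum at (3, -2)).
    return (x[0] - 3.0)**2 + (x[1] + 2.0)**2

SN, dim, limit, lo, hi = 20, 2, 30, -10.0, 10.0
foods = rng.uniform(lo, hi, (SN, dim))       # candidate parameter sets
f = np.array([misfit(x) for x in foods])
trials = np.zeros(SN, dtype=int)

def try_neighbour(i):
    # Perturb one dimension of source i towards/away from a random partner.
    k = rng.choice([j for j in range(SN) if j != i])
    d = rng.integers(dim)
    cand = foods[i].copy()
    cand[d] += rng.uniform(-1, 1) * (foods[i, d] - foods[k, d])
    cand[d] = np.clip(cand[d], lo, hi)
    fc = misfit(cand)
    if fc < f[i]:                            # greedy selection
        foods[i], f[i], trials[i] = cand, fc, 0
    else:
        trials[i] += 1

for cycle in range(200):
    for i in range(SN):                      # employed bees
        try_neighbour(i)
    fit = 1.0 / (1.0 + f)                    # fitness for onlooker roulette
    probs = fit / fit.sum()
    for _ in range(SN):                      # onlooker bees
        try_neighbour(rng.choice(SN, p=probs))
    worn = np.argmax(trials)                 # scout: abandon stale sources
    if trials[worn] > limit:
        foods[worn] = rng.uniform(lo, hi, dim)
        f[worn], trials[worn] = misfit(foods[worn]), 0

best = np.argmin(f)
print("best parameters:", foods[best].round(3), "misfit:", f[best].round(6))
```

The employed/onlooker/scout split is what gives ABC the exploration-exploitation balance the abstract credits for its robustness to the objective's landscape.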
17

Klimeš, Adam. "Ekologie společenstev z hlediska klasické a bayesovské statistiky." Master's thesis, 2016. http://www.nusl.cz/ntk/nusl-343139.

Full text source
Abstract:
Community ecology from the perspective of classical and Bayesian statistics. Author: Adam Klimeš. Supervisor: Mgr. Petr Keil, Ph.D. Abstract: Quantitative evaluation of evidence through statistics is a central part of present-day science. The Bayesian approach represents an emerging but rapidly developing enrichment of statistical analysis. The approach differs in its foundations from the classic methods. These differences, such as the different interpretation of probability, are often seen as obstacles to acceptance of the Bayesian approach. In this thesis I outline ways to deal with the assumptions of the Bayesian approach, and I address the main objections against it. I present the Bayesian approach as a new way to handle data to answer scientific questions. I do this from the standpoint of community ecology: I illustrate the novelty that the Bayesian approach brings to the analysis of typical community ecology data, specifically the analysis of multivariate datasets. I focus on principal component analysis, one of the typical and frequently used analytical techniques. I execute Bayesian analyses that are analogical to classic principal component analysis and report the advantages of the Bayesian version, such as the possibility of working with...
Styles: APA, Harvard, Vancouver, ISO, etc.
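A sketch of closed-form maximum-likelihood probabilistic PCA (Tipping and Bishop's latent-variable reformulation of PCA), the modelling step that Bayesian PCA variants like those discussed here build on; recasting PCA as a generative model is what makes priors and posteriors available at all. The "community data" below are simulated.

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulated community data: 100 sites x 6 species, two latent gradients.
n, d, q = 100, 6, 2
Z = rng.normal(size=(n, q))
A = rng.normal(size=(q, d))
X = Z @ A + rng.normal(0.0, 0.3, (n, d))

# Probabilistic PCA (Tipping & Bishop 1999): x = W z + mu + eps with
# eps ~ N(0, sigma^2 I); the ML solution comes from the eigendecomposition
# of the sample covariance matrix.
Xc = X - X.mean(axis=0)
S = np.cov(Xc, rowvar=False)
evals, evecs = np.linalg.eigh(S)
evals, evecs = evals[::-1], evecs[:, ::-1]           # descending order

sigma2 = evals[q:].mean()                            # discarded variance -> noise
W = evecs[:, :q] @ np.diag(np.sqrt(evals[:q] - sigma2))

# Posterior mean of latent site scores (the PPCA analogue of PC scores).
M = W.T @ W + sigma2 * np.eye(q)
scores = Xc @ W @ np.linalg.inv(M).T
print("noise variance estimate:", round(sigma2, 4))
print("first site's latent scores:", scores[0].round(3))
```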
