Dissertations on the topic "Empirical methods"

To see the other types of publications on this topic, follow the link: Empirical methods.

Consult the top 50 dissertations for research on the topic "Empirical methods."

Next to every entry in the bibliography there is an "Add to bibliography" option. Use it, and your bibliographic reference for the chosen work will be formatted automatically in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a PDF and read an online abstract of the work, provided the relevant parameters are available in its metadata.

Browse dissertations from a wide variety of disciplines and compile your bibliography correctly.

1

Luta, Gheorghe, Pranab Kumar Sen, and Gary G. Koch. „Empirical likelihood-based adjustment methods“. Chapel Hill, N.C.: University of North Carolina at Chapel Hill, 2006. http://dc.lib.unc.edu/u?/etd,502.

Abstract:
Thesis (Ph. D.)--University of North Carolina at Chapel Hill, 2006.
Title from electronic title page (viewed Oct. 10, 2007). "... in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Department of Biostatistics." Discipline: Biostatistics; Department/School: Public Health.
2

Zawadzki, Erik P. „Multiagent learning and empirical methods“. Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/2480.

Abstract:
Many algorithms exist for learning how to act in a repeated game and most have theoretical guarantees associated with their behaviour. However, there are few experimental results about the empirical performance of these algorithms, which is important for any practical application of this work. Most of the empirical claims in the literature to date have been based on small experiments, and this has hampered the development of multiagent learning (MAL) algorithms with good performance properties. In order to rectify this problem, we have developed a suite of tools for running multiagent experiments called the Multiagent Learning Testbed (MALT). These tools are designed to facilitate running larger and more comprehensive experiments by removing the need to code one-off experimental apparatus. MALT also provides a number of public implementations of MAL algorithms—hopefully eliminating or reducing differences between algorithm implementations and increasing the reproducibility of results. Using this test suite, we ran an experiment that is unprecedented in terms of the number of MAL algorithms used and the number of game instances generated. The results of this experiment were analyzed using a variety of performance metrics—including reward, maxmin distance, regret, and several types of convergence. Our investigation also draws upon a number of empirical analysis methods. Through this analysis we found some surprising results: the most surprising observation was that a very simple algorithm—one that was intended for single-agent reinforcement problems and not multiagent learning—performed better empirically than more complicated and recent MAL algorithms.
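Regret, one of the metrics named above, is simple to compute from an experiment log. A minimal sketch, assuming a recorded matrix of counterfactual payoffs (all names here are illustrative; this is not MALT's actual API):

    import numpy as np

    # Hypothetical log of a repeated game: counterfactual[t, a] is the payoff
    # action a would have earned at round t against the opponent's actual play.
    rng = np.random.default_rng(1)
    counterfactual = rng.random((1000, 3))             # 1000 rounds, 3 actions
    chosen = rng.integers(0, 3, size=1000)             # actions actually taken
    received = counterfactual[np.arange(1000), chosen]

    # External regret: best fixed action in hindsight vs. payoff actually earned.
    regret = counterfactual.sum(axis=0).max() - received.sum()
    avg_regret = regret / 1000                         # no-regret learners drive this to 0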
3

Fevang, Rune, and Arne Bergene Fossaa. „Empirical evaluation of metric indexing methods“. Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2008. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-8902.

Abstract:

Metric indexing is a branch of search technology designed for searching non-textual data. Examples include image search (where the search query is an image), document search (finding documents that are roughly equal), and search in high-dimensional Euclidean spaces. Metric indexing is based on the theory of metric spaces, where the only thing known about a set of objects is the distance between them (defined by a metric distance function). A large number of methods have been proposed to solve the metric indexing problem. In this thesis, we have concentrated on new approaches to solving these problems, as well as combining existing methods to create better ones. The methods studied in this thesis include D-Index, GNAT, EMVP-Forest, HC, SA-Tree, SSS-Tree, M-Tree, PM-Tree, M*-Tree and PM*-Tree. These have all been implemented and tested against each other to find strengths and weaknesses. This thesis also studies a group of indexing methods called hybrid methods, which combine tree-based methods (like SA-Tree, SSS-Tree and M-Tree) with pivoting methods (like AESA and LAESA). The thesis also proposes a method to create hybrid trees from existing trees by using features of the programming language. Hybrid methods have been shown in this thesis to be very promising. While they may have a considerable overhead in construction time, CPU usage and/or memory usage, they show large benefits in a reduced number of distance computations. We also propose a new way of calculating the minimal spanning tree of a graph operating on metric objects, and show that it reduces the number of distance computations needed.
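The pivoting methods mentioned above (AESA, LAESA) all rest on the same triangle-inequality bound: for any pivot p, |d(q, p) - d(x, p)| is a lower bound on d(q, x), so distances to a few pivots, precomputed at build time, can rule objects out without invoking the metric. A minimal sketch of that pruning idea under assumed names (illustrating the principle, not any specific structure from the thesis):

    import numpy as np

    def pivot_range_search(query, objects, pivots, dist, radius):
        """Range query: use pivot distances to skip expensive metric evaluations."""
        dq = np.array([dist(query, p) for p in pivots])
        results = []
        for x in objects:
            dx = np.array([dist(x, p) for p in pivots])   # in practice precomputed once
            if np.max(np.abs(dq - dx)) > radius:          # lower bound already too large
                continue                                  # pruned: no distance call needed
            if dist(query, x) <= radius:                  # survivors verified exactly
                results.append(x)
        return results

The savings come from computing the object-to-pivot table once at construction time, so each pruned object costs only array lookups rather than a distance computation.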

4

Benhaddou, Rida. „Nonparametric and Empirical Bayes Estimation Methods“. Doctoral diss., University of Central Florida, 2013. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/5765.

Abstract:
In the present dissertation, we investigate two different nonparametric models: the empirical Bayes model and the functional deconvolution model. In the case of nonparametric empirical Bayes estimation, we carry out a complete minimax study. In particular, we derive minimax lower bounds for the risk of the nonparametric empirical Bayes estimator for a general conditional distribution. This result has never been obtained previously. In order to attain optimal convergence rates, we use a wavelet series based empirical Bayes estimator constructed in Pensky and Alotaibi (2005). We propose an adaptive version of this estimator using Lepski's method and show that the estimator attains optimal convergence rates. The theory is supplemented by numerous examples. Our study of the functional deconvolution model expands results of Pensky and Sapatinas (2009, 2010, 2011) to the case of estimating an (r+1)-dimensional function and to the case of dependent errors. In both cases, we derive minimax lower bounds for the integrated square risk over a wide set of Besov balls and construct adaptive wavelet estimators that attain those optimal convergence rates. In particular, in the case of estimating a periodic (r+1)-dimensional function, we show that by choosing Besov balls of mixed smoothness we can avoid the "curse of dimensionality" and, hence, obtain higher than usual convergence rates when r is large. The study of deconvolution of a multivariate function is motivated by seismic inversion, which can be reduced to the solution of noisy two-dimensional convolution equations that allow one to draw inference on underground layer structures along the chosen profiles. The common practice in seismology is to recover layer structures separately for each profile and then to combine the derived estimates into a two-dimensional function. By studying the two-dimensional version of the model, we demonstrate that this strategy usually leads to estimators which are less accurate than the ones obtained as two-dimensional functional deconvolutions. Finally, we consider a multichannel deconvolution model with long-range dependent Gaussian errors. We do not limit our consideration to a specific type of long-range dependence; rather, we assume that the eigenvalues of the covariance matrix of the errors are bounded above and below. We show that convergence rates of the estimators depend on a balance between the smoothness parameters of the response function, the smoothness of the blurring function, the long memory parameters of the errors, and how the total number of observations is distributed among the channels.
5

Reinhardt, Timothy Patrick. „Empirical methods for comparing governance structure“. Thesis, Austin, Tex.: University of Texas, 2009. http://hdl.handle.net/2152/ETD-UT-2009-05-134.

6

Mikkola, Hennamari. „Empirical studies on Finnish hospital pricing methods“. Helsinki: Helsinki School of Economics, 2002. http://aleph.unisg.ch/hsgscan/hm00068878.pdf.

7

Brandel, John. „Empirical Bayes methods for missing data analysis“. Thesis, Uppsala University, Department of Mathematics, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-121408.

8

Lönnstedt, Ingrid. „Empirical Bayes Methods for DNA Microarray Data“. Doctoral thesis, Uppsala University, Department of Mathematics, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-5865.

Abstract:

cDNA microarrays are among the first high-throughput gene expression technologies to have emerged within molecular biology for the purposes of functional genomics. cDNA microarrays compare gene expression levels between cell samples, for thousands of genes simultaneously.

The microarray technology offers new challenges when it comes to data analysis, since thousands of genes are examined in parallel but with very few replicates, yielding noisy estimates of gene effects and variances. Although careful image analysis and normalisation of the data are applied, traditional methods for inference like the Student t or Fisher's F-statistic fail to work.

In this thesis, four papers on the topics of empirical Bayes and full Bayesian methods for two-channel microarray data (such as cDNA) are presented. These contribute to proving that empirical Bayes methods are useful for overcoming the specific data problems. The sample distributions of all the genes involved in a microarray experiment are summarized into prior distributions, which improves the inference for each single gene.

The first part of the thesis includes biological and statistical background of cDNA microarrays, with an overview of the different steps of two-channel microarray analysis, including experimental design, image analysis, normalisation, cluster analysis, discrimination and hypothesis testing. The second part of the thesis consists of the four papers. Paper I presents the empirical Bayes statistic B, which corresponds to a t-statistic. Paper II is based on a version of B that is extended for linear model effects. Paper III assesses the performance of empirical Bayes models by comparisons with full Bayes methods. Paper IV provides extensions of B to what corresponds to F-statistics.
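The pooling idea is concrete enough to sketch. In the spirit of these papers (though not the exact B-statistic), an empirical Bayes moderated t-statistic shrinks each gene's sample variance toward a prior value estimated from all genes at once; the names and the prior parameters below are illustrative assumptions:

    import numpy as np

    def moderated_t(means, s2, n, d0, s0_sq):
        """t-like statistics with per-gene variances shrunk toward a prior.
        d0, s0_sq: prior df and variance, in practice fitted to all genes' s2."""
        d = n - 1                                    # residual df per gene
        s2_tilde = (d0 * s0_sq + d * s2) / (d0 + d)  # posterior (shrunken) variance
        return means / np.sqrt(s2_tilde / n)

With only a handful of replicates per gene, the raw s2 values are extremely noisy; borrowing strength across thousands of genes is what stabilizes the denominator.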

9

Lönnstedt, Ingrid. „Empirical Bayes methods for DNA microarray data“. Uppsala: Matematiska institutionen, Univ. [distributör], 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-5865.

10

Imhof, David. „Empirical Methods for Detecting Bid-rigging Cartels“. Thesis, Bourgogne Franche-Comté, 2018. http://www.theses.fr/2018UBFCB005/document.

Abstract:
The thesis presents different empirical methods for detecting bid-rigging cartels. It shows, first, that efficient results can be obtained with simple statistical screens and, second, that the econometric method of Bajari, well established in the literature, produces poor results.
11

Rubesam, Alexandre. „Essays on empirical asset pricing using Bayesian methods“. Thesis, City University London, 2009. http://openaccess.city.ac.uk/12034/.

Abstract:
This thesis is composed of three essays related to empirical asset pricing. In the first essay, we investigate recent rational explanations of the value premium using a regime-switching approach. Using data from the US stock market, we investigate the risk of value and growth in different market states, using alternative risk measures such as downside beta and higher moments. Our results provide little or no evidence that value is riskier than growth, and that evidence is specific to the pre-1963 period (including the Great Depression). Within the post-1963 sample, there are periods when the value premium can be explained by the CAPM, whilst during other periods the premium is explained by the fact that the returns on value firms increase more than the returns on growth stocks in periods of strong market performance, whilst in downturns growth stocks suffer more than value stocks, and these features are captured by different upside/downside betas or higher moments. These results are not consistent with a risk-based explanation of the value premium. The second essay of the thesis contributes to the debate about the momentum premium. We investigate the robustness of the momentum premium in the US over the period from 1927 to 2006 using a model that allows multiple structural breaks. We find that the risk-adjusted momentum premium is significantly positive only during certain periods, notably from the 1940s to the mid-1960s and from the mid-1970s to the late 1990s, and we find evidence that momentum has disappeared since the late 1990s. Our results suggest that the momentum premium has been slowly eroded away since the early 1990s, in a process which was delayed by the occurrence of the high-technology stock bubble of the 1990s. In particular, we estimate that the bubble accounts for at least 60% of momentum profits during the period from 1995 to 1999. In the final essay of this thesis, we study the question of which asset pricing factors should be included in a linear factor asset pricing model. We develop a simple multivariate extension of a Bayesian variable selection procedure from the statistics literature to estimate posterior probabilities of asset pricing factors using many assets at once. Using a dataset of thousands of individual stocks in the US market, we calculate posterior probabilities of 12 factors which have been suggested in the literature. Our results indicate strong and robust evidence that a linear factor model should include the excess market return, the size and the liquidity factors, and only weak evidence that the idiosyncratic volatility and downside risk factors matter. We also apply our methodology to portfolios of stocks commonly used in the literature, and find that the famous Fama and French (1993, 1996) HML factor has high posterior probability only if portfolios formed on book-to-market ratio are used.
12

Kies, Jonathan K. „Empirical Methods for Evaluating Video-Mediated Collaborative Work“. Diss., Virginia Tech, 1997. http://hdl.handle.net/10919/30537.

Abstract:
Advancements in computer technology are making video conferencing a viable communication medium for desktop computers. These same advancements are changing the structure and means by which information workers conduct business. From a human factors perspective, however, the study of new communication technologies and their relationships with end users presents a challenging research domain. This study employed two diverse research approaches to the problem of reduced video frame rate in desktop video conferencing. In the first study, a psychophysical method was used to evaluate video image quality as a function of frame rate for a series of different scenes. Scenes varied in terms of level of detail, velocity of panning, and content. Results indicate that for most scenes, differences in frame rate become less detectable above approximately 10 frames per second (fps), suggesting a curvilinear relationship between image quality and frame rate. For a traditional conferencing scene, however, a linear increase in frame rate produced a linear improvement in perceived image quality. High detail scenes were perceived to be of lower quality than the low detail scenes, while panning velocity had no effect. In the second study, a collection of research methods known as ethnography was used to examine long-term use of desktop video by collaborators in a real work situation. Participants from a graduate course met each week for seven weeks and worked on a class project under one of four communication conditions: face-to-face, 1 fps, 10 fps, and 25 fps. Dependent measures included interviews, questionnaires, interaction analysis measures, and ethnomethodology. Recommendations are made regarding the utility and expense of each method with respect to uncovering human factors issues in video-mediated collaboration. It is believed that this research has filled a significant gap in the human factors literature of advanced telecommunications and research methodology.
13

Ramsay, Mark J. „Comparing Five Empirical Biodata Scoring Methods for Personnel Selection“. Thesis, University of North Texas, 2002. https://digital.library.unt.edu/ark:/67531/metadc3220/.

Abstract:
A biodata-based personnel selection measure was created to improve the retention rate of Catalog Telemarketing Representatives at a major U.S. retail company. Five separate empirical biodata scoring methods were compared to examine their usefulness in predicting retention and reducing adverse impact: the Mean Standardized Criterion Method, the Option Criterion Correlation Method, the Horizontal Percentage Method, the Vertical Percentage Method, and the Weighted Application Blank Method using England's (1971) Assigned Weights. The study showed that when using generalizable biodata items, all methods except the Weighted Application Blank Method were similar in their ability to discriminate between low- and high-retention employees and produced similarly low adverse impact effects. The Weighted Application Blank Method did not discriminate between low- and high-retention employees.
14

Handley, Sean M. „The Evaluation, Analysis, and Management of the Business Outsourcing Process“. The Ohio State University, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=osu1217602296.

15

Amaral, Getulio J. A. „Bootstrap and empirical likelihood methods in statistical shape analysis“. Thesis, University of Nottingham, 2004. http://eprints.nottingham.ac.uk/11399/.

Abstract:
The aim of this thesis is to propose bootstrap and empirical likelihood confidence regions and hypothesis tests for use in statistical shape analysis. Bootstrap and empirical likelihood methods have some advantages compared to conventional methods. In particular, they are nonparametric methods, so it is not necessary to choose a family of distributions for building confidence regions or testing hypotheses. There has been very little work on bootstrap and empirical likelihood methods in statistical shape analysis. Only one paper (Bhattacharya and Patrangenaru, 2003) has considered bootstrap methods in statistical shape analysis, and only for constructing confidence regions. There are no published papers on the use of empirical likelihood methods in statistical shape analysis. Existing methods for building confidence regions and testing hypotheses in shape analysis have some limitations. The Hotelling and Goodall confidence regions and hypothesis tests are not appropriate for data sets with low concentration. The main reason is that these methods are designed for data with high concentration, and if this hypothesis is violated, the methods do not perform well. On the other hand, simulation results have shown that the bootstrap and empirical likelihood methods developed in this thesis are appropriate for the statistical shape analysis of data sets with low concentration. For highly concentrated data sets, all the methods show similar performance. Theoretical aspects of bootstrap and empirical likelihood methods are also considered. Both methods are based on asymptotic results, and those results are explained in this thesis. It is proved that the bootstrap methods proposed in this thesis are asymptotically pivotal. Computational aspects are discussed. All the bootstrap algorithms are implemented in R. An algorithm for computing empirical likelihood tests for several populations is also implemented in R.
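The basic resampling loop underlying such bootstrap confidence regions is short. A minimal one-dimensional sketch (a percentile interval for a mean; the thesis develops pivotal versions of this on shape spaces):

    import numpy as np

    rng = np.random.default_rng(0)
    sample = rng.normal(loc=1.0, scale=2.0, size=50)
    boot = np.array([rng.choice(sample, size=sample.size, replace=True).mean()
                     for _ in range(2000)])           # resample with replacement
    lo, hi = np.percentile(boot, [2.5, 97.5])         # 95% percentile interval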
16

Petersson, Emil. „Study of semi-empirical methods for ship resistance calculations“. Thesis, Uppsala universitet, Tillämpad mekanik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-413700.

Abstract:
In the early ship design process, a quick overview of which ship design could be the optimal choice for the intended usage needs to be investigated. Therefore, the feasibility and accuracy of interpolating between measurement data from model resistance series when estimating unknown hulls were investigated. A parametric study was undertaken to determine which parameters carry the most importance with regard to calm water resistance for semi-displacing hulls. To assess the whole estimation process, one semi-displacing ship (FDS-5) and one bulk carrier with a bulbous bow (JBC) were estimated with regard to calm water resistance using semi-empirical methods and later compared with CFD results. The CFD results came from a partly parallel work. The results showed that it is possible to estimate the total resistance of an unknown hull with semi-empirical methods by linear interpolation with an accuracy of below 5% in the designed speed interval, both for FDS-5 and JBC. The CFD simulations achieved a lower accuracy compared to the semi-empirical approach; however, by further calibrating the models, the accuracy could potentially be improved. Linear interpolation between two hulls to estimate an unknown hull is only advised when the hulls are nearly identical, meaning that the hulls must be of the same ship type and that only one parameter is allowed to differ from the unknown hull. The parametric study resulted in the following parameter importance, in falling order: slenderness ratio, length-beam ratio, longitudinal prismatic coefficient, block coefficient, and beam-draught ratio. Even though the CFD approach is not yet completely reliable, it could still be a useful complement to the semi-empirical approach by calculating parameters such as a dynamic wetted surface, resistance due to appendages, or air resistance of the full-scale ship. Simply by incrementally increasing the accuracy of individual resistance components, an overall improvement could potentially be achieved.
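The interpolation step itself is elementary: at each speed, the unknown hull's resistance coefficient is a linear blend of the two parent hulls' coefficients in the single parameter that differs. A minimal sketch with invented numbers (not the thesis data):

    import numpy as np

    speeds = np.linspace(10, 30, 5)                                # knots (illustrative)
    ct_a, lb_a = np.array([3.1, 3.4, 4.0, 5.2, 6.9]) * 1e-3, 6.0   # parent hull A
    ct_b, lb_b = np.array([2.8, 3.0, 3.5, 4.6, 6.1]) * 1e-3, 8.0   # parent hull B

    lb_target = 6.8                                 # unknown hull's length-beam ratio
    w = (lb_target - lb_a) / (lb_b - lb_a)          # interpolation weight
    ct_estimate = (1 - w) * ct_a + w * ct_b         # speed-by-speed estimate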
17

Xie, Yanmei. „Empirical Likelihood Methods in Nonignorable Covariate-Missing Data Problems“. University of Toledo / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1562371987478916.

18

Copana, Paucara Julio. „Seismic Slope Stability: A Comparison Study of Empirical Predictive Methods with the Finite Element Method“. Thesis, Virginia Tech, 2020. http://hdl.handle.net/10919/100797.

Abstract:
This study evaluates the seismically induced displacements of a slope using the finite element method (FEM) in comparison with the results of twelve empirical predictive approaches. First, the existing methods to analyze the stability of slopes subjected to seismic loads are presented, and their capabilities to predict the onset of failure and post-failure behavior are discussed. These methods include the pseudostatic method, the Newmark method, and stress-deformation numerical methods. Whereas the pseudostatic method defines a seismic coefficient for the analysis and provides a safety factor, the Newmark method incorporates a yield coefficient and the actual acceleration time history to estimate permanent displacements. Numerical methods incorporate advanced constitutive models to simulate the coupled stress-strain soil behavior, making the process computationally more costly. In this study, a model slope previously studied at laboratory scale is selected and scaled up to prototype dimensions. The slope is then subjected to 88 different input motions, and the seismic displacements obtained from the numerical and empirical approaches are compared statistically. From correlation analyses between seven ground motion parameters and the numerical results, new empirical predictive equations are developed for slope displacements. The results show that the FEM displacements are generally in agreement with the numerically developed methods of Fotopoulou and Pitilakis (2015) labelled "Method 2" and "Method 3", and with the Newmark-type Makdisi and Seed (1978) and Bray and Travasarou (2007) methods for rigid slopes. Finally, functional forms for seismic slope displacement are proposed as a function of peak ground acceleration (PGA), Arias intensity (Ia), and yield acceleration ratio (Ay/PGA). These functions are expected to be valid for granular slopes such as earth dams, embankments, or landfills built on a rigid base and with low fundamental periods (Ts < 0.2).
A landslide is a displacement of sloped ground that can be triggered by earthquake shaking. Several authors have investigated the failure mechanisms that lead to landslide initiation and subsequent mass displacement and have proposed methodologies to assess the stability of slopes subjected to seismic loads. The development of these methodologies has to rely on field data that in most cases are difficult to obtain, because identifying the location of future earthquakes involves too many uncertainties to justify investments in field instrumentation (Kutter, 1995). Nevertheless, the use of scale models and numerical techniques has helped in the investigation of these geotechnical hazards and has led to the development of equations that predict seismic displacements as a function of different ground motion parameters. In this study, the capabilities and limitations of the most recognized approaches to assess seismic slope stability are reviewed and explained. In addition, a previous shaking-table model is used for reference and scaled up to realistic proportions to calculate its seismic displacement using different methods, including a finite element model in the commercial software Plaxis2D. These displacements are compared statistically and used to develop new predictive equations. This study is relevant to understanding the capabilities of newer numerical approaches in comparison to classical empirical methods.
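Of the methods compared here, the classical Newmark rigid-block procedure is compact enough to sketch: the block accumulates relative velocity whenever ground acceleration exceeds the yield acceleration, and the permanent displacement is the integral of that velocity. A minimal one-directional sliding version under assumed variable names (practical implementations handle each slip cycle more carefully):

    import numpy as np

    def newmark_displacement(acc, dt, a_yield):
        """Permanent downslope displacement of a rigid sliding block.
        acc: ground acceleration history [m/s^2]; a_yield: yield acceleration."""
        v, d = 0.0, 0.0
        for a in acc:
            if v > 0.0 or a > a_yield:                # sliding continues or starts
                v = max(v + (a - a_yield) * dt, 0.0)  # relative velocity, floored at 0
                d += v * dt                           # accumulate displacement
        return d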
19

Salgado-Medina, Luis, Diego Núñez-Ramírez, Humberto Pehovaz-Alvarez, Carlos Raymundo, and Javier M. Moguerza. „Model for dilution control applying empirical methods in narrow vein mine deposits in Peru“. Springer Verlag, 2019. http://hdl.handle.net/10757/656290.

Abstract:
The full text of this work is not available in the UPC Academic Repository due to restrictions imposed by the publisher.
Empirical methods play an important role in the field of geomechanics due to the recognized complexity of the nature of rock masses. This study analyzes the applicability of empirical design methods in vein-shaped hydrothermal mining deposits (narrow veins) using the Bieniawski and Barton classification systems, Mathews stability graphs, the Potvin and Mawdesley geomechanics classification systems, and mining pit dilution based on the equivalent linear overbreak/slough (ELOS). In most cases, these methods are applied without understanding the underlying assumptions and the limits of the database, with their inherent hidden risks. Here, the dilutions obtained using the empirical methods oscillate between 8% and 11% (according to the frontal dimension), which is lower than the mine's operating dilution of 15%. The proposed model can be used as a practical tool to predict and reduce dilution in narrow veins.
20

Aguirre-Hernández, Rosalía. „Computational RNA secondary structure design : empirical complexity and improved methods“. Thesis, University of British Columbia, 2007. http://hdl.handle.net/2429/31202.

Abstract:
Ribonucleic acids play fundamental roles in cellular processes and their function is directly related to their structure. The research reported in this thesis is focused on the design of RNA strands that are predicted to fold to a given secondary structure, according to a standard thermodynamic model. The design of RNA structures is important for applications in therapeutics and nanotechnology. This work also applies to DNA with the appropriate thermodynamic model for DNA molecules. The overall goal of this research is to improve the performance and scope of algorithmic methods for RNA secondary structure design. First, we investigate the hardness of this problem, since its theoretical complexity is unknown. A scaling analysis on random and biologically generated structures supports the hypothesis that the running time of the RNA Secondary Structure Designer (RNA-SSD) algorithm, one of the state of the art algorithms for designing secondary structures, scales polynomially with the size of the structure. We found that structures with small stems separated by loops are difficult to design. Our improvements to the RNA-SSD algorithm include the support for primary structure constraints, where bases or base types are fixed in certain positions of the sequence. Such constraints are important, for example, when designing RNAs such as ribozymes or tRNAs, where certain base positions must be fixed in order to permit interaction with other molecules. We investigate the correlation between the number and the location of the primary structure constraints and the performance of RNA-SSD. In the second part of our research, we have extended the RNA-SSD algorithm to design for stability, rather than minimum free energy folding. We measure stability according to several criteria, such as high probability of observing the minimum free energy structure and a low average number of incorrectly paired nucleotides in the ensemble of structures for the designed sequence. The design of complexes of RNA molecules, that is, RNA molecules that interact with each other, is relevant for many applications. We describe several ways to design stable structures and complexes, and we also discuss the advantages and limitations of each approach.
21

Braunack-Mayer, Annette. „General practitioners doing ethics: an empirical perspective on bioethical methods“. Title page, contents and abstract only, 1998. http://web4.library.adelaide.edu.au/theses/09PH/09phb8253.pdf.

22

Jakimauskas, Gintautas. „Analysis and application of empirical Bayes methods in data mining“. Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2014. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2014~D_20140423_090853-72998.

Abstract:
The research object is empirical Bayes methods and algorithms for data mining, applied in the analysis of large populations of large dimension. The aim of the research is to create methods and algorithms for testing nonparametric hypotheses for large populations and for estimating the parameters of data models. The following problems are solved to reach these objectives: 1) to create an efficient partitioning algorithm for large dimensional data; 2) to apply the partitioning algorithm of large dimensional data in testing nonparametric hypotheses; 3) to apply the empirical Bayes method in testing the independence of components of large dimensional data vectors; 4) to develop an algorithm for estimating probabilities of rare events in large populations, using the empirical Bayes method and comparing the Poisson-gamma and Poisson-Gaussian mathematical models, selecting an optimal model and a respective empirical Bayes estimator; 5) to create an algorithm for logistic regression of rare events using the empirical Bayes method. The results obtained enable us to perform very fast and efficient partitioning of large dimensional data; to test the independence of selected components of large dimensional data; and to select the optimal model in the estimation of probabilities of rare events, using the Poisson-gamma and Poisson-Gaussian mathematical models and empirical Bayes estimators. The nonsingularity condition in the case of the Poisson-gamma model is presented.
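For the Poisson-gamma model in objective 4, the empirical Bayes estimator has a closed form: with a Gamma(alpha, beta) prior on the event rate, the posterior mean for a cell with x events out of n exposures is (x + alpha)/(n + beta), where alpha and beta are fitted to the whole population. A minimal moment-matching sketch (illustrative, not the thesis' exact algorithm):

    import numpy as np

    def eb_poisson_gamma(x, n):
        """Shrink raw rates x/n toward the population mean via a fitted Gamma prior."""
        rates = x / n
        m, v = rates.mean(), rates.var()
        v_prior = max(v - (rates / n).mean(), 1e-12)  # remove average Poisson noise
        beta = m / v_prior                            # method-of-moments Gamma fit
        alpha = m * beta
        return (x + alpha) / (n + beta)               # posterior mean rate per cell

Cells with large exposure n keep estimates close to their raw rates, while sparse cells are pulled toward the population mean, which is exactly the behavior wanted for rare events.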
23

Löhndorf, Nils. „An empirical analysis of scenario generation methods for stochastic optimization“. Elsevier, 2016. http://dx.doi.org/10.1016/j.ejor.2016.05.021.

Abstract:
This work presents an empirical analysis of popular scenario generation methods for stochastic optimization, including quasi-Monte Carlo, moment matching, and methods based on probability metrics, as well as a new method referred to as Voronoi cell sampling. Solution quality is assessed by measuring the error that arises from using scenarios to solve a multi-dimensional newsvendor problem, for which analytical solutions are available. In addition to the expected value, the work also studies scenario quality when minimizing the expected shortfall using the conditional value-at-risk. To quickly solve problems with millions of random parameters, a reformulation of the risk-averse newsvendor problem is proposed which can be solved via Benders decomposition. The empirical analysis identifies Voronoi cell sampling as the method that provides the lowest errors, with particularly good results for heavy-tailed distributions. A controversial finding concerns evidence for the ineffectiveness of widely used methods based on minimizing probability metrics under high-dimensional randomness.
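The evaluation idea is easy to reproduce in one dimension: solve the newsvendor problem on a finite scenario set and measure the gap to the analytical optimum, which is the critical-ratio quantile of demand. A minimal sketch with plain Monte Carlo scenarios and an assumed lognormal demand (the paper's scenario generation methods construct the set more cleverly than this):

    import numpy as np
    from statistics import NormalDist

    rng = np.random.default_rng(0)
    cost, price = 4.0, 10.0
    cr = (price - cost) / price                        # critical ratio

    mu, sigma = 3.0, 0.5                               # lognormal demand parameters
    scenarios = rng.lognormal(mu, sigma, size=1000)    # the scenario set
    q_scen = np.quantile(scenarios, cr)                # scenario-based order quantity
    q_true = np.exp(mu + sigma * NormalDist().inv_cdf(cr))  # analytical optimum

    rel_error = abs(q_scen - q_true) / q_true          # error induced by the scenarios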
24

Razali, Rozilawati. „Usability of semi-formal and formal methods integration: empirical assessments“. Thesis, University of Southampton, 2008. https://eprints.soton.ac.uk/265391/.

25

Ren, Kaili. „Empirical likelihood methods in missing response problems and causal inference“. University of Toledo / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1470184291.

26

Everitt, Niklas. „Module identification in dynamic networks: parametric and empirical Bayes methods“. Doctoral thesis, KTH, Reglerteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-208920.

Abstract:
The purpose of system identification is to construct mathematical models of dynamical systems from experimental data. With the current trend of dynamical systems encountered in engineering growing ever more complex, an important task is to efficiently build models of these systems. Modelling the complete dynamics of these systems is in general not possible or even desired. Often, however, these systems can be modelled as simpler linear systems interconnected in a dynamic network. Then, the task of estimating the whole network or a subset of the network can be broken down into subproblems of estimating one simple system, called a module, embedded within the dynamic network.

The prediction error method (PEM) is a benchmark in parametric system identification. The main advantage of PEM is that for Gaussian noise, it corresponds to the so-called maximum likelihood (ML) estimator and is asymptotically efficient. One drawback is that the cost function is in general nonconvex and a gradient-based search over the parameters has to be carried out, rendering a good starting point crucial. Therefore, other methods such as subspace or instrumental variable methods are required to initialize the search. In this thesis, an alternative method, called model order reduction Steiglitz-McBride (MORSM), is proposed. As MORSM is also motivated by ML arguments, it may be used on its own and will in some cases provide asymptotically efficient estimates. The method is computationally attractive since it is composed of a sequence of least squares steps. It also treats the part of the network of no direct interest nonparametrically, simplifying model order selection for the user.

A different approach is taken in the second proposed method to identify a module embedded in a dynamic network. Here, the impulse response of the part of the network of no direct interest is modelled as a realization of a Gaussian process. The mean and covariance of the Gaussian process are parameterized by a set of parameters called hyperparameters that need to be estimated together with the parameters of the module of interest. Using an empirical Bayes approach, all parameters are estimated by maximizing the marginal likelihood of the data. The maximization is carried out using an iterative expectation/conditional-maximization scheme, which alternates so-called expectation steps with a series of conditional-maximization steps. When only the module input and output sensors are used, the expectation step admits an analytical expression. The conditional-maximization steps reduce to solving smaller optimization problems, which either admit a closed form solution or can be efficiently solved using gradient descent strategies. Therefore, the overall optimization turns out to be computationally efficient. Using Markov chain Monte Carlo techniques, the method is extended to incorporate additional sensors.

Apart from the choice of identification method, the set of signals chosen for the identification will determine the covariance of the estimated modules. To choose these signals, well-known expressions for the covariance matrix could, together with signal constraints, be formulated as an optimization problem and solved. However, this approach neither tells us why a certain choice of signals is optimal nor what will happen if some properties change. The expressions developed in this part of the thesis have a different flavor in that they aim to reformulate the covariance expressions into a form amenable to interpretation. These expressions illustrate how different properties of the identification problem affect the achievable accuracy: in particular, how the power of the input and noise signals, as well as the model structure, affect the covariance.
27

Montuschi, Alessio. „Flexible pavement design using mechanistic-empirical methods: the Californian Approach“. Master's thesis, Alma Mater Studiorum - Università di Bologna, 2013. http://amslaurea.unibo.it/4914/.

28

Ramya, Sravanam Ramya. „Empirical Study on Quantitative Measurement Methods for Big Image Data: An Experiment using five quantitative methods“. Thesis, Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-13466.

Abstract:
Context. With the increasing demand for image processing in multimedia applications, research on image quality assessment has received great interest. The goal of image quality assessment (IQA) is to find efficient image quality metrics that relate closely to human visual perception; over the last three decades much effort has been put in by researchers, and a large literature on emerging IQA techniques has developed. In this regard, emphasis is given to full-reference IQA research, where quality measurement algorithms are analyzed against the referenced original image, as that is much closer to perceptual visual quality.
Objectives. In this thesis we investigate five of the most widely used image quality metrics (peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), feature similarity index (FSIM), visual saliency index (VSI), and universal quality index (UQI)) in an experiment on a chosen image dataset (images with different types of distortions from different image processing applications) and find the most efficient one with respect to the dataset used. This analysis could be helpful to researchers working on big image data projects where selecting an appropriate image quality metric is of major significance. Our study details the dataset used and the experimental results, where the image set highly influences the results.
Methods. The goal of this study is achieved by conducting a literature review to investigate existing IQA research and image quality metrics, and by performing an experiment. The image dataset used in the experiment was prepared from the LIVE Image Quality Assessment database. Matlab was used to run the image processing experiments. Descriptive analysis (including statistical analysis) was employed to analyze the results obtained from the experiment.
Results. For the distortion types involved (JPEG 2000, JPEG compression, white Gaussian noise, Gaussian blur), SSIM was efficient at measuring image quality after distortion for JPEG 2000 compressed and white Gaussian noise images, and PSNR was efficient for JPEG compression and Gaussian blur images with respect to the original image.
Conclusions. From this study it is evident that SSIM and PSNR are efficient for IQA on the dataset used, and that the level of distortion in the image dataset highly influences the results; in our case SSIM and PSNR perform efficiently for the database used.
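Of the five metrics, PSNR is the simplest full-reference measure: a log-scaled mean squared error against the pristine image. A minimal sketch for 8-bit images (SSIM, by contrast, compares local luminance, contrast, and structure statistics over sliding windows):

    import numpy as np

    def psnr(reference, distorted, max_val=255.0):
        """Peak signal-to-noise ratio in dB between two same-sized images."""
        err = reference.astype(float) - distorted.astype(float)
        mse = np.mean(err ** 2)
        if mse == 0:
            return float("inf")                        # identical images
        return 10.0 * np.log10(max_val ** 2 / mse)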
29

Ketkar, Nikhil S. „Empirical comparison of graph classification and regression algorithms“. Pullman, Wash. : Washington State University, 2009. http://www.dissertations.wsu.edu/Dissertations/Spring2009/n_ketkar_042409.pdf.

Abstract:
Thesis (Ph. D.)--Washington State University, May 2009.
Title from PDF title page (viewed on June 3, 2009). "School of Electrical Engineering and Computer Science." Includes bibliographical references (p. 101-108).
30

Duan, Xiuwen. „Revisiting Empirical Bayes Methods and Applications to Special Types of Data“. Thesis, Université d'Ottawa / University of Ottawa, 2021. http://hdl.handle.net/10393/42340.

Abstract:
Empirical Bayes methods have been around for a long time and have a wide range of applications. These methods provide a way in which historical data can be aggregated to provide estimates of the posterior mean. This thesis revisits some of the empirical Bayes methods and develops new applications. We first look at a linear empirical Bayes estimator and apply it to ranking and symbolic data. Next, we consider Tweedie's formula and show how it can be applied to analyze a microarray dataset. The application of the formula is simplified with the Pearson system of distributions. Saddlepoint approximations enable us to generalize several results in this direction. The results show that the proposed methods perform well in applications to real data sets.
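Tweedie's formula itself is compact enough to state: if z | mu ~ N(mu, sigma^2) with mu drawn from an unknown prior, the posterior mean depends on that prior only through the marginal density f of z,

    E[\mu \mid z] = z + \sigma^2 \, \frac{d}{dz} \log f(z),

so an estimate of f (for instance via a fitted Pearson-family density, as in the thesis) immediately yields an empirical Bayes estimate of mu.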
31

Spielmans, Glen I. „A Comparison of Rational Versus Empirical Methods in the Prediction of Psychotherapy Outcome“. DigitalCommons@USU, 2004. https://digitalcommons.usu.edu/etd/6216.

Abstract:
Several systems have been designed to monitor psychotherapy outcome, in which feedback is generated based on how a client's rate of progress compares to an expected level of progress. Clients who progress at a much lesser rate than the average client are referred to as signal-alarm cases. Recent studies have shown that providing feedback to therapists based on comparing their clients' progress to a set of rational, clinically derived algorithms has enhanced outcomes for clients predicted to show poor treatment outcomes. Should another method of predicting psychotherapy outcome emerge as more accurate than the rational method, that method would likely be more useful than the rational method in enhancing psychotherapy outcomes. The present study compared the rational algorithms to those generated by an empirical prediction method generated through hierarchical linear modeling. The sample consisted of 299 clients seen at a university counseling center and a psychology training clinic. The empirical method was significantly more accurate in predicting outcome than was the rational method. Clients predicted to show poor treatment outcome by the empirical method showed, on average, very little positive change. There was no difference between the methods in the ability to accurately forecast reliable worsening during treatment. The rational method resulted in a high percentage of false alarms, that is, clients who were predicted to show poor treatment response but in fact showed a positive treatment outcome. The empirical method generated significantly fewer false alarms than did the rational method. The empirical method was generally accurate in its predictions of treatment success, whereas the rational method was somewhat less accurate in predicting positive outcomes. Suggestions for future research in psychotherapy quality management are discussed.
32

Coulombe, Daniel. „Voluntary income increasing accounting changes : theory and further empirical investigation“. Thesis, University of British Columbia, 1987. http://hdl.handle.net/2429/26983.

Abstract:
This thesis presents a three-step analysis of voluntary income increasing accounting changes. We first propose a theory as to why managers would elect to modify their reporting strategy. This theory builds on research on the economic factors motivating accounting choices, since it is assumed that accounting choices are a function of political costs, managers' compensation plans and debt constraints. Specifically, we claim that adversity motivates the manager to effect an income increasing accounting change. Secondly, the thesis proposes a theoretical analysis of the potential market responses to a change announcement. The stock price effect of a change announcement is examined as a function of investors' rational anticipations of the manager's reporting actions and as a function of the level of information about adversity that investors may have prior to a change announcement. An empirical analysis is presented in the third step of this thesis. Our empirical findings are that: 1) change announcements, on average, have no significant impact on the market; 2) relative to the Compustat population as a whole, firms that voluntarily adopt income increasing accounting changes exhibit symptoms of financial distress, suggesting that such change announcements are associated with financial adversity; 3) firms which voluntarily adopt income increasing accounting changes tend to exhibit symptoms of financial distress one or more years prior to the change year, suggesting that change announcements tend not to be a timely source of information conveying distress to the market; 4) there is a significant negative association between investors' proxies for prior information about adversity and the market impact of the change, especially for the subset of firms with above-average leverage, suggesting that the information content of the accounting change signal is inversely related to investors' prior information about adversity. The empirical results thus support the view that investors, at the time a change occurs, have information about the prevailing state of the world, and that they have rational anticipations with respect to the manager's reporting behavior. In this respect, the accounting change is, on average, an inconsequential signal that adds little to what investors already knew before the change announcement.
33

Wallén, Jacob, and Evelina Karlsson. „Financial Bootstrapping: An Empirical Study of Bootstrapping Methods in Swedish Organizations“. Thesis, Internationella Handelshögskolan, Högskolan i Jönköping, IHH, Företagsekonomi, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-15226.

Abstract:
Small and recently started organizations find it hard to acquire external capital from financial institutions such as banks, venture capitalists and private investors. Information asymmetry is the main reason behind this financial gap, from both a demand-side and a supply-side standpoint. However, small organizations and start-ups do not need financiers to launch themselves, and the solution to financial shortages is not necessarily financial. By being creative, resources can be acquired through different means, known in the research literature as financial bootstrapping. Previous studies have focused on the application of bootstrapping in companies and have not included any kind of associations in their investigations. This thesis aims to shed light on the use of bootstrapping in associations while comparing similarities and differences with companies. The thesis will also provide a base of knowledge for the collaborating company Coompanion, which requested a better understanding of financial bootstrapping. A survey was conducted and 44 responses were received from a mixture of companies and associations. The survey included questions regarding the organizational profile, personal profile and handling of finance. The interactive questionnaire was distributed to the managers by email, and the data gathered from the respondents was entered and analyzed using Excel, SPSS and Gretl. The results demonstrate that organizations prefer internally generated money as a first resort before using external finance, consequently following pecking order theory. Organizations that need more capital are inclined to use more bootstrapping techniques compared to organizations with no need for further capital. The survey indicates that some bootstrapping methods are more commonly used than others, such as: same terms of payment to all customers, best terms of payment from suppliers, buying used equipment instead of new, selling on credit to customers, making customers pay through installments on ongoing work, and obtaining some kind of subsidy.
34

Kirschner, Kenneth J. „Empirical learning methods for the induction of knowledge from optimization models“. Diss., Georgia Institute of Technology, 2000. http://hdl.handle.net/1853/11271.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
35

Harris, Paul. „An empirical comparison of kriging methods for nonstationary spatial point prediction“. Thesis, University of Newcastle Upon Tyne, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.492440.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
Annotation:
This thesis compares the performance of geostatistical and geostatistical nonparametric hybrid models for providing accurate predictions together with relevant measures of prediction confidence. The key modelling theme is nonstationarity, where models that cater for nonstationary second-order effects have the potential to provide more accurate results than their stationary counterparts. A comprehensive review and comparison of this particular class of nonstationary predictors is considered missing from the literature. To facilitate this model comparison, models are calibrated to assess the spatial variation in freshwater acidification critical load data across Great Britain, which is shown to be a heterogeneous process requiring a nonstationary modelling approach.
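For readers unfamiliar with the stationary baseline that such nonstationary predictors extend, the following is a minimal ordinary kriging sketch in Python. The exponential covariance and its parameters in the usage comment are illustrative assumptions, not choices from the thesis.

    import numpy as np

    def ordinary_krige(X, z, x0, cov):
        """Ordinary kriging prediction at x0 (stationary baseline sketch).

        X: (n, d) observation locations; z: (n,) observed values.
        cov: covariance as a function of separation distance.
        """
        n = len(z)
        d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        A = np.empty((n + 1, n + 1))
        A[:n, :n] = cov(d)
        A[n, :n] = A[:n, n] = 1.0          # unbiasedness constraint
        A[n, n] = 0.0
        b = np.append(cov(np.linalg.norm(X - x0, axis=1)), 1.0)
        w = np.linalg.solve(A, b)
        pred = w[:n] @ z
        var = cov(0.0) - w @ b             # kriging variance
        return pred, var

    # Usage with a hypothetical exponential covariance (sill 1, range 10):
    # pred, var = ordinary_krige(X, z, x0, lambda h: np.exp(-h / 10.0))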
36

Bare, Marshall Edwin. „Structuring empirical methods for reuse and efficiency in product development processes“. Diss., Brigham Young University, 2007. http://contentdm.lib.byu.edu/ETD/image/etd1676.pdf.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
37

Barnett, Phillip. „A Markedly Different Approach: Investigating PIE Stops Using Modern Empirical Methods“. UKnowledge, 2018. https://uknowledge.uky.edu/ltt_etds/28.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
Annotation:
In this thesis, I investigate a decades-old problem in the stop system of Proto-Indo-European (PIE): the paucity of */b/ in the forms reconstructed for this ancient, hypothetical language. As cross-linguistic evidence and phonological theory alone have fallen short of providing a satisfactory answer, I employ modern empirical methods of linguistic investigation, namely laboratory phonology experiments and computational database analysis. Following Byrd 2015, I advocate examining synchronic phenomena and behavior as a method for investigating diachronic change. In Chapter 1, I present an overview of the various proposed phonological systems of PIE and some of the explanations previously given for the enigmatic rarity of PIE */b/. Chapter 2 gives a detailed account of three laboratory phonology experiments I conducted to investigate perceptual confusability as a motivator of asymmetric merger within a system of stop consonants. Chapter 3 presents the preliminary form and findings of a computational database of reconstructed PIE forms that I created and have named the Database of Etymological Reconstructions Beginning in Proto-Indo-European (DERBiPIE). The final chapter, Chapter 4, summarizes the work and the conclusions that may be drawn from it, offering suggestions for continued work on this topic and others like it.
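A minimal Python sketch of the kind of confusability analysis such perception experiments support follows; the trial-log file, its column names, and the six-stop inventory are hypothetical, not the thesis's materials.

    import pandas as pd

    # Hypothetical trial log: one row per response, with the stop the
    # listener heard ("stimulus") and the stop they reported ("response").
    trials = pd.read_csv("perception_trials.csv")  # assumed file
    stops = ["p", "b", "t", "d", "k", "g"]
    conf = pd.crosstab(trials["stimulus"], trials["response"]).reindex(
        index=stops, columns=stops, fill_value=0)
    # Row-normalise so each cell is P(response | stimulus); asymmetries in
    # this matrix are candidate perceptual pressures toward merger.
    print(conf.div(conf.sum(axis=1), axis=0).round(2))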
38

Safoutin, Michael John. „A methodology for empirical measurement of iteration in engineering design processes“. Thesis, University of Washington, 2003. http://hdl.handle.net/1773/7111.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
39

Niazi, M. Khalid Khan. „Image Filtering Methods for Biomedical Applications“. Doctoral thesis, Uppsala universitet, Centrum för bildanalys, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-158679.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
Annotation:
Filtering is a key step in digital image processing and analysis. It is mainly used for amplification or attenuation of some frequencies, depending on the nature of the application. Filtering can be performed either in the spatial domain or in a transformed domain, and the selection of the filtering method, the filtering domain, and the filter parameters is often driven by the properties of the underlying image. This thesis presents three different kinds of biomedical image filtering applications in which the filter parameters are automatically determined from the underlying images. Filtering can be used for image enhancement: we present a robust, image-dependent filtering method for intensity inhomogeneity correction of biomedical images, in which the filter parameters are automatically determined from the grey-weighted distance transform of the magnitude spectrum. An evaluation shows that the filter provides an accurate estimate of intensity inhomogeneity. Filtering can also be used for analysis: the thesis presents a filtering method for heart localization and robust signal detection from video recordings of rat embryos, including a strategy to decouple motion artifacts produced by the non-rigid embryonic boundary from the heart. The method also filters out noise and the trend term with the help of empirical mode decomposition; again, all the filter parameters are determined automatically from the underlying signal. Transforming the geometry of one image to fit that of another, so-called image registration, can be seen as a filtering operation on the image geometry. To assess the progression of eye disorders, registration between temporal images is often required to determine the movement and development of the blood vessels in the eye. We present a robust method for retinal image registration based on particle swarm optimization, where the swarm searches for optimal registration parameters based on the direction of its cognitive and social components. An evaluation of the proposed method shows that it is less susceptible to becoming trapped in local minima than previous methods. With these thesis contributions, we have augmented the filter toolbox for image analysis with methods that adjust to the data at hand.
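To make the first application concrete, here is a minimal Python sketch of frequency-domain bias-field estimation. Unlike the thesis, which derives the filter parameters automatically from the grey-weighted distance transform of the magnitude spectrum, the cutoff here is a fixed, assumed parameter.

    import numpy as np

    def estimate_bias_field(img, cutoff=0.02):
        """Low-pass estimate of a smooth intensity-inhomogeneity (bias) field.

        cutoff: retained frequency radius as a fraction of the sampling
        frequency (an assumed fixed value in this sketch).
        """
        F = np.fft.fftshift(np.fft.fft2(img))
        ny, nx = img.shape
        y, x = np.ogrid[-ny // 2:ny - ny // 2, -nx // 2:nx - nx // 2]
        r = np.sqrt((y / ny) ** 2 + (x / nx) ** 2)  # normalised freq. radius
        F[r > cutoff] = 0.0                         # keep only low frequencies
        bias = np.real(np.fft.ifft2(np.fft.ifftshift(F)))
        # Multiplicative correction (assumes a positive-valued image).
        return img / np.maximum(bias, 1e-9)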
40

Winkler, Tobias. „Empirical models for grape vine leaf area estimation on cv. Trincadeira“. Master's thesis, ISA-UL, 2016. http://hdl.handle.net/10400.5/13008.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
Annotation:
Mestrado Vinifera Euromaster - Viticulture and Enology - Instituto Superior de Agronomia - UL / Institut National D'Etudes Superieures Agronomiques de Montpellier
Estimating a vineyard's leaf area is of great importance when evaluating the productive and quality potential of a vineyard and when characterizing the light and thermal microenvironments of grapevine plants. The aim of the present work was to validate the Lopes and Pinto method for determining vineyard leaf area in the vineyards of the Lisbon wine-growing region of Portugal, with the typical local red grape cultivar Trincadeira, and to improve prediction quality by providing cultivar-specific models. The presented models are based on independent datasets from the two consecutive years 2015 and 2016. Fruiting shoots were collected and analyzed at all phenological stages. Primary leaf area of shoots is estimated by models using a calculated variable obtained from the average of the largest and smallest primary leaf areas multiplied by the number of primary leaves, as presented by Lopes and Pinto (2005); lateral leaf area additionally uses the area of the biggest lateral leaf as a predictor. Models based on shoot length, shoot diameter, and number of lateral leaves were tested as less laborious alternatives. Although very fast and easy to assess, the models based on shoot length and diameter were not able to predict the variability of lateral leaf area sufficiently and were susceptible to canopy management. The Lopes and Pinto method is able to explain a very high proportion of the variability in both primary and lateral leaf area, independently of the phenological stage, as well as before and after trimming. It is an inexpensive, universal, practical, non-destructive method that does not require specialized staff or expensive equipment.
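A minimal Python sketch of fitting a Lopes-and-Pinto-style model for primary leaf area follows. The four data points are invented for illustration; the thesis's cultivar-specific coefficients are not reproduced here.

    import numpy as np

    # Hypothetical per-shoot field measurements:
    #   la_big, la_small - areas of the largest and smallest primary leaf (cm^2)
    #   n_leaves         - number of primary leaves on the shoot
    #   la_total         - destructively measured total primary leaf area (cm^2)
    la_big = np.array([120.0, 95.0, 160.0, 80.0])
    la_small = np.array([30.0, 22.0, 45.0, 18.0])
    n_leaves = np.array([14, 11, 17, 9])
    la_total = np.array([1010.0, 640.0, 1690.0, 430.0])

    # Lopes-and-Pinto-style predictor: mean of extreme leaf areas x leaf count
    x = (la_big + la_small) / 2.0 * n_leaves
    b, a = np.polyfit(x, la_total, 1)
    r2 = np.corrcoef(x, la_total)[0, 1] ** 2
    print(f"LA = {a:.1f} + {b:.3f} * x,  R^2 = {r2:.3f}")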
41

Phinopoulos, Victoras Georgios. „Estimation of leaf area in grapevine cv. Syrah using empirical models“. Master's thesis, ISA/UL, 2014. http://hdl.handle.net/10400.5/8631.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
Annotation:
Mestrado Vinifera EuroMaster - Instituto Superior de Agronomia
Empirical models for estimating the area of single primary and lateral leaves, and the total primary and lateral leaf area of a shoot, are presented for the grapevine cv. Syrah (Vitis vinifera L.). The area of single leaves is estimated with models using the sum of the lengths of the two lateral veins of each leaf, with logarithmic transformation of both variables; separate models are proposed for primary and lateral leaves. Models based on the Lopes and Pinto (2005) method, using mean leaf area multiplied by the number of leaves as the predictor, are proposed for estimating total primary and lateral leaf area. It is suggested that failure to locate the largest leaf of a primary or lateral shoot would not significantly impair the accuracy of the models. All models explain a very high proportion of the variability in leaf area, and they can be applied in research and viticulture for the frequent estimation of leaf area in any phase of the growing cycle. They are inexpensive, practical, non-destructive methods which do not require specialised staff or expensive equipment.
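In equation form, the single-leaf models described above take the following shape, written here in LaTeX with illustrative symbols (the fitted coefficients are cultivar-specific and not reproduced):

    \ln(\mathrm{LA}) = \beta_0 + \beta_1 \ln(L_{2V}) + \varepsilon,
    \qquad
    \widehat{\mathrm{LA}} = e^{\hat\beta_0}\, L_{2V}^{\hat\beta_1},

where LA is the area of a single leaf, L_{2V} is the sum of the lengths of its two lateral veins, and separate coefficient pairs are fitted for primary and lateral leaves.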
42

Storm, Hugo [author]. „Methods of analysis and empirical evidence of farm structural change“. Bonn: Universitäts- und Landesbibliothek Bonn, 2014. http://d-nb.info/1059476339/34.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
43

Sellberg, Charlott. „A comparative theoretical and empirical analysis of three methods for workplace studies“. Thesis, Högskolan i Skövde, Institutionen för kommunikation och information, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-5214.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
Annotation:
Workplace studies in Human-Computer Interaction (HCI) is a research field that has expanded rapidly in recent years. Today there is a wide range of theoretical approaches and methods to choose from, which makes methodological choices problematic in both research and system design. While several studies have assessed the different approaches to workplace studies, there seems to be a lack of studies exploring the theoretical and methodological differences between the more structured methods within the field. In this thesis, a comparative theoretical and empirical analysis of three methods for workplace studies is conducted to address the following research problem: what level of theoretical depth and methodological structure is appropriate when applying methods for workplace studies to inform the design of complex socio-technical systems? Using the two criteria of descriptive power and application power to assess Contextual Design (CD), Determining Information Flow Breakdown (DIB), and Capturing Semi-Automated Decision-Making (CASADEMA), important lessons are learned about which methods are acceptable and useful when the purpose is to inform system design.
44

Bierkamp, Nils. „Simulative portfolio optimization under distributions of hyperbolic type: methods and empirical investigation“. Aachen: Shaker, 2006. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=014986541&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
45

Jobmann, Anna-Lena [author]. „An Investigation of Empirical Scoring Methods for Ability Measurement“. Wuppertal: Universitätsbibliothek Wuppertal, 2018. http://d-nb.info/1164102958/34.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
46

Kouremenos, Athanasios G. „The use of quantitative methods in marketing : a theoretical and empirical analysis“. Thesis, University of Strathclyde, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.346410.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
47

Chehab, Rami. „Bootstrap methods for heavy-tail or autocorrelated distributions with an empirical application“. Thesis, University of Exeter, 2017. http://hdl.handle.net/10871/31563.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
Annotation:
Chapter One: The truncated wild bootstrap for the asymmetric infinite-variance case. The wild bootstrap method proposed by Cavaliere et al. (2013) for hypothesis testing on the location parameter in the location model, with errors in the domain of attraction of an asymmetric stable law, is inappropriate. We therefore introduce a new bootstrap test procedure that overcomes the failure of Efron's (1979) resampling bootstrap. This bootstrap test exploits the wild bootstrap of Cavaliere et al. (2013) and the central limit theorem for trimmed variables of Berkes et al. (2012) to deliver confidence sets with correct asymptotic coverage probabilities for asymmetric heavy-tailed data. The method entails locating cut-off values such that all data between them satisfy the conditions of the central limit theorem; since it takes advantage of both findings, the proposed bootstrap is termed the Truncated Wild Bootstrap (TWB). Simulation evidence on the quality of inference of the available bootstrap tests for this model reveals that, on most occasions, the TWB performs better than the parametric bootstrap (PB) of Cornea-Madeira & Davidson (2015). In addition, the TWB test scheme is superior to the PB because it can test the location parameter when the index of stability is below one, whereas the PB has no power in that case. The TWB is also superior to the PB when the tail index is close to 1 and the distribution is heavily skewed, unless the tail index is exactly 1 and the scale parameter is very high.

Chapter Two: A frequency-domain wild bootstrap for dependent data. In this chapter a resampling method is proposed for a stationary dependent time series, based on Rademacher wild bootstrap draws from the Fourier transform of the data. The main distinguishing feature of the method is that the bootstrap draws share their periodogram identically with the sample, implying sound properties under dependence of arbitrary form. A drawback of the basic procedure is that the bootstrap distribution of the mean is degenerate; we show that a simple Gaussian augmentation overcomes this difficulty. Monte Carlo evidence indicates a favourable comparison with alternative methods in tests of location and of significance in a regression model with autocorrelated shocks, and also in tests of unit roots.

Chapter Three: Frequency-based bootstrap methods for DC pension plan strategy evaluation. Using conventional bootstrap methods, such as the standard bootstrap and the moving block bootstrap, to produce long-run returns for ranking one strategy over another by its associated reward and risk can be misleading. In this chapter we therefore use a simple pension model, concerned mainly with long-term wealth accumulation, to assess different bootstrap methods for the first time in the pension literature. We find that the multivariate Fourier bootstrap gives the most satisfactory results in its ability to mimic the true distribution, as measured by Cramér-von Mises statistics. We also address the disagreement in the pension literature over selecting the best pension plan strategy, presenting a comprehensive comparison of strategies using different bootstrap procedures and different cash-flow performance (CFP) measures across a range of countries. We find that the bootstrap method plays a critical role in determining the optimal strategy, and that different CFP measures rank pension plans differently across countries and bootstrap methods.
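Chapter Two's construction can be illustrated compactly. Below is a minimal Python sketch (not the thesis's code) of one frequency-domain wild bootstrap draw; Rademacher sign flips leave the Fourier coefficients' magnitudes, and hence the periodogram, unchanged, while the scaling of the Gaussian augmentation for the mean is a simplifying assumption made here for illustration.

    import numpy as np

    def fourier_wild_bootstrap(x, rng):
        """One frequency-domain wild bootstrap draw (sketch of the idea)."""
        n = len(x)
        xc = x - x.mean()
        coef = np.fft.rfft(xc)
        signs = rng.choice([-1.0, 1.0], size=coef.shape)
        signs[0] = 1.0                  # DC term of the demeaned series is ~0
        draw = np.fft.irfft(coef * signs, n=n)
        # Gaussian augmentation for the otherwise degenerate mean
        # (assumed scaling, for illustration only):
        draw += x.mean() + rng.standard_normal() * xc.std() / np.sqrt(n)
        return draw

    rng = np.random.default_rng(0)
    x = rng.standard_normal(256)        # stand-in for a dependent series
    boot_means = [fourier_wild_bootstrap(x, rng).mean() for _ in range(999)]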
48

Geisbert, Jesse Stuart. „Hydrodynamic Modeling for Autonomous Underwater Vehicles Using Computational and Semi-Empirical Methods“. Thesis, Virginia Tech, 2007. http://hdl.handle.net/10919/33195.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
Annotation:
Buoyancy-driven underwater gliders, which locomote by modulating their buoyancy and their attitude with moving-mass actuators and inflatable bladders, are proving their worth as efficient long-distance, long-duration ocean sampling platforms. Gliders have the capability to travel thousands of kilometers without needing to stop or recharge, and there is a corresponding need for methods of hydrodynamic modeling. This thesis aims to determine the hydrodynamic parameters for the governing equations of motion of three autonomous underwater vehicles. The approach is twofold, using data obtained from computational flight tests and using a semi-empirical approach. The three vehicles on which this thesis focuses are two gliders (Slocum and XRay/Liberdade) and a third vehicle, the Virginia Tech Miniature autonomous underwater vehicle.
Master of Science
49

Nan, Yehong. „Empirical Study of Two Hypothesis Test Methods for Community Structure in Networks“. Thesis, North Dakota State University, 2019. https://hdl.handle.net/10365/31640.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
Annotation:
Many real-world network data can be formulated as graphs, where a binary relation exists between nodes. One of the fundamental problems in network data analysis is community detection: clustering the nodes into different groups. Statistically, this problem can be formulated as hypothesis testing: under the null hypothesis there is no community structure, while under the alternative hypothesis community structure exists. One method is to use the largest eigenvalue of the scaled adjacency matrix, proposed by Bickel and Sarkar (2016), which works for dense graphs. Another is the subgraph counting method proposed by Gao and Lafferty (2017a), valid for sparse networks. In this paper, we first empirically study the BS and GL methods to see whether either of them works for moderately sparse networks; second, we propose a subsampling method to reduce the computation of the BS method and run simulations to evaluate its performance.
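As an illustration of the first approach, here is a Python sketch of a Bickel-and-Sarkar-style largest-eigenvalue statistic. It omits the empirical bias corrections and the Tracy-Widom reference quantiles that the full test requires, so it should be read as a sketch of the construction rather than the complete procedure.

    import numpy as np

    def bs_test_statistic(A):
        """Largest-eigenvalue statistic in the spirit of Bickel and Sarkar (2016).

        A: symmetric 0/1 adjacency matrix with an empty diagonal. Under the
        Erdos-Renyi null, the scaled statistic is asymptotically Tracy-Widom.
        """
        n = A.shape[0]
        p_hat = A.sum() / (n * (n - 1))            # null edge probability
        P = np.full((n, n), p_hat)
        np.fill_diagonal(P, 0.0)
        A_scaled = (A - P) / np.sqrt((n - 1) * p_hat * (1 - p_hat))
        lam_max = np.linalg.eigvalsh(A_scaled)[-1]  # eigenvalues in ascending order
        return n ** (2.0 / 3.0) * (lam_max - 2.0)   # compare to Tracy-Widom quantiles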
50

Allen, Andrew J. „Combining Machine Learning and Empirical Engineering Methods Towards Improving Oil Production Forecasting“. DigitalCommons@CalPoly, 2020. https://digitalcommons.calpoly.edu/theses/2223.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
Annotation:
Current methods of production forecasting, such as decline curve analysis (DCA) or numerical simulation, require years of historical production data, and their accuracy is limited by the choice of model parameters. Traditional forecasting methods have proven challenging to apply to unconventional resources, which lack long production histories and have extremely variable model parameters. This research proposes a data-driven alternative to reservoir simulation and production forecasting techniques. We create a proxy-well model for predicting cumulative oil production by selecting statistically significant well completion parameters and reservoir information as independent predictor variables in regression-based models. Principal component analysis (PCA) is then applied to extract key features of a well's time-rate production profile and is used to estimate cumulative oil production. The efficacy of the models is examined on field data from over 400 wells in the Eagle Ford Shale in South Texas, supplied from an industry database. The results of this study can be used to help oil and gas companies determine the estimated ultimate recovery (EUR) of a well and, in turn, inform financial and operational decisions based on available production and well completion data.
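A minimal, self-contained Python sketch of the PCA-plus-regression pipeline described above follows. Synthetic arrays stand in for the industry database, and all names, sizes, and the choice of three components are assumptions for illustration.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression

    # Hypothetical inputs: rates[i] is well i's monthly oil rate over 24 months,
    # X_completion holds completion/reservoir covariates, y is cumulative oil.
    rng = np.random.default_rng(1)
    rates = rng.gamma(2.0, 50.0, size=(400, 24))
    X_completion = rng.normal(size=(400, 5))
    y = rates.sum(axis=1) + X_completion @ rng.normal(size=5) * 10

    # Extract key features of the time-rate profile with PCA, then regress.
    pca = PCA(n_components=3)
    profile_features = pca.fit_transform(rates)
    X = np.hstack([profile_features, X_completion])
    model = LinearRegression().fit(X, y)
    print("R^2 on training data:", round(model.score(X, y), 3))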
