Dissertations / Theses on the topic 'Prediction models'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Prediction models.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Haider, Peter. "Prediction with Mixture Models." PhD thesis, Universität Potsdam, 2013. http://opus.kobv.de/ubp/volltexte/2014/6961/.
Learning a model of the relationship between the input attributes and annotated target attributes of data instances serves two purposes. On the one hand, it enables the prediction of the target attribute for instances without annotation. On the other hand, the parameters of the model can provide useful insights into the structure of the data. If the data possess an inherent partition structure, it is natural to reflect this structure in the model. Such mixture models generate predictions by combining the individual predictions of the mixture components, which correspond to the partitions of the data. Often the partition structure is latent and must be inferred while learning the mixture model. A direct evaluation of the accuracy of the inferred partition structure is impossible in many cases, because no ground-truth reference data are available for comparison. However, it can be assessed indirectly by measuring the prediction accuracy of the mixture model based on it. This thesis addresses the interplay between improving prediction accuracy by uncovering latent partitionings in data and evaluating the estimated structure by measuring the accuracy of the resulting prediction model. In the application of filtering unwanted e-mails, the e-mails in the training set are latently partitioned into advertising campaigns. Uncovering this latent structure allows future e-mails to be filtered with very low false-positive rates. In this thesis, a Bayesian partitioning model is developed to model this partition structure. Knowledge of the partitioning of e-mails into campaigns also helps to determine which e-mails were sent at the instigation of the same network of infiltrated computers, so-called botnets. This is a further layer of latent partitioning.
Uncovering this latent structure makes it possible to increase the accuracy of e-mail filters and to defend effectively against distributed denial-of-service attacks. To this end, a discriminative partitioning model is derived in this thesis, based on the graph of observed e-mails. The partitionings inferred with this model are evaluated via their performance in predicting the campaigns of new e-mails. Furthermore, when classifying the content of an e-mail, statistical information about the sending server can be valuable. Learning a model that can use this information requires training data that contain server statistics. In order to also be able to use training data in which the server statistics are missing, a model is developed that is a mixture over potentially all imputations of them. A further application is the prediction of the navigation behaviour of users of a website. Here there is no a priori partitioning of the users. However, it is necessary to create a partitioning in order to understand different usage scenarios and to design different layouts for them. The presented approach simultaneously optimizes the model's ability both to determine the best partition and to generate predictions about behaviour by means of this partition. Each model is evaluated on real data and compared with reference methods. The results show that explicitly modelling the assumptions about the latent partition structure leads to improved predictions. In the cases where prediction accuracy cannot be optimized directly, adding a small number of higher-level, directly adjustable parameters proves useful.
Vaidyanathan, Sivaranjani. "Bayesian Models for Computer Model Calibration and Prediction." The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1435527468.
Charraud, Jocelyn, and Saez Adrian Garcia. "Bankruptcy prediction models on Swedish companies." Thesis, Umeå universitet, Företagsekonomi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-185143.
Rice, Nigel. "Multivariate prediction models in medicine." Thesis, Keele University, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.314647.
Brefeld, Ulf. "Semi-supervised structured prediction models." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2008. http://dx.doi.org/10.18452/15748.
Learning mappings between arbitrary structured input and output variables is a fundamental problem in machine learning. It covers many natural learning tasks and challenges the standard model of learning a mapping from independently drawn instances to a small set of labels. Potential applications include classification with a class taxonomy, named entity recognition, and natural language parsing. In these structured domains, labeled training instances are generally expensive to obtain while unlabeled inputs are readily available and inexpensive. This thesis deals with semi-supervised learning of discriminative models for structured output variables. The analytical techniques and algorithms of classical semi-supervised learning are lifted to the structured setting. Several approaches based on different assumptions of the data are presented. Co-learning, for instance, maximizes the agreement among multiple hypotheses while transductive approaches rely on an implicit cluster assumption. Furthermore, in the framework of this dissertation, a case study on email batch detection in message streams is presented. The involved tasks exhibit an inherent cluster structure and the presented solution exploits the streaming nature of the data. The different approaches are developed into semi-supervised structured prediction models and efficient optimization strategies thereof are presented. The novel algorithms generalize state-of-the-art approaches in structural learning such as structural support vector machines. Empirical results show that the semi-supervised algorithms lead to significantly lower error rates than their fully supervised counterparts in many application areas, including multi-class classification, named entity recognition, and natural language parsing.
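The transductive approaches mentioned above rest on the cluster assumption: unlabeled points that fall in the same cluster as labeled ones should share their label. A toy illustration of that idea, nearest-centroid self-training in NumPy (our own sketch under simplified assumptions, not the thesis's algorithm):

```python
import numpy as np

def self_training(X_lab, y_lab, X_unlab, rounds=5):
    """Nearest-centroid self-training: pseudo-label unlabeled points by the
    closest class centroid, then refit centroids on all data. Illustrates the
    cluster assumption behind transductive semi-supervised learning."""
    X, y = X_lab.copy(), y_lab.copy()
    for _ in range(rounds):
        # One centroid per class label (labels assumed to be 0..K-1).
        centroids = np.array([X[y == k].mean(axis=0) for k in np.unique(y)])
        d = np.linalg.norm(X_unlab[:, None, :] - centroids[None], axis=2)
        pseudo = d.argmin(axis=1)            # pseudo-label every unlabeled point
        X = np.vstack([X_lab, X_unlab])      # refit on labeled + pseudo-labeled
        y = np.concatenate([y_lab, pseudo])
    return centroids, y[len(y_lab):]
```

With one labeled point per cluster, the unlabeled points near each labeled point inherit its label, sharpening the decision boundary without extra annotation.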
Asterios, Geroukis. "Prediction of Linear Models: Application of Jackknife Model Averaging." Thesis, Uppsala universitet, Statistiska institutionen, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-297671.
Shrestha, Rakshya. "Deep soil mixing and predictive neural network models for strength prediction." Thesis, University of Cambridge, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.607735.
Grant, Stuart William. "Risk prediction models in cardiovascular surgery." Thesis, University of Manchester, 2014. https://www.research.manchester.ac.uk/portal/en/theses/risk-prediction-models-in-cardiovascular-surgery(1befbc5d-2aa6-4d24-8c32-e635cf55e339).html.
Jones, Margaret. "Point prediction in survival time models." Thesis, University of Newcastle Upon Tyne, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.340616.
Monsch, Matthieu (Matthieu Frederic). "Large scale prediction models and algorithms." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/84398.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 129-132).
Over 90% of the data available across the world has been produced over the last two years, and the trend is increasing. It has therefore become paramount to develop algorithms which are able to scale to very high dimensions. In this thesis we are interested in showing how we can use structural properties of a given problem to come up with models applicable in practice, while keeping most of the value of a large data set. Our first application provides a provably near-optimal pricing strategy under large-scale competition, and our second focuses on capturing the interactions between extreme weather and damage to the power grid from large historical logs. The first part of this thesis is focused on modeling competition in Revenue Management (RM) problems. RM is used extensively across a swathe of industries, ranging from airlines to the hospitality industry to retail, and the internet has, by reducing search costs for customers, potentially added a new challenge to the design and practice of RM strategies: accounting for competition. This work considers a novel approach to dynamic pricing in the face of competition that is intuitive, tractable and leads to asymptotically optimal equilibria. We also provide empirical support for the notion of equilibrium we posit. The second part of this thesis was done in collaboration with a utility company in the North East of the United States. In recent years, there have been a number of powerful storms that led to extensive power outages. We provide a unified framework to help power companies reduce the duration of such outages. We first train a data-driven model to predict the extent and location of damage from weather forecasts. This information is then used in a robust optimization model to optimally dispatch repair crews ahead of time. Finally, we build an algorithm that uses incoming customer calls to compute the likelihood of damage at any point in the electrical network.
by Matthieu Monsch.
Ph.D.
Wanigasekara, Prashan. "Latent state space models for prediction." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/106269.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 95-98).
In this thesis, I explore a novel algorithm to model the joint behavior of multiple correlated signals. Our chosen example is the ECG (electrocardiogram) and ABP (arterial blood pressure) signals from patients in the ICU (intensive care unit). I then use the generated models to predict blood pressure levels of ICU patients based on their historical ECG and ABP signals. The algorithm used is a variant of a hidden Markov model, and the new extension is termed the Latent State Space Copula Model. In this model, the ECG and ABP signals are considered to be correlated and are modeled using a bivariate Gaussian copula with Weibull marginals generated by a hidden state. We assume that there are hidden patient "states" that transition from one to another, driving a joint ECG-ABP behavior. We estimate the parameters of the model using a novel Gibbs sampling approach. Using this model, we generate predictors that are the state probabilities at any given time step and use them to predict a patient's future health condition. The predictions made by the model are binary and detect whether the mean arterial pressure (MAP) will be above or below a certain threshold at a future time step. Towards the end of the thesis I compare the new Latent State Space Copula Model with a state-of-the-art classical discrete HMM. The Latent State Space Copula Model achieves an area under the ROC curve (AUROC) of .7917 for 5 states, while the classical discrete HMM achieves an AUROC of .7609 for 5 states.
by Prashan Wanigasekara.
S.M. in Engineering and Management
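The emission distribution in the abstract above, a bivariate Gaussian copula with Weibull marginals, can be sampled with elementary ingredients: correlated standard normals, the normal CDF, and the inverse Weibull CDF. A hedged sketch (the function name and all parameter values are our own, chosen purely for illustration):

```python
import math
import numpy as np

def sample_copula_weibull(n, rho, k1, lam1, k2, lam2, seed=0):
    """Draw n pairs from a bivariate Gaussian copula with Weibull marginals.

    rho is the copula correlation; (k, lam) are Weibull shape/scale pairs.
    """
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((n, 2))
    z[:, 1] = rho * z[:, 0] + math.sqrt(1 - rho**2) * z[:, 1]  # correlate normals
    u = 0.5 * (1 + np.vectorize(math.erf)(z / math.sqrt(2)))   # standard normal CDF
    x1 = lam1 * (-np.log(1 - u[:, 0])) ** (1 / k1)             # inverse Weibull CDF
    x2 = lam2 * (-np.log(1 - u[:, 1])) ** (1 / k2)
    return np.column_stack([x1, x2])
```

In a state-space version, each hidden state would carry its own (rho, k, lam) parameters; the copula separates the dependence structure from the marginal shapes.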
Smith, Christopher P. "Surrogate models for aerodynamic performance prediction." Thesis, University of Surrey, 2015. http://epubs.surrey.ac.uk/808464/.
Nsolo, Edward. "Prediction models for soccer sports analytics." Thesis, Linköpings universitet, Databas och informationsteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-149033.
Full textStone, Peter H. "Climate Prediction: The Limits of Ocean Models." MIT Joint Program on the Science and Policy of Global Change, 2004. http://hdl.handle.net/1721.1/4056.
Abstract in HTML and technical report in PDF available on the Massachusetts Institute of Technology Joint Program on the Science and Policy of Global Change website (http://mit.edu/globalchange/www/).
Faull, Nicholas Eric. "Ensemble climate prediction with coupled climate models." Thesis, University of Oxford, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.442944.
Pasiouras, Fotios. "Development of bank acquisition targets prediction models." Thesis, Coventry University, 2005. http://curve.coventry.ac.uk/open/items/ecf1b00d-da92-9bd2-5b02-fa4fab8afb0c/1.
Gray, Eoin. "Validating and updating lung cancer prediction models." Thesis, University of Sheffield, 2018. http://etheses.whiterose.ac.uk/19206/.
Pencina, Karol M. "Quantification of improvement in risk prediction models." Thesis, Boston University, 2012. https://hdl.handle.net/2144/32045.
The identification of new factors in modeling the probability of a binary outcome is the main challenge for statisticians and clinicians who want to improve risk prediction. This motivates researchers to search for measures that quantify the performance of new markers. There are many commonly used measures that assess the performance of a binary outcome model: logistic R-squares, the discrimination slope, the area under the ROC (receiver operating characteristic) curve (AUC) and the Hosmer-Lemeshow goodness-of-fit chi-square. However, metrics that work well for model assessment may not be as good for quantifying the usefulness of new risk factors, especially when we add a new predictor to a well performing baseline model. The recently proposed measures of improvement, the Integrated Discrimination Improvement (IDI) - the difference between discrimination slopes - and the Net Reclassification Improvement (NRI), directly address the question of model performance and take it beyond the simple statistical significance of a new risk factor. Since these two measures are new and have not been studied as extensively as the older ones, a question of their interpretation naturally arises. In our research we propose meaningful interpretations of the new measures as well as extensions that address some of their potential shortcomings. Following the derivation of the maximum R-squared by Nagelkerke, we show how the IDI, which depends on the event rate, can be rescaled by its hypothetical maximum value to reduce this dependence. Furthermore, the IDI metric assumes a uniform distribution for all risk cut-offs. Application of clinically important thresholds prompted us to derive a metric that includes a prior distribution for the cut-off points and assigns different weights to sensitivity and specificity. Similarly, we propose the maximum and rescaled NRI.
The latter is based on counting the number of categories by which the risk of a given person moved and guarantees that reclassification tables with equal marginal probabilities will lead to a zero NRI. All developments are investigated employing numerical simulations under the assumption of normality and varying effect sizes of the associations. We also illustrate the proposed concepts using examples from the Framingham Heart Study.
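Both measures discussed above have simple empirical forms: the IDI is the difference in discrimination slopes (mean predicted risk among events minus among non-events) between the new and baseline models, and the category-based NRI adds net upward reclassification among events to net downward reclassification among non-events. A minimal sketch of these formulas (variable names are ours; illustrative only):

```python
import numpy as np

def discrimination_slope(risk, y):
    """Mean predicted risk among events minus mean among non-events."""
    risk, y = np.asarray(risk, float), np.asarray(y)
    return risk[y == 1].mean() - risk[y == 0].mean()

def idi(risk_new, risk_base, y):
    """Integrated Discrimination Improvement: difference in slopes."""
    return discrimination_slope(risk_new, y) - discrimination_slope(risk_base, y)

def categorical_nri(risk_new, risk_base, y, cuts):
    """Net Reclassification Improvement over the given risk category cut-offs."""
    y = np.asarray(y)
    cat_new = np.digitize(risk_new, cuts)
    cat_base = np.digitize(risk_base, cuts)
    up, down = cat_new > cat_base, cat_new < cat_base
    nri_events = up[y == 1].mean() - down[y == 1].mean()
    nri_nonevents = down[y == 0].mean() - up[y == 0].mean()
    return nri_events + nri_nonevents
```

The thesis's rescaled variants would divide these quantities by their hypothetical maxima, which depend on the event rate.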
Wang, Yangzhengxuan. "Corporate default prediction : models, drivers and measurements." Thesis, University of Exeter, 2011. http://hdl.handle.net/10036/3457.
Mateus, Ana Teresa Moreirinha Vila Fernandes. "Quality management in laboratories - Efficiency prediction models." Doctoral thesis, Universidade de Évora, 2021. http://hdl.handle.net/10174/29338.
Cordeiro, Silvio Ricardo. "Distributional models of multiword expression compositionality prediction." Thesis, Aix-Marseille, 2017. http://www.theses.fr/2017AIXM0501/document.
Natural language processing systems often rely on the idea that language is compositional, that is, the meaning of a linguistic entity can be inferred from the meaning of its parts. This expectation fails in the case of multiword expressions (MWEs). For example, a person who is a "sitting duck" is neither a duck nor necessarily sitting. Modern computational techniques for inferring word meaning based on the distribution of words in the text have been quite successful at multiple tasks, especially since the rise of word embedding approaches. However, the representation of MWEs still remains an open problem in the field. In particular, it is unclear how one could predict from corpora whether a given MWE should be treated as an indivisible unit (e.g. "nut case") or as some combination of the meaning of its parts (e.g. "engine room"). This thesis proposes a framework of MWE compositionality prediction based on representations of distributional semantics, which we instantiate under a variety of parameters. We present a thorough evaluation of the impact of these parameters on three new datasets of MWE compositionality, encompassing English, French and Portuguese MWEs. Finally, we present an extrinsic evaluation of the predicted levels of MWE compositionality on the task of MWE identification. Our results suggest that the proper choice of distributional model and corpus parameters can produce compositionality predictions that are comparable to the state of the art.
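A common way to instantiate such a framework is to score an MWE's compositionality as the cosine similarity between the vector of the whole expression and a combination (for instance the average) of its component word vectors. A toy sketch with made-up vectors (real systems use corpus-trained embeddings; the exact scoring function in the thesis may differ):

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def compositionality(vec_mwe, vec_parts):
    """Score: cosine between the MWE vector and the average of its parts."""
    return cosine(vec_mwe, np.mean(vec_parts, axis=0))
```

Under this score, "engine room" should come out more compositional than "nut case" whenever its corpus vector lies close to the average of the vectors for "engine" and "room".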
Cordeiro, Silvio Ricardo. "Distributional models of multiword expression compositionality prediction." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2018. http://hdl.handle.net/10183/174519.
Natural language processing systems often rely on the idea that language is compositional, that is, the meaning of a linguistic entity can be inferred from the meaning of its parts. This expectation fails in the case of multiword expressions (MWEs). For example, a person who is a sitting duck is neither a duck nor necessarily sitting. Modern computational techniques for inferring word meaning based on the distribution of words in the text have been quite successful at multiple tasks, especially since the rise of word embedding approaches. However, the representation of MWEs still remains an open problem in the field. In particular, it is unclear how one could predict from corpora whether a given MWE should be treated as an indivisible unit (e.g. nut case) or as some combination of the meaning of its parts (e.g. engine room). This thesis proposes a framework of MWE compositionality prediction based on representations of distributional semantics, which we instantiate under a variety of parameters. We present a thorough evaluation of the impact of these parameters on three new datasets of MWE compositionality, encompassing English, French and Portuguese MWEs. Finally, we present an extrinsic evaluation of the predicted levels of MWE compositionality on the task of MWE identification. Our results suggest that the proper choice of distributional model and corpus parameters can produce compositionality predictions that are comparable to the state of the art.
Gao, Sheng. "Latent factor models for link prediction problems." Paris 6, 2012. http://www.theses.fr/2012PA066056.
With the rise of the Internet and modern social media, relational data - data in which objects are linked to each other by various relation types - has become ubiquitous. Accordingly, relational learning techniques have been studied in a large variety of applications, such as recommender systems, social network analysis, Web mining and bioinformatics. Among the wide range of tasks encompassed by relational learning, we address the problem of link prediction in this thesis. Link prediction has arisen as a fundamental task in relational learning: it considers predicting the presence or absence of links between objects in relational data based on the topological structure of the network and/or the attributes of objects. However, the complexity and sparsity of network structure make this a very challenging problem. In this thesis, we propose solutions to reduce the difficulties of learning and fit various models to corresponding applications. In Chapter 3 we present a unified framework of latent factor models to address the generic link prediction problem, in which we discuss various configurations of the models from a computational perspective and a probabilistic view. Then, according to the applications addressed in this dissertation, we propose different latent factor models for two classes of link prediction problems: (i) structural link prediction and (ii) temporal link prediction. For the structural link prediction problem, in Chapter 4 we define a new task called Link Pattern Prediction (LPP) in multi-relational networks. By introducing a specific latent factor for different relation types in addition to latent feature factors characterizing objects, we develop a computational tensor factorization model, and its probabilistic version with a Bayesian treatment, to reveal the intrinsic causality of interaction patterns in multi-relational networks.
Moreover, considering the complex structural patterns in relational data, in Chapter 5 we propose a novel model that simultaneously incorporates the effect of latent feature factors and the impact of the latent cluster structures in the network, and we develop an optimization transfer algorithm to facilitate the model learning procedure. For the temporal link prediction problem in time-evolving networks, in Chapter 6 we propose a unified latent factor model which integrates multiple information sources in the network - the global network structure, the content of objects and the graph proximity information - to capture the time-evolving patterns of links. This joint model is constructed based on matrix factorization and graph regularization techniques. Each model proposed in this thesis achieves state-of-the-art performance; extensive experiments are conducted on real-world datasets to demonstrate significant improvements over baseline methods. Almost all of them have been published in international or national peer-reviewed conference proceedings.
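At the core of the structural variants above is low-rank matrix factorization of the adjacency matrix: observed links constrain latent feature factors whose inner products score unobserved pairs. A minimal gradient-descent sketch (learning rate, regularization and rank are arbitrary choices of ours, not values from the thesis):

```python
import numpy as np

def factorize_links(A, rank=2, lr=0.05, reg=0.01, epochs=500, seed=0):
    """Fit A ~ U @ V.T by gradient descent on squared error over all entries."""
    rng = np.random.default_rng(seed)
    n, m = A.shape
    U = 0.1 * rng.standard_normal((n, rank))
    V = 0.1 * rng.standard_normal((m, rank))
    for _ in range(epochs):
        E = A - U @ V.T                  # residual on every entry
        U += lr * (E @ V - reg * U)      # gradient step with L2 shrinkage
        V += lr * (E.T @ U - reg * V)
    return U, V
```

The score for an unobserved pair (i, j) is then simply `U[i] @ V[j]`; in the thesis's temporal variant, graph-regularization terms would be added to this objective.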
Zhang, Ruomeng. "Evaluation of Current Concrete Creep Prediction Models." University of Toledo / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1461963600.
Rantalainen, Mattias John. "Multivariate prediction models for bio-analytical data." Thesis, Imperial College London, 2008. http://hdl.handle.net/10044/1/1393.
Schwiegerling, James Theodore. "Visual performance prediction using schematic eye models." Diss., The University of Arizona, 1995. http://hdl.handle.net/10150/187327.
Rossi, A. "Predictive Models in Sport Science: Multi-dimensional Analysis of Football Training and Injury Prediction." Doctoral thesis, Università degli Studi di Milano, 2017. http://hdl.handle.net/2434/495229.
Eadie, Edward Norman. "Small resource stock share price behaviour and prediction." Title page, contents and abstract only, 2002. http://web4.library.adelaide.edu.au/theses/09CM/09cme11.pdf.
Gamalielsson, Jonas. "Models for Protein Structure Prediction by Evolutionary Algorithms." Thesis, University of Skövde, Department of Computer Science, 2001. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-623.
Evolutionary algorithms (EAs) have been shown to be competent at solving complex, multimodal optimisation problems in applications where the search space is large and badly understood. EAs are therefore among the most promising classes of algorithms for solving the Protein Structure Prediction Problem (PSPP), the problem of deriving the 3D structure of a protein given only its sequence of amino acids. This dissertation defines, evaluates and shows limitations of simplified models for solving the PSPP. These simplified models are off-lattice extensions of the lattice HP model, which has been claimed to possess some of the properties of real protein folding, such as the formation of a hydrophobic core. Lattice models usually model a protein at the amino acid level of detail, use simple energy calculations and are used mainly for search algorithm development. Off-lattice models usually model the protein at the atomic level of detail, use more complex energy calculations and may be used for comparison with real proteins. The idea is to combine the fast energy calculations of lattice models with the increased spatial possibilities of an off-lattice environment, allowing comparison with real protein structures. A hypothesis is presented which claims that a simplified off-lattice model that considers other amino acid properties apart from hydrophobicity will yield simulated structures with lower root mean square deviation (RMSD) from the native fold than a model considering only hydrophobicity. The hypothesis holds for four of five tested short proteins with a maximum of 46 residues. The best average RMSD for any model tested is above 6 Å, i.e. too high for useful structure prediction, and excludes significant resemblance between native and simulated structures. Hence, the tested models do not contain the necessary biological information to capture the complex interactions of real protein folding.
It is also shown that the EA itself is competent and can produce near-native structures if given a suitable evaluation function. Hence, EAs are useful for eventually solving the PSPP.
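In the lattice HP model that these off-lattice extensions start from, a conformation's energy is simply the negative count of hydrophobic (H) residues that are lattice neighbours without being sequence neighbours. A compact sketch for the 2D square lattice (our own toy evaluation function, not code from the dissertation):

```python
def hp_energy(sequence, coords):
    """Energy of a 2D lattice conformation under the HP model.

    sequence: string of 'H'/'P'; coords: list of (x, y) lattice points,
    one per residue. Each H-H contact between residues that are adjacent
    on the lattice but not adjacent in the sequence contributes -1.
    """
    energy = 0
    for i in range(len(sequence)):
        for j in range(i + 2, len(sequence)):       # skip sequence neighbours
            if sequence[i] == sequence[j] == 'H':
                (xi, yi), (xj, yj) = coords[i], coords[j]
                if abs(xi - xj) + abs(yi - yj) == 1:  # lattice contact
                    energy -= 1
    return energy
```

An EA then searches over conformations for the minimum of this function; the off-lattice extensions in the thesis replace the integer coordinates and contact test with continuous geometry.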
Mousavi, Biouki Seyed Mohammad Mahdi. "Design and performance evaluation of failure prediction models." Thesis, University of Edinburgh, 2017. http://hdl.handle.net/1842/25925.
Gendron-Bellemare, Marc. "Learning prediction and abstraction in partially observable models." Thesis, McGill University, 2007. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=18471.
For several decades, Markov models have formed one of the foundations of Artificial Intelligence. When the modelled environment is only partially observable, however, they remain unsatisfactory. It is known that optimal decision making in certain problems requires an infinite history. On the other hand, appealing to the concept of hidden state (as through the use of POMDPs) implies a higher computational cost. To remedy this problem, I propose a model that uses a concise representation of the history. Rather than storing a perfect model of the transition probabilities, my approach employs a linear approximation of the probability distributions. The proposed method is a compromise between partially and fully observable models. I also discuss the construction of history-based features to improve the linear approximation. Small examples are presented to show that a linear approximation of certain Markov models can be achieved. Empirical results on feature construction are also presented to illustrate the benefits of my approach.
Svensson, Jacob. "Boundary Layer Parametrization in Numerical Weather Prediction Models." Licentiate thesis, Stockholms universitet, Meteorologiska institutionen (MISU), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-117134.
Simulating stably stratified atmospheric boundary layers and the diurnal cycle of the boundary layer correctly has proven to be a major challenge for numerical weather prediction (NWP) models. The aim of this study is to evaluate, describe and suggest improvements to the description of the boundary layer in NWP models. The study comprises two papers. The first focuses on the description of the land surface and the interaction between the surface and the boundary layer in the regional NWP model COAMPS. The description of the land surface was found to have a significant impact on the structure of the boundary layer. It also emerged that the radiation calculations are performed only once an hour, which among other things causes a bias in the incoming solar radiation at the surface. The second paper focuses on the description of turbulent transport in stably stratified boundary layers. An implementation of a diffusion parametrization based on turbulent kinetic energy (TKE) is tested in a one-dimensional version of the NWP model Integrated Forecast System (IFS), developed at the European Centre for Medium-Range Weather Forecasts (ECMWF). The TKE-based diffusion parametrization is equivalent to the current operational parametrization in neutral and convective boundary layers, but is less diffusive in stable boundary layers. The intensity of the diffusion depends on the turbulent length scale. Furthermore, the turbulence in the TKE formulation can enter an oscillating regime if the turbulence is weak while the temperature and wind gradients are strong. This oscillation can be prevented by limiting the minimum allowed value of the length scale.
Voigt, Alexander. "Mass spectrum prediction in non-minimal supersymmetric models." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-152797.
Hawkes, Richard Nathanael. "Linear state models for volatility estimation and prediction." Thesis, Brunel University, 2007. http://bura.brunel.ac.uk/handle/2438/7138.
Full textAMARAL, BERNARDO HALLAK. "PREDICTION OF FUTURE VOLATILITY MODELS: BRAZILIAN MARKET ANALYSIS." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2012. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=20458@1.
Predicting future volatility is a subject that intrigues scholars, researchers and financial market practitioners. The model and methodology used in the calculation are fundamental to the pricing of options, and depending on the variables used, the result becomes very sensitive, yielding different outcomes. All this can lead to inaccurate calculations and to companies and investors structuring the wrong strategies for buying and selling stocks and options. The objective of this work is therefore to apply several models for forecasting future volatility and to analyse the results, evaluating which model is best employed and thus allowing a better prediction of future volatility.
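The abstract does not name the models compared, so as a minimal illustration of the kind of forecast involved, here is the classic exponentially weighted moving average (RiskMetrics-style) volatility estimator — our choice of example, not necessarily one evaluated in the thesis:

```python
import numpy as np

def ewma_volatility(returns, lam=0.94):
    """One-step-ahead volatility forecast via an EWMA of squared returns.

    lam is the decay factor (0.94 is the classic RiskMetrics daily value).
    """
    var = returns[0] ** 2                 # initialise with the first observation
    for r in returns[1:]:
        var = lam * var + (1 - lam) * r ** 2
    return var ** 0.5
```

More elaborate candidates (GARCH-family models, implied volatility from option prices) differ mainly in how they weight past information and incorporate market expectations.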
Qarmalah, Najla Mohammed A. "Finite mixture models : visualisation, localised regression, and prediction." Thesis, Durham University, 2018. http://etheses.dur.ac.uk/12486/.
Full text
Li, Edwin. "LSTM Neural Network Models for Market Movement Prediction." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-231627.
Full text
Understanding and being able to predict how an index varies with time and other parameters is an important problem in capital markets. Time series analysis with autoregressive methods has been around for decades and has usually produced good results. These methods, however, lack the ability to explain trends and cyclical variations in the time series, something that can be characterized by time-varying relationships, but also by relationships between the parameters the index depends on. The purpose of this study is to investigate whether recurrent neural networks (RNNs) with long short-term memory (LSTM) cells can be used to capture these relationships, so that they can ultimately serve as a model to support index trading. The experiments are run on a modified S&P 500 dataset, and two distinct models have been developed. One is a multivariate regression model for predicting exact values, and the other is a multivariate classifier that predicts the direction of the next day's index movement. For the configuration presented in the report, the experiments show that LSTM RNNs are not well suited to predicting exact index values, but give satisfactory results when the model predicts the future direction of the index.
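The LSTM gating mechanism this abstract relies on can be sketched with a single NumPy cell. The weights, window length, and output head below are random placeholders for illustration, not the thesis's trained model:

```python
import numpy as np

def lstm_cell(x, h, c, W, U, b):
    """One LSTM step: input/forget/output gates and candidate state
    computed from input x and previous hidden state h."""
    z = W @ x + U @ h + b                  # stacked pre-activations, shape (4*H,)
    H = h.shape[0]
    i = 1 / (1 + np.exp(-z[:H]))           # input gate
    f = 1 / (1 + np.exp(-z[H:2*H]))        # forget gate
    o = 1 / (1 + np.exp(-z[2*H:3*H]))      # output gate
    g = np.tanh(z[3*H:])                   # candidate cell state
    c_new = f * c + i * g                  # forget old memory, add new
    h_new = o * np.tanh(c_new)
    return h_new, c_new

# Toy usage: score next-day direction from a 20-day window of returns
rng = np.random.default_rng(1)
D, H = 1, 8                                # input size, hidden size
W = rng.normal(0, 0.1, (4 * H, D))
U = rng.normal(0, 0.1, (4 * H, H))
b = np.zeros(4 * H)
w_out = rng.normal(0, 0.1, H)              # hypothetical classification head

h, c = np.zeros(H), np.zeros(H)
for r in rng.normal(0, 0.02, 20):
    h, c = lstm_cell(np.array([r]), h, c, W, U, b)
p_up = 1 / (1 + np.exp(-(w_out @ h)))      # P(next-day move is up)
print(0.0 < p_up < 1.0)
```

In practice the thesis uses a deep-learning framework rather than a hand-rolled cell; the sketch only makes the forget/input/output gating explicit.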
Rogge-Solti, Andreas, Laura Vana, and Jan Mendling. "Time Series Petri Net Models - Enrichment and Prediction." CEUR Workshop Proceedings, 2015. http://epub.wu.ac.at/5394/1/paper8.pdf.
Full text
Fernando, Warnakulasuriya Chandima. "Blood Glucose Prediction Models for Personalized Diabetes Management." Thesis, North Dakota State University, 2018. https://hdl.handle.net/10365/28179.
Full text
Bratières, Sébastien. "Non-parametric Bayesian models for structured output prediction." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/274973.
Full text
Gorthi, Swathi. "Prediction Models for Estimation of Soil Moisture Content." DigitalCommons@USU, 2011. https://digitalcommons.usu.edu/etd/1090.
Full text
Cutugno, Carmen. "Statistical models for the corporate financial distress prediction." Thesis, Università degli Studi di Catania, 2011. http://hdl.handle.net/10761/283.
Full text
Kumar, Akhil. "Budget-Related Prediction Models in the Business Environment with Special Reference to Spot Price Predictions." Thesis, North Texas State University, 1986. https://digital.library.unt.edu/ark:/67531/metadc331533/.
Full text
Sawert, Marcus. "Predicting deliveries from suppliers : A comparison of predictive models." Thesis, Mittuniversitetet, Institutionen för informationssystem och –teknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-39314.
Full text
Wiseman, Scott. "Bayesian learning in graphical models." Thesis, University of Kent, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.311261.
Full text
Hu, Zhongbo. "Atmospheric artifacts correction for InSAR using empirical model and numerical weather prediction models." Doctoral thesis, Universitat Politècnica de Catalunya, 2019. http://hdl.handle.net/10803/668264.
Full text
InSAR techniques have demonstrated unprecedented capability and merit for monitoring large-scale ground deformation with centimetre or even millimetre accuracy. However, several factors affect the reliability and accuracy of their applications. Among them, atmospheric artifacts caused by spatial and temporal variations in the state of the atmosphere often add noise to the interferograms. Mitigating atmospheric artifacts therefore remains one of the greatest challenges facing the InSAR community. State-of-the-art research has shown that atmospheric artifacts can be partially compensated with empirical models, temporal-spatial filtering approaches in InSAR time series, GPS zenith path delays, and numerical weather prediction models. In this thesis, we first develop a covariance-weighted linear empirical model correction method. Second, a realistic LOS-direction integration approach based on global reanalysis data is employed and compared exhaustively with the conventional method, which integrates along the zenith direction. Finally, the realistic integration method is applied to data from the local WRF numerical forecast model, and detailed comparisons between different global reanalysis datasets and the local WRF model are evaluated. Regarding correction methods based on empirical models, many publications have studied the correction of the stratified tropospheric phase delay by assuming a linear relationship between it and the topography. However, most of these studies have not considered the effect of turbulent atmospheric artifacts when fitting the linear model to the data. This thesis presents an improved technique that minimizes the influence of the turbulent atmosphere on the model fit.
In the proposed algorithm, the model is fitted to the phase differences between pixels instead of using the unwrapped phase of each pixel. Moreover, the phase differences are weighted according to their APS covariance, estimated from an empirical variogram, to reduce the impact on the model fit of pixel pairs affected by significant atmospheric turbulence. The performance of the proposed method has been validated with simulated and real Sentinel-1 SAR data over the island of Tenerife, Spain. Regarding methods that use meteorological observations to mitigate the APS, a realistic and accurate computation strategy based on global atmospheric reanalysis data has been implemented. In this approach, the realistic LOS path between the satellite and the monitored points is considered, instead of converting from the zenith path delay. Compared with the zenith-delay-based method, its greatest advantage is that it avoids errors caused by anisotropic atmospheric behaviour. The accurate integration method is validated with Sentinel-1 data at three test sites: the island of Tenerife, Spain; Almería, Spain; and the island of Crete, Greece. Compared with the conventional zenith method, the realistic integration method shows a substantial improvement.
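A much-simplified sketch of the idea in this abstract: fit a linear phase-versus-height model on pixel-pair phase differences, weighting each pair by the inverse variance of its turbulence difference. All numbers are synthetic; in the thesis the weights come from an APS covariance estimated via an empirical variogram, not from known per-pixel variances as assumed here:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
height = rng.uniform(0, 2000, n)              # pixel elevations (m), synthetic
k_true = 0.005                                # stratified delay rate (rad/m), synthetic
sigma = rng.uniform(0.1, 1.0, n)              # per-pixel turbulence std (rad)
phase = k_true * height + rng.normal(0, sigma)

# Form random pixel pairs; differencing cancels any constant phase offset.
i = rng.integers(0, n, 2000)
j = rng.integers(0, n, 2000)
dphi = phase[i] - phase[j]
dh = height[i] - height[j]

# Inverse-variance weights for each pair's turbulence difference
w = 1.0 / (sigma[i] ** 2 + sigma[j] ** 2)

# Weighted least-squares slope: estimate of the phase-vs-height rate
k_hat = np.sum(w * dh * dphi) / np.sum(w * dh * dh)
print(abs(k_hat - k_true) < 1e-3)
```

Down-weighting pairs with large turbulence variance keeps the turbulent APS from biasing the stratified-delay estimate, which is the effect the weighted fit is after.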
Almeida, Mara Elisabeth Monteiro. "Bankruptcy prediction models: an analysis for Portuguese SMEs." Master's thesis, 2020. http://hdl.handle.net/1822/67108.
Full text
This project intends to test the models developed by Altman (1983) and Ohlson (1980) and assess the predictive capacity of these models when applied to a dataset of Portuguese SMEs. This work is the result of a partnership with a Portuguese startup called nBanks, which dedicates its activity to providing financial services to its customers. In this sense, this project will allow nBanks to develop a new and innovative instrument that will let its customers access information on their probability of bankruptcy or risk of default. The data were collected from the Amadeus database for the period between 2011 and 2018. The dataset consists of 194,979 companies, of which 2,913 are in distress and the remaining 192,066 are healthy. From the application of the models, it was concluded that the O-score, using a cut-off of 3.8%, is better than the Z''-score, as it is the model that minimizes the error of a company in distress being classified as healthy, although the Z''-score presents the best overall accuracy. The O-score model is better than the Z''-score at forecasting financial distress when considering the group of companies in distress. It is also concluded that when the period up to 5 years before financial distress is analyzed, the accuracy of the models decreases as the number of years before the event increases. An analysis of the top 25% of the companies classified as distressed, based on the results of the O-score, showed that they are medium-sized companies, concentrated in the North of Portugal and in the wholesale trade sector, except for motor vehicles and motorcycles.
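The classification logic in the abstract above can be sketched as follows. The Z'' coefficients are the commonly reported ones for Altman's (1983) four-variable model and the 1.1 distress cut-off is the usual textbook value; the thesis's exact specification and the full Ohlson logit (nine variables) may differ, so treat this as an illustration only:

```python
import math

def altman_z2(wc_ta, re_ta, ebit_ta, bve_tl):
    """Altman (1983) Z''-score with the commonly reported coefficients:
    working capital/TA, retained earnings/TA, EBIT/TA, book equity/liabilities."""
    return 6.56 * wc_ta + 3.26 * re_ta + 6.72 * ebit_ta + 1.05 * bve_tl

def ohlson_probability(o_score):
    """Ohlson (1980) maps the logit score O to a failure probability."""
    return 1.0 / (1.0 + math.exp(-o_score))

# Hypothetical firm with weak liquidity and profitability (made-up ratios)
z = altman_z2(wc_ta=-0.05, re_ta=0.02, ebit_ta=-0.01, bve_tl=0.4)
print(z < 1.1)                            # below the usual Z'' distress cut-off

# The project flags a firm as distressed when P(failure) > 3.8%
print(ohlson_probability(-2.0) > 0.038)   # P ~ 11.9% -> flagged as distressed
```

The 3.8% cut-off trades overall accuracy for fewer distressed firms misclassified as healthy, which is exactly the trade-off the abstract reports between the O-score and the Z''-score.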
Yuan, Yan. "Prediction Performance of Survival Models." Thesis, 2008. http://hdl.handle.net/10012/3974.
Full text
Auerbach, Jonathan Lyle. "Some Statistical Models for Prediction." Thesis, 2020. https://doi.org/10.7916/d8-gcvm-jj03.
Full text
Lin, Hsin-Yin, and 林欣穎. "The Application of Grey Prediction Models to Depreciation Expense Prediction." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/d6u94c.
Full text
National Taipei University of Technology (國立臺北科技大學)
Graduate Institute of Automation Technology (自動化科技研究所)
103 (ROC academic year 2014)
Earnings per share (EPS) serves as an important indicator for investors analyzing listed companies. Business owners concerned with reputation and accountability also consider it important to meet their annual EPS growth targets. A higher EPS means the company generates more profit per unit of capital than its competitors; in other words, it can use fewer resources to create higher profit. Among the factors affecting a listed company's earnings per share, one cost item is depreciation, a cost that cannot be ignored. There are many ways to calculate depreciation; the principal one is to choose a depreciation method according to how each fixed asset is used. This research takes a listed company as an example. It uses the company's capital expenditure data from 2013 through the first quarter of 2015 as the analysis sample, together with the index values calculated for each fixed asset and the related asset-transfer records. The GM(1,1) model of grey prediction theory is used to analyze and forecast the company's accumulated depreciation for each month of the year, replacing the original manual monthly calculation and thereby increasing operational efficiency.
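The GM(1,1) model named above fits an exponential to the accumulated (AGO) series and back-differences the fit to produce forecasts. A minimal sketch with synthetic cost figures (not the thesis's data):

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """Grey GM(1,1) forecast: fit on the series x0, return `steps` future values."""
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    x1 = np.cumsum(x0)                                 # accumulated series (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])                      # background values
    B = np.column_stack([-z1, np.ones(n - 1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # developing & grey coefficients

    def x1_hat(m):                                     # fitted accumulated value at step m
        return (x0[0] - b / a) * np.exp(-a * (m - 1)) + b / a

    # Inverse AGO: difference consecutive fitted accumulated values
    return [x1_hat(n + k) - x1_hat(n + k - 1) for k in range(1, steps + 1)]

# Usage on a short, steadily growing cost series (synthetic numbers)
hist = [100, 104, 109, 113, 118]
forecast = gm11_forecast(hist, steps=2)
print([round(v, 1) for v in forecast])
```

GM(1,1) needs only a handful of observations, which is why grey prediction suits short quarterly series like the capital expenditure data described in the abstract.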