
Dissertations / Theses on the topic 'Prediction and analysis'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Prediction and analysis.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Ratti, Carlo. "Urban analysis for environmental prediction." Thesis, University of Cambridge, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.421692.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Vlasák, Pavel. "Exchange Rates Prediction." Master's thesis, Vysoká škola ekonomická v Praze, 2009. http://www.nusl.cz/ntk/nusl-76388.

Full text
Abstract:
The aim of this thesis is to examine the dependence of exchange rate movements on the core fundamentals of the economy in the long term, and to test the validity of selected technical analysis indicators in the short term. The dependence of the exchange rate is examined using correlation, and the fundamentals considered are the main macroeconomic indicators, such as GDP, short-term interest rates and the M2 money base. In the part dealing with technical analysis, I test two groups of indicators: trend indicators and oscillators. From the first group, these are the simple moving average (SMA), exponential moving average (EMA), weighted moving average (WMA), triangular moving average (TMA) and MACD; from the oscillators, I test the relative strength index (RSI). All these indicators are first described in the theoretical part of the thesis. The thesis is divided into a theoretical and a practical part. The theoretical part comprises two chapters on the analysis of the Forex market: the first deals with fundamental analysis, the second with technical analysis. The third chapter applies both methods in practice, with emphasis on technical analysis.
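For illustration, a minimal Python sketch of the indicators named above, assuming a pandas Series of closing prices; the window lengths are conventional defaults, not necessarily those tested in the thesis:

```python
import pandas as pd

def sma(close: pd.Series, n: int) -> pd.Series:
    """Simple moving average over the last n closes."""
    return close.rolling(n).mean()

def ema(close: pd.Series, n: int) -> pd.Series:
    """Exponential moving average with span n."""
    return close.ewm(span=n, adjust=False).mean()

def rsi(close: pd.Series, n: int = 14) -> pd.Series:
    """Relative strength index: 100 - 100 / (1 + avg gain / avg loss)."""
    delta = close.diff()
    gain = delta.clip(lower=0).rolling(n).mean()
    loss = (-delta.clip(upper=0)).rolling(n).mean()
    return 100 - 100 / (1 + gain / loss)

def macd(close: pd.Series, fast: int = 12, slow: int = 26, signal: int = 9):
    """MACD line and its signal line (standard 12/26/9 parameters)."""
    line = ema(close, fast) - ema(close, slow)
    return line, line.ewm(span=signal, adjust=False).mean()
```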
APA, Harvard, Vancouver, ISO, and other styles
3

Iqbal, Ammar, Rakesh Tanange, and Shafqat Virk. "Vehicle fault prediction analysis : a health prediction tool for heavy vehicles /." Göteborg : IT-universitetet, Chalmers tekniska högskola och Göteborgs universitet, 2006. http://www.ituniv.se/w/index.php?option=com_itu_thesis&Itemid=319.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Lidholm, Tomas. "Knock prediction with reduced reaction analysis." Thesis, Linköping University, Department of Electrical Engineering, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-1928.

Full text
Abstract:

This report investigates whether a model based on reduced reaction analysis can predict engine knock. The model is based on n-heptane combustion but is applied to iso-octane; although it was expected to adapt to different fuels, it is shown to be unable to do so. The model is also compared with an existing knock prediction method, the knock index, to see whether any improvement could be achieved. The comparison shows no major advantage for the new model: it is more time-consuming and cannot work with simulated rather than measured input. It can predict the occurrence of knock with good reliability, but compared with the knock index it is not an improvement.

APA, Harvard, Vancouver, ISO, and other styles
5

Copley, Richard Robertson. "Analysis and prediction of protein structure." Thesis, University of Oxford, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.361954.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Boscott, Paul Edmond. "Sequence analysis in protein structure prediction." Thesis, University of Oxford, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.386870.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Marsden, Russell Leonard. "Analysis and prediction of protein domains." Thesis, University College London (University of London), 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.408035.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Ahmed, Ikhlaaq. "Meta-analysis of risk prediction studies." Thesis, University of Birmingham, 2015. http://etheses.bham.ac.uk//id/eprint/6376/.

Full text
Abstract:
This thesis identifies and demonstrates the methodological challenges of meta-analysing risk prediction models using either aggregate data or individual patient data (IPD). Firstly, a systematic review of published breast cancer models is performed, to summarise their content and performance using aggregate data. It is found that models were not available for comparison. To address this issue, a systematic review is performed to examine articles that develop and/or validate a risk prediction model using IPD from multiple studies. This identifies that most articles only use the IPD for model development, and thus ignore external validation, and also ignore clustering of patients within studies. In response to these issues, IPD is obtained from an article which uses parathyroid hormone (PTH) assay (a continuous variable) to predict postoperative hypocalcaemia after thyroidectomy. It is shown that ignoring clustering is inappropriate, as it ignores potential between-study heterogeneity in discrimination and calibration performance. This dataset was also used to evaluate an imputation method for dealing with missing thresholds when IPD are unavailable, and the simulation results indicate the approach performs well, though further research is required. This thesis therefore makes a positive contribution towards meta-analysis of risk prediction models to improve clinical practice.
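For the aggregate-data side, a sketch of standard random-effects pooling (the DerSimonian-Laird estimator), the textbook tool for meta-analysing per-study performance estimates; it is not necessarily the exact estimator used in the thesis:

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooling of study-level estimates (DerSimonian-Laird).

    effects   : per-study effect estimates (e.g. log C-statistics)
    variances : their within-study variances
    """
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                                   # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)            # Cochran's Q heterogeneity statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)       # between-study variance
    w_star = 1.0 / (v + tau2)                     # random-effects weights
    pooled = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se, tau2
```

A non-zero tau2 is exactly the between-study heterogeneity that, per the abstract, is ignored when clustering of patients within studies is not modelled.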
APA, Harvard, Vancouver, ISO, and other styles
9

Ellis, Daniel Patrick Whittlesey. "Prediction-driven computational auditory scene analysis." Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/11006.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1996.
Includes bibliographical references (p. 173-180).
by Daniel P.W. Ellis.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
10

Elliott, Craig Julian. "Analysis and prediction of protein structure." Thesis, University of York, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.284165.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

McGee, S. E. "Software requirements change analysis and prediction." Thesis, Queen's University Belfast, 2014. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.679259.

Full text
Abstract:
Software requirements continue to evolve during application development to meet the changing needs of customers and market demands. Complementing current approaches that constrain the risk that changing requirements pose to project cost, schedule and quality, this research investigates the efficacy of Bayesian networks for predicting levels of volatility early in the project lifecycle. A series of industrial empirical studies is undertaken to explore the causes and consequences of requirements change, the results of which inform prediction feasibility and model construction. Models are then validated using data from four projects in two industrial organisations. Results from the empirical studies indicate that classifying changes according to the source of the change is practical and informative for decisions concerning requirements management and process selection. Changes coming from sources considered external to the project are more expensive and difficult to control than the more numerous changes that occur as a result of adjustments to product direction or requirement specification. Although certain requirements are more change-prone than others, the relationship between volatility and requirement novelty and complexity is not straightforward. Bayesian network models to predict levels of requirement volatility, constructed on the basis of these results, perform better than project management estimations of volatility when the models are trained on a project sharing the same industrial context. This research carries the implication that process selection should be based upon the types of changes likely, and that formal predictive models are a promising alternative to project management estimation when investment in data collection, re-use and learning is supported.
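To make the modelling idea concrete, here is a hand-rolled sketch of a tiny discrete Bayesian network queried by enumeration. The variables (requirement novelty and complexity as parents of volatility) echo the abstract, but all probabilities are invented for illustration:

```python
# Hypothetical network: Novelty (N) and Complexity (C) -> Volatility (V).
P_N = {True: 0.3, False: 0.7}          # P(requirement is novel)
P_C = {True: 0.4, False: 0.6}          # P(requirement is complex)
P_V_high = {                           # P(volatility high | novelty, complexity)
    (True, True): 0.8, (True, False): 0.5,
    (False, True): 0.4, (False, False): 0.1,
}

def p_volatility_high(novel=None, complex_=None):
    """P(V = high | evidence) by enumerating the unobserved parents."""
    num = den = 0.0
    for n in (True, False):
        if novel is not None and n != novel:
            continue
        for c in (True, False):
            if complex_ is not None and c != complex_:
                continue
            joint = P_N[n] * P_C[c]
            num += joint * P_V_high[(n, c)]
            den += joint
    return num / den

print(p_volatility_high(novel=True))   # volatility risk given a novel requirement
```

A trained model of this kind, with conditional probability tables learned from project data, is what gets compared against project managers' volatility estimates.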
APA, Harvard, Vancouver, ISO, and other styles
12

CARIOLI, GRETA. "CANCER MORTALITY DATA ANALYSIS AND PREDICTION." Doctoral thesis, Università degli Studi di Milano, 2019. http://hdl.handle.net/2434/612668.

Full text
Abstract:
Descriptive epidemiology has traditionally only been concerned with the definition of a research problem's scope. However, the greater availability and improvement of epidemiological data over the years has led to the development of new statistical techniques that characterize modern epidemiology. These methods are not only explanatory, but also predictive. In public health, predictions of future morbidity and mortality trends are essential to evaluate strategies for disease prevention and management, and to plan the allocation of resources. During my PhD at the school of "Epidemiology, Environment and Public Health" I worked on the analysis of cancer mortality trends, using data from the World Health Organization (WHO) database, available on electronic support (WHOSIS), and from other databases, including the Pan American Health Organization database, the Eurostat database, the United Nations Population Division database, the United States Census Bureau and the Japanese National Institute of Population database. Considering several cancer sites and several countries worldwide, I computed age-specific rates for each 5-year age group (from 0-4 to 80+ or 85+ years) and calendar year or quinquennium. I then computed age-standardized mortality rates per 100,000 person-years using the direct method on the basis of the world standard population. I fitted joinpoint models to identify the years when significant changes in trends occurred, and calculated the corresponding annual percent changes. Moreover, I focused on projections. I fitted joinpoint models to the numbers of certified deaths in each 5-year age group in order to identify the most recent trend slope. Then, I applied Generalized Linear Model (GLM) Poisson regressions, considering different link functions, to the data over the time period identified by the joinpoint model. In particular, I considered the identity, logarithmic, power five and square root links. I also implemented an algorithm that generates a "hybrid" regression; for each age group, this algorithm automatically selects the best-fitting GLM Poisson model among the identity, logarithmic, power five and square root link functions, according to Akaike Information Criterion (AIC) values. The resulting regression is thus a combination of the considered models. I then computed the predicted age-specific numbers of deaths and rates, and the corresponding 95% prediction intervals (PIs), using the regression coefficients obtained from the four GLM Poisson regressions and from the hybrid regression. Lastly, as a further comparison, I implemented an average model, which simply computes a mean of the estimates produced by the different GLM Poisson models. To compare the six prediction methods, I used data for 21 countries worldwide and for the European Union as a whole, covering 25 major causes of death. I selected countries with over 5 million inhabitants and with good-quality data (i.e. with at least 90% coverage). I analysed data for the period between 1980 and 2011: data from 1980 to 2001 served as a training dataset, and data from 2002 to 2011 as a validation set. To measure the predictive accuracy of the different models, I computed average absolute relative deviations (AARDs), which indicate the average percent deviation of the estimate from the true value. I calculated AARDs over a 5-year prediction period (i.e. 2002-2006) as well as over a 10-year period (i.e. 2002-2011). The results showed that the hybrid model did not always give the best predictions, and when it was the best, its AARD estimates were not far from those of the other methods. However, the hybrid model's projections were never the worst for any combination of cancer site and sex; it acted as a compromise between the four considered models. The average model also ranked in an intermediate position: it was never the best predictive method, but its AARDs were competitive with those of the other methods. Overall, the method showing the best predictive performance is the Poisson GLM with an identity link function. Furthermore, this method showed extremely low AARDs compared to the other methods, particularly over a 10-year projection period. Finally, it must be taken into account that predicted trends, and the corresponding AARDs, derived from 5-year projections are much more accurate than those over a 10-year period. Projections beyond five years with these methods lack reliability and are of limited use in public health. During the implementation of the algorithm and the analyses, several questions emerged: are there other relevant models that can be added to the algorithm? How much does the joinpoint regression influence the projections? How can an "a priori" rule be found that helps in choosing which predictive method to apply according to the available covariates? All these questions are left open for future developments of the project. Prediction of future trends is a complex procedure; the resulting estimates should be taken with caution and considered only as general indications for epidemiology and health planning.
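A hedged sketch of the projection machinery described above, using statsmodels (capitalized link-class names per recent statsmodels releases). The assumed inputs are death counts and calendar years for one age group over the latest joinpoint segment; the power-five link is omitted because its exact definition is not given here:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.genmod.families import Poisson
from statsmodels.genmod.families.links import Identity, Log, Sqrt

def hybrid_fit(deaths, years):
    """Fit Poisson GLMs of death counts on calendar year under several link
    functions and return the AIC-best fit (the 'hybrid' choice for one
    age group). The thesis also uses a power-five link, not shown here."""
    X = sm.add_constant(np.asarray(years, dtype=float))
    links = {"identity": Identity(), "log": Log(), "sqrt": Sqrt()}
    fits = {name: sm.GLM(deaths, X, family=Poisson(link=link)).fit()
            for name, link in links.items()}
    best = min(fits, key=lambda name: fits[name].aic)
    return best, fits

def aard(observed, predicted):
    """Average absolute relative deviation, in percent."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(100.0 * np.mean(np.abs(predicted - observed) / observed))
```

Predicted death counts for future years then come from `fits[best].predict(sm.add_constant(np.asarray(future_years, float)))`, and dividing by projected populations gives the projected rates.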
APA, Harvard, Vancouver, ISO, and other styles
13

Zhu, Zheng. "A Unified Exposure Prediction Approach for Multivariate Spatial Data: From Predictions to Health Analysis." University of Cincinnati / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ucin155437434818942.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Xie, Jiang. "Improved permeability prediction using multivariate analysis methods." [College Station, Tex.] : Texas A&M University, 2008. http://hdl.handle.net/1969.1/ETD-TAMU-3223.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Wu, Feihong. "Protein-protein interface database, analysis and prediction /." [Ames, Iowa : Iowa State University], 2009. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3379186.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Chen, Xin. "Failure analysis and prediction in compute clouds." Thesis, University of British Columbia, 2014. http://hdl.handle.net/2429/50871.

Full text
Abstract:
Compared with supercomputer clusters, most cloud computing clusters are built from unreliable, commercial off-the-shelf components. The high failure rates in their hardware and software components result in frequent node and application failures, so it is important to understand these failures in order to design a reliable cloud system. This thesis presents a characterization study of cloud application failures and proposes a method to predict application failures in order to save resources. We first analyze a workload trace from a production cloud cluster and characterize the observed failures. The goal of our work is to improve the understanding of failures in compute clouds. We present the statistical properties of job and task failures, and attempt to correlate them with key scheduling constraints, node operations, and attributes of users in the cloud. We observe that there are many opportunities to enhance the reliability of the applications running in the cloud, and further find that resource usage patterns of the jobs can be leveraged by failure prediction techniques. Next, we propose a prediction method based on recurrent neural networks to identify the failures. It takes resource usage measurements or performance data and generates features to categorize the applications into different classes. We then evaluate the method on the cloud workload trace. Our results show that the model is able to predict application failures. Moreover, we explore early classification to identify failures, and find that the prediction algorithm gives the cloud system enough time to take proactive actions well before the termination of applications, avoiding resource wastage.
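A minimal sketch in PyTorch of the kind of recurrent classifier described; the architecture, feature count and dimensions are illustrative assumptions rather than the thesis's exact model:

```python
import torch
import torch.nn as nn

class FailurePredictor(nn.Module):
    """LSTM over a job's resource-usage time series -> failure/success logit."""
    def __init__(self, n_features: int = 6, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, time, n_features)
        _, (h, _) = self.lstm(x)          # h: (1, batch, hidden), final state
        return self.head(h[-1]).squeeze(-1)

model = FailurePredictor()
x = torch.randn(8, 20, 6)                 # 8 jobs, 20 time steps, 6 usage metrics
y = torch.randint(0, 2, (8,)).float()     # 1 = job failed (synthetic labels)
loss = nn.BCEWithLogitsLoss()(model(x), y)
loss.backward()                            # one illustrative training step
```

Early classification, as in the abstract, amounts to feeding only the first part of each job's sequence and asking for a prediction before the job terminates.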
Applied Science, Faculty of
Electrical and Computer Engineering, Department of
Graduate
APA, Harvard, Vancouver, ISO, and other styles
17

Bahceci, Oktay, and Oscar Alsing. "Stock Market Prediction using Social Media Analysis." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-166448.

Full text
Abstract:
Stock forecasting is commonly used in different forms every day in order to predict stock prices. Sentiment Analysis (SA), Machine Learning (ML) and Data Mining (DM) are techniques that have recently become popular for analyzing public emotion in order to predict future stock prices. The algorithms need large data sets to detect patterns; the tweet data were collected through a live stream, and the stock data by web scraping. This study examined how three organizations' stocks correlate with the public opinion of them on the social networking platform Twitter. Implementing various machine learning and classification models, such as the artificial neural network, we successfully built a company-specific model capable of predicting stock price movement with 80% accuracy.
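A toy sketch of the final stage of such a pipeline with scikit-learn: daily sentiment features predicting next-day price direction. The features, data and network size are illustrative assumptions, not the thesis's setup:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Illustrative daily features: mean tweet sentiment, tweet volume, prior return.
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

# shuffle=False keeps temporal order: train on earlier days, test on later ones.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, shuffle=False)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf.fit(X_tr, y_tr)
print("direction accuracy:", clf.score(X_te, y_te))
```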
APA, Harvard, Vancouver, ISO, and other styles
18

Lotay, Vaneet Singh. "Evaluating coexpression analysis for gene function prediction." Thesis, University of British Columbia, 2009. http://hdl.handle.net/2429/17430.

Full text
Abstract:
Microarray expression data sets vary in size, data quality and other features, but most methods for selecting coexpressed gene pairs use a ‘one size fits all’ approach. There have been many different procedures for selecting coexpressed gene pairs of high functional similarity from an expression dataset. However, it is not clear which procedure performs best, as there are few studies reporting comparisons of these approaches. The goal of this thesis is to develop a set of “best practices” for selecting coexpression links of high functional similarity from an expression dataset, along with methods for identifying datasets likely to yield poor information. With these goals, we hope to improve the quality of gene function predictions produced by coexpression analysis. Using 80 human expression datasets, we examined the impact of different thresholds, correlation metrics, and expression data filtering and transformation procedures on performance in functional prediction. We also investigated the relationship between data quality and other features of expression datasets and their performance in functional prediction. We used the annotations of the Gene Ontology as the primary metric of similarity in gene function, and employed additional functional metrics for validation. Our results show that several dataset features have a greater influence on performance in functional prediction than others. Expression datasets which produce coexpressed gene pairs of poor functional quality can be identified by a similar set of data features. Some procedures used in coexpression analysis have a negligible effect on the quality of functional predictions, while others are essential to achieving the best performance. We also find that some procedures interact strongly with features of expression datasets, and that these interactions increase the number of high-quality coexpressed gene pairs retrieved through coexpression analysis. This thesis uncovers important information on the many intrinsic and extrinsic factors that influence the performance of coexpression analysis in functional prediction. The information summarized here will help guide future studies using coexpression analysis and improve the quality of gene function predictions.
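The core operation, reduced to a sketch: rank gene pairs by expression correlation and keep the strongest links. The metric (Pearson) and cutoff are single choices from the space of procedures the thesis compares:

```python
import numpy as np

def top_coexpression_links(expr: np.ndarray, genes: list, top_n: int = 100):
    """expr: (n_genes, n_samples) matrix; returns the most correlated gene pairs."""
    r = np.corrcoef(expr)                       # Pearson correlation, gene x gene
    iu = np.triu_indices_from(r, k=1)           # each pair once, no self-pairs
    order = np.argsort(-np.abs(r[iu]))[:top_n]  # strongest |r| first
    return [(genes[iu[0][k]], genes[iu[1][k]], float(r[iu][k])) for k in order]
```

Guilt-by-association function prediction then transfers Gene Ontology annotations across the retained links, which is why the functional quality of the links matters.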
APA, Harvard, Vancouver, ISO, and other styles
19

Abu, Alhaija Elham Saleh Jaber. "Class III malocclusion : analysis and growth prediction." Thesis, Queen's University Belfast, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.361352.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Rufino, Stephen Duarte. "Analysis, comparison and prediction of protein structure." Thesis, Birkbeck (University of London), 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.243648.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Walker-Taylor, Alice. "Analysis and prediction of protein-protein interactions." Thesis, University College London (University of London), 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.405881.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Betts, Matthew James. "Analysis and prediction of protein-protein recognition." Thesis, University College London (University of London), 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.313795.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Cairoli, Claudio 1975. "Analysis of the IMS Velocity Prediction Program." Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/91359.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Ota, Karson L. "Football play type prediction and tendency analysis." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/113120.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (page 33).
In any competition, it is an advantage to know the actions of the opponent in advance, since knowing the opponent's move allows one's strategy to be optimized in response. Likewise, in football, defenses must react to the actions of the offense, so being able to predict what the offense is going to do before the play represents a tremendous advantage to the defense. This project applies machine learning algorithms to situational NFL data in order to predict play type more accurately than the widely used and overly general approach based on aggregate statistics. Additionally, this project creates a way to discern tendencies in specific situations to help coaches create game plans and make in-game decisions.
by Karson L. Ota.
M. Eng.
APA, Harvard, Vancouver, ISO, and other styles
25

Dutta, Bishwajit. "Power Analysis and Prediction for Heterogeneous Computation." Thesis, Virginia Tech, 2018. http://hdl.handle.net/10919/92870.

Full text
Abstract:
Power, performance, and cost dictate the procurement and operation of high-performance computing (HPC) systems. These systems use graphics processing units (GPUs) for performance boost. In order to identify inexpensive-to-acquire and inexpensive-to-operate systems, it is important to do a systematic comparison of such systems with respect to power, performance and energy characteristics with the end-use applications. Additionally, the chosen systems must often achieve performance objectives without exceeding their respective power budgets, a task that is usually borne by a software-based power management system. Accurately predicting the power consumption of an application at different DVFS levels (or more generally, different processor configurations) is paramount for the efficient functioning of such a management system. This thesis applies the latest state-of-the-art green computing research to optimize the total cost of acquisition and ownership of heterogeneous computing systems. To achieve this we take a two-fold approach. First, we explore the issue of greener device selection by characterizing device power and performance. For this, we explore previously untapped opportunities arising from a special type of graphics processor, the low-power integrated GPU, which is commonly available in commodity systems. We compare the greenness (power, energy, and energy-delay product, EDP) of the integrated GPU against a CPU running at different frequencies for the specific application domain of scientific visualization. Second, we explore the problem of predicting the power consumption of a GPU at different DVFS states via machine-learning techniques. Specifically, we perform statistically rigorous experiments to uncover the strengths and weaknesses of eight different machine-learning techniques (namely ZeroR, simple linear regression, KNN, bagging, random forest, SMO regression, decision tree, and neural networks) in predicting GPU power consumption at different frequencies. Our study shows that a support vector machine-aided regression model (i.e., SMO regression) achieves the highest accuracy, with a mean absolute error (MAE) of 4.5%. We also observe that the random forest method produces the most consistent results, with a reasonable overall MAE of 7.4%. Our results also show that different models operate best in distinct regions of the application space. We therefore develop a novel ensemble technique drawing on the best characteristics of the various algorithms, which reduces the MAE to 3.5% and the maximum error from 20% (for SMO regression) to 11%.
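A compact sketch of the model-comparison step with scikit-learn. SVR stands in for SMO regression (both are support-vector regressors, an assumption of rough equivalence), the features are invented, and the ensemble here is a plain prediction average rather than the thesis's region-aware combination:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_percentage_error
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.uniform(size=(300, 3))   # e.g. normalized frequency, utilization, occupancy
power = 30 + 50 * X[:, 0] * X[:, 1] + rng.normal(scale=2, size=300)  # synthetic watts

models = {
    "linear": LinearRegression(),
    "svr": SVR(),                # stand-in for SMO regression
    "forest": RandomForestRegressor(random_state=0),
}
train, test = slice(0, 240), slice(240, 300)
for name, m in models.items():
    m.fit(X[train], power[train])
    mae = 100 * mean_absolute_percentage_error(power[test], m.predict(X[test]))
    print(f"{name}: MAE {mae:.1f}%")

# A simple ensemble: average the per-model predictions.
ens = np.mean([m.predict(X[test]) for m in models.values()], axis=0)
print("ensemble MAE:", 100 * mean_absolute_percentage_error(power[test], ens))
```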
MS
APA, Harvard, Vancouver, ISO, and other styles
26

Warren, James. "Analysis and prediction of the UK economy." Thesis, University of Kent, 2016. https://kar.kent.ac.uk/58879/.

Full text
Abstract:
Using the business cycle accounting (BCA) framework pioneered by Chari, Kehoe and McGrattan (2007, Econometrica) we examine the causes of the 2008-09 recession in the UK. There has been much commentary on the financial causes of this recession, which we might expect to bring about variation in the intertemporal rate of substitution in consumption. However, the recession appears to have been mostly driven by shocks to the efficiency wedge in total production, rather than the intertemporal (asset price) consumption, labour or spending wedge. From an expenditure perspective this result is consistent with the observed large falls in both consumption and investment during the recession. To assess this result we also simulate artificial data from a DSGE model in which asset price shocks dominate and, applying the BCA method to these data, again find no strong role for the intertemporal consumption wedge. This result does not imply that financial frictions did not matter for the recent recession, but that such frictions do not necessarily impact only on the intertemporal rate of substitution in consumption. We then investigate the ability of three standard nowcasting methodologies (bridge equations, unrestricted Mixed Data Sampling (MIDAS) regressions and mixed-frequency VARs) to nowcast UK GDP. Each may have advantages over the others: bridge equations are the simplest to construct and the most transparent; the direct forecasting approach of MIDAS may reduce errors in the face of model misspecification while remaining relatively simple to estimate and forecast with; and the mixed-frequency VAR allows for dynamics between the variables, which may help to reduce the forecast error. We evaluate these methods using a final dataset which mimics the data availability at each period in time for 5 monthly indicators. We find that the VAR is, on average across all forecast horizons, the most consistent, while MIDAS has the best predictive power at the one-step-ahead horizon. The bridge equations do not appear useful until the final month of the quarter. Throughout the evaluation period the predictive accuracy of the methods varies: the MFVAR performs best during the 'Great Recession' period, while MIDAS is better during normal growth periods. Finally, we apply the factor-augmented VAR of Bernanke, Boivin and Eliasz (2005) in a mixed-frequency context to a US and a UK dataset. For the US we further extend the model to allow for regime-switching dynamics, and we compare the short-term predictive ability of the two models against the standard mixed-frequency VAR of Murasawa and Mariano (2004, 2010). We find that, in general, the MFVAR with factors performs slightly worse than the standard MFVAR for the US dataset, marginally so for forecast horizons greater than one and significantly worse at the one-period-ahead forecast. This result was broadly consistent for the UK dataset, except that the FAMFVAR performed slightly better at the one-period-ahead horizon. The Markov-switching extension was the worst performing of all the models; studying the filtered probabilities for the recessionary regime indicated that only the deeper recessions were captured. Further work on the label-switching problem may be required for better performance in the Bayesian treatment of MFVARs with regime switches.
APA, Harvard, Vancouver, ISO, and other styles
27

Freeland, R. Keith. "Statistical analysis of discrete time series with application to the analysis of workers' compensation claims data." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq27144.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Parast, Layla. "Landmark Prediction of Survival." Thesis, Harvard University, 2012. http://dissertations.umi.com/gsas.harvard:10085.

Full text
Abstract:
The importance of developing personalized risk prediction estimates has become increasingly evident in recent years. In general, patient populations may be heterogeneous and represent a mixture of different unknown subtypes of disease. When the source of this heterogeneity and the resulting subtypes of disease are unknown, accurate prediction of survival may be difficult. However, in certain disease settings the onset time of an observable intermediate event may be highly associated with these unknown subtypes of disease and thus may be useful in predicting long term survival. Throughout this dissertation, we examine an approach to incorporate intermediate event information for the prediction of long term survival: the landmark model. In Chapter 1, we use the landmark modeling framework to develop procedures to assess how a patient's long term survival trajectory may change over time given good intermediate outcome indications along with prognosis based on baseline markers. We propose time-varying accuracy measures to quantify the predictive performance of landmark prediction rules for residual life and provide resampling-based procedures to make inference about such accuracy measures. We illustrate our proposed procedures using a breast cancer dataset. In Chapter 2, we aim to incorporate intermediate event time information for the prediction of survival. We propose a fully non-parametric procedure to incorporate intermediate event information when only a single baseline discrete covariate is available for prediction. When a continuous covariate or multiple covariates are available, we propose to incorporate intermediate event time information using a flexible varying-coefficient model. To evaluate the performance of the resulting landmark prediction rule and quantify the information gained by using the intermediate event, we use robust non-parametric procedures. We illustrate these procedures using a dataset of post-dialysis patients with end-stage renal disease. In Chapter 3, we consider improving efficiency by incorporating intermediate event information in a randomized clinical trial setting. We propose a semi-nonparametric two-stage procedure to estimate survival by incorporating intermediate event information observed before the landmark time. In addition, we present a testing procedure using these resulting estimates to test for a difference in survival between two treatment groups. We illustrate these proposed procedures using an AIDS dataset.
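The landmark construction in miniature, assuming the lifelines library is available: keep only subjects still at risk at the landmark time, reset the clock, and stratify by whether the intermediate event has occurred. Variable semantics (NaN meaning the intermediate event was never observed) are illustrative assumptions:

```python
import numpy as np
from lifelines import KaplanMeierFitter

def landmark_km(time, event, intermediate_time, landmark=2.0):
    """Kaplan-Meier curves from a landmark time, for subjects still at risk
    there, stratified by whether the intermediate event occurred by then."""
    time = np.asarray(time, dtype=float)
    event = np.asarray(event, dtype=int)
    inter = np.asarray(intermediate_time, dtype=float)   # NaN = never occurred
    at_risk = time > landmark            # drop those who died/censored earlier
    by_landmark = np.nan_to_num(inter, nan=np.inf) <= landmark
    fits = {}
    for label, mask in [("intermediate event by landmark", at_risk & by_landmark),
                        ("no intermediate event yet", at_risk & ~by_landmark)]:
        kmf = KaplanMeierFitter()
        kmf.fit(time[mask] - landmark, event[mask], label=label)  # reset the clock
        fits[label] = kmf
    return fits
```

Conditioning on survival to the landmark is what makes the two curves honest predictions of residual life rather than artifacts of immortal-time bias.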
APA, Harvard, Vancouver, ISO, and other styles
29

Rossi, A. "PREDICTIVE MODELS IN SPORT SCIENCE: MULTI-DIMENSIONAL ANALYSIS OF FOOTBALL TRAINING AND INJURY PREDICTION." Doctoral thesis, Università degli Studi di Milano, 2017. http://hdl.handle.net/2434/495229.

Full text
Abstract:
Because team sports such as football have a complex, multidirectional and intermittent nature, accurate planning of the training workload is needed in order to maximise the athletes' performance during matches and reduce their risk of injury. Although the evaluation of external workloads during training and matches has become easier and easier thanks to tracking-system technologies such as the Global Positioning System (GPS), planning the training workloads that yield the highest match performance and the lowest injury risk during sport stimuli is still a very difficult challenge for sport scientists, athletic trainers and coaches. The application of machine learning approaches to sport science aims to address this crucial issue: combining the strengths of data scientists and sport scientists can maximize the information obtained from football training and match analysis. The aim of this thesis is therefore to provide examples of the application of machine learning in sport science. In particular, two studies are presented, aimed at detecting a pattern across in-season football training weeks and at predicting injuries. For these studies, 23 elite football players were monitored during eighty in-season training sessions using a portable non-differential 10 Hz global positioning system (GPS) integrated with a 100 Hz 3-D accelerometer, a 3-D gyroscope and a 3-D digital compass (STATSports Viper, Northern Ireland). Information about non-traumatic injuries was also recorded by the club's medical staff. To detect a pattern across the in-season training weeks and to predict injuries, an Extra Trees Random Forest Classifier (ETRFC) and a Decision Tree (DT) Classifier were computed, respectively. The first study found that in-season football training follows a sinusoidal model (i.e. a zig-zag shape in the autocorrelation analysis), because its periodization is characterized by repeated short-term cycles consisting of two parts: the first (training days long before the match) involves high training loads, and the second (training days close to the match) low ones. This short-term structure appears to be a strategy both to facilitate the decay of accumulated fatigue from the high training loads performed at the beginning of the cycle and to promote readiness for the following performance. Indeed, a pattern was detected across the in-season football training weeks by the ETRFC. This machine learning process can accurately define the training loads to be performed on each training day to maintain high performance throughout the season. Moreover, the most important features for discriminating short-term training days were found to be the distance covered above 20 W·kg-1, accelerations above 2 m·s-2, total distance, and the distance covered above 25.5 W·kg-1 and below 19.8 km·h-1. Thus, in accordance with these results, athletic trainers and coaches may use machine learning processes to define training loads with the aim of obtaining the best performance in all season matches. A discrepancy between players' training loads and those defined by athletic trainers and coaches as optimal for match performance might be considered an index of an individual physical issue, which could induce injuries.
Indeed, the second study presented in this thesis found that 60.9% of injuries can be correctly predicted using the rules defined by the DT classifier, assessing training loads in a predictive window of six days. In particular, the most important features for predicting injuries were found to be the number of injuries the player had already suffered during the season, the total number of accelerations above 2 m·s-2 and 3 m·s-2, and the distance in metres covered when the metabolic power (energy consumption per kilogramme per second) is above the value of 25.5 W/kg per minute. Moreover, the football team analysed in this thesis should keep the discrepancy of these features under control when players return to regular training, because numerous relapses into injury were recorded. This machine learning approach thus enables football teams to identify when their players should pay more attention during both training and matches in order to reduce injury risk, while improving team strategy. In conclusion, machine learning processes could assist athletic trainers and coaches with the coaching process; in particular, they could help define which training loads are useful for enhancing sport performance and for predicting injuries. The diversity of coaching processes and of players' physical characteristics in each team does not permit inference about the football player population as a whole; these models should therefore be built within each team in order to improve the accuracy of the machine learning processes.
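A toy sketch of the second study's approach with scikit-learn: a shallow decision tree over 6-day workload features. The feature names echo the abstract, but the data, label rule and thresholds are invented:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(2)
# Synthetic per-player 6-day windows:
# [previous injuries, accelerations > 2 m/s^2, accelerations > 3 m/s^2,
#  metres covered above 25.5 W/kg]
X = rng.uniform(size=(400, 4)) * [5, 300, 120, 800]
risk = 0.4 * X[:, 0] + 0.004 * X[:, 2] + rng.normal(scale=0.5, size=400)
y = (risk > np.quantile(risk, 0.85)).astype(int)     # rare "injury" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
tree = DecisionTreeClassifier(max_depth=3, class_weight="balanced", random_state=0)
tree.fit(X_tr, y_tr)
print(export_text(tree, feature_names=[
    "prev_injuries", "acc_gt2", "acc_gt3", "m_above_25.5Wkg"]))
```

The printed rules are the practical appeal of a decision tree in this setting: staff can read the workload thresholds directly rather than trusting a black box.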
APA, Harvard, Vancouver, ISO, and other styles
30

Duncan, Gregory S. "Milling dynamics prediction and uncertainty analysis using receptance coupling substructure analysis." [Gainesville, Fla.] : University of Florida, 2006. http://purl.fcla.edu/fcla/etd/UFE0015544.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Zarad, Abdallah. "Developing an advanced spline fatigue prediction method." Thesis, Blekinge Tekniska Högskola, Institutionen för maskinteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-18927.

Full text
Abstract:
Fatigue failure is one of the most critical issues in industry nowadays, as 60 to 90 percent of failures in metals are due to fatigue. Therefore, different methods and approaches have been developed to estimate the fatigue life of metallic parts. In this research, a case-hardened steel splined shaft is studied to estimate the fatigue life the shaft will withstand before failure. The purpose of the research is to develop an advanced fatigue prediction method for splines. A static experimental test was performed on the splined shaft to analyze its load-strain behavior and determine suitable load cases for the study. A dynamic test under pure torsional load was carried out to collect experimental results for validating the generated fatigue methods and investigating the failure behavior of the shaft. Stress analysis was performed on the part to investigate critical areas and the effect of different spline tooth designs on the resulting stress. Two finite element models were analyzed using two software packages: MSC Marc with a straight-tooth spline geometry, and Spline LDP with an involute-tooth spline model. The analytical solution of the DIN 5466-1 spline standard was used for verification purposes. Stress- and strain-based approaches were used to estimate fatigue life, and the most suitable method was evaluated against the experimental test results. The research findings show that the most critical stress areas on the shaft are the spline root fillet and relief: when the part fails due to fatigue, the crack initiates at the root fillet and propagates to the relief. It is also shown that an involute-tooth spline gives higher stress than straight teeth for the same load, owing to the smaller contact area. The conclusions can be summarized as follows: the stress-based method (Wöhler curve) gives good accuracy and proved reliable, while among the six strain-based approaches examined, the four-point correlation method gives the best correlation with the test results. Hence, the four-point correlation method is recommended for fatigue analysis, both for its accuracy and because it considers both the elastic and plastic behavior of the material.
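By way of background, a sketch of a strain-life calculation using the standard Basquin and Coffin-Manson relation, a simpler textbook relative of the four-point correlation method named above. The material constants are invented for illustration:

```python
import numpy as np
from scipy.optimize import brentq

# Strain-life: eps_a = (sf/E) * (2N)**b + ef * (2N)**c
# (Basquin elastic term + Coffin-Manson plastic term)
E, sf, b, ef, c = 206e3, 1100.0, -0.09, 0.6, -0.6  # illustrative steel constants (MPa)

def life_from_strain(eps_a: float) -> float:
    """Reversals to failure 2N for a given total strain amplitude."""
    f = lambda two_n: (sf / E) * two_n**b + ef * two_n**c - eps_a
    return brentq(f, 1.0, 1e12)            # monotone in 2N, so the root is bracketed

print(f"2N at 0.3% strain amplitude: {life_from_strain(0.003):.3g} reversals")
```

Capturing both the elastic and the plastic term is the same property the abstract cites in favour of the strain-based approach.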
APA, Harvard, Vancouver, ISO, and other styles
32

Leonardi, Mary L. "Prediction and geometry of chaotic time series." Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1997. http://handle.dtic.mil/100.2/ADA333449.

Full text
Abstract:
Thesis (M.S. in Applied Mathematics) Naval Postgraduate School, June 1997.
Thesis advisors, Christopher Frenzen, Philip Beaver. Includes bibliographical references (p. 103-104). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
33

Cakil, Semih. "Computational Analysis For Performance Prediction Of Stirling Cryocoolers." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12612738/index.pdf.

Full text
Abstract:
Stirling cryocoolers are required for a wide variety of applications, especially in military equipment, due to their small size, low weight, long lifetime and high reliability relative to their efficiency. It is therefore important to be able to investigate the operating performance of these coolers at the design stage. This study focuses on developing a computer program for simulating a Stirling cryocooler according to a second-order analysis. The main aim is to simulate the thermodynamic, fluid dynamic and heat transfer behavior of Stirling cryocoolers. This goal is achieved by following the route of Urieli (1984), which focused on Stirling cycle engines. In this research, a simulation for performance prediction of a Stirling cryocooler is performed and, in addition, the effects of system parameters are investigated. This helps in understanding the real behavior of Stirling cryocoolers using porous regenerator material. The results imply that first-order analysis methods give optimistic predictions, whereas the second-order method provides more realistic data. It is also shown that regenerator porosity has a positive effect on heat transfer characteristics while affecting flow friction negatively. In conclusion, this study provides a clear understanding of the loss mechanisms in a cryocooler, and the numerical analysis performed can be used as a tool for investigating the effects of system parameters on overall performance.
APA, Harvard, Vancouver, ISO, and other styles
34

Vukovic, Divna, and Cecilia Wester. "Staff Prediction Analysis : Effort Estimation In System Test." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik och datavetenskap, 2001. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-1739.

Full text
Abstract:
This master's thesis was carried out in 2001 at Blekinge Institute of Technology and Symbian, a software company in Ronneby, Sweden. The purpose of the thesis is to find a suitable prediction and estimation model for test effort. To do this, we have studied the state of the art in cost/effort estimation and fault prediction. The conclusion of this thesis is that it is hard to make a general proposal applicable to all organisations. For Symbian we have proposed a model based on use cases and test cases to predict the test effort.
APA, Harvard, Vancouver, ISO, and other styles
35

Müller, Wolfgang A. "Analysis and prediction of the European winter climate /." Zürich : MeteoSchweiz, 2004. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=15540.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Vlasova, Julija. "Spatio-temporal analysis of wind power prediction errors." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2007. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2007~D_20070816_142259-79654.

Full text
Abstract:
Nowadays there is no need to convince anyone of the necessity of renewable energy, and one of the most promising ways to obtain it is wind power. Countries like Denmark, Germany and Spain have proved that, when professionally managed, it can cover a substantial part of the overall energy demand. One of the main problems specific to wind power management is the development of accurate power prediction models. State-of-the-art systems currently provide predictions for a single wind turbine, a wind farm or a group of them; however, the spatio-temporal propagation of the errors is not adequately considered. In this thesis the potential for improving the modern wind power prediction tool WPPT, based on the spatio-temporal propagation of the errors, is examined. Several statistical models (linear, threshold, varying-coefficient and conditional parametric) capturing the cross-dependency of the errors obtained in different parts of the country are presented. The analysis is based on weather forecast information and wind power prediction errors obtained for the territory of Denmark in 2004.
One of the most promising and most actively developed renewable energy sources is wind. European Union countries such as Denmark, Germany and Spain have shown through their experience that a properly managed and developed wind sector can cover a substantial share of a country's energy demand. Under EU Directive 2001/77/EC, Lithuania has committed to ensuring that by 2010 electricity generated from renewable sources accounts for 7% of electricity consumption. To meet these commitments, the Lithuanian government has adopted a framework for promoting the use of renewable energy sources, under which the use of wind energy is to be expanded gradually. It is planned that by 2010 wind power plants with a total capacity of 200 MW will be built, generating about 2.2% of all electricity consumed [Marčiukaitis, 2007]. As the share of wind energy in the power system grows, Lithuania will face system balancing problems caused by continuous fluctuations in wind plant output. As the experience of other countries shows, wind power prediction is an effective tool for solving these problems. This work presents several statistical models and methods for improving wind power predictions. The analysis and modelling were carried out on Danish WPPT (Wind Power Prediction Tool) data and meteorological forecasts. The main aim of the work is to modify WPPT, taking into account the influence of wind direction and speed on the power... [see the full text]
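A small sketch of the diagnostic underlying such models: the lagged cross-correlation of prediction errors between two regions, which shows whether errors made upwind reappear downwind some hours later. The data here are synthetic; the thesis fits linear, threshold and varying-coefficient models on top of such dependencies:

```python
import numpy as np

def lagged_xcorr(err_a: np.ndarray, err_b: np.ndarray, max_lag: int = 6):
    """Correlation of region A's error with region B's error lagged by k steps."""
    a = (err_a - err_a.mean()) / err_a.std()
    b = (err_b - err_b.mean()) / err_b.std()
    return {k: float(np.mean(a[k:] * b[:-k]))    # corr(A_t, B_{t-k})
            for k in range(1, max_lag + 1)}

rng = np.random.default_rng(3)
west = rng.normal(size=500)                       # upwind region's errors
east = 0.6 * np.roll(west, 3) + rng.normal(scale=0.8, size=500)  # arrives ~3 h later
print(lagged_xcorr(east, west))                   # correlation peaks near lag 3
```

A clear peak at a non-zero lag is what justifies feeding one region's recent errors into another region's prediction correction.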
APA, Harvard, Vancouver, ISO, and other styles
37

Burgoyne, Nicholas John. "The structural analysis and prediction of protein interactions." Thesis, University of Leeds, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.445373.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Harrison, Paul Martin. "Analysis and prediction of protein structure : disulphide bridges." Thesis, University College London (University of London), 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.339217.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

AMARAL, BERNARDO HALLAK. "PREDICTION OF FUTURE VOLATILITY MODELS: BRAZILIAN MARKET ANALYSIS." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2012. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=20458@1.

Full text
Abstract:
Predicting future volatility is a subject of debate among scholars, researchers and financial market practitioners. The model and methodology used in the calculation are fundamental to option pricing and, depending on the variables used, the result becomes very sensitive, giving different outcomes. This can lead to inaccurate calculations and wrong strategies for buying and selling stocks and options by companies and investors. The objective of this work is therefore to apply several models for calculating future volatility and to analyze the results, evaluating which model performs best and thus allows a better prediction of future volatility.
APA, Harvard, Vancouver, ISO, and other styles
40

Sauvé, Sarah A. "Prediction in polyphony : modelling musical auditory scene analysis." Thesis, Queen Mary, University of London, 2018. http://qmro.qmul.ac.uk/xmlui/handle/123456789/46805.

Full text
Abstract:
How do we know that a melody is a melody? In other words, how does the human brain extract melody from a polyphonic musical context? This thesis begins with a theoretical presentation of musical auditory scene analysis (ASA) in the context of predictive coding and rule-based approaches and takes methodological and analytical steps to evaluate selected components of a proposed integrated framework for musical ASA, unified by prediction. Predictive coding has been proposed as a grand unifying model of perception, action and cognition and is based on the idea that brains process error to refine models of the world. Existing models of ASA tackle distinct subsets of ASA and are currently unable to integrate all the acoustic and extensive contextual information needed to parse auditory scenes. This thesis proposes a framework capable of integrating all relevant information contributing to the understanding of musical auditory scenes, including auditory features, musical features, attention, expectation and listening experience, and examines a subset of ASA issues - timbre perception in relation to musical training, modelling temporal expectancies, the relative salience of musical parameters and melody extraction - using probabilistic approaches. Using behavioural methods, attention is shown to influence streaming perception based on timbre more than instrumental experience. Using probabilistic methods, information content (IC) for temporal aspects of music as generated by IDyOM (information dynamics of music; Pearce, 2005), are validated and, along with IC for pitch and harmonic aspects of the music, are subsequently linked to perceived complexity but not to salience. Furthermore, based on the hypotheses that a melody is internally coherent and the most complex voice in a piece of polyphonic music, IDyOM has been extended to extract melody from symbolic representations of chorales by J.S. Bach and a selection of string quartets by W.A. Mozart.
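The central quantity here, information content, is IC(x_t) = -log2 P(x_t | context). A toy stand-in for IDyOM using a first-order Markov model over MIDI pitches (IDyOM itself uses variable-order models over multiple viewpoints; this sketch only illustrates the definition):

```python
import math
from collections import Counter, defaultdict

def train_bigram(melodies):
    """First-order Markov model of pitch transitions."""
    counts = defaultdict(Counter)
    for mel in melodies:
        for prev, nxt in zip(mel, mel[1:]):
            counts[prev][nxt] += 1
    return counts

def information_content(counts, melody, alphabet):
    """IC(x_t) = -log2 P(x_t | x_{t-1}) for each note after the first."""
    ics = []
    for prev, nxt in zip(melody, melody[1:]):
        c = counts[prev]
        p = (c[nxt] + 1) / (sum(c.values()) + len(alphabet))  # Laplace smoothing
        ics.append(-math.log2(p))
    return ics

corpus = [[60, 62, 64, 65, 67], [60, 64, 67, 72], [60, 62, 60, 59, 60]]
model = train_bigram(corpus)
alphabet = {p for mel in corpus for p in mel}
print(information_content(model, [60, 62, 64, 67], alphabet))  # surprisal per note
```

Under the thesis's hypothesis, the melody is then the voice whose notes carry the highest such information content while remaining internally coherent.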
APA, Harvard, Vancouver, ISO, and other styles
41

Salina, Aigul Pazenovna. "Financial soundness of Kazakhstan banks : analysis and prediction." Thesis, Robert Gordon University, 2017. http://hdl.handle.net/10059/3128.

Full text
Abstract:
Purpose – The financial systems in many emerging countries are still impacted by the devastating effect of the 2008 financial crisis, which created a massive disaster in the global economy. The banking sector needs appropriate quantitative techniques to assess its financial soundness, strengths and weaknesses. This research aims to explore, empirically assess and analyze the financial soundness of the banking sector in Kazakhstan. It also examines the prediction of financial unsoundness at an individual bank level using PCA, cluster, MDA, logit and probit analyses. Design/Methodology/Approach – Cluster analysis, in combination with principal component analysis (PCA), was utilized as a classification technique: it groups sound and unsound banks in Kazakhstan's banking sector by examining various financial ratios. Cluster analysis was run on a sample of 34 commercial banks on 1st January, 2008 and 37 commercial banks on 1st January, 2014 to test the ability of this technique to detect unsound banks before they fail. Then, Altman Z" and EM Score models were tested and re-estimated, and MDA, logit and probit models were constructed, on a sample of 12 Kazakhstan banks during the period between 1st January, 2008 and 1st January, 2014. The sample consists of 6 sound and 6 unsound banks and accounts for 81.3% of the total assets of the Kazakhstan banking sector in 2014. These statistical methods used various financial variables to represent capital adequacy, asset quality, management, earnings and liquidity. Last but not least, the MDA, logit and probit models were systematically combined to construct an integrated model to predict bank financial unsoundness. Findings – First, the results from Chapter 3 indicate that cluster analysis is able to identify the structure of the Kazakh banking sector by degree of financial soundness. Secondly, based on the findings in the second empirical chapter, the tested and re-estimated Altman models show a modest ability to predict bank financial unsoundness in Kazakhstan. Thirdly, the MDA, logit and probit models show high predictive accuracy in excess of 80%. Finally, the model that integrated the MDA, logit and probit types presents superior predictability with lower Type I errors. Practical Implications – The results of this research are of interest to supervisory and regulatory bodies. The models can be used as reliable and effective tools: the cluster-based methodology for assessing the degree of financial soundness in the banking sector, and the integrated model for predicting the financial unsoundness of banks. Originality/Value – This study is the first to employ a cluster-based methodology to assess financial soundness in the Kazakh banking sector. In addition, the integrated model can be used as a promising technique for evaluating the financial unsoundness of banks in terms of predictive accuracy and robustness. Importance – Assessing the financial soundness of the Kazakh banking system is of particular importance, as the World Bank ranked Kazakhstan as leading the world in the proportion of non-performing loans among total loans granted in 2012. This is one of the first academic studies of Kazakhstan banks to comprehensively evaluate their financial soundness. It is anticipated that its findings will provide useful lessons for developing and transition countries during periods of financial turmoil.
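To make the classification step concrete, here is a minimal sketch combining PCA with k-means clustering on a matrix of bank financial ratios. The four CAMEL-style features, the two-component reduction and k = 2 (sound versus unsound) are assumptions chosen for illustration; the thesis's actual variable set and procedure are those described above.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Synthetic ratios for 10 banks: capital adequacy, NPL ratio, ROA, liquidity.
rng = np.random.default_rng(1)
sound = rng.normal([0.15, 0.05, 0.02, 0.30], 0.02, size=(6, 4))
unsound = rng.normal([0.06, 0.25, -0.01, 0.10], 0.02, size=(4, 4))
X = np.vstack([sound, unsound])

# Standardize, reduce to principal components, then cluster into 2 groups.
X_std = StandardScaler().fit_transform(X)
scores = PCA(n_components=2).fit_transform(X_std)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)
print(labels)  # banks grouped by degree of financial soundness
```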
APA, Harvard, Vancouver, ISO, and other styles
42

Gondelach, David J. "Orbit prediction and analysis for space situational awareness." Thesis, University of Surrey, 2019. http://epubs.surrey.ac.uk/850116/.

Full text
Abstract:
The continuation of space activities is at risk due to the growing number of uncontrolled objects, called space debris, which can collide with operational spacecraft. In addition, debris can fall back to the Earth, causing risks to the population. Therefore, space agencies have started space situational awareness (SSA) programs and taken space debris mitigation measures to reduce the risks caused by uncontrolled objects and prevent the generation of new debris. A fundamental need for SSA is the capability to predict, design and analyse orbits. In this work, new techniques for orbit prediction are developed that are suitable for SSA in terms of accuracy, efficiency and ability to deal with uncertainties, and are applied to re-entry prediction, end-of-life disposal, active debris removal (ADR) mission design and long-term orbit prediction. The performance of high-order Poincaré mapping of perturbed orbits is improved by introducing a new set of orbital elements, and the method is applied to orbit propagation and the analysis of quasi-periodic orbits. Two new Lambert problem solvers are developed to compute perturbed rendezvous trajectories with hundreds of revolutions for the design of ADR missions. The computation of the effect of drag for semi-analytical propagation is sped up by using high-order Taylor expansions to evaluate the mean element rates efficiently. In addition, the high-order expansion of the flow through semi-analytical propagation is enabled using differential algebra to allow efficient propagation of initial conditions. The predictability of Galileo disposal orbits was investigated using chaos indicators and sensitivity analysis. The study showed that the orbits are predictable and that chaos indicators are not suitable for predictability analysis. Finally, to improve the re-entry prediction of rocket bodies based on two-line element data, ballistic coefficient and state estimation methods are enhanced. Using the developed approach, re-entry prediction using only a ballistic coefficient estimate was found to be as accurate as re-entry prediction after full state estimation.
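One building block of the re-entry work is the ballistic coefficient B = m/(C_D A), which links atmospheric density to orbital decay. The sketch below inverts the textbook circular-orbit decay relation da/dt = -rho*sqrt(mu*a)/B to estimate B from an observed decay rate; the constant-density, circular-orbit simplification is an assumption for illustration, not the TLE-based estimation method developed in the thesis.

```python
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter [m^3/s^2]

def estimate_ballistic_coefficient(a, da_dt, rho):
    """Invert the circular-orbit decay rate da/dt = -rho*sqrt(mu*a)/B
    to get B = m/(Cd*A) [kg/m^2]. Assumes constant density along the orbit."""
    return -rho * math.sqrt(MU_EARTH * a) / da_dt

# Toy numbers for a ~400 km circular orbit (all values illustrative).
a = 6778e3            # semi-major axis [m]
da_dt = -2.0e-3       # observed decay rate [m/s]
rho = 3e-12           # atmospheric density [kg/m^3]
print(estimate_ballistic_coefficient(a, da_dt, rho), "kg/m^2")
```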
APA, Harvard, Vancouver, ISO, and other styles
43

Moore, Barbara Kirsten. "An analysis of representations for protein structure prediction." Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/32620.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1994.
Includes bibliographical references (p. 270-279).
by Barbara K. Moore Bryant.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
44

Mahmoodian, Mojtaba. "Reliability analysis and service life prediction of pipelines." Thesis, University of Greenwich, 2013. http://gala.gre.ac.uk/11374/.

Full text
Abstract:
Pipelines are extensively used engineering structures for conveying fluids from one place to another. Most of the time, pipelines are placed underground, surcharged by soil weight and traffic loads. Corrosion of pipe material is the most common form of pipeline deterioration and should be considered in both the strength and serviceability analysis of pipes. This research focuses on two types of buried pipes: concrete pipes in sewage systems (concrete sewers) and cast iron water pipes used in water distribution systems. The research first investigates how to incorporate the effect of corrosion, as a time-dependent deterioration process, in the structural and failure analysis of these two types of pipes. Two probabilistic time-dependent reliability analysis methods, first passage probability theory and the gamma distributed degradation model, are then developed and applied to service life prediction of the pipes. The obtained results are verified using the Monte Carlo simulation technique. Sensitivity analysis is also performed to identify the most important parameters that affect pipe failure. For each type of pipeline, both individual and multiple failure mode assessments are considered. The factors that affect and control the process of deterioration, and their effects on the remaining service life, are studied in a quantitative manner. The reliability analysis methods developed in this research serve as rational tools for decision makers with regard to the strengthening and rehabilitation of existing pipelines. The results can be used to obtain a cost-effective strategy for the management of the pipeline system. The output of this research is a methodology that will help infrastructure managers and design professionals to predict the service life of pipeline systems and to optimize materials selection and design parameters for designing pipelines with longer service life.
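To make the first-passage idea concrete, the following is a minimal Monte Carlo sketch in which corrosion depth grows as a random power law d(t) = k*t^n and a pipe fails the first time half the wall thickness is lost. Because the depth is monotone in time, P(d(t) >= critical) equals the first-passage probability by time t. The corrosion model and all parameter values are illustrative assumptions, not the calibrated models of the thesis.

```python
import numpy as np

def first_passage_probability(years, n_sims=100_000, seed=0):
    """P(failure within t) where corrosion depth d(t) = k * t**n exceeds
    a critical fraction of wall thickness; k and n are random per pipe."""
    rng = np.random.default_rng(seed)
    wall = 10.0                      # wall thickness [mm]
    critical = 0.5 * wall            # failure when 50% of wall is lost
    k = rng.lognormal(mean=-1.0, sigma=0.4, size=n_sims)  # corrosion rate
    n = rng.normal(0.8, 0.1, size=n_sims)                 # time exponent
    probs = []
    for t in years:
        depth = k * t ** n           # monotone in t, so this is first passage
        probs.append(np.mean(depth >= critical))
    return np.array(probs)

t = np.arange(10, 101, 10)
for year, p in zip(t, first_passage_probability(t)):
    print(f"t = {year:3d} yr  P_f = {p:.4f}")
```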
APA, Harvard, Vancouver, ISO, and other styles
45

Holmqvist, Carl. "Opinion analysis of microblogs for stock market prediction." Thesis, KTH, Teoretisk datalogi, TCS, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-233197.

Full text
Abstract:
This degree project investigates whether a company's stock price development can be predicted using the general opinion expressed in tweets about the company. The project starts from the model of a previous project and attempts to improve its results using state-of-the-art neural network sentiment analysis and more tweet data. It also performs hourly predictions alongside daily ones in order to investigate the method further. The results show a decrease in accuracy compared to the previous project, but indicate that the neural network sentiment analysis improves the accuracy of stock price development prediction compared to the baseline model under comparable conditions.
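As a schematic of this prediction setup, the sketch below aggregates a daily sentiment score and fits a logistic regression for next-day up/down movement on synthetic data. The feature construction and classifier are illustrative assumptions; the project itself used neural network sentiment analysis and also worked at hourly granularity.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Toy data: mean daily tweet sentiment in [-1, 1] for 200 trading days,
# and next-day price direction weakly correlated with it (illustrative).
daily_sentiment = rng.uniform(-1, 1, size=200)
next_day_up = (daily_sentiment + rng.normal(0, 0.8, 200) > 0).astype(int)

X = daily_sentiment.reshape(-1, 1)
clf = LogisticRegression().fit(X[:150], next_day_up[:150])
accuracy = clf.score(X[150:], next_day_up[150:])
print(f"held-out accuracy: {accuracy:.2f}")
```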
APA, Harvard, Vancouver, ISO, and other styles
46

Višňovský, Marek. "Prediction and Analysis of Nucleosome Positions in DNA." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2013. http://www.nusl.cz/ntk/nusl-412874.

Full text
Abstract:
Eukaryotic DNA wraps around nucleosomes, which influences the higher-order structure of DNA and the accessibility of binding sites for general transcription factors and gene regions. Knowing where nucleosomes bind to DNA, and how strong this binding is, is therefore important for understanding the mechanisms of gene regulation. In this project, a new method for nucleosome prediction was implemented, based on an extension of hidden Markov models, using the published data of Brogaard et al. (Brogaard K, Wang J-P, Widom J. Nature 486(7404), 496-501 (2012). doi:10.1038/nature11142) as training and test sets. Roughly 50% of nucleosomes were predicted correctly, a result comparable with existing methods. In addition, a series of experiments was carried out describing the properties of nucleosome sequences and their organisation.
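To illustrate the modelling idea (though not the thesis's extended HMM), here is a minimal two-state hidden Markov model over a DNA string, with "nucleosome" and "linker" states decoded by the Viterbi algorithm; all transition and emission probabilities are invented for illustration.

```python
import numpy as np

STATES = ["nucleosome", "linker"]
BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

# Illustrative parameters: nucleosome-bound DNA is modelled as GC-rich,
# linker DNA as AT-rich; transitions favour staying in the current state.
log_trans = np.log([[0.95, 0.05],
                    [0.10, 0.90]])
log_emit = np.log([[0.15, 0.35, 0.35, 0.15],   # nucleosome: A C G T
                   [0.35, 0.15, 0.15, 0.35]])  # linker
log_start = np.log([0.5, 0.5])

def viterbi(seq):
    """Most likely state path for a DNA string under the toy HMM."""
    obs = [BASES[b] for b in seq]
    n, k = len(obs), len(STATES)
    score = np.full((n, k), -np.inf)
    back = np.zeros((n, k), dtype=int)
    score[0] = log_start + log_emit[:, obs[0]]
    for t in range(1, n):
        for s in range(k):
            cand = score[t - 1] + log_trans[:, s]
            back[t, s] = np.argmax(cand)
            score[t, s] = cand[back[t, s]] + log_emit[s, obs[t]]
    path = [int(np.argmax(score[-1]))]
    for t in range(n - 1, 0, -1):
        path.append(back[t, path[-1]])
    return [STATES[s] for s in reversed(path)]

print(viterbi("ATATATGCGCGCGCGCATATAT"))
```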
APA, Harvard, Vancouver, ISO, and other styles
47

Terribilini, Michael Joseph. "Computational analysis and prediction of protein-RNA interactions." [Ames, Iowa : Iowa State University], 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
48

Pandejee, Grishma Riken. "Prediction and Analysis of Connectivity in the Brain." Thesis, The University of Sydney, 2018. http://hdl.handle.net/2123/18179.

Full text
Abstract:
Neural brain connectivity has three aspects: a physical connection between different brain regions, termed anatomical connectivity; a mutual relationship between the dynamical activities of different brain regions, termed functional connectivity; and the connectivity pattern that details the influence one brain region receives from other regions, termed effective connectivity. The anatomical and functional connectivities of the brain are experimentally measured in the form of connection matrices (CMs) by mapping the connectivity strengths between regions of interest (RoIs) of the brain using diffusion spectrum imaging (DSI) and functional magnetic resonance imaging (fMRI), respectively. However, the effective connectivity of the brain is difficult to measure experimentally. This thesis, firstly, infers direct and multistep effective connectivities from the functional connectivity of the brain and their relationship to cortical geometry in a healthy brain. Secondly, it presents a foundation for predicting the impact on the brain's functional connectivity of lesions due to brain injuries. Finally, it presents an analysis of the statistical properties of connectivity strengths and their relation to brain connectivity mapping and cortical distances.
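For the functional-connectivity side, a common operationalization is the pairwise Pearson correlation of regional fMRI time series. The sketch below builds such a connection matrix from synthetic RoI signals; the coupling between the first two regions is an illustrative assumption, and the thesis's effective-connectivity inference goes well beyond this step.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic time series for 5 regions of interest (RoIs), 200 time points.
# Regions 0 and 1 share a common driving signal (illustrative coupling).
t = 200
common = rng.normal(size=t)
signals = rng.normal(size=(5, t))
signals[0] += 0.8 * common
signals[1] += 0.8 * common

# Functional connection matrix: Pearson correlation between RoI pairs.
fc = np.corrcoef(signals)
np.fill_diagonal(fc, 0.0)  # ignore self-connections
print(np.round(fc, 2))
```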
APA, Harvard, Vancouver, ISO, and other styles
49

Dimadi, Ioanna. "Social media sentiment analysis for firm's revenue prediction." Thesis, Uppsala universitet, Informationssystem, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-363117.

Full text
Abstract:
The advent of the Internet and its social media platforms has affected people's daily lives. More and more people use these platforms to communicate, exchange opinions and share information with others. The platforms have not only been used for socializing but also for expressing product preferences. This widespread use of social networking sites has enabled companies to take advantage of them as an important way of approaching their target audience. This thesis focuses on studying the influence of social media platforms on the revenue of a single organization, Nike, that uses them actively. Facebook and Twitter, two widely-used social media platforms, were investigated, with tweets and comments produced by consumers' online discussions on brand-hosted pages being gathered. These unstructured social media data were collected from 26 official Nike pages, 13 fan pages from each platform, and their sentiment was analyzed. The comments were classified using the Valence Aware Dictionary and Sentiment Reasoner (VADER), a lexicon-based approach designed for social media analysis. After gathering Nike's revenue over five years, the degree to which it could be explained by the classified data was examined using multiple stepwise linear regression analysis. The findings showed that the fraction of positive/total comments for both Facebook and Twitter explained 84.6% of the revenue's variance. Fitting these data to the multiple regression model, Nike's revenue could be forecast with a root mean square error of around 287 billion.
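The classification step can be sketched with the vaderSentiment package, whose SentimentIntensityAnalyzer returns negative/neutral/positive/compound scores per text; a positive/total fraction per period can then be regressed against revenue. The comments and revenue figures below are invented placeholders, and the thesis's stepwise multiple regression is reduced here to a single-predictor fit.

```python
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
import numpy as np

analyzer = SentimentIntensityAnalyzer()

comments = [
    "Love these sneakers, best purchase this year!",
    "Delivery was slow and the sizing is off.",
    "Absolutely amazing quality, would buy again.",
]

# VADER's compound score is in [-1, 1]; > 0.05 is a common positive cutoff.
scores = [analyzer.polarity_scores(c)["compound"] for c in comments]
positive_fraction = np.mean([s > 0.05 for s in scores])
print(f"positive/total fraction: {positive_fraction:.2f}")

# Single-predictor linear fit of yearly revenue on the positive fraction
# (placeholder numbers; the thesis used stepwise multiple regression).
frac = np.array([0.55, 0.61, 0.58, 0.66, 0.70])
revenue = np.array([27.8, 30.6, 32.4, 34.4, 36.4])  # illustrative only
slope, intercept = np.polyfit(frac, revenue, 1)
print(f"revenue ~ {slope:.1f} * fraction + {intercept:.1f}")
```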
APA, Harvard, Vancouver, ISO, and other styles
50

Rashed, Azadeh <1983>. "A New Prediction Model for Slope Stability Analysis." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2014. http://amsdottorato.unibo.it/6628/1/Doctoral_Thesis-_A_New_Prediction_Model_for_Slope_Stability_Analysis.pdf.

Full text
Abstract:
The instability of river banks can result in considerable human and land losses. The Po River is the most important river in Italy, characterized by main banks of significant and constantly increasing height. This study uses multilayer perceptron artificial neural networks (ANN) to construct prediction models for the stability analysis of river banks along the Po River under various river and groundwater boundary conditions. To this end, a number of threshold logic unit networks are tested using different combinations of the input parameters. The factor of safety (FS), as an index of slope stability, is formulated in terms of several influential geometrical and geotechnical parameters. To obtain a comprehensive geotechnical database, several cone penetration tests from the study site have been interpreted. The proposed models are developed from stability analyses performed with a finite element code over different representative sections of river embankments. For validation, the ANN models are employed to predict the FS values of a part of the database beyond the calibration data domain. The results indicate that the proposed ANN models are effective tools for evaluating slope stability and notably outperform the derived multiple linear regression models.
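A minimal sketch of the ANN approach described: an MLP regressor mapping geometrical and geotechnical inputs to a factor of safety. The feature list, network size and the synthetic FS target below are illustrative assumptions rather than the thesis's finite-element-calibrated model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 500

# Illustrative inputs: slope angle [deg], bank height [m], cohesion [kPa],
# friction angle [deg], river level [m], groundwater level [m].
X = np.column_stack([
    rng.uniform(15, 45, n), rng.uniform(5, 20, n), rng.uniform(5, 40, n),
    rng.uniform(20, 35, n), rng.uniform(0, 5, n), rng.uniform(0, 5, n),
])

# Synthetic FS target: stabilizing terms minus destabilizing ones, plus noise.
fs = (0.04 * X[:, 2] + 0.05 * X[:, 3] + 0.1 * X[:, 4]
      - 0.03 * X[:, 0] - 0.02 * X[:, 1] - 0.05 * X[:, 5]
      + 1.5 + rng.normal(0, 0.05, n))

X_train, X_test, y_train, y_test = train_test_split(X, fs, random_state=0)
scaler = StandardScaler().fit(X_train)
mlp = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=3000, random_state=0)
mlp.fit(scaler.transform(X_train), y_train)
print(f"R^2 on held-out sections: {mlp.score(scaler.transform(X_test), y_test):.2f}")
```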
APA, Harvard, Vancouver, ISO, and other styles