Dissertations / Theses on the topic 'FOV PREDICTION'

Consult the top 50 dissertations / theses for your research on the topic 'FOV PREDICTION.'

1

Björsell, Joachim. "Long Range Channel Predictions for Broadband Systems : Predictor antenna experiments and interpolation of Kalman predictions." Thesis, Uppsala universitet, Signaler och System, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-281058.

Abstract:
The field of wireless communication is under massive development and the demands on cellular systems, especially, are constantly increasing as the devices using them grow in number and diversity. A key component of wireless communication is knowledge of the channel, i.e., how the signal is affected when sent over the wireless medium. Channel prediction is one concept which can improve current techniques or enable new ones in order to increase the performance of the cellular system. Firstly, this report investigates the concept of a predictor antenna on new, extensive measurements which represent many different environments and scenarios. A predictor antenna is a separate antenna placed in front of the main antenna on the roof of a vehicle; it could enable good channel prediction for high-velocity vehicles. The measurements prove too noisy to be used directly in the predictor antenna concept but show potential if they can be noise-filtered without distorting the signal. The use of low-pass and Kalman filters for this purpose did not give the desired results, but the technique should be investigated further. Secondly, an interpolation technique is presented which utilizes predictions with different prediction horizons by estimating intermediate channel components using interpolation. This could save channel feedback resources as well as give better robustness to bad channel predictions by letting fresh, local channel predictions be used as a quality reference for the interpolated channel estimates. For a linear interpolation between 8-step and 18-step Kalman predictions with Normalized Mean Square Error (NMSE) of -15.02 dB and -10.88 dB, the interpolated estimates had an average NMSE of -13.14 dB, while lowering the required feedback data by about 80%. The use of a warning algorithm reduced the NMSE by a further 0.2 dB, mainly by eliminating the largest prediction errors, which could otherwise lead to undesired retransmissions.
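To make the interpolation step concrete, here is a minimal numerical sketch (not the thesis's implementation): it linearly interpolates a 13-step channel estimate between stand-in 8-step and 18-step predictions and scores all three with NMSE. The channel model, noise levels and horizon choices are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def nmse_db(h_true, h_est):
    """Normalized mean square error in dB."""
    err = np.mean(np.abs(h_true - h_est) ** 2) / np.mean(np.abs(h_true) ** 2)
    return 10 * np.log10(err)

# Toy complex fading channel and noisy stand-ins for Kalman predictions
n = 2000
h = (np.cumsum(rng.normal(scale=0.05, size=n))
     + 1j * np.cumsum(rng.normal(scale=0.05, size=n)))
noise = lambda s: s * (rng.normal(size=n) + 1j * rng.normal(size=n))
pred_8 = h + noise(0.08)    # stand-in for an 8-step-ahead prediction
pred_18 = h + noise(0.17)   # stand-in for an 18-step-ahead prediction

# Linearly interpolate an intermediate horizon, e.g. 13 steps ahead
w = (18 - 13) / (18 - 8)    # weight on the shorter-horizon prediction
pred_13 = w * pred_8 + (1 - w) * pred_18

for name, p in [("8-step", pred_8), ("18-step", pred_18), ("13-step", pred_13)]:
    print(name, round(nmse_db(h, p), 2), "dB")
```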
2

Kock, Peter. "Prediction and predictive control for economic optimisation of vehicle operation." Thesis, Kingston University, 2013. http://eprints.kingston.ac.uk/35861/.

Abstract:
Truck manufacturers are currently under pressure to reduce the pollution and cost of transportation. The cost-efficient way to reduce CO2 emissions and cost is to reduce fuel consumption by adapting the vehicle speed to the driving conditions, whether by heuristic knowledge or mathematical optimisation. Due to their experience, professional drivers are capable of driving with great efficiency in terms of fuel consumption. The key research question addressed in this work is the comparison of fuel efficiency for an unassisted drive by an experienced professional driver versus an enhanced drive using a driver assistance system. The motivation for this is based on the advantage of such a system in terms of price (lower than driver training), but potentially it can be challenging to obtain drivers' acceptance of the system. There is a range of fundamental issues that have to be addressed prior to the design and implementation of the driver assistance system. The first issue is the evaluation of the correctness of the prediction model under development, due to a range of inaccuracies introduced by slope errors in digital maps, imprecise modelling of the combustion engine, vehicle physics, etc. The second issue is the challenge of selecting a suitable method for optimisation of mixed-integer nonlinear systems. Dynamic Programming proved to be very suitable for this work, and some methods of search-space reduction are presented here. An analytical solution of the Bernoulli differential equation of the vehicle dynamics is also presented and used in order to reduce computing effort. Extensive simulation and driving tests were performed using different driving approaches to compare well-trained human experts with a range of different driver assistance systems based on standard cruise control, heuristic and mathematical optimisation. Finally, the acceptance of the systems by drivers has been evaluated.
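As a rough illustration of the kind of optimisation involved, the following sketch runs a dynamic-programming backward recursion over a discretised speed grid. The fuel model, slope profile and speed constraints are invented placeholders; the thesis's formulation additionally handles mixed-integer decisions and uses the analytical Bernoulli solution to cut computation.

```python
import numpy as np

speeds = np.arange(60, 91, 5)              # admissible speeds, km/h
slopes = [0.0, 0.01, 0.02, -0.01, 0.0]     # grade of each 1 km segment

def fuel_per_km(v, grade):
    # crude convex placeholder: constant + drag term + uphill term
    return 0.02 + 1e-5 * v**2 + 2.0 * max(grade, 0.0)

n_seg, n_v = len(slopes), len(speeds)
cost = np.zeros((n_seg + 1, n_v))          # cost-to-go, terminal cost = 0
choice = np.zeros((n_seg, n_v), dtype=int)

for k in range(n_seg - 1, -1, -1):         # backward DP recursion
    for i, v in enumerate(speeds):
        # driveability constraint: change speed by at most 10 km/h
        cands = [j for j in range(n_v) if abs(speeds[j] - v) <= 10]
        vals = [fuel_per_km(speeds[j], slopes[k]) + cost[k + 1, j]
                for j in cands]
        b = int(np.argmin(vals))
        choice[k, i], cost[k, i] = cands[b], vals[b]

i0 = int(np.argmin(np.abs(speeds - 80)))   # enter the horizon at 80 km/h
i, plan = i0, []
for k in range(n_seg):
    i = choice[k, i]
    plan.append(int(speeds[i]))
print("speed plan:", plan, "| total fuel:", round(cost[0, i0], 4))
```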
3

Schön, Tomas. "Identification for Predictive Control : A Multiple Model Approach." Thesis, Linköping University, Department of Electrical Engineering, 2001. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-1050.

Abstract:

Predictive control relies on predictions of the future behaviour of the system to be controlled. These predictions are calculated from a model of this system, making the model the cornerstone of the predictive controller. Furthermore, predictive control is the only advanced control methodology that has managed to become widely used in industry. The necessity of good models in the predictive control context can thus be motivated both from the very nature of predictive control and from its widespread use in industry.

This thesis is concerned with examining the use of multiple models in the predictive controller. In order to do this the standard predictive control formulation has been extended to incorporate the use of multiple models. The most general case of this new formulation allows the use of an individual model for each prediction horizon.

The models are estimated using measurements of the input and output sequences from the true system. When using this data to find a good model of the system it is important to remember the intended purpose of the model. In this case the model is going to be used in a predictive controller, where the most important feature is to deliver good k-step ahead predictions. The identification algorithms used to estimate the models therefore strive to estimate models that are good at calculating these predictions.

Finally, this thesis presents some complete simulations of these ideas, showing the potential of using multiple models in the predictive control framework.
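A minimal sketch of the multiple-model idea, under the assumption of a simple linear regression structure (not taken from the thesis): fit a separate direct k-step-ahead least-squares predictor for each horizon, so each horizon gets its own model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a toy system: y[t] = 0.8 y[t-1] + 0.5 u[t-1] + noise
n = 500
u = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.8 * y[t - 1] + 0.5 * u[t - 1] + 0.05 * rng.normal()

def fit_k_step(y, u, k, na=2, nb=2):
    """Least-squares fit of a direct k-step-ahead predictor that maps
    the na most recent outputs and nb most recent inputs to y[t+k]."""
    rows, targets = [], []
    for t in range(max(na, nb), len(y) - k):
        rows.append(np.r_[y[t - na:t][::-1], u[t - nb:t][::-1]])
        targets.append(y[t + k])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return theta

# One model per prediction horizon, as in the multiple-model formulation
models = {k: fit_k_step(y, u, k) for k in range(1, 6)}
for k, th in models.items():
    print(f"horizon {k}: coefficients {np.round(th, 3)}")
```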

4

Shrestha, Rakshya. "Deep soil mixing and predictive neural network models for strength prediction." Thesis, University of Cambridge, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.607735.

5

Bangalore, Narendranath Rao Amith Kaushal. "Online Message Delay Prediction for Model Predictive Control over Controller Area Network." Thesis, Virginia Tech, 2017. http://hdl.handle.net/10919/78626.

Abstract:
Today's Cyber-Physical Systems (CPS) are typically distributed over several computing nodes communicating by way of shared buses such as Controller Area Network (CAN). Their control performance is degraded by the variable delays (jitter) that messages incur on the shared CAN bus due to contention and network overhead. This work presents a novel online delay prediction approach that predicts the message delay at runtime based on real-time traffic information on CAN. It leverages the proposed method to improve control quality by compensating for the message delay in the Model Predictive Control (MPC) algorithm used to design the controller. By simulating an automotive cruise control system and a DC motor plant in a CAN environment, it demonstrates that the delay prediction is accurate, and that the MPC design which takes the message delay into consideration performs considerably better. It also implements the proposed method on an 8-bit 16 MHz ATmega328P microcontroller and measures the execution time overhead. The results clearly indicate that the method is computationally feasible for online usage.
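The delay-compensation idea can be sketched as follows: propagate the state estimate forward by the predicted number of delay steps before computing the control move. The plant matrices and the fixed gain below are invented placeholders; the thesis computes the move with a full MPC solve rather than a static feedback law.

```python
import numpy as np

# Toy delay compensation for control over a shared bus. Plant and gain
# are invented (a discrete double integrator with an LQR-style gain).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
K = np.array([[2.0, 1.5]])               # stand-in for the MPC/LQR gain

def control_with_delay_compensation(x, u_prev, delay_steps):
    """Advance the state by the predicted delay (holding the last
    applied input), then compute the feedback action for that state."""
    x_pred = x.copy()
    for _ in range(delay_steps):
        x_pred = A @ x_pred + B @ u_prev
    return -K @ x_pred

x = np.array([[1.0], [0.0]])
u = np.array([[0.0]])
print(control_with_delay_compensation(x, u, delay_steps=3))
```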
6

Chen, Yutao. "Algorithms and Applications for Nonlinear Model Predictive Control with Long Prediction Horizon." Doctoral thesis, Università degli studi di Padova, 2018. http://hdl.handle.net/11577/3421957.

Abstract:
Fast implementations of NMPC are important when addressing real-time control of systems exhibiting features like fast dynamics, large dimension, and long prediction horizon, since in such situations the computational burden of the NMPC may limit the achievable control bandwidth. For that purpose, this thesis addresses both algorithms and applications. First, fast NMPC algorithms for controlling continuous-time dynamic systems using a long prediction horizon have been developed. A bridge between linear and nonlinear MPC is built using partial linearizations or sensitivity updates. In order to update the sensitivities only when necessary, a curvature-like measure of nonlinearity (CMoN) for dynamic systems has been introduced and applied to existing NMPC algorithms. Based on CMoN, intuitive and advanced updating logics have been developed for different numerical and control performance. Thus, the CMoN, together with the updating logic, formulates a partial sensitivity updating scheme for fast NMPC, named CMoN-RTI. Simulation examples are used to demonstrate the effectiveness and efficiency of CMoN-RTI. In addition, a rigorous analysis of the optimality and local convergence of CMoN-RTI is given and illustrated using numerical examples. Partial condensing algorithms have been developed for use with the proposed partial sensitivity update scheme. The computational complexity is reduced since part of the condensing information is exploited from previous sampling instants. A sensitivity updating logic together with partial condensing is proposed with a complexity linear in the prediction length, leading to a speed-up by a factor of ten. Partial matrix factorization algorithms are also proposed to exploit the partial sensitivity update. By applying splitting methods to multi-stage problems, only part of the resulting KKT system needs to be updated, which is computationally dominant in online optimization. Significant improvement has been proved by counting floating point operations (flops). Second, efficient implementations of NMPC have been achieved by developing a Matlab-based package named MATMPC. MATMPC has two working modes: one relies completely on Matlab, and the other employs the MATLAB C language API. The advantages of MATMPC are that algorithms are easy to develop and debug thanks to Matlab, and that libraries and toolboxes from Matlab can be used directly. When working in the second mode, the computational efficiency of MATMPC is comparable with software using optimized code generation. Real-time implementations are achieved for a nine-degree-of-freedom dynamic driving simulator and for multi-sensory motion cueing with an active seat.
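The flavour of a curvature-like nonlinearity trigger can be sketched as follows; the dynamics, the specific measure and the threshold are invented for illustration and are not the thesis's CMoN definition.

```python
import numpy as np

def f(x):
    return np.array([x[1], -np.sin(x[0])])            # pendulum-like dynamics

def jac(x):
    return np.array([[0.0, 1.0], [-np.cos(x[0]), 0.0]])

def nonlinearity_measure(x, x_ref, A_ref):
    """Relative gap between f(x) and its linearization around x_ref."""
    lin = f(x_ref) + A_ref @ (x - x_ref)
    return np.linalg.norm(f(x) - lin) / (np.linalg.norm(f(x)) + 1e-12)

x_ref = np.array([0.1, 0.0])
A_ref = jac(x_ref)
for x in [np.array([0.12, 0.01]), np.array([0.9, 0.3])]:
    m = nonlinearity_measure(x, x_ref, A_ref)
    if m > 0.05:                                      # update only when needed
        x_ref, A_ref = x, jac(x)
        print(f"re-linearized at {x} (measure {m:.3f})")
    else:
        print(f"kept old sensitivities (measure {m:.3f})")
```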
7

Ge, Esther. "The query based learning system for lifetime prediction of metallic components." Thesis, Queensland University of Technology, 2008. https://eprints.qut.edu.au/18345/4/Esther_Ting_Ge_Thesis.pdf.

Abstract:
This research project was a step forward in developing an efficient data mining method for estimating the service life of metallic components in Queensland school buildings. The developed method links together the different data sources of service life information and builds the model for a real situation when the users have information on limited inputs only. A practical lifetime prediction system was developed for the industry partners of this project including Queensland Department of Public Works and Queensland Department of Main Roads. The system provides high accuracy in practice where not all inputs are available for querying to the system.
8

Ge, Esther. "The query based learning system for lifetime prediction of metallic components." Queensland University of Technology, 2008. http://eprints.qut.edu.au/18345/.

Abstract:
This research project was a step forward in developing an efficient data mining method for estimating the service life of metallic components in Queensland school buildings. The developed method links together the different data sources of service life information and builds the model for a real situation when the users have information on limited inputs only. A practical lifetime prediction system was developed for the industry partners of this project including Queensland Department of Public Works and Queensland Department of Main Roads. The system provides high accuracy in practice where not all inputs are available for querying to the system.
9

Zhu, Zheng. "A Unified Exposure Prediction Approach for Multivariate Spatial Data: From Predictions to Health Analysis." University of Cincinnati / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ucin155437434818942.

10

Aldars García, Laila. "Predictive mycology as a tool for controlling and preventing the aflatoxin risk in postharvest." Doctoral thesis, Universitat de Lleida, 2017. http://hdl.handle.net/10803/418806.

Abstract:
Aflatoxins are potent carcinogens that pose a significant threat to human health. The incidence of these mycotoxins in foodstuffs is high, thus their control and prevention is mandatory in the food industry. The development of appropriate predictive models that allow us to predict fungal growth and mycotoxin production will be a valuable tool to monitor, predict and prevent the mycotoxin risk. To develop accurate predictive models it is important to account for the real conditions encountered through the food chain. Such conditions include: suboptimal conditions for growth and mycotoxin production, random distribution of spores across the food matrix, presence of different strains of the same species, and dynamic environmental conditions. Given the scope and complexity of the problem, the present work provides the basis for scientifically proven models which can be applied in the food industry in order to improve postharvest control of commodities.
11

Shovkun, Oleksandra. "Some methods for reducing the total consumption and production prediction errors of electricity: Adaptive Linear Regression of Original Predictions and Modeling of Prediction Errors." Thesis, Linnéuniversitetet, Institutionen för matematik (MA), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-34398.

Abstract:
Balance between energy consumption and production of electricity is very important for electric power system operation and planning. It provides a good principle of effective operation, reduces the generation cost in a power system and saves money. Two novel approaches to reduce the total errors between forecast and real electricity consumption were proposed. An Adaptive Linear Regression of Original Predictions (ALROP) was constructed to modify the existing predictions by using simple linear regression with estimation by the Ordinary Least Squares (OLS) method. The Weighted Least Squares (WLS) method was also used as an alternative to OLS. The Modeling of Prediction Errors (MPE) was constructed in order to predict errors for the existing predictions by using the Autoregression (AR) and the Autoregressive Moving-Average (ARMA) models. For the first approach it is observed that the last reported value is of main importance. An attempt was made to improve the performance and to get better parameter estimates. The separation of concerns and the combination of concerns were suggested in order to extend the constructed approaches and raise their efficacy. Both methods were tested on data for the fourth region of Sweden ("elområde 4") provided by Bixia. The obtained results indicate that all suggested approaches reduce the total percentage errors of consumption prediction approximately by one half. Results indicate that use of the ARMA model reduces the total errors slightly better than the other suggested approaches. The most effective way to reduce the total consumption prediction errors seems to be obtained by reducing the total errors for each subregion.
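A minimal sketch of the ALROP idea, with a synthetic consumption series and a synthetic biased provider forecast: fit an OLS regression of the actual values on the provided predictions over a training window, then apply the correction out of sample (a WLS fit would simply weight the rows).

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic consumption and a deliberately biased external forecast
actual = 100 + np.cumsum(rng.normal(size=300))
provided = 0.9 * actual + 8 + rng.normal(scale=2, size=300)

train, test = slice(0, 200), slice(200, 300)
X = np.c_[np.ones(200), provided[train]]
beta, *_ = np.linalg.lstsq(X, actual[train], rcond=None)  # OLS fit
adjusted = beta[0] + beta[1] * provided[test]             # corrected forecast

def mape(a, f):
    return 100 * np.mean(np.abs((a - f) / a))

print("MAPE before:", round(mape(actual[test], provided[test]), 2))
print("MAPE after: ", round(mape(actual[test], adjusted), 2))
```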
12

Altmisdort, F. Nadir. "Development of a new prediction algorithm and a simulator for the Predictive Read Cache (PRC)." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1996. http://handle.dtic.mil/100.2/ADA322724.

Abstract:
Thesis (M.S. in Electrical Engineering), Naval Postgraduate School, September 1996. Thesis advisor: Douglas J. Fouts. Includes bibliographical references (p. 127-128). Also available online.
13

Hotz-Behofsits, Christian, Florian Huber, and Thomas Zörner. "Predicting crypto-currencies using sparse non-Gaussian state space models." Wiley, 2018. http://dx.doi.org/10.1002/for.2524.

Abstract:
In this paper we forecast daily returns of crypto-currencies using a wide variety of different econometric models. To capture salient features commonly observed in financial time series like rapid changes in the conditional variance, non-normality of the measurement errors and sharply increasing trends, we develop a time-varying parameter VAR with t-distributed measurement errors and stochastic volatility. To control for overparameterization, we rely on the Bayesian literature on shrinkage priors that enables us to shrink coefficients associated with irrelevant predictors and/or perform model specification in a flexible manner. Using around one year of daily data we perform a real-time forecasting exercise and investigate whether any of the proposed models is able to outperform the naive random walk benchmark. To assess the economic relevance of the forecasting gains produced by the proposed models we moreover run a simple trading exercise.
14

Silva, Jesús, Palma Hugo Hernández, Núñez William Niebles, David Ovallos-Gazabon, and Noel Varela. "Time Series Decomposition using Automatic Learning Techniques for Predictive Models." Institute of Physics Publishing, 2020. http://hdl.handle.net/10757/652144.

Abstract:
This paper proposes an innovative way to address real cases of production prediction. The approach consists of decomposing the original time series into time sub-series according to a group of factors, in order to generate a predictive model from the partial predictive models of the sub-series. The adjustment of the models is carried out by means of a set of statistical and Automatic Learning techniques. This method was compared to an intuitive method consisting of a direct prediction of the time series. The results show that this approach achieves better predictive performance than the direct one, so applying a decomposition method is more appropriate for this problem than non-decomposition.
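The decomposition idea can be sketched as follows, with synthetic data and a deliberately simple per-sub-series model: split the series by a grouping factor (here, day of week), fit one partial model per sub-series, and recombine the partial predictions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic daily series with a weekly pattern plus a trend
days = np.arange(7 * 52)
dow = days % 7
y = 50 + 10 * np.sin(2 * np.pi * dow / 7) + 0.05 * days + rng.normal(size=days.size)

# One partial model (a linear trend) per day-of-week sub-series
partial = {}
for d in range(7):
    idx = dow == d
    partial[d] = np.polyfit(days[idx], y[idx], deg=1)  # [slope, intercept]

def predict(day):
    """Route the query to the partial model of its sub-series."""
    c = partial[day % 7]
    return c[0] * day + c[1]

print([round(predict(t), 2) for t in range(364, 371)])
```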
15

Losik, Len. "Using Oracol® for Predicting Long-Term Telemetry Behavior for Earth and Lunar Orbiting and Interplanetary Spacecraft." International Foundation for Telemetering, 2010. http://hdl.handle.net/10150/604280.

Abstract:
ITC/USA 2010 Conference Proceedings / The Forty-Sixth Annual International Telemetering Conference and Technical Exhibition / October 25-28, 2010 / Town and Country Resort & Convention Center, San Diego, California
Providing normal telemetry behavior predictions prior to and after launch will help to stop surprise catastrophic satellite and spacecraft equipment failures. In-orbit spacecraft fail from surprise equipment failures that can result from not having normal telemetry behavior available for comparison with actual behavior, catching satellite engineers by surprise. Some surprise equipment failures lead to the total loss of the satellite or spacecraft. Some recovery actions from a surprise equipment failure increase spacecraft risk and involve decisions requiring a level of experience far beyond that of the responsible engineers.
16

Losik, Len. "Using Oracol® for Predicting Long-Term Telemetry Behavior for Earth and Lunar Orbiting and Interplanetary Spacecraft." International Foundation for Telemetering, 2009. http://hdl.handle.net/10150/606127.

Abstract:
ITC/USA 2009 Conference Proceedings / The Forty-Fifth Annual International Telemetering Conference and Technical Exhibition / October 26-29, 2009 / Riviera Hotel & Convention Center, Las Vegas, Nevada
Providing normal telemetry behavior predictions prior to and after launch will help to stop surprise catastrophic satellite and spacecraft equipment failures. In-orbit spacecraft fail from surprise equipment failures that can result from not having normal telemetry behavior available for comparison with actual behavior, catching satellite engineers by surprise. Some surprise equipment failures lead to the total loss of the satellite or spacecraft. Some recovery actions taken as a consequence of a surprise equipment failure are high risk and involve decisions requiring a level of experience far beyond that of the responsible engineers.
17

Abo Al Ahad, George, and Abbas Salami. "Machine Learning for Market Prediction : Soft Margin Classifiers for Predicting the Sign of Return on Financial Assets." Thesis, Linköpings universitet, Produktionsekonomi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-151459.

Abstract:
Forecasting procedures have found applications in a wide variety of areas within finance and have proven to be among the most challenging problems in the field. Having an immense variety of economic data, stakeholders aim to understand the current and future state of the market. Since it is hard for a human to make sense of large amounts of data, different modeling techniques have been applied to extract useful information from financial databases, machine learning techniques being among the most recent. Binary classifiers such as Support Vector Machines (SVMs) have to some extent been used for this purpose, and extensions of the algorithm have been developed with increased prediction performance as the main goal. The objective of this study has been to develop a process for improving the performance when predicting the sign of return of financial time series with soft margin classifiers. An analysis of the algorithms is presented in this study, followed by a description of the methodology that has been utilized. The developed process, containing some of the presented soft margin classifiers and other aspects of kernel methods such as Multiple Kernel Learning, has shown promising results over the long term, in which the capability of capturing different market conditions improves with the incorporation of different models and kernels instead of only a single one. However, the results are mostly congruent with earlier studies in this field. Furthermore, two research questions have been answered, concerning the complexity of the kernel functions used by the SVM and the robustness of the process as a whole. Complexity refers to achieving more complex feature maps through combining kernels by adding, multiplying or functionally transforming them. It is not concluded that increased complexity leads to a consistent improvement; however, the combined kernel function is superior to the individual models during some periods of the time series used in this thesis. The robustness has been investigated for different signal-to-noise ratios, where it has been observed that windows with previously poor performance are more exposed to noise impact.
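Combining kernels as described is directly expressible with scikit-learn's support for callable kernels; the sketch below sums an RBF and a polynomial kernel on synthetic features. The features and labels are stand-ins for the lagged financial inputs and sign-of-return targets the thesis works with.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import polynomial_kernel, rbf_kernel
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)

# Synthetic stand-ins for lagged market features and sign-of-return labels
X = rng.normal(size=(400, 5))
y = np.sign(X[:, 0] * X[:, 1] + 0.5 * rng.normal(size=400))

def combined_kernel(A, B):
    # additive combination of an RBF and a polynomial kernel
    return rbf_kernel(A, B, gamma=0.5) + polynomial_kernel(A, B, degree=2)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = SVC(kernel=combined_kernel, C=1.0).fit(Xtr, ytr)
print("out-of-sample accuracy:", clf.score(Xte, yte))
```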
18

Hausberger, Thomas [Verfasser]. "Nonlinear High-Speed Model Predictive Control with Long Prediction Horizons for Power-Converter Systems / Thomas Hausberger." Düren : Shaker, 2021. http://d-nb.info/1233548271/34.

19

Fawcett, Lee, Neil Thorpe, Joseph Matthews, and Karsten Kremer. "A novel Bayesian hierarchical model for road safety hotspot prediction." Elsevier, 2016. https://publish.fid-move.qucosa.de/id/qucosa%3A72268.

Abstract:
In this paper, we propose a Bayesian hierarchical model for predicting accident counts in future years at sites within a pool of potential road safety hotspots. The aim is to inform road safety practitioners of the location of likely future hotspots to enable a proactive, rather than reactive, approach to road safety scheme implementation. A feature of our model is the ability to rank sites according to their potential to exceed, in some future time period, a threshold accident count which may be used as a criterion for scheme implementation. Our model specification enables the classical empirical Bayes formulation – commonly used in before-and-after studies, wherein accident counts from a single before period are used to estimate counterfactual counts in the after period – to be extended to incorporate counts from multiple time periods. This allows site-specific variations in historical accident counts (e.g. locally-observed trends) to offset estimates of safety generated by a global accident prediction model (APM), which itself is used to help account for the effects of global trend and regression-to-mean (RTM). The Bayesian posterior predictive distribution is exploited to formulate predictions and to properly quantify our uncertainty in these predictions. The main contributions of our model include (i) the ability to allow accident counts from multiple time-points to inform predictions, with counts in more recent years lending more weight to predictions than counts from time-points further in the past; (ii) where appropriate, the ability to offset global estimates of trend by variations in accident counts observed locally, at a site-specific level; and (iii) the ability to account for unknown/unobserved site-specific factors which may affect accident counts. We illustrate our model with an application to accident counts at 734 potential hotspots in the German city of Halle; we also propose some simple diagnostics to validate the predictive capability of our model. We conclude that our model accurately predicts future accident counts, with point estimates from the predictive distribution matching observed counts extremely well.
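A stripped-down, conjugate flavour of the approach (the paper's model is a richer Bayesian hierarchical model): give each site a Gamma prior informed by an accident prediction model, update with observed Poisson counts, and rank sites by the posterior predictive probability of exceeding a threshold count. All numbers below are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

apm_mean, apm_disp = 4.0, 2.0                  # APM estimate and dispersion
alpha0, beta0 = apm_disp, apm_disp / apm_mean  # Gamma(shape, rate) prior

counts = rng.poisson(5.0, size=(734, 3))       # 3 years of counts per site
alpha = alpha0 + counts.sum(axis=1)            # Gamma-Poisson conjugate update
beta = beta0 + counts.shape[1]

# Posterior predictive for next year's count is Negative Binomial
threshold = 8                                  # scheme-implementation criterion
p_exceed = 1.0 - stats.nbinom.cdf(threshold, alpha, beta / (beta + 1.0))
print("five sites most likely to exceed the threshold:",
      np.argsort(p_exceed)[::-1][:5])
```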
20

Sowan, Bilal I. "Enhancing Fuzzy Associative Rule Mining Approaches for Improving Prediction Accuracy. Integration of Fuzzy Clustering, Apriori and Multiple Support Approaches to Develop an Associative Classification Rule Base." Thesis, University of Bradford, 2011. http://hdl.handle.net/10454/5387.

Abstract:
Building an accurate and reliable model for prediction for different application domains is one of the most significant challenges in knowledge discovery and data mining. This thesis focuses on building and enhancing a generic predictive model for estimating a future value by extracting association rules (knowledge) from a quantitative database. This model is applied to several data sets obtained from different benchmark problems, and the results are evaluated through extensive experimental tests. The thesis presents an incremental development process for the prediction model with three stages. Firstly, a Knowledge Discovery (KD) model is proposed by integrating Fuzzy C-Means (FCM) with the Apriori approach to extract Fuzzy Association Rules (FARs) from a database for building a Knowledge Base (KB) to predict a future value. The KD model has been tested with two road-traffic data sets. Secondly, the initial model has been further developed by including a diversification method in order to improve the reliability of the FARs and find the best and most representative rules. The resulting Diverse Fuzzy Rule Base (DFRB) maintains high-quality and diverse FARs, offering a more reliable and generic model. The model uses FCM to transform quantitative data into fuzzy data, while a Multiple Support Apriori (MSapriori) algorithm is adapted to extract the FARs from the fuzzy data. The correlation values for these FARs are calculated, and an efficient orientation for filtering FARs is performed as a post-processing method. The diversity of the FARs is maintained through the clustering of FARs, based on the concept of the sharing function technique used in multi-objective optimization. The best and most diverse FARs are obtained as the DFRB to utilise within the Fuzzy Inference System (FIS) for prediction. The third stage of development proposes a hybrid prediction model called the Fuzzy Associative Classification Rule Mining (FACRM) model. This model integrates the improved Gustafson-Kessel (G-K) algorithm, the proposed Fuzzy Associative Classification Rules (FACR) algorithm and the proposed diversification method. The improved G-K algorithm transforms quantitative data into fuzzy data, while the FACR algorithm generates significant rules (Fuzzy Classification Association Rules (FCARs)) by employing the improved multiple support threshold, associative classification and vertical scanning format approaches. These FCARs are then filtered by calculating the correlation value and the distance between them. The advantage of the proposed FACRM model is that it builds a generalized prediction model able to deal with different application domains. The validation of the FACRM model is conducted using different benchmark data sets from the University of California, Irvine (UCI) machine learning repository and the KEEL (Knowledge Extraction based on Evolutionary Learning) repository, and the results of the proposed FACRM are compared with other existing prediction models. The experimental results show that the error rate and generalization performance of the proposed model are better than those of the commonly used models for the majority of data sets. A new method for feature selection entitled Weighting Feature Selection (WFS) is also proposed. The WFS method aims to improve the performance of the FACRM model. The prediction performance is improved by minimizing the prediction error and reducing the number of generated rules. The prediction results of FACRM employing WFS have been compared with those of the FACRM and Stepwise Regression (SR) models for different data sets. The performance analysis and comparative study show that the proposed prediction model provides an effective approach that can be used within a decision support system.
21

Broberg, Magnus. "Performance Prediction and Improvement Techniques for Parallel Programs in Multiprocessors." Doctoral thesis, Karlskrona: Department of Software Engineering and Computer Science, Blekinge Institute of Technology, 2002. http://www.bth.se/fou/forskinfo.nsf/01f1d3898cbbd490c12568160037fb62/2bf3ca6a32368b72c1256b98003d7466!OpenDocument.

22

Scott, Hanna. "Towards a Framework for Fault and Failure Prediction and Estimation." Licentiate thesis, Karlskrona : Department of Systems and Software Engineering, School of Engineering, Blekinge Institute of Technology, 2008. http://www.bth.se/fou/Forskinfo.nsf/allfirst2/46bd1c549ac32f74c12574c100299f82?OpenDocument.

23

Darwiche, Aiman A. "Machine Learning Methods for Septic Shock Prediction." Diss., NSUWorks, 2018. https://nsuworks.nova.edu/gscis_etd/1051.

Abstract:
Sepsis is a life-threatening organ dysfunction caused by a dysregulated body response to infection. Sepsis is difficult to detect at an early stage, and when not detected early, it is difficult to treat and results in high mortality rates. Developing improved methods for identifying patients at high risk of suffering septic shock has been the focus of much research in recent years. Building on this body of literature, this dissertation develops an improved method for septic shock prediction. Using data from the MIMIC-III database, an ensemble classifier is trained to identify high-risk patients. A robust prediction model is built by obtaining a risk score from fitting the Cox Hazard model on multiple input features. The score is added to the list of features, and the Random Forest ensemble classifier is trained to produce the model. The proposed Cox Enhanced Random Forest (CERF) method is evaluated by comparing its predictive accuracy to those of extant methods.
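A compact sketch of the CERF construction on synthetic stand-in data (the dissertation uses MIMIC-III): fit a Cox proportional hazards model with the lifelines package, append its partial-hazard risk score as a feature, and train a Random Forest on the augmented features. The feature names and data-generating process are invented.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(6)

# Synthetic stand-ins for patient features and outcomes
n = 500
X = pd.DataFrame(rng.normal(size=(n, 3)), columns=["hr", "map", "lactate"])
durations = rng.exponential(10, size=n)       # hours until shock/censoring
events = rng.integers(0, 2, size=n)           # 1 = septic shock observed

# Step 1: Cox model produces a risk score per patient
cph = CoxPHFitter().fit(X.assign(T=durations, E=events),
                        duration_col="T", event_col="E")
X["cox_risk"] = cph.predict_partial_hazard(X).to_numpy()

# Step 2: Random Forest trained on features plus the Cox risk score
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, events)
print("shock probability, first 3 patients:",
      rf.predict_proba(X)[:3, 1].round(3))
```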
24

Yang, Lei. "Methodology of Prognostics Evaluation for Multiprocess Manufacturing Systems." University of Cincinnati / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1298043095.

25

Sowan, Bilal Ibrahim. "Enhancing fuzzy associative rule mining approaches for improving prediction accuracy : integration of fuzzy clustering, apriori and multiple support approaches to develop an associative classification rule base." Thesis, University of Bradford, 2011. http://hdl.handle.net/10454/5387.

Abstract:
Building an accurate and reliable model for prediction for different application domains is one of the most significant challenges in knowledge discovery and data mining. This thesis focuses on building and enhancing a generic predictive model for estimating a future value by extracting association rules (knowledge) from a quantitative database. This model is applied to several data sets obtained from different benchmark problems, and the results are evaluated through extensive experimental tests. The thesis presents an incremental development process for the prediction model with three stages. Firstly, a Knowledge Discovery (KD) model is proposed by integrating Fuzzy C-Means (FCM) with the Apriori approach to extract Fuzzy Association Rules (FARs) from a database for building a Knowledge Base (KB) to predict a future value. The KD model has been tested with two road-traffic data sets. Secondly, the initial model has been further developed by including a diversification method in order to improve the reliability of the FARs and find the best and most representative rules. The resulting Diverse Fuzzy Rule Base (DFRB) maintains high-quality and diverse FARs, offering a more reliable and generic model. The model uses FCM to transform quantitative data into fuzzy data, while a Multiple Support Apriori (MSapriori) algorithm is adapted to extract the FARs from the fuzzy data. The correlation values for these FARs are calculated, and an efficient orientation for filtering FARs is performed as a post-processing method. The diversity of the FARs is maintained through the clustering of FARs, based on the concept of the sharing function technique used in multi-objective optimization. The best and most diverse FARs are obtained as the DFRB to utilise within the Fuzzy Inference System (FIS) for prediction. The third stage of development proposes a hybrid prediction model called the Fuzzy Associative Classification Rule Mining (FACRM) model. This model integrates the improved Gustafson-Kessel (G-K) algorithm, the proposed Fuzzy Associative Classification Rules (FACR) algorithm and the proposed diversification method. The improved G-K algorithm transforms quantitative data into fuzzy data, while the FACR algorithm generates significant rules (Fuzzy Classification Association Rules (FCARs)) by employing the improved multiple support threshold, associative classification and vertical scanning format approaches. These FCARs are then filtered by calculating the correlation value and the distance between them. The advantage of the proposed FACRM model is that it builds a generalized prediction model able to deal with different application domains. The validation of the FACRM model is conducted using different benchmark data sets from the University of California, Irvine (UCI) machine learning repository and the KEEL (Knowledge Extraction based on Evolutionary Learning) repository, and the results of the proposed FACRM are compared with other existing prediction models. The experimental results show that the error rate and generalization performance of the proposed model are better than those of the commonly used models for the majority of data sets. A new method for feature selection entitled Weighting Feature Selection (WFS) is also proposed. The WFS method aims to improve the performance of the FACRM model. The prediction performance is improved by minimizing the prediction error and reducing the number of generated rules. The prediction results of FACRM employing WFS have been compared with those of the FACRM and Stepwise Regression (SR) models for different data sets. The performance analysis and comparative study show that the proposed prediction model provides an effective approach that can be used within a decision support system.
26

Flöjs, Amanda, and Alexandra Hägg. "Churn Prediction : Predicting User Churn for a Subscription-based Service using Statistical Analysis and Machine Learning Models." Thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-171678.

Abstract:
Subscription-based services are becoming more popular in today's society. Therefore, any company that engages in the subscription-based business needs to understand user behavior and minimize the number of users cancelling their subscription, i.e. minimize churn. According to marketing metrics, the probability of selling to an existing user is markedly higher than selling to a brand-new user. For that reason, it is of great importance that more focus is directed towards preventing users from leaving the service, in other words preventing user churn. To be able to prevent user churn the company needs to identify the users in the risk zone of churning. Therefore, this thesis project treats this as a classification problem. The objective of the thesis project was to develop a statistical model to predict churn for a subscription-based service. Various statistical methods were used in order to identify patterns in user behavior using activity and engagement data, including variables describing recency, frequency, and volume. The best performing statistical model for predicting churn was achieved by the Random Forest algorithm. The selected model is able to separate the two classes of churning users and non-churning users with 73% probability and has a fairly low misclassification rate of 35%. The results show that it is possible to predict user churn using statistical models, although there are indications that it is difficult for the model to generalize a specific behavioral pattern for user churn. This is understandable since human behavior is hard to predict. The results show that variables describing how frequently the user interacts with the service explain the most about whether a user is likely to churn or not.
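A minimal churn-classification sketch in the spirit of the thesis, with synthetic recency/frequency/volume features and a Random Forest; the feature definitions and the coefficients generating the labels are invented.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)

# Synthetic recency / frequency / volume features and churn labels
n = 2000
recency = rng.exponential(10, n)             # days since last activity
frequency = rng.poisson(8, n).astype(float)  # sessions in the last month
volume = rng.gamma(2.0, 30.0, n)             # minutes of usage
logit = 0.15 * recency - 0.30 * frequency - 0.01 * volume
churn = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.c_[recency, frequency, volume]
Xtr, Xte, ytr, yte = train_test_split(X, churn, random_state=0)
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(Xtr, ytr)

print(confusion_matrix(yte, rf.predict(Xte)))
print(dict(zip(["recency", "frequency", "volume"],
               rf.feature_importances_.round(3))))
```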
27

Ferreira de Melo Filho, Alberto. "Predicting the unpredictable - Can Artificial Neural Network replace ARIMA for prediction of the Swedish Stock Market (OMXS30)?" Thesis, Mittuniversitetet, Institutionen för ekonomi, geografi, juridik och turism, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-36908.

Abstract:
During several decades the stock market has been an area of interest for researchers due to its complexity, noise, uncertainty and the nonlinearity of the data. Most of the studies regarding this area use a classical stochastic method; an example of this is ARIMA, which is a standard approach for time series prediction. There is however another method for prediction of the stock market that has been gaining traction in recent years: the Artificial Neural Network (ANN). This method has so far mostly been used in research on the American and Asian stock markets. Therefore, the purpose of this essay was to explore whether an Artificial Neural Network could be used instead of ARIMA to predict the Swedish stock market (OMXS30). The study used data from the Swedish stock market between 1991-07-09 and 2018-12-28 for the training of the ARIMA model and forecast data that ranged between 2019-01-02 and 2019-04-26. The training data of the ANN was composed of 80% of the data between 1991-07-09 and 2019-04-26 and the evaluation data was composed of the remaining 20%. The ANN architecture had one input layer with chunks of 20 consecutive days as input, followed by three Long Short-Term Memory (LSTM) hidden layers with 128 neurons in each layer, followed by another hidden layer with Rectified Linear Unit (ReLU) activation containing 32 neurons, followed by the output layer containing 2 neurons with softmax activation. The results showed that the ANN, with an accuracy of 0.9892, could be a successful method to forecast the Swedish stock market instead of ARIMA.
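The architecture described in the abstract is specific enough to write down directly; the Keras sketch below reproduces it (three 128-unit LSTM layers, a 32-unit ReLU layer, a 2-unit softmax output, windows of 20 days), while the input features, placeholder data and training settings are assumptions.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_features = 1   # assumed: one input per day, e.g. the daily OMXS30 return

model = keras.Sequential([
    layers.LSTM(128, return_sequences=True, input_shape=(20, n_features)),
    layers.LSTM(128, return_sequences=True),
    layers.LSTM(128),
    layers.Dense(32, activation="relu"),
    layers.Dense(2, activation="softmax"),   # up / down for the next day
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Placeholder data with the right shapes, just to show the training call
X = np.random.randn(256, 20, n_features).astype("float32")
y = np.random.randint(0, 2, size=256)
model.fit(X, y, epochs=1, batch_size=32, verbose=0)
model.summary()
```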
28

Jahedpari, Fatemeh. "Artificial prediction markets for online prediction of continuous variables." Thesis, University of Bath, 2016. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.690730.

Abstract:
In this dissertation, we propose an online machine learning technique – named Artificial Continuous Prediction Market (ACPM) – to predict the value of a continuous variable by (i) integrating a set of data streams from heterogeneous sources with time varying compositions such as changing the quality of data streams, (ii) integrating the results of several analysis models for each data source when the most suitable model for a given data source is not known a priori, (iii) dynamically weighting the prediction of each analysis model and data source to form the system prediction. We adapt the concept of prediction market, motivated by their success in forecasting accurately the outcome of many events [Nikolova and Sami, 2007]. Our proposed model instantiates a sequence of prediction markets in which artificial agents play the role of market participants. Agents participate in the markets with the objective of increasing their own utility and hence indirectly cause the markets to aggregate their knowledge. Each market is run in a number of rounds in which agents have the opportunity to send their prediction and bet to the market. At the end of each round, the aggregated prediction of the crowd is announced to all agents, which provides a signal to agents about the private information of other agents so they can adjust their beliefs accordingly. Once the true value of the record is known, agents are rewarded according to accuracy of their prediction. Using this information, agents update their models and knowledge, with the aim of improving their performance in future markets. This thesis proposes two trading strategies to be utilised by agents when participating in a market. While the first one is a naive constant strategy, the second one is an adaptive strategy based on Q-Learning technique [Watkins, 1989]. We evaluate the performance of our model in different situations using real-world and synthetic data sets. Our results suggest that ACPM: i) is either better or very close to the best performing agents, ii) is resilient to the addition of agents with low performance, iii) outperforms many well-known machine learning models, iv) is resilient to quality drop-out in the best performing agents, v) adapts to changes in quality of agents predictions.
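A toy rendering of the market mechanism (the actual ACPM scoring, betting and Q-learning strategies are more elaborate): agents submit predictions with bets, the market aggregates them bet-weighted, and wealth (and hence future influence) shifts towards accurate agents.

```python
import numpy as np

rng = np.random.default_rng(10)

n_agents, n_records = 5, 200
wealth = np.ones(n_agents)                    # accumulated agent utility
bias = rng.normal(scale=1.0, size=n_agents)   # each agent's systematic error

for t in range(n_records):
    truth = 10 * np.sin(0.1 * t)              # the continuous target variable
    preds = truth + bias + rng.normal(scale=0.5, size=n_agents)
    bets = 0.1 * wealth                       # naive constant-fraction betting
    market = np.average(preds, weights=bets)  # bet-weighted crowd prediction
    # reward agents that beat the market, penalize the rest
    gain = np.abs(market - truth) - np.abs(preds - truth)
    wealth = np.maximum(wealth + 0.5 * gain, 0.01)

print("final wealth:", wealth.round(2))
print("most influential agent:", int(np.argmax(wealth)),
      "| biases:", bias.round(2))
```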
29

Cai, Xun. "Transforms for prediction residuals based on prediction inaccuracy modeling." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/109003.

Abstract:
In a typical transform-based image and video compression system, an image or a video frame is predicted from previously encoded information. The prediction residuals are encoded with transforms. With a proper choice of the transform, a large amount of the residual energy compacts into a small number of transform coefficients. This is known as the energy compaction property. Given the covariance function of the signal, the linear transform with the best energy compaction property is the Karhunen Loeve transform. In this thesis, we develop a new set of transforms for prediction residuals. We observe that the prediction process in practical video compression systems is usually not accurate. By studying the inaccuracy of the prediction process, we can derive new covariance functions for prediction residuals. The estimated covariance function is used to generate the Karhunen Loeve transform for residual encoding. In this thesis, we model the prediction inaccuracy for two types of residuals. Specifically, we estimate the covariance function of the directional intra prediction residuals. We show that the covariance function and the optimal transform for directional intra prediction residuals are related with the one-dimensional gradient of boundary predictors. We estimate the covariance function of the motion-compensated prediction residuals. We show that the covariance function and the optimal transform for motion-compensated prediction residuals are related with the two-dimensional gradient of the displaced reference block. The proposed transforms are evaluated using the energy compaction property and the rate-distortion metric in a practical video coding system. Experimental results indicate that the proposed transforms significantly improve the performance in a typical transform-based compression scenario.
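The covariance-to-transform pipeline at the heart of this approach can be sketched generically: build a covariance matrix from an assumed covariance function, take its eigenvectors to obtain the Karhunen Loeve transform, and measure energy compaction. The first-order covariance model below is a generic placeholder, not the thesis's gradient-based one.

```python
import numpy as np

N = 8
rho = 0.6                                   # assumed residual correlation
# first-order (AR(1)-style) covariance matrix: C[i, j] = rho^|i - j|
C = rho ** np.abs(np.subtract.outer(np.arange(N), np.arange(N)))

eigvals, eigvecs = np.linalg.eigh(C)        # ascending eigenvalues
klt = eigvecs[:, ::-1].T                    # KLT basis, strongest mode first

# Check energy compaction on samples drawn from the modeled covariance
r = np.random.default_rng(8).multivariate_normal(np.zeros(N), C, size=1000)
energy = ((r @ klt.T) ** 2).mean(axis=0)
print("energy fraction in the first 2 of 8 coefficients:",
      round(energy[:2].sum() / energy.sum(), 3))
```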
30

op den Kelder, Antonia. "Using predictive uncertainty analysis to optimise data acquisition for stream depletion and land-use change predictions." Thesis, Stockholms universitet, Institutionen för naturgeografi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-160851.

Abstract:
To facilitate robust understanding of the processes and properties that govern a groundwater system, managers need data. However, this often requires them to make difficult decisions about what types of data to collect, and where and when to collect them in the most cost-effective manner. This is where data worth analysis, which is based on predictive uncertainty analysis, can play an important role. The 'worth' of data is defined here as the reduction in uncertainty of a specific prediction of interest that is achieved as a result of a given data collection strategy. With the use of data worth analysis, the optimal data types, sample locations, and sampling frequencies can be determined for a specific prediction that informs, for example, management decisions. In this study a data worth method was used to optimize data collection when predicting pumping-induced stream depletion (water quantity section) and when predicting changing nitrate concentrations as a result of land-use change (water quality section). Specifically, the First Order Second Moment (FOSM) based data worth method was employed. This thesis also builds upon previous work which explores the impacts of spatial model parameterisation on the performance of the data worth analysis in the context of stream depletion assessments. A transient groundwater model was developed using the MODFLOW-NWT software, and a steady-state transport model was developed using the MT3D-USGS software, for the mid-Mataura catchment located in Southland, New Zealand. The 'worth' of both existing and additional potential monitoring data was investigated. In addition, and for the water quantity part of the thesis only, three spatial hydraulic parameter density scenarios were investigated to assess the effect of parameter simplification on the performance of the data worth method: 1) distributed pilot-point parameters, 2) homogeneous parameters, and 3) grid-cell based parameters. The water quantity (stream depletion) predictions were made at two key locations: (i) the catchment outlet at Gore and (ii) the outlet of a spring-fed stream (McKellar Stream). The water quality predictions (change in nitrate concentration due to land-use change) were made at seven locations: four key surface water locations, two town supply bores at Gore, and one additional groundwater location further upstream. For the water quantity predictions, results show that the existing transient groundwater level data resulted in the largest reduction in uncertainty for the predictions examined. Because the low flow predictions at Gore were integrating predictions, the most uncertainty-reducing observations were scattered through the catchment area with a focus on the north-west. This coincides with the recharge zone, which means that there are large water level fluctuations and hence a larger 'signal to noise' content in the groundwater level data. In contrast, because McKellar Stream is a discrete prediction (in this case, because McKellar Stream is spring-fed), the observations directly surrounding the stream reduce the uncertainty most significantly. The impact of parameter simplification in the water quantity modelling showed that the data worth results using the grid-cell based parameterisation were very similar to those using pilot points. However, when using the homogeneous parameterisation, the data worth results became corrupted by the lack of spatial variability available in the parameterisation.
Indicating that spatial heterogeneity is needed when predicting low flows, as was shown by previous studies. However, the computational time associated with performing data worth uncertainty analyses is much higher with a grid-cell based parameterisation. A pilot-point based scheme should perhaps therefore be considered a favourable option. For the water quality predictions, results showed a strong correlation between the hydraulic conductivity, porosity and denitrification. This is likely because the hydraulic conductivity and porosity provide information about the velocity of the groundwater for a given hydraulic - head gradient, which provides information about the amount of time available for denitrification to take place in the soil substrate. Next to that, results showed no distinct difference between surface water and groundwater predictions when predicting changing nitrate concentrations, but they showed that the spatial data worth patterns depended on the proximity of the prediction location to the denitrifying areas. Overall it can be concluded that spatial parameterisation is needed when performing a data worth study for stream depletion predictions, however a more detailed parameterisation than pilot – points does not provide significantly more information. Next to that, it can be concluded that the spatial data worth patterns when predicting low flows mainly depend on if the predictions are integrating or discrete predictions. Lastly, it can also be concluded that the data worth patterns when predicting change in nitrate concentration depend on the proximity of the prediction location to the denitrifying areas.
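For readers who want to see the mechanics, the FOSM data-worth calculation can be sketched in a few lines of linear algebra. The matrices below are random placeholders (a real study would use the Jacobian of a calibrated MODFLOW model, e.g. via PEST/pyEMU); worth is measured as the change in first-order predictive variance when an observation is removed.

```python
# A minimal FOSM data-worth sketch (hypothetical matrices, not thesis data).
import numpy as np

def predictive_variance(J, C_p, C_e, y):
    """First-order posterior variance of a prediction with sensitivity y.

    J   : (n_obs, n_par) Jacobian of observations w.r.t. parameters
    C_p : (n_par, n_par) prior parameter covariance
    C_e : (n_obs, n_obs) observation noise covariance
    y   : (n_par,)       sensitivity of the prediction to the parameters
    """
    S = J @ C_p @ J.T + C_e                      # data-space covariance
    C_post = C_p - C_p @ J.T @ np.linalg.solve(S, J @ C_p)
    return y @ C_post @ y

rng = np.random.default_rng(0)
n_par, n_obs = 20, 8
J = rng.normal(size=(n_obs, n_par))
C_p = np.eye(n_par)
C_e = 0.1 * np.eye(n_obs)
y = rng.normal(size=n_par)

base = predictive_variance(J, C_p, C_e, y)
# Worth of observation i: the variance increase when it is removed
# (equivalently, the uncertainty reduction it provides when present).
for i in range(n_obs):
    keep = [k for k in range(n_obs) if k != i]
    var_without = predictive_variance(J[keep], C_p, C_e[np.ix_(keep, keep)], y)
    print(f"obs {i}: worth = {var_without - base:.4f}")
```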
APA, Harvard, Vancouver, ISO, and other styles
31

Sammouri, Wissam. "Data mining of temporal sequences for the prediction of infrequent failure events : application on floating train data for predictive maintenance." Thesis, Paris Est, 2014. http://www.theses.fr/2014PEST1041/document.

Full text
Abstract:
In order to meet mounting social and economic demands, railway operators and manufacturers are striving for longer availability and better reliability of railway transportation systems. Commercial trains are being equipped with state-of-the-art onboard intelligent sensors monitoring various subsystems all over the train. These sensors provide a real-time flow of data, called floating train data, consisting of georeferenced events along with their spatial and temporal coordinates. Once ordered with respect to time, these events can be considered as long temporal sequences which can be mined for possible relationships. This has created a necessity for sequential data mining techniques in order to derive meaningful association rules or classification models from these data. Once discovered, these rules and models can then be used to perform an on-line analysis of the incoming event stream in order to predict the occurrence of target events, i.e., severe failures that require immediate corrective maintenance actions. The work in this thesis tackles the above-mentioned data mining task. We aim to investigate and develop various methodologies to discover association rules and classification models which can help predict rare tilt and traction failures in sequences using past events that are less critical. The investigated techniques constitute two major axes: association analysis, which is temporal, and classification techniques, which are not. The main challenges confronting the data mining task, and increasing its complexity, are the rarity of the target events to be predicted, in addition to the heavy redundancy of some events and the frequent occurrence of data bursts. The results obtained on real datasets collected from a fleet of commercial trains highlight the effectiveness of the proposed approaches and methodologies.
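As an illustration of the rule-mining idea (a sketch of the general technique, not the thesis's algorithms), candidate rules "precursor event → target failure" can be scored by support and confidence over a fixed look-ahead window:

```python
# Illustrative co-occurrence miner for timestamped event sequences.
from collections import Counter

def mine_rules(events, target, window, min_support=3):
    """events: list of (time, label); returns {label: (support, confidence)}."""
    hits, occur = Counter(), Counter()
    target_times = [t for t, lab in events if lab == target]
    for t, lab in events:
        if lab == target:
            continue
        occur[lab] += 1
        # does a target failure occur within `window` after this event?
        if any(t < tf <= t + window for tf in target_times):
            hits[lab] += 1
    return {lab: (hits[lab], hits[lab] / occur[lab])
            for lab in occur if hits[lab] >= min_support}

seq = [(1, "A"), (2, "B"), (4, "FAIL"), (7, "A"), (9, "FAIL"), (12, "B")]
print(mine_rules(seq, target="FAIL", window=3, min_support=1))
```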
APA, Harvard, Vancouver, ISO, and other styles
32

Alstermark, Olivia, and Evangelina Stolt. "Purchase Probability Prediction : Predicting likelihood of a new customer returning for a second purchase using machine learning methods." Thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-184831.

Full text
Abstract:
When a company evaluates a customer as a potential prospect, one of the key questions to answer is whether the customer will generate profit in the long run. A possible step towards answering this question is to predict the likelihood of the customer returning to the company after the initial purchase. The aim of this master thesis is to investigate the possibility of using machine learning techniques to predict the likelihood of a new customer returning for a second purchase within a certain time frame. To investigate to what degree machine learning techniques can be used to predict the probability of return, a number of different model setups of Logistic Lasso, Support Vector Machine and Extreme Gradient Boosting are tested. Model development is performed to ensure well-calibrated probability predictions and to possibly overcome the difficulty arising from an imbalanced ratio of returning and non-returning customers. Throughout the thesis work, a number of actions are taken in order to account for data protection. One such action is to add noise to the response feature, ensuring that the true fraction of returning and non-returning customers cannot be derived. To further guarantee data protection, axis values of evaluation plots are removed and evaluation metrics are scaled. Nevertheless, it is perfectly possible to select the superior model out of all investigated models. The results obtained show that the best performing model is a Platt-calibrated Extreme Gradient Boosting model, which has much higher performance than the other models with regard to the considered evaluation metrics, while also providing predicted probabilities of high quality. Further, the results indicate that the setups investigated to account for imbalanced data do not improve model performance. The main conclusion is that it is possible to obtain high-quality probability predictions for new customers returning to a company for a second purchase within a certain time frame using machine learning techniques. This provides a powerful tool for a company when evaluating potential prospects.
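The winning setup can be sketched with scikit-learn; here GradientBoostingClassifier stands in for Extreme Gradient Boosting (xgboost.XGBClassifier could be dropped in the same way), and synthetic imbalanced data replaces the proprietary customer data:

```python
# Platt-calibrated gradient boosting on an imbalanced binary problem.
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import brier_score_loss
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, weights=[0.85], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# method="sigmoid" is Platt scaling fitted on cross-validated predictions.
model = CalibratedClassifierCV(GradientBoostingClassifier(),
                               method="sigmoid", cv=5)
model.fit(X_tr, y_tr)
proba = model.predict_proba(X_te)[:, 1]
print("Brier score:", brier_score_loss(y_te, proba))
```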
APA, Harvard, Vancouver, ISO, and other styles
33

Arnold, Naomi (Naomi Aiko). "Wafer defect prediction with statistical machine learning." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/105633.

Full text
Abstract:
Thesis: S.M. in Engineering Systems, Massachusetts Institute of Technology, School of Engineering, Institute for Data, Systems, and Society, 2016. In conjunction with the Leaders for Global Operations Program at MIT.
Thesis: M.B.A., Massachusetts Institute of Technology, Sloan School of Management, 2016. In conjunction with the Leaders for Global Operations Program at MIT.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 81-83).
In the semiconductor industry, where technology continues to grow in complexity while also striving for lower manufacturing costs, it is becoming increasingly important to drive cost savings by screening out defective die upstream. The primary goal of the project is to build a statistical prediction model to facilitate operational improvements across two global manufacturing locations. The scope of the project includes one high-volume product line, an off-line statistical model using historical production data, and experimentation with machine learning algorithms. The prediction model pilot demonstrates that there is potential to improve the wafer sort process using a random forest classifier on wafer- and die-level datasets, yet more development is needed to conclude that final memory test defect die-level predictions are possible. Key findings include the importance of model computational performance in big data problems, the necessity of a living model that stays accurate over time to meet operational needs, and an evaluation methodology based on business requirements. This project provides a case study for a high-level strategy of assessing big data and advanced analytics applications to improve semiconductor manufacturing.
by Naomi Arnold.
S.M. in Engineering Systems
M.B.A.
APA, Harvard, Vancouver, ISO, and other styles
34

Campbell, Brian. "Type-based amortized stack memory prediction." Thesis, University of Edinburgh, 2008. http://hdl.handle.net/1842/3176.

Full text
Abstract:
Controlling resource usage is important for the reliability, efficiency and security of software systems. Automated analyses for bounding resource usage can be invaluable tools for ensuring these properties. Hofmann and Jost have developed an automated static analysis for finding linear heap space bounds, in terms of the input size, for programs in a simple functional programming language. Memory requirements are amortized by representing them as a requirement for an abstract quantity, potential, which is supplied by assigning potential to data structures in proportion to their size. This assignment is represented by annotations on their types. The type system then ensures that all potential requirements can be met from the original input's potential if a set of linear constraints can be solved. Linear programming can optimise the amount of potential subject to the constraints, yielding an upper bound on the memory requirements. However, obtaining bounds on the heap space requirements does not detect a faulty or malicious program which uses excessive stack space. In this thesis, we investigate extending Hofmann and Jost's techniques to infer bounds on stack space usage, first by examining two approaches: using the Hofmann-Jost analysis unchanged by applying a CPS transformation to the program being analysed, then showing that this predicts the stack space requirements of the original program; and directly adapting the analysis itself, which we show to be more practical. We then consider how to deal with the different allocation patterns stack space usage presents. In particular, the temporary nature of stack allocation leads us to a system where we calculate the total potential after evaluating an expression in terms of assignments of potential to the variables appearing in the expression as well as to the result. We also show that this analysis subsumes our previous systems and improves upon them. We further increase the precision of the inferred bounds by noting the importance of expressing stack memory bounds in terms of the depth of data structures and by taking the maximum of the usage bounds of subexpressions. We develop an analysis which uses richer definitions of the potential calculation to allow depth and maxima to be used, albeit with a more subtle inference process.
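The final linear programming step can be illustrated with a toy instance: minimise the potential assigned to the input subject to constraints of the kind type inference would produce (the constraints below are invented for illustration):

```python
# Toy LP: variables are p_in (potential per list element) and p0 (constant
# potential).  Suppose inference demands p_in >= 2 and p0 >= 1; encode as
# -p_in <= -2 and -p0 <= -1 and minimise the total assigned potential.
from scipy.optimize import linprog

c = [1, 1]                      # objective: minimise p_in + p0
A_ub = [[-1, 0], [0, -1]]
b_ub = [-2, -1]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x)   # -> [2., 1.]: stack bound of roughly 2*n + 1 for n elements
```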
APA, Harvard, Vancouver, ISO, and other styles
35

Jing, Junbo. "Vehicle Predictive Fuel-Optimal Control for Real-World Systems." The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1534506777487814.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Ge, Wuxiang. "Prediction-based failure management for supercomputers." Thesis, University of Manchester, 2011. https://www.research.manchester.ac.uk/portal/en/theses/predictionbased-failure-management-for-supercomputers(3accd61b-e77a-4722-919b-5bd9ae11610b).html.

Full text
Abstract:
The growing requirements of a diversity of applications necessitate the deployment of large and powerful computing systems, and failures in these systems may cause severe damage in every respect, from loss of human life to harm to the world economy. However, current fault tolerance techniques cannot meet the increasing requirements for reliability, so new solutions are urgently needed, and research on proactive schemes is one direction that may offer better efficiency. This thesis proposes a novel proactive failure management framework. Its goal is to reduce failure penalties and improve fault tolerance efficiency in supercomputers when running complex applications. The proposed proactive scheme builds on two core components: failure prediction and proactive failure recovery. More specifically, the failure prediction component is based on the assessment of system events and employs semi-Markov models to capture the dependencies between failures and other events for the forecasting of forthcoming failures. Furthermore, a two-level failure prediction strategy is described that not only estimates the future failure occurrence but also identifies the specific failure categories. Based on the accurate failure forecasting, a prediction-based coordinated checkpoint mechanism is designed to construct extra checkpoints just before each predicted failure occurrence so that the wasted computational time can be significantly reduced. Moreover, a theoretical model has been developed to assess the proactive scheme, enabling calculation of the overall wasted computational time. The prediction component has been applied to industrial data from the IBM BlueGene/L system. Results of the failure prediction component show a great improvement in prediction accuracy in comparison with three other well-known prediction approaches, and also demonstrate that the semi-Markov based predictor, which achieved a precision of 87.41% and a recall of 77.95%, performs better than the other predictors.
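A back-of-the-envelope cost model (our own illustration, not the thesis's theoretical model) shows why predictor recall matters for prediction-based checkpointing:

```python
# Toy model: fraction of machine time lost to failures, assuming a
# correctly predicted failure loses only the cost of one extra checkpoint,
# while an unpredicted one loses half a checkpoint interval on average.
def expected_lost_fraction(mtbf_h, ckpt_interval_h, ckpt_cost_h, recall):
    loss_unpredicted = ckpt_interval_h / 2
    loss_predicted = ckpt_cost_h
    per_failure = recall * loss_predicted + (1 - recall) * loss_unpredicted
    return per_failure / mtbf_h

for recall in (0.0, 0.5, 0.78):          # 0.78 is close to the reported recall
    print(recall, expected_lost_fraction(mtbf_h=24, ckpt_interval_h=4,
                                         ckpt_cost_h=0.2, recall=recall))
```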
APA, Harvard, Vancouver, ISO, and other styles
37

Mehdi, Muhammad Sarim. "Trajectory Prediction for ADAS." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/21891/.

Full text
Abstract:
A novel pipeline is presented for unsupervised trajectory prediction. As part of this research, numerous techniques are investigated for trajectory prediction of dynamic obstacles from an egocentric perspective (the driver's perspective). The algorithm takes images from a calibrated stereo camera, or data from a laser scanner, as input and outputs a heat map that describes all possible future locations of a specific 3D object for the next few frames. This research has many applications, most notably for autonomous cars, as it allows them to make better driving decisions if they are able to anticipate where another moving object will be in the future.
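A minimal constant-velocity baseline conveys the output format (an assumption for illustration, not the thesis pipeline): extrapolate the last observed motion and accumulate Gaussians of growing spread into a ground-plane heat map:

```python
# Constant-velocity heat-map baseline for a tracked 2D obstacle position.
import numpy as np

def heatmap(track, horizon=5, grid=50, extent=20.0, sigma0=0.5, growth=0.3):
    track = np.asarray(track, dtype=float)
    v = track[-1] - track[-2]                 # last observed velocity
    xs = np.linspace(-extent, extent, grid)
    X, Y = np.meshgrid(xs, xs)
    H = np.zeros_like(X)
    for k in range(1, horizon + 1):
        mu = track[-1] + k * v                # predicted position at step k
        sigma = sigma0 + growth * k           # uncertainty grows with horizon
        H += np.exp(-((X - mu[0])**2 + (Y - mu[1])**2) / (2 * sigma**2))
    return H / H.max()

H = heatmap([(0.0, 0.0), (0.5, 0.2), (1.0, 0.4)])
print(H.shape, H.max())
```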
APA, Harvard, Vancouver, ISO, and other styles
38

Ibarria, Lorenzo. "Geometric Prediction for Compression." Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/16162.

Full text
Abstract:
This thesis proposes several new predictors for the compression of shapes, volumes and animations. To compress frames in triangle-mesh animations with fixed connectivity, we introduce the ELP (Extended Lorenzo Predictor) and the Replica predictors, which extrapolate the position of each vertex in frame i from its position in frame i-1 and from the positions of its neighbors in both frames. For lossy compression we have combined these predictors with a segmentation of the animation into clips and a synchronized simplification of all frames in a clip. To compress 2D and 3D static or animated scalar fields sampled on a regular grid, we introduce the Lorenzo predictor, well suited for scanline traversal, and the family of Spectral predictors that accommodate any traversal and predict a sample value from known samples in a small neighborhood. Finally, to support the compressed streaming of isosurface animations, we have developed an approach that identifies all node values needed to compute a given isosurface and encodes the unknown values using our Spectral predictor.
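The 2D scanline form of the Lorenzo predictor is compact enough to sketch: each sample is predicted by the parallelogram rule from its three already-visited neighbours, and only the residuals need to be encoded:

```python
# 2D Lorenzo predictor residuals on a regular grid (illustrative sketch).
import numpy as np

def lorenzo_residuals(field):
    f = np.asarray(field, dtype=float)
    pred = np.zeros_like(f)
    pred[1:, 1:] = f[:-1, 1:] + f[1:, :-1] - f[:-1, :-1]  # parallelogram rule
    pred[0, 1:] = f[0, :-1]        # first row/column fall back to 1D
    pred[1:, 0] = f[:-1, 0]
    return f - pred                # residuals; decode by cumulative replay

x, y = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
smooth = x**2 + y                  # smooth fields give near-zero residuals
print(np.abs(lorenzo_residuals(smooth)).mean())
```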
APA, Harvard, Vancouver, ISO, and other styles
39

Li, Xiang. "Lifetime prediction for rocks." Doctoral thesis, Technische Universitaet Bergakademie Freiberg Universitaetsbibliothek "Georgius Agricola", 2013. http://nbn-resolving.de/urn:nbn:de:bsz:105-qucosa-126371.

Full text
Abstract:
A lifetime prediction scheme is proposed based on the assumption that the lifetime (time to failure) of rocks under load is governed by the growth of microstructural defects (microcracks). The numerical approach is based on linear elastic fracture mechanics. The calculation scheme is implemented as a cellular automaton, where each cell contains a microcrack whose length and orientation follow certain distributions. The propagation of the microcrack is controlled by the Charles equation, based on subcritical crack growth. A zone inside the numerical model fails if the microcrack reaches the zone dimension or if the stress intensity factor of the crack reaches the fracture toughness. Macroscopic fractures are formed by the coalescence of these propagating microcracks, and finally lead to failure of the model. In the numerical approach, elasto-plastic stress redistributions take place during the forming of the macroscopic fractures. Distinct microcrack propagation types have been programmed and applied to the proposed numerical models, which are studied under different loading conditions. Numerical results in excellent agreement with the analytical solutions are obtained with respect to predicted lifetime, important microcrack parameters, fracture pattern and damage evolution. Potential applications of the proposed numerical model schemes are investigated in some preliminary studies and the simulation results are discussed. Finally, conclusions are drawn and possible improvements to the numerical approaches and extensions of the research work are given.
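The lifetime integral underlying such a scheme can be sketched for a single crack: integrating a normalised Charles law da/dt = v0 (K/K_IC)^n, with K = Y σ √(πa), up to the critical crack length gives a closed-form time to failure. All parameter values below are illustrative, not calibrated to any rock:

```python
# Closed-form subcritical-growth lifetime for one crack (illustrative).
import math

def lifetime(a0, sigma, v0=1e-3, n=30.0, Y=1.12, K_IC=1.5e6):
    """Seconds until a crack of initial length a0 [m] under stress sigma [Pa]
    grows to the critical length a_c at which K reaches K_IC."""
    a_c = (K_IC / (Y * sigma))**2 / math.pi        # K(a_c) == K_IC
    c = (K_IC / (Y * sigma * math.sqrt(math.pi)))**n / v0
    e = 1.0 - n / 2.0                              # exponent after integrating
    return c * (a_c**e - a0**e) / e

# days to failure for a 0.1 mm crack at 40 MPa with these toy constants
print(lifetime(a0=1e-4, sigma=40e6) / 86400)
```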
APA, Harvard, Vancouver, ISO, and other styles
40

Abdullah, Siti Norbaiti binti. "Machine learning approach for crude oil price prediction." Thesis, University of Manchester, 2014. https://www.research.manchester.ac.uk/portal/en/theses/machine-learning-approach-for-crude-oil-price-prediction(949fa2d5-1a4d-416a-8e7c-dd66da95398e).html.

Full text
Abstract:
Crude oil prices impact the world economy and are thus of interest to economic experts and politicians. The oil price's volatile behaviour, which has moulded today's world economy, society and politics, has motivated and continues to motivate researchers, and is expected to prompt new and interesting research challenges. In the present research, machine learning and computational intelligence utilising historical quantitative data, together with the linguistic element of online news services, are used to predict crude oil prices via five different models: (1) the Hierarchical Conceptual (HC) model; (2) the Artificial Neural Network-Quantitative (ANN-Q) model; (3) the Linguistic model; (4) the Rule-based Expert model; and, finally, (5) the Hybridisation of Linguistic and Quantitative (LQ) model. First, to understand the behaviour of the crude oil price market, the HC model functions as a platform to retrieve information that explains the behaviour of the market. This information is retrieved from Google News articles using the keyword "Crude oil price". Through a systematic approach, price data are classified into categories that explain the crude oil price's level of impact on the market. The price data classification distinguishes crucial behaviour information contained in the articles. These distinguished data features are ranked hierarchically according to their level of impact and used as a reference to discover the numeric data implemented in model (2). Model (2) is developed to validate the features retrieved in model (1). It introduces the Back Propagation Neural Network (BPNN) technique as an alternative to conventional techniques used for forecasting the crude oil market. The BPNN technique is shown in model (2) to produce more accurate and competitive results. Likewise, the features retrieved from model (1) are validated and shown to cause market volatility. In model (3), a more systematic approach is introduced to extract the features from the news corpus. This approach applies a content utilisation technique to news articles and mines news sentiments by applying fuzzy grammar fragment extraction. To extract the features from the news articles systematically, a domain-customised 'dictionary' containing grammar definitions is built beforehand. These retrieved features are used as the linguistic data to predict the market's behaviour with respect to the crude oil price. A decision tree is also produced from this model, which hierarchically delineates the events (i.e., the market's rules) that made the market volatile and later resulted in the production of model (4). Then, model (5) is built to complement the linguistic modelling performed in model (3) with the numeric prediction model of model (2). To conclude, the hybridisation of these two models and the integration of models (1) to (5) in this research imitates how crude oil market regulators calculate the risk of their actions before executing a price hedge in the market, wherein risk calculation is based on the 'facts' (quantitative data) and 'rumours' (linguistic data) collected. The hybridisation of quantitative and linguistic data in this study has shown promising accuracy outcomes, evidenced by the optimum value of directional accuracy and the minimum value of errors obtained.
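The quantitative model (2) idea can be sketched with scikit-learn, where MLPRegressor stands in for the thesis's BPNN and a synthetic random-walk series replaces real oil prices:

```python
# BPNN-style one-step-ahead price forecast from lagged prices (sketch).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
price = 60 + np.cumsum(rng.normal(0, 1, 600))       # synthetic price series

lags = 5
X = np.column_stack([price[i:len(price) - lags + i] for i in range(lags)])
y = price[lags:]

split = 500
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                     random_state=0).fit(X[:split], y[:split])
pred = model.predict(X[split:])
# directional accuracy: did we predict the sign of the next price move?
up_pred = np.sign(pred - X[split:, -1])
up_true = np.sign(y[split:] - X[split:, -1])
print("directional accuracy:", np.mean(up_pred == up_true))
```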
APA, Harvard, Vancouver, ISO, and other styles
41

Hagward, Anders. "Using Git Commit History for Change Prediction : An empirical study on the predictive potential of file-level logical coupling." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-172998.

Full text
Abstract:
In recent years, a new generation of distributed version control systems has taken the place of the aging centralized ones, with Git arguably being the most popular distributed system today. We investigate the potential of using Git commit history to predict files that are often changed together. Specifically, we look at the rename tracking heuristic found in Git, and the impact it has on prediction performance. By applying a data mining algorithm to five popular GitHub repositories we extract logical coupling – inter-file dependencies not necessarily detectable by static analysis – on which we base our change prediction. In addition, we examine whether certain commits are better suited for change prediction than others; we define a bug fix commit as a commit that resolves one or more issues in the associated issue tracking system, and compare the prediction performance of such commits. While our findings do not reveal any notable differences in prediction performance when disregarding rename information, they suggest that extracting coupling from, and predicting on, bug fix commits in particular could lead to predictions that are both more accurate and more numerous.
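The core co-change extraction is simple to sketch (a minimal miner in the spirit of the study, not the thesis code): walk `git log`, collect each commit's file set, and count pairwise co-occurrences:

```python
# Minimal logical-coupling miner; run inside any Git repository.
import subprocess
from collections import Counter
from itertools import combinations

log = subprocess.run(
    ["git", "log", "--name-only", "--pretty=format:%x00"],
    capture_output=True, text=True, check=True).stdout

pair_counts, file_counts = Counter(), Counter()
for commit in log.split("\x00"):           # one chunk of file names per commit
    files = sorted({f for f in commit.splitlines() if f.strip()})
    file_counts.update(files)
    pair_counts.update(combinations(files, 2))

# confidence(a -> b) = P(b changed | a changed), support = co-change count
for (a, b), n in pair_counts.most_common(10):
    print(f"{a} -> {b}: support={n}, confidence={n / file_counts[a]:.2f}")
```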
APA, Harvard, Vancouver, ISO, and other styles
42

Gailey, Robert Stuart. "The amputee mobility predictor : a functional assessment instrument for the prediction of the lower limb amputee's readiness to ambulate." Thesis, University of Strathclyde, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.367028.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Kuchangi, Shamanth. "A categorical model for traffic incident likelihood estimation." Thesis, Texas A&M University, 2006. http://hdl.handle.net/1969.1/4661.

Full text
Abstract:
In this thesis an incident prediction model is formulated and calibrated. The primary idea of the model is to correlate the expected number of crashes on any section of a freeway with a set of traffic stream characteristics, so that a reliable estimate of the likelihood of crashes can be provided on a real-time basis. Traffic stream variables used as explanatory variables in this model are termed "incident precursors". The most promising incident precursors for the model formulation were determined by reviewing past research. The statistical model employed is a categorical log-linear model with the coefficient of speed variation and occupancy as the precursors. Peak-hour indicators and roadway-type indicators were additional categorical variables used in the model. The model was calibrated using historical loop detector data and crash reports, both available from test beds in Austin, Texas. An examination of the calibrated model indicated that it distinguishes different levels of crash rate for different precursor values and hence could be a useful tool in estimating the likelihood of incidents for real-time freeway incident management systems.
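As a sketch of such a categorical log-linear model (with fabricated data and bin choices standing in for the Austin test-bed data), a Poisson GLM over binned precursors can be fitted with statsmodels:

```python
# Categorical log-linear (Poisson) crash-frequency model, illustrative only.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "crashes": rng.poisson(1.0, 400),                      # crashes per period
    "cov_speed": rng.choice(["low", "med", "high"], 400),  # speed-variation bin
    "occupancy": rng.choice(["low", "med", "high"], 400),
    "peak": rng.choice([0, 1], 400),                       # peak-hour indicator
    "exposure": rng.uniform(0.5, 2.0, 400),                # observed veh-hours
})

model = smf.glm("crashes ~ C(cov_speed) + C(occupancy) + C(peak)",
                data=df, family=sm.families.Poisson(),
                offset=np.log(df["exposure"])).fit()
print(model.summary())
```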
APA, Harvard, Vancouver, ISO, and other styles
44

Iqbal, Ammar, Rakesh Tanange, and Shafqat Virk. "Vehicle fault prediction analysis : a health prediction tool for heavy vehicles /." Göteborg : IT-universitetet, Chalmers tekniska högskola och Göteborgs universitet, 2006. http://www.ituniv.se/w/index.php?option=com_itu_thesis&Itemid=319.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

McElroy, Wade Allen. "Demand prediction modeling for utility vegetation management." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/117973.

Full text
Abstract:
Thesis: M.B.A., Massachusetts Institute of Technology, Sloan School of Management, in conjunction with the Leaders for Global Operations Program at MIT, 2018.
Thesis: S.M., Massachusetts Institute of Technology, Department of Mechanical Engineering, in conjunction with the Leaders for Global Operations Program at MIT, 2018.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 63-64).
This thesis proposes a demand prediction model for utility vegetation management (VM) organizations. The primary use of the model is to aid in the technology adoption process for Light Detection and Ranging (LiDAR) inspections and in overall system planning efforts. Utility asset management ensures vegetation clearance around overhead powerlines to meet state and federal regulations, all in an effort to create the safest and most reliable electrical system for customers. To meet compliance, the utility inspects, and then prunes and/or removes, trees within its entire service area on an annual basis. In recent years LiDAR technology has become more widely implemented in utilities to quickly and accurately inspect their service territory. VM programs encounter the dilemma of wanting to pursue LiDAR as a technology to improve their operations, but find it prudent, especially in the high-risk and heavily regulated environment, to test the technology first. The biggest problem during, and after, such testing is the lack of a baseline for the expected number of tree units worked each year, owing to the intrinsic variability of tree growth. As a result, double inspections and/or long pilot projects are conducted before the technology is fully adopted. This thesis addresses circuit-level tree work forecasting through the development of a statistical prediction model. The outcome of this model is a reduced timeframe for complete adoption of LiDAR technology in utility vegetation programs. Additionally, the modeling effort provides the utility with insight into annual planning improvements. Lastly, the model provides a baseline for future individual tree growth models that include and leverage LiDAR data to provide a superior level of safety and reliability for utility customers.
by Wade Allen McElroy.
M.B.A.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
46

Sepp Löfgren, Nicholas. "Accelerating bulk material property prediction using machine learning potentials for molecular dynamics : predicting physical properties of bulk Aluminium and Silicon." Thesis, Linköpings universitet, Teoretisk Fysik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-179894.

Full text
Abstract:
In this project machine learning (ML) interatomic potentials are trained and used in molecular dynamics (MD) simulations to predict the physical properties of total energy, mean squared displacement (MSD) and specific heat capacity for systems of bulk Aluminium and Silicon. The interatomic potentials investigated are trained using the ML models kernel ridge regression (KRR) and moment tensor potentials (MTPs). The simulations using these ML potentials are then compared with results obtained from ab-initio simulations using the gold-standard method of density functional theory (DFT), as implemented in the Vienna ab-initio simulation package (VASP). The results show that the MTP simulations reach accuracy comparable to the DFT simulations for total energy and MSD for Aluminium, with errors on the order of meV and 10^-5 Å^2 respectively. Specific heat capacity is not reasonably replicated for Aluminium, and the MTP simulations do not reasonably replicate the studied properties for the system of Silicon. The KRR models are implemented in the most direct way and do not yield reasonably low errors even when trained on all available 10000 time steps of DFT training data. The MTPs, on the other hand, need only be trained on approximately 100 time steps to replicate the physical properties of Aluminium with accuracy comparable to DFT. After being trained on 100 time steps, the MTPs achieve mean absolute errors on the order of 10^-3 for energy per atom and 10^-1 for force magnitudes for Aluminium, and 10^-3 and 10^-2 respectively for Silicon. At the same time, the MTP simulations require fewer core hours to simulate the same number of time steps than the DFT simulations. In conclusion, MTPs could very likely play a role in accelerating both materials simulations themselves and, subsequently, the emergence of the data-driven materials design and informatics paradigm.
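The KRR baseline reduces to a few lines with scikit-learn; random feature vectors stand in here for real structure descriptors (e.g. Coulomb-matrix or SOAP vectors) and for the DFT energies:

```python
# Kernel ridge regression from descriptors to energies (illustrative sketch).
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 30))                  # one descriptor per MD snapshot
E = X[:, :5].sum(axis=1) + 0.01 * rng.normal(size=500)   # surrogate energies

X_tr, X_te, E_tr, E_te = train_test_split(X, E, random_state=0)
model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=0.05).fit(X_tr, E_tr)
mae = np.abs(model.predict(X_te) - E_te).mean()
print(f"MAE: {mae:.4f} (in the energy units of the training data)")
```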
APA, Harvard, Vancouver, ISO, and other styles
47

Akkasli, Cem. "Methods for Path loss Prediction." Thesis, Växjö University, School of Mathematics and Systems Engineering, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:vxu:diva-6127.

Full text
Abstract:

Large-scale path loss modeling plays a fundamental role in designing both fixed and mobile radio systems. Predicting the radio coverage area of a system is not done in a standard manner, and wireless systems are expensive. Therefore, before setting up a system one has to choose a proper prediction method depending on the channel environment, frequency band and the desired radio coverage range. Path loss prediction plays a crucial role in link budget analysis and in the cell coverage prediction of mobile radio systems. Especially in urban areas, an increasing number of subscribers brings forth the need for more base stations and channels. To obtain high efficiency from the frequency reuse concept in modern cellular systems, one has to eliminate the interference at the cell boundaries. Determining the cell size properly is done by using an accurate path loss prediction method. Starting from the radio propagation phenomena and basic path loss models, this thesis aims at describing various accurate path loss prediction methods used both in rural and urban environments. The Walfisch-Bertoni and Hata models, which are both used for UHF propagation in urban areas, were chosen for a detailed comparison. The comparison shows that the Walfisch-Bertoni model, which involves more parameters, agrees with the Hata model for the overall path loss.
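For concreteness, the Okumura-Hata median path loss used in such comparisons is a closed-form expression (valid roughly for f = 150-1500 MHz, base station height 30-200 m, mobile height 1-10 m, distance 1-20 km):

```python
# Okumura-Hata median path loss for an urban macrocell, in dB.
import math

def hata_urban(f_mhz, hb_m, hm_m, d_km, small_city=True):
    if small_city:   # mobile antenna correction for small/medium cities
        a_hm = ((1.1 * math.log10(f_mhz) - 0.7) * hm_m
                - (1.56 * math.log10(f_mhz) - 0.8))
    else:            # large-city correction, valid for f >= 300 MHz
        a_hm = 3.2 * (math.log10(11.75 * hm_m))**2 - 4.97
    return (69.55 + 26.16 * math.log10(f_mhz) - 13.82 * math.log10(hb_m)
            - a_hm + (44.9 - 6.55 * math.log10(hb_m)) * math.log10(d_km))

print(f"{hata_urban(900, 50, 1.5, 5):.1f} dB")   # ~147 dB at 900 MHz, 5 km
```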

APA, Harvard, Vancouver, ISO, and other styles
48

Bayrak, Hakan. "Lifetime Condition Prediction For Bridges." PhD thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613793/index.pdf.

Full text
Abstract:
Infrastructure systems are crucial facilities. They supply the necessary transportation, water and energy utilities for the public. However, these systems gradually deteriorate with age and approach the end of their lifespans, and therefore require periodic maintenance and repair in order to function reliably throughout their lifetimes. Bridge infrastructure is an essential part of the transportation infrastructure. Bridge management systems (BMSs), used to monitor the condition and safety of the bridges in a bridge infrastructure, have evolved considerably in the past decades. The aim of BMSs is to use resources in an optimal manner while keeping the bridges out of risk of failure. BMSs use lifetime performance curves to predict the future condition of bridge elements or bridges. The most widely implemented condition-based performance prediction and maintenance optimization model is the Markov Decision Process (MDP) based model. Its importance lies in the fact that it defines the time-variant deterioration using the Markov transition probability matrix and performs the lifetime cost optimization by finding the optimum maintenance policy. In this study, the MDP-based model is examined and a computer program is developed to find the optimal policy with discounted life-cycle cost. The other performance prediction model investigated in this study is a probabilistic bi-linear model which accounts for the uncertainties in the deterioration process and in the application of maintenance actions through the use of random variables. As part of the study, in order to further analyze and develop the bi-linear model, a Latin Hypercube Sampling (LHS) based simulation program is also developed and integrated into the main computational algorithm, which can produce condition, safety, and life-cycle cost profiles for bridge members with and without maintenance actions. Furthermore, a polynomial-based condition prediction is also examined as an alternative performance prediction model. This model is obtained from condition rating data by applying regression analysis, and regression-based performance curves are regenerated using the Latin Hypercube sampling method. Finally, the results from the Markov chain-based performance prediction are compared with the simulation-based bi-linear prediction, and the derivation of the transition probability matrix from a simulated regression-based condition profile is introduced as a newly developed approach. It has been observed that the Markov chain-based average condition rating profiles match well with the simulation-based mean condition rating profiles, suggesting that the simulation-based condition prediction model may be considered a potential model for future BMSs.
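The Markov-chain core of such a model is a one-line update per year: propagate the condition-state distribution with the transition probability matrix (the matrix below is made up for illustration, not taken from the thesis):

```python
# Condition-state propagation with an annual Markov transition matrix.
import numpy as np

P = np.array([              # states 1 (good) .. 4 (poor); rows sum to 1
    [0.90, 0.10, 0.00, 0.00],
    [0.00, 0.85, 0.15, 0.00],
    [0.00, 0.00, 0.80, 0.20],
    [0.00, 0.00, 0.00, 1.00],
])
ratings = np.array([1, 2, 3, 4])
s = np.array([1.0, 0.0, 0.0, 0.0])     # new bridge: all mass in state 1

for year in range(0, 51, 10):
    print(f"year {year:2d}: expected rating {s @ ratings:.2f}")
    s = s @ np.linalg.matrix_power(P, 10)   # advance the distribution 10 years
```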
APA, Harvard, Vancouver, ISO, and other styles
49

Yu, Xiaofeng. "Prediction Intervals for Class Probabilities." The University of Waikato, 2007. http://hdl.handle.net/10289/2436.

Full text
Abstract:
Prediction intervals for class probabilities are of interest in machine learning because they can quantify the uncertainty about the class probability estimate for a test instance. The idea is that all likely class probability values of the test instance are included, with a pre-specified confidence level, in the calculated prediction interval. This thesis proposes a probabilistic model for calculating such prediction intervals. Given the unobservability of class probabilities, a Bayesian approach is employed to derive a complete distribution of the class probability of a test instance, based on a set of class observations of training instances in the neighbourhood of the test instance. A random decision tree ensemble learning algorithm is also proposed, whose prediction output constitutes the neighbourhood that is used by the Bayesian model to produce a prediction interval (PI) for the test instance. The Bayesian model, used in conjunction with the ensemble learning algorithm and the standard nearest-neighbour classifier, is evaluated on artificial datasets and modified real datasets.
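A minimal sketch of the idea, assuming a conjugate Beta-Binomial formulation (the thesis's exact model may differ): the neighbourhood's class counts define a posterior over the class probability, whose quantiles form the prediction interval:

```python
# Beta-posterior prediction interval for a class probability (sketch).
from scipy.stats import beta

def class_probability_interval(k, n, confidence=0.95, a0=1.0, b0=1.0):
    """k positives among n neighbourhood observations, Beta(a0, b0) prior."""
    lo = (1 - confidence) / 2
    post = beta(a0 + k, b0 + n - k)
    return post.ppf(lo), post.ppf(1 - lo)

# 7 of 10 neighbourhood instances positive -> 95% interval for p
print(class_probability_interval(k=7, n=10))
```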
APA, Harvard, Vancouver, ISO, and other styles
50

Rozum, Michael A. "Effective design augmentation for prediction." Diss., This resource online, 1990. http://scholar.lib.vt.edu/theses/available/etd-08032007-102232/.

Full text
APA, Harvard, Vancouver, ISO, and other styles