Click this link to see other types of publications on this topic: Prediction.

Doctoral dissertations on the topic "Prediction"

Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles

Choose a source type:

Check out the 50 best academic doctoral dissertations on the topic "Prediction".

An "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically create a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication in .pdf format and read its abstract online, if the relevant parameters are available in the metadata.

Browse doctoral dissertations from various fields and create appropriate bibliographies.

1

Carrión, Brännström Robin. "Aggregating predictions using Non-Disclosed Conformal Prediction". Thesis, Uppsala universitet, Statistiska institutionen, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-385098.

Full text of the source
Abstract:
When data are stored in different locations and pooling of such data is not allowed, there is an informational loss when doing predictive modeling. In this thesis, a new method called Non-Disclosed Conformal Prediction (NDCP) is adapted to a regression setting, such that predictions and prediction intervals can be aggregated from different data sources without interchanging any data. The method is built upon the Conformal Prediction framework, which produces predictions with confidence measures on top of any machine learning method. The method is evaluated on regression benchmark data sets using Support Vector Regression, with different sizes and settings for the data sources, to simulate real-life scenarios. The results show that the method produces conservatively valid prediction intervals even though, in some settings, the individual data sources do not manage to create valid intervals. NDCP also creates more stable intervals than the individual data sources. Thanks to its straightforward implementation, data owners who cannot share data but would like to contribute to predictive modeling would benefit from using this method.
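To make the building block concrete, the following minimal Python sketch shows split (inductive) conformal regression, the kind of interval construction that NDCP aggregates across data sources. The data, the SVR model choice and all names are illustrative, not taken from the thesis.

import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X).ravel() + rng.normal(0, 0.2, size=300)

# Split into a proper training set and a calibration set.
X_train, y_train, X_cal, y_cal = X[:200], y[:200], X[200:], y[200:]

model = SVR().fit(X_train, y_train)

# Nonconformity scores: absolute residuals on the calibration set.
scores = np.abs(y_cal - model.predict(X_cal))

# For miscoverage level alpha, take the adjusted empirical quantile.
alpha = 0.1
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n)

x_new = np.array([[0.5]])
pred = model.predict(x_new)[0]
print(f"{1 - alpha:.0%} prediction interval: [{pred - q:.3f}, {pred + q:.3f}]")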
APA, Harvard, Vancouver, ISO, and other styles
2

Miller, Mark Daniel. "Entangled predictive brain : emotion, prediction and embodied cognition". Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/33218.

Full text of the source
Abstract:
How does the living body impact, and perhaps even help constitute, the thinking, reasoning, feeling agent? This is the guiding question that the following work seeks to answer. The subtitle of this project is emotion, prediction and embodied cognition for good reason: these are the three closely related themes that tie together the various chapters of the following thesis. The central claim is that a better understanding of the nature of emotion offers valuable insight for understanding the nature of the so-called 'predictive mind', including a powerful new way to think about the mind as embodied. Recently, a new perspective has arguably taken the pole position in both philosophy of mind and the cognitive sciences when it comes to discussing the nature of mind. This framework takes the brain to be a probabilistic prediction engine. Such engines, so the framework proposes, are dedicated to the task of minimizing the disparity between how they expect the world to be and how the world actually is. Part of the power of the framework is the elegant suggestion that much of what we take to be central to human intelligence - perception, action, emotion, learning and language - can be understood within the framework of prediction and error reduction. In what follows I will refer to this general approach to understanding the mind and brain as 'predictive processing'. While the predictive processing framework is in many ways revolutionary, there is a tendency for researchers interested in this topic to assume a very traditional 'neurocentric' stance concerning the mind. I argue that this neurocentric stance is completely optional, and that a focus on emotional processing provides good reasons to think that the predictive mind is also a deeply embodied mind. The result is a way of understanding the predictive brain that allows the body and the surrounding environment to make a robust constitutive contribution to the predictive process. While it's true that predictive models can get us a long way in making sense of what drives the neural economy, I will argue that a complete picture of human intelligence requires us to also explore the many ways that a predictive brain is embodied in a living body and embedded in the social-cultural world in which it was born and lives.
APA, Harvard, Vancouver, ISO, and other styles
3

Björsell, Joachim. "Long Range Channel Predictions for Broadband Systems : Predictor antenna experiments and interpolation of Kalman predictions". Thesis, Uppsala universitet, Signaler och System, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-281058.

Full text of the source
Abstract:
The field of wireless communication is under massive development and the demands on the cellular system, especially, are constantly increasing as the utilizing devices are increasing in number and diversity. A key component of wireless communication is knowledge of the channel, i.e., how the signal is affected when sent over the wireless medium. Channel prediction is one concept which can improve current techniques or enable new ones in order to increase the performance of the cellular system. Firstly, this report investigates the concept of a predictor antenna on new, extensive measurements which represent many different environments and scenarios. A predictor antenna is a separate antenna that is placed in front of the main antenna on the roof of a vehicle. The predictor antenna could enable good channel prediction for high-velocity vehicles. The measurements prove to be too noisy to be used directly in the predictor antenna concept but show potential if they can be noise-filtered without distorting the signal. The use of low-pass and Kalman filters for this purpose did not give the desired results, but the technique should be investigated further. Secondly, an interpolation technique is presented which utilizes predictions with different prediction horizons by estimating intermediate channel components using interpolation. This could save channel feedback resources as well as give better robustness to bad channel predictions by letting fresh, local channel predictions be used as a quality reference for the interpolated channel estimates. For a linear interpolation between 8-step and 18-step Kalman predictions with Normalized Mean Square Error (NMSE) of -15.02 dB and -10.88 dB, the interpolated estimates had an average NMSE of -13.14 dB, while lowering the required feedback data by about 80%. The use of a warning algorithm reduced the NMSE by a further 0.2 dB. It mainly eliminated the largest prediction errors, which otherwise could lead to undesired retransmissions.
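As a rough illustration of the interpolation idea (with synthetic noise in place of the thesis's Kalman predictors), the sketch below blends a short-horizon and a long-horizon channel prediction and scores each with NMSE.

import numpy as np

rng = np.random.default_rng(1)
h_true = rng.normal(size=500) + 1j * rng.normal(size=500)  # toy channel taps

def nmse_db(h_hat, h):
    # Normalized Mean Square Error in dB.
    return 10 * np.log10(np.mean(np.abs(h_hat - h) ** 2) / np.mean(np.abs(h) ** 2))

pred_8step = h_true + 0.18 * (rng.normal(size=500) + 1j * rng.normal(size=500))
pred_18step = h_true + 0.30 * (rng.normal(size=500) + 1j * rng.normal(size=500))

# Linear interpolation for an intermediate horizon (e.g. 13 steps ahead).
w = 0.5
pred_interp = (1 - w) * pred_8step + w * pred_18step

for name, p in [("8-step", pred_8step), ("18-step", pred_18step), ("interpolated", pred_interp)]:
    print(f"{name}: NMSE = {nmse_db(p, h_true):.2f} dB")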
APA, Harvard, Vancouver, ISO, and other styles
4

Bramlet, John. "Earthquake prediction and earthquake damage prediction /". Connect to resource, 1996. http://hdl.handle.net/1811/31764.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
5

Greco, Antonino. "The role of task relevance in the modulation of brain dynamics during sensory predictions". Doctoral thesis, Università degli studi di Trento, 2021. http://hdl.handle.net/11572/307050.

Full text of the source
Abstract:
Associative learning is a fundamental ability biological systems possess in order to adapt to a nonstationary environment. One of the core aspects of associative learning theoretical frameworks is that surprising events drive learning by signalling the need to update the system's beliefs about the probability structure governing stimuli associations. Specifically, the central neural system generates internal predictions to anticipate the causes of its perceptual experience and computes a prediction error to update its generative model of the environment, an idea generally known as the predictive coding framework. However, it is not clear whether the brain generates these predictions only for goal-oriented behavior or whether they are a more general characteristic of brain function. In this thesis, I explored the role of task relevance in modulating brain activity during exposure to a sensory associative learning task. In the first study, participants were asked to perform a perceptual detection task while audio-visual stimuli were presented as distractors. These distractors possessed a probability structure that made some of them more frequently paired than others. Results showed that occipital activity triggered by the conditioned stimulus was elicited just before the arrival of the unconditioned visual stimulus. Moreover, occipital activity after the onset of the unconditioned stimulus followed a pattern of precision-weighted prediction errors. In the second study, two more sessions were added to the task of the previous study, in which the probability structure for all stimulus associations was identical, and the whole experiment spanned six days across two weeks. Results showed a difference in the modulation of the beta band induced by the presentation of the unconditioned stimulus preceded by the predictive and unpredictive conditioned auditory stimuli, obtained by comparing pre- and post-session activity. In the third study, participants were exposed to a task similar to that of the second study, with the modification that there was a condition in which the conditioned-unconditioned stimulus association was task-relevant, thus allowing a direct comparison of task-relevant and task-irrelevant associations. Results showed that both types of associations had similar patterns in terms of activity and functional connectivity when comparing the brain responses to the onset of the unconditioned visual stimulus. Taken together, these findings demonstrate that task-irrelevant associations rely on the same neural mechanisms as relevant ones. Thus, even if task relevance plays a modulatory role in the strength of the neural effects of associative learning, predictive processes take place in sensory associative learning regardless of task relevance.
APA, Harvard, Vancouver, ISO, and other styles
6

Kock, Peter. "Prediction and predictive control for economic optimisation of vehicle operation". Thesis, Kingston University, 2013. http://eprints.kingston.ac.uk/35861/.

Full text of the source
Abstract:
Truck manufacturers are currently under pressure to reduce pollution and the cost of transportation. The cost-efficient way to reduce CO2 emissions and cost is to reduce fuel consumption by adapting the vehicle speed to the driving conditions, by heuristic knowledge or mathematical optimisation. Due to their experience, professional drivers are capable of driving with great efficiency in terms of fuel consumption. The key research question addressed in this work is the comparison of the fuel efficiency of an unassisted drive by an experienced professional driver versus an enhanced drive using a driver assistance system. The motivation for this is based on the advantage of such a system in terms of price (lower than drivers' training), though obtaining drivers' acceptance of the system can be challenging. There is a range of fundamental issues that have to be addressed prior to the design and implementation of the driver assistance system. The first issue is related to the evaluation of the correctness of the prediction model under development, due to a range of inaccuracies introduced by slope errors in digital maps, imprecise modelling of the combustion engine, vehicle physics, etc. The second issue is related to the challenge of selecting a suitable method for the optimisation of mixed integer non-linear systems. Dynamic Programming proved to be very suitable for this work, and some methods of search space reduction are presented here. An analytical solution of the Bernoulli differential equation of the vehicle dynamics is also presented and used in order to reduce computing effort. Extensive simulation and driving tests were performed using different driving approaches to compare well-trained human experts with a range of different driving assistance systems based on standard cruise control, heuristic and mathematical optimisation. Finally, the acceptance of the systems by drivers has been evaluated.
APA, Harvard, Vancouver, ISO, and other styles
7

Andeta, Jemal Ahmed. "Road-traffic accident prediction model : Predicting the Number of Casualties". Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-20146.

Full text of the source
Abstract:
Efficient and effective road traffic prediction and management techniques are crucial in intelligent transportation systems. They can positively influence road development, safety enhancement, regulation formulation, and route planning, helping to protect lives from road traffic accidents in advance. This thesis considers road safety by predicting the number of casualties if an accident occurs, using multiple traffic accident attributes. It helps individuals (drivers) or traffic offices adjust and control the factors they contribute to an accident before it occurs. Three candidate algorithms from different regression fitting patterns are proposed and evaluated: the bagging, linear, and non-linear fitting patterns. The selected algorithms are gradient boosting machines (GBoost) from the bagging side, linear support vector regression (LinearSVR) from the linear side, and extreme learning machines (ELM) from the non-linear side. The RMSE and MAE performance evaluation metrics are applied to evaluate the models. GBoost achieved better performance than the other two, with a low error rate and the smallest interval width for a 95% prediction interval. The SHAP (SHapley Additive exPlanations) interpretation technique is applied to interpret each model at the global interpretation level using SHAP's beeswarm plots. Finally, suggestions for future improvements regarding the dataset and hyperparameter tuning are presented.
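The model comparison described above can be sketched with scikit-learn, using synthetic data in place of the accident records; ELM is omitted because scikit-learn ships no stock implementation, and all parameter choices are illustrative.

import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.svm import LinearSVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error

X, y = make_regression(n_samples=1000, n_features=10, noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "GBoost": GradientBoostingRegressor(random_state=0),
    "LinearSVR": LinearSVR(max_iter=10000, random_state=0),
}
for name, m in models.items():
    pred = m.fit(X_tr, y_tr).predict(X_te)
    rmse = np.sqrt(mean_squared_error(y_te, pred))
    mae = mean_absolute_error(y_te, pred)
    print(f"{name}: RMSE = {rmse:.2f}, MAE = {mae:.2f}")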
APA, Harvard, Vancouver, ISO, and other styles
8

Peterson, Ashley Thomas. "Cavitation prediction". Thesis, University of Cambridge, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.612813.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
9

Åkermark, Alexander, and Mattias Hallefält. "Churn Prediction". Thesis, Högskolan i Halmstad, Akademin för informationsteknologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-41236.

Full text of the source
Abstract:
Churn analysis is an important tool for companies as it can reduce the costs that are related to customer churn. Churn prediction is the process of identifying users before they churn; this is done by applying methods to collected data in order to find patterns that can be helpful when predicting new churners in the future. The objective of this report is to identify churners with the use of surveys collected from different golf clubs, their members and guests. This was accomplished by testing several different supervised machine learning algorithms in order to find the different classes and to see which supervised algorithms are most suitable for this kind of data. The margin of success was to achieve a greater accuracy than the percentage of the majority class in the dataset. The data was processed using label encoding, one-hot encoding and principal component analysis, and was split into 10 folds, 9 training folds and 1 testing fold, ensuring cross-validation when iterated 10 times while rearranging the test and training folds, as sketched below. Each algorithm processed the training data to create a classifier which was tested on the test data. The classifiers used for the project were K-nearest neighbours, support vector machine, multi-layer perceptron, decision trees and random forest. The different classifiers generally had an accuracy of around 72%, and the best classifier, random forest, had an accuracy of 75%. All the classifiers had an accuracy above the margin of success. K-folding, confusion matrices, classification reports and other internal cross-validation techniques were performed on the data to ensure the quality of the classifier. The project was a success, although there is a strong belief that the bottleneck for the project was the quality of the data, in terms of new legislation when collecting and storing data that results in redundant and faulty data.
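A minimal sketch of the preprocessing and 10-fold cross-validation scheme described above, with a made-up survey frame and column names in place of the golf-club data:

import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "membership": rng.choice(["full", "guest"], 500),
    "visits": rng.integers(0, 60, 500),
    "satisfaction": rng.integers(1, 6, 500),
})
churned = rng.integers(0, 2, 500)  # stand-in labels

pipe = Pipeline([
    ("encode", ColumnTransformer([("onehot", OneHotEncoder(), ["membership"])],
                                 remainder="passthrough")),
    ("pca", PCA(n_components=3)),
    ("clf", RandomForestClassifier(random_state=0)),
])
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(pipe, df, churned, cv=cv)
print(f"mean accuracy over 10 folds: {scores.mean():.3f}")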
APA, Harvard, Vancouver, ISO, and other styles
10

Jahedpari, Fatemeh. "Artificial prediction markets for online prediction of continuous variables". Thesis, University of Bath, 2016. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.690730.

Full text of the source
Abstract:
In this dissertation, we propose an online machine learning technique – named Artificial Continuous Prediction Market (ACPM) – to predict the value of a continuous variable by (i) integrating a set of data streams from heterogeneous sources with time-varying compositions, such as changing quality of data streams, (ii) integrating the results of several analysis models for each data source when the most suitable model for a given data source is not known a priori, (iii) dynamically weighting the prediction of each analysis model and data source to form the system prediction. We adapt the concept of prediction markets, motivated by their success in accurately forecasting the outcome of many events [Nikolova and Sami, 2007]. Our proposed model instantiates a sequence of prediction markets in which artificial agents play the role of market participants. Agents participate in the markets with the objective of increasing their own utility and hence indirectly cause the markets to aggregate their knowledge. Each market is run in a number of rounds in which agents have the opportunity to send their prediction and bet to the market. At the end of each round, the aggregated prediction of the crowd is announced to all agents, which provides a signal about the private information of other agents so they can adjust their beliefs accordingly. Once the true value of the record is known, agents are rewarded according to the accuracy of their prediction. Using this information, agents update their models and knowledge, with the aim of improving their performance in future markets. This thesis proposes two trading strategies to be utilised by agents when participating in a market. While the first one is a naive constant strategy, the second one is an adaptive strategy based on the Q-Learning technique [Watkins, 1989]. We evaluate the performance of our model in different situations using real-world and synthetic data sets. Our results suggest that ACPM: i) is either better than or very close to the best performing agents, ii) is resilient to the addition of agents with low performance, iii) outperforms many well-known machine learning models, iv) is resilient to quality drop-out in the best performing agents, v) adapts to changes in the quality of agents' predictions.
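The toy sketch below conveys the flavour of a single ACPM-style market round: agents submit predictions and bets, the market announces a bet-weighted aggregate, and agents are settled against the true value. The reward and belief-update rules are simplified placeholders, not the thesis's constant or Q-learning strategies.

import numpy as np

rng = np.random.default_rng(2)
true_value = 0.7

predictions = rng.normal(true_value, 0.15, size=5)  # one prediction per agent
wealth = np.ones(5)                                 # agents start with equal wealth
bets = 0.1 * wealth                                 # constant-fraction betting

for _ in range(3):  # a few rounds per market
    market_prediction = np.average(predictions, weights=bets)
    # Announce the aggregate; agents nudge their beliefs toward it.
    predictions += 0.2 * (market_prediction - predictions)

# Settle: reward each bet in proportion to prediction accuracy.
accuracy = 1.0 - np.minimum(1.0, np.abs(predictions - true_value))
wealth += bets * (2 * accuracy - 1)
print(f"market prediction: {market_prediction:.3f}, true value: {true_value}")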
APA, Harvard, Vancouver, ISO, and other styles
11

Cai, Xun Ph D. Massachusetts Institute of Technology. "Transforms for prediction residuals based on prediction inaccuracy modeling". Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/109003.

Full text of the source
Abstract:
In a typical transform-based image and video compression system, an image or a video frame is predicted from previously encoded information. The prediction residuals are encoded with transforms. With a proper choice of the transform, a large amount of the residual energy compacts into a small number of transform coefficients. This is known as the energy compaction property. Given the covariance function of the signal, the linear transform with the best energy compaction property is the Karhunen-Loève transform. In this thesis, we develop a new set of transforms for prediction residuals. We observe that the prediction process in practical video compression systems is usually not accurate. By studying the inaccuracy of the prediction process, we can derive new covariance functions for prediction residuals. The estimated covariance function is used to generate the Karhunen-Loève transform for residual encoding. In this thesis, we model the prediction inaccuracy for two types of residuals. Specifically, we estimate the covariance function of the directional intra prediction residuals. We show that the covariance function and the optimal transform for directional intra prediction residuals are related to the one-dimensional gradient of boundary predictors. We estimate the covariance function of the motion-compensated prediction residuals. We show that the covariance function and the optimal transform for motion-compensated prediction residuals are related to the two-dimensional gradient of the displaced reference block. The proposed transforms are evaluated using the energy compaction property and the rate-distortion metric in a practical video coding system. Experimental results indicate that the proposed transforms significantly improve the performance in a typical transform-based compression scenario.
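The core construction, deriving a Karhunen-Loève transform (KLT) from an assumed covariance of the residual, can be sketched in a few lines; the exponential covariance model below is a textbook stand-in, not the gradient-based covariances derived in the thesis.

import numpy as np

N = 8
rho = 0.9
idx = np.arange(N)

# A 1-D covariance model: cov(i, j) = rho**|i - j|.
cov = rho ** np.abs(idx[:, None] - idx[None, :])

# The KLT basis is the set of eigenvectors of the covariance matrix,
# ordered by decreasing eigenvalue (energy compaction).
eigvals, eigvecs = np.linalg.eigh(cov)
klt = eigvecs[:, np.argsort(eigvals)[::-1]].T

residual = np.random.default_rng(0).multivariate_normal(np.zeros(N), cov)
coeffs = klt @ residual
print("fraction of energy in first 3 coefficients:",
      np.sum(coeffs[:3] ** 2) / np.sum(coeffs ** 2))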
APA, Harvard, Vancouver, ISO, and other styles
12

Shrestha, Rakshya. "Deep soil mixing and predictive neural network models for strength prediction". Thesis, University of Cambridge, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.607735.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
13

Lönnbark, Carl. "On Risk Prediction". Doctoral thesis, Umeå universitet, Nationalekonomi, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-22200.

Full text of the source
Abstract:
This thesis comprises four papers concerning risk prediction. Paper [I] suggests a nonlinear and multivariate time series model framework that enables the study of simultaneity in returns and in volatilities, as well as asymmetric effects arising from shocks. Using daily data for 2000-2006 for the Baltic state stock exchanges and that of Moscow, we find recursive structures, with Riga directly depending in returns on Tallinn and Vilnius, and Tallinn on Vilnius. For volatilities, both Riga and Vilnius depend on Tallinn. In addition, we find evidence of asymmetric effects of shocks arising in Moscow and in the Baltic states on both returns and volatilities. Paper [II] argues that the estimation error in Value at Risk predictors gives rise to underestimation of portfolio risk. A simple correction is proposed, and in an empirical illustration it is found to be economically relevant. Paper [III] studies some approximation approaches to computing the Value at Risk and the Expected Shortfall for multiple-period asset returns. Based on the results of a simulation experiment, we conclude that among the approaches studied, the one based on assuming a skewed t distribution for the multiple-period returns and the one based on simulations were the best. We also found that the uncertainty due to the estimation error can be quite accurately estimated employing the delta method. In an empirical illustration we computed five-day Value at Risk estimates for the S&P 500 index. The approaches performed about equally well. Paper [IV] argues that the valuation practice used for the portfolio is important for the calculation of the Value at Risk. In particular, when liquidating a large portfolio the seller may not face horizontal demand curves. We propose a partially new approach for incorporating this fact in the Value at Risk, and in an empirical illustration we compare it to a competing approach. We find substantial differences.
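For readers unfamiliar with the object of Papers [II]-[IV], here is a minimal sketch of one-day Value at Risk estimation on a toy return series; the Gaussian fit and the simulated heavy-tailed returns are illustrative only.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
returns = rng.standard_t(df=5, size=1000) * 0.01  # heavy-tailed daily returns

alpha = 0.01  # 99% VaR

# Parametric (Gaussian) plug-in estimate from the fitted mean and std.
var_gauss = -(returns.mean() + returns.std(ddof=1) * norm.ppf(alpha))

# Non-parametric (historical) estimate: empirical quantile.
var_hist = -np.quantile(returns, alpha)

print(f"99% VaR: Gaussian {var_gauss:.4f}, historical {var_hist:.4f}")
# Paper [II]'s point: such plug-in estimates ignore the estimation error in
# the fitted parameters, which on average understates the true risk.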
APA, Harvard, Vancouver, ISO, and other styles
14

Chan, Pee Yuaw. "Software reliability prediction". Thesis, City, University of London, 1986. http://openaccess.city.ac.uk/18127/.

Full text of the source
Abstract:
Two methods are proposed to find the maximum likelihood parameter estimates of a number of software reliability models. On the basis of the results from analysing 7 sets of real data, these methods are found to be both efficient and reliable. The simple approach of adapting software reliability predictions by Keiller and Littlewood (1984) can produce improved predictions, but at the same time, introduces a lot of internal noise into the adapted predictions. This is due to the fact that the adaptor is a joined-up function. An alternative adaptive procedure, which involves the parametric spline adaptor, can produce at least as good adapted predictions without the predictions being contaminated by internal noise as in the simple approach. Miller and Sofer (1986a) proposed a method for estimating the failure rate of a program non-parametrically. Here, these non-parametric rates are used to produce reliability predictions and their quality is analysed and compared with the parametric predictions.
APA, Harvard, Vancouver, ISO, and other styles
15

Samee, Farman. "Options with Prediction". Thesis, University of Manchester, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.516360.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
16

Cai, Changqing. "Personal preference prediction". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ61879.pdf.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
17

Tardieu, Giliane. "Thermal conductivity prediction". Thesis, Georgia Institute of Technology, 1987. http://hdl.handle.net/1853/10014.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
18

Cuff, James Andrew. "Protein structure prediction". Thesis, University of Oxford, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.365685.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
19

Care, Matthew Anthony. "Deleterious SNP prediction". Thesis, University of Leeds, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.496547.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
20

Anderson, O. E. "Grammatical error prediction". Thesis, University of Cambridge, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.595506.

Full text of the source
Abstract:
In this thesis, we investigate methods for automatic detection, and to some extent correction, of grammatical errors. The evaluation is based on manual error annotation in the Cambridge Learner Corpus (CLC), and automatic or semi-automatic annotation of error corpora is one possible application, but the methods are also applicable in other settings, for instance to give learners feedback on their writing or in a proofreading tool used to prepare texts for publication. Apart from the CLC, we use the British National Corpus (BNC) to get a better model of correct usage, WordNet for semantic relations, other machine-readable dictionaries for orthography/morphology, and the Robust Accurate Statistical Parsing (RASP) system to parse both the CLC and the BNC and thereby identify syntactic relations within the sentence. Different techniques are investigated, including: sentence-level binary classification based on machine-learning over n-grams of words, n-grams of part-of-speech tags and grammatical relations; automatic identification of features which are highly indicative of individual errors; and development of classifiers aimed more specifically at given error types, for instance concord errors based on syntactic structure and collocation errors based on co-occurrence statistics from BNC, using clustering to deal with data sparseness. We show that such techniques, when applied, can detect, and sometimes even correct, at least certain error types as well as or better than human annotators. We finally present an annotation experiment in which a human annotator corrects and supplements the automatic annotation, which confirms the high detection/correction accuracy of our system and furthermore shows that such a hybrid set-up gives higher-quality annotation with considerably less time and effort expended compared to fully manual annotation.
APA, Harvard, Vancouver, ISO, and other styles
21

Wright, David R. "Software reliability prediction". Thesis, City University London, 2001. http://openaccess.city.ac.uk/8387/.

Full text of the source
Abstract:
This thesis presents some extensions to existing methods of software reliability estimation and prediction. Firstly, we examine a technique called 'recalibration' by means of which many existing software reliability prediction algorithms assess past predictive performance in order to improve the accuracy of current reliability predictions. This existing technique for forecasting future failure times of software is already quite general. Indeed, whenever your predictions are produced in the form of time-to-failure distributions, successively as more actual failure times are observed, you can apply recalibration irrespective both of which probabilistic software reliability model and of which statistical inference technique you are using. In the current work we further generalise the recalibration method to those situations where empirical failure data take the form of failure-counts rather than precise inter-failure times. We then briefly explore how the reasoning we have used, in this extension of recalibration to the prediction of failure-count sequences, might further extend to recalibration of other representations of predicted reliability. Secondly, the thesis contains a theoretical discussion of some modelling possibilities for improving software reliability predictions by the incorporation of disparate sources of data. There are well-established techniques for forecasting the reliability of a particular software product using as data only the past failure behaviour of that software under statistically representative operational testing. However, there may sometimes be reasons for seeking improved predictive accuracy by using data of other kinds too, rather than relying on this single source of empirical evidence. Notable among these is the economic impracticability, in many cases, of obtaining sufficient, representative software failure vs. time data (from execution of the particular product in question) to determine, by inference applied to software reliability growth models, whether or not a high reliability requirement has been achieved in a particular case, prior to extensive operational use of the software in question. For example, this problem arises in particular for safety-critical systems, whose required reliability is often extremely high. An accurate reliability assessment is often required in advance of a decision whether to release the software for actual use in the field. Another argument for attempting to determine other usable data sources for software reliability prediction is the value that would attach to rigorous empirical confirmation or refutation of any of the many existing theories and claims about what are the factors of software reliability, and how these factors may interact, in some given context. In those cases, such as some safety-critical systems, in which assessment of a high reliability level is required at an early stage, the necessary assessment is in practice often currently carried out rather informally, and often does claim to take account of many different types of evidence: experience of previous, similar systems; evidence of the efficacy of the development process; expert judgement, etc., to supplement the limited available data on past failure vs. time behaviour which emanates from testing of the software within a realistic usage environment. Ideally, we would like this assessment to allow all such evidence to be combined into a final numerical measure of reliability in a scientifically more rigorous way.
To address these problems, we first examine some candidate general statistical regression models used in other fields such as medicine and insurance and discuss how these might be applied to prediction of software reliability. We have here termed these models explanatory variables regression models. The goal here would be to investigate statistically how to explain differences in software failure behaviour in terms of differences in other measured characteristics of a number of different statistical 'individuals', or 'experimental units'. We discuss the interpretation, within the software reliability context, of this statistical concept of an 'individual', with our favoured interpretation being such that a single statistical reliability regression model would be used to model simultaneously a family of parallel series of inter-failure times emanating from measurably different software products or from measurably different installations of a single software product. In statistical regression terms here, each one of these distinct failure vs. time histories would be the 'response variable' corresponding to one of these 'individuals'. The other measurable differences between these individuals would be captured in the model as explanatory variable values which would differ from one individual to another. Following this discussion, we then leave general regression models to examine a slightly different theoretical approach (to essentially the same question of how to incorporate diverse data within our predictions) through an examination of models for 'unexplained' differences between individuals' failure behaviours. Here, rather than assuming the availability of putative 'explanatory variables' to distinguish our statistical individuals and 'explain' the way that their reliabilities differ, we instead use randomness alone to model their differences in reliability. We have termed the class of models produced by this approach 'similar products models', meaning models in which we regard the individuals' different likely failure vs. time behaviours as initially (i.e., a priori) indistinguishable to us. Either we cannot (or we choose not to attempt with a formal model to) explain the differences between individuals' reliabilities in terms of other metrics applied to our individuals, but we do still expect that the 'similar products'' (i.e., the individuals') reliabilities will be different from each other: we postulate the existence of a single probability distribution from which we may assume our individuals' true, unknown reliabilities to have all been drawn independently in a random fashion. We present some mathematical consequences, showing how, within such a modelling framework, prior belief about the distribution of reliabilities assumes great importance for model consequences. We also present some illustrative numerical results that seem to suggest that experience from previous products or environments, so represented within the model (even where very high operational dependability has been achieved in such previous cases), can only modestly improve our confidence in the reliability of a new product, or of an existing product when transferred to a new environment.
APA, Harvard, Vancouver, ISO, and other styles
22

Yates, Amanda Marie. "Prediction of sepsis". Thesis, University of the West of England, Bristol, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.429692.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
23

Steel, Donald. "Software reliability prediction". Thesis, Abertay University, 1990. https://rke.abertay.ac.uk/en/studentTheses/4613ff72-9650-4fa1-95d1-1a9b7b772ee4.

Full text of the source
Abstract:
The aim of the work described in this thesis was to improve NCR's decision making process for progressing software products through the development cycle. The first chapter briefly describes the software development process at NCR, detailing documentation review and software testing techniques. The objectives and reasons for investigating software reliability models as a tool in the decision making process are outlined. There follows a short review of software reliability models, with the Littlewood and Verrall Bayesian model considered in detail. The difficulties in using this model to obtain estimates for model parameters and time to next failure are described. These estimation difficulties exist using the model on good datasets, in this case simulated failure data, and the difficulties are compounded when used with real failure data. The problems of collecting and recording failure data are outlined, highlighting the inadequacies of these collected data, and real failure data are analysed. Software reliability models are used in an attempt to quantify the reliability of real software products. The thesis concludes by summarising the problems encountered when using reliability models to measure software products and suggests future research into metrics that are required in this area of software engineering.
APA, Harvard, Vancouver, ISO, and other styles
24

Luo, Meng. "Frost Depth Prediction". Thesis, North Dakota State University, 2014. https://hdl.handle.net/10365/27488.

Full text of the source
Abstract:
The purpose of this research project is to develop a model that is able to accurately predict frost depth on a particular date, using available information. Frost depth prediction is useful in many applications in several domains. For example, in agriculture, knowing frost depth early is crucial for farmers to determine when and how deep they should plant. In this study, data is collected primarily from the NDAWN (North Dakota Agricultural Weather Network) Fargo station for historical soil temperature at various depths and weather information. Lasso regression is used to model the frost depth. Since soil temperature is clearly seasonal, meaning there should be an obvious correlation between temperature and different days, our model can handle residual correlations that are generated not only in the time domain but also in the space domain, since temperatures at different depth levels should also be correlated. Furthermore, root mean square error (RMSE) is used to evaluate the goodness-of-fit of the model.
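A minimal sketch of the modelling step, Lasso regression scored by RMSE, is given below; the synthetic features merely stand in for the NDAWN weather variables.

import numpy as np
from sklearn.linear_model import Lasso
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
X = rng.normal(size=(400, 5))  # e.g. air temperature, snow depth, seasonal terms
frost_depth = X @ np.array([8.0, -3.0, 0.0, 0.0, 1.5]) + rng.normal(0, 2.0, 400)

X_tr, X_te, y_tr, y_te = train_test_split(X, frost_depth, random_state=0)
model = Lasso(alpha=0.1).fit(X_tr, y_tr)
rmse = np.sqrt(mean_squared_error(y_te, model.predict(X_te)))
# Lasso shrinks uninformative coefficients toward exactly zero.
print("coefficients:", np.round(model.coef_, 2), "RMSE:", round(rmse, 2))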
APA, Harvard, Vancouver, ISO, and other styles
25

Aghi, Nawar, and Ahmad Abdulal. "House Price Prediction". Thesis, Högskolan Kristianstad, Fakulteten för naturvetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:hkr:diva-20945.

Full text of the source
Abstract:
This study proposes a performance comparison between machine learning regression algorithms and an Artificial Neural Network (ANN). The regression algorithms used in this study are multiple linear regression, Least Absolute Selection Operator (Lasso), Ridge, and Random Forest. Moreover, this study attempts to analyse the correlation between variables to determine the most important factors that affect house prices in Malmö, Sweden. Two datasets are used in this study, called public and local. They contain house prices from Ames, Iowa, United States and Malmö, Sweden, respectively. The accuracy of the prediction is evaluated by checking the root square and root mean square error scores of the training model. The test is performed after applying the required pre-processing methods and splitting the data into two parts, where one part is used in the training phase and the other in the test phase. We have also presented a binning strategy that improved the accuracy of the models. This thesis attempts to show that Lasso gives the best score among the other algorithms when using the public dataset in training. The correlation graphs show the variables' level of dependency. In addition, the empirical results show that crime, deposit, lending, and repo rates influence house prices negatively, whereas inflation, year, and unemployment rate impact house prices positively.
APA, Harvard, Vancouver, ISO, and other styles
26

Vlasák, Pavel. "Exhange Rates Prediction". Master's thesis, Vysoká škola ekonomická v Praze, 2009. http://www.nusl.cz/ntk/nusl-76388.

Full text of the source
Abstract:
The aim of this thesis is to examine the dependence of the exchange rate movement on the core fundamentals of the economy in the long term, as well as to test the validity of selected indicators of technical analysis in the short term. The dependence of the exchange rate is examined using correlation, and the discussed fundamentals are the main macroeconomic indicators, such as GDP, short-term interest rates and the M2 money base. In the part which deals with technical analysis, I test two groups of indicators, namely trend indicators and oscillators. From the first group these are the simple moving average (SMA), the exponential moving average (EMA), the weighted moving average (WMA), the triangular moving average (TMA) and the MACD. From the group of oscillators I test the relative strength index (RSI). All these indicators are first described in the theoretical part of this thesis. The thesis is divided into two parts - theoretical and practical. The theoretical part includes two chapters which deal with the analysis of the Forex market: the first chapter deals with fundamental analysis, the second with technical analysis. In the third chapter I discuss both methods in practice, with emphasis on technical analysis.
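Three of the indicators named above (SMA, EMA, RSI) follow standard textbook definitions and can be computed in a few lines of pandas; the exchange-rate series below is synthetic.

import numpy as np
import pandas as pd

rng = np.random.default_rng(5)
rate = pd.Series(1.2 + np.cumsum(rng.normal(0, 0.002, 300)))

sma = rate.rolling(window=20).mean()          # simple moving average
ema = rate.ewm(span=20, adjust=False).mean()  # exponential moving average

# Relative Strength Index over a 14-period window (simple-average variant).
delta = rate.diff()
gain = delta.clip(lower=0).rolling(14).mean()
loss = (-delta.clip(upper=0)).rolling(14).mean()
rsi = 100 - 100 / (1 + gain / loss)

print(pd.DataFrame({"rate": rate, "SMA20": sma, "EMA20": ema, "RSI14": rsi}).tail())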
APA, Harvard, Vancouver, ISO, and other styles
27

Lönnbark, Carl. "On risk prediction /". Umeå : Institutionen för nationalekonomi, Umeå universitet, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-22200.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
28

Ge, Esther. "The query based learning system for lifetime prediction of metallic components". Thesis, Queensland University of Technology, 2008. https://eprints.qut.edu.au/18345/4/Esther_Ting_Ge_Thesis.pdf.

Full text of the source
Abstract:
This research project was a step forward in developing an efficient data mining method for estimating the service life of metallic components in Queensland school buildings. The developed method links together the different data sources of service life information and builds the model for a real situation when the users have information on limited inputs only. A practical lifetime prediction system was developed for the industry partners of this project including Queensland Department of Public Works and Queensland Department of Main Roads. The system provides high accuracy in practice where not all inputs are available for querying to the system.
APA, Harvard, Vancouver, ISO, and other styles
29

Ge, Esther. "The query based learning system for lifetime prediction of metallic components". Queensland University of Technology, 2008. http://eprints.qut.edu.au/18345/.

Full text of the source
Abstract:
This research project was a step forward in developing an efficient data mining method for estimating the service life of metallic components in Queensland school buildings. The developed method links together the different data sources of service life information and builds the model for a real situation when the users have information on limited inputs only. A practical lifetime prediction system was developed for the industry partners of this project including Queensland Department of Public Works and Queensland Department of Main Roads. The system provides high accuracy in practice where not all inputs are available for querying to the system.
APA, Harvard, Vancouver, ISO, and other styles
30

Oleksandra, Shovkun. "Some methods for reducing the total consumption and production prediction errors of electricity: Adaptive Linear Regression of Original Predictions and Modeling of Prediction Errors". Thesis, Linnéuniversitetet, Institutionen för matematik (MA), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-34398.

Full text of the source
Abstract:
Balance between energy consumption and production of electricity is very important for electric power system operation and planning. It provides a good principle of effective operation, reduces the generation cost in a power system and saves money. Two novel approaches to reduce the total errors between forecast and real electricity consumption were proposed. An Adaptive Linear Regression of Original Predictions (ALROP) was constructed to modify the existing predictions by using simple linear regression with estimation by the Ordinary Least Squares (OLS) method. The Weighted Least Squares (WLS) method was also used as an alternative to OLS. The Modeling of Prediction Errors (MPE) was constructed in order to predict errors for the existing predictions by using the Autoregression (AR) and the Autoregressive Moving Average (ARMA) models. For the first approach it is observed that the last reported value is of main importance. An attempt was made to improve the performance and to get better parameter estimates. The separation of concerns and the combination of concerns were suggested in order to extend the constructed approaches and raise their efficacy. Both methods were tested on data for the fourth region of Sweden ("elområde 4") provided by Bixia. The obtained results indicate that all suggested approaches reduce the total percentage errors of consumption prediction approximately by one half. Results indicate that the use of the ARMA model reduces the total errors slightly better than the other suggested approaches. The most effective way to reduce the total consumption prediction errors seems to be to reduce the total errors for each subregion.
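The ALROP idea, fitting a simple OLS line that maps the original predictions onto the observed consumption and then using it to adjust subsequent predictions, can be sketched as follows on synthetic stand-in data (the fit below uses the whole series for brevity).

import numpy as np

rng = np.random.default_rng(6)
actual = 1000 + 50 * np.sin(np.linspace(0, 20, 200)) + rng.normal(0, 10, 200)
original_pred = 0.9 * actual + 40 + rng.normal(0, 15, 200)  # biased forecasts

# OLS: actual ~ a + b * original_pred.
A = np.column_stack([np.ones_like(original_pred), original_pred])
(a, b), *_ = np.linalg.lstsq(A, actual, rcond=None)
adjusted_pred = a + b * original_pred

def total_pct_error(pred, obs):
    return 100 * np.sum(np.abs(pred - obs)) / np.sum(obs)

print(f"original: {total_pct_error(original_pred, actual):.2f}% total error")
print(f"adjusted: {total_pct_error(adjusted_pred, actual):.2f}% total error")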
APA, Harvard, Vancouver, ISO, and other styles
31

Pesquita, Ana. "The social is predictive : human sensitivity to attention control in action prediction". Thesis, University of British Columbia, 2016. http://hdl.handle.net/2429/59076.

Full text of the source
Abstract:
Observing others is predicting others. Humans have a natural tendency to make predictions about other people’s future behavior. This predisposition sits at the basis of social cognition: others become accessible to us because we are able to simulate their internal states, and in this way make predictions about their future behavior (Blakemore & Decety, 2001). In this thesis, I examine prediction in the social realm through three main contributions. The first contribution is of a theoretical nature, the second is methodological, and the third contribution is empirical. On the theoretical plane, I present a new framework for cooperative social interactions – the predictive joint-action model, which extends previous models of social interaction (Wolpert, Doya, & Kawato, 2003) to include the higher level goals of joint action and planning (Vesper, Butterfill, Knoblich, & Sebanz, 2010). Action prediction is central to joint-action. A recent theory proposes that social awareness to someone else’s attentional states underlies our ability to predict their future actions (Graziano, 2013). In the methodological realm, I developed a procedure for investigating the role of sensitivity to other’s attention control states in action prediction. This method offers a way to test the hypothesis that humans are sensitive to whether someone’s spatial attention was endogenously controlled (as in the case of choosing to attend towards a particular event) or exogenously controlled (as in the case of attention being prompted by an external event), independent of their sensitivity to the spatial location of that person’s attentional focus. On the empirical front, I present new evidence supporting the hypothesis that social cognition involves the predictive modeling of other’s attentional states. In particular, a series of experiments showed that observers are sensitive to someone else’s attention control and that this sensitivity occurs through an implicit kinematic process linked to social aptitude. In conclusion, I bring these contributions together. I do this by offering an interpretation of the empirical findings through the lens of the theoretical framework, by discussing several limitations of the present work, and by pointing to several questions that emerge from the new findings, thereby outlining avenues for future research on social cognition.
APA, Harvard, Vancouver, ISO, and other styles
32

Horton, Sara Jane. "Refining the prediction of childhood diabetes using insulin autoantibodies : disease predictive idiotypes". Thesis, University of Exeter, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.418545.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
33

Yang, Ziyi. "Monitoring and predicting railway subsidence using InSAR and time series prediction techniques". Thesis, University of Birmingham, 2015. http://etheses.bham.ac.uk//id/eprint/6377/.

Full text of the source
Abstract:
Improvements in railway capabilities have resulted in heavier axle loads and higher speed operations, which increase the dynamic loads on the track. As a result, railway subsidence has become a threat to good railway performance and safe railway operation. The author of this thesis provides an approach for railway performance assessment through the monitoring and prediction of railway subsidence. The InSAR technique, which is able to monitor railway subsidence over a large area and long time period, was selected for railway subsidence monitoring. Future trends of railway subsidence should also be predicted using subsidence prediction models based on the time series deformation records obtained by InSAR. Three time series prediction models, which are the ARMA model, a neural network model and the grey model, are adopted in this thesis. Two case studies which monitor and predict the subsidence of the HS1 route were carried out to assess the performance of HS1. The case studies demonstrate that except for some areas with potential subsidence, no large scale subsidence has occurred on HS1 and the line is still stable after its 10 years' operation. In addition, the neural network model has the best performance in predicting the subsidence of HS1.
APA, Harvard, Vancouver, ISO, and other styles
34

Bangalore, Narendranath Rao Amith Kaushal. "Online Message Delay Prediction for Model Predictive Control over Controller Area Network". Thesis, Virginia Tech, 2017. http://hdl.handle.net/10919/78626.

Full text of the source
Abstract:
Today's Cyber-Physical Systems (CPS) are typically distributed over several computing nodes communicating by way of shared buses such as Controller Area Network (CAN). Their control performance is degraded by the variable delays (jitter) incurred by messages on the shared CAN bus due to contention and network overhead. This work presents a novel online delay prediction approach that predicts the message delay at runtime based on real-time traffic information on CAN. It leverages the proposed method to improve control quality by compensating for the message delay in the design of a Model Predictive Control (MPC) controller. By simulating an automotive cruise control system and a DC motor plant in a CAN environment, it demonstrates that the delay prediction is accurate and that the MPC design which takes the message delay into consideration performs considerably better. It also implements the proposed method on an 8-bit 16 MHz ATmega328P microcontroller and measures the execution time overhead. The results clearly indicate that the method is computationally feasible for online usage.
APA, Harvard, Vancouver, ISO, and other styles
35

Baker, Kristen. "Examining how attention and prediction modulate visual perception: A predictive coding view". Thesis, Queensland University of Technology, 2022. https://eprints.qut.edu.au/235895/1/Kristen%2BBaker%2BThesis%282%29.pdf.

Full text of the source
Abstract:
This thesis investigated the relationship between prediction and attention in visual perception by recording electrophysiological brain responses. Visual paradigms were implemented using various manipulations of stimuli (shapes, neutral faces, and emotional faces), types of attention (spatial, featural, and emotion-guided), and prior precision of spatial location (low and high). This thesis found that during the early stages of visual processing prediction error signalling consistently occurs, with information then diverging into the associated brain regions for further processing for information updating. This thesis demonstrates that prediction and attention both interact and dissociate in the brain in distinct stages during visual perception.
APA, Harvard, Vancouver, ISO, and other styles
36

Chen, Yutao. "Algorithms and Applications for Nonlinear Model Predictive Control with Long Prediction Horizon". Doctoral thesis, Università degli studi di Padova, 2018. http://hdl.handle.net/11577/3421957.

Full text of the source
Abstract:
Fast implementations of NMPC are important when addressing real-time control of systems exhibiting features like fast dynamics, large dimension, and long prediction horizon, as in such situations the computational burden of the NMPC may limit the achievable control bandwidth. For that purpose, this thesis addresses both algorithms and applications. First, fast NMPC algorithms for controlling continuous-time dynamic systems using a long prediction horizon have been developed. A bridge between linear and nonlinear MPC is built using partial linearizations or sensitivity updates. In order to update the sensitivities only when necessary, a curvature-like measure of nonlinearity (CMoN) for dynamic systems has been introduced and applied to existing NMPC algorithms. Based on CMoN, intuitive and advanced updating logics have been developed for different numerical and control performance requirements. Thus, the CMoN, together with the updating logic, formulates a partial sensitivity updating scheme for fast NMPC, named CMoN-RTI. Simulation examples are used to demonstrate the effectiveness and efficiency of CMoN-RTI. In addition, a rigorous analysis of the optimality and local convergence of CMoN-RTI is given and illustrated using numerical examples. Partial condensing algorithms have been developed for use with the proposed partial sensitivity update scheme. The computational complexity has been reduced since part of the condensing information is exploited from previous sampling instants. A sensitivity updating logic together with partial condensing is proposed with a complexity linear in the prediction length, leading to a speed-up by a factor of ten. Partial matrix factorization algorithms are also proposed to exploit the partial sensitivity update. By applying splitting methods to multi-stage problems, only part of the resulting KKT system needs to be updated, which is computationally dominant in on-line optimization. Significant improvement has been demonstrated by counting floating point operations (flops). Second, efficient implementations of NMPC have been achieved by developing a Matlab-based package named MATMPC. MATMPC has two working modes: one relies completely on Matlab, while the other employs the MATLAB C language API. The advantages of MATMPC are that algorithms are easy to develop and debug thanks to Matlab, and that libraries and toolboxes from Matlab can be used directly. When working in the second mode, the computational efficiency of MATMPC is comparable with that of software using optimized code generation. Real-time implementations are achieved for a nine-degree-of-freedom dynamic driving simulator and for multi-sensory motion cueing with an active seat.
Style APA, Harvard, Vancouver, ISO itp.
37

Dahlgren Lindström, Adam. "Structured Prediction using Voted Conditional Random Fields : Link Prediction in Knowledge Bases". Thesis, Umeå universitet, Institutionen för datavetenskap, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-140692.

Pełny tekst źródła
Streszczenie:
Knowledge bases are useful in the validation of automatically extracted information, and for hypothesis selection during the extraction process. Building knowledge bases is a difficult task and the process is bound to miss facts. Therefore, the existence of facts can be estimated using link prediction, i.e., by solving a structured prediction problem. It has been shown that combining directly observable features with latent features increases performance. Observable features include, e.g., the presence of another chain of facts leading to the same end point. Latent features include, e.g., properties that are not modelled by facts of the form subject-predicate-object, such as being a good actor. Observable graph features are modelled using the Path Ranking Algorithm, and latent features using the bilinear RESCAL model. Voted Conditional Random Fields can be used to combine feature families while taking into account their complexity, to minimize the risk of training a poor predictor. We propose a combined model fusing these theories together, with a complexity analysis of the feature families used. In addition, two simple feature families are constructed to model neighborhood properties. The model we propose captures useful features for link prediction, but needs further evaluation to guarantee efficient learning. Finally, suggestions for experiments and other feature families are given.
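The bilinear RESCAL scoring that supplies the latent features is compact enough to show directly; the sketch below also hints at how an observable, PRA-style path feature could be mixed in. The embeddings, weights, and the `path_feature` value are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
n_entities, n_relations, rank = 5, 2, 4

# RESCAL factors: one embedding per entity, one mixing matrix per relation
E = rng.normal(size=(n_entities, rank))          # entity embeddings
R = rng.normal(size=(n_relations, rank, rank))   # relation matrices

def rescal_score(s, p, o):
    """Bilinear RESCAL score for a triple (s, p, o): e_s^T R_p e_o."""
    return E[s] @ R[p] @ E[o]

def combined_score(s, p, o, path_feature, w_latent=1.0, w_path=1.0):
    """Latent score plus one observable path feature under assumed weights."""
    return w_latent * rescal_score(s, p, o) + w_path * path_feature

print(combined_score(0, 1, 2, path_feature=0.3))
```

In a trained model the factors and weights would be learned jointly; here they are random, purely to show the scoring structure.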
Style APA, Harvard, Vancouver, ISO itp.
38

Iqbal, Ammar, Rakesh Tanange i Shafqat Virk. "Vehicle fault prediction analysis : a health prediction tool for heavy vehicles /". Göteborg : IT-universitetet, Chalmers tekniska högskola och Göteborgs universitet, 2006. http://www.ituniv.se/w/index.php?option=com_itu_thesis&Itemid=319.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
39

GAO, HONGLIANG. "IMPROVING BRANCH PREDICTION ACCURACY VIA EFFECTIVE SOURCE INFORMATION AND PREDICTION ALGORITHMS". Doctoral diss., University of Central Florida, 2008. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3286.

Pełny tekst źródła
Streszczenie:
Modern superscalar processors rely on branch predictors to sustain a high instruction fetch throughput. Given the trend of deep pipelines and large instruction windows, a branch misprediction incurs a large performance penalty and results in a significant amount of energy wasted on instructions along wrong paths. Given their critical role in high performance processors, branch predictors have been the subject of extensive research aimed at improving prediction accuracy. Conceptually, a dynamic branch prediction scheme includes three major components: a source, an information processor, and a predictor. Traditional work mainly focuses on the algorithm for the predictor. In this dissertation, besides novel prediction algorithms, we investigate the other components and develop untraditional ways to improve prediction accuracy. First, we propose an adaptive information processing method to dynamically extract the most effective inputs, maximizing the correlation to be exploited by the predictor. Second, we propose a new prediction algorithm, which improves on the Prediction by Partial Matching (PPM) algorithm by selectively combining multiple partial matches. The PPM algorithm was previously considered optimal and has been used to derive the upper limit of branch prediction accuracy. Our proposed algorithm achieves higher prediction accuracy than PPM and can be implemented within a realistic hardware budget. Third, we discover a new locality existing between the addresses of producer loads and the outcomes of their consumer branches. We study this address-branch correlation in detail and propose a branch predictor that exploits it for long-latency and hard-to-predict branches, which existing branch predictors fail to predict accurately.
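For readers unfamiliar with PPM, the following toy Python predictor illustrates the longest-match-first idea: per-branch taken/not-taken counts are kept for several global-history lengths, and prediction falls back to shorter contexts when the longest one has not been seen. This is an illustrative baseline, not the dissertation's enhanced variant that combines multiple partial matches.

```python
class PPMBranchPredictor:
    """Toy Prediction-by-Partial-Matching branch predictor over global history."""

    def __init__(self, max_history=8):
        self.max_history = max_history
        # tables[L] maps (pc, last L outcomes) -> [not_taken_count, taken_count]
        self.tables = [dict() for _ in range(max_history + 1)]
        self.history = []

    def _key(self, pc, length):
        return (pc, tuple(self.history[-length:]) if length else ())

    def predict(self, pc):
        # longest match first; fall back to shorter contexts on misses or ties
        for length in range(min(self.max_history, len(self.history)), -1, -1):
            counts = self.tables[length].get(self._key(pc, length))
            if counts and counts[0] != counts[1]:
                return counts[1] > counts[0]
        return True  # default: predict taken

    def update(self, pc, taken):
        for length in range(min(self.max_history, len(self.history)) + 1):
            counts = self.tables[length].setdefault(self._key(pc, length), [0, 0])
            counts[int(taken)] += 1
        self.history = (self.history + [int(taken)])[-self.max_history:]

# a loop branch: taken nine times, then falls through
p = PPMBranchPredictor()
for outcome in [True] * 9 + [False]:
    print(p.predict(0x40), outcome)
    p.update(0x40, outcome)
```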
Ph.D.
School of Electrical Engineering and Computer Science
Engineering and Computer Science
Computer Science PhD
Style APA, Harvard, Vancouver, ISO itp.
40

Löfström, Tuwe. "On Effectively Creating Ensembles of Classifiers : Studies on Creation Strategies, Diversity and Predicting with Confidence". Doctoral thesis, Stockholms universitet, Institutionen för data- och systemvetenskap, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-116683.

Pełny tekst źródła
Streszczenie:
An ensemble is a composite model, combining the predictions from several other models. Ensembles are known to be more accurate than single models. Diversity has been identified as an important factor in explaining the success of ensembles. In the context of classification, diversity has not been well defined, and several heuristic diversity measures have been proposed. The focus of this thesis is on how to create effective ensembles in the context of classification. Even though several effective ensemble algorithms have been proposed, there are still open questions regarding the role diversity plays when creating an effective ensemble. The open questions addressed regarding the creation of effective ensembles include: what to optimize when searching for a sub-ensemble, built from a subset of the original ensemble's models, that is more effective than the original ensemble; how effective such a search is; and how the neural networks used in an ensemble should be trained for the ensemble to be effective. The contributions of the thesis include several studies evaluating different ways to optimize which sub-ensemble would be most effective, including a novel approach using combinations of performance and diversity measures. The initial studies eventually led to an investigation of the underlying assumption motivating the search for more effective sub-ensembles. The evaluation concluded that even if several more effective sub-ensembles exist, it may not be possible to identify which sub-ensembles would be the most effective using any of the evaluated optimization measures. An investigation of the most effective ways to train neural networks to be used in ensembles was also performed. The conclusions are that effective ensembles can be obtained by training neural networks in a number of different ways, and that either high average individual accuracy or high diversity will generate effective ensembles. Several findings regarding diversity and effective ensembles presented in the literature in recent years are also discussed and related to the results of the included studies. When creating confidence-based predictors using conformal prediction, there are several open questions regarding how data should be utilized effectively when using ensembles. The open questions addressed related to predicting with confidence include: how data can be utilized effectively to achieve more efficient confidence-based predictions using ensembles; and how problems with class imbalance affect the confidence-based predictions when using conformal prediction. Contributions include two studies: the first shows that the use of out-of-bag estimates with bagging ensembles results in more effective conformal predictors; the second shows that a conformal predictor conditioned on the class labels, which avoids a strong bias towards the majority class, is more effective on problems with class imbalance. The research method used is mainly inspired by the design science paradigm, which is manifested by the development and evaluation of artifacts.
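A minimal sketch of the out-of-bag calibration idea from the first conformal study, using scikit-learn's random forest (our choice of library); the nonconformity measure shown is one common option, not necessarily the thesis's exact setup:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=400, random_state=0)
X_train, y_train, X_test = X[:300], y[:300], X[300:]

rf = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=0)
rf.fit(X_train, y_train)

# nonconformity: 1 - out-of-bag probability assigned to the true class,
# so no separate calibration set has to be held out
oob_prob = rf.oob_decision_function_
alphas = 1.0 - oob_prob[np.arange(len(y_train)), y_train]

def prediction_set(x, epsilon=0.1):
    """All labels whose conformal p-value exceeds the significance level."""
    probs = rf.predict_proba(x.reshape(1, -1))[0]
    labels = []
    for label, p in enumerate(probs):
        alpha_test = 1.0 - p
        p_value = (np.sum(alphas >= alpha_test) + 1) / (len(alphas) + 1)
        if p_value > epsilon:
            labels.append(label)
    return labels

print(prediction_set(X_test[0]))
```

The class-conditional variant from the second study would compute the p-value for each candidate label using only the calibration scores of that label's class.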

At the time of the doctoral defense, the following paper was unpublished and had a status as follows: Paper 8: In press.


Dataanalys för detektion av läkemedelseffekter (DADEL)
Style APA, Harvard, Vancouver, ISO itp.
41

Zhu, Zheng. "A Unified Exposure Prediction Approach for Multivariate Spatial Data: From Predictions to Health Analysis". University of Cincinnati / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ucin155437434818942.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
42

Darwiche, Aiman A. "Machine Learning Methods for Septic Shock Prediction". Diss., NSUWorks, 2018. https://nsuworks.nova.edu/gscis_etd/1051.

Pełny tekst źródła
Streszczenie:
Sepsis is a life-threatening condition of organ dysfunction caused by a dysregulated body response to infection. Sepsis is difficult to detect at an early stage, and when not detected early it is difficult to treat and results in high mortality rates. Developing improved methods for identifying patients at high risk of suffering septic shock has been the focus of much research in recent years. Building on this body of literature, this dissertation develops an improved method for septic shock prediction. Using data from the MIMIC-III database, an ensemble classifier is trained to identify high-risk patients. A robust prediction model is built by obtaining a risk score from fitting the Cox Hazard model on multiple input features. The score is added to the list of features and the Random Forest ensemble classifier is trained to produce the model. The proposed Cox Enhanced Random Forest (CERF) method is evaluated by comparing its predictive accuracy to those of extant methods.
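A minimal sketch of the CERF construction, assuming the `lifelines` and scikit-learn libraries; the column names and the synthetic data are hypothetical stand-ins for the MIMIC-III variables:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.ensemble import RandomForestClassifier

# synthetic stand-in data: three vitals, follow-up time, shock indicator
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame(rng.normal(size=(n, 3)), columns=["hr", "lactate", "map"])
df["time_to_event"] = rng.exponential(10, n)
df["event"] = rng.integers(0, 2, n)

# step 1: fit a Cox proportional hazards model and extract a risk score
cph = CoxPHFitter()
cph.fit(df, duration_col="time_to_event", event_col="event")
df["cox_risk"] = np.asarray(cph.predict_partial_hazard(df)).ravel()

# step 2: train a Random Forest on the features augmented with the risk score
features = ["hr", "lactate", "map", "cox_risk"]
rf = RandomForestClassifier(n_estimators=300, random_state=0)
rf.fit(df[features], df["event"])
print(rf.predict_proba(df[features][:5]))
```

A real evaluation would of course fit the Cox model and the forest on separate folds to avoid leakage; the sketch only shows how the risk score becomes an extra feature.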
Style APA, Harvard, Vancouver, ISO itp.
43

Mehdi, Muhammad Sarim. "Trajectory Prediction for ADAS". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/21891/.

Pełny tekst źródła
Streszczenie:
A novel pipeline is presented for unsupervised trajectory prediction. As part of this research, numerous techniques are investigated for trajectory prediction of dynamic obstacles from an egocentric perspective (the driver's perspective). The algorithm takes as input either images from a calibrated stereo camera or data from a laser scanner, and outputs a heat map that describes all possible future locations of a specific 3D object for the next few frames. This research has many applications, most notably for autonomous cars, as it allows them to make better driving decisions if they can anticipate where another moving object is going to be in the future.
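The heat-map output can be illustrated with a generic sketch: sample many plausible future positions under some motion model and accumulate them into a 2D histogram. The constant-velocity model and noise level below are hypothetical, not the thesis's learned predictor.

```python
import numpy as np

rng = np.random.default_rng(0)
pos = np.array([0.0, 0.0])        # current object position (metres)
vel = np.array([1.0, 0.5])        # estimated velocity per frame
horizon, n_samples = 10, 2000     # frames ahead, trajectory samples

# sample future positions under a noisy constant-velocity assumption
samples = pos + horizon * (vel + rng.normal(scale=0.2, size=(n_samples, 2)))

# accumulate samples into a spatial heat map of possible future locations
heatmap, xedges, yedges = np.histogram2d(samples[:, 0], samples[:, 1],
                                         bins=50, density=True)
print(heatmap.max())  # peak density 'horizon' frames ahead
```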
Style APA, Harvard, Vancouver, ISO itp.
44

Glass, Colin William. "Computational crystal structure prediction /". Zürich : ETH, 2008. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=17852.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
45

Bäumer, Lars. "Identification in prediction theory". [S.l. : s.n.], 2000. http://deposit.ddb.de/cgi-bin/dokserv?idn=959725504.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
46

Ibarria, Lorenzo. "Geometric Prediction for Compression". Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/16162.

Pełny tekst źródła
Streszczenie:
This thesis proposes several new predictors for the compression of shapes, volumes and animations. To compress frames in triangle-mesh animations with fixed connectivity, we introduce the ELP (Extended Lorenzo Predictor) and the Replica predictors, which extrapolate the position of each vertex in frame i from the position of each vertex in frame i-1 and from the positions of its neighbors in both frames. For lossy compression we have combined these predictors with a segmentation of the animation into clips and a synchronized simplification of all frames in a clip. To compress 2D and 3D static or animated scalar fields sampled on a regular grid, we introduce the Lorenzo predictor, well suited for scanline traversal, and the family of Spectral predictors that accommodate any traversal and predict a sample value from known samples in a small neighborhood. Finally, to support the compressed streaming of isosurface animations, we have developed an approach that identifies all node-values needed to compute a given isosurface and encodes the unknown values using our Spectral predictor.
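The 2D Lorenzo predictor itself fits in a few lines: each sample is predicted by the parallelogram rule from its already-scanned neighbors, so only the (typically small) residuals need to be entropy-coded. A minimal sketch on a synthetic smooth field:

```python
import numpy as np

def lorenzo_predict_2d(V, i, j):
    """2D Lorenzo prediction of V[i, j] from neighbors already visited in
    scanline order: the parallelogram rule P = left + top - top-left."""
    return V[i, j - 1] + V[i - 1, j] - V[i - 1, j - 1]

# smooth synthetic field: residuals are much smaller than the values
field = np.fromfunction(lambda i, j: np.sin(i / 7.0) + np.cos(j / 5.0), (64, 64))
residuals = np.zeros_like(field)
n = field.shape[0]
for i in range(1, n):
    for j in range(1, n):
        residuals[i, j] = field[i, j] - lorenzo_predict_2d(field, i, j)
print(np.abs(residuals[1:, 1:]).mean(), np.abs(field).mean())
```

The first row and column would be seeded by lower-dimensional predictors in a real codec; the sketch skips them for brevity.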
Style APA, Harvard, Vancouver, ISO itp.
47

Schelin, Lina. "Spatial sampling and prediction". Doctoral thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-53286.

Pełny tekst źródła
Streszczenie:
This thesis discusses two aspects of spatial statistics: sampling and prediction. In spatial statistics, we observe some phenomenon in space. Space is typically of two or three dimensions, but can be of higher dimension. Questions in mind could be: What is the total amount of gold in a gold-mine? How much precipitation could we expect at a specific unobserved location? What is the total tree volume in a forest area? In spatial sampling the aim is to estimate global quantities, such as population totals, based on samples of locations (papers III and IV). In spatial prediction the aim is to estimate local quantities, such as the value at a single unobserved location, with a measure of uncertainty (papers I, II and V). In papers III and IV, we propose sampling designs for selecting representative probability samples in the presence of auxiliary variables. If the phenomena under study have clear trends in the auxiliary space, estimation of population quantities can be improved by using representative samples. Such samples also enable estimation of population quantities in subspaces and are especially needed for multi-purpose surveys, when several target variables are of interest. In papers I and II, the objective is to construct valid prediction intervals for the value at a new location, given observed data. Prediction intervals typically rely on the kriging predictor having a Gaussian distribution. In paper I, we show that the distribution of the kriging predictor can be far from Gaussian, even asymptotically. This motivated us to propose a semiparametric method that does not require distributional assumptions. Prediction intervals are constructed from the plug-in ordinary kriging predictor. In paper V, we consider prediction in the presence of left-censoring, where observations falling below a minimum detection limit are not fully recorded. We review existing methods and propose a semi-naive method. The semi-naive method is compared to one model-based method and two naive methods, all based on variants of the kriging predictor.
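For reference, the plug-in ordinary kriging predictor around which the prediction intervals are built can be sketched as follows; the exponential covariance is an assumed model, not one taken from the thesis:

```python
import numpy as np

def ordinary_kriging(coords, values, target, cov):
    """Plug-in ordinary kriging at 'target' given observations at 'coords'.
    Solves the standard system with a Lagrange multiplier that forces the
    weights to sum to one; 'cov' is an assumed covariance function."""
    n = len(coords)
    K = np.ones((n + 1, n + 1))
    K[-1, -1] = 0.0
    for a in range(n):
        for b in range(n):
            K[a, b] = cov(coords[a], coords[b])
    k = np.append([cov(c, target) for c in coords], 1.0)
    w = np.linalg.solve(K, k)          # weights plus Lagrange multiplier
    prediction = w[:n] @ values
    variance = cov(target, target) - w @ k  # ordinary kriging variance
    return prediction, variance

# assumed exponential covariance with unit sill and range
cov = lambda p, q: np.exp(-np.linalg.norm(np.asarray(p) - np.asarray(q)))
pred, var = ordinary_kriging([(0, 0), (1, 0), (0, 1)],
                             np.array([1.0, 2.0, 1.5]), (0.5, 0.5), cov)
print(pred, var)
```

A Gaussian-based interval would be prediction ± z·sqrt(variance); the semiparametric method in paper I replaces that Gaussian assumption.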
Style APA, Harvard, Vancouver, ISO itp.
48

Rigaldo, Alexis. "Aerodynamics Gust Response Prediction". Thesis, KTH, Flygdynamik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-41506.

Pełny tekst źródła
Streszczenie:
This project presents work performed within the aerodynamics department of Airbus Operations SAS in Toulouse during a five-month master thesis. This department works on the industrialization and use of tools developed by laboratories to perform CFD aerodynamic simulations. The primary purpose of the present work was to support the development of gust analysis methods based on CFD. A new gust model has been developed and integrated into the aerodynamic solver elsA. This solver has been used to compute unsteady aerodynamic simulations for both gust loads and forced motions with CFD. The results were then compared with those from a Doublet Lattice Method computation for validation. Once the validation phase ended with good agreement between the two methods, a Chimera simulation was carried out.
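The abstract does not specify the gust shape implemented in elsA; a common discrete gust profile in aeroelastic analysis is the "1 - cosine" gust, and the sketch below is offered only as an assumed illustration of such a profile:

```python
import numpy as np

def one_minus_cosine_gust(x, U_ds, H):
    """Gust velocity at penetration distance x for a '1 - cosine' profile:
    U(x) = (U_ds / 2) * (1 - cos(pi * x / H)) for 0 <= x <= 2H, else 0.
    U_ds is the design gust velocity, H the gust gradient length."""
    x = np.asarray(x, dtype=float)
    return np.where((x >= 0) & (x <= 2 * H),
                    0.5 * U_ds * (1 - np.cos(np.pi * x / H)), 0.0)

# velocity at the gust entry, peak, and exit (illustrative values)
print(one_minus_cosine_gust([0.0, 15.0, 30.0], U_ds=10.0, H=15.0))
```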
Style APA, Harvard, Vancouver, ISO itp.
49

Fredette, Marc. "Prediction of recurrent events". Thesis, University of Waterloo, 2004. http://hdl.handle.net/10012/1142.

Pełny tekst źródła
Streszczenie:
In this thesis, we study issues related to prediction problems, with an emphasis on those arising when recurrent events are involved. The first chapter defines the basic concepts of frequentist and Bayesian statistical prediction. In the second chapter, we study frequentist prediction intervals and their associated predictive distributions, and present an approach based on asymptotically uniform pivotals that is shown to dominate the plug-in approach under certain conditions. The following three chapters consider the prediction of recurrent events. The third chapter presents different prediction models for events that can be modeled using homogeneous Poisson processes. Amongst these models, those using random effects are shown to possess interesting features. In the fourth chapter, the time homogeneity assumption is relaxed and we present prediction models for non-homogeneous Poisson processes. The behavior of these models is then studied for prediction problems with a finite horizon. In the fifth chapter, we apply the concepts discussed previously to a warranty dataset from the automobile industry. Since the number of processes in this dataset is very large, we focus on methods providing computationally rapid prediction intervals. Finally, we discuss possibilities for future research in the last chapter.
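A plug-in prediction interval for a homogeneous Poisson process, the approach the thesis shows can be dominated by pivotal-based intervals, takes only a few lines; the event count and horizon below are made up:

```python
from scipy.stats import poisson

# observed history: 42 events over 10 time units; predict the next 5 units
n_events, observed_time, horizon = 42, 10.0, 5.0
rate_hat = n_events / observed_time          # plug-in rate estimate

# plug-in predictive distribution: Poisson(rate_hat * horizon)
lower = poisson.ppf(0.025, rate_hat * horizon)
upper = poisson.ppf(0.975, rate_hat * horizon)
print(f"95% plug-in prediction interval for the future count: "
      f"[{lower:.0f}, {upper:.0f}]")
```

The plug-in interval ignores the uncertainty in the estimated rate, which is exactly the deficiency the pivotal and random-effects approaches address.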
Style APA, Harvard, Vancouver, ISO itp.
50

Nordfors, Per. "Prediction of Code Lifetime". Thesis, Linköpings universitet, Statistik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-135060.

Pełny tekst źródła
Streszczenie:
There are several previous studies in which machine learning algorithms are used to predict how fault-prone a piece of code is. This thesis takes on a slightly different approach by attempting to predict how long a piece of code will remain unmodified after being written (its “lifetime”). This is based on the hypothesis that frequently modified code is more likely to contain weaknesses, which may make lifetime predictions useful for code evaluation purposes. In this thesis, the predictions are made with machine learning algorithms which are trained on open source code examples from GitHub. Two different machine learning algorithms are used: the multilayer perceptron and the support vector machine. A piece of code is described by three groups of features: code contents, code properties obtained from static code analysis, and metadata from the version control system Git. In a series of experiments it is shown that the support vector machine is the best performing algorithm and that all three feature groups are useful for predicting lifetime. Both the multilayer perceptron and the support vector machine outperform a baseline prediction which always outputs the mean lifetime of the training set. This indicates that lifetime to some extent can be predicted based on information extracted from the code. However, lifetime prediction performance is shown to be highly dataset dependent with large error magnitudes.
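The evaluation setup, a support vector machine against a baseline that always predicts the training-set mean lifetime, can be sketched with scikit-learn; the feature matrix below is random stand-in data rather than the GitHub-derived features:

```python
import numpy as np
from sklearn.dummy import DummyRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

# synthetic stand-in: 10 code features, "lifetime" in days
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = np.abs(X[:, 0] * 30 + rng.normal(scale=10, size=500))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for model in (DummyRegressor(strategy="mean"), SVR()):
    model.fit(X_tr, y_tr)
    print(type(model).__name__, mean_absolute_error(y_te, model.predict(X_te)))
```

Beating the mean-predicting baseline, as the thesis reports for both learners, is the evidence that the features carry some lifetime signal.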
Style APA, Harvard, Vancouver, ISO itp.