Dissertations on the topic "Algorithmic probability theory"


Consult the top 50 dissertations for your research on the topic "Algorithmic probability theory".


1

Minozzo, Marco. "On some aspects of the prequential and algorithmic approaches to probability and statistical theory." Thesis, University College London (University of London), 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.362374.

2

Larsson, Frans. "Algorithmic trading surveillance : Identifying deviating behavior with unsupervised anomaly detection." Thesis, Uppsala universitet, Matematiska institutionen, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-389941.

Abstract:
The financial markets are no longer what they used to be and one reason for this is the breakthrough of algorithmic trading. Although this has had several positive effects, there have been recorded incidents where algorithms have been involved. It is therefore of interest to find effective methods to monitor algorithmic trading. The purpose of this thesis was therefore to contribute to this research area by investigating if machine learning can be used for detecting deviating behavior. Since the real world data set used in this study lacked labels, an unsupervised anomaly detection approach was chosen. Two models, isolation forest and deep denoising autoencoder, were selected and evaluated. Because the data set lacked labels, artificial anomalies were injected into the data set to make evaluation of the models possible. These synthetic anomalies were generated by two different approaches, one based on a downsampling strategy and one based on manual construction and modification of real data. The evaluation of the anomaly detection models shows that both isolation forest and deep denoising autoencoder outperform a trivial baseline model, and have the ability to detect deviating behavior. Furthermore, it is shown that a deep denoising autoencoder outperforms isolation forest, with respect to both area under the receiver operating characteristics curve and area under the precision-recall curve. A deep denoising autoencoder is therefore recommended for the purpose of algorithmic trading surveillance.
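As an illustration of the unsupervised approach the abstract describes, here is a minimal sketch of isolation-forest anomaly scoring with scikit-learn; the feature matrix, contamination level and injected anomalies are assumptions for demonstration, not the thesis's actual trading data.

```python
# A minimal sketch of isolation-forest anomaly scoring, assuming a
# synthetic feature matrix in place of real order/trade features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 8))           # hypothetical behavioural features
X[:20] += 6.0                              # inject a few synthetic anomalies

model = IsolationForest(n_estimators=200, contamination=0.002, random_state=0)
model.fit(X)
scores = -model.score_samples(X)           # higher score = more anomalous
flagged = np.argsort(scores)[-20:]         # most deviating observations
print(sorted(flagged))                     # should largely recover rows 0-19
```

A denoising autoencoder would replace the forest with a reconstruction-error score while keeping the same flagging logic.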
3

Jurvelin, Olsson Mikael, and Andreas Hild. "Pairs Trading, Cryptocurrencies and Cointegration : A Performance Comparison of Pairs Trading Portfolios of Cryptocurrencies Formed Through the Augmented Dickey Fuller Test, Johansen’s Test and Phillips Perron’s Test." Thesis, Uppsala universitet, Statistiska institutionen, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-385484.

Abstract:
This thesis analyzes the performance and process of constructing portfolios of cryptocurrency pairs based on cointegrated relationships indicated by the Augmented Dickey-Fuller test, Johansen's test and Phillips-Perron's test. Pairs are tested for cointegration over a 3-month and a 6-month window and then traded over a trading window of the same length. The cryptocurrencies included in the study are the 14 cryptocurrencies with the highest market capitalization on April 24th 2019. One trading strategy has been applied to every portfolio following the 3-month and the 6-month methodology, with thresholds at 1.75 and stop-losses at 4 standard deviations. The performance of each portfolio is compared with its corresponding buy-and-hold benchmark. All portfolios outperformed their buy-and-hold benchmark, with and without transaction costs set to 2%. Following the 3-month methodology was superior to the 6-month method, and the portfolios formed through Phillips-Perron's test had the highest return for both window methods.
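A minimal sketch of the pair-selection and threshold logic described above, using the Engle-Granger style cointegration test from statsmodels; the synthetic prices, 5% p-value cut-off and OLS hedge ratio are illustrative assumptions, while the 1.75 / 4.0 standard-deviation levels follow the thesis setup.

```python
import numpy as np
from statsmodels.tsa.stattools import coint

def pair_signals(p1, p2, entry=1.75, stop=4.0):
    lp1, lp2 = np.log(p1), np.log(p2)
    _, pvalue, _ = coint(lp1, lp2)
    if pvalue > 0.05:                       # pair not cointegrated: skip it
        return None
    beta = np.polyfit(lp2, lp1, 1)[0]       # hedge ratio from OLS
    spread = lp1 - beta * lp2
    z = (spread - spread.mean()) / spread.std()
    return {"long": z < -entry,             # buy p1, sell p2
            "short": z > entry,             # sell p1, buy p2
            "stop": np.abs(z) > stop}       # stop-loss exit

rng = np.random.default_rng(1)
trend = np.cumsum(rng.normal(size=1_000))   # shared stochastic trend
p1 = np.exp(5 + 0.02 * trend + rng.normal(scale=0.02, size=1_000))
p2 = np.exp(4 + 0.02 * trend + rng.normal(scale=0.02, size=1_000))
signals = pair_signals(p1, p2)
```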
4

Mozayyan, Esfahani Sina. "Algorithmic Trading and Prediction of Foreign Exchange Rates Based on the Option Expiration Effect." Thesis, KTH, Matematisk statistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-252297.

Abstract:
The equity option expiration effect is a well observed phenomenon and is explained by delta hedge rebalancing and pinning risk, which makes the strike price of an option work as a magnet for the underlying price. The FX option expiration effect has not previously been explored to the same extent. In this paper the FX option expiration effect is investigated with the aim of finding out whether it provides valuable information for predicting FX rate movements. New models are created based on the concept of the option relevance coefficient that determines which options are at higher risk of being in the money or out of the money at a specified future time and thus have an attraction effect. An algorithmic trading strategy is created to evaluate these models. The new models based on the FX option expiration effect strongly outperform time series models used as benchmarks. The best results are obtained when the information about the FX option expiration effect is included as an exogenous variable in a GARCH-X model. However, despite promising and consistent results, more scientific research is required to be able to draw significant conclusions.
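A minimal sketch of fitting a GARCH model with an exogenous regressor using the `arch` package. Here a hypothetical expiration signal enters an ARX mean equation with GARCH(1,1) errors; the thesis's GARCH-X specification may place the regressor differently, and the simulated series are stand-ins for real FX data.

```python
import numpy as np
from arch import arch_model

rng = np.random.default_rng(2)
returns = rng.normal(scale=0.5, size=1_000)      # stand-in FX returns (%)
expiry_signal = rng.normal(size=(1_000, 1))      # stand-in option-expiration signal

model = arch_model(returns, x=expiry_signal, mean="ARX", lags=1,
                   vol="GARCH", p=1, q=1)
result = model.fit(disp="off")
print(result.params)                             # mean, exogenous and GARCH terms
```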
5

Barakat, Arian. "What makes an (audio)book popular?" Thesis, Linköpings universitet, Statistik och maskininlärning, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-152871.

Abstract:
Audiobook reading has traditionally been used for educational purposes but has in recent times grown into a popular alternative to the more traditional means of consuming literature. In order to differentiate themselves from other players in the market, but also provide their users enjoyable literature, several audiobook companies have lately directed their efforts on producing own content. Creating highly rated content is, however, no easy task and one reoccurring challenge is how to make a bestselling story. In an attempt to identify latent features shared by successful audiobooks and evaluate proposed methods for literary quantification, this thesis employs an array of frameworks from the field of Statistics, Machine Learning and Natural Language Processing on data and literature provided by Storytel - Sweden’s largest audiobook company. We analyze and identify important features from a collection of 3077 Swedish books concerning their promotional and literary success. By considering features from the aspects Metadata, Theme, Plot, Style and Readability, we found that popular books are typically published as a book series, cover 1-3 central topics, write about, e.g., daughter-mother relationships and human closeness but that they also hold, on average, a higher proportion of verbs and a lower degree of short words. Despite successfully identifying these, but also other factors, we recognized that none of our models predicted “bestseller” adequately and that future work may desire to study additional factors, employ other models or even use different metrics to define and measure popularity. From our evaluation of the literary quantification methods, namely topic modeling and narrative approximation, we found that these methods are, in general, suitable for Swedish texts but that they require further improvement and experimentation to be successfully deployed for Swedish literature. For topic modeling, we recognized that the sole use of nouns provided more interpretable topics and that the inclusion of character names tended to pollute the topics. We also identified and discussed the possible problem of word inflections when modeling topics for more morphologically complex languages, and that additional preprocessing treatments such as word lemmatization or post-training text normalization may improve the quality and interpretability of topics. For the narrative approximation, we discovered that the method currently suffers from three shortcomings: (1) unreliable sentence segmentation, (2) unsatisfactory dictionary-based sentiment analysis and (3) the possible loss of sentiment information induced by translations. Despite only examining a handful of literary work, we further found that books written initially in Swedish had narratives that were more cross-language consistent compared to books written in English and then translated to Swedish.
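A minimal sketch of the nouns-only topic-modeling step, assuming `documents` already holds lemmatized nouns extracted by a POS tagger; the two toy documents below are stand-ins for full book texts.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

documents = ["dotter mor relation familj hem",
             "hav resa kapten skepp storm"]      # toy noun-only "books"

vectorizer = CountVectorizer()
counts = vectorizer.fit_transform(documents)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

terms = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-3:]]
    print(f"topic {k}:", top)                    # most probable nouns per topic
```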
6

Graham, Matthew McKenzie. "Auxiliary variable Markov chain Monte Carlo methods." Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/28962.

Abstract:
Markov chain Monte Carlo (MCMC) methods are a widely applicable class of algorithms for estimating integrals in statistical inference problems. A common approach in MCMC methods is to introduce additional auxiliary variables into the Markov chain state and perform transitions in the joint space of target and auxiliary variables. In this thesis we consider novel methods for using auxiliary variables within MCMC methods to allow approximate inference in otherwise intractable models and to improve sampling performance in models exhibiting challenging properties such as multimodality. We first consider the pseudo-marginal framework. This extends the Metropolis–Hastings algorithm to cases where we only have access to an unbiased estimator of the density of the target distribution. The resulting chains can sometimes show ‘sticking' behaviour where long series of proposed updates are rejected. Further, the algorithms can be difficult to tune and it is not immediately clear how to generalise the approach to alternative transition operators. We show that if the auxiliary variables used in the density estimator are included in the chain state it is possible to use new transition operators such as those based on slice-sampling algorithms within a pseudo-marginal setting. This auxiliary pseudo-marginal approach leads to easier-to-tune methods and is often able to improve sampling efficiency over existing approaches. As a second contribution we consider inference in probabilistic models defined via a generative process with the probability density of the outputs of this process only implicitly defined. The approximate Bayesian computation (ABC) framework allows inference in such models when conditioning on the values of observed model variables by making the approximation that generated observed variables are ‘close' rather than exactly equal to observed data. Although making the inference problem more tractable, the approximation error introduced in ABC methods can be difficult to quantify and standard algorithms tend to perform poorly when conditioning on high-dimensional observations. This often requires further approximation by reducing the observations to lower-dimensional summary statistics. We show how including all of the random variables used in generating model outputs as auxiliary variables in a Markov chain state can allow the use of more efficient and robust MCMC methods such as slice sampling and Hamiltonian Monte Carlo (HMC) within an ABC framework. In some cases this can allow inference when conditioning on the full set of observed values when standard ABC methods require reduction to lower-dimensional summaries for tractability. Further we introduce a novel constrained HMC method for performing inference in a restricted class of differentiable generative models which allows conditioning the generated observed variables to be arbitrarily close to observed data while maintaining computational tractability. As a final topic we consider the use of an auxiliary temperature variable in MCMC methods to improve exploration of multimodal target densities and allow estimation of normalising constants. Existing approaches such as simulated tempering and annealed importance sampling use temperature variables which take on only a discrete set of values.
The performance of these methods can be sensitive to the number and spacing of the temperature values used, and the discrete nature of the temperature variable prevents the use of gradient-based methods such as HMC to update the temperature alongside the target variables. We introduce new MCMC methods which instead use a continuous temperature variable. This both removes the need to tune the choice of discrete temperature values and allows the temperature variable to be updated jointly with the target variables within a HMC method.
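A minimal sketch of the pseudo-marginal idea on a toy latent-variable model: the likelihood is only available through an unbiased Monte Carlo estimator, and the current estimate is carried in the chain state rather than recomputed. The model and all numbers are illustrative, not from the thesis.

```python
import numpy as np

rng = np.random.default_rng(3)
y = 1.3                                          # a single observation

def loglik_hat(theta, n_u=32):
    # unbiased estimate of p(y | theta) = E_u[ N(y; theta + u, 1) ], u ~ N(0, 1)
    u = rng.normal(size=n_u)
    w = np.exp(-0.5 * (y - theta - u) ** 2) / np.sqrt(2 * np.pi)
    return np.log(w.mean())

def log_prior(theta):
    return -0.5 * theta ** 2                     # N(0, 1) prior

theta, ll = 0.0, loglik_hat(0.0)
samples = []
for _ in range(5_000):
    prop = theta + 0.5 * rng.normal()
    ll_prop = loglik_hat(prop)
    if np.log(rng.uniform()) < ll_prop + log_prior(prop) - ll - log_prior(theta):
        theta, ll = prop, ll_prop                # accept: carry the new estimate
    samples.append(theta)
```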
7

Asif, Muneeb. "Predicting the Success of Bank Telemarketing using various Classification Algorithms." Thesis, Örebro universitet, Handelshögskolan vid Örebro Universitet, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-67994.

8

Hemsley, Ross. "Méthodes probabilistes pour l'analyse des algorithmes sur les tesselations aléatoires." Thesis, Nice, 2014. http://www.theses.fr/2014NICE4143/document.

Abstract:
In this thesis, we leverage the tools of probability theory and stochastic geometry to investigate the behavior of algorithms on geometric tessellations of space. This work is split between two main themes, the first of which is focused on the problem of navigating the Delaunay tessellation and its geometric dual, the Voronoi diagram. We explore the applications of this problem to point location using walking algorithms and the study of online routing in networks. We then propose and investigate two new algorithms which navigate the Delaunay triangulation, which we call Pivot Walk and Cone Walk. For Cone Walk, we provide a detailed average-case analysis, giving explicit bounds on the properties of the worst possible path taken by the algorithm on a random Delaunay triangulation in a bounded convex region. This analysis is a significant departure from similar results that have been obtained, due to the difficulty of dealing with the complex dependence structure of localized navigation algorithms on the Delaunay triangulation. The second part of this work is concerned with the study of extremal properties of random tessellations. In particular, we derive the first and last order-statistics for the inballs of the cells in a Poisson line tessellation. This result has implications for algorithms involving line tessellations, such as locality sensitive hashing. As a corollary, we show that the cells minimizing the area are triangles
9

Jones, Bo. "A New Approximation Scheme for Monte Carlo Applications." Scholarship @ Claremont, 2017. http://scholarship.claremont.edu/cmc_theses/1579.

Abstract:
Approximation algorithms employing Monte Carlo methods, across application domains, often require as a subroutine the estimation of the mean of a random variable with support on [0,1]. One wishes to estimate this mean to within a user-specified error, using as few samples from the simulated distribution as possible. In the case that the mean being estimated is small, one is then interested in controlling the relative error of the estimate. We introduce a new (epsilon, delta) relative error approximation scheme for [0,1] random variables and provide a comparison of this algorithm's performance to that of an existing approximation scheme, both establishing theoretical bounds on the expected number of samples required by the two algorithms and empirically comparing the samples used when the algorithms are employed for a particular application.
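For concreteness, below is a minimal sketch of one classical (epsilon, delta) relative-error scheme for [0,1] random variables: the stopping-rule algorithm of Dagum, Karp, Luby and Ross. It is shown as an example of the kind of scheme the thesis studies, not as the thesis's own algorithm, and the constants are quoted from memory of that paper.

```python
import math
import random

def stopping_rule_estimate(sample, eps, delta):
    """Draw samples until the running sum crosses a threshold; the
    estimate is then within relative error eps with prob. >= 1 - delta."""
    upsilon = 4 * (math.e - 2) * math.log(2 / delta) / eps ** 2
    upsilon1 = 1 + (1 + eps) * upsilon
    total, n = 0.0, 0
    while total < upsilon1:
        total += sample()
        n += 1
    return upsilon1 / n

# small-mean example: the number of samples adapts to the unknown mean
print(stopping_rule_estimate(lambda: 0.1 * random.random(), 0.1, 0.05))
```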
10

Dineff, Dimitris. "Clustering using k-means algorithm in multivariate dependent models with factor structure." Thesis, Uppsala universitet, Tillämpad matematik och statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-429528.

11

Huang, Xin. "A study on the application of machine learning algorithms in stochastic optimal control." Thesis, KTH, Matematisk statistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-252541.

Abstract:
By observing a similarity between the goal of stochastic optimal control, to minimize an expected cost functional, and the aim of machine learning, to minimize an expected loss function, a method of applying machine learning algorithms to approximate the optimal control function is established and implemented via neural approximation. Based on a discretization framework, a recursive formula for the gradient of the approximated cost functional with respect to the parameters of the neural network is derived. For a well-known Linear-Quadratic-Gaussian control problem, the approximated neural network function obtained with the stochastic gradient descent algorithm manages to reproduce the shape of the theoretical optimal control function, and different types of machine learning optimization algorithms give quite similar accuracy in terms of their associated empirical value functions. Furthermore, it is shown that the accuracy and stability of the machine learning approximation can be improved by increasing the size of the minibatch and applying a finer discretization scheme. These results suggest the effectiveness and appropriateness of applying machine learning algorithms to stochastic optimal control.
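A minimal sketch of the core idea: parametrize a feedback control, simulate the discretized stochastic dynamics, and run stochastic gradient descent on the sampled quadratic cost. A scalar linear policy and a common-random-numbers finite-difference gradient stand in for the neural network and backpropagation used in the thesis.

```python
import numpy as np

T, dt, sigma = 20, 0.05, 0.3

def sampled_cost(k, seed, n_paths=256):
    rng = np.random.default_rng(seed)            # common random numbers per step
    x = np.ones(n_paths)
    cost = np.zeros(n_paths)
    for _ in range(T):
        u = -k * x                               # linear feedback policy
        cost += (x ** 2 + u ** 2) * dt           # running quadratic cost
        x = x + u * dt + sigma * np.sqrt(dt) * rng.normal(size=n_paths)
    return float(np.mean(cost + x ** 2))         # plus terminal cost

k, lr, h = 0.0, 0.2, 1e-2
for step in range(300):                          # SGD on the minibatch cost
    grad = (sampled_cost(k + h, step) - sampled_cost(k - h, step)) / (2 * h)
    k -= lr * grad
print(f"learned feedback gain: {k:.3f}")
```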
12

Wang, Juan. "Estimation of individual treatment effect via Gaussian mixture model." HKBU Institutional Repository, 2020. https://repository.hkbu.edu.hk/etd_oa/839.

Abstract:
In this thesis, we investigate the estimation problem of treatment effect from a Bayesian perspective, through which one can first obtain the posterior distribution of the unobserved potential outcome from observed data, and then obtain the posterior distribution of the treatment effect. We mainly consider how to represent a joint distribution of two potential outcomes - one from the treated group and another from the control group - which can give us an indirect impression of correlation, since the estimation of the treatment effect depends on the correlation between the two potential outcomes. The first part of this thesis illustrates the effectiveness of adapting Gaussian mixture models in solving the treatment effect problem. We apply the mixture models - Gaussian Mixture Regression (GMR) and Gaussian Mixture Linear Regression (GMLR) - as a potentially simple and powerful tool to investigate the joint distribution of two potential outcomes. For GMR, we consider a joint distribution of the covariate and two potential outcomes. For GMLR, we consider a joint distribution of two potential outcomes, which linearly depend on the covariate. Through developing an EM algorithm for GMLR, we find that GMR and GMLR are effective in estimating means and variances, but they are not effective in capturing the correlation between two potential outcomes. In the second part of this thesis, GMLR is modified to capture the unobserved covariance structure (correlation between outcomes) that can be explained by latent variables introduced through making an important model assumption. We propose a much more efficient Pre-Post EM Algorithm to implement our proposed GMLR model with unobserved covariance structure in practice. Simulation studies show that the Pre-Post EM Algorithm performs well not only in estimating means and variances, but also in estimating covariance.
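A minimal sketch of modeling two potential outcomes jointly with a Gaussian mixture, on simulated data where, unlike in real observational studies, both outcomes are visible for every unit; the data-generating process is purely illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
n = 2_000
z = rng.integers(0, 2, size=n)                                # latent subgroup
y0 = rng.normal(loc=np.where(z == 0, 0.0, 2.0), scale=0.5)    # control outcome
y1 = y0 + rng.normal(loc=1.0, scale=0.5, size=n)              # treated outcome

gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
gmm.fit(np.column_stack([y0, y1]))
print(gmm.means_)          # per-component means of (Y0, Y1)
print(gmm.covariances_)    # per-component Y0-Y1 covariance structure
```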
13

Ringh, Emil. "Low complexity algorithms for faster-than-Nyquist signaling : Using coding to avoid an NP-hard problem." Thesis, KTH, Optimeringslära och systemteori, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-136936.

Abstract:
This thesis is an investigation of what happens when communication links are pushed towards their limits and the data-bearing pulses are packed tighter in time than previously done. This is called faster-than-Nyquist (FTN) signaling and it will violate the Nyquist inter-symbol interference criterion, implying that the data pulses are no longer orthogonal and thus that the samples at the receiver will depend on more than one of the transmitted symbols. Inter-symbol interference (ISI) has occurred and its consequences are studied for the AWGN channel model. Here it is shown that in order to do maximum likelihood estimation on these samples the receiver will face an NP-hard problem. The standard algorithm for making good estimates in the ISI case is the Viterbi algorithm, but applied to a block with N bits and interference among K bits the complexity is O(N·2^K), hence limiting the practical applicability. Here, a precoding scheme is proposed together with a decoding that reduces the estimation complexity. By applying the proposed precoding/decoding to a data block of length N, the estimation can be done in O(N²) operations preceded by a single off-line O(N³) calculation. The precoding itself is also done in O(N²) operations, with a single off-line operation of O(N³) complexity. The strength of the precoding is shown in simulations. In the first, it was tested together with turbo codes of code rate 2/3 and block length 6000 bits. When sending 25% more data (FTN), the non-precoded case needed about 2.5 dB higher signal-to-noise ratio (SNR) to have the same error rate as the precoded case. When the precoded case performed without any block errors, the non-precoded case still had a block error rate almost equal to 1. We also studied the scenario of transmission with low latency and high reliability. Here, 600 bits were transmitted with a code rate of 2/3, and hence the target was to communicate 400 bits of data. Applying FTN with double packing, that is, transmitting 1200 bits during the same amount of time, it was possible to lower the code rate to 1/3 since only 400 bits of data were to be communicated. This technique greatly improves the robustness. When the FTN case performed error-free, the classical Nyquist case still had a block error rate of 0.19. To reach error-free performance the Nyquist case needed 1.25 dB higher SNR compared to the precoded FTN case with lower code rate.
14

Larsen, Ross Allen Andrew. "Food Shelf Life: Estimation and Experimental Design." Diss., CLICK HERE for online access, 2006. http://contentdm.lib.byu.edu/ETD/image/etd1315.pdf.

15

Myers, Tracy S. (Tracy Scott). "Reasoning with incomplete probabilistic knowledge : the RIP algorithm for de Finetti's fundamental theorem of probability." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/11885.

16

Neumann, Geoffrey K. "TEDA : a Targeted Estimation of Distribution Algorithm." Thesis, University of Stirling, 2014. http://hdl.handle.net/1893/20248.

Abstract:
This thesis discusses the development and performance of a novel evolutionary algorithm, the Targeted Estimation of Distribution Algorithm (TEDA). TEDA takes the concept of targeting, an idea that has previously been shown to be effective as part of a Genetic Algorithm (GA) called Fitness Directed Crossover (FDC), and introduces it into a novel hybrid algorithm that transitions from a GA to an Estimation of Distribution Algorithm (EDA). Targeting is a process for solving optimisation problems where there is a concept of control points, genes that can be said to be active, and where the total number of control points found within a solution is as important as where they are located. When generating a new solution an algorithm that uses targeting must first of all choose the number of control points to set in the new solution before choosing which to set. The hybrid approach is designed to take advantage of the ability of EDAs to exploit patterns within the population to effectively locate the global optimum while avoiding the tendency of EDAs to prematurely converge. This is achieved by initially using a GA to effectively explore the search space before transitioning into an EDA as the population converges on the region of the global optimum. As targeting places an extra restriction on the solutions produced by specifying their size, combining it with the hybrid approach allows TEDA to produce solutions that are of an optimal size and of a higher quality than would be found using a GA alone without risking a loss of diversity. TEDA is tested on three different problem domains. These are optimal control of cancer chemotherapy, network routing and Feature Subset Selection (FSS). Of these problems, TEDA showed consistent advantage over standard EAs in the routing problem and demonstrated that it is able to find good solutions faster than untargeted EAs and non evolutionary approaches at the FSS problem. It did not demonstrate any advantage over other approaches when applied to chemotherapy. The FSS domain demonstrated that in large and noisy problems TEDA’s targeting derived ability to reduce the size of the search space significantly increased the speed with which good solutions could be found. The routing domain demonstrated that, where the ideal number of control points is deceptive, both targeting and the exploitative capabilities of an EDA are needed, making TEDA a more effective approach than both untargeted approaches and FDC. Additionally, in none of the problems was TEDA seen to perform significantly worse than any alternative approaches.
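A minimal sketch of the targeting concept inside a simple univariate EDA: each new solution first draws how many control points to set (from the sizes seen in the elite) and then samples which bits to activate from the learned marginals. The toy objective is an assumption, and TEDA's GA-to-EDA transition is not modelled.

```python
import numpy as np

rng = np.random.default_rng(5)
n_bits, pop_size, n_keep = 40, 100, 30

def fitness(x):
    # toy objective: reward bits in the first half, penalize solution size
    return x[: n_bits // 2].sum() - 0.4 * x.sum()

pop = rng.integers(0, 2, size=(pop_size, n_bits))
for gen in range(50):
    elite = pop[np.argsort([fitness(x) for x in pop])[-n_keep:]]
    probs = elite.mean(axis=0) + 0.05            # smoothed marginal model
    probs /= probs.sum()
    sizes = elite.sum(axis=1)                    # observed numbers of control points
    pop = np.zeros_like(pop)
    for i in range(pop_size):
        k = rng.choice(sizes)                    # targeting: choose the size first
        on = rng.choice(n_bits, size=k, replace=False, p=probs)
        pop[i, on] = 1                           # then choose which bits to set
print(max(fitness(x) for x in pop))
```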
17

Diciolla, Marco. "Quantitative verification of real-time properties with application to medical devices." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:fee320e8-9d4f-4831-a999-f5a1febade36.

Abstract:
Probabilistic model checking is a powerful technique used to ensure the correct functioning of systems which exhibit real-time and stochastic behaviours. Many such systems are embedded and used in safety-critical situations, to mention implantable medical devices. This thesis aims to develop a formal model-based framework that is tailored for the analysis and verification of cardiac pacemakers. The contributions are novel approaches for the automatic verification and validation of real-time properties over continuous-time models, which are applicable to software embedded in medical devices. First, we address the problem of model checking continuous-time Markov chain (CTMC) models against real-time specifications given in the form of temporal logic, namely, metric temporal logic (MTL) and linear duration properties (LDP), or as timed automata (TA). The main question that we address is “given a continuous-time Markov chain, what is the probability of the set of timed paths that satisfy the real-time property under consideration?”. We provide novel algorithms to approximate the probability through generating systems of linear inequalities over variables that represent the waiting times in system states, and then solving multidimensional integrals over this set. Second, we present a model-based framework to support the design and verification of pacemakers against real-time properties. The pacemaker is modelled as a network of timed automata, whereas the human heart is modelled either as a network of timed automata or as a network of hybrid automata. Our framework can be instantiated with personalised heart models whose parameters can be learnt from patient data, and we have done so to validate our approach. We introduce property patterns and the counting metric temporal logic (CMTL) in order to specify the properties of interest. We provide new verification algorithms for networks of timed or hybrid automata against property patterns and CMTL. Finally, we pose and solve the parameter synthesis problem, i.e., given a network of timed automata containing model parameters, an objective function and a CMTL formula, find the set of parameter valuations, whenever existing, which satisfy the CMTL formula and maximise the objective function. The framework has been implemented using Simulink, Matlab and Python code. Extensive experimental results on pacemaker models have been carried out and discussed in detail. The techniques developed in this thesis can assist in the design and verification of software embedded in medical devices.
18

Rudberg, Olov, and Daniel Bezaatpour. "Regional Rainfall Frequency Analysis." Thesis, Stockholms universitet, Statistiska institutionen, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-186813.

Abstract:
Frequency analysis is a vital tool when finding a well-suited probability distribution in order to predict extreme rainfall. The regional frequency approach has been used for determination of homogeneous regions, using 11 sites in Skåne, Sweden. To describe maximum annual daily rainfall, the Generalized Logistic (GLO), Generalized Extreme Value (GEV), Generalized Normal (GNO), Pearson Type III (PE3), and Generalized Pareto (GPA) distributions have been considered. The method of L-moments has been used in order to find parameter estimates for the candidate distributions. Heterogeneity measures, goodness-of-fit tests, and accuracy measures have been executed in order to accurately estimate quantiles for 1-, 5-, 10-, 50- and 100-year return periods. It was found that the whole province of Skåne could be considered as homogeneous. The GEV distribution was the most consistent with the data, followed by the GNO distribution, and they were both used in order to estimate quantiles for the return periods. The GEV distribution generated the most precise estimates with the lowest relative RMSE; hence, it was concluded to be the best-fit distribution for maximum annual daily rainfall in the province.
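A minimal sketch of fitting a GEV distribution to annual-maximum daily rainfall and reading off return levels. Note that scipy fits by maximum likelihood rather than the L-moments used in the thesis, and the data below are simulated stand-ins for the Skåne series.

```python
from scipy.stats import genextreme

# simulated stand-in for 60 years of annual-maximum daily rainfall (mm)
annual_max = genextreme.rvs(c=-0.1, loc=30, scale=8, size=60, random_state=6)

c, loc, scale = genextreme.fit(annual_max)
for T in (5, 10, 50, 100):
    level = genextreme.ppf(1 - 1 / T, c, loc=loc, scale=scale)
    print(f"{T:>3}-year return level: {level:.1f} mm")
```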
19

SINGH, KEVIN. "Comparing Variable Selection Algorithms On Logistic Regression – A Simulation." Thesis, Uppsala universitet, Statistiska institutionen, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-446090.

Abstract:
When we try to understand why some schools perform worse than others, if Covid-19 has struck harder on some demographics or whether income correlates with increased happiness, we may turn to regression to better understand how these variables are correlated. To capture the true relationship between variables we may use variable selection methods in order to ensure that the variables which have an actual effect have been included in the model. Choosing the right model for variable selection is vital. Without it there is a risk of including variables which have little to do with the dependent variable or excluding variables that are important. Failing to capture the true effects would paint a picture disconnected from reality and it would also give a false impression of what reality really looks like. To mitigate this risk a simulation study has been conducted to find out what variable selection algorithms to apply in order to make more accurate inference. The different algorithms being tested are stepwise regression, backward elimination and lasso regression. Lasso performed worst when applied to a small sample but performed best when applied to larger samples. Backward elimination and stepwise regression had very similar results.
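A minimal sketch of lasso-type variable selection for logistic regression, one of the three methods compared: with an L1 penalty, variables whose coefficients are shrunk exactly to zero are dropped. The simulated design, with only two truly active variables, is an assumption for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n, p = 500, 10
X = rng.normal(size=(n, p))
logit = 1.5 * X[:, 0] - 2.0 * X[:, 1]            # only variables 0 and 1 matter
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
lasso.fit(X, y)
print("selected variables:", np.flatnonzero(lasso.coef_[0] != 0))
```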
20

Krook, Jonatan. "Predicting low airfares with time series features and a decision tree algorithm." Thesis, Uppsala universitet, Statistiska institutionen, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-353274.

Abstract:
Airlines try to maximize revenue by letting prices of tickets vary over time. This fluctuation contains patterns that can be exploited to predict price lows. In this study, we create an algorithm that daily decides whether to buy a certain ticket or wait for the price to go down. For creation and evaluation, we have used data from searches made online for flights on the route Stockholm – New York during 2017 and 2018. The algorithm is based on time series features selected by a decision tree and clearly outperforms the selected benchmarks.
21

Westerlund, Fredrik. "CREDIT CARD FRAUD DETECTION (Machine learning algorithms)." Thesis, Umeå universitet, Statistik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-136031.

Abstract:
Credit card fraud is a field with perpetrators performing illegal actions that may affect other individuals or companies negatively. For instance, a criminal can steal credit card information from an account holder and then conduct fraudulent transactions. The activities are a potential contributory factor to how illegal organizations such as terrorists and drug traffickers support themselves financially. Within the machine learning area, there are several methods that possess the ability to detect credit card fraud transactions: supervised learning and unsupervised learning algorithms. This essay investigates the supervised approach, where two algorithms (Hellinger Distance Decision Tree (HDDT) and Random Forest) are evaluated on a real-life dataset of 284,807 transactions. Under those circumstances, the main purpose is to develop a "well-functioning" model with a reasonable capacity to categorize transactions as fraudulent or legitimate. As the data is heavily unbalanced, reducing the false-positive rate is also an important part when conducting research in the chosen area. In conclusion, the evaluated algorithms present a fairly similar outcome, where both models have the capability to distinguish the classes from each other. However, the Random Forest approach has a better performance than HDDT in all measures of interest.
22

Wang, Yunli. "Mass Spectrum Analysis of a Substance Sample Placed into Liquid Solution." Thesis, North Dakota State University, 2011. https://hdl.handle.net/10365/28881.

Abstract:
Mass spectrometry is an analytical technique commonly used for determining elemental composition in a substance sample. For this purpose, the sample is placed into some liquid solution called liquid matrix. Unfortunately, the spectrum of the sample is not observable separate from that of the solution. Thus, it is desired to distinguish the sample spectrum. The analysis is usually based on the comparison of the mixed spectrum with that of the sole solution. Introducing the missing information about the origin of observed spectrum peaks, the author obtains a classic setup for the Expectation-Maximization (EM) algorithm. The author proposed a mixture modeling the spectrum of the liquid solution as well as that of the sample. A bell-shaped probability mass function obtained by discretization of the univariate Gaussian probability density function was proposed for serving as a mixture component. The E- and M-steps were derived under the proposed model. The corresponding R program is written and tested on a small but challenging simulation example. Varying the number of mixture components for the liquid matrix and sample, the author found the correct model according to the Bayesian Information Criterion. The initialization of the EM algorithm is a difficult standalone problem that was successfully resolved for this case. The author presents the findings and provides results from the simulation example as well as corresponding illustrations supporting the conclusions.
23

Fu, Shuai. "Inversion probabiliste bayésienne en analyse d'incertitude." Phd thesis, Université Paris Sud - Paris XI, 2012. http://tel.archives-ouvertes.fr/tel-00766341.

Abstract:
This research work proposes a solution to probabilistic inverse problems using tools from Bayesian statistics. The inverse problem considered is to estimate the distribution of an unobserved random variable X from noisy observations Y following a costly physical model H. In general, such inverse problems are encountered in uncertainty treatment. The Bayesian framework allows us to take expert prior knowledge into account, especially when few data are available. A Metropolis-Hastings-within-Gibbs algorithm is proposed to approximate the posterior distribution of the parameters of X with a data augmentation process. Because of the high number of calls, the costly function H is replaced by a kriging emulator (meta-model) H-hat. This approach involves several errors of different natures and, in this work, we endeavour to estimate and reduce the impact of these errors. The DAC criterion has been proposed to evaluate the relevance of the experimental design and the choice of the prior distribution, taking the observations into account. Another contribution is the construction of an adaptive design suited to our particular objective in the Bayesian framework. The main methodology presented in this work has been applied to a hydraulic engineering case study.
24

Shipitsyn, Aleksey. "Statistical Learning with Imbalanced Data." Thesis, Linköpings universitet, Filosofiska fakulteten, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-139168.

Abstract:
In this thesis several sampling methods for Statistical Learning with imbalanced data have been implemented and evaluated with a new metric, imbalanced accuracy. Several modifications and new algorithms have been proposed for intelligent sampling: Border links, Clean Border Undersampling, One-Sided Undersampling Modified, DBSCAN Undersampling, Class Adjusted Jittering, Hierarchical Cluster Based Oversampling, DBSCAN Oversampling, Fitted Distribution Oversampling, Random Linear Combinations Oversampling, Center Repulsion Oversampling. A set of requirements on a satisfactory performance metric for imbalanced learning have been formulated and a new metric for evaluating classification performance has been developed accordingly. The new metric is based on a combination of the worst class accuracy and geometric mean. In the testing framework nonparametric Friedman's test and post hoc Nemenyi’s test have been used to assess the performance of classifiers, sampling algorithms, combinations of classifiers and sampling algorithms on several data sets. A new approach of detecting algorithms with dominating and dominated performance has been proposed with a new way of visualizing the results in a network. From experiments on simulated and several real data sets we conclude that: i) different classifiers are not equally sensitive to sampling algorithms, ii) sampling algorithms have different performance within specific classifiers, iii) oversampling algorithms perform better than undersampling algorithms, iv) Random Oversampling and Random Undersampling outperform many well-known sampling algorithms, v) our proposed algorithms Hierarchical Cluster Based Oversampling, DBSCAN Oversampling with FDO, and Class Adjusted Jittering perform much better than other algorithms, vi) a few good combinations of a classifier and sampling algorithm may boost classification performance, while a few bad combinations may spoil the performance, but the majority of combinations are not significantly different in performance.
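A minimal sketch of an imbalanced-accuracy style metric in the spirit described above, combining the worst per-class accuracy with the geometric mean of the per-class accuracies; the exact combination used in the thesis may differ, so this is an illustrative assumption.

```python
import numpy as np

def imbalanced_accuracy(y_true, y_pred):
    classes = np.unique(y_true)
    per_class = np.array([np.mean(y_pred[y_true == c] == c) for c in classes])
    gmean = per_class.prod() ** (1 / len(per_class))
    return np.sqrt(per_class.min() * gmean)      # penalizes a weak minority class

y_true = np.array([0] * 95 + [1] * 5)
y_pred = np.array([0] * 95 + [1] * 2 + [0] * 3)  # majority-biased classifier
print(imbalanced_accuracy(y_true, y_pred))       # low despite 97% plain accuracy
```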
25

Nehme, Bilal. "Techniques non-additives d'estimation de la densité de probabilité." Phd thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 2010. http://tel.archives-ouvertes.fr/tel-00576957.

Abstract:
In this thesis, we propose a new non-parametric method for probability density estimation. This imprecise estimation method combines Schwartz distribution theory and possibility theory. The estimation method we propose is an extension of kernel density estimation. This extension is based on a new way of representing the notion of neighbourhood on which kernel estimation relies, called the maxitive kernel. The estimate produced is interval-valued: it is a convex envelope of a set of Parzen-Rosenblatt estimates obtained with a set of kernels contained in a particular family. We study a number of theoretical properties of this new estimation method. Among these properties, we show a certain type of convergence of this estimator. We also show a particular ability of this type of estimation to quantify the estimation error linked to the random aspect of the distribution of the observations. We propose a number of low-complexity algorithms that make the proposed methods easy to implement.
26

Asafu-Adjei, Joseph Kwaku. "Probabilistic Methods." VCU Scholars Compass, 2007. http://hdl.handle.net/10156/1420.

27

Minsker, Stanislav. "Non-asymptotic bounds for prediction problems and density estimation." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44808.

Abstract:
This dissertation investigates the learning scenarios where a high-dimensional parameter has to be estimated from a given sample of fixed size, often smaller than the dimension of the problem. The first part answers some open questions for the binary classification problem in the framework of active learning. Given a random couple (X,Y) with unknown distribution P, the goal of binary classification is to predict a label Y based on the observation X. Prediction rule is constructed from a sequence of observations sampled from P. The concept of active learning can be informally characterized as follows: on every iteration, the algorithm is allowed to request a label Y for any instance X which it considers to be the most informative. The contribution of this work consists of two parts: first, we provide the minimax lower bounds for the performance of active learning methods. Second, we propose an active learning algorithm which attains nearly optimal rates over a broad class of underlying distributions and is adaptive with respect to the unknown parameters of the problem. The second part of this thesis is related to sparse recovery in the framework of dictionary learning. Let (X,Y) be a random couple with unknown distribution P. Given a collection of functions H, the goal of dictionary learning is to construct a prediction rule for Y given by a linear combination of the elements of H. The problem is sparse if there exists a good prediction rule that depends on a small number of functions from H. We propose an estimator of the unknown optimal prediction rule based on penalized empirical risk minimization algorithm. We show that the proposed estimator is able to take advantage of the possible sparse structure of the problem by providing probabilistic bounds for its performance.
28

Stewart, Robert Grisham. "A Statistical Evaluation of Algorithms for Independently Seeding Pseudo-Random Number Generators of Type Multiplicative Congruential (Lehmer-Class)." Digital Commons @ East Tennessee State University, 2007. https://dc.etsu.edu/etd/2049.

Abstract:
To be effective, a linear congruential random number generator (LCG) should produce values that are (a) uniformly distributed on the unit interval (0,1) excluding endpoints and (b) substantially free of serial correlation. It has been found that many statistical methods produce inflated Type I error rates for correlated observations. Theoretically, independently seeding an LCG under the following conditions attenuates serial correlation: (a) simple random sampling of seeds, (b) non-replicate streams, (c) non-overlapping streams, and (d) non-adjoining streams. Accordingly, 4 algorithms (each satisfying at least 1 condition) were developed: (a) zero-leap, (b) fixed-leap, (c) scaled random-leap, and (d) unscaled random-leap. Note that the latter satisfied all 4 independent seeding conditions. To assess serial correlation, univariate and multivariate simulations were conducted at 3 equally spaced intervals for each algorithm (N=24) and measured using 3 randomness tests: (a) the serial correlation test, (b) the runs up test, and (c) the white noise test. A one-way balanced multivariate analysis of variance (MANOVA) was used to test 4 hypotheses: (a) omnibus, (b) contrast of unscaled vs. others, (c) contrast of scaled vs. others, and (d) contrast of fixed vs. others. The MANOVA assumptions of independence, normality, and homogeneity were satisfied. In sum, the seeding algorithms did not differ significantly from each other (omnibus hypothesis). For the contrast hypotheses, only the fixed-leap algorithm differed significantly from all other algorithms. Surprisingly, the scaled random-leap offered the least difference among the algorithms (theoretically this algorithm should have produced the second largest difference). Although not fully supported by the research design used in this study, it is thought that the unscaled random-leap algorithm is the best choice for independently seeding the multiplicative congruential random number generator. Accordingly, suggestions for further research are proposed.
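A minimal sketch of a Lehmer-class multiplicative congruential generator with fixed-leap seeding: stream i starts a fixed leap ahead of stream i-1, so streams cannot overlap if each consumes fewer draws than the leap. The parameters are the classic "minimal standard" LCG; the leap size is an illustrative assumption.

```python
M, A = 2**31 - 1, 16807                          # minimal-standard Lehmer LCG

def leap(seed, n):
    """Advance the generator n steps in O(log n): seed * A^n mod M."""
    return (seed * pow(A, n, M)) % M

def lcg_stream(seed, length):
    x = seed
    for _ in range(length):
        x = (A * x) % M
        yield x / M                              # uniform on (0, 1), endpoints excluded

LEAP = 10**6                                     # larger than any stream's usage
streams = [list(lcg_stream(leap(12345, i * LEAP), 1_000)) for i in range(4)]
```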
29

Ledet, Jeffrey H. "Simulation and Performance Evaluation of Algorithms for Unmanned Aircraft Conflict Detection and Resolution." ScholarWorks@UNO, 2016. http://scholarworks.uno.edu/td/2168.

Abstract:
The problem of aircraft conflict detection and resolution (CDR) in uncertainty is addressed in this thesis. The main goal in CDR is to provide safety for the aircraft while minimizing their fuel consumption and flight delays. In reality, a high degree of uncertainty can exist in certain aircraft-aircraft encounters especially in cases where aircraft do not have the capabilities to communicate with each other. Through the use of a probabilistic approach and a multiple model (MM) trajectory information processing framework, this uncertainty can be effectively handled. For conflict detection, a randomized Monte Carlo (MC) algorithm is used to accurately detect conflicts, and, if a conflict is detected, a conflict resolution algorithm is run that utilizes a sequential list Viterbi algorithm. This thesis presents the MM CDR method and a comprehensive MC simulation and performance evaluation study that demonstrates its capabilities and efficiency.
30

DENEVI, GIULIA. "Efficient Lifelong Learning Algorithms: Regret Bounds and Statistical Guarantees." Doctoral thesis, Università degli studi di Genova, 2019. http://hdl.handle.net/11567/986813.

Abstract:
We study the Meta-Learning paradigm where the goal is to select an algorithm in a prescribed family – usually denoted as inner or within-task algorithm – that is appropriate to address a class of learning problems (tasks), sharing specific similarities. More precisely, we aim at designing a procedure, called meta-algorithm, that is able to infer this tasks' relatedness from a sequence of observed tasks and to exploit such knowledge in order to return a within-task algorithm in the class that is best suited to solve a new similar task. We are interested in the online Meta-Learning setting, also known as Lifelong Learning. In this scenario the meta-algorithm receives the tasks sequentially and it incrementally adapts the inner algorithm on the fly as the tasks arrive. In particular, we refer to the framework in which also the within-task data are processed sequentially by the inner algorithm as Online-Within-Online (OWO) Meta-Learning, while we use the term Online-Within-Batch (OWB) Meta-Learning to denote the setting in which the within-task data are processed in a single batch. In this work we propose an OWO Meta-Learning method based on primal-dual Online Learning. Our method is theoretically grounded and it is able to cover various types of tasks' relatedness and learning algorithms. More precisely, we focus on the family of inner algorithms given by a parametrized variant of Follow The Regularized Leader (FTRL) aiming at minimizing the within-task regularized empirical risk. The inner algorithm in this class is incrementally adapted by a FTRL meta-algorithm using the within-task minimum regularized empirical risk as the meta-loss. In order to keep the process fully online, we use the online inner algorithm to approximate the subgradients used by the meta-algorithm and we show how to exploit an upper bound on this approximation error in order to derive a cumulative error bound for the proposed method. Our analysis can be adapted to the statistical setting by two nested online-to-batch conversion steps. We also show how the proposed OWO method can provide statistical guarantees comparable to its natural more expensive OWB variant, where the inner online algorithm is substituted by the batch minimizer of the regularized empirical risk. Finally, we apply our method to two important families of learning algorithms parametrized by a bias vector or a linear feature map.
31

Javelle, Jérôme. "Cryptographie Quantique : Protocoles et Graphes." Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENM093/document.

Abstract:
I want to realize an optimal theoretical model for quantum secret sharing protocols based on graph states. The main parameter of a threshold quantum secret sharing scheme is the size of the largest set of players that can not access the secret. Thus, my goal is to find a collection of protocols for which the value of this parameter is the smallest possible. I also study the links between quantum secret sharing protocols and families of curves in algebraic geometry
32

Hededal, Klincov Lazar, and Ali Symeri. "Devising a Trend-break-detection Algorithm of stored Key Performance Indicators for Telecom Equipment." Thesis, KTH, Data- och elektroteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-208965.

Abstract:
A prevalent problem for testers at Ericsson is that performance test results are continuously generated but not analyzed: the time between the occurrence of a problem and information about it is long and variable, because the manual analysis of log files is time consuming and tedious. The requested solution is automation with an algorithm, based on statistical methods, that analyzes the performance data and issues a notification when problems occur. A binary classifier algorithm was developed and evaluated as a solution to the stated problem. Evaluated on simulated data, the algorithm detected trend breaks with an accuracy of 97.54%. Furthermore, a correlation analysis was carried out between performance and hardware to gain insight into how hardware configurations affect test runs.
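The thesis does not spell out its classifier here; as a minimal statistical sketch of trend-break detection, the hypothetical function below flags a break when the mean of a recent KPI window drifts more than a z-score threshold away from a baseline window. Window sizes and the threshold are illustrative assumptions.

import numpy as np

def trend_break(kpi, baseline=50, window=10, z_threshold=3.0):
    """Binary classifier: return True if the mean of the last `window`
    KPI samples deviates from the baseline mean by more than
    `z_threshold` standard errors."""
    base = np.asarray(kpi[:baseline], dtype=float)
    recent = np.asarray(kpi[-window:], dtype=float)
    se = base.std(ddof=1) / np.sqrt(window)
    z = abs(recent.mean() - base.mean()) / se
    return z > z_threshold

rng = np.random.default_rng(1)
stable = rng.normal(100.0, 2.0, size=80)
broken = np.concatenate([stable, rng.normal(110.0, 2.0, size=10)])
print(trend_break(stable), trend_break(broken))   # False True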
33

Roux, Jeanne-Marie. "Introduction to graphical models with an application in finding coplanar points." Thesis, Stellenbosch : University of Stellenbosch, 2010. http://hdl.handle.net/10019.1/4094.

Abstract:
Thesis (MSc (Applied Mathematics))--University of Stellenbosch, 2010.
This thesis provides an introduction to the statistical modeling technique known as graphical models. Since graph theory and probability theory are the two legs of graphical models, these two topics are presented and then combined to produce two examples of graphical models: Bayesian networks and Markov random fields. Furthermore, the max-sum, sum-product and junction tree algorithms are discussed. The graphical modeling technique is then applied to the specific problem of finding coplanar points in stereo images taken with an uncalibrated camera. Although graphical models turn out not to be the fastest method for this application, the example illustrates how to apply the technique to a real-life problem.
National Research Foundation (South Africa)
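As a small illustration of the sum-product idea discussed in the abstract above, the sketch below computes a marginal in a three-node chain Bayesian network by message passing and checks it by brute force; all probability tables are made up.

import numpy as np

# Chain Bayesian network A -> B -> C with made-up conditional tables.
p_a = np.array([0.6, 0.4])              # P(A)
p_b_a = np.array([[0.7, 0.3],           # P(B | A): rows index A
                  [0.2, 0.8]])
p_c_b = np.array([[0.9, 0.1],           # P(C | B): rows index B
                  [0.4, 0.6]])

# Sum-product message passing: marginalize A out, then B out.
msg_ab = p_a @ p_b_a        # message to B: P(B)
msg_bc = msg_ab @ p_c_b     # message to C: P(C)
print("P(C) =", msg_bc)

# Brute-force check over all joint configurations.
joint = p_a[:, None, None] * p_b_a[:, :, None] * p_c_b[None, :, :]
print("check:", joint.sum(axis=(0, 1)))

On a chain this is just matrix-vector multiplication; the junction tree algorithm generalizes the same bookkeeping to graphs with cycles.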
34

McInerney, Robert E. "Decision making under uncertainty." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:a34e87ad-8330-42df-8ba6-d55f10529331.

Abstract:
Operating and interacting in an environment requires the ability to manage uncertainty and to choose definite courses of action. In this thesis we look to Bayesian probability theory as the means to achieve the former, and find that through rigorous application of the rules it prescribes we can, in theory, solve problems of decision making under uncertainty. Unfortunately such methodology is intractable in real-world problems, and thus approximation of one form or another is inevitable. Many techniques make use of heuristic procedures for managing uncertainty; we note that such methods suffer from unreliable performance and rely on the specification of ad-hoc variables. Performance is often judged according to long-term asymptotic measures, which we argue ignore the most complex and relevant parts of the problem domain. We therefore look to develop principled approximate methods that preserve the meaning of Bayesian theory but operate with the scalability of heuristics. We start by looking at function approximation in continuous state and action spaces using Gaussian processes, and develop a novel family of covariance functions which allow tractable inference methods to accommodate some of the uncertainty lost by not following full Bayesian inference. We also investigate the exploration-versus-exploitation tradeoff in the context of the multi-armed bandit, and demonstrate that principled approximations behave close to optimally and perform significantly better than heuristics on a range of experimental test beds.
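The multi-armed bandit mentioned above has a classic principled Bayesian strategy, Thompson sampling, which can serve as a stand-in illustration of the principled-versus-heuristic contrast (the thesis's own approximations are more involved). The sketch below compares it with an ε-greedy heuristic on a Bernoulli bandit; all numbers are arbitrary.

import numpy as np

def thompson(true_p, horizon=5000, rng=None):
    """Thompson sampling: keep a Beta posterior per arm, sample from
    each posterior and pull the arm with the largest sample."""
    rng = np.random.default_rng() if rng is None else rng
    wins = np.ones(len(true_p))
    losses = np.ones(len(true_p))
    reward = 0
    for _ in range(horizon):
        arm = int(np.argmax(rng.beta(wins, losses)))
        r = rng.random() < true_p[arm]
        reward += r
        wins[arm] += r
        losses[arm] += 1 - r
    return reward

def eps_greedy(true_p, horizon=5000, eps=0.1, rng=None):
    """Heuristic baseline: explore uniformly with probability eps."""
    rng = np.random.default_rng() if rng is None else rng
    counts = np.zeros(len(true_p))
    values = np.zeros(len(true_p))
    reward = 0
    for _ in range(horizon):
        if rng.random() < eps:
            arm = int(rng.integers(len(true_p)))
        else:
            arm = int(np.argmax(values))
        r = rng.random() < true_p[arm]
        reward += r
        counts[arm] += 1
        values[arm] += (r - values[arm]) / counts[arm]
    return reward

p = [0.3, 0.5, 0.55]
print(thompson(p, rng=np.random.default_rng(0)),
      eps_greedy(p, rng=np.random.default_rng(0)))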
35

Toft, Albin. "Particle-based Parameter Inference in Stochastic Volatility Models: Batch vs. Online." Thesis, KTH, Matematisk statistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-252313.

Abstract:
This thesis compares an online parameter estimator to an offline estimator, both based on the PaRIS algorithm, for estimating the parameters of a stochastic volatility model. By modeling the stochastic volatility model as a hidden Markov model, estimators based on particle filters can be implemented to estimate the unknown parameters of the model. The results suggest that the proposed online estimator can be considered superior to its offline counterpart. The results are, however, somewhat inconclusive, and further research on the subject is recommended.
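PaRIS itself is a particle smoothing algorithm; as background on the particle-based machinery, here is a minimal bootstrap particle filter for a standard stochastic volatility model. It is a sketch under assumed dynamics, not the estimator studied in the thesis, and all parameter values are illustrative.

import numpy as np

def sv_particle_filter(y, phi, sigma, beta, n_particles=500, rng=None):
    """Bootstrap particle filter for the stochastic volatility model
        x_t = phi*x_{t-1} + sigma*eta_t,   y_t = beta*exp(x_t/2)*eps_t,
    returning the log-likelihood estimate and filtered state means."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(y)
    x = rng.normal(0.0, sigma / np.sqrt(1 - phi**2), n_particles)
    loglik, means = 0.0, np.empty(n)
    for t in range(n):
        x = phi * x + sigma * rng.normal(size=n_particles)   # propagate
        var = (beta * np.exp(x / 2.0))**2
        logw = -0.5 * (np.log(2 * np.pi * var) + y[t]**2 / var)
        m = logw.max()                                       # log-sum-exp
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())
        w /= w.sum()
        means[t] = np.sum(w * x)
        x = x[rng.choice(n_particles, n_particles, p=w)]     # resample
    return loglik, means

# Simulate from the model and filter it back.
rng = np.random.default_rng(2)
phi, sigma, beta, n = 0.95, 0.3, 0.8, 200
xs = np.empty(n); xs[0] = rng.normal(0, sigma / np.sqrt(1 - phi**2))
for t in range(1, n):
    xs[t] = phi * xs[t - 1] + sigma * rng.normal()
ys = beta * np.exp(xs / 2.0) * rng.normal(size=n)
print(sv_particle_filter(ys, phi, sigma, beta, rng=rng)[0])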
36

Granström, Daria, and Johan Abrahamsson. "Loan Default Prediction using Supervised Machine Learning Algorithms." Thesis, KTH, Matematisk statistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-252312.

Abstract:
It is essential for a bank to estimate the credit risk it carries and the magnitude of its exposure in case of non-performing customers. This kind of risk has been estimated with statistical methods for decades, and with the recent developments in machine learning there has been interest in investigating whether machine learning techniques can quantify the risk better. The aim of this thesis is to examine which method, from a chosen set of machine learning techniques, exhibits the best performance in default prediction with regard to chosen model evaluation metrics. The investigated techniques were Logistic Regression, Random Forest, Decision Tree, AdaBoost, XGBoost, Artificial Neural Network and Support Vector Machine. An oversampling technique called SMOTE was implemented to treat the class imbalance in the response variable. The results showed that XGBoost without SMOTE obtained the best result with respect to the chosen model evaluation metric.
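As a hypothetical illustration of the SMOTE-plus-XGBoost comparison (assuming the scikit-learn, imbalanced-learn and xgboost packages are installed; the data, features and settings are invented, and the real study used bank data and several metrics):

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from imblearn.over_sampling import SMOTE
from xgboost import XGBClassifier

# Imbalanced synthetic stand-in for a loan-default data set (5% defaults).
X, y = make_classification(n_samples=20000, n_features=20,
                           weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Plain XGBoost on the imbalanced training data.
plain = XGBClassifier(n_estimators=200)
plain.fit(X_tr, y_tr)

# XGBoost after oversampling the minority class with SMOTE.
X_sm, y_sm = SMOTE(random_state=0).fit_resample(X_tr, y_tr)
smoted = XGBClassifier(n_estimators=200)
smoted.fit(X_sm, y_sm)

for name, model in [("plain", plain), ("smote", smoted)]:
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")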
37

Cénac, Peggy. "Récursivité au carrefour de la modélisation de séquences, des arbres aléatoires, des algorithmes stochastiques et des martingales." Habilitation à diriger des recherches, Université de Bourgogne, 2013. http://tel.archives-ouvertes.fr/tel-00954528.

Abstract:
This habilitation thesis is a synthesis of several studies at the intersection of dynamical systems in the statistical analysis of sequences, the analysis of algorithms on random trees, and discrete stochastic processes. The results established have applications in fields ranging from biological sequences to linear regression models and branching processes, by way of functional statistics and the estimation of risk indicators applied to insurance. All of the results exploit, in one way or another, the recursive nature of the structure under study, bringing out invariants such as martingales. Martingales are at the heart of this thesis, used both as tools in the proofs and as objects of study in their own right.
38

Barreau, Thibaud. "Strategic optimization of a global bank capital management using statistical methods on open data." Thesis, KTH, Matematisk statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-273413.

Abstract:
This project is about optimizing the capital management of a French global bank. Capital management here means allocating the available capital to the different business units. The project focuses on optimizing the allocation of risk-weighted assets (RWA), as a representation of the allocated capital, between some of the business units of the bank, with emphasis on the market and retail parts. The first step was to model the evolution of a business unit given an economic environment; the second was to optimize the distribution of RWA among the selected parts of the bank.
39

Reinhammar, Ragna. "Estimation of Regression Coefficients under a Truncated Covariate with Missing Values." Thesis, Uppsala universitet, Statistiska institutionen, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-385672.

Abstract:
By means of a Monte Carlo study, this paper investigates the relative performance of Listwise Deletion, the EM-algorithm and the default algorithm in the MICE-package for R (PMM) in estimating regression coefficients under a left truncated covariate with missing values. The intention is to investigate whether the three frequently used missing data techniques are robust against left truncation when missing values are MCAR or MAR. The results suggest that no technique is superior overall in all combinations of factors studied. The EM-algorithm is unaffected by left truncation under MCAR but negatively affected by strong left truncation under MAR. Compared to the default MICE-algorithm, the performance of EM is more stable across distributions and combinations of sample size and missing rate. The default MICE-algorithm is improved by left truncation but is sensitive to missingness pattern and missing rate. Compared to Listwise Deletion, the EM-algorithm is less robust against left truncation when missing values are MAR. However, the decline in performance of the EM-algorithm is not large enough for the algorithm to be completely outperformed by Listwise Deletion, especially not when the missing rate is moderate. Listwise Deletion might be robust against left truncation but is inefficient.
40

Maire, F. "Détection et classification de cibles multispectrales dans l'infrarouge." Phd thesis, Telecom ParisTech, 2014. http://tel.archives-ouvertes.fr/tel-01018701.

Abstract:
Systems protecting sensitive sites must be able to detect potential threats early enough to deploy a defence strategy. To this end, aircraft detection and recognition methods based on multispectral infrared images must be suited to low-resolution images and robust to the spectral and spatial variability of the targets. In this thesis we develop statistical detection and recognition methods satisfying these constraints. First, we specify an anomaly detection method for multispectral images that combines a spectral likelihood computation with a study of the level sets of the Mahalanobis transform of the image. This method requires no prior information about the aircraft and allows us to identify the images containing targets. These images are then considered as realizations of a statistical model of observations fluctuating spectrally and spatially around unknown characteristic shapes. The parameters of this model are estimated with a new sequential unsupervised learning methodology for models with missing data that we have developed. This model finally allows us to propose a target recognition method based on the maximum a posteriori estimator. The encouraging results, in both detection and classification, justify the interest of developing devices for acquiring multispectral images. These methods also allowed us to identify the optimal groupings of spectral bands for detecting and recognizing low-resolution aircraft in the infrared.
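The spectral anomaly detection step is in the spirit of Mahalanobis-distance (RX-type) detectors; a minimal sketch on synthetic multispectral pixels follows. The threshold and data are illustrative, and the thesis's method additionally studies the level sets of the Mahalanobis transform.

import numpy as np

def mahalanobis_anomalies(pixels, threshold=3.0):
    """Flag multispectral pixels whose Mahalanobis distance to the
    background distribution exceeds `threshold`. `pixels` has shape
    (n_pixels, n_bands)."""
    mu = pixels.mean(axis=0)
    cov = np.cov(pixels, rowvar=False)
    inv = np.linalg.inv(cov)
    d = pixels - mu
    dist = np.sqrt(np.einsum("ij,jk,ik->i", d, inv, d))
    return dist > threshold, dist

# Background clutter plus a few bright "target" pixels in 4 bands.
rng = np.random.default_rng(3)
background = rng.normal(0.0, 1.0, size=(5000, 4))
targets = rng.normal(6.0, 1.0, size=(5, 4))
image = np.vstack([background, targets])
flags, _ = mahalanobis_anomalies(image)
print("flagged:", flags.sum(), "of", len(image))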
41

Rombach, Michaela Puck. "Colouring, centrality and core-periphery structure in graphs." Thesis, University of Oxford, 2013. http://ora.ox.ac.uk/objects/uuid:7326ecc6-a447-474f-a03b-6ec244831ad4.

Abstract:
Krivelevich and Patkós conjectured in 2009 that χ(G(n, p)) ∼ χ_=(G(n, p)) ∼ χ_=^*(G(n, p)) for C/n < p < 1 − ε, where ε > 0 and χ_= and χ_=^* denote the equitable chromatic number and the equitable chromatic threshold. We prove this conjecture for n^(−1+ε₁) < p < 1 − ε₂, where ε₁, ε₂ > 0. We investigate several measures that have been proposed to indicate centrality of nodes in networks, and find examples of networks where they fail to distinguish any of the nodes from one another. We develop a new method to investigate core-periphery structure, which entails identifying densely-connected core nodes and sparsely-connected periphery nodes. Finally, we present an experiment and an analysis of empirical networks, namely functional human brain networks. We find that reconfiguration patterns of dynamic communities can be used to classify nodes into a stiff core, a flexible periphery, and a bulk. The separation between the stiff core and the flexible periphery changes as a person learns a simple motor skill and, importantly, is a good predictor of how successful the person is at learning the skill. This temporally defined core-periphery organisation corresponds well with the core-periphery structure detected by the method we proposed earlier in the static networks created by averaging over the subjects' dynamic functional brain networks.
42

Zaldivar, Cynthia. "On the Performance of some Poisson Ridge Regression Estimators." FIU Digital Commons, 2018. https://digitalcommons.fiu.edu/etd/3669.

Abstract:
Multiple regression models play an important role in analyzing and making predictions about data. Prediction accuracy suffers when two or more explanatory variables in the model are highly correlated; one solution is to use ridge regression. The purpose of this thesis is to study the performance of available ridge regression estimators for Poisson regression models in the presence of moderately to highly correlated variables. As performance criteria we use the mean square error (MSE), the mean absolute percentage error (MAPE), and the percentage of times the maximum likelihood (ML) estimator produces a higher MSE than the ridge regression estimator. A Monte Carlo simulation study was conducted to compare the performance of the estimators under three experimental conditions: correlation, sample size, and intercept. The simulation results show that all ridge estimators performed better than the ML estimator. Based on these results, we propose new estimators, which performed very well compared to the original ones. Finally, the estimators are illustrated using data on recreational habits.
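For reference, one common form of Poisson ridge estimator in this literature — stated here as background; the thesis's exact estimators may differ — shrinks the maximum likelihood solution using the IRLS weight matrix, with ridge parameter k > 0:

\hat{\beta}_{\mathrm{ridge}} = \bigl(X^{\top}\hat{W}X + kI\bigr)^{-1} X^{\top}\hat{W}X\,\hat{\beta}_{\mathrm{ML}},
\qquad \hat{W} = \operatorname{diag}(\hat{\mu}_1,\ldots,\hat{\mu}_n), \quad \hat{\mu}_i = \exp\bigl(x_i^{\top}\hat{\beta}_{\mathrm{ML}}\bigr).

The ridge estimators compared in such studies differ mainly in how the parameter k is estimated from the data.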
43

Shabala, Alexander. "Mathematical modelling of oncolytic virotherapy." Thesis, University of Oxford, 2013. http://ora.ox.ac.uk/objects/uuid:cca2c9bc-cbd4-4651-9b59-8a4dea7245d1.

Abstract:
This thesis is concerned with mathematical modelling of oncolytic virotherapy: the use of genetically modified viruses to selectively spread, replicate and destroy cancerous cells in solid tumours. Traditional spatially-dependent modelling approaches have previously assumed that virus spread is due to viral diffusion in solid tumours, and also neglect the time delay introduced by the lytic cycle for viral replication within host cells. A deterministic, age-structured reaction-diffusion model is developed for the spatially-dependent interactions of uninfected cells, infected cells and virus particles, with the spread of virus particles facilitated by infected cell motility and delay. Evidence of travelling wave behaviour is shown, and an asymptotic approximation for the wave speed is derived as a function of key parameters. Next, the same physical assumptions as in the continuum model are used to develop an equivalent discrete, probabilistic model that is valid in the limit of low particle concentrations. This mesoscopic, compartment-based model is then validated against known test cases, and it is shown that the localised nature of infected cell bursts leads to inconsistencies between the discrete and continuum models. The qualitative behaviour of this stochastic model is then analysed for a range of key experimentally-controllable parameters. Two-dimensional simulations of in vivo and in vitro therapies are then analysed to determine the effects of virus burst size, length of lytic cycle, infected cell motility, and initial viral distribution on the wave speed, consistency of results and overall success of therapy. Finally, the experimental difficulty of measuring the effective motility of cells is addressed by considering effective medium approximations of diffusion through heterogeneous tumours. Considering an idealised tumour consisting of periodic obstacles in free space, a two-scale homogenisation technique is used to show the effects of obstacle shape on the effective diffusivity. A novel method for calculating the effective continuum behaviour of random walks on lattices is then developed for the limiting case where microscopic interactions are discrete.
44

Osborne, Michael A. "Bayesian Gaussian processes for sequential prediction, optimisation and quadrature." Thesis, University of Oxford, 2010. http://ora.ox.ac.uk/objects/uuid:1418c926-6636-4d96-8bf6-5d94240f3d1f.

Abstract:
We develop a family of Bayesian algorithms built around Gaussian processes for various problems posed by sensor networks. We firstly introduce an iterative Gaussian process for multi-sensor inference problems, and show how our algorithm is able to cope with data that may be noisy, missing, delayed and/or correlated. Our algorithm can also effectively manage data that features changepoints, such as sensor faults. Extensions to our algorithm allow us to tackle some of the decision problems faced in sensor networks, including observation scheduling. Along these lines, we also propose a general method of global optimisation, Gaussian process global optimisation (GPGO), and demonstrate how it may be used for sensor placement. Our algorithms operate within a complete Bayesian probabilistic framework. As such, we show how the hyperparameters of our system can be marginalised by use of Bayesian quadrature, a principled method of approximate integration. Similar techniques also allow us to produce full posterior distributions for any hyperparameters of interest, such as the location of changepoints. We frame the selection of the positions of the hyperparameter samples required by Bayesian quadrature as a decision problem, with the aim of minimising the uncertainty we possess about the values of the integrals we are approximating. Taking this approach, we have developed sampling for Bayesian quadrature (SBQ), a principled competitor to Monte Carlo methods. We conclude by testing our proposals on real weather sensor networks. We further benchmark GPGO on a wide range of canonical test problems, over which it achieves a significant improvement on its competitors. Finally, the efficacy of SBQ is demonstrated in the context of both prediction and optimisation.
45

Esteve, Rothenberg Christian Rodolfo 1982. "Compact forwarding = uma abordagem probabilística para o encaminhamento de pacotes em redes orientadas a conteúdo." [s.n.], 2010. http://repositorio.unicamp.br/jspui/handle/REPOSIP/261004.

Abstract:
Advisor: Mauricio Ferreira Magalhães
Doctoral thesis – Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
This thesis introduces the concept of compact forwarding in the field of content-oriented networks. The main idea behind this concept is taking a probabilistic approach to the problem of packet forwarding in networks centered on content identifiers rather than traditional host addresses. The fundamental question explored is where to place the packet forwarding state: in network nodes or in packet headers? Solutions for both extremes are proposed. In the SPSwitch, approximate forwarding state is kept in network nodes; in LIPSIN, the state is carried in the packets themselves. Both approaches are based on probabilistic packet forwarding functions inspired by the Bloom filter data structure. The approximate forwarding state comes at the cost of additional considerations due to the effects of one-sided error-prone data structures. The thesis contributes a series of techniques to mitigate the false positive errors. The proposed compact forwarding methods are experimentally validated in several practical networking scenarios.
Doctorate
Computer Engineering
Doctor of Electrical Engineering
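The probabilistic forwarding functions in both SPSwitch and LIPSIN build on the Bloom filter; a minimal, self-contained sketch of the data structure and its one-sided error follows. The identifiers and sizes are illustrative, not the thesis's encoding.

import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash functions over an m-bit array.
    Membership tests may yield false positives but never false negatives,
    which is the one-sided error the forwarding methods must manage."""
    def __init__(self, m=1024, k=4):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def __contains__(self, item):
        return all(self.bits >> pos & 1 for pos in self._positions(item))

# Encode a set of link identifiers "in the packet header" and query it.
bf = BloomFilter()
for link in ["linkA", "linkB", "linkC"]:
    bf.add(link)
print("linkB" in bf, "linkZ" in bf)   # True, (almost certainly) False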
46

Segal, Aleksandr V. "Iterative Local Model Selection for tracking and mapping." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:8690e0e0-33c5-403e-afdf-e5538e5d304f.

Abstract:
The past decade has seen great progress in research on large-scale mapping and perception in static environments. Real-world perception requires handling uncertain situations with multiple possible interpretations: e.g. changing appearances, dynamic objects, and varying motion models. These aspects of perception have largely been avoided through the use of heuristics and preprocessing. This thesis is motivated by the challenge of including discrete reasoning directly in the estimation process. We approach the problem by using Conditional Linear Gaussian Networks (CLGNs) as a generalization of least-squares estimation which allows the inclusion of discrete model selection variables. CLGNs are a powerful framework for modeling sparse multi-modal inference problems, but are difficult to solve efficiently. We propose the Iterative Local Model Selection (ILMS) algorithm as a general approximation strategy specifically geared towards the large-scale problems encountered in tracking and mapping. Chapter 4 introduces the ILMS algorithm and compares its performance to traditional approximate inference techniques for Switching Linear Dynamical Systems (SLDSs). These evaluations validate the characteristics of the algorithm which make it particularly attractive for applications in robot perception: chief among these are reliability of convergence, consistent performance, and a reasonable trade-off between accuracy and efficiency. In Chapter 5, we show how the data association problem in multi-target tracking can be formulated as an SLDS and effectively solved using ILMS. The SLDS formulation allows the addition of discrete variables which model outliers and clutter in the scene. Evaluations on standard pedestrian tracking sequences demonstrate performance competitive with the state of the art. Chapter 6 applies the ILMS algorithm to robust pose graph estimation. A non-linear CLGN is constructed by introducing outlier indicator variables for all loop closures, and the standard Gauss-Newton optimization algorithm is modified to use ILMS as an inference algorithm in between linearizations. Experiments demonstrate a large improvement over state-of-the-art robust techniques. The ILMS strategy presented in this thesis is simple and general, but still works surprisingly well; we argue that these properties are encouraging for wider applicability to problems in robot perception.
47

Guo, Mingming. "User-Centric Privacy Preservation in Mobile and Location-Aware Applications." FIU Digital Commons, 2018. https://digitalcommons.fiu.edu/etd/3674.

Abstract:
The mobile and wireless community has brought a significant growth of location-aware devices, including smart phones, connected vehicles and IoT devices. The combination of location-aware sensing, data processing and wireless communication in these devices leads to the rapid development of mobile and location-aware applications. Meanwhile, user privacy is becoming an indispensable concern. These mobile and location-aware applications, which collect data from mobile sensors carried by users or vehicles, return valuable data collection services (e.g., health condition monitoring, traffic monitoring, and natural disaster forecasting) in real time. The sequential spatial-temporal data queries sent by users reveal their location trajectory information. This information not only contains users' movement patterns, but also reveals sensitive attributes such as users' personal habits, preferences, and home and work addresses. By exploiting this type of information, attackers can extract and sell user profile data, degrade subscribed data services, and even jeopardize personal safety. This research starts from the realization that user privacy is lost along with the popular usage of emerging location-aware applications; the outcome seeks to relieve user location and trajectory privacy problems. First, we develop a pseudonym-based anonymity zone generation scheme against a strong adversary model in continuous location-based services. Based on a geometric transformation algorithm, this scheme generates distributed anonymity zones with personalized privacy parameters to conceal users' real location trajectories. Second, based on historical query data analysis, we introduce a query-feature-based probabilistic inference attack, and propose query-aware randomized algorithms to preserve user privacy by distorting the probabilistic inference conducted by attackers. Finally, we develop a privacy-aware mobile sensing mechanism to help vehicular users reduce the number of queries to be sent to the adversarial servers. In this mechanism, mobile vehicular users can selectively query nearby nodes in a peer-to-peer way for privacy protection in vehicular networks.
48

Hunt, Julian David. "Integration of rationale management with multi-criteria decision analysis, probabilistic forecasting and semantics : application to the UK energy sector." Thesis, University of Oxford, 2013. http://ora.ox.ac.uk/objects/uuid:2cc24d23-3e93-42e0-bb7a-6e39a65d7425.

Abstract:
This thesis presents a new integrated tool and decision support framework to approach complex problems resulting from the interaction of many multi-criteria issues. The framework is embedded in an integrated tool called OUTDO (Oxford University Tool for Decision Organisation). OUTDO integrates Multi-Criteria Decision Analysis (MCDA), decision rationale management with a modified Issue-Based Information Systems (IBIS) representation, and probabilistic forecasting to effectively capture the essential reasons why decisions are made and to dynamically re-use the rationale. In doing so, it allows exploration of how changes in external parameters affect complicated and uncertain decision making processes in the present and in the future. Once the decision maker constructs his or her own decision process, OUTDO checks if the decision process is consistent and coherent and looks for possible ways to improve it using three new semantic-based decision support approaches. For this reason, two ontologies (the Decision Ontology and the Energy Ontology) were integrated into OUTDO to provide it with these semantic capabilities. The Decision Ontology keeps a record of the decision rationale extracted from OUTDO and the Energy Ontology describes the energy generation domain, focusing on the water requirement in thermoelectric power plants. A case study, with the objective of recommending electricity generation and steam condensation technologies for ten different regions in the UK, is used to verify OUTDO’s features and reach conclusions about the overall work.
49

Islam, Md Samsul, Lin Zhou, and Fei Li. "Application of Artificial Intelligence (Artificial Neural Network) to Assess Credit Risk : A Predictive Model For Credit Card Scoring." Thesis, Blekinge Tekniska Högskola, Sektionen för management, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2099.

Abstract:
Credit decisions are extremely vital for any type of financial institution, because defaulters can generate huge financial losses. Some banks use judgmental decisions, meaning credit analysts go through every application separately, while other banks use a credit scoring system, or a combination of both. Credit scoring systems use many types of statistical models, but recently professionals have started looking for alternative algorithms that can provide better classification accuracy. A neural network can be a suitable alternative. It is apparent from the classification outcomes of this study that the neural network gives slightly better results than discriminant analysis and logistic regression. It should be noted that it is not possible to draw the general conclusion that neural networks hold better predictive ability than logistic regression and discriminant analysis, because this study covers only one dataset. Moreover, a "Bad Accepted" generates much higher costs than a "Good Rejected", and the neural network produces fewer "Bad Accepted" cases than discriminant analysis and logistic regression, so the neural network achieves a lower misclassification cost on the dataset used in this study. Furthermore, in the final section of this study, an optimization algorithm (a genetic algorithm) is proposed in order to obtain better classification accuracy through the configuration of the neural network architecture. It is vital to note, however, that the success of any predictive model largely depends on the predictor variables selected as model inputs. Several points should be considered in predictor variable selection: for example, some specific variables are prohibited in some countries, the variables taken together should provide the highest predictive strength, and variables may be judged through statistical analysis. This study also covers these standards for input variable selection.
50

Kaalen, Stefan. "Semi-Markov processes for calculating the safety of autonomous vehicles." Thesis, KTH, Matematisk statistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-252331.

Abstract:
Several manufacturers of road vehicles are working on developing autonomous vehicles. One subject that is often up for discussion when it comes to integrating autonomous road vehicles into the infrastructure is safety, and there is no common view of how safety should be quantified. As a contribution to this discussion, we propose describing each potential hazardous event of a vehicle as a semi-Markov process (SMP). A reliability-based method is presented that uses the semi-Markov representation to calculate the probability of a hazardous event occurring. The method simplifies the expression for the reliability using the Laplace-Stieltjes transform and calculates the transform of the reliability exactly; numerical inversion algorithms are then applied to approximate the reliability up to a desired error tolerance. The method is validated using alternative techniques and is thereafter applied to a system for automated steering based on a real example from the industry. A desirable evolution of the method would be a framework for how to represent each hazardous event as an SMP.
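As a rough complement to the transform-based method, the probability that a hazardous event, modeled as an absorbing state of a semi-Markov process, occurs before a time bound can also be estimated by brute-force Monte Carlo. The toy sketch below assumes invented states, sojourn distributions and transition probabilities; the thesis's exact inversion approach avoids this sampling error.

import numpy as np

def smp_hazard_probability(t_max=10000.0, n_runs=20000, rng=None):
    """Monte Carlo estimate of the probability that a toy semi-Markov
    process reaches the absorbing 'hazardous event' state within t_max.
    States: 0 = nominal, 1 = degraded, 2 = hazardous (absorbing)."""
    rng = np.random.default_rng() if rng is None else rng
    hits = 0
    for _ in range(n_runs):
        state, t = 0, 0.0
        while t < t_max and state != 2:
            if state == 0:
                t += rng.weibull(1.5) * 500.0   # Weibull time to degradation
                state = 1
            else:
                t += rng.exponential(50.0)      # sojourn in degraded state
                # repaired with prob 0.999, hazardous event otherwise
                state = 0 if rng.random() < 0.999 else 2
        hits += (state == 2)
    return hits / n_runs

p = smp_hazard_probability(rng=np.random.default_rng(4))
print(f"P(hazardous event before t_max) ≈ {p:.4f}")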