Dissertations / Theses on the topic 'Processi random'

To see the other types of publications on this topic, follow the link: Processi random.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Processi random.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses across a wide variety of disciplines and organise your bibliography correctly.

1

Dionigi, Pierfrancesco. "A random matrix theory approach to complex networks." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/18513/.

Full text
Abstract:
A formal mathematical approach to complex networks is presented through the use of Random Matrix Theory (RMT). Wigner's semicircle law is presented as a generalization of the Central Limit Theorem for certain ensembles of random matrices. The main methods for computing the spectral distribution of random matrices are also presented, and their differences are highlighted. We then study how RMT is connected to Free Probability. We also study how two apparently identical types of random graphs possess different spectral properties, by analysing their adjacency matrices; from this analysis some geometric and topological properties of the graphs are deduced, and the statistical correlation between vertices can be analysed. A random walk is then constructed on the graph by means of Markov chains, defining the transition matrix of the process through a suitably normalised adjacency matrix of the network. Finally, we show that the dynamical behaviour of the random walk is deeply connected with the eigenvalues of the transition matrix, and the main relations are presented.
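As a rough illustration of the computations described above (a minimal sketch with arbitrary parameters, not the author's code), the snippet below builds the adjacency matrix of an Erdős-Rényi graph, inspects its eigenvalue spectrum, and forms the degree-normalised transition matrix of the random walk, whose spectral gap governs how fast the walk mixes.
```python
import numpy as np

rng = np.random.default_rng(0)

# Adjacency matrix of an Erdos-Renyi random graph G(n, p): symmetric, zero diagonal.
n, p = 500, 0.05
upper = np.triu(rng.random((n, n)) < p, k=1)
A = (upper | upper.T).astype(float)

# Empirical spectrum of the rescaled adjacency matrix; for large n its histogram
# approaches Wigner's semicircle law on [-2, 2].
spectrum = np.linalg.eigvalsh(A / np.sqrt(n * p * (1 - p)))

# Transition matrix of the random walk: adjacency matrix normalised by the degrees
# (isolated vertices are essentially impossible at these parameter values).
deg = np.maximum(A.sum(axis=1), 1.0)
P = A / deg[:, None]

# The walk's dynamics are tied to the eigenvalues of P; the gap between the leading
# eigenvalue (1) and the second-largest one controls the mixing speed.
mu = np.sort(np.linalg.eigvals(P).real)[::-1]
print("spectral edge:", spectrum.max(), "  spectral gap:", 1.0 - mu[1])
```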
APA, Harvard, Vancouver, ISO, and other styles
2

Siena, Martina. "Caratterizzazione della permeabilità in mezzi porosi sintetici e naturali." Doctoral thesis, Università degli studi di Trieste, 2013. http://hdl.handle.net/10077/8661.

Full text
Abstract:
2011/2012
The work is aimed at providing some insights into the variability of hydrological properties in porous media, focusing in particular on permeability. We consider an approach which combines scaling and statistical analyses of air-permeability datasets with pore-scale numerical simulations of flow through porous media. The former investigation allows us to characterize permeability heterogeneity at the centimeter observation scale; the latter provides a description of heterogeneity on a millimeter scale by resolving physical processes occurring at the microscopic scale and deriving up-scaled quantities. Scaling and statistical analyses performed on synthetic permeability distributions as well as on datasets collected on real media support the identification of truncated fractional Brownian motion (tfBm) or truncated fractional Gaussian noise (tfGn) and of sub-Gaussian random processes subordinated to tfBm (or tfGn) as viable models for the interpretation of the variability of hydrological properties. Pore-scale numerical solutions of flow (i.e., in terms of velocity and pressure distributions) are performed on both randomly generated samples and real porous media reconstructed via X-ray Micro-Tomography. Different approaches for the enforcement of boundary conditions at the fluid-solid interface provide qualitatively similar results in terms of both microscopic and averaged quantities.
XXV Ciclo
1984
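As a toy companion to the tfBm/tfGn models named above (a sketch only: it draws plain fractional Gaussian noise by an exact Cholesky construction and turns it into a positive one-dimensional "permeability" profile, with no truncation or sub-Gaussian subordination), one could proceed as follows.
```python
import numpy as np

def fgn(n, hurst, rng):
    """Exact draw of fractional Gaussian noise via Cholesky of its covariance matrix."""
    k = np.arange(n)
    gamma = 0.5 * ((k + 1.0) ** (2 * hurst)
                   - 2.0 * k ** (2 * hurst)
                   + np.abs(k - 1.0) ** (2 * hurst))
    cov = gamma[np.abs(k[:, None] - k[None, :])]
    return np.linalg.cholesky(cov) @ rng.standard_normal(n)

rng = np.random.default_rng(1)
noise = fgn(256, hurst=0.8, rng=rng)           # correlated increments (fGn)
log_k = np.cumsum(noise)                       # a fractional-Brownian-motion-type path
permeability = np.exp(log_k - log_k.mean())    # positive, spatially correlated toy profile
print("range of the toy permeability profile:", permeability.min(), permeability.max())
```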
APA, Harvard, Vancouver, ISO, and other styles
3

Hannigan, Patrick. "Random polynomials." Thesis, University of Ulster, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.263248.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Li, Zheng. "Approximation to random process by wavelet basis." View abstract/electronic edition; access limited to Brown University users, 2008. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3318378.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Buckley, Stephen Philip. "Problems in random walks in random environments." Thesis, University of Oxford, 2011. http://ora.ox.ac.uk/objects/uuid:06a12be2-b831-4c2a-87b1-f0abccfb9b8b.

Full text
Abstract:
Recent years have seen progress in the analysis of the heat kernel for certain reversible random walks in random environments. In particular, the work of Barlow (2004) showed that the heat kernel for the random walk on the infinite component of supercritical bond percolation behaves in a Gaussian fashion. This heat kernel control was then used to prove a quenched functional central limit theorem. Following this work, several examples have been analysed with anomalous heat kernel behaviour and, in some cases, anomalous scaling limits. We begin by generalizing the first result, looking for sufficient conditions on the geometry of the environment that ensure standard heat kernel upper bounds hold. We prove that these conditions are satisfied with probability one in the case of the random walk on continuum percolation and use the heat kernel bounds to prove an invariance principle. The random walk in a dynamic environment is then considered. It is proven that if the environment evolves ergodically and is, in a certain sense, geometrically d-dimensional, then standard on-diagonal heat kernel bounds hold. Anomalous lower bounds on the heat kernel are also proven; in particular, the random conductance model is shown to be "more anomalous" in the dynamic case than in the static one. Finally, the reflected random walk amongst random conductances is considered. It is shown in one dimension that, under the usual scaling, this walk converges to reflected Brownian motion.
APA, Harvard, Vancouver, ISO, and other styles
6

Auret, Lidia. "Process monitoring and fault diagnosis using random forests." Thesis, Stellenbosch : University of Stellenbosch, 2010. http://hdl.handle.net/10019.1/5360.

Full text
Abstract:
Thesis (PhD (Process Engineering))--University of Stellenbosch, 2010.
Dissertation presented for the Degree of DOCTOR OF PHILOSOPHY (Extractive Metallurgical Engineering) in the Department of Process Engineering at the University of Stellenbosch
ENGLISH ABSTRACT: Fault diagnosis is an important component of process monitoring, relevant in the greater context of developing safer, cleaner and more cost efficient processes. Data-driven unsupervised (or feature extractive) approaches to fault diagnosis exploit the many measurements available on modern plants. Certain current unsupervised approaches are hampered by their linearity assumptions, motivating the investigation of nonlinear methods. The diversity of data structures also motivates the investigation of novel feature extraction methodologies in process monitoring. Random forests are recently proposed statistical inference tools, deriving their predictive accuracy from the nonlinear nature of their constituent decision tree members and the power of ensembles. Random forest committees provide more than just predictions; model information on data proximities can be exploited to provide random forest features. Variable importance measures show which variables are closely associated with a chosen response variable, while partial dependencies indicate the relation of important variables to said response variable. The purpose of this study was therefore to investigate the feasibility of a new unsupervised method based on random forests as a potentially viable contender in the process monitoring statistical tool family. The hypothesis investigated was that unsupervised process monitoring and fault diagnosis can be improved by using features extracted from data with random forests, with further interpretation of fault conditions aided by random forest tools. The experimental results presented in this work support this hypothesis. An initial study was performed to assess the quality of random forest features. Random forest features were shown to be generally difficult to interpret in terms of geometry present in the original variable space. Random forest mapping and demapping models were shown to be very accurate on training data, and to extrapolate weakly to unseen data that do not fall within regions populated by training data. Random forest feature extraction was applied to unsupervised fault diagnosis for process data, and compared to linear and nonlinear methods. Random forest results were comparable to existing techniques, with the majority of random forest detections due to variable reconstruction errors. Further investigation revealed that the residual detection success of random forests originates from the constrained responses and poor generalization artifacts of decision trees. Random forest variable importance measures and partial dependencies were incorporated in a visualization tool to allow for the interpretation of fault conditions. A dynamic change point detection application with random forests proved more successful than an existing principal component analysis-based approach, with the success of the random forest method again residing in reconstruction errors. The addition of random forest fault diagnosis and change point detection algorithms to a suite of abnormal event detection techniques is recommended. The distance-to-model diagnostic based on random forest mapping and demapping proved successful in this work, and the theoretical understanding gained supports the application of this method to further data sets.
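The random forest quantities the study builds on (proximities from shared leaves, variable importances) can be sketched with scikit-learn; the data below are synthetic stand-ins for process measurements, and the construction is only an illustration of the idea, not the author's methodology.
```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)

# Synthetic "process data": two variables carry a fault signature, three are noise.
X_normal = rng.normal(size=(200, 5))
X_fault = rng.normal(size=(200, 5))
X_fault[:, :2] += 2.0                         # the fault shifts the first two variables
X = np.vstack([X_normal, X_fault])
y = np.array([0] * 200 + [1] * 200)

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Variable importances point at the measurements associated with the fault condition.
print("variable importances:", np.round(forest.feature_importances_, 3))

# Breiman-style proximities: fraction of trees in which two samples share a leaf.
# Such proximities are the raw material for random forest feature extraction.
leaves = forest.apply(X)                      # shape (n_samples, n_trees)
prox = (leaves[:, None, :] == leaves[None, :, :]).mean(axis=2)
print("mean within-class proximity:", prox[:200, :200].mean(),
      " mean between-class proximity:", prox[:200, 200:].mean())
```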
APA, Harvard, Vancouver, ISO, and other styles
7

Kandler, Anne, Matthias Richter, Scheidt Jürgen vom, Hans-Jörg Starkloff, and Ralf Wunderlich. "Moving-Average approximations of random epsilon-correlated processes." Universitätsbibliothek Chemnitz, 2004. http://nbn-resolving.de/urn:nbn:de:swb:ch1-200401266.

Full text
Abstract:
The paper considers approximations of time-continuous epsilon-correlated random processes by interpolation of time-discrete Moving-Average processes. These approximations are helpful for Monte-Carlo simulations of the response of systems containing random parameters described by epsilon-correlated processes. The paper focuses on the approximation of stationary epsilon-correlated processes with a prescribed correlation function. Numerical results are presented.
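A minimal numerical sketch of the idea (not the authors' approximation scheme: the MA weights below are an arbitrary choice rather than ones fitted to a prescribed correlation function) generates a time-discrete moving-average sequence, interpolates it linearly to continuous time, and checks that the correlation vanishes beyond a finite lag, which is the defining feature of epsilon-correlated processes.
```python
import numpy as np

rng = np.random.default_rng(3)

# Time-discrete moving-average process X_k = sum_j b_j * e_{k-j} driven by white noise.
q = 5                                    # correlation vanishes beyond lag q
b = np.ones(q + 1) / np.sqrt(q + 1)      # arbitrary MA weights giving unit variance
e = rng.standard_normal(10_000 + q)
X = np.convolve(e, b, mode="valid")      # stationary MA(q) sequence

# Linear interpolation to "continuous" time on a finer grid.
dt = 0.1                                 # spacing of the discrete process
t_fine = np.arange(0.0, 50.0, 0.01)
X_fine = np.interp(t_fine, dt * np.arange(X.size), X)

# Empirical autocorrelation of the discrete process: close to zero beyond lag q,
# i.e. the interpolated process is epsilon-correlated with epsilon ~ q * dt.
acf = [np.corrcoef(X[:-lag], X[lag:])[0, 1] for lag in range(1, 2 * q)]
print(np.round(acf, 3))
```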
APA, Harvard, Vancouver, ISO, and other styles
8

Iuliano, Antonella. "Analysis of a birth and death process with alternating rates and of a telegraph process with underlying random walk." Doctoral thesis, Universita degli studi di Salerno, 2012. http://hdl.handle.net/10556/311.

Full text
Abstract:
2010 - 2011
My thesis for the Doctoral Programme in Mathematics (November 1, 2008 - October 31, 2011) at the University of Salerno, Italy, has been oriented to the analysis of two stochastic models, with particular emphasis on the determination of their probability laws and related properties. The discussion of the doctoral dissertation will be given on 20 March 2012. The first part of the thesis is devoted to the analysis of a birth and death process with alternating rates. We recall that an extensive survey on birth-death processes (BDP) has been provided by Parthasarathy and Lenin [3]. In this work the authors adopt standard methods of analysis (such as the power series technique and Laplace transforms) to find explicit expressions for the transient and stationary distributions of BDPs and provide applications of such results to specific fields (communication systems, chemical and biological models). In particular, in Section 9 they use BDPs to describe the time changes in the concentrations of the components of a chemical reaction and discuss the role of BDPs in the study of diatomic molecular chains. Moreover, the paper by Stockmayer et al. [4] gives an example of the application of stochastic processes in the study of chain molecular diffusion. In this work a molecule is modeled as a freely-jointed chain of two regularly alternating kinds of atoms. All bonds have the same length, but the two kinds of atoms have alternating jump rates, i.e. the forward and backward jump rates for even-labeled beads are α and β, respectively, and these rates are reversed for odd-labeled beads. The authors obtain the exact time-dependent average length of bond vectors. Inspired by these works, Conolly [1] studied an infinitely long chain of atoms joined by links of equal length. The links are assumed to be subject to random shocks that force the atoms to move and the molecule to diffuse. The shock mechanism is different according to whether the atom occupies an odd or an even position on the chain. The originating stochastic model is a randomized random walk on the integers with an unusual exponential pattern for the inter-step time intervals. The authors analyze some features of this process and also investigate its queue counterpart, where the walk is confined to the non-negative integers. Stimulated by the above research, a birth and death process N(t) on the integers with a transition rate λ from even states and a possibly different rate μ from odd states has been studied in the first part of the thesis. A detailed description of the model is given, and the Chapman-Kolmogorov equations are introduced. The probability generating functions of even and odd states are then obtained. These allow us to evaluate the transition probabilities of the process for an arbitrary integer initial state. Certain symmetry properties of the transition probabilities are also pinpointed. Then, the birth and death process obtained by superimposing a reflecting boundary in the zero-state is analyzed. In particular, by making use of a Laplace transform approach, the probability of a transition from state 0 or state 1 to the zero-state is obtained. Formulas for the mean and variance of both processes are finally provided. The second part of the thesis is devoted to the analysis of a generalized telegraph process with an underlying random walk.
The classical telegraph process describes a random motion on the real line characterized by two finite velocities with opposite directions, where the velocity changes are governed by a time-homogeneous Poisson process (see Orsingher [2]). The novelty of the proposed model consists in the use of new rules for velocity changes, which are now governed by a sequence of Bernoulli trials. This implies that the random times separating consecutive changes of direction of the moving particle have a general distribution and form a non-regular alternating renewal process. Starting from the origin, the running particle performs an alternating motion with velocities c and -v (c, v > 0). The direction of the motion (forward or backward) is determined by the velocity sign. The particle changes direction according to the outcome of a Bernoulli trial. Hence, this defines a (possibly asymmetric) random walk governing the choice of the velocity at any epoch. By adopting techniques based on renewal theory, the general form of the probability law is determined, as well as the mean of the process. Furthermore, two instances are investigated in detail, in which the random intertimes between consecutive velocity changes are exponentially distributed with (i) constant rates and (ii) linearly increasing rates. In the first case, explicit expressions of the transition density and of the conditional mean of the process are expressed as series of Gauss hypergeometric functions. The second case leads to a damped random motion, for which we obtain the transition density in closed form. It is interesting to note that the latter case yields a logistic stationary density.
References:
[1] Conolly B.W. (1971) On randomized random walks. SIAM Review, 13, 81-99.
[2] Orsingher E. (1990) Probability law, flow functions, maximum distribution of wave-governed random motions and their connections with Kirchoff's laws. Stoch. Process. Appl., 34, 49-66.
[3] Parthasarathy P.R. and Lenin R.B. (2004) Birth and death process (BDP) models with applications: queueing, communication systems, chemical models, biological models: the state-of-the-art with a time-dependent perspective. American Series in Mathematical and Management Sciences, vol. 51, American Sciences Press, Columbus.
[4] Stockmayer W.H., Gobush W. and Norvich R. (1971) Local-jump models for chain dynamics. Pure Appl. Chem., 26, 555-561.
NOTE: The thesis consists of four chapters. Chapter 1: Some definitions and properties of stochastic processes. Chapter 2: Analysis of birth-death processes on the set of integers, characterized by alternating rates. Chapter 3: Results on the standard telegraph process. Chapter 4: Study of the telegraph process with an underlying random walk governing the velocity changes. [edited by author]
X n.s.
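To make the second model concrete, here is a toy simulation of such a telegraph-type motion. It rests on assumptions that go beyond the abstract: exponential intertimes with a constant rate, and the reading that at each renewal epoch a Bernoulli(p) trial sets the next velocity to +c on success and -v on failure.
```python
import numpy as np

rng = np.random.default_rng(4)

def telegraph_path(t_max, c=1.0, v=0.5, rate=1.0, p=0.5, rng=rng):
    """Endpoint at time t_max of a telegraph-type motion in which a Bernoulli(p)
    trial at every renewal epoch picks velocity +c or -v.
    Illustrative rules and parameter values only."""
    t, x = 0.0, 0.0
    velocity = c if rng.random() < p else -v
    while t < t_max:
        tau = min(rng.exponential(1.0 / rate), t_max - t)   # time to the next epoch
        t += tau
        x += velocity * tau
        velocity = c if rng.random() < p else -v            # Bernoulli trial for the new velocity
    return x

# Monte Carlo estimate of the mean position at time t = 5.
endpoints = np.array([telegraph_path(5.0) for _ in range(5000)])
print("estimated E[X(5)]:", endpoints.mean())
```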
APA, Harvard, Vancouver, ISO, and other styles
9

Elstro, Stephanie Jo. "25 Random Things About Me." Miami University / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=miami1250637568.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Jones, Elinor Mair. "Large deviations of random walks and levy processes." Thesis, University of Manchester, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.491853.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Andreou, Pantelis. "A random reordering stochastic process for regression residuals." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape10/PQDD_0003/NQ42492.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Mussini, Filipe. "Random cover times using the Poisson cylinder process." Licentiate thesis, Uppsala universitet, Analys och sannolikhetsteori, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-329351.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

El-Feghi, Farag Abdulrazzak. "Miscible flooding in correlated random fields." Thesis, Heriot-Watt University, 1992. http://hdl.handle.net/10399/1506.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Ezepue, P. O. "The dual process and renewal theory in the random environment branching process." Thesis, University of Sheffield, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.312812.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Puplinskaitė, Donata. "Aggregation of autoregressive processes and random fields with finite or infinite variance." Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2013. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2013~D_20131029_102339-22917.

Full text
Abstract:
Aggregated data appear in many areas such as economics, sociology, geography, etc. This motivates the importance of studying the (dis)aggregation problem. One of the most important reasons why contemporaneous aggregation became an object of research is the possibility of obtaining the long-memory phenomenon in processes. Aggregation provides an explanation of the long-memory effect in time series as well as a method for simulating such series. Accumulation of short-memory non-ergodic random processes can lead to a long-memory ergodic process, which can be used for forecasting macro and micro variables. We explore the aggregation scheme of AR(1) processes and nearest-neighbour random fields with infinite variance. We provide results on the existence of limit aggregated processes, and find conditions under which they have long-memory properties in a certain sense. For random fields on Z^2, we introduce the notion of (an)isotropic long memory based on the behavior of partial sums. In the L_2 case, the known aggregation of independent AR(1) processes leads to a Gaussian limit, while we describe a new model of aggregation based on independent triangular arrays. This scheme gives a limit aggregated process with finite variance which is not necessarily Gaussian. We study a discrete-time risk insurance model with stationary claims, modeled by the aggregated heavy-tailed process. We establish the asymptotic properties of the ruin probability and the dependence structure... [to full text]
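The finite-variance flavour of the scheme can be illustrated in a few lines (a sketch of the classical idea only, not the infinite-variance or triangular-array constructions of the thesis): averaging many independent AR(1) series whose random coefficients put substantial mass near 1 produces a series that decorrelates far more slowly than any typical individual component.
```python
import numpy as np

rng = np.random.default_rng(5)

def ar1(a, n, rng):
    """n steps of X_t = a * X_{t-1} + e_t with standard normal innovations."""
    x = np.zeros(n)
    e = rng.standard_normal(n)
    for t in range(1, n):
        x[t] = a * x[t - 1] + e[t]
    return x

def acf(x, lag):
    return np.corrcoef(x[:-lag], x[lag:])[0, 1]

N, n = 1000, 5000
# Random AR(1) coefficients a_i = sqrt(U_i) (density 2a on (0, 1)): plenty of mass
# near 1, the mechanism through which aggregation can create long-range dependence.
coeffs = np.sqrt(rng.random(N))
aggregate = np.mean([ar1(a, n, rng) for a in coeffs], axis=0)
single = ar1(coeffs.mean(), n, rng)      # one AR(1) with a "typical" coefficient

for lag in (1, 10, 100):
    print(f"lag {lag:3d}: aggregate {acf(aggregate, lag):+.3f}   single AR(1) {acf(single, lag):+.3f}")
```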
APA, Harvard, Vancouver, ISO, and other styles
16

Broutin, Nicolas. "Shedding new light on random trees." Thesis, McGill University, 2007. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=102963.

Full text
Abstract:
We introduce a weighted model of random trees and analyze the asymptotic properties of their heights. Our framework encompasses most trees of logarithmic height that were introduced in the context of the analysis of algorithms or combinatorics. This allows us to state a sort of "master theorem" for the height of random trees, that covers binary search trees (Devroye, 1986), random recursive trees (Devroye, 1987; Pittel, 1994), digital search trees (Pittel, 1985), scale-free trees (Pittel, 1994; Barabasi and Albert, 1999), and all polynomial families of increasing trees (Bergeron et al., 1992; Broutin et al., 2006) among others. Other applications include the shape of skinny cells in geometric structures like k-d and relaxed k-d trees.
This new approach sheds new light on the tight relationship between data structures like trees and tries that used to be studied separately. In particular, we show that digital search trees and the tries built from sequences generated by the same memoryless source share the same stable core. This link between digital search trees and tries is at the heart of our analysis of heights of tries. It permits us to derive the height of several species of tries such as the trees introduced by de la Briandais (1959) and the ternary search trees of Bentley and Sedgewick (1997).
The proofs are based on the theory of large deviations. The first-order terms of the asymptotic expansions of the heights are geometrically characterized using the Cramér functions appearing in estimates of the tail probabilities for sums of independent random variables.
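As a small empirical companion (limited to one instance covered by the "master theorem", binary search trees, and not taken from the thesis), the sketch below grows random binary search trees and compares their heights with the c* ln n asymptotics, c* ≈ 4.311; convergence is slow, so finite-n heights sit somewhat below the asymptotic value.
```python
import math
import numpy as np

rng = np.random.default_rng(6)

def bst_height(keys):
    """Height (edges on the longest root-to-leaf path) of the binary search tree
    obtained by inserting `keys` in the given order."""
    root = keys[0]
    left, right = {}, {}
    height = 0
    for key in keys[1:]:
        node, depth = root, 0
        while True:
            child = left if key < node else right
            depth += 1
            if node in child:
                node = child[node]
            else:
                child[node] = key
                break
        height = max(height, depth)
    return height

n = 20_000
heights = [bst_height(rng.permutation(n)) for _ in range(20)]
print("mean height:", np.mean(heights), "  c* ln n with c* = 4.311:", 4.311 * math.log(n))
```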
APA, Harvard, Vancouver, ISO, and other styles
17

Khoshbin, Ehteram. "Modelling two stage duration process." Thesis, Lancaster University, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.310460.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Milun, Davin. "Generating Markov random field image analysis systems from examples." Buffalo, N.Y. : Dept. of Computer Science, State University of New York at Buffalo, 1995. http://www.cse.buffalo.edu/tech%2Dreports/95%2D23.ps.Z.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Sidenko, Sergiy. "Kac's random walk and coupon collector's process on posets." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/43781.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Mathematics, 2008.
Includes bibliographical references (p. 100-104).
In the first part of this work, we study a long-standing open problem on the mixing time of Kac's random walk on SO(n, R) by random rotations. We obtain an upper bound t_mix = O(n^2.5 log n) for the weak convergence, which is close to the trivial lower bound Ω(n^2). This improves the upper bound O(n^4 log n) by Diaconis and Saloff-Coste. The proof is a variation on the coupling technique we develop to bound the mixing time for compact Markov chains, which is of independent interest. In the second part, we consider a generalization of the coupon collector's problem in which coupons are allowed to be collected according to a partial order. Along with the discrete process, we also study the Poisson version, which admits a tractable parametrization. This allows us to prove convexity of the expected completion time E[T] with respect to the sample probabilities, which has been an open question for the standard coupon collector's problem. Since the exact computation of E[T] is formidable, we use convexity to establish upper and lower bounds (these bounds differ by a log factor). We refine these bounds for special classes of posets. For instance, we show the cut-off phenomenon for shallow posets that are closely connected to the classical Dixie Cup problem. We also prove linear growth of the expectation for posets whose number of chains grows at most exponentially with respect to the maximal length of a chain. Examples of these posets are d-dimensional grids, for which the Poisson process is usually referred to as the last passage percolation problem. In addition, the coupon collector's process on a poset can be used to generate a random linear extension. We show that for forests of rooted directed trees it is possible to assign sample probabilities such that the induced distribution over all linear extensions will be uniform. Finally, we show the connection of the process with some combinatorial properties of posets.
by Sergiy Sidenko.
Ph.D.
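Kac's walk itself is straightforward to simulate (a sketch for intuition, not part of the thesis): each step multiplies the current orthogonal matrix by a rotation through a uniform angle in the plane of two randomly chosen coordinate axes.
```python
import numpy as np

rng = np.random.default_rng(7)

def kac_step(Q, rng):
    """One step of Kac's random walk on SO(n): rotate by a uniform angle
    in the coordinate plane spanned by two distinct random axes."""
    n = Q.shape[0]
    i, j = rng.choice(n, size=2, replace=False)
    theta = rng.uniform(0.0, 2.0 * np.pi)
    c, s = np.cos(theta), np.sin(theta)
    R = np.eye(n)
    R[i, i], R[i, j] = c, -s
    R[j, i], R[j, j] = s, c
    return R @ Q

n, steps = 10, 2000
Q = np.eye(n)
for _ in range(steps):
    Q = kac_step(Q, rng)

# Sanity checks: Q stays orthogonal with determinant +1, and after many steps a
# fixed column looks like a point spread over the unit sphere (small mean, norm 1).
print("||Q^T Q - I|| =", np.linalg.norm(Q.T @ Q - np.eye(n)))
print("det(Q) =", np.linalg.det(Q))
print("first column mean and norm:", Q[:, 0].mean(), np.linalg.norm(Q[:, 0]))
```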
APA, Harvard, Vancouver, ISO, and other styles
20

Urry, Matthew. "Learning curves for Gaussian process regression on random graphs." Thesis, King's College London (University of London), 2013. https://kclpure.kcl.ac.uk/portal/en/theses/learning-curves-for-gaussian-process-regression-on-random-graphs(c1f5f395-0426-436c-989c-d0ade913423e).html.

Full text
Abstract:
Gaussian processes are a non-parametric method that can be used to learn both regression and classification rules from examples for arbitrary input spaces using the 'kernel trick'. They are well understood for inputs from Euclidean spaces; however, much less research has focused on other spaces. In this thesis I aim to at least partially resolve this. In particular I focus on the case where inputs are defined on the vertices of a graph and the task is to learn a function defined on the vertices from noisy examples, i.e. a regression problem. A challenging problem in the area of non-parametric learning is to predict the generalisation error as a function of the number of examples, or learning curve. I show that, unlike in the Euclidean case where predictions are either quantitatively accurate for a few specific cases or only qualitatively accurate for a broader range of situations, I am able to derive accurate learning curves for Gaussian processes on graphs for a wide range of input spaces given by ensembles of random graphs. I focus on the random walk kernel but my results generalise to any kernel that can be written as a truncated sum of powers of the normalised graph Laplacian. I begin first with a discussion of the properties of the random walk kernel, which can be viewed as an approximation of the ubiquitous squared exponential kernel in continuous spaces. I show that compared to the squared exponential kernel, the random walk kernel has some surprising properties, which include a non-trivial limiting form for some types of graphs. After investigating the limiting form of the kernel I then study its use as a prior, where the difficulty is that the prior scale can vary considerably from vertex to vertex. I propose a solution to this in the form of a local normalisation, where the prior scale at each vertex is normalised locally as desired. To drive home the point about kernel normalisation I then examine the differences between the two kernels when they are used as a Gaussian process prior over functions defined on the vertices of a graph. I show using numerical simulations that the locally normalised kernel leads to a probabilistically more plausible Gaussian process prior. After investigating the properties of the random walk kernel I then discuss the learning curves of a Gaussian process with a random walk kernel for both kernel normalisations in a matched scenario (where student and teacher are both Gaussian processes with matching hyperparameters). I show that by using the cavity method I can derive accurate predictions along the whole length of the learning curve that dramatically improve upon previously derived approximations for continuous spaces suitably extended to the discrete graph case. The derivation of the learning curve for the locally normalised kernel required an additional approximation in the resulting cavity equations. I subsequently, therefore, investigate this approximation in more detail using the replica method. I show that the locally normalised kernel leads to a highly non-trivial replica calculation that eventually shows that the approximation used in the cavity analysis amounts to ignoring some consistency requirements between incoming cavity distributions. I focus in particular on a teacher distribution that is given by a Gaussian process with a random walk kernel but different hyperparameters. I show that in this case, by applying the cavity method, I am able once more to calculate accurate predictions of the learning curve. The resulting equations resemble the matched case over an inflated number of variables.
To finish this thesis I examine the learning curves for varying degrees of model mismatch.
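A compact sketch of the setting, under assumptions that are mine rather than the thesis': the random walk kernel is taken in one common form, K proportional to ((1 - 1/a) I + (1/a) D^{-1/2} A D^{-1/2})^p, the graph is a plain Erdős-Rényi graph, and the hyperparameters are arbitrary. The code draws a teacher function from the GP prior, observes it with noise on a subset of vertices, and measures the error of the GP posterior mean elsewhere, i.e. one point on a learning curve.
```python
import numpy as np

rng = np.random.default_rng(8)

# A random graph (Erdos-Renyi, for simplicity) and its symmetrically normalised adjacency.
n, p_edge = 200, 0.05
upper = np.triu(rng.random((n, n)) < p_edge, k=1)
A = (upper | upper.T).astype(float)
deg = np.maximum(A.sum(axis=1), 1.0)
S = A / np.sqrt(deg[:, None] * deg[None, :])            # D^{-1/2} A D^{-1/2}

# Random walk kernel: a power of (I - L_norm / a), written here via S = I - L_norm.
a, p_steps = 2.0, 10
K = np.linalg.matrix_power((1.0 - 1.0 / a) * np.eye(n) + (1.0 / a) * S, p_steps)
K /= K.diagonal().mean()                                # crude global scale normalisation

# Teacher drawn from the GP prior, observed with noise on a random subset of vertices.
f = np.linalg.cholesky(K + 1e-8 * np.eye(n)) @ rng.standard_normal(n)
train = rng.choice(n, size=50, replace=False)
sigma2 = 0.01
y = f[train] + np.sqrt(sigma2) * rng.standard_normal(train.size)

# GP posterior mean on every vertex, and the resulting generalisation error.
alpha = np.linalg.solve(K[np.ix_(train, train)] + sigma2 * np.eye(train.size), y)
f_hat = K[:, train] @ alpha
test = np.setdiff1d(np.arange(n), train)
print("mean squared error on unseen vertices:", np.mean((f_hat[test] - f[test]) ** 2))
```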
APA, Harvard, Vancouver, ISO, and other styles
21

Warnke, Lutz. "Random graph processes with dependencies." Thesis, University of Oxford, 2012. http://ora.ox.ac.uk/objects/uuid:71b48e5f-a192-4684-a864-ea9059a25d74.

Full text
Abstract:
Random graph processes are basic mathematical models for large-scale networks evolving over time. Their systematic study was pioneered by Erdős and Rényi around 1960, and one key feature of many 'classical' models is that the edges appear independently. While this makes them amenable to a rigorous analysis, it is desirable, both mathematically and in terms of applications, to understand more complicated situations. In this thesis the main goal is to improve our rigorous understanding of evolving random graphs with significant dependencies. The first model we consider is known as an Achlioptas process: in each step two random edges are chosen, and using a given rule only one of them is selected and added to the evolving graph. Since 2000 a large class of 'complex' rules has eluded a rigorous analysis, and it was widely believed that these could give rise to a striking and unusual phenomenon. Making this explicit, Achlioptas, D'Souza and Spencer conjectured in Science that one such rule yields a very abrupt (discontinuous) percolation phase transition. We disprove this, showing that the transition is in fact continuous for all Achlioptas processes. In addition, we give the first rigorous analysis of the more 'complex' rules, proving that certain key statistics are tightly concentrated (i) in the subcritical evolution, and (ii) also later on if an associated system of differential equations has a unique solution. The second model we study is the H-free process, where random edges are added subject to the constraint that they do not complete a copy of some fixed graph H. The most important open question for such 'constrained' processes is due to Erdős, Suen and Winkler: in 1995 they asked what the typical final number of edges is. While Osthus and Taraz answered this in 2000 up to logarithmic factors for a large class of graphs H, more precise bounds are only known for a few special graphs. We close this gap for the cases where a cycle of fixed length is forbidden, determining the final number of edges up to constants. Our result not only establishes several conjectures, it is also the first which answers the more than 15-year-old question of Erdős et al. for a class of forbidden graphs H.
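For readers new to Achlioptas processes, the sketch below simulates one much-studied rule, the product rule (chosen purely for illustration; the thesis treats a broad class of rules), using a union-find structure, and prints the size of the largest component as edges accumulate.
```python
import numpy as np

rng = np.random.default_rng(9)

class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, v):
        while self.parent[v] != v:
            self.parent[v] = self.parent[self.parent[v]]   # path halving
            v = self.parent[v]
        return v

    def union(self, u, v):
        ru, rv = self.find(u), self.find(v)
        if ru == rv:
            return
        if self.size[ru] < self.size[rv]:
            ru, rv = rv, ru
        self.parent[rv] = ru
        self.size[ru] += self.size[rv]

n, steps = 100_000, 150_000
uf = UnionFind(n)
giant = 1
for step in range(1, steps + 1):
    # Two candidate edges per step; the product rule keeps the one whose endpoint
    # components have the smaller product of sizes (one standard formulation).
    (a, b), (c, d) = rng.integers(0, n, size=(2, 2))
    prod1 = uf.size[uf.find(a)] * uf.size[uf.find(b)]
    prod2 = uf.size[uf.find(c)] * uf.size[uf.find(d)]
    u, v = (a, b) if prod1 <= prod2 else (c, d)
    uf.union(u, v)
    giant = max(giant, uf.size[uf.find(u)])
    if step % 25_000 == 0:
        print(f"after {step:6d} steps the largest component has {giant} vertices")
```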
APA, Harvard, Vancouver, ISO, and other styles
22

Steinberg, Eran. "Analysis of random halftone dithering using second order statistics /." Online version of thesis, 1991. http://hdl.handle.net/1850/10976.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Heckel, Annika. "Colourings of random graphs." Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:79e14d55-0589-4e17-bbb5-a216d81b8875.

Full text
Abstract:
We study graph parameters arising from different types of colourings of random graphs, defined broadly as an assignment of colours to either the vertices or the edges of a graph. The chromatic number χ(G) of a graph is the minimum number of colours required for a vertex colouring where no two adjacent vertices are coloured the same. Determining the chromatic number is one of the classic challenges in random graph theory. In Chapter 3, we give new upper and lower bounds for the chromatic number of the dense random graph G(n,p) where p ∈ (0,1) is constant. These bounds are the first to match up to an additive term of order o(1) in the denominator, and in particular, they determine the average colour class size in an optimal colouring up to an additive term of order o(1). In Chapter 4, we study a related graph parameter called the equitable chromatic number. This is defined as the minimum number of colours needed for a vertex colouring where no two adjacent vertices are coloured the same and, additionally, all colour classes are as equal in size as possible. We prove one point concentration of the equitable chromatic number of the dense random graph G(n,m) with m = pn(n-1)/2, p < 1-1/e^2 constant, on a subsequence of the integers. We also show that whp, the dense random graph G(n,p) allows an almost equitable colouring with a near optimal number of colours. We call an edge colouring of a graph G a rainbow colouring if every pair of vertices is joined by a rainbow path, which is a path where no colour is repeated. The least number of colours where this is possible is called the rainbow connection number rc(G). Since its introduction in 2008 as a new way to quantify how well connected a given graph is, the rainbow connection number has attracted the attention of a great number of researchers. For any graph G, rc(G) ≥ diam(G), where diam(G) denotes the diameter. In Chapter 5, we will see that in the random graph G(n,p), rainbow connection number 2 is essentially equivalent to diameter 2. More specifically, we consider G ~ G(n,p) close to the diameter 2 threshold and show that whp rc(G) = diam(G) ∈ {2,3}. Furthermore, we show that in the random graph process, whp the hitting times of diameter 2 and of rainbow connection number 2 coincide. In Chapter 6, we investigate sharp thresholds for the property rc(G) ≤ r where r is a fixed integer. The results of Chapter 6 imply that for r = 2, the properties rc(G) ≤ 2 and diam(G) ≤ 2 share the same sharp threshold. For r ≥ 3, the situation seems quite different. We propose an alternative threshold and prove that this is an upper bound for the sharp threshold for rc(G) ≤ r where r ≥ 3.
APA, Harvard, Vancouver, ISO, and other styles
24

Lapajne, Mikael Hellborg, and Daniel Slat. "Random Forests for CUDA GPUs." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2953.

Full text
Abstract:
Context. Machine Learning is a complex and resource-consuming process that requires a lot of computing power. With the constant growth of information, the need for efficient algorithms with high performance is increasing. Today's commodity graphics cards are parallel multiprocessors with high computing capacity at an attractive price and are usually pre-installed in new PCs. The graphics cards provide an additional resource to be used in machine learning applications. The Random Forest learning algorithm, which has been shown to be competitive within machine learning, has good potential for performance increases through parallelization of the algorithm. Objectives. In this study we implement and review a revised Random Forest algorithm for GPU execution using CUDA. Methods. A review of previous work in the area has been done by studying articles from several sources, including Compendex, Inspec, IEEE Xplore, ACM Digital Library and Springer Link. Additional information regarding GPU architecture and implementation-specific details has been obtained mainly from documentation available from Nvidia and the Nvidia developer forums. The implemented algorithm has been benchmarked and compared with two state-of-the-art CPU implementations of the Random Forest algorithm, regarding both the time consumed for training and classification and the classification accuracy. Results. Measurements from benchmarks made on the three different algorithms are gathered, showing the performance results of the algorithms for two publicly available data sets. Conclusion. We conclude that our implementation under the right conditions is able to outperform its competitors. We also conclude that this is only true for certain data sets, depending on the size of the data sets. Moreover we conclude that there is potential for further improvements of the algorithm, both regarding performance and adaptation towards a wider range of real world applications.
APA, Harvard, Vancouver, ISO, and other styles
25

Kang, Sungyeol. "Extreme values of random times in stochastic networks." Diss., Georgia Institute of Technology, 1992. http://hdl.handle.net/1853/24305.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Friel, Nial Patrick. "Application of random sets to image analysis." Thesis, University of Glasgow, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.301626.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Holcomb, Diane, and Benedek Valkó. "Overcrowding asymptotics for the Sine(beta) process." INST MATHEMATICAL STATISTICS, 2017. http://hdl.handle.net/10150/625509.

Full text
Abstract:
We give overcrowding estimates for the Sine(beta) process, the bulk point process limit of the Gaussian beta-ensemble. We show that the probability of having exactly n points in a fixed interval is given by exp(-(beta/2) n^2 log n + O(n^2)) as n -> infinity. We also identify the next order term in the exponent if the size of the interval goes to zero.
APA, Harvard, Vancouver, ISO, and other styles
28

Oosthuizen, Joubert. "Random walks on graphs." Thesis, Stellenbosch : Stellenbosch University, 2014. http://hdl.handle.net/10019.1/86244.

Full text
Abstract:
Thesis (MSc)--Stellenbosch University, 2014.
ENGLISH ABSTRACT: We study random walks on finite graphs. The reader is introduced to general Markov chains before we move on more specifically to random walks on graphs. A random walk on a graph is just a Markov chain that is time-reversible. The main parameters we study are the hitting time, commute time and cover time. We find novel formulas for the cover time of the subdivided star graph and broom graph before looking at the trees with extremal cover times. Lastly we look at a connection between random walks on graphs and electrical networks, where the hitting time between two vertices of a graph is expressed in terms of a weighted sum of effective resistances. This expression in turn proves useful when we study the cover cost, a parameter related to the cover time.
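The link with electrical networks mentioned at the end can be checked numerically. The sketch below (for an unweighted graph of arbitrary size, not taken from the thesis) computes expected hitting times by solving the usual linear system and verifies the classical identity that the commute time between u and v equals 2m times the effective resistance obtained from the Laplacian pseudoinverse.
```python
import numpy as np

rng = np.random.default_rng(10)

# A small Erdos-Renyi graph, resampled until it is connected.
n, p = 30, 0.2
while True:
    upper = np.triu(rng.random((n, n)) < p, k=1)
    A = (upper | upper.T).astype(float)
    if np.all(np.linalg.matrix_power(np.eye(n) + A, n) > 0):   # connectivity check
        break

deg = A.sum(axis=1)
P = A / deg[:, None]
m = A.sum() / 2.0                        # number of edges

def hitting_times(P, target):
    """Expected hitting times h(i) of `target`: h(target) = 0, h = 1 + P h elsewhere."""
    n = P.shape[0]
    idx = [i for i in range(n) if i != target]
    h = np.zeros(n)
    h[idx] = np.linalg.solve(np.eye(n - 1) - P[np.ix_(idx, idx)], np.ones(n - 1))
    return h

u, v = 0, 1
commute = hitting_times(P, v)[u] + hitting_times(P, u)[v]

# Effective resistance from the pseudoinverse of the graph Laplacian.
Lplus = np.linalg.pinv(np.diag(deg) - A)
r_eff = Lplus[u, u] + Lplus[v, v] - 2.0 * Lplus[u, v]

print("commute time:", commute, "   2m * R_eff:", 2.0 * m * r_eff)
```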
APA, Harvard, Vancouver, ISO, and other styles
29

Bansaye, Vincent. "Applications des processus de Lévy et processus de branchement à des études motivées par l'informatique et la biologie." Phd thesis, Université Pierre et Marie Curie - Paris VI, 2008. http://tel.archives-ouvertes.fr/tel-00339230.

Full text
Abstract:
In the first part, I study a continuous-time data storage process in which the hard disk is identified with the real line. This model is a continuous version of Knuth's original parking problem. Here files arrive according to a Poisson process, and each file is stored in the first free spaces to the right of its arrival point, fragmenting if necessary. I first construct the model and give a geometric and analytic characterization of the part of the disk covered at time t. I then study the asymptotic regimes at the moment the disk saturates. Finally, I describe the evolution in time of a typical block of data. The second part is devoted to the study of branching processes, motivated by questions about cellular infection. I first consider a branching process in a subcritical random environment and determine the limit theorems as a function of the initial population, as well as properties of the environments, the Yaglom limits and the Q-process. I then use this process to establish results on a model describing the proliferation of a parasite in a dividing cell. I determine the probability of recovery, the asymptotic number of infected cells, and the asymptotic proportions of cells infected by a given number of parasites. These various results depend on the regime of the branching process in random environment. Finally, I add random contamination by external parasites.
APA, Harvard, Vancouver, ISO, and other styles
30

Keller, Peter, Sylvie Roelly, and Angelo Valleriani. "A quasi-random-walk to model a biological transport process." Universität Potsdam, 2013. http://opus.kobv.de/ubp/volltexte/2013/6358/.

Full text
Abstract:
Transport molecules play a crucial role in cell viability. Amongst others, linear motors transport cargos along rope-like structures from one location of the cell to another in a stochastic fashion. Each step of the motor, either forwards or backwards, bridges a fixed distance. While moving along the rope the motor can also detach and is lost. We give here a mathematical formalization of such dynamics as a random process which is an extension of random walks, to which we add an absorbing state to model the detachment of the motor from the rope. We derive particular properties of such processes that have not been available before. Our results include a description of the maximal distance reached from the starting point and of the position from which detachment takes place. Finally, we apply our theoretical results to a concrete established model of the transport molecule Kinesin V.
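The process can be mocked up directly (a toy simulation with made-up step and detachment probabilities, not the paper's Kinesin V parametrisation): a walker steps forward with probability p, backward with probability q, and detaches with probability 1 - p - q; each run records the detachment position and the maximal distance reached.
```python
import numpy as np

rng = np.random.default_rng(11)

def motor_run(p_fwd=0.6, p_bwd=0.3, rng=rng):
    """One run of a nearest-neighbour walk with absorption (detachment).
    Forward with prob p_fwd, backward with prob p_bwd, detach otherwise.
    Illustrative parameter values only."""
    x, x_max = 0, 0
    while True:
        u = rng.random()
        if u < p_fwd:
            x += 1
        elif u < p_fwd + p_bwd:
            x -= 1
        else:
            return x, x_max            # (detachment position, maximal distance reached)
        x_max = max(x_max, x)

runs = [motor_run() for _ in range(20_000)]
detach = np.array([r[0] for r in runs])
reach = np.array([r[1] for r in runs])
print("mean detachment position:", detach.mean(), "  mean maximal distance:", reach.mean())
```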
APA, Harvard, Vancouver, ISO, and other styles
31

Eslava, Fernández Laura. "The rank of symmetric random matrices via a graph process." Thesis, McGill University, 2013. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=114602.

Full text
Abstract:
Random matrix theory comprises a broad range of topics and avenues of research, one of them being to understand the probability of singularity for discrete random matrices. This is a fundamental question about discrete matrices. Although it has been proven that for random symmetric Bernoulli matrices the probability of singularity decays at least polynomially in the size of the matrix, it is conjectured that the right order of decay is exponential. We are interested in the adjacency matrix Q of the Erdős-Rényi random graph and we study the statistics of the rank of Q as a means of understanding the probability of singularity of Q. We take a stochastic process perspective, looking at the family of matrices Q (parametrized by p) as an increasing family of random matrices. We then investigate the structure of Q at the moment that it becomes non-singular and prove that, similar to some monotone properties of random graphs, the property of being non-singular obeys a so-called 'hitting time theorem'. Broadly speaking, this means that all-zero rows, which are a 'local' property of the matrix, are the only obstruction to non-singularity. This fact, which is the main novel contribution of the thesis, extends previous work by Costello and Vu.
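The hitting-time statement lends itself to a quick experiment (a sketch only, using floating-point rank over the reals, so merely indicative): add the edges of the complete graph in uniformly random order and compare the step at which the last all-zero row disappears with the step at which the adjacency matrix first becomes non-singular.
```python
import numpy as np

rng = np.random.default_rng(12)

def hitting_times(n, rng):
    """Return (first step with no all-zero row, first step with non-singular adjacency)
    when the edges of K_n are added one by one in uniformly random order."""
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)]
    rng.shuffle(edges)
    A = np.zeros((n, n))
    t_no_zero_row = None
    for t, (i, j) in enumerate(edges, start=1):
        A[i, j] = A[j, i] = 1.0
        if t_no_zero_row is None and np.all(A.sum(axis=1) > 0):
            t_no_zero_row = t
        # A zero row forces singularity, so the rank check only matters afterwards.
        if t_no_zero_row is not None and np.linalg.matrix_rank(A) == n:
            return t_no_zero_row, t
    return t_no_zero_row, None

for _ in range(5):
    print(hitting_times(40, rng))
```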
APA, Harvard, Vancouver, ISO, and other styles
32

Puplinskaitė, Donata. "Autoregresinių procesų ir atsitiktinių laukų su baigtine arba begaline dispersija agregavimas." Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2013. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2013~D_20131029_102452-85316.

Full text
Abstract:
Aggregated data appear in many areas such as economics, sociology, geography, etc. This motivates the importance of studying the (dis)aggregation problem. One of the most important reasons why contemporaneous aggregation became an object of research is the possibility of obtaining the long-memory phenomenon in processes. Aggregation provides an explanation of the long-memory effect in time series as well as a method for simulating such series. Accumulation of short-memory non-ergodic random processes can lead to a long-memory ergodic process, which can be used for forecasting macro and micro variables. We explore the aggregation scheme of AR(1) processes and nearest-neighbour random fields with infinite variance. We provide results on the existence of limit aggregated processes, and find conditions under which they have long-memory properties in a certain sense. For random fields on Z^2, we introduce the notion of (an)isotropic long memory based on the behavior of partial sums. In the L_2 case, the known aggregation of independent AR(1) processes leads to a Gaussian limit, while we describe a new model of aggregation based on independent triangular arrays. This scheme gives a limit aggregated process with finite variance which is not necessarily Gaussian. We study a discrete-time risk insurance model with stationary claims, modeled by the aggregated heavy-tailed process. We establish the asymptotic properties of the ruin probability and the dependence structure... [to full text]
APA, Harvard, Vancouver, ISO, and other styles
33

Fu, Zuopeng. "Karlin Random Fields: Limit Theorems, Representations and Simulations." University of Cincinnati / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1613754836854037.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Hirotsu, Kenichi. "Neural network hardware with random weight change learning algorithm." Diss., Georgia Institute of Technology, 1993. http://hdl.handle.net/1853/15765.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Chen, Guo-Huei. "Image segmentation using a multiresolution random field model." Thesis, University of Warwick, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.269395.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Van, Zyl Alexis J. "Basic concepts of random matrix theory." Thesis, Stellenbosch : University of Stellenbosch, 2005. http://hdl.handle.net/10019.1/1624.

Full text
Abstract:
Thesis (MSc (Physics))--University of Stellenbosch, 2005.
It was Wigner who, in the 1950s, first introduced the idea of modelling physical reality with an ensemble of random matrices while studying the energy levels of heavy atomic nuclei. Since then, the field of Random Matrix Theory has grown tremendously, with applications ranging from fluctuations on the economic markets to M-theory. It is the purpose of this thesis to discuss the basic concepts of Random Matrix Theory, using the ensembles of random matrices originally introduced by Wigner, the Gaussian ensembles, as a starting point. As Random Matrix Theory is classically concerned with the statistical properties of level sequences, we start with a brief introduction to the statistical analysis of a level sequence before introducing the Gaussian ensembles. With the ensembles defined, we move on to the statistical properties that they predict. In the light of these predictions, a few of the classical applications of Random Matrix Theory are discussed, and as an example of some of the important concepts, the Anderson model of localization is investigated in some detail.
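As a concrete illustration of the level statistics in question (not drawn from the thesis; the sampling and the crude unfolding are my own choices), the sketch below draws GOE matrices and checks the level repulsion predicted by the Wigner surmise p(s) = (pi*s/2) exp(-pi*s^2/4) against the bulk eigenvalue spacings.
```python
import numpy as np

rng = np.random.default_rng(13)

def goe(n, rng):
    """One draw from the Gaussian Orthogonal Ensemble (real symmetric)."""
    G = rng.standard_normal((n, n))
    return (G + G.T) / 2.0

n = 200
spacings = []
for _ in range(200):
    ev = np.sort(np.linalg.eigvalsh(goe(n, rng)))
    bulk = ev[n // 4: 3 * n // 4]            # stay in the bulk of the semicircle
    s = np.diff(bulk)
    spacings.extend(s / s.mean())            # crude unfolding by the local mean spacing

spacings = np.array(spacings)
# Level repulsion: small spacings are rare, unlike for independent (Poisson) levels,
# where the fraction below 0.1 would be roughly 0.095.
print("fraction of unfolded spacings below 0.1:", np.mean(spacings < 0.1),
      "  Wigner surmise prediction:", round(1.0 - np.exp(-np.pi * 0.01 / 4.0), 4))
```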
APA, Harvard, Vancouver, ISO, and other styles
37

Chamseddine, Ismail. "Construction of random signals from their higher order moments." Thesis, Imperial College London, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.266089.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Fellenberg, Benno, Jürgen vom Scheidt, and Matthias Richter. "Simulation of Weakly Correlated Functions and its Application to Random Surfaces and Random Polynomials." Universitätsbibliothek Chemnitz, 1998. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-199801258.

Full text
Abstract:
The paper is dedicated to the modeling and simulation of random processes and fields. Using the concept and theory of weakly correlated functions, a consistent representation of sufficiently smooth random processes is derived. Special applications are given to the simulation of road surfaces in vehicle dynamics and to the confirmation of theoretical results on the zeros of random polynomials.
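To give a flavour of what a weakly correlated function looks like (this is a generic moving-average construction with made-up parameters, not the representation derived in the paper), one can smooth white noise over a window of width ε and observe that the sample correlation essentially vanishes beyond lag ε:

```python
import numpy as np

def weakly_correlated_process(n=10_000, eps=20, seed=0):
    """Smooth a white-noise sequence with a window of width eps.
    The result has a correlation function that vanishes (up to sampling noise)
    for lags larger than eps -- the defining feature of a weakly correlated
    function with correlation length eps."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(n + eps)
    window = np.ones(eps) / np.sqrt(eps)     # normalise so the variance stays O(1)
    return np.convolve(noise, window, mode="valid")[:n]

def sample_correlation(x, lags):
    """Sample autocorrelation at the given positive lags."""
    x = x - x.mean()
    denom = np.dot(x, x)
    return [np.dot(x[:-k], x[k:]) / denom for k in lags]

if __name__ == "__main__":
    x = weakly_correlated_process()
    print(sample_correlation(x, lags=[1, 10, 19, 40, 80]))  # roughly zero beyond lag eps
```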
APA, Harvard, Vancouver, ISO, and other styles
39

Öberg, Johanna. "Time prediction and process discovery of administration process." Thesis, Uppsala universitet, Institutionen för biologisk grundutbildning, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-432893.

Full text
Abstract:
Machine learning and process mining are two techniques that are becoming increasingly popular among organisations for business intelligence purposes. Results from these techniques can be very useful for organisations' decision-making. The Swedish National Forensic Centre (NFC), an organisation that performs forensic analyses, needs a way to visualise and understand its administration process. In addition, the organisation would like to be able to predict how long analyses will take to perform. In this project, it was evaluated whether machine learning and process mining could be applied to NFC's administration process data to satisfy these needs. Using the process mining tool Mehrwerk Process Mining, implemented in the software Qlik Sense, different process variants were discovered from the data and visualised in a comprehensible way. The process variants were easy to interpret and useful for NFC. Machine learning regression models were trained on the data to predict analysis length. Two different datasets were tried: a large dataset with few features and a smaller dataset with more features. The models were then evaluated on test datasets. The models did not predict the length of analyses in an acceptable way. A reason for this could be that the information in the data was not sufficient for the prediction.
APA, Harvard, Vancouver, ISO, and other styles
40

Breen, Barbara J. "Computational nonlinear dynamics monostable stochastic resonance and a bursting neuron model /." Diss., Available online, Georgia Institute of Technology, 2004:, 2003. http://etd.gatech.edu/theses/available/etd-04082004-180036/unrestricted/breen%5Fbarbara%5Fj%5F200312%5Fphd.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Brown, Martin Lloyd. "Stochastic process approximation method with application to random volterra integral equations." Diss., Georgia Institute of Technology, 1987. http://hdl.handle.net/1853/29222.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Lu, Yunkai. "Mechanical Properties of Random Discontinuous Fiber Composites Manufactured from Wetlay Process." Thesis, Virginia Tech, 2002. http://hdl.handle.net/10919/34503.

Full text
Abstract:
Random discontinuous fiber composites have uniform properties in all directions. The wetlay process is an efficient method for manufacturing random discontinuous thermoplastic preform sheets that can be molded into random composite plaques in a hot press. Investigations were carried out on the molding parameters, which included the set-point mold pressure, set-point mold temperature and cooling methods. The fibers used in the study were glass and carbon fibers; polypropylene (PP) and polyethylene terephthalate (PET) were used as the matrix. Glass/PP and Glass/PET plaques with fiber volume fractions ranging from 0.05 to 0.50 in increments of 0.05 were molded, and both tensile and flexural tests were conducted. The test results showed a common pattern: the modulus and strength of the composite increased with fiber volume fraction up to a maximum and then decreased. The results were analyzed to determine the optimal fiber volume fraction yielding the maximum modulus or strength. Carbon/PET composite plaques were also molded to compare their properties with Glass/PET composites at similar fiber volume fractions. Micrographs of selected specimens were taken to examine the internal structure of the material. Existing micromechanics models that predict the tensile modulus or strength of random fiber composites were examined, and predictions from some of the models were compared with the test data.
Master of Science
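As a point of comparison with the micromechanics models mentioned at the end of the abstract, the following sketch implements a generic textbook estimate for planar random discontinuous fiber composites, combining the Halpin–Tsai equations with the 3/8–5/8 laminate-analogy rule; the formulas and material values are standard illustrations and are not necessarily among the models examined in the thesis:

```python
def halpin_tsai(E_f, E_m, V_f, xi):
    """Halpin-Tsai estimate of a composite modulus with shape parameter xi."""
    eta = (E_f / E_m - 1.0) / (E_f / E_m + xi)
    return E_m * (1.0 + xi * eta * V_f) / (1.0 - eta * V_f)

def random_inplane_modulus(E_f, E_m, V_f, aspect_ratio):
    """Rough estimate for a planar random discontinuous fiber composite:
    E_random ~ 3/8 * E_L + 5/8 * E_T (laminate analogy), with E_L and E_T
    from Halpin-Tsai using xi = 2*(l/d) and xi = 2 respectively."""
    E_L = halpin_tsai(E_f, E_m, V_f, xi=2.0 * aspect_ratio)
    E_T = halpin_tsai(E_f, E_m, V_f, xi=2.0)
    return 0.375 * E_L + 0.625 * E_T

if __name__ == "__main__":
    # Illustrative values only: E-glass fiber (~72 GPa) in polypropylene (~1.5 GPa).
    for V_f in (0.1, 0.2, 0.3, 0.4):
        print(V_f, round(random_inplane_modulus(72.0, 1.5, V_f, aspect_ratio=100), 2), "GPa")
```

Note that this simple estimate grows monotonically with fiber volume fraction, so it cannot by itself reproduce the rise-then-fall pattern reported in the tests.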
APA, Harvard, Vancouver, ISO, and other styles
43

Vo, Ba Tuong. "Random finite sets in Multi-object filtering." University of Western Australia. School of Electrical, Electronic and Computer Engineering, 2008. http://theses.library.uwa.edu.au/adt-WU2009.0045.

Full text
Abstract:
[Truncated abstract] The multi-object filtering problem is a logical and fundamental generalization of the ubiquitous single-object vector filtering problem. Multi-object filtering essentially concerns the joint detection and estimation of the unknown and time-varying number of objects present, and the dynamic state of each of these objects, given a sequence of observation sets. This problem is intrinsically challenging because, given an observation set, there is no knowledge of which object generated which measurement, if any, and the detected measurements are indistinguishable from false alarms. Multi-object filtering poses significant technical challenges, and is indeed an established area of research, with many applications in both military and commercial realms. The new and emerging approach to multi-object filtering is based on the formal theory of random finite sets, and is a natural, elegant and rigorous framework for the theory of multi-object filtering, originally proposed by Mahler. In contrast to traditional approaches, the random finite set framework is completely free of explicit data associations. The random finite set framework is adopted in this dissertation as the basis for a principled and comprehensive study of multi-object filtering. The premise of this framework is that the collections of object states and measurements at any time are treated as random finite sets. A random finite set is simply a finite-set-valued random variable, i.e. a random variable which is random in both the number of elements and the values of the elements themselves. Consequently, formulating the multi-object filtering problem using random finite set models precisely encapsulates the essence of the multi-object filtering problem, and enables the development of principled solutions therein. '...' The performance of the proposed algorithm is demonstrated in simulated scenarios, and shown, at least in simulation, to dramatically outperform traditional single-object filtering-in-clutter approaches. The second key contribution is a mathematically principled derivation and practical implementation of a novel algorithm for multi-object Bayesian filtering, based on moment approximations to the posterior density of the random finite set state. The performance of the proposed algorithm is also demonstrated in practical scenarios, and shown to considerably outperform traditional multi-object filtering approaches. The third key contribution is a mathematically principled derivation and practical implementation of a novel algorithm for multi-object Bayesian filtering, based on functional approximations to the posterior density of the random finite set state. The performance of the proposed algorithm is compared with the previous ones, and shown to appreciably outperform them in certain classes of situations. The final key contribution is the definition of a consistent and efficiently computable metric for multi-object performance evaluation. It is shown that the finite set theoretic state space formulation permits a mathematically rigorous and physically intuitive construct for measuring the estimation error of a multi-object filter, in the form of a metric. This metric is used to evaluate and compare the multi-object filtering algorithms developed in this dissertation.
APA, Harvard, Vancouver, ISO, and other styles
44

Jurjiu, A., R. Dockhorn, O. Mironova, and J. U. Sommer. "Two universality classes for random hyperbranched polymers." Royal Society of Chemistry, 2014. https://tud.qucosa.de/id/qucosa%3A36397.

Full text
Abstract:
We grow AB₂ random hyperbranched polymer structures in different ways and using different simulation methods. In particular we use a method of ad hoc construction of the connectivity matrix and the bond fluctuation model on a 3D lattice. We show that hyperbranched polymers split into two universality classes depending on the growth process. For a “slow growth” (SG) process where monomers are added sequentially to an existing molecule which strictly avoids cluster–cluster aggregation the resulting structures share all characteristic features with regular dendrimers. For a “quick growth” (QG) process which allows for cluster–cluster aggregation we obtain structures which can be identified as random fractals. Without excluded volume interactions the SG model displays a logarithmic growth of the radius of gyration with respect to the degree of polymerization while the QG model displays a power law behavior with an exponent of 1/4. By analyzing the spectral properties of the connectivity matrix we confirm the behavior of dendritic structures for the SG model and the corresponding fractal properties in the QG case. A mean field model is developed which explains the extension of the hyperbranched polymers in an athermal solvent for both cases. While the radius of gyration of the QG model shows a power-law behavior with the exponent value close to 4/5, the corresponding result for the SG model is a mixed logarithmic–power-law behavior. These different behaviors are confirmed by simulations using the bond fluctuation model. Our studies indicate that random sequential growth according to our SG model can be an alternative to the synthesis of perfect dendrimers.
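A toy version of the "slow growth" construction of the connectivity matrix (a sketch of my own under simplified assumptions, not the authors' code) attaches each new AB₂ monomer to a randomly chosen free B group of the existing molecule, which by design excludes cluster–cluster aggregation:

```python
import numpy as np

def slow_growth_ab2(n_monomers=200, seed=0):
    """Toy 'slow growth' construction for an AB2 hyperbranched structure:
    each new monomer attaches its A group to a randomly chosen free B group
    of the existing molecule, so cluster-cluster aggregation never occurs.
    Returns the adjacency (connectivity) matrix of the resulting tree."""
    rng = np.random.default_rng(seed)
    adj = np.zeros((n_monomers, n_monomers), dtype=int)
    free_b = [0, 0]                                    # monomer 0 starts with two free B groups
    for new in range(1, n_monomers):
        site = free_b.pop(rng.integers(len(free_b)))   # consume one free B group at random
        adj[new, site] = adj[site, new] = 1
        free_b.extend([new, new])                      # the new monomer brings two free B groups
    return adj

if __name__ == "__main__":
    a = slow_growth_ab2()
    laplacian = np.diag(a.sum(axis=1)) - a
    eigs = np.linalg.eigvalsh(laplacian)
    # The spectrum of the connectivity (Laplacian) matrix is the kind of object whose
    # density distinguishes dendritic-like from fractal-like growth in the paper.
    print("smallest nonzero eigenvalue:", eigs[1], "largest:", eigs[-1])
```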
APA, Harvard, Vancouver, ISO, and other styles
45

Craig, David W. (David William). "Light traffic loss of random hard real-time tasks in a network." Diss., Carleton University, Ottawa, 1988.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
46

Miller, Joshua. "Sequential Probing With a Random Start." Scholarship @ Claremont, 2018. https://scholarship.claremont.edu/hmc_theses/115.

Full text
Abstract:
Processing user requests quickly requires not only fast servers, but also demands methods to quickly locate idle servers to process those requests. Methods of finding idle servers are analogous to open addressing in hash tables, but with the key difference that servers may return to an idle state after having been busy rather than staying busy. Probing sequences for open addressing are well-studied, but algorithms for locating idle servers are less understood. We investigate sequential probing with a random start as a method for finding idle servers, especially in cases of heavy traffic. We present a procedure for finding the distribution of the number of probes required for finding an idle server by using a Markov chain and ideas from enumerative combinatorics, then present numerical simulation results in lieu of a general analytic solution.
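A Monte Carlo sketch of the probing scheme studied here, sequential probing from a uniformly random start, can be written in a few lines; the static model in which each server is independently busy with probability p is an assumption made for illustration, whereas the thesis analyses servers that return to the idle state over time:

```python
import random
from collections import Counter

def probes_to_find_idle(n_servers, p_busy, rng):
    """Probe servers sequentially, starting from a uniformly random index and
    wrapping around, until an idle server is found. Returns the probe count,
    or None if every server is busy in this snapshot."""
    busy = [rng.random() < p_busy for _ in range(n_servers)]
    if all(busy):
        return None
    start = rng.randrange(n_servers)
    for k in range(n_servers):
        if not busy[(start + k) % n_servers]:
            return k + 1
    return None

if __name__ == "__main__":
    rng = random.Random(0)
    counts = Counter()
    for _ in range(20_000):
        c = probes_to_find_idle(n_servers=50, p_busy=0.9, rng=rng)
        if c is not None:
            counts[c] += 1
    total = sum(counts.values())
    # Empirical distribution of the number of probes under heavy traffic (p_busy = 0.9).
    print({k: round(v / total, 3) for k, v in sorted(counts.items())[:8]})
```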
APA, Harvard, Vancouver, ISO, and other styles
47

Greenberg, Sam. "Random sampling of lattice configurations using local Markov chains." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/28090.

Full text
Abstract:
Thesis (M. S.)--Mathematics, Georgia Institute of Technology, 2009.
Committee Chair: Randall, Dana; Committee Member: Heitsch, Christine; Committee Member: Mihail, Milena; Committee Member: Trotter, Tom; Committee Member: Vigoda, Eric.
APA, Harvard, Vancouver, ISO, and other styles
48

Smith, Andrew. "Logarithmic opinion pools for conditional random fields." Thesis, University of Edinburgh, 2007. http://hdl.handle.net/1842/1730.

Full text
Abstract:
Since their recent introduction, conditional random fields (CRFs) have been successfully applied to a multitude of structured labelling tasks in many different domains. Examples include natural language processing (NLP), bioinformatics and computer vision. Within NLP itself we have seen many different application areas, like named entity recognition, shallow parsing, information extraction from research papers and language modelling. Most of this work has demonstrated the need, directly or indirectly, to employ some form of regularisation when applying CRFs in order to overcome the tendency for these models to overfit. To date, a popular method for regularising CRFs has been to fit a Gaussian prior distribution over the model parameters. In this thesis we explore other methods of CRF regularisation, investigating their properties and comparing their effectiveness. We apply our ideas to sequence labelling problems in NLP, specifically part-of-speech tagging and named entity recognition. We start with an analysis of conventional approaches to CRF regularisation, and investigate possible extensions to such approaches. In particular, we consider choices of prior distribution other than the Gaussian, including the Laplacian and Hyperbolic; we look at the effect of regularising different features separately, to differing degrees, and explore how we may define an appropriate level of regularisation for each feature; we investigate the effect of allowing the mean of a prior distribution to take on non-zero values; and we look at the impact of relaxing the feature expectation constraints satisfied by a standard CRF, leading to a modified CRF model we call the inequality CRF. Our analysis leads to the general conclusion that although there is some capacity for improvement of conventional regularisation through modification and extension, this is quite limited. Conventional regularisation with a prior is in general hampered by the need to fit a hyperparameter or set of hyperparameters, which can be an expensive process. We then approach the CRF overfitting problem from a different perspective. Specifically, we introduce a form of CRF ensemble called a logarithmic opinion pool (LOP), where CRF distributions are combined under a weighted product. We show how a LOP has theoretical properties which provide a framework for designing new overfitting reduction schemes in terms of diverse models, and demonstrate how such diverse models may be constructed in a number of different ways. Specifically, we show that by constructing CRF models from manually crafted partitions of a feature set and combining them with equal weight under a LOP, we may obtain an ensemble that significantly outperforms a standard CRF trained on the entire feature set, and is competitive in performance with a standard CRF regularised with a Gaussian prior. The great advantage of the LOP approach is that, unlike the Gaussian prior method, it does not require us to search a hyperparameter space. Having demonstrated the success of LOPs in the simple case, we then move on to consider more complex uses of the framework. In particular, we investigate whether it is possible to further improve the LOP ensemble by allowing parameters in different models to interact during training in such a way that diversity between the models is encouraged. Lastly, we show how the LOP approach may be used as a remedy for a problem from which standard CRFs can sometimes suffer. In certain situations, negative effects may be introduced to a CRF by the inclusion of highly discriminative features. An example of this is provided by gazetteer features, which encode a word's presence in a gazetteer. We show how LOPs may be used to reduce these negative effects, and so provide some insight into how gazetteer features may be more effectively handled in CRFs, and log-linear models in general.
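The pooling operation at the heart of the LOP is a weighted product of member distributions followed by renormalisation, p_LOP(y|x) ∝ ∏_α p_α(y|x)^{w_α}. The sketch below shows only this combination step with invented toy distributions; it is not the CRF training procedure developed in the thesis:

```python
import numpy as np

def logarithmic_opinion_pool(distributions, weights):
    """Combine per-label probability distributions p_alpha under a weighted
    product: p_LOP(y) proportional to prod_alpha p_alpha(y) ** w_alpha.
    Worked in log space for numerical stability."""
    log_p = np.log(np.asarray(distributions))        # shape: (n_models, n_labels)
    w = np.asarray(weights).reshape(-1, 1)
    combined = (w * log_p).sum(axis=0)
    combined -= combined.max()                        # stabilise before exponentiating
    pool = np.exp(combined)
    return pool / pool.sum()

if __name__ == "__main__":
    # Two invented "expert" distributions over three labels, pooled with equal weight.
    p1 = [0.7, 0.2, 0.1]
    p2 = [0.4, 0.5, 0.1]
    print(logarithmic_opinion_pool([p1, p2], weights=[0.5, 0.5]))
```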
APA, Harvard, Vancouver, ISO, and other styles
49

Hurth, Tobias. "Invariant densities for dynamical systems with random switching." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/52274.

Full text
Abstract:
We studied invariant measures and invariant densities for dynamical systems with random switching (switching systems, in short). These switching systems can be described by a two-component Markov process whose first component is a stochastic process on a finite-dimensional smooth manifold and whose second component is a stochastic process on a finite collection of smooth vector fields that are defined on the manifold. We identified sufficient conditions for uniqueness and absolute continuity of the invariant measure associated to this Markov process. These conditions consist of a Hoermander-type hypoellipticity condition and a recurrence condition. In the case where the manifold is the real line or a subset of the real line, we studied regularity properties of the invariant densities of absolutely continuous invariant measures. We showed that invariant densities are smooth away from critical points of the vector fields. Assuming in addition that the vector fields are analytic, we derived the asymptotically dominant term for invariant densities at critical points.
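A minimal simulation of the kind of two-component process described here, with an assumed pair of linear vector fields on the real line and a constant switching rate (neither taken from the thesis), shows how a histogram of the first component approximates the invariant density on the interval between the two critical points:

```python
import numpy as np

def simulate_switching(t_max=1_000.0, dt=1e-3, rate=1.0, seed=0):
    """Euler simulation of dx/dt = f_i(x), where the index i in {0, 1} switches
    at rate `rate`. Example vector fields: f_0 pulls x toward +1, f_1 toward -1,
    so the invariant density is supported on [-1, 1]."""
    rng = np.random.default_rng(seed)
    fields = (lambda x: 1.0 - x, lambda x: -1.0 - x)
    x, i = 0.0, 0
    samples = []
    for step in range(int(t_max / dt)):
        x += fields[i](x) * dt
        if rng.random() < rate * dt:      # switch the driving vector field
            i = 1 - i
        if step % 100 == 0:
            samples.append(x)
    return np.array(samples)

if __name__ == "__main__":
    xs = simulate_switching()
    hist, edges = np.histogram(xs, bins=20, range=(-1, 1), density=True)
    print(np.round(hist, 2))              # crude approximation of the invariant density
```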
APA, Harvard, Vancouver, ISO, and other styles
50

Maddalena, Daniela. "Stationary states in random walks on networks." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amslaurea.unibo.it/10170/.

Full text
Abstract:
In this thesis we dealt with the problem of describing a transportation network in which the moving objects are subject to both finite transportation capacity and finite accommodation capacity. Movements across such a system are realistically simultaneous, which poses some challenges when formulating a mathematical description. We tried to derive such a general model from one posed for a simplified problem based on asynchronous particle transitions. We did so by considering one-step processes, under the assumption that the system can be described through discrete-time Markov processes with finite state space. After describing the pre-established dynamics in terms of master equations, we determined stationary states for the considered processes. Numerical simulations then led to the conclusion that a general system naturally evolves toward a congested state when its particles transition simultaneously and a single constraint is imposed in the form of network node capacity. Moreover, the congested nodes of a system tend to be located in adjacent spots in the network, thus forming local clusters of congested nodes.
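For the simplified asynchronous case without capacity constraints, the stationary state of a random walk on a network can be computed directly from the row-normalised adjacency matrix; the sketch below uses an invented small graph and does not reproduce the finite-capacity, simultaneous-movement dynamics studied in the thesis:

```python
import numpy as np

def stationary_distribution(adjacency):
    """Stationary state of the random walk whose transition matrix is the
    row-normalised adjacency matrix: solve pi P = pi with sum(pi) = 1."""
    A = np.asarray(adjacency, dtype=float)
    P = A / A.sum(axis=1, keepdims=True)
    eigvals, eigvecs = np.linalg.eig(P.T)
    k = np.argmin(np.abs(eigvals - 1.0))      # left eigenvector for eigenvalue 1
    pi = np.real(eigvecs[:, k])
    return pi / pi.sum()

if __name__ == "__main__":
    # Small undirected graph with unequal node degrees (3, 2, 2, 1).
    A = np.array([[0, 1, 1, 1],
                  [1, 0, 1, 0],
                  [1, 1, 0, 0],
                  [1, 0, 0, 0]])
    pi = stationary_distribution(A)
    print(np.round(pi, 3))  # for an undirected graph, pi is proportional to node degree
```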
APA, Harvard, Vancouver, ISO, and other styles