
Doctoral dissertations on the topic "Stochastic weights"


Consult the 27 best doctoral dissertations on the topic "Stochastic weights".


Browse doctoral dissertations from a wide range of disciplines and compile the corresponding bibliographies.

1

Dohndorf, Iryna [Verfasser], Peter [Akademischer Betreuer] Buchholz and Boudewijn R. [Gutachter] Haverkort. "Stochastic graph models with phase type distributed edge weights / Iryna Dohndorf ; Gutachter: Boudewijn R. Haverkort ; Betreuer: Peter Buchholz". Dortmund : Universitätsbibliothek Dortmund, 2017. http://d-nb.info/1134953046/34.

2

Medeiros, Júnior Maurício da Silva. "Stochastic discount factor bounds and rare events: a review". reponame:Repositório Institucional do FGV, 2016. http://hdl.handle.net/10438/16459.

Abstract:
We aim to provide a review of the stochastic discount factor bounds usually applied to diagnose asset pricing models. In particular, we mainly discuss the bounds used to analyze the disaster model of Barro (2006). Our attention is focused on this disaster model since the stochastic discount factor bounds that are applied to study the performance of disaster models usually consider the approach of Barro (2006). We first present the entropy bounds that provide a diagnosis of the analyzed disaster model, namely the methods of Almeida and Garcia (2012, 2016) and Ghosh et al. (2016). Then, we discuss how their results for the disaster model relate to each other and also present the findings of other methodologies that are similar to these bounds but provide different evidence about the performance of the framework developed by Barro (2006).
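To make the idea of a stochastic discount factor bound concrete, the sketch below computes the classical Hansen-Jagannathan volatility bound, which the entropy bounds discussed in this abstract generalize. It is illustrative only: the return data are synthetic and nothing here reproduces the calculations of the thesis.

```python
import numpy as np

def hj_bound(returns, sdf_means):
    """Hansen-Jagannathan bound: for each candidate SDF mean v, the SDF
    standard deviation must satisfy
        sigma(m) >= sqrt((1 - v*mu)' Sigma^{-1} (1 - v*mu)),
    where mu and Sigma are the mean and covariance of gross asset returns."""
    mu = returns.mean(axis=0)
    sigma = np.cov(returns, rowvar=False)
    ones = np.ones_like(mu)
    bounds = []
    for v in sdf_means:
        d = ones - v * mu
        bounds.append(np.sqrt(d @ np.linalg.solve(sigma, d)))
    return np.array(bounds)

rng = np.random.default_rng(0)
simulated_returns = 1.0 + rng.normal(0.01, 0.05, size=(1000, 5))  # synthetic gross returns
print(hj_bound(simulated_returns, sdf_means=np.linspace(0.95, 1.0, 6)))
```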
3

Silva, Emanuel Araújo. "MODELAGEM DINÂMICA PARA SIMULAÇÃO NO PROCESSO DE ARENIZAÇÃO E COBERTURA FLORESTAL NA CAMPANHA OCIDENTAL - RS". Universidade Federal de Santa Maria, 2015. http://repositorio.ufsm.br/handle/1/3781.

Abstract:
Dynamic modelling is a useful tool for understanding land use and occupation, providing methodological guidelines associated with environmental, social and economic issues. This work aims to establish a model to simulate the dynamics of the sandification process and of forest cover in the south-west of Rio Grande do Sul, the micro-region of Campanha Ocidental, and, based on these techniques, to project future scenarios. A mosaic of LANDSAT 5 (TM sensor) images covering the study region in 1985, 1996 and 2011, together with LANDSAT 8 (OLI sensor) imagery for 2013, was used. The SPRING application was used to build the database and to carry out digital image processing. After image classification, the thematic maps were cross-referenced with the LEGAL programming language, and future scenarios were then simulated through modelling with the Dinamica EGO software. The projections for 2026 indicate that forest cover will expand from 14.22% of the total area of the Campanha Ocidental in 2011 to 15.03% in 2026, showing that the increase in forest cover is stabilizing, with the areas concentrated in the east, at high altitudes and along the drainage network. For the sandy areas, the projection shows a retraction from 0.37% of the total area in 2011 to 0.33% in 2026, with their concentration in the east, at high altitudes and around the drainage of the Ibicuí river.
4

Menz, William Jefferson. "Stochastic modelling of silicon nanoparticle synthesis". Thesis, University of Cambridge, 2014. https://www.repository.cam.ac.uk/handle/1810/245146.

Abstract:
This thesis presents new methods to study the aerosol synthesis of nano-particles and a new model to simulate the formation of silicon nanoparticles. Population balance modelling is used to model nanoparticle synthesis and a stochastic numerical method is used to solve the governing equations. The population balance models are coupled to chemical kinetic models and offer insight into the fundamental physiochemical processes leading to particle formation. The first method developed in this work is a new mathematical expression for calculating the rate of Brownian coagulation with stochastic weighted algorithms (SWAs). The new expression permits the solution of the population balance equations with SWAs using a computationally-efficient technique of majorant rates and fictitious jumps. Convergence properties and efficiency of the expression are evaluated using a detailed silica particle model. A sequential-modular algorithm is subsequently presented which solves networks of perfectly stirred reactors with a population balance model using the stochastic method. The algorithm is tested in some simple network configurations, which are used to identify methods through which error in the stochastic solution may be reduced. It is observed that SWAs are useful in preventing accumulation of error in reactor networks. A new model for silicon nanoparticle synthesis is developed. The model includes gas-phase reactions describing silane decomposition, and a detailed multivariate particle model which tracks particle structure and composition. Systematic parameter estimation is used to fit the model to experimental cases. Results indicated that the key challenge in modelling silicon systems is obtaining a correct description of the particle nucleation process. Finally, the silicon model is used in conjunction with the reactor network algorithm to simulate the start-up of a plug-flow reactor. The power of stochastic methods in resolving characteristics of a particle ensemble is highlighted by investigating the number, size, degree of sintering and polydispersity along the length of the reactor.
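For readers unfamiliar with stochastic population balance solvers, the following minimal sketch simulates coagulation with Gillespie-type jumps. It is a plain direct simulation with a constant kernel chosen purely for simplicity; it is not the weighted-algorithm, majorant-rate scheme developed in the thesis, and all parameters are illustrative.

```python
import random

def coagulate(n0=10000, kernel_const=1.0, t_end=1.0):
    """Direct (Marcus-Lushnikov) simulation of Smoluchowski coagulation
    with a constant kernel: pick a waiting time from the total rate,
    then merge a uniformly chosen pair of particles."""
    particles = [1.0] * n0          # initial monomer sizes
    t = 0.0
    while len(particles) > 1:
        n = len(particles)
        total_rate = kernel_const * n * (n - 1) / 2.0
        t += random.expovariate(total_rate)
        if t > t_end:
            break
        i, j = random.sample(range(n), 2)
        particles[i] += particles[j]   # merge particle j into particle i
        particles.pop(j)
    return particles

sizes = coagulate()
print(len(sizes), max(sizes))
```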
5

Kindl, Mark Richard. "A stochastic approach to path planning in the Weighted-Region Problem". Thesis, Monterey, California. Naval Postgraduate School, 1991. http://hdl.handle.net/10945/26789.

6

Rattana, Prapanporn. "Mean-field-like approximations for stochastic processes on weighted and dynamic networks". Thesis, University of Sussex, 2015. http://sro.sussex.ac.uk/id/eprint/56600/.

Abstract:
The explicit use of networks in modelling stochastic processes such as epidemic dynamics has revolutionised research into understanding the impact of contact pattern properties, such as degree heterogeneity, preferential mixing, clustering, weighted and dynamic linkages, on how epidemics invade, spread and how to best control them. In this thesis, I worked on mean-field approximations of stochastic processes on networks with particular focus on weighted and dynamic networks. I mostly used low dimensional ordinary differential equation (ODE) models and explicit network-based stochastic simulations to model and analyse how epidemics become established and spread in weighted and dynamic networks. I begin with a paper presenting the susceptible-infected-susceptible/recovered (SIS, SIR) epidemic models on static weighted networks with different link weight distributions. This work extends the pairwise model paradigm to weighted networks and gives excellent agreement with simulations. The basic reproductive ratio, R0, is formulated for SIR dynamics. The effects of link weight distribution on R0 and on the spread of the disease are investigated in detail. This work is followed by a second paper, which considers weighted networks in which the nodal degree and weights are not independent. Moreover, two approximate models are explored: (i) the pairwise model and (ii) the edge-based compartmental model. These are used to derive important epidemic descriptors, including early growth rate, final epidemic size, basic reproductive ratio and epidemic dynamics. Whilst the first two papers concentrate on static networks, the third paper focuses on dynamic networks, where links can be activated and/or deleted and this process can evolve together with the epidemic dynamics. We consider an adaptive network with a link rewiring process constrained by spatial proximity. This model couples SIS dynamics with that of the network and it investigates the impact of rewiring on the network structure and disease die-out induced by the rewiring process. The fourth paper shows that the generalised master equations approach works well for networks with low degree heterogeneity but it fails to capture networks with modest or high degree heterogeneity. In particular, we show that a recently proposed generalisation performs poorly, except for networks with low heterogeneity and high average degree.
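As a point of reference for the mean-field approximations discussed above, a brute-force stochastic simulation of SIS dynamics on a weighted network can be sketched as follows. It is illustrative only: it is not the pairwise or edge-based model from the thesis, the network and rates are arbitrary, and the event list is rebuilt at every step for clarity rather than speed.

```python
import numpy as np
import networkx as nx

def gillespie_sis(G, tau, gamma, infected, t_max, rng):
    """SIS dynamics on a weighted graph: an S-I link of weight w transmits
    at rate tau * w, and infected nodes recover at rate gamma."""
    status = {n: n in infected for n in G}
    t, times, prevalence = 0.0, [0.0], [sum(status.values())]
    while t < t_max and any(status.values()):
        events = [(gamma, n, False) for n in G if status[n]]        # recoveries
        for u, v, d in G.edges(data=True):                          # transmissions along S-I links
            if status[u] != status[v]:
                target = v if status[u] else u
                events.append((tau * d.get("weight", 1.0), target, True))
        total = sum(rate for rate, _, _ in events)
        t += rng.exponential(1.0 / total)
        if t > t_max:
            break
        r, acc = rng.uniform(0.0, total), 0.0
        for rate, node, becomes_infected in events:
            acc += rate
            if r <= acc:
                status[node] = becomes_infected
                break
        times.append(t)
        prevalence.append(sum(status.values()))
    return times, prevalence

rng = np.random.default_rng(1)
G = nx.erdos_renyi_graph(200, 0.05, seed=1)
nx.set_edge_attributes(G, {e: rng.exponential(1.0) for e in G.edges}, "weight")
t, I = gillespie_sis(G, tau=0.3, gamma=1.0, infected={0, 1, 2}, t_max=10.0, rng=rng)
```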
7

Hilton, Cary Allen. "A stochastic approach to solving the 2 1/2 dimensional weighted region problem". Thesis, Monterey, California. Naval Postgraduate School, 1991. http://hdl.handle.net/10945/28563.

Abstract:
This thesis describes a method of computing a feasible path solution for the anisotropic weighted region problem. Heuristics are used to locate an initial starting solution. This starting solution is iteratively improved using a golden ratio search to produce a solution within a specified tolerance. The path solution is then randomly perturbed or detoured through different region frontiers, and the golden ratio search is again applied. These random detours are controlled by a process known as simulated annealing, which determines the number of detours made and decides whether to accept or reject each path solution. Better solutions are always accepted and worse solutions are accepted based on a probability distribution. Accepting worse solutions allows an opportunity to escape from a local minimum condition and continue the search for the optimal path. Since an exhaustive search is not performed, the globally optimal path may not be found, but a feasible path can be found with this method.
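The acceptance rule described in this abstract is the standard simulated-annealing criterion. A generic sketch is given below; the path representation, detour operators and cooling schedule of the thesis are not reproduced, and the example objective is purely illustrative.

```python
import math
import random

def anneal(cost, perturb, x0, t0=1.0, cooling=0.95, steps=2000):
    """Generic simulated annealing: better moves are always accepted,
    worse moves are accepted with probability exp(-delta / T)."""
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    T = t0
    for _ in range(steps):
        y = perturb(x)
        fy = cost(y)
        if fy <= fx or random.random() < math.exp(-(fy - fx) / T):
            x, fx = y, fy
            if fy < fbest:
                best, fbest = y, fy
        T *= cooling
    return best, fbest

# Example: minimise a 1-D quadratic with Gaussian perturbations.
best, f = anneal(lambda x: (x - 3.0) ** 2, lambda x: x + random.gauss(0, 0.5), x0=0.0)
```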
8

Xu, Zhouyi. "Stochastic Modeling and Simulation of Gene Networks". Scholarly Repository, 2010. http://scholarlyrepository.miami.edu/oa_dissertations/645.

Abstract:
Recent research in experimental and computational biology has revealed the necessity of using stochastic modeling and simulation to investigate the functionality and dynamics of gene networks. However, there are no sophisticated stochastic modeling techniques or efficient stochastic simulation algorithms (SSAs) for analyzing and simulating gene networks. Therefore, the objective of this research is to design highly efficient and accurate SSAs, to develop stochastic models for certain real gene networks and to apply stochastic simulation to investigate such gene networks. To achieve this objective, we developed several novel efficient and accurate SSAs. We also proposed two stochastic models for the circadian system of Drosophila and simulated the dynamics of the system. The K-leap method constrains the total number of reactions in one leap to a properly chosen number, thereby improving simulation accuracy. Since the exact SSA is a special case of the K-leap method when K=1, the K-leap method can naturally change from the exact SSA to an approximate leap method during simulation if necessary. The hybrid tau/K-leap and the modified K-leap methods are particularly suitable for simulating gene networks where certain reactant molecular species have a small number of molecules. Although the existing tau-leap methods can significantly speed up stochastic simulation of certain gene networks, the mean of the number of firings of each reaction channel is not equal to the true mean. Therefore, all existing tau-leap methods produce biased results, which limit simulation accuracy and speed. Our unbiased tau-leap methods remove the bias in simulation results that exists in all current leap SSAs and therefore significantly improve simulation accuracy without sacrificing speed. In order to efficiently estimate the probability of rare events in gene networks, we applied the importance sampling technique to the next reaction method (NRM) of the SSA and developed a weighted NRM (wNRM). We further developed a systematic method for selecting the values of importance sampling parameters. Applying our parameter selection method to the wSSA and the wNRM, we get an improved wSSA (iwSSA) and an improved wNRM (iwNRM), which can provide substantial improvement over the wSSA in terms of simulation efficiency and accuracy. We also develop a detailed and a reduced stochastic model for circadian rhythm in Drosophila and employ our SSA to simulate circadian oscillations. Our simulations showed that both models could produce sustained oscillations and that the oscillation is robust to noise in the sense that there is very little variability in the oscillation period although there are significant random fluctuations in oscillation peaks. Moreover, although average time delays are essential to simulation of oscillation, random changes in time delays within a certain range around the fixed average time delay cause little variability in the oscillation period. Our simulation results also showed that both models are robust to parameter variations and that oscillation can be entrained by light/dark cycles.
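As background for the leap and weighted methods discussed above, the exact SSA (Gillespie's direct method) for a toy two-species gene expression model can be sketched as follows. The reaction network and rate constants are illustrative and are not those of the Drosophila models in the thesis.

```python
import numpy as np

def gillespie_direct(x0, stoich, propensities, t_max, rng):
    """Exact SSA (direct method): draw the time to the next reaction from an
    exponential with the total propensity, then pick which reaction fires
    in proportion to its propensity."""
    t, x = 0.0, np.array(x0, dtype=float)
    history = [(t, x.copy())]
    while t < t_max:
        a = np.array([f(x) for f in propensities])
        a0 = a.sum()
        if a0 == 0:
            break
        t += rng.exponential(1.0 / a0)
        j = rng.choice(len(a), p=a / a0)
        x += stoich[j]
        history.append((t, x.copy()))
    return history

# Toy gene expression network; species x = [mRNA, protein], rates illustrative.
k_tx, k_tl, d_m, d_p = 2.0, 5.0, 1.0, 0.5
stoich = np.array([[+1, 0], [0, +1], [-1, 0], [0, -1]])
propensities = [
    lambda x: k_tx,            # DNA -> DNA + mRNA (transcription)
    lambda x: k_tl * x[0],     # mRNA -> mRNA + protein (translation)
    lambda x: d_m * x[0],      # mRNA degradation
    lambda x: d_p * x[1],      # protein degradation
]
rng = np.random.default_rng(0)
trajectory = gillespie_direct([0, 0], stoich, propensities, t_max=20.0, rng=rng)
```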
9

Michel, Simon [Verfasser]. "Stochastic evolution equations in weighted L² spaces with jump noise / Simon Michel. Fakultät für Mathematik". Bielefeld : Universitätsbibliothek Bielefeld, Hochschulschriften, 2012. http://d-nb.info/1022030078/34.

10

Szyszkowicz, B. (Barbara) Carleton University Dissertation Mathematics. "Weak convergence of stochastic processes in weighted metrics and their applications to contiguous changepoint analysis". Ottawa, 1992.

11

El, Haj Abir. "Stochastics blockmodels, classifications and applications". Thesis, Poitiers, 2019. http://www.theses.fr/2019POIT2300.

Abstract:
This PhD thesis focuses on the analysis of weighted networks, finite graphs in which each edge carries a weight representing the intensity of its strength. We introduce an extension of the binary stochastic block model (SBM), called the binomial stochastic block model (bSBM). This question is motivated by the study of co-citation networks in a text-mining context where the data are represented by a graph: nodes are words, and each edge joining two words is weighted by the number of documents in the corpus that cite this pair of words simultaneously. We develop an inference method based on a variational expectation-maximization (VEM) algorithm to estimate the parameters of the proposed model and to classify the words of the network. We then adopt a method based on maximizing an integrated classification likelihood (ICL) criterion to select the optimal model and the number of clusters. In addition, we develop a variational approach to deal with the network, and we compare the two approaches. Applications to real data are presented to show the effectiveness of the two methods and to compare them. Finally, we develop an SBM with several attributes to deal with networks having weights associated with the nodes. We motivate this method by an application aimed at developing a tool to help specify the different cognitive processes performed by the brain during the preparation of writing.
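As an illustration of the kind of generative model this abstract describes, a minimal sampler for a binomially weighted SBM might look like the sketch below. Parameter names and the choice of a symmetric, undirected graph are assumptions for the example, not details taken from the thesis.

```python
import numpy as np

def sample_bsbm(n, pi, P, m, rng):
    """Draw a weighted graph from a binomial-SBM-style model: node i gets a
    block label z_i ~ Categorical(pi); the weight of edge (i, j) is
    Binomial(m, P[z_i, z_j]), e.g. the number of documents co-citing two words."""
    z = rng.choice(len(pi), size=n, p=pi)
    W = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            W[i, j] = W[j, i] = rng.binomial(m, P[z[i], z[j]])
    return z, W

rng = np.random.default_rng(0)
pi = np.array([0.5, 0.5])                   # block proportions
P = np.array([[0.20, 0.02],
              [0.02, 0.15]])                # within/between-block success probabilities
z, W = sample_bsbm(n=50, pi=pi, P=P, m=30, rng=rng)
```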
12

Davis, Brett Andrew, and Brett Davis@abs gov au. "Inference for Discrete Time Stochastic Processes using Aggregated Survey Data". The Australian National University. Faculty of Economics and Commerce, 2003. http://thesis.anu.edu.au./public/adt-ANU20040806.104137.

Abstract:
We consider a longitudinal system in which transitions between the states are governed by a discrete time finite state space stochastic process X. Our aim, using aggregated sample survey data of the form typically collected by official statistical agencies, is to undertake model based inference for the underlying process X. We will develop inferential techniques for continuing sample surveys of two distinct types. First, longitudinal surveys in which the same individuals are sampled in each cycle of the survey. Second, cross-sectional surveys which sample the same population in successive cycles but with no attempt to track particular individuals from one cycle to the next. Some of the basic results have appeared in Davis et al (2001) and Davis et al (2002).

Longitudinal surveys provide data in the form of transition frequencies between the states of X. In Chapter Two we develop a method for modelling and estimating the one-step transition probabilities in the case where X is a non-homogeneous Markov chain and transition frequencies are observed at unit time intervals. However, due to their expense, longitudinal surveys are typically conducted at widely, and sometimes irregularly, spaced time points. That is, the observable frequencies pertain to multi-step transitions. Continuing to assume the Markov property for X, in Chapter Three, we show that these multi-step transition frequencies can be stochastically interpolated to provide accurate estimates of the one-step transition probabilities of the underlying process. These estimates for a unit time increment can be used to calculate estimates of expected future occupation time, conditional on an individual's state at initial point of observation, in the different states of X.

For reasons of cost, most statistical collections run by official agencies are cross-sectional sample surveys. The data observed from an on-going survey of this type are marginal frequencies in the states of X at a sequence of time points. In Chapter Four we develop a model based technique for estimating the marginal probabilities of X using data of this form. Note that, in contrast to the longitudinal case, the Markov assumption does not simplify inference based on marginal frequencies. The marginal probability estimates enable estimation of future occupation times (in each of the states of X) for an individual of unspecified initial state. However, in the applications of the technique that we discuss (see Sections 4.4 and 4.5) the estimated occupation times will be conditional on both gender and initial age of individuals.

The longitudinal data envisaged in Chapter Two is that obtained from the surveillance of the same sample in each cycle of an on-going survey. In practice, to preserve data quality it is necessary to control respondent burden using sample rotation. This is usually achieved using a mechanism known as rotation group sampling. In Chapter Five we consider the particular form of rotation group sampling used by the Australian Bureau of Statistics in their Monthly Labour Force Survey (from which official estimates of labour force participation rates are produced). We show that our approach to estimating the one-step transition probabilities of X from transition frequencies observed at incremental time intervals, developed in Chapter Two, can be modified to deal with data collected under this sample rotation scheme. Furthermore, we show that valid inference is possible even when the Markov property does not hold for the underlying process.
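To ground the estimation problem described above, its most basic version (complete one-step transition frequencies for a homogeneous Markov chain) reduces to row-normalising a count matrix. The sketch below is illustrative only and does not reproduce the survey-based, non-homogeneous methods of the thesis; the example counts are synthetic.

```python
import numpy as np

def transition_mle(counts):
    """Maximum-likelihood one-step transition matrix from observed transition
    frequencies: P_hat[i, j] = n_ij / sum_j n_ij (rows with no data stay zero)."""
    counts = np.asarray(counts, dtype=float)
    row_totals = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_totals, out=np.zeros_like(counts), where=row_totals > 0)

# Example: transition counts between three labour-force states over one survey cycle.
counts = [[900, 80, 20],
          [60, 400, 40],
          [30, 50, 920]]
print(transition_mle(counts))
```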
13

Huang, Xueping [Verfasser]. "On stochastic completeness of weighted graphs / Xueping Huang. Fakultät für Mathematik. SFB 701 Spectral Structures and Topological Methods in Mathematics". Bielefeld : Universitätsbibliothek Bielefeld, Hochschulschriften, 2011. http://d-nb.info/1014454670/34.

14

Fell, Jonathan [Verfasser], Holger [Akademischer Betreuer] Rauhut and Hartmut [Akademischer Betreuer] Führ. "Weighted l1-Analysis minimization and stochastic gradient descent for low rank matrix recovery / Jonathan Martin Fell ; Holger Rauhut, Hartmut Führ". Aachen : Universitätsbibliothek der RWTH Aachen, 2020. http://d-nb.info/1227992521/34.

15

Yenerdag, Erdem <1988>. "Contagion Analysis in European Financial Markets Through the Lens of Weighted Stochastic Block Model: Systematically Important Communities of Financial Institutions". Master's Degree Thesis, Università Ca' Foscari Venezia, 2016. http://hdl.handle.net/10579/8816.

Abstract:
This study provides a new perspective to analyze systemic risk and contagion channels of financial markets by proposing Weighted Stochastic Block Model (WSBM) as a generative model for the financial networks. WSBM allows regulators to analyze systemic risk and contagion channels of financial markets by the topological features of WSBM communities. In the empirical application of the WSBM, it is found that the number of communities tends to increase during the financial crisis which can be analyzed as a new early warning indicator of systemic risk. In addition, a new ranking method, based on the new notion of systematically important communities of financial institutions, is provided to assess the systemically important financial institutions.
16

Carbas, Serdar. "Optimum Topological Design Of Geometrically Nonlinear Single Layer Lamella Domes Using Harmony Search Method". Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/12609634/index.pdf.

Abstract:
A harmony search based optimum topology design algorithm is presented for single layer lamella domes. The harmony search method is a recently developed numerical optimization technique that imitates the musical performance process which takes place when a musician searches for a better state of harmony: jazz improvisation seeks a musically pleasing harmony, just as the optimum design process seeks the optimum solution. The optimum design algorithm developed imposes the behavioral and performance constraints in accordance with LRFD-AISC. The optimum number of rings, the height of the crown and the tubular cross-sectional designations for dome members are treated as design variables. Member grouping is allowed so that the same section can be adopted for each group. The design algorithm has a routine that automatically builds the geometry data for the dome, covering the numbering of joints, the member incidences and the computation of joint coordinates. Due to the slenderness and the presence of imperfections in dome structures, it is necessary to consider geometric nonlinearity in the prediction of their response under external loading. Design examples are considered to demonstrate the efficiency of the algorithm presented.
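Since several of the theses in this list rely on harmony search, a compact, generic sketch of the metaheuristic is given below for minimizing an arbitrary objective over box-bounded continuous variables. The parameter values are illustrative defaults, not those used in the thesis, which works with discrete section designations and structural constraints.

```python
import random

def harmony_search(objective, bounds, hms=20, hmcr=0.9, par=0.3,
                   bandwidth=0.05, iterations=5000, seed=0):
    """Generic harmony search: keep a memory of candidate solutions, improvise
    a new one by memory consideration (prob. hmcr) with optional pitch
    adjustment (prob. par) or by random selection, then replace the worst
    memory member whenever the new harmony is better."""
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [objective(x) for x in memory]
    for _ in range(iterations):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:
                value = memory[rng.randrange(hms)][d]          # memory consideration
                if rng.random() < par:                         # pitch adjustment
                    value += rng.uniform(-1, 1) * bandwidth * (hi - lo)
            else:
                value = rng.uniform(lo, hi)                    # random selection
            new.append(min(max(value, lo), hi))
        score = objective(new)
        worst = max(range(hms), key=lambda i: scores[i])
        if score < scores[worst]:
            memory[worst], scores[worst] = new, score
    best = min(range(hms), key=lambda i: scores[i])
    return memory[best], scores[best]

# Example: minimize a simple quadratic bowl in three variables.
x_best, f_best = harmony_search(lambda x: sum(v * v for v in x), bounds=[(-5, 5)] * 3)
```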
17

Ahmad, Ola. "Stochastic representation and analysis of rough surface topography by random fields and integral geometry - Application to the UHMWPE cup involved in total hip arthroplasty". Phd thesis, Ecole Nationale Supérieure des Mines de Saint-Etienne, 2013. http://tel.archives-ouvertes.fr/tel-00905519.

Abstract:
Surface topography is generally composed of many length scales, from the overall physical geometry down to the microscopic or atomic scales known as roughness. The spatial and geometrical evolution of the roughness topography of engineering surfaces supports the understanding and interpretation of many physical and engineering problems, such as friction and wear mechanisms during mechanical contact between adjoining surfaces. The topography of rough surfaces is random in nature, composed of irregular, spatially correlated hills and valleys. The relation between their densities and their geometric properties is the fundamental topic developed in this research using the theory of random fields and integral geometry. An appropriate random field model of a rough surface is defined by its most significant parameters, whose changes influence the geometry of its excursion sets. The excursion sets are quantified by functions known as intrinsic volumes, which have many physical interpretations in practice. Deriving their analytical formulae makes it possible to estimate the parameters of the random field model applied to the surface and to analyse its excursion sets statistically. These are the central subjects of this thesis. First, the intrinsic volumes of the excursion sets are derived analytically for a class of mixture models defined by the linear combination of Gaussian and t random fields, and then for skew-t random fields; they are compared and tested on surfaces generated by simulation. In the second stage, these random fields are applied to real surfaces measured on the UHMWPE component of a total hip implant, before and after a wear simulation process. The initial results show that the skew-t random field is more adequate and flexible for modelling the topographic roughness. Following these arguments, a statistical analysis approach based on the skew-t random field is then proposed; it aims at hierarchically estimating the significant levels, i.e. the real hills and valleys among the uncertain measurements. The evolution of the mean area of the hills and valleys and of their levels describes the functional behaviour of the UHMWPE surface over wear time and indicates the predominant wear mechanisms.
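To illustrate what an excursion set and one of its intrinsic volumes look like in practice, the sketch below thresholds a simulated smooth Gaussian field and measures the area fraction of the excursion set at several levels. It is a toy example: it does not implement the skew-t model, the mixture fields, or the full family of intrinsic volumes studied in the thesis.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def excursion_area_fraction(shape, sigma, levels, rng):
    """Simulate a smooth Gaussian random field by filtering white noise, then
    estimate the area fraction of its excursion sets A_u = {f >= u} (the
    highest-order intrinsic volume in 2D, up to normalisation)."""
    field = gaussian_filter(rng.standard_normal(shape), sigma)
    field /= field.std()              # rescale to a unit-variance field
    return {u: float((field >= u).mean()) for u in levels}

rng = np.random.default_rng(0)
print(excursion_area_fraction((512, 512), sigma=8, levels=[0.0, 1.0, 2.0], rng=rng))
```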
18

Aydogdu, Ibrahim. "Optimum Design Of 3-d Irregular Steel Frames Using Ant Colony Optimization And Harmony Search Algorithms". Phd thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12612285/index.pdf.

Abstract:
Steel space frames with irregular shapes undergo twisting under lateral loads caused by wind or earthquakes as a result of their unsymmetrical topology. The resulting torsional moment must be resisted by the three-dimensional frame system. The members of such frames are generally steel I-sections, which are thin-walled open sections. Simple beam theory is not adequate to predict the behavior of such thin-walled sections under torsional moments, because large warping deformations occur in the cross-section of the member. Therefore, it is necessary to consider the effect of warping in the design of steel space frames whose members are thin-walled steel sections. In this study, the optimum design problem of steel space frames is formulated according to the provisions of LRFD-AISC (Load and Resistance Factor Design of the American Institute of Steel Construction), in which the effect of warping is also taken into account. Ant colony optimization and harmony search, two recent stochastic search techniques, are used to obtain the solution of the design problem. A number of space frame examples are designed with the algorithms developed in order to demonstrate the effect of warping on the optimum design.
19

Erdal, Ferhat. "Ultimate Load Capacity Of Optimally Designed Cellular Beams". Phd thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613007/index.pdf.

Abstract:
Cellular beams have become increasingly popular as an efficient structural form in steel construction since their introduction. Their sophisticated design and profiling process provides greater flexibility in beam proportioning for strength, depth, size and location of circular holes. The purpose of manufacturing these beams is to increase the overall beam depth, the moment of inertia and the section modulus, which results in greater strength and rigidity. Cellular beams are used as primary or secondary floor beams in order to achieve long spans and service integration. They are also used as roof beams beyond the range of portal-frame construction, and are the perfect solution for curved roof applications, combining weight savings with a low-cost manufacturing process. The purpose of the current research is to study the optimum design, the ultimate load capacity under applied load and the finite element analysis of non-composite cellular beams. The first part of the research program focuses on the optimum design of steel cellular beams using one of the stochastic search methods, the "harmony search algorithm". The minimum weight is taken as the design objective, while the design constraints are implemented from the Steel Construction Institute. Design constraints include displacement limitations, overall beam flexural capacity, beam shear capacity, overall beam buckling strength, web post flexure and buckling, Vierendeel bending of the upper and lower tees, and local buckling of the compression flange. The design methods adopted in this publication are consistent with BS5950. In the second part of the research, the experimental work, twelve non-composite cellular beams are tested to determine their ultimate load-carrying capacities, using a hydraulic plug to apply a point load. The tested cellular beam specimens were designed using the harmony search algorithm. Finally, a finite element analysis program is used to perform elastic buckling analysis and predict the critical loads of all the steel cellular beams. The finite element analysis results are then compared with the experimental test results for each tested cellular beam.
20

Ramakrishnan, K. "Multi-Armed Bandits – On Range Searching and On Slowly-varying Non-stationarity". Thesis, 2022. https://etd.iisc.ac.in/handle/2005/6043.

Abstract:
Multi-Armed Bandits (MAB) is a popular framework for modelling sequential decision-making problems under uncertainty. This thesis is a compilation of two independent works on MABs. 1. In the first work, we study a multi-armed bandit (MAB) version of the range-searching problem. In its basic form, range searching considers as input a set of points (on the real line) and a collection of (real) intervals. Here, with each specified point, we have an associated weight, and the problem objective is to find a maximum-weight point within every given interval. The current work addresses range searching with stochastic weights: each point corresponds to an arm (that admits sample access), and the point’s weight is the (unknown) mean of the underlying distribution. In this MAB setup, we develop sample-efficient algorithms that find, with high probability, near-optimal arms within the given intervals, i.e., we obtain PAC (probably approximately correct) guarantees. We also provide an algorithm for a generalization wherein the weight of each point is a multi-dimensional vector. The sample complexities of our algorithms depend, in particular, on the size of the optimal hitting set of the given intervals. Finally, we establish lower bounds proving that the obtained sample complexities are essentially tight. Our results highlight the significance of geometric constructs— specifically, hitting sets—in our MAB setting. 2. In the second work, we consider minimisation of dynamic regret in non-stationary bandits with a slowly varying property. Namely, we assume that arms’ rewards are stochastic and independent over time, but that the absolute difference between the expected rewards of any arm at any two consecutive time-steps is at most a drift limit δ > 0. For this setting that has not received enough attention in the past, we give a new algorithm which extends naturally the well-known Successive Elimination algorithm to the non-stationary bandit setting. We establish the first instance-dependent regret upper bound for slowly varying non-stationary bandits. The analysis in turn relies on a novel characterization of the instance as a detectable gap profile that depends on the expected arm reward differences. We also provide the first minimax regret lower bound for this problem, enabling us to show that our algorithm is essentially minimax optimal. Also, this lower bound we obtain matches that of the more general total variation-budgeted bandits problem, establishing that the seemingly easier former problem is at least as hard as the more general latter problem in the minimax sense. We complement our theoretical results with experimental illustrations.
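For context, the Successive Elimination algorithm that the second work extends can be sketched as follows for the standard stationary best-arm setting. The slowly-varying extension and the range-searching PAC algorithms of the thesis are not reproduced here, and the confidence radius used is one common textbook choice, stated as an assumption.

```python
import math
import numpy as np

def successive_elimination(pull, n_arms, delta=0.05, max_rounds=10000):
    """Best-arm identification by successive elimination: sample every surviving
    arm once per round and drop any arm whose upper confidence bound falls
    below the best lower confidence bound."""
    alive = list(range(n_arms))
    counts = np.zeros(n_arms)
    means = np.zeros(n_arms)
    for t in range(1, max_rounds + 1):
        for a in alive:
            x = pull(a)
            counts[a] += 1
            means[a] += (x - means[a]) / counts[a]   # incremental mean update
        radius = math.sqrt(math.log(4.0 * n_arms * t * t / delta) / (2.0 * t))
        best = max(alive, key=lambda a: means[a])
        alive = [a for a in alive if means[a] + radius >= means[best] - radius]
        if len(alive) == 1:
            return alive[0]
    return max(alive, key=lambda a: means[a])

# Example with Bernoulli arms (synthetic means).
rng = np.random.default_rng(0)
true_means = [0.3, 0.5, 0.7, 0.65]
arm = successive_elimination(lambda a: rng.binomial(1, true_means[a]), n_arms=4)
```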
21

Kuhlbusch, Dirk. "Moment conditions for weighted branching processes /". 2004. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=012962072&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.

22

AGRESTI, ANTONIO. "Nonlinear parabolic stochastic evolution equations in critical spaces". Doctoral thesis, 2021. http://hdl.handle.net/11573/1565547.

Abstract:
In this thesis we develop a new approach to nonlinear stochastic partial differential equations (SPDEs) with Gaussian noise. Our aim is to provide an abstract framework which is applicable to a large class of SPDEs and includes many important cases of nonlinear parabolic problems which are of quasi- or semilinear type. One of the main contributions of this thesis is a new method to bootstrap Sobolev and Hölder regularity in time and space, which does not require smoothness of the initial data. This leads to new results even in the classical $L^2$-settings, which we illustrate for a parabolic SPDE and for the stochastic Navier-Stokes equations in two dimensions. Our theory is formulated in an $L^p$-setting, and because of this we can deal with nonlinearities in a very efficient way. Applications to local well-posedness for several concrete problems and their quasilinear variants are given. This includes the stochastic Navier-Stokes equations, Burgers' equation, the Allen-Cahn equation, the Cahn-Hilliard equation, reaction-diffusion equations, and the porous media equation. The interplay of the nonlinearities and the critical spaces of initial data leads to new results and insights for these SPDEs. Most of the previous equations will be considered with a gradient-noise term. The thesis is divided into three parts. The first one concerns local well-posedness for stochastic evolution equations. Here, we study stochastic maximal $L^p$-regularity for semigroup generators, and in particular, we prove a sharp time-space regularity result for stochastic convolutions which will play a basic role for the nonlinear theory. Next, we show local existence of solutions to stochastic evolution equations with rough initial data, which allows us to define 'critical spaces' in an abstract way. The proofs are based on weighted maximal regularity techniques for the linearized problem as well as on a combination of several sophisticated splitting and truncation arguments. The local-existence theory developed here can be seen as a stochastic version of the theory of critical spaces due to Prüss-Simonett-Wilke (2018). We conclude the first part by applying our main result to several SPDEs. In particular, we check that critical spaces defined abstractly coincide with the critical spaces from a PDEs perspective, i.e. spaces invariant under the natural scaling of the SPDE considered. The second part is devoted to the study of blow-up criteria and instantaneous regularization. Here we prove several blow-up criteria for stochastic evolution equations. Some of them were not known even in the deterministic setting. For semilinear equations we obtain a Serrin-type blow-up criterion, which extends a recent result of Prüss-Simonett-Wilke (2018) to the stochastic setting. Blow-up criteria can be used to prove global well-posedness for SPDEs. As in the first part, maximal regularity techniques and weights in time play a central role in the proofs. Next we present a new abstract bootstrapping method to show Sobolev and Hölder regularity in time and space, which does not require smoothness of the initial data. The blow-up criteria are at the basis of these new methods. Moreover, in applications the bootstrap results can be combined with our blow-up criteria to obtain efficient ways to prove global existence. This fact will be illustrated for a concrete SPDE. In the third part, we apply the previous results to study quasilinear reaction-diffusion equations and stochastic Navier-Stokes equations with gradient noise. As regards the former, we show global well-posedness and instantaneous regularization of solutions employing suitable dissipative conditions. Here we also prove a suitable stochastic version of the parabolic DeGiorgi-Nash-Moser estimates by employing a standard reduction method. The last chapter concerns stochastic Navier-Stokes equations, and in the three-dimensional case we prove local existence with data in the critical spaces $L^3$ and $B^{\frac{3}{q}-1}_{q,p}$. In addition, we prove a blow-up criterion for solutions with paths in $L^p(L^q)$ where $\frac{2}{p}+\frac{3}{q}=1$, which extends the usual Serrin blow-up criteria to the stochastic setting. Finally, we prove existence of global solutions in two dimensions under minimal assumptions on the noise term and on the initial data.
23

TZENG, WEI-LIN, and 曾煒淋. "Application of Stochastics-k to Test the Investment Performance of Taiwan 50 weighted stock". Thesis, 2012. http://ndltd.ncl.edu.tw/handle/8ufj9g.

Abstract:
Master's thesis
Feng Chia University
Institute of Statistics and Actuarial Science
Academic year 100 (ROC calendar)
This study takes the 50 constituent stocks of the Taiwan 50 index as its research object and uses the K value of the KD (stochastic) indicator, together with the strength of the stock price band, to establish a trading strategy, which is then used to test the investment performance of the 50 constituent stocks. It was found that, whether or not transaction costs are considered, the trading strategy based on the stochastic K indicator performs significantly better than a buy-and-hold strategy. Therefore, excess returns can be obtained using the stochastic K indicator.
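For readers unfamiliar with the indicator, the stochastic %K (the "K value" referred to above) is computed from the highest high and lowest low over a lookback window. A minimal pandas sketch with an illustrative crossover rule follows; the column names, smoothing constants and thresholds are assumptions for the example, not the exact strategy tested in the thesis.

```python
import pandas as pd

def stochastic_kd(df, n=9, smooth=3):
    """Raw stochastic value (RSV) over an n-period window, smoothed into %K
    and %D; df needs 'high', 'low' and 'close' columns."""
    low_n = df["low"].rolling(n).min()
    high_n = df["high"].rolling(n).max()
    rsv = 100 * (df["close"] - low_n) / (high_n - low_n)
    k = rsv.ewm(alpha=1.0 / smooth, adjust=False).mean()   # K = (2/3)*K_prev + (1/3)*RSV
    d = k.ewm(alpha=1.0 / smooth, adjust=False).mean()
    return k, d

def simple_signal(k, lower=20, upper=80):
    """Illustrative rule: enter long when %K crosses up through the lower band,
    exit when it crosses down through the upper band."""
    enter = (k > lower) & (k.shift(1) <= lower)
    exit_ = (k < upper) & (k.shift(1) >= upper)
    return enter.astype(int) - exit_.astype(int)
```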
24

Davis, Brett Andrew. "Inference for Discrete Time Stochastic Processes using Aggregated Survey Data". Phd thesis, 2003. http://hdl.handle.net/1885/46631.

Abstract:
We consider a longitudinal system in which transitions between the states are governed by a discrete time finite state space stochastic process X. Our aim, using aggregated sample survey data of the form typically collected by official statistical agencies, is to undertake model based inference for the underlying process X. We will develop inferential techniques for continuing sample surveys of two distinct types. First, longitudinal surveys in which the same individuals are sampled in each cycle of the survey. Second, cross-sectional surveys which sample the same population in successive cycles but with no attempt to track particular individuals from one cycle to the next. Some of the basic results have appeared in Davis et al (2001) and Davis et al (2002).¶ ...
25

Schalkwyk, Garth Van. "Stochastic modelling in bank management and optimization of bank asset allocation". Thesis, 2009. http://hdl.handle.net/11394/3419.

Abstract:
Magister Scientiae - MSc
The Basel Committee published its proposals for a revised capital adequacy framework (the Basel II Capital Accord) in June 2006. One of the main objectives of this framework is to improve the incentives for state-of-the-art risk management in banking, especially in the area of credit risk in view of Basel II. The new regulation seeks to provide incentives for greater awareness of differences in risk through more risk-sensitive minimum capital requirements based on numerical formulas. This attempt to control bank behaviour has a heavy reliance on regulatory ratios like the risk-based capital adequacy ratio (CAR). In essence, such ratios compare the capital that a bank holds to the level of credit, market and operational risk that it bears. Due to this fact the objectives in this dissertation are as follows. Firstly, in an attempt to address these problems and under assumptions about retained earnings, loan-loss reserves, the market and shareholder-bank owner relationships, we construct continuous-time models of the risk-based CAR which is computed from credit and market risk-weighted assets (RWAs) and bank regulatory capital (BRC) in a stochastic setting. Secondly, we demonstrate how the CAR can be optimized in terms of equity allocation. Here, we employ dynamic programming for stochastic optimization, to obtain and verify the results. Thirdly, an important feature of this study is that we apply the mean-variance approach to obtain an optimal strategy that diversifies a portfolio consisting of three assets. In particular, Chapter 5 is an original piece of work by the author of this dissertation where we demonstrate how to employ a mean-variance optimization approach to equity allocation under certain conditions.
26

LI, TUNG-LUNG, and 李東龍. "A Tentative Study on Constructing the Price-Volume weighted KD stochastic oscillator - A Case of Taiwanese Stock Market". Thesis, 2018. http://ndltd.ncl.edu.tw/handle/38jfs2.

Abstract:
Master's thesis
Fo Guang University
Department of Applied Economics
Academic year 106 (ROC calendar)
The traditional KD index is constructed on the basis of price alone and has been in common use for years. Nonetheless, the questions of whether the index is effective and whether it helps investors earn excess returns remain unresolved. The stock market often shows that "volume leads price" and that volume is a leading indicator of price, so the volume factor has its own importance in technical analysis. This study attempts to reconstruct the traditional KD index by adding volume: through the combination of price and volume, it tries to construct a Price-Volume KD index that can identify the trend and help investors earn excess returns. Empirically, this study used historical data of the Taiwan stock market at daily, weekly and monthly frequencies from January 4, 1997 to December 29, 2017, a total of 21 years, to simulate the investment strategy. The winning rates and return rates of three trading strategies, the traditional KD, the volume KD and the Price-Volume KD, are compared and analyzed. The empirical results show that: (1) for the 9-day Price-Volume KD, both the winning rate and the rate of return are better than for the traditional KD, whereas the medium- and long-term 9-week and 9-month versions do not outperform the traditional KD; (2) under the traditional KD trading range (20, 80), comparing the daily, weekly and monthly frequencies, the monthly frequency gives the highest winning rate and rate of return; (3) adjusting the traditional KD trading range (20, 80) at the weekly frequency can effectively increase the winning rates of the price KD and the volume KD. Although the Price-Volume KD index can effectively improve the winning rate in the Taiwan stock market, the rate of return does not necessarily increase.
27

PALAMARČUK, Igor. "Konstrukce automatického obchodního systému a vyhodnocení dosažených výsledků při obchodování na komoditních trzích". Master's thesis, 2017. http://www.nusl.cz/ntk/nusl-367499.
