
Dissertations on the topic "Continuous random energy model"


Consult the top 36 dissertations for research on the topic "Continuous random energy model".


1

Ho, Fu-Hsuan. "Aspects algorithmiques du modèle continu à énergie aléatoire." Electronic Thesis or Diss., Toulouse 3, 2023. http://www.theses.fr/2023TOU30184.

Abstract:
This thesis explores algorithmic perspectives on the branching random walk and the continuous random energy model (CREM). In particular, we are interested in constructing polynomial-time algorithms that sample the model's Gibbs measure with high probability, and in identifying the hardness regime, i.e., the set of inverse temperatures β for which no such polynomial-time algorithm exists. In Chapter 1, we provide a historical overview of the models and motivate the algorithmic problems under investigation. We also give an overview of mean-field spin glasses, which motivates our line of research. In Chapter 2, we address the problem of sampling the Gibbs measure in the context of the branching random walk. We identify a critical inverse temperature β_c, identical to the static critical point, at which a hardness transition occurs. In the subcritical regime β < β_c, we establish that a recursive sampling algorithm is able to sample the Gibbs measure efficiently. In the supercritical regime β > β_c, we show that no polynomial-time algorithm belonging to a certain class of algorithms can succeed. In Chapter 3, we turn our attention to the same sampling problem for the continuous random energy model (CREM). When the covariance function of the model is concave, we show that for any finite inverse temperature β, the recursive sampling algorithm considered in Chapter 2 samples the Gibbs measure efficiently. For the non-concave case, we identify a critical point β_G at which a hardness transition similar to that of Chapter 2 occurs. We also provide a lower bound on the CREM free energy that may be of independent interest. In Chapter 4, we study the negative moments of the CREM partition function. While not directly connected to the main theme of the thesis, this question arose during the course of the research. In Chapter 5, we outline some further directions that might be interesting to investigate.
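To make the recursive sampling idea concrete, here is a minimal, illustrative sketch (mine, not the thesis's algorithm): it samples the Gibbs measure of a binary branching random walk with i.i.d. standard Gaussian increments by computing subtree partition functions bottom-up and then descending the tree with the induced conditional probabilities. The exact bottom-up pass used here is exponential in the depth n; the polynomial-time algorithms analysed in the thesis avoid this exhaustive computation.

import numpy as np

rng = np.random.default_rng(0)

def sample_gibbs_path(n, beta):
    # increments[k] holds the 2^(k+1) Gaussian edge weights leading to level k+1
    increments = [rng.standard_normal(2 ** (k + 1)) for k in range(n)]
    # Z[k][v] = partition function of the subtree rooted at node v on level k
    Z = [np.ones(2 ** k) for k in range(n + 1)]
    for k in range(n - 1, -1, -1):       # bottom-up recursion
        w = np.exp(beta * increments[k]) * Z[k + 1]
        Z[k] = w[0::2] + w[1::2]         # sum over the two children of each node
    v, path = 0, []
    for k in range(n):                   # top-down: pick children proportionally
        w = np.exp(beta * increments[k][2 * v:2 * v + 2]) * Z[k + 1][2 * v:2 * v + 2]
        child = rng.choice(2, p=w / w.sum())
        path.append(int(child))
        v = 2 * v + child
    return path

print(sample_gibbs_path(n=10, beta=1.0))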
2

Erturk, Huseyin. "Limit theorems for random exponential sums and their applications to insurance and the random energy model." Thesis, The University of North Carolina at Charlotte, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=10111893.

Abstract:

In this dissertation, we are mainly concerned with sums of random exponentials, where the random variables are independent and identically distributed. Another distinctive assumption is that the number of summands is a function of the constant in the exponent. Our first goal is to find the limiting distributions of these random exponential sums for new classes of random variables; for some classes, such as the normal and Weibull distributions, such results are already known.

Secondly, we apply these limit theorems to some insurance models and to the random energy model in statistical physics. Specifically, for the first case, we give an estimate of the ruin probability in terms of the empirical data. For the random energy model, we present an analysis of the free energy for a new class of distributions. In some particular cases, we prove the existence of several critical points for the free energy. In other cases, we prove the absence of phase transitions.

Our results give a new approach to computing the ruin probabilities of insurance portfolios empirically when there is a sequence of insurance portfolios with a custom growth rate of the claim amounts. The second application introduces a simple method to derive the free energy in the case where the random variables in the statistical sum can be represented as functions of standard exponential random variables. The technical tools of this study include the classical limit theory for sums of independent and identically distributed random variables and asymptotic methods such as the Euler-Maclaurin formula and the Laplace method.
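As a companion illustration (mine, not the dissertation's), the following sketch estimates the free energy of Derrida's Gaussian random energy model, the simplest random exponential sum of this kind, and compares it with the known limit f(β) = log 2 + β²/2 for β ≤ β_c = √(2 log 2) and f(β) = β√(2 log 2) above β_c.

import numpy as np

rng = np.random.default_rng(1)
n = 20
beta_c = np.sqrt(2 * np.log(2))
for beta in (0.5, beta_c, 2.0):
    x = rng.standard_normal(2 ** n)          # one energy per configuration
    a = beta * np.sqrt(n) * x
    # log-sum-exp for numerical stability
    log_Z = a.max() + np.log(np.exp(a - a.max()).sum())
    theory = np.log(2) + beta ** 2 / 2 if beta <= beta_c else beta * beta_c
    print(f"beta={beta:.3f}  empirical f={log_Z / n:.3f}  theory f={theory:.3f}")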

3

Wolff, Tilman. "Random Walk Local Times, Dirichlet Energy and Effective Conductivity in the Random Conductance Model." Doctoral thesis, supervised by Wolfgang König. Berlin: Technische Universität Berlin, 2013. http://d-nb.info/1064810357/34.

4

Li, Hailong. "Analytical Model for Energy Management in Wireless Sensor Networks." University of Cincinnati / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1367936881.

5

Niblett, Samuel Peter. "Higher order structure in the energy landscapes of model glass formers." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/277582.

Abstract:
The study of supercooled liquids and glasses remains one of the most divisive and divided fields in modern physics. Despite a vast amount of effort and research time invested in this topic, the answers to many central questions remain disputed and incomplete. However, the link between the behaviour of supercooled liquids and their energy landscapes is well established and widely accepted. Understanding this link would be a key step towards resolving many of the mysteries and controversies surrounding the glass transition. Therefore the study of glassy energy landscapes is an important area of research. In this thesis, I report some of the most detailed computational studies of glassy potential energy landscapes ever performed. Using geometry optimisation techniques, I have sampled the local minima and saddle points of the landscapes for several supercooled liquids to analyse their dynamics and thermodynamics. Some of my analysis follows previous work on the binary Lennard-Jones fluid (BLJ), a model atomic liquid. BLJ is a fragile glass former, meaning that its transport coefficients have super-Arrhenius temperature dependence, rather than the more usual Arrhenius behaviour exhibited by strong liquids. The difference in behaviour between these two classes of liquid has previously been attributed to differing degrees of structure in the relevant energy landscapes. I have studied models for both fragile and strong glass formers: the molecular liquid ortho-terphenyl (OTP) and viscous silica (SiO$_{2}$) respectively. My results for OTP agree closely with trends observed for BLJ, suggesting that the same diffusion mechanism is applicable to fragile molecular liquids as well as to atomic ones. However, the dynamics and energy landscape of OTP are made complicated by the molecular orientational degrees of freedom, making the analysis more challenging for this system. The dynamics of BLJ, OTP and silica are all dominated by cage-breaking events: structural rearrangements in which atoms change their nearest neighbours. I propose a robust and general method to identify cage breaks for small rigid molecules, and compare some properties of cage breaks between strong and fragile systems. The energy landscapes of BLJ and OTP both display hierarchical ordering of potential energy minima into metabasins. These metabasins can be detected by the cage-breaking method. It has previously been suggested that metabasins are responsible for super-Arrhenius behaviour, and are absent from the landscapes of strong liquids such as SiO$_{2}$. My results indicate that metabasins are present on the silica landscape, but that they each contain fewer minima than metabasins in BLJ or OTP. Metabasins are associated with anticorrelated particle motion, mediated by reversed transitions between minima of the potential energy landscape. I show that accounting for time-correlation of particle displacement vectors is essential to describe super-Arrhenius behaviour in BLJ and OTP, but also required to reproduce strong behaviour in silica. I hypothesise that the difference between strong and fragile liquids arises from a longer correlation timescale in the latter case, and I suggest a number of ways in which this proposition could be tested. I have investigated the effect on the landscape of freezing the positions of some particles in a BLJ fluid. This “pinning” procedure induces a dynamical crossover that has been described as an equilibrium “pinning transition”, related to the hypothetical ideal glass transition.
I show that the pinning transition is related to (and probably caused by) a dramatic change in the potential energy landscape. Pinning a large fraction of the particles in a supercooled liquid causes its energy landscape to acquire global structure and hence structure-seeking behaviour, very different from the landscape of a typical supercooled liquid. I provide a detailed description of this change in structure, and investigate the mechanism underlying it. I introduce a new algorithm for identifying hierarchical organisation of a landscape, which uses concepts related to the pinning transition but is applicable to unpinned liquids as well. This definition is complementary to metabasins, but the two methods often identify the same higher-order structures. The new “packings” algorithm offers a route to test thermodynamic theories of the glass transition in the context of the potential energy landscape. Over the course of this thesis, I discuss several different terms and methods to identify higher-order structures in the landscapes of model glass formers, and investigate how this organisation varies between different systems. Although little variation is immediately apparent between most glassy landscapes, deeper analysis reveals a surprising diversity, which has important implications for dynamical behaviour in the vicinity of the glass transition.
6

Kameswar, Rao Vaddina. "Evaluation of A Low-power Random Access Memory Generator." Thesis, Linköping University, Department of Electrical Engineering, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-7823.

Abstract:

In this work, an existing RAM generator is analysed and evaluated. Some of the aspects considered in the evaluation are the optimization of the basic SRAM cell, how the RAM generator can be ported to newer technologies, the automation of the simulation process, and the creation of the workflow for the energy model.

One of the main focuses of this thesis work is optimizing the basic SRAM cell. The SRAM cell used in the RAM generator is optimized for neither area nor power; a compact layout is suggested which saves a considerable amount of both. The technology used to create the RAM generator is old, and a suitable way to port it to a newer technology has also been found.

To create an energy model, one has to simulate many memory configurations with large amounts of data. This cannot be done in the traditional way of simulating circuits through the GUI. Hence, an automation procedure has been suggested that can be used to create energy models by simulating the memories comprehensively.

Finally, the basic groundwork has been laid by creating a workflow for the creation of the energy model.

7

Alevanau, Aliaksandr. "Study of the Apparent Kinetics of Biomass Gasification Using High-Temperature Steam." Licentiate thesis, KTH, Energi- och ugnsteknik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-26356.

Abstract:
Among the latest achievements in gasification technology, one may list the development of a method to preheat gasification agents using switched ceramic honeycombs. The best output from this technology is achieved with water steam as the gasification agent, heated up to 1600 °C. Applying these temperatures with steam as the gasification agent provides a cleaner syngas (no nitrogen from air, cracked tars), and the ash melts into easily utilised glass-like sludge. The high hydrogen content of the output gas is also favourable for end-user applications. Among the other advantages of this technology is the prospective use of fixed-bed-type reactors fed by separately produced and preheated steam. This arrangement assumes relatively high steam flow rates to deliver the heat needed for the endothermic reactions involving biomass. The biomass is heated uniformly and evenly in the volume of the whole reactor, allowing easier and simpler control and operation in comparison to other types of reactors. To provide potential constructors and operators of these reactors with the kinetic data needed to calculate parameters vital for both reactor construction and operation, basic experimental research on high-temperature steam gasification of four types of industrially produced biomass has been conducted. Kinetic data have been obtained for straw and wood pellets, wood-chip charcoal and compressed charcoal of mixed origin. Experiments were conducted using two experimental facilities: at the Energy and Furnace Division of the Department of Material Science and Engineering (MSE) at the School of Industrial Engineering and Management (ITM) of the Royal Institute of Technology (KTH), and at the Combustion Laboratory of the Mechanical Engineering Department of the University of Maryland (UMD), USA. The experimental facility at the Energy and Furnace Division has been improved with the addition of several structural elements, providing better possibilities for thermo-gravimetric measurements. The obtained thermo-gravimetric data were analysed and approximated using several models described in the literature. In addition, appropriate software based on the Scilab package was developed. An implementation of the isothermal method based on optimisation algorithms has been developed and tested on data obtained under conditions of slowly decreasing temperature in char gasification experiments in the small-scale experimental facility of the Energy and Furnace Division. The composition of the gases generated during the gasification of straw and wood pellets by high-temperature steam has been recorded and analysed for different experimental conditions.

8

Luo, Simon Junming. "An Information Geometric Approach to Increase Representational Power in Unsupervised Learning." Thesis, The University of Sydney, 2021. https://hdl.handle.net/2123/25773.

Abstract:
Machine learning models increase their representational power by increasing the number of parameters in the model. The number of parameters can be increased by introducing hidden nodes, higher-order interaction effects, or new features into the model. In this thesis we study different approaches to increasing the representational power of unsupervised machine learning models. We investigate the use of incidence algebra and information geometry to develop novel machine learning models that include higher-order interaction effects. Incidence algebra provides a natural formulation for combinatorics by expressing it through generating functions, and information geometry provides many theoretical guarantees by projecting the problem onto a dually flat Riemannian structure for optimization. Combining the two techniques yields the information-geometric formulation of the binary log-linear model. We first use this formulation to construct the higher-order Boltzmann machine (HBM) and compare the behaviours of hidden nodes and higher-order feature interactions as means of increasing the representational power of the model. We then apply the concepts learnt from this study to include higher-order interaction terms in Blind Source Separation (BSS) and to create an efficient approach to estimating higher-order functions in Poisson processes. Lastly, we explore the possibility of using Bayesian non-parametrics to automatically reduce the number of higher-order interaction effects included in the model.
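For reference, the binary log-linear model mentioned above can be written in the following standard form (my notation; the thesis develops an information-geometric parameterisation of this family). Restricting the subsets S to singletons and pairs recovers the usual Boltzmann machine, while larger S introduce the higher-order interaction effects discussed above. For x in {0,1}^n:

\[
  p(x;\theta) = \exp\Bigl(\sum_{S} \theta_S \prod_{i \in S} x_i - \psi(\theta)\Bigr),
  \qquad
  \psi(\theta) = \log \sum_{x \in \{0,1\}^n} \exp\Bigl(\sum_{S} \theta_S \prod_{i \in S} x_i\Bigr).
\]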
9

Hua, Xiaoben, and Yuxia Yang. "A Fusion Model For Enhancement of Range Images." Thesis, Blekinge Tekniska Högskola, Sektionen för ingenjörsvetenskap, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2203.

Abstract:
In this thesis, we present a new way to enhance “depth map” images, which we call the fusion of depth images. The goal is to enhance depth images through a fusion of different classification methods. For that, we use three similar but distinct methodologies, the Graph-Cut, Super-Pixel and Principal Component Analysis algorithms, to compute the enhancement and produce our result. We then compare the enhanced result with the original depth images; the comparison indicates the effectiveness of our methodology.
10

Kaděrová, Jana. "Pravděpodobnostní diskrétní model porušování betonu." Doctoral thesis, Vysoké učení technické v Brně. Fakulta stavební, 2018. http://www.nusl.cz/ntk/nusl-390288.

Abstract:
The thesis presents the results of a numerical study on the performance of a 3D discrete meso-scale lattice-particle model of concrete. The existing model was extended by introducing spatial variability of a chosen material parameter in the form of a random field. Experimental data from bending tests on notched and unnotched beams were used for the identification of model parameters as well as for the subsequent validation of its performance. With the basic and the extended randomized versions of the model, numerical simulations were performed so that the influence of the rate of fluctuation of the random field (governed by the correlation length) could be observed. The final part of the thesis describes, in terms of size and shape, the region of the beam that is active during the test and in which most of the fracture energy is released. This region determines the strength of the whole member and, as shown in the thesis, it does not have a constant size but is influenced by the geometrical setup and the correlation length of the random field.
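As an illustration of the randomization step described above (a sketch with assumed parameter values, not the thesis's code), one can sample a stationary Gaussian random field with a prescribed correlation length and use it to modulate a material parameter along the member:

import numpy as np

rng = np.random.default_rng(2)

def gaussian_random_field(x, corr_length, sigma=0.2):
    # covariance matrix C_ij = sigma^2 * exp(-(x_i - x_j)^2 / corr_length^2)
    d = x[:, None] - x[None, :]
    C = sigma ** 2 * np.exp(-(d / corr_length) ** 2)
    L = np.linalg.cholesky(C + 1e-10 * np.eye(len(x)))  # jitter for stability
    return L @ rng.standard_normal(len(x))

x = np.linspace(0.0, 1.0, 200)        # positions along the beam (m)
mean_strength = 3.0                   # MPa, hypothetical mean parameter
for corr_length in (0.05, 0.5):       # fast vs. slow fluctuation of the field
    field = gaussian_random_field(x, corr_length)
    strength = mean_strength * np.exp(field)   # log-normal modulation keeps it positive
    print(f"l_c={corr_length}: min={strength.min():.2f}, max={strength.max():.2f} MPa")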
11

Yan, Huijie. "Challenges of China’s sustainability : integrating energy, environment and health policies." Thesis, Aix-Marseille, 2013. http://www.theses.fr/2013AIXM1092.

Abstract:
With the purpose of coping with the intertwined challenges of energy depletion, environmental degradation and public health concerns in the specific Chinese context of sustainable development, we focus on investigating China's energy, environment and health policies. In chapter 1, we provide an overview of China's energy, environment and health policies over the past 20 years in order to identify future policy directions to which the government has not yet given sufficient attention. In the following three chapters, we provide a series of empirical studies so as to derive some useful policy implications. In chapter 2, we investigate the impact of urbanization, industrial structure adjustment, energy prices and exports on provincial aggregate and disaggregate energy intensities. In chapter 3, we study the factors explaining the switch from dirty to clean fuel sources in rural households. In chapter 4, we examine the joint effects of environmental hazards, individual income and health policies on the health status of Chinese adults. Our empirical findings particularly suggest integrating urban development into the strategy of energy saving; considering the complex substitutions/complementarities among energy sources and between energy and food for rural households; and aligning environment, energy and food policies with health policies.
12

Dargie, Waltenegus. "Impact of Random Deployment on Operation and Data Quality of Sensor Networks." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2010. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-32911.

Abstract:
Several applications have been proposed for wireless sensor networks, including habitat monitoring, structural health monitoring, pipeline monitoring, and precision agriculture. Among the desirable features of wireless sensor networks is the ease of deployment. Since the nodes are capable of self-organization, they can be placed easily in areas that are otherwise inaccessible to or impractical for other types of sensing systems. In fact, some have proposed the deployment of wireless sensor networks by dropping nodes from a plane, delivering them in an artillery shell, or launching them via a catapult from onboard a ship. There are also reports of actual aerial deployments, for example one carried out using an unmanned aerial vehicle (UAV) at a Marine Corps combat centre in California: the nodes were able to establish a time-synchronized, multi-hop communication network for tracking vehicles that passed along a dirt road. While this has practical relevance for some civil applications (such as rescue operations), a more realistic deployment involves the careful planning and placement of sensors. Even then, nodes may not be placed optimally to ensure that the network is fully connected and that high-quality data pertaining to the phenomena being monitored can be extracted from the network. This work aims to address the problem of random deployment through two complementary approaches. The first approach addresses the problem of random deployment from a communication perspective. It begins by establishing a comprehensive mathematical model to quantify the energy cost of the various concerns of a fully operational wireless sensor network. Based on the analytic model, an energy-efficient topology control protocol is developed. The protocol sets an eligibility metric to establish and maintain a multi-hop communication path and to ensure that all nodes exhaust their energy in a uniform manner. The second approach focuses on addressing the problem of imperfect sensing from a signal processing perspective. It investigates the impact of deployment errors (calibration, placement, and orientation errors) on the quality of the sensed data and attempts to identify robust and error-agnostic features. If random placement is unavoidable and dense deployment cannot be supported, robust and error-agnostic features enable one to recognize interesting events from erroneous or imperfect data.
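For flavour, here is a common first-order radio energy model of the kind such analytic cost models build on (a textbook-style sketch with assumed constants, not the model developed in this thesis):

# classic first-order radio energy model (assumed constants, illustration only)
E_ELEC = 50e-9      # J/bit, electronics energy per bit (assumed)
EPS_AMP = 100e-12   # J/bit/m^2, amplifier energy, free-space path loss (assumed)

def tx_energy(k_bits, d_m):
    # energy to transmit k bits over distance d with path-loss exponent 2
    return E_ELEC * k_bits + EPS_AMP * k_bits * d_m ** 2

def rx_energy(k_bits):
    return E_ELEC * k_bits

# two short hops vs. one long hop covering the same total distance
one_hop = tx_energy(4000, 100.0)
two_hops = 2 * tx_energy(4000, 50.0) + rx_energy(4000)  # the relay also receives
print(f"one 100 m hop: {one_hop*1e6:.1f} uJ, two 50 m hops: {two_hops*1e6:.1f} uJ")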
13

Li, Qiuju. "Statistical inference for joint modelling of longitudinal and survival data." Thesis, University of Manchester, 2014. https://www.research.manchester.ac.uk/portal/en/theses/statistical-inference-for-joint-modelling-of-longitudinal-and-survival-data(65e644f3-d26f-47c0-bbe1-a51d01ddc1b9).html.

Abstract:
In longitudinal studies, data collected within a subject or cluster are correlated by their very nature, and special care is needed to account for such correlation in the analysis of the data. Under the framework of longitudinal studies, three topics are discussed in this thesis. In chapter 2, the joint modelling of a multivariate longitudinal process consisting of different types of outcomes is discussed. In the large cohort study of the UK North Staffordshire osteoarthritis project, longitudinal trivariate outcomes of continuous, binary and ordinal data are observed at baseline, year 3 and year 6. Instead of analysing each process separately, joint modelling is proposed for the trivariate outcomes to account for the inherent association by introducing random effects and the covariance matrix G. The influence of the covariance matrix G on statistical inference for the fixed-effects parameters has been investigated within the Bayesian framework. The study shows that joint modelling of the multivariate longitudinal process reduces bias and provides more reliable results than modelling each process separately. Together with the longitudinal measurements taken intermittently, a counting process of events in time is often observed as well during a longitudinal study. It is of interest to investigate the relationship between time to event and the longitudinal process; on the other hand, measurements taken for the longitudinal process may be potentially truncated by terminal events, such as death. Thus, it may be crucial to jointly model the survival and longitudinal data. It is popular to propose a linear mixed-effects model for the longitudinal process of continuous outcomes and a Cox regression model for the survival data to characterize the relationship between time to event and the longitudinal process, under some standard assumptions. In chapter 3, we investigate the influence on statistical inference for survival data when the assumption of mutual independence of the random errors of the linear mixed-effects model for the longitudinal process is violated. The study is conducted using the conditional score estimation approach, which provides robust estimators and is computationally advantageous. A generalised sufficient statistic for the random effects is proposed to account for the correlation remaining among the random errors, which is characterized by the data-driven method of modified Cholesky decomposition. The simulation study shows that, by doing so, nearly unbiased estimation and efficient statistical inference can be achieved. Chapter 4 attempts to account for both the current and past information of the longitudinal process in the survival model of the joint modelling. In the last 15 to 20 years, it has been popular, even standard, to assume that the longitudinal process affects the counting process of events in time only through its current value, which, however, need not always be true, as recognised by investigators in more recent studies. An integral over the trajectory of the longitudinal process, along with a weighting curve, is proposed to account for both current and past information, to improve inference and to reduce the underestimation of the effects of the longitudinal process on the hazard. A plausible approach to statistical inference for the proposed models is developed in the chapter, along with a real data analysis and a simulation study.
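A small numerical sketch of the modified Cholesky decomposition mentioned above (my illustration): it factorizes a residual covariance matrix Sigma as T Sigma T' = D with T unit lower triangular, where the sub-diagonal entries of T have a regression interpretation (coefficients of each error on its predecessors), which is what makes the parameterisation convenient for modelling within-subject correlation.

import numpy as np

def modified_cholesky(Sigma):
    n = Sigma.shape[0]
    T = np.eye(n)
    D = np.zeros(n)
    D[0] = Sigma[0, 0]
    for j in range(1, n):
        # regression coefficients of error_j on errors_0..j-1
        phi = np.linalg.solve(Sigma[:j, :j], Sigma[:j, j])
        T[j, :j] = -phi
        D[j] = Sigma[j, j] - Sigma[:j, j] @ phi
    return T, np.diag(D)

# AR(1)-like covariance as a toy example
rho, n = 0.6, 4
Sigma = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
T, D = modified_cholesky(Sigma)
print(np.allclose(T @ Sigma @ T.T, D))  # True: T Sigma T' = D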
14

Decelle, Aurélien. "Statistical physics of disordered networks - Spin Glasses on hierarchical lattices and community inference on random graphs." PhD thesis, Université Paris Sud - Paris XI, 2011. http://tel.archives-ouvertes.fr/tel-00653375.

Abstract:
This thesis addresses fundamental and applied aspects of the theory of spin glasses and, more generally, of complex systems. The first theoretical models describing the glass transition appeared in the 1970s and described glasses by means of random interactions. Several years passed before a mean-field theory of these systems was understood. Nowadays a large number of models fall into the "mean-field" class and are well understood both analytically and numerically, thanks to tools such as Monte Carlo methods and the cavity method. On the other hand, it is well known that the renormalization group has so far failed to predict the behaviour of critical observables in glasses beyond mean field. We therefore chose to study systems with long-range interactions, for which it is still unknown whether their physics is identical to the mean-field one. In a first part, we show how easily a renormalization-group transformation can be described for ferromagnetic systems with long-range interactions defined on the hierarchical Dyson lattice. We then turn our attention to spin-glass models on the same lattice. A preliminary analysis of these real-space transformations is presented, together with a comparison of the measurement of the critical exponent ν by different methods. While the transformation described looks promising, it must still be improved before it can be considered a valid method for our system. We continued in the same direction by analysing a random-energy model, again using the topology of the hierarchical lattice. We studied this system numerically and observed the existence of an "entropy crisis" phase transition quite similar to that of Derrida's REM. However, our model differs from the latter in important ways, such as the non-analytic behaviour of the entropy at the transition and the emergence of "criticality", whose presence remains to be confirmed by further studies. We also show, using our numerical method, how the critical temperature of this system can be estimated in three different ways. In a final part, we address problems related to complex systems. It has recently been noticed that models studied in various fields, for instance physics, biology or computer science, are very close to one another. This is particularly true in combinatorial optimization, which has in part been studied with statistical-physics methods. These methods, coming from the theory of spin glasses and structural glasses, have been widely used to study the phase transitions occurring in such systems and to design new algorithms for these models. We studied the problem of inferring modules in networks with these same methods. We present an analysis of the detection of topological modules in random networks and demonstrate the presence of a phase transition between a region where these modules are undetectable and a region where they are detectable.
Moreover, we implemented for these problems an algorithm based on Belief Propagation that infers the modules and learns their properties using only the network structure as information. Finally, we applied this algorithm to networks built from real data and discuss further developments of our method.
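The detectability transition mentioned above has a closed form in the symmetric stochastic block model with q equal-size groups, intra- and inter-group affinities c_in and c_out, and average degree c = (c_in + (q-1) c_out)/q; in the notation of Decelle and co-authors, the planted modules can be inferred if and only if

\[
  \lvert c_{\mathrm{in}} - c_{\mathrm{out}} \rvert > q\,\sqrt{c},
\]

below this threshold the random graph carries no trace of the modules that Belief Propagation can exploit, and the algorithm fails to recover them.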
15

Beisler, Matthias Werner. "Modelling of input data uncertainty based on random set theory for evaluation of the financial feasibility for hydropower projects." Doctoral thesis, Technische Universitaet Bergakademie Freiberg Universitaetsbibliothek "Georgius Agricola", 2011. http://nbn-resolving.de/urn:nbn:de:bsz:105-qucosa-71564.

Abstract:
The design of hydropower projects requires a comprehensive planning process in order to achieve the objective of maximising the exploitation of the existing hydropower potential as well as the future revenues of the plant. For this purpose, and to satisfy approval requirements for a complex hydropower development, it is imperative at the planning stage that the conceptual development contemplate a wide range of influencing design factors and ensure appropriate consideration of all related aspects. Since the majority of technical and economic parameters required for detailed and final design cannot be precisely determined at early planning stages, crucial design parameters such as design discharge and hydraulic head have to be examined through an extensive optimisation process. One disadvantage inherent to commonly used deterministic analysis is the lack of objectivity in the selection of input parameters. Moreover, it cannot be ensured that the entire existing parameter ranges and all possible parameter combinations are covered. Probabilistic methods utilise discrete probability distributions or parameter input ranges to cover the entire range of uncertainties resulting from the information deficit during the planning phase, and integrate them into the optimisation by means of an alternative calculation method. The investigated method assists with the mathematical assessment and integration of uncertainties into the rational economic appraisal of complex infrastructure projects. The assessment includes an exemplary verification of the extent to which Random Set Theory can be utilised for the determination of input parameters relevant to the optimisation of hydropower projects, and evaluates possible improvements with respect to the accuracy and suitability of the calculated results.
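A toy sketch of the random set calculus involved (my construction with hypothetical focal intervals, not the thesis's model): focal intervals with basic probability masses are propagated through a simple hydropower output function, and the resulting focal elements give lower (belief) and upper (plausibility) bounds on a feasibility statement.

from itertools import product

# hypothetical focal elements: (interval, mass)
discharge = [((8.0, 12.0), 0.6), ((6.0, 14.0), 0.4)]   # design discharge, m^3/s
head      = [((45.0, 55.0), 0.7), ((40.0, 60.0), 0.3)] # hydraulic head, m

def power_MW(q, h, eta=0.85):              # P = rho * g * q * h * eta
    return 1000 * 9.81 * q * h * eta / 1e6

focal_out = []
for (qi, mq), (hi, mh) in product(discharge, head):
    # interval extension: the function is monotone in q and h on these ranges
    lo = power_MW(qi[0], hi[0])
    up = power_MW(qi[1], hi[1])
    focal_out.append(((lo, up), mq * mh))

threshold = 3.0  # MW, hypothetical feasibility threshold
belief = sum(m for (lo, up), m in focal_out if lo >= threshold)
plausibility = sum(m for (lo, up), m in focal_out if up >= threshold)
print(f"Bel(P >= {threshold} MW) = {belief:.2f}, Pl = {plausibility:.2f}")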
16

Olson, Brent. "Evaluating the error of measurement due to categorical scaling with a measurement invariance approach to confirmatory factor analysis." Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/332.

Abstract:
It has previously been determined that using 3 or 4 points on a categorized response scale will fail to produce a continuous distribution of scores. However, there is no evidence, thus far, revealing the number of scale points that may indeed possess an approximate or sufficiently continuous distribution. This study provides the evidence to suggest the level of categorization in discrete scales that makes them directly comparable to continuous scales in terms of their measurement properties. To do this, we first introduced a novel procedure for simulating discretely scaled data that was both informed and validated through the principles of the Classical True Score Model. Second, we employed a measurement invariance (MI) approach to confirmatory factor analysis (CFA) in order to directly compare the measurement quality of continuously scaled factor models to that of discretely scaled models. The simulated design conditions of the study varied with respect to item-specific variance (low, moderate, high), random error variance (none, moderate, high), and discrete scale categorization (number of scale points ranged from 3 to 101). A population analogue approach was taken with respect to sample size (N = 10,000). We concluded that there are conditions under which response scales with 11 to 15 scale points can reproduce the measurement properties of a continuous scale. Using response scales with more than 15 points may be, for the most part, unnecessary. Scales having from 3 to 10 points introduce a significant level of measurement error, and caution should be taken when employing such scales. The implications of this research and future directions are discussed.
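A minimal simulation in the spirit of the study's design (my illustration; the actual procedure is informed and validated through the Classical True Score Model and evaluated with a measurement invariance approach to CFA rather than raw correlations): simulate observed = true + error, discretize onto k-point scales, and watch the agreement with the continuous scores recover as k grows.

import numpy as np

rng = np.random.default_rng(3)
N = 10_000
true = rng.standard_normal(N)
error = rng.standard_normal(N) * 0.5          # moderate random error variance
continuous = true + error

def discretize(x, k):
    # k equally spaced categories over the observed range
    edges = np.linspace(x.min(), x.max(), k + 1)
    return np.digitize(x, edges[1:-1])        # category labels 0..k-1

for k in (3, 5, 11, 15, 101):
    d = discretize(continuous, k)
    r = np.corrcoef(d, continuous)[0, 1]
    print(f"{k:>3} scale points: corr with continuous scores = {r:.4f}")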
17

Горобей, О. О. "Інформаційна технологія комп'ютерного моделювання мікроклімату у теплицях". Master's thesis, Сумський державний університет, 2018. http://essuir.sumdu.edu.ua/handle/123456789/72186.

18

Forsblom, Findlay, and Lars Petter Ulvatne. "Snow depth measurements and predictions : Reducing environmental impact for artificial grass pitches at snowfall." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-96395.

Abstract:
Rubber granulates used at artificial grass pitches pose a threat to the environment when they leak into nature. As the granulates leak into the environment through rain water and snow clearances, they can be transported by rivers and later end up in marine life. Therefore, reducing snow clearances to a minimum is important: if the snow clearance problem is minimized or even eliminated, this will have a positive impact on the surrounding nature. The objective of this project is to propose a method for deciding when to remove snow and to automate the dissemination of information when a pitch is cleared or closed. This includes finding low-powered sensors to measure snow depth, finding a machine learning model to predict upcoming snow levels, and creating an application with a clear and easy-to-use interface to present weather information and disseminate information to the responsible persons. Controlled experiments are used to find the models and sensors suitable for this problem. The sensors are tested on a single snow quality, where ultrasonic and infrared sensors are found suitable; however, fabricated tests with newly fallen snow called into question the possibility of measuring snow depth with the ultrasonic sensor in the general case. Random Forest is presented as the machine learning model that predicts future snow levels with the highest accuracy. A survey indicates that the web application fulfils the intended functionality, with some improvements suggested.
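To illustrate the prediction component (with synthetic stand-in features, not the project's data), a random forest regressor of the kind selected above can be sketched as follows:

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 1000
temp = rng.normal(-2, 4, n)              # air temperature, deg C (synthetic)
precip = rng.exponential(1.0, n)         # precipitation, mm (synthetic)
depth_now = rng.uniform(0, 30, n)        # current snow depth, cm (synthetic)
# synthetic target: snow accumulates when cold and precipitating, else melts
depth_next = depth_now + np.where(temp < 0, precip, -0.5 * temp) + rng.normal(0, 1, n)

X = np.column_stack([temp, precip, depth_now])
X_tr, X_te, y_tr, y_te = train_test_split(X, depth_next, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"R^2 on held-out data: {model.score(X_te, y_te):.3f}")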
19

Albers, Tony. "Weak nonergodicity in anomalous diffusion processes." Doctoral thesis, Universitätsbibliothek Chemnitz, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-214327.

Abstract:
Anomalous diffusion is a widespread transport mechanism, which is usually investigated experimentally by ensemble-based methods. Motivated by the progress in single-particle tracking, where time averages are typically determined, the question of ergodicity arises: do ensemble-averaged quantities and time-averaged quantities coincide, and if not, in what way do they differ? In this thesis, we study different stochastic models of anomalous diffusion with respect to their ergodic or nonergodic behaviour concerning the mean-squared displacement. We start our study with integrated Brownian motion, which is of high importance for all systems showing momentum diffusion. For this process, we contrast the ensemble-averaged squared displacement with the time-averaged squared displacement and, in particular, characterize the randomness of the latter. In the second part, we map integrated Brownian motion to other models in order to gain deeper insight into the origin of the nonergodic behaviour. In doing so, we are led to a generalized Lévy walk, which reveals interesting phenomena that have not been observed in the literature before. Finally, we introduce a new tool for analyzing anomalous diffusion processes, the distribution of generalized diffusivities, which goes beyond the mean-squared displacement, and we use it to analyze an often-used model of anomalous diffusion, the subdiffusive continuous-time random walk.
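A small numerical sketch of the first comparison described above (my illustration): ensemble-averaged versus time-averaged squared displacement for integrated Brownian motion, where the spread of the time averages across trajectories is the signature of weak nonergodicity.

import numpy as np

rng = np.random.default_rng(5)
M, T, dt = 200, 10_000, 1.0              # trajectories, steps, time step
v = np.cumsum(rng.standard_normal((M, T)) * np.sqrt(dt), axis=1)  # Brownian velocities
x = np.cumsum(v * dt, axis=1)                                     # integrated positions

lag = 100
# ensemble average at time t = lag (over trajectories, from the origin)
emsd = np.mean(x[:, lag - 1] ** 2)
# time average along each single trajectory at the same lag, and its spread
tamsd = np.mean((x[:, lag:] - x[:, :-lag]) ** 2, axis=1)
print(f"ensemble MSD({lag}) = {emsd:.3g}")
print(f"time-averaged MSD({lag}): mean = {tamsd.mean():.3g}, "
      f"std over trajectories = {tamsd.std():.3g}  (differs from the ensemble value)")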
20

Peña, Monferrer Carlos. "Computational fluid dynamics multiscale modelling of bubbly flow. A critical study and new developments on volume of fluid, discrete element and two-fluid methods." Doctoral thesis, Universitat Politècnica de València, 2017. http://hdl.handle.net/10251/90493.

Abstract:
The study and modelling of two-phase flows, even the simplest ones such as bubbly flow, remains a challenge that requires exploring the physical phenomena at different spatial and temporal resolution levels. CFD (Computational Fluid Dynamics) is a widespread and promising modelling tool, but at present there is no single approach or method able to predict the dynamics of these systems at the different resolution levels with sufficient precision. The inherent difficulty of the events occurring in this flow, mainly those related to the interface between phases, means that approaches at low or intermediate resolution levels, such as system codes (RELAP, TRACE, ...) or the 3D TFM (Two-Fluid Model), have significant trouble reproducing acceptable results unless well-known scenarios and global values are considered. Conversely, methods at a high resolution level, such as the Interface Tracking Method (ITM) or Volume Of Fluid (VOF), require a computational effort that makes their use in complex systems unfeasible. In this thesis, an open-source simulation framework has been designed and developed using the OpenFOAM library to analyse cases from the microscale to the macroscale. The different approaches, and the information required by each of them, have been studied for bubbly flow. In the first part, the dynamics of single bubbles are examined at a high resolution level through VOF. This technique has yielded accurate results for bubble formation, terminal velocity, path, wake, and the instabilities produced by the wake. However, this approach is impractical for real scenarios with more than a few dozen bubbles. Alternatively, this thesis proposes a CFD Discrete Element Method (CFD-DEM) technique, in which each bubble is represented discretely. A novel solver for bubbly flow has been developed, including a large number of improvements necessary to reproduce bubble-bubble and bubble-wall interactions, turbulence, the velocity seen by the bubbles, the momentum and mass exchange terms over the cells, and bubble expansion, among others. New implementations, such as an algorithm to seed the bubbles in the system, have also been incorporated. As a result, this new solver gives more accurate results than those provided to date. Following the decrease in resolution level, and therefore in the required computational resources, a 3D TFM has been developed with a population balance equation solved by an implementation of the Quadrature Method Of Moments (QMOM). The solver is implemented with the same closure models as the CFD-DEM in order to analyse the effects of the loss of information due to the averaging of the instantaneous Navier-Stokes equations. The analysis of the results with CFD-DEM reveals the discrepancies introduced by assuming averaged values and homogeneous flow in the models of the classical TFM formulation. Finally, as the lowest resolution level approach, the system code RELAP5/MOD3 is used to model the bubbly flow regime. The code has been modified to properly reproduce the two-phase flow characteristics in vertical pipes, comparing the performance of drag term calculations based on drift-velocity and drag coefficient approaches.
El estudio y modelado de flujos bifásicos, incluso los más simples como el bubbly flow, sigue siendo un reto que conlleva aproximarse a los fenómenos físicos que lo rigen desde diferentes niveles de resolución espacial y temporal. El uso de códigos CFD (Computational Fluid Dynamics) como herramienta de modelado está muy extendida y resulta prometedora, pero hoy por hoy, no existe una única aproximación o técnica de resolución que permita predecir la dinámica de estos sistemas en los diferentes niveles de resolución, y que ofrezca suficiente precisión en sus resultados. La dificultad intrínseca de los fenómenos que allí ocurren, sobre todo los ligados a la interfase entre ambas fases, hace que los códigos de bajo o medio nivel de resolución, como pueden ser los códigos de sistema (RELAP, TRACE, etc.) o los basados en aproximaciones 3D TFM (Two-Fluid Model) tengan serios problemas para ofrecer resultados aceptables, a no ser que se trate de escenarios muy conocidos y se busquen resultados globales. En cambio, códigos basados en alto nivel de resolución, como los que utilizan VOF (Volume Of Fluid), requirieren de un esfuerzo computacional tan elevado que no pueden ser aplicados a sistemas complejos. En esta tesis, mediante el uso de la librería OpenFOAM se ha creado un marco de simulación de código abierto para analizar los escenarios desde niveles de resolución de microescala a macroescala, analizando las diferentes aproximaciones, así como la información que es necesaria aportar en cada una de ellas, para el estudio del régimen de bubbly flow. En la primera parte se estudia la dinámica de burbujas individuales a un alto nivel de resolución mediante el uso del método VOF (Volume Of Fluid). Esta técnica ha permitido obtener resultados precisos como la formación de la burbuja, velocidad terminal, camino recorrido, estela producida por la burbuja e inestabilidades que produce en su camino. Pero esta aproximación resulta inviable para entornos reales con la participación de más de unas pocas decenas de burbujas. Como alternativa, se propone el uso de técnicas CFD-DEM (Discrete Element Methods) en la que se representa a las burbujas como partículas discretas. En esta tesis se ha desarrollado un nuevo solver para bubbly flow en el que se han añadido un gran número de nuevos modelos, como los necesarios para contemplar los choques entre burbujas o con las paredes, la turbulencia, la velocidad vista por las burbujas, la distribución del intercambio de momento y masas con el fluido en las diferentes celdas por cada una de las burbujas o la expansión de la fase gaseosa entre otros. Pero también se han tenido que incluir nuevos algoritmos como el necesario para inyectar de forma adecuada la fase gaseosa en el sistema. Este nuevo solver ofrece resultados con un nivel de resolución superior a los desarrollados hasta la fecha. Siguiendo con la reducción del nivel de resolución, y por tanto los recursos computacionales necesarios, se efectúa el desarrollo de un solver tridimensional de TFM en el que se ha implementado el método QMOM (Quadrature Method Of Moments) para resolver la ecuación de balance poblacional. El solver se desarrolla con los mismos modelos de cierre que el CFD-DEM para analizar los efectos relacionados con la pérdida de información debido al promediado de las ecuaciones instantáneas de Navier-Stokes. El análisis de resultados de CFD-DEM permite determinar las discrepancias encontradas por considerar los valores promediados y el flujo homogéneo de los modelos clásicos de TFM. 
The study and modelling of two-phase flows, even of the simplest kind such as bubbly flow, remains a challenge that requires approaching the governing physical phenomena at different levels of spatial and temporal resolution. The use of CFD (Computational Fluid Dynamics) codes as a modelling tool is widespread and promising, but at present there is no single approach or solution technique able to predict the dynamics of these systems at the different levels of resolution while offering sufficient accuracy in its results. The intrinsic difficulties of the phenomena involved, above all those tied to the interface between the two phases, mean that codes with a low or medium level of resolution, such as system codes (RELAP, TRACE, etc.) or those based on 3D TFM (Two-Fluid Model) approaches, have serious problems delivering acceptable results, unless the scenario is very well known and only global results are sought. By contrast, codes based on a high level of resolution, such as those using VOF (Volume Of Fluid), require such a large computational effort that they cannot be applied to complex systems. In this thesis, an open-source simulation framework has been created using the OpenFOAM library to analyse scenarios at levels of resolution from microscale to macroscale, examining the different approaches, as well as the information that must be supplied for each of them, for the study of the bubbly flow regime. In the first part, the dynamics of individual bubbles is studied at a high level of resolution using the VOF method. This technique has yielded accurate results for bubble formation, terminal velocity, the path travelled, the wake produced by the bubble, and the instabilities it produces along its way. However, this approach is unfeasible for real environments involving more than a few tens of bubbles. As an alternative for that case, the use of CFD-DEM (Discrete Element Methods) techniques is proposed, in which bubbles are represented as discrete particles. In this thesis a new solver for bubbly flow has been developed to which a large number of new models have been added, such as those needed to account for collisions between bubbles or with the walls, turbulence, the velocity seen by the bubbles, the distribution of the momentum and mass exchange with the fluid over the different cells for each bubble, or gas-phase expansion models, among others. New algorithms also had to be included, such as the one needed to inject the gas phase into the system properly. This new solver offers results at a higher level of resolution than those developed to date. Continuing with the reduction of the resolution level, and therefore of the computational resources required, a three-dimensional TFM solver was developed in which the QMOM (Quadrature Method Of Moments) was implemented to solve the population balance equation. The solver is developed with the same closure models as the CFD-DEM one in order to analyse the effects related to the loss of information caused by averaging the instantaneous Navier-Stokes equations. The analysis of the CFD-DEM results makes it possible to determine the discrepancies caused by considering the averaged values and the homogeneous flow of the classical TFM models.
Finally, as the lowest-resolution approach, the use of system codes is analysed, employing the RELAP5/MOD3 code to examine the modelling of flow in the bubbly flow regime. The code is modified to correctly reproduce the characteristics of two-phase flow in vertical pipes, comparing the behaviour of approaches to the calculation of the drag term based on the drift-flux-model velocity with that of those based on coe…
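The drift-flux closure mentioned in the final paragraph can be illustrated with a few lines of arithmetic. This is a minimal sketch assuming the standard Zuber-Findlay form with invented parameter values; it is not code or data from the thesis.

# Minimal sketch (illustrative values, not the thesis code): the Zuber-Findlay
# drift-flux relation u_g = C0 * j + u_gj, from which the gas-liquid relative
# velocity entering a drag-type closure can be back-calculated.
C0, u_gj = 1.13, 0.24            # assumed distribution parameter and drift velocity (m/s)
alpha, j_l, j_g = 0.1, 1.0, 0.2  # assumed void fraction and superficial velocities (m/s)
j = j_l + j_g                    # total superficial velocity
u_g = C0 * j + u_gj              # gas velocity from the drift-flux correlation
u_l = j_l / (1.0 - alpha)        # liquid velocity from its superficial velocity
print(f"u_g = {u_g:.2f} m/s, relative velocity = {u_g - u_l:.2f} m/s")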
Peña Monferrer, C. (2017). Computational fluid dynamics multiscale modelling of bubbly flow. A critical study and new developments on volume of fluid, discrete element and two-fluid methods [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/90493
THESIS
21

Golder, Jacques. "Modélisation d'un phénomène pluvieux local et analyse de son transfert vers la nappe phréatique." PhD thesis, Université d'Avignon, 2013. http://tel.archives-ouvertes.fr/tel-01057725.

Full text of the source
Abstract:
As part of research on the quality of water resources, the study of the process of mass transfer from the soil to the groundwater table is a key element for understanding the pollution of the latter. Indeed, soluble pollutants at the surface (products linked to human activities such as fertilizers, pesticides, etc.) can migrate to the water table through the porous medium that is the soil. This pollution-transfer scenario rests on two phenomena: the rain that generates the mass of water at the surface, and the dispersion of that water through the porous medium. Mass dispersion in a natural porous medium such as soil is a vast and difficult research subject, both experimentally and theoretically. Its modelling is a concern of the EMMAH laboratory, in particular within the Sol Virtuel project, in which a transfer model (the PASTIS model) has been developed. Coupling this transfer model with an input model describing the random dynamics of rainfall is one of the objectives of this thesis. The thesis addresses this objective by relying on experimental observations on the one hand, and on modelling inspired by the analysis of the observational data on the other. The first part of the work is devoted to the development of a stochastic rainfall model. The choice and nature of the model are based on characteristics obtained from the analysis of rainfall-depth data collected over 40 years (1968-2008) at the INRA research centre in Avignon. To this end, the cumulative representation of precipitation is treated as a random walk in which the jumps and the waiting times between jumps are, respectively, the random amplitudes of rain events and the random durations between two occurrences of rain events. The probability law of the jumps (a log-normal law) and that of the waiting times between jumps (an alpha-stable law) are thus obtained by analysing the probability laws of the amplitudes and occurrences of rain events. We then show that this random-walk model tends towards a time-subordinated geometric Brownian motion (when the space and time steps of the walk tend simultaneously to zero while keeping a constant ratio), whose probability density is governed by a fractional Fokker-Planck equation (FFPE). Two approaches are then used to implement the model. The first is stochastic and relies on the link between the stochastic process arising from the Itô differential equation and the FFPE. The second uses a direct numerical solution obtained by discretizing the FFPE. In line with the main objective of the thesis, the second part of the work is devoted to analysing the contribution of rainfall to the fluctuations of the water table. This analysis is based on two simultaneous records of rainfall depths and of the water table over 14 months (February 2005 to March 2006). A statistical study of the links between the rainfall signal and the water-table fluctuations is conducted as follows: the water-table variation data are analysed and processed to isolate the fluctuations that are coherent with the rain events.
Furthermore, in order to take mass dispersion in the soil into account, the transport of the rainwater mass in the soil is modelled with a transfer code (the PASTIS model), to which we apply as input the measured rainfall-depth data. The model results make it possible, among other things, to estimate the soil water status at a given depth (here fixed at 1.6 m). A study of the correlation between this soil water status and the water-table fluctuations is then carried out, complementing the one described above, to illustrate the possibility of modelling the impact of rainfall on the fluctuations of the water table.
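A minimal simulation sketch of the rainfall model described above, assuming log-normal jump amplitudes and one-sided alpha-stable waiting times. All parameter values are illustrative, and scipy's levy_stable is used here in place of whatever implementation the thesis employs.

# Minimal sketch (illustrative assumptions, not the thesis code): cumulative
# rainfall as a random walk with log-normal jumps and one-sided alpha-stable
# waiting times between rain events.
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(0)
n_events = 1000
alpha = 0.7          # assumed stability index; 0 < alpha < 1 for a subordinator
# Maximally skewed (beta = 1) alpha-stable waiting times between rain events
waits = levy_stable.rvs(alpha, 1.0, loc=0.0, scale=1.0, size=n_events, random_state=rng)
waits = np.abs(waits)                    # guard against numerical negatives
jumps = rng.lognormal(mean=0.0, sigma=1.0, size=n_events)  # rain depths (mm, assumed)

t = np.cumsum(waits)                     # event times
cum_rain = np.cumsum(jumps)              # cumulative rainfall "random walk"
print(f"total depth {cum_rain[-1]:.1f} mm over {t[-1]:.1f} time units")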
22

Wu, Cheng-En, and 吳昌恩. "Random Walk Model for the Pollutant Transport of Continuous Point Source." Thesis, 1999. http://ndltd.ncl.edu.tw/handle/55579277267333427557.

Full text of the source
Abstract:
Master's thesis
Feng Chia University
Graduate Institute of Civil and Hydraulic Engineering
87 (ROC calendar year)
The midstream and downstream reaches of major and minor rivers in Taiwan have been seriously polluted, much of it by various continuous pollutant sources. This study uses the random walk method to set up a numerical model that simulates the transport of a continuous point source in rivers. The goals are to describe the simulation method, analyze the error of the simulation results, and simulate seawater intrusion in the Wu-Shi tidal reach as a practical example. A 1-D random walk numerical model for a continuous point source is successfully established, which can be run in a 1-D flow field with constant parameters. The study shows that the discretization error is minimized when DF = 0.20, and that the random error can be controlled by adjusting the number of particles released at the boundary and the size of the counting grid.
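A minimal sketch of the random-walk scheme the abstract describes, assuming a uniform 1-D flow field. Velocity, dispersion coefficient, and release rate are illustrative; the thesis' DF parameter and its boundary treatment are not reproduced here.

# Minimal sketch (assumptions, not the thesis code): 1-D random-walk particle
# tracking for a continuous point source in a uniform flow field.
import numpy as np

rng = np.random.default_rng(1)
u, D, dt = 0.5, 0.1, 1.0      # assumed velocity (m/s), dispersion (m^2/s), step (s)
n_steps, n_per_step, mass = 200, 100, 1.0   # particles released per step, mass each

x = np.empty(0)
for _ in range(n_steps):
    x = np.concatenate([x, np.zeros(n_per_step)])        # release at source x = 0
    x += u * dt + rng.normal(0.0, np.sqrt(2 * D * dt), x.size)  # advect + disperse

# Concentration: total mass in each counting cell divided by cell size
cells = np.arange(0, 150, 5.0)
counts, _ = np.histogram(x, bins=cells)
conc = counts * mass / np.diff(cells)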
23

Yeh, Ming-Kai, and 葉銘凱. "The study of using continuous time random walk model in contaminant transport modeling." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/76223852580153147071.

Full text of the source
Abstract:
Master's thesis
National Cheng Kung University
Department of Resources Engineering (master's and doctoral program)
95 (ROC calendar year)
Due to heterogeneities in the physical and chemical properties of porous media, contaminant plumes often exhibit non-Gaussian behavior. We propose a numerical model that simulates non-Gaussian distributions using the continuous time random walk (CTRW) method. The parameters required by the CTRW model are determined by fitting its results to those of the traditional advection-dispersion model. The proposed model is applied to the Borden site. Results show that CTRW describes the non-Gaussian behavior of solute transport well and is a better model for simulating contaminant transport.
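A minimal sketch of a CTRW simulation of the kind referred to above, assuming Pareto-tailed waiting times and Gaussian jumps. The tail exponent and jump statistics are illustrative, not fitted to the Borden site.

# Minimal CTRW sketch (illustrative assumptions): waiting times with a
# power-law tail psi(t) ~ t^-(1+beta) produce non-Gaussian, subdiffusive plumes.
import numpy as np

rng = np.random.default_rng(2)
n_particles, t_max, beta = 5000, 100.0, 0.6   # assumed tail exponent 0 < beta < 1
x = np.zeros(n_particles)                     # particle positions
t = np.zeros(n_particles)                     # particle clocks
active = np.ones(n_particles, dtype=bool)
while active.any():
    u = 1.0 - rng.random(active.sum())        # uniform in (0, 1]
    t[active] += u ** (-1.0 / beta)           # Pareto waiting times, minimum 1
    x[active] += rng.normal(0.1, 1.0, active.sum())  # drift plus dispersive jump
    active = t < t_max
print("5th/50th/95th percentiles:", np.percentile(x, [5, 50, 95]))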
24

Tsai, Yi-Po, and 蔡易珀. "A Study on The Random and Discrete Sampling Effect of Continuous-time Diffusion Model." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/05203511399785068856.

Full text of the source
Abstract:
Master's thesis
National Sun Yat-sen University
Department of Applied Mathematics
98 (ROC calendar year)
High-frequency financial data are not only discretely sampled in time; the time separating successive observations is often itself random. We review the paper of Aït-Sahalia and Mykland (2003), which measures the effects of discrete sampling, and of ignoring the randomness of the sampling, on estimating the MLE of a continuous-time diffusion model. In that article, three different assumptions (with a restriction in one of them) are made on the sampling intervals, and the corresponding likelihood functions, asymptotic normality, and covariance matrices are obtained. It is concluded that the effects due to discrete sampling are smaller than the effect of simply ignoring the randomness of the sampling. This study focuses on rechecking the results of Aït-Sahalia and Mykland (2003), including theory, simulation, and application. We derive a likelihood function expression different from Aït-Sahalia and Mykland's (2003) result; nevertheless, the asymptotic covariances of the two approaches are consistent for the O-U process. Furthermore, we conduct an empirical study of high-frequency transaction-time data using non-homogeneous Poisson processes.
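Since the comparison above centres on the O-U (Ornstein-Uhlenbeck) process observed at irregular times, the exact discrete-time likelihood is easy to state. A minimal sketch, assuming the parameterization dX = -kappa X dt + sigma dW; this is not the derivation used in the thesis, and all data here are simulated.

import numpy as np
from scipy.optimize import minimize

def ou_loglik(params, t, x):
    # Exact Gaussian transition density of the O-U process between
    # irregularly spaced observation times t[i].
    kappa, sigma = params
    if kappa <= 0 or sigma <= 0:
        return -np.inf
    dt = np.diff(t)
    a = np.exp(-kappa * dt)                       # conditional mean factor
    v = sigma**2 * (1.0 - a**2) / (2.0 * kappa)   # conditional variance
    r = x[1:] - a * x[:-1]
    return -0.5 * np.sum(np.log(2.0 * np.pi * v) + r**2 / v)

# Usage on simulated data with exponentially distributed (random) intervals
rng = np.random.default_rng(7)
t = np.cumsum(rng.exponential(0.5, size=500))
x = np.zeros_like(t)
for i in range(1, len(t)):
    a = np.exp(-(t[i] - t[i - 1]))                # true kappa = 1, sigma = 0.5
    x[i] = a * x[i - 1] + np.sqrt(0.25 * (1.0 - a**2) / 2.0) * rng.normal()
res = minimize(lambda p: -ou_loglik(p, t, x), x0=[0.5, 0.3], method="Nelder-Mead")
print("estimated (kappa, sigma):", res.x)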
25

CHEN, CHIA-HUNG, and 陳佳鴻. "2-D Random Walk Numerical Model for Riverine Pollutant Transport of Continuous Point Source." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/pmzng3.

Full text of the source
Abstract:
Master's thesis
Feng Chia University
Graduate Institute of Civil and Hydraulic Engineering
90 (ROC calendar year)
The midstream and downstream reaches of major and minor rivers in Taiwan have been seriously polluted, much of it by various continuous pollutant sources. This study uses the random walk method to set up a numerical model that simulates riverine pollutant transport from a continuous point source. The random walk method treats the mass released at each time step as being made up of thousands of particles. The released particles not only advect with the flow but also walk randomly due to the diffusion (or dispersion) effect. At any desired time, the concentration at a position may be obtained by dividing the total mass in the volume of interest by that volume. The position of every particle is stored by the computer at every time step; to save memory, the storage of a particle that flows out of the flow field is reused for a newly released particle. In this study, the model structure is discussed, and calculated results are compared with laboratory experimental data to analyze the dispersion mechanism in a meandering channel.
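A compact 2-D sketch of the particle-recycling idea described above; the domain, flow, and dispersion values are invented for illustration and do not come from the thesis.

# Minimal 2-D sketch (assumptions): particles that leave the reach are recycled
# as fresh source releases, which keeps the particle array at a fixed size.
import numpy as np

rng = np.random.default_rng(3)
u, Dx, Dy, dt, L = 0.5, 0.1, 0.02, 1.0, 100.0   # assumed flow, dispersion, reach length
p = np.zeros((2000, 2))                          # all particles start at the source
for _ in range(500):
    p[:, 0] += u * dt + rng.normal(0, np.sqrt(2 * Dx * dt), len(p))
    p[:, 1] += rng.normal(0, np.sqrt(2 * Dy * dt), len(p))
    out = p[:, 0] > L                            # left the flow field downstream
    p[out] = 0.0                                 # reuse their storage as new releases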
26

黃俊凱. "Random walk model for the pollutant transport of continuous point source in tidal river." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/05842304459298570869.

Full text of the source
27

廖學豐. "By using random yield model to establish a copper alloy stress-strain continuous curve." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/43083451145091038382.

Full text of the source
28

Wesolkowski, Slawomir Bogumil. "Stochastic Nested Aggregation for Images and Random Fields." Thesis, 2007. http://hdl.handle.net/10012/2998.

Full text of the source
Abstract:
Image segmentation is a critical step in building a computer vision algorithm that is able to distinguish between separate objects in an image scene. Image segmentation is based on two fundamentally intertwined components: pixel comparison and pixel grouping. In the pixel comparison step, pixels are determined to be similar or different from each other. In pixel grouping, those pixels which are similar are grouped together to form meaningful regions which can later be processed. This thesis makes original contributions to both of those areas. First, given a Markov Random Field framework, a Stochastic Nested Aggregation (SNA) framework for pixel and region grouping is presented and thoroughly analyzed using a Potts model. This framework is applicable in general to graph partitioning and discrete estimation problems where pairwise energy models are used. Nested aggregation reduces the computational complexity of stochastic algorithms such as Simulated Annealing to order O(N) while at the same time allowing local deterministic approaches such as Iterated Conditional Modes to escape most local minima in order to become a global deterministic optimization method. SNA is further enhanced by the introduction of a Graduated Models strategy which allows an optimization algorithm to converge to the model via several intermediary models. A well-known special case of Graduated Models is the Highest Confidence First algorithm which merges pixels or regions that give the highest global energy decrease. Finally, SNA allows us to use different models at different levels of coarseness. For coarser levels, a mean-based Potts model is introduced in order to compute region-to-region gradients based on the region mean and not edge gradients. Second, we develop a probabilistic framework based on hypothesis testing in order to achieve color constancy in image segmentation. We develop three new shading invariant semi-metrics based on the Dichromatic Reflection Model. An RGB image is transformed into an R'G'B' highlight invariant space to remove any highlight components, and only the component representing color hue is preserved to remove shading effects. This transformation is applied successfully to one of the proposed distance measures. The probabilistic semi-metrics show similar performance to vector angle on images without saturated highlight pixels; however, for saturated regions, as well as very low intensity pixels, the probabilistic distance measures outperform vector angle. Third, for interferometric Synthetic Aperture Radar image processing we apply the Potts model using SNA to the phase unwrapping problem. We devise a new distance measure for identifying phase discontinuities based on the minimum coherence of two adjacent pixels and their phase difference. As a comparison we use the probabilistic cost function of Carballo as a distance measure for our experiments.
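To make the Potts-model machinery above concrete, here is a minimal sketch of the pairwise Potts energy on a 4-connected grid and one ICM sweep, the local deterministic optimizer the abstract mentions. The data term and the SNA aggregation itself are omitted for brevity, and nothing here is taken from the thesis code.

# Minimal sketch (assumptions): pairwise Potts energy on a 4-connected image
# grid and one Iterated Conditional Modes (ICM) sweep over the smoothing prior.
import numpy as np

rng = np.random.default_rng(6)
labels = rng.integers(0, 3, (32, 32))          # 3-label segmentation, random init

def potts_energy(lab):
    # Count disagreeing 4-neighbour pairs (unit penalty per unlike pair)
    return np.sum(lab[1:, :] != lab[:-1, :]) + np.sum(lab[:, 1:] != lab[:, :-1])

# ICM sweep: set each pixel to the label minimizing its local disagreement,
# which for a pure Potts prior is a neighbourhood majority vote.
for i in range(labels.shape[0]):
    for j in range(labels.shape[1]):
        nbrs = [labels[a, b] for a, b in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                if 0 <= a < labels.shape[0] and 0 <= b < labels.shape[1]]
        labels[i, j] = max(set(nbrs), key=nbrs.count)
print("energy after one sweep:", potts_energy(labels))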
29

Stadler, Peter F., Wim Hordijk, and Jose F. Fontanari. "Phase transition and landscape statistics of the number partitioning problem." 2003. https://ul.qucosa.de/id/qucosa%3A31920.

Full text of the source
Abstract:
The phase transition in the number partitioning problem (NPP), i.e., the transition from a region in the space of control parameters in which almost all instances have many solutions to a region in which almost all instances have no solution, is investigated by examining the energy landscape of this classic optimization problem. This is achieved by coding the information about the minimum energy paths connecting pairs of minima into a tree structure, termed a barrier tree, the leaves and internal nodes of which represent, respectively, the minima and the lowest energy saddles connecting those minima. Here we apply several measures of shape (balance and symmetry) as well as of branch lengths (barrier heights) to the barrier trees that result from the landscape of the NPP, aiming at identifying traces of the easy-hard transition. We find that it is not possible to tell the easy regime from the hard one by visual inspection of the trees or by measuring the barrier heights. Only the difficulty measure, given by the maximum value of the ratio between the barrier height and the energy surplus of local minima, succeeded in detecting traces of the phase transition in the tree. In addition, we show that the barrier trees associated with the NPP are very similar to random trees, contrasting dramatically with trees associated with the p spin-glass and random energy models. We also examine critically a recent conjecture on the equivalence between the NPP and a truncated random energy model.
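For reference, the NPP cost function underlying the landscape above can be written down in a few lines; this is a generic sketch, not the authors' code. A configuration is a local minimum when no single spin flip lowers the energy.

# Minimal sketch (assumptions): the NPP cost E(s) = |sum_i s_i a_i| over spin
# configurations s_i = ±1; flipping spin i changes the signed sum by -2 s_i a_i.
import numpy as np

rng = np.random.default_rng(4)
a = rng.random(16)                         # one random instance
s = rng.choice([-1.0, 1.0], size=a.size)   # a candidate partition
S = np.dot(s, a)
E = abs(S)                                 # partition imbalance (the energy)
flips = np.abs(S - 2 * s * a)              # energy after flipping each spin
print("E =", E, "local minimum:", np.all(flips >= E))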
30

Dargie, Waltenegus. "Impact of Random Deployment on Operation and Data Quality of Sensor Networks." Doctoral thesis, 2009. https://tud.qucosa.de/id/qucosa%3A25278.

Full text of the source
Abstract:
Several applications have been proposed for wireless sensor networks, including habitat monitoring, structural health monitoring, pipeline monitoring, and precision agriculture. Among the desirable features of wireless sensor networks, one is the ease of deployment. Since the nodes are capable of self-organization, they can be placed easily in areas that are otherwise inaccessible to or impractical for other types of sensing systems. In fact, some have proposed the deployment of wireless sensor networks by dropping nodes from a plane, delivering them in an artillery shell, or launching them via a catapult from onboard a ship. There are also reports of actual aerial deployments, for example the one carried out using an unmanned aerial vehicle (UAV) at a Marine Corps combat centre in California -- the nodes were able to establish a time-synchronized, multi-hop communication network for tracking vehicles that passed along a dirt road. While this has a practical relevance for some civil applications (such as rescue operations), a more realistic deployment involves the careful planning and placement of sensors. Even then, nodes may not be placed optimally to ensure that the network is fully connected and high-quality data pertaining to the phenomena being monitored can be extracted from the network. This work aims to address the problem of random deployment through two complementary approaches: The first approach aims to address the problem of random deployment from a communication perspective. It begins by establishing a comprehensive mathematical model to quantify the energy cost of various concerns of a fully operational wireless sensor network. Based on the analytic model, an energy-efficient topology control protocol is developed. The protocol sets eligibility metric to establish and maintain a multi-hop communication path and to ensure that all nodes exhaust their energy in a uniform manner. The second approach focuses on addressing the problem of imperfect sensing from a signal processing perspective. It investigates the impact of deployment errors (calibration, placement, and orientation errors) on the quality of the sensed data and attempts to identify robust and error-agnostic features. If random placement is unavoidable and dense deployment cannot be supported, robust and error-agnostic features enable one to recognize interesting events from erroneous or imperfect data.
31

Barrias, Diogo Ferreira. "As condições para a implementação de melhoria contínua : caso de estudo da EDP Distribuição." Master's thesis, 2020. http://hdl.handle.net/10400.14/31924.

Full text of the source
Abstract:
This master's final assignment (TFM) studies, at the theoretical level, how to proceed when implementing the continuous improvement (CI) management methodology in an organisation, using the company EDP Distribuição (EDP D) as a case study, since this had never been done before in the organisation. To this end, the study uses a quantitative methodology: a survey based on 8 behaviours related to a model that categorises the maturity of the methodology into 5 different levels. In addition, a review of the literature on the factors that condition CI is carried out. Both the survey and the review aim to answer the research question "What are the necessary conditions for the implementation of CI?". The questionnaire shows that the company still needs to develop behaviour 1 ("Understand CI") and behaviour 2 ("Develop habits of CI") in order to fully attain the 2nd maturity level, although it already shows relatively good results in the remaining behaviours. The steps to follow to reach the remaining levels are also analysed, so that the model also serves as a map for implementing the methodology. From the analysis of EDP D's case, it is concluded that implementing CI requires the definition of a CI leader, clarification of the organisational structure, training and education, incentives for involvement in the initiative, initial treatment of the simpler processes, and the establishment and communication of metrics and an implementation strategy.
32

Karmakar, Smarajit. "Numerical Studies Of Slow Dynamics And Glass Transition In Model Liquids." Thesis, 2009. https://etd.iisc.ac.in/handle/2005/633.

Full text of the source
Abstract:
An increase in the co-operativity in the motion of particles and a growth of a suitably defined dynamical correlation length seem to be generic features exhibited by all liquids upon supercooling. These features have been observed both in experiments and in numerical simulations of glass-forming liquids. Specially designed NMR experiments have estimated that the rough magnitude of this correlation length is of the order of a few nanometers near the glass transition. Simulations also predict that there are regions in the system which are more liquid-like than other regions. A complete theoretical understanding of this behaviour is not available at present. In recent calculations, Berthier, Biroli and coworkers [1, 2] extended the simple mode coupling theory (MCT) to incorporate the effects of dynamic heterogeneity and predicted the existence of a growing dynamical correlation length associated with the cooperativity of the dynamics. MCT also predicts a power law divergence of different dynamical quantities at the mode coupling temperature and at temperatures somewhat higher than the mode coupling temperature, these predictions are found to be consistent with experimental and simulation results. The system size dependence of these quantities should exhibit finite size scaling (FSS) similar to that observed near a continuous phase transition in the temperature range where they show power law growth. Hence we have used the method of finite size scaling in the context of the dynamics of supercooled liquids. In chapter 2, we present the results of extensive molecular dynamics simulations of a model glass forming liquid and extract a dynamical correlation length ξ associated with dynamic heterogeneity by performing a detailed finite size scaling analysis of a four-point dynamic susceptibility χ4(t) [3] and the associated Binder cumulant. We find that although these quantities show the “normal” finite size scaling behaviour expected for a system with a growing correlation length, the relaxation time τ does not. Thus glassy dynamics can not be fully understood in terms of “standard” critical phenomena. Inspired by the success of the empirical Adam-Gibbs relation [4] which relates dynamics with the configurational entropy, we have calculated the configurational entropy for different system sizes and temperatures to explain the nontrivial scaling behaviour of the relaxation time. We find that the behaviour of the relaxation time τ can be explained in terms of the Adam-Gibbs relation [4] for all temperatures and system sizes. This observation raises serious questions about the validity of the mode coupling theory which does not include the effects of the potential energy (or free energy) landscape on the dynamics. On the other hand, in the “random first order transition” theory (RFOT), introduced by Wolynes and coworkers [5], the configurational entropy plays a central role in determining the dynamics. So we also tried to explain our simulation results in terms of RFOT. However, this interpretation has the drawback that the value of one of the exponents of this theory extracted from our numerical results does not satisfy an expected physical bound, and there is no clear explanation for the obtained values of other exponents. Thus we find puzzling values for the exponents relevant to the applicability of RFOT, which are in need of explanation. 
This can be due to the fact that RFOT focuses only near the glass transition, while all our simulation results are for temperatures far above the glass transition temperature (actually, above the mode coupling temperature). Interestingly, results similar to ours were obtained in a recent analysis [6] of experimental data near the laboratory glass transition, on a large class of glass-forming materials. Thus right now we do not have any theory which can explain our simulation data consistently from all perspectives. There have been some attempts to extend the RFOT analysis to temperatures above the mode coupling temperature [7, 8] and to estimate a length scale associated with the configurational entropy at such temperatures. We compare our results with the predictions arising from these analyses. In chapter 3, we present simulation results that suggest that finite size scaling analysis is probably the only feasible method for obtaining reliable estimates of the dynamical correlation length for supercooled liquids. As mentioned before, although there exists a growing correlation length, the behaviour of all measured quantities (specifically, the relaxation time) is not in accordance with the behaviour expected in “standard” critical phenomena. So one might suspect the results for the correlation length extracted from the scaling analysis. To find out whether the results obtained by doing finite size scaling are correct, we have done simulations of very large system sizes for the same model glass forming liquid. In earlier studies, the correlation length has been extracted from the wave vector dependence of the dynamic susceptibility in the limit of zero wave vector, but to estimate the correlation length with reasonable accuracy one needs data in the small wave vector range. This implies that one needs to simulate very large systems. But as far as we know, in all previous studies typical system sizes of the order of 10,000 particles have been used to do this analysis. In this chapter we show by comparing results for systems of 28,000 and 350,000 particles that these previous estimates are not reliable. We also show that one needs to simulate systems with at least a million particles to estimate the correlation length correctly near the mode coupling temperature, and this size increases with decreasing temperature. We compare the correlation length obtained by analyzing the wave vector dependence of the dynamic susceptibility for a 350,000-particle system with the results obtained from the finite size scaling analysis. We were only able to compare the results in the high temperature range, for obvious reasons. However, the agreement in the high temperature range shows that the finite size scaling analysis is robust and also establishes the fact that finite size scaling is the only practical method to extract reliable correlation lengths in supercooled liquids. In chapter 4, we present a free energy landscape analysis of dynamic heterogeneity for a monodisperse hard sphere system. The importance of the potential energy landscape for particles interacting with soft potentials is well known in the glass community from the work of Sastry et al. [9] and others, but the hard sphere system, which does not have any well defined potential energy landscape, also exhibits similar slow dynamics in the high density limit. Thus it is not clear how to treat the hard sphere systems within the same energy landscape formalism. Dasgupta et al.
[10, 11, 12, 13, 14, 15] showed that one can explain the slow dynamics of these hard-core systems in terms of a free energy landscape picture. They and other researchers showed that these systems have many aperiodic local minima in their free energy landscape, with free energy lower than that of the liquid. Using the Ramakrishnan-Yussouff free energy functional, we have performed multi-parameter variational minimizations to map out the detailed density distribution of glassy free energy minima. We found that the distribution of the widths of local density peaks at glassy minima is spatially heterogeneous. By performing hard sphere event-driven molecular dynamics simulations, we show that there exists a strong correlation between these density inhomogeneities and the local Debye-Waller factor, which provides a measure of the dynamic heterogeneity observed in simulations. This result unifies the system of hard-core particles with the other soft-core particles in terms of a landscape-based description of dynamic heterogeneity. In chapter 5, we extend the same free energy analysis to a polydisperse system and show that there is a critical polydispersity beyond which the crystal state is not stable and glassy states are thermodynamically stable. We also found a reentrant behaviour in the liquid-solid phase transition within this free-energy based formalism. These results are in qualitative agreement with experimental observations for colloidal systems.
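For orientation, the two relations invoked in this abstract can be written in a standard schematic form (the notation below is assumed, not taken from the thesis):

% Finite-size scaling ansatz for the four-point susceptibility (schematic,
% with N^{1/3} the linear size of a 3D system and \xi the dynamical length)
\chi_4(N, T) \;=\; \chi_4^{\infty}(T)\, \mathcal{F}\!\left( \frac{N^{1/3}}{\xi(T)} \right)
% Adam-Gibbs relation between relaxation time and configurational entropy S_c
\tau \;=\; \tau_0 \exp\!\left( \frac{A}{T\, S_c(T)} \right)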
33

Karmakar, Smarajit. "Numerical Studies Of Slow Dynamics And Glass Transition In Model Liquids." Thesis, 2009. http://hdl.handle.net/2005/633.

Full text of the source
34

Beisler, Matthias Werner. "Modelling of input data uncertainty based on random set theory for evaluation of the financial feasibility for hydropower projects." Doctoral thesis, 2010. https://tubaf.qucosa.de/id/qucosa%3A22775.

Full text of the source
Abstract:
The design of hydropower projects requires a comprehensive planning process in order to maximise exploitation of the existing hydropower potential as well as the future revenues of the plant. For this purpose, and to satisfy the approval requirements for a complex hydropower development, it is imperative at the planning stage that the conceptual development contemplate a wide range of influencing design factors and ensure appropriate consideration of all related aspects. Since the majority of technical and economic parameters required for detailed and final design cannot be precisely determined at early planning stages, crucial design parameters such as design discharge and hydraulic head have to be examined through an extensive optimisation process. One disadvantage inherent in commonly used deterministic analysis is the lack of objectivity in the selection of input parameters; moreover, it cannot be ensured that the entire existing parameter ranges and all possible parameter combinations are covered. Probabilistic methods utilise discrete probability distributions or parameter input ranges to cover the entire range of uncertainties resulting from the information deficit of the planning phase, and integrate them into the optimisation by means of an alternative calculation method. The investigated method assists with the mathematical assessment and integration of uncertainties into the rational economic appraisal of complex infrastructure projects. The assessment includes an exemplary verification of the extent to which random set theory can be utilised to determine input parameters relevant to the optimisation of hydropower projects, and evaluates possible improvements with respect to the accuracy and suitability of the calculated results.
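A minimal sketch of the random-set idea described above, assuming a finite random set (focal intervals with probability masses) pushed through a monotone model. The intervals, masses, and revenue function are invented for illustration and are not the thesis' data.

# Minimal sketch (illustrative assumptions): propagating a finite random set
# through a monotone function gives lower/upper bounds on the expected outcome.
focal = [((0.8, 1.2), 0.5), ((1.0, 1.6), 0.3), ((0.6, 1.0), 0.2)]  # (interval, mass)
revenue = lambda q: 100.0 * q     # assumed monotone model: discharge -> revenue
lower = sum(m * revenue(lo) for (lo, hi), m in focal)   # lower expectation
upper = sum(m * revenue(hi) for (lo, hi), m in focal)   # upper expectation
print(f"expected revenue bounded in [{lower:.1f}, {upper:.1f}]")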
35

Jain, Rohit. "Anomalous Diffusion in a Rearranging Medium: Diffusing Diffusivity Models." Thesis, 2017. http://etd.iisc.ac.in/handle/2005/4151.

Full text of the source
Abstract:
Diffusion processes, because of their applications to a wide range of phenomena, have been a subject of great scientific interest ever since Einstein formulated the celebrated theory of Brownian motion. Brownian motion is the most commonly known class of diffusion and is the dominant form of molecular transport in physical systems driven by thermal noise, e.g. the dissolution of sugar in water. It is also the simplest case of a random process, where it is assumed that the time scale of motion of the diffusing particle is much larger than that of the solvent molecules. This causes an extreme separation of time scales: one associated with the slower diffusing particle, and the other associated with the faster solvent molecules. This in turn leads to two fundamental laws of Brownian motion: (1) the mean square displacement (MSD) of the particle is proportional to the time elapsed, i.e. ⟨x²⟩ ∝ T, usually referred to as Fickian motion, and (2) the probability distribution function (pdf) of displacements is Gaussian, with the width of the distribution scaling as √T (which is equivalent to saying that the motion is Fickian). However, there are many other diffusion processes which cannot be classified as Brownian motion and hence are termed anomalous diffusion. A diffusion process can be termed anomalous if either one or both of the laws of Brownian motion are violated. There are many phenomena in which diffusion is anomalous, i.e. where the pdf is not Gaussian but a stable distribution with a functional form f(|x|/T^(α/2)), such that the width of the distribution increases like T^(α/2) with α ≠ 1. Brownian motion, on the other hand, leads to a Gaussian distribution with α = 1. In the past, it was usually assumed that if α ≠ 1, i.e. if the diffusion is non-Fickian, then the distribution would also be non-Gaussian; conversely, if α = 1, then the distribution would be Gaussian. This was so well accepted that it was almost never tested until recently. In a series of experiments from Granick's group [1, 2], where the environment undergoes structural rearrangement on a time scale smaller than the time over which diffusion is observed, non-Gaussian distributions have been realized. Even more interestingly, coexisting with this non-Gaussian distribution was an MSD found to vary linearly in time at all times, irrespective of the actual form of the distribution. In these experiments, the pdf was found to be exponential at short times, crossing over to Gaussian at large enough time scales. Chubynsky and Slater [3] have analyzed the "diffusing diffusivity" model, in which the diffusion coefficient changes as a stochastic function of time because of the rearrangement of the environment. Assuming an exponential distribution of diffusivity at small time scales, these authors showed analytically that (1) the diffusion is Fickian and (2) the distribution of displacements, after averaging the Gaussian pdf over the exponential distribution of diffusivity, becomes non-Gaussian (exponential). The width of this non-Gaussian distribution increases as √T. For larger time scales they performed simulations, and the result was a crossover to a Gaussian distribution. Following their work, we have proposed a class of "diffusing diffusivity" models which we have been able to solve analytically at all time scales using the methods of path integrals [4].
In the thesis, we are interested in developing models of diffusing diffusivity that could be used to describe different kinds of anomalous diffusion processes. We show that our model of diffusing diffusivity is equivalent to another important class of physical processes, namely Brownian motion with absorption, or the reaction-diffusion process. In reaction-diffusion models, the concentration of a chemical substance changes in space and time because of its reaction with another substance, while diffusion causes the spread in the concentrations of the various substances. The connection of the diffusing diffusivity model to the reaction-diffusion model is particularly useful, as one can now have different models of diffusivity describing its diffusion while, interestingly, the reaction term remains unchanged. In our first model, diffusivity is modeled as a simple Brownian process. More precisely, we take D(t) = ξ²(t), where ξ is the position vector of an n-dimensional harmonic oscillator executing Brownian motion. For the case n = 2, the equilibrium distribution of diffusivity is an exponential, making this particular case an ideal choice for comparing our results with the numerical results of Chubynsky and Slater [3]. We have shown that our results are in very good agreement with theirs [5]. Further, our model is quite generic, and it is possible to find an exact analytical solution for an arbitrary value of n. The non-Gaussianity parameter, which is a measure of the deviation from normality, has been evaluated exactly as a function of time and n. At short times, the value of the parameter is non-zero, signifying non-Gaussian dynamics, and it eventually becomes zero in the large time limit, marking the onset of Gaussian dynamics. For larger values of n, the non-Gaussianity starts disappearing faster, implying an earlier onset of Gaussian behavior. The model has been applied to the problem of calculating the survival probability of a free particle in crowded, rearranging and bounded regions. We have obtained exact results for this problem, showing that for larger compartments and faster relaxation of the surroundings, diffusion inside a crowded, rearranging medium is similar to diffusion in a homogeneous medium with a constant diffusivity. We have also studied the model for the rotational diffusion process. We have obtained simple analytical expressions for the probability distribution and the mean square angular displacement in arbitrary dimensions. Just as, for translational diffusion, a non-Gaussianity parameter quantifies the extent of deviation from Gaussian dynamics, we have defined in a similar fashion a non-normal parameter for rotational diffusion. This could be useful in analyzing experimental data to find the extent of deviation from normal diffusion. In another study, we have used the model of diffusing diffusivity for the diffusion of a harmonic oscillator in a crowded, rearranging environment. We have obtained two interesting results here, namely (1) the expression for the MSD in the case of diffusing diffusivity is of the same kind as that for the case of constant diffusivity, and (2) the probability distribution function remains non-Gaussian even in the limit of very large time, unlike the previous cases where it eventually crosses over to become Gaussian. In our model of diffusivity, and also in the model of Chubynsky and Slater [3], the distribution of diffusivity decays to zero exponentially fast, implying that the probability of having a large value of D is rather small.
However, there are cases where the distribution of D is broad, so that D can occasionally take a large value with sizable probability. We have analyzed a model of diffusivity in which it evolves as a Lévy flight process. The pdf obtained with this model is found to be a stable distribution with a time-dependent width. The width of the distribution increases as √T, as in the case of Fickian dynamics, but at longer times it increases at a much faster rate, as T^(1/(2α)). Thus, the dynamics is Fickian at short times and superdiffusive at long times. After studying the models in which the diffusivity evolves as a Brownian process and as a Lévy flight process, respectively, we have also studied a model in which the diffusivity evolves as a subdiffusive process. For that we have modeled the diffusivity as a continuous time random walk (CTRW) process such that it attains an exponential distribution in the equilibrium limit. This model is actually a generalization of our first model of diffusing diffusivity, with a parameter α ∈ (0, 1]. The problem of diffusing diffusivity is, in this case, shown to be equivalent to a class of models known as reaction-subdiffusion systems. We have analyzed two such models of reaction-subdiffusion. With both models we recover all the results of our first model of diffusivity if α = 1. Within the first model, the MSD is found to increase linearly in time at all time scales and for all values of α ∈ (0, 1], thereby confirming Fickian dynamics. Although the probability distribution function also becomes Gaussian in the limit of very large time for all values of α, as in our first model of diffusing diffusivity, the evolution of the pdf from a non-Gaussian function to a Gaussian is a very slow process: the smaller the value of α, the slower the transition from non-Gaussian to Gaussian dynamics. The second model leads to subdiffusive dynamics in position space; the MSD here is shown to increase as T^α, with a non-Gaussian pdf at all time scales.
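A minimal simulation sketch of the first diffusing diffusivity model described above, with D(t) = ξ(t)² and ξ a two-dimensional Ornstein-Uhlenbeck process. Time step, rates, and ensemble size are illustrative, and the thesis itself proceeds analytically via path integrals rather than by simulation.

# Minimal sketch (assumptions): "diffusing diffusivity" with D(t) = xi(t)^2,
# xi a 2-D Ornstein-Uhlenbeck process; the MSD stays Fickian while the pdf of
# displacements is non-Gaussian (positive excess kurtosis) at short times.
import numpy as np

rng = np.random.default_rng(5)
n, dt, steps, walkers = 2, 0.01, 2000, 4000
xi = rng.normal(0, 1, (walkers, n))    # initialized from a unit Gaussian
x = np.zeros(walkers)
for _ in range(steps):
    xi += -xi * dt + rng.normal(0, np.sqrt(dt), (walkers, n))  # OU (Euler) update
    D = np.sum(xi**2, axis=1)          # instantaneous diffusivity of each walker
    x += rng.normal(0, np.sqrt(2 * D * dt))
print("MSD:", np.mean(x**2), "excess kurtosis:", np.mean(x**4) / np.mean(x**2)**2 - 3)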
36

Persechino, Roberto. "Le modèle GREM jumelé à un champ magnétique aléatoire." Thèse, 2018. http://hdl.handle.net/1866/21150.

Full text of the source