Selected scientific literature on the topic "Stochastic processes with large dimension"


Journal articles on the topic "Stochastic processes with large dimension"

1

Panos, Aristeidis, Petros Dellaportas, and Michalis K. Titsias. "Large scale multi-label learning using Gaussian processes". Machine Learning 110, no. 5 (April 14, 2021): 965–87. http://dx.doi.org/10.1007/s10994-021-05952-5.

Abstract:
We introduce a Gaussian process latent factor model for multi-label classification that can capture correlations among class labels by using a small set of latent Gaussian process functions. To address computational challenges, when the number of training instances is very large, we introduce several techniques based on variational sparse Gaussian process approximations and stochastic optimization. Specifically, we apply doubly stochastic variational inference that sub-samples data instances and classes which allows us to cope with Big Data. Furthermore, we show it is possible and beneficial to optimize over inducing points, using gradient-based methods, even in very high dimensional input spaces involving up to hundreds of thousands of dimensions. We demonstrate the usefulness of our approach on several real-world large-scale multi-label learning problems.
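The doubly stochastic sub-sampling idea described in this abstract (mini-batching over both data instances and class labels, so each update cost is independent of both counts) can be sketched as follows. This is an illustrative logistic-regression stand-in, not the authors' Gaussian-process code; the names `stochastic_step`, `batch_n`, and `batch_c` are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: instances, classes, input features.
N, C, D = 10_000, 500, 20
W = np.zeros((C, D))  # stand-in for the variational/model parameters

def stochastic_step(X, Y, W, lr=0.1, batch_n=64, batch_c=32):
    """One doubly stochastic update: sub-sample both instances and classes,
    so the per-step cost is independent of N and C."""
    i = rng.choice(len(X), size=batch_n, replace=False)       # instance mini-batch
    j = rng.choice(W.shape[0], size=batch_c, replace=False)   # class mini-batch
    Xb, Yb = X[i], Y[np.ix_(i, j)]
    p = 1.0 / (1.0 + np.exp(-(Xb @ W[j].T)))                  # per-class probabilities
    grad = (p - Yb).T @ Xb / batch_n                          # stochastic gradient
    W[j] -= lr * grad * (W.shape[0] / batch_c)                # rescale for class sub-sampling
    return W
```

The rescaling factor `W.shape[0] / batch_c` keeps the class-sub-sampled gradient unbiased in expectation, which is the property that makes the scheme work.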
2

Feitzinger, J. V. "Star Formation in the Large Magellanic Cloud". Symposium - International Astronomical Union 115 (1987): 521–33. http://dx.doi.org/10.1017/s0074180900096315.

Abstract:
Methods used in pattern recognition and cluster analysis are applied to investigate the spatial distribution of the star-forming regions. The fractal dimension of these structures is deduced. The new 21 cm, radio continuum (1.4 GHz) and IRAS surveys reveal scale structures of 700 pc to 1500 pc identical with the optically identified star-forming sites. The morphological structures delineated by young stars reflect physical parameters which determine the star formation in this galaxy. The formation of spiral arm filaments is understandable in terms of stochastic self-propagating star formation processes.
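The fractal dimension of a spatial point pattern, as deduced for the star-forming regions here, is commonly estimated by box counting; the abstract does not specify the paper's exact method, so the following is only a generic sketch:

```python
import numpy as np

def box_counting_dimension(points, sizes):
    """Estimate the box-counting (fractal) dimension of a 2-D point set.

    Counts occupied boxes N(s) for each box size s and fits
    log N(s) ~ -D log s by least squares; returns the estimate D.
    """
    points = np.asarray(points, dtype=float)
    points = points - points.min(axis=0)          # shift into the positive quadrant
    counts = []
    for s in sizes:
        occupied = np.unique(np.floor(points / s), axis=0)
        counts.append(len(occupied))
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope
```

For a uniformly filled planar region the estimate approaches 2, while points along a curve give values near 1; clustered star-forming sites fall in between.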
3

FRICKE, THOMAS, and DIETMAR WENDT. "THE MARKOFF AUTOMATON: A NEW ALGORITHM FOR SIMULATING THE TIME-EVOLUTION OF LARGE STOCHASTIC DYNAMIC SYSTEMS". International Journal of Modern Physics C 06, no. 02 (April 1995): 277–306. http://dx.doi.org/10.1142/s0129183195000216.

Abstract:
We describe a new algorithm for simulating complex Markoff processes. We have used a reaction-cell method in order to simulate arbitrary reactions. It can be used for any kind of reaction-diffusion system (RDS) on arbitrary topologies, including fractal dimensions or configurations not related to any spatial geometry. The events within a single cell are managed by an event handler which has been implemented independently of the system studied. The method is exact on the Markoff level, including the correct treatment of finite numbers of molecules. To demonstrate its properties, we apply it to very simple reaction-diffusion systems. The chemical equations [Formula: see text] and [Formula: see text] in 1 to 4 dimensions serve as models for systems whose dynamics on an intermediate time scale are governed by fluctuations. We compare our results to the analytic approach by the scaling ansatz. The simulations confirm the exponents of the A+B system within statistical errors, including the logarithmic corrections in dimension d=2. The method is able to simulate the crossover from the reaction- to the diffusion-limited regime, which is defined by a crossover time depending on the system size.
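An event-driven simulation that is exact on the Markov level for finite molecule numbers, as the abstract claims for the Markoff automaton, can be illustrated with a bare-bones Gillespie-style loop for the single annihilation reaction A + B → 0. This is a didactic stand-in, far simpler than the reaction-cell method itself:

```python
import numpy as np

rng = np.random.default_rng(1)

def gillespie_annihilation(nA, nB, rate, t_max):
    """Exact stochastic simulation of the reaction A + B -> 0.

    Draws exponentially distributed waiting times from the total
    propensity and fires one reaction per event, so finite-number
    fluctuations are treated exactly (no mean-field approximation).
    """
    t, history = 0.0, [(0.0, nA, nB)]
    while nA > 0 and nB > 0 and t < t_max:
        a = rate * nA * nB                 # total propensity of the single channel
        t += rng.exponential(1.0 / a)      # waiting time to the next event
        nA, nB = nA - 1, nB - 1            # fire the reaction
        history.append((t, nA, nB))
    return history
```

With several reaction channels one would additionally sample which channel fires, proportionally to its propensity; the paper's contribution is organizing such events efficiently across many cells.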
4

Mazzolo, Alain, and Cécile Monthus. "Conditioning diffusion processes with killing rates". Journal of Statistical Mechanics: Theory and Experiment 2022, no. 8 (August 1, 2022): 083207. http://dx.doi.org/10.1088/1742-5468/ac85ea.

Abstract:
When the unconditioned process is a diffusion subject to a space-dependent killing rate k(x⃗), various conditioning constraints can be imposed for a finite time horizon T. We first analyze the conditioned process when one imposes both the surviving distribution at time T and the killing distribution for the intermediate times t ∈ [0, T]. When the conditioning constraints are less detailed than these full distributions, we construct the appropriate conditioned processes via the optimization of the dynamical large deviations at level 2.5 in the presence of the conditioning constraints that one wishes to impose. Finally, we describe various conditioned processes for the infinite horizon T → +∞. This general construction is then applied to two illustrative examples in order to generate stochastic trajectories satisfying various types of conditioning constraints: the first example concerns the pure diffusion in dimension d with the quadratic killing rate k(x⃗) = γx⃗², while the second example is the Brownian motion with uniform drift subject to the delta killing rate k(x) = kδ(x) localized at the origin x = 0.
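The unconditioned process of the first example (pure diffusion with quadratic killing rate k(x⃗) = γ|x⃗|²) can be simulated naively with Euler-Maruyama steps plus Bernoulli killing. This sketch assumes a small fixed time step and is not the authors' conditioned construction; the function name `diffuse_with_killing` is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)

def diffuse_with_killing(x0, gamma, T, dt=1e-3):
    """Simulate a Brownian particle with killing rate k(x) = gamma * |x|^2.

    At each step the particle is killed with probability k(x) dt
    (valid for k(x) dt << 1), otherwise it takes a Brownian increment.
    Returns the trajectory and whether the particle survived to time T.
    """
    x = np.asarray(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(int(T / dt)):
        if rng.random() < gamma * np.dot(x, x) * dt:     # killing event
            return traj, False
        x = x + np.sqrt(dt) * rng.standard_normal(x.shape)
        traj.append(x.copy())
    return traj, True
```

Conditioning on survival (or on a prescribed killing distribution), as the paper does, amounts to reweighting or transforming such trajectories rather than simulating them directly.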
5

Honkonen, Juha. "Fractional Stochastic Field Theory". EPJ Web of Conferences 173 (2018): 01005. http://dx.doi.org/10.1051/epjconf/201817301005.

Abstract:
Models describing evolution of physical, chemical, biological, social and financial processes are often formulated as differential equations with the understanding that they are large-scale equations for averages of quantities describing intrinsically random processes. Explicit account of randomness may lead to significant changes in the asymptotic behaviour (anomalous scaling) in such models especially in low spatial dimensions, which in many cases may be captured with the use of the renormalization group. Anomalous scaling and memory effects may also be introduced with the use of fractional derivatives and fractional noise. Construction of renormalized stochastic field theory with fractional derivatives and fractional noise in the underlying stochastic differential equations and master equations and the interplay between fluctuation-induced and built-in anomalous scaling behaviour is reviewed and discussed.
6

Jonckheere, Matthieu, and Seva Shneer. "Stability of Multi-Dimensional Birth-and-Death Processes with State-Dependent 0-Homogeneous Jumps". Advances in Applied Probability 46, no. 1 (March 2014): 59–75. http://dx.doi.org/10.1239/aap/1396360103.

Abstract:
We study the conditions for positive recurrence and transience of multi-dimensional birth-and-death processes describing the evolution of a large class of stochastic systems, a typical example being the randomly varying number of flow-level transfers in a telecommunication wire-line or wireless network. First, using an associated deterministic dynamical system, we provide a generic method to construct a Lyapunov function when the drift is a smooth function on ℝ^N. This approach gives an elementary and direct proof of ergodicity. We also provide instability conditions. Our main contribution consists of showing how discontinuous drifts change the nature of the stability conditions and of providing generic sufficient stability conditions having a simple geometric interpretation. These conditions turn out to be necessary (outside a negligible set of the parameter space) for piecewise constant drifts in dimension two.
7

Jonckheere, Matthieu, and Seva Shneer. "Stability of Multi-Dimensional Birth-and-Death Processes with State-Dependent 0-Homogeneous Jumps". Advances in Applied Probability 46, no. 01 (March 2014): 59–75. http://dx.doi.org/10.1017/s0001867800006935.

Abstract:
We study the conditions for positive recurrence and transience of multi-dimensional birth-and-death processes describing the evolution of a large class of stochastic systems, a typical example being the randomly varying number of flow-level transfers in a telecommunication wire-line or wireless network. First, using an associated deterministic dynamical system, we provide a generic method to construct a Lyapunov function when the drift is a smooth function on ℝ^N. This approach gives an elementary and direct proof of ergodicity. We also provide instability conditions. Our main contribution consists of showing how discontinuous drifts change the nature of the stability conditions and of providing generic sufficient stability conditions having a simple geometric interpretation. These conditions turn out to be necessary (outside a negligible set of the parameter space) for piecewise constant drifts in dimension two.
8

Anantharam, Venkat, and François Baccelli. "Capacity and Error Exponents of Stationary Point Processes under Random Additive Displacements". Advances in Applied Probability 47, no. 1 (March 2015): 1–26. http://dx.doi.org/10.1239/aap/1427814578.

Abstract:
Consider a real-valued discrete-time stationary and ergodic stochastic process, called the noise process. For each dimension n, we can choose a stationary point process in ℝ^n and a translation invariant tessellation of ℝ^n. Each point is randomly displaced, with a displacement vector being a section of length n of the noise process, independent from point to point. The aim is to find a point process and a tessellation that minimizes the probability of decoding error, defined as the probability that the displaced version of the typical point does not belong to the cell of this point. We consider the Shannon regime, in which the dimension n tends to ∞, while the logarithm of the intensity of the point processes, normalized by dimension, tends to a constant. We first show that this problem exhibits a sharp threshold: if the sum of the asymptotic normalized logarithmic intensity and of the differential entropy rate of the noise process is positive, then the probability of error tends to 1 with n for all point processes and all tessellations. If it is negative then there exist point processes and tessellations for which this probability tends to 0. The error exponent function, which denotes how quickly the probability of error goes to 0 in n, is then derived using large deviations theory. If the entropy spectrum of the noise satisfies a large deviations principle, then, below the threshold, the error probability goes exponentially fast to 0 with an exponent that is given in closed form in terms of the rate function of the noise entropy spectrum. This is obtained for two classes of point processes: the Poisson process and a Matérn hard-core point process. New lower bounds on error exponents are derived from this for Shannon's additive noise channel in the high signal-to-noise ratio limit that hold for all stationary and ergodic noises with the above properties and that match the best known bounds in the white Gaussian noise case.
9

Anantharam, Venkat, and François Baccelli. "Capacity and Error Exponents of Stationary Point Processes under Random Additive Displacements". Advances in Applied Probability 47, no. 01 (March 2015): 1–26. http://dx.doi.org/10.1017/s0001867800007679.

Abstract:
Consider a real-valued discrete-time stationary and ergodic stochastic process, called the noise process. For each dimension n, we can choose a stationary point process in ℝ^n and a translation invariant tessellation of ℝ^n. Each point is randomly displaced, with a displacement vector being a section of length n of the noise process, independent from point to point. The aim is to find a point process and a tessellation that minimizes the probability of decoding error, defined as the probability that the displaced version of the typical point does not belong to the cell of this point. We consider the Shannon regime, in which the dimension n tends to ∞, while the logarithm of the intensity of the point processes, normalized by dimension, tends to a constant. We first show that this problem exhibits a sharp threshold: if the sum of the asymptotic normalized logarithmic intensity and of the differential entropy rate of the noise process is positive, then the probability of error tends to 1 with n for all point processes and all tessellations. If it is negative then there exist point processes and tessellations for which this probability tends to 0. The error exponent function, which denotes how quickly the probability of error goes to 0 in n, is then derived using large deviations theory. If the entropy spectrum of the noise satisfies a large deviations principle, then, below the threshold, the error probability goes exponentially fast to 0 with an exponent that is given in closed form in terms of the rate function of the noise entropy spectrum. This is obtained for two classes of point processes: the Poisson process and a Matérn hard-core point process. New lower bounds on error exponents are derived from this for Shannon's additive noise channel in the high signal-to-noise ratio limit that hold for all stationary and ergodic noises with the above properties and that match the best known bounds in the white Gaussian noise case.
10

Dulfan, Anna, and Iryna Voronko. "Features of Spatial-Temporal Hierarchical Structures Formation". Lighting Engineering & Power Engineering 60, no. 2 (October 29, 2021): 66–70. http://dx.doi.org/10.33042/2079-424x.2021.60.2.03.

Abstract:
The degree of ordering of the structure of technologically important materials, formed as a result of the evolution of complex physicochemical systems, determines their physical properties, in particular optical ones. The primary task for the theoretical study of methods for obtaining materials with predetermined physical properties is therefore to develop approaches for describing the evolution of fractal (scale-invariant) objects during the formation of self-similar structures in systems exhibiting chaotic behavior. The paper develops a picture of the evolutionary processes in materials formed through stochastic processes. It is established that introducing an ultrametric on the time space makes it possible to characterize the duration of the evolutionary process by a fractal dimension, which is computed either theoretically or by modelling. The description of evolutionary processes in a condensed medium accompanied by topological transformations is significantly supplemented by a method for describing the stages of structure evolution, which makes it possible to analyze a wide range of materials and to control their properties, primarily optical. It is shown that the most scale-invariant structures, owing to the investigated properties, can be used as information carriers. It is demonstrated that the presence of a fractal temporal dimension in physical systems generates a self-similar evolutionary tree (consisting of parts that are, in a sense, similar to the whole object), which in turn generates spatial objects of non-integer dimension observed in real situations. On the other hand, temporal fractality allows the analysis of systems with dynamic chaos, leading to universal relaxation functions. In particular, in systems with a scale-invariant distribution of relaxation characteristics, an algebraic relaxation law emerges, which leads to rheological models and equations of state characterized by fractional derivatives.
It is argued that the fractal dimension of time hierarchies stores information that determines the process of self-organization. The ideas developed in the paper about the processes by which the structure of materials is built, which lead to the fractal geometry of objects, can be used to predict their properties, in particular optical ones.

Theses / dissertations on the topic "Stochastic processes with large dimension"

1

Bastide, Dorinel-Marian. "Handling derivatives risks with XVAs in a one-period network model". Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASM027.

Abstract:
Finance regulators require banking institutions to be able to conduct regular scenario analyses to assess their resistance to various shocks (stress tests) of their exposures, in particular towards clearing houses (CCPs) to which they are largely exposed, by applying market shocks to capture market risk and economic shocks driving some financial players into bankruptcy, known as the default state, to reflect both credit and counterparty risks. By interposing themselves between financial actors, one of the main purposes of CCPs is to limit counterparty risk due to contractual payment failures caused by one or several defaults among the engaged parties. They also facilitate the various financial flows of trading activities even in the event of default of one or more of their members, by re-arranging certain positions and allocating any loss that could materialize following these defaults to the surviving members. To develop a relevant view of risks and ensure effective capital-steering tools, it is essential for banks to have the capacity to comprehensively understand the losses and liquidity needs caused by these various shocks within these financial networks, as well as an understanding of the underlying mechanisms. This thesis project aims at tackling the modelling issues raised by these different needs, which are at the heart of risk-management practices for banks in centrally cleared trading environments. We begin by defining a one-period static model reflecting the heterogeneous market positions and possible joint defaults of multiple financial players, be they members of CCPs or other financial participants, to identify the different costs, known as XVAs, generated by both clearing and bilateral activities, with explicit formulas for these costs. Various use cases of this modelling framework are illustrated with stress-test exercises on financial networks from a member's point of view, and with the novation of the portfolios of defaulted CCP members to the other surviving members.
Fat-tailed distributions are favoured to generate portfolio losses and defaults, with the application of very large-dimension Monte-Carlo methods along with numerical uncertainty quantification. We also expand on the novation aspects of the portfolios of defaulted members and the associated transfers of XVA costs. These novations can be carried out either on the marketplaces (exchanges) or by the CCPs themselves, by identifying the optimal buyers or by conducting auctions of defaulted positions with dedicated economic equilibrium problems. Failures of members on several common CCPs also lead to the formulation and resolution of multidimensional risk-transfer optimization problems that are introduced in this thesis.
2

Jones, Elinor Mair. "Large deviations of random walks and Lévy processes". Thesis, University of Manchester, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.491853.

3

Suzuki, Kohei. "Convergence of stochastic processes on varying metric spaces". 京都大学 (Kyoto University), 2016. http://hdl.handle.net/2433/215281.

4

Kuwada, Kazumasa. "On large deviations for current-valued processes induced from stochastic line integrals". 京都大学 (Kyoto University), 2004. http://hdl.handle.net/2433/147585.

5

Hoshaw-Woodard, Stacy. "Large sample methods for analyzing longitudinal data in rehabilitation research /". free to MU campus, to others for purchase, 1999. http://wwwlib.umi.com/cr/mo/fullcit?p9946263.

6

Löhr, Wolfgang. "Models of Discrete-Time Stochastic Processes and Associated Complexity Measures". Doctoral thesis, Universitätsbibliothek Leipzig, 2010. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-38267.

Abstract:
Many complexity measures are defined as the size of a minimal representation in a specific model class. One such complexity measure, which is important because it is widely applied, is statistical complexity. It is defined for discrete-time, stationary stochastic processes within a theory called computational mechanics. Here, a mathematically rigorous, more general version of this theory is presented, and abstract properties of statistical complexity as a function on the space of processes are investigated. In particular, weak-* lower semi-continuity and concavity are shown, and it is argued that these properties should be shared by all sensible complexity measures. Furthermore, a formula for the ergodic decomposition is obtained. The same results are also proven for two other complexity measures that are defined by different model classes, namely process dimension and generative complexity. These two quantities, and also the information theoretic complexity measure called excess entropy, are related to statistical complexity, and this relation is discussed here. It is also shown that computational mechanics can be reformulated in terms of Frank Knight's prediction process, which is of both conceptual and technical interest. In particular, it allows for a unified treatment of different processes and facilitates topological considerations. Continuity of the Markov transition kernel of a discrete version of the prediction process is obtained as a new result.
7

Löhr, Wolfgang. "Models of Discrete-Time Stochastic Processes and Associated Complexity Measures". Doctoral thesis, Max Planck Institut für Mathematik in den Naturwissenschaften, 2009. https://ul.qucosa.de/id/qucosa%3A11017.

Abstract:
Many complexity measures are defined as the size of a minimal representation in a specific model class. One such complexity measure, which is important because it is widely applied, is statistical complexity. It is defined for discrete-time, stationary stochastic processes within a theory called computational mechanics. Here, a mathematically rigorous, more general version of this theory is presented, and abstract properties of statistical complexity as a function on the space of processes are investigated. In particular, weak-* lower semi-continuity and concavity are shown, and it is argued that these properties should be shared by all sensible complexity measures. Furthermore, a formula for the ergodic decomposition is obtained. The same results are also proven for two other complexity measures that are defined by different model classes, namely process dimension and generative complexity. These two quantities, and also the information theoretic complexity measure called excess entropy, are related to statistical complexity, and this relation is discussed here. It is also shown that computational mechanics can be reformulated in terms of Frank Knight's prediction process, which is of both conceptual and technical interest. In particular, it allows for a unified treatment of different processes and facilitates topological considerations. Continuity of the Markov transition kernel of a discrete version of the prediction process is obtained as a new result.
8

Kubasch, Madeleine. "Approximation of stochastic models for epidemics on large multi-level graphs". Electronic Thesis or Diss., Institut polytechnique de Paris, 2024. https://theses.hal.science/tel-04717689.

Abstract:
We study an SIR model with two levels of mixing, namely a uniformly mixing global level, and a local level with two layers of household and workplace contacts, respectively. More precisely, we aim at proposing reduced models which approximate the epidemic dynamics at hand well, while being more amenable to mathematical analysis and/or numerical exploration. We investigate the epidemic impact of the workplace size distribution. Our simulation study shows that if the average workplace size is kept fixed, the variance of the workplace size distribution is a good indicator of its influence on key epidemic outcomes. In addition, this allows us to design an efficient teleworking strategy. Next, we demonstrate that a deterministic, uniformly mixing SIR model calibrated using the epidemic growth rate yields a parsimonious approximation of the household-workplace model. However, the accuracy of this reduced model deteriorates over time and lacks theoretical guarantees. Hence, we study the large population limit of the stochastic household-workplace model, which we formalize as a measure-valued process with continuous state space. In a general setting, we establish convergence to the unique deterministic solution of a measure-valued equation. In the case of exponentially distributed infectious periods, a stronger reduction to a finite-dimensional dynamical system is obtained. Further, in order to gain finer insight into the impact of the model parameters on the performance of both reduced models, we perform a sensitivity study. We show that the large population limit of the household-workplace model can approximate the epidemic well even if some assumptions on the contact network are relaxed. Similarly, we quantify the impact of epidemic parameters on the capacity of the uniformly mixing reduced model to predict key epidemic outcomes. Finally, we consider density-dependent population processes in general.
We establish a many-to-one formula which reduces the typical lineage of a sampled individual to a time-inhomogeneous spinal process. In addition, we use a coupling argument to quantify the large population convergence of a spinal process.
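The uniformly mixing reduced SIR model discussed in the abstract can be sketched as a standard continuous-time stochastic simulation; this is a generic illustration (not the thesis's household-workplace model), and the function name `sir_gillespie` is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)

def sir_gillespie(S, I, beta, gamma, t_max):
    """Continuous-time stochastic SIR with uniform mixing.

    At each event, either an infection (rate beta*S*I/N) or a recovery
    (rate gamma*I) occurs; waiting times are exponential in the total rate.
    Assumes no recovered individuals initially, so N = S + I.
    """
    N = S + I
    t, out = 0.0, [(0.0, S, I)]
    while I > 0 and t < t_max:
        inf_rate = beta * S * I / N
        rec_rate = gamma * I
        total = inf_rate + rec_rate
        t += rng.exponential(1.0 / total)
        if rng.random() < inf_rate / total:
            S, I = S - 1, I + 1     # infection event
        else:
            I -= 1                  # recovery event
        out.append((t, S, I))
    return out
```

Calibrating the deterministic analogue of this model on the early exponential growth rate, as the thesis does, is what makes such a coarse model a useful stand-in for the detailed two-level dynamics.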
Estilos ABNT, Harvard, Vancouver, APA, etc.
9

De Oliveira Gomes, André. "Large Deviations Studies for Small Noise Limits of Dynamical Systems Perturbed by Lévy Processes". Doctoral thesis, Humboldt-Universität zu Berlin, 2018. http://dx.doi.org/10.18452/19118.

Texto completo da fonte
Resumo:
Die vorliegende Dissertation beschäftigt sich mit der Anwendung der Theorie der großen Abweichungen auf verschiedene Fragestellungen der stochastischen Analysis und stochastischen Dynamik von Sprungprozessen. Die erste Fragestellung behandelt die erste Austrittszeit aus einem beschränkten Gebiet für eine bestimmte Klasse von Sprungdiffusionen mit exponentiell leichten Sprüngen. In Abhängigkeit von der Leichtheit des Sprungmaßes wird das asymptotische Verhalten der Verteilung und insbesondere der Erwartung der ersten Austrittszeit bestimmt, wenn das Rauschen verschwindet. Dabei folgt die Verteilung der ersten Austrittszeit einem Prinzip der großen Abweichungen im Falle eines superexponentiellen Sprungmaßes, wohingegen im subexponentiellen Fall die Verteilung einem Prinzip moderater Abweichungen genügt. In beiden Fällen wird die Asymptotik bestimmt durch eine deterministische Größe, die den minimalen Energieaufwand beschreibt, um die Sprungdiffusion einen optimalen Kontrollpfad, der zum Austritt führt, folgen zu lassen. Die zweite Fragestellung widmet sich dem Grenzverhalten gekoppelter Vorwärts-Rückwärtssysteme stochastischer Differentialgleichungen bei kleinem Rauschen. Dazu assoziiert ist eine spezielle Klasse nicht-lokaler partieller Differentialgleichungen, die auch in nicht-lokalen Modellen der Fluiddynamik eine Rolle spielen. Mithilfe eines probabilistischen Ansatzes und der Markovschen Struktur dieser Systeme wird die Konvergenz auf Ebene von Viskositätslösungen untersucht. Dabei wird ein Prinzip der großen Abweichungen für die involvierten stochastischen Prozesse hergeleitet.
This thesis deals with applications of Large Deviations Theory to different problems of Stochastic Dynamics and Stochastic Analysis concerning jump processes. The first problem we address is the first exit time from a fixed bounded domain for a certain class of exponentially light jump diffusions. According to the lightness of the jump measure of the driving process, we derive, as the source of the noise vanishes, the asymptotic behavior of the law and of the expected value of the first exit time. In the super-exponential regime the law of the first exit time follows a large deviations scale, and in the sub-exponential regime it follows a moderate deviations one. In both regimes the first exit time is understood, in the small noise limit, in terms of a deterministic quantity that encodes the minimal energy the jump diffusion needs to spend in order to follow an optimal controlled path that leads to the exit. The second problem that we analyze is the small noise limit of a certain class of coupled forward-backward systems of Stochastic Differential Equations. Associated with these stochastic objects are nonlinear nonlocal Partial Differential Equations that arise as nonlocal toy models of Fluid Dynamics. Using a probabilistic approach and the Markov nature of these systems, we study the convergence at the level of viscosity solutions and we derive a large deviations principle for the laws of the stochastic processes that are involved.
Estilos ABNT, Harvard, Vancouver, APA, etc.
10

Chinnici, Marta. "Stochastic self-similar processes and large scale structures". Tesi di dottorato, 2008. http://www.fedoa.unina.it/1993/1/Chinnici_Scienze_Computazionali.pdf.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.

Livros sobre o assunto "Stochastic processes with large dimension"

1

Tzafestas, S. G., e Keigo Watanabe, eds. Stochastic large-scale engineering systems. New York: M. Dekker, 1992.

Encontre o texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
2

Bosq, Denis. Inference and prediction in large dimensions. Hoboken, NJ: John Wiley, 2007.

Encontre o texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
3

Girko, V. L. Statistical analysis of observations of increasing dimension. Dordrecht: Kluwer Academic Publishers, 1995.

Encontre o texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
4

Kurtz, Thomas G., ed. Large deviations for stochastic processes. Providence, R.I.: American Mathematical Society, 2006.

Encontre o texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
5

Assing, Sigurd. Continuous strong Markov processes in dimension one: A stochastic calculus approach. Berlin: Springer, 1998.

Encontre o texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
6

Wentzell, A. D. Limit Theorems on Large Deviations for Markov Stochastic Processes. Dordrecht: Springer Netherlands, 1990. http://dx.doi.org/10.1007/978-94-009-1852-8.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
7

Wentzell, Alexander D. Limit theorems on large deviations for Markov stochastic processes. Dordrecht: Kluwer Academic Publishers, 1990.

Encontre o texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
8

Wentzell, A. D. Limit Theorems on Large Deviations for Markov Stochastic Processes. Dordrecht: Springer Netherlands, 1990.

Encontre o texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
9

Vaart, A. W. van der. Statistical estimation in large parameter spaces. Amsterdam: CWI, 1987.

Encontre o texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
10

Vaart, A. W. van der. Statistical estimation in large parameter spaces. Amsterdam, Netherlands: Centre for Mathematics and Computer Science, 1988.

Encontre o texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.

Capítulos de livros sobre o assunto "Stochastic processes with large dimension"

1

Athreya, K. B., e A. Vidyashankar. "Large Deviation Results for Branching Processes". In Stochastic Processes, 7–12. New York, NY: Springer New York, 1993. http://dx.doi.org/10.1007/978-1-4615-7909-0_2.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
2

Monrad, Ditlev, e Loren D. Pitt. "Local Nondeterminism and Hausdorff Dimension". In Seminar on Stochastic Processes, 1986, 163–89. Boston, MA: Birkhäuser Boston, 1987. http://dx.doi.org/10.1007/978-1-4684-6751-2_12.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
3

Feng, Jin, e Thomas Kurtz. "Large deviations for stochastic processes". In Mathematical Surveys and Monographs, 57–76. Providence, Rhode Island: American Mathematical Society, 2006. http://dx.doi.org/10.1090/surv/131/04.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
4

Schinazi, Rinaldo B. "Asymmetric and Higher Dimension Random Walks". In Classical and Spatial Stochastic Processes, 67–80. New York, NY: Springer New York, 2014. http://dx.doi.org/10.1007/978-1-4939-1869-0_4.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
5

Bakry, Dominique. "Ricci Curvature and Dimension for Diffusion Semigroups". In Stochastic Processes and their Applications, 21–31. Dordrecht: Springer Netherlands, 1990. http://dx.doi.org/10.1007/978-94-009-2117-7_2.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
6

Orey, Steven. "Large Deviations in Ergodic Theory". In Seminar on Stochastic Processes, 1984, 195–249. Boston, MA: Birkhäuser Boston, 1986. http://dx.doi.org/10.1007/978-1-4684-6745-1_12.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
7

Le Jan, Y. "Hausdorff dimension for the statistical equilibrium of stochastic flows". In Stochastic Processes — Mathematics and Physics, 201–7. Berlin, Heidelberg: Springer Berlin Heidelberg, 1986. http://dx.doi.org/10.1007/bfb0080218.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
8

Talagrand, Michel. "The Ultimate Matching Theorem in Dimension ≥3". In Upper and Lower Bounds for Stochastic Processes, 475–513. Berlin, Heidelberg: Springer Berlin Heidelberg, 2014. http://dx.doi.org/10.1007/978-3-642-54075-2_15.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
9

Talagrand, Michel. "The Ultimate Matching Theorem in Dimension ≥3". In Upper and Lower Bounds for Stochastic Processes, 561–603. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-82595-9_18.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
10

Arnold, Ludwig, e Petra Boxler. "Stochastic bifurcation: instructive examples in dimension one". In Diffusion Processes and Related Problems in Analysis, Volume II, 241–55. Boston, MA: Birkhäuser Boston, 1992. http://dx.doi.org/10.1007/978-1-4612-0389-6_10.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.

Trabalhos de conferências sobre o assunto "Stochastic processes with large dimension"

1

Sastry, A. M., C. W. Wang e L. Berhan. "Deformation and Failure in Stochastic Fibrous Networks: Scale, Dimension and Application". In ASME 2000 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2000. http://dx.doi.org/10.1115/imece2000-2667.

Texto completo da fonte
Resumo:
Abstract Simulating local deformation and failure in stochastic fibrous materials is of interest in a number of key materials technologies, including papers and filters, electrochemical substrates, and biomaterials. The local initiators of both deformation and damage are of key technological interest, as they govern the properties of networks and allow rational design of networks once variance in global properties is reasonably predicted. Here, we examine several key microphenomena associated with failure of these networks and map the loading and environmental conditions under which each dominates. These mechanisms include tensile and bending failures, failures due to stress risers resulting from the way in which particles are bonded, and failures due to morphological changes (such as corrosion processes in batteries). We also describe stochastic approaches for particle/fiber network generation, including the tracking of key geometric features. We show that single-parameter distribution functions rarely capture the real morphological variability seen in engineered microstructures, and we suggest a methodology for more robust descriptions. We also present results of both large- and small-scale simulations of failure progression, and describe the importance of scale effects in numerical methods.
Estilos ABNT, Harvard, Vancouver, APA, etc.
2

Wang, Yan. "Simulating Stochastic Diffusions by Quantum Walks". In ASME 2013 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2013. http://dx.doi.org/10.1115/detc2013-12739.

Texto completo da fonte
Resumo:
The stochastic differential equation (SDE) and the Fokker-Planck equation (FPE) are two general approaches to describing stochastic drift-diffusion processes. Solving SDEs relies on Monte Carlo sampling of individual system trajectories, whereas FPEs describe the time evolution of the overall distribution via path-integral-like methods. The large state space and the required small step size are the major challenges to computational efficiency in solving the FPE numerically. In this paper, a generic continuous-time quantum walk formulation is developed to simulate stochastic diffusion processes. Stochastic diffusion in a one-dimensional state space is modeled as the dynamics of an imaginary-time quantum system. The proposed quantum computational approach also drastically accelerates the path integrals with large step sizes. The new approach is compared with the traditional path integral method and with Monte Carlo trajectory sampling.
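As background to the comparison made in this abstract, the Monte Carlo trajectory-sampling baseline for an SDE can be sketched with an Euler–Maruyama discretization (a minimal illustration; the process and parameter values below are assumptions for demonstration, not taken from the paper):

```python
import numpy as np

def euler_maruyama(drift, diffusion, x0, t_end, n_steps, n_paths, seed=0):
    """Sample end-points of trajectories of dX = drift(X) dt + diffusion(X) dW."""
    rng = np.random.default_rng(seed)
    dt = t_end / n_steps
    x = np.full(n_paths, float(x0))
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), size=n_paths)  # Brownian increments
        x = x + drift(x) * dt + diffusion(x) * dw
    return x

# Illustrative Ornstein-Uhlenbeck process: dX = -X dt + 0.5 dW
final = euler_maruyama(lambda x: -x, lambda x: 0.5 * np.ones_like(x),
                       x0=1.0, t_end=2.0, n_steps=2000, n_paths=20000)
# The sample mean should approach x0 * exp(-t_end)
print(final.mean())
```

The FPE route would instead evolve the full probability density on a grid, which is where the state-space and step-size costs mentioned in the abstract arise.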
Estilos ABNT, Harvard, Vancouver, APA, etc.
3

Zhang, Xin, Xin Li, Xueping Zhang e Zhenqiang Yao. "Grinding Force Prediction Model by Discretizing Stochastic Grains". In ASME 2023 18th International Manufacturing Science and Engineering Conference. American Society of Mechanical Engineers, 2023. http://dx.doi.org/10.1115/msec2023-104618.

Texto completo da fonte
Resumo:
Abstract Material removal during grinding is the cumulative effect of numerous stochastic grains' asperity interactions with the workpiece, which makes grinding force extremely difficult to predict. Existing analytical grinding force prediction models primarily represent the stochastic interactions in the contact zone using an average effective grain number and undeformed chip thickness, or assume an ideal distribution law for the stochastic characteristics of grains, which can result in relatively large errors in the predicted grinding force, whereas experimentally empirical methods are usually time-consuming and laborious. To overcome these drawbacks, this research proposed a hybrid methodology to predict grinding force by discretizing stochastic grains, applying matrix calculation, and running two-dimensional (2D) micro-grinding simulations. Consequently, the grains' stochastic geometry, size, and position distribution, and the corresponding undeformed chip size in the contact region, were considered to guarantee the authenticity and credibility of the predicted grinding forces. First, the stochastic interaction between a random grain and the workpiece was converted equivalently into a series of plane-cutting processes between micro-edges with different rake angles and micro-chip layers with corresponding thicknesses. Then, the rake angle matrix of micro-edges and the corresponding cutting depth matrix in the contact zone were identified through a matrix calculation method based on single-grain discretization analysis. Next, both matrices were incorporated into the plane-cutting force interpolant function derived from finite element analysis (FEA) to obtain the grinding force matrix, and further the resultant grinding force along the normal and tangential directions. Cross-validation against experimental grinding results in the literature showed that the prediction error falls within 20%, verifying the proposed model.
The model considers both the influence of the stochastic grain distribution in the grinding tool and the grains' interactions with the workpiece. It therefore bears significance in improving the prediction accuracy of grinding force and in promoting a better understanding of the inherent logic linking micro-level abrasive-workpiece interactions to macro-level grinding force prediction in abrasive-based machining, including grinding, honing, and polishing.
Estilos ABNT, Harvard, Vancouver, APA, etc.
4

Jin, Yuan, Jin Chai e Olivier Jung. "Automatically Designed Deep Gaussian Process for Turbomachinery Application". In ASME Turbo Expo 2021: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2021. http://dx.doi.org/10.1115/gt2021-58469.

Texto completo da fonte
Resumo:
Abstract Thanks to their flexibility and robustness to overfitting, Gaussian Processes (GPs) are widely used as black-box function approximators. Deep Gaussian Processes (DGPs) are multilayer generalizations of GPs. The deep architecture alleviates the kernel dependence of GPs, while complicating model inference. The so-called doubly stochastic variational approach, which does not force independence between layers, has shown its effectiveness in large-dataset classification and regression in the literature. Meanwhile, similar to deep neural networks, DGPs also require application-specific architectures. In addition, the doubly stochastic process introduces extra hyperparameters, which further increases the difficulty of model definition and training. In this study, we apply doubly stochastic variational inference DGPs as surrogate models for high-dimensional structural data regression drawn from the turbomachinery area. A discrete optimizer, based on a classifier discriminating good solutions from bad ones, is utilized to realize automatic DGP model design and tuning. Empirical experiments are performed first on analytical functions to demonstrate the capability of DGPs in handling high-dimensional and non-stationary data. Two industrial turbomachinery problems, with 80 and 180 input dimensions respectively, are then addressed. The first application consists of a turbine frame design problem. In the second application, a DGP is used to describe the correlation between the 3D blade profiles of a multi-stage low pressure turbine and the corresponding turbine total-total efficiency. Through these two applications, we show the applicability of the proposed automatically designed DGPs in the turbomachinery area by highlighting their outperformance of classic GPs.
Estilos ABNT, Harvard, Vancouver, APA, etc.
5

Rezagah, Farideh Ebrahim, Shirin Jalali, Elza Erkip e H. Vincent Poor. "Rate-distortion dimension of stochastic processes". In 2016 IEEE International Symposium on Information Theory (ISIT). IEEE, 2016. http://dx.doi.org/10.1109/isit.2016.7541665.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
6

Geiger, Bernhard C., e Tobias Koch. "On the information dimension rate of stochastic processes". In 2017 IEEE International Symposium on Information Theory (ISIT). IEEE, 2017. http://dx.doi.org/10.1109/isit.2017.8006656.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
7

Buchkovskii, I. A., A. G. Gorkavchuk, M. S. Gavrylyak e P. P. Maksimyak. "Study of stochastic processes into phase with finite dimension". In SPIE Proceedings, editado por Malgorzata Kujawinska e Oleg V. Angelsky. SPIE, 2008. http://dx.doi.org/10.1117/12.797122.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
8

Tian, Tianhai, e K. Burrage. "Parallel implementation of stochastic simulation for large-scale cellular processes". In Eighth International Conference on High-Performance Computing in Asia-Pacific Region (HPCASIA'05). IEEE, 2005. http://dx.doi.org/10.1109/hpcasia.2005.67.

Texto completo da fonte
Estilos ABNT, Harvard, Vancouver, APA, etc.
9

Morton, David, Bruce Letellier, Jeremy Tejada, David Johnson, Zahra Mohaghegh, Ernie Kee, Vera Moiseytseva, Seyed Reihani e Alexander Zolan. "Sensitivity Analyses of a Simulation Model for Estimating Fiber-Induced Sump Screen and Core Failure Rates". In 2014 22nd International Conference on Nuclear Engineering. American Society of Mechanical Engineers, 2014. http://dx.doi.org/10.1115/icone22-30917.

Texto completo da fonte
Resumo:
Output from a high-order simulation model with random inputs may be difficult to fully evaluate absent an understanding of sensitivity to the inputs. We describe, and apply, a sensitivity analysis procedure to a large-scale computer simulation model of the processes associated with Nuclear Regulatory Commission (NRC) Generic Safety Issue (GSI) 191. Our GSI-191 simulation model has a number of distinguishing features: (i) The model is large in scale in that it has a high-dimensional vector of inputs; (ii) some model inputs are governed by probability distributions; (iii) a key model output is the probability of system failure — a rare event; (iv) the model’s outputs require estimation by Monte Carlo sampling, including the use of variance reduction techniques; (v) it is computationally expensive to obtain precise estimates of the failure probability; (vi) we seek to propagate key uncertainties on model inputs to obtain distributional characteristics of the model’s outputs; and, (vii) the overall model involves a loose coupling between a physics-based stochastic simulation sub-model and a logic-based Probabilistic Risk Assessment (PRA) sub-model via multiple initiating events. Our proposal is guided by the need to have a practical approach to sensitivity analysis for a computer simulation model with these characteristics. We use common random numbers to reduce variability and smooth output analysis; we assess differences between two model configurations; and, we properly characterize both sampling error and the effect of uncertainties on input parameters. We show selected results of studies for sensitivities to parameters used in the South Texas Project Electric Generating Station (STP) GSI-191 risk-informed resolution project.
Estilos ABNT, Harvard, Vancouver, APA, etc.
10

Stübing, S., M. Dietzel e M. Sommerfeld. "Modelling Agglomeration and the Fluid Dynamic Behaviour of Agglomerates". In ASME-JSME-KSME 2011 Joint Fluids Engineering Conference. ASMEDC, 2011. http://dx.doi.org/10.1115/ajk2011-12025.

Texto completo da fonte
Resumo:
For modeling agglomeration processes in the framework of the Lagrangian approach, where the particles are treated as point masses, an extended structure model was developed. This model provides not only information on the number of primary particles in the agglomerate, but also on the geometrical extension of the agglomerates: for example, the interception diameter, the radius of gyration, the fractal dimension, and the porosity of the agglomerate based on its convex hull. The question then arises of which agglomerate cross-section is the proper one for calculating the drag force. To find an answer, the Lattice-Boltzmann method (LBM) was applied to simulate the flow about fixed agglomerates of different morphology and different numbers of primary particles. From these simulations the drag coefficient was determined using several possible cross-sections of the agglomerate. Numerous simulations showed that the cross-section of the convex hull yields a drag coefficient that is almost independent of the structure of the agglomerate, provided the agglomerates have the same cross-sectional area in the flow direction. Using the cross-section of the volume-equivalent sphere showed a very large scatter in the simulated drag coefficient. This information was accounted for in the Lagrangian agglomeration model. The basis for modeling particle collisions and possible agglomeration was the stochastic inter-particle collision model accounting for the impact efficiency. The possibility of particle sticking was based on a critical velocity determined from an energy balance which accounts for dissipation and van der Waals adhesion. If the instantaneous relative velocity between the particles is smaller than this critical velocity, agglomeration occurs. In order to allow the determination of the agglomerate structure, reference vectors are stored between a reference particle and all other primary particles collected in the agglomerate.
For describing the collision of a new primary particle with an agglomerate, the collision model was extended to determine which primary particle in the agglomerate is the collision partner. To demonstrate the capabilities of the Lagrangian agglomerate structure model, the dispersion and collision of small primary particles in homogeneous isotropic turbulence was considered. From these calculations, statistics on the properties of the agglomerates were compiled, e.g. the number of primary particles, radius of gyration, porosity, sphericity, and fractal dimension. Finally, the dispersion of particles in vertical grid turbulence was calculated with the Lagrangian approach. For one selected model agglomerate, dispersion calculations were performed with different possible characteristic cross-sections of the agglomerate. These calculations gave deviations in the mean square dispersion of up to 20% after a dispersion time of 0.4 seconds for the different cross-sections. This demonstrates that a proper selection of the cross-section is essential for calculating agglomerate motion in turbulent flows.
Estilos ABNT, Harvard, Vancouver, APA, etc.

Relatórios de organizações sobre o assunto "Stochastic processes with large dimension"

1

Jury, William A., e David Russo. Characterization of Field-Scale Solute Transport in Spatially Variable Unsaturated Field Soils. United States Department of Agriculture, janeiro de 1994. http://dx.doi.org/10.32747/1994.7568772.bard.

Texto completo da fonte
Resumo:
This report describes activity conducted in several lines of research associated with field-scale water and solute processes. A major effort was put into developing a stochastic continuum analysis for an important class of problems involving flow of reactive and non-reactive chemicals under steady unsaturated flow. The field-scale velocity covariance tensor has been derived from local soil properties and their variability, producing a large-scale description of the medium that embodies all of the local variability in a statistical sense. Special cases of anisotropic medium properties not aligned along the flow direction and of spatially variable solute sorption were analysed in detail, revealing a dependence of solute spreading on subtle features of the variability of the medium, such as cross-correlations between sorption and conductivity. A novel method was developed and tested for measuring hydraulic conductivity at the scale of observation through the interpretation of a solute transport outflow curve as a stochastic-convective process. This undertaking provided a host of new K(q) relationships for existing solute experiments and also laid the foundation for future work developing a self-consistent description of flow and transport under these conditions. Numerical codes were developed for calculating K(q) functions for a variety of solute pulse outflow shapes, including lognormal, Fickian, mobile-immobile water, and bimodal. Testing of this new approach against conventional methodology gave mixed results, with the closest agreement when the assumptions of the new method were met. We conclude that this procedure offers a valuable alternative to conventional methods of measuring K(q), particularly when the method is applied at a scale (e.g., an agricultural field) that is large compared to the common scale at which conventional K(q) devices operate.
The same problem was approached from a numerical perspective, by studying the feasibility of inverting a solute outflow signal to yield the hydraulic parameters of the medium that housed the experiment. We found that the inverse problem was solvable under certain conditions, depending on the amount of noise in the signal and the degree of heterogeneity in the medium. A realistic three-dimensional model of transient water and solute movement in a heterogeneous medium that contains plant roots was developed and tested. The approach taken was to generate a single realization of this complex flow event and examine the results to see whether features were present that might be overlooked in less sophisticated modeling efforts. One such feature revealed is transverse dispersion, which is a critically important component in the development of macrodispersion in the longitudinal direction. The lateral mixing that was observed greatly exceeded that predicted by simpler approaches, suggesting that at least part of the important physics of the mixing process is embedded in the complexity of three-dimensional flow. Another important finding was the observation that variability can produce pseudo-kinetic behavior in solute adsorption, even when the local models used assume equilibrium.
Estilos ABNT, Harvard, Vancouver, APA, etc.
2

Gonzalez Diez, Verónica M. Resettlement Processes and their Socioeconomic Impact: Porce II Hydroelectric Project, Colombia. Inter-American Development Bank, março de 2011. http://dx.doi.org/10.18235/0010448.

Texto completo da fonte
Resumo:
This evaluation conducted a comprehensive analysis of the long-term socioeconomic impact on the resettled population in the context of the Porce II Hydroelectric Project in Antioquia, Colombia. The evaluation's results highlight the formalization of landholdings, as well as quality improvements in housing and access to public and social services in the resettlement. Ethnographic workshops documented the use and enjoyment of the homes and common areas. There were also positive trends in terms of the educational levels in the resettlement. Results showed the resettled families' ability to adapt and coexist with groups outside their family networks. The economic dimension was the greatest challenge for a resettlement endeavor seeking to diversify the population's economic structure, which had engaged almost exclusively in mining and, to a lesser extent, in agriculture and cattle farming. The evaluation corroborated the shift in economic focus and the improvement of the resettled population's ability to engage in other activities. While the evaluation showed significant improvements in net worth and family spending, impact on income has not been significant. This evaluation provides conceptual and methodological elements that embody best practices and objectively contribute to the debate on the issue of population displacement as a consequence of large infrastructure projects.
Estilos ABNT, Harvard, Vancouver, APA, etc.
3

Mizrahi, Itzhak, e Bryan A. White. Uncovering rumen microbiome components shaping feed efficiency in dairy cows. United States Department of Agriculture, janeiro de 2015. http://dx.doi.org/10.32747/2015.7600020.bard.

Texto completo da fonte
Resumo:
Ruminants provide human society with high-quality food from non-human-edible resources, but their emissions negatively impact the environment via greenhouse gas production. The rumen and its resident microorganisms dictate both processes. The overall goal of this project was to determine whether a causal relationship exists between the rumen microbiome and the host animal's physiology, and if so, to isolate and examine the specific determinants that enable this causality. To this end, we divided the project into three parts: (1) determining the feed efficiency of 200 milking cows, (2) determining whether the feed-efficiency phenotype can be transferred by transplantation, and (3) isolating and examining microbial consortia that can affect the feed-efficiency phenotype when transplanted into germ-free ruminants. We ultimately included metadata from 1000 dairy cows in our study, which revealed a global core microbiome present in the rumen whose composition and abundance predicted many of the cows' production phenotypes, including methane emission. Certain members of the core microbiome are heritable and have strong associations with cardinal rumen metabolites and fermentation products that govern the efficiency of milk production. These heritable core microbes therefore present primary targets for rumen manipulation towards sustainable and environmentally friendly agriculture. We then went beyond examining the metagenomic content and asked whether microbes behave differently in relation to the host efficiency state. We sampled twelve animals with two extreme efficiency phenotypes: high efficiency, representing animals that maximize energy utilization from their feed, and low efficiency, representing animals with very low utilization of the energy from their feed. Our analysis revealed differences between the two host efficiency states in the microbial expression profiles, with regard to both protein identities and quantities.
Another aim of the proposal was the cultivation of undescribed rumen microorganisms, one of the most important tasks in rumen microbiology. Our findings from phylogenetic analysis of cultured OTUs on the lower branches of the phylogenetic tree suggest that multifactorial traits govern cultivability. Interestingly, most of the cultured OTUs belonged to the rare rumen biosphere. These cultured OTUs could not be detected in the rumen microbiome, even when we surveyed it across 38 rumen microbiome samples. These findings add another unique dimension to the complexity of the rumen microbiome and suggest that a large number of different organisms can be cultured in a single cultivation effort. In the context of the grant, the establishment of a ruminant germ-free facility was possible and preliminary experiments were successful, which opens the way for direct applications of the new concepts discovered here, prior to larger-scale implementation at the agricultural level.
4

Snyder, Victor A., Dani Or, Amos Hadas, and S. Assouline. Characterization of Post-Tillage Soil Fragmentation and Rejoining Affecting Soil Pore Space Evolution and Transport Properties. United States Department of Agriculture, April 2002. http://dx.doi.org/10.32747/2002.7580670.bard.

Full text of the source
Abstract:
Tillage modifies soil structure, altering conditions for plant growth and transport processes through the soil. However, the resulting loose structure is unstable and susceptible to collapse due to aggregate fragmentation during wetting and drying cycles, and coalescence of moist aggregates by internal capillary forces and external compactive stresses. Presently, limited understanding of these complex processes often leads to consideration of the soil plow layer as a static porous medium. With the purpose of filling some of this knowledge gap, the objectives of this Project were to: 1) Identify and quantify the major factors causing breakdown of primary soil fragments produced by tillage into smaller secondary fragments; 2) Identify and quantify the physical processes involved in the coalescence of primary and secondary fragments and surfaces of weakness; 3) Measure temporal changes in pore-size distributions and hydraulic properties of reconstructed aggregate beds as a function of specified initial conditions and wetting/drying events; and 4) Construct a process-based model of post-tillage changes in soil structural and hydraulic properties of the plow layer and validate it against field experiments. A dynamic theory of capillary-driven plastic deformation of adjoining aggregates was developed, in which the instantaneous rate of change in geometry of aggregates and inter-aggregate pores was related to the current geometry of the solid-gas-liquid system and measured soil rheological functions. The theory and supporting data showed that consolidation of aggregate beds is largely an event-driven process, restricted to a fairly narrow range of soil water contents where capillary suction is great enough to generate coalescence but soil mechanical strength is still low enough to allow plastic deformation of aggregates. The theory was also used to explain effects of transient external loading on compaction of aggregate beds.
A stochastic formalism was developed for modeling soil pore space evolution, based on the Fokker-Planck equation (FPE). Analytical solutions for the FPE were developed, with parameters that can be measured empirically or related to the mechanistic aggregate deformation model. Pre-existing results from field experiments were used to illustrate how the FPE formalism can be applied to field data. Fragmentation of soil clods after tillage was observed to be an event-driven (as opposed to continuous) process that occurred only during wetting, and only as clods approached the saturation point. The major mechanism of fragmentation of large aggregates seemed to be differential soil swelling behind the wetting front. Aggregate "explosion" due to air entrapment seemed limited to small aggregates wetted simultaneously over their entire surface. Breakdown of large aggregates from 11 clay soils during successive wetting and drying cycles produced fragment size distributions which differed primarily by a scale factor l (essentially equivalent to the Van Bavel mean weight diameter), so that evolution of fragment size distributions could be modeled in terms of changes in l. For a given number of wetting and drying cycles, l decreased systematically with increasing plasticity index. When air-dry soil clods were slightly weakened by a single wetting event and then allowed to "age" for six weeks at constant high water content, drop-shatter resistance in aged relative to non-aged clods was found to increase in proportion to plasticity index. This seemed consistent with the rheological model, which predicts faster plastic coalescence around small voids and sharp cracks (with resulting soil strengthening) in soils with low resistance to plastic yield and flow. A new theory of crack growth in "idealized" elastoplastic materials was formulated, with potential application to soil fracture phenomena.
The theory was preliminarily (and successfully) tested using carbon steel, a ductile material which closely approximates ideal elastoplastic behavior, and for which the necessary fracture data existed in the literature.
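The abstract above describes a Fokker-Planck formalism for pore-space evolution but does not reproduce its equations or parameters. A minimal numerical sketch of the general idea follows; the drift and diffusion coefficients, the grid, and the initial pore-size distribution are all hypothetical assumptions chosen only to illustrate the mechanics of such a model, not values from the study:

```python
import numpy as np

# Hypothetical drift-diffusion FPE for a pore-size density p(r, t):
#   dp/dt = -d/dr[ mu(r) p ] + D d^2 p / dr^2
# with mu < 0 modeling consolidation (drift toward smaller pores),
# solved by an explicit finite-difference (FTCS) scheme.

def evolve_fpe(p0, r, mu, D, dt, steps):
    """Advance p(r, t) by `steps` explicit time steps."""
    dr = r[1] - r[0]
    p = p0.copy()
    for _ in range(steps):
        dflux = np.gradient(mu * p, dr)            # drift term d/dr[mu p]
        d2p = np.zeros_like(p)
        d2p[1:-1] = (p[2:] - 2.0 * p[1:-1] + p[:-2]) / dr**2
        p = p + dt * (-dflux + D * d2p)
        p = np.clip(p, 0.0, None)                  # keep density nonnegative
        p /= p.sum() * dr                          # renormalize to unit mass
    return p

r = np.linspace(0.01, 1.0, 200)                    # pore radius, arbitrary units
p0 = np.exp(-((r - 0.6) ** 2) / 0.01)              # loose post-tillage structure
p0 /= p0.sum() * (r[1] - r[0])
mu = -0.1 * np.ones_like(r)                        # assumed consolidation drift
p = evolve_fpe(p0, r, mu, D=1e-3, dt=1e-4, steps=5000)
# the distribution drifts toward smaller pore radii over time
```

The explicit time step is kept well inside the diffusive stability limit (dt < dr²/2D); a real application would fit mu and D to measured pore-size distributions or derive them from the aggregate deformation model.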
