
Journal articles on the topic 'Stochastic simulation technique (SST)'



Consult the top 50 journal articles for your research on the topic 'Stochastic simulation technique (SST).'




1

Viseur, Sophie. "Turbidite reservoir characterization: object-based stochastic simulation of meandering channels." Bulletin de la Société Géologique de France 175, no. 1 (January 1, 2004): 11–20. http://dx.doi.org/10.2113/175.1.11.

Abstract:
Stochastic imaging has become an important tool for risk assessment and has successfully been applied to oil field management. This procedure aims at generating several possible and equiprobable 3D models of subsurface structures that enhance the available data set. Among these stochastic simulation techniques, object-based approaches consist of defining and distributing objects reproducing underground geobodies. A technical challenge still remains in object-based simulation. Due to advances in deep-water drilling technology, new hydrocarbon exploration has been opened along the Atlantic margins. In these turbidite oil fields, segments of meandering channels can be observed on high-resolution seismic horizons. However, no present object-based simulation technique can exactly reproduce such known channel segments. An improved object-based approach is proposed to simulate meandering turbidite channels conditioned on well observations and such seismic data. The only approaches dealing with meandering channels are process-based as opposed to structure-imitating. They are based on the reproduction of continental river evolution through time. Unfortunately, such process-based approaches cannot be used for stochastic imaging, as they are based on equations reflecting meandering river processes and not turbiditic phenomena. Moreover, they incorporate neither shape constraints (such as channel dimensions and sinuosity) nor location constraints, such as well data. Last, these methods generally require hydraulic parameters that are not available from oil field studies. The proposed approach aims at stochastically generating meandering channels with specified geometry that can be constrained to pass through well observations. The method relies on the definition of geometrical parameters that characterize the shape of the expected channels, such as dimensions, directions, and sinuosity. The meandering channel object is modelled via a flexible parametric shape. The object is defined by a polygonal center-line (called the backbone) that supports several sections. Channel sinuosity and local channel profiles are controlled by the backbone and the sections, respectively. Channel generation is performed within a 2D domain D representing the channel-belt area. The proposed approach proceeds in two main steps. The first step consists in generating a channel center-line (C), defined by an equation v = Z(u), within the domain D. The geometry of this line is simulated using a geostatistical simulation technique that allows the generation of controlled but irregular center-lines conditioned on data points. During the second step, a vector field enabling the curve (C) to be transformed into a meandering curve (C’) is estimated. This vector field acts as a transform that specifies the third degree of channel sinuosity, in other words, the meandering parts of the loops. This field is parameterized by geometrical parameters such as curvature and tangent vectors along the curve (C) and the a priori maximum amplitude of the meander loops of the curve (C’). To make channel objects pass through conditioning points, adjustment vectors are computed at these locations and are interpolated along the curves. Synthetic datasets have been built to check whether a priori parameters such as tortuosity are reproduced, and whether the simulations are equiprobable. From this dataset, one hundred simulations have been generated, enabling one to verify that these two conditions are satisfied. Equiprobability is, however, not always satisfied for data points that are very close together and located in a multivalued part of a meander: a preferential orientation of the loops may indeed be observed. Solving this issue will be the focus of future work. Nevertheless, the results presented in this paper show that the approach provides satisfying simulations in all other configurations. This approach is moreover well suited for petroleum reservoir characterization because it only needs the specification of geometrical parameters, such as dimension and sinuosity, that can be inferred from the channel parts seen on seismic horizons or analogues.
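For intuition, here is a minimal, hypothetical Python sketch of the conditioning step described in this abstract: simulate an irregular center-line v = Z(u), then pull it through well observations by interpolating adjustment vectors along the curve. All names and numbers are invented; this is not the authors' implementation.

```python
# Minimal sketch (not the paper's code): simulate an irregular channel
# center-line v = Z(u), then condition it on well observations by
# computing adjustment vectors at the wells and interpolating them
# along the curve, as the abstract describes.
import numpy as np

rng = np.random.default_rng(0)

u = np.linspace(0.0, 1000.0, 201)        # position along the channel belt (m)
# Unconditional center-line: smoothed Gaussian increments give a
# controlled but irregular curve (a stand-in for a geostatistical simulation).
steps = rng.normal(0.0, 8.0, u.size)
kernel = np.exp(-0.5 * np.linspace(-3, 3, 31) ** 2)
z = np.convolve(np.cumsum(steps), kernel / kernel.sum(), mode="same")

# Hypothetical well observations (u_i, v_i) the channel must honor.
wells_u = np.array([150.0, 520.0, 880.0])
wells_v = np.array([30.0, -25.0, 10.0])

# Adjustment vectors at the wells, interpolated along the whole curve.
residual = wells_v - np.interp(wells_u, u, z)
z_cond = z + np.interp(u, wells_u, residual)   # conditioned center-line

assert np.allclose(np.interp(wells_u, u, z_cond), wells_v)
```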
2

Rodionov, Alexander, Alexander Zhuchkov, and Victoria Pekut. "The Features of the Technique of Practical Training on “Fundamentals of Simulation of Automated Systems” Discipline." NBI Technologies, no. 1 (August 2019): 21–24. http://dx.doi.org/10.15688/nbit.jvolsu.2019.1.4.

Abstract:
The paper deals with the content and methods of practical training in the “Fundamentals of Simulation of Automated Systems” discipline. The relevance of the work is due to the ever-growing requirements for the design of protected information systems, which are a class (subsystem) of automated systems. The level of mathematical training varies significantly among students, and especially among undergraduates. This makes it necessary to carefully design the methods for lectures and, especially, for practical training in the discipline. With multiple simulations of the network system, it is possible to accumulate statistics on the output scalars and thus compare the parameters of the system at different times. This is necessary because the network model is stochastic and, depending on the initial values of the input data set by the random number generator, different simulation results can be obtained.
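The replication idea in the last two sentences is easy to make concrete. Below is a minimal Python sketch that reruns a stochastic model under different seeds and accumulates statistics on an output scalar; the single-server queue is an invented stand-in for the course's actual network model.

```python
import numpy as np

def simulate_network_delay(seed, n_packets=1000, arr_rate=0.9, svc_rate=1.0):
    """Mean waiting time in a toy FIFO queue (Lindley recursion)."""
    rng = np.random.default_rng(seed)
    inter = rng.exponential(1.0 / arr_rate, n_packets)
    service = rng.exponential(1.0 / svc_rate, n_packets)
    wait, total = 0.0, 0.0
    for a, s in zip(inter, service):
        wait = max(0.0, wait + s - a)   # W_next = max(0, W + S - A)
        total += wait
    return total / n_packets

# Accumulate the output scalar over many seeds, then summarize.
runs = [simulate_network_delay(seed) for seed in range(30)]
mean, sd = np.mean(runs), np.std(runs, ddof=1)
print(f"mean delay {mean:.2f} +/- {1.96 * sd / np.sqrt(len(runs)):.2f} (95% CI)")
```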
3

Aragon-Calvo, M. A. "Smooth stochastic density field reconstruction." Monthly Notices of the Royal Astronomical Society 503, no. 1 (February 11, 2021): 557–62. http://dx.doi.org/10.1093/mnras/stab403.

Abstract:
We introduce a method for generating a continuous, mass-conserving and high-order differentiable density field from a discrete point distribution such as particles or haloes from an N-body simulation or galaxies from a spectroscopic survey. The method consists of generating an ensemble of point realizations by perturbing the original point set following the geometric constraints imposed by the Delaunay tessellation in the vicinity of each point in the set. By computing the mean field of the ensemble we are able to significantly reduce artefacts arising from the Delaunay tessellation in poorly sampled regions while conserving the features in the point distribution. Our implementation is based on the Delaunay Tessellation Field Estimation (DTFE) method; however, other tessellation techniques are possible. The method presented here shares the same advantages as the DTFE method, such as self-adaptive scale, mass conservation, and continuity, while being able to reconstruct even the faintest structures of the point distribution, which are usually dominated by artefacts in Delaunay-based methods. Additionally, we also present preliminary results of an application of this method to image denoising and artefact removal, highlighting the broad applicability of the technique introduced here.
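The ensemble idea can be illustrated in a few lines, with the caveat that the sketch below replaces the paper's Delaunay-based perturbation constraint with a crude nearest-neighbour radius and grids the density with a histogram, so it ignores boundary mass loss and the DTFE machinery entirely.

```python
# Simplified sketch of the ensemble idea: perturb each point within a
# local scale (nearest-neighbour distance, a crude stand-in for the
# Delaunay constraint), grid each realization, and average the ensemble.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
pts = rng.random((500, 2))                      # toy point distribution

d_nn = cKDTree(pts).query(pts, k=2)[0][:, 1]    # nearest-neighbour distances

def density(p, bins=64):
    h, _, _ = np.histogram2d(p[:, 0], p[:, 1], bins=bins, range=[[0, 1], [0, 1]])
    return h

n_real = 100
ensemble = np.zeros((64, 64))
for _ in range(n_real):
    jitter = rng.normal(0.0, 1.0, pts.shape) * d_nn[:, None] / 3.0
    ensemble += density(pts + jitter)           # points leaving [0,1]^2 are dropped
mean_field = ensemble / n_real                  # smoother mean density field
```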
4

Carletti, Margherita, and Malay Banerjee. "A Backward Technique for Demographic Noise in Biological Ordinary Differential Equation Models." Mathematics 7, no. 12 (December 9, 2019): 1204. http://dx.doi.org/10.3390/math7121204.

Abstract:
Physical systems described by deterministic differential equations represent idealized situations since they ignore stochastic effects. In the context of biomathematical modeling, we distinguish between environmental or extrinsic noise and demographic or intrinsic noise, for which it is assumed that the variation over time is due to demographic variation of two or more interacting populations (birth, death, immigration, and emigration). The modeling and simulation of demographic noise as a stochastic process affecting units of populations involved in the model is well known in the literature, resulting in discrete stochastic systems or, when the population sizes are large, in continuous stochastic ordinary differential equations and, if noise is ignored, in continuous ordinary differential equation models. The inverse process, i.e., inferring the effects of demographic noise on a natural system described by a set of ordinary differential equations, is still an issue to be addressed. With this paper, we provide a technique to model and simulate demographic noise going backward from a deterministic continuous differential system to its underlying discrete stochastic process, based on the framework of chemical kinetics, since demographic noise is nothing but the biological or ecological counterpart of intrinsic noise in genetic regulation. Our method can, thus, be applied to ordinary differential systems describing any kind of phenomena when intrinsic noise is of interest.
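As one illustrative reading of this backward mapping (not the authors' exact algorithm), a logistic ODE can be decomposed into birth, death, and competition "reactions" and the underlying discrete process simulated with Gillespie's stochastic simulation algorithm from chemical kinetics; all parameter values below are invented.

```python
# Read dN/dt = b*N - d*N - (c/V)*N^2 backwards as three reactions and
# simulate the discrete birth-death-competition process (Gillespie SSA).
import numpy as np

rng = np.random.default_rng(2)
b, d, c, V = 1.0, 0.2, 0.8, 100.0   # birth, death, competition, system size

N, t, t_end = 10, 0.0, 30.0
traj = [(t, N)]
while t < t_end and N > 0:
    rates = np.array([b * N, d * N, (c / V) * N * (N - 1)])
    total = rates.sum()
    t += rng.exponential(1.0 / total)        # waiting time to the next event
    event = rng.choice(3, p=rates / total)   # which reaction fires
    N += (+1, -1, -1)[event]                 # birth adds one, the others remove one
    traj.append((t, N))
# Averaging many such paths recovers the ODE solution as V grows.
```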
5

Wilson, Spencer, Abdullah Alabdulkarim, and David Goldsman. "Green Simulation of Pandemic Disease Propagation." Symmetry 11, no. 4 (April 22, 2019): 580. http://dx.doi.org/10.3390/sym11040580.

Abstract:
This paper is concerned with the efficient stochastic simulation of multiple scenarios of an infectious disease as it propagates through a population. In particular, we propose a simple “green” method to speed up the simulation of disease transmission as we vary the probability of infection of the disease from scenario to scenario. After running a baseline scenario, we incrementally increase the probability of infection, and use the common random numbers variance reduction technique to avoid re-simulating certain events in the new scenario that would not otherwise have changed from the previous scenario. A set of Monte Carlo experiments illustrates the effectiveness of the procedure. We also propose various extensions of the method, including its use to estimate the sensitivity of propagation characteristics in response to small changes in the infection probability.
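A minimal sketch of the common-random-numbers device, using an invented chain-binomial toy model: one set of uniforms is pre-drawn per potential transmission contact and reused across scenarios, so differences between scenarios reflect the changed infection probability rather than fresh randomness.

```python
import numpy as np

rng = np.random.default_rng(3)
n, days, contacts = 2000, 60, 8
U = rng.random((days, n, contacts))        # common random numbers, shared by all scenarios

def epidemic_size(p):
    infected = np.zeros(n, dtype=bool)
    infected[:5] = True                    # seed cases
    for day in range(days):
        frac = infected.mean()
        # each person has `contacts` random encounters; an encounter is
        # infectious with probability frac and transmits with probability p
        hit = (U[day] < p * frac).any(axis=1)
        infected |= hit
    return infected.sum()

for p in (0.10, 0.11, 0.12):               # incrementally increased infectivity
    print(p, epidemic_size(p))             # differences reflect p, not the RNG
```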
6

CHAN, M. S., F. MUTAPI, M. E. J. WOOLHOUSE, and V. S. ISHAM. "Stochastic simulation and the detection of immunity to schistosome infections." Parasitology 120, no. 2 (February 2000): 161–69. http://dx.doi.org/10.1017/s003118209900534x.

Abstract:
In this paper we address the question of detecting immunity to helminth infections from patterns of infection in endemic communities. We use stochastic simulations to investigate whether it would be possible to detect patterns predicted by theoretical models, using typical field data. Thus, our technique is to simulate a theoretical model, to generate the data that would be obtained in field surveys and then to analyse these data using methods usually employed for field data. The general behaviour of the model, and in particular the levels of variability of egg counts predicted, show that the model is capturing most of the variability present in field data. However, analysis of the data in detail suggests that detection of immunity patterns in real data may be very difficult even if the underlying patterns are present. Analysis of a real data set does show patterns consistent with acquired immunity and the implications of this are discussed.
7

Yang, Hong-an, Yangyang Lv, Changkai Xia, Shudong Sun, and Honghao Wang. "Optimal Computing Budget Allocation for Ordinal Optimization in Solving Stochastic Job Shop Scheduling Problems." Mathematical Problems in Engineering 2014 (2014): 1–10. http://dx.doi.org/10.1155/2014/619254.

Abstract:
We focus on solving the Stochastic Job Shop Scheduling Problem (SJSSP) with random processing times to minimize the expected sum of earliness and tardiness costs of all jobs. To further enhance the efficiency of the simulation optimization technique of embedding an Evolutionary Strategy in Ordinal Optimization (ESOO), which is based on Monte Carlo simulation, we embed the Optimal Computing Budget Allocation (OCBA) technique into the exploration stage of ESOO to optimize the performance evaluation process by controlling the allocation of simulation times. However, while pursuing a good set of schedules, “super individuals,” which can absorb most of the given computation while others hardly get any simulation budget, may emerge according to the allocation equation of OCBA. Consequently, the schedules cannot be evaluated exactly, and thus the probability of correct selection (PCS) tends to be low. Therefore, we modify OCBA to balance the computation allocation: (1) set a threshold of simulation times to detect “super individuals” and (2) follow an exclusion mechanism to marginalize them. Finally, the proposed approach is applied to an SJSSP comprising 8 jobs on 8 machines with random processing times following truncated normal, uniform, and exponential distributions, respectively. The results demonstrate that our method outperforms the ESOO method by achieving better solutions.
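For reference, below is a sketch of the textbook OCBA allocation ratios (following Chen et al.'s formulas from memory), with a simple per-design cap added in the spirit of the paper's "super individual" threshold; it is not the authors' exact modification and assumes a unique current best.

```python
import numpy as np

def ocba_allocation(means, stds, budget, cap=None):
    means, stds = np.asarray(means, float), np.asarray(stds, float)
    b = means.argmin()                          # best design = lowest estimated cost
    delta = means - means[b]
    ratio = np.ones_like(means)
    nb = delta != 0                             # all designs except the best
    ratio[nb] = (stds[nb] / delta[nb]) ** 2     # N_i proportional to (s_i / d_i)^2
    ratio[b] = stds[b] * np.sqrt(((ratio[nb] / stds[nb]) ** 2).sum())
    alloc = budget * ratio / ratio.sum()
    if cap is not None:                         # crude "super individual" guard
        alloc = np.minimum(alloc, cap)          # (one pass; a sketch, not exact)
        alloc *= budget / alloc.sum()
    return np.maximum(1, np.round(alloc)).astype(int)

print(ocba_allocation([10.0, 10.5, 12.0, 15.0], [2.0, 2.0, 1.5, 1.0], budget=400, cap=200))
```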
8

Cahen, Ewan Jacov, Michel Mandjes, and Bert Zwart. "RARE EVENT ANALYSIS AND EFFICIENT SIMULATION FOR A MULTI-DIMENSIONAL RUIN PROBLEM." Probability in the Engineering and Informational Sciences 31, no. 3 (January 23, 2017): 265–83. http://dx.doi.org/10.1017/s0269964816000553.

Abstract:
This paper focuses on the evaluation of the probability that both components of a bivariate stochastic process ever simultaneously exceed some large level; a leading example is that of two Markov fluid queues driven by the same background process ever reaching the set (u, ∞)×(u, ∞), for u>0. Exact analysis being prohibitive, we resort to asymptotic techniques and efficient simulation, focusing on large values of u. The first contribution concerns various expressions for the decay rate of the probability of interest, which are valid under Gärtner–Ellis-type conditions. The second contribution is an importance-sampling-based rare-event simulation technique for the bivariate Markov modulated fluid model, which is capable of asymptotically efficiently estimating the probability of interest; the efficiency of this procedure is assessed in a series of numerical experiments.
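A univariate textbook analogue of the second contribution (the paper's setting is bivariate and Markov modulated) is Siegmund's exponential-tilting estimator for the ruin probability of a random walk with negative drift: under the tilted measure every path reaches the rare set, and the likelihood ratio corrects the bias.

```python
import numpy as np

rng = np.random.default_rng(4)
mu, sigma, u = 1.0, 1.0, 15.0          # increments ~ N(-mu, sigma^2), level u
theta = 2.0 * mu / sigma**2            # root of the cumulant generating function

est = []
for _ in range(10_000):
    s = 0.0
    while s <= u:
        s += rng.normal(+mu, sigma)    # under the tilt the drift is +mu
    est.append(np.exp(-theta * s))     # likelihood ratio at the crossing time
print(f"P(ruin) ~ {np.mean(est):.3e} (upper bound exp(-theta*u) = {np.exp(-theta * u):.3e})")
```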
9

Ren, Y. J., I. Elishakoff, and M. Shinozuka. "Simulation of Multivariate Gaussian Fields Conditioned by Realizations of the Fields and Their Derivatives." Journal of Applied Mechanics 63, no. 3 (September 1, 1996): 758–65. http://dx.doi.org/10.1115/1.2823360.

Abstract:
This paper investigates a conditional simulation technique for multivariate Gaussian random fields based on stochastic interpolation. For the first time in the literature, a situation is studied in which the random fields are conditioned not only by a set of realizations of the fields, but also by a set of realizations of their derivatives. A kriging estimate of the multivariate Gaussian field is proposed that takes into account both the random field and its derivative. Special conditions are imposed on the kriging estimate to determine the kriging weights. A basic formulation for the simulation of conditioned multivariate random fields is established. As a particular case of uncorrelated components of a multivariate field without realizations of the derivative of the random field, the present formulation includes that of the univariate field given by Hoshiya. Examples of a univariate field and a three-component field are elucidated and some numerical results are discussed. It is concluded that the information on the derivatives may significantly alter the results of the conditional simulation.
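A minimal sketch of kriging-based conditional simulation for a single Gaussian field conditioned on point values only (the paper additionally conditions on derivative realizations and treats multivariate fields); the covariance model and data are invented.

```python
import numpy as np

rng = np.random.default_rng(5)
x = np.linspace(0, 10, 200)
cov = lambda a, b: np.exp(-np.abs(a[:, None] - b[None, :]) / 2.0)

xd = np.array([1.0, 4.0, 8.5])            # conditioning locations
zd = np.array([0.5, -1.0, 0.8])           # observed values

L = np.linalg.cholesky(cov(x, x) + 1e-10 * np.eye(x.size))
z_unc = L @ rng.standard_normal(x.size)   # unconditional realization

W = np.linalg.solve(cov(xd, xd), cov(xd, x)).T   # simple-kriging weights
z_unc_d = np.interp(xd, x, z_unc)                # realization at the data sites
z_cond = z_unc + W @ (zd - z_unc_d)              # kriged correction of the residuals
# z_cond honors zd at xd (up to grid interpolation error) while
# preserving the field's covariance structure away from the data.
```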
10

Qi, Ji, and Yanhui Li. "L1 control for Itô stochastic nonlinear networked control systems." Transactions of the Institute of Measurement and Control 42, no. 14 (July 2, 2020): 2675–85. http://dx.doi.org/10.1177/0142331220923770.

Abstract:
This paper investigates the L1 control problem for a class of nonlinear stochastic networked control systems (NCSs) described by a Takagi-Sugeno (T-S) fuzzy model. By exploiting a delay-dependent and basis-dependent Lyapunov-Krasovskii function and by means of the Itô stochastic differential equation technique, results on stability and L1 performance are proposed for the T-S fuzzy stochastic NCS. Specifically, attention is focused on the fuzzy controller design that guarantees that the closed-loop T-S fuzzy stochastic NCS is mean-square asymptotically stable and satisfies a prescribed L1 noise attenuation level with respect to all persistent and amplitude-bounded disturbance input signals. To reduce the conservatism of the design, the signal transmission delay, data packet dropout, and quantization have been taken into consideration in the controller design. The corresponding design problem of the L1 controller is converted into a convex optimization problem by solving a set of linear matrix inequalities (LMIs). Finally, simulation examples are provided to illustrate the feasibility and effectiveness of the proposed method.
11

Mohamed, Linah, Mike Christie, and Vasily Demyanov. "Comparison of Stochastic Sampling Algorithms for Uncertainty Quantification." SPE Journal 15, no. 01 (November 17, 2009): 31–38. http://dx.doi.org/10.2118/119139-pa.

Abstract:
History matching and uncertainty quantification are currently two important research topics in reservoir simulation. In the Bayesian approach, we start with prior information about a reservoir (e.g., from analog outcrop data) and update our reservoir models with observations (e.g., from production data or time-lapse seismic). The goal of this activity is often to generate multiple models that match the history and use the models to quantify uncertainties in predictions of reservoir performance. A critical aspect of generating multiple history-matched models is the sampling algorithm used to generate the models. Algorithms that have been studied include gradient methods, genetic algorithms, and the ensemble Kalman filter (EnKF). This paper investigates the efficiency of three stochastic sampling algorithms: the Hamiltonian Monte Carlo (HMC) algorithm, the Particle Swarm Optimization (PSO) algorithm, and the Neighbourhood Algorithm (NA). HMC is a Markov chain Monte Carlo (MCMC) technique that uses Hamiltonian dynamics to achieve larger jumps than are possible with other MCMC techniques. PSO is a swarm intelligence algorithm that uses similar dynamics to HMC to guide the search but incorporates acceleration and damping parameters to provide rapid convergence to possible multiple minima. NA is a sampling technique that uses the properties of Voronoi cells in high dimensions to achieve multiple history-matched models. The algorithms are compared by generating multiple history-matched reservoir models and comparing the Bayesian credible intervals (p10-p50-p90) produced by each algorithm. We show that all the algorithms are able to find equivalent match qualities for this example but that some algorithms are able to find good fitting models quickly, whereas others are able to find a more diverse set of models in parameter space. The effects of the different sampling of model parameter space are compared in terms of the p10-p50-p90 uncertainty envelopes in forecast oil rate. These results show that algorithms based on Hamiltonian dynamics and swarm intelligence concepts have the potential to be effective tools in uncertainty quantification in the oil industry.
12

Ahmed, N. U., and X. H. Ouyang. "Suboptimal RED Feedback Control for Buffered TCP Flow Dynamics in Computer Network." Mathematical Problems in Engineering 2007 (2007): 1–17. http://dx.doi.org/10.1155/2007/54683.

Abstract:
We present an improved dynamic system that simulates the behavior of TCP flows and an active queue management (AQM) system. This system can be modeled by a set of stochastic differential equations driven by a doubly stochastic point process with intensities being the controls. The proposed feedback laws monitor the status of the buffers and multiplexor of the router and detect incipient congestion, sending warning signals to the sources. The simulation results show that the optimal feedback control law from the class of linear as well as quadratic polynomials can improve the system performance significantly in terms of maximizing link utilization and minimizing congestion, packet losses, and global synchronization. The optimization process used is based on a random recursive search technique known as RRS.
13

Rostampour, Vahab, and Tamás Keviczky. "Distributed Computational Framework for Large-Scale Stochastic Convex Optimization." Energies 14, no. 1 (December 23, 2020): 23. http://dx.doi.org/10.3390/en14010023.

Abstract:
This paper presents a distributed computational framework for stochastic convex optimization problems using the so-called scenario approach. Such a problem arises, for example, in a large-scale network of interconnected linear systems with local and common uncertainties. Due to the large number of scenarios required to approximate the stochasticity of these problems, the stochastic optimization involves formulating a large-scale scenario program, which is in general computationally demanding. We present two novel ideas in this paper to address this issue. We first develop a technique to decompose the large-scale scenario program into distributed scenario programs that exchange a certain number of scenarios with each other to compute local decisions using the alternating direction method of multipliers (ADMM). We show the exactness of the decomposition with a-priori probabilistic guarantees for the desired level of constraint fulfillment for both local and common uncertainty sources. As our second contribution, we develop a so-called soft communication scheme based on a set parametrization technique together with the notion of probabilistically reliable sets to reduce the required communication between the subproblems. We show how to incorporate the probabilistic reliability notion into existing results and provide new guarantees for the desired level of constraint violations. Two simulation studies of two types of interconnected networks, namely with dynamical coupling and with coupling constraints, are presented to illustrate the advantages of the proposed distributed framework.
14

AVELLANEDA, MARCO, ROBERT BUFF, CRAIG FRIEDMAN, NICOLAS GRANDECHAMP, LUKASZ KRUK, and JOSHUA NEWMAN. "WEIGHTED MONTE CARLO: A NEW TECHNIQUE FOR CALIBRATING ASSET-PRICING MODELS." International Journal of Theoretical and Applied Finance 04, no. 01 (February 2001): 91–119. http://dx.doi.org/10.1142/s0219024901000882.

Abstract:
A general approach for calibrating Monte Carlo models to the market prices of benchmark securities is presented. Starting from a given model for market dynamics (price diffusion, rate diffusion, etc.), the algorithm corrects price-misspecifications and finite-sample effects in the simulation by assigning "probability weights" to the simulated paths. The choice of weights is done by minimizing the Kullback–Leibler relative entropy distance of the posterior measure to the empirical measure. The resulting ensemble prices the given set of benchmark instruments exactly or in the sense of least-squares. We discuss pricing and hedging in the context of these weighted Monte Carlo models. A significant reduction of variance is demonstrated theoretically as well as numerically. Concrete applications to the calibration of stochastic volatility models and term-structure models with up to 40 benchmark instruments are presented. The construction of implied volatility surfaces and forward-rate curves and the pricing and hedging of exotic options are investigated through several examples.
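A small sketch of the weighting step under invented payoffs and market prices: the minimum-relative-entropy weights take the exponential form w_i ∝ exp(λ·g_i), and λ is found by minimizing a smooth convex dual.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
n_paths = 5000
S_T = 100.0 * np.exp(rng.normal(-0.02, 0.2, n_paths))   # toy terminal prices

# Benchmark payoffs g_k over the paths and their quoted prices c_k (made up).
G = np.stack([np.maximum(S_T - 100, 0), np.maximum(S_T - 110, 0)])
c = np.array([8.50, 4.10])

def dual(lam):                        # log Z(lam) - lam . c, convex in lam
    a = lam @ G
    amax = a.max()                    # stabilize the exponentials
    return np.log(np.exp(a - amax).mean()) + amax - lam @ c

lam = minimize(dual, np.zeros(len(c))).x
a = lam @ G
w = np.exp(a - a.max()); w /= w.sum()          # path weights ~ exp(lam . g)
print(G @ w, "vs market", c)                   # benchmarks repriced by the weights
```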
15

Федорович, Олег Євгенович, Олег Семенович Уруський, Людмила Миколаївна Лутай, and Ксенія Олегівна Западня. "ОПТИМІЗАЦІЯ ЖИТТЄВОГО ЦИКЛУ СТВОРЕННЯ НОВОЇ ТЕХНІКИ В УМОВАХ КОНКУРЕНЦІЇ ТА СТОХАСТИЧНОЇ ПОВЕДІНКИ РИНКУ ЗБУТУ ВИСОКОТЕХНОЛОГІЧНОЇ ПРОДУКЦІЇ" [Optimization of the life cycle of new equipment creation under competition and stochastic behavior of the high-tech products sales market]. Aerospace technic and technology, no. 6 (November 27, 2020): 80–85. http://dx.doi.org/10.32620/aktt.2020.6.09.

Abstract:
The task of optimizing the life cycle of new equipment (aerospace, engineering products, etc.) in difficult economic conditions is stated and solved. The aim of the study is to develop a method to reduce the life cycle of complex equipment creation. The subject of the research is the planning and management of the life cycle of complex equipment in a highly competitive environment and under stochastic behavior of the high-tech products market. The paper shows the contradiction between the planned nature of modern production, which operates in conditions of Industry 4.0, and the stochastic behavior of the market. This contradiction makes short-term production plans with minimal risks particularly relevant. Therefore, the planning of the production system is carried out on the basis of a portfolio of orders that can be completed in the short term. When planning new orders, it is necessary to shorten the life cycle of new equipment creation by analyzing its main stages: design, preparation of production, and production. An optimization model for selecting the measures (project actions) at the initial stage of the life cycle is proposed. To generate the set from which the alternative options for activities are selected, expert assessments, ordered by the importance of time indicators, competitiveness, innovation, costs, and risks, are used. Simulation of the shortening of the new equipment creation life cycle under the capacity constraints of production that creates high-tech products is carried out. The following mathematical models and methods are used: system analysis, optimization using integer programming, multi-criteria optimization, expert assessments, simulation modeling, agent-based modeling, and risk assessment. The method makes it possible to create competitive products under capacity constraints by planning to shorten the life cycle of new equipment creation and by managing resources.
16

Luo, Junzhi, and Wanly Yang. "H∞ Control of Supply Chain Based on Switched Model of Stock Level." Mathematical Problems in Engineering 2014 (2014): 1–8. http://dx.doi.org/10.1155/2014/464256.

Abstract:
This paper is concerned with the problem of H∞ control for a class of discrete supply chain systems. A new method based on the network control technique is presented to address this issue. Supply chain systems are modeled as networked systems with stochastic time delay. Sufficient conditions for H∞ controller design are given in terms of a set of linear matrix inequalities, based on which mean-square asymptotic stability as well as H∞ performance is ensured for such systems. Simulation results are provided to demonstrate the effectiveness of the proposed method.
17

Hou, Zenghao, and Joyoung Lee. "Multi-Thread Optimization for the Calibration of Microscopic Traffic Simulation Model." Transportation Research Record: Journal of the Transportation Research Board 2672, no. 20 (September 18, 2018): 98–109. http://dx.doi.org/10.1177/0361198118796395.

Abstract:
This paper proposes an innovative multi-thread stochastic optimization approach for the calibration of microscopic traffic simulation models. Combining Quasi-Monte Carlo (QMC) sampling and the Particle Swarm Optimization (PSO) algorithm, the proposed approach, namely the Quasi-Monte Carlo Particle Swarm (QPS) calibration method, is designed to boost the searching process without prejudice to the calibration accuracy. Given the search space constructed by the combinations of simulation parameters, the QMC sampling technique filters the search space, followed by multi-thread optimization through the PSO algorithm. A systematic framework for the implementation of the QPS (QMC-initialized PSO) method is developed and applied in a case study dealing with a large-scale simulation model covering a 6-mile stretch of Interstate Highway 66 (I-66) in Fairfax, Virginia. The case study results show that the proposed QPS method outperforms other methods utilizing the Genetic Algorithm and Latin Hypercube Sampling in achieving faster convergence to an optimal calibration parameter set.
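A sketch of the two QPS ingredients with an invented objective: a scrambled Sobol design seeds the swarm (the QMC "filtering" of the search space), followed by a plain PSO loop; the PSO constants are conventional defaults, not the paper's settings.

```python
import numpy as np
from scipy.stats import qmc

def pso(f, lb, ub, n_particles=32, iters=100, w=0.7, c1=1.5, c2=1.5, seed=7):
    rng = np.random.default_rng(seed)
    sampler = qmc.Sobol(d=len(lb), scramble=True, seed=seed)
    x = qmc.scale(sampler.random(n_particles), lb, ub)   # QMC-seeded swarm
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lb, ub)
        val = np.apply_along_axis(f, 1, x)
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

# Toy "calibration error" standing in for a traffic-simulation discrepancy measure.
f = lambda p: np.sum((p - np.array([0.8, 2.5, 1.2])) ** 2)
print(pso(f, lb=np.zeros(3), ub=np.array([2.0, 5.0, 3.0])))
```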
18

Prato, Carlo Giacomo. "META-ANALYSIS OF CHOICE SET GENERATION EFFECTS ON ROUTE CHOICE MODEL ESTIMATES AND PREDICTIONS." TRANSPORT 27, no. 3 (September 19, 2012): 286–98. http://dx.doi.org/10.3846/16484142.2012.719840.

Abstract:
Large-scale applications of behaviorally realistic transport models pose several challenges to transport modelers on both the demand and the supply sides. On the supply side, path-based solutions to the user equilibrium assignment problem help modelers enhance route choice behavior modeling, but require them to generate choice sets by selecting a path generation technique and its parameters according to personal judgment. This paper proposes a methodology and an experimental setting to provide general indications about objective judgments for effective route choice set generation. Initially, path generation techniques are implemented within a synthetic network to generate the possible subjective choice sets considered by travelers. Next, ‘true model estimates’ and ‘postulated predicted routes’ are assumed from the simulation of a route choice model. Then, objective choice sets are applied for model estimation and the results are compared to the ‘true model estimates’. Last, predictions from the simulation of models estimated with objective choice sets are compared to the ‘postulated predicted routes’. A meta-analytical approach allows synthesizing the effect of judgments in the implementation of path generation techniques, since a large number of models generate a large amount of results that are otherwise difficult to summarize and process. Meta-analysis estimates suggest that transport modelers should implement stochastic path generation techniques with average variance of their distribution parameters and correction for unequal sampling probabilities of the alternative routes in order to obtain satisfactory results in terms of coverage of ‘postulated chosen routes’, reproduction of ‘true model estimates’ and prediction of ‘postulated predicted routes’.
19

Chen, Wei, Yuanyuan Zou, Nan Xiao, and Yugang Niu. "Quantized H∞ filtering for discrete-time systems over fading channels." Transactions of the Institute of Measurement and Control 40, no. 10 (July 24, 2017): 3115–24. http://dx.doi.org/10.1177/0142331217714862.

Abstract:
This paper addresses the problem of quantized H∞ filtering for multi-output discrete-time systems over independent identically distributed (i.i.d.) fading channels and Markov fading channels, respectively. The measurement outputs are quantized by a logarithmic quantizer and then transmitted to the filter over fading channels. For the i.i.d. fading channels, the stochastic multiplicative noise form is used to model the unreliable communication environment. For the Markov fading channels, a set of Markov channel state processes is introduced to model time-varying fading channels, which characterizes various configurations of the physical communication environment and/or different channel fading amplitudes. A sufficient condition for stochastic stability with a prescribed H∞ performance is obtained by using a Lyapunov method and a matrix decoupling technique. The corresponding filter design is cast into a convex optimization problem. Finally, simulation results are provided to illustrate the effectiveness of our results.
20

Montesi, Giuseppe, Giovanni Papiro, Massimiliano Fazzini, and Alessandro Ronga. "Stochastic Optimization System for Bank Reverse Stress Testing." Journal of Risk and Financial Management 13, no. 8 (August 6, 2020): 174. http://dx.doi.org/10.3390/jrfm13080174.

Abstract:
The recent evolution of prudential regulation establishes a new requirement for banks and supervisors to perform reverse stress test exercises in their risk assessment processes, aimed at detecting default or near-default scenarios. We propose a reverse stress test methodology based on a stochastic simulation optimization system. This methodology enables users to derive the critical combination of risk factors that, by triggering a preset key capital indicator threshold, causes the bank's default, thus detecting the set of assumptions that defines the reverse stress test scenario. This article provides a theoretical presentation of the approach, giving a general description of the stochastic framework and, for illustrative purposes, an example of its application to the Italian banking sector, in order to illustrate the possible advantages of the approach in a simplified framework that highlights the basic functioning of the model. In the paper, we also show how to take into account some relevant risk factor interactions and second-round effects such as the liquidity–solvency interlinkage and the modeling of Pillar 2 risks, including interest rate risk, sovereign risk, and reputational risk. The reverse stress test technique presented is a practical and manageable risk assessment approach, suitable for both micro- and macro-prudential analysis.
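A minimal sketch of the reverse-stress-test logic with invented balance-sheet numbers and shock distributions: simulate joint risk-factor shocks, flag the scenarios that push a capital ratio below the preset threshold, and report the least extreme (most plausible) of them as the reverse stress test scenario.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 200_000
# Correlated shocks, e.g. credit losses and trading losses (std. units).
cov = np.array([[1.0, 0.4], [0.4, 1.0]])
shocks = rng.multivariate_normal([0.0, 0.0], cov, size=n)

cet1, rwa, assets = 12.0, 100.0, 160.0               # invented starting position
losses = 0.9 * shocks[:, 0] + 0.6 * shocks[:, 1]     # % of assets lost
ratio = (cet1 - losses * assets / 100.0) / rwa * 100.0

fail = ratio < 6.0                                   # preset capital threshold
mahal = np.einsum("ij,jk,ik->i", shocks, np.linalg.inv(cov), shocks)
scenario = shocks[fail][mahal[fail].argmin()]        # least extreme failing combo
print(f"P(default) ~ {fail.mean():.4f}, reverse-stress scenario = {scenario}")
```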
21

Kalidass, Mathiyalagan, Hongye Su, and Sakthivel Rathinasamy. "Robust Stochastic Stability of Discrete-Time Markovian Jump Neural Networks with Leakage Delay." Zeitschrift für Naturforschung A 69, no. 1-2 (February 1, 2014): 70–80. http://dx.doi.org/10.5560/zna.2013-0078.

Abstract:
This paper presents a robust analysis approach to the stochastic stability of uncertain Markovian jumping discrete-time neural networks (MJDNNs) with time delay in the leakage term. By choosing an appropriate Lyapunov functional and using the free weighting matrix technique, a set of delay-dependent stability criteria is derived. The stability results depend not only on the upper bounds of the time delays but also on their lower bounds. The obtained stability criteria are established in terms of linear matrix inequalities (LMIs), which can be effectively solved by some standard numerical packages. Finally, some illustrative numerical examples with simulation results are provided to demonstrate the applicability of the obtained results. It is shown that even if there is no leakage delay, the obtained results are less restrictive than those in some recent works.
22

Gordon, T. J., C. Marsh, and Q. H. Wu. "Stochastic Optimal Control of Active Vehicle Suspensions Using Learning Automata." Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering 207, no. 3 (August 1993): 143–52. http://dx.doi.org/10.1243/pime_proc_1993_207_333_02.

Abstract:
This paper is concerned with the application of reinforcement learning to the stochastic optimal control of an idealized active vehicle suspension system. The use of learning automata in optimal control is a new application of this machine learning technique, and the principal aim of this work is to define and demonstrate the method in a relatively simple context, as well as to compare performance against results obtained from standard linear optimal control theory. The most distinctive feature of the approach is that no formal modelling is involved in the control system design; once implemented, learning takes place on-line, and the automaton improves its control performance with respect to a predefined cost function. An important new feature of the method is the use of subset actions, which enables the automaton to reduce the size of its action set at any particular instant, without imposing any global restrictions on the controller that is eventually learnt. The results, though based on simulation studies, suggest that there is great potential for implementing learning control in active vehicle suspensions, as well as for many other systems.
23

Nussbaumer, Raphaël, Grégoire Mariethoz, Erwan Gloaguen, and Klaus Holliger. "Hydrogeophysical data integration through Bayesian Sequential Simulation with log-linear pooling." Geophysical Journal International 221, no. 3 (February 6, 2020): 2184–200. http://dx.doi.org/10.1093/gji/ggaa072.

Abstract:
Bayesian sequential simulation (BSS) is a geostatistical technique, which uses a secondary variable to guide the stochastic simulation of a primary variable. As such, BSS has shown significant promise for the integration of disparate hydrogeophysical data sets characterized by vastly differing spatial coverage and resolution of the primary and secondary variables. An inherent limitation of BSS is its tendency to underestimate the variance of the simulated fields due to the smooth nature of the secondary variable. Indeed, in its classical form, the method is unable to account for this smoothness because it assumes independence of the secondary variable with regard to neighbouring values of the primary variable. To overcome this limitation, we have modified the Bayesian updating with a log-linear pooling approach, which allows us to account for the inherent interdependence between the primary and the secondary variables by adding exponential weights to the corresponding probabilities. The proposed method is tested on a pertinent synthetic hydrogeophysical data set consisting of surface-based electrical resistivity tomography (ERT) data and local borehole measurements of the hydraulic conductivity. Our results show that, compared to classical BSS, the proposed log-linear pooling method using equal constant weights for the primary and secondary variables enhances the reproduction of the spatial statistics of the stochastic realizations, while maintaining a faithful correspondence with the geophysical data. Significant additional improvements can be achieved by optimizing the choice of these constant weights. We also explore a dynamic adaptation of the weights during the course of the simulation process, which provides valuable insights into the optimal parametrization of the proposed log-linear pooling approach. The results corroborate the strategy of selectively emphasizing the probabilities of the secondary and primary variables at the very beginning and for the remainder of the simulation process, respectively.
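The pooling step itself is compact; a sketch with invented class probabilities, where lowering the secondary weight tempers the influence of the smooth geophysical variable:

```python
import numpy as np

def log_linear_pool(p_primary, p_secondary, w1=1.0, w2=1.0):
    pooled = np.power(p_primary, w1) * np.power(p_secondary, w2)
    return pooled / pooled.sum()

# Probabilities of three hydraulic-conductivity classes (illustrative numbers).
p_kriging = np.array([0.2, 0.5, 0.3])   # from neighbouring primary data
p_ert     = np.array([0.1, 0.3, 0.6])   # from the smooth secondary (ERT) field

for w2 in (0.5, 1.0, 2.0):
    print(w2, log_linear_pool(p_kriging, p_ert, 1.0, w2).round(3))
```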
24

Deng, Qiqi, and Tianshou Zhou. "Memory-Induced Bifurcation and Oscillations in the Chemical Brusselator Model." International Journal of Bifurcation and Chaos 30, no. 10 (August 2020): 2050151. http://dx.doi.org/10.1142/s0218127420501515.

Abstract:
Previous studies assumed that the reaction processes in the chemical Brusselator model are memoryless or Markovian. However, as long as a reactant interacts with its environment, the reaction kinetics cannot be described as a memoryless process. This raises a question: how do we predict the behavior of the chemical Brusselator system with molecular memory characterized by nonexponential waiting-time distributions? Here, a novel technique is developed to address this question. This technique converts a non-Markovian question to a Markovian one by introducing effective transition rates that explicitly decode the memory effect. Based on this conversion, it is analytically shown that molecular memory can induce bifurcations and oscillations. Moreover, a set of sufficient conditions are derived, which can guarantee that the system of the rate equations for the Markovian reaction system generates oscillations via memory index-induced bifurcation. In turn, these conditions can guarantee that the original non-Markovian reaction system generates stochastic oscillations. Numerical simulation verifies the theoretical prediction. The overall analysis indicates that molecular memory is not a negligible factor affecting a chemical system’s behavior.
25

Wang, L., P. M. Wong, and S. A. R. Shibli. "Modeling Porosity Distribution in the A'nan Oilfield: Use of Geological Quantification, Neural Networks, and Geostatistics." SPE Reservoir Evaluation & Engineering 2, no. 06 (December 1, 1999): 527–32. http://dx.doi.org/10.2118/59090-pa.

Abstract:
Summary: A'nan Oilfield is located in the northeast of the Erlian Basin in North China. The porosity distribution of the oil-bearing stratum is primarily controlled by complex distribution patterns of sedimentary lithofacies and diagenetic facies. This paper describes a methodology to provide a porosity model for the A'nan Oilfield using limited well porosity data, with the incorporation of the conceptual reservoir architecture. Neural network residual kriging or simulation is employed to tackle the problem. The integrated technique is developed based on a combined use of radial basis function neural networks and geostatistics. It has the flexibility of neural networks in handling high-dimensional data, the exactitude property of kriging, and the ability to perform stochastic simulation via the use of the kriging variance. The results of this study show that the integrated technique provides a realistic description of porosity honoring both the well data and the conceptual framework of the geological interpretations. The technique is fast, straightforward, and does not require any tedious cross-correlation modeling. It is of great benefit to reservoir geologists and engineers.

Introduction: Spatial description of porosity is a crucial step for a fluid flow simulation study. Such descriptions are often used in porosity and permeability transforms in order to derive a transmissibility field. The distribution of porosity is commonly controlled by qualitative geological features. While the importance of these features is well known to the geological community, they are often difficult to incorporate quantitatively during a three-dimensional (3D) geological modeling study. There is therefore a strong need for the industry to fully utilize existing geological interpretations rather than iteratively match the computational outputs to the interpretations by varying model parameters. The objective of this paper is to provide an integrated solution to make use of existing geological interpretations for improved reservoir mapping. Although some purely geostatistical techniques are capable of providing some of these functionalities, often difficult and tedious cross-correlation modeling (e.g., cokriging) as well as time-consuming indicator coding (e.g., nonparametric analysis) are required. The integrated technique used in this paper is developed based on a combined use of artificial neural networks (NNs) and geostatistics. The original idea was proposed by Kanevski et al. [1]. The authors assumed that a spatial prediction is composed of a predictable (trend) component and an error (noise or residual) component. They used multilayered feedforward neural networks (an inexact estimator) to model the former component and kriging (an exact estimator) to model the latter component. Hence the name neural network residual kriging (NNRK) was used. The final estimate is simply the sum of the two components, and hence the estimator restores all the conditioning data. The kriging variance also allows the estimator to perform stochastic simulation. A technique such as neural network residual simulation (NNRS) [2] is an example. There are many advantages of combining NNs with geostatistics. The most popular geostatistical model, kriging [3], is based on error variance minimization with the use of spatial correlation structures. It has the ability to generate an exact interpolation. The kriging variance is also useful for stochastic simulation (e.g., via sequential Gaussian simulation [3]) in order to quantify the spatial uncertainty of the predictions. However, most geostatistical models become unattractive when there are many types of information available for modeling. In mathematical terms, geostatistics is often not the best solution for high-dimensional problems. On the other hand, NN methods are highly flexible in handling nonlinear, high-dimensional data without tedious cross-correlation modeling. However, most NN methods could neither produce exact interpolation nor perform stochastic simulation for uncertainty analysis. Hence, the combined use of NNs and geostatistics provides a powerful tool for reservoir mapping. This paper will first describe the integrated method using porosity as an example. The reservoir description of the A'nan Oilfield will be presented. This is followed by the application of the method to model the porosity distribution across the field based on limited well data and extensive geological information regarding the distribution patterns of the sedimentary lithofacies and diagenetic facies.

Basis of Neural Networks: This paper uses a special class of NN estimators, namely "radial basis function neural networks." This particular estimator is chosen because it is simple and the origin of the method is similar to most spatial interpolators; that is, the prediction is calculated based on the distance between the prediction location and the reference data location. Its applications to reservoir characterization include reservoir mapping [4-6] and log interpretation [7]. Like most NN methods, radial basis function neural networks (RBFNNs) attempt to mimic simple biological learning processes. They can learn from examples. The learning phase is an essential starting point that requires training patterns consisting of a number of input signals (e.g., a high-dimensional vector) paired with target signals. The inputs are presented to the network and the corresponding outputs are calculated with the aim of minimizing the model error (i.e., the total difference between the calculated outputs and the target signals). The gradient descent method is the most popular learning method to reduce the model error by iteration. Training can be terminated when the model error is below a tolerance value. After training, the network creates a set of parameters that can be used for predicting properties in situations where the actual outputs are not known.
26

Elloumi, Mourad, and Samira Kamoun. "Improved Discrete Techniques of Time-Delay and Order Estimation for Large-Scale Interconnected Nonlinear Systems." Mathematical Problems in Engineering 2017 (2017): 1–11. http://dx.doi.org/10.1155/2017/1919823.

Abstract:
The selection of a suitable model structure is an essential step in system modeling. The model structure is defined by determining the class, the time delay, and the model order. This paper proposes improved structural estimation procedures for large-scale interconnected nonlinear systems which are composed of a set of interconnected Single-Input Single-Output (SISO) Hammerstein structures and described by discrete-time stochastic models with unknown time-varying parameters. An extensive Determinant Report (DR) algorithm is developed to determine the order of the process. An improved discrete-time technique based on the Recursive Extended Least Squares with Varying Time-delay (RELSVT) method is proposed to estimate the time delays of the considered system. The developed theoretical analysis and simulation results prove the validity and performance of the proposed algorithms.
27

Albatran, Saher, Muwaffaq I. Alomoush, and Ahmed M. Koran. "Gravitational-Search Algorithm for Optimal Controllers Design of Doubly-fed Induction Generator." International Journal of Electrical and Computer Engineering (IJECE) 8, no. 2 (April 1, 2018): 780. http://dx.doi.org/10.11591/ijece.v8i2.pp780-792.

Abstract:
Recently, the Gravitational-Search Algorithm (GSA) has been presented as a promising physics-inspired stochastic global optimization technique. It takes its derivation and features from the laws of gravitation. This paper applies the GSA to design optimal controllers for a nonlinear system consisting of a doubly-fed induction generator (DFIG) driven by a wind turbine. Both the active and the reactive power are controlled and processed through a back-to-back converter. The active power control loop consists of two cascaded proportional-integral (PI) controllers. Another PI controller is used to set the q-component of the rotor voltage by compensating the generated reactive power. The GSA is used to simultaneously tune the parameters of the three PI controllers. An integral of time-weighted absolute error (ITAE) criterion is used in the objective function to stabilize the system and increase its damping when subjected to different disturbances. Simulation results demonstrate that the optimal GSA-based coordinated controllers can efficiently damp system oscillations under severe disturbances. Moreover, simulation results show that the designed optimal controllers obtained using the GSA perform better than the optimal controllers obtained using two commonly used global optimization techniques, namely the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO).
28

Abu Dabous, Saleh, and Ghadeer Al-Khayyat. "A Flexible Bridge Rating Method Based on Analytical Evidential Reasoning and Monte Carlo Simulation." Advances in Civil Engineering 2018 (June 27, 2018): 1–13. http://dx.doi.org/10.1155/2018/1290632.

Abstract:
Several bridge inspection standards and condition assessment practices have been developed around the globe. Some practices employ four linguistic expressions to rate bridge elements, while other practices use five or six, or adopt numerical ratings such as 1 to 9. This research introduces a condition rating method that can operate under different condition assessment practices and account for uncertainties in condition assessment by means of the Evidential Reasoning (ER) theory. The method offers flexibility in terms of using default elements and their weights or selecting an alternative set of elements and condition rating schemes. The implemented ER approach accounts for uncertainties in condition rating by treating the condition assessments as probabilistic grades rather than numerical values. The ER approach requires the assignment of initial basic beliefs or probabilities, and typically these initial beliefs are assigned by an expert. Alternatively, this research integrates the Monte Carlo Simulation (MCS) technique with the ER theory to quantitatively estimate the basic probabilities and to produce robust overall bridge condition ratings. The proposed method is novel to the literature and has the following features: (1) it is flexible and can be used with any number of bridge elements and any standard of condition grades; (2) an intuitive and simple paired comparison technique is implemented to evaluate the weights of the bridge elements; (3) the MCS technique is integrated with the ER approach to quantify uncertainties associated with the stochastic nature of the bridge deterioration process; (4) the method can function with limited data and can incorporate new evidence to update the condition rating; (5) the final rating consists of multiple condition grades and is produced as a distributed probabilistic assessment reflecting the condition of the bridge elements collectively. The proposed method is illustrated with a real case study, and potential future research work is identified.
29

Hossain, F., E. N. Anagnostou, and K. H. Lee. "A non-linear and stochastic response surface method for Bayesian estimation of uncertainty in soil moisture simulation from a land surface model." Nonlinear Processes in Geophysics 11, no. 4 (September 24, 2004): 427–40. http://dx.doi.org/10.5194/npg-11-427-2004.

Abstract:
This study presents a simple and efficient scheme for the Bayesian estimation of uncertainty in soil moisture simulation by a Land Surface Model (LSM). The scheme is assessed within a Monte Carlo (MC) simulation framework based on the Generalized Likelihood Uncertainty Estimation (GLUE) methodology. A primary limitation of using the GLUE method is the prohibitive computational burden imposed by uniform random sampling of the model's parameter distributions. Sampling is improved in the proposed scheme by stochastic modeling of the parameters' response surface that recognizes the non-linear deterministic behavior between soil moisture and land surface parameters. Uncertainty in soil moisture simulation (model output) is approximated through a Hermite polynomial chaos expansion of normal random variables that represent the model's parameter (model input) uncertainty. The unknown coefficients of the polynomial are calculated using a limited number of model simulation runs. The calibrated polynomial is then used as a fast-running proxy to the slower-running LSM to predict the degree of representativeness of a randomly sampled model parameter set. An evaluation of the scheme's sampling efficiency is made through comparison with the fully random MC sampling (the norm for GLUE) and the nearest-neighborhood sampling technique. The scheme was able to reduce the computational burden of random MC sampling for GLUE by 10%-70%. The scheme was also found to be about 10% more efficient than the nearest-neighborhood sampling method in predicting a sampled parameter set's degree of representativeness. The GLUE based on the proposed sampling scheme did not alter the essential features of the uncertainty structure in soil moisture simulation. The scheme can potentially make GLUE uncertainty estimation for any LSM more efficient, as it does not impose any additional structural or distributional assumptions.
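A one-dimensional sketch of the proxy idea, with an invented stand-in for the land surface model: expand the output in probabilists' Hermite polynomials of a standard normal input, fit the coefficients from a handful of "expensive" runs by least squares, and use the cheap polynomial thereafter.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander, hermeval

def slow_model(xi):                         # stand-in for the slow-running LSM
    return np.exp(0.5 * xi) + 0.1 * xi**3

rng = np.random.default_rng(9)
xi_train = rng.standard_normal(50)          # limited number of model runs
y_train = slow_model(xi_train)

deg = 5
V = hermevander(xi_train, deg)              # probabilists' Hermite basis
coef, *_ = np.linalg.lstsq(V, y_train, rcond=None)

xi_test = rng.standard_normal(5)
print(hermeval(xi_test, coef))              # fast-running proxy predictions
print(slow_model(xi_test))                  # compare with the full model
```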
30

Ramsay, Travis. "Uncertainty Quantification of an Explicitly Coupled Multiphysics Simulation of In-Situ Pyrolysis by Radio Frequency Heating in Oil Shale." SPE Journal 25, no. 03 (March 9, 2020): 1443–61. http://dx.doi.org/10.2118/200476-pa.

Abstract:
In-situ pyrolysis provides an enhanced oil recovery (EOR) technique for exploiting oil and gas from oil shale by converting in-place solid kerogen into liquid oil and gas. Radio-frequency (RF) heating of the in-place oil shale has previously been proposed as a method by which the electromagnetic energy gets converted to thermal energy, thereby heating in-situ kerogen so that it converts to oil and gas. In order to numerically model the RF heating of the in-situ oil shale, a novel explicitly coupled thermal, phase field, mechanical, and electromagnetic (TPME) framework is devised using the finite element method in a 2D domain. Contemporaneous efforts in the commercial development of oil shale by in-situ pyrolysis have largely focused on pilot methodologies intended to validate specific corporate or esoteric EOR strategies. This work focuses on addressing efficient epistemic uncertainty quantification (UQ) of select thermal, oil shale distribution, electromagnetic, and mechanical characteristics of oil shale in the RF heating process, comparing a spectral methodology to a Monte Carlo (MC) simulation for validation. Attempts were made to parameterize the stochastic simulation models using the characteristic properties of Green River oil shale. The geologic environment being investigated is devised as a kerogen-poor under- and overburden separated by a layer of heterogeneous yet kerogen-rich oil shale in a target formation. The objective of this work is the quantification of plausible oil shale conversion using TPME simulation under parametric uncertainty, while considering a referenced conversion timeline of 1.0 × 10⁷ seconds. Nonintrusive polynomial chaos (NIPC) and MC simulation were used to evaluate complex stochastically driven TPME simulations of RF heating. The least angle regression (LAR) method was specifically used to determine a sparse set of polynomial chaos coefficients leading to the determination of summary statistics that describe the TPME results. Given the existing broad use of MC simulation methods for UQ in the oil and gas industry, the combined LAR and NIPC approach is suggested to provide a distinguishable performance improvement for UQ compared to MC methods.
APA, Harvard, Vancouver, ISO, and other styles
31

Jhwueng, Dwueng-Chwuan, and Chih-Ping Wang. "Phylogenetic Curved Optimal Regression for Adaptive Trait Evolution." Entropy 23, no. 2 (February 10, 2021): 218. http://dx.doi.org/10.3390/e23020218.

Full text
Abstract:
Regression analysis using linear equations has been broadly applied in studying the evolutionary relationship between a response trait and its covariates. However, closely related species in nature present abundant diversity, and nonlinear relationships between traits have frequently been observed. By treating the evolution of quantitative traits along a phylogenetic tree as a set of continuous stochastic variables, statistical models describing the dynamics of the optimum of the response trait and its covariates are built herein. Analytical representations for the response trait variables, as well as their optima among a group of related species, are derived. Because the models lack a tractable likelihood, a procedure that implements the Approximate Bayesian Computation (ABC) technique is applied for statistical inference. Simulation results show that the new models perform well, with the posterior means of the parameters close to the true parameters. Empirical analysis supports the new models when analyzing the trait relationship among kangaroo species.
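As background, the ABC rejection idea the authors rely on can be stated in a few lines (the trait simulator, prior, and tolerance below are toy stand-ins, not the phylogenetic model of the paper):

import numpy as np

rng = np.random.default_rng(2)
observed = 1.3                                  # observed summary statistic (toy)

def simulate_trait(alpha):
    # Toy stand-in for simulating trait evolution along a tree (assumption).
    return rng.normal(loc=alpha, scale=0.5, size=50).mean()

accepted = []
while len(accepted) < 1000:
    alpha = rng.uniform(0.0, 3.0)               # draw from the prior
    if abs(simulate_trait(alpha) - observed) < 0.05:  # tolerance check
        accepted.append(alpha)

print("posterior mean of alpha:", np.mean(accepted))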
APA, Harvard, Vancouver, ISO, and other styles
32

Sokolov, Vladimir Yu, Chin-Hsiung Loh, and Kuo-Liang Wen. "Empirical Models for Site- and Region-Dependent Ground-Motion Parameters in the Taipei Area: A Unified Approach." Earthquake Spectra 17, no. 2 (May 2001): 313–31. http://dx.doi.org/10.1193/1.1586177.

Full text
Abstract:
We calculated peak ground accelerations and response spectra for the Taipei area using a stochastic simulation technique on the basis of recently obtained empirical models. The source, path, and site effects were characterized separately on the basis of the analysis of a large collection of ground-motion recordings obtained since 1991 in the Taiwan area. The simple ω-squared Brune point-source model, combined with regional anelastic attenuation (Q) and duration (τ0.9) models, provides a satisfactory estimation of ground-motion parameters for rock sites. Effects of local site response are considered by means of empirical soil/bedrock spectral ratios calculated as ratios between spectra of actual earthquake records and those modeled for a hypothetical “hard rock” site. The results of the simulation demonstrate that this combination of source, path, and site response models provides an accurate prediction of “site- and region-dependent” ground-motion parameters for the Taipei basin for a broad range of earthquake magnitudes, distances, and site conditions. The model, with a set of generic soil profiles, can be considered an efficient tool for estimating design input ground-motion parameters in the Taipei basin in both deterministic (scenario earthquakes) and probabilistic (“site- and region-dependent” Uniform Hazard response spectra) seismic hazard assessment.
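For orientation, the core of a Boore-type stochastic point-source simulation can be sketched as follows (corner frequency, duration handling, and amplitudes are illustrative; attenuation and site terms are omitted):

import numpy as np

rng = np.random.default_rng(3)
n, dt = 2048, 0.01
f = np.fft.rfftfreq(n, dt)
fc = 1.5                                        # corner frequency in Hz (assumed)

# Omega-squared (Brune) acceleration spectral shape.
brune = (2 * np.pi * f) ** 2 / (1.0 + (f / fc) ** 2)

noise = rng.standard_normal(n) * np.hanning(n)  # windowed Gaussian noise
spec = np.fft.rfft(noise)
unit = spec / np.maximum(np.abs(spec), 1e-12)   # keep phase, drop amplitude
acc = np.fft.irfft(unit * brune, n)             # impose the target spectrum
print("simulated PGA (arbitrary units):", np.abs(acc).max())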
APA, Harvard, Vancouver, ISO, and other styles
33

Dexter, Nick, Hoang Tran, and Clayton Webster. "A mixed ℓ1 regularization approach for sparse simultaneous approximation of parameterized PDEs." ESAIM: Mathematical Modelling and Numerical Analysis 53, no. 6 (November 2019): 2025–45. http://dx.doi.org/10.1051/m2an/2019048.

Full text
Abstract:
We present and analyze a novel sparse polynomial technique for the simultaneous approximation of parameterized partial differential equations (PDEs) with deterministic and stochastic inputs. Our approach treats the numerical solution as a jointly sparse reconstruction problem through the reformulation of the standard basis pursuit denoising, where the set of jointly sparse vectors is infinite. To achieve global reconstruction of sparse solutions to parameterized elliptic PDEs over both physical and parametric domains, we combine the standard measurement scheme developed for compressed sensing in the context of bounded orthonormal systems with a novel mixed-norm based ℓ1 regularization method that exploits both energy and sparsity. In addition, we are able to prove that, with minimal sample complexity, error estimates comparable to the best s-term and quasi-optimal approximations are achievable, while requiring only a priori bounds on polynomial truncation error with respect to the energy norm. Finally, we perform extensive numerical experiments on several high-dimensional parameterized elliptic PDE models to demonstrate the superior recovery properties of the proposed approach.
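For orientation, a jointly sparse basis pursuit denoising problem of the kind described can be written as (notation assumed here, not copied from the paper):

\min_{Z} \; \|Z\|_{2,1} := \sum_{j} \Big( \sum_{k} |z_{j,k}|^{2} \Big)^{1/2} \quad \text{subject to} \quad \|AZ - B\|_{F} \le \eta,

where each row of Z collects the coefficients of one parametric basis function across the physical degrees of freedom, A is the sampling matrix, B gathers the solution samples, and η bounds the noise and truncation error.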
APA, Harvard, Vancouver, ISO, and other styles
34

Rzhetsky, Andrey, Joaquín Dopazo, Eric Snyder, Charles A. Dangler, and Francisco José Ayala. "Assessing Dissimilarity of Genes by Comparing Their RNAse A Mismatch Cleavage Patterns." Genetics 144, no. 4 (December 1, 1996): 1975–83. http://dx.doi.org/10.1093/genetics/144.4.1975.

Full text
Abstract:
We propose a simple algorithm for estimating the number of nucleotide differences between a pair of RNA or DNA sequences through comparison of their RNAse A mismatch cleavage patterns. In the RNAse A mismatch cleavage technique, two or more sample sequences are hybridized to the same RNA probe, the hybrids are partially digested with RNAse A, and the digestion products are compared on an electrophoretic gel. Here we provide an algorithm for converting the numbers of unique and matching electrophoretic bands into an estimate of the number of nucleotide differences between the sequences. Computer simulation indicates that the proposed method yields a robust estimate of the genetic distance despite stochastic errors and occasional violation of certain assumptions. Our study suggests that the method performs best when the distance between the sequences is <15 differences. When the sequences under analysis are likely to have larger distances, we advise replacing one long riboprobe with a set of shorter nonoverlapping probes. The new algorithm is applied to infer the proximity of several strains of pseudorabies virus.
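The authors' conversion algorithm is specific to RNAse A cleavage patterns and is not reproduced here; as generic background, band counts are classically turned into a similarity with the Dice (Nei-Li) coefficient, and a log transform (an illustrative choice) then yields a distance:

from math import log

def band_sharing_distance(shared, unique_a, unique_b):
    # Dice/Nei-Li similarity from shared vs. unique electrophoretic bands.
    s = 2.0 * shared / (2.0 * shared + unique_a + unique_b)
    # Illustrative similarity-to-distance transform (assumption).
    return -log(s)

print(band_sharing_distance(shared=18, unique_a=3, unique_b=5))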
APA, Harvard, Vancouver, ISO, and other styles
35

Nouy, Anthony, and Florent Pled. "A multiscale method for semi-linear elliptic equations with localized uncertainties and non-linearities." ESAIM: Mathematical Modelling and Numerical Analysis 52, no. 5 (September 2018): 1763–802. http://dx.doi.org/10.1051/m2an/2018025.

Full text
Abstract:
A multiscale numerical method is proposed for the solution of semi-linear elliptic stochastic partial differential equations with localized uncertainties and non-linearities, the uncertainties being modeled by a set of random parameters. It relies on a domain decomposition method which introduces several subdomains of interest (called patches) containing the different sources of uncertainties and non-linearities. An iterative algorithm is then introduced, which requires the solution of a sequence of linear global problems (with deterministic operators and uncertain right-hand sides), and non-linear local problems (with uncertain operators and/or right-hand sides) over the patches. Non-linear local problems are solved using an adaptive sampling-based least-squares method for the construction of sparse polynomial approximations of local solutions as functions of the random parameters. Consistency, convergence and robustness of the algorithm are proved under general assumptions on the semi-linear elliptic operator. A convergence acceleration technique (Aitken’s dynamic relaxation) is also introduced to speed up the convergence of the algorithm. The performances of the proposed method are illustrated through numerical experiments carried out on a stationary non-linear diffusion-reaction problem.
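The Aitken dynamic relaxation mentioned above can be illustrated on a scalar fixed-point iteration u <- g(u) (the map g here is a toy contraction, not the authors' global-local solver):

import numpy as np

def g(u):
    return np.cos(u)                            # toy fixed-point map (assumption)

u_prev, u = 0.0, 1.0
omega = 0.5                                     # initial relaxation factor
r_prev = g(u_prev) - u_prev
for _ in range(50):
    r = g(u) - u
    if abs(r) < 1e-12 or r == r_prev:
        break
    # Aitken update of the relaxation factor from successive residuals.
    omega = -omega * r_prev / (r - r_prev)
    u_prev, r_prev = u, r
    u += omega * r
print("fixed point:", u)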
APA, Harvard, Vancouver, ISO, and other styles
36

Azevedo, Leonardo, Ruben Nunes, Pedro Correia, Amílcar Soares, Luis Guerreiro, and Guenther Schwedersky Neto. "Multidimensional scaling for the evaluation of a geostatistical seismic elastic inversion methodology." GEOPHYSICS 79, no. 1 (January 1, 2014): M1—M10. http://dx.doi.org/10.1190/geo2013-0037.1.

Full text
Abstract:
Due to the nature of seismic inversion problems, there are multiple possible solutions that can equally fit the observed seismic data while diverging from the real subsurface model. Consequently, it is important to assess how inverse-impedance models are converging toward the real subsurface model. For this purpose, we evaluated a new methodology that combines the multidimensional scaling (MDS) technique with an iterative geostatistical elastic seismic inversion algorithm. The geostatistical inversion algorithm inverted partial angle stacks directly for acoustic and elastic impedance (AI and EI) models. It was based on a genetic algorithm in which the model perturbation at each iteration was performed using stochastic sequential simulation. To assess the reliability and convergence of the inverted models at each step, the simulated models can be projected in a metric space computed by MDS. This projection allowed us to distinguish similar from variable models and to assess the convergence of inverted models toward the real impedance ones. The geostatistical inversion results of a synthetic data set, in which the real AI and EI models are known, were plotted in this metric space along with the known impedance models. We applied the same principle to a real data set using a cross-validation technique. These examples revealed that MDS is a valuable tool to evaluate the convergence of the inverse methodology and the impedance model variability across iterations of the inversion process. In particular, the geostatistical inversion algorithm we evaluated retrieves reliable impedance models while still producing a set of simulated models with considerable variability.
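A compact sketch of the evaluation idea (the "impedance models" below are random vectors standing in for gridded AI/EI models): project the reference model and the simulated ensemble into a 2D metric space with MDS and track distances to the reference:

import numpy as np
from sklearn.manifold import MDS
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(4)
reference = rng.normal(size=500)                # "true" impedance model (toy)
ensemble = [reference + rng.normal(scale=s, size=500)
            for s in np.linspace(2.0, 0.2, 12)] # mock inversion iterations

X = np.vstack([reference] + ensemble)
D = squareform(pdist(X))                        # Euclidean dissimilarities
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(D)
# Distance of each simulated model to the reference in the projected space.
print(np.linalg.norm(coords[1:] - coords[0], axis=1).round(2))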
APA, Harvard, Vancouver, ISO, and other styles
37

Olaitan, Oladipupo A., and John Geraghty. "Evaluation of production control strategies for negligible‐setup, multi‐product, serial lines with consideration for robustness." Journal of Manufacturing Technology Management 24, no. 3 (March 8, 2013): 331–57. http://dx.doi.org/10.1108/17410381311318864.

Full text
Abstract:
Purpose: The aim of this paper is to investigate simulation-based optimisation and stochastic dominance testing while employing kanban-like production control strategies (PCS) operating dedicated and, where applicable, shared kanban card allocation policies in a multi-product system with negligible set-up times and with consideration for robustness to uncertainty. Design/methodology/approach: Discrete event simulation and a genetic algorithm were utilised to optimise the control parameters for the dedicated kanban control strategy (KCS), CONWIP and base stock control strategy (BSCS), extended kanban control strategy (EKCS) and generalised kanban control strategy (GKCS), as well as the shared versions of EKCS and GKCS. All-pairwise comparisons and a ranking and selection technique were employed to compare the performances of the strategies and select the best strategy without consideration of robustness to uncertainty. A Latin hypercube sampling experimental design and stochastic dominance testing were utilised to determine the preferred strategy when robustness to uncertainty is considered. Findings: Shared GKCS outperforms the other strategies when robustness is not considered. However, when robustness of the strategies to uncertainty in the production environment is considered, the dedicated EKCS is preferred. The effect of system bottleneck location on the inventory accumulation behaviour of the different strategies is reported, and this was observed to be related to the nature of a PCS's kanban information transmission. Practical implications: The findings are directly relevant to industry, where increasing market pressure for product diversity requires operating multi-product production lines with negligible set-up times. The optimisation and robustness-testing approaches employed here can be extended to the analysis of more complicated system configurations and higher numbers of product types. Originality/value: This work further investigates the performance of multi-product kanban-like PCS by examining their robustness to common sources of uncertainty after they have been initially optimised for base scenarios. The robustness tests also provide new insights into how dedicated kanban card allocation policies might offer higher flexibility and robustness than shared policies under conditions of uncertainty.
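As a generic illustration of the stochastic dominance element (not the paper's specific test procedure): strategy A first-order dominates strategy B for a larger-is-better performance measure if A's empirical CDF lies at or below B's everywhere:

import numpy as np

def first_order_dominates(a, b):
    # F_a(x) <= F_b(x) for all x, with samples a, b of a larger-is-better metric.
    grid = np.union1d(a, b)
    Fa = np.searchsorted(np.sort(a), grid, side="right") / a.size
    Fb = np.searchsorted(np.sort(b), grid, side="right") / b.size
    return bool(np.all(Fa <= Fb))

rng = np.random.default_rng(5)
print(first_order_dominates(rng.normal(10, 1, 5000), rng.normal(9, 1, 5000)))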
APA, Harvard, Vancouver, ISO, and other styles
38

Navarra, A., M. N. Ward, and N. A. Rayner. "A stochastic model of SST for climate simulation experiments." Climate Dynamics 14, no. 7-8 (June 26, 1998): 473–87. http://dx.doi.org/10.1007/s003820050235.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Hetmanski, Joseph H. R., Matthew C. Jones, Fatima Chunara, Jean-Marc Schwartz, and Patrick T. Caswell. "Combinatorial mathematical modelling approaches to interrogate rear retraction dynamics in 3D cell migration." PLOS Computational Biology 17, no. 3 (March 10, 2021): e1008213. http://dx.doi.org/10.1371/journal.pcbi.1008213.

Full text
Abstract:
Cell migration in 3D microenvironments is a complex process which depends on the coordinated activity of leading edge protrusive force and rear retraction in a push-pull mechanism. While the potentiation of protrusions has been widely studied, the precise signalling and mechanical events that lead to retraction of the cell rear are much less well understood, particularly in physiological 3D extra-cellular matrix (ECM). We previously discovered that rear retraction in fast moving cells is a highly dynamic process involving the precise spatiotemporal interplay of mechanosensing by caveolae and signalling through RhoA. To further interrogate the dynamics of rear retraction, we have adopted three distinct mathematical modelling approaches here based on (i) Boolean logic, (ii) deterministic kinetic ordinary differential equations (ODEs) and (iii) stochastic simulations. The aims of this multi-faceted approach are twofold: firstly to derive new biological insight into cell rear dynamics via generation of testable hypotheses and predictions; and secondly to compare and contrast the distinct modelling approaches when used to describe the same, relatively under-studied system. Overall, our modelling approaches complement each other, suggesting that such a multi-faceted approach is more informative than methods based on a single modelling technique to interrogate biological systems. Whilst Boolean logic was not able to fully recapitulate the complexity of rear retraction signalling, an ODE model could make plausible population level predictions. Stochastic simulations added a further level of complexity by accurately mimicking previous experimental findings and acting as a single cell simulator. Our approach highlighted the unanticipated role for CDK1 in rear retraction, a prediction we confirmed experimentally. Moreover, our models led to a novel prediction regarding the potential existence of a ‘set point’ in local stiffness gradients that promotes polarisation and rapid rear retraction.
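Of the three approaches, the stochastic one is the easiest to sketch: a minimal Gillespie simulation of a toy two-state activation/deactivation scheme (rates and species are illustrative, not the published rear-retraction model):

import numpy as np

rng = np.random.default_rng(6)
k_on, k_off = 0.5, 0.3                          # toy rate constants
inactive, active = 100, 0
t, t_end = 0.0, 50.0
while t < t_end:
    a1, a2 = k_on * inactive, k_off * active    # reaction propensities
    a0 = a1 + a2
    if a0 == 0:
        break
    t += rng.exponential(1.0 / a0)              # time to the next event
    if rng.uniform() < a1 / a0:
        inactive, active = inactive - 1, active + 1   # activation
    else:
        inactive, active = inactive + 1, active - 1   # deactivation
print("active molecules at t_end:", active)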
APA, Harvard, Vancouver, ISO, and other styles
40

Ouala, Said, Ronan Fablet, Cédric Herzet, Bertrand Chapron, Ananda Pascual, Fabrice Collard, and Lucile Gaultier. "Neural Network Based Kalman Filters for the Spatio-Temporal Interpolation of Satellite-Derived Sea Surface Temperature." Remote Sensing 10, no. 12 (November 22, 2018): 1864. http://dx.doi.org/10.3390/rs10121864.

Full text
Abstract:
The forecasting and reconstruction of oceanic dynamics is a crucial challenge. While model-driven strategies are still the state-of-the-art approaches to the reconstruction of spatio-temporal dynamics, the ever-increasing availability of data collections in oceanography has raised the relevance of data-driven approaches as computationally efficient representations for spatio-temporal field reconstruction. These tools have proved to outperform classical state-of-the-art interpolation techniques such as optimal interpolation and DINEOF in retrieving fine-scale structures while remaining computationally efficient compared with model-based data assimilation schemes. However, coupling these data-driven priors to classical filtering schemes limits their potential representativity. From this point of view, recent advances in machine learning, and especially neural networks and deep learning, can provide a new infrastructure for dynamical modeling and interpolation within a data-driven framework. In this work we address this challenge and develop a novel Neural-Network-based (NN-based) Kalman filter for spatio-temporal interpolation of sea surface dynamics. Based on a data-driven probabilistic representation of spatio-temporal fields, our approach can be regarded as an alternative to classical filtering schemes such as the ensemble Kalman filter (EnKF) in data assimilation. Overall, the key features of the proposed approach are two-fold: (i) we propose a novel architecture for the stochastic representation of two-dimensional (2D) geophysical dynamics based on neural networks, and (ii) we derive the associated parametric Kalman-like filtering scheme for a computationally efficient spatio-temporal interpolation of Sea Surface Temperature (SST) fields. We illustrate the relevance of our contribution for an OSSE (Observing System Simulation Experiment) in a case-study region off South Africa. Our numerical experiments report significant improvements in reconstruction performance compared with operational and state-of-the-art schemes (e.g., optimal interpolation, Empirical Orthogonal Function (EOF) based interpolation, and analog data assimilation).
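A stripped-down sketch of the filtering step (the learned neural dynamical operator is replaced here by a fixed linear matrix F, and the "SST field" is a tiny vector): predict with the dynamical prior, then update with sparse observations:

import numpy as np

rng = np.random.default_rng(7)
n = 10                                          # tiny stand-in SST state
F = 0.95 * np.eye(n)                            # stand-in learned dynamics
Q, R = 0.01 * np.eye(n), 0.04 * np.eye(3)
H = np.zeros((3, n)); H[0, 1] = H[1, 4] = H[2, 8] = 1.0  # 3 observed pixels

x, P = np.zeros(n), np.eye(n)
for _ in range(20):
    x, P = F @ x, F @ P @ F.T + Q               # predict with the prior
    y = H @ x + rng.normal(scale=0.2, size=3)   # synthetic observations
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
    x = x + K @ (y - H @ x)                     # update
    P = (np.eye(n) - K @ H) @ P
print(x.round(3))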
APA, Harvard, Vancouver, ISO, and other styles
41

Lamont, Byron B., Neal J. Enright, E. T. F. Witkowski, and J. Groeneveld. "Conservation biology of banksias: insights from natural history to simulation modelling." Australian Journal of Botany 55, no. 3 (2007): 280. http://dx.doi.org/10.1071/bt06024.

Full text
Abstract:
We have studied the ecology and conservation requirements of Banksia species in the species-rich sandplains of south-western Australia for 25 years. Loss of habitat through land-clearing has had the greatest impact on their conservation status over the last 50 years. Ascertaining optimal conditions for conservation management in bushland requires detailed knowledge of the species under consideration, including demographic attributes, fire regime, growing conditions and interactions with other species. Where populations have been fragmented, seed production per plant has also fallen. The group most vulnerable to the vagaries of fire, disease, pests, weeds and climate change are the non-sprouters, rather than the resprouters, with population extinction so far confined to non-sprouting species. Recent short-interval fires (<8 years) appear to have had little impact at the landscape scale, possibly because they are rare and patchy. Fire intervals exceeding 25–50 years can also lead to local extinction. Up to 200 viable seeds are required for parent replacement in Banksia hookeriana when growing conditions are poor (low post-fire rainfall, commercial flower harvesting) and seed banks of this size can take up to 12 years to be reached. Seed production is rarely limited by pollinators, but interannual seasonal effects and resource availability are important. Genetic diversity of the seed store is quickly restored to the level of the parents in B. hookeriana. Florivores and granivores generally reduce seed stores, although this varies markedly among species. In Banksia tricuspis, black cockatoos actually increase seed set by selectively destroying borers. Potential loss of populations through the root pathogen Phytophthora cinnamomi also challenges management, especially in the southern sandplains. Prefire dead plants are a poor source of seeds for the next generation when fire does occur. Harvesting seeds and sowing post-fire have much to commend them for critically endangered species. Bare areas caused by humans can result in ideal conditions for plant growth and seed set. However, in the case of B. hookeriana/B. prionotes, disturbance by humans has fostered hybridisation, threatening the genetic integrity of both species, whereas fine-textured soils are unsuitable for colonisation or rehabilitation. Few viable seeds become seedlings after fire, owing to post-release granivory and herbivory and unsuitable germination conditions. Seedling-competitive effects ensure that season/intensity of fire is not critical to recruitment levels, except in the presence of weeds. Water availability during summer–autumn is critical and poses a problem for conservation management if the trend for declining rainfall in the region continues. Our simulation modelling for three banksias shows that the probability of co-occurrence is maximal when fire is stochastic around a mean of 13 years, and where fire-proneness and post-fire recruitment success vary in the landscape. Modelling results suggest that non-sprouting banksias could not survive the pre-European frequent-fire scenario suggested by the new grasstree technique for south-western Australia. However, we have yet to fully explore the conservation significance of long-distance dispersal of seeds, recently shown to exceed 2.5 km in B. hookeriana.
APA, Harvard, Vancouver, ISO, and other styles
42

Wang, S. S., and H. P. Hong. "Partial safety factors for designing and assessing flexible pavement performance." Canadian Journal of Civil Engineering 31, no. 3 (June 1, 2004): 397–406. http://dx.doi.org/10.1139/l03-109.

Full text
Abstract:
In designing and assessing pavement performance, the uncertainty in material properties and geometrical variables of pavement and in traffic and environmental actions should be considered. A single factor is employed to deal with these uncertainties in the current American Association of State Highway and Transportation Officials (AASHTO) guide for design of pavements. However, use of this single factor may not ensure reliability-consistent pavement design and assessment, because different random variables with different degrees of uncertainty affect the safety and performance of pavement differently. Similar problems associated with structural design have been recognized by code writers and dealt with using partial safety factors or load resistance factors. The present study focuses on evaluating a set of partial safety factors to be used in conjunction with the flexible pavement deterioration model in the Ontario pavement analysis of cost and the model in the AASHTO guide for evaluating flexible pavement performance or serviceability. Evaluation and probabilistic analyses are carried out using the first-order reliability method and a simple simulation technique. The results of the analysis were used to suggest factors that could be used, in a partial safety factor format, for designing or assessing flexible pavement conditions to achieve a specified target safety level. Key words: deterioration, reliability, pavement, serviceability, stochastic process, performance, partial safety factor.
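A toy Monte Carlo reliability calculation in the same spirit (the limit-state function and input distributions are invented for illustration, not taken from the AASHTO or Ontario models):

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(8)
n = 200_000
thickness = rng.normal(200, 15, n)              # mm (toy variability)
modulus = rng.lognormal(np.log(300), 0.2, n)    # MPa (toy variability)
traffic = rng.lognormal(np.log(1e6), 0.3, n)    # ESALs (toy variability)

# Toy limit-state function: g > 0 means acceptable serviceability.
g = 0.004 * thickness + 0.002 * modulus - 1.1e-6 * traffic
pf = np.mean(g < 0)
print("failure probability:", pf, "reliability index:", -norm.ppf(pf))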
APA, Harvard, Vancouver, ISO, and other styles
43

SIÓDMIAK, J., and A. GADOMSKI. "COMPUTER MODEL OF BIOPOLYMER CRYSTAL GROWTH AND AGGREGATION BY ADDITION OF MACROMOLECULAR UNITS — A COMPARATIVE STUDY." International Journal of Modern Physics C 17, no. 07 (July 2006): 1037–53. http://dx.doi.org/10.1142/s0129183106009643.

Full text
Abstract:
We discuss the results of a computer simulation of biopolymer crystal growth and aggregation based on the 2D lattice Monte Carlo technique and the HP approximation of the biopolymers. As the modeled molecule (growth unit) we comparatively consider the previously studied non-mutant lysozyme protein, Protein Data Bank (PDB) ID: 193L, which forms, under a certain set of thermodynamic-kinetic conditions, tetragonal crystals, and an amyloidogenic variant of the lysozyme, PDB ID: 1LYY, which is known as a fibril-yielding and prone-to-aggregation agent. In our model, site-dependent attachment, detachment and migration processes are involved. The probabilities of growth-unit motion, attachment and detachment to/from the crystal surface are assumed to be proportional to the orientational factor representing the anisotropy of the molecule. Working within a two-dimensional representation of the truly three-dimensional process, we also argue that the crystal grows in a spiral way, whereby one or more screw dislocations on the crystal surface give rise to a terrace. We interpret the obtained results in terms of known models of crystal growth and aggregation such as B-C-F (Burton-Cabrera-Frank) dislocation-driven growth and the M-S (Mullins-Sekerka) instability concept, with stochastic aspects supplementing the latter. We discuss the conditions under which crystals vs. non-crystalline protein aggregates appear, and how the process depends upon differences in the chemical structure of the protein molecule, seen as the main building block of the elementary crystal cell.
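A toy version of the orientation-weighted attachment move on a 2D lattice (the orientational factor is a random stand-in for the HP-model anisotropy, and detachment/migration moves are omitted):

import numpy as np

rng = np.random.default_rng(9)
L = 32
crystal = np.zeros((L, L), dtype=bool)
crystal[L // 2, L // 2] = True                  # seed unit

def neighbors(i, j):
    return [((i - 1) % L, j), ((i + 1) % L, j), (i, (j - 1) % L), (i, (j + 1) % L)]

for _ in range(5000):
    i, j = rng.integers(L, size=2)
    if crystal[i, j]:
        continue
    contacts = sum(crystal[p] for p in neighbors(i, j))
    if contacts == 0:
        continue                                # only grow at the surface
    orientation = rng.uniform()                 # stand-in orientational factor
    if rng.uniform() < orientation * contacts / 4.0:
        crystal[i, j] = True                    # attach the growth unit
print("cluster size:", crystal.sum())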
APA, Harvard, Vancouver, ISO, and other styles
44

Huang, Chuangxia, Xinsong Yang, and Yigang He. "Stability Analysis of Stochastic Reaction-Diffusion Cohen-Grossberg Neural Networks with Time-Varying Delays." Discrete Dynamics in Nature and Society 2009 (2009): 1–18. http://dx.doi.org/10.1155/2009/439173.

Full text
Abstract:
This paper is concerned with pth moment exponential stability of stochastic reaction-diffusion Cohen-Grossberg neural networks with time-varying delays. With the help of the Lyapunov method, stochastic analysis, and inequality techniques, a set of new sufficient conditions on pth moment exponential stability for the considered system is presented. The proposed results generalize and improve some earlier publications.
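For reference, the standard definition involved (stated in its usual textbook form, not quoted from the paper): the trivial solution is pth moment exponentially stable if there exist constants α, β > 0 such that

E\|x(t; x_0)\|^{p} \le \alpha \|x_0\|^{p} e^{-\beta t}, \qquad t \ge 0,

for every initial state x_0; p = 2 recovers exponential stability in mean square.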
APA, Harvard, Vancouver, ISO, and other styles
45

El-Beltagy, Mohamed A., and Amnah S. Al-Johani. "Numerical Approximation of Higher-Order Solutions of the Quadratic Nonlinear Stochastic Oscillatory Equation Using WHEP Technique." Journal of Applied Mathematics 2013 (2013): 1–21. http://dx.doi.org/10.1155/2013/685137.

Full text
Abstract:
This paper introduces higher-order solutions of stochastic nonlinear differential equations using the Wiener-Hermite expansion and perturbation (WHEP) technique. The technique is used to study the quadratic nonlinear stochastic oscillatory equation with different orders, different numbers of corrections, and different strengths of the nonlinear term. The equivalent deterministic equations are derived up to third order and fourth correction. A model numerical integral solver is developed to solve the resulting set of equations. The numerical solver is tested and validated and then used to simulate the stochastic quadratic nonlinear oscillatory motion with different parameters. The solution ensemble average and variance are computed and compared in all cases. The current work extends the use of the WHEP technique in solving stochastic nonlinear differential equations.
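In its standard form (notation may differ from the paper), the Wiener-Hermite expansion writes the solution process as

x(t;\omega) = x^{(0)}(t) + \int_{\mathbb{R}} x^{(1)}(t;t_1)\, H^{(1)}(t_1)\, dt_1 + \int_{\mathbb{R}^2} x^{(2)}(t;t_1,t_2)\, H^{(2)}(t_1,t_2)\, dt_1\, dt_2 + \cdots,

with deterministic kernels x^{(i)} and Wiener-Hermite functionals H^{(i)}; the perturbation part of WHEP then expands each kernel in powers of the nonlinearity strength ε, x^{(i)} = \sum_{j} \varepsilon^{j} x_j^{(i)}, which yields the equivalent deterministic equations solved numerically.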
APA, Harvard, Vancouver, ISO, and other styles
46

Short, Michael. "Bounds on Worst-Case Deadline Failure Probabilities in Controller Area Networks." Journal of Computer Networks and Communications 2016 (2016): 1–12. http://dx.doi.org/10.1155/2016/5196092.

Full text
Abstract:
Industrial communication networks like the Controller Area Network (CAN) are often required to operate reliably in harsh environments which expose the communication network to random errors. Probabilistic schedulability analysis can employ rich stochastic error models to capture random error behaviors, but this is most often at the expense of increased analysis complexity. In this paper, an efficient method (of time complexity O(n log n)) to bound the message deadline failure probabilities for an industrial CAN network consisting of n periodic/sporadic message transmissions is proposed. The paper develops bounds for Deadline Minus Jitter Monotonic (DMJM) and Earliest Deadline First (EDF) message scheduling techniques. Both random errors and random bursts of errors can be included in the model. Stochastic simulations and a case study considering DMJM and EDF scheduling of an automotive benchmark message set provide validation of the technique and highlight its application.
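A generic illustration of this style of bound (not the paper's O(n log n) analysis): if errors arrive as a Poisson process and the slack of a message can absorb at most k error-recovery retransmissions before its deadline, the deadline failure probability is bounded by the Poisson tail:

from math import exp, factorial

def deadline_failure_bound(error_rate, deadline, max_retransmissions):
    # P(more than k errors before the deadline) for Poisson arrivals.
    lam = error_rate * deadline
    p_ok = sum(exp(-lam) * lam**i / factorial(i)
               for i in range(max_retransmissions + 1))
    return 1.0 - p_ok

print(deadline_failure_bound(error_rate=1e-4, deadline=10.0, max_retransmissions=2))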
APA, Harvard, Vancouver, ISO, and other styles
47

Basile, Angelo, Antonello Bonfante, Antonio Coppola, Roberto De Mascellis, Salvatore Falanga Bolognesi, Fabio Terribile, and Piero Manna. "How does PTF Interpret Soil Heterogeneity? A Stochastic Approach Applied to a Case Study on Maize in Northern Italy." Water 11, no. 2 (February 5, 2019): 275. http://dx.doi.org/10.3390/w11020275.

Full text
Abstract:
Soil water balance on a local scale is generally achieved by applying the classical nonlinear Richards equation, which requires the hydraulic properties, namely the water retention and hydraulic conductivity functions, to be known. Its application in agricultural systems on field or larger scales involves solving three major problems, related to (i) the assessment of spatial variability of soil hydraulic properties, (ii) accounting for this spatial variability in modelling large-scale soil water flow, and (iii) measuring the effects of such variability on real field variables (e.g., soil water storage, biomass, etc.). To deal with the first issue, soil hydraulic characterization is frequently performed by using the so-called pedotransfer functions (PTFs), whose effectiveness in providing actual information on spatial variability has been questioned. With regard to the second problem, the variability of hydraulic properties at the field scale has often been dealt with using the relatively simple approach of considering soils in the field as an ensemble of parallel and statistically independent tubes, assuming only vertical flow. This approach to spatial variability has been popular in the framework of Monte Carlo techniques. As for the last issue, remote sensing seems to be the only viable way to verify the patterns of variability produced by modelling outputs that account for soil spatial variability. Based on these premises, the goals of this work are the following: (1) analyzing the sensitivity of a Richards-based model to the measured variability of the θ(h) and k(θ) parameters; (2) establishing the predictive capability of PTFs by simple comparison with measured data; and (3) establishing the effectiveness of PTF use by employing, as a data quality control, an independent and spatially distributed estimate of the Above Ground Biomass (AGB). The study area of approximately 2000 hectares, mainly devoted to maize forage cultivation, is located in the Po plain (Lodi), in northern Italy. Sample sites throughout the study area were identified for hydropedological analysis (texture, bulk density, organic matter content, and other chemical properties on all samples, and the water retention curve and saturated hydraulic conductivity on a sub-set). Several pedotransfer functions were tested; the Vereecken PTF proved to be the best for deriving the hydraulic properties of the entire soil database. The Monte Carlo approach was used to analyze model sensitivity to two measured input parameters: the slope of the water retention curve (n) and the saturated hydraulic conductivity (k0). The analysis showed that the sensitivity of the simulated process to the parameter n was significantly higher than to k0, although the former was much less variable. The PTFs showed a smoothing effect on the output variability, even though they had previously been validated on a set of measured data. Interesting positive and significant correlations were found between the n parameter, from measured water retention curves, and the NDVI (Normalized Difference Vegetation Index), when using multi-temporal (2004–2018) high-resolution remotely sensed data on maize cultivation. No correlation was detected when the n parameter derived from the PTF was used. These results from our case study mainly suggest that: (i) despite the good performance of PTFs as measured by error indexes, their use in the simulation of hydrological processes should be carefully evaluated for real field-scale applications; and (ii) the NDVI index may be used successfully as a proxy to evaluate PTF reliability in the field.
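A compact sketch of the Monte Carlo sensitivity idea (fixed parameters, sampling ranges, and the pressure head are illustrative, not the paper's calibrated values): sample n and k0, and propagate n through the van Genuchten retention function:

import numpy as np

rng = np.random.default_rng(10)
theta_r, theta_s, alpha = 0.05, 0.40, 0.02      # fixed VG parameters (toy)

def van_genuchten(h, n):
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * np.abs(h)) ** n) ** m

h = -1000.0                                     # pressure head in cm (toy)
n_samples = rng.normal(1.4, 0.1, 10_000).clip(min=1.05)
k0_samples = rng.lognormal(np.log(50.0), 0.5, 10_000)  # cm/day (toy)

theta = van_genuchten(h, n_samples)
print("water content spread from n variability:", theta.std().round(4))
print("k0 coefficient of variation:", (k0_samples.std() / k0_samples.mean()).round(2))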
APA, Harvard, Vancouver, ISO, and other styles
48

Chang, R. J. "Maximum Entropy Approach for Stationary Response of Nonlinear Stochastic Oscillators." Journal of Applied Mechanics 58, no. 1 (March 1, 1991): 266–71. http://dx.doi.org/10.1115/1.2897162.

Full text
Abstract:
A new approach based on the maximum entropy method is developed for deriving the stationary probability density function of a stable nonlinear stochastic system. The technique is implemented by employing the density function with undetermined parameters from the entropy method and solving a set of algebraic moment equations from a nonlinear stochastic system for the unknown parameters. For a wide class of stochastic systems with given density functions, an explicit density function of the stochastic system perturbed by a nonlinear function of states and noises can be obtained. Three nonlinear oscillators are selected for illustrating the present scheme and the validity of the derived density functions is further supported by some exact solutions and Monte Carlo simulations.
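The resulting density has the familiar maximum entropy form (standard notation, assumed rather than quoted from the paper):

p(x) = \exp\Big( -\lambda_0 - \sum_{i=1}^{m} \lambda_i\, \phi_i(x) \Big),

where the multipliers λ_i are fixed by requiring that the moments \int \phi_i(x)\, p(x)\, dx match the values supplied by the algebraic moment equations of the stochastic system.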
APA, Harvard, Vancouver, ISO, and other styles
49

Menh, Nguyen Cao, and Tran Duong Tri. "On the simulation technique of stochastic processes and nonlinear vibrations." Vietnam Journal of Mechanics 16, no. 3 (September 30, 1994): 23–31. http://dx.doi.org/10.15625/0866-7136/10171.

Full text
Abstract:
In this paper, a procedure and a program for the simulation of stochastic processes are presented. The program is applied to nonlinear mechanical systems subjected to stationary stochastic excitation. The results obtained are compared with those from other methods in order to assess the accuracy of the simulation technique.
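For context, a widely used way of simulating a stationary Gaussian process is the spectral representation method (shown here as generic background with a toy spectral density; the paper's own procedure is not reproduced):

import numpy as np

rng = np.random.default_rng(11)
N, dw = 256, 0.1
w = (np.arange(N) + 0.5) * dw                   # discretized frequencies
S = 1.0 / (1.0 + w**4)                          # toy one-sided spectral density
phi = rng.uniform(0, 2 * np.pi, N)              # independent random phases

t = np.linspace(0, 60, 2000)
x = np.sum(np.sqrt(2 * S * dw)[:, None]
           * np.cos(np.outer(w, t) + phi[:, None]), axis=0)
print("sample variance:", x.var().round(3), "target:", (S * dw).sum().round(3))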
APA, Harvard, Vancouver, ISO, and other styles
50

Adlakha, V. G., and H. Arsham. "A simulation technique for estimation in perturbed stochastic activity networks." SIMULATION 58, no. 4 (April 1992): 258–67. http://dx.doi.org/10.1177/003754979205800406.

Full text
APA, Harvard, Vancouver, ISO, and other styles