To view other types of publications on this topic, follow the link: Stochastic processes with large dimension.

Journal articles on the topic "Stochastic processes with large dimension"

Format your source in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for research on the topic "Stochastic processes with large dimension."

Next to each work in the reference list there is an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, provided the corresponding details are available in the metadata.

Browse journal articles across a wide range of disciplines and compile your bibliography correctly.

1

Panos, Aristeidis, Petros Dellaportas, and Michalis K. Titsias. "Large scale multi-label learning using Gaussian processes." Machine Learning 110, no. 5 (April 14, 2021): 965–87. http://dx.doi.org/10.1007/s10994-021-05952-5.

Abstract:
We introduce a Gaussian process latent factor model for multi-label classification that can capture correlations among class labels by using a small set of latent Gaussian process functions. To address computational challenges, when the number of training instances is very large, we introduce several techniques based on variational sparse Gaussian process approximations and stochastic optimization. Specifically, we apply doubly stochastic variational inference that sub-samples data instances and classes, which allows us to cope with Big Data. Furthermore, we show it is possible and beneficial to optimize over inducing points, using gradient-based methods, even in very high dimensional input spaces involving up to hundreds of thousands of dimensions. We demonstrate the usefulness of our approach on several real-world large-scale multi-label learning problems.
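
The doubly stochastic scheme described above subsamples both data instances and classes so that each gradient step stays cheap. The following is a minimal sketch of that subsampling idea with an illustrative placeholder in place of the paper's variational objective; all names and sizes are assumptions, not taken from the paper.

```python
import numpy as np

# Sketch of doubly stochastic subsampling: an unbiased estimate of a double
# sum over N instances and C classes, formed from a random mini-batch of
# instances and a random subset of classes, rescaled to undo the subsampling.
rng = np.random.default_rng(0)
N, C = 100_000, 5_000            # instances and classes (illustrative sizes)
batch_n, batch_c = 256, 64       # mini-batch sizes

def term(i, c):
    """Placeholder per-(instance, class) objective contribution."""
    return -((i % 7) - (c % 5)) ** 2 * 1e-6

def doubly_stochastic_estimate():
    idx_n = rng.choice(N, size=batch_n, replace=False)
    idx_c = rng.choice(C, size=batch_c, replace=False)
    partial = sum(term(i, c) for i in idx_n for c in idx_c)
    # Rescaling makes the estimator unbiased for the full N x C sum.
    return partial * (N / batch_n) * (C / batch_c)

print(doubly_stochastic_estimate())   # noisy but unbiased; feed to SGD/Adam
```
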
2

Feitzinger, J. V. "Star Formation in the Large Magellanic Cloud." Symposium - International Astronomical Union 115 (1987): 521–33. http://dx.doi.org/10.1017/s0074180900096315.

Abstract:
Methods used in pattern recognition and cluster analysis are applied to investigate the spatial distribution of the star-forming regions, and the fractal dimension of these structures is deduced. The new 21 cm, radio continuum (1.4 GHz), and IRAS surveys reveal structures on scales of 700 pc to 1500 pc that coincide with the optically identified star-forming sites. The morphological structures delineated by young stars reflect the physical parameters that determine star formation in this galaxy. The formation of spiral arm filaments is understandable in terms of stochastic self-propagating star formation processes.
3

FRICKE, THOMAS, and DIETMAR WENDT. "THE MARKOFF AUTOMATON: A NEW ALGORITHM FOR SIMULATING THE TIME-EVOLUTION OF LARGE STOCHASTIC DYNAMIC SYSTEMS." International Journal of Modern Physics C 06, no. 02 (April 1995): 277–306. http://dx.doi.org/10.1142/s0129183195000216.

Abstract:
We describe a new algorithm for simulating complex Markoff processes. We have used a reaction-cell method in order to simulate arbitrary reactions; it can be used for any kind of reaction-diffusion system (RDS) on arbitrary topologies, including fractal dimensions or configurations not related to any spatial geometry. The events within a single cell are managed by an event handler that has been implemented independently of the system studied. The method is exact at the Markoff level, including the correct treatment of finite numbers of molecules. To demonstrate its properties, we apply it to very simple RDSs. The chemical equations [Formula: see text] and [Formula: see text] in 1 to 4 dimensions serve as models for systems whose dynamics on an intermediate time scale are governed by fluctuations. We compare our results to the analytic approach via the scaling ansatz. The simulations confirm the exponents of the A+B system within statistical errors, including the logarithmic corrections in dimension d=2. The method is able to simulate the crossover from the reaction- to the diffusion-limited regime, which is defined by a crossover time depending on the system size.
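
For readers unfamiliar with reaction-cell simulation, here is a minimal Gillespie-style sketch of exact event-driven simulation of A + B → 0 with hopping between cells on a one-dimensional ring. It illustrates the general idea only; it is not the paper's event-handler implementation, and all rates and sizes are made up.

```python
import numpy as np

# Exact stochastic simulation of A + B -> 0 with nearest-neighbour hopping
# on a 1-D ring of reaction cells (generic Gillespie-style illustration).
rng = np.random.default_rng(1)
L, k_react, k_hop = 32, 1.0, 1.0
A = rng.integers(0, 5, size=L).astype(float)
B = rng.integers(0, 5, size=L).astype(float)

t, t_end = 0.0, 5.0
while t < t_end:
    react = k_react * A * B           # per-cell reaction propensities
    hop = k_hop * (A + B)             # per-cell hop propensities
    total = react.sum() + hop.sum()
    if total == 0:
        break
    t += rng.exponential(1.0 / total)            # exponential waiting time
    if rng.random() < react.sum() / total:       # reaction event
        c = rng.choice(L, p=react / react.sum())
        A[c] -= 1; B[c] -= 1
    else:                                        # diffusion (hop) event
        c = rng.choice(L, p=hop / hop.sum())
        s = 'A' if rng.random() < A[c] / (A[c] + B[c]) else 'B'
        d = (c + rng.choice([-1, 1])) % L        # random neighbour cell
        if s == 'A':
            A[c] -= 1; A[d] += 1
        else:
            B[c] -= 1; B[d] += 1

print(f"t={t:.2f}, remaining A={A.sum():.0f}, B={B.sum():.0f}")
```
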
4

Mazzolo, Alain, and Cécile Monthus. "Conditioning diffusion processes with killing rates." Journal of Statistical Mechanics: Theory and Experiment 2022, no. 8 (August 1, 2022): 083207. http://dx.doi.org/10.1088/1742-5468/ac85ea.

Abstract:
When the unconditioned process is a diffusion subjected to a space-dependent killing rate k(x⃗), various conditioning constraints can be imposed for a finite time horizon T. We first analyze the conditioned process when one imposes both the surviving distribution at time T and the killing-distribution for the intermediate times t ∈ [0, T]. When the conditioning constraints are less detailed than these full distributions, we construct the appropriate conditioned processes via the optimization of the dynamical large deviations at level 2.5 in the presence of the conditioning constraints that one wishes to impose. Finally, we describe various conditioned processes for the infinite horizon T → +∞. This general construction is then applied to two illustrative examples in order to generate stochastic trajectories satisfying various types of conditioning constraints: the first example concerns the pure diffusion in dimension d with the quadratic killing rate k(x⃗) = γx⃗², while the second example is the Brownian motion with uniform drift subjected to the delta killing rate k(x) = kδ(x) localized at the origin x = 0.
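
As background to the construction above (and not the paper's level-2.5 derivation itself), the weight that conditioning on non-killing introduces is the standard Feynman–Kac survival factor:

```latex
P_{\mathrm{surv}}[x(0{:}T)] = \exp\!\Big( -\int_0^T k\big(\vec{x}(t)\big)\, dt \Big),
\qquad
P^{\mathrm{cond}}[x(0{:}T)] \propto P[x(0{:}T)] \, P_{\mathrm{surv}}[x(0{:}T)],
```

so trajectories that linger where k(x⃗) is large are exponentially suppressed in the conditioned ensemble.
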
5

Honkonen, Juha. "Fractional Stochastic Field Theory." EPJ Web of Conferences 173 (2018): 01005. http://dx.doi.org/10.1051/epjconf/201817301005.

Abstract:
Models describing evolution of physical, chemical, biological, social and financial processes are often formulated as differential equations with the understanding that they are large-scale equations for averages of quantities describing intrinsically random processes. Explicit account of randomness may lead to significant changes in the asymptotic behaviour (anomalous scaling) in such models especially in low spatial dimensions, which in many cases may be captured with the use of the renormalization group. Anomalous scaling and memory effects may also be introduced with the use of fractional derivatives and fractional noise. Construction of renormalized stochastic field theory with fractional derivatives and fractional noise in the underlying stochastic differential equations and master equations and the interplay between fluctuation-induced and built-in anomalous scaling behaviour is reviewed and discussed.
6

Jonckheere, Matthieu, and Seva Shneer. "Stability of Multi-Dimensional Birth-and-Death Processes with State-Dependent 0-Homogeneous Jumps." Advances in Applied Probability 46, no. 1 (March 2014): 59–75. http://dx.doi.org/10.1239/aap/1396360103.

Abstract:
We study the conditions for positive recurrence and transience of multi-dimensional birth-and-death processes describing the evolution of a large class of stochastic systems, a typical example being the randomly varying number of flow-level transfers in a telecommunication wire-line or wireless network. First, using an associated deterministic dynamical system, we provide a generic method to construct a Lyapunov function when the drift is a smooth function on ℝ^N. This approach gives an elementary and direct proof of ergodicity. We also provide instability conditions. Our main contribution consists of showing how discontinuous drifts change the nature of the stability conditions and of providing generic sufficient stability conditions having a simple geometric interpretation. These conditions turn out to be necessary (outside a negligible set of the parameter space) for piecewise constant drifts in dimension two.
7

Jonckheere, Matthieu, and Seva Shneer. "Stability of Multi-Dimensional Birth-and-Death Processes with State-Dependent 0-Homogeneous Jumps." Advances in Applied Probability 46, no. 01 (March 2014): 59–75. http://dx.doi.org/10.1017/s0001867800006935.

Abstract:
We study the conditions for positive recurrence and transience of multi-dimensional birth-and-death processes describing the evolution of a large class of stochastic systems, a typical example being the randomly varying number of flow-level transfers in a telecommunication wire-line or wireless network. First, using an associated deterministic dynamical system, we provide a generic method to construct a Lyapunov function when the drift is a smooth function on ℝ^N. This approach gives an elementary and direct proof of ergodicity. We also provide instability conditions. Our main contribution consists of showing how discontinuous drifts change the nature of the stability conditions and of providing generic sufficient stability conditions having a simple geometric interpretation. These conditions turn out to be necessary (outside a negligible set of the parameter space) for piecewise constant drifts in dimension two.
8

Anantharam, Venkat, and François Baccelli. "Capacity and Error Exponents of Stationary Point Processes under Random Additive Displacements." Advances in Applied Probability 47, no. 1 (March 2015): 1–26. http://dx.doi.org/10.1239/aap/1427814578.

Abstract:
Consider a real-valued discrete-time stationary and ergodic stochastic process, called the noise process. For each dimension n, we can choose a stationary point process in ℝ^n and a translation invariant tessellation of ℝ^n. Each point is randomly displaced, with a displacement vector being a section of length n of the noise process, independent from point to point. The aim is to find a point process and a tessellation that minimizes the probability of decoding error, defined as the probability that the displaced version of the typical point does not belong to the cell of this point. We consider the Shannon regime, in which the dimension n tends to ∞, while the logarithm of the intensity of the point processes, normalized by dimension, tends to a constant. We first show that this problem exhibits a sharp threshold: if the sum of the asymptotic normalized logarithmic intensity and of the differential entropy rate of the noise process is positive, then the probability of error tends to 1 with n for all point processes and all tessellations. If it is negative then there exist point processes and tessellations for which this probability tends to 0. The error exponent function, which denotes how quickly the probability of error goes to 0 in n, is then derived using large deviations theory. If the entropy spectrum of the noise satisfies a large deviations principle, then, below the threshold, the error probability goes exponentially fast to 0 with an exponent that is given in closed form in terms of the rate function of the noise entropy spectrum. This is obtained for two classes of point processes: the Poisson process and a Matérn hard-core point process. New lower bounds on error exponents are derived from this for Shannon's additive noise channel in the high signal-to-noise ratio limit that hold for all stationary and ergodic noises with the above properties and that match the best known bounds in the white Gaussian noise case.
9

Anantharam, Venkat, and François Baccelli. "Capacity and Error Exponents of Stationary Point Processes under Random Additive Displacements." Advances in Applied Probability 47, no. 01 (March 2015): 1–26. http://dx.doi.org/10.1017/s0001867800007679.

Abstract:
Consider a real-valued discrete-time stationary and ergodic stochastic process, called the noise process. For each dimension n, we can choose a stationary point process in ℝ^n and a translation invariant tessellation of ℝ^n. Each point is randomly displaced, with a displacement vector being a section of length n of the noise process, independent from point to point. The aim is to find a point process and a tessellation that minimizes the probability of decoding error, defined as the probability that the displaced version of the typical point does not belong to the cell of this point. We consider the Shannon regime, in which the dimension n tends to ∞, while the logarithm of the intensity of the point processes, normalized by dimension, tends to a constant. We first show that this problem exhibits a sharp threshold: if the sum of the asymptotic normalized logarithmic intensity and of the differential entropy rate of the noise process is positive, then the probability of error tends to 1 with n for all point processes and all tessellations. If it is negative then there exist point processes and tessellations for which this probability tends to 0. The error exponent function, which denotes how quickly the probability of error goes to 0 in n, is then derived using large deviations theory. If the entropy spectrum of the noise satisfies a large deviations principle, then, below the threshold, the error probability goes exponentially fast to 0 with an exponent that is given in closed form in terms of the rate function of the noise entropy spectrum. This is obtained for two classes of point processes: the Poisson process and a Matérn hard-core point process. New lower bounds on error exponents are derived from this for Shannon's additive noise channel in the high signal-to-noise ratio limit that hold for all stationary and ergodic noises with the above properties and that match the best known bounds in the white Gaussian noise case.
10

Dulfan, Anna, and Iryna Voronko. "Features of Spatial-Temporal Hierarchical Structures Formation." Lighting Engineering & Power Engineering 60, no. 2 (October 29, 2021): 66–70. http://dx.doi.org/10.33042/2079-424x.2021.60.2.03.

Abstract:
The degree of ordering of the structure of technologically important materials formed as a result of the evolution of complex physicochemical systems determines their physical, and in particular optical, properties. The primary task for the theoretical study of methods for obtaining materials with predetermined physical properties is therefore to develop approaches for describing the evolution of fractal (scale-invariant) objects during the formation of self-similar structures in systems exhibiting chaotic behavior. The paper develops a picture of the processes of evolution in materials formed as a result of stochastic processes. It is established that introducing an ultrametric in time space makes it possible to characterize the time of the evolutionary process by a fractal dimension, which is calculated either theoretically or by modeling. The description of evolutionary processes in a condensed medium accompanied by topological transformations is significantly supplemented by a method for describing the stages of the evolution of structures, which makes it possible to analyze a wide range of materials and to control their properties, primarily optical. It is shown that the most scale-invariant structures, owing to the investigated properties, can be used as information carriers. It is demonstrated that the presence of a fractal temporal dimension in physical systems generates a self-similar evolutionary tree (consisting of parts similar, in a sense, to the whole object), which in turn generates spatial objects of non-integer dimension observed in real situations. On the other hand, temporal fractality permits the analysis of systems with dynamic chaos, leading to universal relaxation functions. In particular, in systems with a scale-invariant distribution of relaxation characteristics, an algebraic law of relaxation emerges, which leads to rheological models and equations of state characterized by fractional derivatives. It is argued that the fractal dimension of time hierarchies stores information that determines the process of self-organization. The ideas developed in the paper about the structure-formation processes that lead to the fractal geometry of objects can be used to predict material properties, in particular optical ones.
11

EL BOUSTANI, SAMI, and ALAIN DESTEXHE. "BRAIN DYNAMICS AT MULTIPLE SCALES: CAN ONE RECONCILE THE APPARENT LOW-DIMENSIONAL CHAOS OF MACROSCOPIC VARIABLES WITH THE SEEMINGLY STOCHASTIC BEHAVIOR OF SINGLE NEURONS?" International Journal of Bifurcation and Chaos 20, no. 06 (June 2010): 1687–702. http://dx.doi.org/10.1142/s0218127410026769.

Abstract:
Nonlinear time series analyses have suggested that the human electroencephalogram (EEG) may share statistical and dynamical properties with chaotic systems. During slow-wave sleep or pathological states like epilepsy, correlation dimension measurements display low values, while in awake and attentive subjects, there is no such low dimensionality, and the EEG is more similar to a stochastic variable. We briefly review these results and contrast them with recordings in cat cerebral cortex, as well as with theoretical models. In awake or sleeping cats, recordings with microelectrodes inserted in cortex show that global variables such as local field potentials (local EEG) are similar to the human EEG. However, neuronal discharges are highly irregular and exponentially distributed, similar to Poisson stochastic processes. To reconcile these results, we investigate models of randomly-connected networks of integrate-and-fire neurons, and also contrast global (averaged) variables, with neuronal activity. The network displays different states, such as "synchronous regular" (SR) or "asynchronous irregular" (AI) states. In SR states, the global variables display coherent behavior with low dimensionality, while in AI states, the global activity is high-dimensionally chaotic with exponentially distributed neuronal discharges, similar to awake cats. Scale-dependent Lyapunov exponents and ∊-entropies show that the seemingly stochastic nature at small scales (neurons) can coexist with more coherent behavior at larger scales (averages). Thus, we suggest that brain activity obeys a similar scheme, with seemingly stochastic dynamics at small scales (neurons), while large scales (EEG) display more coherent behavior or high-dimensional chaos.
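
The correlation-dimension measurements this abstract refers to are typically Grassberger–Procaccia estimates. Below is a minimal sketch on a surrogate signal, assuming a delay embedding with illustrative parameters (m, tau); real EEG analyses need far more care with stationarity, noise, and scaling-region selection.

```python
import numpy as np
from scipy.spatial.distance import pdist

# Grassberger-Procaccia sketch: estimate a correlation dimension from the
# scaling C(r) ~ r^D of the correlation sum of a delay-embedded signal.
rng = np.random.default_rng(2)
x = np.cumsum(rng.standard_normal(2000))      # surrogate 1-D signal

m, tau = 5, 3                                  # embedding dimension and delay
n = len(x) - (m - 1) * tau
emb = np.column_stack([x[i * tau:i * tau + n] for i in range(m)])

d = pdist(emb)                                 # all pairwise distances
radii = np.logspace(np.log10(d[d > 0].min()), np.log10(d.max()), 20)
corr_sum = np.array([(d < r).mean() for r in radii])

# Slope of log C(r) vs log r over the mid-range approximates the dimension.
mask = (corr_sum > 1e-3) & (corr_sum < 0.5)
slope = np.polyfit(np.log(radii[mask]), np.log(corr_sum[mask]), 1)[0]
print(f"estimated correlation dimension ~ {slope:.2f}")
```
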
12

Hashino, Tempei, and Gregory J. Tripoli. "The Spectral Ice Habit Prediction System (SHIPS). Part IV: Box Model Simulations of the Habit-Dependent Aggregation Process." Journal of the Atmospheric Sciences 68, no. 6 (June 1, 2011): 1142–61. http://dx.doi.org/10.1175/2011jas3667.1.

Abstract:
The purpose of this paper is to assess the prediction of particle properties of aggregates and particle size distributions with the Spectral Ice Habit Prediction System (SHIPS) and to investigate the effects of crystal habits on the aggregation process. Aggregation processes of ice particles are critical to the understanding of precipitation and the radiative signatures of cloud systems. Conventional approaches taken in cloud-resolving models (CRMs) are not ideal for studying the effects of crystal habits on aggregation processes because the properties of aggregates have to be assumed beforehand. As described in Part III, SHIPS solves the stochastic collection equation along with particle property variables that contain information about crystal habits and maximum dimensions of aggregates. This approach makes it possible to simulate properties of aggregates explicitly and continuously in CRMs according to the crystal habits. The aggregation simulations were implemented in a simple model setup, assuming seven crystal habits and several initial particle size distributions (PSDs). The predicted PSDs showed good agreement with observations after rescaling, except at the large-size end. The ice particle properties predicted by the model, such as the mass–dimensional (m-D) relationship and the relationship between the diameter of aggregates and the number of component crystals in an aggregate, were found to be quantitatively similar to those observed. Furthermore, these predictions were dependent on the initial PSDs and habits. A simple model for the growth of a particle's maximum dimension was able to reproduce the typically observed fractal dimension of aggregates when an observed value of the separation ratio of two particles was used. A detailed analysis of the collection kernel indicates that the m-D relationship unique to each crystal habit has a large impact on the growth rate of aggregates through the cross-sectional area or terminal velocity difference, depending on the initial equivalent particle distribution. A significant decrease in terminal velocity differences was found in the inertial flow regime for all the habits but the constant-density sphere. It led to the formation of a local maximum in the collection kernel and, in turn, formed an identifiable mode in the PSDs. Remaining issues that must be addressed in order to improve the aggregation simulation with the quasi-stochastic model are discussed.
13

Hasal, Pavel, and Vladimír Kudrna. "Certain Problems with the Application of Stochastic Diffusion Processes for the Description of Chemical Engineering Phenomena. Numerical Simulation of One-Dimensional Diffusion Process." Collection of Czechoslovak Chemical Communications 61, no. 4 (1996): 512–35. http://dx.doi.org/10.1135/cccc19960512.

Abstract:
Some problems are analyzed that arise when a numerical simulation of the random motion of a large ensemble of diffusing particles is used to approximate the solution of a one-dimensional diffusion equation. The particle motion is described by means of a stochastic differential equation. The problems emerging especially when the diffusion coefficient is a function of the spatial coordinate are discussed. The possibility of simulating various kinds of stochastic integral is demonstrated. It is shown that the application of standard numerical procedures commonly adopted for ordinary differential equations may lead to erroneous results when used for the solution of stochastic differential equations. The general conclusions are verified by numerical solution of three stochastic differential equations with different forms of the diffusion coefficient.
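
The pitfall this abstract highlights, namely that a space-dependent diffusion coefficient makes the choice of stochastic integral matter, can be seen in a few lines. The sketch below (an illustrative D(x), not one of the paper's three test equations) integrates dX = √(2D(X)) dW under the Itô rule and under the Stratonovich reading rewritten in Itô form, which adds the drift (1/2) dD/dx.

```python
import numpy as np

# Particle simulation of dX = sqrt(2 D(X)) dW with space-dependent diffusion.
# The Ito and Stratonovich readings of the same equation differ by a spurious
# drift, so the discretization convention changes the answer.
rng = np.random.default_rng(3)

D  = lambda x: 1.0 + 0.5 * np.sin(x)     # illustrative diffusion coefficient
dD = lambda x: 0.5 * np.cos(x)           # its derivative

n, steps, dt = 10_000, 2_000, 1e-3
x_ito = np.zeros(n)
x_str = np.zeros(n)
for _ in range(steps):
    dW = rng.standard_normal(n) * np.sqrt(dt)
    # Ito (pre-point) rule:
    x_ito += np.sqrt(2 * D(x_ito)) * dW
    # Stratonovich reading, rewritten as an Ito SDE with drift (1/2) dD/dx:
    x_str += 0.5 * dD(x_str) * dt + np.sqrt(2 * D(x_str)) * dW

print("Ito mean:", x_ito.mean(), " Stratonovich mean:", x_str.mean())
```
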
14

Morzfeld, M., and A. J. Chorin. "Implicit particle filtering for models with partial noise, and an application to geomagnetic data assimilation." Nonlinear Processes in Geophysics 19, no. 3 (June 19, 2012): 365–82. http://dx.doi.org/10.5194/npg-19-365-2012.

Abstract:
Implicit particle filtering is a sequential Monte Carlo method for data assimilation, designed to keep the number of particles manageable by focussing attention on regions of large probability. These regions are found by minimizing, for each particle, a scalar function F of the state variables. Some previous implementations of the implicit filter rely on finding the Hessians of these functions. The calculation of the Hessians can be cumbersome if the state dimension is large or if the underlying physics are such that derivatives of F are difficult to calculate, as happens in many geophysical applications, in particular in models with partial noise, i.e. with a singular state covariance matrix. Examples of models with partial noise include models where uncertain dynamic equations are supplemented by conservation laws with zero uncertainty, or with higher order (in time) stochastic partial differential equations (PDE) or with PDEs driven by spatially smooth noise processes. We make the implicit particle filter applicable to such situations by combining gradient descent minimization with random maps and show that the filter is efficient, accurate and reliable because it operates in a subspace of the state space. As an example, we consider a system of nonlinear stochastic PDEs that is of importance in geomagnetic data assimilation.
15

Dong, Zhipeng, Yucheng Liu, Jianshe Kang, and Shaohui Zhang. "A Stochastic Learning Algorithm for Machine Fault Diagnosis." Shock and Vibration 2022 (February 18, 2022): 1–9. http://dx.doi.org/10.1155/2022/5790185.

Abstract:
Industrial big data bring a large number of high-dimensional sample datasets. Although a deep learning network can well mine the internal nonlinear structure of the dataset, the construction of the deep learning model requires a lot of computing time and hardware facilities. At the same time, there are some nonlinear problems such as noise and fluctuation in industrial data, which make the deep architecture extremely complex and the recognition accuracy of the diagnosis model difficult to guarantee. To solve this problem, a new method, named stochastic learning algorithm (SL), is proposed in this paper for dimension reduction. The proposed method consists of three steps: firstly, to increase the computational efficiency of the model, the dimension of the high-dimensional data is reduced by establishing a random matrix; secondly, for enhancing the clustering influence of the sample, the input data are enhanced by feature processing; thirdly, to make the clustering effects more pronounced, the noise and interference of the data need to be processed, and the singular-value denoising method is used to denoise training data and test data. To further prove the superiority of the SL method, we conducted two sets of experiments on the wind turbine gearbox and the benchmark dataset. It can be seen from the experimental results that the SL method not only improves the classification accuracy but also reduces the computational burden.
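
The first step of the SL method, reducing dimension through a random matrix, follows the Johnson–Lindenstrauss idea. A generic sketch of that step (not the authors' exact construction; sizes are illustrative):

```python
import numpy as np

# Random-projection dimension reduction: project high-dimensional samples
# through a random Gaussian matrix; pairwise distances are roughly preserved.
rng = np.random.default_rng(4)
X = rng.standard_normal((500, 4096))         # 500 samples, 4096 features

k = 128                                       # reduced dimension
R = rng.standard_normal((4096, k)) / np.sqrt(k)
Z = X @ R                                     # reduced data, 500 x 128

i, j = 0, 1
orig = np.linalg.norm(X[i] - X[j])
red  = np.linalg.norm(Z[i] - Z[j])
print(f"distance ratio after projection: {red / orig:.3f}")   # close to 1
```
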
16

Blăjină, Ovidiu, and Ionuţ Gabriel Ghionea. "On Solving Stochastic Optimization Problems." Mathematics 11, no. 21 (October 27, 2023): 4451. http://dx.doi.org/10.3390/math11214451.

Abstract:
Many optimization mathematical models, associated with the technical-economic processes of real-world problems, have elements of uncertainty in their structure, which places them in stochastic optimization programming. Their diversity and complexity, due to the large uncertainty space, require special methods of solving, because there is no general solution method. Within this context, in this paper we consider the category of optimization models that can contain random variable type coefficients and/or imposed probability levels on the constraints. The purpose of the paper is to propose a methodology dedicated to these studied models. Applying the methodology leads to developing a deterministic linear programming model, associated with the initial stochastic model. In fact, the proposed methodology reduces the stochastic formulation to a deterministic formulation. The methodology is illustrated with a numerical case study based on a manufacturing problem. Solving the obtained deterministic model is carried out in the version assisted by a specialized software product (WinQSB Version 2.0). It allows for the performing of a sensitivity analysis of the optimal solution, and/or a parametric analysis relative to certain model coefficients, both also presented in the paper. The main result of the study in this paper is the proposed methodology, which is applicable on a large scale, for any mathematical model of stochastic optimization of the mentioned type, regardless of complexity, dimensions and the domain of the process to which it is associated. The numerical results obtained when applying this methodology indicate its efficiency and effectiveness in finding the solution for the studied models. The approach to this issue in the present paper is determined by the wide range of stochastic optimization problems in the various studied real-life processes and by the imperative need to adopt the best decisions in conditions of uncertainty.
17

Ullmann, Sebastian, Christopher Müller, and Jens Lang. "Stochastic Galerkin Reduced Basis Methods for Parametrized Linear Convection–Diffusion–Reaction Equations." Fluids 6, no. 8 (July 22, 2021): 263. http://dx.doi.org/10.3390/fluids6080263.

Abstract:
We consider the estimation of parameter-dependent statistics of functional outputs of steady-state convection–diffusion–reaction equations with parametrized random and deterministic inputs in the framework of linear elliptic partial differential equations. For a given value of the deterministic parameter, a stochastic Galerkin finite element (SGFE) method can estimate the statistical moments of interest of a linear output at the cost of solving a single, large, block-structured linear system of equations. We propose a stochastic Galerkin reduced basis (SGRB) method as a means to lower the computational burden when statistical outputs are required for a large number of deterministic parameter queries. Our working assumption is that we have access to the computational resources necessary to set up such a reduced-order model for a spatial-stochastic weak formulation of the parameter-dependent model equations. In this scenario, the complexity of evaluating the SGRB model for a new value of the deterministic parameter only depends on the reduced dimension. To derive an SGRB model, we project the spatial-stochastic weak solution of a parameter-dependent SGFE model onto a reduced basis generated by a proper orthogonal decomposition (POD) of snapshots of SGFE solutions at representative values of the parameter. We propose residual-corrected estimates of the parameter-dependent expectation and variance of linear functional outputs and provide respective computable error bounds. We test the SGRB method numerically for a convection–diffusion–reaction problem, choosing the convective velocity as a deterministic parameter and the parametrized reactivity or diffusivity field as a random input. Compared to a standard reduced basis model embedded in a Monte Carlo sampling procedure, the SGRB model requires a similar number of reduced basis functions to meet a given tolerance requirement. However, only a single run of the SGRB model suffices to estimate a statistical output for a new deterministic parameter value, while the standard reduced basis model must be solved for each Monte Carlo sample.
18

Lin, Chang-Sheng, and Yi-Xiu Wu. "Response-Only Parametric Estimation of Structural Systems Using a Modified Stochastic Subspace Identification Technique." Applied Sciences 11, no. 24 (December 10, 2021): 11751. http://dx.doi.org/10.3390/app112411751.

Abstract:
The present paper is a study of output-only modal estimation based on the stochastic subspace identification (SSI) technique, which avoids the restrictions of well-controlled laboratory conditions required for experimental modal analysis, and aims to develop appropriate algorithms for ambient modal estimation. The conventional SSI technique, in both its covariance-driven and data-driven variants, is employed for parametric identification of a system subjected to stationary white excitation. By introducing the procedure of solving the system matrix in SSI-COV in conjunction with SSI-DATA, the SSI technique can be performed efficiently without using the original large-dimension data matrix, through the singular value decomposition of the improved projection matrix. In addition, the computational efficiency of the SSI technique is also improved by extracting two predictive-state matrices with a recursive relationship from the same original predictive-state matrix, and then omitting the step of reevaluating the predictive-state matrix at the next time instant. Numerical simulations and experimental verification illustrate and confirm that the present method can accurately implement modal estimation from stationary response data only.
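
A minimal covariance-driven SSI sketch on surrogate single-channel data may help fix ideas: output covariances are stacked into a block Hankel matrix, an SVD yields the observability matrix, and the shift-invariance of observability gives the system matrix and modal frequencies. This is generic textbook SSI-COV, not the paper's modified recursive variant.

```python
import numpy as np

rng = np.random.default_rng(5)
dt, n_samp = 0.01, 20_000
# Surrogate response: damped 5 Hz oscillator driven by white noise,
# integrated with semi-implicit Euler (velocity updated first, so stable).
w, zeta = 2 * np.pi * 5.0, 0.02
y = np.zeros(n_samp); v = 0.0
for k in range(1, n_samp):
    a = -2 * zeta * w * v - w**2 * y[k - 1] + rng.standard_normal()
    v += a * dt
    y[k] = y[k - 1] + v * dt

def cov(lag):   # output covariance at a given positive lag
    return np.dot(y[lag:], y[:n_samp - lag]) / (n_samp - lag)

nrows = 20                                  # number of block rows
H = np.array([[cov(r + q + 1) for q in range(nrows)] for r in range(nrows)])
U, s, Vt = np.linalg.svd(H)
order = 2                                   # model order (one mode)
O = U[:, :order] * np.sqrt(s[:order])       # observability matrix
A = np.linalg.pinv(O[:-1]) @ O[1:]          # shift-invariance estimate of A
lam = np.linalg.eigvals(A)
freq = np.abs(np.log(lam.astype(complex))) / (2 * np.pi * dt)
print("identified frequencies [Hz]:", np.round(freq, 2))   # ~5 Hz pair
```
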
19

De Meulenaere, Roeland, Diederik Coppitters, Ale Sikkema, Tim Maertens, and Julien Blondeau. "Uncertainty Quantification for Thermodynamic Simulations with High-Dimensional Input Spaces Using Sparse Polynomial Chaos Expansion: Retrofit of a Large Thermal Power Plant." Applied Sciences 13, no. 19 (September 27, 2023): 10751. http://dx.doi.org/10.3390/app131910751.

Abstract:
The assessment of the future thermodynamic performance of a retrofitted heat and power production unit is prone to many uncertainties due to the large number of parameters involved in the modeling of all its components. To carry out uncertainty quantification analysis, alternatives to the traditional Monte Carlo method must be used due to the large stochastic dimension of the problem. In this paper, sparse polynomial chaos expansion (SPCE) is applied to the retrofit of a large coal-fired power plant into a biomass-fired combined heat and power unit to quantify the main drivers of, and the overall uncertainty on, the plant's performance. The thermodynamic model encompasses over 180 components and 1500 parameters. A methodology combining the use of SPCE and expert judgment is proposed to narrow down the sources of uncertainty and deliver reliable probability distributions for the main key performance indicators (KPIs). The impact of the uncertainty on each input parameter varies with the considered KPI and is assessed through the computation of Sobol' indices. For both coal and biomass operations, the most impactful input parameters are the composition of the fuel and its heating value. The uncertainty on the performance and steam quality parameters is not much affected by the retrofit. Key furnace parameters exhibit a skewed probability distribution with large uncertainties, which is a strong attention point in terms of boiler operation and maintenance.
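
The Sobol' indices mentioned above come almost for free from a (sparse) polynomial chaos expansion. As a standard background identity (not the paper's specific model), with orthonormal basis polynomials Ψ_α:

```latex
Y \approx \sum_{\alpha \in \mathcal{A}} c_\alpha \Psi_\alpha(\xi), \qquad
\operatorname{Var}(Y) = \sum_{\alpha \neq 0} c_\alpha^2, \qquad
S_i = \frac{\sum_{\alpha \in \mathcal{A}_i} c_\alpha^2}{\sum_{\alpha \neq 0} c_\alpha^2},
```

where \mathcal{A}_i is the set of multi-indices whose only nonzero entry is in position i; higher-order and total Sobol' indices follow from analogous groupings of the coefficients.
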
20

Yihe Lü, Bojie Fu, Liding Chen, Guohua Liu, and Wei Wei. "Nutrient transport associated with water erosion: progress and prospect." Progress in Physical Geography: Earth and Environment 31, no. 6 (December 2007): 607–20. http://dx.doi.org/10.1177/0309133307087085.

Abstract:
Along with the major water erosion process, large amounts of soil nutrient can be transported to water bodies that may degrade the erosion sites and result in water pollution problems in the nutrient-receiving water bodies. Because of the far-reaching ecological and environmental impacts of the nutrient transport processes during water erosion, much research has been done on this topic. The progress of this research topic is reviewed from the perspective of nutrient sources, covering triggers, carriers and forms of nutrient transport, and the factors regulating nutrient transport. Although much progress has been made, challenges still lie ahead. A multidisciplinary and integrative approach, therefore, needs to be taken to resolve the theoretical, methodological and practical problems in water erosion and nutrient transport research. More specifically, the interactions between landscape pattern and nutrient transport processes and spatiotemporal scale and scaling are the two central themes of consideration in methodological integration. It is also important to respect the dynamic and stochastic nature of nutrient transport which is closely related to rainfall-runoff events. The human dimension, formerly a relatively weak point in research, needs to be strengthened to gain understanding and a more informed practical control over soil and nutrient loss and conservation.
21

Brown, Mark, and Joel E. Cohen. "SQUARED COEFFICIENT OF VARIATION OF TAYLOR'S LAW FOR RANDOM ABSOLUTE DIFFERENCES." Probability in the Engineering and Informational Sciences 32, no. 4 (July 19, 2017): 483–94. http://dx.doi.org/10.1017/s0269964817000304.

Abstract:
In a family, parameterized by θ, of non-negative random variables with finite, positive second moment, Taylor's law (TL) asserts that the population variance is proportional to a power of the population mean as θ varies: σ²(θ) = a[μ(θ)]^b, a > 0. TL, sometimes called fluctuation scaling, holds widely in science, probability theory, and stochastic processes. Here we report diverse examples of TL with b = 2 (equivalent to a constant coefficient of variation) arising from a difference of random variables in normed vector spaces of dimension 1 and larger. In these examples, we compute a exactly using, in some cases, a simple, new technique. These examples may prove useful in future models that involve differences of random variables, including models of the spatial distribution and migration of human populations.
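
The b = 2 case is easy to verify numerically for a simple scale family of absolute differences: for X_θ = |θU − θV| the coefficient of variation does not depend on θ, so the variance is proportional to the squared mean. A quick check (a generic illustration, not the paper's exact computation of the constant a):

```python
import numpy as np

# For X = |theta*U - theta*V| with U, V ~ Uniform(0,1), the ratio
# Var(X) / Mean(X)^2 is constant in theta, i.e. Taylor's law with b = 2.
rng = np.random.default_rng(6)
for theta in (0.5, 1.0, 2.0, 4.0):
    u, v = rng.random(1_000_000), rng.random(1_000_000)
    x = np.abs(theta * u - theta * v)
    print(f"theta={theta:>3}: mean={x.mean():.4f}, "
          f"var/mean^2={x.var() / x.mean()**2:.4f}")   # ~0.5 for all theta
```
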
22

Comeau, Jules, and Eldon Gunn. "A neuro-dynamic programming approach to the optimal stand management problem." Canadian Journal of Forest Research 47, no. 6 (June 2017): 808–16. http://dx.doi.org/10.1139/cjfr-2016-0302.

Abstract:
Some ideas of neuro-dynamic programming (NDP) are illustrated by considering the problem of optimally managing a forest stand under uncertainty. Because reasonable growth models require state information such as height (or age), basal area, and stand diameter, as well as an indicator variable for treatments that have been performed on the stand, they can easily lead to very large state spaces that include continuous variables. Realistic stand management policies include silvicultural options such as pre-commercial and commercial thinning as well as post-harvest treatments. We are interested in problems that are stochastic in their basic growth dynamics, in market prices, and in disturbances, ranging from insects to fire to hurricanes. NDP algorithms are appropriate for problems with large dimensions that may lack a simple model of dynamics and stochastic processes. This paper looks at applying these ideas in the context of a multispecies model. Results show that policies obtained using NDP are optimal within a 95% confidence interval or better. The set of states and controls incorporated into our NDP model allows us to develop optimal policies with a level of detail not typically seen in the forestry literature.
23

Gavrilyuk, Ivan, and Boris N. Khoromskij. "Tensor Numerical Methods: Actual Theory and Recent Applications." Computational Methods in Applied Mathematics 19, no. 1 (January 1, 2019): 1–4. http://dx.doi.org/10.1515/cmam-2018-0014.

Abstract:
The most important computational problems nowadays are those related to the processing of large data sets and to the numerical solution of high-dimensional integro-differential equations. These problems arise in numerical modeling in quantum chemistry, material science, and multiparticle dynamics, as well as in machine learning, computer simulation of stochastic processes and many other applications related to big data analysis. Modern tensor numerical methods enable the solution of multidimensional partial differential equations (PDEs) in ℝ^d by reducing them to one-dimensional calculations. Thus, they make it possible to avoid the so-called "curse of dimensionality", i.e. the exponential growth of the computational complexity with the dimension d, in the course of the numerical solution of high-dimensional problems. At present, both tensor numerical methods and the multilinear algebra of big data continue to expand actively into further theoretical and applied research topics. This issue of CMAM is devoted to the recent developments in the theory of tensor numerical methods and their applications in scientific computing and data analysis. Current activities in this emerging field on the effective numerical modeling of temporal and stationary multidimensional PDEs and beyond are presented in the following ten articles, and some future trends are highlighted therein.
24

Ghaznavi, Ziam, Nicholas Butcher, Dragan Djurdjanovic, and S. V. Sreenivasan. "Roll-to-roll reactive ion etching of large-area nanostructure arrays in Si: Process development, characterization, and optimization." Journal of Vacuum Science & Technology B 41, no. 2 (March 2023): 022802. http://dx.doi.org/10.1116/6.0002261.

Abstract:
Roll-to-roll (R2R) nanofabrication processes are recognized as key enabling technologies for many next-generation applications in flexible electronics, displays, energy generation, storage, as well as healthcare. However, R2R processing techniques reported in the literature currently lack a scalable method of performing high-throughput nanoscale pattern transfer of geometry requiring a high degree of fidelity in terms of critical dimension resolution, etch uniformity, and aspect ratio. Reactive ion etching (RIE) addresses the need for sub-10 nm pattern transfer with large-area uniformity in wafer-scale semiconductor manufacturing, but adapting plasma etch systems for use in R2R nanopatterning has proven to be nontrivial. Moreover, robust models for simulating R2R RIE do not exist, which is an obstacle to the creation of computational approaches to design, control, and scale-up of nanoscale R2R equipment and processes. To address these challenges, we demonstrate a process flow for fabricating Si nanopillar arrays utilizing a combination of nanoimprint lithography and RIE with all pattern transfer steps performed using a R2R plasma reactor system. Specifically discussed are process development details for etching imprint resist and Si including etch rates, cross-web etch uniformity, etch directionality, and etch selectivity at varying gas chemistries, powers, and pressures. 2^k full-factorial Design of Experiments (DoEs) and ordinary least-squares regression analysis are also employed to study influence of process parameters on multiple outgoing etch quality characteristics and generate stochastic models of the R2R RIE pattern transfer process into Si. Utilizing these DOE-based models and desired targets for etch quality characteristics, we describe a bounded multivariate inverse-optimization scheme for automated etch process parameter tuning. The culmination of these efforts, to the best of the authors' knowledge, is the first reported RIE-based pattern transfer of 100 nm-scale features performed in continuous R2R fashion with control of feature geometry over large area. The methodology employed herein may be applied similarly to additional materials and geometries for future applications.
25

Artyushenko, Mikhail, and Anna Khizhnyak. "ENTROPY METHODS OF SELF-ORGANIZING SYSTEMS MODELING USING HYPERSPECTRAL REMOTE SENSING DATA." Journal of Automation and Information sciences 2 (March 1, 2021): 67–78. http://dx.doi.org/10.34229/1028-0979-2021-2-6.

Abstract:
Various mathematical models are created for exploring complex self-organizing systems. In geosystems, the deterministic nature of processes is due to their stochastic properties. In such systems, regular deterministic processes are formed by numerous random interelement interactions that occur at the micro level. In many cases, it is not possible to correctly define the deterministic law of evolution of the observed system or its part because of the large number of unpredictable and unknown factors that influence it. However, at the micro level, statistical distributions of system elements are available for observation, which makes it possible to predict the system's behavior and evaluate the factors influencing it. The most universal methods for modeling systems with stochastic properties are based on the fundamental concepts of statistical mechanics: the Gibbs-Shannon and Renyi information entropies. The article studies entropy methods for calculating quantitative estimates of the states of spatially distributed geosystems and of their divergences in the process of self-organization: alpha-divergence, Kullback divergence, and variability of the spectrum of Renyi dimensions. Features of such systems with multifractal structures are observed via hyperspectral measurements. The examples illustrate the use of entropy models in numerical experiments with real data obtained from a natural gas field. Verification of the entropy methods for determining the boundaries of hydrocarbon deposits was carried out on hyperspectral emission data for a homogeneous vegetation cover.
26

López-Fernández, Estíbaliz, and José López. "The Impact of Tumor Eco-Evolution in Renal Cell Carcinoma Sampling." Cancers 10, no. 12 (December 4, 2018): 485. http://dx.doi.org/10.3390/cancers10120485.

Abstract:
Malignant tumors behave dynamically as cell communities governed by ecological principles. Massive sequencing tools are unveiling the true dimension of the heterogeneity of these communities along their evolution in most human neoplasms, clear cell renal cell carcinomas (CCRCC) included. Although tumor evolution was initially thought to be a purely stochastic process, very recent genomic analyses have shown that temporal tumor evolution in CCRCC may follow some deterministic pathways that give rise to different clones and sub-clones randomly spatially distributed across the tumor. This fact makes each case unique, unrepeatable and unpredictable. Precise and complete molecular information is crucial for patients with cancer since it may help in establishing a personalized therapy. Intratumor heterogeneity (ITH) detection relies on the correctness of tumor sampling and this is part of the pathologist's daily work. International protocols for tumor sampling are insufficient today. They were conceived decades ago, when ITH was not an issue, and have remained unchanged until now. Noteworthy, an alternative and more efficient sampling method for detecting ITH has been developed recently. This new method, called multisite tumor sampling (MSTS), is specifically addressed to large tumors that cannot be sampled in full, and represents an opportunity to improve ITH detection without extra costs.
27

Rato, Tiago J., Pedro Delgado, Cristina Martins, and Marco S. Reis. "First Principles Statistical Process Monitoring of High-Dimensional Industrial Microelectronics Assembly Processes." Processes 8, no. 11 (November 23, 2020): 1520. http://dx.doi.org/10.3390/pr8111520.

Abstract:
Modern industrial units collect large amounts of process data based on which advanced process monitoring algorithms continuously assess the status of operations. As an integral part of the development of such algorithms, a reference dataset representative of normal operating conditions is required to evaluate the stability of the process and, after confirming that it is stable, to calibrate a monitoring procedure, i.e., estimate the reference model and set the control limits for the monitoring statistics. The basic assumption is that all relevant “common causes” of variation appear well represented in this reference dataset (using the terminology adopted by the founding father of process monitoring, Walter A. Shewhart). Otherwise, false alarms will inevitably occur during the implementation of the monitoring scheme. However, we argue and demonstrate in this article, that this assumption is often not met in modern industrial systems. Therefore, we introduce a new approach based on the rigorous mechanistic modeling of the dominant modes of common cause variation and the use of stochastic computational simulations to enrich the historical dataset with augmented data representing a comprehensive coverage of the actual operational space. We show how to compute the monitoring statistics and set their control limits, as well as to conduct fault diagnosis when an abnormal event is declared. The proposed method, called AGV (Artificial Generation of common cause Variability) is applied to a Surface Mount Technology (SMT) production line of Bosch Car Multimedia, where more than 17 thousand product variables are simultaneously monitored.
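
For orientation, calibrating a monitoring statistic and its control limit from a reference dataset typically looks like the Hotelling T² sketch below. This is generic multivariate statistical process control with an illustrative fault, not the AGV method or the Bosch line data.

```python
import numpy as np
from scipy.stats import f as f_dist

# Calibrate a Hotelling T^2 statistic on in-control reference data, then
# check a new observation against the F-distribution-based control limit.
rng = np.random.default_rng(7)
p, n = 10, 500
ref = rng.standard_normal((n, p))             # reference "common cause" data

mu = ref.mean(axis=0)
S_inv = np.linalg.inv(np.cov(ref, rowvar=False))

def t2(x):
    d = x - mu
    return float(d @ S_inv @ d)

# Phase II limit for a future observation at significance alpha.
alpha = 0.01
lim = p * (n + 1) * (n - 1) / (n * (n - p)) * f_dist.ppf(1 - alpha, p, n - p)

x_new = rng.standard_normal(p) + np.array([3.0] + [0.0] * (p - 1))  # shifted
print(f"T2={t2(x_new):.1f}, limit={lim:.1f}, alarm={t2(x_new) > lim}")
```
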
28

MIŁOŚ, PIOTR. "OCCUPATION TIMES OF SUBCRITICAL BRANCHING IMMIGRATION SYSTEMS WITH MARKOV MOTION, CLT AND DEVIATION PRINCIPLES." Infinite Dimensional Analysis, Quantum Probability and Related Topics 15, no. 01 (March 2012): 1250002. http://dx.doi.org/10.1142/s0219025712500026.

Abstract:
In this paper we consider two related stochastic models. The first one is a branching system consisting of particles moving according to a Markov family in ℝ^d and undergoing subcritical branching at a constant rate V > 0. New particles immigrate to the system according to a homogeneous space-time Poisson random field. The second model is the superprocess corresponding to the branching particle system. We study the rescaled occupation time process and the process of its fluctuations under mild assumptions on the Markov family. In the general setting a functional central limit theorem as well as large and moderate deviation principles are proved. The subcriticality of the branching law determines the behaviour on large time scales and it "overwhelms" the properties of the particles' motion. For this reason the results are the same for all dimensions and can be obtained for a wide class of Markov processes (both properties are unusual for systems with critical branching).
29

Klein, Marten, and Heiko Schmidt. "Exploring stratification effects in stable Ekman boundary layers using a stochastic one-dimensional turbulence model." Advances in Science and Research 19 (October 25, 2022): 117–36. http://dx.doi.org/10.5194/asr-19-117-2022.

Abstract:
Small-scale processes in atmospheric boundary layers are typically not resolved due to cost constraints but modeled based on physical relations with the resolved scales, neglecting expensive backscatter. This lack in modeling is addressed in the present study with the aid of the one-dimensional turbulence (ODT) model. ODT is applied as stand-alone column model to numerically investigate stratification effects in long-lived transient Ekman flows as canonical example of polar boundary layers by resolving turbulent winds and fluctuating temperature profiles on all relevant scales of the flow. We first calibrate the adjustable model parameters for neutral cases based on the surface drag law which yields slightly different optimal model set-ups for finite low and moderate Reynolds numbers. For the stably stratified cases, previously calibrated parameters are kept fixed and the model predictions are compared with various reference numerical simulations and also observations by an exploitation of boundary layer similarity. ODT reasonably captures the temporally developing flow for various prescribed stratification profiles, but fails to fully capture the near-surface laminarization by remaining longer in a fully developed turbulent state, which suggests preferential applicability to high-Reynolds-number flow regimes. Nevertheless, the model suggests that large near-surface turbulence scales are primarily affected by the developing stratification due to scale-selective buoyancy damping which agrees with the literature. The variability of the wind-turning angle represented by the ensemble of stratified cases simulated covers a wider range than reference reanalysis data. The present study suggests that the vertical-column ODT formulation that is highly resolved in space and time can help to accurately represent multi-physics boundary-layer and subgrid-scale processes, offering new opportunities for analysis of very stable polar boundary layer and atmospheric chemistry applications.
30

Bobkov, Vladimir I., Olga V. Bulygina, and Elizaveta K. Vereikina. "Using population algorithms to optimize the temperature regime of roasting phosphorite pellets." Journal Of Applied Informatics 17, no. 6 (December 26, 2022): 94–101. http://dx.doi.org/10.37791/2687-0649-2022-17-6-94-101.

Abstract:
The problem of rational energy resource use is especially acute for energy-intensive industries, which include the high-temperature processing of mining chemical raw materials (for example, the production of phosphorite pellets from apatite-nepheline ore waste by drying and roasting). In this regard, the temperature modes of the roasting conveyor machine should ensure not only the completion of the ongoing chemical-technological processes and the required product quality, but also energy and resource saving. Thus, there is an urgent scientific and practical task of optimizing charge heating modes based on the results of modeling the heat and mass transfer processes occurring in the various zones of the roasting conveyor machine. The impossibility of carrying out expensive full-scale experiments leads to the need to use computer simulation methods. Nonlinearity, the large dimension of the search space, and high computational complexity make it difficult to use traditional deterministic search methods. Under these conditions, stochastic methods, which deliberately introduce an element of randomness into the search algorithm, show good results. Today, population algorithms based on modeling the collective behavior of living organisms and characterized by the ability to process several candidate solutions simultaneously have become widespread. To solve the optimization problem, it is proposed to use a modified cuckoo search algorithm (extended with fuzzy elements), which provides a comprehensive account of the huge number of parameters set for each vacuum chamber of the roasting conveyor machine. Control of the chemical-energy-technological system for processing apatite-nepheline ore waste, taking into account the obtained data and based on the existing neural network model of the high-temperature process, will make it possible to minimize the amount of return and provide energy-saving operating conditions for the roasting units.
31

Gao, Tianyun, Heiko Schmidt, Marten Klein, Jianhan Liang, Mingbo Sun, Chongpei Chen, and Qingdi Guan. "One-dimensional turbulence modeling of compressible flows: II. Full compressible modification and application to shock–turbulence interaction." Physics of Fluids 35, no. 3 (March 2023): 035116. http://dx.doi.org/10.1063/5.0137435.

Abstract:
One-dimensional turbulence (ODT) is a simulation methodology that represents the essential physics of three-dimensional turbulence through stochastic resolution of the full range of length and time scales on a one-dimensional domain. In the present study, full compressible modifications are incorporated into ODT methodology, based on an Eulerian framework and a conservative form of the governing equations. In the deterministic part of this approach, a shock capturing scheme is introduced for the first time. In the stochastic part, one-dimensional eddy events are modeled and sampled according to standard methods for compressible flow simulation. Time advancement adjustments are made to balance comparable time steps between the deterministic and stochastic parts in compressible flows. Canonical shock–turbulence interaction cases involving Richtmyer–Meshkov instability at Mach numbers 1.24, 1.5, and 1.98 are simulated to validate the extended model. The ODT results are compared with available reference data from large eddy simulations and laboratory experiments. The introduction of a shock capturing scheme significantly improves the performance of the ODT method, and the results for turbulent kinetic energy are qualitatively improved compared with those of a previous compressible Lagrangian ODT method [Jozefik et al., “Simulation of shock–turbulence interaction in non-reactive flow and in turbulent deflagration and detonation regimes using one-dimensional turbulence,” Combust. Flame 164, 53 (2016)]. For the time evolution of profiles of the turbulent mixing zone width, ensemble-averaged density, and specific heat ratio, the new model also yields good to reasonable results. Furthermore, it is found that the viscous penalty parameter Z of the ODT model is insensitive to compressibility effects in turbulent flows without wall effects. A small value of Z is appropriate for turbulent flows with weak wall effects, and the parameter Z serves to suppress extremely small eddy events that would be dissipated instantly by viscosity.
APA, Harvard, Vancouver, ISO, and other styles
32

Marelli, Enrico Piero, Maria Laura Parisi, and Marcello Signorelli. "Economic convergence in the EU and Eurozone." Journal of Economic Studies 46, no. 7 (November 11, 2019): 1332–44. http://dx.doi.org/10.1108/jes-03-2019-0139.

Full text of the source
Abstract:
Purpose The purpose of this paper is to analyse whether several groups of European countries are on track for real "conditional" economic convergence in per capita income, and to determine the likely speed of convergence. The paper also focusses on the changes of the convergence processes over time. Design/methodology/approach Unlike simple "absolute convergence", it explores the concept of "conditional" or "club" convergence. Moreover, it adopts the approach of extending the univariate model to take into account the panel dimension over an extended time interval, as well as endogeneity. Findings A process of real economic convergence has characterised the period under investigation (1995–2016), but, in general, the size and significance of the parameter is greater for the wider European Union (EU) area (EU25 and above) than for the Eurozone (EZ). However, the crises that occurred after 2008 caused most of this lower convergence in the Euro area. Research limitations/implications This paper gives an estimate of the speed/time needed by several groups of European countries (the EZ in particular) to achieve real economic convergence. Future research could further develop the "stochastic" convergence concept. Originality/value This is an analysis of convergence in the enlarging EU and EZ over an extended period (including the big crisis period and the subsequent recovery). It shows that the EZ experienced a drop in the speed of real convergence after 2008 and converges at a lower speed than the EU. As a consequence, a specific budget for the EZ would be important to provide adjustment mechanisms after potentially large shocks.
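For orientation, the speed-of-convergence figures such studies report are usually read off the standard β-convergence panel regression; in a textbook formulation (not quoted from the paper):

```latex
\Delta \ln y_{i,t} \;=\; \alpha_i + \beta \ln y_{i,t-1} + \gamma' X_{i,t} + \varepsilon_{i,t},
\qquad \beta<0 \;\Rightarrow\; \lambda = -\ln(1+\beta), \qquad t_{1/2} = \frac{\ln 2}{\lambda},
```

where X_{i,t} collects the conditioning ("club") variables, λ is the implied annual speed of convergence, and t_{1/2} the half-life of the income gap.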
APA, Harvard, Vancouver, ISO, and other styles
33

Younis, Haseeb, Paul Trust, and Rosane Minghim. "Understanding High Dimensional Spaces through Visual Means Employing Multidimensional Projections." International Journal on Engineering, Science and Technology 4, no. 2 (December 11, 2022): 170–79. http://dx.doi.org/10.46328/ijonest.91.

Full text of the source
Abstract:
Data visualisation helps in understanding data represented by multiple variables, also called features, stored in a large matrix where individuals are stored in lines and variable values in columns. These data structures are frequently called multidimensional spaces. A large set of mathematical tools, frequently named multidimensional projections, aim to map such large spaces into 'visual spaces', that is, to 2 or 3 dimensions, where the aspect of that space can be visualised. While the final product is intuitive in that proximity between points - or iconic representations of points - indicates similarity relationships in the original space, the formulation of the projection methods many times escapes researchers. In this paper, we illustrate ways of employing the visual results of multidimensional projection algorithms to understand and fine-tune the parameters of their mathematical framework. Some of the mathematical tools common to these approaches are Laplacian matrices, Euclidean distance, cosine distance, and statistical methods such as the Kullback-Leibler divergence, employed to fit probability distributions and reduce dimensions. Two of the relevant algorithms in the data visualisation field are t-distributed stochastic neighbor embedding (t-SNE) and Least-Square Projection (LSP). These algorithms can be used to understand several ranges of mathematical functions, including their impact on datasets. In this article, mathematical parameters of underlying techniques, such as Principal Component Analysis (PCA) behind t-SNE and mesh reconstruction methods behind LSP, are adjusted to reflect the properties afforded by the mathematical formulation. The results, supported by illustrations of the processes of LSP and t-SNE, are meant to inspire students in understanding the mathematics behind such methods, in order to apply them in effective data analysis tasks in multiple applications.
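A minimal scikit-learn sketch of this kind of parameter exploration, varying the t-SNE perplexity after a PCA pre-reduction, is shown below; the dataset and parameter values are illustrative stand-ins, not taken from the paper.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

X, y = load_digits(return_X_y=True)      # 64-dimensional feature space

# PCA pre-reduction, as commonly done before t-SNE, then embedding to 2-D
X_reduced = PCA(n_components=50).fit_transform(X)
for perplexity in (5, 30, 50):           # the key t-SNE shape parameter
    emb = TSNE(n_components=2, perplexity=perplexity,
               init="pca", random_state=0).fit_transform(X_reduced)
    print(perplexity, emb.shape)         # each embedding would then be plotted
```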
APA, Harvard, Vancouver, ISO, and other styles
34

Gupta, Sangita Dutta, Ajitava Raychaudhuri, and Sushil Kumar Haldar. "Information technology and profitability: evidence from Indian banking sector." International Journal of Emerging Markets 13, no. 5 (November 29, 2018): 1070–87. http://dx.doi.org/10.1108/ijoem-06-2017-0211.

Full text of the source
Abstract:
Purpose Information Technology has transformed the banking sector with respect to various systems and processes. Banks have adopted various measures to quicken their business activity and also to save cost and time. That is why there has been a large requirement for IT in the banking sector. The question arises whether this investment is enhancing the profitability of the banks or not. The purpose of this paper is to examine the presence of a profitability paradox in the Indian banking sector. Design/methodology/approach Data are collected from ten nationalized banks and three private sector banks from 2006 to 2013. The impact of IT expenditure on return on assets and profit efficiency is examined. Profit efficiency is determined using Stochastic Frontier Analysis. Data are collected from the annual reports of the banks. Data on IT expenditure are collected through the Right to Information Act 2005. Correlation and panel regression are used to investigate the relationship between IT expenditure and ROE or profit efficiency. Findings The findings of the paper confirm the presence of a profitability paradox in the Indian banking sector. Research limitations/implications Extending this study to other developing countries of the world will help to identify whether any common pattern exists among developing countries as far as the productivity or profitability paradox is concerned. Originality/value There are some studies on the impact of IT on the banking sector in the USA and Europe. This type of study, however, is rare in the context of India or, for that matter, other developing countries. Therefore, this paper will add a new dimension to the existing literature and pave the way for future research in this area.
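Since profit efficiency here comes from a stochastic frontier, a compact sketch of the classic Aigner–Lovell–Schmidt half-normal maximum-likelihood estimator may help; this is a generic cross-section version under assumptions of mine, not the authors' exact panel specification.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_loglik(theta, y, X):
    # Frontier: y = X b + v - u, with v ~ N(0, s_v^2) noise and
    # u >= 0 half-normal inefficiency; theta = [b..., ln s_v, ln s_u]
    k = X.shape[1]
    b, s_v, s_u = theta[:k], np.exp(theta[k]), np.exp(theta[k + 1])
    sigma, lam = np.hypot(s_v, s_u), s_u / s_v
    eps = y - X @ b
    ll = (np.log(2.0) - np.log(sigma) + norm.logpdf(eps / sigma)
          + norm.logcdf(-eps * lam / sigma))
    return -ll.sum()

def fit_sfa(y, X):
    """X should include a column of ones for the intercept."""
    theta0 = np.r_[np.linalg.lstsq(X, y, rcond=None)[0], 0.0, 0.0]
    return minimize(neg_loglik, theta0, args=(y, X), method="BFGS").x
```

Bank-level efficiency scores would then follow from decomposing the residual into noise and inefficiency (the Jondrow et al. step), which is omitted here.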
APA, Harvard, Vancouver, ISO, and other styles
35

Schubert, Till, Jan Martin Brockmann, Johannes Korte, and Wolf-Dieter Schuh. "On the Family of Covariance Functions Based on ARMA Models." Engineering Proceedings 5, no. 1 (July 5, 2021): 37. http://dx.doi.org/10.3390/engproc2021005037.

Full text of the source
Abstract:
In time series analyses, covariance modeling is an essential part of stochastic methods such as prediction or filtering. For practical use, general families of covariance functions with large flexibility are necessary to model complex correlation structures such as negative correlations. Thus, families of covariance functions should be as versatile as possible by including a high variety of basis functions. Another drawback of some common covariance models is that they can be parameterized in a way such that they do not allow all parameters to vary. In this work, we elaborate on the affiliation of several established covariance functions such as exponential, Matérn-type, and damped oscillating functions to the general class of covariance functions defined by autoregressive moving average (ARMA) processes. Furthermore, we present advanced limit cases that also belong to this class and enable a higher variability of the shape parameters and, consequently, of the representable covariance functions. For prediction tasks in applications with spatial data, the covariance function must be positive semi-definite in the respective domain. We provide conditions for the shape parameters that need to be fulfilled for positive semi-definiteness of the covariance function in higher input dimensions.
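Covariance functions of this ARMA class are straightforward to generate numerically; in the sketch below (coefficients are illustrative), an AR(2) with complex characteristic roots yields exactly the damped-oscillating, negatively correlated shape mentioned above.

```python
import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess

# AR(2): x_t = 1.6 x_{t-1} - 0.8 x_{t-2} + e_t; complex roots of the
# characteristic polynomial give a damped oscillating covariance function.
ar = np.array([1.0, -1.6, 0.8])   # statsmodels convention: [1, -phi1, -phi2]
ma = np.array([1.0])
proc = ArmaProcess(ar, ma)
print(proc.isstationary)          # True for these coefficients
print(proc.acovf(10))             # autocovariances at lags 0..9, with negative lobes
```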
APA, Harvard, Vancouver, ISO, and other styles
36

Nikolopoulos, S., P. Kapiris, K. Karamanos, and K. Eftaxias. "A unified approach of catastrophic events." Natural Hazards and Earth System Sciences 4, no. 5/6 (October 14, 2004): 615–31. http://dx.doi.org/10.5194/nhess-4-615-2004.

Full text of the source
Abstract:
Abstract. Although there is an accumulated body of theoretical, computational, and numerical work, like catastrophe theory, bifurcation theory, and stochastic and deterministic chaos theory, there is a prevailing feeling that these matters do not completely cover the physics of real catastrophic events. Recent studies have suggested that a large variety of complex processes, including earthquakes, heartbeats, and neuronal dynamics, exhibit statistical similarities. Here we study, in terms of complexity and nonlinear techniques, whether isomorphic signatures emerge indicating the transition from the normal state to both geological and biological shocks. In the last 15 years, the study of complex systems has emerged as a recognized field in its own right, although a good definition of what a complex system is has proved elusive. A basic reason for our interest in complexity is the striking similarity in behaviour close to irreversible phase transitions among systems that are otherwise quite different in nature. It is by now recognized that the pre-seismic electromagnetic time series contain valuable information about the earthquake preparation process, which cannot be extracted without the use of considerable computational power, probably in connection with computer algebra techniques. This paper presents an analysis whose aim is to indicate the approach of the global instability in the pre-focal area. Nonlinear characteristics are studied by applying two techniques, namely Correlation Dimension Estimation and Approximate Entropy. These two nonlinear techniques lead to coherent conclusions, and could cooperate with an independent fractal spectral analysis to provide a detection concerning the emergence of the nucleation phase of the impending catastrophic event. In the context of a similar mathematical background, it would be interesting to extend this description of pre-seismic electromagnetic anomalies to cover biological crises, namely epileptic seizure and heart failure.
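Both statistics named above are short to implement; the sketch below gives a minimal Grassberger–Procaccia correlation-dimension estimate and Pincus' approximate entropy for a scalar series (parameter choices are illustrative, not the authors').

```python
import numpy as np
from scipy.spatial.distance import pdist

def embed(x, m, tau=1):
    # Delay embedding of a scalar series into m dimensions
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(m)])

def correlation_dimension(x, m=3, tau=1):
    # Grassberger-Procaccia: slope of log C(r) against log r
    d = pdist(embed(x, m, tau))
    r = np.quantile(d, np.linspace(0.05, 0.5, 10))
    C = np.array([(d <= ri).mean() for ri in r])
    slope, _ = np.polyfit(np.log(r), np.log(C), 1)
    return slope

def approximate_entropy(x, m=2, r_frac=0.2):
    # Pincus' ApEn(m, r): regularity statistic, tolerance r scaled to the spread
    r = r_frac * np.std(x)
    def phi(mm):
        E = embed(x, mm)
        d = np.max(np.abs(E[:, None, :] - E[None, :, :]), axis=2)  # Chebyshev
        return np.mean(np.log((d <= r).mean(axis=1)))  # self-matches included
    return phi(m) - phi(m + 1)
```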
APA, Harvard, Vancouver, ISO, and other styles
37

Murakami, H., X. Chen, M. S. Hahn, Y. Liu, M. L. Rockhold, V. R. Vermeul, J. M. Zachara, and Y. Rubin. "Bayesian approach for three-dimensional aquifer characterization at the Hanford 300 Area." Hydrology and Earth System Sciences Discussions 7, no. 2 (March 23, 2010): 2017–52. http://dx.doi.org/10.5194/hessd-7-2017-2010.

Full text of the source
Abstract:
Abstract. This study presents a stochastic, three-dimensional characterization of a heterogeneous hydraulic conductivity field within DOE's Hanford 300 Area site, Washington, by assimilating large-scale, constant-rate injection test data with small-scale, three-dimensional electromagnetic borehole flowmeter (EBF) measurement data. We first inverted the injection test data to estimate the transmissivity field, using zeroth-order temporal moments of pressure buildup curves. We applied a newly developed Bayesian geostatistical inversion framework, the method of anchored distributions (MAD), to obtain a joint posterior distribution of geostatistical parameters and local log-transmissivities at multiple locations. The unique aspects of MAD that make it suitable for this purpose are its ability to integrate multi-scale, multi-type data within a Bayesian framework and to compute a nonparametric posterior distribution. After we combined the distribution of transmissivities with depth-discrete relative-conductivity profile from the EBF data, we inferred the three-dimensional geostatistical parameters of the log-conductivity field, using the Bayesian model-based geostatistics. Such consistent use of the Bayesian approach throughout the procedure enabled us to systematically incorporate data uncertainty into the final posterior distribution. The method was tested in a synthetic study and validated using the actual data that was not part of the estimation. Results showed broader and skewed posterior distributions of geostatistical parameters except for the mean, which suggests the importance of inferring the entire distribution to quantify the parameter uncertainty.
APA, Harvard, Vancouver, ISO, and other styles
38

Wang, Zhizhen, Shuheng Tang, Zhifeng Yan, Songhang Zhang, Zhaodong Xi, Ke Zhang, Kaifeng Wang, Qian Zhang, and Xiongxiong Yang. "Three-Dimensional Geological Modeling of Coal Reservoirs and Analysis of Sensitivity Factors for Combined Mining Capacity." Processes 11, no. 12 (December 18, 2023): 3448. http://dx.doi.org/10.3390/pr11123448.

Full text of the source
Abstract:
Due to the large heterogeneity of coal reservoirs, there is large uncertainty about the extent of their impact on coalbed methane production capacity. The Hanchengbei Block is at an early stage of exploration, has little available production data, and shows large variations in developed production capacity within a single well group during test production. Therefore, how to use the existing data to analyze the geological factors affecting the development of coalbed methane in the Hanchengbei Block is particularly important. In this paper, based on the coal seam properties and production characteristics of the Hanchengbei Block, a three-dimensional geological model of the area was constructed using Petrel 2015 modeling software. Through the utilization of stochastic modeling techniques, reservoir attributes were visualized in three dimensions, and probability distribution functions as well as confidence intervals for different geological parameters were derived through geostatistics. Building upon this foundation, a dual-layer geological model incorporating multiple factors was established using Comet3.0 numerical simulation software. Monte Carlo simulation methods were then employed to simulate the effects of various geological parameters on gas production, yielding corresponding simulation results. Through normalization, parameter sensitivity was analyzed to determine the primary controlling factors influencing production capacity. The results show that the thickness of the No. 5 coal seam in the Hanchengbei Block is mainly distributed in the range of 1.35–6.89 m; the gas content is 10.28–15.52 m³/t; and the permeability is 0.014–0.048 mD. Under their joint influence, the average gas production of the Hanchengbei Block is between 310 and 720 m³/d. The main factors affecting the capacity of the Hanchengbei Block are the thickness and gas content of the coal seam. This study can provide a basis for the subsequent optimization of favorable areas, the formulation of drainage systems, and the design and optimization of development well networks.
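The Monte Carlo sensitivity step can be sketched compactly; below, the parameter ranges come from the abstract, while the response function standing in for the Comet3.0 reservoir simulator is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
thickness = rng.uniform(1.35, 6.89, n)   # coal seam thickness, m
gas = rng.uniform(10.28, 15.52, n)       # gas content, m^3/t
perm = rng.uniform(0.014, 0.048, n)      # permeability, mD

# Stand-in for the reservoir simulator: any monotone response works for the demo
q = 50 * thickness + 30 * gas + 2000 * perm + rng.normal(0, 20, n)

# Normalized (rank) sensitivity: Spearman-style correlation with gas production
def ranks(a):
    return np.argsort(np.argsort(a))

for name, p in (("thickness", thickness), ("gas content", gas), ("permeability", perm)):
    rho = np.corrcoef(ranks(p), ranks(q))[0, 1]
    print(f"{name:12s} rank correlation with production: {rho:+.2f}")
```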
APA, Harvard, Vancouver, ISO, and other styles
39

Candel, Jasper H. J., Maarten G. Kleinhans, Bart Makaske, Wim Z. Hoek, Cindy Quik, and Jakob Wallinga. "Late Holocene channel pattern change from laterally stable to meandering – a palaeohydrological reconstruction." Earth Surface Dynamics 6, no. 3 (August 31, 2018): 723–41. http://dx.doi.org/10.5194/esurf-6-723-2018.

Full text of the source
Abstract:
Abstract. River channel patterns may alter due to changes in hydrological regime related to changes in climate and/or land cover. Such changes are well documented for transitions between meandering and braiding rivers, whereas channel pattern changes between laterally stable and meandering rivers are poorly documented and understood. We hypothesize that many low-energy meandering rivers had relatively low peak discharges and were laterally stable during most of the Holocene, when climate was relatively stable and human impact was limited. Our objectives in this work are to identify a Late Holocene channel pattern change for the low-energy Overijsselse Vecht river, to develop and apply a novel methodology to reconstruct discharge as a function of time following a stochastic approach, and to relate this channel pattern change to reconstructed hydrological changes. We established that the Overijsselse Vecht was laterally virtually stable throughout the Holocene until the Late Middle Ages, after which large meanders formed at lateral migration rates of about 2 m yr−1. The lateral stability before the Late Middle Ages was constrained using a combination of coring information, ground-penetrating radar (GPR), radiocarbon (14C) dating, and optically stimulated luminescence (OSL) dating. We quantified bankfull palaeodischarge as a function of time based on channel dimensions that were reconstructed from the scroll bar sequence and channel cut-offs using coring information and GPR data, combined with chronological constraints from historical maps and OSL dating. We found that the bankfull discharge was significantly greater during the meandering phase compared to the laterally stable phase. Empirical channel and bar pattern models showed that this increase can explain the channel pattern change. The bankfull discharge increase likely reflects climate changes related to the Little Ice Age and/or land use changes in the catchment, in particular as a result of peat reclamation and exploitation.
APA, Harvard, Vancouver, ISO, and other styles
40

Monterrubio-Velasco, Marisol, F. Ramón Zúñiga, José Carlos Carrasco-Jiménez, Víctor Márquez-Ramírez, and Josep de la Puente. "Modeling active fault systems and seismic events by using a fiber bundle model – example case: the Northridge aftershock sequence." Solid Earth 10, no. 5 (September 10, 2019): 1519–40. http://dx.doi.org/10.5194/se-10-1519-2019.

Full text of the source
Abstract:
Abstract. Earthquake aftershocks display spatiotemporal correlations arising from their self-organized critical behavior. Dynamic deterministic modeling of aftershock series is challenging to carry out due to both the physical complexity and the uncertainties related to the different parameters which govern the system. Nevertheless, numerical simulations with the help of stochastic models such as the fiber bundle model (FBM) allow the use of an analog of the physical model that produces a statistical behavior with many similarities to real series. FBMs are simple discrete element models that can be characterized by using few parameters. In this work, the aim is to present a new model based on the FBM that includes geometrical characteristics of fault systems. In our model, the faults are not described with typical geometric measures such as dip, strike, and slip, but are incorporated as weak regions in the model domain that could increase the likelihood of generating earthquakes. In order to analyze the sensitivity of the model to input parameters, a parametric study is carried out. Our analysis focuses on aftershock statistics in the space, time, and magnitude domains. Moreover, we analyzed the properties of the synthetic aftershock sequences, assuming initial load configurations and suitable conditions to propagate the rupture. As an example case, we have modeled a set of real active faults related to the Northridge, California, earthquake sequence. We compare the simulation results to statistical characteristics of the Northridge sequence, determining which range of parameters in our FBM version reproduces the main features observed in real aftershock series. From the results obtained, we observe that two parameters related to the initial load configuration are decisive in obtaining realistic seismicity characteristics: (1) parameter P, which represents the initial probability order, and (2) parameter πfrac, which is the percentage of load distributed to the neighboring cells. The results show that in order to reproduce the statistical characteristics of the real sequence, larger πfrac values (0.85<πfrac<0.95) and very low values of P (0.0<P≤0.08) are needed. This implies the important corollary that a very small departure from an initial random load configuration (controlled by P), together with a large difference between the load transferred from on-fault segments and from off-fault segments (controlled by πfrac), is required to initiate a rupture sequence which conforms to observed statistical properties such as the Gutenberg–Richter law, the Omori law, and the fractal dimension.
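For readers unfamiliar with fiber bundle models, a minimal equal-load-sharing bundle under quasi-static loading (the classical Hemmer–Hansen construction) is sketched below; the fault-geometry weighting and the P and πfrac parameters of the paper's model are not reproduced.

```python
import numpy as np

def els_avalanches(n=100_000, seed=1):
    """Avalanche sizes in an equal-load-sharing fiber bundle model."""
    rng = np.random.default_rng(seed)
    x = np.sort(rng.random(n))            # fiber strength thresholds, ascending
    # Total load carried when fiber k is about to break: F_k = x_k * (n - k)
    F = x * (n - np.arange(n))
    sizes, k = [], 0
    while k < n:
        j = k + 1
        # The avalanche started at fiber k continues while breaking the next
        # fiber requires no more external load than the one that started it
        while j < n and F[j] <= F[k]:
            j += 1
        sizes.append(j - k)
        k = j
    return np.array(sizes)

# For equal load sharing, the avalanche-size distribution approaches the
# classical power law D(s) ~ s**(-5/2), the kind of scaling statistic that
# aftershock simulations are compared against.
sizes = els_avalanches()
```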
APA, Harvard, Vancouver, ISO, and other styles
41

Morena, Carlos De La, Y. A. Nefedyev, A. O. Andreev, E. N. Ahmedshina, A. A. Arkhipova, E. V. Kronrod, and N. Y. Demina. "The analysis of Titan’s physical surface using multifractal geometry methods." Journal of Physics: Conference Series 2103, no. 1 (November 1, 2021): 012017. http://dx.doi.org/10.1088/1742-6596/2103/1/012017.

Full text of the source
Abstract:
Abstract Titan makes up 95% of the mass of all 82 satellites of Saturn. Titan's diameter is 5152 km, which means that it is larger than the Moon by 50%, and it is also significantly larger than Mercury. A subsurface ocean is possible on the satellite; the theory of its presence has already been advanced by some scientists. It is located under a layer of ice and contains 10% ammonia, which acts as a natural antifreeze and keeps the ocean from freezing. On the one hand, the ocean contains a huge amount of salt, which makes life in it hardly possible. But on the other hand, since chemical processes constantly occur on Titan, forming molecules of complex hydrocarbon substances, this can lead to the emergence of the simplest forms of life. There are limitations on the probabilistic and statistical approaches, since not every process and not every result (form and structure of the system) is probabilistic in nature. In contrast to this, fractal analysis allows one to study the structure of complex objects, taking into account their qualitative specifics, for example, the relationship between the structure and the processes of its formation. When constructing a harmonic model of Titan, the method of decomposition of topographic information into spherical functions was used. As a result, based on the harmonic analysis of the Cassini mission data, a topographic model of Titan was created. In its final form, the model describing Titan's surface includes the expansion of the height parameter, depending on the spherical coordinates, into a slowly converging regression series of spherical harmonics. Modeling surface details at a scale of 1 degree requires analysis of the (180 + 1)² harmonic expansion coefficients. An overdetermined topographic information system was solved to meet the regression modelling conditions. In this case, a number of qualitative stochastic data, such as external measures, were used together with the standard postulation of the harmonic system of the Titan model. As a result of sampling self-similar regions (with close values of the self-similarity coefficients) on the surface of Titan, coinciding in the SRGB parameter (which characterizes the color fractal dimension), elements of the satellite's surface were identified that, with a high degree of probability, were evolutionarily formed under the action of the same selenochemical processes.
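In standard notation (not quoted from the paper), the harmonic height model and the coefficient count behind the (180 + 1)² figure are:

```latex
h(\theta,\varphi) \;=\; \sum_{\ell=0}^{L}\ \sum_{m=-\ell}^{\ell} c_{\ell m}\, Y_{\ell m}(\theta,\varphi),
\qquad \#\{c_{\ell m}\} \;=\; (L+1)^2 \;=\; 181^2 \;=\; 32\,761 \quad (L=180),
```

so a 1-degree-resolution model is a regression with roughly 33 thousand unknowns, which is why an overdetermined system and additional stochastic data are required.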
APA, Harvard, Vancouver, ISO, and other styles
42

Dudek, Gabriela, Przemysław Borys, Anna Strzelewicz, and Monika Krasowska. "Characterization of the Structure and Transport Properties of Alginate/Chitosan Microparticle Membranes Utilized in the Pervaporative Dehydration of Ethanol." Polymers 12, no. 2 (February 11, 2020): 411. http://dx.doi.org/10.3390/polym12020411.

Full text of the source
Abstract:
The structure and transport properties of alginate/chitosan microparticle membranes used in ethanol dehydration processes were investigated. The membranes were characterized based on images obtained from high-resolution microscopy. The following parameters were determined: the observed total amount of void space, the average size of the void domains, their length and diameter, the fractal dimension, and the generalized stochastic fractal parameters. The total amount of void space was determined to be between 54% and 64%. The average size of the void domains is smaller for alginate membranes containing neat (CS) and phosphorylated (CS-P) chitosan particles when compared to those membranes filled with glycidol-modified (CS-G) and glutaraldehyde crosslinked (CS-GA) chitosan particles. Furthermore, the transport of ethanol and water particles through the studied membranes was modelled using a random walk framework. It was observed that the results from the theoretical and experimental studies are directly correlated. The smallest values of water to ethanol diffusion coefficient ratios (i.e., 14) were obtained for Alg (sodium alginate) membranes loaded with the CS and CS-P particles, respectively. Significantly larger values (27 and 19) were noted for membranes filled with CS-G and CS-GA particles, respectively. The simulation results show that the size of channels which develop in the alginate matrix is less suited for ethanol molecules compared to water molecules because of their larger size. Such a situation facilitates the separation of water from ethanol. The comparison of the structural analysis of the membranes and random walk simulations allows one to understand the factors that influence the transport phenomena, in the studied membranes, and comment on the effect of the length, diameter, number of channels, and variations in the pore diameters on these transport parameters.
APA, Harvard, Vancouver, ISO, and other styles
43

Schertzer, D., and S. Lovejoy. "EGS Richardson AGU Chapman NVAG3 Conference: Nonlinear Variability in Geophysics: scaling and multifractal processes." Nonlinear Processes in Geophysics 1, no. 2/3 (September 30, 1994): 77–79. http://dx.doi.org/10.5194/npg-1-77-1994.

Full text of the source
Abstract:
Abstract.

1. The conference

The third conference on "Nonlinear VAriability in Geophysics: scaling and multifractal processes" (NVAG 3) was held in Cargese, Corsica, Sept. 10-17, 1993. NVAG3 was a joint American Geophysical Union Chapman and European Geophysical Society Richardson Memorial conference, the first specialist conference jointly sponsored by the two organizations. It followed NVAG1 (Montreal, Aug. 1986), NVAG2 (Paris, June 1988; Schertzer and Lovejoy, 1991), five consecutive annual sessions at EGS general assemblies and two consecutive spring AGU meeting sessions. As with the other conferences and workshops mentioned above, the aim was to develop confrontation between theories and experiments on scaling/multifractal behaviour of geophysical fields. Subjects covered included climate, clouds, earthquakes, atmospheric and ocean dynamics, tectonics, precipitation, hydrology, the solar cycle and volcanoes. Areas of focus included new methods of data analysis (especially those used for the reliable estimation of multifractal and scaling exponents), as well as their application to rapidly growing data bases from in situ networks and remote sensing. The corresponding modelling, prediction and estimation techniques were also emphasized, as were the current debates about stochastic and deterministic dynamics, fractal geometry and multifractals, self-organized criticality and multifractal fields, each of which was the subject of a specific general discussion. The conference started with a one-day short course on multifractals featuring four lectures on a) Fundamentals of multifractals: dimension, codimensions, codimension formalism, b) Multifractal estimation techniques (PDMS, DTM), c) Numerical simulations, Generalized Scale Invariance analysis, d) Advanced multifractals, singular statistics, phase transitions, self-organized criticality and Lie cascades (given by D. Schertzer and S. Lovejoy; detailed course notes were sent to participants shortly after the conference). This was followed by five days with 8 oral sessions and one poster session. Overall, there were 65 papers involving 74 authors. In general, the main topics covered are reflected in this special issue: geophysical turbulence, clouds and climate, hydrology and solid earth geophysics. In addition to AGU and EGS, the conference was supported by the International Science Foundation, the Centre National de la Recherche Scientifique, Meteo-France, the Department of Energy (US), the Commission of European Communities (DG XII), the Comite National Francais pour le Programme Hydrologique International and the Ministere de l'Enseignement Superieur et de la Recherche (France). We thank P. Hubert, Y. Kagan, Ph. Ladoy, A. Lazarev, S.S. Moiseev, R. Pierrehumbert, F. Schmitt and Y. Tessier for help with the organization of the conference. However, special thanks go to A. Richter and the EGS office, and to B. Weaver and the AGU, without whom this would have been impossible. We also thank the Institut d'Etudes Scientifiques de Cargese, whose beautiful site was much appreciated, as well as the Bar des Amis, whose ambiance stimulated so many discussions.

2. Tribute to L.F. Richardson

With NVAG3, the European geophysical community paid tribute to Lewis Fry Richardson (1881-1953) on the 40th anniversary of his death. Richardson was one of the founding fathers of the idea of scaling and fractality, and his life reflects the European geophysical community and its history in many ways.
Although many of Richardson's numerous, outstanding scientific contributions to geophysics have been recognized, perhaps his main contribution concerning the importance of scaling and cascades has still not received the attention it deserves. Richardson was the first not only to suggest numerical integration of the equations of motion of the atmosphere, but also to attempt to do so by hand, during the First World War. This work, as well as a presentation of a broad vision of future developments in the field, appeared in his famous, pioneering book "Weather prediction by numerical process" (1922). As a consequence of his atmospheric studies, the nondimensional number associated with fluid convective stability has been called the "Richardson number". In addition, his book presents a study of the limitations of numerical integration of these equations, and it was in this book - through a celebrated poem - that the suggestion that turbulent cascades are the fundamental driving mechanism of the atmosphere was first made. In these cascades, large eddies break up into smaller eddies in a manner which involves no characteristic scales, all the way from the planetary scale down to the viscous scale. This led to the Richardson law of turbulent diffusion (1926) and to the suggestion that particle trajectories might not be describable by smooth curves, but that such trajectories might instead require highly convoluted curves such as the Peano or Weierstrass (fractal) curves for their description. As a founder of the cascade and scaling theories of atmospheric dynamics, he more or less anticipated the Kolmogorov law (1941). He also used scaling ideas to invent the "Richardson dividers method" of successively increasing the resolution of fractal curves and tested out the method on geographical boundaries (as part of his wartime studies). In the latter work he anticipated recent efforts to study scale invariance in rivers and topography. His complex life typifies some of the hardships that the European scientific community has had to face. His educational career is unusual: he received a B.A. degree in physics, mathematics, chemistry, biology and zoology at Cambridge University, and he finally obtained his Ph.D. in mathematical psychology at the age of 47 from the University of London. As a conscientious objector he was compelled to quit the United Kingdom Meteorological Office in 1920 when the latter was militarized by integration into the Air Ministry. He subsequently became the head of a physics department and the principal of a college. In 1940, he retired to do research on war, which was published posthumously in book form (Richardson, 1963). This latter work is testimony to the trauma caused by the two World Wars, which led some scientists, including Richardson, to use their skills in rational attempts to eradicate the source of conflict. Unfortunately, this remains an open field of research.

3. The contributions in this special issue

Perhaps the area of geophysics where scaling ideas have the longest history, and where they have made the largest impact in the last few years, is turbulence. The paper by Tsinober is an example where geometric fractal ideas are used to deduce corrections to standard dimensional analysis results for turbulence. Based on local spontaneous breaking of isotropy of turbulent flows, the fractal notion is used in order to deduce diffusion laws (anomalous with respect to the Richardson law).
It is argued that this law is ubiquitous from the atmospheric boundary layer to the stratosphere. The asymptotic intermittency exponent is hypothesized to be not only finite but to be determined by the angular momentum flux. Schmitt et al., Chigirinskaya et al. and Lazarev et al. apply statistical multifractal notions to atmospheric turbulence. In the former, the formal analogy between multifractals and thermodynamics is exploited, in particular to confirm theoretical predictions that sample-size-dependent multifractal phase transitions occur. While this quantitatively explains the behaviour of the most extreme turbulent events, it suggests that - contrary to the type of multifractals most commonly discussed in the literature, which are bounded - more violent (unbounded) multifractals are indeed present in the atmospheric wind field. Chigirinskaya et al. use a tropical rather than mid-latitude data set to study the extreme fluctuations from yet another angle: that of coherent structures, which, in the multifractal framework, are identified with singularities of various orders. The existence of a critical order of singularity which distinguishes violent "self-organized critical structures" was theoretically predicted ten years ago; here it is directly estimated. The second of this two-part series (Lazarev et al.) investigates yet another aspect of tropical atmospheric dynamics: the strong multiscaling anisotropy. Beyond the determination of universal multifractal indices and critical singularities in the vertical, this enables a comparison to be made with Chigirinskaya et al.'s horizontal results, requiring an extension of the unified scaling model of atmospheric dynamics. Other approaches to the problem of geophysical turbulence are followed in the papers by Pavlos et al., Vassiliadis et al. and Voros et al. All of them share a common assumption that a very small number of degrees of freedom (deterministic chaos) might be sufficient for characterizing/modelling the systems under consideration. Pavlos et al. consider the magnetospheric response to the solar wind, showing that scaling occurs both in real space (using spectra) and in phase space, the latter being characterized by a correlation dimension. The paper by Vassiliadis et al. follows on directly by investigating the phase space properties of power-law filtered and rectified gaussian noise; the results further quantify how low phase space correlation dimensions can occur even with (stochastic) processes having a very large number of degrees of freedom. Voros et al. analyze time series of geomagnetic storms and magnetosphere pulsations, also estimating their correlation dimensions and Lyapounov exponents, taking special care of the stability of the estimates. They discriminate low-dimensional events from others, which are for instance attributed to incoherent waves. While clouds and climate were the subject of several talks at the conference (including several contributions on multifractal clouds), Cahalan's contribution is the only one in this special issue. Addressing the fundamental problem of the relationship of horizontal cloud heterogeneity and the related radiation fields, he first summarizes some recent numerical results showing that even for comparatively thin clouds fractal heterogeneity will significantly reduce the albedo. The model used for the distribution of cloud liquid water is the monofractal "bounded cascade" model, whose properties are also outlined.
The paper by Falkovich addresses another problem concerning the general circulation: the nonlinear interaction of waves. By assuming the existence of a peak (i.e. scale break) at the inertial oscillation frequency, it is argued that due to remarkable cancellations, the interactions between long inertio-gravity waves and Rossby waves are anomalously weak, producing a "wave condensate" of large amplitude so that wave breaking with front creation can occur. Kagan et al., Eneva and Hooge et al. consider fractal and multifractal behaviour in seismic events. Eneva estimates multifractal exponents of the density of micro-earthquakes induced by mining activity. The effects of sample limitations are discussed, especially in order to distinguish between genuine and spurious multifractal behaviour. With the help of an analysis of the CALNET catalogue, Hooge et al. point out that the origin of the celebrated Gutenberg-Richter law could be related to a non-classical self-organized criticality generated by a first-order phase transition in a multifractal earthquake process. They also analyze multifractal seismic fields which are obtained by raising earthquake amplitudes to various powers and summing them on a grid. In contrast, Kagan, analyzing several earthquake catalogues, discusses the various laws associated with earthquakes. Giving theoretical and empirical arguments, he proposes an additive (monofractal) model of earthquake stress, emphasizing the relevance of (asymmetric) stable Cauchy probability distributions to describe earthquake stress distributions. This would yield a linear model for self-organized critical earthquakes.

References:
Kolmogorov, A.N.: Local structure of turbulence in an incompressible liquid for very large Reynolds number, Proc. Acad. Sci. URSS Geochem. Sect., 30, 299-303, 1941.
Perrin, J.: Les Atomes, NRF-Gallimard, Paris, 1913.
Richardson, L.F.: Weather prediction by numerical process, Cambridge Univ. Press, 1922 (republished by Dover, 1965).
Richardson, L.F.: Atmospheric diffusion shown on a distance-neighbour graph, Proc. Roy. Soc. London, A110, 709-737, 1926.
Richardson, L.F.: The problem of contiguity: an appendix to Statistics of Deadly Quarrels, General Systems Yearbook, 6, 139-187, 1963.
Schertzer, D. and Lovejoy, S.: Nonlinear Variability in Geophysics, Kluwer, 252 pp., 1991.
APA, Harvard, Vancouver, ISO, and other styles
44

Geiger, Bernhard C., and Tobias Koch. "On the Information Dimension of Stochastic Processes." IEEE Transactions on Information Theory 65, no. 10 (October 2019): 6496–518. http://dx.doi.org/10.1109/tit.2019.2922186.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
45

Howroyd, Douglas C., and Han Yu. "Assouad Dimension of Random Processes." Proceedings of the Edinburgh Mathematical Society 62, no. 1 (November 16, 2018): 281–90. http://dx.doi.org/10.1017/s0013091518000433.

Full text of the source
Abstract:
AbstractIn this paper we study the Assouad dimension of graphs of certain Lévy processes and functions defined by stochastic integrals. We do this by introducing a convenient condition which guarantees a graph to have full Assouad dimension and then show that graphs of our studied processes satisfy this condition.
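For reference, the Assouad dimension used here is defined in the standard way (not specific to this paper): for F ⊆ ℝⁿ,

```latex
\dim_A F \;=\; \inf\Bigl\{\, s \ge 0 \;:\; \exists\, C>0 \ \ \forall\, 0<r<R \ \ \forall\, x\in F,\ \
N_r\bigl(B(x,R)\cap F\bigr) \;\le\; C\,(R/r)^{s} \Bigr\},
```

where N_r(E) is the smallest number of balls of radius r needed to cover E; for the graph of a function on an interval, "full" Assouad dimension means the maximal possible value 2.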
APA, Harvard, Vancouver, ISO, and other styles
46

Lowen, Steven B., and Malvin C. Teich. "Estimation and Simulation of Fractal Stochastic Point Processes." Fractals 03, no. 01 (March 1995): 183–210. http://dx.doi.org/10.1142/s0218348x95000151.

Full text of the source
Abstract:
We investigate the properties of fractal stochastic point processes (FSPPs). First, we define FSPPs and develop several mathematical formulations for these processes, showing that over a broad range of conditions they converge to a particular form of FSPP. We then provide examples of a wide variety of phenomena for which they serve as suitable models. We proceed to examine the analytical properties of two useful fractal dimension estimators for FSPPs, based on the second-order properties of the points. Finally, we simulate several FSPPs, each with three specified values of the fractal dimension. Analysis and simulation reveal that a variety of factors confound the estimate of the fractal dimension, including the finite length of the simulation, structure or type of FSPP employed, and fluctuations inherent in any FSPP. We conclude that for segments of FSPPs with as many as 10⁶ points, the fractal dimension can be estimated only to within ±0.1.
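One standard FSPP construction is the fractal renewal process with power-law interevent intervals; a small sketch is given below (the specific estimators analyzed in the paper are not implemented here).

```python
import numpy as np

def fractal_renewal_process(n_events=10**6, alpha=0.8, t_min=1.0, seed=0):
    """Event times with i.i.d. Pareto interevent intervals, p(t) ~ t**-(1+alpha)."""
    rng = np.random.default_rng(seed)
    u = rng.random(n_events)
    intervals = t_min * u ** (-1.0 / alpha)   # inverse-CDF Pareto sampling
    return np.cumsum(intervals)

events = fractal_renewal_process()
# A count-based estimate would then examine how the variance of event counts
# in windows of duration T grows with T over the scaling range (Fano factor).
```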
APA, Harvard, Vancouver, ISO, and other styles
47

Drach, I., M. Dykha, O. Babak, and O. Kovtun. "Modeling surface structure of tribotechnical materials." Problems of Tribology 29, no. 1/111 (March 19, 2024): 16–24. http://dx.doi.org/10.31891/2079-1372-2024-111-1-16-24.

Full text of the source
Abstract:
Modern tribology makes it possible to correctly calculate, diagnose, predict and select appropriate materials for friction pairs, and to determine the optimal mode of operation of the tribo-joint. The main parameter for solving friction problems and other problems of tribology is the topography of the surface. The main purpose of the models in these tasks is to represent the tribological properties of engineering surfaces. In the framework of the classical approach, the topography of the surface is studied on the basis of its images from the point of view of functional and statistical characteristics: the evaluation of the functional characteristics is based on the maximum roughness along the height and the average roughness along the center line, while the statistical characteristics are estimated using the power spectrum or the autocorrelation function. However, these characteristics are not purely surface properties: they depend on the resolution of the device for measuring the surface geometry and on the length of the scan. The degree of complexity of a surface shape can instead be represented by a parameter called the fractal dimension: a more complex surface has a larger value of this parameter. The fractal dimension is a characteristic of surface relief and makes it possible to explain tribological phenomena independently of the measurement resolution. This article provides an overview of mathematical approaches to the description of the relief of engineering surfaces, in particular statistical, stochastic and topological modeling, and their limitations, advantages and disadvantages. The implementation of the principles of the theory of fractal structures is discussed, which makes it possible to introduce the degree of imbalance of the tribological system into the analysis of structure formation in the surface and near-surface layers of materials and to describe the development of friction and wear processes. This is the basis for controlling the structure of the surface layers of materials with given properties. The concept of fractals, used for the quantitative description of the dissipative structure of the tribojunction zone, makes it possible to establish a connection between its fractal dimension and mechanical properties, as well as critical states of deformation of metals and alloys. The course of research and the stages of fractal modeling, as well as the classification of methods of fractal analysis of the structure of engineering contact surfaces, are considered. A critical analysis of modern models based on the energy-spectral density function, which are quite similar to fractal models, is presented. Readers are expected to gain an overview of research developments in existing modeling methods and of directions for future research in the field of tribology.
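Since the fractal dimension is the central quantity here, a minimal box-counting estimator for a binary image of a surface structure is sketched below (a generic implementation, not one of the specific methods reviewed).

```python
import numpy as np

def box_counting_dimension(mask):
    """mask: 2-D boolean array marking the structure (e.g. a profile curve)."""
    n = 2 ** int(np.floor(np.log2(min(mask.shape))))
    mask = mask[:n, :n]                    # crop to a power-of-two square
    sizes, counts = [], []
    k = n // 2
    while k >= 1:
        blocks = mask.reshape(n // k, k, n // k, k)
        counts.append(blocks.any(axis=(1, 3)).sum())   # occupied k-by-k boxes
        sizes.append(k)
        k //= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope   # N(k) ~ k**(-D), so the dimension is minus the slope
```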
APA, Harvard, Vancouver, ISO, and other styles
48

Sanchez, Freddy H. Marin, and William Eduardo Alfonso. "Numerical Approximation of Fractal Dimension of Gaussian Stochastic Processes." Applied Mathematics 05, no. 12 (2014): 1763–72. http://dx.doi.org/10.4236/am.2014.512169.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
49

Beresnev, Igor A., and Gail M. Atkinson. "Generic finite-fault model for ground-motion prediction in eastern North America." Bulletin of the Seismological Society of America 89, no. 3 (June 1, 1999): 608–25. http://dx.doi.org/10.1785/bssa0890030608.

Full text of the source
Abstract:
Abstract Ground-motion models based on the Brune point-source approximation have an underlying ω² spectrum, with a single corner frequency. These models over-predict observed spectral amplitudes at low to intermediate frequencies (∼0.1 to 2 Hz), for earthquakes with moment magnitudes M of 4 or greater. The empirical spectra of moderate to large events tend to sag at these frequencies, relative to the level suggested by the Brune point-source model. A model that accounts for the finite extent of the fault plane correctly describes the observed spectral shapes. The model represents seismic radiation as a sum of contributions from several subfaults. Each subfault may be represented as a point source, and each subevent has an ω² spectrum. When contributions to ground motion at an observation point are summed over all subfaults, the resulting spectral shape has two corner frequencies and more closely matches observed spectra. The more realistic spectral shape obtained through finite-fault modeling reflects the underlying reality that the radiation from real faults is formed by ruptures of their smaller parts, whose corner frequencies are higher than those implied by the full fault dimension. The two corners appear naturally as a result of subevent summation. We use the stochastic finite-fault methodology to simulate the recorded ground-motion data from all significant earthquakes in eastern North America (ENA). These data include eight events of M > 4 recorded on modern digital instruments (regional seismographs and strong-motion instruments), and three historical events of M 5.8 to 7.3 recorded on analog instruments. The goodness of fit of synthetics to the data is defined as simulation bias, which is indicated by the difference between the logarithms of the observed and the simulated spectrum, averaged over all recordings of an earthquake. The finite-fault simulations provide an unbiased fit to the observational database over a broad frequency range (0.1 to 50 Hz), for all events. A surprising conclusion of these simulations is that the subfault size that best fits the observed spectral shape increases linearly with moment magnitude, in an apparently deterministic manner. This strongly suggests that the subfault size can be unambiguously defined by the magnitude of the simulated earthquake. In this case, the radiation-strength factor(s), which is proportional to the square root of the high-frequency Fourier acceleration level, remains the only free parameter of the model. Its value is related to the maximum slip velocity on the fault. The strength factors for all modeled ENA events are within the range of 1.0 to 1.6, with the exception of the Saguenay mainshock (s = 2.2). This suggests a remarkable uniformity in earthquake slip processes.
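For reference, the single-corner Brune acceleration spectrum discussed above and a generic two-corner form of the kind produced by subevent summation can be written as (standard shapes, not the paper's exact parameterization):

```latex
A(f) \;\propto\; \frac{M_0\,(2\pi f)^2}{1+(f/f_0)^2}
\qquad\text{vs.}\qquad
A(f) \;\propto\; M_0\,(2\pi f)^2\!\left[\frac{1-\epsilon}{1+(f/f_A)^2}+\frac{\epsilon}{1+(f/f_B)^2}\right],
```

where f_A < f_B are the two corner frequencies and ε weights the two terms; the sag at intermediate frequencies relative to the single-corner model appears between f_A and f_B.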
APA, Harvard, Vancouver, ISO, and other styles
50

Małysz, Robert. "A Generalization of Fractal Interpolation Stochastic Processes to Higher Dimensions." Fractals 09, no. 04 (December 2001): 415–28. http://dx.doi.org/10.1142/s0218348x01000907.

Full text of the source
Abstract:
We generalize the notion of fractal interpolation functions (FIFs) to stochastic processes. We prove that the Minkowski dimension of trajectories of such interpolations for self-similar processes with stationary increments converges to 2 − α. We generalize the notion of vector-valued FIFs to stochastic processes. Trajectories of such interpolations based on an equally spaced sample of size n on the interval [0,1] converge to the trajectory of the original process. Moreover, for fractional Brownian motion and, more generally, for α-self-similar processes with stationary increments (α-sssi processes), upper bounds of the Minkowski dimensions of the image and the graph converge to the Hausdorff dimension of the image and the graph of the original process, respectively.
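For orientation, the classical benchmark mirrored by the 2 − α limit is the dimension of the graph of fractional Brownian motion with Hurst exponent H ∈ (0, 1):

```latex
\dim_{M}\operatorname{graph}(B_H) \;=\; \dim_{H}\operatorname{graph}(B_H) \;=\; 2 - H
\quad \text{almost surely},
```

so with α playing the role of H, the interpolation reproduces the trajectory roughness of the original process in the limit.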
APA, Harvard, Vancouver, ISO, and other styles
