Journal articles on the topic 'Variance decomposition process'

Consult the top 50 journal articles for your research on the topic 'Variance decomposition process.'

1

Myrzakhmetova, B., U. Besterekov, I. Petropavlovsky, S. Ahnazarova, V. Kiselev, and S. Romanova. "Optimization of Decomposition Process of Karatau Phosphorites." Eurasian Chemico-Technological Journal 14, no. 2 (February 7, 2012): 183. http://dx.doi.org/10.18321/ectj113.

Abstract:
The phosphorous-acid decomposition of Karatau phosphorites has been studied. The effects of temperature, time, and acid rate on the decomposition of the phosphate raw material were examined, and the conditions ensuring the maximum degree of phosphorite decomposition were identified. The reproducibility of the experimental results was assessed via a variance estimate obtained with methods of mathematical statistics, and the coefficients of the regression equations were determined. The significance of the regression coefficients was checked by Student's criterion, and the adequacy of the regression equations to the experiment was checked by Fisher's criterion. The decomposition parameters of the studied raw materials were then optimized using the utopian-point method.
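The statistical workflow this abstract describes (fit a regression to the factor data, test coefficient significance with Student's criterion, test adequacy with Fisher's criterion) is easy to reproduce with standard tools. A minimal sketch with synthetic data and hypothetical variable names standing in for the phosphorite experiments:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
# Hypothetical stand-in data: temperature, time, acid rate -> degree of decomposition
df = pd.DataFrame({
    "temp": rng.uniform(40, 80, 30),
    "time": rng.uniform(10, 60, 30),
    "acid": rng.uniform(0.8, 1.2, 30),
})
df["degree"] = 20 + 0.5 * df["temp"] + 0.3 * df["time"] + 15 * df["acid"] \
               + rng.normal(0, 2, 30)

model = smf.ols("degree ~ temp + time + acid", data=df).fit()
print(model.tvalues)   # Student's t for each regression coefficient
print(model.f_pvalue)  # Fisher's F for the regression as a whole
```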
2

Feunou, Bruno, and Cédric Okou. "Good Volatility, Bad Volatility, and Option Pricing." Journal of Financial and Quantitative Analysis 54, no. 2 (September 13, 2018): 695–727. http://dx.doi.org/10.1017/s0022109018000777.

Abstract:
Advances in variance analysis permit the splitting of the total quadratic variation of a jump-diffusion process into upside and downside components. Recent studies establish that this decomposition enhances volatility predictions and highlight the upside/downside variance spread as a driver of the asymmetry in stock price distributions. To appraise the economic gain of this decomposition, we design a new and flexible option pricing model in which the underlying asset price exhibits distinct upside and downside semivariance dynamics driven by the model-free proxies of the variances. The new model outperforms common benchmarks, especially the alternative that splits the quadratic variation into diffusive and jump components.
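The upside/downside split that the abstract builds on has a simple model-free sample analogue: realized semivariances computed from signed returns. A minimal sketch of that proxy, not the paper's exact estimator:

```python
import numpy as np

def realized_semivariances(returns):
    """Split realized variance into upside and downside components."""
    r = np.asarray(returns)
    rs_up = np.sum(r[r > 0] ** 2)    # upside semivariance
    rs_down = np.sum(r[r < 0] ** 2)  # downside semivariance
    return rs_up, rs_down            # rs_up + rs_down equals realized variance

# Example with simulated intraday returns (390 one-minute observations)
r = np.random.default_rng(1).normal(0.0, 0.001, 390)
up, down = realized_semivariances(r)
print(up, down, up - down)  # the spread drives distributional asymmetry
```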
3

Lytvynenko, Iaroslav, Serhii Lupenko, Oleh Nazarevych, Hryhorii Shymchuk, and Volodymyr Hotovych. "Additive mathematical model of gas consumption process." Scientific Journal of the Ternopil National Technical University 104, no. 4 (2021): 87–97. http://dx.doi.org/10.33108/visnyk_tntu2021.04.087.

Abstract:
The problem of constructing a new mathematical model of the gas consumption process is considered in this paper. The new mathematical model is presented as an additive mixture of three components: a cyclic random process, a trend component, and a stochastic residual. The three components are obtained with the "caterpillar" method, which yields ten components of the singular decomposition. In this approach, the cyclic component is formed as the sum of nine decomposition components that share one property: repeated deployment over time. The trend component of the new mathematical model is the second component of the singular decomposition, and the stochastic residual is formed as the difference between the values of the studied gas consumption process and the sum of the cyclic and trend components. Two approaches to stochastic processing of the cyclic gas consumption process are used in this paper, based on the known model of a stochastically periodic random process and on a cyclic random process as models of the cyclic component. Applying a cyclic random process with cyclic structure as the model of the cyclic component makes it possible to obtain a variance estimate over the cycle of the gas consumption process that, provided the cyclic component is segmented at its troughs, is much smaller than the variance estimate obtained otherwise, indicating greater accuracy in the study of the gas consumption process; the obtained stochastic estimates will be used when modeling the gas consumption process in further studies.
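The "caterpillar" method named above is a common label for singular spectrum analysis (SSA). A generic sketch of the decomposition step, assuming nothing about the authors' implementation: embed the series in a Hankel trajectory matrix, take its SVD, and turn each singular triple back into an elementary series by anti-diagonal averaging.

```python
import numpy as np

def ssa_components(x, L, n_components=10):
    """Basic singular spectrum analysis ('caterpillar' method)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    K = N - L + 1
    X = np.column_stack([x[i:i + L] for i in range(K)])  # trajectory matrix, L x K
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    comps = []
    for i in range(min(n_components, len(s))):
        Xi = s[i] * np.outer(U[:, i], Vt[i])
        # Hankelization: average anti-diagonals to recover an elementary series
        comps.append(np.array([Xi[::-1].diagonal(k).mean()
                               for k in range(-L + 1, K)]))
    return comps  # the full set of L components sums back to x

# The trend can then be taken as a slowly varying component, the cyclic part as
# a sum of oscillating components, and the stochastic residual as the remainder.
```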
4

Wu, Xunfeng, Shiwen Zhang, Zhe Gong, Junkai Ji, Qiuzhen Lin, and Jianyong Chen. "Decomposition-Based Multiobjective Evolutionary Optimization with Adaptive Multiple Gaussian Process Models." Complexity 2020 (February 11, 2020): 1–22. http://dx.doi.org/10.1155/2020/9643273.

Abstract:
In recent years, a number of recombination operators have been proposed for multiobjective evolutionary algorithms (MOEAs). One kind of recombination operator is designed based on the Gaussian process model. However, this approach only uses one standard Gaussian process model with fixed variance, which may not work well for solving various multiobjective optimization problems (MOPs). To alleviate this problem, this paper introduces a decomposition-based multiobjective evolutionary optimization with adaptive multiple Gaussian process models, aiming to provide a more effective heuristic search for various MOPs. For selecting a more suitable Gaussian process model, an adaptive selection strategy is designed by using the performance enhancements on a number of decomposed subproblems. In this way, our proposed algorithm has more search patterns at its disposal and is able to produce more diversified solutions. The performance of our algorithm is validated on some well-known F, UF, and WFG test instances, and the experiments confirm that our algorithm shows clear advantages over six competitive MOEAs.
5

Ortu, Fulvio, Federico Severino, Andrea Tamoni, and Claudio Tebaldi. "A persistence‐based Wold‐type decomposition for stationary time series." Quantitative Economics 11, no. 1 (2020): 203–30. http://dx.doi.org/10.3982/qe994.

Abstract:
This paper shows how to decompose weakly stationary time series into the sum, across time scales, of uncorrelated components associated with different degrees of persistence. In particular, we provide an Extended Wold Decomposition based on an isometric scaling operator that makes averages of process innovations. Thanks to the uncorrelatedness of components, our representation of a time series naturally induces a persistence‐based variance decomposition of any weakly stationary process. We provide two applications to show how the tools developed in this paper can shed new light on the determinants of the variability of economic and financial time series.
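Schematically, in illustrative notation rather than the paper's, the construction starts from the classical Wold representation of a zero-mean weakly stationary series and rearranges the innovations into uncorrelated scale components, whose variances then add up:

```latex
x_t = \sum_{k \ge 0} \psi_k \,\varepsilon_{t-k}
\quad\Longrightarrow\quad
x_t = \sum_{j \ge 1} x_t^{(j)}, \qquad
\operatorname{Var}(x_t) = \sum_{j \ge 1} \operatorname{Var}\!\left(x_t^{(j)}\right),
```

where the component $x_t^{(j)}$ collects fluctuations with persistence attached to time scale $2^j$; the variance identity is exactly the persistence-based decomposition the abstract refers to.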
6

Bigerna, Simona, Maria Chiara D’Errico, and Paolo Polinori. "Dynamic forecast error variance decomposition as risk management process for the Gulf Cooperation Council oil portfolios." Resources Policy 78 (September 2022): 102937. http://dx.doi.org/10.1016/j.resourpol.2022.102937.

7

Chen, Mei-Ling, Kai-Li Wang, Ya-Ching Sung, Fu-Lai Lin, and Wei-Chuan Yang. "The Dynamic Relationship between the Investment Behavior and the Morgan Stanley Taiwan Index: Foreign Institutional Investors' Decision Process." Review of Pacific Basin Financial Markets and Policies 10, no. 03 (September 2007): 389–413. http://dx.doi.org/10.1142/s0219091507001124.

Abstract:
This research employs VAR models, impulse response functions, forecast error variance decomposition, and bivariate GJR GARCH models to explore the dynamic relationship between foreign investment and the MSCI Taiwan Index (MSCI–TWI). The estimates of the VAR, the impulse-response functions, and the forecast error variance decomposition show that strong feedback effects exist between net foreign investment and the MSCI–TWI. In particular, our results demonstrate that the MSCI–TWI has the greatest influence over the decision-making processes of foreign investors. Also, we see that exchange rates exert a negative influence on both net foreign investment dollars and the MSCI–TWI. In addition, the US–Taiwan interest rate difference has a positive influence on net foreign investment dollars and a negative influence on the MSCI–TWI. As for asymmetric own-volatility transmission, negative shocks in the MSCI–TWI tend to create greater volatility for itself in the following period than positive shocks. Our research indicates an asymmetric information transmission mechanism from net foreign investment to the MSCI–TWI market. Moreover, the estimated correlation coefficient shows that the MSCI–TWI and net foreign investment dollars have a positive contemporaneous correlation.
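The VAR / impulse-response / FEVD toolchain applied here is standard; a minimal sketch with statsmodels, using simulated data and hypothetical column names in place of the paper's series:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(2)
e = rng.normal(size=(200, 2))
y = np.zeros((200, 2))
for t in range(1, 200):  # simple VAR(1) data-generating process
    y[t] = np.array([0.5 * y[t - 1, 0] + 0.2 * y[t - 1, 1],
                     0.3 * y[t - 1, 0]]) + e[t]
data = pd.DataFrame(y, columns=["net_foreign_investment", "msci_twi"])

res = VAR(data).fit(maxlags=4, ic="aic")
irf = res.irf(10)       # impulse response functions, 10 steps ahead
fevd = res.fevd(10)     # forecast error variance decomposition
print(fevd.summary())   # share of each shock in each variable's forecast variance
```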
8

Chang, Tian, Chuanlong Ma, Anton Nikiforov, Savita K. P. Veerapandian, Nathalie De Geyter, and Rino Morent. "Plasma degradation of trichloroethylene: process optimization and reaction mechanism analysis." Journal of Physics D: Applied Physics 55, no. 12 (December 22, 2021): 125202. http://dx.doi.org/10.1088/1361-6463/ac40bb.

Abstract:
In this study, a multi-pin-to-plate negative corona discharge reactor was employed to degrade the hazardous compound trichloroethylene (TCE). The response surface methodology was applied to examine the influence of various process factors (relative humidity (RH), gas flow rate, and discharge power) on the TCE decomposition process, with regard to the TCE removal efficiency and the CO2 and CO selectivities. Analysis of variance was used to estimate the significance of the single process factors and their interactions. The discharge power proved to have the most influential impact on the TCE removal efficiency and the CO2 and CO selectivities, followed by the gas flow rate and finally the RH. Under the optimal conditions of 20.83% RH, 2 W discharge power, and 0.5 l min–1 gas flow rate, the optimal TCE removal efficiency (86.05%), CO2 selectivity (8.62%), and CO selectivity (15.14%) were achieved. In addition, a possible TCE decomposition pathway was proposed based on the investigation of byproducts identified in the exhaust gas of the non-thermal plasma reactor. This work paves the way for the control of chlorinated volatile organic compounds.
9

Shiyko, Mariya P., and Nilam Ram. "Conceptualizing and Estimating Process Speed in Studies Employing Ecological Momentary Assessment Designs: A Multilevel Variance Decomposition Approach." Multivariate Behavioral Research 46, no. 6 (November 30, 2011): 875–99. http://dx.doi.org/10.1080/00273171.2011.625310.

10

Estrada Vargas, Leopoldo, Deni Torres Roman, and Homero Toral Cruz. "A Study of Wavelet Analysis and Data Extraction from Second-Order Self-Similar Time Series." Mathematical Problems in Engineering 2013 (2013): 1–14. http://dx.doi.org/10.1155/2013/102834.

Abstract:
Statistical analysis and synthesis of self-similar discrete time signals are presented. The analysis equation is formally defined through a special family of basis functions of which the simplest case matches the Haar wavelet. The original discrete time series is synthesized without loss by a linear combination of the basis functions after some scaling, displacement, and phase shift. The decomposition is then used to synthesize a new second-order self-similar signal with a different Hurst index than the original. The components are also used to describe the behavior of the estimated mean and variance of self-similar discrete time series. It is shown that the sample mean, although unbiased, provides less information about the process mean as the Hurst index grows. It is also demonstrated that the classical variance estimator is biased, and that the widely accepted aggregated-variance-based estimator of the Hurst index is biased not by its nature (it is unbiased and has minimal variance) but through flaws in its implementation. Using the proposed decomposition, the correct estimation of the Variance Plot is described, as well as its close association with the popular Logscale Diagram.
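The aggregated-variance estimator whose implementation flaws the abstract discusses is, in its textbook form, a one-liner per block size: average the series over blocks of length m, plot the log sample variance against log m, and read H off the slope. A standard sketch:

```python
import numpy as np

def aggregated_variance_hurst(x, block_sizes):
    """Textbook aggregated-variance ('Variance Plot') estimator of H:
    for self-similar series Var(X^(m)) ~ m^(2H-2)."""
    x = np.asarray(x, dtype=float)
    log_m, log_v = [], []
    for m in block_sizes:
        n_blocks = len(x) // m
        blocks = x[:n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        log_m.append(np.log(m))
        log_v.append(np.log(blocks.var(ddof=1)))
    slope = np.polyfit(log_m, log_v, 1)[0]
    return 1 + slope / 2   # Var ~ m^(2H-2)  =>  H = 1 + slope/2
```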
11

Verter, Nahanga, and Věra Bečvářová. "The Impact of Agricultural Exports on Economic Growth in Nigeria." Acta Universitatis Agriculturae et Silviculturae Mendelianae Brunensis 64, no. 2 (2016): 691–700. http://dx.doi.org/10.11118/actaun201664020691.

Abstract:
Agriculture is the backbone of Nigeria's socioeconomic development. This paper investigates the impact of agricultural exports on economic growth in Nigeria using OLS regression, Granger causality, Impulse Response Function, and Variance Decomposition approaches. Both the OLS regression and Granger causality results support the hypothesis of agricultural-exports-led economic growth in Nigeria. The results, however, show an inverse relationship between the agricultural degree of openness and economic growth in the country. The Impulse Response Function results fluctuate, revealing both upward and downward shocks from agricultural exports to economic growth in the country. The Variance Decomposition results also show that a shock to agricultural exports can contribute to the fluctuation in the variance of economic growth in the long run. For Nigeria to experience a favourable trade balance in agricultural trade, domestic processing industries should be encouraged, while imports of agricultural commodities that the country could process cheaply should be discouraged. Undoubtedly, this measure could drastically reduce the country's overreliance on food imports and increase the rate of agricultural production for self-sufficiency, exports, and its contribution to economic growth in the country.
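The Granger-causality step of such an analysis is routine with statsmodels; a minimal sketch with simulated stand-ins for the paper's series:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(3)
agr_exports = rng.normal(size=200)
# gdp_growth depends on lagged agricultural exports by construction
gdp_growth = 0.4 * np.concatenate([[0.0], agr_exports[:-1]]) + rng.normal(size=200)

df = pd.DataFrame({"gdp_growth": gdp_growth, "agr_exports": agr_exports})
# Tests H0: the second column does NOT Granger-cause the first column
grangercausalitytests(df[["gdp_growth", "agr_exports"]], maxlag=4)
```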
12

LINETSKY, VADIM. "THE SPECTRAL DECOMPOSITION OF THE OPTION VALUE." International Journal of Theoretical and Applied Finance 07, no. 03 (May 2004): 337–84. http://dx.doi.org/10.1142/s0219024904002451.

Abstract:
This paper develops a spectral expansion approach to the valuation of contingent claims when the underlying state variable follows a one-dimensional diffusion with the infinitesimal variance a²(x), drift b(x) and instantaneous discount (killing) rate r(x). The Spectral Theorem for self-adjoint operators in Hilbert space yields the spectral decomposition of the contingent claim value function. Based on the Sturm–Liouville (SL) theory, we classify Feller's natural boundaries into two further subcategories: non-oscillatory and oscillatory/non-oscillatory with cutoff Λ≥0 (this classification is based on the oscillation of solutions of the associated SL equation) and establish additional assumptions (satisfied in nearly all financial applications) that allow us to completely characterize the qualitative nature of the spectrum from the behavior of a, b and r near the boundaries, classify all diffusions satisfying these assumptions into the three spectral categories, and present simplified forms of the spectral expansion for each category. To obtain explicit expressions, we observe that the Liouville transformation reduces the SL equation to the one-dimensional Schrödinger equation with a potential function constructed from a, b and r. If analytical solutions are available for the Schrödinger equation, inverting the Liouville transformation yields analytical solutions for the original SL equation, and the spectral representation for the diffusion process can be constructed explicitly. This produces an explicit spectral decomposition of the contingent claim value function.
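In the simplest (purely discrete-spectrum) case, and in illustrative notation rather than the paper's, the expansion takes the familiar eigenfunction form:

```latex
V(t,x) = \sum_{n \ge 1} c_n\, e^{-\lambda_n (T-t)}\, \varphi_n(x),
\qquad
c_n = \int \mathrm{payoff}(y)\, \varphi_n(y)\, \mathfrak{m}(dy),
```

where $(\lambda_n, \varphi_n)$ are eigenvalue/eigenfunction pairs of the pricing operator and $\mathfrak{m}$ is the speed measure of the diffusion; the paper's contribution is the classification of when such expansions exist and take simple forms.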
13

Wang, Yao, Yuanfeng Cai, Xiaomin Hu, Xinqin Gao, Shujuan Li, and Yan Li. "Reliability Uncertainty Analysis Method for Aircraft Electrical Power System Design Based on Variance Decomposition." Applied Sciences 12, no. 6 (March 10, 2022): 2857. http://dx.doi.org/10.3390/app12062857.

Abstract:
As a safety critical system, affected by cognitive uncertainty and flight environment variability, an aircraft electrical power system proves highly uncertain in its failure occurrence and consequences. However, there are few studies on how to reduce this uncertainty in the system design stage, which is of great significance for shortening the development cycle and ensuring flight safety during the operation phase. For this reason, based on variance decomposition theory, this paper proposes an importance measure index of the influence of component failure rate uncertainty on the uncertainty of power supply reliability (system reliability). Furthermore, an algorithm to calculate the measure index is proposed by combining the minimal path set and the Monte Carlo simulation method. Finally, the proposed method is applied to a typical series-parallel system and an aircraft electrical power system, and a criterion named the "quantity and degree optimization criterion" is drawn from the case study. Results demonstrate that the proposed method indeed measures the contribution of component failure rate uncertainty to system reliability uncertainty, and, combined with this criterion, proper solutions can be quickly determined to reduce system reliability uncertainty, which can serve as theoretical guidance for aircraft electrical power system reliability design.
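A rough sketch of this kind of variance-based importance measure, using a hypothetical three-component series-parallel system and plain nested Monte Carlo in place of the paper's minimal-path-set construction:

```python
import numpy as np

rng = np.random.default_rng(4)
T = 100.0  # hypothetical mission time

def system_reliability(lam):
    """Component 0 in series with the parallel pair (1, 2); exponential lifetimes."""
    r = np.exp(-np.asarray(lam) * T)          # component reliabilities, shape (..., 3)
    return r[..., 0] * (1 - (1 - r[..., 1]) * (1 - r[..., 2]))

def draw(n):
    # Epistemic uncertainty on the three failure rates (hypothetical ranges)
    return rng.uniform([1e-4, 2e-4, 2e-4], [3e-4, 6e-4, 6e-4], (n, 3))

def importance(i, n_outer=2000, n_inner=2000):
    """First-order importance of component i: Var(E[R | lambda_i]) / Var(R)."""
    total_var = system_reliability(draw(n_outer * n_inner)).var()
    cond_means = []
    for lam_i in draw(n_outer)[:, i]:
        lam = draw(n_inner)
        lam[:, i] = lam_i                      # freeze component i's failure rate
        cond_means.append(system_reliability(lam).mean())
    return np.var(cond_means) / total_var

print([round(importance(i), 3) for i in range(3)])
```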
14

Gauthier, Gilles, Guillaume Péron, Jean-Dominique Lebreton, Patrick Grenier, and Louise van Oudenhove. "Partitioning prediction uncertainty in climate-dependent population models." Proceedings of the Royal Society B: Biological Sciences 283, no. 1845 (December 28, 2016): 20162353. http://dx.doi.org/10.1098/rspb.2016.2353.

Abstract:
The science of complex systems is increasingly asked to forecast the consequences of climate change. As a result, scientists are now engaged in making predictions about an uncertain future, which entails the efficient communication of this uncertainty. Here we show the benefits of hierarchically decomposing the uncertainty in predicted changes in animal population size into its components due to structural uncertainty in climate scenarios (greenhouse gas emissions and global circulation models), structural uncertainty in the demographic model, climatic stochasticity, environmental stochasticity unexplained by climate–demographic trait relationships, and sampling variance in demographic parameter estimates. We quantify components of uncertainty surrounding the future abundance of a migratory bird, the greater snow goose (Chen caerulescens atlantica), using a process-based demographic model covering their full annual cycle. Our model predicts a slow population increase but with a large prediction uncertainty. As expected from theoretical variance decomposition rules, the contribution of sampling variance to prediction uncertainty rapidly overcomes that of process variance and dominates. Among the sources of process variance, uncertainty in the climate scenarios contributed less than 3% of the total prediction variance over a 40-year period, much less than environmental stochasticity. Our study exemplifies opportunities to improve the forecasting of complex systems using long-term studies and the challenges inherent to predicting the future of stochastic systems.
15

Robinet, Antonin, Khaled Chetehouna, Axel Cablé, Éric Florentin, and Antoine Oger. "Numerical Investigations on Water Mist Fire Extinguishing Performance: Physical and Sensitivity Analysis." Fire 5, no. 6 (October 26, 2022): 176. http://dx.doi.org/10.3390/fire5060176.

Abstract:
Fire safety is a changing science for which the dominant parameters that affect fire suppression are still not well defined. The aim of this work is to show that sensitivity analysis techniques can speed up and improve the design process of fire suppression experiments by water mists. The important parameters that play a role in the performance of water mists were confirmed using the CFD code FDS. Various sensitivity analysis methods based on the factorial design and variance decomposition are used with four input parameters (flow rate, spray cone angle, discharge duration and droplet diameter). In our case, results highlight the importance of the discharge duration for the performance of water mists in a confined environment and a strong non-linear effect of the spray cone angle on temperature when variance decomposition methods are used. Further works will be conducted in order to confirm that sensitivity analysis can be used for application in fire suppression by water mist.
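Variance decomposition (Sobol) sensitivity analysis over the four inputs named above has a compact expression in the SALib package; the analytic toy response below stands in for the expensive FDS fire simulations:

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 4,
    "names": ["flow_rate", "cone_angle", "duration", "droplet_diameter"],
    "bounds": [[1, 10], [30, 120], [10, 60], [50, 400]],  # hypothetical ranges
}

X = saltelli.sample(problem, 1024)     # Saltelli design for Sobol indices
# Placeholder response standing in for an FDS run: a peak gas temperature
Y = 600 - 3.0 * X[:, 2] + 0.02 * (X[:, 1] - 75) ** 2 - 5.0 * X[:, 0]

Si = sobol.analyze(problem, Y)
print(Si["S1"])   # first-order (main effect) indices
print(Si["ST"])   # total-order indices; ST >> S1 flags interactions/non-linearity
```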
16

Snelling, Branwen, Stephen Neethling, Kevin Horsburgh, Gareth Collins, and Matthew Piggott. "Uncertainty Quantification of Landslide Generated Waves Using Gaussian Process Emulation and Variance-Based Sensitivity Analysis." Water 12, no. 2 (February 4, 2020): 416. http://dx.doi.org/10.3390/w12020416.

Abstract:
Simulations of landslide generated waves (LGWs) are prone to high levels of uncertainty. Here we present a probabilistic sensitivity analysis of an LGW model. The LGW model was realised through a smooth particle hydrodynamics (SPH) simulator, which is capable of modelling fluids with complex rheologies and includes flexible boundary conditions. This LGW model has parameters defining the landslide, including its rheology, that contribute to uncertainty in the simulated wave characteristics. Given the computational expense of this simulator, we made use of the extensive uncertainty quantification functionality of the Dakota toolkit to train a Gaussian process emulator (GPE) using a dataset derived from SPH simulations. Using the emulator we conducted a variance-based decomposition to quantify how much each input parameter to the SPH simulation contributed to the uncertainty in the simulated wave characteristics. Our results indicate that the landslide’s volume and initial submergence depth contribute the most to uncertainty in the wave characteristics, while the landslide rheological parameters have a much smaller influence. When estimated run-up is used as the indicator for LGW hazard, the slope angle of the shore being inundated is shown to be an additional influential parameter. This study facilitates probabilistic hazard analysis of LGWs, because it reveals which source characteristics contribute most to uncertainty in terms of how hazardous a wave will be, thereby allowing computational resources to be focused on better understanding that uncertainty.
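The emulation step itself is compact; a minimal sketch with scikit-learn standing in for the Dakota toolkit, with hypothetical inputs and a made-up response in place of the SPH runs:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(5)
# Hypothetical design: (landslide volume [m^3], initial submergence depth [m])
X_train = rng.uniform([1e6, 10], [1e8, 200], (40, 2))
y_train = np.log(X_train[:, 0]) - 0.01 * X_train[:, 1]  # stand-in wave amplitude

gpe = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([1e7, 50.0]),
                               normalize_y=True).fit(X_train, y_train)

# The trained emulator is cheap enough to drive variance-based sensitivity
# analysis with many thousands of samples:
X_new = rng.uniform([1e6, 10], [1e8, 200], (10000, 2))
mean, std = gpe.predict(X_new, return_std=True)
```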
17

MERINO, R., J. POSPÍŠIL, T. SOBOTKA, and J. VIVES. "DECOMPOSITION FORMULA FOR JUMP DIFFUSION MODELS." International Journal of Theoretical and Applied Finance 21, no. 08 (December 2018): 1850052. http://dx.doi.org/10.1142/s0219024918500528.

Abstract:
In this paper, we derive a generic decomposition of the option pricing formula for models with finite activity jumps in the underlying asset price process (SVJ models). This is an extension of the well-known result by Alòs [(2012) A decomposition formula for option prices in the Heston model and applications to option pricing approximation, Finance and Stochastics 16 (3), 403–422, doi: https://doi.org/10.1007/s00780-012-0177-0 ] for Heston [(1993) A closed-form solution for options with stochastic volatility with applications to bond and currency options, The Review of Financial Studies 6 (2), 327–343, doi: https://doi.org/10.1093/rfs/6.2.327 ] SV model. Moreover, explicit approximation formulas for option prices are introduced for a popular class of SVJ models — models utilizing a variance process postulated by Heston [(1993) A closed-form solution for options with stochastic volatility with applications to bond and currency options, The Review of Financial Studies 6 (2), 327–343, doi: https://doi.org/10.1093/rfs/6.2.327 ]. In particular, we inspect in detail the approximation formula for the Bates [(1996), Jumps and stochastic volatility: Exchange rate processes implicit in Deutsche mark options, The Review of Financial Studies 9 (1), 69–107, doi: https://doi.org/10.1093/rfs/9.1.69 ] model with log-normal jump sizes and we provide a numerical comparison with the industry standard — Fourier transform pricing methodology. For this model, we also reformulate the approximation formula in terms of implied volatilities. The main advantages of the introduced pricing approximations are twofold. Firstly, we are able to significantly improve computation efficiency (while preserving reasonable approximation errors) and secondly, the formula can provide an intuition on the volatility smile behavior under a specific SVJ model.
18

Chang, Yen-Ching. "Efficiently Implementing the Maximum Likelihood Estimator for Hurst Exponent." Mathematical Problems in Engineering 2014 (2014): 1–10. http://dx.doi.org/10.1155/2014/490568.

Abstract:
This paper aims to efficiently implement the maximum likelihood estimator (MLE) for Hurst exponent, a vital parameter embedded in the process of fractional Brownian motion (FBM) or fractional Gaussian noise (FGN), via a combination of the Levinson algorithm and Cholesky decomposition. Many natural and biomedical signals can often be modeled as one of these two processes. It is necessary for users to estimate the Hurst exponent to differentiate one physical signal from another. Among all estimators for estimating the Hurst exponent, the maximum likelihood estimator (MLE) is optimal, whereas its computational cost is also the highest. Consequently, a faster but slightly less accurate estimator is often adopted. Analysis discovers that the combination of the Levinson algorithm and Cholesky decomposition can avoid storing any matrix and performing any matrix multiplication and thus save a great deal of computer memory and computational time. In addition, the first proposed MLE for the Hurst exponent was based on the assumptions that the mean is known as zero and the variance is unknown. In this paper, all four possible situations are considered: known mean, unknown mean, known variance, and unknown variance. Experimental results show that the MLE through efficiently implementing numerical computation can greatly enhance the computational performance.
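The computational trick described here, evaluating the exact Gaussian likelihood without storing any covariance matrix, can be sketched with the Durbin–Levinson recursion, a close relative of the Levinson/Cholesky combination in the paper; an illustrative sketch, not the authors' code:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fgn_acov(H, n):
    """Autocovariance of unit-variance fractional Gaussian noise, lags 0..n-1."""
    k = np.arange(n, dtype=float)
    return 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H) + np.abs(k - 1) ** (2 * H))

def neg_loglik(H, x):
    """Exact Gaussian negative log-likelihood in O(n^2) time, O(n) memory."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    g = fgn_acov(H, n)
    phi = np.zeros(n)
    v = g[0]                                            # innovation variance
    ll = -0.5 * (np.log(2 * np.pi * v) + x[0] ** 2 / v)
    for t in range(1, n):
        k = (g[t] - phi[:t - 1] @ g[t - 1:0:-1]) / v    # reflection coefficient
        if t > 1:
            phi[:t - 1] -= k * phi[t - 2::-1]
        phi[t - 1] = k
        v *= 1.0 - k * k
        pred = phi[:t] @ x[t - 1::-1]                   # one-step prediction
        ll += -0.5 * (np.log(2 * np.pi * v) + (x[t] - pred) ** 2 / v)
    return -ll

# MLE of the Hurst exponent (white-noise sample, so H should be near 0.5):
x = np.random.default_rng(6).normal(size=512)
H_hat = minimize_scalar(neg_loglik, bounds=(0.01, 0.99), args=(x,),
                        method="bounded").x
```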
19

Goldstein, Larry, and Haimeng Zhang. "A Berry-Esseen bound for the lightbulb process." Advances in Applied Probability 43, no. 03 (September 2011): 875–98. http://dx.doi.org/10.1017/s0001867800005176.

Abstract:
In the so-called lightbulb process, on days r = 1,…,n, out of n lightbulbs, all initially off, exactly r bulbs, selected uniformly and independent of the past, have their status changed from off to on, or vice versa. With X the number of bulbs on at the terminal time n, an even integer, and μ = n/2, σ² = var(X), we have sup_{z∈R} |P((X − μ)/σ ≤ z) − P(Z ≤ z)| ≤ nΔ̄₀/2σ² + 1.64n/σ³ + 2/σ, where Z is a standard normal random variable and Δ̄₀ = 1/(2√n) + 1/(2n) + e^(−n/2)/3 for n ≥ 6, yielding a bound of order O(n^(−1/2)) as n → ∞. A similar, though slightly larger, bound holds for odd n. The results are shown using a version of Stein's method for bounded, monotone size bias couplings. The argument for even n depends on the construction of a variable X^s on the same space as X that has the X-size bias distribution, that is, which satisfies E[Xg(X)] = μE[g(X^s)] for all bounded continuous g, and for which there exists a B ≥ 0, in this case B = 2, such that X ≤ X^s ≤ X + B almost surely. The argument for odd n is similar to that for even n, but one first couples X closely to V, a symmetrized version of X, for which a size bias coupling of V to V^s can proceed as in the even case. In both the even and odd cases, the crucial calculation of the variance of a conditional expectation requires detailed information on the spectral decomposition of the lightbulb chain.
20

Goldstein, Larry, and Haimeng Zhang. "A Berry-Esseen bound for the lightbulb process." Advances in Applied Probability 43, no. 3 (September 2011): 875–98. http://dx.doi.org/10.1239/aap/1316792673.

Abstract:
In the so-called lightbulb process, on days r = 1,…,n, out of n lightbulbs, all initially off, exactly r bulbs, selected uniformly and independent of the past, have their status changed from off to on, or vice versa. With X the number of bulbs on at the terminal time n, an even integer, and μ = n/2, σ² = var(X), we have sup_{z∈R} |P((X − μ)/σ ≤ z) − P(Z ≤ z)| ≤ nΔ̄₀/2σ² + 1.64n/σ³ + 2/σ, where Z is a standard normal random variable and Δ̄₀ = 1/(2√n) + 1/(2n) + e^(−n/2)/3 for n ≥ 6, yielding a bound of order O(n^(−1/2)) as n → ∞. A similar, though slightly larger, bound holds for odd n. The results are shown using a version of Stein's method for bounded, monotone size bias couplings. The argument for even n depends on the construction of a variable X^s on the same space as X that has the X-size bias distribution, that is, which satisfies E[Xg(X)] = μE[g(X^s)] for all bounded continuous g, and for which there exists a B ≥ 0, in this case B = 2, such that X ≤ X^s ≤ X + B almost surely. The argument for odd n is similar to that for even n, but one first couples X closely to V, a symmetrized version of X, for which a size bias coupling of V to V^s can proceed as in the even case. In both the even and odd cases, the crucial calculation of the variance of a conditional expectation requires detailed information on the spectral decomposition of the lightbulb chain.
21

David, Ikwuoche John, Osebekwin Ebenenzer Asiribo, and Hussain Garba Dikko. "Nonlinear Split-Plot Design Model in Parameters Estimation using Estimated Generalized Least Square - Maximum Likelihood Estimation." ComTech: Computer, Mathematics and Engineering Applications 9, no. 2 (December 31, 2018): 65. http://dx.doi.org/10.21512/comtech.v9i2.4703.

Abstract:
This research aimed to provide a theoretical framework for intrinsically nonlinear models with two additive error terms. To achieve this, an iterative Gauss-Newton procedure based on Taylor series expansion was adopted for the Estimated Generalized Least Squares (EGLS) technique. This technique was applied to estimating the parameters of an intrinsically nonlinear split-plot design model where the variance components were unknown. The unknown variance components were estimated via the Maximum Likelihood Estimation (MLE) method. To achieve numerical stability in the iterative process of estimating the parameters, Householder QR decomposition was used. The results show that the EGLS method presented in this research is applicable to estimating linear fixed-, random-, and mixed-effects models. However, in practical situations, the functional form of the mean in the model is often nonlinear due to the dynamics involved in the system process.
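The estimation engine described above (Gauss–Newton linearization with Householder QR for numerical stability) reduces to a few lines; a generic sketch with a hypothetical exponential-decay model, not the paper's split-plot EGLS code:

```python
import numpy as np

def gauss_newton_qr(f, jac, beta0, y, n_iter=20, tol=1e-10):
    """Each step solves the linearized least-squares problem J @ delta = r
    via QR decomposition (numpy's qr uses Householder reflections)."""
    beta = np.asarray(beta0, dtype=float)
    for _ in range(n_iter):
        r = y - f(beta)                       # residuals
        J = jac(beta)                         # Jacobian from the Taylor expansion
        Q, R = np.linalg.qr(J)
        delta = np.linalg.solve(R, Q.T @ r)   # solve R delta = Q^T r
        beta = beta + delta
        if np.linalg.norm(delta) < tol:
            break
    return beta

# Example: y = b0 * exp(-b1 * t) plus noise
t = np.linspace(0, 5, 50)
y = 2.0 * np.exp(-0.7 * t) + np.random.default_rng(7).normal(0, 0.01, 50)
f = lambda b: b[0] * np.exp(-b[1] * t)
jac = lambda b: np.column_stack([np.exp(-b[1] * t),
                                 -b[0] * t * np.exp(-b[1] * t)])
print(gauss_newton_qr(f, jac, [1.0, 1.0], y))   # approx. [2.0, 0.7]
```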
22

Brody, Dorje C., Lane P. Hughston, and Xun Yang. "Signal processing with Lévy information." Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 469, no. 2149 (January 8, 2013): 20120433. http://dx.doi.org/10.1098/rspa.2012.0433.

Abstract:
Lévy processes, which have stationary independent increments, are ideal for modelling the various types of noise that can arise in communication channels. If a Lévy process admits exponential moments, then there exists a parametric family of measure changes called Esscher transformations. If the parameter is replaced with an independent random variable, the true value of which represents a ‘message’, then under the transformed measure the original Lévy process takes on the character of an ‘information process’. In this paper we develop a theory of such Lévy information processes. The underlying Lévy process, which we call the fiducial process, represents the ‘noise type’. Each such noise type is capable of carrying a message of a certain specification. A number of examples are worked out in detail, including information processes of the Brownian, Poisson, gamma, variance gamma, negative binomial, inverse Gaussian and normal inverse Gaussian type. Although in general there is no additive decomposition of information into signal and noise, one is led nevertheless for each noise type to a well-defined scheme for signal detection and enhancement relevant to a variety of practical situations.
23

Zhi, Qian, Yongbing Li, Peng Shu, Xinrong Tan, Caiwang Tan, and Zhongxia Liu. "Double-Pulse Ultrasonic Welding of Carbon-Fiber-Reinforced Polyamide 66 Composite." Polymers 14, no. 4 (February 12, 2022): 714. http://dx.doi.org/10.3390/polym14040714.

Abstract:
Ultrasonic welding of thermoplastics is widely applied in the automobile and aerospace industries. Increasing the weld area and avoiding thermal decomposition are contradictory factors in improving the strength of ultrasonically welded polymers. In this study, the relations among the loss modulus of carbon-fiber-reinforced polyamide 66 composite (CF/PA 66), the time for obtaining a stable weld area, and the time to CF/PA 66 decomposition are investigated systematically. A double-pulse ultrasonic welding process (DPUW) is then proposed, and the temperature evolutions, the morphologies and structures of fractured surfaces, and the tensile and fatigue properties of the DPUWed joints are measured and assessed. Experimental results show that the optimal welding parameters for DPUW include a weld time of 2.1 s for the first pulse, a cooling time of 12 s, and a weld time of 1.5 s for the second pulse. The DPUW process enlarged the weld area while avoiding decomposition of CF/PA 66 under appropriate welding parameters. Compared to the single-pulse welded joint, the peak load, weld area, and endurance limit of the DPUWed joint increased by about 15%, 23% and 59%, respectively. DPUW also decreases the variance in the strengths of the joints.
24

Zhao, Jingying, Na Dong, Hai Guo, Yifan Liu, and Doudou Yang. "Text Line Recognition of Dai Language using Statistical Characteristics of Texture Analysis and Deep Gaussian Process." International Journal of Circuits, Systems and Signal Processing 15 (May 18, 2021): 476–85. http://dx.doi.org/10.46300/9106.2021.15.52.

Abstract:
In view of the different recognition methods required for different Dai scripts, we propose a novel text line recognition method for New Tai Lue and Lanna Dai based on statistical characteristics of texture analysis and a Deep Gaussian process, which can classify the two kinds of Dai text lines. First, a Dai text line database is constructed, and the images are preprocessed by de-noising and size standardization. Gabor multi-scale decomposition is carried out on the two kinds of Dai text line images, and then the statistical features of image entropy and average row variance are extracted. A multi-layer Deep Gaussian process classifier is constructed. Experiments show that the accuracy of text line classification of New Tai Lue and Lanna Dai based on the Deep Gaussian process is 99.89%, and the values of precision, recall and f1-score are 1, 0.9978 and 0.9989, respectively. The combination of Gabor texture analysis, average row variance statistical features, and the Deep Gaussian process model can effectively classify the text lines of New Tai Lue and Lanna Dai. Comparative experiments show that the classification accuracy of the model is superior to traditional methods, such as Gaussian Naive Bayes, Random Forest, Decision Tree, and Gaussian Process.
25

Li, Zhen, Junfeng Tian, and Pengyuan Zhao. "Software Reliability Estimate with Duplicated Components Based on Connection Structure." Cybernetics and Information Technologies 14, no. 3 (September 1, 2014): 3–13. http://dx.doi.org/10.2478/cait-2014-0028.

Abstract:
Reliability testing of complex software at the system level is impossible due to environmental constraints or time limitations, so its reliability estimate is often obtained from the reliability of subsystems or components. The connection structure is defined and the component-based software reliability is estimated based on it. For today's popular software with duplicated components, an approach to variance estimation of software reliability for complex-structure systems is proposed, which improves on the hierarchical decomposition approach to variance estimation that applies only to series-parallel systems. Experimental results indicate that the proposed approach to variance estimation for the reliability of software with duplicated components has advantages such as a simple calculation process, small error, and suitability for complex-structure systems. Finally, the sensitivity analysis, used to identify critical components for resource allocation, can further improve the software reliability.
26

Tran Huy, Hoang, Huan Nguyen Huu, and Linh Nguyen Thi Thuy. "Relationship between Financial Liberalization and Economic Growth in Emerging Economies: The Case of Vietnam." Journal of Asian Business and Economic Studies 23, no. 01 (January 1, 2016): 25–49. http://dx.doi.org/10.24311/jabes/2016.23.1.02.

Abstract:
This paper examines the process of financial liberalization in Vietnam over the period from 1993 to 2013. On adopting Vector Error Correction Model (VECM), the results suggest that there is a long-term relation between economic growth and financial liberalization, in which the financial market liberalization and financial services liberalization provide better support during the growth of Vietnam’s economy. In addition, using various techniques including Granger causality test, impulse response analysis, and variance decomposition, the paper also clarifies the motives for financial liberalization from the process of short-term financial development and economic growth in the country.
27

BIANCHI, S., and A. PIANESE. "MULTIFRACTIONAL PROPERTIES OF STOCK INDICES DECOMPOSED BY FILTERING THEIR POINTWISE HÖLDER REGULARITY." International Journal of Theoretical and Applied Finance 11, no. 06 (September 2008): 567–95. http://dx.doi.org/10.1142/s0219024908004932.

Abstract:
We propose a decomposition of financial time series into Gaussian subsequences characterized by a constant Hölder exponent. In (multi)fractal models this condition is equivalent to the subsequences themselves being stationary. For the different subsequences, we study the scaling of the variance and the bias that is generated when the Hölder exponent is re-estimated using traditional estimators. The results achieved by both analyses are shown to be strongly consistent with the assumption that the price process can be modeled by the multifractional Brownian motion, a nonstationary process whose Hölder regularity changes from point to point.
28

Garambois, P. A., H. Roux, K. Larnier, W. Castaings, and D. Dartus. "Characterization of process-oriented hydrologic model behavior with temporal sensitivity analysis for flash floods in Mediterranean catchments." Hydrology and Earth System Sciences 17, no. 6 (June 27, 2013): 2305–22. http://dx.doi.org/10.5194/hess-17-2305-2013.

Abstract:
This paper presents a detailed analysis of 10 flash flood events in the Mediterranean region using the distributed hydrological model MARINE. Characterizing catchment response during flash flood events may provide new and valuable insight into the dynamics involved in extreme catchment response and their dependency on physiographic properties and flood severity. The main objective of this study is to analyze the sensitivity of a flash-flood-dedicated hydrologic model with an approach new to hydrology, which decomposes the variance of model outputs into temporal patterns of parameter sensitivity. Such approaches enable ranking of uncertainty sources for nonlinear and nonmonotonic mappings with a low computational cost. The hydrologic model and sensitivity analysis are used as learning tools on a large flash flood dataset. With Nash performances above 0.73 on average for this extended set of 10 validation events, the five sensitive parameters of the MARINE process-oriented distributed model are analyzed. This contribution shows that soil depth explains more than 80% of model output variance when most hydrographs are peaking. Moreover, the lateral subsurface transfer is responsible for 80% of model variance for some catchment-flood events' hydrographs during slow-declining limbs. The unexplained variance of model output, representing interactions between parameters, turns out to be very low during modeled flood peaks and indicates that a parsimonious model parameterization is appropriate to tackle the problem of flash floods. Interactions observed after model initialization or rainfall intensity peaks call for improving the representation of water partitioning between flow components and the initialization itself. This paper gives a practical framework for application of this method to other models, landscapes and climatic conditions, potentially helping to improve process understanding and representation.
29

Wróblewska, Justyna. "The Analysis of Real Business Cycle Model with the Use of Bayesian VEC Type Models." Przegląd Statystyczny 64, no. 4 (December 31, 2017): 357–72. http://dx.doi.org/10.5604/01.3001.0014.0827.

Abstract:
In many economic theories and models, both long- and short-run relationships between variables are in focus. It is also the case in the real business cycle model (RBC model). The main aim of the paper is empirical analysis of the basic, three-variable RBC model for the Polish data of product, private consumption and investment over the years 1995–2015. A group of Bayesian VEC models with additional short-term restrictions is employed in this research. The Bayesian model comparison leads to the conclusion that the analyzed process is driven by two stochastic trends and one common cycle. Additionally, in order to evaluate the importance of long- and short-run shocks, the forecast error variance decomposition and the impulse response functions are calculated.
30

Auinger, Bernhard, Thomas Zemen, Michael Gadringer, Adam Tankielun, Christoph Gagern, and Wolfgang Bösch. "Validation of the Decomposition Method for Fast MIMO Over-the-Air Measurements." International Journal of Antennas and Propagation 2017 (2017): 1–18. http://dx.doi.org/10.1155/2017/8284917.

Abstract:
Over-the-air (OTA) throughput tests of wireless Multiple-Input Multiple-Output (MIMO) devices are an important tool for network operators and manufacturers. The user equipment (UE) is placed in an anechoic chamber and a random fading process is emulated by a base-station emulator (BSE). The antenna characteristic of the UE is taken into account by sampling the sphere around the UE with the BSE test antenna at a large number of positions. For low-variance throughput results, long measurement intervals over many fading realizations are required, leading to long and expensive measurement periods in an anechoic chamber. To speed up the OTA test, we analyze the Decomposition Method (DM). The DM splits the throughput measurement into two parts: (1) a receiver algorithm performance test taking the fading process into account and (2) an antenna performance test without fading process emulation. Both results are combined into a single throughput estimate. The DM allows for a measurement time reduction of more than one order of magnitude. We provide an analytic and numerical analysis as well as measurements. Our detailed results show the validity of the DM in all practical settings.
31

Wang, Xiaolei, Huiliang Cao, Yuzhao Jiao, Taishan Lou, Guoqiang Ding, Hongmei Zhao, and Xiaomin Duan. "Research on Novel Denoising Method of Variational Mode Decomposition in MEMS Gyroscope." Measurement Science Review 21, no. 1 (February 1, 2021): 19–24. http://dx.doi.org/10.2478/msr-2021-0003.

Abstract:
The noise signal in the gyroscope is divided into four levels: the sampling frequency level, the device bandwidth frequency level, the resonant frequency level, and the carrier frequency level. In this paper, the signal in a dual-mass MEMS gyroscope is analyzed. Based on the variational mode decomposition (VMD) algorithm, a novel dual-mass MEMS gyroscope noise reduction method is proposed. The VMD method with the four different center-frequency levels is used to process the original output signal of the MEMS gyroscope, and the results are analyzed by the Allan variance method, which shows that the ARW of the gyroscope is improved from 1.998 × 10⁻¹ °/√h to 1.552 × 10⁻⁴ °/√h and the BS from 2.5261 °/h to 0.0093 °/h.
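The Allan variance analysis used above to read off ARW and bias stability has a compact standard implementation; a textbook sketch, not the authors' code:

```python
import numpy as np

def allan_deviation(rate, fs, taus):
    """Overlapping Allan deviation of a rate signal sampled at fs Hz.
    ARW is read off the slope -1/2 region of the log-log curve; bias
    stability off the flat minimum."""
    rate = np.asarray(rate, dtype=float)
    theta = np.cumsum(rate) / fs                  # integrate rate to angle
    devs = []
    for tau in taus:                              # require 1 <= m < len(theta)//2
        m = int(tau * fs)
        d = theta[2 * m:] - 2 * theta[m:-m] + theta[:-2 * m]
        avar = np.sum(d ** 2) / (2 * tau ** 2 * (len(theta) - 2 * m))
        devs.append(np.sqrt(avar))
    return np.array(devs)
```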
32

NUNES, JEAN-CLAUDE, and ÉRIC DELÉCHELLE. "EMPIRICAL MODE DECOMPOSITION: APPLICATIONS ON SIGNAL AND IMAGE PROCESSING." Advances in Adaptive Data Analysis 01, no. 01 (January 2009): 125–75. http://dx.doi.org/10.1142/s1793536909000059.

Abstract:
In this paper, we present some recent works on data analysis and synthesis based on Empirical Mode Decomposition (EMD): first, a direct 2D extension of the original Huang EMD algorithm with applications to texture analysis and fractional Brownian motion synthesis; second, an analytical version of EMD based on a PDE in 1D space. We propose a 2D extension of the so-called "sifting process" used in the original Huang EMD. The 2D sifting process is performed in two steps: extrema detection (by neighboring window or morphological operators) and surface interpolation by splines (thin plate splines or multigrid B-splines). We propose a multiscale segmentation approach using the zero-crossings of each 2D intrinsic mode function (IMF) obtained by 2D-EMD. We apply the Hilbert–Huang transform (which consists of two parts: (a) empirical mode decomposition, and (b) Hilbert spectral analysis) to texture analysis. We analyze each 2D-IMF obtained by 2D-EMD by studying local properties (amplitude, phase, isotropy, and orientation) extracted from the monogenic signal of each of them. The monogenic signal proposed by Felsberg et al. is a 2D generalization of the analytic signal, where the Riesz transform replaces the Hilbert transform. These local properties are obtained by the structure multivector as proposed by Felsberg and Sommer. We present numerical simulations of fractional Brownian textures. Recent works published by Flandrin et al. relate that, in the case of fractional Gaussian noise (fGn), EMD acts essentially as a dyadic filter bank that can be compared to wavelet decompositions. Moreover, in the context of fGn identification, Flandrin et al. show that variance progression across IMFs is related to the Hurst exponent H through a scaling law. Starting from these results, we propose an algorithm to generate fGn, and fractional Brownian motion (fBm) of Hurst exponent H, from the IMFs obtained by EMD of a white noise, i.e., ordinary Gaussian noise (fGn with H = 1/2). Deléchelle et al. proposed an analytical approach (formulated as a partial differential equation (PDE)) for the sifting process. This PDE-based approach is applied to signals. The analytical approach has a behavior similar to that of the EMD proposed by Huang.
33

BHATNAGAR, GAURAV, and BALASUBRAMANIAN RAMAN. "ROBUST REFERENCE-WATERMARKING SCHEME USING WAVELET PACKET TRANSFORM AND BIDIAGONAL-SINGULAR VALUE DECOMPOSITION." International Journal of Image and Graphics 09, no. 03 (July 2009): 449–77. http://dx.doi.org/10.1142/s0219467809003538.

Abstract:
This paper presents a new robust reference watermarking scheme based on the wavelet packet transform (WPT) and bidiagonal singular value decomposition (bSVD) for copyright protection and authenticity. A small gray scale logo is used as the watermark instead of a randomly generated Gaussian-noise-type watermark. A reference watermark is generated from the original watermark, and the embedding is done in the wavelet packet domain by modifying the bidiagonal singular values. For robustness and imperceptibility, the watermark is embedded in selected sub-bands, chosen by taking into account the variance of the sub-bands, which serves as a measure of the watermark magnitude that could be imperceptibly embedded in each block. For this purpose, the variance is calculated in a small moving square window of size S_p × S_p (typically a 3 × 3 or 5 × 5 window) centered at the pixel. A reliable watermark extraction is developed, in which the watermark bidiagonal singular values are extracted by considering the distortion caused by attacks in neighboring bidiagonal singular values. Experimental evaluation demonstrates that the proposed scheme is able to withstand a variety of attacks, and its superiority is demonstrated through comparison with existing methods.
34

Yan, Yu, and Yiming Wang. "Asset Pricing Model Based on Fractional Brownian Motion." Fractal and Fractional 6, no. 2 (February 11, 2022): 99. http://dx.doi.org/10.3390/fractalfract6020099.

Abstract:
This paper introduces a unique price motion process based on fractional Brownian motion. We introduce an imaginary number into the agent's subjective probability for reasons of convergence; further, a result similar to Itô's Lemma is proved. As an application, this result is applied to Merton's dynamic asset pricing framework. We find that the fourth-order moment of fractional Brownian motion enters the agent's decision-making. The decomposition of the variance of economic indexes supports the possibility of complex numbers in price movement.
35

He, Zhengxiang, Shaowei Ma, Liguan Wang, and Pingan Peng. "A Novel Wavelet Selection Method for Seismic Signal Intelligent Processing." Applied Sciences 12, no. 13 (June 25, 2022): 6470. http://dx.doi.org/10.3390/app12136470.

Abstract:
Wavelet transform is a widespread and effective method in seismic waveform analysis and processing. Choosing a suitable wavelet has also aroused many scholars' research interest and produced many effective strategies. However, with the convenience of modern seismic data acquisition, the existing wavelet selection methods are unsuitable for big datasets. Therefore, we propose a novel wavelet selection method for big datasets in seismic signal intelligent processing. The relevance r is calculated using the seismic waveform's correlation coefficient and variance contribution rate. Values of r are then calculated for all seismic signals in the dataset to form a set. Furthermore, with the mean value μ and variance σ² of that set, we define the decomposition stability w as μ/σ². The wavelet that maximizes w for the dataset is considered the optimal wavelet. We applied this method to automatic mining-induced seismic signal classification and automatic seismic P arrival picking. In the classification experiments, the mean accuracy is 93.13% using the selected wavelet, 2.22% higher than with the other wavelets tested. In the picking experiments, the mean picking error is 0.59 s using the selected wavelet versus 0.71 s using the others. Moreover, the wavelet packet decomposition level does not affect the selection of wavelets. These results indicate that our method can genuinely enhance the intelligent processing of seismic signals.
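A sketch of the selection rule as described, assuming the PyWavelets package; the exact way the correlation coefficient and the variance contribution rate combine into r is not given in the abstract, so the product used here is an illustrative assumption:

```python
import numpy as np
import pywt

def relevance(signal, wavelet, level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Reconstruct from the approximation coefficients only (details zeroed)
    approx_only = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
    approx = pywt.waverec(approx_only, wavelet)[: len(signal)]
    corr = np.corrcoef(signal, approx)[0, 1]          # correlation coefficient
    var_contrib = np.var(approx) / np.var(signal)     # variance contribution rate
    return corr * var_contrib                         # assumed combination

def decomposition_stability(dataset, wavelet):
    r = np.array([relevance(x, wavelet) for x in dataset])
    return r.mean() / r.var()                         # w = mu / sigma^2

dataset = [np.random.default_rng(i).normal(size=1024).cumsum() for i in range(50)]
best = max(["haar", "db4", "sym5", "coif3"],
           key=lambda w: decomposition_stability(dataset, w))
```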
36

Wang, Shu Wen, and Te Li Su. "Optimizing Experimental Variables to Enhance the Biodegradability of Polylactic Acid." Applied Mechanics and Materials 472 (January 2014): 815–19. http://dx.doi.org/10.4028/www.scientific.net/amm.472.815.

Abstract:
Polylactic acid (PLA) is a biodegradable polyester manufactured mainly from bio-based raw materials by fermentation and chemical synthesis, or by polymerization of monomers from petrochemical products. A polymer made from renewable resources, such as microorganisms, plants, and animals, will decompose into water and carbon dioxide if the natural landfill or compost environment has sufficient moisture, temperature, oxygen, and suitable microorganisms. This paper therefore aims to improve the hydrolysis rate of PLA during the whole decomposition process and to increase the decomposition rate of PLA in the natural environment. The Taguchi method was used for the parameter design of PLA hydrolysis, with the conditions that affect PLA hydrolysis chosen as control factors: temperature, bacteria, ventilation degree, and nutrient. The experiment was conducted with an L8 orthogonal array, and analysis of variance was used to find the significant factors and the optimal conditions of PLA hydrolysis. We found that temperature and bacteria are significant factors according to the analysis of variance. Lastly, confirmation experiments verified the reproducibility of this experiment: the obtained S/N ratios were greater than those of the eight PLA hydrolysis runs, which indicates that the experiment is reliable.
37

Ilmiyah, Bachrotil, and Tika Widiastuti. "Kondisi Variabel Makro Ekonomi Islam Ditinjau Dari Pengaruh Kebijakan Moneter Studi Kasus : Indonesia Periode Tahun 2010-2014" [The Condition of Islamic Macroeconomic Variables in View of the Influence of Monetary Policy. Case Study: Indonesia, 2010–2014]. Jurnal Ekonomi Syariah Teori dan Terapan 2, no. 9 (December 17, 2015): 714. http://dx.doi.org/10.20473/vol2iss20159pp714-727.

Abstract:
This study raises the issue of how Islamic macroeconomic conditions in Indonesia are affected by the role of Bank Indonesia. It aims to determine whether the monetary policy implemented by BI is able to influence Islamic macroeconomic conditions. The variables used in this study are inflation and the profit-sharing ratio, as variables describing Islamic macroeconomic conditions. The data used are secondary data taken from the official website of Bank Indonesia, in the form of a time series from January 2010 until December 2014. The study uses EViews 8 to process the data. Based on the IRF results and statistical analysis, the monetary policy variables and the Islamic macroeconomic variables show a long-term relationship. The variance decomposition results show that each variable contributes to the other variables with a composition of no more than 35%.
38

Kannattukunnel, Rohit S. "Global Patents on 3D Printing: Revelations Based on Vector Autoregression Analysis for Three Decades." International Journal of Innovation and Technology Management 13, no. 06 (November 14, 2016): 1750004. http://dx.doi.org/10.1142/s0219877017500043.

Abstract:
Engineers and designers from the automotive and aerospace sectors have been using 3D printing (3DP) for decades to build prototypes. However, 3DP became popular only recently. This paper is divided into three sections. Section 1 is introductory in nature and deals with current trends, the modeling process of printing, and the different categories of 3DP. Section 2 deals with the research methodology. A technique well suited to studying innovation with time series data, vector autoregression (VAR), is applied to the world patent data on 3DP, based on information provided by the Government of the UK and the International Monetary Fund (IMF). Section 3 attempts to forecast future trends in 3DP by using two techniques, viz. the impulse response function and variance decomposition. The VAR analysis revealed that GDP is not directly instrumental in the advancement in patenting of 3DP technology. Results captured by way of the impulse response function suggest that when a shock is given to the patent ratio (PR) itself, it decreases sharply, whereas when a shock is given to investment, the PR undergoes a steady decline. Thus, any adverse shock imparted to investments directly reduces the patent ratio. Lastly, when an impulse is given to GDP, the PR continuously increases, which implies that an increase in GDP raises investment, which ultimately increases the PR. The results of the variance decomposition indicate that in the initial periods the PR itself explains the maximum variance, followed by GDP and, to the least extent, investment. The observed changes in the explanatory pattern of the variance imply that more investment in technology is instrumental in increasing the patent ratio in the G7 countries, as per the vector error correction (VEC) model developed here. Though during the nascent stage of emerging technologies investment in technology may not necessarily increase the patent ratio, the result obtained brings to light interesting insights.
APA, Harvard, Vancouver, ISO, and other styles
39

Lee, Chanhwa. "Observability Decomposition-Based Decentralized Kalman Filter and Its Application to Resilient State Estimation under Sensor Attacks." Sensors 22, no. 18 (September 13, 2022): 6909. http://dx.doi.org/10.3390/s22186909.

Full text
Abstract:
This paper considers a discrete-time linear time-invariant system in the presence of Gaussian disturbances/noises and sparse sensor attacks. First, we propose an optimal decentralized multi-sensor information-fusion Kalman filter based on the observability decomposition for the case with no sensor attack. The proposed decentralized Kalman filter deploys a bank of local observers, each of which uses its own single-sensor information and generates a state estimate for its observable subspace. In the absence of an attack, the state estimate achieves the minimum variance, and the computational process does not suffer from a divergent error covariance matrix. Second, the decentralized Kalman filter method is applied in the presence of sparse sensor attacks as well as Gaussian disturbances/noises. Based on redundant observability, an attack detection scheme using the χ2 test and a resilient state estimation algorithm using the maximum likelihood decision rule among multiple hypotheses are presented. The secure state estimation algorithm finally produces the state estimate that is most likely to have minimum variance with an unbiased mean. Simulation results on a motor-controlled multiple torsion system are provided to validate the effectiveness of the proposed algorithm.
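
A minimal sketch of the standard Kalman predict/update cycle that each local observer in such a scheme would run on its own sensor is given below; the decentralized fusion step and the paper's χ2 attack test are only hinted at in the closing comment.

```python
# A minimal sketch of one Kalman predict/update cycle for a local observer;
# the paper's observability decomposition and fusion logic are omitted.
import numpy as np

def kalman_step(x, P, z, A, C, Q, R):
    """One cycle for x_{k+1} = A x_k + w (cov Q), z_k = C x_k + v (cov R)."""
    # Predict
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update with this sensor's measurement z
    S = C @ P_pred @ C.T + R                 # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)      # Kalman gain
    innov = z - C @ x_pred
    x_new = x_pred + K @ innov
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new, innov, S

# The normalized innovation innov.T @ inv(S) @ innov is the chi-squared
# statistic a detector could threshold to flag a compromised sensor.
```
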
APA, Harvard, Vancouver, ISO, and other styles
40

Shao, Yuehjen E., and Shih-Chieh Lin. "Using a Time Delay Neural Network Approach to Diagnose the Out-of-Control Signals for a Multivariate Normal Process with Variance Shifts." Mathematics 7, no. 10 (October 13, 2019): 959. http://dx.doi.org/10.3390/math7100959.

Full text
Abstract:
With the rapid development of advanced sensor technologies, it has become popular to monitor multiple quality variables in a manufacturing process. Consequently, multivariate statistical process control (MSPC) charts have been commonly used for monitoring multivariate processes. The primary function of MSPC charts is to trigger an out-of-control signal when faults occur in a process. However, because two or more quality variables are involved in a multivariate process, it is very difficult to diagnose which variable, or which combination of variables, is responsible for the MSPC signal. Although some statistical decomposition methods may provide possible solutions, their mathematical difficulty can limit their application. This study presents a time delay neural network (TDNN) classifier to diagnose the quality variables that cause out-of-control signals for a multivariate normal process (MNP) with variance shifts. To demonstrate the effectiveness of the proposed approach, a series of simulated experiments was conducted. The results were compared with artificial neural network (ANN), support vector machine (SVM) and multivariate adaptive regression splines (MARS) classifiers. The proposed TDNN classifier was found to accurately recognize the contributors to out-of-control signals for MNPs.
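
One common reading of a TDNN is a feed-forward network fed with a tapped delay line of lagged observations; the sketch below illustrates that idea with scikit-learn, using simulated data and an arbitrary window length rather than the authors' exact architecture.

```python
# A minimal sketch of the tapped-delay-line idea behind a TDNN classifier:
# each input is a window of d consecutive multivariate observations, and a
# neural network maps it to the variable(s) responsible for the signal.
# Shapes and labels here are hypothetical placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier

def delay_embed(X, d):
    """Stack d consecutive p-variate samples into one feature vector."""
    n, p = X.shape
    return np.stack([X[i : i + d].ravel() for i in range(n - d + 1)])

rng = np.random.default_rng(2)
X = rng.standard_normal((500, 2))            # two quality variables
y = rng.integers(0, 3, size=500 - 4)         # placeholder fault labels

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)
clf.fit(delay_embed(X, d=5), y)              # d=5 time-delay window
```
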
APA, Harvard, Vancouver, ISO, and other styles
41

Benlagha, Noureddine, and Lanouar Charfeddine. "Analysis of the Effect of the European Debt Crisis on the Saudi Arabian Economy." Studies in Business and Economics 24, no. 1 (December 2021): 61–85. http://dx.doi.org/10.29117/sbe.2021.0127.

Full text
Abstract:
This paper investigates the economic impact of the 2009 European debt crisis on Saudi Arabia's real economy from 2004 Q2 to 2014 Q2 using a structural vector autoregressive (SVAR) model. The impulse response functions obtained from the aggregated data show that a shock to European imports from Saudi Arabia had a significant impact on the real effective exchange rate, inflation rate, and economic growth that lasted for three periods. Moreover, the variance decomposition analysis shows that Europe's imports from Saudi Arabia explain approximately 20% of the variance of the Saudi real effective exchange rate and real economic growth, 10% of the interest rate variability, and only 5% of the inflation rate variance. The individual-country analysis shows that shocks to imports from all European countries had an instantaneous impact, except for France and Spain, where the impact on economic growth was significant in the second and sixth periods, respectively. The results suggest that Saudi Arabian policymakers should continue the process of export diversification in order to reduce dependence on this region.
APA, Harvard, Vancouver, ISO, and other styles
42

Bhattacharyya, B., and S. Chakraborty. "Stochastic Sensitivity of 3D-Elastodynamic Response Subjected to Random Ground Excitation." International Journal of Structural Stability and Dynamics 03, no. 02 (June 2003): 283–97. http://dx.doi.org/10.1142/s0219455403000847.

Full text
Abstract:
The present study deals with the sensitivity of the dynamic response of structures with uncertain design parameters subjected to random earthquake loading. The earthquake is modeled as a stationary random process defined by the Kanai–Tajimi power spectral density. The uncertain design parameters are modeled as a homogeneous Gaussian process and discretized through 3D local averaging. Subsequently, the Cholesky decomposition of the respective covariance matrix is used to simulate random values of the design parameters. Neumann expansion blended with Monte Carlo simulation (NE-MCS) is explored for computing the response sensitivity in the frequency domain. Application examples related to a building frame and a gravity dam are presented, serving to validate the NE-MCS technique in terms of its accuracy and effectiveness compared with direct Monte Carlo simulation and the perturbation method.
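
The Cholesky simulation step described in the abstract is straightforward to sketch with NumPy; the parameter means and covariance matrix below are hypothetical.

```python
# A minimal sketch of the Cholesky step: simulating correlated Gaussian
# design parameters from a given covariance matrix.
import numpy as np

mean = np.array([1.0, 1.0, 1.0])             # hypothetical parameter means
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.04, 0.01],
                [0.00, 0.01, 0.04]])         # hypothetical covariance from local averaging

L = np.linalg.cholesky(cov)                  # cov = L @ L.T
rng = np.random.default_rng(3)
samples = mean + rng.standard_normal((10_000, 3)) @ L.T   # correlated draws

# Each row is one Monte Carlo realization of the uncertain parameters;
# the sample covariance recovers `cov` up to sampling error.
print(np.cov(samples, rowvar=False))
```
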
APA, Harvard, Vancouver, ISO, and other styles
43

Gao, Lei, Xiaoke Li, Yanchun Yao, Yucong Wang, Xuzhe Yang, Xinyu Zhao, Duanyang Geng, Yang Li, and Li Liu. "A Modal Frequency Estimation Method of Non-Stationary Signal under Mass Time-Varying Condition Based on EMD Algorithm." Applied Sciences 12, no. 16 (August 16, 2022): 8187. http://dx.doi.org/10.3390/app12168187.

Full text
Abstract:
A method to estimate modal frequency based on empirical mode decomposition (EMD) and ensemble empirical mode decomposition (EEMD) is proposed. This method reduces the difficulty of identifying the modal frequencies of combine harvesters. First, we used 16 acceleration sensors installed at different test points to collect vibration signals of a corn combine harvester under operating conditions (mass time-varying conditions). Second, we calculated the mean value, variance and root mean square (RMS) value of the vibration signals and analyzed their stationarity. Third, the main frequencies of the 16 points were extracted using the EMD and EEMD methods. Finally, taking the modal frequencies identified by the SSI algorithm as the standard, we calculated the fitting degrees of the EMD and EEMD methods. The results show that in different time periods (0~60 s and 60~120 s), the maximum differences of the mean value, variance and RMS value of the signals were 0.8633, 171.1629 and 11.3767, so the vibration signal under the field-harvesting operating condition can be regarded as a typical non-stationary random vibration signal. The EMD method suffered more modal aliasing than EEMD; when the fitting equations of the EMD, EEMD and SSI methods were obtained, the Euclidean distance between the EMD fitting equation and the SSI fitting equation was 446.7883, while that between EEMD and SSI was 417.2845. The vibration frequencies calculated by the EEMD method are therefore closer to the modal frequencies identified by the SSI algorithm. The proposed method provides a reference for modal frequency identification and vibration control in complex working environments.
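
A minimal sketch of EEMD-based dominant-frequency extraction is shown below, assuming the third-party PyEMD package (EMD-signal on PyPI); the signal is synthetic, and the paper's sensor data and SSI comparison are not reproduced.

```python
# A minimal sketch of EEMD-based frequency estimation on a synthetic signal,
# assuming the PyEMD package (pip install EMD-signal).
import numpy as np
from PyEMD import EEMD

fs = 1000.0                                   # hypothetical sampling rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)
signal = np.sin(2 * np.pi * 12 * t) + 0.5 * np.sin(2 * np.pi * 47 * t)

eemd = EEMD(trials=100)                       # ensemble of noise-assisted EMD runs
imfs = eemd.eemd(signal)                      # intrinsic mode functions

for k, imf in enumerate(imfs):
    spec = np.abs(np.fft.rfft(imf))
    freqs = np.fft.rfftfreq(imf.size, d=1.0 / fs)
    print(f"IMF {k}: dominant frequency {freqs[spec.argmax()]:.1f} Hz")
```
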
APA, Harvard, Vancouver, ISO, and other styles
44

Vongkulluksn, Vanessa W., and Kui Xie. "Multilevel Latent State-Trait Models with Experience Sampling Data: An Illustrative Case of Examining Situational Engagement." Open Education Studies 4, no. 1 (January 1, 2022): 252–72. http://dx.doi.org/10.1515/edu-2022-0016.

Full text
Abstract:
Learning processes often occur at a situational level. Changes in learning context have implications for how students are motivated or are able to cognitively process information. To study such situational phenomena, the Experience Sampling Method (ESM) can help assess psychological variables in the moment and in context. However, data collected via ESM are voluminous and imbalanced. Special types of statistical modeling are needed to handle this unique data structure in order to maximize its potential for scientific discovery. The purpose of this paper is to illustrate how Latent State-Trait modeling used within a multilevel framework can help model complex data as derived by ESM. A study of situational engagement is presented as an illustrative case. We describe methodological considerations which facilitated the following analyses: (1) decomposition of trait-level and state-level engagement; (2) group differences in variance decomposition; and (3) predicting the state component of engagement. Discussions include the relative advantages and disadvantages of ESM and multilevel Latent State-Trait modeling in facilitating situational psychological research.
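
The trait/state decomposition can be illustrated with an intercept-only multilevel model, whose between-person variance plays the role of the stable trait and whose residual variance plays the role of the situational state; the column names and simulated data below are hypothetical.

```python
# A minimal sketch of a trait/state variance decomposition via an
# intercept-only multilevel model; the ICC is the "trait" share.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n_students, n_beeps = 50, 20
trait = rng.normal(0, 1.0, n_students)                     # person-level component
data = pd.DataFrame({
    "student": np.repeat(np.arange(n_students), n_beeps),
    "engagement": np.repeat(trait, n_beeps)
                  + rng.normal(0, 1.5, n_students * n_beeps),
})

model = smf.mixedlm("engagement ~ 1", data, groups=data["student"])
fit = model.fit()

between = float(fit.cov_re.iloc[0, 0])                     # trait variance
within = fit.scale                                         # state + error variance
print("trait share (ICC):", between / (between + within))
```
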
APA, Harvard, Vancouver, ISO, and other styles
45

Kaimann, Daniel. "Behind the Review Curtain: Decomposition of Online Consumer Ratings in Peer-to-Peer Markets." Sustainability 12, no. 15 (July 31, 2020): 6185. http://dx.doi.org/10.3390/su12156185.

Full text
Abstract:
Peer-to-peer markets are especially suitable for the analysis of online ratings, as they represent two-sided markets that match buyers to sellers and thus reduce the scope for opportunistic behavior. We decompose online ratings by focusing on the customer's decision-making process on the leading peer-to-peer ridesharing platform BlaBlaCar. Using data on 17,584 users registered between 2004 and 2014, we analyze their online ratings, focusing on the decomposition of the explicit determinants that account for the variance of online ratings. We find clear evidence that a driver's attitude towards music, pets, smoking, and conversation has a significantly positive influence on the online ratings he or she receives. However, we also show that the interaction of female drivers and their attitude towards pets has a significantly negative effect on average ratings.
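
The kind of rating decomposition described above can be sketched as an OLS model with driver attributes and a female × pets interaction; the column names and simulated data are hypothetical stand-ins for the BlaBlaCar variables.

```python
# A minimal sketch of a rating decomposition with an interaction term;
# all columns are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 1000
df = pd.DataFrame({
    "rating": rng.normal(4.5, 0.3, n),
    "music": rng.integers(0, 2, n),
    "pets": rng.integers(0, 2, n),
    "smoking": rng.integers(0, 2, n),
    "conversation": rng.integers(0, 2, n),
    "female": rng.integers(0, 2, n),
})

# female * pets expands to female + pets + female:pets
model = smf.ols("rating ~ music + smoking + conversation + female * pets", df)
print(model.fit().summary())
```
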
APA, Harvard, Vancouver, ISO, and other styles
46

Barberà-Mariné, M. Glòria, Lorella Cannavacciuolo, Adelaide Ippolito, Cristina Ponsiglione, and Giuseppe Zollo. "The weight of organizational factors on heuristics." Management Decision 57, no. 11 (November 12, 2019): 2890–910. http://dx.doi.org/10.1108/md-06-2017-0574.

Full text
Abstract:
Purpose The purpose of this paper is to investigate the influence of organizational factors on individual decision-making under conditions of uncertainty and time pressure. A method to assess the impact of individual and organizational factors on individual decisions is proposed and experimented in the context of triage decision-making process. Design/methodology/approach The adopted methodology is based on the bias-variance decomposition formula. The method, usually applied to assess the predictive accuracy of heuristics, has been adjusted to discriminate between the impact of organizational and individual factors affecting heuristic processes. To test the methodology, 25 clinical scenarios have been designed and submitted, through simulations, to the triage nurses of two Spanish hospitals. Findings Nurses’ decisions are affected by organizational factors in certain task conditions, such as situations characterized by complete and coherent information. When relevant information is lacking and available information is not coherent, decision-makers base their assessments on their personal experience and gut feeling. Research limitations/implications Discriminating between the influence of organizational factors and individual ones is the starting point for a more in-depth understanding of how organization can guide the decision process. Using simulations of clinical scenarios in field research does not allow for capturing the influence of some contextual factors, such as the nurses’ stress levels, on individual decisions. This issue will be addressed in further research. Practical implications Bias and variance are useful measurements for detecting process improvement actions. A bias prevalence requires a re-design of organizational settings, whereas training would be preferred when variance prevails. Originality/value The main contribution of this work concerns the novel interpretation of bias and variance concepts to assess organizational factors’ influence on heuristic decision-making processes, taking into account the level of complexity of decision-related tasks.
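
The bias-variance decomposition underlying the method can be illustrated with a small worked example; the triage assessments below are made up, and the irreducible-noise term is omitted.

```python
# A minimal worked example of the bias-variance decomposition:
#   E[(y_hat - y)^2] = bias^2 + variance   (noise term omitted here).
import numpy as np

y_true = 3.0                                  # hypothetical "correct" triage level
y_hat = np.array([3, 4, 3, 2, 3, 4, 3, 3], dtype=float)   # nurses' assessments

bias_sq = (y_hat.mean() - y_true) ** 2        # systematic, organization-level error
variance = y_hat.var()                        # person-to-person dispersion
mse = np.mean((y_hat - y_true) ** 2)

assert np.isclose(mse, bias_sq + variance)    # the decomposition is exact here
print(f"bias^2 = {bias_sq:.3f}, variance = {variance:.3f}, MSE = {mse:.3f}")
```

A bias-dominated result would point to organizational redesign, a variance-dominated one to training, mirroring the practical implication stated in the abstract.
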
APA, Harvard, Vancouver, ISO, and other styles
47

Yang, Lin, Jinzhong Yang, Song Zhang, Yu Meng, Yimin Yang, Xuwen Li, Dongmei Hao, and Jing Shao. "Studies on Pulse Wave Model Based on Multiple Gaussian Decomposition." Journal of Medical Imaging and Health Informatics 10, no. 3 (March 1, 2020): 641–45. http://dx.doi.org/10.1166/jmihi.2020.2911.

Full text
Abstract:
The characteristics of the pulse waveform reflect the physiological and pathological state of human beings; however, a comprehensive and detailed understanding of these characteristics is still lacking. The present research therefore proposes a new method of pulse waveform analysis that builds on the traditional method. In this study, data from 932 subjects were examined, including the finger-vein volume pulse wave and the radial-artery pressure pulse wave. The pulse wave was re-analyzed in combination with the physiological process of the heart. The pulse wave data were processed using a Gaussian-function decomposition fitting method to recover information lost in the pulse wave. The performance of this method was compared with traditional pulse wave fitting. It was found that the difference in the normalized k value of the pulse wave increased by 0.1%, the absolute value of the average residuals increased, and the variance of all data increased threefold. The method fully reflects the characteristics of the pulse wave and was found to be an effective characteristic analysis method, thereby providing a scientific basis for comprehensive analysis of the pulse wave over the whole cardiac cycle.
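
Multi-Gaussian decomposition fitting of a waveform can be sketched with SciPy's nonlinear least squares; the three-component model and the synthetic pulse wave below are illustrative assumptions, not the authors' exact formulation.

```python
# A minimal sketch of multi-Gaussian waveform fitting: the wave is modeled
# as a sum of three Gaussians whose amplitudes, centers and widths are
# estimated by nonlinear least squares.
import numpy as np
from scipy.optimize import curve_fit

def three_gaussians(t, a1, m1, s1, a2, m2, s2, a3, m3, s3):
    g = lambda a, m, s: a * np.exp(-((t - m) ** 2) / (2 * s**2))
    return g(a1, m1, s1) + g(a2, m2, s2) + g(a3, m3, s3)

t = np.linspace(0, 1, 200)                    # one normalized cardiac cycle
wave = three_gaussians(t, 1.0, 0.25, 0.06, 0.5, 0.45, 0.08, 0.3, 0.7, 0.1)
wave += np.random.default_rng(6).normal(0, 0.01, t.size)   # synthetic pulse wave

p0 = [1, 0.2, 0.05, 0.5, 0.5, 0.1, 0.3, 0.7, 0.1]          # initial guesses
params, _ = curve_fit(three_gaussians, t, wave, p0=p0)
residual = wave - three_gaussians(t, *params)
print("residual variance:", residual.var())
```
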
APA, Harvard, Vancouver, ISO, and other styles
48

Solarin, Sakiru Adebola. "The Role of Urbanisation in the Economic Development Process: Evidence from Nigeria." Margin: The Journal of Applied Economic Research 11, no. 3 (August 2017): 223–55. http://dx.doi.org/10.1177/0973801017703512.

Full text
Abstract:
The aim of this article is to investigate the relationship between urbanisation and economic growth, while controlling for the agricultural sector, industrial development and government expenditure in Nigeria. The autoregressive distributed lag (ARDL) approach to cointegration is applied to examine the long-run relationship between the variables over the period 1961–2012. In the process of estimating the long-run coefficients, the ARDL method is augmented with a fully modified ordinary least squares (FMOLS) estimator and a dynamic ordinary least squares (DOLS) estimator. The direction of causality between the variables is examined through the vector error correction method (VECM) Granger causality test. The results establish the existence of a long-run relationship between the variables. The results of the long-run regressions indicate the presence of long-run causality from urbanisation, agriculture and industrialisation to economic growth. Due to the deficiencies associated with single-equation methods (including the ARDL model), we also use the structural vector error correction model (SVECM) to analyse the relationship between the variables. The impulse response and variance decomposition analyses derived from the SVECM method suggest that urbanisation, agriculture and industrialisation are important determinants of economic growth. The implications of the results are discussed. JEL Classification: Q43, O55, O18
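
The VECM stage of such an analysis can be sketched with statsmodels; the series names below are hypothetical stand-ins for the Nigerian annual data, and random walks are used as I(1) placeholders.

```python
# A minimal sketch of a VECM with cointegration-rank selection and a
# Granger-type causality test; data and names are hypothetical.
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

rng = np.random.default_rng(7)
walk = rng.standard_normal((52, 4)).cumsum(axis=0)          # I(1) placeholders
df = pd.DataFrame(walk, columns=["growth", "urban", "agric", "industry"])

rank = select_coint_rank(df, det_order=0, k_ar_diff=1)      # Johansen-type test
res = VECM(df, k_ar_diff=1, coint_rank=max(rank.rank, 1),
           deterministic="co").fit()

# Granger-type causality from urbanisation to growth:
print(res.test_granger_causality(caused="growth", causing="urban").summary())
```
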
APA, Harvard, Vancouver, ISO, and other styles
49

Chegdani, Faissal, Sabeur Mezghani, and Mohamed El Mansori. "Correlation between mechanical scales and analysis scales of topographic signals under milling process of natural fibre composites." Journal of Composite Materials 51, no. 19 (November 13, 2016): 2743–56. http://dx.doi.org/10.1177/0021998316676625.

Full text
Abstract:
This article aims to establish the relation between the multiscale mechanical structure of natural fibre reinforced plastic (NFRP) composites and the analysis scales of the topographic signals of machined surfaces induced by the profile milling process. Bamboo-, sisal- and miscanthus-fibre-reinforced polypropylene composites were considered in this study. The multiscale process signature of NFRP machined surfaces, based on wavelet decomposition, was determined. Then, the impact of the wavelet function was inspected by testing different wavelet shapes. Finally, an analysis of variance was carried out to exhibit the contribution rate of fibre stiffness and tool feed to the machined surface roughness at each analysis scale. The results demonstrate that studying the machining of NFRP requires the selection of the relevant scales; they also show that the choice of wavelet is insignificant. This study proves that the contribution rate of fibre stiffness and tool feed to machined surface roughness depends significantly on the analysis scales, which are directly related to the mechanical properties of the natural fibre structure inside the composite.
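
A multiscale decomposition of a surface profile can be sketched with PyWavelets; the profile below is synthetic rather than a measured machined surface, and the wavelet and level choices are arbitrary.

```python
# A minimal sketch of a multiscale surface-profile decomposition with
# PyWavelets (pip install PyWavelets).
import numpy as np
import pywt

x = np.linspace(0, 1, 1024)
profile = (0.5 * np.sin(2 * np.pi * 5 * x)          # waviness (coarse scale)
           + 0.1 * np.sin(2 * np.pi * 80 * x)       # roughness (fine scale)
           + 0.02 * np.random.default_rng(8).standard_normal(x.size))

coeffs = pywt.wavedec(profile, "db4", level=6)      # 6 analysis scales
for level, detail in enumerate(coeffs[1:], start=1):
    print(f"scale {level}: detail energy {np.sum(detail**2):.4f}")
# Per-scale energies like these are what an analysis of variance can
# attribute to factors such as fibre stiffness and tool feed.
```
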
APA, Harvard, Vancouver, ISO, and other styles
50

Huang, Yuansheng, Shijian Liu, and Lei Yang. "Wind Speed Forecasting Method Using EEMD and the Combination Forecasting Method Based on GPR and LSTM." Sustainability 10, no. 10 (October 15, 2018): 3693. http://dx.doi.org/10.3390/su10103693.

Full text
Abstract:
Short-term wind speed prediction is of cardinal significance for maximizing wind power utilization. However, the strong intermittency and volatility of wind speed pose a challenge to any wind speed prediction model. To improve the accuracy of wind speed prediction, a novel model is proposed that uses the ensemble empirical mode decomposition (EEMD) method together with a combination forecasting method, based on the variance-covariance method, for Gaussian process regression (GPR) and a long short-term memory (LSTM) neural network. In the proposed model, the EEMD method decomposes the original wind speed series into several intrinsic mode functions (IMFs). The LSTM neural network and the GPR method are then each used to predict the IMFs. Lastly, based on the IMF prediction results of the two forecasting methods, the variance-covariance method determines the weight of each method and yields a combined forecast. Experimental results from two forecasting cases in Zhangjiakou, China, indicate that the proposed approach outperforms the other wind speed forecasting methods compared.
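
The variance-covariance weighting step can be sketched directly: given the historical forecast errors of the two methods, the weights w = S⁻¹1 / (1ᵀS⁻¹1) minimize the variance of the combined error. The errors and forecasts below are placeholders.

```python
# A minimal sketch of variance-covariance combination weights for two
# forecasting methods (GPR and LSTM stand-ins); data are placeholders.
import numpy as np

rng = np.random.default_rng(9)
err_gpr = rng.normal(0, 0.8, 200)             # placeholder historical errors
err_lstm = rng.normal(0, 0.5, 200)

S = np.cov(np.vstack([err_gpr, err_lstm]))    # error variance-covariance matrix
ones = np.ones(2)
w = np.linalg.solve(S, ones)
w /= ones @ w                                 # weights sum to one
print("weights (GPR, LSTM):", w)

forecast_gpr, forecast_lstm = 7.2, 6.8        # hypothetical wind-speed forecasts
combined = w[0] * forecast_gpr + w[1] * forecast_lstm
print("combined forecast:", combined)
```
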
APA, Harvard, Vancouver, ISO, and other styles