Academic literature on the topic 'Variance decomposition process'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Variance decomposition process.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Variance decomposition process"

1

Myrzakhmetova, B., U. Besterekov, I. Petropavlovsky, S. Ahnazarova, V. Kiselev, and S. Romanova. "Optimization of Decomposition Process of Karatau Phosphorites." Eurasian Chemico-Technological Journal 14, no. 2 (February 7, 2012): 183. http://dx.doi.org/10.18321/ectj113.

Full text
Abstract:
The phosphorous-acid decomposition of Karatau phosphorites has been studied. The impact of temperature, time and acid rate on the decomposition of the phosphate raw material has been examined, and the conditions ensuring the maximum degree of phosphorite decomposition have been identified. The variance characterizing the reproducibility of the experimental results has been estimated by methods of mathematical statistics, and the coefficients of the regression equations have been determined. The significance of the regression coefficients has been checked by Student's criterion, and the adequacy of the regression equation to the experiment has been checked by Fisher's criterion. Using the utopian point method, the parameters of the decomposition of the studied raw materials have been optimized.
APA, Harvard, Vancouver, ISO, and other styles
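The abstract above describes a standard design-of-experiments workflow: estimating the reproducibility variance from replicate runs, testing regression coefficients with Student's criterion, and checking model adequacy with Fisher's criterion. The sketch below illustrates those three steps on synthetic data; the factorial design, factor levels and model are illustrative assumptions, not the authors' actual regression.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Illustrative 2^3 factorial design (coded levels) plus replicated centre-point runs
X = np.array([[x1, x2, x3] for x1 in (-1, 1) for x2 in (-1, 1) for x3 in (-1, 1)], float)
y = 80 + 5 * X[:, 0] + 3 * X[:, 1] - 2 * X[:, 2] + rng.normal(0, 1.5, len(X))
y_centre = 80 + rng.normal(0, 1.5, size=4)            # replicate runs for reproducibility

# Reproducibility variance from the replicates
s2_repr = y_centre.var(ddof=1)
df_repr = len(y_centre) - 1

# Least-squares regression coefficients b0..b3
A = np.column_stack([np.ones(len(X)), X])
b, *_ = np.linalg.lstsq(A, y, rcond=None)

# Student's criterion: |b_j| / s_b against the t critical value
s_b = np.sqrt(s2_repr / len(X))                       # s.e. of a coefficient in an orthogonal design
t_stat = np.abs(b) / s_b
t_crit = stats.t.ppf(0.975, df_repr)
print("significant coefficients:", t_stat > t_crit)

# Fisher's criterion: adequacy variance of the fitted equation vs. reproducibility variance
n_sig = int((t_stat > t_crit).sum())
resid = y - A @ b
s2_adeq = resid @ resid / (len(X) - n_sig)
F = s2_adeq / s2_repr
print("F =", F, "F_crit =", stats.f.ppf(0.95, len(X) - n_sig, df_repr))
```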
2

Feunou, Bruno, and Cédric Okou. "Good Volatility, Bad Volatility, and Option Pricing." Journal of Financial and Quantitative Analysis 54, no. 2 (September 13, 2018): 695–727. http://dx.doi.org/10.1017/s0022109018000777.

Full text
Abstract:
Advances in variance analysis permit the splitting of the total quadratic variation of a jump-diffusion process into upside and downside components. Recent studies establish that this decomposition enhances volatility predictions and highlight the upside/downside variance spread as a driver of the asymmetry in stock price distributions. To appraise the economic gain of this decomposition, we design a new and flexible option pricing model in which the underlying asset price exhibits distinct upside and downside semivariance dynamics driven by the model-free proxies of the variances. The new model outperforms common benchmarks, especially the alternative that splits the quadratic variation into diffusive and jump components.
APA, Harvard, Vancouver, ISO, and other styles
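The decomposition referenced in the abstract above splits total quadratic variation into upside and downside components. A minimal sketch of the model-free realized semivariance proxies, computed from synthetic high-frequency returns (the 78-observations-per-day grid is an assumption):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic 5-minute log returns for one trading day (78 intervals assumed)
r = rng.normal(0.0, 0.001, 78)

rv      = np.sum(r ** 2)            # total realized variance (quadratic variation proxy)
rv_up   = np.sum(r[r > 0] ** 2)     # upside semivariance
rv_down = np.sum(r[r < 0] ** 2)     # downside semivariance

print(rv, rv_up + rv_down)          # the two semivariances add up to the total
print("signed jump variation:", rv_up - rv_down)
```

The spread rv_up minus rv_down is the upside/downside asymmetry driver highlighted in the abstract.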
3

Lytvynenko, Iaroslav, Serhii Lupenko, Oleh Nazarevych, Hryhorii Shymchuk, and Volodymyr Hotovych. "Additive mathematical model of gas consumption process." Scientific journal of the Ternopil national technical university 104, no. 4 (2021): 87–97. http://dx.doi.org/10.33108/visnyk_tntu2021.04.087.

Full text
Abstract:
The problem of constructing a new mathematical model of the gas consumption process is considered in this paper. The new mathematical model is presented as an additive mixture of three components: a cyclic random process, a trend component and a stochastic residue. The three components are obtained on the basis of the caterpillar (singular spectrum analysis) method, thus obtaining ten components of the singular decomposition. In this approach, the cyclic component is formed from the sum of nine components of the decomposition, which have one thing in common: repeated deployment over time. The trend component of the new mathematical model is the second component of the singular decomposition, and the stochastic residue is formed as the difference between the values of the studied gas consumption process and the sum of the cyclic and trend components. Two approaches to the stochastic processing of the cyclic gas consumption process are used in this paper, based on the known model of a stochastic-periodic random process and on a cyclic random process as models of the cyclic component. Applying the mathematical model of the cyclic component in the form of a cyclic random process with a cyclic structure makes it possible to obtain a variance estimate over a cycle of the gas consumption process, provided the cyclic component is segmented at its depressions (troughs); this estimate is much smaller than the variance estimate obtained with the alternative model, indicating greater accuracy in the study of the gas consumption process, and the obtained stochastic estimates will be used when modeling the gas consumption process in further studies.
APA, Harvard, Vancouver, ISO, and other styles
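The caterpillar (singular spectrum analysis) step described above can be sketched as a Hankel embedding followed by an SVD; grouping the elementary components into trend, cyclic part and residual is then a modelling choice. The window length and grouping below are illustrative assumptions, not the values used in the paper (where the trend is the second singular component and the cyclic part is a sum of nine others).

```python
import numpy as np

def ssa_components(x, window):
    """Return elementary SSA components of series x for the given window length."""
    n = len(x)
    k = n - window + 1
    # Hankel (trajectory) matrix: columns are lagged windows of the series
    traj = np.column_stack([x[i:i + window] for i in range(k)])
    u, s, vt = np.linalg.svd(traj, full_matrices=False)
    comps = []
    for i in range(len(s)):
        elem = s[i] * np.outer(u[:, i], vt[i])   # rank-one piece of the trajectory matrix
        # Diagonal averaging (Hankelization) back to a series
        comp = np.array([np.mean(elem[::-1].diagonal(j - window + 1)) for j in range(n)])
        comps.append(comp)
    return np.array(comps)

# Synthetic "gas consumption" signal: trend + daily-like cycle + noise
t = np.arange(300)
x = 0.02 * t + np.sin(2 * np.pi * t / 24) + 0.3 * np.random.default_rng(2).normal(size=300)

comps = ssa_components(x, window=48)
trend = comps[0]                    # illustrative grouping: leading component as trend
cyclic = comps[1:10].sum(axis=0)    # next components grouped as the cyclic part
residual = x - trend - cyclic       # stochastic residue, as in the additive model
```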
4

Wu, Xunfeng, Shiwen Zhang, Zhe Gong, Junkai Ji, Qiuzhen Lin, and Jianyong Chen. "Decomposition-Based Multiobjective Evolutionary Optimization with Adaptive Multiple Gaussian Process Models." Complexity 2020 (February 11, 2020): 1–22. http://dx.doi.org/10.1155/2020/9643273.

Full text
Abstract:
In recent years, a number of recombination operators have been proposed for multiobjective evolutionary algorithms (MOEAs). One kind of recombination operators is designed based on the Gaussian process model. However, this approach only uses one standard Gaussian process model with fixed variance, which may not work well for solving various multiobjective optimization problems (MOPs). To alleviate this problem, this paper introduces a decomposition-based multiobjective evolutionary optimization with adaptive multiple Gaussian process models, aiming to provide a more effective heuristic search for various MOPs. For selecting a more suitable Gaussian process model, an adaptive selection strategy is designed by using the performance enhancements on a number of decomposed subproblems. In this way, our proposed algorithm owns more search patterns and is able to produce more diversified solutions. The performance of our algorithm is validated when solving some well-known F, UF, and WFG test instances, and the experiments confirm that our algorithm shows some superiorities over six competitive MOEAs.
APA, Harvard, Vancouver, ISO, and other styles
5

Ortu, Fulvio, Federico Severino, Andrea Tamoni, and Claudio Tebaldi. "A persistence‐based Wold‐type decomposition for stationary time series." Quantitative Economics 11, no. 1 (2020): 203–30. http://dx.doi.org/10.3982/qe994.

Full text
Abstract:
This paper shows how to decompose weakly stationary time series into the sum, across time scales, of uncorrelated components associated with different degrees of persistence. In particular, we provide an Extended Wold Decomposition based on an isometric scaling operator that makes averages of process innovations. Thanks to the uncorrelatedness of components, our representation of a time series naturally induces a persistence‐based variance decomposition of any weakly stationary process. We provide two applications to show how the tools developed in this paper can shed new light on the determinants of the variability of economic and financial time series.
APA, Harvard, Vancouver, ISO, and other styles
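The baseline identity that this paper refines is the classical Wold decomposition and the variance decomposition it implies. A sketch of that identity in assumed notation (the paper's extended version further splits the same innovation variance across persistence scales, with uncorrelated scale components):

```latex
% Wold representation of a weakly stationary, purely non-deterministic series
x_t = \sum_{k \ge 0} \psi_k \, \varepsilon_{t-k}, \qquad \varepsilon_t \sim \mathrm{WN}(0,\sigma^2),
\qquad \operatorname{Var}(x_t) = \sigma^2 \sum_{k \ge 0} \psi_k^2 .
% Extended (persistence-based) version: x_t = \sum_j g_t^{(j)} with uncorrelated
% components indexed by time scale j, so that
% \operatorname{Var}(x_t) = \sum_j \operatorname{Var}\bigl(g_t^{(j)}\bigr).
```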
6

Bigerna, Simona, Maria Chiara D’Errico, and Paolo Polinori. "Dynamic forecast error variance decomposition as risk management process for the Gulf Cooperation Council oil portfolios." Resources Policy 78 (September 2022): 102937. http://dx.doi.org/10.1016/j.resourpol.2022.102937.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Chen, Mei-Ling, Kai-Li Wang, Ya-Ching Sung, Fu-Lai Lin, and Wei-Chuan Yang. "The Dynamic Relationship between the Investment Behavior and the Morgan Stanley Taiwan Index: Foreign Institutional Investors' Decision Process." Review of Pacific Basin Financial Markets and Policies 10, no. 03 (September 2007): 389–413. http://dx.doi.org/10.1142/s0219091507001124.

Full text
Abstract:
This research employs VAR models, impulse response function, forecast error variance decomposition and bivariate GJR GARCH models, to explore the dynamic relationship between foreign investment and the MSCI Taiwan Index (MSCI–TWI). The estimations of the VAR, impulse-response functions and predicted error variance decomposition tests show that stronger feedback effects exist between net foreign investment and MSCI–TWI. In particular, our results demonstrate that the MSCI–TWI has the greatest influence over the decision-making processes of foreign investors. Also, we see that exchange rates exert a negative influence on both net foreign investment dollars and the MSCI–TWI. In addition, US–Taiwan interest rate difference has a positive influence on net foreign investment dollars and a negative influence on the MSCI–TWI. As for asymmetric own-volatility transmission, negative shocks in the MSCI–TWI tend to create greater volatility for itself in the following period than positive shocks. Our research indicates an asymmetric information transmission mechanism from net foreign investment to MSCI–TWI markets. Moreover, the estimated correlation coefficient shows that MSCI–TWI and net foreign investment dollar have a positive contemporaneous correlation.
APA, Harvard, Vancouver, ISO, and other styles
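The VAR, impulse-response and forecast error variance decomposition machinery used in the study above is available in standard libraries. A minimal sketch assuming the statsmodels package and placeholder data (column names and lag order are assumptions, not the paper's series or specification):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(3)
# Placeholder data; in the paper these would be net foreign investment,
# the MSCI Taiwan Index return, and the exchange-rate change.
data = pd.DataFrame(rng.normal(size=(200, 3)),
                    columns=["net_foreign_inv", "msci_twi", "fx_change"])

model = VAR(data)
res = model.fit(maxlags=4, ic="aic")   # lag order chosen by AIC

irf = res.irf(10)                      # impulse response functions, 10 steps ahead
fevd = res.fevd(10)                    # forecast error variance decomposition
fevd.summary()                         # share of each shock in each variable's forecast error
```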
8

Chang, Tian, Chuanlong Ma, Anton Nikiforov, Savita K. P. Veerapandian, Nathalie De Geyter, and Rino Morent. "Plasma degradation of trichloroethylene: process optimization and reaction mechanism analysis." Journal of Physics D: Applied Physics 55, no. 12 (December 22, 2021): 125202. http://dx.doi.org/10.1088/1361-6463/ac40bb.

Full text
Abstract:
In this study, a multi-pin-to-plate negative corona discharge reactor was employed to degrade the hazardous compound trichloroethylene (TCE). The response surface methodology was applied to examine the influence of various process factors (relative humidity (RH), gas flow rate, and discharge power) on the TCE decomposition process, with regard to the TCE removal efficiency and the CO2 and CO selectivities. Variance analysis was used to estimate the significance of the single process factors and their interactions. It was shown that the discharge power had the most influential impact on the TCE removal efficiency and the CO2 and CO selectivities, followed by the gas flow rate, and finally RH. Under the optimal conditions with 20.83% RH, 2 W discharge power and 0.5 l min–1 gas flow rate, the optimal TCE removal efficiency (86.05%), CO2 selectivity (8.62%), and CO selectivity (15.14%) were achieved. In addition, a possible TCE decomposition pathway was proposed based on the investigation of byproducts identified in the exhaust gas of the non-thermal plasma reactor. This work paves the way for control of chlorinated volatile organic compounds.
APA, Harvard, Vancouver, ISO, and other styles
9

Shiyko, Mariya P., and Nilam Ram. "Conceptualizing and Estimating Process Speed in Studies Employing Ecological Momentary Assessment Designs: A Multilevel Variance Decomposition Approach." Multivariate Behavioral Research 46, no. 6 (November 30, 2011): 875–99. http://dx.doi.org/10.1080/00273171.2011.625310.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Estrada Vargas, Leopoldo, Deni Torres Roman, and Homero Toral Cruz. "A Study of Wavelet Analysis and Data Extraction from Second-Order Self-Similar Time Series." Mathematical Problems in Engineering 2013 (2013): 1–14. http://dx.doi.org/10.1155/2013/102834.

Full text
Abstract:
Statistical analysis and synthesis of self-similar discrete time signals are presented. The analysis equation is formally defined through a special family of basis functions of which the simplest case matches the Haar wavelet. The original discrete time series is synthesized without loss by a linear combination of the basis functions after some scaling, displacement, and phase shift. The decomposition is then used to synthesize a new second-order self-similar signal with a different Hurst index than the original. The components are also used to describe the behavior of the estimated mean and variance of self-similar discrete time series. It is shown that the sample mean, although it is unbiased, provides less information about the process mean as its Hurst index is higher. It is also demonstrated that the classical variance estimator is biased and that the widely accepted aggregated variance-based estimator of the Hurst index turns out to be biased not due to its nature (it is unbiased and has minimal variance) but due to flaws in its implementation. Using the proposed decomposition, the correct estimation of the Variance Plot is described, as well as its close association with the popular Logscale Diagram.
APA, Harvard, Vancouver, ISO, and other styles
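The aggregated-variance estimator of the Hurst index discussed above is easy to sketch: block-average the series at several aggregation levels and regress the log of the sample variance of the block means on the log of the block size. This is the textbook implementation, not the corrected Variance Plot procedure proposed in the paper.

```python
import numpy as np

def hurst_aggregated_variance(x, block_sizes):
    """Estimate H from the slope of log(var of block means) vs log(block size)."""
    log_m, log_v = [], []
    for m in block_sizes:
        n_blocks = len(x) // m
        means = x[:n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        log_m.append(np.log(m))
        log_v.append(np.log(means.var(ddof=0)))
    slope, _ = np.polyfit(log_m, log_v, 1)
    return 1.0 + slope / 2.0        # for self-similar increments, var ~ m^(2H - 2)

# Example on white noise (the estimate should be close to H = 0.5)
x = np.random.default_rng(4).normal(size=2 ** 14)
print(hurst_aggregated_variance(x, block_sizes=[4, 8, 16, 32, 64, 128]))
```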

Dissertations / Theses on the topic "Variance decomposition process"

1

Cho, Jang Hyung. "An Autoregressive Conditional Filtering Process to remove Intraday Seasonal Volatility and its Application to Testing the Noisy Rational Expectations Model." FIU Digital Commons, 2008. http://digitalcommons.fiu.edu/etd/60.

Full text
Abstract:
We develop a new autoregressive conditional process to capture both the changes and the persistency of the intraday seasonal (U-shape) pattern of volatility in essay 1. Unlike other procedures, this approach allows for the intraday volatility pattern to change over time without the filtering process injecting a spurious pattern of noise into the filtered series. We show that prior deterministic filtering procedures are special cases of the autoregressive conditional filtering process presented here. Lagrange multiplier tests prove that the stochastic seasonal variance component is statistically significant. Specification tests using the correlogram and cross-spectral analyses prove the reliability of the autoregressive conditional filtering process. In essay 2 we develop a new methodology to decompose return variance in order to examine the informativeness embedded in the return series. The variance is decomposed into the information arrival component and the noise factor component. This decomposition methodology differs from previous studies in that both the informational variance and the noise variance are time-varying. Furthermore, the covariance of the informational component and the noisy component is no longer restricted to be zero. The resultant measure of price informativeness is defined as the informational variance divided by the total variance of the returns. The noisy rational expectations model predicts that uninformed traders react to price changes more than informed traders, since uninformed traders cannot distinguish between price changes caused by information arrivals and price changes caused by noise. This hypothesis is tested in essay 3 using intraday data with the intraday seasonal volatility component removed, as based on the procedure in the first essay. The resultant seasonally adjusted variance series is decomposed into components caused by unexpected information arrivals and by noise in order to examine informativeness.
APA, Harvard, Vancouver, ISO, and other styles
2

Castellanos, Lucia. "Statistical Models and Algorithms for Studying Hand and Finger Kinematics and their Neural Mechanisms." Research Showcase @ CMU, 2013. http://repository.cmu.edu/dissertations/273.

Full text
Abstract:
The primate hand, a biomechanical structure with over twenty kinematic degrees of freedom, has an elaborate anatomical architecture. Although the hand requires complex, coordinated neural control, it endows its owner with an astonishing range of dexterous finger movements. Despite a century of research, however, the neural mechanisms that enable finger and grasping movements in primates are largely unknown. In this thesis, we investigate statistical models of finger movement that can provide insights into the mechanics of the hand, and that can have applications in neural-motor prostheses, enabling people with limb loss to regain natural function of the hands. There are many challenges associated with (1) the understanding and modeling of the kinematics of fingers, and (2) the mapping of intracortical neural recordings into motor commands that can be used to control a Brain-Machine Interface. These challenges include: potential nonlinearities; confounded sources of variation in experimental datasets; and dealing with high degrees of kinematic freedom. In this work we analyze kinematic and neural datasets from repeated-trial experiments of hand motion, with the following contributions: We identified static, nonlinear, low-dimensional representations of grasping finger motion, with accompanying evidence that these nonlinear representations are better than linear representations at predicting the type of object being grasped over the course of a reach-to-grasp movement. In addition, we show evidence of better encoding of these nonlinear (versus linear) representations in the firing of some neurons collected from the primary motor cortex of rhesus monkeys. A functional alignment of grasping trajectories, based on total kinetic energy, as a strategy to account for temporal variation and to exploit a repeated-trial experiment structure. An interpretable model for extracting dynamic synergies of finger motion, based on Gaussian Processes, that decomposes and reduces the dimensionality of variance in the dataset. We derive efficient algorithms for parameter estimation, show accurate reconstruction of grasping trajectories, and illustrate the interpretation of the model parameters. Sound evidence of single-neuron decoding of interpretable grasping events, plus insights about the amount of grasping information extractable from just a single neuron. The Laplace Gaussian Filter (LGF), a deterministic approximation to the posterior mean that is more accurate than Monte Carlo approximations for the same computational cost, and that in an off-line decoding task is more accurate than the standard Population Vector Algorithm.
APA, Harvard, Vancouver, ISO, and other styles
3

MELO, Rony Glauco de. "Análise e propagação de incertezas associadas à Dispersão atmosférica dos gases da unidade snox®." Universidade Federal de Pernambuco, 2015. https://repositorio.ufpe.br/handle/123456789/17239.

Full text
Abstract:
The improvement of technologies that can make production processes friendlier to society and the environment is a goal for the major industries, whether for marketing reasons or because of legal restrictions. The oil refining industry, by the compositional nature of its feedstock, produces effluents with several hazards which must be eliminated or reduced to acceptable levels. In this context, the SNOX® unit arises as an answer for the abatement of atmospheric emissions, aiming at effluent treatment and H2SO4 production, which adds commercial value to the process; these same streams, however, enable corrosive processes that may lead to leakage of gases, which are mostly harmful. The main objectives of the present work were the development of a steady-state simulation of the SNOX® process using the HYSYS® software in order to calculate the concentrations of the circulating gases, and the probabilistic evaluation of the atmospheric dispersion of these gases (through the SLAB model) in the presence of uncertainties in several variables. Quasi-Monte Carlo (Latin Hypercube) techniques were used for the probabilistic assessment: to define the relevant uncertainties and rank them through sensitivity analysis by variance decomposition; to calculate the ideal sample size representing the uncertainties, considering a 90% confidence level; and to display the results as families of probability distribution curves from which the probabilities of certain adverse effects of the gases present in the SNOX® process can be obtained. The results showed that, considering the operational conditions of the unit and the type of consequence addressed (gas intoxication), the discharge coefficient, discharge flow rate, wind speed (intensity) and orifice diameter are the relevant variables, and the uncertainties associated with them propagate to the final concentrations obtained by the SLAB model, so that the results are best represented as cumulative probability distribution curves.
APA, Harvard, Vancouver, ISO, and other styles
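The variance-decomposition sensitivity analysis described in this thesis (Sobol-type indices estimated from quasi-Monte Carlo samples) can be sketched with the SALib package. The problem definition and test function below are placeholders; the thesis applies the same idea to the SLAB dispersion model driven by the HYSYS simulation, and uses Latin Hypercube sampling rather than the Saltelli scheme shown here.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Placeholder uncertain inputs (the thesis ranks discharge coefficient, discharge
# flow rate, wind speed and orifice diameter as the relevant ones)
problem = {
    "num_vars": 4,
    "names": ["discharge_coeff", "discharge_rate", "wind_speed", "orifice_diam"],
    "bounds": [[0.6, 1.0], [1.0, 5.0], [1.0, 10.0], [0.01, 0.05]],
}

X = saltelli.sample(problem, 1024)          # quasi-Monte Carlo design

def toy_dispersion_model(x):                # stand-in for the SLAB dispersion model
    cd, q, u, d = x
    return cd * q * d ** 2 / (u + 0.5)

Y = np.apply_along_axis(toy_dispersion_model, 1, X)
Si = sobol.analyze(problem, Y)
print(Si["S1"])                             # first-order variance shares per input
print(Si["ST"])                             # total-effect indices
```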
4

Chan, Ya-Chi (詹雅琦). "Identifying the Sources of Variance Shifts for a Multivariate Process Using Statistical Decomposition Approaches." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/20451769897532453103.

Full text
Abstract:
Master's thesis, Master's Program in Applied Statistics, Department of Statistics and Information Science, Fu Jen Catholic University, 2013 (academic year 101).
High product quality is a must for a successful enterprise. With technological progress, monitoring two or more quality characteristics at once has become important, and multivariate statistical process control (MSPC) charts have been developed to perform such tasks. For MSPC applications, identifying the sources of a fault is very important for industries. Currently, when addressing a fault, most studies have focused on mean shifts rather than variance shifts. Also, machine learning (ML) approaches are typically used to classify the sources of process mean shifts. Nevertheless, the problems associated with ML are the uncertainty of the training model and the inconsistency of parameter selection. As a consequence, this study proposes a new statistical decomposition method to determine the sources of variance shifts once an MSPC signal has been triggered. The study shows the effectiveness of the proposed approach through a series of simulations.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Variance decomposition process"

1

Wang, Jing, Jinglin Zhou, and Xiaolu Chen. "Statistics Decomposition and Monitoring in Original Variable Space." In Intelligent Control and Learning Systems, 79–100. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-8044-1_6.

Full text
Abstract:
The traditional process monitoring method first projects the measured process data into the principal component subspace (PCS) and the residual subspace (RS), then calculates the T² and SPE statistics to detect abnormality. However, the abnormalities flagged by these two statistics are detected in terms of the principal components of the process. Principal components have no specific physical meaning and do not contribute directly to identifying the fault variable and its root cause. Researchers have proposed many methods to identify the fault variable accurately based on the projection space. The most popular is the contribution plot, which measures the contribution of each process variable to the principal components (Wang et al. 2017; Luo et al. 2017; Liu and Chen 2014). Moreover, in order to determine the control limits of the two statistics, their probability distributions must be estimated or assumed to follow a specific form. Fault identification by these statistics is not intuitive enough to directly reflect the role and trend of each variable when the process changes.
APA, Harvard, Vancouver, ISO, and other styles
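The chapter's starting point, projection onto the principal component subspace with T² and SPE (Q) monitoring statistics, can be sketched directly from an SVD of normal-operation data; control limits and the contribution-plot diagnostics discussed in the chapter are omitted, and the data and number of retained components are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
X_train = rng.normal(size=(500, 6))        # normal-operation data, 6 process variables

# PCA on standardized data
mu, sd = X_train.mean(0), X_train.std(0)
Z = (X_train - mu) / sd
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
k = 3                                      # retained principal components (assumed)
P = Vt[:k].T                               # loading matrix
lam = (s[:k] ** 2) / (len(Z) - 1)          # variances of the retained scores

def t2_spe(x):
    """Hotelling T^2 and SPE (Q) statistics for a new observation x."""
    z = (x - mu) / sd
    t = z @ P                              # scores in the principal component subspace
    t2 = np.sum(t ** 2 / lam)
    residual = z - t @ P.T                 # part of z left in the residual subspace
    spe = residual @ residual
    return t2, spe

print(t2_spe(rng.normal(size=6)))                                  # in-control-like point
print(t2_spe(rng.normal(size=6) + np.array([4, 0, 0, 0, 0, 0])))   # shifted point
```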
2

Gylych, Jelilov, Abdullahi Ahmad Jibrin, Bilal Celik, and Abdurrahman Isik. "Impact of Oil Price Fluctuation on the Economy of Nigeria, the Core Analysis for Energy Producing Countries." In Energy Management Systems in Process Industries - Current Practice and Challenges in Era of Industry 4.0 [Working Title]. IntechOpen, 2020. http://dx.doi.org/10.5772/intechopen.94055.

Full text
Abstract:
The study aims to find the short-run empirical analyses of the impact of oil price fluctuation on the monetary instrument (Exchange rate, Inflation, Interest rate) in Nigeria. We explored the frequently used Toda–Yamamoto model (TY) model, by adopting the TY Modified Wald (MWALD) test approach to causality, Forecast Error Variance Decomposition (FEVD) and Impulse Response Functions (IRFs).The study covered the period 1995 to 2018 (monthly basis), and our findings from MWALD test indicated that there is a uni-directional causality of the log of oil price (lnoilpr) to log of the exchange rate (lnexchr) at 10% level of significance, also there is a contemporaneous response of log of consumer price index (lncpi) to log of exchange rate (lnexchr) and log of interest rate (lnintr), and jointly (lnoilpr, lncpi and lnintr) granger cause lncpi. Also at 5% level of significance lnintr responded due to positive change in lnoilpr and lnexchr, and jointly causes lnintr at 5% level of significance. This is complimented with our findings in FEVDs, and IRFs. The empirical analyses shows that oil price is a strong determining factor of exchange rate, cost of borrowing and directly influences inflationary or deflationary tendencies in Nigeria..
APA, Harvard, Vancouver, ISO, and other styles
3

Duell, Peter, and Xin Yao. "Implementing Negative Correlation Learning in Evolutionary Ensembles with Suitable Speciation Techniques." In Pattern Recognition Technologies and Applications, 344–69. IGI Global, 2008. http://dx.doi.org/10.4018/978-1-59904-807-9.ch016.

Full text
Abstract:
Negative correlation learning (NCL) is a technique that attempts to create an ensemble of neural networks whose outputs are accurate but negatively correlated. The motivation for such a technique can be found in the bias-variance-covariance decomposition of an ensemble of learners' generalization error. NCL is also increasingly used in conjunction with an evolutionary process, which gives rise to the possibility of adapting the structures of the networks at the same time as learning the weights. This chapter examines the motivation and characteristics of the NCL algorithm. Some recent work relating to the implementation of NCL in a single objective evolutionary framework for classification tasks is presented, and we examine the impact of two speciation techniques: implicit fitness sharing and an island model population structure. The choice of speciation technique can have a detrimental effect on the ability of NCL to produce accurate and diverse ensembles and should therefore be made carefully. This chapter also provides an overview of other researchers' work with NCL and gives some promising future research directions.
APA, Harvard, Vancouver, ISO, and other styles
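The bias-variance-covariance decomposition that motivates NCL states that the expected squared error of the ensemble mean equals the squared average bias plus variance/M plus (1 - 1/M) times the average pairwise covariance of the members. The short numerical check below verifies this identity on synthetic member predictions; it illustrates the decomposition itself, not the NCL training rule.

```python
import numpy as np

rng = np.random.default_rng(6)
M, N = 5, 10_000                      # ensemble members, independent evaluation draws
target = 1.0

# Synthetic member predictions with a shared (correlated) error and individual noise
base = rng.normal(0, 0.5, N)
preds = target + 0.2 + base + rng.normal(0, 0.3, (M, N))

ens = preds.mean(axis=0)
mse_ens = np.mean((ens - target) ** 2)

bias = np.mean(preds - target)                                    # average member bias
var_ = np.mean([np.var(preds[i]) for i in range(M)])              # average member variance
cov = np.mean([np.cov(preds[i], preds[j])[0, 1]                   # average pairwise covariance
               for i in range(M) for j in range(M) if i != j])

decomposed = bias ** 2 + var_ / M + (1 - 1 / M) * cov
print(mse_ens, decomposed)            # the two numbers should (approximately) agree
```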
4

Li, Peilin, Sang-Heon Lee, and Hung-Yao Hsu. "Use of Bi-Camera and Fusion of Pairwise Real Time Citrus Fruit Image for Classification Application." In Computer Vision and Image Processing in Intelligent Systems and Multimedia Technologies, 54–81. IGI Global, 2014. http://dx.doi.org/10.4018/978-1-4666-6030-4.ch004.

Full text
Abstract:
In this chapter, the use of two images, the near infrared image and the color image, from a bi-camera machine vision system is investigated to improve the detection of the citrus fruits in the image. The application has covered the design of the bi-camera vision system to align two CCD cameras, the online acquisition of the citrus fruit tree image, and the fusion of two aligned images. In the system, two cameras have been registered with alignment to ensure the fusion of two images. A fusion method has been developed based on the Multiscale Decomposition Analysis (MSD) with a Discrete Wavelet Transform (DWT) application for the two dimensional signal. In the fusion process, two image quality issues have been addressed. One is the detail noise from the background, which is bounded with the envelope spectra and with similar spectra to orange citrus fruit and spatial variance property. The second is the enhancement of the fundamental envelope spectra using two source images. With level of MSD estimated, the noise is reduced by zeroing the high pass coefficients in DWT while the fundamental envelope spectra from the color image are enhanced by an arithmetic pixel level fusion rule. To evaluate the significant improvement of the image quality, some major classification methods are applied to compare the classified results from the fused image with the results from the types of color image. The misclassification error is measured by the empirical type errors using the manual segmentation reference image.
APA, Harvard, Vancouver, ISO, and other styles
5

R. Singh, Twinkle. "Study on Approximate Analytical Method with Its Application Arising in Fluid Flow." In Porous Fluids - Advances in Fluid Flow and Transport Phenomena in Porous Media. IntechOpen, 2021. http://dx.doi.org/10.5772/intechopen.97548.

Full text
Abstract:
This chapter is about the Variational Iteration Method (VIM) and the Adomian Decomposition Method (ADM) and its modification, which have been applied to solve the nonlinear partial differential equation of the imbibition phenomenon in the oil recovery process. The important condition of the counter-current imbibition phenomenon, v_i = -v_n, has been considered here. The main aim is to determine the saturation S_i of the injected fluid during the oil recovery process, which is a function of distance and time; therefore the saturation S_i is chosen as the dependent variable while x and t are chosen as the independent variables. The solution of the phenomenon has been found by VIM, ADM and the Laplace Adomian Decomposition Method (LADM). The effectiveness of our method is illustrated by different numerical examples.
APA, Harvard, Vancouver, ISO, and other styles
6

Zhang, Jian, Bingzhen Chen, Shanying Hu, and Xiaorong He. "Variable decomposition based global-optimization algorithm for process synthesis." In Computer Aided Chemical Engineering, 666–71. Elsevier, 2003. http://dx.doi.org/10.1016/s1570-7946(03)80621-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Liu, Z. G., and C. P. Ding. "Oxidation-Reduction Reactions." In Chemistry of Variable Charge Soils. Oxford University Press, 1997. http://dx.doi.org/10.1093/oso/9780195097450.003.0016.

Full text
Abstract:
Oxidation-reduction reactions are chemical reactions caused by the transfer of electrons between two substances. These reactions occur actively in variable charge soils. This is because under conditions of high temperature and high precipitation both the accumulation and the decomposition of organic matter proceed rapidly. The decomposition products of organic matter may release electrons, providing the necessary condition for the occurrence of reduction reactions. In particular, because the soil may have a high content of water during seasonal rainy periods, the presence of a strongly reducing condition is possible. Furthermore, large areas of variable charge soils have been cultivated for rice production. For these paddy soils there are always intensive oxidation-reduction reactions proceeding alternately. Variable charge soils have a high content of iron oxides. The content of manganese is also higher than that of constant charge soils. Thus, the soil itself possesses plenty of electron acceptors. Besides, the high concentration of hydrogen ions in variable charge soils is favorable for the occurrence of reduction reactions. Therefore, as shall be seen in this chapter, contrary to the belief that the significance of oxidation-reduction reactions is confined chiefly to submerged soils, these reactions may play an important role in soil genesis and soil fertility for variable charge soils even under well-aerated conditions. In this chapter, after discussions on factors affecting the intensity of oxidation-reduction and interactions among various oxidation-reduction substances, the oxidation-reduction regimes of variable charge soils under different utilization conditions will be presented. Ferrous and manganous ions, two important inorganic reducing substances in soils, shall be dealt with in the next chapter. The oxidation-reduction intensity of a substance is determined by its ability to liberate or accept electrons. Therefore, electron activity in an equilibrium system may be used as an index for expressing its reduction strength. An electron has a radius of only approximately 1/20,000 of that of a hydrogen atom. Its large charge-to-size ratio prevents it from persisting in free form in aqueous systems. The ephemeral “hydrated electron” has a half-life of less than 1 msec (Bartlett and James, 1993). As a species with a potential of -2.7 V vs. the standard potential of H+/H2, it is a powerful reducing agent.
APA, Harvard, Vancouver, ISO, and other styles
8

F.F.C. Cunha, Caio, Mariane R. Petraglia, André T. Carvalho, and Antonio C.S. Lima. "A Wavelet Threshold Function for Treatment of Partial Discharge Measurements." In Wavelet Theory [Working Title]. IntechOpen, 2020. http://dx.doi.org/10.5772/intechopen.94115.

Full text
Abstract:
Based on wavelet transform filtering theory, this chapter describes the elaboration of a wavelet threshold function intended for denoising partial discharge measurements. This new function, conveniently named the Fleming threshold, is based on the logistic function, which is well known for its utility in several important areas. The development shows some variations in the application of the Fleming function, in an attempt to identify the decomposition levels where the thresholding process must be more stringent and those where it can be more lenient, which increases its effectiveness in the removal of noisy coefficients. The proposed function and its variants demonstrate excellent results compared to other wavelet thresholding methods already described in the literature, including the well-known hard and soft threshold functions.
APA, Harvard, Vancouver, ISO, and other styles
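The exact form of the Fleming threshold is not given in the abstract, so the sketch below uses a generic logistic-weighted shrinkage as a stand-in, applied level by level with the PyWavelets package; only the overall structure (decompose, threshold the detail coefficients per level, reconstruct) mirrors the chapter, and the wavelet, level count and level-dependent scaling are assumptions.

```python
import numpy as np
import pywt

def logistic_shrink(c, thr, steepness=10.0):
    """Illustrative logistic-weighted shrinkage (a stand-in for the Fleming threshold)."""
    weight = 1.0 / (1.0 + np.exp(-steepness * (np.abs(c) - thr) / thr))
    return c * weight

def denoise(signal, wavelet="db4", levels=5):
    coeffs = pywt.wavedec(signal, wavelet, level=levels)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # noise estimate from the finest level
    out = [coeffs[0]]                                     # keep approximation coefficients
    for j, d in enumerate(coeffs[1:], start=1):
        thr = sigma * np.sqrt(2 * np.log(len(signal)))
        # Coarser levels (small j) are treated more leniently, finer ones more stringently
        out.append(logistic_shrink(d, thr * (0.5 + 0.5 * j / levels)))
    return pywt.waverec(out, wavelet)

t = np.linspace(0, 1, 2048)
clean = np.exp(-2000 * (t - 0.5) ** 2) * np.sin(2 * np.pi * 200 * t)   # PD-like burst
noisy = clean + 0.1 * np.random.default_rng(7).normal(size=t.size)
recovered = denoise(noisy)
```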
9

Guimarans, D., R. Herrero, J. J. Ramos, and S. Padrón. "Solving Vehicle Routing Problems Using Constraint Programming and Lagrangean Relaxation in a Metaheuristics Framework." In Management Innovations for Intelligent Supply Chains, 123–43. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-2461-0.ch007.

Full text
Abstract:
This paper presents a methodology based on the Variable Neighbourhood Search metaheuristic, applied to the Capacitated Vehicle Routing Problem. The presented approach uses Constraint Programming and Lagrangean Relaxation methods in order to improve algorithm’s efficiency. The complete problem is decomposed into two separated subproblems, to which the mentioned techniques are applied to obtain a complete solution. With this decomposition, the methodology provides a quick initial feasible solution which is rapidly improved by metaheuristics’ iterative process. Constraint Programming and Lagrangean Relaxation are also embedded within this structure to ensure constraints satisfaction and to reduce the calculation burden. By means of the proposed methodology, promising results have been obtained. Remarkable results presented in this paper include a new best-known solution for a rarely solved 200-customers test instance, as well as a better alternative solution for another benchmark problem.
APA, Harvard, Vancouver, ISO, and other styles
10

Floudas, Christodoulos A. "Mixed-Integer Nonlinear Optimization." In Nonlinear and Mixed-Integer Optimization. Oxford University Press, 1995. http://dx.doi.org/10.1093/oso/9780195100563.003.0011.

Full text
Abstract:
This chapter presents the fundamentals and algorithms for mixed-integer nonlinear optimization problems. Sections 6.1 and 6.2 outline the motivation, formulation, and algorithmic approaches. Section 6.3 discusses the Generalized Benders Decomposition and its variants. Sections 6.4, 6.5 and 6.6 present the Outer Approximation and its variants with Equality Relaxation and Augmented Penalty. Section 6.7 discusses the Generalized Outer Approximation while section 6.8 compares the Generalized Benders Decomposition with the Outer Approximation. Finally, section 6.9 discusses the Generalized Cross Decomposition. A wide range of nonlinear optimization problems involve integer or discrete variables in addition to the continuous variables. These classes of optimization problems arise from a variety of applications and are denoted as Mixed-Integer Nonlinear Programming (MINLP) problems. The integer variables can be used to model, for instance, sequences of events, alternative candidates, existence or nonexistence of units (in their zero-one representation), while discrete variables can model, for instance, different equipment sizes. The continuous variables are used to model the input-output and interaction relationships among individual units/operations and different interconnected systems. The nonlinear nature of these mixed-integer optimization problems may arise from (i) nonlinear relations in the integer domain exclusively (e.g., products of binary variables in the quadratic assignment model), (ii) nonlinear relations in the continuous domain only (e.g., complex nonlinear input-output model in a distillation column or reactor unit), (iii) nonlinear relations in the joint integer-continuous domain (e.g., products of continuous and binary variables in the scheduling/planning of batch processes, and retrofit of heat recovery systems). In this chapter, we will focus on nonlinearities due to relations (ii) and (iii). An excellent book that studies mixed-integer linear optimization, and nonlinear integer relationships in combinatorial optimization is the one by Nemhauser and Wolsey (1988). The coupling of the integer domain with the continuous domain along with their associated nonlinearities make the class of MINLP problems very challenging from the theoretical, algorithmic, and computational point of view. Apart from this challenge, however, there exists a broad spectrum of applications that can be modeled as mixed-integer nonlinear programming problems.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Variance decomposition process"

1

Yang, Hang, Alex Gorodetsky, Yuji Fujii, and Kon-Well Wang. "Multifidelity Uncertainty Quantification for Online Simulations of Automotive Propulsion Systems." In ASME 2021 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2021. http://dx.doi.org/10.1115/detc2021-67585.

Full text
Abstract:
The increasing complexity and demanding performance requirement of modern automotive propulsion systems necessitate more intelligent and robust predictive controls. Due to the significant uncertainties from both unavoidable modeling errors and probabilistic environmental disturbances, the ability to quantify the effect of these uncertainties to the system behaviors is of crucial importance to enable advanced control designs for automotive propulsion systems. Furthermore, the quantification of uncertainty must be computationally efficient such that it can be conducted on board a vehicle in real-time. However, traditional uncertainty quantification methods for complicated nonlinear systems, such as Monte Carlo, often rely on sampling — a computationally prohibitive process for many applications. Previous research has shown promises of using spectral decomposition methods such as generalized Polynomial Chaos to reduce the online computational cost of uncertainty quantification. However, such method suffers from scalability and bias issues. This paper seeks to alleviate these computational bottlenecks by developing a multifidelity uncertainty quantification method that combines low-order generalized Polynomial Chaos with Monte Carlo estimation via Control Variates. Results on the mean and variance estimates of the axle shaft torque show that the proposed method can correct the bias of low-order Polynomial Chaos expansions while significantly reducing variance compared to the conventional Monte Carlo.
APA, Harvard, Vancouver, ISO, and other styles
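The combination sketched in the abstract above, a cheap low-order surrogate used as a control variate for a Monte Carlo estimate of the high-fidelity output, reduces to the standard control-variate formula. The toy functions below are placeholders for the propulsion-system model and its polynomial chaos surrogate; they are not the paper's models.

```python
import numpy as np

rng = np.random.default_rng(8)

def high_fidelity(x):          # placeholder for the expensive model (e.g., axle shaft torque)
    return np.sin(x) + 0.1 * x ** 2

def low_order_surrogate(x):    # placeholder for a low-order polynomial chaos expansion
    return x + 0.1 * x ** 2    # sin(x) ~ x near 0: biased but cheap and highly correlated

# Small paired sample where both models are evaluated
x = rng.normal(0.0, 0.5, 200)
y_hf, y_lf = high_fidelity(x), low_order_surrogate(x)

# Surrogate mean from a large cheap sample (could also be computed analytically)
mu_lf = low_order_surrogate(rng.normal(0.0, 0.5, 200_000)).mean()

# Optimal control-variate coefficient and corrected estimator
beta = np.cov(y_hf, y_lf)[0, 1] / np.var(y_lf, ddof=1)
est_mc = y_hf.mean()
est_cv = est_mc - beta * (y_lf.mean() - mu_lf)
print(est_mc, est_cv)          # the CV estimate has lower variance when the models correlate
```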
2

Rezaian, Elnaz, Rajarshi Biswas, and Karthik Duraisamy. "Non-Intrusive Parametric Reduced Order Models For The Prediction Of Internal And External Flow Fields Over Automobile Geometries." In ASME 2021 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2021. http://dx.doi.org/10.1115/imece2021-71728.

Full text
Abstract:
In many-query applications such as design, optimization, and model-predictive control, reduced order models (ROMs) have the potential to significantly decrease the computational cost. In contrast to intrusive projection-based ROMs which require direct access to the high-fidelity model (HFM) operators, non-intrusive ROMs can be constructed in a data-driven fashion using the input-output data generated by the HFM. When used in the prediction of the system response in unseen parameter regimes, however, generalization capabilities have to be carefully assessed. In this study, we pursue a two-step approach to construct non-intrusive parametric ROMs: The first step involves the extraction of low-dimensional latent spaces using proper-orthogonal decomposition (POD) and convolutional neural network-based autoencoders (CNN-AE); and the second step uses regression for the latent variables in parameter space. We adapt a unique decomposition approach named Split-POD to enrich the low-dimensional subspace of the parametric ROM. The proposed methods are used to predict the flow field over a vehicle geometry as a function of vehicle speed and internal fan speed. Through orders of magnitude reduction in the degrees of freedom, the non-intrusive ROMs enable flow field prediction in real-time and bypass hours of computations by the HFM. The results show that the nonlinear encoding in CNN-AE can enhance ROM predictions given adequate data. POD-based ROMs are more robust and efficient, and in particular more accurate than CNN-AE in under-sampled regions of the parameter space. We further enhance the utility of non-intrusive ROMs by formulating a training process that minimizes the required number of high-fidelity simulations using Gaussian process regression to adaptively sample the input parameter space and optimize the estimated variance of the predictions.
APA, Harvard, Vancouver, ISO, and other styles
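The first step of the non-intrusive ROM described above, extracting a POD basis from snapshots, is a thin SVD of the mean-centred snapshot matrix; the regression of the latent coordinates over the parameter space and the CNN autoencoder alternative are not shown. The dimensions and energy criterion below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)
n_dof, n_snapshots = 5000, 60                      # flow-field DOFs and parameter samples (assumed)
snapshots = rng.normal(size=(n_dof, n_snapshots))  # columns = flow fields at sampled parameters

mean_field = snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(snapshots - mean_field, full_matrices=False)

# Keep enough POD modes to capture 99% of the snapshot energy
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
r = int(np.searchsorted(energy, 0.99)) + 1
basis = U[:, :r]                                   # reduced POD basis

# Latent (modal) coordinates of each snapshot; a regressor (e.g., a Gaussian process)
# would map vehicle/fan speed parameters to these coordinates in the second step.
latent = basis.T @ (snapshots - mean_field)
reconstructed = mean_field + basis @ latent
print(r, np.linalg.norm(snapshots - reconstructed) / np.linalg.norm(snapshots))
```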
3

Sajjad, Farasdaq, Jemi Jaenudin, Steven Chandra, Alvin Wirawan, Annisa Prawesti, M. Gemareksha Muksin, Wisnu Agus Nugroho, Ecep Muhammad Mujib, and Savinatun Naja. "Data-Driven Multi-Asset Optimisation Under Uncertainty: A Case Study Using the New Indonesia's Fiscal Policy." In International Petroleum Technology Conference. IPTC, 2021. http://dx.doi.org/10.2523/iptc-21425-ms.

Full text
Abstract:
Optimizing multiple assets under uncertain techno-economic conditions and tight government policies is challenging. The operator needs to establish flexible Plans of Development (PODs) and set priorities for developing multiple fields. The complexity of production and the profit margin should be evaluated simultaneously. In this work, we present a new workflow to perform such a rigorous optimization under uncertainty using the case study of PHE ONWJ, Indonesia. We begin the workflow by identifying the uncertain parameters and their prior distributions. We classify the parameters into three main groups: operations-related (geological complexity, reserves, current recovery, surface facilities, and technologies), company-policies-related (future exploration plan, margin of profit, and oil/gas price), and government-related (taxes, incentives, and fiscal policies). A unique indexing technique is developed to allow numerical quantification and adapt to dynamic input. We then start the optimization process by constructing a time-dependent surrogate model through training with Monte Carlo sampling, and perform optimization under uncertainty with multiple scenarios. The objective function is the overall Net Present Value (NPV) obtained by developing multiple fields. This work emphasizes the importance of the time-dependent surrogate approach to account for risk in the optimization process. The approach revises the prior distribution into a narrow-variance distribution to support reliable decisions. Global Sensitivity Analysis (GSA) with Sobol decomposition on the posterior distribution and the surrogate provides a ranking of the parameters and a list of heavy hitters. The first output from this workflow is the narrow-variance posterior distribution. This result helps to locate the sweet spots; by analyzing them, the operator can address specific sectors that are critical to the NPV. PHE ONWJ, as the biggest operator in Indonesia, has geologically scattered assets, so this first output is essential. The second output is the list of heavy hitters from GSA. This list is a tool to cluster promising fields for future development and to prioritize their development based on their impact on NPV. Since all risks are carried by the operator under the current Gross Split Contract, this result is advantageous for the decision-making process. Overall, we introduce a new approach to perform time-dependent, multi-asset optimization under uncertainty, a workflow that helps operators make robust decisions after considering the associated risks.
APA, Harvard, Vancouver, ISO, and other styles
4

Moghiman, M., M. Javadi, M. H. Raad, N. Hosseini, and M. Soleimani. "The Effect of H2S on Production of Carbon Black From Sub-Quality Natural Gas." In ASME 2008 International Mechanical Engineering Congress and Exposition. ASMEDC, 2008. http://dx.doi.org/10.1115/imece2008-69053.

Full text
Abstract:
The objective of this paper is a computational investigation of carbon black production through thermal decomposition of waste gases containing CH4 and H2S, without requiring an H2S separation process. The chemical reaction model, which involves solid carbon, sulfur compounds and precursor species for carbon black formation, is based on an assumed Probability Density Function (PDF) parameterized by the mean and variance of the mixture fraction and a β-PDF shape. Soot formation is modeled using the soot particle number density and the mass density based on acetylene concentrations. The effects of feedstock mass flow rate and reactor temperature on carbon black, soot, CO, S2, SO2, COS and CS2 formation are investigated. The results show that the major factor influencing CH4 and H2S conversions is reactor temperature. At any temperature, H2S conversion is less than that of CH4. For temperatures higher than 1100 K, the reactor CH4 conversion reaches 100%. At temperatures below 1300 K, H2S conversion is very low, usually less than 5%. For temperatures higher than 1300 K, H2S conversion increases sharply with temperature, and the major products of the process are S2 and SO2, while COS and CS2 are minor products. The results also show that in the production of carbon black from sub-quality natural gas, the parallel process of carbon monoxide formation plays a very significant role. For lower values of feedstock flow rate, CH4 mostly burns to CO and, consequently, the production of carbon black is low.
APA, Harvard, Vancouver, ISO, and other styles
5

Yu, Yu, Jiejuan Tong, Tao Liu, Jun Zhao, and Aling Zhang. "A Model for Passive System Reliability Analysis." In 18th International Conference on Nuclear Engineering. ASMEDC, 2010. http://dx.doi.org/10.1115/icone18-29256.

Full text
Abstract:
The important feature of a passive system, and its basic difference from an active system, is that the nuclear plant can be driven to a safe state or shutdown by the inherent safety characteristics of the reactor and by physical principles, independent of human intervention or the operation of outside equipment, when the reactor is in an abnormal condition. Passive systems are therefore widely used in new-generation nuclear power plants (NPPs) such as high-temperature gas-cooled reactors and AP1000 NPPs. However, physical process failure becomes one of the important contributors to system operation failure, since system operation depends on natural forces rather than on outside power, and both the driving force and the resistance are influenced by many uncertain factors. Finding the key factors for system operation and analyzing the evolution of the passive system in combination with the accident scenarios are therefore the main steps in the analysis of passive system reliability, and an important part of the probabilistic safety assessment (PSA) of a nuclear plant with a passive design. In this paper, a model for analyzing passive system reliability is described, in which variance decomposition and analytic hierarchy process (AHP) methods are used to select the key factors for system operation, and Monte Carlo simulation and dynamic event tree methods are used to evaluate the system reliability according to the accident scenarios. Finally, the Passive Residual Heat Removal System of the High Temperature Gas-Cooled Reactor (HTGR) is analyzed as an example.
APA, Harvard, Vancouver, ISO, and other styles
6

Li, Shuai, Xiaofeng Zhou, and Haibo Shi. "Fault Detection using Common and Specific Variable Decomposition for Nonlinear Multimode Process." In 2020 Chinese Automation Congress (CAC). IEEE, 2020. http://dx.doi.org/10.1109/cac51589.2020.9327643.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Liu, Yu, Xiaolei Yin, Paul Arendt, Wei Chen, and Hong-Zhong Huang. "An Extended Hierarchical Statistical Sensitivity Analysis Method for Multilevel Systems With Shared Variables." In ASME 2009 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2009. http://dx.doi.org/10.1115/detc2009-87434.

Full text
Abstract:
Statistical sensitivity analysis (SSA) is an effective methodology to examine the impact of variations in model inputs on the variations in model outputs at either a prior or posterior design stage. A hierarchical statistical sensitivity analysis (HSSA) method has been proposed in literature to incorporate SSA in designing complex engineering systems with a hierarchical structure. However, the original HSSA method only deals with hierarchical systems with independent subsystems. Due to the existence of shared variables at lower levels, responses from lower level submodels that act as inputs to a higher level subsystem are both functionally and statistically dependent. For designing engineering systems with dependent subsystem responses, an extended hierarchical statistical sensitivity analysis (EHSSA) method is developed in this work to provide a ranking order based on the impact of lower level model inputs on the top level system performance. A top-down strategy, same as in the original HSSA method, is employed to direct SSA from the top level to lower levels. To overcome the limitation of the original HSSA method, the concept of a subset SSA is utilized to group a set of dependent responses from lower level submodels in the upper level SSA. For variance decomposition at a lower level, the covariance of dependent responses is decomposed into the contributions from individual shared variables. To estimate the global impact of lower level inputs on the top level output, an extended aggregation formulation is developed to integrate local submodel SSA results. The importance sampling technique is also introduced to re-use the existing data from submodels SSA during the aggregation process. The effectiveness of the proposed EHSSA method is illustrated via a mathematical example and a multiscale design problem.
APA, Harvard, Vancouver, ISO, and other styles
8

Dewa, Gilang R. R., Awang N. I. Wardana, and Singgih Hawibowo. "Linear Oscillation Diagnosis of Process Variable in Control Loop Based on Variational Mode Decomposition." In 2018 4th International Conference on Science and Technology (ICST). IEEE, 2018. http://dx.doi.org/10.1109/icstc.2018.8528654.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Safari, Amir, Kambiz H. Hajikolaei, Hirpa G. Lemu, G. Gary Wang, and M. Assadi. "Decomposition of High-Dimensional Shape Optimization Problems Through Quantifying Design Variable Correlation." In ASME 2013 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2013. http://dx.doi.org/10.1115/imece2013-64792.

Full text
Abstract:
This paper proposes a novel strategy for the shape optimization procedures using a recently developed metamodel-based decomposition algorithm for High-dimensional, Expensive and Black-box (HEB) design problems. A metamodel named High Dimensional Model Representation (HDMR) is used for decomposition of design variables in a complex aerodynamic profile optimization process as a HEB design problem. The approach uncovers and quantifies design variable correlations. Weak correlations are neglected and strong ones are kept for grouping. In this way, the vast search space is decomposed to small ones, and the large-scale CFD simulation based optimization is replaced by smaller-scale sub-problems. Though a typical gas turbine compressor airfoil shape has been selected as the case study in this paper, the methodology is introduced as a general procedure for shape optimization problems. The obtained results from the decomposition also show good agreement with the aerodynamics of such turbomachinery airfoils and found promising.
APA, Harvard, Vancouver, ISO, and other styles
10

Hajikolaei, Kambiz Haji, George Cheng, and Gary Wang. "Optimization on Metamodeling-Supported Iterative Decomposition." In ASME 2015 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2015. http://dx.doi.org/10.1115/detc2015-47525.

Full text
Abstract:
The recently developed metamodel-based decomposition strategy relies on quantifying the variable correlations of black-box functions so that high dimensional problems are decomposed to smaller sub-problems, before performing optimization. Such a two-step method may miss the global optimum due to its rigidity or requires extra expensive sample points for ensuring adequate decomposition. This work develops a strategy to iteratively decompose high dimensional problems within the optimization process. The sample points used during the optimization are reused to build a metamodel called PCA-HDMR for quantifying the intensities of variable correlations by sensitivity analysis. At every iteration, the predicted intensities of the correlations are updated based on all the evaluated points and a new decomposition scheme is suggested by omitting the weak correlations. Optimization is performed on the iteratively updated sub-problems from decomposition. The proposed strategy is applied for optimization of different benchmark and engineering problems and results are compared to direct optimization of the undecomposed problems using Trust Region Mode Pursuing Sampling method (TRMPS), Genetic Algorithm (GA), and Dividing RECTangles (DIRECT). The results show that except for the category of un-decomposable problems with all or lots of strong (i. e., important) correlations, the proposed strategy effectively improves the accuracy of the optimization results. The advantages of the new strategy in comparison with the previous methods are also discussed.
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Variance decomposition process"

1

Chapman, Ray, Phu Luong, Sung-Chan Kim, and Earl Hayter. Development of three-dimensional wetting and drying algorithm for the Geophysical Scale Transport Multi-Block Hydrodynamic Sediment and Water Quality Transport Modeling System (GSMB). Engineer Research and Development Center (U.S.), July 2021. http://dx.doi.org/10.21079/11681/41085.

Full text
Abstract:
The Environmental Laboratory (EL) and the Coastal and Hydraulics Laboratory (CHL) have jointly completed a number of large-scale hydrodynamic, sediment and water quality transport studies. EL and CHL have successfully executed these studies utilizing the Geophysical Scale Transport Modeling System (GSMB). The model framework of GSMB is composed of multiple process models as shown in Figure 1. Figure 1 shows that the United States Army Corps of Engineers (USACE) accepted wave, hydrodynamic, sediment and water quality transport models are directly and indirectly linked within the GSMB framework. The components of GSMB are the two-dimensional (2D) deep-water wave action model (WAM) (Komen et al. 1994, Jensen et al. 2012), data from meteorological model (MET) (e.g., Saha et al. 2010 - http://journals.ametsoc.org/doi/pdf/10.1175/2010BAMS3001.1), shallow water wave models (STWAVE) (Smith et al. 1999), Coastal Modeling System wave (CMS-WAVE) (Lin et al. 2008), the large-scale, unstructured two-dimensional Advanced Circulation (2D ADCIRC) hydrodynamic model (http://www.adcirc.org), and the regional scale models, Curvilinear Hydrodynamics in three dimensions-Multi-Block (CH3D-MB) (Luong and Chapman 2009), which is the multi-block (MB) version of Curvilinear Hydrodynamics in three-dimensions-Waterways Experiments Station (CH3D-WES) (Chapman et al. 1996, Chapman et al. 2009), MB CH3D-SEDZLJ sediment transport model (Hayter et al. 2012), and CE-QUAL Management - ICM water quality model (Bunch et al. 2003, Cerco and Cole 1994). Task 1 of the DOER project, “Modeling Transport in Wetting/Drying and Vegetated Regions,” is to implement and test three-dimensional (3D) wetting and drying (W/D) within GSMB. This technical note describes the methods and results of Task 1. The original W/D routines were restricted to a single vertical layer or depth-averaged simulations. In order to retain the required 3D or multi-layer capability of MB-CH3D, a multi-block version with variable block layers was developed (Chapman and Luong 2009). This approach requires a combination of grid decomposition, MB, and Message Passing Interface (MPI) communication (Snir et al. 1998). The MB single layer W/D has demonstrated itself as an effective tool in hyper-tide environments, such as Cook Inlet, Alaska (Hayter et al. 2012). The code modifications, implementation, and testing of a fully 3D W/D are described in the following sections of this technical note.
APA, Harvard, Vancouver, ISO, and other styles
