Academic literature on the topic 'Source term estimation'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Source term estimation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Source term estimation"

1

Long, Kerrie J., Sue Ellen Haupt, and George S. Young. "Assessing sensitivity of source term estimation." Atmospheric Environment 44, no. 12 (April 2010): 1558–67. http://dx.doi.org/10.1016/j.atmosenv.2010.01.003.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Gudiksen, P. H., T. F. Harvey, and R. Lange. "Chernobyl Source Term, Atmospheric Dispersion, and Dose Estimation." Health Physics 57, no. 5 (November 1989): 697–706. http://dx.doi.org/10.1097/00004032-198911000-00001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Bushe, W. Kendal, and Helfried Steiner. "Laminar flamelet decomposition for conditional source-term estimation." Physics of Fluids 15, no. 6 (2003): 1564. http://dx.doi.org/10.1063/1.1569483.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Loewenthal, Dan, and Vladimir Shtivelman. "Source signature estimation using fictitious source and reflector." GEOPHYSICS 54, no. 7 (July 1989): 916–20. http://dx.doi.org/10.1190/1.1442721.

Full text
Abstract:
Source wavelet estimation is an important step in processing and interpreting seismic data. In the context of this work, the term (source wavelet) includes the pure source signature (the source response measured in a homogeneous medium) along with certain model‐related events (such as the ghost and interbed reflections). An estimate of the source wavelet can be used to increase the resolution of seismic data by signature deconvolution, deghosting, and dereverberation. However, pure source signature determination is of particular importance as a first step in direct inversion schemes, as demonstrated by Bube and Burridge (1983) and by Foster and Carrion (1985).
APA, Harvard, Vancouver, ISO, and other styles
5

Lu, Jinshu, Mengqing Huang, Wenfeng Wu, Yonghui Wei, and Chong Liu. "Application and Improvement of the Particle Swarm Optimization Algorithm in Source-Term Estimations for Hazardous Release." Atmosphere 14, no. 7 (July 19, 2023): 1168. http://dx.doi.org/10.3390/atmos14071168.

Full text
Abstract:
Hazardous gas releases can pose severe hazards to the ecological environment and to public safety. Source-term estimation for hazardous gas leakage plays a crucial role in emergency response and safety management. Nevertheless, the precision of the forward diffusion model and the prevailing atmospheric diffusion conditions strongly affect the performance of source-term estimation methods. This work proposes a particle swarm optimization (PSO) algorithm coupled with the Gaussian dispersion model for estimating leakage source parameters. The method is validated against experimental cases from the Prairie Grass field dispersion experiment spanning various atmospheric stability classes, and the results demonstrate its effectiveness. The effects of atmospheric diffusion conditions on the estimation outcomes are also investigated: estimation performance under extreme atmospheric diffusion conditions is worse than under other conditions. Accordingly, the Gaussian dispersion model is improved by adding linear and polynomial correction coefficients to address its inapplicability under extreme diffusion conditions. Finally, the PSO method coupled with the improved models is applied to source-term parameter estimation. The findings demonstrate that the estimation performance of the PSO method coupled with the improved models is significantly better. The estimation performance of the two correction models was also found to differ markedly across atmospheric stability classes. There is no single optimal model; rather, the model can be selected according to the prevailing diffusion conditions to enhance the precision of the estimated source-term parameters.
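As a rough illustration of the coupling this abstract describes, a minimal Python sketch (not the authors' code) is given below: the forward model is the textbook ground-level Gaussian plume formula with reflection, and the wind speed, dispersion coefficients, sensor layout, and PSO hyperparameters are invented for the example; no correction coefficients are included.

```python
import numpy as np

def gaussian_plume(q, x0, y0, sensors, u=3.0, H=2.0):
    """Ground-level concentration from a continuous point source (q [g/s])
    at (x0, y0), release height H, wind speed u along +x. Textbook formula;
    the sigma power laws below are assumed for illustration only."""
    c = np.zeros(len(sensors))
    for i, (x, y) in enumerate(sensors):
        dx, dy = x - x0, y - y0
        if dx <= 0:            # sensors upwind of the source see nothing
            continue
        sy, sz = 0.22 * dx**0.9, 0.20 * dx**0.9   # assumed dispersion coefficients
        c[i] = (q / (2 * np.pi * u * sy * sz)
                * np.exp(-dy**2 / (2 * sy**2)) * 2 * np.exp(-H**2 / (2 * sz**2)))
    return c

def pso_estimate(obs, sensors, n=40, iters=200,
                 bounds=((0.1, 50), (-200, 200), (-200, 200))):
    """Plain PSO minimizing the squared misfit between observed and modeled
    concentrations over the source parameters (q, x0, y0)."""
    rng = np.random.default_rng(0)
    lo, hi = np.array(bounds).T
    pos = rng.uniform(lo, hi, (n, 3))
    vel = np.zeros_like(pos)
    cost = lambda p: np.sum((gaussian_plume(p[0], p[1], p[2], sensors) - obs)**2)
    pbest = pos.copy()
    pcost = np.array([cost(p) for p in pos])
    g = pbest[pcost.argmin()]
    for _ in range(iters):
        r1, r2 = rng.random((2, n, 3))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
        pos = np.clip(pos + vel, lo, hi)
        c = np.array([cost(p) for p in pos])
        improved = c < pcost
        pbest[improved], pcost[improved] = pos[improved], c[improved]
        g = pbest[pcost.argmin()]
    return g  # (q, x0, y0) estimate
```

In practice the cost function would also weight sensors by measurement noise, and the parameter bounds would come from prior knowledge of the site.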
APA, Harvard, Vancouver, ISO, and other styles
6

Jing, Yuanqi, Zhonglin Gu, Fei Li, and Kai Zhang. "Gaseous Pollutent Source Term Estimation Based on Adjoint Probability and Regularization Method." E3S Web of Conferences 356 (2022): 05048. http://dx.doi.org/10.1051/e3sconf/202235605048.

Full text
Abstract:
Fast and accurate identification of source locations and release rates is particularly important for improving indoor air quality and safeguarding people's health and safety. Existing methods based on adjoint probability struggle to distinguish the release rates of dynamic sources, and optimization algorithms based on regularization are limited to analysing only a small amount of potential pollutant source information. This study therefore proposes an algorithm combining adjoint equations and regularization models to identify the location and release intensity of pollutant sources over the entire computational domain of a room. Based on a validated indoor CFD model, we first obtained a series of response matrices corresponding to the sensor positions by solving the adjoint equation, and then used the regularization method and Bayesian inference to infer the release rate and location of a dynamic pollutant source in the room. The results show that the proposed algorithm is a convenient and feasible way to identify the location and intensity of an indoor pollutant source. Compared with the true source intensity, the identified constant source intensity is below the error threshold (10%) at 97.4% of the time nodes, and the identified periodic source is below the error threshold (10%) at 95.4% of the time nodes. This research provides a new method and perspective for the estimation of indoor pollutant source information.
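Stated schematically (our notation, not the paper's), the adjoint solutions provide a response matrix A mapping the release-rate history q to the sensor readings m, and the regularized estimate solves

```latex
\hat{q} \;=\; \arg\min_{q \,\ge\, 0}\; \lVert A q - m \rVert_2^2 \;+\; \lambda^2 \lVert L q \rVert_2^2 ,
```

where L is a smoothing operator (often the identity or a discrete derivative) and the weight lambda is chosen, for example, by the discrepancy principle or the L-curve; Bayesian inference over the candidate locations then completes the identification.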
APA, Harvard, Vancouver, ISO, and other styles
7

Cheng, Kuang, Xiangyu Zhao, Wang Zhou, Yi Cao, Shuang-Hua Yang, and Jianmeng Chen. "Source term estimation with deficient sensors: Traceability and an equivalent source approach." Process Safety and Environmental Protection 152 (August 2021): 131–39. http://dx.doi.org/10.1016/j.psep.2021.05.035.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Nayak, M. K., T. K. Sahu, H. G. Nair, R. V. Nandedkar, Tapas Bandyopadhyay, R. M. Tripathi, P. R. Hannurkar, and D. N. Sharma. "Bremsstrahlung source term estimation for high energy electron accelerators." Radiation Physics and Chemistry 113 (August 2015): 1–5. http://dx.doi.org/10.1016/j.radphyschem.2015.04.004.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Li, Hui, Jianwen Zhang, and Junkai Yi. "Computational source term estimation of the Gaussian puff dispersion." Soft Computing 23, no. 1 (August 4, 2018): 59–75. http://dx.doi.org/10.1007/s00500-018-3440-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Mazzini, Guido, Tadas Kaliatka, Maria Teresa Porfiri, Luigi Antonio Poggi, Andrea Malizia, and Pasqualino Gaudio. "Methodology of the source term estimation for DEMO reactor." Fusion Engineering and Design 124 (November 2017): 1199–202. http://dx.doi.org/10.1016/j.fusengdes.2017.04.101.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Source term estimation"

1

Jin, Bei. "Conditional source-term estimation methods for turbulent reacting flows." Thesis, University of British Columbia, 2007. http://hdl.handle.net/2429/232.

Full text
Abstract:
Conditional Source-term Estimation (CSE) methods are used to obtain chemical closure in turbulent combustion simulation. A Laminar Flamelet Decomposition (LFD) and then a Trajectory Generated Low-Dimensional Manifold (TGLDM) method are combined with CSE in Reynolds-Averaged Navier Stokes (RANS) simulation of non-premixed autoigniting jets. Despite the scatter observed in the experimental data, the predictions of ignition delay from both methods agree reasonably well with the measurements. The discrepancy between predictions of these two methods can be attributed to different ways of generating libraries that contain information of detailed chemical mechanism. The CSE-TGLDM method is recommended for its seemingly better performance and its ability to transition from autoignition to combustion. The effects of fuel composition and injection parameters on ignition delay are studied using the CSE-TGLDM method. The CSE-TGLDM method is then applied in Large Eddy Simulation of a non-premixed, piloted jet flame, Sandia Flame D. The adiabatic CSE-TGLDM method is extended to include radiation by introducing a variable enthalpy defect to parameterize TGLDM manifolds. The results are compared to the adiabatic computation and the experimental data. The prediction of NO formation is improved, though the predictions of temperature and major products show no significant difference from the adiabatic computation due to the weak radiation of the flame. The scalar fields are then extracted and used to predict the mean spectral radiation intensities of the flame. Finally, the application of CSE in turbulent premixed combustion is explored. A product-based progress variable is chosen for conditioning. Presumed Probability Density Function (PDF) models for the progress variable are studied. A modified version of a laminar flame-based PDF model is proposed, which best captures the distribution of the conditional variable among all PDFs under study. A priori tests are performed with the CSE and presumed PDF models. Reaction rates of turbulent premixed flames are closed and compared to the DNS data. The results are promising, suggesting that chemical closure can be achieved in premixed combustion using the CSE method.
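For readers unfamiliar with CSE, its central step can be stated schematically (standard conditional-moment notation, not necessarily the thesis's own): the unconditional mean of a scalar Y is written as an integral of its conditional mean against the presumed PDF of the conditioning variable,

```latex
\overline{Y}(\mathbf{x}, t) \;=\; \int_0^1 \big\langle Y \,\big|\, \zeta \big\rangle \, \tilde{P}(\zeta; \mathbf{x}, t)\, d\zeta ,
```

and the conditional means are recovered by inverting this discretized, ill-posed integral equation over an ensemble of points; the chemical source terms are then closed by evaluating reaction rates at the conditional means, here through the LFD or TGLDM libraries.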
APA, Harvard, Vancouver, ISO, and other styles
2

Salehi, Mohammad Mahdi. "Numerical simulation of turbulent premixed flames with conditional source-term estimation." Thesis, University of British Columbia, 2012. http://hdl.handle.net/2429/42775.

Full text
Abstract:
Conditional Source-term Estimation (CSE) is a closure model for turbulence-chemistry interactions. This model is based on the conditional moment closure hypothesis for the chemical reaction source terms. The conditional scalar field is estimated by solving an integral equation using inverse methods. CSE was originally developed for - and has been used extensively in - non-premixed combustion. This work is the first application of this combustion model to predictive simulations of turbulent premixed flames. The underlying inverse problem is diagnosed with rigorous mathematical tools. CSE is coupled with a Trajectory Generated Low-Dimensional Manifold (TGLDM) model for chemistry. The CSE-TGLDM combustion model is used with both Reynolds-Averaged Navier-Stokes (RANS) and Large-Eddy Simulation (LES) turbulence models to simulate two different turbulent premixed flames. Also in this work, the Presumed Conditional Moment (PCM) turbulent combustion model is employed. This is a simple flamelet model which is used with the Flame Prolongation of ILDM (FPI) chemistry reduction technique. The PCM-FPI approach requires a presumption for the shape of the probability density function of reaction progress variable. Two shapes have been examined: the widely used beta-function and the Modified Laminar Flamelet PDF (MLF-PDF). This model is used in both RANS and large-eddy simulation of a turbulent premixed Bunsen burner. Radial distributions of the calculated temperature field, axial velocity and chemical species mass fraction have been compared with experimental data. This comparison shows that using the MLF-PDF leads to predictions that are similar, and often superior to those obtained using the beta-PDF. Given that the new PDF is based on the actual chemistry - as opposed to the ad hoc nature of the beta-PDF - these results suggest that it is a better choice for the statistical description of the reaction progress variable.
APA, Harvard, Vancouver, ISO, and other styles
3

Nivarti, Girish Venkata. "Combustion modelling in spark-ignition engines using conditional source-term estimation." Thesis, University of British Columbia, 2013. http://hdl.handle.net/2429/44838.

Full text
Abstract:
Conditional Source-term Estimation (CSE) is a chemical closure model for the simulation of turbulent combustion. In this work, CSE has been explored for modelling combustion phenomena in a spark-ignition (SI) engine. In the arbitrarily complex geometries imposed by industrial design, estimation of conditionally averaged scalars is challenging. The key underlying requirement of CSE is that conditionally averaged scalars be calculated within spatially localized sub-domains. A domain partitioning algorithm based on space-filling curves has been developed to construct localized ensembles of points necessary to retain the validity of CSE. Algorithms have been developed to evenly distribute points to the maximum extent possible while maintaining spatial locality. A metric has been defined to estimate relative inter-partition contact as an indicator of communication in parallel computing architectures. Domain partitioning tests conducted on relevant geometries highlight the performance of the method as an unsupervised and computationally inexpensive domain partitioning tool. In addition to involving complex geometries, SI engines pose the challenge of accurately modelling the transient ignition process. Combustion in a homogeneous-charge natural gas fuelled SI engine with a relatively simple chamber geometry has been simulated using an empirical model for ignition. An oxygen based reaction progress variable is employed as the conditioning variable and its stochastic behaviour is approximated by a presumed probability density function (PDF). A trajectory generated low-dimensional manifold has been used to tabulate chemistry in a hyper-dimensional space described by the reaction progress variable, temperature and pressure. The estimates of pressure trace and pollutant emission trends obtained using CSE accurately match experimental measurements.
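The space-filling-curve partitioning mentioned in this abstract can be illustrated with a toy Morton (Z-order) sketch in Python; this is a generic illustration under assumed 2-D unit-square coordinates, not the thesis's algorithm:

```python
import numpy as np

def morton_key(ix, iy, bits=16):
    """Interleave the bits of integer grid coordinates (ix, iy) to get a
    Z-order (Morton) key; nearby keys tend to be nearby in space."""
    key = 0
    for b in range(bits):
        key |= ((ix >> b) & 1) << (2 * b) | ((iy >> b) & 1) << (2 * b + 1)
    return key

def partition(points, n_parts, bits=16):
    """Sort points along the Z-curve and split them into equally sized,
    spatially localized ensembles (lists of point indices)."""
    scaled = np.clip((points * (2**bits - 1)).astype(int), 0, 2**bits - 1)
    keys = np.array([morton_key(ix, iy, bits) for ix, iy in scaled])
    order = np.argsort(keys)
    return np.array_split(order, n_parts)

# toy usage: 10,000 random points in the unit square, 8 partitions
pts = np.random.default_rng(1).random((10000, 2))
parts = partition(pts, 8)
```

Because contiguous runs along the Z-curve are spatially compact, each chunk forms a localized ensemble of the kind CSE requires, and the split is very cheap compared with graph-based partitioners.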
APA, Harvard, Vancouver, ISO, and other styles
4

Wang, Mei. "Combustion modeling using conditional source-term estimation with flamelet decomposition and low-dimensional manifolds." Thesis, University of British Columbia, 2006. http://hdl.handle.net/2429/31181.

Full text
Abstract:
Combustion modeling is performed with Conditional Source-term Estimation (CSE) using both Laminar Flamelet Decomposition (LFD) and Low-dimensional Manifolds. CSE with Laminar Flamelet Decomposition (LFD) is used in the Large Eddy Simulation (LES) context to study the non-premixed Sandia D-flame. The results show that the flame temperature and major species are well predicted with both steady and unsteady flamelet libraries. A mixed library composed of steady and unsteady flamelet solutions is needed to get a good prediction of NO. That the LFD model allows for tuning of the results is found to be a significant drawback to this approach. CSE is also used with a Trajectory Generated Low-dimensional Manifold (TGLDM) to simulate the Sandia D-flame. Both GRI-Mech 3.0 and GRI-Mech 2.11 are found to be able to predict the temperature and major species well. However, only GRI-Mech 2.11 gives a good prediction of NO. That GRI-Mech 3.0 failed to give a good prediction of NO is in agreement with the findings of others in the literature. The Stochastic Particle Model (SPM) is used to extend the TGLDM to low temperatures where the original continuum TGLDM failed. A new method for generating a trajectory for the TGLDM by averaging different realizations together is proposed. The new TGLDM is used in simulations of a premixed laminar flame and a perfectly stirred reactor. The results show that the new TGLDM significantly improves the prediction. Finally, a time filter is applied to individual SPM realizations to eliminate the small time scales. These filtered realizations are tabulated into TGLDMs, which are then used to predict the autoignition delay time of a turbulent methane/air jet in RANS using CSE. The results are compared with shock tube experimental data. The TGLDMs incorporating SPM results are able to predict a certain degree of fluctuations in the autoignition delay time, but the magnitude is smaller than is seen in the experiments. This suggests that fluctuations in the ignition delay are at least in part due to turbulent fluctuations, which might be better predicted with LES.
APA, Harvard, Vancouver, ISO, and other styles
5

Brännvall, Tobias. "Source Term Estimation in the Atmospheric Boundary Layer : Using the adjoint of the Reynolds Averaged Scalar Transport equation." Thesis, Umeå universitet, Institutionen för fysik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-103671.

Full text
Abstract:
This work evaluates whether Reynolds-averaged Computational Fluid Dynamics can be used to find the source of a measured gas from real field measurements. The method works via the adjoint of the Reynolds Averaged Scalar Transport equation, explained and derived herein. Since the inverse model is only as good as the forward model, forward runs are made first to evaluate the turbulence model. Reynolds Averaged Navier-Stokes is solved in a domain containing 4 cubes in a 2x2 grid, generating a velocity field for that domain. The turbulence model in question is a union of two modifications to the standard two-equation k-ε model, one to capture blunt-body turbulence and one to model the atmospheric boundary layer. This field is then inserted into the Reynolds Averaged Scalar Transport equation, and the simulation is compared to data from the Environmental Flow (EnFlo) wind tunnel in Surrey. Finally, the adjoint scalar transport equation is solved, both for synthetic data generated in the forward run and for the data from EnFlo. It was discovered that the turbulent Schmidt number plays a major role in capturing the dispersed gas; three different Schmidt numbers were tested: the standard 0.7, the unconventionally low 0.3, and a height-dependent Schmidt number. The widely accepted value of 0.7 did not capture the dispersion at all and gave a large model error, so the adjoint scalar transport was solved for 0.3 and for the height-dependent Schmidt number. The interaction between the measurements, the real source strength (which is not used in the adjoint equation but is needed to find the source), and the location of the source is intricate indeed: overestimation and underestimation in the forward model may cancel out, allowing the correct source to be found with the correct strength. It is found that Reynolds-averaged computational fluid dynamics may prove useful in source term estimation.
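The forward/adjoint pair underlying the method can be written schematically (standard form; the notation is ours, not the thesis's):

```latex
\frac{\partial \bar{c}}{\partial t} + \bar{u}_j \frac{\partial \bar{c}}{\partial x_j}
 - \frac{\partial}{\partial x_j}\!\left(\frac{\nu_t}{Sc_t}\,\frac{\partial \bar{c}}{\partial x_j}\right) = S,
\qquad
 -\frac{\partial c^{*}}{\partial t} - \bar{u}_j \frac{\partial c^{*}}{\partial x_j}
 - \frac{\partial}{\partial x_j}\!\left(\frac{\nu_t}{Sc_t}\,\frac{\partial c^{*}}{\partial x_j}\right)
 = \sum_i \delta(\mathbf{x}-\mathbf{x}_i),
```

where the adjoint variable c* is driven by unit sources at the sensor locations x_i and is transported against the flow; the eddy diffusivity is nu_t/Sc_t, which is why the choice of turbulent Schmidt number (0.7, 0.3, or height-dependent) matters so much in the results above.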
APA, Harvard, Vancouver, ISO, and other styles
6

Tsui, Hong P. "Turbulent premixed combustion simulation with Conditional Source-term Estimation and Linear-Eddy Model formulated PDF and SDR models." Thesis, University of British Columbia, 2016. http://hdl.handle.net/2429/60295.

Full text
Abstract:
Computational fluid dynamics (CFD) is indispensable in the development of complex engines due to its low cost and time requirement compared to experiments. Nevertheless, because of the strong coupling between turbulence and chemistry in premixed flames, the prediction of chemical reaction source terms continues to be a modelling challenge. This work focuses on the improvement of turbulent premixed combustion simulation strategies requiring the use of presumed probability density function (PDF) models. The study begins with the development of a new PDF model that includes the effect of turbulence, achieved by the implementation of the Linear-Eddy Model (LEM). Comparison with experimental burners reveals that the LEM PDF can capture the general PDF shapes for methane-air combustion under atmospheric conditions with greater accuracy than other presumed PDF models. The LEM is additionally used to formulate a new, pseudo-turbulent scalar dissipation rate (SDR) model. Conditional Source-term Estimation (CSE) is implemented in the Large Eddy Simulation (LES) of the Gülder burner as the closure model for the chemistry-turbulence interactions. To accommodate the increasingly parallel computational environments in clusters, the CSE combustion module has been parallelised and optimised. The CSE ensembles can now dynamically adapt to the changing flame distributions by shifting their spatial boundaries and are no longer confined to pre-allocated regions in the simulation domain. Further, the inversion calculation is now computed in parallel using a modified version of an established iterative solver, the Least-Square QR-factorisation (LSQR). The revised version of CSE demonstrates a significant reduction in computational requirement, of approximately 50%, while producing similar solutions to previous implementations. The LEM formulated PDF and SDR models are subsequently implemented in conjunction with the optimised version of CSE for the LES of a premixed methane-air flame operating in the thin reaction zone. Comparison with experimental measurements of temperature reveals that the LES results are very comparable in terms of flame height and distribution. This outcome is encouraging, as this work represents a significant step in the right direction towards a complete combustion simulation strategy that can accurately predict flame characteristics in the absence of ad hoc parameters.
APA, Harvard, Vancouver, ISO, and other styles
7

Lopez Ferber, Roman. "Approches RBF-FD pour la modélisation de la pollution atmosphérique urbaine et l'estimation de sources." Electronic Thesis or Diss., Université Grenoble Alpes, 2024. http://www.theses.fr/2024GRALT006.

Full text
Abstract:
Since the industrial era, cities have been affected by air pollution due to the density of industry, vehicle traffic and combustion heating appliances. Urban air pollution has health consequences that are of increasing concern to both public authorities and the general public; it can aggravate asthma and cardiovascular problems. The aim of this thesis is to locate and quantify sources of urban pollution using a dense network of noisy measurements. We have chosen to develop methods for estimating pollution sources based on physical models of pollutant dispersion, so the estimation is constrained by knowledge of the physics of the dispersion phenomenon. This thesis therefore focuses on the numerical modelling of pollutant dispersion in an urban environment and on the estimation of source terms. Because of the many constraints imposed on pollutant flows by urban buildings, the physics of dispersion is represented by computationally expensive numerical models. We have developed a numerical dispersion model based on the Finite Difference method supported by Radial Basis Functions (RBF-FD). These approaches are known to be computationally frugal and suitable for handling simulation domains with complex geometries. Our RBF-FD model can handle both two-dimensional (2D) and three-dimensional (3D) problems. We compared this model with a 2D analytical model, and qualitatively compared our 3D model with a reference numerical model. Source estimation experiments were then carried out. They use numerous noisy measurements to estimate an arbitrary source term over the entire simulation domain. The various studies rely on twin experiments: we ourselves generate measurements simulated by a numerical model and evaluate the performance of the estimates. After testing a machine-learning approach on a one-dimensional steady-state case, we tested source term estimation methods on three-dimensional steady-state and transient cases, considering geometries without and with obstacles. We tested estimates using an original adjoint method, then an original estimation method inspired by physics-informed machine learning (PIML), and finally a Kalman filter. The PIML-inspired approach, so far tested in a stationary regime, produces an estimation quality comparable to that of the Kalman filter (where the latter considers a transient dispersion regime with a stationary source). The PIML-inspired approach directly exploits the frugality of the RBF-FD forward model, which makes it a promising method for source estimation over large computational domains.
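As a pointer to how RBF-FD works in practice, the sketch below computes Laplacian stencil weights at one node using a cubic polyharmonic spline with linear polynomial augmentation; the kernel, stencil size, and 2-D setting are assumptions made for illustration, not details taken from the thesis:

```python
import numpy as np
from scipy.spatial import cKDTree

def rbf_fd_laplacian_weights(nodes, i, k=15):
    """Stencil weights w such that sum_j w[j] * u(nodes[nbr[j]]) approximates
    Laplacian(u) at nodes[i], using the k nearest neighbours, the cubic
    polyharmonic spline phi(r) = r^3, and linear polynomial augmentation
    (2-D nodes assumed)."""
    tree = cKDTree(nodes)
    _, nbr = tree.query(nodes[i], k=k)
    x = nodes[nbr] - nodes[i]                      # local coordinates, center at 0
    r = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    A = r**3                                       # phi between stencil nodes
    P = np.hstack([np.ones((k, 1)), x])            # polynomial block: 1, x, y
    M = np.block([[A, P], [P.T, np.zeros((3, 3))]])
    d = np.linalg.norm(x, axis=1)
    # Laplacian of r^3 in 2-D is 9r; Laplacian of the linear polynomials is 0
    rhs = np.concatenate([9.0 * d, np.zeros(3)])
    w = np.linalg.solve(M, rhs)[:k]
    return nbr, w
```

Assembling such per-node weights into a sparse matrix yields a discrete transport operator of the kind used in both the forward dispersion model and the estimation methods built on top of it.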
APA, Harvard, Vancouver, ISO, and other styles
8

Rajaona, Harizo. "Inférence bayésienne adaptative pour la reconstruction de source en dispersion atmosphérique." Thesis, Lille 1, 2016. http://www.theses.fr/2016LIL10120/document.

Full text
Abstract:
In atmospheric physics, reconstructing a pollution source is a challenging but important problem: it provides better input parameters to dispersion models and gives useful information to first-responder teams in case of an accidental toxic release. Various methods already exist, but using them requires a significant amount of computational resources, especially as the accuracy of the dispersion model increases. A minimal degree of precision in these models remains necessary, particularly in urban scenarios where the presence of obstacles and non-stationary meteorology have to be taken into account. One must also account for all sources of uncertainty, in both the observations and the estimation. The topic of this thesis is the construction of a source term estimation method based on adaptive Bayesian inference and Monte Carlo methods. First, we describe the context of the problem and the existing methods. Next, we go into more detail on the Bayesian formulation, focusing on adaptive importance sampling methods, especially the AMIS algorithm. The third chapter presents an application of the AMIS algorithm to an experimental case study and illustrates the mechanisms behind the estimation process that provides the posterior density of the source parameters. Finally, the fourth chapter presents an improvement in how the dispersion computations are processed, allowing a considerable gain in computation time and making it possible to use a more complex dispersion model in both rural and urban use cases.
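In the spirit of the adaptive importance sampling described above, a bare-bones Gaussian-proposal sketch might look as follows. This is a deliberately simplified illustration, not the thesis's implementation: true AMIS also recycles all past draws with deterministic-mixture weights, which is omitted here.

```python
import numpy as np

def adaptive_is(log_post, mu0, cov0, n_iter=10, n_draw=500, rng=None):
    """Simplified adaptive importance sampler with a Gaussian proposal whose
    moments are refitted to the weighted draws each iteration. 'log_post' is
    the unnormalized log-posterior of the source parameters (e.g. release
    rate and location) given the sensor data."""
    rng = rng or np.random.default_rng(0)
    mu, cov = np.asarray(mu0, float), np.asarray(cov0, float)
    for _ in range(n_iter):
        draws = rng.multivariate_normal(mu, cov, n_draw)
        diff = draws - mu
        sol = np.linalg.solve(cov, diff.T).T
        # Gaussian log proposal density, up to an additive constant
        log_q = -0.5 * np.einsum('ij,ij->i', diff, sol) \
                - 0.5 * np.linalg.slogdet(cov)[1]
        log_w = np.array([log_post(t) for t in draws]) - log_q
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        mu = w @ draws                               # refit proposal mean
        centered = draws - mu
        cov = (centered * w[:, None]).T @ centered + 1e-8 * np.eye(len(mu))
    return draws, w                                  # weighted posterior sample
```

Here log_post would wrap a dispersion-model run for each candidate source, which is exactly where the cost of the method lies and why the thesis's final chapter focuses on speeding up the dispersion computations.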
APA, Harvard, Vancouver, ISO, and other styles
9

Nguyen, Thanh Don. "Impact de la résolution et de la précision de la topographie sur la modélisation de la dynamique d’invasion d’une crue en plaine inondable." Thesis, Toulouse, INPT, 2012. http://www.theses.fr/2012INPT0093/document.

Full text
Abstract:
We analyze in this thesis various aspects associated with the modeling of free surface flows in the shallow water approximation. We first study the two-dimensional Saint-Venant equations and their resolution with the finite volume method, focusing in particular on the hyperbolic and conservative aspects. These schemes can handle stationary equilibria and wet/dry interfaces, and can model subcritical, transcritical and supercritical flows. We then present the theory of variational data assimilation adapted to this kind of flow. Its application through sensitivity studies is fully discussed in the context of free surface hydraulics. After this theoretical part, we qualify the numerical methods implemented in the DassFlow code, developed at the University of Toulouse, mainly at IMT but also at IMFT. This code solves the Shallow Water equations by a finite volume method and is validated by comparison with analytical solutions for standard test cases. These results are also compared with another two-dimensional finite element free surface hydraulics code, Telemac 2D. A notable feature of DassFlow is that it allows variational data assimilation via the adjoint method for calculating the gradient of the cost function. The adjoint code was obtained using the automatic differentiation tool Tapenade (INRIA). We then test, on a real and hydraulically complex case, different qualities of Digital Elevation Models (DEM) and of river bed bathymetry. This information is provided either by a conventional database such as IGN or by very high resolution LIDAR data. The respective influences of bathymetry, mesh size and choice of code on the flooding dynamics are compared in detail. Finally, we carry out sensitivity mapping studies on the parameters of the DassFlow model. These maps show the respective influence of the different parameters and of the location of virtual measurement points. The optimal location of these points is necessary for efficient data assimilation in the future.
APA, Harvard, Vancouver, ISO, and other styles
10

Long, Peter Vincent. "Estimating the long-term health effects associated with health insurance and usual source of care at the population level." Diss., Restricted to subscribing institutions, 2008. http://proquest.umi.com/pqdweb?did=1779835391&sid=18&Fmt=2&clientId=48051&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Source term estimation"

1

Sjoreen, A. L. Source term estimation using MENU-TACT. Washington, DC: Division of Operational Assessment, Office for Analysis and Evaluation of Operational Data, U.S. Nuclear Regulatory Commission, 1987.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

U.S. Nuclear Regulatory Commission, Office of Nuclear Regulatory Research, Accident Source Term Program Office. Reassessment of the technical bases for estimating source terms: Draft report for comment. Washington, D.C.: Accident Source Term Program Office, Office of Nuclear Regulatory Research, U.S. Nuclear Regulatory Commission, 1985.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Source term estimation during incident response to severe nuclear power plant accidents. Washington, DC: Division of Operational Assessment, Office for Analysis and Evaluation of Operational Data, U.S. Nuclear Regulatory Commission, 1988.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Reassessment of the technical bases for estimating source terms: Final report. Washington, DC: Office of Nuclear Regulatory Research, U.S. Nuclear Regulatory Commission, 1986.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Clough, P. N., and D. Keir. An Appreciation of the Events, Models and Data Used for LMFBR Radiological Source Term Estimations. European Communities / Union (EUR-OP/OOPEC/OPOCE), 1989.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Wyss, Max. Earthquake Risk Assessment. Oxford University Press, 2017. http://dx.doi.org/10.1093/oxfordhb/9780190676889.013.1.

Full text
Abstract:
This article discusses the importance of assessing and estimating the risk of earthquakes. It begins with an overview of earthquake prediction and relevant terms, namely: earthquake hazard, maximum credible earthquake magnitude, exposure time, earthquake risk, and return time. It then considers data sources for estimating seismic hazard, including catalogs of historic earthquakes, measurements of crustal deformation, and world population data. It also examines ways of estimating seismic risk, such as the use of probabilistic estimates, deterministic estimates, and the concepts of characteristic earthquake, seismic gap, and maximum rupture length. A loss scenario for a possible future earthquake is presented, and the notion of imminent seismic risk is explained. Finally, the chapter addresses errors in seismic risk estimates and how to reduce seismic risk, ethical and moral aspects of seismic risk assessment, and the outlook concerning seismic risk assessment.
APA, Harvard, Vancouver, ISO, and other styles
7

Wyss, Max. Earthquake Risk Assessment. Oxford University Press, 2017. http://dx.doi.org/10.1093/oxfordhb/9780190699420.013.1.

Full text
Abstract:
This article discusses the importance of assessing and estimating the risk of earthquakes. It begins with an overview of earthquake prediction and relevant terms, namely: earthquake hazard, maximum credible earthquake magnitude, exposure time, earthquake risk, and return time. It then considers data sources for estimating seismic hazard, including catalogs of historic earthquakes, measurements of crustal deformation, and world population data. It also examines ways of estimating seismic risk, such as the use of probabilistic estimates, deterministic estimates, and the concepts of characteristic earthquake, seismic gap, and maximum rupture length. A loss scenario for a possible future earthquake is presented, and the notion of imminent seismic risk is explained. Finally, the chapter addresses errors in seismic risk estimates and how to reduce seismic risk, ethical and moral aspects of seismic risk assessment, and the outlook concerning seismic risk assessment.
APA, Harvard, Vancouver, ISO, and other styles
8

Procedures for Conducting Probabilistic Safety Assessments of Nuclear Power Plants (Level 2): Accident Progression, Containment Analysis and Estimation of Accident Source Terms (Safety Series: 50-P-8). International Atomic Energy Agency (IAEA), 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Source term estimation"

1

Cervone, Guido, and Pasquale Franzese. "Source Term Estimation for the 2011 Fukushima Nuclear Accident." In Data Mining for Geoinformatics, 49–64. New York, NY: Springer New York, 2013. http://dx.doi.org/10.1007/978-1-4614-7669-6_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Liu, Yang, Matthew Coombes, and Cunjia Liu. "Consensus-Based Distributed Source Term Estimation with Particle Filter and Gaussian Mixture Model." In ROBOT2022: Fifth Iberian Robotics Conference, 130–41. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-21062-4_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Garbe, Christoph S., Hagen Spies, and Bernd Jähne. "Mixed OLS-TLS for the Estimation of Dynamic Processes with a Linear Source Term." In Lecture Notes in Computer Science, 463–71. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-45783-6_56.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Nagai, Haruyasu, Genki Katata, Hiroaki Terada, and Masamichi Chino. "Source Term Estimation of 131I and 137Cs Discharged from the Fukushima Daiichi Nuclear Power Plant into the Atmosphere." In Radiation Monitoring and Dose Estimation of the Fukushima Nuclear Accident, 155–73. Tokyo: Springer Japan, 2013. http://dx.doi.org/10.1007/978-4-431-54583-5_15.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Kumar, Amit, Vageesh Shukla, Manoj Kansal, and Mukesh Singhal. "PSA Level-2 Study: Estimation of Source Term for Postulated Accidental Release from Indian PHWRs." In Reliability, Safety and Hazard Assessment for Risk-Based Technologies, 15–26. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-9008-1_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Penenko, Vladimir, and Alexander Baklanov. "Methods of Sensitivity Theory and Inverse Modeling for Estimation of Source Term and Risk/Vulnerability Areas." In Computational Science - ICCS 2001, 57–66. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-45718-6_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Soriguera Martí, Francesc. "Short-Term Prediction of Highway Travel Time Using Multiple Data Sources." In Highway Travel Time Estimation With Data Fusion, 157–84. Berlin, Heidelberg: Springer Berlin Heidelberg, 2015. http://dx.doi.org/10.1007/978-3-662-48858-4_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Srati, M., A. Oulmelk, and L. Afraites. "Optimization Method for Estimating the Inverse Source Term in Elliptic Equation." In Springer Proceedings in Mathematics & Statistics, 51–75. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-33069-8_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Zavisca, Michael, Heinrich Kahlert, Mohsen Khatib-Rahbar, Elizabeth Grindon, and Ming Ang. "A Bayesian Network Approach to Accident Management and Estimation of Source Terms for Emergency Planning." In Probabilistic Safety Assessment and Management, 383–88. London: Springer London, 2004. http://dx.doi.org/10.1007/978-0-85729-410-4_62.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Rampazzo, Francesco, Marzia Rango, and Ingmar Weber. "New Migration Data: Challenges and Opportunities." In Handbook of Computational Social Science for Policy, 345–59. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-16624-2_18.

Full text
Abstract:
Migration is hard to measure due to the complexity of the phenomenon and the limitations of traditional data sources. The Digital Revolution has brought opportunities in terms of new data and new methodologies for migration research. Social scientists have started to leverage data from multiple digital data sources, which have huge potential given their timeliness and wide geographic availability. Novel digital data might help in estimating migrant stocks and flows, infer intentions to migrate, and investigate the integration and cultural assimilation of migrants. Moreover, innovative methodologies can help make sense of new and diverse streams of data. For example, Bayesian methods, natural language processing, high-intensity time series, and computational methods might be relevant to study different aspects of migration. Importantly, researchers should consider the ethical implications of using these data sources, as well as the repercussions of their results.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Source term estimation"

1

Robins, P., and P. Thomas. "Non-linear Bayesian CBRN source term estimation." In 2005 7th International Conference on Information Fusion. IEEE, 2005. http://dx.doi.org/10.1109/icif.2005.1591980.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Chichester, David L., James T. Johnson, Scott M. Watson, Scott J. Thompson, Nick R. Mann, and Kevin P. Carney. "Post-blast radiological dispersal device source term estimation." In 2016 IEEE Nuclear Science Symposium, Medical Imaging Conference and Room-Temperature Semiconductor Detector Workshop (NSS/MIC/RTSD). IEEE, 2016. http://dx.doi.org/10.1109/nssmic.2016.8069920.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Rahbar, Faezeh, Ali Marjovi, and Alcherio Martinoli. "An Algorithm for Odor Source Localization based on Source Term Estimation." In 2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019. http://dx.doi.org/10.1109/icra.2019.8793784.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Fanfarillo, Alessandro. "Quantifying Uncertainty in Source Term Estimation with Tensorflow Probability." In 2019 IEEE/ACM HPC for Urgent Decision Making (UrgentHPC). IEEE, 2019. http://dx.doi.org/10.1109/urgenthpc49580.2019.00006.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Wang, Zhi-Pu, and Huai-Ning Wu. "Source Term Estimation with Unknown Number of Sources using Improved Cuckoo Search Algorithm." In 2020 39th Chinese Control Conference (CCC). IEEE, 2020. http://dx.doi.org/10.23919/ccc50068.2020.9189067.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Hu, Hao, Xinwen Dong, Xinpeng Li, Yuhan Xu, Shuhan Zhuang, and Sheng Fang. "Wind Tunnel Validation Study of Joint Estimation Source Term Inversion Method." In ASME 2023 International Conference on Environmental Remediation and Radioactive Waste Management. American Society of Mechanical Engineers, 2023. http://dx.doi.org/10.1115/icem2023-110109.

Full text
Abstract:
Radioactive releases caused by severe accidents pose a great threat to the environment and to living beings along the dispersion path. Source inversion techniques retrieve the corresponding release rate from environmental monitoring data and have been widely integrated into nuclear accident consequence assessment. Model error and its propagation through the inversion can make the estimate unrealistic and obscure the real release. Several self-correcting source inversion methods have been proposed to handle this, for instance the joint estimation method, which simultaneously retrieves the estimate and corrects the model bias by introducing a correction coefficient matrix. In this study, the joint estimation method has been investigated against a wind tunnel experiment with incoming airflow from the south-southeast (SSE) direction. Three nuclear power plants were located in the scenario, surrounded by mountains and sea. The Risø Mesoscale Lagrangian PUFF model (RIMPUFF) was used for the atmospheric diffusion predictions and the construction of the transport matrix. The fine-resolution wind tunnel experimental data were used as the observations driving both the standard (Tikhonov) inversion method and the joint estimation method. The results show that the release rate estimated by the Tikhonov method was four times higher than the ground truth in the representative SSE direction. In comparison, the self-correcting joint estimation method reduces the error to only 4%.
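The contrast between the two estimators can be conveyed with a toy numpy sketch; the alternating, damped bias-correction below is our schematic stand-in for the paper's correction coefficient matrix, not its actual formulation:

```python
import numpy as np

def tikhonov(A, m, lam):
    """Standard regularized least squares: min ||A q - m||^2 + lam^2 ||q||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ m)

def joint_estimate(A, m, lam, n_outer=20):
    """Toy 'joint' variant: alternately re-estimate a per-observation
    correction of the transport matrix and the release rates, so systematic
    model bias is absorbed by the correction factors rather than by the
    source estimate."""
    c = np.ones(len(m))                        # per-observation correction
    q = tikhonov(A, m, lam)
    for _ in range(n_outer):
        q = tikhonov(c[:, None] * A, m, lam)   # solve with corrected model
        pred = A @ q
        nz = np.abs(pred) > 1e-12
        c[nz] = 0.5 * c[nz] + 0.5 * (m[nz] / pred[nz])  # damped update
    return q, c
```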
APA, Harvard, Vancouver, ISO, and other styles
7

Ma, Yuanwei, Dezhong Wang, Wenji Tan, Zhilong Ji, and Kuo Zhang. "Assessing Sensitivity of Observations in Source Term Estimation for Nuclear Accidents." In 2012 20th International Conference on Nuclear Engineering and the ASME 2012 Power Conference. American Society of Mechanical Engineers, 2012. http://dx.doi.org/10.1115/icone20-power2012-54491.

Full text
Abstract:
In the Fukushima nuclear accident, owing to the lack of field observations and the complexity of the source terms, researchers were initially unable to estimate the source term accurately. Data assimilation methods for source term estimation have attractive features: they work well with highly nonlinear dynamic models, require no linearization in the evolution of the error statistics, etc. This study built a data assimilation system using the ensemble Kalman filter for real-time estimation of source parameters. The assimilation system uses a Gaussian puff model as the atmospheric dispersion model, assimilating forward with the observation data. Accounting for measurement error, numerical experiments were carried out to verify the stability and accuracy of the scheme. The sensitivity to the observation configuration was then tested by twin experiments. First, the release rate, a single source-term parameter, was estimated with different sensor grid configurations: with a sparse sensor grid the estimation error is about 10%, while with an 11x11 grid configuration it is less than 1%. Then, informed by the analysis of the Fukushima nuclear accident and looking ahead to realistic situations, four parameters were estimated simultaneously using 2x2 to 11x11 grid configurations. The studies showed that the radionuclide plume should cover as many sensors as possible, which leads to a successful estimation.
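A generic, textbook-form sketch of the ensemble Kalman filter analysis step used for this kind of parameter estimation is given below; the forward model and the dimensions are placeholders, not the paper's implementation:

```python
import numpy as np

def enkf_update(ensemble, obs, obs_err_std, forward):
    """One EnKF analysis step with perturbed observations. 'ensemble' is an
    (n_ens, n_param) array of source-term candidates (e.g. release rate,
    location, height); 'forward' maps one parameter vector to the predicted
    sensor concentrations."""
    n_ens = ensemble.shape[0]
    Y = np.array([forward(x) for x in ensemble])      # predicted observations
    Xp = ensemble - ensemble.mean(0)                  # parameter anomalies
    Yp = Y - Y.mean(0)                                # observation anomalies
    Pxy = Xp.T @ Yp / (n_ens - 1)                     # cross covariance
    Pyy = Yp.T @ Yp / (n_ens - 1) + obs_err_std**2 * np.eye(len(obs))
    K = Pxy @ np.linalg.inv(Pyy)                      # Kalman gain
    rng = np.random.default_rng(0)
    perturbed = obs + rng.normal(0, obs_err_std, (n_ens, len(obs)))
    return ensemble + (perturbed - Y) @ K.T           # analysis ensemble
```

Repeating this update as new observations arrive pulls the ensemble toward parameter values consistent with the sensor data, while the ensemble spread provides an uncertainty estimate.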
APA, Harvard, Vancouver, ISO, and other styles
8

Robins, P., V. Rapley, and P. Thomas. "Biological Source Term Estimation Using Particle Counters and Immunoassay Sensors." In 2006 9th International Conference on Information Fusion. IEEE, 2006. http://dx.doi.org/10.1109/icif.2006.301723.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Rahbar, Faezeh, and Alcherio Martinoli. "A Distributed Source Term Estimation Algorithm for Multi-Robot Systems." In 2020 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2020. http://dx.doi.org/10.1109/icra40945.2020.9196959.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

"INVERSE ESTIMATION OF RADIATIVE SOURCE TERM IN TWO-DIMENSIONAL IRREGULAR MEDIA." In RADIATIVE TRANSFER - V. Proceedings of the Fifth International Symposium on Radiative Transfer. Connecticut: Begellhouse, 2007. http://dx.doi.org/10.1615/ichmt.2007.radtransfproc.340.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Source term estimation"

1

Brooks, Dusty. Non-Parametric Source Term Uncertainty Estimation. Office of Scientific and Technical Information (OSTI), June 2020. http://dx.doi.org/10.2172/1763581.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

McKenna, T. J., and J. G. Glitter. Source term estimation during incident response to severe nuclear power plant accidents. Office of Scientific and Technical Information (OSTI), October 1988. http://dx.doi.org/10.2172/6822946.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Clark, Todd E., Gergely Ganics, and Elmar Mertens. Constructing fan charts from the ragged edge of SPF forecasts. Federal Reserve Bank of Cleveland, November 2022. http://dx.doi.org/10.26509/frbc-wp-202236.

Full text
Abstract:
We develop a model that permits the estimation of a term structure of both expectations and forecast uncertainty for application to professional forecasts such as the Survey of Professional Forecasters (SPF). Our approach exactly replicates a given data set of predictions from the SPF (or a similar forecast source) without measurement error. Our model captures fixed horizon and fixed-event forecasts, and can accommodate changes in the maximal forecast horizon available from the SPF. The model casts a decomposition of multi-period forecast errors into a sequence of forecast updates that may be partially unobserved, resulting in a multivariate unobserved components model. In our empirical analysis, we provide quarterly term structures of expectations and uncertainty bands. Our preferred specification features stochastic volatility in forecast updates, which improves forecast performance and yields model estimates of forecast uncertainty that vary over time. We conclude by constructing SPF-based fan charts for calendar-year forecasts like those published by the Federal Reserve. Replication files are available at https://github.com/elmarmertens/ClarkGanicsMertensSPFfancharts.
APA, Harvard, Vancouver, ISO, and other styles
4

Rossi, José Luiz, and João Paulo Madureira Horta da Costa. Shock Dependent Exchange Rate Pass-Through - An Analysis for Latin American Countries. Inter-American Development Bank, September 2023. http://dx.doi.org/10.18235/0005129.

Full text
Abstract:
This paper investigates the exchange rate pass-through considering the source of the shocks that hit the economy. Using a Bayesian Global VAR (BGVAR) model, the exchange rate pass-through is analyzed for five Latin American countries: Brazil, Chile, Colombia, Mexico and Peru. The model is estimated with Bayesian techniques and identified by sign and zero restrictions. The BGVAR estimation enables us to allow for spillovers between countries, mimicking the real conditions under which shocks hit the economies. Four domestic shocks are considered for each Latin American country: an exchange rate shock, a risk premium shock, a monetary policy shock and a demand shock. The demand shock has the highest exchange rate pass-through for all the countries, and the exchange rate shock has the lowest. Additionally, two regional shocks are considered: a regional monetary policy shock, in which the whole region raises its interest rates, and a regional risk premium shock, in which the risk premia rise at the same time. For almost all of the countries, the exchange rate pass-through from these regional shocks is lower than that from the corresponding domestic shock. Finally, we investigate two global shocks: an uncertainty shock and a global commodities/demand shock. The uncertainty shock decreases economic activity and depreciates the exchange rate, with a negative exchange rate pass-through in the medium term. The commodities/demand shock increases economic activity and appreciates the exchange rate, with a negative or neutral exchange rate pass-through over time.
APA, Harvard, Vancouver, ISO, and other styles
5

Sanders, T. L., H. Jordan, V. Pasupathi, W. J. Mings, and P. C. Reardon. A methodology for estimating the residual contamination contribution to the source term in a spent-fuel transport cask. Office of Scientific and Technical Information (OSTI), September 1991. http://dx.doi.org/10.2172/6373171.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Fourrier, Marine. Integration of in situ and satellite multi-platform data (estimation of carbon flux for trop. Atlantic). EuroSea, 2023. http://dx.doi.org/10.3289/eurosea_d7.6.

Full text
Abstract:
This report presents the results of task 7.3 on “Quantification of improvements in carbon flux data for the tropical Atlantic based on the multi-platform and neural network approach”. To better constrain changes in the ocean’s capture and sequestration of CO2 emitted by human activities, in situ measurements are needed. Tropical regions are considered to be mostly sources of CO2 to the atmosphere due to specific circulation features, with large interannual variability mainly controlled by physical drivers (Padin et al., 2010). The tropical Atlantic is the second largest source, after the tropical Pacific, of CO2 to the atmosphere (Landschützer et al., 2014). However, it is not a homogeneous zone, as it is affected by many physical and biogeochemical processes that vary on many time scales and affect surrounding areas (Foltz et al., 2019). The Tropical Atlantic Observing System (TAOS) has progressed substantially over the past two decades. Still, many challenges and uncertainties remain, requiring further study of the area’s role in terms of carbon fluxes (Foltz et al., 2019). Monitoring and sustained observations of surface-ocean CO2 are critical for understanding the fate of CO2 as it penetrates the ocean and during its sequestration at depth. This deliverable relies on observing platforms deployed specifically as part of the EuroSea project (a Saildrone and 5 pH-equipped BGC-Argo floats) as well as on platforms that are part of the TAOS (CO2-equipped moorings, cruises, models, and data products). It also builds on the work done in D7.1 and D7.2 on the deployment and quality control of pH-equipped BGC-Argo floats and Saildrone data. Indeed, high-quality, homogeneously calibrated carbonate-variable measurements are mandatory for computing air-sea CO2 fluxes at basin scale from multiple observing platforms. (EuroSea Deliverable, D7.6)
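As a rough illustration of the quantity being estimated, the sketch below computes a bulk air-sea CO2 flux from surface pCO2 and wind speed. It is not the deliverable's neural-network method: the gas-transfer velocity follows the common quadratic wind-speed form (Wanninkhof, 2014), and the solubility value is a fixed placeholder rather than a temperature- and salinity-dependent one.

```python
# Hedged sketch of a bulk air-sea CO2 flux: F = k * K0 * (pCO2_sea - pCO2_air).
def gas_transfer_velocity(u10, schmidt=660.0):
    """Transfer velocity in cm/hr from 10 m wind speed (m/s), using the
    quadratic form k = 0.251 * U10^2 * (Sc/660)^-0.5."""
    return 0.251 * u10**2 * (schmidt / 660.0) ** -0.5

def co2_flux(u10, pco2_sea, pco2_air, k0=0.035, schmidt=660.0):
    """Air-sea CO2 flux in mol m^-2 yr^-1 (positive = outgassing).

    k0 is CO2 solubility in mol L^-1 atm^-1 (illustrative fixed value);
    pCO2 inputs are in microatmospheres (uatm).
    """
    k_m_yr = gas_transfer_velocity(u10, schmidt) * 0.01 * 24 * 365  # cm/hr -> m/yr
    k0_mol_m3_uatm = k0 * 1000.0 * 1e-6                             # mol/L/atm -> mol/m^3/uatm
    return k_m_yr * k0_mol_m3_uatm * (pco2_sea - pco2_air)

# Tropical-Atlantic-like illustrative numbers: ocean pCO2 above atmospheric,
# so the region acts as a source of CO2 to the atmosphere.
print(co2_flux(u10=6.0, pco2_sea=420.0, pco2_air=405.0))  # ~0.4 mol m^-2 yr^-1
```

The multi-platform work in the deliverable essentially supplies better-constrained inputs (pCO2, winds, carbonate variables) to calculations of this kind at basin scale.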
APA, Harvard, Vancouver, ISO, and other styles
7

Hertel, Thomas, David Hummels, Maros Ivanic, and Roman Keeney. How Confident Can We Be in CGE-Based Assessments of Free Trade Agreements? GTAP Working Paper, June 2003. http://dx.doi.org/10.21642/gtap.wp26.

Full text
Abstract:
With the proliferation of Free Trade Agreements (FTAs) over the past decade, demand for quantitative analysis of their likely impacts has surged. The main quantitative tool for performing such analysis is Computable General Equilibrium (CGE) modeling. Yet these models have been widely criticized for performing poorly (Kehoe, 2002) and having weak econometric foundations (McKitrick, 1998; Jorgenson, 1984). FTA results have been shown to be particularly sensitive to the trade elasticities, with small trade elasticities generating large terms of trade effects and relatively modest efficiency gains, whereas large trade elasticities lead to the opposite result. Critics are understandably wary of results being determined largely by the authors’ choice of trade elasticities. Where do these trade elasticities come from? CGE modelers typically draw these elasticities from econometric work that uses time series price variation to identify an elasticity of substitution between domestic goods and composite imports (Alaouze, 1977; Alaouze, et al., 1977; Stern et al., 1976; Gallaway, McDaniel and Rivera, 2003). This approach has three problems: the use of point estimates as “truth”, the magnitude of the point estimates, and estimating the relevant elasticity. First, modelers take point estimates drawn from the econometric literature, while ignoring the precision of these estimates. As we will make clear below, the confidence one has in various CGE conclusions depends critically on the size of the confidence interval around parameter estimates. Standard “robustness checks” such as systematically raising or lowering the substitution parameters do not properly address this problem because they ignore information about which parameters we know with some precision and which we do not. A second problem with most existing studies derives from the use of import price series: identifying home vs. foreign substitution from such series tends to systematically understate the true elasticity. This is because these estimates take price variation as exogenous when estimating the import demand functions, and ignore quality variation. When quality is high, import demand and prices will be jointly high. This biases estimated elasticities toward zero. A related point is that the fixed-weight import price series used by most authors are theoretically inappropriate for estimating the elasticities of interest. CGE modelers generally examine a nested utility structure, with domestic production substituting for a CES composite import bundle. The appropriate price series is then the corresponding CES price index among foreign varieties. Constructing such an index requires knowledge of the elasticity of substitution among foreign varieties (see below). By using a fixed-weight import price series, previous estimates place too much weight on high foreign prices, and too little weight on low foreign prices. In other words, they overstate the degree of price variation that exists, relative to a CES price index. Reconciling small trade volume movements with large import price series movements requires a small elasticity of substitution. This problem, and that of unmeasured quality variation, helps explain why typical estimated elasticities are very small. The third problem with the existing literature is that estimates taken from other researchers’ studies typically employ different levels of aggregation, and exploit different sources of price variation, from what policy modelers have in mind.
Employment of elasticities in experiments ill-matched to their original estimation can be problematic. For example, estimates may be calculated at a higher or lower level of aggregation than the level of analysis the modeler wants to examine. Estimating substitutability across sources for paddy rice gives one a quite different answer than estimates that look at agriculture as a whole. When analyzing Free Trade Agreements, the principal policy experiment is a change in relative prices among foreign suppliers caused by lowering tariffs within the FTA. Understanding the substitution this will induce across those suppliers is critical to gauging the FTA’s real effects. Using home vs. foreign elasticities rather than elasticities of substitution among imports supplied from different countries may be quite misleading. Moreover, these “sourcing” elasticities are critical for constructing composite import price series to appropriately estimate home vs. foreign substitutability. In summary, the history of estimating the substitution elasticities governing trade flows in CGE models has been checkered at best. Clearly there is a need for improved econometric estimation of these trade elasticities that is well-integrated into the CGE modeling framework. This paper provides such estimation and integration, and has several significant merits. First, we choose our experiment carefully. Our CGE analysis focuses on the prospective Free Trade Area of the Americas (FTAA) currently under negotiation. This is one of the most important FTAs currently “in play” in international negotiations. It also fits nicely with the source data used to estimate the trade elasticities, which is largely based on imports into North and South America. Our assessment is done in a perfectly competitive, comparative static setting in order to emphasize the role of the trade elasticities in determining the conventional gains/losses from such an FTA. This type of model is still widely used by government agencies for the evaluation of such agreements. Extensions to incorporate imperfect competition are straightforward, but involve the introduction of additional parameters (markups, extent of unexploited scale economies) as well as structural assumptions (entry/no-entry, nature of inter-firm rivalry) that introduce further uncertainty. Since our focus is on the effects of a PTA, we estimate elasticities of substitution across multiple foreign supply sources. We do not use cross-exporter variation in prices or tariffs alone. Exporter price series exhibit a high degree of multicollinearity, and in any case, would be subject to unmeasured quality variation as described previously. Similarly, tariff variation by itself is typically unhelpful because, by their very nature, Most Favored Nation (MFN) tariffs are non-discriminatory, affecting all suppliers in the same way. Tariff preferences, where they exist, are often difficult to measure – sometimes being confounded by quantitative barriers, restrictive rules of origin, and other restrictions. Instead we employ a unique methodology and data set drawing on not only tariffs, but also bilateral transportation costs for goods traded internationally (Hummels, 1999). Transportation costs vary much more widely than do tariffs, allowing much more precise estimation of the trade elasticities that are central to CGE analysis of FTAs.
We have highly disaggregated commodity trade flow data, and are therefore able to provide estimates that precisely match the commodity aggregation scheme employed in the subsequent CGE model. We follow the GTAP Version 5.0 aggregation scheme, which includes 42 merchandise trade commodities covering food products, natural resources and manufactured goods. With the exception of two primary commodities that are not traded, we are able to estimate trade elasticities for all merchandise commodities that are significantly different from zero at the 95% confidence level. Rather than producing point estimates of the resulting welfare, export and employment effects, we report confidence intervals. These are based on repeated solution of the model, drawing from a distribution of trade elasticity estimates constructed from the econometrically estimated standard errors. There is now a long history of CGE studies based on systematic sensitivity analysis (SSA) (Harrison and Vinod, 1992; Wigle, 1991; Pagan and Shannon, 1987).
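The systematic sensitivity analysis the authors describe can be sketched in a few lines: re-solve the model under elasticity draws taken from their econometric sampling distributions and report interval estimates. In the hedged toy below, a stand-in function replaces the CGE solve, and the commodity names, point estimates, and standard errors are hypothetical.

```python
# Hedged SSA sketch: Monte Carlo over elasticity draws, percentile intervals.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical point estimates and standard errors for elasticities of
# substitution among import sources, by commodity.
elasticities = {"textiles": (7.0, 0.8), "grains": (3.5, 1.2), "autos": (5.6, 0.5)}

def toy_welfare_gain(sigma_by_commodity):
    """Stand-in for one model solution: larger substitution elasticities
    imply larger efficiency gains and smaller terms-of-trade effects."""
    return sum(0.1 * s - 0.2 / s for s in sigma_by_commodity.values())

draws = []
for _ in range(5_000):
    sample = {k: max(rng.normal(mu, se), 0.1)   # truncate away negative draws
              for k, (mu, se) in elasticities.items()}
    draws.append(toy_welfare_gain(sample))

lo, hi = np.percentile(draws, [2.5, 97.5])
print(f"95% interval for the welfare metric: [{lo:.3f}, {hi:.3f}]")
```

The point of the exercise is exactly the one the abstract makes: with precise elasticity estimates the interval is tight, and with imprecise ones the reported welfare effects can span a wide range.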
APA, Harvard, Vancouver, ISO, and other styles
8

Roa, Julio, Joseph Oldham, and Marina Lima. Recognizing the Potential to Reduce GHG Emissions Through Air Transportation Electrification. Mineta Transportation Institute, July 2023. http://dx.doi.org/10.31979/mti.2023.2223.

Full text
Abstract:
California is aggressively moving forward with efforts to deploy zero-emission transportation technology to fight climate change, especially the greenhouse gas (GHG) emissions from the high-impact transportation sector. However, to date, the investments California has made with Cap-and-Trade funding have focused on ground transportation and some marine sources, not the aircraft at the over 140 airports in the state. Through a California-focused comprehensive GHG emissions analysis, this research project seeks to determine how Regional Air Mobility (RAM) using electric/hybrid-electric aircraft can provide new high-speed transportation for high-priority passenger and cargo movement within Fresno County and connections to coastal urban centers. Using VISION, a model developed by the Argonne National Laboratory Transportation Systems Assessment Group, the research team identified and compared the emissions per mile and per passenger-mile of different modes of transportation using traditional petroleum fuel and sustainable alternatives, both at an individual level and within the context of the transportation sector as a whole. With this estimation in hand, it becomes more viable for the state of California and other states, as well as the federal government, to establish guidelines and goals for transportation policies and investments.
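The per-mile and per-passenger-mile comparison reduces to simple arithmetic once energy intensity, grid carbon intensity, and occupancy are fixed. The hedged back-of-envelope sketch below illustrates the calculation with assumed numbers; these are placeholders, not outputs of the VISION model.

```python
# Hedged sketch of a per-passenger-mile GHG comparison (illustrative inputs).
def grams_co2_per_passenger_mile(energy_kwh_per_mile, grid_g_co2_per_kwh,
                                 passengers):
    """Electric-vehicle emissions allocated across occupants."""
    return energy_kwh_per_mile * grid_g_co2_per_kwh / passengers

# e.g. a small electric aircraft vs. cars on a regional trip (assumed values)
aircraft = grams_co2_per_passenger_mile(3.0, 250.0, passengers=8)
ev_car   = grams_co2_per_passenger_mile(0.30, 250.0, passengers=1.5)
gas_car  = 8887.0 / 30.0 / 1.5   # g CO2 per gallon / mpg / avg occupancy

for label, g in [("electric aircraft", aircraft), ("EV car", ev_car),
                 ("gasoline car", gas_car)]:
    print(f"{label:>17}: {g:6.1f} g CO2 per passenger-mile")
```

The interesting levers are visible directly in the formula: a cleaner grid lowers the numerator for both electric modes, while higher load factors spread fixed per-mile emissions over more passengers.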
APA, Harvard, Vancouver, ISO, and other styles
9

Menéses-González, María Fernanda, Angélica María Lizarazo-Cuéllar, Diego Cuesta-Mora, and Daniel Esteban Osorio-Ramírez. Financial Development and Monetary Policy Transmission. Banco de la República Colombia, November 2022. http://dx.doi.org/10.32468/be.1219.

Full text
Abstract:
This paper estimates the effect of financial development on the transmission of monetary policy. To do so, the paper employs a panel data set containing financial development indicators, policy rates, lending rates, and deposit rates for 43 countries for the period 2000-2019 and applies the empirical strategy of Brandao Marques et al. (2020): firstly, monetary policy shocks are estimated using a Taylor-rule specification that relates changes in the policy rate to inflation, the output gap and other observables that are likely to influence monetary policy decisions; secondly, the residuals of this estimation (policy shocks) are used in a specification that relates lending or deposit rates to, among others, policy shocks and the interaction between policy shocks and measures of financial development. The coefficient on this interaction term captures the effect of financial development on the relationship between policy shocks and lending or deposit rates. The main findings of the paper are twofold: on the one hand, financial development does strengthen the monetary policy transmission channel to deposit rates; that is, changes in the policy rate in economies with more financial development induce larger changes (in the same direction) in deposit rates than is the case in economies with less financial development. This result is particularly driven by the effect of the development of financial institutions on policy transmission – the effect of financial markets development turns out to be smaller in magnitude. On the other hand, financial development does not strengthen the transmission of monetary policy to lending rates. This is consistent with a credit channel that weakens in the face of financial development in a context where banks cannot easily substitute short-term funding sources. These results highlight the relevance of financial development for the functioning of monetary policy across countries, and possibly imply the necessity of a more active role for monetary authorities in fostering financial development.
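A stripped-down version of this two-step strategy can be sketched as follows. The hedged sketch collapses the panel to a single simulated series and uses plain OLS throughout, so it illustrates the mechanics of the interaction term rather than reproducing the paper's panel estimates; all data are placeholders.

```python
# Hedged two-step sketch: Taylor-rule residuals as policy shocks, then an
# interaction with a financial development index.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "d_policy_rate": rng.normal(0, 0.25, n),   # placeholder data
    "inflation": rng.normal(3, 1, n),
    "output_gap": rng.normal(0, 1, n),
    "fin_dev": rng.uniform(0.2, 0.9, n),       # financial development index
})
df["d_deposit_rate"] = 0.6 * df["d_policy_rate"] + rng.normal(0, 0.1, n)

# Step 1: Taylor-rule-style regression; residuals proxy for policy shocks.
x1 = sm.add_constant(df[["inflation", "output_gap"]])
df["shock"] = sm.OLS(df["d_policy_rate"], x1).fit().resid

# Step 2: interact the shock with financial development; the interaction
# coefficient measures how development alters pass-through to deposit rates.
df["shock_x_fd"] = df["shock"] * df["fin_dev"]
x2 = sm.add_constant(df[["shock", "fin_dev", "shock_x_fd"]])
print(sm.OLS(df["d_deposit_rate"], x2).fit().params)
```

A positive coefficient on the interaction term is the signature of the paper's first finding: pass-through to deposit rates is stronger where financial development is higher.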
APA, Harvard, Vancouver, ISO, and other styles
10

Dasberg, Shmuel, Jan W. Hopmans, Larry J. Schwankl, and Dani Or. Drip Irrigation Management by TDR Monitoring of Soil Water and Solute Distribution. United States Department of Agriculture, August 1993. http://dx.doi.org/10.32747/1993.7568095.bard.

Full text
Abstract:
Drip irrigation has the potential of high water-use efficiency, but actual water measurement is difficult because of the limited wetted volume. Two long-term experiments in orchards in Israel and in California, and several field crop studies supported by this project, have demonstrated the feasibility of precise monitoring of soil water distribution for drip irrigation in spite of the limited soil wetting. Time Domain Reflectometry (TDR) enables in situ measurement of the soil water content of well-defined small volumes. Several approaches were tried in monitoring the soil water balance in the field during drip irrigation. These also facilitated the estimation of water uptake: 1. The use of a multilevel moisture-probe TDR system; this approach proved to be of limited value because of the extremely small diameter of measurement. 2. The placement of 20-cm-long TDR probes at predetermined distances from the drippers in citrus orchards. 3. Heavy instrumentation, with neutron-scattering access tubes and tensiometers, of a single drip-irrigated almond tree. 4. High-resolution spatial and temporal measurements (0.1 m x 0.1 m grid) of water content by TDR in corn irrigated by surface and subsurface drip. The latter approach was accompanied by parametric modelling of water-uptake intensity patterns by corn roots, superimposed with analytical solutions for water flow from point and line sources. All this led to general, physically based suggestions for the placement of soil water sensors for scheduling drip irrigation.
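For readers unfamiliar with TDR, the conversion from a probe reading to a volumetric water content is short. The abstract does not state which calibration the authors used; the hedged sketch below applies the widely cited Topp et al. (1980) polynomial purely for illustration.

```python
# Hedged sketch: TDR travel time -> apparent permittivity -> water content.
C = 299_792_458.0  # speed of light, m/s

def apparent_permittivity(travel_time_s, probe_length_m):
    """Ka from the two-way travel time of the TDR pulse along the probe."""
    return (C * travel_time_s / (2.0 * probe_length_m)) ** 2

def topp_water_content(ka):
    """Volumetric water content (m^3/m^3) from Topp's empirical polynomial."""
    return -5.3e-2 + 2.92e-2 * ka - 5.5e-4 * ka**2 + 4.3e-6 * ka**3

# Example: a 0.20 m probe (as in the citrus-orchard layout above) with a
# 5.3 ns two-way travel time, typical of moist soil.
ka = apparent_permittivity(5.3e-9, 0.20)
print(f"Ka = {ka:.1f}, theta = {topp_water_content(ka):.3f}")  # ~0.29 m^3/m^3
```

Because each probe samples only the small volume along its rods, probe placement relative to the dripper (the subject of the project's recommendations) determines how representative such readings are of the wetted bulb.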
APA, Harvard, Vancouver, ISO, and other styles