Selection of scientific literature on the topic "Estimation de terme source"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Browse the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Estimation de terme source".

Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read an online annotation of the work, provided the relevant parameters are available in its metadata.

Journal articles on the topic "Estimation de terme source"

1

Xiao, Yingchun, Yang Yang, and Feng Zhu. "A Separation Method for Electromagnetic Radiation Sources of the Same Frequency". Journal of Electromagnetic Engineering and Science 23, no. 6 (November 30, 2023): 521–29. http://dx.doi.org/10.26866/jees.2023.6.r.197.

Abstract:
To separate electromagnetic interference sources with an unknown source number, a new separation method is proposed, which includes five key steps: spatial spectrum estimation, source number and direction-of-arrival estimation, mixed matrix estimation, separation matrix estimation, and source signal recovery. A pseudospatial spectrum estimation network based on a convolutional neural network is proposed to estimate the number of electromagnetic radiation sources, their direction of arrival, and the mixing matrix. A new loss function is designed as an optimization criterion for estimating the separation matrix. To ensure generalization, both simulated and measured datasets are used to train the proposed network. Experimental results demonstrate that the proposed separation method outperforms existing source separation techniques in terms of correlation coefficient, root mean square error, and running time. Importantly, it exhibits strong performance in underdetermined cases, as well as in overdetermined or determined cases.
2

Ilyas, Muhammad, Agah D. Garnadi, and Sri Nurdiati. "Adaptive Mixed Finite Element Method for Elliptic Problems with Concentrated Source Terms". Indonesian Journal of Science and Technology 4, no. 2 (July 9, 2019): 263–69. http://dx.doi.org/10.17509/ijost.v4i2.18183.

Abstract:
An adaptive mixed finite element method using the Lagrange multiplier technique is used to solve elliptic problems with delta Dirac source terms. The problem arises in the use of Chow-Anderssen linear functional methodology to recover coefficients locally in parameter estimation of an elliptic equation from a point-wise measurement. In this article, we used a posterior error estimator based on averaging technique as refinement indicators to produce a cycle of mesh adaptation, which is experimentally shown to capture singularity phenomena. Our numerical results showed that the adaptive refinement process successfully refines elements around the center of the source terms. The results also showed that the global error estimation is better than uniform refinement process in terms of computation time.
3

Wu, Tao, Yiwen Li, Zhenghong Deng, Bo Feng, and Xinping Ma. "Parameter Estimation for Two-Dimensional Incoherently Distributed Source with Double Cross Arrays". Sensors 20, no. 16 (August 14, 2020): 4562. http://dx.doi.org/10.3390/s20164562.

Abstract:
A direction of arrival (DOA) estimator for two-dimensional (2D) incoherently distributed (ID) sources is presented under proposed double cross arrays, satisfying both the small interval of parallel linear arrays and the aperture equalization in the elevation and azimuth dimensions. First, by virtue of a first-order Taylor expansion for array manifold vectors of parallel linear arrays, the received signal of arrays can be reconstructed by the products of generalized manifold matrices and extended signal vectors. Then, the rotating invariant relations concerning the nominal elevation and azimuth are derived. According to the rotating invariant relationships, the rotating operators are obtained through the subspace of the covariance matrix of the received vectors. Last, the angle matching approach and angular spreads are explored based on the Capon principle. The proposed method for estimating the DOA of 2D ID sources does not require a spectral search and prior knowledge of the angular power density function. The proposed DOA estimation has a significant advantage in terms of computational cost. Investigating the influence of experimental conditions and angular spreads on estimation, numerical simulations are carried out to validate the effectiveness of the proposed method. The experimental results show that the algorithm proposed in this paper has advantages in terms of estimation accuracy, with a similar number of sensors and the same experimental conditions when compared with existing methods, and that it shows a robustness in cases of model mismatch.
4

Ryanto, Theo Alvin, Jupiter Sitorus Pane, Muhammad Budi Setiawan, Ihda Husnayani, Anik Purwaningsih, and Hendro Tjahjono. "The Preliminary Study on Implementing a Simplified Source Terms Estimation Program for Early Radiological Consequences Analysis". Jurnal Teknologi Reaktor Nuklir Tri Dasa Mega 25, no. 2 (July 28, 2023): 61. http://dx.doi.org/10.55981/tdm.2023.6869.

Abstract:
Indonesia possesses numerous potential sites for nuclear power plant development. A fast and comprehensive radiological consequences analysis is required to conduct a preliminary analysis of radionuclide release into the atmosphere, including source terms estimation. One simplified method for such estimation is the Relative Volatility approach of Kess and Booth, published in IAEA TECDOC 1127. The objective of this study was to evaluate the use of a simple and comprehensive tool for estimating the source terms of planned nuclear power plants to facilitate the analysis of radiological consequences during site evaluation. Input parameters for the estimation include fuel burn-up, blow-down time, specific heat transfer from fuel to cladding, and coolant flow rate, using a 100 MWe PWR as a case study. The results indicate a slight difference in the calculated release fraction compared to previous calculations, indicating a need to modify […]
Keywords: Source terms, Relative volatility, Release fraction, PWR, SMART
5

Oliveira, André José Pereira de, Luiz Alberto da Silva Abreu, and Diego Campos Knupp. "Explicit scheme based on integral transforms for estimation of source terms in diffusion problems in heterogeneous media". Journal of Engineering and Exact Sciences 9, no. 10 (December 29, 2023): 17811. http://dx.doi.org/10.18540/jcecvl9iss10pp17811.

Abstract:
The estimation of source terms present in differential equations has various applications, ranging from structural assessment, industrial process monitoring, equipment failure detection, environmental pollution source detection to identification applications in medicine. Significant progress has been made in recent years in methodologies capable of estimating this parameter. This work employs a methodology based on an explicit formulation of the integral transformation to characterize the unknown source term, reconstructing it through the expansion in known eigenfunctions of the Sturm-Liouville eigenvalue problem. To achieve this, a linear model is considered in a heterogeneous medium with known and spatially varying physical properties and two heat sources, with both temporal and spatial dependencies, and only spatial dependence. The eigenvalue problem contains information about the heterogeneous properties and is solved using the generalized integral transformation technique. Additionally, an initial interpolation of the sensor data is proposed for each observation time, making the inverse problem computationally lighter. The solutions of the inverse problem exhibit optimal performance, even with noisy input data and sources with abrupt discontinuities. The temperatures recovered by the direct problem considering the recovered source closely match synthetic experimental data, showing errors less than 1%, ensuring the robustness and reliability of the technique for the proposed application.
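The inversion idea summarised above, recovering the source term coefficient by coefficient in the eigenfunctions of the associated Sturm-Liouville problem, can be illustrated on a minimal constant-coefficient analogue. The sketch below is an invented 1-D Poisson example (not the heterogeneous transient model of the paper): it recovers f in -u'' = f, u(0) = u(1) = 0, from samples of u alone.

```python
import numpy as np

# Recover f in -u'' = f on (0, 1) with u(0) = u(1) = 0 from samples of u.
# If u = sum_n u_n sin(n*pi*x), the source follows mode by mode from the
# Sturm-Liouville eigenvalues: f_n = (n*pi)**2 * u_n.
x = np.linspace(0.0, 1.0, 2001)
f_true = np.sin(np.pi * x) + 0.5 * np.sin(3 * np.pi * x)
u = np.sin(np.pi * x) / np.pi**2 + 0.5 * np.sin(3 * np.pi * x) / (9 * np.pi**2)

n = np.arange(1, 11)                     # first 10 modes
phi = np.sin(np.pi * np.outer(n, x))     # eigenfunctions sampled on the grid
dx = x[1] - x[0]
u_n = 2.0 * (phi * u).sum(axis=1) * dx   # sine coefficients of u (quadrature)
f_n = (np.pi * n) ** 2 * u_n             # invert the operator per mode
f_rec = f_n @ phi                        # resynthesise the source

err = float(np.max(np.abs(f_rec - f_true)))
```

For a smooth source the recovery is essentially exact; for the discontinuous sources treated in the paper the truncated expansion would instead show Gibbs oscillations.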
6

Fang, Qingyuan, Mengzhe Jin, Weidong Liu, and Yong Han. "DOA Estimation for Sources with Large Power Differences". International Journal of Antennas and Propagation 2021 (March 10, 2021): 1–12. http://dx.doi.org/10.1155/2021/8862789.

Abstract:
Sources with large power differences are very common, especially in complex electromagnetic environments. Classical DOA estimation methods suffer from performance degradation in terms of resolution when dealing with sources that have large power differences. In this paper, we propose an improved DOA algorithm to increase the resolution performance in resolving such sources. The proposed method takes advantage of diagonal loading and demonstrates that the invariant property of noise subspace still holds after diagonal loading is performed. We also find that the Cramer–Rao bound of the weak source can be affected by the power of the strong source, and this has not been noted before. The Cramer–Rao bound of the weak source deteriorates as the power of the strong source increases. Numerical results indicate that the improved algorithm increases the probability of resolution while maintaining the estimation accuracy and computational complexity.
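The invariance claim in this abstract, that diagonal loading shifts every eigenvalue of the array covariance by the loading level while leaving the eigenvectors, and hence the noise subspace, unchanged, is easy to verify numerically. The 6-element array, source powers, and loading level below are illustrative choices, not the authors' setup:

```python
import numpy as np

m = 6                                        # sensors in a uniform linear array
n = np.arange(m)
a1 = np.exp(1j * np.pi * n * np.sin(0.3))    # steering vector, strong source
a2 = np.exp(1j * np.pi * n * np.sin(0.5))    # steering vector, weak source
# Covariance: strong source (power 10), weak source (power 0.1), unit noise
R = 10.0 * np.outer(a1, a1.conj()) + 0.1 * np.outer(a2, a2.conj()) + np.eye(m)

delta = 5.0
R_dl = R + delta * np.eye(m)                 # diagonal loading

w, V = np.linalg.eigh(R)                     # eigenvalues in ascending order
w_dl, V_dl = np.linalg.eigh(R_dl)

# The projector onto the noise subspace (the m - 2 smallest eigenvectors)
# is unchanged by the loading, so subspace methods such as MUSIC still apply.
P = V[:, :m - 2] @ V[:, :m - 2].conj().T
P_dl = V_dl[:, :m - 2] @ V_dl[:, :m - 2].conj().T
```

Comparing projectors rather than eigenvectors sidesteps the phase and ordering ambiguity of the degenerate noise eigenvalues.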
7

Herranz, D., F. Argüeso, L. Toffolatti, A. Manjón-García, and M. López-Caniego. "A Bayesian method for point source polarisation estimation". Astronomy & Astrophysics 651 (July 2021): A24. http://dx.doi.org/10.1051/0004-6361/202039741.

Abstract:
The estimation of the polarisation P of extragalactic compact sources in cosmic microwave background (CMB) images is a very important task in order to clean these images for cosmological purposes –for example, to constrain the tensor-to-scalar ratio of primordial fluctuations during inflation– and also to obtain relevant astrophysical information about the compact sources themselves in a frequency range, ν ∼ 10–200 GHz, where observations have only very recently started to become available. In this paper, we propose a Bayesian maximum a posteriori approach estimation scheme which incorporates prior information about the distribution of the polarisation fraction of extragalactic compact sources between 1 and 100 GHz. We apply this Bayesian scheme to white noise simulations and to more realistic simulations that include CMB intensity, Galactic foregrounds, and instrumental noise with the characteristics of the QUIJOTE (Q U I JOint TEnerife) experiment wide survey at 11 GHz. Using these simulations, we also compare our Bayesian method with the frequentist filtered fusion method that has been already used in the Wilkinson Microwave Anisotropy Probe data and in the Planck mission. We find that the Bayesian method allows us to decrease the threshold for a feasible estimation of P to levels below ∼100 mJy (as compared to ∼500 mJy which was the equivalent threshold for the frequentist filtered fusion). We compare the bias introduced by the Bayesian method and find it to be small in absolute terms. Finally, we test the robustness of the Bayesian estimator against uncertainties in the prior and in the flux density of the sources. We find that the Bayesian estimator is robust against moderate changes in the parameters of the prior and almost insensitive to realistic errors in the estimated photometry of the sources.
8

Ma, Qian, Wen Xu, and Yue Zhou. "Statistically robust estimation of source bearing via minimizing the Bhattacharyya distance". Journal of the Acoustical Society of America 151, no. 3 (March 2022): 1695–709. http://dx.doi.org/10.1121/10.0009677.

Abstract:
Source bearing estimation is a common technique in acoustic array processing. Many methods have been developed and most of them exploit some underlying statistical model. When applied to a practical system, the robustness to model mismatch is of major concern. Traditional adaptive methods, such as the minimum power distortionless response processor, are notoriously known for their sensitivity to model mismatch. In this paper, a parameter estimator is developed via the minimum Bhattacharyya distance estimator (MBDE), which provides a measure of the divergence between the assumed and true probability distributions and is, thus, capable of statistically matching. Under a Gaussian random signal model typical of source bearing estimation, the MBDE is derived in terms of the data-based and modeled covariance matrices without involving matrix inversion. The performance of the MBDE, regarding the robustness and resolution, is analyzed in comparison with some of the existing methods. A connection with the Weiss-Weinstein bound is also discussed, which gives the MBDE an interpretation of closely approaching a large-error performance bound. Theoretical analysis and simulations of bearing estimation using a uniform linear array show that the proposed method owns a considerable resolution comparable to an adaptive method while being robust against statistical mismatch, including covariance mismatch caused by snapshot deficiency and/or noise model mismatch.
9

Lu, Jinshu, Mengqing Huang, Wenfeng Wu, Yonghui Wei, and Chong Liu. "Application and Improvement of the Particle Swarm Optimization Algorithm in Source-Term Estimations for Hazardous Release". Atmosphere 14, no. 7 (July 19, 2023): 1168. http://dx.doi.org/10.3390/atmos14071168.

Abstract:
Hazardous gas release can pose severe hazards to the ecological environment and public safety. The source-term estimation of hazardous gas leakage serves a crucial role in emergency response and safety management practices. Nevertheless, the precision of a forward diffusion model and atmospheric diffusion conditions have a significant impact on the performance of the method for estimating source terms. This work proposes the particle swarm optimization (PSO) algorithm coupled with the Gaussian dispersion model for estimating leakage source parameters. The method is validated using experimental cases of the prairie grass field dispersion experiment with various atmospheric stability classes. The results prove the effectiveness of this method. The effects of atmospheric diffusion conditions on estimation outcomes are also investigated. The estimated effect in extreme atmospheric diffusion conditions is not as good as in other diffusion conditions. Accordingly, the Gaussian dispersion model is improved by adding linear and polynomial correction coefficients to it for its inapplicability under extreme diffusion conditions. Finally, the PSO method coupled with improved models is adapted for the source-term parameter estimation. The findings demonstrate that the estimation performance of the PSO method coupled with improved models is significantly improved. It was also found that estimated performances of source parameters of two correction models were significantly distinct under various atmospheric stability classes. There is no single optimal model; however, the model can be selected according to practical diffusion conditions to enhance the estimated precision of source-term parameters.
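The estimation loop described above, a particle swarm searching for the source parameters whose Gaussian-plume predictions best match the sensor readings, can be sketched compactly. The plume coefficients, sensor layout, and PSO settings here are invented for illustration and are far simpler than the paper's configuration:

```python
import numpy as np

def gaussian_plume(q, y_s, sensors, u=3.0, sigma_y=8.0, sigma_z=5.0, h=2.0):
    """Ground-reflecting Gaussian plume: concentration at sensors (x, y, z)
    for a source of strength q (g/s) at cross-wind offset y_s and height h.
    Dispersion coefficients are fixed, illustrative values."""
    y, z = sensors[:, 1], sensors[:, 2]
    lateral = np.exp(-((y - y_s) ** 2) / (2 * sigma_y ** 2))
    vertical = (np.exp(-((z - h) ** 2) / (2 * sigma_z ** 2))
                + np.exp(-((z + h) ** 2) / (2 * sigma_z ** 2)))
    return q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

def pso_estimate(measured, sensors, n_particles=40, n_iter=200, seed=0):
    """Minimal PSO over (q, y_s): minimise the squared misfit between
    modelled and measured concentrations."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array([0.1, -30.0]), np.array([50.0, 30.0])
    pos = rng.uniform(lo, hi, size=(n_particles, 2))
    vel = np.zeros_like(pos)

    def cost(p):
        return np.sum((gaussian_plume(p[0], p[1], sensors) - measured) ** 2)

    pbest = pos.copy()
    pbest_cost = np.array([cost(p) for p in pos])
    gbest = pbest[pbest_cost.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        costs = np.array([cost(p) for p in pos])
        better = costs < pbest_cost
        pbest[better], pbest_cost[better] = pos[better], costs[better]
        gbest = pbest[pbest_cost.argmin()].copy()
    return gbest

# Synthetic experiment: true source q = 12 g/s at y_s = 5 m, nine sensors
# 50 m downwind; readings are noiseless, so the swarm should converge closely.
sensors = np.array([[50.0, y, 1.5] for y in range(-20, 21, 5)])
readings = gaussian_plume(12.0, 5.0, sensors)
q_hat, y_hat = pso_estimate(readings, sensors)
```

With noisy readings or unstable atmospheric classes the misfit surface flattens, which is exactly where the paper's corrected dispersion models become necessary.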
10

Amir Mohd Nor, Muhammad Izzat, Mohd Azri Mohd Izhar, Norulhusna Ahmad, and Hazilah Md Kaidi. "Exploiting 2-Dimensional Source Correlation in Channel Decoding with Parameter Estimation". International Journal of Electrical and Computer Engineering (IJECE) 8, no. 4 (August 1, 2018): 2633. http://dx.doi.org/10.11591/ijece.v8i4.pp2633-2642.

Abstract:
Traditionally, it is assumed that source coding is perfect and that the redundancy of the source-encoded bit-stream is therefore zero. In reality this is not the case, as existing source encoders are imperfect and yield residual redundancy at the output. This residual redundancy can be exploited by using Joint Source Channel Coding (JSCC) with a Markov chain as the source. In several studies, the statistical knowledge of the sources has been assumed to be perfectly available at the receiver. Although this improves BER performance, in practice the source correlation knowledge is not always available at the receiver, which can affect the reliability of the outcome. The source correlation on all rows and columns of the 2D sources is exploited by using a modified Bahl-Cocke-Jelinek-Raviv (BCJR) algorithm in the decoder. A parameter estimation technique is used jointly with the decoder to estimate the source correlation knowledge. Hence, this research investigates parameter estimation for a 2D JSCC system, reflecting the practical scenario where the source correlation knowledge is not always available. We compare the performance of the proposed joint decoding and estimation technique with an ideal 2D JSCC system having perfect knowledge of the source correlation. Simulation results reveal that our proposed coding scheme performs very close to the ideal 2D JSCC system.

Dissertations on the topic "Estimation de terme source"

1

Rajaona, Harizo. "Inférence bayésienne adaptative pour la reconstruction de source en dispersion atmosphérique". Thesis, Lille 1, 2016. http://www.theses.fr/2016LIL10120/document.

Abstract:
In atmospheric physics, reconstructing a pollution source is a challenging but important problem: it provides better input parameters for dispersion models and gives useful information to first-responder teams in case of an accidental toxic release. Various methods already exist, but applying them requires substantial computational resources, especially as the accuracy of the dispersion model increases. A minimal degree of precision for these models nevertheless remains necessary, particularly in urban scenarios, where obstacles and non-stationary meteorology must be taken into account. All sources of uncertainty, in both the observations and the estimation, must also be accounted for. The topic of this thesis is the construction of a source term estimation method based on adaptive Bayesian inference and Monte Carlo methods. First, we describe the context of the problem and the existing methods. Next, we detail the Bayesian formulation, focusing on adaptive importance sampling methods, especially the AMIS algorithm. The third chapter presents an application of AMIS to an experimental case study and illustrates the computation chain that provides the posterior density of the source parameters. Finally, the fourth chapter describes an improvement in how the dispersion computations are processed, yielding a considerable gain in computation time in both rural and urban cases.
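The estimation chain in this thesis rests on adaptive importance sampling: draw source-parameter candidates from a proposal, weight them by the measurement likelihood, then refit the proposal to the weighted population. A deliberately tiny one-dimensional sketch follows; the "dispersion" kernel, noise level, and flat prior are invented, and the full AMIS recycles all past samples with mixture weights, which is omitted here:

```python
import numpy as np

rng = np.random.default_rng(2)

def toy_dispersion(s, x):
    """Hypothetical stand-in for an expensive dispersion model: concentration
    at sensor positions x due to a unit-strength source at location s."""
    return 1.0 / (1.0 + (x - s) ** 2)

x_sensors = np.linspace(-5.0, 5.0, 9)
s_true, noise = 1.5, 0.02
obs = toy_dispersion(s_true, x_sensors) + noise * rng.standard_normal(9)

def log_likelihood(s):
    r = obs - toy_dispersion(s, x_sensors)
    return -0.5 * np.sum(r ** 2) / noise ** 2

# Adaptive Gaussian importance sampling: sample from the proposal, weight by
# posterior / proposal (flat prior assumed), refit the proposal, repeat.
mu, sig = 0.0, 3.0
for _ in range(3):
    s = rng.normal(mu, sig, 5000)
    logw = np.array([log_likelihood(v) for v in s])
    logw -= -0.5 * ((s - mu) / sig) ** 2 - np.log(sig)  # divide by proposal pdf
    w = np.exp(logw - logw.max())
    w /= w.sum()                                        # self-normalised weights
    mu = float(np.sum(w * s))                           # weighted posterior mean
    sig = float(max(np.sqrt(np.sum(w * (s - mu) ** 2)), 1e-3))

posterior_mean = mu
```

Each adaptation step tightens the proposal around the posterior, which is what makes the approach affordable when every likelihood evaluation hides a dispersion-model run.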
2

Jin, Bei. "Conditional source-term estimation methods for turbulent reacting flows". Thesis, University of British Columbia, 2007. http://hdl.handle.net/2429/232.

Abstract:
Conditional Source-term Estimation (CSE) methods are used to obtain chemical closure in turbulent combustion simulation. A Laminar Flamelet Decomposition (LFD) and then a Trajectory Generated Low-Dimensional Manifold (TGLDM) method are combined with CSE in Reynolds-Averaged Navier Stokes (RANS) simulation of non-premixed autoigniting jets. Despite the scatter observed in the experimental data, the predictions of ignition delay from both methods agree reasonably well with the measurements. The discrepancy between predictions of these two methods can be attributed to different ways of generating libraries that contain information of detailed chemical mechanism. The CSE-TGLDM method is recommended for its seemingly better performance and its ability to transition from autoignition to combustion. The effects of fuel composition and injection parameters on ignition delay are studied using the CSE-TGLDM method. The CSE-TGLDM method is then applied in Large Eddy Simulation of a non-premixed, piloted jet flame, Sandia Flame D. The adiabatic CSE-TGLDM method is extended to include radiation by introducing a variable enthalpy defect to parameterize TGLDM manifolds. The results are compared to the adiabatic computation and the experimental data. The prediction of NO formation is improved, though the predictions of temperature and major products show no significant difference from the adiabatic computation due to the weak radiation of the flame. The scalar fields are then extracted and used to predict the mean spectral radiation intensities of the flame. Finally, the application of CSE in turbulent premixed combustion is explored. A product-based progress variable is chosen for conditioning. Presumed Probability Density Function (PDF) models for the progress variable are studied. A modified version of a laminar flame-based PDF model is proposed, which best captures the distribution of the conditional variable among all PDFs under study. A priori tests are performed with the CSE and presumed PDF models. Reaction rates of turbulent premixed flames are closed and compared to the DNS data. The results are promising, suggesting that chemical closure can be achieved in premixed combustion using the CSE method.
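At its core, CSE recovers conditional means from unconditional ensemble means by inverting a Fredholm integral equation of the first kind, roughly b_j = ∫ P_j(ζ) ⟨Y|ζ⟩ dζ, where P_j is the presumed PDF of the conditioning variable in ensemble j. The following is a minimal linear-algebra sketch of that inversion, with synthetic beta PDFs, a made-up linear conditional mean, and second-difference Tikhonov regularization standing in for the thesis's inverse methods:

```python
import math
import numpy as np

def beta_pdf(z, a, b):
    c = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    return c * z ** (a - 1) * (1.0 - z) ** (b - 1)

nz = 50                              # grid on the progress-variable space [0, 1]
dz = 1.0 / nz
z = (np.arange(nz) + 0.5) * dz       # midpoint nodes

g_true = 1.0 + z                     # conditional mean to be recovered (made up)

# Ensembles of cells, each with a presumed beta PDF of the progress variable;
# their unconditional means play the role of the data b_j = integral P_j * g.
params = [(a, b) for a in (2, 3, 4, 6) for b in (2, 3, 4, 6)]
A = np.array([beta_pdf(z, a, b) * dz for a, b in params])   # quadrature rows
b_vec = np.array([1.0 + a / (a + b) for a, b in params])    # exact beta means

# Tikhonov-regularised least squares with a second-difference (smoothness)
# penalty, a common way to stabilise this ill-posed inversion.
D = np.diff(np.eye(nz), n=2, axis=0)
lam = 1e-3
g_hat = np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ b_vec)

err = float(np.max(np.abs(g_hat - g_true)))
```

The inversion is well behaved here because the presumed PDFs vary enough across ensembles; in a real simulation the ensembles must be spatially localized for that variation to exist, which is exactly the constraint the CSE literature emphasizes.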
3

Salehi, Mohammad Mahdi. "Numerical simulation of turbulent premixed flames with conditional source-term estimation". Thesis, University of British Columbia, 2012. http://hdl.handle.net/2429/42775.

Abstract:
Conditional Source-term Estimation (CSE) is a closure model for turbulence-chemistry interactions. This model is based on the conditional moment closure hypothesis for the chemical reaction source terms. The conditional scalar field is estimated by solving an integral equation using inverse methods. CSE was originally developed for - and has been used extensively in - non-premixed combustion. This work is the first application of this combustion model to predictive simulations of turbulent premixed flames. The underlying inverse problem is diagnosed with rigorous mathematical tools. CSE is coupled with a Trajectory Generated Low-Dimensional Manifold (TGLDM) model for chemistry. The CSE-TGLDM combustion model is used with both Reynolds-Averaged Navier-Stokes (RANS) and Large-Eddy Simulation (LES) turbulence models to simulate two different turbulent premixed flames. Also in this work, the Presumed Conditional Moment (PCM) turbulent combustion model is employed. This is a simple flamelet model which is used with the Flame Prolongation of ILDM (FPI) chemistry reduction technique. The PCM-FPI approach requires a presumption for the shape of the probability density function of reaction progress variable. Two shapes have been examined: the widely used beta-function and the Modified Laminar Flamelet PDF (MLF-PDF). This model is used in both RANS and large-eddy simulation of a turbulent premixed Bunsen burner. Radial distributions of the calculated temperature field, axial velocity and chemical species mass fraction have been compared with experimental data. This comparison shows that using the MLF-PDF leads to predictions that are similar, and often superior to those obtained using the beta-PDF. Given that the new PDF is based on the actual chemistry - as opposed to the ad hoc nature of the beta-PDF - these results suggest that it is a better choice for the statistical description of the reaction progress variable.
4

Nivarti, Girish Venkata. "Combustion modelling in spark-ignition engines using conditional source-term estimation". Thesis, University of British Columbia, 2013. http://hdl.handle.net/2429/44838.

Abstract:
Conditional Source-term Estimation (CSE) is a chemical closure model for the simulation of turbulent combustion. In this work, CSE has been explored for modelling combustion phenomena in a spark-ignition (SI) engine. In the arbitrarily complex geometries imposed by industrial design, estimation of conditionally averaged scalars is challenging. The key underlying requirement of CSE is that conditionally averaged scalars be calculated within spatially localized sub-domains. A domain partitioning algorithm based on space-filling curves has been developed to construct localized ensembles of points necessary to retain the validity of CSE. Algorithms have been developed to evenly distribute points to the maximum extent possible while maintaining spatial locality. A metric has been defined to estimate relative inter-partition contact as an indicator of communication in parallel computing architectures. Domain partitioning tests conducted on relevant geometries highlight the performance of the method as an unsupervised and computationally inexpensive domain partitioning tool. In addition to involving complex geometries, SI engines pose the challenge of accurately modelling the transient ignition process. Combustion in a homogeneous-charge natural gas fuelled SI engine with a relatively simple chamber geometry has been simulated using an empirical model for ignition. An oxygen based reaction progress variable is employed as the conditioning variable and its stochastic behaviour is approximated by a presumed probability density function (PDF). A trajectory generated low-dimensional manifold has been used to tabulate chemistry in a hyper-dimensional space described by the reaction progress variable, temperature and pressure. The estimates of pressure trace and pollutant emission trends obtained using CSE accurately match experimental measurements.
5

Mechhoud, Sarah. "Estimation de la diffusion thermique et du terme source du modèle de transport de la chaleur dans les plasmas de tokamaks". PhD thesis, Université de Grenoble, 2013. http://tel.archives-ouvertes.fr/tel-00954183.

Abstract:
This thesis addresses the simultaneous estimation of the diffusion coefficient and the source term governing the heat transport model in hot plasmas. The physical phenomenon is described by a linear, second-order, parabolic, non-homogeneous partial differential equation (PDE), in which the diffusion coefficient is spatially distributed and the reaction coefficient is constant. The work consists of two parts. In the first, the estimation problem is treated in finite dimension ("early lumping" approach). In the second, it is treated in the original infinite-dimensional setting ("late lumping" approach). For the finite-dimensional estimation, once the model is established, the Galerkin formulation and a projection-based approximation method are chosen to convert the transport PDE into a linear, time-varying state-space system with unknown inputs. On the reduced model, two techniques dedicated to unknown-input estimation are applied to solve the problem. In infinite dimension, adaptive online estimation is adopted to address the constraints and limitations caused by model reduction. Simulation results on real and synthetic data are presented in this thesis.
6

Wang, Mei. "Combustion modeling using conditional source-term estimation with flamelet decomposition and low-dimensional manifolds". Thesis, University of British Columbia, 2006. http://hdl.handle.net/2429/31181.

Abstract:
Combustion modeling is performed with Conditional Source-term Estimation (CSE) using both Laminar Flamelet Decomposition (LFD) and Low-dimensional Manifolds. CSE with Laminar Flamelet Decomposition (LFD) is used in the Large Eddy Simulation (LES) context to study the non-premixed Sandia D-flame. The results show that the flame temperature and major species are well predicted with both steady and unsteady flamelet libraries. A mixed library composed of steady and unsteady flamelet solutions is needed to get a good prediction of NO. That the LFD model allows for tuning of the results is found to be a significant drawback of this approach. CSE is also used with a Trajectory Generated Low-dimensional Manifold (TGLDM) to simulate the Sandia D-flame. Both GRI-Mech 3.0 and GRI-Mech 2.11 are found to be able to predict the temperature and major species well. However, only GRI-Mech 2.11 gives a good prediction of NO. That GRI-Mech 3.0 failed to give a good prediction of NO is in agreement with the findings of others in the literature. The Stochastic Particle Model (SPM) is used to extend the TGLDM to low temperatures where the original continuum TGLDM failed. A new method for generating a trajectory for the TGLDM by averaging different realizations together is proposed. The new TGLDM is used in simulations of a premixed laminar flame and a perfectly stirred reactor. The results show that the new TGLDM significantly improves the prediction. Finally, a time filter is applied to individual SPM realizations to eliminate the small time scales. These filtered realizations are tabulated into TGLDM which are then used to predict the autoignition delay time of a turbulent methane/air jet in RANS using CSE. The results are compared with shock tube experimental data. The TGLDMs incorporating SPM results are able to predict a certain degree of fluctuations in the autoignition delay time, but the magnitude is smaller than is seen in the experiments. This suggests that fluctuations in the ignition delay are at least in part due to turbulent fluctuations, which might be better predicted with LES.
Faculty of Science, Department of Mathematics (Graduate)
7

Brännvall, Tobias. „Source Term Estimation in the Atmospheric Boundary Layer : Using the adjoint of the Reynolds Averaged Scalar Transport equation“. Thesis, Umeå universitet, Institutionen för fysik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-103671.

Abstract:
This work evaluates whether Reynolds-averaged Computational Fluid Dynamics can be used to find the source of a measured gas based on real field measurements. The method works via the adjoint of the Reynolds Averaged Scalar Transport equation, explained and derived herein. Since the inverse model is only as good as the forward model, forward runs are made to evaluate the turbulence model. Reynolds Averaged Navier-Stokes is solved in a domain containing 4 cubes in a 2x2 grid, generating a velocity field for that domain. The turbulence model is a union of two modifications to the standard two-equation k-ε model, intended to capture bluff-body turbulence while also modelling the atmospheric boundary layer. This field is then inserted into the Reynolds Averaged Scalar Transport equation, and the simulation is compared to data from the EnFlo environmental flow wind tunnel in Surrey. Finally, the adjoint scalar transport equation is solved, both for synthetic data generated in the forward run and for the EnFlo data. The turbulent Schmidt number was found to play a major role in capturing the dispersed gas; three different Schmidt numbers were tested: the standard 0.7, the unconventionally low 0.3, and a height-dependent Schmidt number. The widely accepted value of 0.7 did not capture the dispersion at all and gave a large model error, so the adjoint scalar transport was solved for 0.3 and for the height-dependent Schmidt number. The interaction between the measurements, the real source strength (which is not used in the adjoint equation, but is needed to find the source) and the location of the source is intricate indeed: over- and underestimation by the forward model may cancel out so that the correct source, with the correct strength, is found. It is concluded that Reynolds-averaged computational fluid dynamics may prove useful in source term estimation.
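The adjoint principle the thesis relies on can be illustrated with a toy steady one-dimensional advection-diffusion problem (a minimal sketch with assumed parameters, not the thesis model): one adjoint solve per sensor yields that sensor's sensitivity to a unit source at every grid node, so candidate source locations can be scanned without re-running the forward model.

```python
import numpy as np

# Toy 1D illustration of adjoint-based source sensitivity (hypothetical
# parameters): for steady advection-diffusion
#   u dc/dx - D d2c/dx2 = s(x),  c(0) = c(length) = 0,
# the adjoint solution for one sensor gives the sensor's response to a unit
# source at every node in a single linear solve.
n, length = 101, 1.0
dx = length / (n - 1)
u, D = 1.0, 0.05                     # assumed wind speed and diffusivity

A = np.zeros((n, n))                 # upwind advection + central diffusion
for i in range(1, n - 1):
    A[i, i - 1] = -u / dx - D / dx**2
    A[i, i] = u / dx + 2 * D / dx**2
    A[i, i + 1] = -D / dx**2
A[0, 0] = A[-1, -1] = 1.0            # Dirichlet boundaries

sensor = 80                          # grid index of the measurement point
phi = np.linalg.solve(A.T, np.eye(n)[sensor])   # single adjoint solve

# Duality check: phi[j] equals the sensor reading from a unit source at j.
j = 30
s = np.zeros(n)
s[j] = 1.0
c = np.linalg.solve(A, s)            # one forward solve for comparison
assert abs(phi[j] - c[sensor]) < 1e-10
```

With several sensors, the source is then sought where the adjoint fields jointly explain the measured concentrations.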
8

Lopez Ferber, Roman. „Approches RBF-FD pour la modélisation de la pollution atmosphérique urbaine et l'estimation de sources“. Electronic Thesis or Diss., Université Grenoble Alpes, 2024. http://www.theses.fr/2024GRALT006.

Abstract:
Since the industrial era, cities have been affected by air pollution due to the density of industry, vehicle traffic and combustion-based heating. Urban air pollution has health consequences that are of increasing concern to both public authorities and the general public; it can aggravate asthma and cardiovascular problems. The aim of this thesis is to locate and quantify sources of urban pollution using a dense network of noisy measurements. We have chosen to develop methods for estimating pollution sources based on physical models of pollutant dispersion, so that the estimation is constrained by knowledge of the physics of the dispersion phenomenon. This thesis therefore focuses on the numerical modelling of pollutant dispersion in an urban environment and on the estimation of source terms. Because of the many constraints imposed on pollutant flows by urban buildings, the physics of dispersion is represented by computationally expensive numerical models. We have developed a numerical dispersion model based on the Finite Difference method supported by Radial Basis Functions (RBF-FD). These approaches are known to be computationally frugal and suitable for handling simulation domains with complex geometries. Our RBF-FD model can handle both two-dimensional (2D) and three-dimensional (3D) problems. We compared this model with a 2D analytical model, and qualitatively compared our 3D model with a reference numerical model. Source estimation experiments were then carried out. They consider numerous noisy measurements in order to estimate an arbitrary source term over the entire simulation domain. The studies involve twin experiments: we ourselves generate measurements simulated by a numerical model and evaluate the performance of the estimates. After testing a machine-learning approach on a one-dimensional steady-state case, we tested source term estimation methods on three-dimensional steady-state and transient cases, considering geometries with and without obstacles. We tested estimates using an original adjoint method, then an original estimation method inspired by physics-informed machine learning (PIML), and finally a Kalman filter. The PIML-inspired approach, so far tested in the steady-state regime, produces an estimation quality comparable to that of the Kalman filter (where the latter considers a transient dispersion regime with a stationary source). The PIML-inspired approach directly exploits the frugality of the RBF-FD forward model, which makes it a promising method for source estimation over large computational domains.
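The RBF-FD discretization at the heart of the model can be sketched in one dimension (an illustrative toy with an assumed Gaussian kernel and shape parameter, not the thesis implementation): derivative weights on a scattered stencil follow from collocation with radial basis functions.

```python
import numpy as np

# RBF-FD sketch: approximate f''(xc) as sum_i w[i] * f(x[i]) on a scattered
# stencil, with weights obtained from Gaussian RBF collocation (Phi w = L(phi)).
def rbf_fd_weights(x, xc, eps=2.0):
    r = x[:, None] - x[None, :]
    Phi = np.exp(-(eps * r) ** 2)            # RBF interpolation matrix
    # Second derivative of the Gaussian kernel, evaluated at xc:
    d = (4 * eps**4 * (xc - x) ** 2 - 2 * eps**2) * np.exp(-(eps * (xc - x)) ** 2)
    return np.linalg.solve(Phi, d)

x = np.array([-0.2, -0.1, 0.0, 0.12, 0.25])  # non-uniform stencil nodes
w = rbf_fd_weights(x, xc=0.0)
approx = w @ np.exp(x)                       # estimate of (e^x)'' at x = 0
assert abs(approx - 1.0) < 0.05
```

The same construction with 2D/3D kernels and a small stencil per node yields the sparse differentiation matrices used to advance the dispersion equation on complex urban geometries.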
9

Tsui, Hong P. „Turbulent premixed combustion simulation with Conditional Source-term Estimation and Linear-Eddy Model formulated PDF and SDR models“. Thesis, University of British Columbia, 2016. http://hdl.handle.net/2429/60295.

Abstract:
Computational fluid dynamics (CFD) is indispensable in the development of complex engines because of its low cost and time requirements compared to experiments. Nevertheless, because of the strong coupling between turbulence and chemistry in premixed flames, the prediction of chemical reaction source terms remains a modelling challenge. This work focuses on improving turbulent premixed combustion simulation strategies that require presumed probability density function (PDF) models. The study begins with the development of a new PDF model that includes the effect of turbulence, achieved by implementing the Linear-Eddy Model (LEM). Comparison with experimental burners reveals that the LEM PDF can capture the general PDF shapes for methane-air combustion under atmospheric conditions with greater accuracy than other presumed PDF models. The LEM is additionally used to formulate a new, pseudo-turbulent scalar dissipation rate (SDR) model. Conditional Source-term Estimation (CSE) is implemented in the Large Eddy Simulation (LES) of the Gülder burner as the closure model for the chemistry-turbulence interactions. To accommodate the increasingly parallel computational environments in clusters, the CSE combustion module has been parallelised and optimised. The CSE ensembles can now dynamically adapt to changing flame distributions by shifting their spatial boundaries and are no longer confined to pre-allocated regions of the simulation domain. Further, the inversion calculation is now computed in parallel using a modified version of an established iterative solver, Least-Squares QR-factorisation (LSQR). The revised version of CSE reduces the computational requirement by approximately 50% while producing solutions similar to those of previous implementations. The LEM-formulated PDF and SDR models are subsequently implemented in conjunction with the optimised version of CSE for the LES of a premixed methane-air flame operating in the thin reaction zone. Comparison with experimental temperature measurements shows that the LES results agree well in terms of flame height and distribution. This outcome is encouraging: it represents a significant step towards a complete combustion simulation strategy that can accurately predict flame characteristics without ad hoc parameters.
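The CSE inversion step that LSQR solves can be caricatured as a damped first-kind least-squares problem (a hypothetical toy: Gaussian bumps stand in for the presumed PDFs, and a dense numpy solve stands in for the iterative LSQR solver named above):

```python
import numpy as np

# Toy CSE closure step: each cell j supplies one constraint
#   mean_j(Y) = sum_k P_j(c_k) * <Y|c_k>,
# i.e. b = M a with unknown conditional means a.  The first-kind problem is
# ill-posed, so it is solved in damped (Tikhonov-regularized) least-squares form.
rng = np.random.default_rng(0)
nc, ncells = 40, 200
c = np.linspace(0.0, 1.0, nc)

centers = rng.uniform(0.1, 0.9, ncells)           # hypothetical per-cell PDFs
M = np.exp(-0.5 * ((c[None, :] - centers[:, None]) / 0.08) ** 2)
M /= M.sum(axis=1, keepdims=True)                 # each row integrates to 1

a_true = np.tanh(4 * (c - 0.5))                   # "true" conditional means
b = M @ a_true + 1e-4 * rng.standard_normal(ncells)

damp = 1e-3                                       # regularization strength
A_aug = np.vstack([M, damp * np.eye(nc)])         # damped least-squares system
b_aug = np.concatenate([b, np.zeros(nc)])
a_est = np.linalg.lstsq(A_aug, b_aug, rcond=None)[0]
assert np.linalg.norm(M @ a_est - b) < 0.05       # constraints are honored
```

In the actual solver the same damped system is handled matrix-free by LSQR, which scales to the much larger ensembles used in the LES.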
Faculty of Applied Science, Department of Mechanical Engineering (Graduate)
10

Sakarya, Fatma Ayhan. „Passive source location estimation“. Diss., Georgia Institute of Technology, 1992. http://hdl.handle.net/1853/13714.


Books on the topic "Estimation de terme source"

1

Sjoreen, A. L. Source term estimation using MENU-TACT. Washington, D.C.: Division of Operational Assessment, Office for Analysis and Evaluation of Operational Data, U.S. Nuclear Regulatory Commission, 1987.

2

Osman, Osman M., and Enders A. Robinson, eds. Seismic source signature estimation and measurement. Tulsa, OK: Society of Exploration Geophysicists, 1996.

3

Mélard, Guy. Méthodes de prévision à court terme. Bruxelles, Belgique: Editions de l'Université de Bruxelles, 1990.

4

Weinstein, Ehud. Multiple source location estimation using the EM algorithm. Woods Hole, Mass: Woods Hole Oceanographic Institution, 1986.

5

Langley Research Center, ed. Acoustic Source Bearing Estimation (ASBE) computer program development. Hampton, Va.: National Aeronautics and Space Administration, Langley Research Center, 1987.

6

U.S. Nuclear Regulatory Commission, Office of Nuclear Regulatory Research, Accident Source Term Program Office. Reassessment of the technical bases for estimating source terms: Draft report for comment. Washington, D.C.: Accident Source Term Program Office, Office of Nuclear Regulatory Research, U.S. Nuclear Regulatory Commission, 1985.

7

Feder, Meir. Optimal multiple source location via the EM algorithm. Woods Hole, Mass: Woods Hole Oceanographic Institution, 1986.

8

Clean Air Technology Center (U.S.) and U.S.-México Border Information Center on Air Pollution, eds. Emission estimation techniques for unique source categories in Mexicali, Mexico. Research Triangle Park, NC: U.S. Environmental Protection Agency, Office of Air Quality Planning and Standards, Clean Air Technology Center, 1999.

9

Clean Air Technology Center (U.S.) and U.S.-México Border Information Center on Air Pollution, eds. Emission estimation techniques for unique source categories in Mexicali, Mexico. Research Triangle Park, NC: U.S. Environmental Protection Agency, Office of Air Quality Planning and Standards, Clean Air Technology Center, 1999.

10

Ekström, Göran, Marvin Denny and John R. Murphy, eds. Monitoring the Comprehensive Nuclear-Test-Ban Treaty: Source Processes and Explosion Yield Estimation. Basel: Birkhäuser Basel, 2001. http://dx.doi.org/10.1007/978-3-0348-8310-8.


Book chapters on the topic "Estimation de terme source"

1

Cervone, Guido, and Pasquale Franzese. „Source Term Estimation for the 2011 Fukushima Nuclear Accident“. In Data Mining for Geoinformatics, 49–64. New York, NY: Springer New York, 2013. http://dx.doi.org/10.1007/978-1-4614-7669-6_3.

2

Liu, Yang, Matthew Coombes and Cunjia Liu. „Consensus-Based Distributed Source Term Estimation with Particle Filter and Gaussian Mixture Model“. In ROBOT2022: Fifth Iberian Robotics Conference, 130–41. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-21062-4_11.

3

Nagai, Haruyasu, Genki Katata, Hiroaki Terada and Masamichi Chino. „Source Term Estimation of 131I and 137Cs Discharged from the Fukushima Daiichi Nuclear Power Plant into the Atmosphere“. In Radiation Monitoring and Dose Estimation of the Fukushima Nuclear Accident, 155–73. Tokyo: Springer Japan, 2013. http://dx.doi.org/10.1007/978-4-431-54583-5_15.

4

Garbe, Christoph S., Hagen Spies and Bernd Jähne. „Mixed OLS-TLS for the Estimation of Dynamic Processes with a Linear Source Term“. In Lecture Notes in Computer Science, 463–71. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-45783-6_56.

5

Zavisca, Michael, Heinrich Kahlert, Mohsen Khatib-Rahbar, Elizabeth Grindon and Ming Ang. „A Bayesian Network Approach to Accident Management and Estimation of Source Terms for Emergency Planning“. In Probabilistic Safety Assessment and Management, 383–88. London: Springer London, 2004. http://dx.doi.org/10.1007/978-0-85729-410-4_62.

6

Kumar, Amit, Vageesh Shukla, Manoj Kansal and Mukesh Singhal. „PSA Level-2 Study: Estimation of Source Term for Postulated Accidental Release from Indian PHWRs“. In Reliability, Safety and Hazard Assessment for Risk-Based Technologies, 15–26. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-9008-1_2.

7

Penenko, Vladimir, and Alexander Baklanov. „Methods of Sensitivity Theory and Inverse Modeling for Estimation of Source Term and Risk/Vulnerability Areas“. In Computational Science - ICCS 2001, 57–66. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-45718-6_7.

8

Schnieders, Dirk, Kwan-Yee K. Wong and Zhenwen Dai. „Polygonal Light Source Estimation“. In Computer Vision – ACCV 2009, 96–107. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-12297-2_10.

9

Soriguera Martí, Francesc. „Short-Term Prediction of Highway Travel Time Using Multiple Data Sources“. In Highway Travel Time Estimation With Data Fusion, 157–84. Berlin, Heidelberg: Springer Berlin Heidelberg, 2015. http://dx.doi.org/10.1007/978-3-662-48858-4_6.

10

Lamb, Frederick K., Bruce W. Callen and Jeremiah D. Sullivan. „Yield estimation using shock wave methods“. In Explosion Source Phenomenology, 73–89. Washington, D.C.: American Geophysical Union, 1991. http://dx.doi.org/10.1029/gm065p0073.


Conference papers on the topic "Estimation de terme source"

1

Robins, P., and P. Thomas. „Non-linear Bayesian CBRN source term estimation“. In 2005 7th International Conference on Information Fusion. IEEE, 2005. http://dx.doi.org/10.1109/icif.2005.1591980.

2

Rahbar, Faezeh, Ali Marjovi and Alcherio Martinoli. „An Algorithm for Odor Source Localization based on Source Term Estimation“. In 2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019. http://dx.doi.org/10.1109/icra.2019.8793784.

3

Chichester, David L., James T. Johnson, Scott M. Watson, Scott J. Thompson, Nick R. Mann and Kevin P. Carney. „Post-blast radiological dispersal device source term estimation“. In 2016 IEEE Nuclear Science Symposium, Medical Imaging Conference and Room-Temperature Semiconductor Detector Workshop (NSS/MIC/RTSD). IEEE, 2016. http://dx.doi.org/10.1109/nssmic.2016.8069920.

4

Wang, Zhi-Pu, and Huai-Ning Wu. „Source Term Estimation with Unknown Number of Sources using Improved Cuckoo Search Algorithm“. In 2020 39th Chinese Control Conference (CCC). IEEE, 2020. http://dx.doi.org/10.23919/ccc50068.2020.9189067.

5

Ma, Yuanwei, Dezhong Wang, Wenji Tan, Zhilong Ji and Kuo Zhang. „Assessing Sensitivity of Observations in Source Term Estimation for Nuclear Accidents“. In 2012 20th International Conference on Nuclear Engineering and the ASME 2012 Power Conference. American Society of Mechanical Engineers, 2012. http://dx.doi.org/10.1115/icone20-power2012-54491.

Abstract:
In the Fukushima nuclear accident, owing to the lack of field observations and the complexity of the source terms, researchers were initially unable to estimate the source term accurately. Data assimilation methods for estimating source terms have many good features: they work well with highly nonlinear dynamic models, require no linearization in the evolution of error statistics, etc. This study built a data assimilation system using the ensemble Kalman filter for real-time estimation of source parameters. The system uses a Gaussian puff model as the atmospheric dispersion model and assimilates forward with the observation data. Taking measurement error into account, numerical experiments were carried out to verify the stability and accuracy of the scheme. The sensitivity to the observation configuration was then tested by twin experiments. First, the release rate, a single parameter of the source term, is estimated with different sensor grid configurations: with a sparse sensor grid the estimation error is about 10%, while with an 11*11 grid configuration the error is less than 1%. Based on an analysis of the Fukushima nuclear accident, and closer to the actual situation, four parameters are then estimated simultaneously with grid configurations from 2*2 to 11*11. The studies showed that the radionuclide plume should cover as many sensors as possible, which leads to a successful estimation.
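The assimilation scheme described above can be sketched as follows (a deliberately simplified toy: a steady Gaussian-plume template stands in for the paper's Gaussian puff model, a single unknown release rate is estimated, and sensor noise is assumed):

```python
import numpy as np

# Toy ensemble Kalman filter for source term estimation: the unknown release
# rate q is the state, and the observation operator is h(q) = q * template,
# where template is a steady Gaussian-plume concentration per unit release.
rng = np.random.default_rng(1)

def plume_template(xy, u=2.0, sy=0.3, sz=0.2):
    """Ground-level concentration per unit release rate (Gaussian plume)."""
    x, y = xy[:, 0], xy[:, 1]
    sig_y, sig_z = sy * x, sz * x
    return np.exp(-0.5 * (y / sig_y) ** 2) / (np.pi * u * sig_y * sig_z)

sensors = np.column_stack([rng.uniform(1, 5, 25), rng.uniform(-1, 1, 25)])
template = plume_template(sensors)

q_true = 3.0
obs_err = 0.05 * template.mean()         # assumed sensor noise level
y_obs = q_true * template + obs_err * rng.standard_normal(25)

q_ens = rng.normal(1.0, 1.0, 100)        # prior ensemble of release rates
for _ in range(5):                       # repeated analysis steps (twin setup)
    H = q_ens[:, None] * template[None, :]          # predicted observations
    P_qh = ((q_ens - q_ens.mean())[:, None] * (H - H.mean(axis=0))).mean(axis=0)
    P_hh = np.cov(H.T) + obs_err**2 * np.eye(25)
    K = np.linalg.solve(P_hh, P_qh)                 # Kalman gain
    perturbed = y_obs + obs_err * rng.standard_normal((100, 25))
    q_ens = q_ens + (perturbed - H) @ K             # stochastic EnKF update

assert abs(q_ens.mean() - q_true) / q_true < 0.1    # release rate recovered
```

The paper's qualitative finding, that estimates improve when the plume covers more sensors, can be reproduced here by varying the number of sensor rows.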
6

Fanfarillo, Alessandro. „Quantifying Uncertainty in Source Term Estimation with Tensorflow Probability“. In 2019 IEEE/ACM HPC for Urgent Decision Making (UrgentHPC). IEEE, 2019. http://dx.doi.org/10.1109/urgenthpc49580.2019.00006.

7

Hu, Hao, Xinwen Dong, Xinpeng Li, Yuhan Xu, Shuhan Zhuang and Sheng Fang. „Wind Tunnel Validation Study of Joint Estimation Source Term Inversion Method“. In ASME 2023 International Conference on Environmental Remediation and Radioactive Waste Management. American Society of Mechanical Engineers, 2023. http://dx.doi.org/10.1115/icem2023-110109.

Abstract:
Radioactive emissions caused by severe accidents pose a great threat to the environment and to living beings along the dispersion path. Source inversion techniques are designed to retrieve the corresponding release rate from environmental monitoring data and have been widely integrated into nuclear accident consequence assessment. Model error and its propagation in the inversion can render the estimate unrealistic and obscure the real releases. Several self-correcting source inversion methods have been proposed to handle this, for instance the joint estimation method, which simultaneously retrieves the estimate and corrects the model bias by introducing a correction coefficient matrix. In this study, the joint estimation method is investigated against a wind tunnel experiment with the incoming airflow from the south-southeast (SSE) direction. Three nuclear power plants were located in the scenario, surrounded by mountains and sea. The Risø Mesoscale Lagrangian PUFF model (RIMPUFF) was used for the atmospheric diffusion predictions and the construction of the transport matrix. The fine wind tunnel experimental data were used as the observations driving both the standard (Tikhonov) inversion method and the joint estimation method. The results show that the release rate estimated by the Tikhonov method was four times higher than the ground truth on the representative SSE direction. In comparison, the self-correcting joint estimation method reduces the error to only 4%.
8

Robins, P., V. Rapley and P. Thomas. „Biological Source Term Estimation Using Particle Counters and Immunoassay Sensors“. In 2006 9th International Conference on Information Fusion. IEEE, 2006. http://dx.doi.org/10.1109/icif.2006.301723.

9

Rahbar, Faezeh, and Alcherio Martinoli. „A Distributed Source Term Estimation Algorithm for Multi-Robot Systems“. In 2020 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2020. http://dx.doi.org/10.1109/icra40945.2020.9196959.

10

Zheng, Xiaoyu, Hiroto Itoh, Hitoshi Tamaki and Yu Maruyama. „Estimation of Source Term Uncertainty in a Severe Accident With Correlated Variables“. In 2014 22nd International Conference on Nuclear Engineering. American Society of Mechanical Engineers, 2014. http://dx.doi.org/10.1115/icone22-30011.

Abstract:
The quantitative evaluation of the fission product release to the environment during a severe accident is of great importance. In the present analysis, the integral severe accident code MELCOR 1.8.5 has been applied to estimating the source term uncertainty for the accident at Unit 2 of the Fukushima Daiichi nuclear power plant (NPP) as an example, and to discussing the models and parameters most influential on the source term. Forty-two parameters associated with models for the transport of radioactive materials were chosen and narrowed down to 18 through a set of screening analyses. These 18 parameters, in addition to 9 parameters relevant to in-vessel melt progression obtained by the preceding uncertainty study, were input to a subsequent sensitivity analysis by the Morris method. This one-factor-at-a-time approach can preliminarily identify inputs that have important effects on an output, and 17 important parameters were selected from the total of 27 through this approach. The selected parameters were then integrated into an uncertainty analysis by means of the Latin Hypercube Sampling technique and the Iman-Conover method, taking correlation between parameters into account. Cumulative distribution functions of representative source terms were obtained through the present uncertainty analysis assuming the failure of the suppression chamber. Correlation coefficients between the outputs and the uncertain input parameters were calculated to identify the parameters with the greatest influence on source terms, which include parameters related to models of core component failure, aerosol dynamic processes and pool scrubbing.
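The sampling step can be sketched as follows (plain Latin Hypercube Sampling only; the Iman-Conover step the study uses to impose rank correlations between parameters is omitted here):

```python
import numpy as np

# Latin Hypercube Sampling: stratify each of d parameters into n equiprobable
# bins, draw one point per bin, then permute bins independently per parameter.
def latin_hypercube(n, d, rng):
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n  # one point per stratum
    for j in range(d):
        u[:, j] = rng.permutation(u[:, j])                # decouple parameters
    return u

rng = np.random.default_rng(42)
samples = latin_hypercube(100, 3, rng)

# Stratification check: each parameter gets exactly one sample per 1/n bin.
bins = np.floor(samples * 100).astype(int)
for j in range(3):
    assert np.array_equal(np.sort(bins[:, j]), np.arange(100))
```

Mapping each column through the inverse CDF of the corresponding input distribution then yields stratified samples of the uncertain code parameters.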

Organization reports on the topic "Estimation de terme source"

1

Brooks, Dusty. Non-Parametric Source Term Uncertainty Estimation. Office of Scientific and Technical Information (OSTI), June 2020. http://dx.doi.org/10.2172/1763581.

2

McKenna, T. J., and J. G. Glitter. Source term estimation during incident response to severe nuclear power plant accidents. Office of Scientific and Technical Information (OSTI), October 1988. http://dx.doi.org/10.2172/6822946.

3

Fourrier, Marine. Integration of in situ and satellite multi-platform data (estimation of carbon flux for trop. Atlantic). EuroSea, 2023. http://dx.doi.org/10.3289/eurosea_d7.6.

Abstract:
This report presents the results of Task 7.3 on "Quantification of improvements in carbon flux data for the tropical Atlantic based on the multi-platform and neural network approach". To better constrain changes in the ocean's capture and sequestration of CO2 emitted by human activities, in situ measurements are needed. Tropical regions are considered to be mostly sources of CO2 to the atmosphere due to specific circulation features, with large interannual variability mainly controlled by physical drivers (Padin et al., 2010). The tropical Atlantic is the second largest source of CO2 to the atmosphere, after the tropical Pacific (Landschützer et al., 2014). However, it is not a homogeneous zone: it is affected by many physical and biogeochemical processes that vary on many time scales and affect surrounding areas (Foltz et al., 2019). The Tropical Atlantic Observing System (TAOS) has progressed substantially over the past two decades, yet many challenges and uncertainties remain, requiring further study of the area's role in terms of carbon fluxes (Foltz et al., 2019). Monitoring and sustained observations of surface-ocean CO2 are critical for understanding the fate of CO2 as it penetrates the ocean and during its sequestration at depth. This deliverable relies on observing platforms deployed specifically as part of the EuroSea project (a Saildrone and 5 pH-equipped BGC-Argo floats) as well as on the platforms of the TAOS (CO2-equipped moorings, cruises, models, and data products). It also builds on the work done in D7.1 and D7.2 on the deployment and quality control of pH-equipped BGC-Argo float and Saildrone data. Indeed, high-quality, homogeneously calibrated carbonate-variable measurements are mandatory for computing air-sea CO2 fluxes at basin scale from multiple observing platforms. (EuroSea Deliverable, D7.6)
4

Clark, Todd E., Gergely Ganics and Elmar Mertens. Constructing fan charts from the ragged edge of SPF forecasts. Federal Reserve Bank of Cleveland, November 2022. http://dx.doi.org/10.26509/frbc-wp-202236.

Abstract:
We develop a model that permits the estimation of a term structure of both expectations and forecast uncertainty for application to professional forecasts such as the Survey of Professional Forecasters (SPF). Our approach exactly replicates a given data set of predictions from the SPF (or a similar forecast source) without measurement error. Our model captures fixed-horizon and fixed-event forecasts, and can accommodate changes in the maximal forecast horizon available from the SPF. The model casts a decomposition of multi-period forecast errors into a sequence of forecast updates that may be partially unobserved, resulting in a multivariate unobserved components model. In our empirical analysis, we provide quarterly term structures of expectations and uncertainty bands. Our preferred specification features stochastic volatility in forecast updates, which improves forecast performance and yields model estimates of forecast uncertainty that vary over time. We conclude by constructing SPF-based fan charts for calendar-year forecasts like those published by the Federal Reserve. Replication files are available at https://github.com/elmarmertens/ClarkGanicsMertensSPFfancharts.
APA, Harvard, Vancouver, ISO, and other citation styles
5

Hertel, Thomas, David Hummels, Maros Ivanic, and Roman Keeney. How Confident Can We Be in CGE-Based Assessments of Free Trade Agreements? GTAP Working Paper, June 2003. http://dx.doi.org/10.21642/gtap.wp26.

The full text of the source
Annotation:
With the proliferation of Free Trade Agreements (FTAs) over the past decade, demand for quantitative analysis of their likely impacts has surged. The main quantitative tool for performing such analysis is Computable General Equilibrium (CGE) modeling. Yet these models have been widely criticized for performing poorly (Kehoe, 2002) and having weak econometric foundations (McKitrick, 1998; Jorgenson, 1984). FTA results have been shown to be particularly sensitive to the trade elasticities, with small trade elasticities generating large terms of trade effects and relatively modest efficiency gains, whereas large trade elasticities lead to the opposite result. Critics are understandably wary of results being determined largely by the authors’ choice of trade elasticities. Where do these trade elasticities come from? CGE modelers typically draw these elasticities from econometric work that uses time series price variation to identify an elasticity of substitution between domestic goods and composite imports (Alaouze, 1977; Alaouze, et al., 1977; Stern et al., 1976; Gallaway, McDaniel and Rivera, 2003). This approach has three problems: the use of point estimates as “truth”, the magnitude of the point estimates, and estimating the relevant elasticity. First, modelers take point estimates drawn from the econometric literature while ignoring the precision of these estimates. As we will make clear below, the confidence one has in various CGE conclusions depends critically on the size of the confidence interval around parameter estimates. Standard “robustness checks”, such as systematically raising or lowering the substitution parameters, do not properly address this problem because they ignore information about which parameters we know with some precision and which we do not. A second problem concerns the magnitude of most existing estimates: the use of import price series to identify home vs. foreign substitution, for example, tends to systematically understate the true elasticity.
This is because these estimates take price variation as exogenous when estimating the import demand functions, and ignore quality variation. When quality is high, import demand and prices will be jointly high. This biases estimated elasticities toward zero. A related point is that the fixed-weight import price series used by most authors are theoretically inappropriate for estimating the elasticities of interest. CGE modelers generally examine a nested utility structure, with domestic production substituting for a CES composite import bundle. The appropriate price series is then the corresponding CES price index among foreign varieties. Constructing such an index requires knowledge of the elasticity of substitution among foreign varieties (see below). By using a fixed-weight import price series, previous estimates place too much weight on high foreign prices and too little weight on low foreign prices. In other words, they overstate the degree of price variation that exists relative to a CES price index. Reconciling small trade volume movements with large import price movements requires a small elasticity of substitution. This problem, and that of unmeasured quality variation, helps explain why typical estimated elasticities are very small. The third problem with the existing literature is that estimates taken from other researchers’ studies typically employ different levels of aggregation, and exploit different sources of price variation, from what policy modelers have in mind. Employing elasticities in experiments ill-matched to their original estimation can be problematic. For example, estimates may be calculated at a higher or lower level of aggregation than the level of analysis the modeler wants to examine. Estimating substitutability across sources for paddy rice gives a quite different answer than estimates that look at agriculture as a whole.
When analyzing Free Trade Agreements, the principal policy experiment is a change in relative prices among foreign suppliers caused by lowering tariffs within the FTA. Understanding the substitution this will induce across those suppliers is critical to gauging the FTA’s real effects. Using home vs. foreign elasticities rather than elasticities of substitution among imports supplied from different countries may be quite misleading. Moreover, these “sourcing” elasticities are critical for constructing the composite import price series needed to appropriately estimate home vs. foreign substitutability. In summary, the history of estimating the substitution elasticities governing trade flows in CGE models has been checkered at best. Clearly there is a need for improved econometric estimation of these trade elasticities that is well integrated into the CGE modeling framework. This paper provides such estimation and integration, and has several significant merits. First, we choose our experiment carefully. Our CGE analysis focuses on the prospective Free Trade Agreement of the Americas (FTAA) currently under negotiation. This is one of the most important FTAs currently “in play” in international negotiations. It also fits nicely with the source data used to estimate the trade elasticities, which is largely based on imports into North and South America. Our assessment is done in a perfectly competitive, comparative static setting in order to emphasize the role of the trade elasticities in determining the conventional gains/losses from such an FTA. This type of model is still widely used by government agencies for the evaluation of such agreements. Extensions to incorporate imperfect competition are straightforward, but involve the introduction of additional parameters (markups, extent of unexploited scale economies) as well as structural assumptions (entry/no-entry, nature of inter-firm rivalry) that introduce further uncertainty.
Since our focus is on the effects of a PTA, we estimate elasticities of substitution across multiple foreign supply sources. We do not use cross-exporter variation in prices or tariffs alone. Exporter price series exhibit a high degree of multicollinearity and, in any case, would be subject to unmeasured quality variation as described previously. Similarly, tariff variation by itself is typically unhelpful because, by their very nature, Most Favored Nation (MFN) tariffs are non-discriminatory, affecting all suppliers in the same way. Tariff preferences, where they exist, are often difficult to measure, sometimes being confounded by quantitative barriers, restrictive rules of origin, and other restrictions. Instead we employ a unique methodology and data set drawing on not only tariffs, but also bilateral transportation costs for goods traded internationally (Hummels, 1999). Transportation costs vary much more widely than do tariffs, allowing much more precise estimation of the trade elasticities that are central to CGE analysis of FTAs. We have highly disaggregated commodity trade flow data, and are therefore able to provide estimates that precisely match the commodity aggregation scheme employed in the subsequent CGE model. We follow the GTAP Version 5.0 aggregation scheme, which includes 42 merchandise trade commodities covering food products, natural resources, and manufactured goods. With the exception of two primary commodities that are not traded, we are able to estimate trade elasticities for all merchandise commodities that are significantly different from zero at the 95% confidence level. Rather than producing point estimates of the resulting welfare, export, and employment effects, we report confidence intervals instead. These are based on repeated solution of the model, drawing from a distribution of trade elasticity estimates constructed from the econometrically estimated standard errors.
There is now a long history of CGE studies based on systematic sensitivity analysis (SSA) (Harrison and Vinod, 1992; Wigle, 1991; Pagan and Shannon, 1987) Ho
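The systematic sensitivity analysis described above (repeatedly solving the model with elasticities drawn from their estimated sampling distribution, then reporting empirical confidence intervals) can be sketched as follows. The `toy_model` stand-in, the independent normal draws, and the positivity floor are illustrative assumptions; the paper solves a full CGE model at each draw:

```python
import random


def sensitivity_interval(model, estimates, std_errors, draws=1000, alpha=0.05, seed=0):
    """Monte Carlo systematic sensitivity analysis.

    model: callable mapping a list of elasticities to a scalar outcome
           (e.g. a welfare change from the FTA experiment).
    estimates, std_errors: econometric point estimates and standard errors.
    Returns the empirical (1 - alpha) confidence interval for the outcome.
    """
    rng = random.Random(seed)
    outcomes = []
    for _ in range(draws):
        # Draw each elasticity from its estimated distribution; floor keeps
        # substitution elasticities positive (an assumption for illustration).
        elasticities = [max(0.05, rng.gauss(m, s))
                        for m, s in zip(estimates, std_errors)]
        outcomes.append(model(elasticities))
    outcomes.sort()
    lo = outcomes[int(alpha / 2 * draws)]
    hi = outcomes[int((1 - alpha / 2) * draws) - 1]
    return lo, hi


# Illustrative stand-in for a CGE solve: gains shrinking in the elasticities.
toy_model = lambda es: sum(1.0 / e for e in es)
```

The width of the resulting interval, rather than a single point estimate, is what conveys how much confidence the trade elasticities actually support.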
APA, Harvard, Vancouver, ISO, and other citation styles
6

Junek, W. N., J. Roman-Nieves, R. C. Kemerait, M. T. Woods, and J. P. Creasey. Automated Source Depth Estimation Using Array Processing Techniques. Fort Belvoir, VA: Defense Technical Information Center, October 2009. http://dx.doi.org/10.21236/ada517312.

The full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
7

Hodgkiss, William S. Source Signature Estimation and Noise Directionality in Shallow Water. Fort Belvoir, VA: Defense Technical Information Center, September 1995. http://dx.doi.org/10.21236/ada306524.

The full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
8

Edwards, L. L. Complex source rate estimation for atmospheric transport and dispersion models. Office of Scientific and Technical Information (OSTI), September 1993. http://dx.doi.org/10.2172/10102792.

The full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
9

Weinstein, Ehud, and Meir Feder. Multiple Source Location Estimation Using the EM (Estimate-Maximize) Algorithm. Fort Belvoir, VA: Defense Technical Information Center, July 1986. http://dx.doi.org/10.21236/ada208762.

The full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
10

Sanders, T. L., H. Jordan, V. Pasupathi, W. J. Mings, and P. C. Reardon. A methodology for estimating the residual contamination contribution to the source term in a spent-fuel transport cask. Office of Scientific and Technical Information (OSTI), September 1991. http://dx.doi.org/10.2172/6373171.

The full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles