Ready-made bibliography on the topic "Homogenized cross sections"

Below are lists of current articles, books, dissertations, and other scholarly sources on the topic "Homogenized cross sections".

Journal articles on the topic "Homogenized cross sections"

1. Ishii, Kazuya. "Reconstruction method of homogenized cross sections". Journal of Nuclear Science and Technology 50, no. 10 (October 2013): 1011–19. http://dx.doi.org/10.1080/00223131.2013.828661.

2. Tomatis, Daniele. "A multivariate representation of compressed pin-by-pin cross sections". EPJ Nuclear Sciences & Technologies 7 (2021): 8. http://dx.doi.org/10.1051/epjn/2021006.

Abstract:
Since the 1980s, industrial core calculations have employed the two-step scheme based on prior cross-section preparation with few energy groups and in homogenized reference geometries. Spatial homogenization in fuel-assembly quarters is the most frequent calculation option nowadays, relying on efficient nodal solvers using a coarse mesh. Pin-wise reaction rates are then reconstructed by dehomogenization techniques. The trend of core calculations is, however, moving toward pin-by-pin explicit representations, where few-group cross sections are homogenized in the single pins at many physical conditions and many nuclides are selected for the simplified depletion chains. The resulting data model requires considerable memory occupation on disk files, and the time needed to evaluate all data exceeds the limits for practical feasibility of multi-physics reactor calculations. In this work, we study the compression of pin-by-pin homogenized cross sections by the Hotelling transform in typical PWR fuel assemblies. The reconstruction of these quantities at different physical states of the assembly is then addressed by interpolation of only a few compressed coefficients, instead of interpolating each homogenized cross section separately. Memory savings higher than 90% are observed, which result in important gains in runtime when interpolating the few-group data.
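The Hotelling transform used in this paper is, in practice, a principal component analysis of the centered cross-section table. The sketch below illustrates the compression idea on synthetic data; the array names, shapes, and retained rank are assumptions for illustration, not the paper's actual data model.

```python
import numpy as np

# Synthetic stand-in for a pin-by-pin HXS table: rows are physical state
# points, columns are individual cross sections (pin x group x reaction).
rng = np.random.default_rng(0)
n_states, n_xs = 200, 1000
modes = rng.normal(size=(n_states, 5))           # a few underlying physical modes
xs_table = modes @ rng.normal(size=(5, n_xs)) \
    + 1e-3 * rng.normal(size=(n_states, n_xs))   # small residual "noise"

# Hotelling transform = principal component analysis of the centered table.
mean = xs_table.mean(axis=0)
u, s, vt = np.linalg.svd(xs_table - mean, full_matrices=False)

k = 5                                  # retained components
coeffs = u[:, :k] * s[:k]              # per-state coefficients, interpolated later
basis = vt[:k]                         # shared basis, stored once

rel_err = np.abs(coeffs @ basis + mean - xs_table).max() / np.abs(xs_table).max()
print(f"stored {coeffs.size + basis.size} values instead of {xs_table.size}, "
      f"max relative error ~ {rel_err:.1e}")
```

Interpolating the few coefficient curves instead of each cross section separately is what yields the memory and runtime savings reported in the abstract.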
3. Price, Dean, Thomas Folk, Matthew Duschenes, Krishna Garikipati, and Brendan Kochunas. "Methodology for Sensitivity Analysis of Homogenized Cross-Sections to Instantaneous and Historical Lattice Conditions with Application to AP1000® PWR Lattice". Energies 14, no. 12 (June 8, 2021): 3378. http://dx.doi.org/10.3390/en14123378.

Abstract:
In the two-step method for nuclear reactor simulation, lattice physics calculations are performed to compute homogenized cross-sections for a variety of burnups and lattice configurations. A nodal code is then used to perform full-core analysis using the pre-calculated homogenized cross-sections. One source of uncertainty introduced in this method is that the lattice configuration or depletion conditions typically do not match a pre-calculated one from the lattice physics simulations. Therefore, some interpolation model must be used to estimate the homogenized cross-sections in the nodal code. The present study provides a methodology for sensitivity analysis to quantify the impact of state variables on the homogenized cross-sections. This methodology also allows for analyses of the historical effect that the state variables have on homogenized cross-sections. An application of this methodology to a lattice of the Westinghouse AP1000® reactor is presented, where coolant density, fuel temperature, soluble boron concentration, and control rod insertion are the state variables of interest. The effects of considering the instantaneous values of the state variables, historical values of the state variables, and burnup-averaged values of the state variables are analyzed. Using these methods, it was found that a linear model that only considers the instantaneous and burnup-averaged values of state variables can fail to capture some variations in the homogenized cross-sections.
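As a hedged sketch of the simplest surrogate this abstract mentions, the snippet below fits a linear model of one homogenized cross section against instantaneous state variables and inspects the residual left by an unmodeled cross term; all names, ranges, and the synthetic response are illustrative assumptions, not the paper's methodology.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
coolant_density = rng.uniform(0.65, 0.75, n)       # g/cm^3 (assumed range)
fuel_temperature = rng.uniform(600.0, 1200.0, n)   # K
boron = rng.uniform(0.0, 2000.0, n)                # ppm

# Hypothetical response with a cross term that a purely linear,
# instantaneous-variable model cannot capture (echoing the paper's finding).
sigma_a = (0.10 + 0.05 * coolant_density + 2.0e-5 * boron
           - 1.0e-4 * np.sqrt(fuel_temperature)
           + 1.0e-8 * boron * fuel_temperature)

X = np.column_stack([np.ones(n), coolant_density, fuel_temperature, boron])
beta, *_ = np.linalg.lstsq(X, sigma_a, rcond=None)  # fitted sensitivities

residual = sigma_a - X @ beta
print("sensitivities (density, T_fuel, boron):", beta[1:])
print("max residual left to history/cross terms:", np.abs(residual).max())
```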
4. Hua, Guowei, Yunzhao Li, and Sicheng Wang. "PWR pin-homogenized cross-sections analysis using big-data technology". Progress in Nuclear Energy 121 (March 2020): 103228. http://dx.doi.org/10.1016/j.pnucene.2019.103228.

5. Truffinet, Olivier, Karim Ammar, Jean-Philippe Argaud, Nicolas Gérard Castaing, and Bertrand Bouriquet. "Multi-output gaussian processes for the reconstruction of homogenized cross-sections". EPJ Web of Conferences 302 (2024): 02006. http://dx.doi.org/10.1051/epjconf/202430202006.

Abstract:
Deterministic nuclear reactor simulators employing the prevalent two-step scheme often generate a substantial amount of intermediate data at the interface of their two subcodes, which can impede the overall performance of the software. The bulk of this data comprises "few-group homogenized cross-sections", or HXS, which are stored as tabulated multivariate functions and interpolated inside the core simulator. A number of mathematical tools have been studied for this interpolation purpose over the years, but few meet all the challenging requirements of neutronics computation chains: extreme accuracy, low memory footprint, fast predictions… We here present a new framework to tackle this task, based on multi-output Gaussian processes (MOGP). This machine learning model enables us to interpolate HXS with improved accuracy compared to the current multilinear standard, using only a fraction of its training data, meaning that the amount of required precomputation is reduced by a factor of several dozen. It also requires an even smaller fraction of the multilinear model's storage, preserves its reconstruction speed, and unlocks new functionalities such as adaptive sampling and facilitated uncertainty quantification. We demonstrate the efficiency of this approach on a rich test case reproducing the VERA benchmark, proving in particular its scalability to datasets of millions of HXS.
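As a minimal stand-in for this approach, the sketch below interpolates two synthetic few-group cross sections over burnup and fuel temperature with scikit-learn's Gaussian process regressor. Note that scikit-learn fits independent outputs with shared kernel hyperparameters, a simplification of the true MOGP of the paper; all state ranges and responses are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(2)

def fake_hxs(x):
    # Hypothetical smooth dependence of two few-group cross sections
    # on burnup (GWd/t) and fuel temperature (K).
    bu, tf = x[:, 0], x[:, 1]
    return np.column_stack([1.2 + 0.01 * bu - 1e-4 * np.sqrt(tf),
                            0.08 + 2e-4 * bu + 5e-6 * tf])

X_train = rng.uniform([0.0, 550.0], [60.0, 900.0], size=(40, 2))
Y_train = fake_hxs(X_train)

kernel = ConstantKernel(1.0) * RBF(length_scale=[10.0, 100.0])
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, Y_train)

X_test = rng.uniform([0.0, 550.0], [60.0, 900.0], size=(200, 2))
Y_pred, Y_std = gp.predict(X_test, return_std=True)  # std enables UQ / sampling
print("max abs error:", np.abs(Y_pred - fake_hxs(X_test)).max())
```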
6. Truffinet, Olivier, Karim Ammar, Jean-Philippe Argaud, Nicolas Gérard Castaing, and Bertrand Bouriquet. "Adaptive sampling of homogenized cross-sections with multi-output gaussian processes". EPJ Web of Conferences 302 (2024): 02010. http://dx.doi.org/10.1051/epjconf/202430202010.

Abstract:
In another talk submitted to this conference, we presented an efficient new framework based on multi-output Gaussian processes (MOGP) for the interpolation of few-group homogenized cross-sections (HXS) inside deterministic core simulators. We indicated that this methodology allows a principled selection of interpolation points through adaptive sampling. We here develop this idea by trying simple sampling schemes on our problem. In particular, we compare sample scoring functions with and without integration of leave-one-out errors, obtained with single-output and multi-output Gaussian process models. We test these methods on a realistic PWR assembly with gadolinium-added fuel rods, comparing them with non-adaptive supports. Results are promising, as the sampling algorithms significantly reduce the size of the interpolation supports with almost preserved accuracy. However, they exhibit instability and stagnation phenomena, which calls for further investigation of the sampling dynamics and for trying other scoring functions for the selection of samples.
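A toy illustration of the adaptive idea, assuming a plain variance-based score rather than the paper's leave-one-out variants: greedily add the candidate point where a single-output GP is most uncertain. Everything below (the 1D "cross section", kernel, and budget) is an assumption for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical smooth cross-section dependence on a single parameter (burnup).
f = lambda x: 1.0 + 0.1 * np.sin(3.0 * x) + 0.02 * x
candidates = np.linspace(0.0, 10.0, 201)[:, None]   # candidate state points

X = np.array([[0.0], [5.0], [10.0]])                # small initial support
for _ in range(7):
    gp = GaussianProcessRegressor(RBF(length_scale=2.0),
                                  normalize_y=True).fit(X, f(X).ravel())
    _, std = gp.predict(candidates, return_std=True)
    X = np.vstack([X, candidates[np.argmax(std)]])  # add most uncertain point

gp.fit(X, f(X).ravel())                             # refit on the final support
err = np.abs(gp.predict(candidates) - f(candidates).ravel()).max()
print(f"support size: {len(X)}, max interpolation error: {err:.2e}")
```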
7. Nguyen, Dinh Quoc Dang, and Emiliano Masiello. "Representation of few-group homogenized cross section by multi-variate polynomial regression". EPJ Web of Conferences 302 (2024): 02002. http://dx.doi.org/10.1051/epjconf/202430202002.

Abstract:
In this paper, a representation of few-group homogenized cross sections by multi-variate polynomial regression is presented. The method is applied to the few-group assembly-homogenized cross sections of assembly 22UA from the X2VVER benchmark [1], generated by the lattice transport code APOLLO3® [2] over a Cartesian grid of parametric state-points. The regression model [3, 4] allows a significantly larger number of training points than monomials, thus yielding higher accuracy than polynomial interpolation without being affected by the choice of points in the training set. Additionally, it can reduce data-preparation time, because the training set can be smaller than the complete Cartesian grid while still providing a good approximation. Furthermore, its evaluation algorithm can be adapted for GPU utilization, similar to polynomial interpolation with the Newton method [5].
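A small sketch of the technique under stated assumptions (two state parameters, total degree 4, a hypothetical smooth response; the paper itself works on APOLLO3®-generated VVER data): build the monomial design matrix on a training set larger than the number of monomials and fit by least squares.

```python
import numpy as np
from itertools import combinations_with_replacement

def design_matrix(X, degree):
    """Columns = all monomials of the state variables up to total degree."""
    cols = [np.ones(len(X))]
    for d in range(1, degree + 1):
        for idx in combinations_with_replacement(range(X.shape[1]), d):
            cols.append(np.prod(X[:, idx], axis=1))   # one monomial term
    return np.column_stack(cols)

rng = np.random.default_rng(4)
X_train = rng.uniform(size=(300, 2))                  # many more points...
A = design_matrix(X_train, degree=4)                  # ...than monomials (15)
y = np.exp(-X_train[:, 0]) * (1.0 + 0.1 * X_train[:, 1])   # hypothetical XS

coef, *_ = np.linalg.lstsq(A, y, rcond=None)          # least-squares fit

X_test = rng.uniform(size=(100, 2))
y_ref = np.exp(-X_test[:, 0]) * (1.0 + 0.1 * X_test[:, 1])
y_hat = design_matrix(X_test, degree=4) @ coef
print("max abs error:", np.abs(y_hat - y_ref).max())
```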
8. Szames, E., K. Ammar, D. Tomatis, and J. M. Martinez. "Few-Group Cross Sections Modeling by Artificial Neural Networks". EPJ Web of Conferences 247 (2021): 06029. http://dx.doi.org/10.1051/epjconf/202124706029.

Abstract:
This work deals with the modeling of homogenized few-group cross sections by artificial neural networks (ANN). A comprehensive sensitivity study on data normalization, network architectures, and training hyper-parameters, specifically for deep and shallow feed-forward ANNs, is presented. The optimal models in terms of library-size reduction and training time are compared to multi-linear interpolation on a Cartesian grid. The use case is provided by the OECD-NEA Burn-up Credit Criticality Benchmark [1]. The PyTorch [2] machine learning framework is used.
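A minimal shallow feed-forward network of the kind studied in this work, written with PyTorch as the abstract indicates; the layer sizes, activation, training settings, and synthetic library below are illustrative assumptions.

```python
import torch
from torch import nn

# Map normalized state parameters to several few-group cross sections at once.
model = nn.Sequential(
    nn.Linear(3, 64),     # inputs: e.g. burnup, fuel temperature, boron
    nn.Tanh(),
    nn.Linear(64, 8),     # outputs: a handful of few-group cross sections
)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Synthetic stand-in for a normalized training library.
X = torch.rand(1024, 3)
Y = torch.stack([X.prod(dim=1)] * 8, dim=1) + 0.01 * torch.randn(1024, 8)

for epoch in range(500):
    opt.zero_grad()
    loss = loss_fn(model(X), Y)
    loss.backward()
    opt.step()
print("final MSE:", loss.item())
```

A multi-output network like this stores only its weights, which is how such models trade library size against evaluation speed.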
9. Griso, Georges, Larysa Khilkova, Julia Orlik, and Olena Sivak. "Asymptotic Behavior of Stable Structures Made of Beams". Journal of Elasticity 143, no. 2 (February 5, 2021): 239–99. http://dx.doi.org/10.1007/s10659-021-09816-w.

Abstract:
In this paper, we study the asymptotic behavior of an $\varepsilon$-periodic 3D stable structure made of beams of circular cross-section of radius $r$ when the periodicity parameter $\varepsilon$ and the ratio $r/\varepsilon$ simultaneously tend to 0. The analysis is performed within the frame of linear elasticity theory and is based on the known decomposition of the beam displacements into a beam centerline displacement, a small rotation of the cross-sections, and a warping (the deformation of the cross-sections). This decomposition allows us to obtain Korn-type inequalities. We introduce two unfolding operators, one for the homogenization of the set of beam centerlines and another for the dimension reduction of the beams. The limit homogenized problem is still a linear elastic, second-order PDE.
10. Wang, Qiudong, Ding She, Bing Xia, and Lei Shi. "Evaluation of pebble-bed homogenized cross sections in HTGR fuel cycle simulations". Progress in Nuclear Energy 117 (November 2019): 103041. http://dx.doi.org/10.1016/j.pnucene.2019.103041.


Doctoral dissertations on the topic "Homogenized cross sections"

1. Nguyen, Dinh Quoc Dang. "Representation of few-group homogenized cross sections by polynomials and tensor decomposition". Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASP142.

Abstract:
This thesis focuses on studying the mathematical modeling of few-group homogenized cross sections, a critical element in the two-step scheme widely used in nuclear reactor simulations. As industrial demands increasingly require finer spatial and energy meshes to improve the accuracy of core calculations, the size of the cross section library can become excessive, hampering the performance of core calculations. Therefore, it is essential to develop a representation that minimizes memory usage while still enabling efficient data interpolation. Two approaches, polynomial representation and Canonical Polyadic decomposition of tensors, are presented and applied to few-group homogenized cross section data. The data is prepared using APOLLO3 on the geometry of two assemblies in the X2 VVER-1000 benchmark. The compression rate and accuracy are evaluated and discussed for each approach to determine their applicability to the standard two-step scheme. Additionally, GPU implementations of both approaches are tested to assess the scalability of the algorithms based on the number of threads involved. These implementations are encapsulated in a library called Merlin, intended for future research and industrial applications that involve these approaches. Both approaches, particularly the method of tensor decomposition, demonstrate promising results in terms of data compression and reconstruction accuracy. Integrating these methods into the standard two-step scheme would not only substantially reduce memory usage for storing cross sections, but also significantly decrease the computational effort required for interpolating cross sections during core calculations, thereby reducing overall calculation time for industrial reactor simulations.
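For the Canonical Polyadic route, the sketch below uses the third-party tensorly package on a synthetic tensor that is exactly low-rank by construction; the axis meanings and rank are assumptions, not the thesis data (which the Merlin library handles, including on GPUs).

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

# Synthetic cross-section tensor over three state axes (e.g. burnup x fuel
# temperature x boron), built from rank-3 factors so CP can recover it exactly.
rng = np.random.default_rng(5)
factors_true = [rng.random((20, 3)), rng.random((15, 3)), rng.random((10, 3))]
tensor = tl.cp_to_tensor((np.ones(3), factors_true))

weights, factors = parafac(tl.tensor(tensor), rank=3)  # alternating least squares
approx = tl.cp_to_tensor((weights, factors))

storage_cp = sum(f.size for f in factors) + weights.size
print(f"stored {storage_cp} values instead of {tensor.size}, "
      f"max abs error ~ {np.abs(approx - tensor).max():.1e}")
```

The compression comes from storing only the factor matrices, one per state axis, instead of the full Cartesian table.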
2. Truffinet, Olivier. "Machine learning methods for cross-section reconstruction in full-core deterministic neutronics codes". Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASP128.

Abstract:
Today, most deterministic neutronics simulators for nuclear reactors follow a two-step multi-scale scheme. In a so-called "lattice" calculation, the physics is finely resolved at the level of the elementary reactor pattern (fuel assemblies); these tiles are then brought into contact in a so-called "core" calculation, where the overall configuration is calculated more coarsely. Communication between these two codes is realized by the deferred transfer of physical data, the most important of which are called "homogenized cross sections" (hereafter referred to as HXS) and can be represented by multivariate functions. Their deferred use and dependence on variable physical conditions call for a tabulation-interpolation scheme: HXS are precalculated in a wide range of situations, stored, then approximated in the core code from the stored values to correspond to a specific reactor state. In a context of increasing simulation finesse, the mathematical tools currently used for this approximation stage are now showing their limitations. The aim of this thesis is to find replacements for them, capable of making HXS interpolation more accurate and more economical in terms of data and storage space, while remaining just as fast. The whole arsenal of machine learning and functional approximation can be put to use to tackle this problem. In order to find a suitable approximation model, we began by analyzing the datasets generated for this thesis: correlations between HXS, shapes of their dependencies, linear dimension, etc. This last point proved particularly fruitful: HXS sets turn out to be of very low effective dimension, which greatly simplifies their approximation. In particular, we leveraged this fact to develop an innovative methodology based on the Empirical Interpolation Method (EIM), capable of replacing the majority of lattice-code calls by extrapolations from a small volume of data and reducing HXS storage by one or two orders of magnitude, all with a negligible loss of accuracy. To retain the advantages of such a methodology while addressing the full scope of the thesis problem, we then turned to a powerful machine learning model matching the same low-dimensional structure: multi-output Gaussian processes (MOGPs). Proceeding step by step from the simplest Gaussian models (single-output GPs) to more complex ones, we showed that these tools are fully adapted to the problem under consideration and offer major gains over current HXS interpolation routines. Numerous modeling choices were discussed and compared; models were adapted to very large data, requiring some optimization of their implementation; and the new functionalities which they offer were tested, notably uncertainty prediction and active learning. Finally, theoretical work was carried out on the studied family of models, the Linear Model of Co-regionalisation (LMC), in order to shed light on certain grey areas in their still-young theory. This led to the definition of a new model, the PLMC, which was implemented, optimized, and tested on numerous real and synthetic data sets. Simpler than its competitors, this model has also proved to be just as accurate and fast, if not more so, and holds a number of exclusive functionalities that were put to good use during the thesis. This work opens up many new prospects for neutronics simulation. Equipped with powerful and flexible learning models, it is possible to envisage significant evolutions for deterministic codes: systematic propagation of uncertainties, correction of various approximations, and taking more variables into account.
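A greedy construction in the spirit of the Empirical Interpolation Method discussed above, on a synthetic low-rank snapshot matrix; this is an assumed simplification for illustration, not the thesis implementation.

```python
import numpy as np

# Rows: cross-section "snapshots" from the lattice code; columns: entries of
# the HXS vector. Built low-rank so a few magic points suffice (assumption).
rng = np.random.default_rng(6)
snapshots = rng.normal(size=(50, 8)) @ rng.normal(size=(8, 400))

basis, magic = [], []
residual = snapshots.copy()
for _ in range(8):
    i = np.argmax(np.abs(residual).max(axis=1))    # worst-approximated snapshot
    j = int(np.argmax(np.abs(residual[i])))        # its worst entry: magic point
    basis.append(residual[i] / residual[i, j])     # normalized residual mode
    magic.append(j)
    B = np.array(basis)
    # Interpolate every snapshot through its values at the magic points only.
    coef = np.linalg.solve(B[:, magic], snapshots[:, magic].T)
    residual = snapshots - coef.T @ B

print("magic points:", magic)
print("max residual after 8 modes:", np.abs(residual).max())
```

In the thesis setting, the values at the magic points are the only quantities the lattice code must actually compute, which is what replaces most lattice-code calls by cheap extrapolations.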
3. Szames, Esteban Alejandro. "Few group cross section modeling by machine learning for nuclear reactor". Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASS134.

Abstract:
Modern nuclear reactors utilize core calculations that implement a thermo-hydraulic feedback requiring accurate homogenized few-group cross sections. These describe the interactions of neutrons with matter and are endowed with the properties of smoothness and regularity stemming from the underlying physical phenomena. This thesis is devoted to the modeling of these functions by industry state-of-the-art and innovative machine learning techniques. Mathematically, the subject can be defined as the analysis of convenient mapping techniques from one multi-dimensional space to another, conceptualized as the aggregated sum of these functions, whose quantity and domain depend on the simulation objectives. Convenient is intended in terms of computational performance, such as the model's size, evaluation speed, accuracy, robustness to numerical noise, complexity, etc., always with respect to the engineering modeling objectives that specify the multi-dimensional spaces of interest. In this thesis, a standard UO₂ PWR fuel assembly is analyzed for three state variables: burnup, fuel temperature, and boron concentration. Library storage requirements are optimized to meet the evaluation speed and accuracy targets for microscopic and macroscopic cross sections and the infinite multiplication factor. Three approximation techniques are studied. State-of-the-art spline interpolation uses a computationally convenient B-spline basis to generate high-order local approximations on a full grid, as usually done in industry. Kernel methods are a very general machine learning framework able to pose, in a normed vector space, a large variety of regression or classification problems; kernel functions can reproduce different function spaces using an unstructured support, which is optimized with pool active learning techniques, and the approximations are found through a convex optimization process simplified by the kernel trick. The intrinsic modular character of the method facilitates segregating the modeling phases: function space selection, application of numerical routines, and support optimization through active learning. Artificial neural networks are "model free" universal approximators able to approach continuous functions to an arbitrary degree without formulating explicit relations among the variables; with adequate training settings, intrinsically parallelizable multi-output networks minimize storage requirements while offering the highest evaluation speed. These strategies are compared to each other and to multi-linear interpolation on a Cartesian grid, the industry standard in core calculations. The data set, the developed tools, and scripts are freely available under an MIT license.
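For context, the industry baseline to which the three techniques are compared, multilinear interpolation on a full Cartesian grid, takes a few lines with SciPy; the grid ranges and tabulated response below are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Full Cartesian grid over the three state variables used in the thesis.
burnup = np.linspace(0.0, 60.0, 13)         # GWd/t (assumed range)
fuel_temp = np.linspace(500.0, 1500.0, 6)   # K
boron = np.linspace(0.0, 2000.0, 5)         # ppm

B, T, C = np.meshgrid(burnup, fuel_temp, boron, indexing="ij")
table = 1.0 + 0.01 * B - 1e-4 * np.sqrt(T) + 1e-5 * C   # hypothetical XS table

# The default method="linear" gives the multilinear industry baseline.
interp = RegularGridInterpolator((burnup, fuel_temp, boron), table)
print(interp([[30.5, 912.0, 655.0]]))       # evaluation inside the core code
```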
4. Tari, Ilker. "Homogenized cross section determination using Monte Carlo simulation". Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/28054.

5. Cai, Li. "Condensation et homogénéisation des sections efficaces pour les codes de transport déterministes par la méthode de Monte Carlo : Application aux réacteurs à neutrons rapides de GEN IV". Thesis, Paris 11, 2014. http://www.theses.fr/2014PA112280/document.

Abstract:
In the framework of Generation IV reactor neutronics research, new core calculation tools are implemented in the code system APOLLO3® for the deterministic part. These calculation methods are based on energy-discretized nuclear data (named multi-group, and generally produced by deterministic codes as well) and should be validated and qualified with respect to Monte Carlo reference calculations. This thesis aims to develop an alternative technique for producing multi-group nuclear properties with a Monte Carlo code (TRIPOLI-4®). At first, after testing the existing homogenization and condensation functionalities with the better precision accessible nowadays, some inconsistencies were revealed. Several new multi-group parameter estimators were developed and validated for the TRIPOLI-4® code with the aid of the code itself, since it can use its own multi-group constants in a core calculation. Secondly, the treatment of scattering anisotropy, which is necessary for handling neutron leakage, was studied. A correction technique concerning the diagonal of the first-order moment of the scattering matrix, named the IGSC technique and based on an approximate current introduced by Todorova, was developed. An improvement of the IGSC technique was then introduced for geometries whose material properties change drastically in space; it uses a more accurate current quantity, the projection on the abscissa X, which represents the real situation better but is limited to 1D geometries. Finally, a homogeneous B1 leakage model was implemented in TRIPOLI-4® to generate multi-group cross sections with a critical spectrum calculated under the fundamental-mode approximation. This leakage model was analyzed and rigorously validated by comparison with other codes (Serpent and ECCO) as well as with an analytical case. The whole development work introduced in TRIPOLI-4® allows producing multi-group constants that can then be used in the core calculation solver SNATCH of the PARIS platform, which uses transport theory, indispensable for new-generation fast reactor analysis. The principal conclusions are: a Monte Carlo lattice code is an interesting way (notably to avoid the difficulties of self-shielding, of anisotropy limited to a certain order of the Legendre polynomial expansion, and of exact 3D geometries) to validate deterministic codes like ECCO or APOLLO3® and to produce data for deterministic or multi-group Monte Carlo codes; and the results obtained so far with data produced by TRIPOLI-4® are comparable with those from deterministic codes such as ECCO, but have not yet shown a clear advantage.
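For reference, the flux-weighted condensation and homogenization that such multi-group estimators implement can be written in the standard textbook form (generic, not specific to TRIPOLI-4®): the few-group constant over a region $V$ and energy group $g$ is

```latex
\Sigma_{x,g}^{\mathrm{hom}}
  = \frac{\int_{V}\mathrm{d}V\int_{E_g}^{E_{g-1}}\mathrm{d}E\,
          \Sigma_x(\mathbf{r},E)\,\phi(\mathbf{r},E)}
         {\int_{V}\mathrm{d}V\int_{E_g}^{E_{g-1}}\mathrm{d}E\,
          \phi(\mathbf{r},E)}
```

The B1 leakage model mentioned above changes the weighting flux $\phi$ to the critical spectrum obtained from the fundamental-mode problem.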

Book chapters on the topic "Homogenized cross sections"

1. Wang, Weixiang, WenPei Feng, KeFan Zhang, Guangliang Yang, Tao Ding, and Hongli Chen. "A Moose-Based Neutron Diffusion Code with Application to a LMFR Benchmark". In Springer Proceedings in Physics, 490–502. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-1023-6_43.

Abstract:
MOOSE (Multiphysics Object-Oriented Simulation Environment) is a powerful finite-element multi-physics coupling framework whose object-oriented, extensible system is conducive to the development of various simulation tools. In this work, a full-core MOOSE-based neutron diffusion application is developed, and the 3D-IAEA PWR benchmark with given group constants is used for code verification. The application is then applied to the calculation of a Sodium-cooled Fast Reactor (SFR) benchmark, together with research on homogenized few-group constant generation based on the Monte Carlo method. The calculation adopts a 33-group cross-section set generated using the Monte Carlo code OpenMC. Considering the long neutron free path and strong global neutron-spectrum coupling of liquid metal cooled reactors (LMFR), a full-core homogeneous model is used in OpenMC to generate the homogenized few-group constants. In addition, transport correction is used in the cross-section generation, considering the prominent anisotropic scattering of fast reactors. The calculated results, including the effective multiplication factor (keff) and assembly power distributions, are in good agreement with the reference values and the OpenMC results, which demonstrates the accuracy of the neutron diffusion application and shows that the Monte Carlo method can be applied to the generation of homogenized few-group constants for LMFR analysis.
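A hedged sketch of how such few-group constants can be requested through OpenMC's mgxs module; a trivial one-material sphere stands in for the full-core homogeneous model, and the 33-group boundaries are assumed rather than taken from the chapter.

```python
import numpy as np
import openmc
import openmc.mgxs

# Minimal stand-in geometry (the real workflow uses the full-core model).
fuel = openmc.Material(name='fuel')
fuel.add_nuclide('U235', 1.0)
fuel.set_density('g/cm3', 10.0)

sphere = openmc.Sphere(r=50.0, boundary_type='vacuum')
cell = openmc.Cell(fill=fuel, region=-sphere)
geometry = openmc.Geometry([cell])

groups = openmc.mgxs.EnergyGroups(np.logspace(-3, 7.3, 34))  # 33 groups (assumed)

lib = openmc.mgxs.Library(geometry)
lib.energy_groups = groups
lib.mgxs_types = ['total', 'fission', 'nu-fission', 'chi', 'nu-scatter matrix']
lib.domain_type = 'material'
lib.domains = list(geometry.get_all_materials().values())
lib.build_library()

tallies = openmc.Tallies()
lib.add_to_tallies_file(tallies, merge=True)  # tallies scored during the MC run;
# after the run, lib.load_from_statepoint(...) yields the few-group constants.
```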
2. Qin, Shuai, Qingming He, Jiahe Bai, Wenchang Dong, Liangzhi Cao, and Hongchun Wu. "Group Constants Generation Based on NECP-MCX Monte Carlo Code". In Springer Proceedings in Physics, 86–97. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-1023-6_9.

Abstract:
The reliability of few-group constants generated by lattice physics calculations is significant for the accuracy of the conventional two-step method in neutronics calculations. The deterministic method is preferred in lattice calculations due to its efficiency. However, it is difficult for the deterministic method to treat the resonance self-shielding effect accurately and to handle complex geometries. Compared to the deterministic method, the Monte Carlo method uses continuous-energy cross sections and has a powerful geometric modeling capability. Therefore, the Monte Carlo particle transport code NECP-MCX is extended in this study to generate assembly-homogenized few-group constants. The cumulative migration method is adopted to generate accurate diffusion coefficients, and the leakage correction is performed using the homogeneous fundamental-mode approximation. For verification of the generated few-group constants, a code sequence named MCX-SPARK is built based on NECP-MCX and the core analysis code SPARK to perform the two-step calculation. The physics start-up test of the HPR1000 reactor is simulated using the MCX-SPARK sequence. The results from MCX-SPARK agree well with the results from the design report and the deterministic two-step code Bamboo-C. It is concluded that NECP-MCX is able to generate accurate few-group constants.
3. "Ultrasonic homogenizing systems …" [chapter excerpt on ultrasonic, high-pressure, and microfluidizer homogenizers]. In Pharmaceutical Dosage Forms, 365–67. CRC Press, 1998. http://dx.doi.org/10.1201/9781420000955-54.


Conference papers on the topic "Homogenized cross sections"

1. Grgić, Davor, Radomir Ječmenica, and Dubravko Pevec. "Xenon Correction in Homogenized Neutron Cross Sections". In 2012 20th International Conference on Nuclear Engineering and the ASME 2012 Power Conference. ASME, 2012. http://dx.doi.org/10.1115/icone20-power2012-54878.

2. Price, Dean, Thomas Folk, Siddhartha Srivastava, Krishna Garikipati, and Brendan Kochunas. "Sensitivity Analysis of Homogenized Cross Sections in AP1000 Lattices". In International Conference on Physics of Reactors 2022. Illinois: American Nuclear Society, 2022. http://dx.doi.org/10.13182/physor22-37383.

3. Hursin, Mathieu, Brendan Kochunas, Thomas J. Downar, Ricardo Alarcon, Philip L. Cole, Chaden Djalali, and Fernando Umeres. "Error Assessment of Homogenized Cross Sections Generation for Whole Core Neutronic Calculation". In VII Latin American Symposium on Nuclear Physics and Applications. AIP, 2007. http://dx.doi.org/10.1063/1.2813839.

4. Bokov, P. M., D. Botes, and Kostadin Ivanov. "Hierarchical Interpolation of Homogenized Few-Group Neutron Cross-Sections on Samples with Uncorrelated Uncertainty". In International Conference on Physics of Reactors 2022. Illinois: American Nuclear Society, 2022. http://dx.doi.org/10.13182/physor22-37615.

5. Ratti, Luca, Guido Mazzini, Marek Ruščák, and Valerio Giusti. "Neutronic Analysis for VVER-440 Type Reactor Using PARCS Code". In 2018 26th International Conference on Nuclear Engineering. American Society of Mechanical Engineers, 2018. http://dx.doi.org/10.1115/icone26-82607.

Abstract:
The Czech National Radiation Protection Institute (SURO) provides technical support to the Czech State Office for Nuclear Safety, supplying safety analyses and reviews of technical documentation for nuclear power plants (NPPs). For this purpose, several computational models created in SURO were prepared using different codes as tools to simulate and investigate design-basis and beyond-design-basis accident scenarios. This paper focuses on the creation of SCALE and PARCS neutronic models for proper analysis of the VVER-440 reactor. In particular, SCALE models of the VVER-440 fuel assemblies have been created in order to produce the collapsed and homogenized cross sections necessary for studying the whole VVER-440 reactor core with PARCS. A sensitivity study of the suitable energy threshold to adopt when preparing collapsed two-energy-group homogenized cross sections with SCALE is also discussed. Finally, the results obtained with the PARCS core model are compared with those reported in the VVER-440 Final Safety Report.
6. Hu, Tianliang, Liangzhi Cao, Hongchun Wu, and Kun Zhuang. "Code Development for the Neutronics/Thermal-Hydraulics Coupling Transient Analysis of Molten Salt Reactors". In 2017 25th International Conference on Nuclear Engineering. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/icone25-67316.

Abstract:
A code system has been developed in this paper for dynamics simulations of MSRs. The homogenized cross-section data library is generated using the continuous-energy Monte Carlo code OpenMC, which provides significant modeling flexibility compared with traditional deterministic lattice transport codes. The few-group cross sections generated by OpenMC are provided to TANSY and TANSY_K, which are based on OpenFOAM, to perform steady-state full-core coupled simulations and dynamics simulations. For verification and application of the code sequence, a simulation of a representative molten salt reactor core, MOSART, has been performed. For further study of the characteristics of MSRs, several transients, such as the cold-slug transient, the unprotected loss-of-flow transient, and the overcooling transient, have been analyzed. The numerical results indicate that the TANSY and TANSY_K codes, with the cross-section library generated by OpenMC, have the capability for dynamics analysis of MSRs.
7. Nie, Jingyu, Binqian Li, Yingwei Wu, Jing Zhang, Guoliang Zhang, Qisen Ren, Yanan He, and Guanghui Su. "Thermo-Neutronics Coupled Simulation of a Heat Pipe Reactor Based on OpenMC/COMSOL". In 2024 31st International Conference on Nuclear Engineering. American Society of Mechanical Engineers, 2024. http://dx.doi.org/10.1115/icone31-135246.

Abstract:
As an advanced small nuclear reactor, the heat pipe reactor possesses several advantages, including high energy density, long operational lifetime, compact size, and strong adaptability to various environments, making it an optimal choice for specialized energy needs in future applications such as deep-sea and deep-space domains. In this study, we developed a code system using OpenMC/COMSOL for neutronic and thermodynamic simulations. The continuous-energy Monte Carlo code OpenMC was employed to generate homogenized cross-section databases, offering significant modeling flexibility compared to traditional deterministic lattice transport codes. The multi-group cross sections generated by OpenMC were utilized in COMSOL for coupled neutronic and thermodynamic simulations of the entire core. To validate the OpenMC/COMSOL code system, benchmark problems for pressurized water reactors were computed, and the results of the "two-step" scheme for neutron physics were compared with full-core Monte Carlo results. Furthermore, to investigate the applicability of thermal-hydraulic coupling in the heat pipe reactor neutronics model, typical heat pipe reactor assembly models were established and verified for various energy-group numbers and homogenized regions. The results from COMSOL showed good overall agreement for multiplication factors and power distributions, indicating that the cross-section library generated by OpenMC enables steady-state analysis and core design.
8. Mazzini, Guido, Bruno Miglierini, and Marek Ruščák. "Comparison Between PARCS and MCNP6 Codes on VVER1000/V320 Core". In 2014 22nd International Conference on Nuclear Engineering. American Society of Mechanical Engineers, 2014. http://dx.doi.org/10.1115/icone22-30386.

Abstract:
Research Centre Rez works on several safety-related projects dealing with the safety of Czech NPPs, some of which require a fully functioning three-dimensional (3D) model of the reactor core. While in many safety analyses of accident scenarios it is sufficient to use point reactor kinetics, there are selected types of accidents in which it is useful to model space-dependent (3D) neutron kinetics, in particular control rod ejections and boron dilution scenarios, including transitions from design-basis to beyond-design-basis accidents. This paper analyzes the present core model of the VVER1000/V320 reactor, which is applicable to 3D modeling of neutron kinetics in selected design-basis and beyond-design-basis accidents. The model is based on a cross-section library created by SCALE 6.1.2/TRITON simulations. The PARCS 3.2 code uses homogenized cross-section libraries to calculate neutronic and other core parameters of PWR reactors. A similar model is prepared with MCNP6 for comparison between the deterministic approach (the Pn spherical-harmonics method used in PARCS) and the stochastic (Monte Carlo) approach used in MCNP6. Such a comparison will serve as a demonstration of the capability of the PARCS code for VVER1000/V320 analyses.
9. Zhang, Hongbo, Chuntao Tang, Weiyan Yang, Guangwen Bi, and Bo Yang. "Development and Verification of the PWR Lattice Code PANDA". In 2017 25th International Conference on Nuclear Engineering. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/icone25-66573.

Abstract:
A lattice code generates homogenized few-group cross sections for the core neutronics code and is an important component of the nuclear design code system. The development and improvement of lattice codes are always significant topics in reactor physics. The PANDA code is a PWR lattice code developed by the Shanghai Nuclear Engineering Research and Design Institute (SNERDI). It starts from a 70-group library and performs the resonance calculation based on the Spatially Dependent Dancoff Method (SDDM). The 2D heterogeneous transport calculation is performed without any group collapse or cell homogenization, using MOC with two-level Coarse Mesh Finite Difference (CMFD) acceleration. Matrix exponential methods are used to solve the Bateman depletion equations. Verifications at several levels preliminarily demonstrate the capability of the PANDA code.
10. Chao, Guo, Liu Yu, He Hangxing, Liu Luguo, Wang Xiaoyu, Xin Sufang, Li Peiyang, Wu Xiaoli, and Yuan Hongsheng. "Development of Three-Dimensional Neutron Kinetics Code Based on High Order Nodal Expansion Method in Hexagonal-Z Geometry". In 2018 26th International Conference on Nuclear Engineering. American Society of Mechanical Engineers, 2018. http://dx.doi.org/10.1115/icone26-81356.

Abstract:
To solve three-dimensional kinetics problems, a high-order nodal expansion method for hexagonal-z geometry (HONEM) and a Runge-Kutta (RK) method are adopted to treat the spatial and temporal problems, respectively. In the HONEM, the 1D partially-integrated fluxes are approximated by fourth-order polynomials, and second-order polynomials are adopted to approximate the partially-integrated leakages. The Runge-Kutta method is used to discretize the time term of the 3D kinetics equations. A flux-weighting method (FWM) is used to obtain homogenized cross sections of mixed nodes. A three-dimensional hexagonal kinetics code has been developed based on this method and tested with two VVER benchmark problems: control rod ejection without any feedback and with simple adiabatic Doppler feedback. The results calculated by this code agree well with the reference results, and the code is thus validated.