Selected scientific literature on the topic "Few-Group Homogenized Cross Sections"

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles


Consult the list of current articles, books, theses, conference proceedings, and other scholarly sources on the topic "Few-Group Homogenized Cross Sections".

Next to each source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read its abstract online, if one is present in the metadata.

Journal articles on the topic "Few-Group Homogenized Cross Sections"

1

Szames, E., K. Ammar, D. Tomatis, and J. M. Martinez. "FEW-GROUP CROSS SECTIONS MODELING BY ARTIFICIAL NEURAL NETWORKS". EPJ Web of Conferences 247 (2021): 06029. http://dx.doi.org/10.1051/epjconf/202124706029.

Abstract:
This work deals with the modeling of homogenized few-group cross sections by Artificial Neural Networks (ANN). A comprehensive sensitivity study on data normalization, network architectures, and training hyper-parameters, specifically for Deep and Shallow Feed Forward ANN, is presented. The optimal models in terms of reduction in the library size and training time are compared to multi-linear interpolation on a Cartesian grid. The use case is provided by the OECD-NEA Burn-up Credit Criticality Benchmark [1]. The PyTorch [2] machine learning framework is used.
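As a rough illustration of the approach summarized above, the sketch below trains a small feed-forward network to map reactor state parameters to a vector of few-group cross sections with PyTorch, which the paper also uses. The architecture, state parameters, and data are placeholder assumptions, not the authors' actual setup.

    import torch
    import torch.nn as nn

    # Placeholder data: state parameters (e.g. burnup, fuel temperature,
    # boron concentration) -> a vector of few-group homogenized cross sections.
    X = torch.rand(1000, 3)   # normalized state-point inputs (assumed)
    Y = torch.rand(1000, 8)   # e.g. 8 cross sections per state point (assumed)

    # A small shallow feed-forward network; depth and width are illustrative.
    model = nn.Sequential(
        nn.Linear(3, 64), nn.ReLU(),
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, 8),
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    for _ in range(500):
        optimizer.zero_grad()
        loss = loss_fn(model(X), Y)
        loss.backward()
        optimizer.step()

    # The trained network replaces the tabulated library: evaluating it at an
    # arbitrary state point plays the role of multi-linear interpolation.
    xs = model(torch.tensor([[0.2, 0.5, 0.7]]))

Once trained, only the network weights need to be stored, which is where the reduction in library size comes from.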
2

Tomatis, Daniele. "A multivariate representation of compressed pin-by-pin cross sections". EPJ Nuclear Sciences & Technologies 7 (2021): 8. http://dx.doi.org/10.1051/epjn/2021006.

Abstract:
Since the 1980s, industrial core calculations have employed the two-step scheme based on the prior preparation of cross sections in few energy groups and in homogenized reference geometries. Spatial homogenization in fuel assembly quarters is the most frequent calculation option nowadays, relying on efficient nodal solvers using a coarse mesh. Pin-wise reaction rates are then reconstructed by dehomogenization techniques. The future trend of core calculations, however, is moving toward pin-by-pin explicit representations, where few-group cross sections are homogenized in the single pins at many physical conditions and many nuclides are selected for the simplified depletion chains. The resulting data model requires a considerable memory occupation on disk files, and the time needed to evaluate all data exceeds the limits for practical feasibility of multi-physics reactor calculations. In this work, we study the compression of pin-by-pin homogenized cross sections by the Hotelling transform in typical PWR fuel assemblies. The reconstruction of these quantities at different physical states of the assembly is then addressed by interpolation of only a few compressed coefficients, instead of interpolating each homogenized cross section separately. Savings in memory higher than 90% are observed, which result in important gains in runtime when interpolating the few-group data.
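The Hotelling transform used in this paper is the principal component analysis of the cross-section library. A minimal sketch of the idea, with placeholder shapes and data:

    import numpy as np

    # Placeholder library: rows = parametric state points, columns = all
    # pin-by-pin homogenized cross sections flattened into one long vector.
    rng = np.random.default_rng(0)
    library = rng.random((200, 5000))

    # Hotelling transform = PCA of the centered data, here via the SVD.
    mean = library.mean(axis=0)
    U, s, Vt = np.linalg.svd(library - mean, full_matrices=False)

    k = 10                        # retained components (illustrative)
    coeffs = U[:, :k] * s[:k]     # k compressed coefficients per state point
    basis = Vt[:k]                # basis vectors shared by all state points

    # Reconstruction: in the two-step scheme one interpolates the k
    # coefficients in the state parameters and then expands on the basis;
    # here we simply reconstruct at the stored points.
    approx = mean + coeffs @ basis
    rel_err = np.linalg.norm(approx - library) / np.linalg.norm(library)
    storage = (coeffs.size + basis.size + mean.size) / library.size

Interpolating k coefficients instead of thousands of individual cross sections is what produces the memory and runtime savings reported in the abstract.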
3

Nguyen, Dinh Quoc Dang, and Emiliano Masiello. "Representation of few-group homogenized cross section by multi-variate polynomial regression". EPJ Web of Conferences 302 (2024): 02002. http://dx.doi.org/10.1051/epjconf/202430202002.

Abstract:
In this paper, a representation of few-group homogenized cross sections by multi-variate polynomial regression is presented. The method is applied to the few-group assembly-homogenized cross sections of assembly 22UA from the X2VVER benchmark [1], generated by the lattice transport code APOLLO3® [2] over a Cartesian grid of parametric state points. The regression model [3, 4] allows a significantly larger number of training points than monomials, thus yielding higher accuracy than polynomial interpolation without being affected by the choice of points in the training set. Additionally, it can reduce data preparation time, because the size of the training set can be smaller than the number of points in the complete Cartesian grid while still providing a good approximation. Furthermore, its evaluation algorithm can be adapted for GPU utilization, similar to polynomial interpolation with the Newton method [5].
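A minimal sketch of multi-variate polynomial regression in the spirit of the abstract, for two state parameters and one cross section; the degree, data, and sampling are placeholder assumptions:

    import numpy as np
    from itertools import product

    rng = np.random.default_rng(1)
    X = rng.random((400, 2))    # training points: no full Cartesian grid needed
    y = 1.0 + 0.5 * X[:, 0] - 0.2 * X[:, 0] * X[:, 1] \
        + 0.01 * rng.standard_normal(400)

    degree = 3
    exps = [e for e in product(range(degree + 1), repeat=2) if sum(e) <= degree]

    def design(X):
        # One column per monomial x1**a * x2**b with a + b <= degree.
        return np.column_stack([X[:, 0] ** a * X[:, 1] ** b for a, b in exps])

    # Least squares with many more training points than monomials, which is
    # what makes the fit insensitive to the placement of the points.
    coef, *_ = np.linalg.lstsq(design(X), y, rcond=None)

    y_new = design(np.array([[0.3, 0.6]])) @ coef  # evaluation at a new point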
4

Szames, E., K. Ammar, D. Tomatis, and J. M. Martinez. "FEW-GROUP CROSS SECTIONS LIBRARY BY ACTIVE LEARNING WITH SPLINE KERNELS". EPJ Web of Conferences 247 (2021): 06012. http://dx.doi.org/10.1051/epjconf/202124706012.

Abstract:
This work deals with the representation of homogenized few-group cross-section libraries by machine learning. A Reproducing Kernel Hilbert Space (RKHS) is used with different pool active learning strategies to obtain an optimal support. Specifically, a spline kernel is used, and results are compared to multi-linear interpolation as used in industry, discussing the reduction of the library size and the overall performance. A standard PWR fuel assembly provides the use case (OECD-NEA Burn-up Credit Criticality Benchmark [1]).
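A minimal sketch of pool-based active learning with a kernel model; the first-order spline kernel k(x, x') = 1 + min(x, x') per dimension and the greedy residual-based selection rule are assumptions for illustration, not the authors' exact strategy:

    import numpy as np

    def spline_kernel(a, b):
        # First-order spline kernel on [0, 1], per dimension and multiplied.
        K = np.ones((len(a), len(b)))
        for d in range(a.shape[1]):
            K *= 1.0 + np.minimum.outer(a[:, d], b[:, d])
        return K

    def fit(Xs, ys, lam=1e-8):
        # Kernel ridge regression coefficients on the current support.
        return np.linalg.solve(spline_kernel(Xs, Xs) + lam * np.eye(len(Xs)), ys)

    rng = np.random.default_rng(2)
    pool = rng.random((2000, 3))          # candidate state points
    target = np.sin(3 * pool[:, 0]) * np.exp(-pool[:, 1]) + 0.1 * pool[:, 2]

    support = list(rng.choice(len(pool), 5, replace=False))
    for _ in range(40):                   # grow the support greedily
        alpha = fit(pool[support], target[support])
        pred = spline_kernel(pool, pool[support]) @ alpha
        support.append(int(np.argmax(np.abs(pred - target))))

The point of the exercise is that the final support (here 45 points) can be much smaller than the full Cartesian grid required by multi-linear interpolation.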
5

Nguyen, Dinh Q. D., Emiliano Masiello, and Daniele Tomatis. "MPOGen: A Python package to prepare few-group homogenized cross sections for core calculations by APOLLO3®". Nuclear Engineering and Design 417 (February 2024): 112802. http://dx.doi.org/10.1016/j.nucengdes.2023.112802.

6

Henry, Romain, Yann Périn, Kiril Velkov, and Sergei Pavlovich Nikonov. "3-D COUPLED SIMULATION OF A VVER 1000 WITH PARCS/ATHLET". EPJ Web of Conferences 247 (2021): 06015. http://dx.doi.org/10.1051/epjconf/202124706015.

Abstract:
A new OECD/NEA benchmark entitled "Reactivity compensation with diluted boron by stepwise insertion of control rod cluster" is starting. This benchmark, based on high-quality measurements performed at NPP Rostov Unit 2, aims to validate and assess high-fidelity multi-physics simulation code capabilities. The benchmark is divided into two phases: assembly-wise and pin-by-pin resolution of steady-state and transient multi-physics problems. Multi-physics simulation requires the generation of parametrized few-group cross sections. This task used to be done with deterministic (2-D) lattice codes, but in the past few years the Monte-Carlo code SERPENT has demonstrated its ability to generate accurate few-group homogenized cross sections without approximations in either the geometry or the nuclear data. Since whole-core SERPENT models for the production of such cross-section libraries would be computationally costly (and the standard 2-D approach may introduce unnecessarily large approximations), 3-D models of each assembly type in infinite radial lattice configurations have been created. These cross sections are then used to evaluate effective multiplication factors for different core configurations with the diffusion code PARCS. The results are compared with the reference SERPENT calculations. In the next step, a thermal-hydraulic model with the system code ATHLET applying an assembly-wise description of the core (i.e., one channel per fuel assembly) has been developed for coupled PARCS/ATHLET transient test calculations. This paper describes in detail the models and techniques used for the generation of the few-group parameterized cross-section libraries, the PARCS model, and the ATHLET model. Additionally, a simple exercise with the coupled code system PARCS/ATHLET is presented and analysed.
7

Galchenko, V. V., A. M. Abdulaev, and I. I. Shlapak. "USING SOFTWARE BASED ON THE MONTE CARLO METHOD FOR RECEIVING THE FEW-GROUP HOMOGENIZED MACROSCOPIC INTERACTION CROSS-SECTIONS". Odes'kyi Politechnichnyi Universytet Pratsi, no. 3(53) (2017): 37–42. http://dx.doi.org/10.15276/opu.3.53.2017.05.

8

Cao, Liangzhi, Yong Liu, Wei Shen, and Qingming He. "Development of a hybrid method to improve the sensitivity and uncertainty analysis for homogenized few-group cross sections". Journal of Nuclear Science and Technology 54, no. 7 (April 24, 2017): 769–83. http://dx.doi.org/10.1080/00223131.2017.1315973.

9

Truffinet, Olivier, Karim Ammar, Jean-Philippe Argaud, Nicolas Gérard Castaing, and Bertrand Bouriquet. "Multi-output gaussian processes for the reconstruction of homogenized cross-sections". EPJ Web of Conferences 302 (2024): 02006. http://dx.doi.org/10.1051/epjconf/202430202006.

Abstract:
Deterministic nuclear reactor simulators employing the prevalent two-step scheme often generate a substantial amount of intermediate data at the interface of their two subcodes, which can impede the overall performance of the software. The bulk of this data comprises "few-group homogenized cross-sections" or HXS, which are stored as tabulated multivariate functions and interpolated inside the core simulator. A number of mathematical tools have been studied for this interpolation purpose over the years, but few meet all the challenging requirements of neutronics computation chains: extreme accuracy, low memory footprint, fast predictions… We here present a new framework to tackle this task, based on multi-output Gaussian processes (MOGP). This machine learning model enables us to interpolate HXS's with improved accuracy compared to the current multilinear standard, using only a fraction of its training data, meaning that the amount of required precomputation is reduced by a factor of several dozen. It also necessitates an even smaller fraction of its storage requirements, preserves its reconstruction speed, and unlocks new functionalities such as adaptive sampling and facilitated uncertainty quantification. We demonstrate the efficiency of this approach on a rich test case reproducing the VERA benchmark, proving in particular its scalability to datasets of millions of HXS.
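The simplest variant of the idea, a Gaussian process with one kernel shared by all outputs so that a single linear solve serves every cross section, can be sketched as follows; kernel, length scale, and data are placeholder assumptions rather than the authors' full MOGP model:

    import numpy as np

    def rbf(a, b, ell=0.2):
        # Squared-exponential kernel shared by all outputs.
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / ell**2)

    rng = np.random.default_rng(3)
    X = rng.random((100, 3))      # training state points
    Y = rng.random((100, 50))     # 50 HXS values per point (placeholder)

    K = rbf(X, X) + 1e-8 * np.eye(len(X))   # jitter for numerical stability
    alpha = np.linalg.solve(K, Y)           # one solve, reused by all outputs

    Xnew = rng.random((5, 3))
    mean = rbf(Xnew, X) @ alpha             # posterior mean for all 50 outputs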
10

Truffinet, Olivier, Karim Ammar, Jean-Philippe Argaud, Nicolas Gérard Castaing, and Bertrand Bouriquet. "Adaptive sampling of homogenized cross-sections with multi-output gaussian processes". EPJ Web of Conferences 302 (2024): 02010. http://dx.doi.org/10.1051/epjconf/202430202010.

Abstract:
In another talk submitted to this conference, we presented an efficient new framework based on multi-output Gaussian processes (MOGP) for the interpolation of few-group homogenized cross-sections (HXS) inside deterministic core simulators. We indicated that this methodology allows a principled selection of interpolation points through adaptive sampling. We here develop this idea by trying simple sampling schemes on our problem. In particular, we compare sample scoring functions with and without integration of leave-one-out errors, obtained with single-output and multi-output Gaussian process models. We test these methods on a realistic PWR assembly with gadolinium-added fuel rods, comparing them with non-adaptive supports. Results are promising, as the sampling algorithms allow the size of the interpolation support to be reduced significantly with almost preserved accuracy. However, they exhibit phenomena of instability and stagnation, which calls for further investigation of the sampling dynamics and for trying other scoring functions for the selection of samples.
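A minimal sketch of one common adaptive-sampling loop, choosing the next point where the Gaussian process posterior variance is largest; this is a generic scoring function for illustration, not the leave-one-out-based scores compared in the paper:

    import numpy as np

    def rbf(a, b, ell=0.2):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / ell**2)

    rng = np.random.default_rng(4)
    pool = rng.random((1500, 3))    # candidate state points
    chosen = [0]                    # arbitrary seed point

    for _ in range(30):
        Xs = pool[chosen]
        K = rbf(Xs, Xs) + 1e-8 * np.eye(len(Xs))
        Ks = rbf(pool, Xs)
        # GP posterior variance at every candidate (prior variance is 1).
        var = 1.0 - np.einsum('ij,ij->i', Ks @ np.linalg.inv(K), Ks)
        var[chosen] = -np.inf       # never pick the same point twice
        chosen.append(int(np.argmax(var)))

Note that this score needs no new cross-section evaluations: points are added where the model is least certain, and the expensive lattice calculations are run only at the selected support.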

Theses / dissertations on the topic "Few-Group Homogenized Cross Sections"

1

Nguyen, Dinh Quoc Dang. "Representation of few-group homogenized cross sections by polynomials and tensor decomposition". Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASP142.

Abstract:
This thesis focuses on studying the mathematical modeling of few-group homogenized cross sections, a critical element in the two-step scheme widely used in nuclear reactor simulations. As industrial demands increasingly require finer spatial and energy meshes to improve the accuracy of core calculations, the size of the cross section library can become excessive, hampering the performance of core calculations. Therefore, it is essential to develop a representation that minimizes memory usage while still enabling efficient data interpolation. Two approaches, polynomial representation and Canonical Polyadic decomposition of tensors, are presented and applied to few-group homogenized cross section data. The data is prepared using APOLLO3 on the geometry of two assemblies in the X2 VVER-1000 benchmark. The compression rate and accuracy are evaluated and discussed for each approach to determine their applicability to the standard two-step scheme. Additionally, GPU implementations of both approaches are tested to assess the scalability of the algorithms with the number of threads involved. These implementations are encapsulated in a library called Merlin, intended for future research and industrial applications that involve these approaches. Both approaches, particularly the method of tensor decomposition, demonstrate promising results in terms of data compression and reconstruction accuracy. Integrating these methods into the standard two-step scheme would not only substantially reduce memory usage for storing cross sections, but also significantly decrease the computational effort required for interpolating cross sections during core calculations, thereby reducing the overall calculation time for industrial reactor simulations.
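For orientation, a compact sketch of a Canonical Polyadic decomposition by alternating least squares for a 3-way cross-section tensor (axes such as burnup x fuel temperature x boron); rank, grid sizes, and data are illustrative assumptions, not the thesis setup or the Merlin library:

    import numpy as np

    def kr(U, V):
        # Khatri-Rao (column-wise Kronecker) product, rows in C order.
        return np.einsum('ir,jr->ijr', U, V).reshape(-1, U.shape[1])

    def cp_als(T, rank, n_iter=200, seed=0):
        # T[i,j,k] ~ sum_r A[i,r]*B[j,r]*C[k,r], fitted one factor at a time.
        rng = np.random.default_rng(seed)
        A, B, C = (rng.standard_normal((s, rank)) for s in T.shape)
        for _ in range(n_iter):
            A = T.reshape(T.shape[0], -1) @ kr(B, C) \
                @ np.linalg.pinv((B.T @ B) * (C.T @ C))
            B = np.moveaxis(T, 1, 0).reshape(T.shape[1], -1) @ kr(A, C) \
                @ np.linalg.pinv((A.T @ A) * (C.T @ C))
            C = np.moveaxis(T, 2, 0).reshape(T.shape[2], -1) @ kr(A, B) \
                @ np.linalg.pinv((A.T @ A) * (B.T @ B))
        return A, B, C

    # Placeholder low-rank tensor over (burnup, fuel temperature, boron) grids.
    rng = np.random.default_rng(5)
    A0, B0, C0 = rng.random((40, 4)), rng.random((10, 4)), rng.random((10, 4))
    T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)

    A, B, C = cp_als(T, rank=4)
    approx = np.einsum('ir,jr,kr->ijk', A, B, C)
    print(np.linalg.norm(approx - T) / np.linalg.norm(T))  # reconstruction error
    print((A.size + B.size + C.size) / T.size)             # storage ratio, ~0.06

Storing three small factor matrices instead of the full tensor is the source of the compression discussed in the thesis.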
2

Szames, Esteban Alejandro. "Few group cross section modeling by machine learning for nuclear reactor". Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASS134.

Abstract:
Modern nuclear reactors utilize core calculations that implement a thermo-hydraulic feedback, requiring accurate homogenized few-group cross sections. These describe the interactions of neutrons with matter and are endowed with properties of smoothness and regularity stemming from the underlying physical phenomena. This thesis is devoted to the modeling of these functions by industry state-of-the-art and innovative machine learning techniques. Mathematically, the subject can be defined as the analysis of convenient mapping techniques from one multi-dimensional space to another, conceptualized as the aggregated sum of these functions, whose quantity and domain depend on the simulation objectives. Convenience is intended in terms of computational performance, such as the model's size, evaluation speed, accuracy, robustness to numerical noise, complexity, etc., always with respect to the engineering modeling objectives that specify the multi-dimensional spaces of interest. In this thesis, a standard UO₂ PWR fuel assembly is analyzed for three state variables: burnup, fuel temperature, and boron concentration. Library storage requirements are optimized while meeting the evaluation speed and accuracy targets in view of the microscopic and macroscopic cross sections and the infinite multiplication factor. Three approximation techniques are studied. The first is state-of-the-art spline interpolation using computationally convenient B-spline bases, which generate high-order local approximations; a full grid is used, as usually done in industry. The second is kernel methods, a very general machine learning framework able to pose, in a normed vector space, a large variety of regression or classification problems. Kernel functions can reproduce different function spaces using an unstructured support, which is optimized with pool active learning techniques. The approximations are found through a convex optimization process simplified by the kernel trick, and the intrinsic modular character of the method facilitates segregating the modeling phases: function space selection, application of numerical routines, and support optimization through active learning. The third is artificial neural networks, "model-free" universal approximators able to approach continuous functions to an arbitrary degree without formulating explicit relations among the variables. With adequate training settings, intrinsically parallelizable multi-output networks minimize storage requirements while offering the highest evaluation speed. These strategies are compared to each other and to multi-linear interpolation on a Cartesian grid, the industry standard in core calculations. The data set, the developed tools, and the scripts are freely available under an MIT license.
3

Cai, Li. "Condensation et homogénéisation des sections efficaces pour les codes de transport déterministes par la méthode de Monte Carlo : Application aux réacteurs à neutrons rapides de GEN IV". Thesis, Paris 11, 2014. http://www.theses.fr/2014PA112280/document.

Abstract:
In the framework of Generation IV reactor neutronics research, new core calculation tools are implemented in the code system APOLLO3® for the deterministic part. These calculation methods are based on energy-discretized nuclear data (called multi-group data, generally produced by deterministic codes as well) and must be validated and qualified against Monte-Carlo reference calculations. This thesis aims to develop an alternative technique for producing multi-group nuclear properties with a Monte-Carlo code (TRIPOLI-4®). First, after testing the existing homogenization and condensation functionalities with the better precision accessible today, some inconsistencies were revealed. Several new multi-group parameter estimators were developed and validated for the TRIPOLI-4® code with the aid of the code itself, since it can use its own multi-group constants in a core calculation. Secondly, the scattering anisotropy effect, which is necessary for handling neutron leakage, was studied. A correction technique concerning the diagonal of the first-order moment of the scattering matrix is proposed. It is named the IGSC technique and is based on the use of an approximate current introduced by Todorova. An improvement of the IGSC technique is then presented for geometries with strong heterogeneity. This improvement uses a more accurate current quantity, namely its projection on the X axis; the latter current represents the real situation better but is limited to 1D geometries. Finally, a B1 leakage model was implemented in the TRIPOLI-4® code to generate multi-group cross sections with a critical spectrum computed in the fundamental-mode approximation. This leakage model is analyzed and rigorously validated by comparison with other codes, Serpent and ECCO, as well as with an analytical case. The development work introduced in the TRIPOLI-4® code allows the production of multi-group constants which can then be used in the core calculation solver SNATCH of the PARIS code platform. The latter uses transport theory, which is indispensable for the analysis of the new generation of fast reactors. The principal conclusions are as follows. The Monte-Carlo assembly calculation code is an interesting way (avoiding the difficulties of self-shielding calculations, of anisotropy limited to a certain order of the Legendre expansion, and of exact 3D geometries) to validate deterministic codes like ECCO or APOLLO3® and to produce multi-group constants for deterministic or Monte-Carlo multi-group calculation codes. The results obtained so far with the multi-group constants calculated by the TRIPOLI-4® code are comparable with those produced from ECCO, but have not yet shown remarkable advantages.
4

Kim, Myung Hyun. "The use of bilinearly weighted cross sections for few-group transient analysis". Thesis, Massachusetts Institute of Technology, 1988. http://hdl.handle.net/1721.1/14375.

5

Botes, Danniëll. "Few group cross section representation based on sparse grid methods / Danniëll Botes". Thesis, North-West University, 2012. http://hdl.handle.net/10394/8845.

Abstract:
This thesis addresses the problem of representing few group, homogenised neutron cross sections as a function of state parameters (e.g. burn-up, fuel and moderator temperature, etc.) that describe the conditions in the reactor. The problem is multi-dimensional and the cross section samples, required for building the representation, are the result of expensive transport calculations. At the same time, practical applications require high accuracy. The representation method must therefore be efficient in terms of the number of samples needed for constructing the representation, storage requirements and cross section reconstruction time. Sparse grid methods are proposed for constructing such an efficient representation. Approximation through quasi-regression as well as polynomial interpolation, both based on sparse grids, were investigated. These methods have built-in error estimation capabilities and methods for optimising the representation, and scale well with the number of state parameters. An anisotropic sparse grid integrator based on Clenshaw-Curtis quadrature was implemented, verified and coupled to a pre-existing cross section representation system. Some ways to improve the integrator’s performance were also explored. The sparse grid methods were used to construct cross section representations for various Light Water Reactor fuel assemblies. These reactors have different operating conditions, enrichments and state parameters and therefore pose different challenges to a representation method. Additionally, an example where the cross sections have a different group structure, and were calculated using a different transport code, was used to test the representation method. The built-in error measures were tested on independent, uniformly distributed, quasi-random sample points. In all the cases studied, interpolation proved to be more accurate than approximation for the same number of samples. The primary source of error was found to be the Xenon transient at the beginning of an element’s life (BOL). To address this, the domain was split along the burn-up dimension into “start-up” and “operating” representations. As an alternative, the Xenon concentration was set to its equilibrium value for the whole burn-up range. The representations were also improved by applying anisotropic sampling. It was concluded that interpolation on a sparse grid shows promise as a method for building a cross section representation of sufficient accuracy to be used for practical reactor calculations with a reasonable number of samples.
Thesis (MSc Engineering Sciences (Nuclear Engineering))--North-West University, Potchefstroom Campus, 2013.
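For orientation, a small sketch of a Smolyak sparse grid built from nested Clenshaw-Curtis nodes, which is the kind of point set the thesis uses for sampling; the dimension and level below are arbitrary:

    import numpy as np
    from itertools import product

    def cc_nodes(level):
        # Nested Clenshaw-Curtis nodes on [0, 1]: one point at level 0,
        # 2**level + 1 points afterwards.
        if level == 0:
            return np.array([0.5])
        n = 2 ** level + 1
        return 0.5 * (1.0 - np.cos(np.pi * np.arange(n) / (n - 1)))

    def sparse_grid(dim, level):
        # Smolyak construction: union of small tensor grids whose levels sum
        # to at most 'level', instead of one full tensor-product grid.
        pts = set()
        for lv in product(range(level + 1), repeat=dim):
            if sum(lv) <= level:
                for x in product(*(cc_nodes(l) for l in lv)):
                    pts.add(tuple(round(c, 12) for c in x))  # merge nested nodes
        return np.array(sorted(pts))

    pts = sparse_grid(dim=4, level=4)     # e.g. 4 state parameters
    full = (2 ** 4 + 1) ** 4              # matching full tensor grid
    print(len(pts), full)                 # far fewer transport calculations

Each grid point corresponds to one expensive lattice transport calculation, so the gap between the two printed numbers is precisely the saving the thesis is after.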

Book chapters on the topic "Few-Group Homogenized Cross Sections"

1

Wang, Weixiang, WenPei Feng, KeFan Zhang, Guangliang Yang, Tao Ding, and Hongli Chen. "A Moose-Based Neutron Diffusion Code with Application to a LMFR Benchmark". In Springer Proceedings in Physics, 490–502. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-1023-6_43.

Abstract:
MOOSE (Multiphysics Object-Oriented Simulation Environment) is a powerful finite element multi-physics coupling framework, whose object-oriented, extensible system is conducive to the development of various simulation tools. In this work, a full-core MOOSE-based neutron diffusion application is developed, and the 3D PWR benchmark 3D-IAEA with given group constants is used for code verification. The MOOSE-based neutron diffusion application is then applied to the calculation of a Sodium-cooled Fast Reactor (SFR) benchmark, together with research on homogenized few-group constant generation based on the Monte Carlo method. The calculation adopts a 33-group cross section set generated using the Monte Carlo code OpenMC. Considering the long neutron free path and the strong global neutron spectrum coupling of liquid metal cooled reactors (LMFR), a full-core homogeneous model is used in OpenMC to generate the homogenized few-group constants. In addition, a transport correction is used in the cross section generation, considering the prominent anisotropic scattering of fast reactors. The calculated results, including the effective multiplication factor (keff) and assembly power distributions, are in good agreement with the reference values and with the OpenMC results, which proves the accuracy of the neutron diffusion application and shows that the Monte Carlo method can be applied to the generation of homogenized few-group constants for LMFR analysis.
2

Qin, Shuai, Qingming He, Jiahe Bai, Wenchang Dong, Liangzhi Cao, and Hongchun Wu. "Group Constants Generation Based on NECP-MCX Monte Carlo Code". In Springer Proceedings in Physics, 86–97. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-1023-6_9.

Abstract:
The reliability of few-group constants generated by lattice physics calculations is significant for the accuracy of the conventional two-step method in neutronics calculations. The deterministic method is preferred for lattice calculations due to its efficiency. However, it is difficult for the deterministic method to treat the resonance self-shielding effect accurately and to handle complex geometries. Compared to the deterministic method, the Monte Carlo method is characterized by the use of continuous-energy cross sections and a powerful geometric modeling capability. Therefore, the Monte Carlo particle transport code NECP-MCX is extended in this study to generate assembly-homogenized few-group constants. The cumulative migration method is adopted to generate accurate diffusion coefficients, and the leakage correction is performed using the homogeneous fundamental mode approximation. For the verification of the generated few-group constants, a code sequence named MCX-SPARK is built from NECP-MCX and the core analysis code SPARK to perform the two-step calculation. The physics start-up test of the HPR1000 reactor is simulated using the MCX-SPARK sequence. The results from MCX-SPARK agree well with the results from the design report and the deterministic two-step code Bamboo-C. It is concluded that NECP-MCX has the ability to generate accurate few-group constants.
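The few-group constants discussed here are, in standard notation, flux-volume weighted averages of the continuous-energy data; for a reaction x, group g with bounds (E_g, E_{g-1}), and homogenization region V:

    \Sigma_{x,g}^{\mathrm{hom}}
      = \frac{\int_V \int_{E_g}^{E_{g-1}} \Sigma_x(\mathbf{r},E)\,
              \phi(\mathbf{r},E)\,\mathrm{d}E\,\mathrm{d}V}
             {\int_V \int_{E_g}^{E_{g-1}} \phi(\mathbf{r},E)\,\mathrm{d}E\,\mathrm{d}V}

so that the region- and group-integrated reaction rates of the reference (here Monte Carlo) solution are preserved by construction.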

Conference papers on the topic "Few-Group Homogenized Cross Sections"

1

Bokov, P. M., D. Botes, and Kostadin Ivanov. "Hierarchical Interpolation of Homogenized Few-Group Neutron Cross-Sections on Samples with Uncorrelated Uncertainty". In International Conference on Physics of Reactors 2022. Illinois: American Nuclear Society, 2022. http://dx.doi.org/10.13182/physor22-37615.

2

Hu, Tianliang, Liangzhi Cao, Hongchun Wu, and Kun Zhuang. "Code Development for the Neutronics/Thermal-Hydraulics Coupling Transient Analysis of Molten Salt Reactors". In 2017 25th International Conference on Nuclear Engineering. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/icone25-67316.

Abstract:
A code system has been developed in this paper for dynamics simulations of MSRs. The homogenized cross section data library is generated using the continuous-energy Monte-Carlo code OpenMC, which provides significant modeling flexibility compared with traditional deterministic lattice transport codes. The few-group cross sections generated by OpenMC are provided to TANSY and TANSY_K, which are based on OpenFOAM, to perform steady-state full-core coupled simulations and dynamics simulations. For verification and application of the code sequence, a simulation of a representative molten salt reactor core, MOSART, has been performed. For further study of the characteristics of MSRs, several transients, such as the cold-slug transient, the unprotected loss-of-flow transient, and the overcooling transient, have been analyzed. The numerical results indicate that the TANSY and TANSY_K codes, with the cross section library generated by OpenMC, have the capability for dynamics analysis of MSRs.
3

Zhang, Hongbo, Chuntao Tang, Weiyan Yang, Guangwen Bi, and Bo Yang. "Development and Verification of the PWR Lattice Code PANDA". In 2017 25th International Conference on Nuclear Engineering. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/icone25-66573.

Abstract:
A lattice code generates homogenized few-group cross sections for the core neutronics code and is an important component of the nuclear design code system. The development and improvement of lattice codes are therefore significant topics in reactor physics. The PANDA code is a PWR lattice code developed by the Shanghai Nuclear Engineering Research and Design Institute (SNERDI). It starts from a 70-group library and performs the resonance calculation based on the Spatially Dependent Dancoff Method (SDDM). The 2D heterogeneous transport calculation is performed without any group collapse or cell homogenization by MOC with two-level Coarse Mesh Finite Difference (CMFD) acceleration. Matrix exponential methods are used to solve the Bateman depletion equation. Based on these methodologies, the PANDA code has been developed. Verifications on different levels preliminarily demonstrate the capability of the PANDA code.
4

Ratti, Luca, Guido Mazzini, Marek Ruščák, and Valerio Giusti. "Neutronic Analysis for VVER-440 Type Reactor Using PARCS Code". In 2018 26th International Conference on Nuclear Engineering. American Society of Mechanical Engineers, 2018. http://dx.doi.org/10.1115/icone26-82607.

Abstract:
The National Radiation Protection Institute (SURO) of the Czech Republic provides technical support to the Czech State Office for Nuclear Safety, supplying safety analyses and reviews of the technical documentation for Nuclear Power Plants (NPPs). To this end, several computational models created at SURO with different codes serve as tools to simulate and investigate design-basis and beyond-design-basis accident scenarios. This paper focuses on the creation of SCALE and PARCS neutronic models for a proper analysis of the VVER-440 reactor. In particular, SCALE models of the VVER-440 fuel assemblies have been created in order to produce the collapsed and homogenized cross sections necessary for the study of the whole VVER-440 reactor core with PARCS. A sensitivity study of the suitable energy threshold to be adopted for the preparation with SCALE of collapsed two-energy-group homogenized cross sections is also discussed. Finally, the results obtained with the PARCS core model are compared with those reported in the VVER-440 Final Safety Report.
5

Nie, Jingyu, Binqian Li, Yingwei Wu, Jing Zhang, Guoliang Zhang, Qisen Ren, Yanan He, and Guanghui Su. "Thermo-Neutronics Coupled Simulation of a Heat Pipe Reactor Based on OpenMC/COMSOL". In 2024 31st International Conference on Nuclear Engineering. American Society of Mechanical Engineers, 2024. http://dx.doi.org/10.1115/icone31-135246.

Abstract:
As an advanced small nuclear reactor, the heat pipe reactor possesses several advantages, including high energy density, long operational lifetime, compact size, and strong adaptability to various environments, making it an optimal choice for specialized energy needs in future applications such as the deep-sea and deep-space domains. In this study, we developed a code system using OpenMC/COMSOL for neutronic and thermodynamic simulations. The continuous-energy Monte Carlo code OpenMC was employed to generate homogenized cross-section databases, offering significant modeling flexibility compared to traditional deterministic lattice transport codes. The multi-group cross sections generated by OpenMC were used in COMSOL for the coupled neutronic and thermodynamic simulations of the entire core. To validate the OpenMC/COMSOL code system, benchmark problems for pressurized water reactors were computed, and the results of the "two-step" scheme for neutron physics were compared with full-core Monte Carlo results. Furthermore, to investigate the applicability of thermal-hydraulic coupling in the heat pipe reactor neutronics model, typical heat pipe reactor assembly models were established and verified for various numbers of energy groups and homogenized regions. The results demonstrated good agreement for multiplication factors and power distributions, indicating that the cross-section library generated by OpenMC enables steady-state analysis and core design.
6

Ahmed, Rizwan, Gyunyoung Heo, Dong-Keun Cho, and Jongwon Choi. "Characterization of Radioactive Waste From Side Structural Components of a CANDU Reactor for Decommissioning Applications in Korea". In ASME 2010 13th International Conference on Environmental Remediation and Radioactive Waste Management. ASMEDC, 2010. http://dx.doi.org/10.1115/icem2010-40201.

Abstract:
Reactor core components and structural materials of nuclear power plants to be decommissioned have been irradiated by neutrons of various intensities and spectra. This long-term irradiation results in the production of a large number of radioactive isotopes that serve as a source of radioactivity for thousands of years to come. Decommissioning of a nuclear reactor is a costly program comprising dismantling, demolition of structures, and waste classification for disposal. The estimation of radionuclides and radiation levels forms an essential part of the whole decommissioning program, as it can help establish guidelines for waste classification, dismantling, and demolition activities. The ORIGEN2 code has long been in use for computing radionuclide concentrations in reactor cores and near-core materials for various burnup-decay cycles, using one-group collapsed cross sections. Since ORIGEN2 assumes a constant flux and constant nuclide capture cross sections in all regions of the core, the uncertainty in its results can increase as the region of interest moves away from the core. This uncertainty can be removed by using a Monte Carlo code, such as MCNP, for correct calculations of the flux and capture cross sections inside the reactor core and in far-core regions. MCNP has a greater capability to model reactor problems in a realistic way, incorporating geometrical, compositional, and spectral information. In this paper the classification of radioactive waste from the side structural components of a CANDU reactor is presented. An MCNP model of the full core was established because of the asymmetric structure of the reactor. Side structural components of total length 240 cm and radius 16.122 cm were modeled as twelve (12) homogenized cells of 20 cm length each along the axial direction. The neutron flux and one-group collapsed cross sections were calculated by MCNP simulation for each cell, and those results were then applied in an ORIGEN2 simulation to estimate the nuclide inventory in the wastes. After retrieving the radiation levels of the side structural components in and out of the core, the radioactive wastes were classified according to international waste classification standards. The wastes from the first and second cells of the side structural components were found to exhibit the characteristics of Class C and Class B waste, respectively, while the rest of the waste was found to have activity levels corresponding to Class A radioactive waste. The waste is therefore suitable for land disposal in accordance with international standards of waste classification and disposal.
7

Rohde, U., S. Mittag, U. Grundmann, P. Petkov, and J. Hádek. "Application of a Step-Wise Verification and Validation Procedure to the 3D Neutron Kinetics Code DYN3D Within the European NURESIM Project". In 17th International Conference on Nuclear Engineering. ASMEDC, 2009. http://dx.doi.org/10.1115/icone17-75446.

Abstract:
A generic strategy for benchmarking core physics codes was elaborated within the European NURESIM code platform development. In this paper, the application of this step-wise procedure to benchmarking the 3D neutron kinetics code DYN3D for applications to VVER-type reactors is described. Numerical and experimental benchmark problems were considered for code verification and validation. Examples of these benchmarks, including the benchmark set-up and results obtained with DYN3D in comparison with other codes, are given. First, mathematical problems with given cross sections are used for the verification of the mathematical methods applied, e.g., in nodal codes against finite difference solutions. Discretization errors were quantified. After minimization of numerical errors, modelling errors have to be considered. Diffusion approximation and homogenization errors are due to simplified physical approaches and can be estimated by comparing diffusion solutions with more accurate Monte Carlo or deterministic transport solutions. Methods to reduce these errors are outlined. A series of 2D whole-core benchmarks for different core loadings and operational conditions of VVER-1000 reactors was defined for this purpose. Reference transport solutions were calculated by the MARIKO and APOLLO codes based on the Method of Characteristics. Homogenized two-group and few-group diffusion parameters were derived from the reference solutions and used as cross section data for the nodal diffusion code DYN3D. The DYN3D solutions were compared to the reference solution. It was shown that the homogenization error can be significantly reduced by using Assembly Discontinuity Factors (ADF) and Reflector Discontinuity Factors (RDF), which are obtained from the transport solution by applying equivalence theory. A study using the multi-group version of DYN3D has shown that increasing the number of groups in the considered cases has only a small effect in comparison with the homogenization error. After reducing modelling errors by choosing appropriate physical approximations, the code has to be validated against reality. Experimental problems are used for code validation. The experimental data for VVER reactors used for benchmarking the DYN3D code within NURESIM are power distribution measurements at the full-size (VVER-1000) experimental facility V-1000, which have been well documented within the EC project VALCO, and kinetic experiments at the LR-0 zero-power reactor at NRI Řež. The DYN3D code, being one of the NURESIM platform codes, has proved to be an effective tool for steady-state and kinetics core calculations. The high accuracy of the code is based on the advanced nodal method HEXNEM2, the multi-group approach, the application of discontinuity factors, and intra-nodal flux reconstruction.
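For reference, the discontinuity factors mentioned above are defined in equivalence theory as the ratio of surface-averaged fluxes on each face of the homogenized node,

    f_g = \frac{\bar{\phi}_g^{\mathrm{het,surf}}}{\bar{\phi}_g^{\mathrm{hom,surf}}}

where the numerator comes from the heterogeneous transport solution and the denominator from the homogeneous nodal solution; the nodal flux is then allowed to be discontinuous across assembly interfaces in the ratio of the two adjacent factors, which restores the reference surface currents.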
8

Yuan, Yuan, Guoming Liu, and Peng Zhang. "Verification of the RMC-SaraGR Nuclear Design Code System Based on the HTTR Benchmark". In 2024 31st International Conference on Nuclear Engineering. American Society of Mechanical Engineers, 2024. http://dx.doi.org/10.1115/icone31-135368.

Abstract:
In order to perform detailed nuclear analyses of small modular prismatic HTGRs, the RMC-SaraGR nuclear design code system has been developed. The verification of the code system using the HTTR benchmark is reported in this paper. Detailed HTTR models have been established with the continuous-energy Monte Carlo code RMC to provide the reference solutions and the 25-group cross sections, which are further corrected by super homogenization (SPH) factors before being used for the homogeneous core transport calculations by SaraGR or the multi-group RMC (RMC-MG). Various core configurations, including the fully-loaded core with and without control rods (CRs) inserted and annular cores with different numbers of fuel columns loaded, have been modeled. Compared with the benchmark data, the reference solutions provided by RMC show a 100 to 500 pcm overestimation of keff for the fully-loaded cores, which may be attributed to nuclear data uncertainty and model bias. When the CR block is homogenized as a whole region, the calculation results from RMC-MG and SaraGR agree well, but both show a significant keff overestimation of more than 1000 pcm compared with the RMC reference solution for the fully-loaded core with all CRs withdrawn, which is likely related to the strong neutron streaming effect through the void holes of the CR blocks. Therefore, a local mesh refinement is imposed on the CR blocks for an explicit modeling of the CR holes, which reduces the keff discrepancy to less than 500 pcm. Most results agree well with the reference solutions and are within the uncertainty of the benchmark values, indicating that the RMC-SaraGR code system is a feasible tool for the nuclear analysis of prismatic HTGRs.
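For reference, the SPH factors mentioned here are defined per homogenized region i and group g as

    \mu_{i,g} = \frac{\bar{\phi}_{i,g}^{\mathrm{ref}}}{\bar{\phi}_{i,g}^{\mathrm{hom}}},
    \qquad
    \Sigma_{i,g}^{\mathrm{SPH}} = \mu_{i,g}\,\Sigma_{i,g}

and are computed iteratively, so that the low-order homogeneous calculation reproduces the region- and group-wise reaction rates of the reference transport solution.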
9

Yang, Wankui, Baoxin Yuan, Songbao Zhang, Haibing Guo, Yaoguang Liu, and Li Deng. "A Neutron Transport Calculation Method for Deep Penetration and its Preliminary Verification". In 2018 26th International Conference on Nuclear Engineering. American Society of Mechanical Engineers, 2018. http://dx.doi.org/10.1115/icone26-81709.

Abstract:
Deep penetration problems exist widely in reactor applications, for example in SPRR-300 (Swimming Pool Research Reactor 300), a light-water-moderated, enriched-uranium-fueled research reactor in China. Deterministic transport theory is intrinsically suitable for deep penetration, but some problems arise when it is applied to the SPRR-300: the reactor core is complicated to describe geometrically in deterministic theory codes. The Monte Carlo method has advantages in complex geometry modeling and uses continuous-energy cross sections, which are independent of specific reactor types and research objectives, but it is usually difficult to converge well enough for deep penetration problems, even with the numerous variance reduction techniques available. Based on the advantages and disadvantages of the Monte Carlo and deterministic methods, we propose a coupled neutron transport calculation method for deep penetration that combines the advantages of the two. First, we use a Monte Carlo code for fine modeling and the whole-reactor-core calculation; the domestically developed Monte Carlo code JMCT performs the neutron transport calculation. Homogenized group constants in each mesh are then calculated from the JMCT output by a self-developed script. Afterwards, we perform the whole-reactor calculation with the deterministic theory code TORT, which directly uses the group constants generated by the Monte Carlo code. Finally, the deep penetration results are obtained from the TORT output. Verification is carried out by comparing the group constants for a benchmark problem and by comparing keff calculated by this method with the continuous-energy Monte Carlo result. The benchmark calculation is conducted with the OECD/NEA slab benchmark problem. The comparison shows that the group constants generated by this study are in good agreement with results from published references. The group constants are then applied in the 3-dimensional discrete ordinates deterministic transport code TORT, but keff calculated by TORT is a little lower than that calculated by the Monte Carlo code JMCT. To minimize other influence factors, different Sn/Pn orders and different mesh sizes have been tried in TORT; unfortunately, the keff difference between the two methods remains. Even though the keff results in this benchmark are lower than the keff calculated by the continuous-energy Monte Carlo method, the benchmark results show that all the group constants generated by this method are in good agreement with existing references. It can therefore be expected that, after further verification and validation, this coupled method can be effectively applied to deep penetration problems in such research reactors.
10

Jevremovic, Tatjana, Mathieu Hursin, Nader Satvat, John Hopkins, Shanjie Xiao, and Godfree Gert. "Performance, Accuracy and Efficiency Evaluation of a Three-Dimensional Whole-Core Neutron Transport Code AGENT". In 14th International Conference on Nuclear Engineering. ASMEDC, 2006. http://dx.doi.org/10.1115/icone14-89561.

Abstract:
AGENT (Arbitrary GEometry Neutron Transport), an open-architecture reactor modeling tool, is a deterministic neutron transport code for two- or three-dimensional heterogeneous neutronic design and analysis of whole reactor cores, regardless of geometry type and material configuration. The AGENT neutron transport methodology is applicable to all generations of nuclear power and research reactors. It combines three theories: (1) the theory of R-functions, used to generate real three-dimensional whole cores of square, hexagonal, or triangular cross sections; (2) the planar method of characteristics, used to solve isotropic neutron transport in non-homogenized 2D reactor slices; and (3) one-dimensional diffusion theory, used to couple the planar and axial neutron tracks through the transverse leakage and angular mesh-wise flux values. The R-function geometrical module allows a sequential building of the layers of geometry and automatic sub-meshing based on the network of domain functions. The simplicity of geometry description and the selection of parameters for accurate treatment of neutron propagation are achieved through the Boolean algebra of hierarchically organized simple primitives composed into complex domains (both being represented by corresponding domain functions). The accuracy is comparable to Monte Carlo codes and is obtained by following neutron propagation through real geometrical domains, without homogenization or simplifications. The efficiency is maintained through a set of acceleration techniques introduced at all important calculation levels. The flux solution incorporates power iteration with two different acceleration techniques: Coarse Mesh Rebalancing (CMR) and Coarse Mesh Finite Difference (CMFD). The stand-alone, originally developed graphical user interface of the AGENT code design environment allows the user to view and verify input data by displaying the geometry and material distribution. The user can also view output data such as three-dimensional maps of the energy-dependent mesh-wise scalar flux, reaction rates, and power peaking factors. The AGENT code is in the process of extensive and rigorous testing for various reactor types through the evaluation of its performance (ability to model any reactor geometry type), accuracy (in comparison with Monte Carlo results, other deterministic solutions, or experimental data), and efficiency (computational speed, directly determined by the mathematical and numerical solution of the iterative approach to flux convergence). This paper outlines the main aspects of the theories unified in the AGENT code formalism and demonstrates the code's performance, accuracy, and efficiency using a few representative examples. The AGENT code is a main part of the so-called virtual reactor system developed for numerical simulations of research reactors. A few illustrative examples of the web interface are briefly outlined.
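The planar method of characteristics named in point (2) solves, along each track segment of length s crossing a flat-source region, the characteristic form of the transport equation,

    \frac{\mathrm{d}\psi}{\mathrm{d}s} + \Sigma_t\,\psi(s) = Q,
    \qquad
    \psi_{\mathrm{out}} = \psi_{\mathrm{in}}\,e^{-\Sigma_t s}
      + \frac{Q}{\Sigma_t}\left(1 - e^{-\Sigma_t s}\right)

with the angular flux swept segment by segment over all tracks; region scalar fluxes and the fission source are then updated in the power iteration mentioned above.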