Theses / dissertations on the topic "Accélérateurs de particules – Simulation, Méthodes de"
Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles
See the 49 best theses / dissertations for studies on the topic "Accélérateurs de particules – Simulation, Méthodes de".
You can also download the full text of the scientific publication as a .pdf and read its abstract online, when available in the metadata.
Browse theses / dissertations from many scientific fields and compile an accurate bibliography.
Goutierre, Emmanuel. "Machine learning-based particle accelerator modeling". Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASG106.
Particle accelerators rely on high-precision simulations to optimize beam dynamics. These simulations are computationally expensive, making real-time analysis impractical. This thesis seeks to address this limitation by exploring the potential of machine learning to develop surrogate models for particle accelerator simulations. The focus is on ThomX, a compact Compton source, where two surrogate models are introduced: LinacNet and Implicit Neural ODE (INODE). These models are trained on a comprehensive database developed in this thesis that captures a wide range of operating conditions to ensure robustness and generalizability. LinacNet provides a comprehensive representation of the particle cloud by predicting all coordinates of the macro-particles, rather than focusing solely on beam observables. This detailed modeling, coupled with a sequential approach that accounts for cumulative particle dynamics throughout the accelerator, ensures consistency and enhances model interpretability. INODE, based on the Neural Ordinary Differential Equation (NODE) framework, seeks to learn the implicit governing dynamics of particle systems without the need for explicit ODE solving during training. Unlike traditional NODEs, which struggle with discontinuities, INODE is theoretically designed to handle them more effectively. Together, LinacNet and INODE serve as surrogate models for ThomX, demonstrating their ability to approximate particle dynamics. This work lays the groundwork for developing and improving the reliability of machine learning-based models in accelerator physics.
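The NODE idea at the heart of this abstract can be illustrated with a minimal sketch: a small neural network defines a vector field f(z, t), and the forward pass integrates dz/dt = f(z, t) numerically. Everything below (layer sizes, the explicit Euler integrator, random weights, the 6D phase-space state) is an illustrative assumption, not code from the thesis.

```python
import numpy as np

def mlp_vector_field(z, t, W1, b1, W2, b2):
    """Small MLP approximating the dynamics f(z, t); here the field is
    autonomous, so t is accepted but unused."""
    h = np.tanh(W1 @ z + b1)
    return W2 @ h + b2

def node_forward(z0, t0, t1, params, n_steps=100):
    """A NODE forward pass: integrate dz/dt = f(z, t) with explicit Euler."""
    z, t = z0.copy(), t0
    dt = (t1 - t0) / n_steps
    for _ in range(n_steps):
        z = z + dt * mlp_vector_field(z, t, *params)
        t += dt
    return z

rng = np.random.default_rng(0)
dim, hidden = 6, 16          # 6 phase-space coordinates of one macro-particle
params = (rng.normal(size=(hidden, dim)) * 0.1, np.zeros(hidden),
          rng.normal(size=(dim, hidden)) * 0.1, np.zeros(dim))
z0 = rng.normal(size=dim)
z1 = node_forward(z0, 0.0, 1.0, params)
print(z1.shape)  # (6,)
```

In a real surrogate, the weights would be trained so that the integrated state matches simulated beam data; here they are random, so only the mechanics of the forward pass are shown.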
Guyot, Julien. "Particle acceleration in colliding laser-produced plasmas". Thesis, Sorbonne université, 2019. http://www.theses.fr/2019SORUS616.
Energetic charged particles are ubiquitous in the Universe and are accelerated by galactic and extragalactic sources. Understanding the origin of these "cosmic rays" is crucial in astrophysics, and within the framework of high-energy-density laboratory astrophysics we have developed a novel platform on the LULI laser facilities to study particle acceleration in the laboratory. In the experiments, the collision of two laser-produced counter-propagating plasmas generates a distribution of non-thermal particles with energies up to 1 MeV. The aim of this work is to provide a theoretical framework to understand their origin. Magneto-hydrodynamic simulations with test particles show that the plasma collision leads to the growth of bubble and spike structures driven by the magnetic Rayleigh-Taylor instability and the generation of strong electric fields. We find that particles are accelerated to energies up to a few hundred keV in less than 20 ns, by repeated interactions with these growing magnetic Rayleigh-Taylor perturbations. The simulations and a stochastic acceleration model recover very well the experimentally measured non-thermal energy spectrum. In conclusion, we have identified in the laboratory a new particle acceleration mechanism that relies on the growth of the magnetic Rayleigh-Taylor instability to stochastically energize particles. This instability is very common in astrophysical plasmas, with examples including supernova remnants and coronal mass ejections, and we suggest that it may contribute to the energization of particles in these systems.
Jenkinson, William. "Simulation de la mécanique mésoscopique des aliments par méthodes de particules lagrangiennes". Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASB047.
The role of mesoscopic mechanics in food processing and design is not well understood, particularly for oral processing and texture perception. Despite the recognized importance of soft matter, the food science community has struggled to bridge the gap between micro-, meso-, and macro-scale behaviours using simulations. This thesis addresses this challenge by focusing on mechanical simulations, excluding thermal, chemical and physicochemical effects, to explore food behaviour at the mesoscopic scale. We have developed a simulation framework within the LAMMPS environment, combining smoothed-particle hydrodynamics (SPH) implementations for liquids and elastic solids. We validated the framework across scenarios such as Couette flow and the deformation of granules in a flow. The results show the framework's effectiveness in capturing food structure dynamics and interactions with cilia and papillae, and offer new insights into texture perception and hydrodynamics. The study also highlights how granule elasticity and volume fraction impact flow properties and their eventual role in texture perception. This work focuses on mechanics while deliberately remaining flexible enough to integrate mechanical, thermal, chemical, and biological processes in future food science models. Proposed future research includes strategies to integrate more physics and scales, and efforts to improve the accessibility of simulation tools for engineers, advancing practical applications in food science.
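The SPH machinery behind such a framework can be sketched minimally: particle density is recovered by a kernel-weighted sum over neighbours. The cubic spline kernel below is a textbook SPH kernel; the lattice test case and smoothing length are illustrative assumptions, not the thesis's LAMMPS implementation.

```python
import numpy as np

def cubic_spline_kernel(r, h):
    """Standard 2D cubic spline SPH kernel W(r, h), support radius 2h."""
    q = r / h
    sigma = 10.0 / (7.0 * np.pi * h**2)   # 2D normalization constant
    w = np.where(q < 1.0, 1 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2 - q)**3, 0.0))
    return sigma * w

def sph_density(positions, masses, h):
    """Summation density: rho_i = sum_j m_j W(|x_i - x_j|, h)."""
    diff = positions[:, None, :] - positions[None, :, :]
    r = np.linalg.norm(diff, axis=-1)
    return (masses[None, :] * cubic_spline_kernel(r, h)).sum(axis=1)

# Uniform lattice of particles: recovered density should be near m / dx^2.
dx = 0.1
xs = np.arange(0, 1, dx)
pos = np.array([[x, y] for x in xs for y in xs])
m = np.full(len(pos), 1.0 * dx * dx)      # masses chosen for target density 1.0
rho = sph_density(pos, m, h=1.3 * dx)
print(rho[55])   # interior particle at (0.5, 0.5): close to 1.0
```

Particles near the domain edge see a truncated kernel support, so their summation density is deficient; production SPH codes handle this with boundary particles or corrected kernels.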
DaCosta-Cloix, Olivier. "Emission cyclotronique ionique dans les tokamaks". Palaiseau, Ecole polytechnique, 1995. http://www.theses.fr/1995EPXX0030.
Valdenaire, Simon. "Mise en place et utilisation des faisceaux FFF en radiothérapie : radiobiologie, caractérisation physique, contrôles qualité, modélisation et planification de traitement". Thesis, Aix-Marseille, 2017. http://www.theses.fr/2017AIXM0037/document.
In medical linear electron accelerators, photon beam profiles are homogenized using flattening filters. Technologies have evolved and the presence of this filter is no longer necessary. Flattening filter free (FFF) beams exhibit higher dose rates, heterogeneous dose profiles, modified energy spectra and lower out-of-field dose. This PhD aimed at studying the characteristics of unflattened beams, as well as their impact in clinical use. Several subjects were thoroughly investigated: radiobiology, dosimetry, quality controls, modeling and treatment planning. In vitro experiments ensured that the high dose rate of FFF beams did not have a radiobiological impact. A wide review of the literature was conducted to corroborate these results. In order to understand thoroughly the characteristics of FFF beams, measurements were conducted using several detectors. The effect of the spectra and dose rates of unflattened beams on dose calibration was also studied. FFF beams were modeled in two TPSs. The methods, results and model parameters were compared between the available beam qualities as well as between both TPSs. Furthermore, the implementation of stereotactic treatment techniques was the occasion to investigate small-beam dosimetry. Prostate cancer cases treated with VMAT and pulmonary tumors treated with stereotactic 3D beams were also studied. The comparison of dose distributions and treatment metrics gives the advantage to FFF beams. Mastering the physical and biological aspects of flattening filter free beams allowed the IPC to start FFF treatments. Comparative studies have since resulted in a deeper understanding of the pertinent use of these beams.
Cannaméla, Claire. "Apport des méthodes probabilistes dans la simulation du comportement sous irradiation du combustible à particules". Paris 7, 2007. http://www.theses.fr/2007PA077082.
This work is devoted to the evaluation of mathematical expectations in the context of structural reliability. We seek a failure probability estimate (that we assume low), taking into account the uncertainty of influential parameters of the system. Our goal is to reach a good compromise between the accuracy of the estimate and the associated computational cost. This approach is used to estimate the failure probability of fuel particles from an HTR-type nuclear reactor. This estimate is obtained by means of costly numerical simulations. We consider different probabilistic methods to tackle the problem. First, we consider a variance-reducing Monte Carlo method: importance sampling. For the parametric case, we propose adaptive algorithms in order to build a series of probability densities that will eventually converge to the optimal importance density. We then present several estimates of the mathematical expectation based on this series of densities. Next, we consider a multi-level method using a Markov chain Monte Carlo algorithm. Finally, we turn our attention to the related problem of estimating a (non-extreme) quantile of the physical output of a large-scale numerical code. We propose a controlled stratification method. The random input parameters are sampled in specific regions obtained from a surrogate of the response. The quantile estimate is then computed from this sample.
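The importance-sampling idea described above can be sketched on a toy rare-event problem: estimating a small Gaussian tail probability with a proposal density shifted onto the failure region. The scalar model, the threshold, and the fixed (non-adaptive) proposal are simplifying assumptions for illustration; the thesis's adaptive algorithms and the HTR application are not reproduced.

```python
import math
import numpy as np

def failure_prob_is(threshold, n=100_000, seed=0):
    """Estimate p = P(X > t) for X ~ N(0,1) by importance sampling,
    with proposal N(t, 1) centered on the failure region."""
    rng = np.random.default_rng(seed)
    y = rng.normal(loc=threshold, scale=1.0, size=n)   # draws from the proposal
    # log likelihood ratio log phi(y) - log phi_t(y) for N(0,1) vs N(t,1)
    log_w = -0.5 * y**2 + 0.5 * (y - threshold)**2
    return float(np.mean((y > threshold) * np.exp(log_w)))

t = 4.0
p_true = 0.5 * math.erfc(t / math.sqrt(2))   # exact Gaussian tail probability
p_hat = failure_prob_is(t)
print(p_hat, p_true)   # both about 3.2e-5
```

A crude Monte Carlo estimate with the same budget would see only a handful of failures (p·n ≈ 3), whereas the shifted proposal places roughly half the samples in the failure region, which is the variance reduction the abstract refers to.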
Peronnard, Paul. "Méthodes et outils pour l'évaluation de la sensibilité de circuits intégrés avancés face aux radiations naturelles". Phd thesis, Université Joseph Fourier (Grenoble), 2009. http://tel.archives-ouvertes.fr/tel-00441658.
Lominé, Franck. "Écoulements de particules dans un milieu poreux". Rennes 1, 2007. https://tel.archives-ouvertes.fr/tel-00198209.
This work deals with experimental and numerical investigations of particle flow through a packing of larger spheres. We built an experimental device to study the lateral dispersion and the mean transit time of a blob of particles through a porous medium. In particular, we determined the dependence of the mean transit time on the number of particles, on particle size and on the height of the porous medium. We also characterized the dependence of the lateral dispersion coefficient on the number of particles moving in the porous structure. Then, we developed numerical simulation models based on the event-driven and soft-sphere molecular dynamics methods. These allowed us to complement the experimental study by analyzing the influence of various additional parameters. Access to the inside of the porous medium allowed a finer analysis of particle dispersion. Finally, we explored the possibility of using the spontaneous percolation phenomenon to produce a mixer. Thanks to the numerical tool, we carried out and characterized mixtures of particles of different sizes. We showed that this process proves to be a simple and effective method to obtain homogeneous mixtures of particles.
Peaucelle, Christophe. "La problématique de l'évolution des moments d'une densité de particules soumises à des forces non linéaires". Phd thesis, Grenoble INPG, 2001. http://tel.archives-ouvertes.fr/tel-00001153.
Texto completo da fonteDERRIENNIC, OURLY HELENE. "Etude et optimisation des méthodes de Monte Carlo non analogues pour la simulation des particules neutres en radio-protection". Paris, CNAM, 1999. http://www.theses.fr/1999CNAM0312.
Guyot-Delaurens, Frédérique. "Application de la méthode particulaire déterministe à la simulation du modèle cinétique de dispositifs électroniques inhomogène unidimensionnels". Palaiseau, Ecole polytechnique, 1990. http://www.theses.fr/1990EPXX0001.
Chauvin, Corine. "Influence des forces d'interactions particulaires dans la simulation lagrangienne du comportement de particules en sédimentation et écoulements turbulents". Rouen, 1991. http://www.theses.fr/1991ROUE5037.
Kong, Jian Xin. "Contribution à l'analyse numérique des méthodes de couplage particules-grille en mécanique des fluides". Habilitation à diriger des recherches, Grenoble 1, 1993. http://tel.archives-ouvertes.fr/tel-00343556.
Dorogan, Kateryna. "Schémas numériques pour la modélisation hybride des écoulements turbulents gaz-particules". Phd thesis, Aix-Marseille Université, 2012. http://tel.archives-ouvertes.fr/tel-00820978.
Noël, Franck. "Simulation numérique de la formation d'un dépôt de particules sur une surface poreuse : application à la filtration d'arrêt". Toulouse, INPT, 2006. https://hal.science/tel-04594672.
In this work, we developed numerical tools which enabled us, on the one hand, to analyze and compute the hydraulic resistance of particles deposited on a filter membrane and, on the other hand, to simulate the formation of deposits in frontal filtration when the external flow has a very small Reynolds number (Stokes flow). We distinguished two principal cases according to the ratio between the size of the particles and the size of the pores of the membrane. When this ratio is small (situations with separation of scales), we modeled the flow in the deposit with Darcy's law. When this ratio is large (absence of separation of scales), homogenized models such as Darcy's law are a poor approximation of the flow and the Stokes equations must in principle be solved.
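In the scale-separated regime described above, the hydraulic resistance of a deposit follows directly from Darcy's law, dP = mu * u * (h / k). The sketch below uses the classical Carman-Kozeny correlation for the permeability k, which is a standard closure chosen here for illustration, not necessarily the one used in the thesis; all numerical values are invented.

```python
def carman_kozeny_permeability(eps, d_p):
    """Carman-Kozeny estimate of deposit permeability (m^2) from
    porosity eps and particle diameter d_p (m)."""
    return eps**3 * d_p**2 / (180.0 * (1.0 - eps)**2)

def deposit_resistance(thickness, eps, d_p):
    """Hydraulic resistance R = h / k, so that dP = mu * u * R (Darcy's law)."""
    return thickness / carman_kozeny_permeability(eps, d_p)

mu = 1.0e-3        # water viscosity, Pa.s
u = 1.0e-5         # filtration velocity, m/s
R = deposit_resistance(thickness=1.0e-4, eps=0.4, d_p=1.0e-6)  # 100 um cake of 1 um particles
dP = mu * u * R
print(dP)          # pressure drop across the cake, about 1.0e3 Pa
```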
Fourrier, Joris. "Les accélérateurs à champ fixe et gradient alterné FFAG et leur application médicale en protonthérapie". Phd thesis, Grenoble 1, 2008. http://www.theses.fr/2008GRE10161.
Radiotherapy uses particle beams to irradiate and kill cancer tumors while sparing healthy tissues. The Bragg-peak shape of proton energy loss in matter allows a ballistic improvement of the dose deposition compared with X-rays: the irradiated volume can thus be precisely adjusted to the tumour. This thesis, in the frame of the RACCAM project, aims at the study and design of a proton therapy installation based on a fixed-field alternating-gradient (FFAG) accelerator, in order to build a spiral-sector FFAG magnet for validation. First, we present proton therapy to define the medical specifications leading to the technical specifications of a proton therapy installation. Secondly, we introduce FFAG accelerators through past and ongoing projects around the world, before developing the beam dynamics theory in the case of invariant focusing optics (scaling FFAG). We describe the modelling and simulation tools developed to study the dynamics in a spiral scaling FFAG accelerator. Then we explain the spiral optic parameter search which led to the construction of a magnet prototype. Finally, we describe the RACCAM project proton therapy installation, starting from the injector cyclotron and ending with the extraction system.
Peaucelle, Christophe. "La problématique de l'évolution des moments d'une densité de particules soumises à des forces non linéaires". Phd thesis, Grenoble INPG, 2001. http://www.theses.fr/2001INPG0095.
High-power linear accelerators are needed as drivers for several projects (spallation neutron sources, hybrid systems). This interest raises the question of the dynamics of high-intensity particle beams. Inside an intense beam, particles are subject to non-linear forces mainly due to space charge effects. In order to have lighter and more realistic tools than classical simulation methods (particle-particle interactions, particle-core model), we consider a description of the evolution of a particle density from its statistical parameters, its moments. In a first part, a detailed analysis of the moment problem is presented in a simplified but non-restrictive case. To begin with, we develop an original study based on orthogonal polynomial properties which allows us to study the moments of a one-dimensional density. We see that we obtain information about the density from only a few moments. Such an investigation is essential for a better understanding of the significance of moments. Then, we apply this description to a two-dimensional phase space, so that we can precisely estimate where particles are in this phase space. Finally, we enumerate the difficulties met and deduce the limits of this method. The second, more experimental part of this thesis presents measurements of the beam characteristics of the GENEPI accelerator, as part of a hybrid-reactor program. Moreover, we show how these measurements lead to beam calibration and to the validation of the theoretical calculations used to design GENEPI.
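The moment description above can be made concrete with the textbook rms emittance of a beam, built from just three second-order moments of the particle density. The formula is standard accelerator physics; the sampled Gaussian distribution below is invented for the example and is not data from the thesis.

```python
import numpy as np

def rms_emittance(x, xp):
    """RMS emittance from second-order moments of the particle density,
    eps = sqrt(<x^2><x'^2> - <x x'>^2), taken about the beam centroid."""
    x = x - x.mean()
    xp = xp - xp.mean()
    return np.sqrt(np.mean(x**2) * np.mean(xp**2) - np.mean(x * xp)**2)

rng = np.random.default_rng(1)
n = 100_000
x = rng.normal(0.0, 1.0e-3, size=n)                    # positions (m)
xp = 0.5 * x + rng.normal(0.0, 1.0e-3, size=n)         # correlated divergences (rad)
eps = rms_emittance(x, xp)
print(eps)   # about 1e-6 m.rad for this distribution
```

The correlation term <x x'> subtracts exactly the part of the phase-space spread due to a linear x-x' tilt, which is why only a few moments already carry useful information about the density.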
Fourrier, Joris. "Les accélérateurs à champ fixe et gradient alterné FFAG et leur application médicale en protonthérapie". Phd thesis, Université Joseph Fourier (Grenoble), 2008. http://tel.archives-ouvertes.fr/tel-00338177.
Roux, Raphaël. "Étude probabiliste de systèmes de particules en interaction : applications à la simulation moléculaire". Phd thesis, Université Paris-Est, 2010. http://tel.archives-ouvertes.fr/tel-00597479.
Zambelli, Laura. "Contraintes sur la prédiction des flux de neutrinos de T2K par les données de l'expérience de hadroproduction NA61/SHINE". Paris 7, 2013. http://www.theses.fr/2013PA077215.
T2K is a long-baseline accelerator neutrino oscillation experiment based in Japan, whose primary goal is a precise measurement of the theta13 angle of the PMNS matrix. This measurement is possible through the appearance of electron neutrinos in a muon neutrino beam, 300 km downstream of their creation point. Neutrinos are made by the decay in flight of unstable particles (pions, kaons, muons) produced by 31 GeV/c accelerated protons impinging onto a carbon target. Most of the neutrinos produced are of muon type, but a non-negligible amount of electron neutrinos is also created, which constitutes the dominant source of error for the measurement of theta13. In order to understand, and predict, this electron-neutrino contamination, a parallel hadroproduction experiment is used: NA61/SHINE at CERN reproduces the T2K beam conditions and measures the kinematics of produced hadrons thanks to two types of target: thin and replica. The measurement of K0S production is described in this thesis. This measurement, together with that of charged hadrons, is then implemented in the T2K simulation chain. The development of a simulation based on the generic tool VMC, detailed in the thesis, provides a unique framework for the simulation of the two experiments. Moreover, this tool allows tests of several hadronic models against NA61/SHINE and HARP experimental data; Fritiof-based models seem to be the most promising. All these ingredients played a key role in the first measurement of the theta13 angle and in reducing its uncertainty.
Charles, Frédérique. "Modélisation mathématique et étude numérique d'un aérosol dans un gaz raréfié. Application à la simulation du transport de particules de poussière en cas d'accident de perte de vide dans ITER". Phd thesis, École normale supérieure de Cachan - ENS Cachan, 2009. http://tel.archives-ouvertes.fr/tel-00463639.
Baraglia, Federico. "Développement d'un modèle triphasique Euler/Euler/Lagrange pour la simulation numérique des écoulements liquide-gaz chargés en particules". Electronic Thesis or Diss., Université de Toulouse (2023-....), 2024. http://www.theses.fr/2024TLSEP017.
This manuscript sums up work carried out during a thesis at the MFEE department of EDF R&D on liquid-gas flows laden with dispersed particles, under the supervision of Olivier Simonin (IMFT), Jérôme Laviéville (EDF), and Nicolas Mérigoux (EDF). The thesis aims at providing a working environment for the numerical simulation of two-phase bubbly flows, free-surface flows or flows in a mixed regime, laden with particles that can interact with the fluids present in their continuous or dispersed form. These flows can be found in industrial situations such as chemical reactors, power plants, or wastewater treatment plants, as well as in natural situations such as during a flood. The developed tool allows predictions to be made about the performance of these industrial devices or the damage caused by exceptional natural events. The developments are included in the most up-to-date version of neptune_cfd, a multi-fluid solver developed by EDF, CEA, IRSN, and Framatome, based on the standard multi-fluid method that allows the simulation of multiphase flows independently of their typology. The methods implemented are based on well-known two-phase approaches. The stochastic Lagrangian particle tracking method is adapted so that each particle can interact with all the fluids. Closures are proposed to determine the impact of each phase on the behavior of the particles. To verify certain assumptions, a new closure for the Langevin equation on the fluid velocity seen by the particle is proposed. Its behavior is compared to standard models and to the literature on simple verification cases of homogeneous isotropic turbulence and on inhomogeneous cases. The Lagrangian equations obtained are used to close an Eulerian model based on the probability density function approach.
The performance of the two three-phase models developed is established in terms of particle deposition driven by turbulence or gravity. A significant part of the thesis focuses on an issue that arose during preliminary checks: the phenomenon of air entrainment in plunging jets. Indeed, due to the nature of the solver, bubbles or dispersed droplets can detach from the free surface depending on the flow conditions. Since the quantity of these transferred structures and their characteristic size are crucial quantities driving their behavior, a new model had to be developed. Mass transfer between continuous structures and dispersed inclusions is ensured by the model that describes the evolution of resolved interfaces, which was not modified. The model for the size of the created bubbles/droplets is integrated into the evolution equation of the interfacial area, a quantity that allows tracking the diameter of the inclusions. All developed models are compared to experimental measurements. The air entrainment model is first tested without the presence of particles in various cases. A hydraulic jump case is also considered to establish the generality of the model. Then, the three-phase models are tested in various configurations: first, configurations without air entrainment to isolate the behavior of the particles, and then with air entrainment. The different cases highlighted the importance of certain models and the differences between stochastic Lagrangian and Eulerian methods.
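The link between interfacial area and inclusion diameter used in such interfacial-area transport models is, for spherical inclusions, the standard Sauter mean diameter relation d32 = 6*alpha / a_i. This is a textbook identity, shown here with invented values; the abstract does not specify the exact convention used in neptune_cfd.

```python
def sauter_diameter(void_fraction, interfacial_area):
    """Sauter mean diameter d32 = 6*alpha / a_i of spherical inclusions,
    from local void fraction alpha (-) and interfacial area density a_i (1/m)."""
    return 6.0 * void_fraction / interfacial_area

# A cloud of 1 mm bubbles at 5% void fraction carries a_i = 6*alpha/d = 300 1/m,
# so inverting the relation recovers the diameter.
alpha, a_i = 0.05, 300.0
d32 = sauter_diameter(alpha, a_i)
print(d32)   # 0.001 m
```

Transporting a_i alongside alpha is what lets a solver track how the mean bubble or droplet diameter evolves under entrainment, breakup and coalescence.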
Crestetto, Anaïs. "Optimisation de méthodes numériques pour la physique des plasmas : application aux faisceaux de particules chargées". Phd thesis, Université de Strasbourg, 2012. http://tel.archives-ouvertes.fr/tel-00735569.
Hitti, Karim. "Simulation numérique de Volumes Élémentaires Représentatifs (VERs) complexes : Génération, Résolution et Homogénéisation". Phd thesis, École Nationale Supérieure des Mines de Paris, 2011. http://pastel.archives-ouvertes.fr/pastel-00667428.
Latocha, Vladimir. "Deux problemes en transport des particules chargees intervenant dans la modelisation d'un propulseur ionique". Phd thesis, INSA de Toulouse, 2001. http://tel.archives-ouvertes.fr/tel-00002194.
Texto completo da fonteproblèmes dans le domaine du transport des particules chargées. Nous nous
intéressons à deux de ces problèmes, à savoir le transport des électrons et
le calcul du potentiel électrique.
Le transport des électrons résulte de l'influence conjuguée des champs
(électrique et magnétique) établis dans la cavité du propulseur et des
collisions des électrons (dans la cavité et avec la paroi limitant celle-ci).
Nous avons participé au développement d'un modèle SHE (Spherical Harmonics
Expansion) qui résulte d'une analyse asymptotique de l'équation de Boltzmann
munie de conditions de réflexion aux bords. Ce modèle permet d'approcher la
fonction de distribution en énergie des électrons en résolvant une
équation de diffusion dans un espace \{position, énergie\}. Plus précisément,
nous avons étendu une démarche existante au cas où les collisions en volume
(excitation, ionisation) et les collisions inélastiques à la paroi
(attachement et émission secondaire) sont prises en compte. Enfin, nous
avons écrit un code de résolution du modèle SHE, dont les résultats ont
été comparés avec ceux d'une méthode de Monte Carlo.
\vspace*{1mm}
Dans un deuxième temps, nous avons étudié le calcul du potentiel électrique.
La présence du champ magnétique impose d'écrire le courant d'électrons sous
la forme ${\cal J}=\sigma \nabla W$
où W est le potentiel électrique et le tenseur de conductivité $\sigma$
est fortement anisotrope compte tenu des grandeurs physiques en jeu dans
le SPT. Pour résoudre $\mbox{div }{\cal J}(x,y)=S(x,y)$,
nous avons implémenté une méthode de volumes finis
sur maillage cartésien permettant de résoudre ce problème elliptique
anisotrope, et nous avons vérifié qu'elle échouait lorsque le rapport
d'anisotropie devenait grand. Aussi nous avons développé une méthode de
paramétrisation, qui consiste à extrapoler la solution d'un problème
anisotrope à l'aide d'une suite de problèmes isotropes. Cette méthode a
donné des résultats encourageants pour de forts rapports d'anisotropie,
et devrait nous permettre d'atteindre des cas réels.
Zhang, Qijie. "Simulation de la matière particulaire dans la région parisienne, en particulier de l'aérosol organique". Paris 7, 2012. http://www.theses.fr/2012PA077206.
Human activities in large agglomerations ("megacities") cause large pollutant emissions, with negative effects on air quality and human health at local and regional levels. Fine particulate matter (PM) is one of the greatest concerns for health. Organic aerosol makes up a large part of fine PM, but there are still large gaps in the knowledge of its formation pathways and there is considerable uncertainty in its 3D modeling. In this thesis, PM₂.₅ simulations with the regional chemistry-transport model CHIMERE were first evaluated with measurement data collected in Paris in springtime 2007. The model results show good performance in simulating the occurrence of peaks, especially for inorganic aerosols, which mainly originate from long-range transport from Northeastern and Central Europe. Modeled primary organic aerosol (POA) is overestimated by a factor of two when considered as non-volatile, while secondary organic aerosol (SOA) is underestimated by a factor of more than two. In order to improve the model performance for organic aerosol simulation, the volatility basis set (VBS) approach, which formalizes new knowledge on POA volatility and on SOA chemical aging, is implemented into CHIMERE. Model simulations are evaluated with ground-based and airborne observations obtained during two intensive field campaigns performed in the Paris agglomeration in summer 2009 and winter 2009/2010 in the frame of the European MEGAPOLI project. The simulation of organic aerosol is improved when taking into account POA volatility and multistep oxidation of semivolatile VOC during the summer campaign. Advection of continental air masses to the Paris agglomeration with enhanced SOA levels, either of anthropogenic or biogenic origin, is well reproduced by the model. SOA build-up in the plume is overestimated by a factor of two when normalized to photochemical ozone production, but this factor is within the uncertainty of the VBS approach.
During the winter campaign, SOA formation is still underestimated. These results clearly represent progress in the modeling of organic aerosol in and around a large urban agglomeration. The model was used to estimate the sources contributing to organic aerosol in the agglomeration and in the plume for summer 2009. Within the agglomeration, advection of biogenic, anthropogenic and background SOA from outside was dominant, while locally emitted POA accounts for about a quarter of total OA. In the plume, anthropogenic SOA formation, and to some extent also SOA formation from aged POA, becomes dominant. These results are in broad agreement with source apportionment studies from observations.
Zaffora, Biagio. "Statistical analysis for the radiological characterization of radioactive waste in particle accelerators". Thesis, Paris, CNAM, 2017. http://www.theses.fr/2017CNAM1131/document.
This thesis introduces a new method to characterize metallic very-low-level radioactive waste produced at the European Organization for Nuclear Research (CERN). The method is based on: 1. the calculation of a preliminary radionuclide inventory, which is the list of the radionuclides that can be produced when particles interact with a surrounding medium; 2. the direct measurement of gamma emitters; and 3. the quantification of pure-alpha, pure-beta and low-energy X-ray emitters, called difficult-to-measure (DTM) radionuclides, using the so-called scaling factor (SF), correlation factor (CF) and mean activity (MA) techniques. The first stage of the characterization process is the calculation of the radionuclide inventory via either analytical or Monte Carlo codes. Once the preliminary radionuclide inventory is obtained, the gamma-emitting radionuclides are measured via gamma-ray spectrometry on each package of the waste population. The major gamma emitter, called the key nuclide (KN), is also identified. The scaling factor method estimates the activity of DTM radionuclides by checking for a consistent and repeated relationship between the key nuclide and the activity of the difficult-to-measure radionuclides from samples. If a correlation exists, the activity of DTM radionuclides can be evaluated using the scaling factor; otherwise the mean activity from the samples collected is applied to the entire waste population. Finally, the correlation factor is used when the activity of pure-alpha, pure-beta and low-energy X-ray emitters is so low that it cannot be quantified using experimental values. In this case a theoretical correlation factor is obtained from the calculations to link the activity of the radionuclides we want to quantify to the activity of the key nuclide.
The thesis describes the characterization method in detail, shows a complete case study and describes the industrial-scale application of the characterization method on over 1,000 m³ of radioactive waste, which was carried out at CERN between 2015 and 2017.
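The SF-versus-MA decision logic described in the abstract can be sketched as follows. The correlation test on log-activities and the geometric-mean ratio are simplifying assumptions chosen for illustration (the actual CERN procedure is more elaborate), and the Co-60/Ni-63 sample activities are invented.

```python
import numpy as np

def estimate_dtm(key_activities, dtm_activities, corr_threshold=0.8):
    """Estimate DTM activity from the key-nuclide activity: if sampled
    activities correlate, return a scaling factor (geometric mean of the
    DTM/KN ratios); otherwise fall back to the mean activity of the samples."""
    log_kn, log_dtm = np.log(key_activities), np.log(dtm_activities)
    r = np.corrcoef(log_kn, log_dtm)[0, 1]
    if r >= corr_threshold:
        sf = float(np.exp(np.mean(log_dtm - log_kn)))   # scaling factor method
        return ("SF", sf)
    return ("MA", float(np.mean(dtm_activities)))       # mean activity method

# Samples: Co-60 (key nuclide, gamma emitter) vs Ni-63 (DTM, pure beta).
kn = np.array([1.0, 2.0, 5.0, 10.0, 20.0])                # Bq/g
dtm = 0.3 * kn * np.array([1.1, 0.9, 1.0, 1.05, 0.95])    # noisy ratio ~0.3
method, value = estimate_dtm(kn, dtm)
print(method, value)   # SF, about 0.3
```

Once a scaling factor is accepted, the DTM activity of every package in the population is estimated as SF times its gamma-spectrometry result for the key nuclide.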
Sajjad, Saeeda. "Développement d'outils de simulation et de reconstruction de gerbes de particules pour l'astronomie gamma avec les futurs imageurs Tcherenkov". Phd thesis, Montpellier 2, 2007. http://www.theses.fr/2007MON20249.
Chaigne, Benoît. "Méthodes hiérarchiques pour l'optimisation géométrique de structures rayonnantes". Phd thesis, Université de Nice Sophia-Antipolis, 2009. http://tel.archives-ouvertes.fr/tel-00429366.
Sajjad, Saeeda. "Développement d'outils de simulation et de reconstruction de gerbes de particules pour l'astronomie gamma avec les futurs imageurs Tcherenkov". Phd thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 2007. http://tel.archives-ouvertes.fr/tel-00408835.
Bilgen, Suheyla. "Dynamic pressure in particle accelerators : experimental measurements and simulation for the LHC". Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASP020.
Texto completo da fonteUltra-High Vacuum is an essential requirement to achieve design performances and high luminosities in high-energy particle colliders. Consequently, the understanding of the dynamic pressure evolution during accelerator operation is fundamental to provide solutions to mitigate pressure rises induced by multiple-effects occurring in the vacuum chambers and leading to beam instabilities. For the LHC, the appearance of instabilities may be due to the succession of several phenomena. First, the high intensity proton beams ionize the residual gas producing positive ions (mainly H₂⁺ or CO⁺) as well as accelerated electrons which impinge the copper wall of the beam pipe. Then, these interactions induce: (i) the desorption of gases adsorbed on the surfaces leading to pressure rises; (ii) the creation of secondary particles (ions, electrons). In this latter case, the production of secondary electrons leads to the so-called “Electron Cloud” build-up by multipacting effect, the mitigation of which being one of the major challenges of the LHC storage ring. Electron clouds generate beam instabilities, pressure rises and heat loads on the walls of beam pipe and can lead to “quench” of the superconducting magnets. All these phenomena limit the maximum intensity of the beams and thus the ultimate luminosity achievable in a proton accelerator. This work aims to investigate some fundamental phenomena which drive the dynamic pressure in the LHC, namely the effects induced by electrons and ions interacting with the copper surface of the beam screens on the one hand and the influence of the surface chemistry of copper on the other hand. First, in-situ measurements were performed. Electron and ion currents as well as pressure were recorded in situ in the Vacuum Pilot Sector (VPS) located on the LHC ring during the RUN II. By analyzing the results, more ions than expected were detected and the interplay between electrons, ions and pressure changes was investigated. 
Then, ion-stimulated desorption was studied using a dedicated experimental set-up at the CERN vacuum lab. The influence of the nature, mass and energy of the incident ions interacting with the copper surface on the ion-desorption yields is discussed. In addition, extensive surface analyses were performed in the IJCLab laboratory to identify the role played by the surface chemistry in the electron emission yield, surface conditioning processes and stimulated gas desorption. The fundamental role of the surface chemical components (contaminants, presence of carbon and native oxide layers) in the secondary electron yield was evidenced. Finally, we propose a simulation code able to predict the pressure profiles in the vacuum chambers of particle accelerators as well as their evolution under dynamic conditions (i.e. as a function of time). This new simulation code, called DYVACS (DYnamic VACuum Simulation), is an upgrade of the VASCO code developed at CERN. It was applied to simulate the dynamic pressure in the VPS when proton beams circulate in the ring. The electron cloud build-up was implemented in the code via electron cloud maps, and the ionization of the residual gas by electrons was also considered. Results obtained with the DYVACS code are compared to pressure measurements recorded during typical fills for physics, and good agreement is obtained. This PhD study has provided interesting results and has allowed the development of new experimental and simulation tools that will be useful for further investigations of the vacuum stability of future particle accelerators such as the HL-LHC or FCC (ee and hh)
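As an illustration of the kind of gas balance a dynamic-vacuum code integrates, here is a minimal single-species, single-cell sketch. The real DYVACS model is multi-species and spatially resolved, with electron-cloud and ion-induced source terms; the function name, the explicit Euler scheme and the units below are assumptions for illustration only.

```python
def pressure_evolution(p0, q_gas, pumping_speed, volume, dt, steps):
    """Explicit-Euler integration of the lumped gas balance
    V * dP/dt = Q(t) - S * P  for one species in one cell.
    q_gas(t): total gas load (thermal outgassing + stimulated desorption)
    [Pa*m^3/s]; pumping_speed S [m^3/s]; volume V [m^3]; P [Pa]."""
    p = p0
    history = [p]
    for n in range(steps):
        q = q_gas(n * dt)  # user-supplied gas load at time n*dt
        p += dt * (q - pumping_speed * p) / volume
        history.append(p)
    return history
```

For a constant gas load the pressure relaxes toward the equilibrium value Q/S with time constant V/S.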
Chata, Florent. "Estimation par méthodes inverses des profils d’émission des machines à bois électroportatives". Thesis, Université de Lorraine, 2015. http://www.theses.fr/2015LORR0161/document.
Texto completo da fonteThis thesis is dedicated to the determination of unknown aerosol source emission profiles from aerosol concentration measurements in the far field. The procedure includes two distinct steps. The first step consists in determining the model linking the aerosol source and the concentration measurements, using a known source of aerosols and the corresponding dust measurements. In the second step, the unknown source of aerosols is reconstructed by inverting the model for the measured aerosol concentrations. The manuscript first deals with the stationary case; the theoretical approach presented suggests an optimal sensor placement in addition to the source estimation method. It then considers the case where the unknown aerosol source is unsteady. The estimation method is there based on a convolutive system approach, introducing the concept of source/sensor impedance. After a presentation of the numerical inversion technique, the method is applied experimentally to the real case of hand-held woodworking machines, so as to classify the machines with respect to their emission rate
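For the stationary case, the two-step procedure (calibrate a transfer model with a known source, then invert it from measured concentrations) can be sketched as follows. The linear single-source model and the function names are illustrative assumptions, not the thesis's actual formulation.

```python
def calibrate_transfer(known_source, measured_conc):
    """Step 1: fit linear transfer coefficients h_i (conc_i = h_i * source)
    from a known aerosol source, one coefficient per far-field sensor."""
    return [c / known_source for c in measured_conc]

def estimate_source(h, measured_conc):
    """Step 2: invert the model by least squares over all sensors:
    minimize sum_i (c_i - h_i * s)^2  ->  s = sum(h_i * c_i) / sum(h_i^2)."""
    num = sum(hi * ci for hi, ci in zip(h, measured_conc))
    den = sum(hi * hi for hi in h)
    return num / den
```

Combining readings from several sensors in the least-squares inversion is what makes sensor placement matter: sensors with larger transfer coefficients contribute more to the estimate.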
Demidem, Camilia. "Shocks, turbulence and particle acceleration in relativistic magnetohydrodynamics : Numerical and theoretical investigations". Thesis, Université de Paris (2019-....), 2019. http://www.theses.fr/2019UNIP7052.
Texto completo da fonteWhat are the physical processes underlying the non-thermal features observed in high-energy astrophysical sources? What are the origins of high-energy cosmic rays? The answers to these questions seem to be intimately related to the physics of shocks, turbulence and particle acceleration. Notably, understanding collisionless shocks, which are among the most prominent potential acceleration sites considered in modern astrophysics, requires accounting for a complex interplay between these three components. Relying on the framework of special relativistic magnetohydrodynamics (SRMHD), we investigate in turn different aspects of the non-linear and multi-scale physics governing such systems. First, we examine a problem related to the issue of shock-turbulence interaction, namely the response of a perpendicular fast shock to upstream magnetohydrodynamic waves, and demonstrate numerically that this response can be resonant, as predicted by a recent linear study in the relativistic limit. By means of high-resolution two-dimensional SRMHD simulations carried out with the adaptive mesh code MPI-AMRVAC, we probe this phenomenon in the relativistic and sub-relativistic regimes, as well as its non-linear evolution. We then shift the focus to the problem of test particles interacting with SRMHD turbulence, to investigate analytically and numerically the physics of stochastic acceleration, emphasizing the specificities of the relativistic regime, which remains largely unexplored. On the analytical level, we provide expressions for the quasi-linear pitch-angle and momentum diffusion coefficients for widely accepted phenomenological models of MHD turbulence, going beyond standard quasi-linear theory by incorporating resonance-broadening effects due to the decorrelation of the waves composing the turbulence, and non-linear perturbations to the trajectories of particles subjected to magnetic mirroring. 
These analytical estimates are shown to be in good agreement with the results of our simulations of test particles evolving in synthetic turbulence. Finally, we present the first results from three-dimensional, time-evolving SRMHD simulations of driven turbulence, used to probe stochastic acceleration in relativistically hot plasmas with magnetization of order unity. In particular, we derive momentum diffusion coefficients scaling as Dpp ∝ p², consistent with our analytical predictions for turbulence made of Alfvén-like perturbations and with recent particle-in-cell simulations that explored a similar regime
Courtin, Jérôme. "Empreinte de l'énergie noire sur la formation des structures". Paris 7, 2009. http://www.theses.fr/2009PA077203.
Texto completo da fonteThis thesis aims at understanding the non-linear mechanisms responsible for the imprints of dark energy on structure formation. These mechanisms should provide an observable imprint allowing the differentiation of cosmologies. In this work, we study the specific consequences of quintessence phenomenologies on structure formation. This analysis is carried out in the framework of N-body simulations of accelerated universes. We ran a number of state-of-the-art simulations for various cosmologies, and a set of nine simulations with unprecedented resolution and mass range for three observational dark energy cosmologies. Our results cover two aspects. First, the strong imprint of dark energy on the structuring of the dark matter field and on mass functions: we show that cosmology parameters derived consistently from observations induce a very different structuring. Second, we show that the linear history of structure formation is recorded in the non-linear dark matter field, which keeps a fine imprint of the specific expansion of each dark energy cosmology. We show the effects of dark energy on the definition of dark matter haloes and the consequences for mass function prediction
Dugois, Kévin. "Simulation à l’échelle microscopique et analyse macroscopique de l’imprégnation d’un matériau composite par un fluide chargé en particules". Thesis, Bordeaux, 2017. http://www.theses.fr/2017BORD0011/document.
Texto completo da fonteIn order to improve the thermo-mechanical behavior of turbine blades in SAFRAN aircraft engines, a new composite material is necessary. The manufacturing process for this composite is intricate and requires a two-step fluid densification process. This thesis focuses on the numerical simulation of the first step, called Slurry Cats/APS. In this step, suspended particles are introduced into and captured by the reinforcement. For that purpose, we develop a model at the fiber scale, using the Navier-Stokes equations in incompressible single-phase form, the Phillips equations [Phillips et al., 1992] and a rheological law [Krieger, 1972]. After a validation step consisting of a comparison of computational results with experiments [Hampton et al., 1997] and a theoretical law [Belfort et al., 1994], this model was used to simulate flow around geometries similar to those encountered in our composite material
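The rheological closure cited above is a Krieger-type law giving the effective viscosity of a suspension as a function of the particle volume fraction. A minimal sketch; the maximum packing fraction and exponent are typical literature values assumed here, not the thesis's fitted parameters.

```python
def krieger_viscosity(eta0, phi, phi_max=0.68, exponent=1.82):
    """Krieger-type effective suspension viscosity:
    eta = eta0 * (1 - phi/phi_max)**(-exponent),
    diverging as the volume fraction phi approaches maximum packing."""
    if phi >= phi_max:
        raise ValueError("volume fraction at or above maximum packing")
    return eta0 * (1.0 - phi / phi_max) ** (-exponent)
```

The divergence near phi_max is what couples the Phillips migration equations to the flow: particle-rich regions become much more viscous and slow down locally.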
Tatomirescu, Emilian-Dragos. "Accélération laser-plasma à ultra haute intensité - modélisation numérique". Thesis, Bordeaux, 2019. http://www.theses.fr/2019BORD0013/document.
Texto completo da fonteWith the latest increases in the maximum laser intensity achievable through short, high-power pulses (femtosecond range), interest has grown in laser-plasma particle sources. Such sources have applications in proton radiography, fast ignition, hadrontherapy, radioisotope production and laboratory astrophysics. During the laser-target interaction, ions are accelerated by different physical processes depending on the region of the target. All these mechanisms have one thing in common: the ions are accelerated by intense electric fields, which arise from the large charge separation induced, directly or indirectly, by the interaction of the laser pulse with the target. Two main distinct sources of charge displacement can be identified. The first is the charge gradient caused by the direct action of the laser ponderomotive force on the electrons at the front surface of the target, which is the basis of the radiation pressure acceleration (RPA) process. The second comes from laser radiation being converted into the kinetic energy of a hot relativistic electron population (~a few MeV). The hot electrons move and recirculate through the target and form a cloud of relativistic electrons at the rear of the target in vacuum. This cloud, which extends over several Debye lengths, creates an extremely intense longitudinal electric field, mostly directed along the surface normal, which efficiently accelerates ions; this is the target normal sheath acceleration (TNSA) process. The TNSA mechanism makes it possible to use different target geometries in order to obtain better focusing of the particle beams, on the order of several tens of microns, with high energy densities. Hot electrons are produced by irradiating a solid foil with an intense laser pulse; these electrons are transported through the target, forming a strong electrostatic field normal to the target surface. 
Protons and positively charged ions at the back surface of the target are accelerated by this field until the electron charge is compensated. The density and temperature of hot electrons in the rear vacuum depend on geometric and compositional properties of the target, such as its curvature and microstructuring, which can be tuned for enhanced proton acceleration. In my first year I studied the effects of target geometry on the proton and ion energy and angular distributions, in order to optimize laser-accelerated particle beams by means of two-dimensional (2D) particle-in-cell (PIC) simulations of the interaction of ultra-short laser pulses with several microstructured targets. During this year, I also studied the theory behind the models used
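An order-of-magnitude estimate of the TNSA sheath field described above is E ~ T_hot/(e·λ_D), with λ_D the Debye length of the hot-electron cloud. A sketch of that estimate; the example temperature and hot-electron density in the test are illustrative assumptions, not values from the thesis.

```python
import math

EPS0 = 8.854e-12   # vacuum permittivity [F/m]
QE = 1.602e-19     # elementary charge [C]

def sheath_field(t_hot_mev, n_hot_m3):
    """Order-of-magnitude TNSA sheath field E ~ T_hot / (e * lambda_D),
    with the hot-electron Debye length
    lambda_D = sqrt(eps0 * T_hot / (n_hot * e^2)).  Returns E in V/m."""
    t_joule = t_hot_mev * 1e6 * QE
    debye = math.sqrt(EPS0 * t_joule / (n_hot_m3 * QE * QE))
    return t_joule / (QE * debye)
```

For MeV-scale hot electrons at solid-adjacent densities this gives fields in the TV/m range, i.e. MV per micron, which is why micron-scale targets can accelerate protons to tens of MeV.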
Sibille, Valérian. "Mesure de l'angle de mélange θ₁₃ avec les deux détecteurs de Double Chooz". Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLS582/document.
Texto completo da fonteThe Double Chooz experiment aims at accurately measuring the value of the leptonic mixing angle θ₁₃. To this end, the experiment makes the most of two identical detectors -- filled with gadolinium-loaded liquid scintillator -- observing the electron antineutrinos (ν̄ₑ) released by the two 4.25 GWth nuclear reactors of the French Chooz power plant. The so-called "far detector" -- located at an average distance of 1050 m from the two nuclear cores -- has been taking data since April 2011. The "near detector" -- at an average distance of 400 m from the cores -- has monitored the reactors since December 2014. The θ₁₃ mixing parameter leads to an energy-dependent disappearance of ν̄ₑ's as they propagate from the nuclear cores to the detection sites, which allows a fit of the sin² 2θ₁₃ value. By reason of correlations between the detectors and an iso-flux site layout, the detection systematics and the ν̄ₑ flux uncertainty on the θ₁₃ measurement are dramatically suppressed. In consequence, the precision of the θ₁₃ measurement is dominated by the uncertainty on the backgrounds and on the relative normalisation of the ν̄ₑ rates. The main background originates from the decay of βn emitters -- generated by muon spallation -- within the detector itself. The energy spectra of these cosmogenic isotopes have been simulated and complemented by a diligent error treatment. These predictions have been successfully compared to the corresponding data spectra, extracted by means of an active veto whose performance has been studied at both sites. The rate of cosmogenic background remaining among the ν̄ₑ candidates has also been assessed. Additionally, the normalisation of the ν̄ₑ rates, bound to the number of target protons within each detector, has been evaluated. All these works were part of the first Double Chooz multi-detector results, yielding sin²(2θ₁₃) = 0.111 ± 0.018
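The energy-dependent disappearance driven by θ₁₃ follows the standard two-flavour survival probability; a sketch of that formula, where the Δm² value and the energy used in the test are illustrative assumptions (the baseline matches the far detector quoted above).

```python
import math

def survival_probability(sin2_2theta13, delta_m2_ev2, baseline_m, energy_mev):
    """Two-flavour electron-antineutrino survival probability,
    P = 1 - sin^2(2*theta13) * sin^2(1.267 * dm^2[eV^2] * L[m] / E[MeV])."""
    arg = 1.267 * delta_m2_ev2 * baseline_m / energy_mev
    return 1.0 - sin2_2theta13 * math.sin(arg) ** 2
```

Fitting this energy-dependent deficit between near and far detectors is what yields sin² 2θ₁₃.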
Furieri, Bruno. "Erosion éolienne de tas de stockage de matières granulaires sur sites industriels : amélioration des méthodes de quantification des émissions". Phd thesis, Université de Valenciennes et du Hainaut-Cambresis, 2012. http://tel.archives-ouvertes.fr/tel-00853659.
Texto completo da fonteMedina, Julien. "Transport processes in phase space driven by trapped particle turbulence in tokamak plasmas". Electronic Thesis or Diss., Université de Lorraine, 2019. http://www.theses.fr/2019LORR0158.
Texto completo da fonteOne of the most promising approaches to controlled nuclear fusion is the tokamak, a toroidal machine confining a fusion plasma using magnetic fields. Transport of particles and heat from the core toward the edge occurs spontaneously, degrades the efficiency of the tokamak, and is driven by turbulence. We use a bounce-averaged 4D gyrokinetic code which solves the Vlasov-quasi-neutrality system. The code is based on a reduced model which averages out the cyclotron and bounce motions of the trapped particles to reduce the dimensionality. In this work we developed and tested a new module for the code, allowing test-particle trajectories to be tracked in phase space. As a first result obtained with test particles, we managed to separate the diffusive contribution to the radial particle flux in energy space from the non-diffusive contributions. Both fluxes present an intense peak, indicating that resonant particles dominate transport. On short time scales the test particles undergo small-scale advection, but at longer times they follow a random-walk process. We then explored the fluxes in energy space with greater accuracy. Furthermore, we compared the obtained fluxes with quasi-linear predictions and found qualitative agreement, although there was a ~50% discrepancy in the peak magnitude
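The advection-versus-random-walk diagnosis from test-particle trajectories is typically made with a running diffusion coefficient built from the ensemble mean-square displacement. This is a generic estimator sketched for illustration, not the code's actual module.

```python
def radial_diffusion_coefficient(trajectories, dt):
    """Running radial diffusion coefficient D(t) = <dr^2(t)> / (2t) from an
    ensemble of test-particle radial positions.
    trajectories[k][n] = radial position of particle k at time step n."""
    nsteps = len(trajectories[0])
    d_run = []
    for n in range(1, nsteps):
        msd = sum((tr[n] - tr[0]) ** 2 for tr in trajectories) / len(trajectories)
        d_run.append(msd / (2.0 * n * dt))
    return d_run
```

A flat plateau in D(t) at late times signals diffusive (random-walk) transport, whereas ballistic advection makes D(t) grow linearly, as in the single-trajectory test below.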
Delay, Guillaume. "Analyse des écoulements transitoires dans les systèmes d'injection directe essence : effets sur l'entraînement d'air instationnaire du spray". Phd thesis, Toulouse, INPT, 2005. http://oatao.univ-toulouse.fr/7367/1/delay1.pdf.
Texto completo da fontePretot, Pierre-Emmanuel. "Air quality improvement in closed or semi-closed areas with a minimal energy consumption". Electronic Thesis or Diss., Ecole centrale de Nantes, 2023. http://www.theses.fr/2023ECDN0023.
Texto completo da fonteAir quality is a key factor regardinghuman health. Indoor air pollutants are numerousand have different characteristics and behavior.Solutions to treat these pollutants already existbut energy consumption or maintenance are keypoints and optimization for each case is required.In this PhD thesis, simulation methods areproposed for this with real time simulation astarget.Based on literature review, particles linked withtrain activity are the main problematic insidesubterranean train stations. Due to its specificity,they are treated separately from other indoorvolumes and a 1D approach is used here. Basedon a differential equations set using physicalparameters like air and train velocities, the dailyparticles concentrations for different particlessize classes are well reproduced for two studycases. The 1D discretization allow then toimplement depollution solutions along theplatforms for evaluation. The method givespretty good results compared to measurementsand each configuration is evaluated in less thana minute.An FFD method allowing transient simulations isthen evaluated for other indoor environments.The main objective here is to evaluate theairflow modelling accuracy for a real timesimulation. Indeed, pollutant behavior highlydepends on airflow. After a first evaluation, theFFD method using coarse mesh grid seems tobe accurate enough based on comparison withstandard Computational Fluid Dynamics to beused as tool for optimization of depollutionsolution but deeper analysis is needed
Rabhi, Nesrine. "Charged particle diagnostics for PETAL, calibration of the detectors and development of the demonstrator". Thesis, Bordeaux, 2016. http://www.theses.fr/2016BORD0339/document.
Texto completo da fonteIn order to protect their detection systems against the giant electromagnetic pulse generated by the interaction of the PETAL laser with its target, PETAL diagnostics will be equipped with passive detectors. For the SESAME and SEPAGE systems, a combination of imaging plate (IP) detectors with high-Z material protection layers will be used to provide additional features: 1) ensuring a detector response independent of its environment and hence homogeneous over the surface of the diagnostics; 2) shielding the detectors against high-energy photons from the PETAL target. In this work, calibration experiments of such IP-based detectors were performed at electron and proton facilities with the goal of covering the energy range of particle detection at PETAL, from 0.1 to 200 MeV. The introduction aims at providing the reader with the methods and tools used for this study. The second chapter presents the results of two experiments performed with electrons in the range from 5 to 180 MeV. The third chapter describes an experiment, and its results, in which protons in the energy range between 80 and 200 MeV were sent onto detectors. The fourth chapter is dedicated to an experiment with protons and ions in the range from 1 to 22 MeV proton energy, which aimed at studying our detector responses and testing the demonstrator of the SEPAGE diagnostic. We used the GEANT4 toolkit to analyse our data and compute the detector responses over the whole energy range from 0.1 to 1000 MeV
Fan, Jianhua. "Numerical study of particle transport and deposition in porous media". Thesis, Rennes, INSA, 2018. http://www.theses.fr/2018ISAR0003/document.
Texto completo da fonteThe objective of the present research was to numerically investigate the transport and deposition of particles in porous media at the pore scale. Firstly, a coupled lattice Boltzmann method (LBM) and discrete element method (DEM) is developed and used to simulate the fluid-particle flow. LBM is employed to describe the fluid flow around fibers, whereas DEM is used to deal with the particle dynamics. The method is two-way coupled in the sense that particle motion affects the fluid flow and reciprocally. It allowed us to predict the capture efficiency and pressure drop at the initial stage of the filtration process. The quality factor is also calculated to assess the filtration performance. Secondly, we focus on the capture efficiency of a single fiber with circular, diamond and square cross-sections. The LBM-DEM results for the filtration process of a single circular fiber agree well with the empirical correlation. The impaction of particles on the front side of the square fiber is more favorable than in the circular and diamond cases; however, the diamond fiber exhibits good filtration performance. Then the variations of the quality factor with the orientation angle and aspect ratio of a rectangular fiber were studied using LBM-DEM. For each case, we found the optimal value of the windward area corresponding to a maximum of the quality factor. The comparison of the performance of the different fiber shapes shows that the largest quality factor is obtained for a square fiber oriented at an angle of π/4. Finally, the influence of the fiber arrangement on filtration performance is analyzed by considering a staggered configuration. Simulations conducted for several particle sizes and densities show that the staggered diamond array performs better, in terms of quality factor, for large particles and high particle-to-fluid density ratios. 
The present study provides insight for optimizing the filtration process and predicting filtration performance
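The quality factor used above to rank fiber shapes combines capture efficiency and pressure drop; a minimal sketch of the standard definition (the units of the returned value depend on those of the pressure drop).

```python
import math

def quality_factor(capture_efficiency, pressure_drop):
    """Filtration quality factor QF = -ln(1 - E) / dP, where E is the
    capture efficiency and dP the pressure drop across the filter.
    Higher capture at lower pressure drop gives a better filter."""
    penetration = 1.0 - capture_efficiency
    return -math.log(penetration) / pressure_drop
```

Because QF trades efficiency against flow resistance, a fiber shape that captures slightly fewer particles but obstructs the flow much less (e.g. the diamond cross-section) can still come out ahead.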
Steinmetz, David. "Représentation d'une huile dans des simulations mésoscopiques pour des applications EOR". Thesis, Sorbonne université, 2018. http://www.theses.fr/2018SORUS521.
Texto completo da fonteChemical enhanced oil recovery techniques consist of injecting an Alkaline/Surfactant/Polymer formulation into a petroleum reservoir. This formulation aims at mobilizing the oil trapped in the reservoir by reducing the water/crude oil interfacial tension. Molecular simulations are well suited to improving the efficiency of such a process by providing information about phenomena occurring at the molecular and mesoscopic levels. Mesoscopic simulation methods, dissipative particle dynamics and coarse-grained Monte Carlo, have been used to quantitatively predict the water/crude interfacial tension. An approach to parameterize interactions between entities has been developed using liquid-liquid ternary systems. This approach has been validated for reproducing the compositions of bulk phases and for quantitatively predicting the interfacial tension. A methodology for representing crude oil has been developed: the crude oil was divided according to the number of carbon atoms into two fractions, C20- and C20+. A lumping approach was applied to the C20- fraction and a stochastic reconstruction approach was employed for the C20+ fraction. A crude oil representation with only 13 representative molecules was thus obtained. Simulations of the parameterized crude oil model provide interfacial tension values that are in good agreement with available experimental data
Nadler, Sébastien. "Comportement d'un milieu granulaire soumis à des vibrations horizontales : Etudes numériques et expérimentales". Phd thesis, Ecole Nationale Supérieure des Mines de Saint-Etienne, 2012. http://tel.archives-ouvertes.fr/tel-00782361.
Texto completo da fonteJaffar-Bandjee, Mourad. "Pheromone transport in multiscale pectinate antennae". Thesis, Tours, 2019. http://www.theses.fr/2019TOUR4021.
Texto completo da fonteIn many moth species, adult females release tiny amounts of sexual pheromone in order to attract male mates and reproduce. The quantity of released pheromone is around a few dozen nanograms, and male moths can detect it a few hundred meters away from the females. As a consequence, they must be able to smell very low concentrations of pheromone. This olfactory function is carried out by the antennae. A critical step in the olfactory process is the capture of molecules from the air. This is a mass-transport problem which depends heavily on the shape of the antenna. One of the most spectacular shapes, which occurs in several moth families, is the pectinate antenna. This type of antenna is also thought to be more effective at detecting pheromones than cylindrical ones. In this work, we investigated whether and how the shape of the pectinate antenna influences its efficiency at capturing pheromone molecules. We focused on one species, Samia cynthia. A pectinate antenna is a complex and multi-scale object. It has a length of 1 cm and is composed of one main branch, the flagellum, which carries secondary branches, the rami. Each ramus supports numerous hairs, the sensilla, which are 150 µm long and have a diameter of only 3 µm. Thus, the characteristic dimensions of the antenna span four orders of magnitude, which makes the study of such objects difficult. To simplify our problem, we decided to split the pectinate antenna into two levels: the macrostructure, composed of the flagellum and the rami, and the microstructure, composed of a ramus and the sensilla it bears. Both structures were scaled up and fabricated by additive manufacturing. The building of the rami and sensilla, which are long and thin cylinders, was a challenge, as we reached the limits of the 3D printers we used. Pectinate antennae are permeable objects, as are the macro- and microstructures. 
Thus, air flowing toward such objects either passes through the antenna or is deflected around it. Leakiness is the proportion of the flow passing through the permeable object. This parameter is important as it sets an upper limit on the pheromone captured by the antenna: molecules carried by the deflected part of the flow cannot be captured. We experimentally determined the leakiness of the macro- and microstructures at several air velocities encountered by a moth in nature, using particle image velocimetry. We then calculated the pheromone capture and efficiency of the microstructure by adapting a heat transfer model to our mass-transport problem. We showed that the longitudinal orientation of the sensilla is sufficient to explain the phenomenon of the olfactory lens, whereby the tip of a sensillum captures more molecules than its base. We also found that the efficiency of the antenna is limited by both the leakiness of the antenna, which increases with air velocity, and the local capture, i.e. the proportion of molecules captured from the part of the airflow passing through the antenna, which decreases with air velocity. Consequently, the microstructure does not have a strong maximum efficiency at a specific air velocity but instead is moderately efficient over the large range of air velocities encountered by a moth. We developed a method, with the help of FEM simulations, to combine the two levels (the macrostructure and the microstructure). This method is based on the relation between drag and leakiness and allowed us to determine the leakiness of the entire antenna. We could then calculate the efficiency of the pectinate antenna and compare it with that of a cylindrical antenna. We found that a pectinate design is a good solution to strongly increase the contact surface between the air and the antenna while maintaining a good capture efficiency at the velocities encountered by the moth
Chappuis, Anne. "Étude et simulation de la lumière de scintillation produite et se propageant dans une chambre à dérive double-phase à argon liquide, dans le contexte du projet DUNE". Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAY068/document.
Texto completo da fonteDUNE is a future long-baseline neutrino experiment designed to determine, among other things, the neutrino mass hierarchy and to measure the CP violation phase that enters the neutrino oscillation process. This project is based on a high-intensity neutrino beam with a 1300 km baseline and a massive detector containing more than 40 kilotons of liquid argon, using the liquid argon time projection chamber (LArTPC) technology. Two variants of this technology are currently under development, leading to the construction of two prototypes to be in place at the end of 2018 at CERN. The work of this thesis is part of the ProtoDUNE-DP project, which aims at probing the capabilities of the so-called "dual-phase" technology, which uses both gaseous and liquid argon, for a large-scale detector. Two kinds of signals, a charge signal and a scintillation light signal, are expected in a LArTPC. The light signal can be used as a trigger, for the identification and rejection of the cosmic background, and for precise calorimetric measurements. Simulations of this signal are needed beforehand in order to improve our understanding of the scintillation light signal and to develop the identification algorithms. This work addresses the development of this simulation and the study of scintillation photon behavior in the liquid argon detector. The different scintillation light production mechanisms, the developed simulation and the different studies of light propagation in ProtoDUNE-DP are presented. These simulations have also been compared with light data taken at CERN in 2017 with a first demonstrator, in order to validate and tune the simulation
Ghavamian, Ataollah. "Un cadre de calcul pour un système de premier ordre de lois de conservation en thermoélasticité". Thesis, Ecole centrale de Nantes, 2020. http://www.theses.fr/2020ECDN0004.
Texto completo da fonteIt is evidently not trivial to analytically solve practical engineering problems due to their inherent nonlinearities. Moreover, experimental testing can be extremely costly and time-consuming. In the past few decades, therefore, numerical techniques have been progressively developed and utilised in order to investigate complex engineering applications through computer simulations. In the context of fast thermo-elastodynamics, modern commercial packages are typically developed on the basis of second-order displacement-based finite element formulations and, unfortunately, that introduces a series of numerical shortcomings (e.g. detrimental locking, hour-glass modes, spurious pressure oscillations). To rectify these drawbacks, a mixed set of first-order hyperbolic conservation laws for thermo-elastodynamics is presented in terms of the linear momentum per unit undeformed volume, the deformation gradient, its co-factor, its Jacobian and the balance of total energy. Interestingly, the conservation-law formulation allows available CFD techniques to be exploited in the context of solid dynamics. From a computational standpoint, two distinct spatial discretisations are employed, namely the Vertex-Centred Finite Volume Method (VCFVM) and Smoothed Particle Hydrodynamics (SPH). A linear reconstruction procedure together with a slope limiter is employed in order to ensure second-order accuracy in space whilst avoiding numerical oscillations in the vicinity of sharp gradients. The semi-discrete system of equations is then temporally discretised using a second-order Total Variation Diminishing (TVD) Runge-Kutta time integrator. Finally, a wide spectrum of challenging examples is presented in order to assess both the performance and the applicability of the proposed schemes. The new formulation is proven to be very efficient in nearly incompressible thermoelasticity in comparison with classical displacement-based finite element approaches
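The second-order TVD Runge-Kutta time integrator mentioned above follows the standard two-stage strong-stability-preserving form; a generic sketch over an arbitrary semi-discrete right-hand side (the vector-of-floats representation is an assumption for illustration).

```python
def tvd_rk2_step(u, dt, rhs):
    """One second-order TVD (SSP) Runge-Kutta step for du/dt = L(u):
        u1    = u + dt * L(u)
        u_new = (u + u1 + dt * L(u1)) / 2
    u is a list of floats; rhs(u) returns L(u) as a list of the same length."""
    u1 = [ui + dt * ri for ui, ri in zip(u, rhs(u))]
    return [0.5 * (ui + u1i + dt * ri) for ui, u1i, ri in zip(u, u1, rhs(u1))]
```

The convex combination of forward-Euler steps is what preserves the total-variation bound of the underlying spatial scheme, unlike a generic second-order Runge-Kutta.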
Mounier, Marie. "Résolution des équations de Maxwell-Vlasov sur maillage cartésien non conforme 2D par un solveur Galerkin discontinu". Thesis, Strasbourg, 2014. http://www.theses.fr/2014STRAD028/document.
Texto completo da fonteThis thesis deals with the study of a numerical method to simulate a plasma. We consider a set of particles whose displacement is governed by the Vlasov equation and which creates an electromagnetic field through Maxwell's equations. The numerical resolution of the Vlasov-Maxwell system is performed by a Particle-In-Cell (PIC) method. The resolution of Maxwell's equations needs a sufficiently fine mesh to correctly simulate the multi-scale problems that we have to face. Yet, a uniformly fine mesh of the whole domain has a prohibitive cost. The novelty of this thesis is a PIC solver on locally refined Cartesian meshes, i.e. non-conforming meshes, to guarantee good modeling of the physical phenomena while avoiding excessive CPU time. We use the Discontinuous Galerkin in Time Domain (DGTD) method, which has the advantage of great flexibility in the choice of the mesh and which is a high-order method. A fundamental point in the study of PIC solvers is the respect of the charge conservation law. We propose two approaches to tackle this point. The first deals with augmented Maxwell systems, which we have adapted to non-conforming meshes. The second is an original preprocessing method for the calculation of the current source term
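The charge conservation law at the heart of PIC solvers is, at the discrete level, a continuity equation per cell. A deposition scheme can be checked against it as sketched below (1D, uniform mesh; the function and variable names are illustrative).

```python
def charge_conservation_residual(rho_old, rho_new, j_half, dt, dx):
    """Residual of the discrete continuity equation per cell i:
        (rho_new[i] - rho_old[i]) / dt + (J_{i+1/2} - J_{i-1/2}) / dx
    rho_old, rho_new: cell charge densities at steps n and n+1;
    j_half: face currents, length len(rho_old) + 1.
    A charge-conserving current deposition drives every entry to ~0."""
    res = []
    for i in range(len(rho_old)):
        div_j = (j_half[i + 1] - j_half[i]) / dx
        res.append((rho_new[i] - rho_old[i]) / dt + div_j)
    return res
```

When this residual is machine zero, Gauss's law stays satisfied over time without any correction; the two approaches above (augmented Maxwell systems, current preprocessing) are alternative ways to enforce this on non-conforming meshes.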